Compare commits

...

37 Commits

Author SHA1 Message Date
catlog22
d2d9f16673 refactor: enhance Codex output standards with structured templates
- Add Format Priority section aligned with GEMINI.md style
- Expand default output formats into three comprehensive templates:
  * Single Task Implementation (with Summary, Key Decisions, Testing, Validation)
  * Multi-Task Execution (First Subtask and Subsequent Subtasks with context passing)
  * Partial Completion (with Blocked/Required/Recommendation sections)
- Add Code References section with standardized format
- Standardize Related Files section at output beginning
- Improve clarity and consistency with Gemini execution protocol

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-16 13:51:42 +08:00
catlog22
39a35c24b1 refactor: optimize workflow templates and prompt structures
- Streamlined analysis templates (architecture, pattern, performance, quality, security)
- Simplified development templates (component, debugging, feature, refactor, testing)
- Optimized documentation templates (api, folder-navigation, module-readme, project-architecture, project-examples, project-readme)
- Enhanced planning templates (concept-eval, migration, task-breakdown)
- Improved verification templates (codex-technical, cross-validation, gemini-strategic)
- Updated claude-module-unified memory template
- Refined workflow-architecture documentation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-16 13:48:35 +08:00
catlog22
e95be40c2b refactor: streamline UI design workflow and add extraction scripts
- Simplified UI design command workflows (batch-generate, explore-auto, generate, update)
- Enhanced style-extract and layout-extract with improved documentation
- Refactored imitate-auto for better usability
- Removed obsolete consolidate workflow
- Added extract-computed-styles.js and extract-layout-structure.js utilities
- Updated UI generation and instantiation scripts

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-16 13:47:58 +08:00
catlog22
d2c66135fb refactor: update CLI commands and tool agent configurations
- Enhanced CLI command documentation for analyze, chat, and execute
- Updated mode-specific commands (bug-index, code-analysis, plan)
- Added cli-execution-agent for autonomous CLI task handling
- Refined agent configurations for Codex, Gemini, and Qwen

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-16 13:47:26 +08:00
catlog22
4aec163441 refactor: reorganize memory commands and update brainstorm workflows
Changes:
- Move memory commands to .claude/commands/memory/ directory
- Relocate update-memory-full.md and update-memory-related.md
- Add new docs.md command for documentation workflows
- Update brainstorm commands (api-designer, data-architect, system-architect, ui-designer, ux-expert)
- Remove deprecated workflow/tools/docs.md

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-16 10:41:38 +08:00
catlog22
7ac5412c97 docs: update Windows path format guidelines for MCP and Bash compatibility
Clarified path format requirements:
- MCP tools require double backslash format (D:\path)
- Bash commands support forward slash (D:/path) or POSIX (/d/path)
- Added quick reference for common conversions

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-16 10:40:09 +08:00
catlog22
25f6497933 feat: add api-designer command for generating framework-based analysis 2025-10-16 10:40:09 +08:00
catlog22
3eb3130b2b Merge pull request #4 from jayking0912/main
FIX:修复Gemini_base_url 不能自定义
2025-10-15 16:38:47 +08:00
jayNuc
1474e6c64b FIX:修复Gemini_base_url 不能自定义 2025-10-15 10:20:00 +08:00
catlog22
a4ca222db5 revert: restore intelligent-tools-strategy.md to commit b5d6870
Revert intelligent-tools-strategy.md to the version from commit b5d6870
(docs: enhance method parameter documentation in unified template).

This reverts the following commits for this file:
- 4524702 docs: restore Template System directory tree structure
- a7a157d docs: restore critical details to intelligent-tools-strategy.md
- 1e79866 refactor: optimize intelligent-tools-strategy.md and fix codex calls

File changes: 451 lines → 503 lines (+52 lines net)
- Restores original detailed structure
- Reverts optimization changes
- Returns to pre-refactor state

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-13 21:44:51 +08:00
catlog22
4524702cd4 docs: restore Template System directory tree structure
Restore the original directory tree visualization for Template System section,
replacing the compressed table format with a clear hierarchical structure.

Changes:
- Replace table format with ASCII directory tree
- Show complete file paths: prompts/, planning-roles/, tech-stacks/
- Display full template hierarchy with descriptions
- Improve path clarity for RULES field references

Benefits:
- Clear visualization of template file locations
- Easy to reference in $(cat ...) commands
- Better understanding of directory structure
- Facilitates accurate template path construction

File: intelligent-tools-strategy.md: 431 → 451 lines (+20 lines)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-13 21:15:30 +08:00
catlog22
a7a157d40e docs: restore critical details to intelligent-tools-strategy.md
Restore important detailed content that was removed in commit 1e79866:

Key restorations (106 lines added):
- Session Management: Detailed explanation and multi-task pattern examples (15 lines)
- Auto-Resume Decision Rules: Complete criteria for when to use/not use (12 lines)
- RULES Field Format: Critical "template reference only, never read" rule (14 lines)
- File Pattern Reference: Complete 3-step workflow with rg/MCP examples (33 lines)
- Common Scenarios: Full examples including Feature Development 3-step workflow (82 lines)
- Best Practices: Detailed guidelines with context optimization strategy (8 lines)

Additional improvements:
- Enhanced CLAUDE.md references throughout the document
- Added CLAUDE.md to Planning Checklist and File Pattern examples
- Restored execution timeout configuration with specific millisecond values

File changes:
- intelligent-tools-strategy.md: 325 → 431 lines (balanced detail/conciseness)
- Removed design-tokens-schema.md (obsolete)

The document now maintains conciseness while restoring all critical guidance.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-13 17:17:07 +08:00
catlog22
1e798660ab refactor: optimize intelligent-tools-strategy.md and fix codex calls
- Reduce intelligent-tools-strategy.md from 504 to 325 lines (35.5% reduction)
- Merge Gemini/Qwen tool specifications into unified Analysis Tools section
- Consolidate command templates and remove duplicate examples
- Compress usage scenarios while preserving all critical information
- Simplify guidelines to concise bullet points
- Fix incorrect codex chat calls in discuss-plan.md to use proper exec format

All core decision rules, permission framework, and command templates preserved.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-13 16:51:45 +08:00
catlog22
b5d6870a44 docs: enhance method parameter documentation in unified template
Add comprehensive requirements for documenting method parameters, return values, and exceptions in code files. Include detailed guidelines for parameter types, descriptions, default values, and usage examples.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-13 16:36:36 +08:00
catlog22
e5443d1776 docs: remove direct Gemini Wrapper script usage section
Removed the "Direct Gemini Wrapper Script Usage" section from Getting Started guides
as users cannot manually execute these commands in Claude Code - all tool invocations
are handled automatically by Claude through semantic understanding.

Changes:
- Removed "Gemini Wrapper 脚本直接使用" section (Chinese)
- Removed "Direct Gemini Wrapper Script Usage" section (English)
- Kept "Semantic Tool Invocation" section explaining natural language usage
- Streamlined documentation to focus on user-facing interaction patterns

Rationale:
- Users cannot run ~/.claude/scripts/gemini-wrapper commands directly in Claude Code
- These are internal implementation details handled automatically by Claude
- Documentation should focus on how users interact with the system, not internal mechanics

Files updated:
- GETTING_STARTED.md (-47 lines)
- GETTING_STARTED_CN.md (-47 lines)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-13 11:24:19 +08:00
catlog22
9fe8d28218 docs: clarify semantic tool invocation concept in Getting Started guides
Fixed the "Semantic Tool Invocation" concept to accurately reflect its meaning:

**Before**: "Gemini 工具语义调用" referred to direct script usage
**After**: Properly describes semantic invocation as natural language interaction

Changes:
- Added "Semantic Tool Invocation" section explaining natural language usage
  - Example: User says "Use gemini to analyze architecture"
  - Claude understands and automatically executes appropriate commands
  - Highlighted advantages: natural interaction, intelligent understanding, auto optimization

- Renamed previous section to "Direct Gemini Wrapper Script Usage"
  - Clarified this is for scenarios requiring precise control
  - Maintained all existing script usage examples

Structure now clearly distinguishes:
1. Semantic invocation (natural language → Claude → tool execution)
2. Direct script usage (manual command execution)

Updated both:
- GETTING_STARTED.md (English)
- GETTING_STARTED_CN.md (Chinese)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-13 11:21:56 +08:00
catlog22
9f4b0acca7 docs: add workflow-free standalone tools section to Getting Started guides
Added comprehensive "Workflow-Free Usage" section covering:

**Direct CLI Tool Usage:**
- Code analysis with /cli:analyze
- Interactive chat with /cli:chat
- Specialized modes (plan, code-analysis, bug-index)

**Semantic Gemini Invocation:**
- Read-only mode for exploration and analysis
- Write mode for document generation
- Context optimization techniques with cd

**Memory Management:**
- Full project index rebuild with /update-memory-full
- Incremental updates with /update-memory-related
- Execution timing recommendations
- Memory quality impact table

**CLI Tool Initialization:**
- Auto-configuration with /cli:cli-init
- Tool-specific setup options

Documentation style:
- Objective tone without first-person pronouns
- Clear command examples with copy-paste readiness
- Practical execution timing guidance
- Visual tables for decision making

Files updated:
- GETTING_STARTED.md (English, +167 lines)
- GETTING_STARTED_CN.md (Chinese, +167 lines)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-13 11:19:39 +08:00
catlog22
8dc7abf707 docs: add prominent Getting Started guide links in README
Added eye-catching links to the beginner-friendly Getting Started guides
at the top of both README files, right after the version announcement.
This helps new users quickly discover the 5-minute tutorial.

Changes:
- README.md: Added link to GETTING_STARTED.md
- README_CN.md: Added link to GETTING_STARTED_CN.md

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-13 11:15:42 +08:00
catlog22
424770d58c docs: add beginner-friendly getting started guides
Generated comprehensive beginner guides using Gemini analysis:
- GETTING_STARTED.md (English version, 151 lines)
- GETTING_STARTED_CN.md (Chinese version, 151 lines)

Content includes:
- 5-minute quick start tutorial
- Core concepts explanation with simple language
- Common scenarios (feature development, UI design, bug fixing)
- Troubleshooting guide
- Advanced learning paths

Features:
- Emoji-enhanced readability
- Step-by-step instructions with copy-paste commands
- Real-world examples for each concept
- Progressive learning approach

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-13 11:14:20 +08:00
catlog22
5ca246a37c docs: clarify UI preview system works without server
Updated README documentation to emphasize that index.html and compare.html
can be opened directly in browser without requiring a local server.
Reorganized preview instructions to highlight direct browser preview as
the recommended approach, with local server as optional for advanced features.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-13 11:03:35 +08:00
catlog22
bbf88826ba fix: convert all shell scripts to LF line endings for Linux/WSL compatibility
- Convert all .sh files from CRLF to LF line endings
- Add .gitattributes to enforce LF for shell scripts
- Fix "bash\r: No such file or directory" error in WSL environments

This resolves the installation failure on WSL/Linux systems where
Windows CRLF line endings cause shell scripts to fail.

Related to #1

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-13 10:00:33 +08:00
catlog22
ce5d903813 fix: improve environment detection and error handling in remote installers
- Add git availability check with helpful installation hints
- Improve version detection with automatic fallback to GitHub API
- Add detailed progress feedback during installation
- Enhance error messages for better user experience
- Fix WSL installation issue reported in #1

Changes:
- install-remote.sh: Add git check, improve commit SHA detection
- install-remote.ps1: Add git check, improve commit SHA detection

Fixes #1

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-13 09:54:45 +08:00
catlog22
703f22e331 feat: add codebase-retrieval semantic search tool
Add semantic file discovery via Gemini CLI --all-files parameter
- Define codebase-retrieval as primary semantic search tool
- Add tool selection matrix for search strategy
- Provide command templates with dynamic placeholders
- Document workflow integration with rg and other tools

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-13 09:38:25 +08:00
catlog22
61997f8db8 feat: add memory state check between Phase 3 and Phase 4
Add automatic memory compaction trigger when context usage exceeds 110K tokens:
- Evaluates memory state after analysis phase (Phase 3)
- Triggers /compact command if approaching context limits
- Prevents context overflow and ensures optimal performance
- Particularly important after extensive documentation generation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-12 22:23:32 +08:00
catlog22
f090c713ca refactor: streamline concept-enhanced.md command (532→377 lines, 29% reduction)
Optimizations:
- Merge Overview & Philosophy sections (78→29 lines)
- Simplify Tool Selection Strategy (76→47 lines)
- Optimize Execution Lifecycle (242→119 lines)
- Reduce Code Modification Targets examples (4→2 examples)
- Merge Error Handling & Performance into Execution Management (52→14 lines)
- Merge Integration & Quality into unified section (30→25 lines)

Core features preserved:
- Complete output format template (kept internal)
- 5-phase execution lifecycle
- Tool selection complexity matrix
- Gemini/Codex bash configurations
- All essential functionality

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-12 22:21:29 +08:00
catlog22
177279b760 chore: remove deprecated templates and unused files
- Delete archived layer-based templates (Layer 1-4)
- Remove hierarchy-analysis.txt (no longer needed)
- Update intelligent-tools-strategy.md to remove references

Cleanup:
- Keep only claude-module-unified.txt in memory/ directory
- Remove outdated template references from strategy document
- Simplify template structure for installation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-12 21:57:19 +08:00
catlog22
46f749605a fix: remove @ prefix from intelligent-tools-strategy.md reference
- Update enhance-prompt.md to use correct file reference format
- Change from @~/.claude/workflows/intelligent-tools-strategy.md to ~/.claude/workflows/intelligent-tools-strategy.md
- Align with other command files' reference format

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-12 21:52:59 +08:00
catlog22
8a849d651f refactor: simplify module documentation with unified template
- Replace 4-layer hierarchy (Layer 1-4) with single unified template
- New template: claude-module-unified.txt works for all modules and files
- Update update_module_claude.sh to remove layer detection logic
- Archive old layer-based templates to memory/archive/
- Update intelligent-tools-strategy.md to reference unified template

Benefits:
 Single template to maintain
 Simpler script logic
 Universal application for folders and code files
 Clearer documentation structure

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-12 21:50:01 +08:00
catlog22
0fd390f5d8 docs: optimize README with concise language and memory management emphasis
- Simplify introduction to highlight core value proposition
- Restructure "Key Features" to "Core Differentiators" with 6 key innovations
- Reduce content by 63% while maintaining 100% information completeness
- Add dedicated "Memory Management" section emphasizing its critical importance
- Add "Technical Architecture" section showing project organization
- Improve readability with concise bullet points and tables
- Synchronize English and Chinese versions

Key improvements:
- Context-First Architecture: Explain 1-to-N development drift solution
- JSON-First State Management: Highlight single source of truth design
- Autonomous Orchestration: Detail flow_control mechanism
- Multi-Model Strategy: Document 5-10x performance improvement
- Hierarchical Memory: Show 4-layer documentation structure
- Role-Based Agents: List specialized agents and workflows

Memory Management highlights:
- Document memory hierarchy (Root → Domain → Module → Sub-Module)
- Provide update triggers and best practices
- Emphasize quality impact on execution accuracy
- Show tool integration options (Gemini/Qwen/Codex)
2025-10-12 21:01:00 +08:00
catlog22
1dff4ff0c7 docs: add template reference rule - use $(cat ...) directly, never read first 2025-10-12 18:40:45 +08:00
catlog22
8a8090709f docs: add command substitution rules and fix template syntax
Key changes:
1. Add CRITICAL section for command substitution rules
   - Document why escape characters (\$, \", \') break commands
   - Explain correct syntax without quotes in $(cat ...) paths
   - Provide clear correct vs wrong examples

2. Fix all template references in examples
   - Remove single quotes from $(cat '...') → $(cat ...)
   - Ensure all 4 occurrences use correct syntax (lines 331, 356, 366, 379)

3. Add best practice guideline
   - Add warning in General Guidelines about escape characters
   - Emphasize this applies to all CLI command execution

Problem solved: Prevent automatic escape character insertion that breaks
shell command substitution and path expansion in actual CLI execution.
2025-10-12 18:36:53 +08:00
catlog22
4006234fa0 docs: add memory templates to intelligent tools strategy
Add memory hierarchy templates to Available Templates section and Selection Matrix:
- Layer 1-4 templates for hierarchical documentation
- Hierarchy analysis template for project structure
- Semantic triggers and use cases for each template level
2025-10-12 18:28:56 +08:00
catlog22
9d4d728ee7 docs: add bash execution rules and Windows path conversion guide
- Add rule to prevent run_in_background parameter in bash() calls
- Add Windows path conversion guidelines for Git Bash compatibility
- Simplify path handling: C:\path → /c/path, relative paths unchanged
2025-10-12 17:19:47 +08:00
catlog22
8e4710389d refactor: restructure memory update commands to command-style docs
Major improvements to /update-memory-full and /update-memory-related:

Structure:
- Adopt plan.md command-style format with clear phases
- Consolidate redundant sections and examples
- Add Coordinator Role and Core Rules sections
- Include detailed Coordinator Checklist

Features:
- Add --path parameter to target specific directories
- Update Task() call format to function-style syntax
- Merge execution examples into workflow phases
- Add Path Parameter Reference section

Documentation:
- Increase parallel job limit from 4 to 5
- Add comprehensive usage examples
- Include tool parameter reference
- Add comparison tables and best practices

Files changed:
- .claude/commands/update-memory-full.md: +285/-100 lines
- .claude/commands/update-memory-related.md: +229/-136 lines
2025-10-12 17:07:26 +08:00
catlog22
7ce76e1564 refactor: merge ARCHITECTURE and EXAMPLES into single task
- Combine IMPL-005 (ARCHITECTURE.md) and IMPL-006 (EXAMPLES.md) into one task
  - New IMPL-005: Generate both ARCHITECTURE.md and EXAMPLES.md together
  - Reduces task count and simplifies workflow execution
  - Both documents share similar analysis context from project README and modules

- Update HTTP API task from IMPL-007 to IMPL-006
  - Maintain sequential task numbering after merge

- Update all references:
  - Task hierarchy documentation
  - Task generation logic
  - Session structure examples
  - Execution command examples

Benefits:
- Streamlined workflow with fewer tasks to execute
- Architecture and examples are logically related (structure + usage)
- Reduced overhead in task management and execution
2025-10-12 16:37:10 +08:00
catlog22
42d7ad895e fix: improve docs workflow and module discovery
- Fix get_modules_by_depth.sh to avoid filtering 'core' directories
  - Remove overly broad "core" exclusion pattern
  - Keep "*.core" for actual core dump files
  - Resolves issue where directories named 'core' were incorrectly filtered

- Refactor docs workflow from path-based detection to mode selection
  - Replace is_root logic with explicit --mode parameter
  - Add --mode full: generate complete documentation (modules + project-level)
  - Add --mode partial: generate module documentation only
  - Default to full mode for better user experience
  - Simplify configuration by removing complex path comparison logic

These changes provide better control over documentation generation
and fix directory discovery issues in Python projects with core/ folders.
2025-10-12 15:40:11 +08:00
catlog22
03399259f4 feat: optimize docs workflow and add gitignore support to scripts
Major Changes:
1. Add classify-folders.sh script
   - Extract folder classification logic from inline script
   - Support for code/navigation/skip folder types
   - Placed in .claude/scripts/ for reusability

2. Optimize /workflow:docs command (docs.md)
   - Simplify Phase 1: single-step session initialization
   - Add Path Mirroring Strategy section
   - Document structure now mirrors source structure
   - Update to single config.json file (replace multiple .process files)
   - Fix path detection for Windows/Git Bash compatibility
   - Update all task templates with mirrored output paths

3. Add parent .gitignore support
   - detect_changed_modules.sh: parse .gitignore from current or git root
   - update_module_claude.sh: respect .gitignore patterns when counting files
   - Unified build_exclusion_filters() function across scripts

Key Improvements:
- Documentation output: .workflow/docs/ with project structure mirroring
- Session init: 4 steps → 1 bash command block
- Config files: multiple files → single config.json
- Path detection: improved Windows/Git Bash normalization
- Gitignore support: current dir → parent dir fallback

Related Issue: Fix core directory exclusion in get_modules_by_depth.sh
(Note: get_modules_by_depth.sh is in user global config, not in this repo)
2025-10-12 15:06:13 +08:00
86 changed files with 6178 additions and 4894 deletions

View File

@@ -0,0 +1,478 @@
---
name: cli-execution-agent
description: |
Intelligent CLI execution agent with automated context discovery and smart tool selection. Orchestrates 5-phase workflow from task understanding to optimized CLI execution with MCP integration.
Examples:
- Context: User provides task without context
user: "Implement user authentication"
assistant: "I'll discover relevant context, enhance the task description, select optimal tool, and execute"
commentary: Agent autonomously discovers context via MCP code-index, researches best practices, builds enhanced prompt, selects Codex for complex implementation
- Context: User provides analysis task
user: "Analyze API architecture patterns"
assistant: "I'll gather API-related files, analyze patterns, and execute with Gemini for comprehensive analysis"
commentary: Agent discovers API files, identifies patterns, selects Gemini for architecture analysis
- Context: User provides task with session context
user: "Execute IMPL-001 from active workflow"
assistant: "I'll load task context, discover implementation files, enhance requirements, and execute"
commentary: Agent loads task JSON, discovers code context, routes output to workflow session
color: purple
---
You are an intelligent CLI execution specialist that autonomously orchestrates comprehensive context discovery and optimal tool execution. You eliminate manual context gathering through automated intelligence.
## Core Execution Philosophy
- **Autonomous Intelligence** - Automatically discover context without user intervention
- **Smart Tool Selection** - Choose optimal CLI tool based on task characteristics
- **Context-Driven Enhancement** - Build precise prompts from discovered patterns
- **Session-Aware Routing** - Integrate seamlessly with workflow sessions
- **Graceful Degradation** - Fallback strategies when tools unavailable
## 5-Phase Execution Workflow
```
Phase 1: Task Understanding
↓ Intent, complexity, keywords
Phase 2: Context Discovery (MCP + Search)
↓ Relevant files, patterns, dependencies
Phase 3: Prompt Enhancement
↓ Structured enhanced prompt
Phase 4: Tool Selection & Execution
↓ CLI output and results
Phase 5: Output Routing
↓ Session logs and summaries
```
---
## Phase 1: Task Understanding
### Responsibilities
1. **Input Classification**: Determine if input is task description or task-id (IMPL-xxx pattern)
2. **Intent Detection**: Classify as analyze/execute/plan/discuss
3. **Complexity Assessment**: Rate as simple/medium/complex
4. **Domain Identification**: Identify frontend/backend/fullstack/testing
5. **Keyword Extraction**: Extract technical keywords for context search
### Classification Logic
**Intent Detection**:
- `analyze|review|understand|explain|debug`**analyze**
- `implement|add|create|build|fix|refactor`**execute**
- `design|plan|architecture|strategy`**plan**
- `discuss|evaluate|compare|trade-off`**discuss**
**Complexity Scoring**:
```
Score = 0
+ Keywords match ['system', 'architecture'] → +3
+ Keywords match ['refactor', 'migrate'] → +2
+ Keywords match ['component', 'feature'] → +1
+ Multiple tech stacks identified → +2
+ Critical systems ['auth', 'payment', 'security'] → +2
Score ≥ 5 → Complex
Score ≥ 2 → Medium
Score < 2 → Simple
```
**Keyword Extraction Categories**:
- **Domains**: auth, api, database, ui, component, service, middleware
- **Technologies**: react, typescript, node, express, jwt, oauth, graphql
- **Actions**: implement, refactor, optimize, test, debug
---
## Phase 2: Context Discovery
### Multi-Tool Parallel Strategy
**1. Project Structure Analysis**:
```bash
~/.claude/scripts/get_modules_by_depth.sh
```
Output: Module hierarchy and organization
**2. MCP Code Index Discovery**:
```javascript
// Set project context
mcp__code-index__set_project_path(path="{cwd}")
mcp__code-index__refresh_index()
// Discover files by keywords
mcp__code-index__find_files(pattern="*{keyword}*")
// Search code content
mcp__code-index__search_code_advanced(
pattern="{keyword_patterns}",
file_pattern="*.{ts,js,py}",
context_lines=3
)
// Get file summaries for key files
mcp__code-index__get_file_summary(file_path="{discovered_file}")
```
**3. Content Search (ripgrep fallback)**:
```bash
# Function/class definitions
rg "^(function|def|func|class|interface).*{keyword}" \
--type-add 'source:*.{ts,js,py,go}' -t source -n --max-count 15
# Import analysis
rg "^(import|from|require).*{keyword}" -t source | head -15
# Test files
find . \( -name "*{keyword}*test*" -o -name "*{keyword}*spec*" \) \
-type f | grep -E "\.(js|ts|py|go)$" | head -10
```
**4. External Research (MCP Exa - Optional)**:
```javascript
// Best practices for complex tasks
mcp__exa__get_code_context_exa(
query="{tech_stack} {task_type} implementation patterns",
tokensNum="dynamic"
)
```
### Relevance Scoring
**Score Calculation**:
```javascript
score = 0
+ Path contains keyword (exact match) +5
+ Filename contains keyword +3
+ Content keyword matches × 2
+ Source code file +2
+ Test file +1
+ Config file +1
```
**Context Optimization**:
- Sort files by relevance score
- Select top 15 files
- Group by type: source/test/config/docs
- Build structured context references
---
## Phase 3: Prompt Enhancement
### Enhancement Components
**1. Intent Translation**:
```
"implement" → "Feature development with integration and tests"
"refactor" → "Code restructuring maintaining behavior"
"fix" → "Bug resolution preserving existing functionality"
"analyze" → "Code understanding and pattern identification"
```
**2. Context Assembly**:
```bash
CONTEXT: @{CLAUDE.md} @{discovered_file1} @{discovered_file2} ...
## Discovered Context
- **Project Structure**: {module_summary}
- **Relevant Files**: {top_files_with_scores}
- **Code Patterns**: {identified_patterns}
- **Dependencies**: {tech_stack}
- **Session Memory**: {conversation_context}
## External Research
{optional_best_practices_from_exa}
```
**3. Template Selection**:
```
intent=analyze → ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt
intent=execute + complex → ~/.claude/workflows/cli-templates/prompts/development/feature.txt
intent=plan → ~/.claude/workflows/cli-templates/prompts/planning/task-breakdown.txt
```
**4. Structured Prompt**:
```bash
PURPOSE: {enhanced_intent}
TASK: {specific_task_with_details}
MODE: {analysis|write|auto}
CONTEXT: {structured_file_references}
## Discovered Context Summary
{context_from_phase_2}
EXPECTED: {clear_output_expectations}
RULES: $(cat {selected_template}) | {constraints}
```
---
## Phase 4: Tool Selection & Execution
### Tool Selection Logic
```
IF intent = 'analyze' OR 'plan':
tool = 'gemini' # Large context, pattern recognition
mode = 'analysis'
ELSE IF intent = 'execute':
IF complexity = 'simple' OR 'medium':
tool = 'gemini' # Fast, good for straightforward tasks
mode = 'write'
ELSE IF complexity = 'complex':
tool = 'codex' # Autonomous development
mode = 'auto'
ELSE IF intent = 'discuss':
tool = 'multi' # Gemini + Codex + synthesis
mode = 'discussion'
# User --tool flag overrides auto-selection
```
### Command Construction
**Gemini/Qwen (Analysis Mode)**:
```bash
cd {directory} && ~/.claude/scripts/{tool}-wrapper -p "
{enhanced_prompt}
"
```
**Gemini/Qwen (Write Mode)**:
```bash
cd {directory} && ~/.claude/scripts/{tool}-wrapper --approval-mode yolo -p "
{enhanced_prompt}
"
```
**Codex (Auto Mode)**:
```bash
codex -C {directory} --full-auto exec "
{enhanced_prompt}
" --skip-git-repo-check -s danger-full-access
```
**Codex (Resume for Related Tasks)**:
```bash
codex --full-auto exec "
{continuation_prompt}
" resume --last --skip-git-repo-check -s danger-full-access
```
### Timeout Configuration
```javascript
baseTimeout = {
simple: 20 * 60 * 1000, // 20min
medium: 40 * 60 * 1000, // 40min
complex: 60 * 60 * 1000 // 60min
}
if (tool === 'codex') {
timeout = baseTimeout * 1.5
}
```
---
## Phase 5: Output Routing
### Session Detection
```javascript
// Check for active session
activeSession = bash("find .workflow/ -name '.active-*' -type f")
if (activeSession.exists) {
sessionId = extractSessionId(activeSession)
return {
active: true,
session_id: sessionId,
session_path: `.workflow/${sessionId}/`
}
}
```
### Output Paths
**Active Session**:
```
.workflow/WFS-{id}/.chat/{agent}-{timestamp}.md
.workflow/WFS-{id}/.summaries/{task-id}-summary.md // if task-id
```
**Scratchpad (No Session)**:
```
.workflow/.scratchpad/{agent}-{description}-{timestamp}.md
```
### Execution Log Structure
```markdown
# CLI Execution Agent Log
**Timestamp**: {iso_timestamp}
**Session**: {session_id | "scratchpad"}
**Task**: {task_id | description}
---
## Phase 1: Task Understanding
- **Intent**: {analyze|execute|plan|discuss}
- **Complexity**: {simple|medium|complex}
- **Keywords**: {extracted_keywords}
## Phase 2: Context Discovery
**Discovered Files** ({N}):
1. {file} (score: {score}) - {description}
**Patterns**: {identified_patterns}
**Dependencies**: {tech_stack}
## Phase 3: Enhanced Prompt
```
{full_enhanced_prompt}
```
## Phase 4: Execution
**Tool**: {gemini|codex|qwen}
**Command**:
```bash
{executed_command}
```
**Result**: {success|partial|failed}
**Duration**: {elapsed_time}
## Phase 5: Output
- Log: {log_path}
- Summary: {summary_path | N/A}
## Next Steps
{recommended_actions}
```
---
## MCP Integration Guidelines
### Code Index Usage
**Project Setup**:
```javascript
mcp__code-index__set_project_path(path="{project_root}")
mcp__code-index__refresh_index()
```
**File Discovery**:
```javascript
// Find by pattern
mcp__code-index__find_files(pattern="*auth*")
// Search content
mcp__code-index__search_code_advanced(
pattern="function.*authenticate",
file_pattern="*.ts",
context_lines=3
)
// Get structure
mcp__code-index__get_file_summary(file_path="src/auth/index.ts")
```
### Exa Research Usage
**Best Practices**:
```javascript
mcp__exa__get_code_context_exa(
query="TypeScript authentication JWT patterns",
tokensNum="dynamic"
)
```
**When to Use Exa**:
- Complex tasks requiring best practices
- Unfamiliar technology stack
- Architecture design decisions
- Performance optimization
---
## Error Handling & Recovery
### Graceful Degradation
**MCP Unavailable**:
```bash
# Fallback to ripgrep + find
if ! mcp__code-index__find_files; then
find . -name "*{keyword}*" -type f | grep -v node_modules
rg "{keyword}" --type ts --max-count 20
fi
```
**Tool Unavailable**:
```
Gemini unavailable → Try Qwen
Codex unavailable → Try Gemini with write mode
All tools unavailable → Report error
```
**Timeout Handling**:
- Collect partial results
- Save intermediate output
- Report completion status
- Suggest task decomposition
---
## Quality Standards
### Execution Checklist
Before completing execution:
- [ ] Context discovery successful (≥3 relevant files)
- [ ] Enhanced prompt contains specific details
- [ ] Appropriate tool selected
- [ ] CLI execution completed
- [ ] Output properly routed
- [ ] Session state updated (if active session)
- [ ] Next steps documented
### Performance Targets
- **Phase 1**: 1-3 seconds
- **Phase 2**: 5-15 seconds (MCP + search)
- **Phase 3**: 2-5 seconds
- **Phase 4**: Variable (tool-dependent)
- **Phase 5**: 1-3 seconds
**Total (excluding Phase 4)**: ~10-25 seconds overhead
---
## Key Reminders
**ALWAYS:**
- Execute all 5 phases systematically
- Use MCP tools when available
- Score file relevance objectively
- Select tools based on complexity and intent
- Route output to correct location
- Provide clear next steps
- Handle errors gracefully with fallbacks
**NEVER:**
- Skip context discovery (Phase 2)
- Assume tool availability without checking
- Execute without session detection
- Ignore complexity assessment
- Make tool selection without logic
- Leave partial results without documentation

View File

@@ -1,8 +1,8 @@
---
name: analyze
description: Quick codebase analysis using CLI tools (codex/gemini/qwen)
argument-hint: "[--tool codex|gemini|qwen] [--enhance] analysis target"
allowed-tools: SlashCommand(*), Bash(*), TodoWrite(*), Read(*), Glob(*)
argument-hint: "[--agent] [--tool codex|gemini|qwen] [--enhance] analysis target"
allowed-tools: SlashCommand(*), Bash(*), TodoWrite(*), Read(*), Glob(*), Task(*)
---
# CLI Analyze Command (/cli:analyze)
@@ -23,12 +23,15 @@ Quick codebase analysis using CLI tools. **Analysis only - does NOT modify code*
## Parameters
- `--tool <codex|gemini|qwen>` - Tool selection (default: gemini)
- `--agent` - Use cli-execution-agent for automated context discovery (5-phase intelligent mode)
- `--tool <codex|gemini|qwen>` - Tool selection (default: gemini, ignored in agent mode)
- `--enhance` - Use `/enhance-prompt` for context-aware enhancement
- `<analysis-target>` - Description of what to analyze
## Execution Flow
### Standard Mode (Default)
1. Parse tool selection (default: gemini)
2. If `--enhance`: Execute `/enhance-prompt` first to expand user intent
3. Auto-detect analysis type from keywords → select template
@@ -36,6 +39,32 @@ Quick codebase analysis using CLI tools. **Analysis only - does NOT modify code*
5. Execute analysis (read-only, no code changes)
6. Return analysis report with insights and recommendations
### Agent Mode (`--agent` flag)
Delegate task to `cli-execution-agent` for intelligent execution with automated context discovery.
**Agent invocation**:
```javascript
Task(
subagent_type="cli-execution-agent",
description="Analyze codebase with automated context discovery",
prompt=`
Task: ${analysis_target}
Mode: analyze
Tool Preference: ${tool_flag || 'auto-select'}
${enhance_flag ? 'Enhance: true' : ''}
Agent will autonomously:
- Discover relevant files and patterns
- Build enhanced analysis prompt
- Select optimal tool and execute
- Route output to session/scratchpad
`
)
```
The agent handles all phases internally (understanding, discovery, enhancement, execution, routing).
## File Pattern Auto-Detection
Keywords trigger specific file patterns:
@@ -63,13 +92,24 @@ RULES: [auto-selected template] | Focus on [analysis aspect]
## Examples
**Basic Analysis**:
**Basic Analysis (Standard Mode)**:
```bash
/cli:analyze "authentication patterns"
# Executes: Gemini analysis with auth file patterns
# Returns: Pattern analysis, architecture insights, recommendations
```
**Intelligent Analysis (Agent Mode)**:
```bash
/cli:analyze --agent "authentication patterns"
# Phase 1: Classifies intent=analyze, complexity=simple, keywords=['auth', 'patterns']
# Phase 2: MCP discovers 12 auth files, identifies patterns
# Phase 3: Builds enhanced prompt with discovered context
# Phase 4: Executes Gemini with comprehensive file references
# Phase 5: Saves execution log with all 5 phases documented
# Returns: Comprehensive analysis + detailed execution log
```
**Architecture Analysis**:
```bash
/cli:analyze --tool qwen "component architecture"

View File

@@ -1,8 +1,8 @@
---
name: chat
description: Simple CLI interaction command for direct codebase analysis
argument-hint: "[--tool codex|gemini|qwen] [--enhance] inquiry"
allowed-tools: SlashCommand(*), Bash(*)
argument-hint: "[--agent] [--tool codex|gemini|qwen] [--enhance] inquiry"
allowed-tools: SlashCommand(*), Bash(*), Task(*)
---
# CLI Chat Command (/cli:chat)
@@ -24,13 +24,16 @@ Direct Q&A interaction with CLI tools for codebase analysis. **Analysis only - d
## Parameters
- `<inquiry>` (Required) - Question or analysis request
- `--tool <codex|gemini|qwen>` - Select CLI tool (default: gemini)
- `--agent` - Use cli-execution-agent for automated context discovery (5-phase intelligent mode)
- `--tool <codex|gemini|qwen>` - Select CLI tool (default: gemini, ignored in agent mode)
- `--enhance` - Enhance inquiry with `/enhance-prompt` first
- `--all-files` - Include entire codebase in context
- `--save-session` - Save interaction to workflow session
## Execution Flow
### Standard Mode (Default)
1. Parse tool selection (default: gemini)
2. If `--enhance`: Execute `/enhance-prompt` to expand user intent
3. Assemble context: `@{CLAUDE.md}` + user-specified files or `--all-files`
@@ -38,6 +41,32 @@ Direct Q&A interaction with CLI tools for codebase analysis. **Analysis only - d
5. Return explanations and insights (NO code changes)
6. Optionally save to workflow session
### Agent Mode (`--agent` flag)
Delegate inquiry to `cli-execution-agent` for intelligent Q&A with automated context discovery.
**Agent invocation**:
```javascript
Task(
subagent_type="cli-execution-agent",
description="Answer question with automated context discovery",
prompt=`
Task: ${inquiry}
Mode: analyze (Q&A)
Tool Preference: ${tool_flag || 'auto-select'}
${all_files_flag ? 'Scope: all-files' : ''}
Agent will autonomously:
- Discover files relevant to the question
- Build Q&A prompt with precise context
- Execute and generate comprehensive answer
- Save conversation log
`
)
```
The agent handles all phases internally.
## Context Assembly
**Always included**: `@{CLAUDE.md,**/*CLAUDE.md}` (project guidelines)
@@ -61,13 +90,24 @@ RESPONSE: Direct answer, explanation, insights (NO code modification)
## Examples
**Basic Question**:
**Basic Question (Standard Mode)**:
```bash
/cli:chat "analyze the authentication flow"
# Executes: Gemini analysis
# Returns: Explanation of auth flow, components involved, data flow
```
**Intelligent Q&A (Agent Mode)**:
```bash
/cli:chat --agent "how does JWT token refresh work in this codebase"
# Phase 1: Understands inquiry = JWT refresh mechanism
# Phase 2: Discovers JWT files, refresh logic, middleware patterns
# Phase 3: Builds Q&A prompt with discovered implementation details
# Phase 4: Executes Gemini with precise context for accurate answer
# Phase 5: Saves conversation log with discovered context
# Returns: Detailed answer with code references + execution log
```
**Architecture Question**:
```bash
/cli:chat --tool qwen "how does React component optimization work here"

View File

@@ -86,7 +86,7 @@ Codex reviews Gemini's output using conversational reasoning. Uses `resume --las
```bash
# First round (new session)
codex chat "
codex --full-auto exec "
PURPOSE: Critically review technical plan
TASK: Review the provided plan, identify weaknesses, suggest alternatives, reason about trade-offs
MODE: analysis
@@ -94,10 +94,10 @@ CONTEXT: @{CLAUDE.md} [relevant files]
INPUT_PLAN: [Output from Gemini's analysis]
EXPECTED: Critical review with alternative ideas and risk analysis
RULES: Focus on architectural soundness and implementation feasibility
"
" --skip-git-repo-check
# Subsequent rounds (resume discussion)
codex chat "
codex --full-auto exec "
PURPOSE: Re-evaluate plan based on latest synthesis
TASK: Review updated plan and discussion points, provide further critique or refined ideas
MODE: analysis
@@ -105,7 +105,7 @@ CONTEXT: Previous discussion context (maintained via resume)
INPUT_PLAN: [Output from Gemini's analysis for current round]
EXPECTED: Updated critique building on previous discussion
RULES: Build on previous insights, avoid repeating points
" resume --last
" resume --last --skip-git-repo-check
```
**Step 3: Claude's Synthesis (Priority 3)**

View File

@@ -1,8 +1,8 @@
---
name: execute
description: Auto-execution of implementation tasks with YOLO permissions and intelligent context inference
argument-hint: "[--tool codex|gemini|qwen] [--enhance] description or task-id"
allowed-tools: SlashCommand(*), Bash(*)
argument-hint: "[--agent] [--tool codex|gemini|qwen] [--enhance] description or task-id"
allowed-tools: SlashCommand(*), Bash(*), Task(*)
---
# CLI Execute Command (/cli:execute)
@@ -39,6 +39,10 @@ Auto-approves: file pattern inference, execution, **file modifications**, summar
- Input: Workflow task identifier (e.g., `IMPL-001`)
- Process: Task JSON parsing → Scope analysis → Execute
**3. Agent Mode** (`--agent` flag):
- Input: Description or task-id
- Process: 5-Phase Workflow → Context Discovery → Optimal Tool Selection → Execute
### Context Inference
Auto-selects files based on keywords and technology:
@@ -64,7 +68,8 @@ Use `resume --last` when current task extends/relates to previous execution. See
## Parameters
- `--tool <codex|gemini|qwen>` - Select CLI tool (default: gemini)
- `--agent` - Use cli-execution-agent for automated context discovery (5-phase intelligent mode)
- `--tool <codex|gemini|qwen>` - Select CLI tool (default: gemini, ignored in agent mode unless specified)
- `--enhance` - Enhance input with `/enhance-prompt` first (Description Mode only)
- `<description|task-id>` - Natural language description or task identifier
- `--debug` - Verbose logging
@@ -101,8 +106,9 @@ Use `resume --last` when current task extends/relates to previous execution. See
- No session, ad-hoc implementation:
- Log: `.workflow/.scratchpad/execute-jwt-auth-20250105-143045.md`
## Command Template
## Execution Modes
### Standard Mode (Default)
```bash
# Gemini/Qwen: MODE=write with --approval-mode yolo
cd . && ~/.claude/scripts/gemini-wrapper --approval-mode yolo -p "
@@ -124,15 +130,52 @@ EXPECTED: Complete implementation with tests
" --skip-git-repo-check -s danger-full-access
```
### Agent Mode (`--agent` flag)
Delegate implementation to `cli-execution-agent` for intelligent execution with automated context discovery.
**Agent invocation**:
```javascript
Task(
subagent_type="cli-execution-agent",
description="Implement with automated context discovery and optimal tool selection",
prompt=`
Task: ${description_or_task_id}
Mode: execute
Tool Preference: ${tool_flag || 'auto-select'}
${enhance_flag ? 'Enhance: true' : ''}
Agent will autonomously:
- Discover implementation files and dependencies
- Assess complexity and select optimal tool
- Execute with YOLO permissions (auto-approve)
- Generate task summary if task-id provided
`
)
```
The agent handles all phases internally, including complexity-based tool selection.
## Examples
**Basic Implementation** (⚠️ modifies code):
**Basic Implementation (Standard Mode)** (⚠️ modifies code):
```bash
/cli:execute "implement JWT authentication with middleware"
# Executes: Creates auth middleware, updates routes, modifies config
# Result: NEW/MODIFIED code files with JWT implementation
```
**Intelligent Implementation (Agent Mode)** (⚠️ modifies code):
```bash
/cli:execute --agent "implement OAuth2 authentication with token refresh"
# Phase 1: Classifies intent=execute, complexity=complex, keywords=['oauth2', 'auth', 'token', 'refresh']
# Phase 2: MCP discovers auth patterns, existing middleware, JWT dependencies
# Phase 3: Enhances prompt with discovered patterns and best practices
# Phase 4: Selects Codex (complex task), executes with comprehensive context
# Phase 5: Saves execution log + generates implementation summary
# Result: Complete OAuth2 implementation + detailed execution log
```
**Enhanced Implementation** (⚠️ modifies code):
```bash
/cli:execute --enhance "implement JWT authentication"

View File

@@ -1,8 +1,8 @@
---
name: bug-index
description: Bug analysis and fix suggestions using CLI tools
argument-hint: "[--tool codex|gemini|qwen] [--enhance] [--cd path] bug description"
allowed-tools: SlashCommand(*), Bash(*)
argument-hint: "[--agent] [--tool codex|gemini|qwen] [--enhance] [--cd path] bug description"
allowed-tools: SlashCommand(*), Bash(*), Task(*)
---
# CLI Mode: Bug Index (/cli:mode:bug-index)
@@ -16,13 +16,16 @@ Systematic bug analysis with diagnostic template (`~/.claude/prompt-templates/bu
## Parameters
- `--tool <codex|gemini|qwen>` - Tool selection (default: gemini)
- `--agent` - Use cli-execution-agent for automated context discovery (5-phase intelligent mode)
- `--tool <codex|gemini|qwen>` - Tool selection (default: gemini, ignored in agent mode)
- `--enhance` - Enhance bug description with `/enhance-prompt` first
- `--cd "path"` - Target directory for focused analysis
- `<bug-description>` (Required) - Bug description or error message
## Execution Flow
### Standard Mode (Default)
1. **Parse tool selection**: Extract `--tool` flag (default: gemini)
2. **If `--enhance` flag present**: Execute `/enhance-prompt "[bug-description]"` first
3. Parse bug description (original or enhanced)
@@ -31,6 +34,33 @@ Systematic bug analysis with diagnostic template (`~/.claude/prompt-templates/bu
6. Execute analysis (read-only, provides fix recommendations)
7. Save to `.workflow/WFS-[id]/.chat/bug-index-[timestamp].md`
### Agent Mode (`--agent` flag)
Delegate bug analysis to `cli-execution-agent` for intelligent debugging with automated context discovery.
**Agent invocation**:
```javascript
Task(
subagent_type="cli-execution-agent",
description="Analyze bug with automated context discovery",
prompt=`
Task: ${bug_description}
Mode: debug (bug analysis)
Tool Preference: ${tool_flag || 'auto-select'}
${cd_flag ? `Directory Scope: ${cd_path}` : ''}
Template: bug-fix
Agent will autonomously:
- Discover bug-related files and error traces
- Build debug prompt with bug-fix template
- Execute analysis and provide fix recommendations
- Save analysis log
`
)
```
The agent handles all phases internally.
## Core Rules
1. **Analysis Only**: This command analyzes bugs and suggests fixes - it does NOT modify code
@@ -61,7 +91,25 @@ RULES: $(cat ~/.claude/prompt-templates/bug-fix.md) | Bug: [description]
## Examples
**Basic Bug Analysis**:
**Basic Bug Analysis (Standard Mode)**:
```bash
/cli:mode:bug-index "null pointer error in login flow"
# Executes: Gemini with bug-fix template
# Returns: Root cause analysis, fix recommendations
```
**Intelligent Bug Analysis (Agent Mode)**:
```bash
/cli:mode:bug-index --agent "intermittent token validation failure"
# Phase 1: Classifies as debug task, extracts keywords ['token', 'validation', 'failure']
# Phase 2: MCP discovers token validation code, middleware, test files with errors
# Phase 3: Builds debug prompt with bug-fix template + discovered error patterns
# Phase 4: Executes Gemini with comprehensive bug context
# Phase 5: Saves analysis log with detailed fix recommendations
# Returns: Root cause analysis + code path traces + minimal fix suggestions
```
**Standard Template Example**:
```bash
cd . && ~/.claude/scripts/gemini-wrapper --all-files -p "
PURPOSE: Debug authentication null pointer error

View File

@@ -1,8 +1,8 @@
---
name: code-analysis
description: Deep code analysis and debugging using CLI tools with specialized template
argument-hint: "[--tool codex|gemini|qwen] [--enhance] [--cd path] analysis target"
allowed-tools: SlashCommand(*), Bash(*)
argument-hint: "[--agent] [--tool codex|gemini|qwen] [--enhance] [--cd path] analysis target"
allowed-tools: SlashCommand(*), Bash(*), Task(*)
---
# CLI Mode: Code Analysis (/cli:mode:code-analysis)
@@ -16,13 +16,16 @@ Systematic code analysis with execution path tracing template (`~/.claude/prompt
## Parameters
- `--tool <codex|gemini|qwen>` - Tool selection (default: gemini)
- `--agent` - Use cli-execution-agent for automated context discovery (5-phase intelligent mode)
- `--tool <codex|gemini|qwen>` - Tool selection (default: gemini, ignored in agent mode)
- `--enhance` - Enhance analysis target with `/enhance-prompt` first
- `--cd "path"` - Target directory for focused analysis
- `<analysis-target>` (Required) - Code analysis target or question
## Execution Flow
### Standard Mode (Default)
1. **Parse tool selection**: Extract `--tool` flag (default: gemini)
2. **If `--enhance` flag present**: Execute `/enhance-prompt "[analysis-target]"` first
3. Parse analysis target (original or enhanced)
@@ -31,6 +34,33 @@ Systematic code analysis with execution path tracing template (`~/.claude/prompt
6. Execute deep analysis (read-only, no code modification)
7. Save to `.workflow/WFS-[id]/.chat/code-analysis-[timestamp].md`
### Agent Mode (`--agent` flag)
Delegate code analysis to `cli-execution-agent` for intelligent execution path tracing with automated context discovery.
**Agent invocation**:
```javascript
Task(
subagent_type="cli-execution-agent",
description="Analyze code execution paths with automated context discovery",
prompt=`
Task: ${analysis_target}
Mode: code-analysis (execution tracing)
Tool Preference: ${tool_flag || 'auto-select'}
${cd_flag ? `Directory Scope: ${cd_path}` : ''}
Template: code-analysis
Agent will autonomously:
- Discover execution paths and call flows
- Build analysis prompt with code-analysis template
- Execute deep tracing analysis
- Generate call diagrams and save log
`
)
```
The agent handles all phases internally.
## Core Rules
1. **Analysis Only**: This command analyzes code and provides insights - it does NOT modify code
@@ -64,7 +94,25 @@ RULES: $(cat ~/.claude/prompt-templates/code-analysis.md) | Focus on [aspect]
## Examples
**Basic Code Analysis**:
**Basic Code Analysis (Standard Mode)**:
```bash
/cli:mode:code-analysis "trace authentication execution flow"
# Executes: Gemini with code-analysis template
# Returns: Execution trace, call diagram, debugging insights
```
**Intelligent Code Analysis (Agent Mode)**:
```bash
/cli:mode:code-analysis --agent "trace JWT token validation from request to database"
# Phase 1: Classifies as deep analysis, keywords ['jwt', 'token', 'validation', 'database']
# Phase 2: MCP discovers request handler → middleware → service → repository chain
# Phase 3: Builds analysis prompt with code-analysis template + complete call path
# Phase 4: Executes Gemini with traced execution paths
# Phase 5: Saves detailed analysis with call flow diagrams and variable states
# Returns: Complete execution trace + call diagram + data flow analysis
```
**Standard Template Example**:
```bash
cd . && ~/.claude/scripts/gemini-wrapper --all-files -p "
PURPOSE: Trace authentication execution flow

View File

@@ -1,8 +1,8 @@
---
name: plan
description: Project planning and architecture analysis using CLI tools
argument-hint: "[--tool codex|gemini|qwen] [--enhance] [--cd path] topic"
allowed-tools: SlashCommand(*), Bash(*)
argument-hint: "[--agent] [--tool codex|gemini|qwen] [--enhance] [--cd path] topic"
allowed-tools: SlashCommand(*), Bash(*), Task(*)
---
# CLI Mode: Plan (/cli:mode:plan)
@@ -16,13 +16,16 @@ Comprehensive planning and architecture analysis with strategic planning templat
## Parameters
- `--tool <codex|gemini|qwen>` - Tool selection (default: gemini)
- `--agent` - Use cli-execution-agent for automated context discovery (5-phase intelligent mode)
- `--tool <codex|gemini|qwen>` - Tool selection (default: gemini, ignored in agent mode)
- `--enhance` - Enhance topic with `/enhance-prompt` first
- `--cd "path"` - Target directory for focused planning
- `<topic>` (Required) - Planning topic or architectural question
## Execution Flow
### Standard Mode (Default)
1. **Parse tool selection**: Extract `--tool` flag (default: gemini)
2. **If `--enhance` flag present**: Execute `/enhance-prompt "[topic]"` first
3. Parse topic (original or enhanced)
@@ -31,6 +34,33 @@ Comprehensive planning and architecture analysis with strategic planning templat
6. Execute analysis (read-only, no code modification)
7. Save to `.workflow/WFS-[id]/.chat/plan-[timestamp].md`
### Agent Mode (`--agent` flag)
Delegate planning to `cli-execution-agent` for intelligent strategic planning with automated architecture discovery.
**Agent invocation**:
```javascript
Task(
subagent_type="cli-execution-agent",
description="Create strategic plan with automated architecture discovery",
prompt=`
Task: ${planning_topic}
Mode: plan (strategic planning)
Tool Preference: ${tool_flag || 'auto-select'}
${cd_flag ? `Directory Scope: ${cd_path}` : ''}
Template: plan
Agent will autonomously:
- Discover project structure and existing architecture
- Build planning prompt with plan template
- Execute strategic planning analysis
- Generate implementation roadmap and save
`
)
```
The agent handles all phases internally.
## Core Rules
1. **Analysis Only**: This command provides planning recommendations and insights - it does NOT modify code
@@ -62,7 +92,25 @@ RULES: $(cat ~/.claude/prompt-templates/plan.md) | Focus on [topic area]
## Examples
**Basic Planning Analysis**:
**Basic Planning Analysis (Standard Mode)**:
```bash
/cli:mode:plan "design user dashboard architecture"
# Executes: Gemini with planning template
# Returns: Architecture recommendations, component design, roadmap
```
**Intelligent Planning (Agent Mode)**:
```bash
/cli:mode:plan --agent "design microservices architecture for payment system"
# Phase 1: Classifies as architectural planning, keywords ['microservices', 'payment', 'architecture']
# Phase 2: MCP discovers existing services, payment flows, integration patterns
# Phase 3: Builds planning prompt with plan template + current architecture context
# Phase 4: Executes Gemini with comprehensive project understanding
# Phase 5: Saves planning document with implementation roadmap and migration strategy
# Returns: Strategic architecture plan + implementation roadmap + risk assessment
```
**Standard Template Example**:
```bash
cd . && ~/.claude/scripts/gemini-wrapper --all-files -p "
PURPOSE: Design user dashboard architecture

View File

@@ -31,7 +31,7 @@ FUNCTION should_use_gemini(user_prompt):
END
```
**Gemini Integration:** @~/.claude/workflows/intelligent-tools-strategy.md
**Gemini Integration:** ~/.claude/workflows/intelligent-tools-strategy.md
## Enhancement Rules

View File

@@ -0,0 +1,828 @@
---
name: docs
description: Documentation planning and orchestration - creates structured documentation tasks for execution
argument-hint: "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-generate]"
---
# Documentation Workflow (/memory:docs)
## Overview
Lightweight planner that analyzes project structure, decomposes documentation work into tasks, and generates execution plans. Does NOT generate documentation content itself - delegates to doc-generator agent.
**Documentation Output**: All generated documentation is placed in `.workflow/docs/` directory with **mirrored project structure**. For example:
- Source: `src/modules/auth/index.ts` → Docs: `.workflow/docs/src/modules/auth/API.md`
- Source: `lib/core/utils.js` → Docs: `.workflow/docs/lib/core/README.md`
**Two Execution Modes**:
- **Default**: CLI analyzes in `pre_analysis` (MODE=analysis), agent writes docs in `implementation_approach`
- **--cli-generate**: CLI generates docs in `implementation_approach` (MODE=write)
## Path Mirroring Strategy
**Principle**: Documentation structure **mirrors** source code structure.
| Source Path | Documentation Path |
|------------|-------------------|
| `src/modules/auth/index.ts` | `.workflow/docs/src/modules/auth/API.md` |
| `src/modules/auth/middleware/` | `.workflow/docs/src/modules/auth/middleware/README.md` |
| `lib/core/utils.js` | `.workflow/docs/lib/core/API.md` |
| `lib/core/helpers/` | `.workflow/docs/lib/core/helpers/README.md` |
**Benefits**:
- Easy to locate documentation for any source file
- Maintains logical organization
- Clear 1:1 mapping between code and docs
- Supports any project structure (src/, lib/, packages/, etc.)
## Parameters
```bash
/memory:docs [path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-generate]
```
- **path**: Target directory (default: current directory)
- Specifies the directory to generate documentation for
- **--mode**: Documentation generation mode (default: full)
- `full`: Complete documentation (modules + project README + ARCHITECTURE + EXAMPLES)
- Level 1: Module tree documentation
- Level 2: Project README.md
- Level 3: ARCHITECTURE.md + EXAMPLES.md + HTTP API (optional)
- `partial`: Module documentation only
- Level 1: Module tree documentation (API.md + README.md)
- Skips project-level documentation
- **--tool**: CLI tool selection (default: gemini)
- `gemini`: Comprehensive documentation, pattern recognition
- `qwen`: Architecture analysis, system design focus
- `codex`: Implementation validation, code quality
- **--cli-generate**: Enable CLI-based documentation generation (optional)
- When enabled: CLI generates docs with MODE=write in implementation_approach
- When disabled (default): CLI analyzes with MODE=analysis in pre_analysis
## Planning Workflow
### Phase 1: Initialize Session
#### Step 1: Create Session and Generate Config
```bash
# Create session structure and initialize config in one step
bash(
# Parse arguments
path="${1:-.}"
tool="gemini"
mode="full"
cli_generate=false
shift
while [[ $# -gt 0 ]]; do
case "$1" in
--tool) tool="$2"; shift 2 ;;
--mode) mode="$2"; shift 2 ;;
--cli-generate) cli_generate=true; shift ;;
*) shift ;;
esac
done
# Detect paths
project_root=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
if [[ "$path" == /* ]] || [[ "$path" == [A-Z]:* ]]; then
target_path="$path"
else
target_path=$(cd "$path" 2>/dev/null && pwd || echo "$PWD/$path")
fi
# Create session
timestamp=$(date +%Y%m%d-%H%M%S)
session="WFS-docs-${timestamp}"
mkdir -p ".workflow/${session}"/{.task,.process,.summaries}
touch ".workflow/.active-${session}"
# Generate single config file with all info
cat > ".workflow/${session}/.process/config.json" <<EOF
{
"session_id": "${session}",
"timestamp": "$(date -Iseconds)",
"path": "${path}",
"target_path": "${target_path}",
"project_root": "${project_root}",
"mode": "${mode}",
"tool": "${tool}",
"cli_generate": ${cli_generate}
}
EOF
echo "✓ Session initialized: ${session}"
echo "✓ Target: ${target_path}"
echo "✓ Mode: ${mode}"
echo "✓ Tool: ${tool}, CLI generate: ${cli_generate}"
)
```
**Output**:
```
✓ Session initialized: WFS-docs-20240120-143022
✓ Target: /d/Claude_dms3
✓ Mode: full
✓ Tool: gemini, CLI generate: false
```
### Phase 2: Analyze Structure
#### Step 1: Discover and Classify Folders
```bash
# Run analysis pipeline (module discovery + folder classification)
bash(~/.claude/scripts/get_modules_by_depth.sh | ~/.claude/scripts/classify-folders.sh > .workflow/WFS-docs-20240120/.process/folder-analysis.txt)
```
**Output Sample** (folder-analysis.txt):
```
./src/modules/auth|code|code:5|dirs:2
./src/modules/api|code|code:3|dirs:0
./src/utils|navigation|code:0|dirs:4
```
#### Step 2: Extract Top-Level Directories
```bash
# Group folders by top-level directory
bash(awk -F'|' '{
path = $1
gsub(/^\.\//, "", path)
split(path, parts, "/")
if (length(parts) >= 2) print parts[1] "/" parts[2]
else if (length(parts) == 1 && parts[1] != ".") print parts[1]
}' .workflow/WFS-docs-20240120/.process/folder-analysis.txt | sort -u > .workflow/WFS-docs-20240120/.process/top-level-dirs.txt)
```
**Output** (top-level-dirs.txt):
```
src/modules
src/utils
lib/core
```
#### Step 3: Generate Analysis Summary
```bash
# Calculate statistics
bash(
total=$(wc -l < .workflow/WFS-docs-20240120/.process/folder-analysis.txt)
code_count=$(grep '|code|' .workflow/WFS-docs-20240120/.process/folder-analysis.txt | wc -l)
nav_count=$(grep '|navigation|' .workflow/WFS-docs-20240120/.process/folder-analysis.txt | wc -l)
top_dirs=$(wc -l < .workflow/WFS-docs-20240120/.process/top-level-dirs.txt)
echo "📊 Folder Analysis Complete:"
echo " - Total folders: $total"
echo " - Code folders: $code_count"
echo " - Navigation folders: $nav_count"
echo " - Top-level dirs: $top_dirs"
)
# Update config with statistics
bash(jq '. + {analysis: {total: "15", code: "8", navigation: "7", top_level: "3"}}' .workflow/WFS-docs-20240120/.process/config.json > .workflow/WFS-docs-20240120/.process/config.json.tmp && mv .workflow/WFS-docs-20240120/.process/config.json.tmp .workflow/WFS-docs-20240120/.process/config.json)
```
### Phase 3: Detect Update Mode
#### Step 1: Count Existing Documentation in .workflow/docs/
```bash
# Check .workflow/docs/ directory and count existing files
bash(if [[ -d ".workflow/docs" ]]; then
find .workflow/docs -name "*.md" 2>/dev/null | wc -l
else
echo "0"
fi)
```
**Output**: `5` (existing docs in .workflow/docs/)
#### Step 2: List Existing Documentation
```bash
# List existing files in .workflow/docs/ (for task context)
bash(if [[ -d ".workflow/docs" ]]; then
find .workflow/docs -name "*.md" 2>/dev/null > .workflow/WFS-docs-20240120/.process/existing-docs.txt
else
touch .workflow/WFS-docs-20240120/.process/existing-docs.txt
fi)
```
**Output** (existing-docs.txt):
```
.workflow/docs/src/modules/auth/API.md
.workflow/docs/src/modules/auth/README.md
.workflow/docs/lib/core/README.md
.workflow/docs/README.md
```
#### Step 3: Update Config with Update Status
```bash
# Determine update status (create or update) and update config
bash(
existing_count=$(find .workflow/docs -name "*.md" 2>/dev/null | wc -l)
if [[ $existing_count -gt 0 ]]; then
jq ". + {update_mode: \"update\", existing_docs: $existing_count}" .workflow/WFS-docs-20240120/.process/config.json > .workflow/WFS-docs-20240120/.process/config.json.tmp && mv .workflow/WFS-docs-20240120/.process/config.json.tmp .workflow/WFS-docs-20240120/.process/config.json
else
jq '. + {update_mode: "create", existing_docs: 0}' .workflow/WFS-docs-20240120/.process/config.json > .workflow/WFS-docs-20240120/.process/config.json.tmp && mv .workflow/WFS-docs-20240120/.process/config.json.tmp .workflow/WFS-docs-20240120/.process/config.json
fi
)
# Display strategy summary
bash(
mode=$(jq -r '.mode' .workflow/WFS-docs-20240120/.process/config.json)
update_mode=$(jq -r '.update_mode' .workflow/WFS-docs-20240120/.process/config.json)
existing=$(jq -r '.existing_docs' .workflow/WFS-docs-20240120/.process/config.json)
tool=$(jq -r '.tool' .workflow/WFS-docs-20240120/.process/config.json)
cli_gen=$(jq -r '.cli_generate' .workflow/WFS-docs-20240120/.process/config.json)
echo "📋 Documentation Strategy:"
echo " - Path: $(jq -r '.target_path' .workflow/WFS-docs-20240120/.process/config.json)"
echo " - Mode: $mode ($([ "$mode" = "full" ] && echo "complete docs" || echo "modules only"))"
echo " - Update: $update_mode ($existing existing files)"
echo " - Tool: $tool, CLI generate: $cli_gen"
)
```
### Phase 4: Decompose Tasks
#### Task Hierarchy
```
Level 1: Module Trees (always, parallel execution)
├─ IMPL-001: Document 'src/modules/'
├─ IMPL-002: Document 'src/utils/'
└─ IMPL-003: Document 'lib/'
Level 2: Project README (mode=full only, depends on Level 1)
└─ IMPL-004: Generate Project README
Level 3: Architecture & Examples (mode=full only, depends on Level 2)
├─ IMPL-005: Generate ARCHITECTURE.md + EXAMPLES.md
└─ IMPL-006: Generate HTTP API (optional)
```
#### Step 1: Generate Level 1 Tasks (Module Trees)
```bash
# Read top-level directories and create tasks
bash(
task_count=0
while read -r top_dir; do
task_count=$((task_count + 1))
task_id=$(printf "IMPL-%03d" $task_count)
echo "Creating $task_id for '$top_dir'"
# Generate task JSON (see Task Templates section)
done < .workflow/WFS-docs-20240120/.process/top-level-dirs.txt
)
```
**Output**:
```
Creating IMPL-001 for 'src/modules'
Creating IMPL-002 for 'src/utils'
Creating IMPL-003 for 'lib'
```
#### Step 2: Generate Level 2-3 Tasks (Full Mode Only)
```bash
# Check documentation mode
bash(jq -r '.mode' .workflow/WFS-docs-20240120/.process/config.json)
# If full mode, create project-level tasks
bash(
mode=$(jq -r '.mode' .workflow/WFS-docs-20240120/.process/config.json)
if [[ "$mode" == "full" ]]; then
echo "Creating IMPL-004: Project README"
echo "Creating IMPL-005: ARCHITECTURE.md + EXAMPLES.md"
# Optional: Check for HTTP API endpoints
if grep -r "router\.|@Get\|@Post" src/ >/dev/null 2>&1; then
echo "Creating IMPL-006: HTTP API docs"
fi
else
echo "Partial mode: Skipping project-level tasks"
fi
)
```
### Phase 5: Generate Task JSONs
#### Step 1: Extract Configuration
```bash
# Read config values from JSON
bash(jq -r '.tool' .workflow/WFS-docs-20240120/.process/config.json)
bash(jq -r '.cli_generate' .workflow/WFS-docs-20240120/.process/config.json)
```
**Output**: `tool=gemini`, `cli_generate=false`
#### Step 2: Determine CLI Command Strategy
```bash
# Determine MODE and placement based on cli_generate flag
bash(
cli_generate=$(jq -r '.cli_generate' .workflow/WFS-docs-20240120/.process/config.json)
if [[ "$cli_generate" == "true" ]]; then
echo "mode=write"
echo "placement=implementation_approach"
echo "approval_flag=--approval-mode yolo"
else
echo "mode=analysis"
echo "placement=pre_analysis"
echo "approval_flag="
fi
)
```
**Output**:
```
mode=analysis
placement=pre_analysis
approval_flag=
```
#### Step 3: Build Tool-Specific Commands
```bash
# Generate command templates based on tool selection
bash(
tool=$(jq -r '.tool' .workflow/WFS-docs-20240120/.process/config.json)
if [[ "$tool" == "codex" ]]; then
echo "codex -C \${dir} --full-auto exec \"...\" --skip-git-repo-check -s danger-full-access"
else
echo "bash(cd \${dir} && ~/.claude/scripts/${tool}-wrapper ${approval_flag} -p \"...\")"
fi
)
```
## Task Templates
### Level 1: Module Tree Task
**Path Mapping**: Source `src/modules/` → Output `.workflow/docs/src/modules/`
**Default Mode (cli_generate=false)**:
```json
{
"id": "IMPL-001",
"title": "Document Module Tree: 'src/modules/'",
"status": "pending",
"meta": {
"type": "docs-tree",
"agent": "@doc-generator",
"tool": "gemini",
"cli_generate": false,
"source_path": "src/modules",
"output_path": ".workflow/docs/src/modules"
},
"context": {
"requirements": [
"Analyze source code in src/modules/",
"Generate docs to .workflow/docs/src/modules/ (mirrored structure)",
"For code folders: generate API.md + README.md",
"For navigation folders: generate README.md only"
],
"focus_paths": ["src/modules"],
"folder_analysis_file": "${session_dir}/.process/folder-analysis.txt"
},
"flow_control": {
"pre_analysis": [
{
"step": "load_existing_docs",
"command": "bash(find .workflow/docs/${top_dir} -name '*.md' 2>/dev/null | xargs cat || echo 'No existing docs')",
"output_to": "existing_module_docs"
},
{
"step": "load_folder_analysis",
"command": "bash(grep '^src/modules' ${session_dir}/.process/folder-analysis.txt)",
"output_to": "target_folders"
},
{
"step": "analyze_module_tree",
"command": "bash(cd src/modules && ~/.claude/scripts/gemini-wrapper -p \"PURPOSE: Analyze module structure\\nTASK: Generate documentation outline\\nMODE: analysis\\nCONTEXT: @{**/*} [target_folders]\\nEXPECTED: Structure outline\\nRULES: Analyze only\")",
"output_to": "tree_outline",
"note": "CLI for analysis only"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate module tree documentation",
"description": "Analyze source folders and generate docs to .workflow/docs/ with mirrored structure",
"modification_points": [
"Parse folder types from [target_folders]",
"Parse structure from [tree_outline]",
"For src/modules/auth/ → write to .workflow/docs/src/modules/auth/",
"Generate API.md for code folders",
"Generate README.md for all folders"
],
"logic_flow": [
"Parse [target_folders] to get folder types",
"Parse [tree_outline] for structure",
"For each folder in source:",
" - Map source_path to .workflow/docs/{source_path}",
" - If type == 'code': Generate API.md + README.md",
" - Elif type == 'navigation': Generate README.md only"
],
"depends_on": [],
"output": "module_docs"
}
],
"target_files": [
".workflow/docs/${top_dir}/*/API.md",
".workflow/docs/${top_dir}/*/README.md"
]
}
}
```
**CLI Generate Mode (cli_generate=true)**:
```json
{
"id": "IMPL-001",
"title": "Document Module Tree: 'src/modules/'",
"status": "pending",
"meta": {
"type": "docs-tree",
"agent": "@doc-generator",
"tool": "gemini",
"cli_generate": true,
"source_path": "src/modules",
"output_path": ".workflow/docs/src/modules"
},
"context": {
"requirements": [
"Analyze source code in src/modules/",
"Generate docs to .workflow/docs/src/modules/ (mirrored structure)",
"CLI generates documentation files directly"
],
"focus_paths": ["src/modules"]
},
"flow_control": {
"pre_analysis": [
{
"step": "load_existing_docs",
"command": "bash(find .workflow/docs/${top_dir} -name '*.md' 2>/dev/null | xargs cat || echo 'No existing docs')",
"output_to": "existing_module_docs"
},
{
"step": "load_folder_analysis",
"command": "bash(grep '^src/modules' ${session_dir}/.process/folder-analysis.txt)",
"output_to": "target_folders"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Parse folder analysis",
"description": "Parse [target_folders] to get folder types and structure",
"modification_points": ["Extract folder types", "Identify code vs navigation folders"],
"logic_flow": ["Parse [target_folders] to get folder types", "Prepare folder list for CLI generation"],
"depends_on": [],
"output": "folder_types"
},
{
"step": 2,
"title": "Generate documentation via CLI",
"description": "Call CLI to generate docs to .workflow/docs/ with mirrored structure using MODE=write",
"modification_points": [
"Execute CLI generation command",
"Generate files to .workflow/docs/src/modules/ (mirrored path)",
"Generate API.md and README.md files"
],
"logic_flow": [
"CLI analyzes source code in src/modules/",
"CLI writes documentation to .workflow/docs/src/modules/",
"Maintains directory structure mirroring"
],
"command": "bash(cd src/modules && ~/.claude/scripts/gemini-wrapper --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation files in .workflow/docs/src/modules/\\nMODE: write\\nCONTEXT: @{**/*} [target_folders] [existing_module_docs]\\nEXPECTED: API.md and README.md in .workflow/docs/src/modules/\\nRULES: Mirror source structure, generate complete docs\")",
"depends_on": [1],
"output": "generated_docs"
}
],
"target_files": [
".workflow/docs/${top_dir}/*/API.md",
".workflow/docs/${top_dir}/*/README.md"
]
}
}
```
### Level 2: Project README Task
**Default Mode**:
```json
{
"id": "IMPL-004",
"title": "Generate Project README",
"status": "pending",
"depends_on": ["IMPL-001", "IMPL-002", "IMPL-003"],
"meta": {
"type": "docs",
"agent": "@doc-generator",
"tool": "gemini",
"cli_generate": false
},
"flow_control": {
"pre_analysis": [
{
"step": "load_existing_readme",
"command": "bash(cat .workflow/docs/README.md 2>/dev/null || echo 'No existing README')",
"output_to": "existing_readme"
},
{
"step": "load_module_docs",
"command": "bash(find .workflow/docs -type f -name '*.md' ! -path '.workflow/docs/README.md' ! -path '.workflow/docs/ARCHITECTURE.md' ! -path '.workflow/docs/EXAMPLES.md' ! -path '.workflow/docs/api/*' | xargs cat)",
"output_to": "all_module_docs",
"note": "Load all module docs from mirrored structure"
},
{
"step": "analyze_project",
"command": "bash(~/.claude/scripts/gemini-wrapper -p \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\")",
"output_to": "project_outline"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate project README",
"description": "Generate project README with navigation links while preserving existing user modifications",
"modification_points": ["Parse [project_outline] and [all_module_docs]", "Generate project README structure", "Add navigation links to modules", "Preserve [existing_readme] user modifications"],
"logic_flow": ["Parse [project_outline] and [all_module_docs]", "Generate project README with navigation links", "Preserve [existing_readme] user modifications"],
"depends_on": [],
"output": "project_readme"
}
],
"target_files": [".workflow/docs/README.md"]
}
}
```
### Level 3: Architecture & Examples Documentation Task
**Default Mode**:
```json
{
"id": "IMPL-005",
"title": "Generate Architecture & Examples Documentation",
"status": "pending",
"depends_on": ["IMPL-004"],
"meta": {
"type": "docs",
"agent": "@doc-generator",
"tool": "gemini",
"cli_generate": false
},
"flow_control": {
"pre_analysis": [
{
"step": "load_existing_docs",
"command": "bash(cat .workflow/docs/ARCHITECTURE.md 2>/dev/null || echo 'No existing ARCHITECTURE'; echo '---SEPARATOR---'; cat .workflow/docs/EXAMPLES.md 2>/dev/null || echo 'No existing EXAMPLES')",
"output_to": "existing_arch_examples"
},
{
"step": "load_all_docs",
"command": "bash(cat .workflow/docs/README.md && find .workflow/docs -type f -name '*.md' ! -path '.workflow/docs/README.md' ! -path '.workflow/docs/ARCHITECTURE.md' ! -path '.workflow/docs/EXAMPLES.md' ! -path '.workflow/docs/api/*' | xargs cat)",
"output_to": "all_docs",
"note": "Load README + all module docs from mirrored structure"
},
{
"step": "analyze_architecture_and_examples",
"command": "bash(~/.claude/scripts/gemini-wrapper -p \"PURPOSE: Analyze system architecture and generate examples\\nTASK: Synthesize architectural overview and usage patterns\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture outline + Examples outline\")",
"output_to": "arch_examples_outline"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate architecture and examples documentation",
"description": "Generate ARCHITECTURE.md and EXAMPLES.md while preserving existing user modifications",
"modification_points": [
"Parse [arch_examples_outline] and [all_docs]",
"Generate ARCHITECTURE.md structure with system design and patterns",
"Generate EXAMPLES.md structure with code snippets and usage examples",
"Preserve [existing_arch_examples] user modifications"
],
"logic_flow": [
"Parse [arch_examples_outline] and [all_docs]",
"Generate ARCHITECTURE.md with system design",
"Generate EXAMPLES.md with code snippets",
"Preserve [existing_arch_examples] modifications"
],
"depends_on": [],
"output": "arch_examples_docs"
}
],
"target_files": [
".workflow/docs/ARCHITECTURE.md",
".workflow/docs/EXAMPLES.md"
]
}
}
```
### Level 3: HTTP API Documentation Task (Optional)
**Default Mode**:
```json
{
"id": "IMPL-006",
"title": "Generate HTTP API Documentation",
"status": "pending",
"depends_on": ["IMPL-004"],
"meta": {
"type": "docs",
"agent": "@doc-generator",
"tool": "gemini",
"cli_generate": false
},
"flow_control": {
"pre_analysis": [
{
"step": "discover_api_endpoints",
"command": "mcp__code-index__search_code_advanced(pattern='router\\.|@(Get|Post)', file_pattern='*.{ts,js}')",
"output_to": "endpoint_discovery"
},
{
"step": "load_existing_api_docs",
"command": "bash(cat .workflow/docs/api/README.md 2>/dev/null || echo 'No existing API docs')",
"output_to": "existing_api_docs"
},
{
"step": "analyze_api",
"command": "bash(~/.claude/scripts/gemini-wrapper -p \"PURPOSE: Document HTTP API\\nTASK: Analyze API endpoints\\nMODE: analysis\\nCONTEXT: @{src/api/**/*} [endpoint_discovery]\\nEXPECTED: API outline\")",
"output_to": "api_outline"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate HTTP API documentation",
"description": "Generate HTTP API documentation while preserving existing user modifications",
"modification_points": ["Parse [api_outline] and [endpoint_discovery]", "Generate HTTP API documentation", "Document endpoints and request/response formats", "Preserve [existing_api_docs] modifications"],
"logic_flow": ["Parse [api_outline] and [endpoint_discovery]", "Generate HTTP API documentation", "Preserve [existing_api_docs] modifications"],
"depends_on": [],
"output": "api_docs"
}
],
"target_files": [".workflow/docs/api/README.md"]
}
}
```
## Session Structure
```
.workflow/
├── .active-WFS-docs-20240120-143022 # Active session marker
└── WFS-docs-20240120-143022/
├── IMPL_PLAN.md # Implementation plan
├── TODO_LIST.md # Progress tracker
├── .process/
│ ├── config.json # Single config (all settings + stats)
│ ├── folder-analysis.txt # Folder classification results
│ ├── top-level-dirs.txt # Top-level directory list
│ └── existing-docs.txt # Existing documentation paths
└── .task/
├── IMPL-001.json # Module tree task
├── IMPL-002.json # Module tree task
├── IMPL-003.json # Module tree task
├── IMPL-004.json # Project README (full mode)
├── IMPL-005.json # ARCHITECTURE.md + EXAMPLES.md (full mode)
└── IMPL-006.json # HTTP API docs (optional)
```
**Config File Structure** (config.json):
```json
{
"session_id": "WFS-docs-20240120-143022",
"timestamp": "2024-01-20T14:30:22+08:00",
"path": ".",
"target_path": "/d/Claude_dms3",
"project_root": "/d/Claude_dms3",
"mode": "full",
"tool": "gemini",
"cli_generate": false,
"update_mode": "update",
"existing_docs": 5,
"analysis": {
"total": "15",
"code": "8",
"navigation": "7",
"top_level": "3"
}
}
```
## Generated Documentation
**Structure mirrors project source directories**:
```
.workflow/docs/
├── src/ # Mirrors src/ directory
│ ├── modules/ # Level 1 output
│ │ ├── README.md # Navigation for src/modules/
│ │ ├── auth/
│ │ │ ├── API.md # Auth module API signatures
│ │ │ ├── README.md # Auth module documentation
│ │ │ └── middleware/
│ │ │ ├── API.md # Middleware API
│ │ │ └── README.md # Middleware docs
│ │ └── api/
│ │ ├── API.md # API module signatures
│ │ └── README.md # API module docs
│ └── utils/ # Level 1 output
│ └── README.md # Utils navigation
├── lib/ # Mirrors lib/ directory
│ └── core/
│ ├── API.md
│ └── README.md
├── README.md # Level 2 output (root only)
├── ARCHITECTURE.md # Level 3 output (root only)
├── EXAMPLES.md # Level 3 output (root only)
└── api/ # Level 3 output (optional)
└── README.md # HTTP API reference
```
## Execution Commands
### Full Mode (--mode full)
```bash
# Level 1 - Module documentation (parallel)
/workflow:execute IMPL-001
/workflow:execute IMPL-002
/workflow:execute IMPL-003
# Level 2 - Project README (after Level 1)
/workflow:execute IMPL-004
# Level 3 - Architecture & Examples (after Level 2)
/workflow:execute IMPL-005 # ARCHITECTURE.md + EXAMPLES.md
/workflow:execute IMPL-006 # HTTP API (if present)
```
### Partial Mode (--mode partial)
```bash
# Only Level 1 tasks generated (module documentation)
/workflow:execute IMPL-001
/workflow:execute IMPL-002
/workflow:execute IMPL-003
```
## Simple Bash Commands
### Session Management
```bash
# Create session and initialize config (all in one)
bash(
session="WFS-docs-$(date +%Y%m%d-%H%M%S)"
mkdir -p ".workflow/${session}"/{.task,.process,.summaries}
touch ".workflow/.active-${session}"
cat > ".workflow/${session}/.process/config.json" <<EOF
{"session_id":"${session}","timestamp":"$(date -Iseconds)","path":".","mode":"full","tool":"gemini","cli_generate":false}
EOF
echo "Session: ${session}"
)
# Read session config
bash(cat .workflow/WFS-docs-20240120/.process/config.json)
# Extract config values
bash(jq -r '.tool' .workflow/WFS-docs-20240120/.process/config.json)
bash(jq -r '.mode' .workflow/WFS-docs-20240120/.process/config.json)
# List session tasks
bash(ls .workflow/WFS-docs-20240120/.task/*.json)
```
### Analysis Commands
```bash
# Discover and classify folders (scans project source)
bash(~/.claude/scripts/get_modules_by_depth.sh | ~/.claude/scripts/classify-folders.sh)
# Count existing docs (in .workflow/docs/ directory)
bash(if [[ -d ".workflow/docs" ]]; then find .workflow/docs -name "*.md" 2>/dev/null | wc -l; else echo "0"; fi)
# List existing documentation (in .workflow/docs/ directory)
bash(if [[ -d ".workflow/docs" ]]; then find .workflow/docs -name "*.md" 2>/dev/null; fi)
```
## Template Reference
**Available Templates**:
- `api.txt`: Unified template for Code API (Part A) and HTTP API (Part B)
- `module-readme.txt`: Module purpose, usage, dependencies
- `folder-navigation.txt`: Navigation README for folders with subdirectories
- `project-readme.txt`: Project overview, getting started, module navigation
- `project-architecture.txt`: System structure, module map, design patterns
- `project-examples.txt`: End-to-end usage examples
**Template Location**: `~/.claude/workflows/cli-templates/prompts/documentation/`
## CLI Generate Mode Summary
| Mode | CLI Placement | CLI MODE | Agent Role |
|------|---------------|----------|------------|
| **Default** | pre_analysis | analysis | Generates documentation files |
| **--cli-generate** | implementation_approach | write | Validates and coordinates CLI output |
## Related Commands
- `/workflow:execute` - Execute documentation tasks
- `/workflow:status` - View task progress
- `/workflow:session:complete` - Mark session complete

View File

@@ -0,0 +1,330 @@
---
name: update-full
description: Complete project-wide CLAUDE.md documentation update
argument-hint: "[--tool gemini|qwen|codex] [--path <directory>]"
---
# Full Documentation Update (/memory:update-full)
## Coordinator Role
**This command orchestrates project-wide CLAUDE.md updates** using depth-parallel execution strategy with intelligent complexity detection.
**Execution Model**:
1. **Initial Analysis**: Cache git changes, discover module structure
2. **Complexity Detection**: Analyze module count, determine strategy
3. **Plan Presentation**: Show user exactly what will be updated
4. **Depth-Parallel Execution**: Update modules by depth (highest to lowest)
5. **Safety Verification**: Ensure only CLAUDE.md files modified
**Tool Selection**:
- `--tool gemini` (default): Documentation generation, pattern recognition
- `--tool qwen`: Architecture analysis, system design docs
- `--tool codex`: Implementation validation, code quality analysis
**Path Parameter**:
- `--path <directory>` (optional): Target specific directory for updates
- If not specified: Updates entire project from current directory
- If specified: Changes to target directory before discovery
## Core Rules
1. **Analyze First**: Run git cache and module discovery before any updates
2. **Scope Control**: Use --path to target specific directories, default is entire project
3. **Wait for Approval**: Present plan, no execution without user confirmation
4. **Depth-Parallel**: Same depth runs parallel (max 4 jobs), different depths sequential
5. **Safety Check**: Verify only CLAUDE.md files modified, revert if source files touched
6. **Independent Commands**: Each update is a separate bash() call
7. **No Background Bash Tool**: Never use `run_in_background` parameter in bash() calls; use shell `&` for parallelism
## Execution Workflow
### Phase 1: Discovery & Analysis
**Cache git changes:**
```bash
bash(git add -A 2>/dev/null || true)
```
**Get module structure:**
*If no --path parameter:*
```bash
bash(~/.claude/scripts/get_modules_by_depth.sh list)
```
*If --path parameter specified:*
```bash
bash(cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list)
```
**Example with path:**
```bash
# Update only .claude directory
bash(cd .claude && ~/.claude/scripts/get_modules_by_depth.sh list)
# Update specific feature directory
bash(cd src/features/auth && ~/.claude/scripts/get_modules_by_depth.sh list)
```
**Parse Output**:
- Extract module paths from `depth:N|path:<PATH>|...` format
- Count total modules
- Identify which modules have/need CLAUDE.md
**Example output:**
```
depth:5|path:./.claude/workflows/cli-templates/prompts/analysis|files:5|has_claude:no
depth:4|path:./.claude/commands/cli/mode|files:3|has_claude:no
depth:3|path:./.claude/commands/cli|files:6|has_claude:no
depth:0|path:.|files:14|has_claude:yes
```
**Validation**:
- If --path specified, directory exists and is accessible
- Module list contains depth and path information
- At least one module exists
- All paths are relative to target directory (if --path used)
---
### Phase 2: Plan Presentation
**Decision Logic**:
- **Simple projects (≤20 modules)**: Present plan to user, wait for approval
- **Complex projects (>20 modules)**: Delegate to memory-bridge agent
**Plan format:**
```
📋 Update Plan:
Tool: gemini
Total modules: 31
NEW CLAUDE.md files (30):
- ./.claude/workflows/cli-templates/prompts/analysis/CLAUDE.md
- ./.claude/commands/cli/mode/CLAUDE.md
- ... (28 more)
UPDATE existing CLAUDE.md files (1):
- ./CLAUDE.md
⚠️ Confirm execution? (y/n)
```
**User Confirmation Required**: No execution without explicit approval
---
### Phase 3: Depth-Parallel Execution
**Pattern**: Process highest depth first, parallel within depth, sequential across depths.
**Command structure:**
```bash
bash(cd <module-path> && ~/.claude/scripts/update_module_claude.sh "." "full" "<tool>" &)
```
**Example - Depth 5 (8 modules):**
```bash
bash(cd ./.claude/workflows/cli-templates/prompts/analysis && ~/.claude/scripts/update_module_claude.sh "." "full" "gemini" &)
```
```bash
bash(cd ./.claude/workflows/cli-templates/prompts/development && ~/.claude/scripts/update_module_claude.sh "." "full" "gemini" &)
```
```bash
bash(cd ./.claude/workflows/cli-templates/prompts/documentation && ~/.claude/scripts/update_module_claude.sh "." "full" "gemini" &)
```
```bash
bash(cd ./.claude/workflows/cli-templates/prompts/implementation && ~/.claude/scripts/update_module_claude.sh "." "full" "gemini" &)
```
*Wait for depth 5 completion...*
**Example - Depth 4 (7 modules):**
```bash
bash(cd ./.claude/commands/cli/mode && ~/.claude/scripts/update_module_claude.sh "." "full" "gemini" &)
```
```bash
bash(cd ./.claude/commands/workflow/brainstorm && ~/.claude/scripts/update_module_claude.sh "." "full" "gemini" &)
```
*Continue for remaining depths (3 → 2 → 1 → 0)...*
**Execution Rules**:
- Each command is separate bash() call
- Up to 4 concurrent jobs per depth
- Wait for all jobs in current depth before proceeding
- Extract path from `depth:N|path:<PATH>|...` format
- All paths relative to target directory (current dir or --path value)
**Path Context**:
- Without --path: Paths relative to current directory
- With --path: Paths relative to specified target directory
- Module discovery runs in target directory context
---
### Phase 4: Safety Verification
**Check modified files:**
```bash
bash(git diff --cached --name-only | grep -v "CLAUDE.md" || echo "✅ Only CLAUDE.md files modified")
```
**Expected output:**
```
✅ Only CLAUDE.md files modified
```
**If non-CLAUDE.md files detected:**
```
⚠️ Warning: Non-CLAUDE.md files were modified
Modified files: src/index.ts, package.json
→ Run: git restore --staged .
```
**Display final status:**
```bash
bash(git status --short)
```
**Example output:**
```
A .claude/workflows/cli-templates/prompts/analysis/CLAUDE.md
A .claude/commands/cli/mode/CLAUDE.md
M CLAUDE.md
... (30 more files)
```
## Command Pattern Reference
**Single module update:**
```bash
bash(cd <module-path> && ~/.claude/scripts/update_module_claude.sh "." "full" "<tool>" &)
```
**Components**:
- `cd <module-path>` - Navigate to module (from `path:` field)
- `&&` - Ensure cd succeeds
- `update_module_claude.sh` - Update script
- `"."` - Current directory
- `"full"` - Full update mode
- `"<tool>"` - gemini/qwen/codex
- `&` - Background execution
**Path extraction:**
```bash
# From: depth:5|path:./src/auth|files:10|has_claude:no
# Extract: ./src/auth
# Command: bash(cd ./src/auth && ~/.claude/scripts/update_module_claude.sh "." "full" "gemini" &)
```
## Complex Projects Strategy
For projects >20 modules, delegate to memory-bridge agent:
```javascript
Task(
subagent_type="memory-bridge",
description="Complex project full update",
prompt=`
CONTEXT:
- Total modules: ${module_count}
- Tool: ${tool}
- Mode: full
MODULE LIST:
${modules_output}
REQUIREMENTS:
1. Use TodoWrite to track each depth level
2. Process depths N→0 sequentially, max 4 parallel per depth
3. Command: cd "<path>" && update_module_claude.sh "." "full" "${tool}" &
4. Extract path from "depth:N|path:<PATH>|..." format
5. Verify all modules processed
6. Run safety check
7. Display git status
`
)
```
## Error Handling
- **Invalid path parameter**: Report error if --path directory doesn't exist, abort execution
- **Module discovery failure**: Report error, abort execution
- **User declines approval**: Abort execution, no changes made
- **Safety check failure**: Automatic staging revert, report modified files
- **Update script failure**: Report failed modules, continue with remaining
## Coordinator Checklist
✅ Parse `--tool` parameter (default: gemini)
✅ Parse `--path` parameter (optional, default: current directory)
✅ Execute git cache in current directory
✅ Execute module discovery (with cd if --path specified)
✅ Parse module list, count total modules
✅ Determine strategy based on module count (≤20 vs >20)
✅ Present plan with exact file paths
**Wait for user confirmation** (simple projects only)
✅ Organize modules by depth
✅ For each depth (highest to lowest):
- Launch up to 5 parallel updates
- Wait for depth completion
- Proceed to next depth
✅ Run safety check after all updates
✅ Display git status
✅ Report completion summary
## Tool Parameter Reference
**Gemini** (default):
- Best for: Documentation generation, pattern recognition, architecture review
- Context window: Large, handles complex codebases
- Output style: Comprehensive, detailed explanations
**Qwen**:
- Best for: Architecture analysis, system design documentation
- Context window: Large, similar to Gemini
- Output style: Structured, systematic analysis
**Codex**:
- Best for: Implementation validation, code quality analysis
- Capabilities: Mathematical reasoning, autonomous development
- Output style: Technical, implementation-focused
## Path Parameter Reference
**Use Cases**:
**Update configuration directory only:**
```bash
/memory:update-full --path .claude
```
- Updates only .claude directory and subdirectories
- Useful after workflow or command modifications
- Faster than full project update
**Update specific feature module:**
```bash
/memory:update-full --path src/features/auth
```
- Updates authentication feature and sub-modules
- Ideal for feature-specific documentation
- Isolates scope for targeted updates
**Update nested structure:**
```bash
/memory:update-full --path .claude/workflows/cli-templates
```
- Updates deeply nested directory tree
- Maintains relative path structure in output
- All module paths relative to specified directory
**Best Practices**:
- Use `--path` when working on specific features/modules
- Omit `--path` for project-wide architectural changes
- Combine with `--tool` for specialized documentation needs
- Verify directory exists before execution (automatic validation)

View File

@@ -0,0 +1,306 @@
---
name: update-related
description: Context-aware CLAUDE.md documentation updates based on recent changes
argument-hint: "[--tool gemini|qwen|codex]"
---
# Related Documentation Update (/memory:update-related)
## Coordinator Role
**This command orchestrates context-aware CLAUDE.md updates** for modules affected by recent changes using intelligent change detection.
**Execution Model**:
1. **Change Detection**: Analyze git changes to identify affected modules
2. **Complexity Analysis**: Evaluate change count and determine strategy
3. **Plan Presentation**: Show user which modules need updates
4. **Depth-Parallel Execution**: Update affected modules by depth (highest to lowest)
5. **Safety Verification**: Ensure only CLAUDE.md files modified
**Tool Selection**:
- `--tool gemini` (default): Documentation generation, pattern recognition
- `--tool qwen`: Architecture analysis, system design docs
- `--tool codex`: Implementation validation, code quality analysis
## Core Rules
1. **Detect Changes First**: Use git diff to identify affected modules before updates
2. **Wait for Approval**: Present plan, no execution without user confirmation
3. **Related Mode**: Update only changed modules and their parent contexts
4. **Depth-Parallel**: Same depth runs parallel (max 4 jobs), different depths sequential
5. **Safety Check**: Verify only CLAUDE.md files modified, revert if source files touched
6. **No Background Bash Tool**: Never use `run_in_background` parameter in bash() calls; use shell `&` for parallelism
## Execution Workflow
### Phase 1: Change Detection & Analysis
**Refresh code index:**
```bash
bash(mcp__code-index__refresh_index)
```
**Detect changed modules:**
```bash
bash(~/.claude/scripts/detect_changed_modules.sh list)
```
**Cache git changes:**
```bash
bash(git add -A 2>/dev/null || true)
```
**Parse Output**:
- Extract changed module paths from `depth:N|path:<PATH>|...` format
- Count affected modules
- Identify which modules have/need CLAUDE.md updates
**Example output:**
```
depth:3|path:./src/api/auth|files:5|types:[ts]|has_claude:no|change:new
depth:2|path:./src/api|files:12|types:[ts]|has_claude:yes|change:modified
depth:1|path:./src|files:8|types:[ts]|has_claude:yes|change:parent
depth:0|path:.|files:14|has_claude:yes|change:parent
```
**Fallback behavior**:
- If no git changes detected, use recent modules (first 10 by depth)
**Validation**:
- Changed module list contains valid paths
- At least one affected module exists
---
### Phase 2: Plan Presentation
**Decision Logic**:
- **Simple changes (≤15 modules)**: Present plan to user, wait for approval
- **Complex changes (>15 modules)**: Delegate to memory-bridge agent
**Plan format:**
```
📋 Related Update Plan:
Tool: gemini
Changed modules: 4
NEW CLAUDE.md files (1):
- ./src/api/auth/CLAUDE.md [new module]
UPDATE existing CLAUDE.md files (3):
- ./src/api/CLAUDE.md [parent of changed auth/]
- ./src/CLAUDE.md [parent context]
- ./CLAUDE.md [root level]
⚠️ Confirm execution? (y/n)
```
**User Confirmation Required**: No execution without explicit approval
---
### Phase 3: Depth-Parallel Execution
**Pattern**: Process highest depth first, parallel within depth, sequential across depths.
**Command structure:**
```bash
bash(cd <module-path> && ~/.claude/scripts/update_module_claude.sh "." "related" "<tool>" &)
```
**Example - Depth 3 (new module):**
```bash
bash(cd ./src/api/auth && ~/.claude/scripts/update_module_claude.sh "." "related" "gemini" &)
```
*Wait for depth 3 completion...*
**Example - Depth 2 (modified parent):**
```bash
bash(cd ./src/api && ~/.claude/scripts/update_module_claude.sh "." "related" "gemini" &)
```
*Wait for depth 2 completion...*
**Example - Depth 1 & 0 (parent contexts):**
```bash
bash(cd ./src && ~/.claude/scripts/update_module_claude.sh "." "related" "gemini" &)
```
```bash
bash(cd . && ~/.claude/scripts/update_module_claude.sh "." "related" "gemini" &)
```
*Wait for all depths completion...*
**Execution Rules**:
- Each command is separate bash() call
- Up to 4 concurrent jobs per depth
- Wait for all jobs in current depth before proceeding
- Use "related" mode (not "full") for context-aware updates
- Extract path from `depth:N|path:<PATH>|...` format
**Related Mode Behavior**:
- Updates module based on recent git changes
- Includes parent context for better documentation coherence
- More efficient than full updates for iterative development
---
### Phase 4: Safety Verification
**Check modified files:**
```bash
bash(git diff --cached --name-only | grep -v "CLAUDE.md" || echo "✅ Only CLAUDE.md files modified")
```
**Expected output:**
```
✅ Only CLAUDE.md files modified
```
**If non-CLAUDE.md files detected:**
```
⚠️ Warning: Non-CLAUDE.md files were modified
Modified files: src/api/auth/index.ts, package.json
→ Run: git restore --staged .
```
**Display final statistics:**
```bash
bash(git diff --stat)
```
**Example output:**
```
.claude/workflows/cli-templates/prompts/analysis/CLAUDE.md | 45 +++++++++++++++++++++
src/api/CLAUDE.md | 23 +++++++++--
src/CLAUDE.md | 12 ++++--
CLAUDE.md | 8 ++--
4 files changed, 82 insertions(+), 6 deletions(-)
```
## Command Pattern Reference
**Single module update:**
```bash
bash(cd <module-path> && ~/.claude/scripts/update_module_claude.sh "." "related" "<tool>" &)
```
**Components**:
- `cd <module-path>` - Navigate to module (from `path:` field)
- `&&` - Ensure cd succeeds
- `update_module_claude.sh` - Update script
- `"."` - Current directory
- `"related"` - Related mode (context-aware, change-based)
- `"<tool>"` - gemini/qwen/codex
- `&` - Background execution
**Path extraction:**
```bash
# From: depth:3|path:./src/api/auth|files:5|change:new|has_claude:no
# Extract: ./src/api/auth
# Command: bash(cd ./src/api/auth && ~/.claude/scripts/update_module_claude.sh "." "related" "gemini" &)
```
**Mode comparison:**
- `"full"` - Complete module documentation regeneration
- `"related"` - Context-aware update based on recent changes (faster)
## Complex Changes Strategy
For changes affecting >15 modules, delegate to memory-bridge agent:
```javascript
Task(
subagent_type="memory-bridge",
description="Complex project related update",
prompt=`
CONTEXT:
- Total modules: ${change_count}
- Tool: ${tool}
- Mode: related
MODULE LIST:
${changed_modules_output}
REQUIREMENTS:
1. Use TodoWrite to track each depth level
2. Process depths N→0 sequentially, max 4 parallel per depth
3. Command: cd "<path>" && update_module_claude.sh "." "related" "${tool}" &
4. Extract path from "depth:N|path:<PATH>|..." format
5. Verify all ${change_count} modules processed
6. Run safety check
7. Display git diff --stat
`
)
```
## Error Handling
- **No changes detected**: Use fallback mode (recent 10 modules)
- **Change detection failure**: Report error, abort execution
- **User declines approval**: Abort execution, no changes made
- **Safety check failure**: Automatic staging revert, report modified files
- **Update script failure**: Report failed modules, continue with remaining
## Coordinator Checklist
✅ Parse `--tool` parameter (default: gemini)
✅ Refresh code index for accurate change detection
✅ Detect changed modules via detect_changed_modules.sh
✅ Cache git changes to protect current state
✅ Parse changed module list, count affected modules
✅ Apply fallback if no changes detected (recent 10 modules)
✅ Determine strategy based on change count (≤15 vs >15)
✅ Present plan with exact file paths and change types
**Wait for user confirmation** (simple changes only)
✅ Organize modules by depth
✅ For each depth (highest to lowest):
- Launch up to 4 parallel updates with "related" mode
- Wait for depth completion
- Proceed to next depth
✅ Run safety check after all updates
✅ Display git diff statistics
✅ Report completion summary
## Usage Examples
```bash
# Daily development update (default: gemini)
/memory:update-related
# After feature work with specific tool
/memory:update-related --tool qwen
# Code quality review after implementation
/memory:update-related --tool codex
```
## Tool Parameter Reference
**Gemini** (default):
- Best for: Documentation generation, pattern recognition
- Use case: Daily development updates, feature documentation
- Output style: Comprehensive, contextual explanations
**Qwen**:
- Best for: Architecture analysis, system design
- Use case: Structural changes, API design updates
- Output style: Structured, systematic documentation
**Codex**:
- Best for: Implementation validation, code quality
- Use case: After implementation, refactoring work
- Output style: Technical, implementation-focused
## Comparison with Full Update
| Aspect | Related Update | Full Update |
|--------|----------------|-------------|
| **Scope** | Changed modules only | All project modules |
| **Speed** | Fast (minutes) | Slower (10-30 min) |
| **Use case** | Daily development | Major refactoring |
| **Mode** | `"related"` | `"full"` |
| **Trigger** | After commits | After major changes |
| **Complexity threshold** | ≤15 modules | ≤20 modules |

View File

@@ -1,174 +0,0 @@
---
name: update-memory-full
description: Complete project-wide CLAUDE.md documentation update
argument-hint: "[--tool gemini|qwen|codex]"
---
### 🚀 Command Overview: `/update-memory-full`
Complete project-wide documentation update using depth-parallel execution.
**Tool Selection**:
- **Gemini (default)**: Documentation generation, pattern recognition, architecture review
- **Qwen**: Architecture analysis, system design documentation
- **Codex**: Implementation validation, code quality analysis
### 📝 Execution Template
```bash
#!/bin/bash
# Complete project-wide CLAUDE.md documentation update
# Step 1: Parse tool selection (default: gemini)
tool="gemini"
[[ "$*" == *"--tool qwen"* ]] && tool="qwen"
[[ "$*" == *"--tool codex"* ]] && tool="codex"
# Step 2: Code Index architecture analysis
mcp__code-index__search_code_advanced(pattern="class|function|interface", file_pattern="**/*.{ts,js,py}")
mcp__code-index__find_files(pattern="**/*.{md,json,yaml,yml}")
# Step 3: Cache git changes
Bash(git add -A 2>/dev/null || true)
# Step 4: Get and display project structure
modules=$(Bash(~/.claude/scripts/get_modules_by_depth.sh list))
count=$(echo "$modules" | wc -l)
# Step 5: Analysis handover → Model takes control
# BASH_EXECUTION_STOPS → MODEL_ANALYSIS_BEGINS
# Pseudocode flow:
# IF (module_count > 20 OR complex_project_detected):
# → Task "Complex project full update" subagent_type: "memory-bridge"
# ELSE:
# → Present plan and WAIT FOR USER APPROVAL before execution
#
# Pass tool parameter to update_module_claude.sh: "$tool"
```
### 🧠 Model Analysis Phase
After the bash script completes the initial analysis, the model takes control to:
1. **Analyze Complexity**: Review module count and project context
2. **Parse CLAUDE.md Status**: Extract which modules have/need CLAUDE.md files
3. **Choose Strategy**:
- Simple projects: Present execution plan with CLAUDE.md paths to user
- Complex projects: Delegate to memory-bridge agent
4. **Present Detailed Plan**: Show user exactly which CLAUDE.md files will be created/updated
5. **⚠️ CRITICAL: WAIT FOR USER APPROVAL**: No execution without explicit user consent
### 📋 Execution Strategies
**For Simple Projects (≤20 modules):**
Model will present detailed plan:
```
📋 Update Plan:
Tool: [gemini|qwen|codex] (from --tool parameter)
NEW CLAUDE.md files (X):
- ./src/components/CLAUDE.md
- ./src/services/CLAUDE.md
UPDATE existing CLAUDE.md files (Y):
- ./CLAUDE.md
- ./src/CLAUDE.md
- ./tests/CLAUDE.md
Total: N CLAUDE.md files will be processed
⚠️ Confirm execution? (y/n)
```
```pseudo
# ⚠️ MANDATORY: User confirmation → MUST await explicit approval
IF user_explicitly_confirms():
continue_execution()
ELSE:
abort_execution()
# Step 4: Organize modules by depth → Prepare execution
depth_modules = organize_by_depth(modules)
# Step 5: Execute depth-parallel updates → Process by depth
FOR depth FROM max_depth DOWN TO 0:
FOR each module IN depth_modules[depth]:
WHILE active_jobs >= 4: wait(0.1)
Bash(~/.claude/scripts/update_module_claude.sh "$module" "full" "$tool" &)
wait_all_jobs()
# Step 6: Safety check and restore staging state
non_claude=$(Bash(git diff --cached --name-only | grep -v "CLAUDE.md" || true))
if [ -n "$non_claude" ]; then
Bash(git restore --staged .)
echo "⚠️ Warning: Non-CLAUDE.md files were modified, staging reverted"
echo "Modified files: $non_claude"
else
echo "✅ Only CLAUDE.md files modified, staging preserved"
fi
# Step 7: Display changes → Final status
Bash(git status --short)
```
**For Complex Projects (>20 modules):**
The model will delegate to the memory-bridge agent with structured context:
```javascript
Task "Complex project full update"
subagent_type: "memory-bridge"
prompt: `
CONTEXT:
- Total modules: ${module_count}
- Tool: ${tool} // from --tool parameter, default: gemini
- Mode: full
- Existing CLAUDE.md: ${existing_count}
- New CLAUDE.md needed: ${new_count}
MODULE LIST:
${modules_output} // Full output from get_modules_by_depth.sh list
REQUIREMENTS:
1. Use TodoWrite to track each depth level before execution
2. Process depths 5→0 sequentially, max 4 parallel jobs per depth
3. Command format: update_module_claude.sh "<path>" "full" "${tool}" &
4. Extract exact path from "depth:N|path:<PATH>|..." format
5. Verify all ${module_count} modules are processed
6. Run safety check after completion
7. Display git status
Execute depth-parallel updates for all modules.
`
```
**Agent receives complete context:**
- Module count and tool selection
- Full structured module list
- Clear execution requirements
- Path extraction format
- Success criteria
### 📚 Usage Examples
```bash
# Complete project documentation refresh
/update-memory-full
# After major architectural changes
/update-memory-full
```
### ✨ Features
- **Separated Commands**: Each bash operation is a discrete, trackable step
- **Intelligent Complexity Detection**: Model analyzes project context for optimal strategy
- **Depth-Parallel Execution**: Same depth modules run in parallel, depths run sequentially
- **Git Integration**: Auto-cache changes before, safety check and show status after
- **Safety Protection**: Automatic detection and revert of unintended source code modifications
- **Module Detection**: Uses get_modules_by_depth.sh for structure discovery
- **User Confirmation**: Clear plan presentation with approval step
- **CLAUDE.md Only**: Only updates documentation, never source code

View File

@@ -1,182 +0,0 @@
---
name: update-memory-related
description: Context-aware CLAUDE.md documentation updates based on recent changes
argument-hint: "[--tool gemini|qwen|codex]"
---
### 🚀 Command Overview: `/update-memory-related`
Context-aware documentation update for modules affected by recent changes.
**Tool Selection**:
- **Gemini (default)**: Documentation generation, pattern recognition, architecture review
- **Qwen**: Architecture analysis, system design documentation
- **Codex**: Implementation validation, code quality analysis
### 📝 Execution Template
```bash
#!/bin/bash
# Context-aware CLAUDE.md documentation update
# Step 1: Parse tool selection (default: gemini)
tool="gemini"
[[ "$*" == *"--tool qwen"* ]] && tool="qwen"
[[ "$*" == *"--tool codex"* ]] && tool="codex"
# Step 2: Code Index refresh and architecture analysis
mcp__code-index__refresh_index()
mcp__code-index__search_code_advanced(pattern="class|function|interface", file_pattern="**/*.{ts,js,py}")
# Step 3: Detect changed modules (before staging)
changed=$(Bash(~/.claude/scripts/detect_changed_modules.sh list))
# Step 4: Cache git changes (protect current state)
Bash(git add -A 2>/dev/null || true)
# Step 5: Use detected changes or fallback
if [ -z "$changed" ]; then
changed=$(Bash(~/.claude/scripts/get_modules_by_depth.sh list | head -10))
fi
count=$(echo "$changed" | wc -l)
# Step 6: Analysis handover → Model takes control
# BASH_EXECUTION_STOPS → MODEL_ANALYSIS_BEGINS
# Pseudocode flow:
# IF (change_count > 15 OR complex_changes_detected):
# → Task "Complex project related update" subagent_type: "memory-bridge"
# ELSE:
# → Present plan and WAIT FOR USER APPROVAL before execution
#
# Pass tool parameter to update_module_claude.sh: "$tool"
```
### 🧠 Model Analysis Phase
After the bash script completes change detection, the model takes control to:
1. **Analyze Changes**: Review change count and complexity
2. **Parse CLAUDE.md Status**: Extract which changed modules have/need CLAUDE.md files
3. **Choose Strategy**:
- Simple changes: Present execution plan with CLAUDE.md paths to user
- Complex changes: Delegate to memory-bridge agent
4. **Present Detailed Plan**: Show user exactly which CLAUDE.md files will be created/updated for changed modules
5. **⚠️ CRITICAL: WAIT FOR USER APPROVAL**: No execution without explicit user consent
### 📋 Execution Strategies
**For Simple Changes (≤15 modules):**
Model will present detailed plan for affected modules:
```
📋 Related Update Plan:
Tool: [gemini|qwen|codex] (from --tool parameter)
CHANGED modules affecting CLAUDE.md:
NEW CLAUDE.md files (X):
- ./src/api/auth/CLAUDE.md [new module]
- ./src/utils/helpers/CLAUDE.md [new module]
UPDATE existing CLAUDE.md files (Y):
- ./src/api/CLAUDE.md [parent of changed auth/]
- ./src/CLAUDE.md [root level]
Total: N CLAUDE.md files will be processed for recent changes
⚠️ Confirm execution? (y/n)
```
```pseudo
# ⚠️ MANDATORY: User confirmation → MUST await explicit approval
IF user_explicitly_confirms():
continue_execution()
ELSE:
abort_execution()
# Step 4: Organize modules by depth → Prepare execution
depth_modules = organize_by_depth(changed_modules)
# Step 5: Execute depth-parallel updates → Process by depth
FOR depth FROM max_depth DOWN TO 0:
FOR each module IN depth_modules[depth]:
WHILE active_jobs >= 4: wait(0.1)
Bash(~/.claude/scripts/update_module_claude.sh "$module" "related" "$tool" &)
wait_all_jobs()
# Step 6: Safety check and restore staging state
non_claude=$(Bash(git diff --cached --name-only | grep -v "CLAUDE.md" || true))
if [ -n "$non_claude" ]; then
Bash(git restore --staged .)
echo "⚠️ Warning: Non-CLAUDE.md files were modified, staging reverted"
echo "Modified files: $non_claude"
else
echo "✅ Only CLAUDE.md files modified, staging preserved"
fi
# Step 7: Display changes → Final status
Bash(git diff --stat)
```
**For Complex Changes (>15 modules):**
The model will delegate to the memory-bridge agent with structured context:
```javascript
Task "Complex project related update"
subagent_type: "memory-bridge"
prompt: `
CONTEXT:
- Total modules: ${change_count}
- Tool: ${tool} // from --tool parameter, default: gemini
- Mode: related
- Changed modules detected via: detect_changed_modules.sh
- Existing CLAUDE.md: ${existing_count}
- New CLAUDE.md needed: ${new_count}
MODULE LIST:
${changed_modules_output} // Full output from detect_changed_modules.sh list
REQUIREMENTS:
1. Use TodoWrite to track each depth level before execution
2. Process depths sequentially (deepest→shallowest), max 4 parallel jobs per depth
3. Command format: update_module_claude.sh "<path>" "related" "${tool}" &
4. Extract exact path from "depth:N|path:<PATH>|..." format
5. Verify all ${change_count} modules are processed
6. Run safety check after completion
7. Display git diff --stat
Execute depth-parallel updates for changed modules only.
`
```
**Agent receives complete context:**
- Changed module count and tool selection
- Full structured module list (changed only)
- Clear execution requirements with "related" mode
- Path extraction format
- Success criteria
### 📚 Usage Examples
```bash
# Daily development update
/update-memory-related
# After feature work
/update-memory-related
```
### ✨ Features
- **Separated Commands**: Each bash operation is a discrete, trackable step
- **Intelligent Change Analysis**: Model analyzes change complexity for optimal strategy
- **Change Detection**: Automatically finds affected modules using git diff
- **Depth-Parallel Execution**: Same depth modules run in parallel, depths run sequentially
- **Git Integration**: Auto-cache changes, show diff statistics after
- **Fallback Mode**: Updates recent files when no git changes found
- **User Confirmation**: Clear plan presentation with approval step
- **CLAUDE.md Only**: Only updates documentation, never source code

View File

@@ -0,0 +1,585 @@
---
name: api-designer
description: Generate or update api-designer/analysis.md addressing topic-framework discussion points
argument-hint: "optional topic - uses existing framework if available"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
## 🔌 **API Designer Analysis Generator**
### Purpose
**Specialized command for generating api-designer/analysis.md** that addresses topic-framework.md discussion points from backend API design perspective. Creates or updates role-specific analysis with framework references.
### Core Function
- **Framework-based Analysis**: Address each discussion point in topic-framework.md
- **API Design Focus**: RESTful/GraphQL API design, endpoint structure, and contract definition
- **Update Mechanism**: Create new or update existing analysis.md
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **API Architecture**: RESTful/GraphQL design patterns and best practices
- **Endpoint Design**: Resource modeling, URL structure, and HTTP method selection
- **Data Contracts**: Request/response schemas, validation rules, and data transformation
- **API Documentation**: OpenAPI/Swagger specifications and developer experience
### Role Boundaries & Responsibilities
#### **What This Role OWNS (API Contract Within Architectural Framework)**
- **API Contract Definition**: Specific endpoint paths, HTTP methods, and status codes
- **Resource Modeling**: Mapping domain entities to RESTful resources or GraphQL types
- **Request/Response Schemas**: Detailed data contracts, validation rules, and error formats
- **API Versioning Strategy**: Version management, deprecation policies, and migration paths
- **Developer Experience**: API documentation (OpenAPI/Swagger), code examples, and SDKs
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **System Architecture Decisions**: Microservices vs monolithic, overall communication patterns → Defers to **System Architect**
- **Canonical Data Model**: Underlying data schemas and entity relationships → Defers to **Data Architect**
- **UI/Frontend Integration**: How clients consume the API → Defers to **UI Designer**
#### **Handoff Points**
- **FROM System Architect**: Receives architectural constraints (REST vs GraphQL, sync vs async) that define the design space
- **FROM Data Architect**: Receives canonical data model and translates it into public API data contracts (as projection/view)
- **TO Frontend Teams**: Provides complete API specifications, documentation, and integration guides
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: .workflow/.active-* marker files
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
CHECK: brainstorm_dir/topic-framework.md
IF EXISTS:
framework_mode = true
load_framework = true
ELSE:
IF topic_provided:
framework_mode = false # Create analysis without framework
ELSE:
ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection
```bash
# Check existing analysis
CHECK: brainstorm_dir/api-designer/analysis.md
IF EXISTS:
SHOW existing analysis summary
ASK: "Analysis exists. Do you want to:"
OPTIONS:
1. "Update with new insights" → Update existing
2. "Replace completely" → Generate new
3. "Cancel" → Exit without changes
ELSE:
CREATE new analysis
```
### Phase 3: Agent Task Generation
**Framework-Based Analysis** (when topic-framework.md exists):
```bash
Task(subagent_type="conceptual-planning-agent",
prompt="Generate API designer analysis addressing topic framework
## Framework Integration Required
**MANDATORY**: Load and address topic-framework.md discussion points
**Framework Reference**: @{session.brainstorm_dir}/topic-framework.md
**Output Location**: {session.brainstorm_dir}/api-designer/analysis.md
## Analysis Requirements
1. **Load Topic Framework**: Read topic-framework.md completely
2. **Address Each Discussion Point**: Respond to all 5 framework sections from API design perspective
3. **Include Framework Reference**: Start analysis.md with @../topic-framework.md
4. **API Design Focus**: Emphasize endpoint structure, data contracts, versioning strategies
5. **Structured Response**: Use framework structure for analysis organization
## Framework Sections to Address
- Core Requirements (from API design perspective)
- Technical Considerations (detailed API architecture analysis)
- User Experience Factors (developer experience and API usability)
- Implementation Challenges (API design risks and solutions)
- Success Metrics (API performance metrics and adoption tracking)
## Output Structure Required
```markdown
# API Designer Analysis: [Topic]
**Framework Reference**: @../topic-framework.md
**Role Focus**: Backend API Design and Contract Definition
## Core Requirements Analysis
[Address framework requirements from API design perspective]
## Technical Considerations
[Detailed API architecture and endpoint design analysis]
## Developer Experience Factors
[API usability, documentation, and integration ease]
## Implementation Challenges
[API design risks and mitigation strategies]
## Success Metrics
[API performance metrics, adoption rates, and developer satisfaction]
## API Design-Specific Recommendations
[Detailed API design recommendations and best practices]
```",
description="Generate API designer framework-based analysis")
```
### Phase 4: Update Mechanism
**Analysis Update Process**:
```bash
# For existing analysis updates
IF update_mode = "incremental":
Task(subagent_type="conceptual-planning-agent",
prompt="Update existing API designer analysis
## Current Analysis Context
**Existing Analysis**: @{session.brainstorm_dir}/api-designer/analysis.md
**Framework Reference**: @{session.brainstorm_dir}/topic-framework.md
## Update Requirements
1. **Preserve Structure**: Maintain existing analysis structure
2. **Add New Insights**: Integrate new API design insights and recommendations
3. **Framework Alignment**: Ensure continued alignment with topic framework
4. **API Updates**: Add new endpoint patterns, versioning strategies, documentation improvements
5. **Maintain References**: Keep @../topic-framework.md reference
## Update Instructions
- Read existing analysis completely
- Identify areas for enhancement or new insights
- Add API design depth while preserving original structure
- Update recommendations with new API design patterns and approaches
- Maintain framework discussion point addressing",
description="Update API designer analysis incrementally")
```
## Document Structure
### Output Files
```
.workflow/WFS-[topic]/.brainstorming/
├── topic-framework.md # Input: Framework (if exists)
└── api-designer/
└── analysis.md # ★ OUTPUT: Framework-based analysis
```
### Analysis Structure
**Required Elements**:
- **Framework Reference**: @../topic-framework.md (if framework exists)
- **Role Focus**: Backend API Design and Contract Definition perspective
- **5 Framework Sections**: Address each framework discussion point
- **API Design Recommendations**: Endpoint-specific insights and solutions
## ⚡ **Two-Step Execution Flow**
### ⚠️ Session Management - FIRST STEP
Session detection and selection:
```bash
# Check for active sessions
active_sessions=$(find .workflow -name ".active-*" 2>/dev/null)
if [ multiple_sessions ]; then
prompt_user_to_select_session()
else
use_existing_or_create_new()
fi
```
### Step 1: Context Gathering Phase
**API Designer Perspective Questioning**
Before agent assignment, gather comprehensive API design context:
#### 📋 Role-Specific Questions
1. **API Type & Architecture**
- RESTful, GraphQL, or hybrid API approach?
- Synchronous vs asynchronous communication patterns?
- Real-time requirements (WebSocket, Server-Sent Events)?
2. **Resource Modeling & Endpoints**
- What are the core domain resources/entities?
- Expected CRUD operations for each resource?
- Complex query requirements (filtering, sorting, pagination)?
3. **Data Contracts & Validation**
- Request/response data format requirements (JSON, XML, Protocol Buffers)?
- Input validation and sanitization requirements?
- Data transformation and mapping needs?
4. **API Management & Governance**
- API versioning strategy requirements?
- Authentication and authorization mechanisms?
- Rate limiting and throttling requirements?
- API documentation and developer portal needs?
5. **Integration & Compatibility**
- Client platforms consuming the API (web, mobile, third-party)?
- Backward compatibility requirements?
- External API integrations needed?
#### Context Validation
- **Minimum Response**: Each answer must be ≥50 characters
- **Re-prompting**: Insufficient detail triggers follow-up questions
- **Context Storage**: Save responses to `.brainstorming/api-designer-context.md`
### Step 2: Agent Assignment with Flow Control
**Dedicated Agent Execution**
```bash
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute dedicated api-designer conceptual analysis for: {topic}
ASSIGNED_ROLE: api-designer
OUTPUT_LOCATION: .brainstorming/api-designer/
USER_CONTEXT: {validated_responses_from_context_gathering}
Flow Control Steps:
[
{
\"step\": \"load_role_template\",
\"action\": \"Load api-designer planning template\",
\"command\": \"bash($(cat ~/.claude/workflows/cli-templates/planning-roles/api-designer.md))\",
\"output_to\": \"role_template\"
}
]
Conceptual Analysis Requirements:
- Apply api-designer perspective to topic analysis
- Focus on endpoint design, data contracts, and API governance
- Use loaded role template framework for analysis structure
- Generate role-specific deliverables in designated output location
- Address all user context from questioning phase
Deliverables:
- analysis.md: Main API design analysis
- api-specification.md: Detailed endpoint specifications
- data-contracts.md: Request/response schemas and validation rules
- api-documentation.md: API documentation strategy and templates
Embody api-designer role expertise for comprehensive conceptual planning."
```
### Progress Tracking
TodoWrite tracking for two-step process:
```json
[
{"content": "Gather API designer context through role-specific questioning", "status": "in_progress", "activeForm": "Gathering context"},
{"content": "Validate context responses and save to api-designer-context.md", "status": "pending", "activeForm": "Validating context"},
{"content": "Load api-designer planning template via flow control", "status": "pending", "activeForm": "Loading template"},
{"content": "Execute dedicated conceptual-planning-agent for api-designer role", "status": "pending", "activeForm": "Executing agent"}
]
```
## 📊 **Output Specification**
### Output Location
```
.workflow/WFS-{topic-slug}/.brainstorming/api-designer/
├── analysis.md # Primary API design analysis
├── api-specification.md # Detailed endpoint specifications (OpenAPI/Swagger)
├── data-contracts.md # Request/response schemas and validation rules
├── versioning-strategy.md # API versioning and backward compatibility plan
└── developer-guide.md # API usage documentation and integration examples
```
### Document Templates
#### analysis.md Structure
```markdown
# API Design Analysis: {Topic}
*Generated: {timestamp}*
## Executive Summary
[Key API design findings and recommendations overview]
## API Architecture Overview
### API Type Selection (REST/GraphQL/Hybrid)
### Communication Patterns
### Authentication & Authorization Strategy
## Resource Modeling
### Core Domain Resources
### Resource Relationships
### URL Structure and Naming Conventions
## Endpoint Design
### Resource Endpoints
- GET /api/v1/resources
- POST /api/v1/resources
- GET /api/v1/resources/{id}
- PUT /api/v1/resources/{id}
- DELETE /api/v1/resources/{id}
### Query Parameters
- Filtering: ?filter[field]=value
- Sorting: ?sort=field,-field2
- Pagination: ?page=1&limit=20
### HTTP Methods and Status Codes
- Success responses (2xx)
- Client errors (4xx)
- Server errors (5xx)
## Data Contracts
### Request Schemas
[JSON Schema or OpenAPI definitions]
### Response Schemas
[JSON Schema or OpenAPI definitions]
### Validation Rules
- Required fields
- Data types and formats
- Business logic constraints
## API Versioning Strategy
### Versioning Approach (URL/Header/Accept)
### Version Lifecycle Management
### Deprecation Policy
### Migration Paths
## Security & Governance
### Authentication Mechanisms
- OAuth 2.0 / JWT / API Keys
### Authorization Patterns
- RBAC / ABAC / Resource-based
### Rate Limiting & Throttling
### CORS and Security Headers
## Error Handling
### Standard Error Response Format
```json
{
"error": {
"code": "ERROR_CODE",
"message": "Human-readable error message",
"details": [],
"trace_id": "uuid"
}
}
```
### Error Code Taxonomy
### Validation Error Responses
## API Documentation
### OpenAPI/Swagger Specification
### Developer Portal Requirements
### Code Examples and SDKs
### Changelog and Migration Guides
## Performance Optimization
### Response Caching Strategies
### Compression (gzip, brotli)
### Field Selection (sparse fieldsets)
### Bulk Operations and Batch Endpoints
## Monitoring & Observability
### API Metrics
- Request count, latency, error rates
- Endpoint usage analytics
### Logging Strategy
### Distributed Tracing
## Developer Experience
### API Usability Assessment
### Integration Complexity
### SDK and Client Library Needs
### Sandbox and Testing Environments
```
#### api-specification.md Structure
```markdown
# API Specification: {Topic}
*OpenAPI 3.0 Specification*
## API Information
- **Title**: {API Name}
- **Version**: 1.0.0
- **Base URL**: https://api.example.com/v1
- **Contact**: api-team@example.com
## Endpoints
### Users API
#### GET /users
**Description**: Retrieve a list of users
**Parameters**:
- `page` (query, integer): Page number (default: 1)
- `limit` (query, integer): Items per page (default: 20, max: 100)
- `sort` (query, string): Sort field (e.g., "created_at", "-updated_at")
- `filter[status]` (query, string): Filter by user status
**Response 200**:
```json
{
"data": [
{
"id": "uuid",
"username": "string",
"email": "string",
"created_at": "2025-10-15T00:00:00Z"
}
],
"meta": {
"page": 1,
"limit": 20,
"total": 100
},
"links": {
"self": "/users?page=1",
"next": "/users?page=2",
"prev": null
}
}
```
#### POST /users
**Description**: Create a new user
**Request Body**:
```json
{
"username": "string (required, 3-50 chars)",
"email": "string (required, valid email)",
"password": "string (required, min 8 chars)",
"profile": {
"first_name": "string (optional)",
"last_name": "string (optional)"
}
}
```
**Response 201**:
```json
{
"data": {
"id": "uuid",
"username": "string",
"email": "string",
"created_at": "2025-10-15T00:00:00Z"
}
}
```
**Response 400** (Validation Error):
```json
{
"error": {
"code": "VALIDATION_ERROR",
"message": "Request validation failed",
"details": [
{
"field": "email",
"message": "Invalid email format"
}
]
}
}
```
[Continue for all endpoints...]
## Authentication
### OAuth 2.0 Flow
1. Client requests authorization
2. User grants permission
3. Client receives access token
4. Client uses token in requests
**Header Format**:
```
Authorization: Bearer {access_token}
```
## Rate Limiting
**Headers**:
- `X-RateLimit-Limit`: 1000
- `X-RateLimit-Remaining`: 999
- `X-RateLimit-Reset`: 1634270400
**Response 429** (Too Many Requests):
```json
{
"error": {
"code": "RATE_LIMIT_EXCEEDED",
"message": "API rate limit exceeded",
"retry_after": 3600
}
}
```
```
## 🔄 **Session Integration**
### Status Synchronization
Upon completion, update `workflow-session.json`:
```json
{
"phases": {
"BRAINSTORM": {
"api_designer": {
"status": "completed",
"completed_at": "timestamp",
"output_directory": ".workflow/WFS-{topic}/.brainstorming/api-designer/",
"key_insights": ["endpoint_design", "versioning_strategy", "data_contracts"]
}
}
}
}
```
### Cross-Role Collaboration
API designer perspective provides:
- **API Contract Specifications** → Frontend Developer
- **Data Schema Requirements** → Data Architect
- **Security Requirements** → Security Expert
- **Integration Endpoints** → System Architect
- **Performance Constraints** → DevOps Engineer
## ✅ **Quality Assurance**
### Required Analysis Elements
- [ ] Complete endpoint inventory with HTTP methods and paths
- [ ] Detailed request/response schemas with validation rules
- [ ] Clear versioning strategy and backward compatibility plan
- [ ] Comprehensive error handling and status code usage
- [ ] API documentation strategy (OpenAPI/Swagger)
### API Design Principles
- [ ] **Consistency**: Uniform naming conventions and patterns across all endpoints
- [ ] **Simplicity**: Intuitive resource modeling and URL structures
- [ ] **Flexibility**: Support for filtering, sorting, pagination, and field selection
- [ ] **Security**: Proper authentication, authorization, and input validation
- [ ] **Performance**: Caching strategies, compression, and efficient data structures
### Developer Experience Validation
- [ ] API is self-documenting with clear endpoint descriptions
- [ ] Error messages are actionable and helpful for debugging
- [ ] Response formats are consistent and predictable
- [ ] Code examples and integration guides are provided
- [ ] Sandbox environment available for testing
### Technical Completeness
- [ ] **Resource Modeling**: All domain entities mapped to API resources
- [ ] **CRUD Coverage**: Complete create, read, update, delete operations
- [ ] **Query Capabilities**: Advanced filtering, sorting, and search functionality
- [ ] **Versioning**: Clear version management and migration paths
- [ ] **Monitoring**: API metrics, logging, and tracing strategies defined
### Integration Readiness
- [ ] **Client Compatibility**: API works with all target client platforms
- [ ] **External Integration**: Third-party API dependencies identified
- [ ] **Backward Compatibility**: Changes don't break existing clients
- [ ] **Migration Path**: Clear upgrade paths for API consumers
- [ ] **SDK Support**: Client libraries and code generation considered

View File

@@ -14,6 +14,7 @@ allowed-tools: SlashCommand(*), Task(*), TodoWrite(*), Read(*), Write(*), Bash(*
## Role Selection Logic
- **Technical & Architecture**: `architecture|system|performance|database|security` → system-architect, data-architect, security-expert, subject-matter-expert
- **API & Backend**: `api|endpoint|rest|graphql|backend|interface|contract|service` → api-designer, system-architect, data-architect
- **Product & UX**: `user|ui|ux|interface|design|product|feature|experience` → ui-designer, user-researcher, product-manager, ux-expert, product-owner
- **Business & Process**: `business|process|workflow|cost|innovation|testing` → business-analyst, innovation-lead, test-strategist
- **Agile & Delivery**: `agile|sprint|scrum|team|collaboration|delivery` → scrum-master, product-owner
@@ -23,7 +24,7 @@ allowed-tools: SlashCommand(*), Task(*), TodoWrite(*), Read(*), Write(*), Bash(*
**Template Loading**: `bash($(cat "~/.claude/workflows/cli-templates/planning-roles/<role-name>.md"))`
**Template Source**: `.claude/workflows/cli-templates/planning-roles/`
**Available Roles**: data-architect, product-manager, product-owner, scrum-master, subject-matter-expert, system-architect, test-strategist, ui-designer, ux-expert
**Available Roles**: api-designer, data-architect, product-manager, product-owner, scrum-master, subject-matter-expert, system-architect, test-strategist, ui-designer, ux-expert
**Example**:
```bash
@@ -70,6 +71,7 @@ Auto command coordinates independent specialized commands:
**Role Selection Logic**:
- **Technical**: `architecture|system|performance|database` → system-architect, data-architect, subject-matter-expert
- **API & Backend**: `api|endpoint|rest|graphql|backend|interface|contract|service` → api-designer, system-architect, data-architect
- **Product & UX**: `user|ui|ux|interface|design|product|feature|experience` → ui-designer, ux-expert, product-manager, product-owner
- **Agile & Delivery**: `agile|sprint|scrum|team|collaboration|delivery` → scrum-master, product-owner
- **Domain Expertise**: `domain|standard|compliance|expertise|regulation` → subject-matter-expert

View File

@@ -1,253 +0,0 @@
---
name: auto-squeeze
description: Orchestrate 3-phase brainstorming workflow by executing commands sequentially
argument-hint: "topic or challenge description for coordinated brainstorming"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Glob(*)
---
# Workflow Brainstorm Auto-Squeeze Command
## Coordinator Role
**This command is a pure orchestrator**: Execute brainstorming commands in sequence (artifacts → roles → synthesis), auto-select relevant roles, and ensure complete brainstorming workflow execution.
**Execution Flow**:
1. Initialize TodoWrite → Execute Phase 1 (artifacts) → Validate framework → Update TodoWrite
2. Select 2-3 relevant roles → Display selection → Execute Phase 2 (role analyses) → Update TodoWrite
3. Execute Phase 3 (synthesis) → Validate outputs → Return summary
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 command execution
2. **Auto-Select Roles**: Analyze topic keywords to select 2-3 most relevant roles (max 3)
3. **Display Selection**: Show selected roles to user before execution
4. **Sequential Execution**: Execute role commands one by one, not in parallel
5. **Complete All Phases**: Do not return to user until synthesis completes
6. **Track Progress**: Update TodoWrite after every command completion
## 3-Phase Execution
### Phase 1: Framework Generation
**Step 1.1: Role Selection**
Auto-select 2-3 roles based on topic keywords (see Role Selection Logic below)
**Step 1.2: Generate Role-Specific Framework**
**Command**: `SlashCommand(command="/workflow:brainstorm:artifacts \"[topic]\" --roles \"[role1,role2,role3]\"")`
**Input**: Selected roles from step 1.1
**Parse Output**:
- Verify topic-framework.md created with role-specific sections
**Validation**:
- File `.workflow/[session]/.brainstorming/topic-framework.md` exists
- Contains sections for each selected role
- Includes cross-role integration points
**TodoWrite**: Mark phase 1 completed, mark "Display selected roles" as in_progress
---
### Phase 2: Role Analysis Execution
**Step 2.1: Role Selection**
Use keyword analysis to auto-select 2-3 roles:
**Role Selection Logic**:
- **Technical/Architecture keywords**: `architecture|system|performance|database|api|backend|scalability`
→ system-architect, data-architect, subject-matter-expert
- **UI/UX keywords**: `user|ui|ux|interface|design|frontend|experience`
→ ui-designer, ux-expert
- **Product/Business keywords**: `product|feature|business|workflow|process|customer`
→ product-manager, product-owner
- **Agile/Delivery keywords**: `agile|sprint|scrum|team|collaboration|delivery`
→ scrum-master, product-owner
- **Domain Expertise keywords**: `domain|standard|compliance|expertise|regulation`
→ subject-matter-expert
- **Default**: product-manager (if no clear match)
**Selection Rules**:
- Maximum 3 roles
- Select most relevant role first based on strongest keyword match
- Include complementary perspectives (e.g., if system-architect selected, also consider security-expert)
**Step 2.2: Display Selected Roles**
Show selection to user before execution:
```
Selected roles for analysis:
- ui-designer (UI/UX perspective)
- system-architect (Technical architecture)
- security-expert (Security considerations)
```
**Step 2.3: Execute Role Commands Sequentially**
Execute each selected role command one by one:
**Commands**:
- `SlashCommand(command="/workflow:brainstorm:ui-designer")`
- `SlashCommand(command="/workflow:brainstorm:ux-expert")`
- `SlashCommand(command="/workflow:brainstorm:system-architect")`
- `SlashCommand(command="/workflow:brainstorm:data-architect")`
- `SlashCommand(command="/workflow:brainstorm:product-manager")`
- `SlashCommand(command="/workflow:brainstorm:product-owner")`
- `SlashCommand(command="/workflow:brainstorm:scrum-master")`
- `SlashCommand(command="/workflow:brainstorm:subject-matter-expert")`
- `SlashCommand(command="/workflow:brainstorm:test-strategist")`
**Validation** (after each role):
- File `.workflow/[session]/.brainstorming/[role]/analysis.md` exists
- Contains role-specific analysis
**TodoWrite**: Mark each role task completed after execution, start next role as in_progress
---
### Phase 3: Synthesis Generation
**Command**: `SlashCommand(command="/workflow:brainstorm:synthesis")`
**Validation**:
- File `.workflow/[session]/.brainstorming/synthesis-report.md` exists
- Contains cross-references to role analyses using @ notation
**TodoWrite**: Mark phase 3 completed
**Return to User**:
```
Brainstorming complete for topic: [topic]
Framework: .workflow/[session]/.brainstorming/topic-framework.md
Roles analyzed: [role1], [role2], [role3]
Synthesis: .workflow/[session]/.brainstorming/synthesis-report.md
```
## TodoWrite Pattern
```javascript
// Initialize (before Phase 1)
TodoWrite({todos: [
{"content": "Generate topic framework", "status": "in_progress", "activeForm": "Generating topic framework"},
{"content": "Display selected roles", "status": "pending", "activeForm": "Displaying selected roles"},
{"content": "Execute ui-designer analysis", "status": "pending", "activeForm": "Executing ui-designer analysis"},
{"content": "Execute system-architect analysis", "status": "pending", "activeForm": "Executing system-architect analysis"},
{"content": "Execute security-expert analysis", "status": "pending", "activeForm": "Executing security-expert analysis"},
{"content": "Generate synthesis report", "status": "pending", "activeForm": "Generating synthesis report"}
]})
// After Phase 1
TodoWrite({todos: [
{"content": "Generate topic framework", "status": "completed", "activeForm": "Generating topic framework"},
{"content": "Display selected roles", "status": "in_progress", "activeForm": "Displaying selected roles"},
{"content": "Execute ui-designer analysis", "status": "pending", "activeForm": "Executing ui-designer analysis"},
{"content": "Execute system-architect analysis", "status": "pending", "activeForm": "Executing system-architect analysis"},
{"content": "Execute security-expert analysis", "status": "pending", "activeForm": "Executing security-expert analysis"},
{"content": "Generate synthesis report", "status": "pending", "activeForm": "Generating synthesis report"}
]})
// After displaying roles
TodoWrite({todos: [
{"content": "Generate topic framework", "status": "completed", "activeForm": "Generating topic framework"},
{"content": "Display selected roles", "status": "completed", "activeForm": "Displaying selected roles"},
{"content": "Execute ui-designer analysis", "status": "in_progress", "activeForm": "Executing ui-designer analysis"},
{"content": "Execute system-architect analysis", "status": "pending", "activeForm": "Executing system-architect analysis"},
{"content": "Execute security-expert analysis", "status": "pending", "activeForm": "Executing security-expert analysis"},
{"content": "Generate synthesis report", "status": "pending", "activeForm": "Generating synthesis report"}
]})
// Continue pattern for each role and synthesis...
```
## Data Flow
```
User Input (topic)
Role Selection (analyze topic keywords)
↓ Output: 2-3 selected roles (e.g., ui-designer, system-architect, security-expert)
Phase 1: artifacts "topic" --roles "role1,role2,role3"
↓ Input: topic + selected roles
↓ Output: role-specific topic-framework.md
Display: Show selected roles to user
Phase 2: Execute each role command sequentially
↓ Role 1 → reads role-specific section → analysis.md
↓ Role 2 → reads role-specific section → analysis.md
↓ Role 3 → reads role-specific section → analysis.md
Phase 3: synthesis
↓ Input: role-specific framework + all role analyses
↓ Output: synthesis-report.md with role-targeted insights
Return summary to user
```
**Session Context**: All commands use active brainstorming session, sharing:
- Role-specific topic framework
- Role-targeted analyses
- Cross-role integration points
- Synthesis with role-specific insights
**Key Improvement**: Framework is generated with roles parameter, ensuring all discussion points are relevant to selected roles
## Role Selection Examples
### Example 1: UI-Focused Topic
**Topic**: "Redesign user authentication interface"
**Keywords detected**: user, interface, design
**Selected roles**:
- ui-designer (primary: UI/UX match)
- ux-expert (secondary: user experience)
- subject-matter-expert (complementary: auth standards)
### Example 2: Architecture Topic
**Topic**: "Design scalable microservices architecture"
**Keywords detected**: architecture, scalable, system
**Selected roles**:
- system-architect (primary: architecture match)
- data-architect (secondary: scalability/data)
- subject-matter-expert (complementary: domain expertise)
### Example 3: Agile Delivery Topic
**Topic**: "Optimize team sprint planning and delivery process"
**Keywords detected**: sprint, team, delivery, process
**Selected roles**:
- scrum-master (primary: agile process match)
- product-owner (secondary: backlog/delivery focus)
- product-manager (complementary: product strategy)
## Error Handling
- **Framework Generation Failure**: Stop workflow, report error, do not proceed to role selection
- **Role Analysis Failure**: Log failure, continue with remaining roles, note in final summary
- **Synthesis Failure**: Retry once, if still fails report partial completion with available analyses
- **Session Error**: Report session issue, prompt user to check session status
## Output Structure
```
.workflow/[session]/.brainstorming/
├── topic-framework.md # Phase 1 output
├── [role1]/
│ └── analysis.md # Phase 2 output (role 1)
├── [role2]/
│ └── analysis.md # Phase 2 output (role 2)
├── [role3]/
│ └── analysis.md # Phase 2 output (role 3)
└── synthesis-report.md # Phase 3 output
```
## Coordinator Checklist
✅ Initialize TodoWrite with framework + display + N roles + synthesis tasks
✅ Execute Phase 1 (artifacts) immediately
✅ Validate topic-framework.md exists
✅ Analyze topic keywords for role selection
✅ Auto-select 2-3 most relevant roles (max 3)
✅ Display selected roles to user with rationale
✅ Execute each role command sequentially
✅ Validate each role's analysis.md after execution
✅ Update TodoWrite after each role completion
✅ Execute Phase 3 (synthesis) after all roles complete
✅ Validate synthesis-report.md exists
✅ Return summary with all generated files

View File

@@ -22,6 +22,26 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
- **Data Quality Management**: Data accuracy, completeness, and consistency
- **Analytics and Insights**: Data analysis and business intelligence solutions
### Role Boundaries & Responsibilities
#### **What This Role OWNS (Canonical Data Model - Source of Truth)**
- **Canonical Data Model**: The authoritative, system-wide data schema representing domain entities and relationships
- **Entity-Relationship Design**: Defining entities, attributes, relationships, and constraints
- **Data Normalization & Optimization**: Ensuring data integrity, reducing redundancy, and optimizing storage
- **Database Schema Design**: Physical database structures, indexes, partitioning strategies
- **Data Pipeline Architecture**: ETL/ELT processes, data warehousing, and analytics pipelines
- **Data Governance**: Data quality standards, retention policies, and compliance requirements
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **API Data Contracts**: Public-facing request/response schemas exposed by APIs → Defers to **API Designer**
- **System Integration Patterns**: How services communicate at the macro level → Defers to **System Architect**
- **UI Data Presentation**: How data is displayed to users → Defers to **UI Designer**
#### **Handoff Points**
- **TO API Designer**: Provides canonical data model that API Designer translates into public API data contracts (as projection/view)
- **TO System Architect**: Provides data flow requirements and storage constraints to inform system design
- **FROM System Architect**: Receives system-level integration requirements and scalability constraints
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection

View File

@@ -22,6 +22,25 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
- **Performance & Scalability**: Capacity planning and optimization strategies
- **Integration Patterns**: System communication and data flow design
### Role Boundaries & Responsibilities
#### **What This Role OWNS (Macro-Architecture)**
- **System-Level Architecture**: Service boundaries, deployment topology, and system composition
- **Cross-Service Communication Patterns**: Choosing between microservices/monolithic, event-driven/request-response, sync/async patterns
- **Technology Stack Decisions**: Language, framework, database, and infrastructure choices
- **Non-Functional Requirements**: Scalability, performance, availability, disaster recovery, and monitoring strategies
- **Integration Planning**: How systems and services connect at the macro level (not specific API contracts)
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **API Contract Details**: Specific endpoint definitions, URL structures, HTTP methods → Defers to **API Designer**
- **Data Schemas**: Detailed data model design and entity relationships → Defers to **Data Architect**
- **UI/UX Design**: Interface design and user experience → Defers to **UX Expert** and **UI Designer**
#### **Handoff Points**
- **TO API Designer**: Provides architectural constraints (REST vs GraphQL, sync vs async) that define the API design space
- **TO Data Architect**: Provides system-level data flow requirements and integration patterns
- **FROM Data Architect**: Receives canonical data model to inform system integration design
## ⚙️ **Execution Protocol**
### Phase 1: Session & Framework Detection

View File

@@ -17,10 +17,31 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **User Experience Design**: Intuitive and efficient user experiences
- **Interface Design**: Beautiful and functional user interfaces
- **Interaction Design**: Smooth user interaction flows and micro-interactions
- **Accessibility Design**: Inclusive design meeting WCAG compliance
- **Visual Design**: Color palettes, typography, spacing, and visual hierarchy implementation
- **High-Fidelity Mockups**: Polished, pixel-perfect interface designs
- **Design System Implementation**: Component libraries, design tokens, and style guides
- **Micro-Interactions & Animations**: Transition effects, loading states, and interactive feedback
- **Responsive Design**: Layout adaptations for different screen sizes and devices
### Role Boundaries & Responsibilities
#### **What This Role OWNS (Concrete Visual Interface Implementation)**
- **Visual Design Language**: Colors, typography, iconography, spacing, and overall aesthetic
- **High-Fidelity Mockups**: Polished designs showing exactly how the interface will look
- **Design System Components**: Building and documenting reusable UI components (buttons, inputs, cards, etc.)
- **Design Tokens**: Defining variables for colors, spacing, typography that can be used in code
- **Micro-Interactions**: Hover states, transitions, animations, and interactive feedback details
- **Responsive Layouts**: Adapting designs for mobile, tablet, and desktop breakpoints
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **User Research & Personas**: User behavior analysis and needs assessment → Defers to **UX Expert**
- **Information Architecture**: Content structure and navigation strategy → Defers to **UX Expert**
- **Low-Fidelity Wireframes**: Structural layouts without visual design → Defers to **UX Expert**
#### **Handoff Points**
- **FROM UX Expert**: Receives wireframes, user flows, and information architecture as the foundation for visual design
- **TO Frontend Developers**: Provides design specifications, component libraries, and design tokens for implementation
- **WITH API Designer**: Coordinates on data presentation and form validation feedback (visual aspects only)
## ⚙️ **Execution Protocol**

View File

@@ -17,10 +17,31 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
### Analysis Scope
- **User Interface Design**: Visual hierarchy, layout patterns, and component design
- **Interaction Patterns**: User flows, navigation, and microinteractions
- **Usability Optimization**: Accessibility, cognitive load, and user testing strategies
- **Design Systems**: Component libraries, design tokens, and consistency frameworks
- **User Research**: User personas, behavioral analysis, and needs assessment
- **Information Architecture**: Content structure, navigation hierarchy, and mental models
- **User Journey Mapping**: User flows, task analysis, and interaction models
- **Usability Strategy**: Accessibility planning, cognitive load reduction, and user testing frameworks
- **Wireframing**: Low-fidelity layouts and structural prototypes (not visual design)
### Role Boundaries & Responsibilities
#### **What This Role OWNS (Abstract User Experience & Research)**
- **User Research & Personas**: Understanding target users, their goals, pain points, and behaviors
- **Information Architecture**: Organizing content and defining navigation structures at a conceptual level
- **User Journey Mapping**: Defining user flows, task sequences, and interaction models
- **Wireframes & Low-Fidelity Prototypes**: Structural layouts showing information hierarchy (boxes and arrows, not colors/fonts)
- **Usability Testing Strategy**: Planning user testing, A/B tests, and validation methods
- **Accessibility Planning**: WCAG compliance strategy and inclusive design principles
#### **What This Role DOES NOT Own (Defers to Other Roles)**
- **Visual Design**: Colors, typography, spacing, visual style → Defers to **UI Designer**
- **High-Fidelity Mockups**: Polished, pixel-perfect designs → Defers to **UI Designer**
- **Component Implementation**: Design system components, CSS, animations → Defers to **UI Designer**
#### **Handoff Points**
- **TO UI Designer**: Provides wireframes, user flows, and information architecture that UI Designer will transform into high-fidelity visual designs
- **FROM User Research**: May receive external research data to inform UX decisions
- **TO Product Owner**: Provides user insights and validation results to inform feature prioritization
## ⚙️ **Execution Protocol**

View File

@@ -109,6 +109,14 @@ CONTEXT: Existing user database schema, REST API endpoints
**After Phase 3**: Return to user showing Phase 3 results, then auto-continue to Phase 4
**Memory State Check**:
- Evaluate current context window usage and memory state
- If memory usage is high (>110K tokens or approaching context limits):
- **Command**: `SlashCommand(command="/compact")`
- This optimizes memory before proceeding to Phase 4
- Memory compaction is particularly important after analysis phase which may generate extensive documentation
- Ensures optimal performance and prevents context overflow
---
### Phase 4: Task Generation

View File

@@ -10,47 +10,40 @@ examples:
# Enhanced Analysis Command (/workflow:tools:concept-enhanced)
## Overview
Advanced solution design and feasibility analysis engine with parallel CLI execution that processes standardized context packages and produces comprehensive technical analysis focused on solution improvements, key design decisions, and critical insights.
Advanced solution design and feasibility analysis engine with parallel CLI execution. Processes standardized context packages to produce ANALYSIS_RESULTS.md focused on solution improvements, key design decisions, and critical insights.
**Analysis Focus**: Produces ANALYSIS_RESULTS.md with solution design, architectural rationale, feasibility assessment, and optimization strategies. Does NOT generate task breakdowns or implementation plans.
**Scope**: Solution-focused technical analysis only. Does NOT generate task breakdowns or implementation plans.
**Independent Usage**: This command can be called directly by users or as part of the `/workflow:plan` command. It accepts context packages and provides solution-focused technical analysis.
**Usage**: Standalone command or integrated into `/workflow:plan`. Accepts context packages and orchestrates Gemini/Codex for comprehensive analysis.
## Core Philosophy
- **Solution-Focused**: Emphasize design decisions, architectural rationale, and critical insights
- **Context-Driven**: Precise analysis based on comprehensive context packages
- **Intelligent Tool Selection**: Choose optimal tools based on task complexity (Gemini for design, Codex for validation)
## Core Philosophy & Responsibilities
- **Solution-Focused Analysis**: Emphasize design decisions, architectural rationale, and critical insights (exclude task planning)
- **Context-Driven**: Parse and validate context-package.json for precise analysis
- **Intelligent Tool Selection**: Gemini for design (all tasks), Codex for validation (complex tasks only)
- **Parallel Execution**: Execute multiple CLI tools simultaneously for efficiency
- **No Task Planning**: Exclude implementation steps, task breakdowns, and project planning
- **Solution Design**: Evaluate architecture, identify key design decisions with rationale
- **Feasibility Assessment**: Analyze technical complexity, risks, implementation readiness
- **Optimization Recommendations**: Performance, security, and code quality improvements
- **Perspective Synthesis**: Integrate multi-tool insights into unified assessment
- **Single Output**: Generate only ANALYSIS_RESULTS.md with technical analysis
## Core Responsibilities
- **Context Package Parsing**: Read and validate context-package.json
- **Parallel CLI Orchestration**: Execute Gemini (solution design) and optionally Codex (feasibility validation)
- **Solution Design Analysis**: Evaluate architecture, identify key design decisions with rationale
- **Feasibility Assessment**: Analyze technical complexity, risks, and implementation readiness
- **Optimization Recommendations**: Propose performance, security, and code quality improvements
- **Perspective Synthesis**: Integrate Gemini and Codex insights into unified solution assessment
- **Technical Analysis Report**: Generate ANALYSIS_RESULTS.md focused on design decisions and critical insights (NO task planning)
## Analysis Strategy Selection
### Tool Selection by Task Complexity
**Simple Tasks (≤3 modules)**:
- **Primary Tool**: Gemini (rapid understanding and pattern recognition)
- **Support Tool**: Code-index (structural analysis)
- **Execution Mode**: Single-round analysis, focus on existing patterns
- **Primary**: Gemini (rapid understanding + pattern recognition)
- **Support**: Code-index (structural analysis)
- **Mode**: Single-round analysis
**Medium Tasks (4-6 modules)**:
- **Primary Tool**: Gemini (comprehensive single-round analysis and architecture design)
- **Support Tools**: Code-index + Exa (external best practices)
- **Execution Mode**: Single comprehensive analysis covering understanding + architecture design
- **Primary**: Gemini (comprehensive analysis + architecture design)
- **Support**: Code-index + Exa (best practices)
- **Mode**: Single comprehensive round
**Complex Tasks (>6 modules)**:
- **Primary Tools**: Gemini (single comprehensive analysis) + Codex (implementation validation)
- **Analysis Strategy**: Gemini handles understanding + architecture in one round, Codex validates implementation
- **Execution Mode**: Parallel execution - Gemini comprehensive analysis + Codex validation
- **Primary**: Gemini (comprehensive analysis) + Codex (validation)
- **Mode**: Parallel execution - Gemini design + Codex feasibility
### Tool Preferences by Tech Stack
@@ -77,168 +70,82 @@ Advanced solution design and feasibility analysis engine with parallel CLI execu
## Execution Lifecycle
### Phase 1: Validation & Preparation
1. **Session Validation**
- Verify session directory exists: `.workflow/{session_id}/`
- Load session metadata from `workflow-session.json`
- Validate session state and task context
2. **Context Package Validation**
- Verify context package exists at specified path
- Validate JSON format and structure
- Assess context package size and complexity
3. **Task Analysis & Classification**
- Parse task description and extract keywords
- Identify technical domain and complexity level
- Determine required analysis depth and scope
- Load existing session context and task summaries
4. **Tool Selection Strategy**
- **Simple/Medium Tasks**: Single Gemini comprehensive analysis
- **Complex Tasks**: Gemini comprehensive + Codex validation
- Load appropriate prompt templates and configurations
1. **Session Validation**: Verify `.workflow/{session_id}/` exists, load `workflow-session.json`
2. **Context Package Validation**: Verify path, validate JSON format and structure
3. **Task Analysis**: Extract keywords, identify domain/complexity, determine scope
4. **Tool Selection**: Gemini (all tasks), +Codex (complex only), load templates
### Phase 2: Analysis Preparation
1. **Workspace Setup**
- Create analysis output directory: `.workflow/{session_id}/.process/`
- Initialize log files and monitoring structures
- Set process limits and resource management
2. **Context Optimization**
- Filter high-priority assets from context package
- Organize project structure and dependencies
- Prepare template references and rule configurations
3. **Execution Environment**
- Configure CLI tools with write permissions
- Set timeout parameters and monitoring intervals
- Prepare error handling and recovery mechanisms
1. **Workspace Setup**: Create `.workflow/{session_id}/.process/`, initialize logs, set resource limits
2. **Context Optimization**: Filter high-priority assets, organize structure, prepare templates
3. **Execution Environment**: Configure CLI tools, set timeouts, prepare error handling
### Phase 3: Parallel Analysis Execution
1. **Gemini Solution Design & Architecture Analysis**
- **Tool Configuration**:
```bash
~/.claude/scripts/gemini-wrapper -p "
PURPOSE: Analyze and design optimal solution for {task_description}
TASK: Evaluate current architecture, propose solution design, and identify key design decisions
CONTEXT: @{.workflow/{session_id}/.process/context-package.json,.workflow/{session_id}/workflow-session.json,CLAUDE.md}
```bash
~/.claude/scripts/gemini-wrapper -p "
PURPOSE: Analyze and design optimal solution for {task_description}
TASK: Evaluate current architecture, propose solution design, identify key design decisions
CONTEXT: @{.workflow/{session_id}/.process/context-package.json,.workflow/{session_id}/workflow-session.json,CLAUDE.md}
**MANDATORY FIRST STEP**: Read and analyze .workflow/{session_id}/.process/context-package.json to understand:
- Task requirements from metadata.task_description
- Relevant source files from assets[] array
- Tech stack from tech_stack section
- Project structure from statistics section
**MANDATORY**: Read context-package.json to understand task requirements, source files, tech stack, project structure
**ANALYSIS PRIORITY - Use ALL source documents from context-package assets[]**:
1. PRIMARY SOURCES (Highest Priority): Individual role analysis.md files (system-architect, ui-designer, product-manager, etc.)
- These contain complete technical details, design rationale, ADRs, and decision context
- Extract: Technical specs, API schemas, design tokens, caching configs, performance metrics
2. SYNTHESIS REFERENCE (Medium Priority): synthesis-specification.md
- Use for integrated requirements and cross-role alignment
- Validate decisions and identify integration points
3. TOPIC FRAMEWORK (Low Priority): topic-framework.md for discussion context
**ANALYSIS PRIORITY**:
1. PRIMARY: Individual role analysis.md files (system-architect, ui-designer, etc.) - technical details, ADRs, decision context
2. SECONDARY: synthesis-specification.md - integrated requirements, cross-role alignment
3. REFERENCE: topic-framework.md - discussion context
EXPECTED:
1. CURRENT STATE ANALYSIS: Existing patterns, code structure, integration points, technical debt
2. SOLUTION DESIGN: Core architecture principles, system design, key design decisions with rationale
3. CRITICAL INSIGHTS: What works well, identified gaps, technical risks, architectural tradeoffs
4. OPTIMIZATION STRATEGIES: Performance improvements, security enhancements, code quality recommendations
5. FEASIBILITY ASSESSMENT: Complexity analysis, compatibility evaluation, implementation readiness
6. **OUTPUT FILE**: Write complete analysis to .workflow/{session_id}/.process/gemini-solution-design.md
EXPECTED:
1. CURRENT STATE: Existing patterns, code structure, integration points, technical debt
2. SOLUTION DESIGN: Core principles, system design, key decisions with rationale
3. CRITICAL INSIGHTS: Strengths, gaps, risks, tradeoffs
4. OPTIMIZATION: Performance, security, code quality recommendations
5. FEASIBILITY: Complexity analysis, compatibility, implementation readiness
6. OUTPUT: Write to .workflow/{session_id}/.process/gemini-solution-design.md
RULES:
- Focus on SOLUTION IMPROVEMENTS and KEY DESIGN DECISIONS, NOT task planning
- Provide architectural rationale, evaluate alternatives, assess tradeoffs
- **CRITICAL**: Identify code targets - existing files as "file:function:lines", new files as "file"
- For modifications: specify exact files/functions/line ranges
- For new files: specify file path only (no function or lines)
- Do NOT create task lists, implementation steps, or code examples
- Do NOT generate any code snippets or implementation details
- **MUST write output to .workflow/{session_id}/.process/gemini-solution-design.md**
- Output ONLY architectural analysis and design recommendations
" --approval-mode yolo
```
- **Output Location**: `.workflow/{session_id}/.process/gemini-solution-design.md`
RULES:
- Focus on SOLUTION IMPROVEMENTS and KEY DESIGN DECISIONS (NO task planning)
- Identify code targets: existing "file:function:lines", new files "file"
- Do NOT create task lists, implementation steps, or code examples
" --approval-mode yolo
```
Output: `.workflow/{session_id}/.process/gemini-solution-design.md`
2. **Codex Technical Feasibility Validation** (Complex Tasks Only)
- **Tool Configuration**:
```bash
codex --full-auto exec "
PURPOSE: Validate technical feasibility and identify implementation risks for {task_description}
TASK: Assess implementation complexity, validate technology choices, evaluate performance and security implications
CONTEXT: @{.workflow/{session_id}/.process/context-package.json,.workflow/{session_id}/.process/gemini-solution-design.md,.workflow/{session_id}/workflow-session.json,CLAUDE.md}
```bash
codex --full-auto exec "
PURPOSE: Validate technical feasibility and identify implementation risks for {task_description}
TASK: Assess complexity, validate technology choices, evaluate performance/security implications
CONTEXT: @{.workflow/{session_id}/.process/context-package.json,.workflow/{session_id}/.process/gemini-solution-design.md,.workflow/{session_id}/workflow-session.json,CLAUDE.md}
**MANDATORY FIRST STEP**: Read and analyze:
- .workflow/{session_id}/.process/context-package.json for task context
- .workflow/{session_id}/.process/gemini-solution-design.md for proposed solution design
- Relevant source files listed in context-package.json assets[]
**MANDATORY**: Read context-package.json, gemini-solution-design.md, and relevant source files
EXPECTED:
1. FEASIBILITY ASSESSMENT: Technical complexity rating, resource requirements, technology compatibility
2. RISK ANALYSIS: Implementation risks, integration challenges, performance concerns, security vulnerabilities
3. TECHNICAL VALIDATION: Development approach validation, quality standards assessment, maintenance implications
4. CRITICAL RECOMMENDATIONS: Must-have requirements, optimization opportunities, security controls
5. **OUTPUT FILE**: Write validation results to .workflow/{session_id}/.process/codex-feasibility-validation.md
EXPECTED:
1. FEASIBILITY: Complexity rating, resource requirements, technology compatibility
2. RISK ANALYSIS: Implementation risks, integration challenges, performance/security concerns
3. VALIDATION: Development approach, quality standards, maintenance implications
4. RECOMMENDATIONS: Must-have requirements, optimization opportunities, security controls
5. OUTPUT: Write to .workflow/{session_id}/.process/codex-feasibility-validation.md
RULES:
- Focus on TECHNICAL FEASIBILITY and RISK ASSESSMENT, NOT implementation planning
- Validate architectural decisions, identify potential issues, recommend optimizations
- **CRITICAL**: Verify code targets - existing files "file:function:lines", new files "file"
- Confirm exact locations for modifications, identify additional new files if needed
- Do NOT create task breakdowns, step-by-step guides, or code examples
- Do NOT generate any code snippets or implementation details
- **MUST write output to .workflow/{session_id}/.process/codex-feasibility-validation.md**
- Output ONLY feasibility analysis and risk assessment
" --skip-git-repo-check -s danger-full-access
```
- **Output Location**: `.workflow/{session_id}/.process/codex-feasibility-validation.md`
RULES:
- Focus on TECHNICAL FEASIBILITY and RISK ASSESSMENT (NO implementation planning)
- Verify code targets: existing "file:function:lines", new files "file"
- Do NOT create task breakdowns, step-by-step guides, or code examples
" --skip-git-repo-check -s danger-full-access
```
Output: `.workflow/{session_id}/.process/codex-feasibility-validation.md`
3. **Parallel Execution Management**
- Launch both tools simultaneously for complex tasks
- Monitor execution progress with timeout controls
- Handle process completion and error scenarios
- Maintain execution logs for debugging and recovery
3. **Parallel Execution**: Launch tools simultaneously, monitor progress, handle completion/errors, maintain logs
### Phase 4: Results Collection & Synthesis
1. **Output Validation & Collection**
- **Gemini Results**: Validate `gemini-solution-design.md` contains complete solution analysis
- **Codex Results**: For complex tasks, validate `codex-feasibility-validation.md` with technical assessment
- **Fallback Processing**: Use execution logs if primary outputs are incomplete
- **Status Classification**: Mark each tool as completed, partial, failed, or skipped
2. **Quality Assessment**
- **Design Quality**: Verify architectural decisions have clear rationale and alternatives analysis
- **Insight Depth**: Assess quality of critical insights and risk identification
- **Feasibility Rigor**: Validate completeness of technical feasibility assessment
- **Optimization Value**: Check actionability of optimization recommendations
3. **Analysis Synthesis Strategy**
- **Simple/Medium Tasks**: Direct integration of Gemini solution design
- **Complex Tasks**: Synthesis of Gemini design with Codex feasibility validation
- **Conflict Resolution**: Identify architectural disagreements and provide balanced resolution
- **Confidence Scoring**: Assess overall solution confidence based on multi-tool consensus
1. **Output Validation**: Validate gemini-solution-design.md (all), codex-feasibility-validation.md (complex), use logs if incomplete, classify status
2. **Quality Assessment**: Verify design rationale, insight depth, feasibility rigor, optimization value
3. **Synthesis Strategy**: Direct integration (simple/medium), multi-tool synthesis (complex), resolve conflicts, score confidence
### Phase 5: ANALYSIS_RESULTS.md Generation
1. **Structured Report Assembly**
- **Executive Summary**: Analysis focus, overall assessment, recommendation status
- **Current State Analysis**: Architecture overview, compatibility, critical findings
- **Proposed Solution Design**: Core principles, system design, key design decisions with rationale
- **Implementation Strategy**: Development approach, feasibility assessment, risk mitigation
- **Solution Optimization**: Performance, security, code quality recommendations
- **Critical Success Factors**: Technical requirements, quality metrics, success validation
- **Confidence & Recommendations**: Assessment scores, final recommendation with rationale
2. **Report Generation Guidelines**
- **Focus**: Solution improvements, key design decisions, critical insights
- **Exclude**: Task breakdowns, implementation steps, project planning
- **Emphasize**: Architectural rationale, tradeoff analysis, risk assessment
- **Structure**: Clear sections with decision justification and feasibility scoring
3. **Final Output**
- **Primary Output**: `ANALYSIS_RESULTS.md` - comprehensive solution design and technical analysis
- **Single File Policy**: Only generate ANALYSIS_RESULTS.md, no supplementary files
- **Report Location**: `.workflow/{session_id}/.process/ANALYSIS_RESULTS.md`
- **Content Focus**: Technical insights, design decisions, optimization strategies
1. **Report Sections**: Executive Summary, Current State, Solution Design, Implementation Strategy, Optimization, Success Factors, Confidence Scores
2. **Guidelines**: Focus on solution improvements and design decisions (exclude task planning), emphasize rationale/tradeoffs/risk assessment
3. **Output**: Single file `ANALYSIS_RESULTS.md` at `.workflow/{session_id}/.process/` with technical insights and optimization strategies
## Analysis Results Format
@@ -317,35 +224,22 @@ Generated ANALYSIS_RESULTS.md focuses on **solution improvements, key design dec
### Code Modification Targets
**Purpose**: Specific code locations for modification AND new files to create
**Format**:
- Existing files: `file:function:lines` (with line numbers)
- New files: `file` (no function or lines)
**Identified Targets**:
1. **Target**: `src/auth/AuthService.ts:login:45-52`
- **Type**: Modify existing
- **Modification**: Enhance error handling
- **Rationale**: Current logic lacks validation for edge cases
- **Rationale**: Current logic lacks validation
2. **Target**: `src/auth/PasswordReset.ts`
- **Type**: Create new file
- **Purpose**: Password reset functionality
- **Rationale**: New feature requirement
3. **Target**: `src/middleware/auth.ts:validateToken:30-45`
- **Type**: Modify existing
- **Modification**: Add token expiry check
- **Rationale**: Security requirement for JWT validation
4. **Target**: `tests/auth/PasswordReset.test.ts`
- **Type**: Create new file
- **Purpose**: Test coverage for password reset
- **Rationale**: Test requirement for new feature
**Note**:
- For new files, only specify the file path (no function or lines)
- For existing files without line numbers, use `file:function:*` format
- Task generation will refine these during the `analyze_task_patterns` step
**Format Rules**:
- Existing files: `file:function:lines` (with line numbers)
- New files: `file` (no function or lines)
- Unknown lines: `file:function:*`
- Task generation will refine these targets during `analyze_task_patterns` step
### Feasibility Assessment
- **Technical Complexity**: {complexity_rating_and_analysis}
@@ -437,94 +331,46 @@ Generated ANALYSIS_RESULTS.md focuses on **solution improvements, key design dec
- **External Resources**: {external_references_and_best_practices}
```
## Error Handling & Fallbacks
## Execution Management
### Error Handling & Recovery Strategies
### Error Handling & Recovery
1. **Pre-execution**: Verify session/context package, confirm CLI tools, validate dependencies
2. **Monitoring & Timeout**: Track progress, 30-min limit, manage parallel execution, maintain status
3. **Partial Recovery**: Generate results with incomplete outputs, use logs, provide next steps
4. **Error Recovery**: Auto error detection, structured workflows, graceful degradation
1. **Pre-execution Validation**
- **Session Verification**: Ensure session directory and metadata exist
- **Context Package Validation**: Verify JSON format and content structure
- **Tool Availability**: Confirm CLI tools are accessible and configured
- **Prerequisite Checks**: Validate all required dependencies and permissions
2. **Execution Monitoring & Timeout Management**
- **Progress Monitoring**: Track analysis execution with regular status checks
- **Timeout Controls**: 30-minute execution limit with graceful termination
- **Process Management**: Handle parallel tool execution and resource limits
- **Status Tracking**: Maintain real-time execution state and completion status
3. **Partial Results Recovery**
- **Fallback Strategy**: Generate analysis results even with incomplete outputs
- **Log Integration**: Use execution logs when primary outputs are unavailable
- **Recovery Mode**: Create partial analysis reports with available data
- **Guidance Generation**: Provide next steps and retry recommendations
4. **Resource Management**
- **Disk Space Monitoring**: Check available storage and cleanup temporary files
- **Process Limits**: Set CPU and memory constraints for analysis execution
- **Performance Optimization**: Manage resource utilization and system load
- **Cleanup Procedures**: Remove outdated logs and temporary files
5. **Comprehensive Error Recovery**
- **Error Detection**: Automatic error identification and classification
- **Recovery Workflows**: Structured approach to handling different failure modes
- **Status Reporting**: Clear communication of issues and resolution attempts
- **Graceful Degradation**: Provide useful outputs even with partial failures
## Performance Optimization
### Analysis Optimization Strategies
- **Parallel Analysis**: Execute multiple tools in parallel to reduce total time
### Performance & Resource Optimization
- **Parallel Analysis**: Execute multiple tools simultaneously to reduce time
- **Context Sharding**: Analyze large projects by module shards
- **Caching Mechanism**: Reuse analysis results for similar contexts
- **Incremental Analysis**: Perform incremental analysis based on changes
- **Caching**: Reuse results for similar contexts
- **Resource Management**: Monitor disk/CPU/memory, set limits, cleanup temporary files
- **Timeout Control**: `timeout 600s` with partial result generation on failure
### Resource Management
```bash
# Set analysis timeout
timeout 600s analysis_command || {
echo "⚠️ Analysis timeout, generating partial results"
# Generate partial results
}
## Integration & Success Criteria
# Memory usage monitoring
memory_usage=$(ps -o pid,vsz,rss,comm -p $$)
if [ "$memory_usage" -gt "$memory_limit" ]; then
echo "⚠️ High memory usage detected, optimizing..."
fi
```
### Input/Output Interface
**Input**:
- `--session` (required): Session ID (e.g., WFS-auth)
- `--context` (required): Context package path
- `--depth` (optional): Analysis depth (quick|full|deep)
- `--focus` (optional): Analysis focus areas
## Integration Points
**Output**:
- Single file: `ANALYSIS_RESULTS.md` at `.workflow/{session_id}/.process/`
- No supplementary files (JSON, roadmap, templates)
### Input Interface
- **Required**: `--session` parameter specifying session ID (e.g., WFS-auth)
- **Required**: `--context` parameter specifying context package path
- **Optional**: `--depth` specify analysis depth (quick|full|deep)
- **Optional**: `--focus` specify analysis focus areas
### Quality & Success Validation
**Quality Checks**: Completeness, consistency, feasibility validation
### Output Interface
- **Primary**: ANALYSIS_RESULTS.md - solution design and technical analysis
- **Location**: .workflow/{session_id}/.process/ANALYSIS_RESULTS.md
- **Single Output Policy**: Only ANALYSIS_RESULTS.md is generated
- **No Supplementary Files**: No additional JSON, roadmap, or template files
## Quality Assurance
### Analysis Quality Checks
- **Completeness Check**: Ensure all required analysis sections are completed
- **Consistency Check**: Verify consistency of multi-tool analysis results
- **Feasibility Validation**: Ensure recommended implementation plans are feasible
### Success Criteria
- ✅ **Solution-Focused Analysis**: ANALYSIS_RESULTS.md emphasizes solution improvements, design decisions, and critical insights
- ✅ **Single Output File**: Only ANALYSIS_RESULTS.md generated, no supplementary files
- ✅ **Design Decision Depth**: Clear rationale for architectural choices with alternatives and tradeoffs
- ✅ **Feasibility Assessment**: Technical complexity, risk analysis, and implementation readiness evaluation
- ✅ **Optimization Strategies**: Performance, security, and code quality recommendations
- ✅ **Parallel Execution**: Efficient concurrent tool execution (Gemini + Codex for complex tasks)
- ✅ **Robust Error Handling**: Comprehensive validation, timeout management, and partial result recovery
- ✅ **Confidence Scoring**: Multi-dimensional assessment with clear recommendation status
- ✅ **No Task Planning**: Exclude task breakdowns, implementation steps, and project planning details
**Success Criteria**:
- ✅ Solution-focused analysis (design decisions, critical insights, NO task planning)
- ✅ Single output file only
- ✅ Design decision depth with rationale/alternatives/tradeoffs
- ✅ Feasibility assessment (complexity, risks, readiness)
- ✅ Optimization strategies (performance, security, quality)
- ✅ Parallel execution efficiency (Gemini + Codex for complex tasks)
- ✅ Robust error handling (validation, timeout, partial recovery)
- ✅ Confidence scoring with clear recommendation status
## Related Commands
- `/context:gather` - Generate context packages required by this command

View File

@@ -1,665 +0,0 @@
---
name: docs
description: Documentation planning and orchestration - creates structured documentation tasks for execution
argument-hint: "[path] [--tool <gemini|qwen|codex>] [--cli-generate]"
examples:
- /workflow:docs # Current directory (root: full docs, subdir: module only)
- /workflow:docs src/modules # Module documentation only
- /workflow:docs . --tool qwen # Root directory with Qwen
- /workflow:docs --cli-generate # Use CLI for doc generation (not just analysis)
---
# Documentation Workflow (/workflow:docs)
## Overview
Lightweight planner that analyzes project structure, decomposes documentation work into tasks, and generates execution plans. Does NOT generate documentation content itself - delegates to doc-generator agent.
**Two Execution Modes**:
- **Default**: CLI analyzes in `pre_analysis` (MODE=analysis), agent writes docs in `implementation_approach`
- **--cli-generate**: CLI generates docs in `implementation_approach` (MODE=write)
## Parameters
```bash
/workflow:docs [path] [--tool <gemini|qwen|codex>] [--cli-generate]
```
- **path**: Target directory (default: current directory)
- Project root → Full documentation (modules + project-level docs)
- Subdirectory → Module documentation only (API.md + README.md)
- **--tool**: CLI tool selection (default: gemini)
- `gemini`: Comprehensive documentation, pattern recognition
- `qwen`: Architecture analysis, system design focus
- `codex`: Implementation validation, code quality
- **--cli-generate**: Enable CLI-based documentation generation (optional)
- When enabled: CLI generates docs with MODE=write in implementation_approach
- When disabled (default): CLI analyzes with MODE=analysis in pre_analysis
## Planning Workflow
### Phase 1: Initialize Session
```bash
# Parse arguments
path="${1:-.}"
tool="gemini"
cli_generate=false
# Parse options
while [[ $# -gt 0 ]]; do
case "$1" in
--tool) tool="$2"; shift 2 ;;
--cli-generate) cli_generate=true; shift ;;
*) shift ;;
esac
done
# Detect project root
bash(git rev-parse --show-toplevel)
project_root=$(pwd)
target_path=$(cd "$path" && pwd)
is_root=false
[[ "$target_path" == "$project_root" ]] && is_root=true
# Create session structure
timestamp=$(date +%Y%m%d-%H%M%S)
session_dir=".workflow/WFS-docs-${timestamp}"
bash(mkdir -p "${session_dir}"/{.task,.process,.summaries})
bash(touch ".workflow/.active-WFS-docs-${timestamp}")
# Record configuration
bash(cat > "${session_dir}/.process/config.txt" <<EOF
path=$path
target_path=$target_path
is_root=$is_root
tool=$tool
cli_generate=$cli_generate
EOF
)
```
### Phase 2: Analyze Structure
```bash
# Step 1: Discover module hierarchy
bash(~/.claude/scripts/get_modules_by_depth.sh)
# Output: depth:N|path:<PATH>|files:N|size:N|has_claude:yes/no
# Step 2: Classify folders by type
bash(cat > "${session_dir}/.process/classify-folders.sh" <<'SCRIPT'
while IFS='|' read -r depth_info path_info files_info size_info claude_info; do
folder_path=$(echo "$path_info" | cut -d':' -f2-)
# Count code files (maxdepth 1)
code_files=$(find "$folder_path" -maxdepth 1 -type f \
\( -name "*.ts" -o -name "*.js" -o -name "*.py" \
-o -name "*.go" -o -name "*.java" -o -name "*.rs" \) \
2>/dev/null | wc -l)
# Count subfolders
subfolders=$(find "$folder_path" -maxdepth 1 -type d \
-not -path "$folder_path" 2>/dev/null | wc -l)
# Determine type
if [[ $code_files -gt 0 ]]; then
folder_type="code" # API.md + README.md
elif [[ $subfolders -gt 0 ]]; then
folder_type="navigation" # README.md only
else
folder_type="skip" # Empty
fi
echo "${folder_path}|${folder_type}|code:${code_files}|dirs:${subfolders}"
done
SCRIPT
)
bash(~/.claude/scripts/get_modules_by_depth.sh | bash "${session_dir}/.process/classify-folders.sh" > "${session_dir}/.process/folder-analysis.txt")
# Step 3: Group by top-level directories
bash(awk -F'|' '{split($1, parts, "/"); if (length(parts) >= 2) print parts[1] "/" parts[2]}' \
"${session_dir}/.process/folder-analysis.txt" | sort -u > "${session_dir}/.process/top-level-dirs.txt")
```
### Phase 3: Detect Update Mode
```bash
# Check existing documentation
bash(find "$target_path" -name "API.md" -o -name "README.md" -o -name "ARCHITECTURE.md" -o -name "EXAMPLES.md" 2>/dev/null | grep -v ".workflow" | wc -l)
existing_docs=$(...)
if [[ $existing_docs -gt 0 ]]; then
bash(find "$target_path" -name "*.md" 2>/dev/null | grep -v ".workflow" > "${session_dir}/.process/existing-docs.txt")
echo "mode=update" >> "${session_dir}/.process/config.txt"
else
echo "mode=create" >> "${session_dir}/.process/config.txt"
fi
# Record strategy
bash(cat > "${session_dir}/.process/strategy.md" <<EOF
**Path**: ${target_path}
**Is Root**: ${is_root}
**Tool**: ${tool}
**CLI Generate**: ${cli_generate}
**Mode**: $(grep 'mode=' "${session_dir}/.process/config.txt" | cut -d'=' -f2)
**Existing Docs**: ${existing_docs} files
EOF
)
```
### Phase 4: Decompose Tasks
**Task Hierarchy**:
```
Level 1: Module Tree Tasks (Always generated, can execute in parallel)
├─ IMPL-001: Document tree 'src/modules/'
├─ IMPL-002: Document tree 'src/utils/'
└─ IMPL-003: Document tree 'lib/'
Level 2: Project README (Only if is_root=true, depends on Level 1)
└─ IMPL-004: Generate Project README
Level 3: Architecture & Examples (Only if is_root=true, depends on Level 2)
├─ IMPL-005: Generate ARCHITECTURE.md
├─ IMPL-006: Generate EXAMPLES.md
└─ IMPL-007: Generate HTTP API docs (optional)
```
**Implementation**:
```bash
# Generate Level 1 tasks (always)
task_count=0
while read -r top_dir; do
task_count=$((task_count + 1))
# Create IMPL-00${task_count}.json for module tree
done < "${session_dir}/.process/top-level-dirs.txt"
# Generate Level 2-3 tasks (only if root)
if [[ "$is_root" == "true" ]]; then
# IMPL-00$((task_count+1)).json: Project README
# IMPL-00$((task_count+2)).json: ARCHITECTURE.md
# IMPL-00$((task_count+3)).json: EXAMPLES.md
# IMPL-00$((task_count+4)).json: HTTP API (optional)
fi
```
### Phase 5: Generate Task JSONs
```bash
# Read configuration
cli_generate=$(grep 'cli_generate=' "${session_dir}/.process/config.txt" | cut -d'=' -f2)
tool=$(grep 'tool=' "${session_dir}/.process/config.txt" | cut -d'=' -f2)
# Determine CLI command placement
if [[ "$cli_generate" == "true" ]]; then
# CLI generates docs: MODE=write, place in implementation_approach
mode="write"
placement="implementation_approach"
approval_flag="--approval-mode yolo"
else
# CLI for analysis only: MODE=analysis, place in pre_analysis
mode="analysis"
placement="pre_analysis"
approval_flag=""
fi
# Build tool-specific command
if [[ "$tool" == "codex" ]]; then
cmd="codex -C ${dir} --full-auto exec \"...\" --skip-git-repo-check -s danger-full-access"
else
cmd="bash(cd ${dir} && ~/.claude/scripts/${tool}-wrapper ${approval_flag} -p \"...MODE: ${mode}...\")"
fi
```
## Task Templates
### Level 1: Module Tree Task
**Default Mode (cli_generate=false)**:
```json
{
"id": "IMPL-001",
"title": "Document Module Tree: 'src/modules/'",
"status": "pending",
"meta": {
"type": "docs-tree",
"agent": "@doc-generator",
"tool": "gemini",
"cli_generate": false
},
"context": {
"requirements": [
"Recursively process all folders in src/modules/",
"For code folders: generate API.md + README.md",
"For navigation folders: generate README.md only"
],
"focus_paths": ["src/modules"],
"folder_analysis_file": "${session_dir}/.process/folder-analysis.txt"
},
"flow_control": {
"pre_analysis": [
{
"step": "load_existing_docs",
"command": "bash(find src/modules -name '*.md' 2>/dev/null | xargs cat || echo 'No existing docs')",
"output_to": "existing_module_docs"
},
{
"step": "load_folder_analysis",
"command": "bash(grep '^src/modules' ${session_dir}/.process/folder-analysis.txt)",
"output_to": "target_folders"
},
{
"step": "analyze_module_tree",
"command": "bash(cd src/modules && ~/.claude/scripts/gemini-wrapper -p \"PURPOSE: Analyze module structure\\nTASK: Generate documentation outline\\nMODE: analysis\\nCONTEXT: @{**/*} [target_folders]\\nEXPECTED: Structure outline\\nRULES: Analyze only\")",
"output_to": "tree_outline",
"note": "CLI for analysis only"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate module tree documentation",
"description": "Process target folders and generate appropriate documentation files based on folder types",
"modification_points": ["Parse folder types from [target_folders]", "Parse structure from [tree_outline]", "Generate API.md for code folders", "Generate README.md for all folders"],
"logic_flow": ["Parse [target_folders] to get folder types", "Parse [tree_outline] for structure", "For each folder: if type == 'code': Generate API.md + README.md; elif type == 'navigation': Generate README.md only"],
"depends_on": [],
"output": "module_docs"
}
],
"target_files": [
".workflow/docs/modules/*/API.md",
".workflow/docs/modules/*/README.md"
]
}
}
```
**CLI Generate Mode (cli_generate=true)**:
```json
{
"id": "IMPL-001",
"title": "Document Module Tree: 'src/modules/'",
"status": "pending",
"meta": {
"type": "docs-tree",
"agent": "@doc-generator",
"tool": "gemini",
"cli_generate": true
},
"context": {
"requirements": [
"Recursively process all folders in src/modules/",
"CLI generates documentation files directly"
],
"focus_paths": ["src/modules"]
},
"flow_control": {
"pre_analysis": [
{
"step": "load_existing_docs",
"command": "bash(find src/modules -name '*.md' 2>/dev/null | xargs cat || echo 'No existing docs')",
"output_to": "existing_module_docs"
},
{
"step": "load_folder_analysis",
"command": "bash(grep '^src/modules' ${session_dir}/.process/folder-analysis.txt)",
"output_to": "target_folders"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Parse folder analysis",
"description": "Parse [target_folders] to get folder types and structure",
"modification_points": ["Extract folder types", "Identify code vs navigation folders"],
"logic_flow": ["Parse [target_folders] to get folder types", "Prepare folder list for CLI generation"],
"depends_on": [],
"output": "folder_types"
},
{
"step": 2,
"title": "Generate documentation via CLI",
"description": "Call CLI to generate documentation files for each folder using MODE=write",
"modification_points": ["Execute CLI generation command", "Generate API.md and README.md files"],
"logic_flow": ["Call CLI to generate documentation files for each folder"],
"command": "bash(cd src/modules && ~/.claude/scripts/gemini-wrapper --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation files\\nMODE: write\\nCONTEXT: @{**/*} [target_folders] [existing_module_docs]\\nEXPECTED: API.md and README.md files\\nRULES: Generate complete docs\")",
"depends_on": [1],
"output": "generated_docs"
}
],
"target_files": [
".workflow/docs/modules/*/API.md",
".workflow/docs/modules/*/README.md"
]
}
}
```
### Level 2: Project README Task
**Default Mode**:
```json
{
"id": "IMPL-004",
"title": "Generate Project README",
"status": "pending",
"depends_on": ["IMPL-001", "IMPL-002", "IMPL-003"],
"meta": {
"type": "docs",
"agent": "@doc-generator",
"tool": "gemini",
"cli_generate": false
},
"flow_control": {
"pre_analysis": [
{
"step": "load_existing_readme",
"command": "bash(cat .workflow/docs/README.md 2>/dev/null || echo 'No existing README')",
"output_to": "existing_readme"
},
{
"step": "load_module_docs",
"command": "bash(find .workflow/docs/modules -name '*.md' | xargs cat)",
"output_to": "all_module_docs"
},
{
"step": "analyze_project",
"command": "bash(~/.claude/scripts/gemini-wrapper -p \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\")",
"output_to": "project_outline"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate project README",
"description": "Generate project README with navigation links while preserving existing user modifications",
"modification_points": ["Parse [project_outline] and [all_module_docs]", "Generate project README structure", "Add navigation links to modules", "Preserve [existing_readme] user modifications"],
"logic_flow": ["Parse [project_outline] and [all_module_docs]", "Generate project README with navigation links", "Preserve [existing_readme] user modifications"],
"depends_on": [],
"output": "project_readme"
}
],
"target_files": [".workflow/docs/README.md"]
}
}
```
### Level 3: Architecture Documentation Task
**Default Mode**:
```json
{
"id": "IMPL-005",
"title": "Generate Architecture Documentation",
"status": "pending",
"depends_on": ["IMPL-004"],
"meta": {
"type": "docs",
"agent": "@doc-generator",
"tool": "gemini",
"cli_generate": false
},
"flow_control": {
"pre_analysis": [
{
"step": "load_existing_arch",
"command": "bash(cat .workflow/docs/ARCHITECTURE.md 2>/dev/null || echo 'No existing ARCHITECTURE')",
"output_to": "existing_architecture"
},
{
"step": "load_all_docs",
"command": "bash(cat .workflow/docs/README.md && find .workflow/docs/modules -name '*.md' | xargs cat)",
"output_to": "all_docs"
},
{
"step": "analyze_architecture",
"command": "bash(~/.claude/scripts/gemini-wrapper -p \"PURPOSE: Analyze system architecture\\nTASK: Synthesize architectural overview\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture outline\")",
"output_to": "arch_outline"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate architecture documentation",
"description": "Generate ARCHITECTURE.md while preserving existing user modifications",
"modification_points": ["Parse [arch_outline] and [all_docs]", "Generate ARCHITECTURE.md structure", "Document system design and patterns", "Preserve [existing_architecture] modifications"],
"logic_flow": ["Parse [arch_outline] and [all_docs]", "Generate ARCHITECTURE.md", "Preserve [existing_architecture] modifications"],
"depends_on": [],
"output": "architecture_doc"
}
],
"target_files": [".workflow/docs/ARCHITECTURE.md"]
}
}
```
### Level 3: Examples Documentation Task
**Default Mode**:
```json
{
"id": "IMPL-006",
"title": "Generate Examples Documentation",
"status": "pending",
"depends_on": ["IMPL-004"],
"meta": {
"type": "docs",
"agent": "@doc-generator",
"tool": "gemini",
"cli_generate": false
},
"flow_control": {
"pre_analysis": [
{
"step": "load_existing_examples",
"command": "bash(cat .workflow/docs/EXAMPLES.md 2>/dev/null || echo 'No existing EXAMPLES')",
"output_to": "existing_examples"
},
{
"step": "load_project_readme",
"command": "bash(cat .workflow/docs/README.md)",
"output_to": "project_readme"
},
{
"step": "generate_examples_outline",
"command": "bash(~/.claude/scripts/gemini-wrapper -p \"PURPOSE: Generate usage examples\\nTASK: Extract example patterns\\nMODE: analysis\\nCONTEXT: [project_readme]\\nEXPECTED: Examples outline\")",
"output_to": "examples_outline"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate examples documentation",
"description": "Generate EXAMPLES.md with code snippets while preserving existing user modifications",
"modification_points": ["Parse [examples_outline] and [project_readme]", "Generate EXAMPLES.md structure", "Add code snippets and usage examples", "Preserve [existing_examples] modifications"],
"logic_flow": ["Parse [examples_outline] and [project_readme]", "Generate EXAMPLES.md with code snippets", "Preserve [existing_examples] modifications"],
"depends_on": [],
"output": "examples_doc"
}
],
"target_files": [".workflow/docs/EXAMPLES.md"]
}
}
```
### Level 3: HTTP API Documentation Task (Optional)
**Default Mode**:
```json
{
"id": "IMPL-007",
"title": "Generate HTTP API Documentation",
"status": "pending",
"depends_on": ["IMPL-004"],
"meta": {
"type": "docs",
"agent": "@doc-generator",
"tool": "gemini",
"cli_generate": false
},
"flow_control": {
"pre_analysis": [
{
"step": "discover_api_endpoints",
"command": "mcp__code-index__search_code_advanced(pattern='router\\.|@(Get|Post)', file_pattern='*.{ts,js}')",
"output_to": "endpoint_discovery"
},
{
"step": "load_existing_api_docs",
"command": "bash(cat .workflow/docs/api/README.md 2>/dev/null || echo 'No existing API docs')",
"output_to": "existing_api_docs"
},
{
"step": "analyze_api",
"command": "bash(~/.claude/scripts/gemini-wrapper -p \"PURPOSE: Document HTTP API\\nTASK: Analyze API endpoints\\nMODE: analysis\\nCONTEXT: @{src/api/**/*} [endpoint_discovery]\\nEXPECTED: API outline\")",
"output_to": "api_outline"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate HTTP API documentation",
"description": "Generate HTTP API documentation while preserving existing user modifications",
"modification_points": ["Parse [api_outline] and [endpoint_discovery]", "Generate HTTP API documentation", "Document endpoints and request/response formats", "Preserve [existing_api_docs] modifications"],
"logic_flow": ["Parse [api_outline] and [endpoint_discovery]", "Generate HTTP API documentation", "Preserve [existing_api_docs] modifications"],
"depends_on": [],
"output": "api_docs"
}
],
"target_files": [".workflow/docs/api/README.md"]
}
}
```
## Session Structure
```
.workflow/
├── .active-WFS-docs-20240120-143022
└── WFS-docs-20240120-143022/
├── IMPL_PLAN.md # Implementation plan
├── TODO_LIST.md # Progress tracker
├── .process/
│ ├── config.txt # path, is_root, tool, cli_generate, mode
│ ├── strategy.md # Documentation strategy summary
│ ├── folder-analysis.txt # Folder type classification
│ ├── top-level-dirs.txt # Top-level directory list
│ └── existing-docs.txt # Existing documentation files
└── .task/
├── IMPL-001.json # Module tree task
├── IMPL-002.json # Module tree task
├── IMPL-003.json # Module tree task
├── IMPL-004.json # Project README (if root)
├── IMPL-005.json # Architecture (if root)
├── IMPL-006.json # Examples (if root)
└── IMPL-007.json # HTTP API (if root, optional)
```
## Generated Documentation
```
.workflow/docs/
├── modules/ # Level 1 output
│ ├── README.md # Navigation for modules/
│ ├── auth/
│ │ ├── API.md # Auth module API signatures
│ │ ├── README.md # Auth module documentation
│ │ └── middleware/
│ │ ├── API.md # Middleware API
│ │ └── README.md # Middleware docs
│ └── api/
│ ├── API.md # API module signatures
│ └── README.md # API module docs
├── README.md # Level 2 output (root only)
├── ARCHITECTURE.md # Level 3 output (root only)
├── EXAMPLES.md # Level 3 output (root only)
└── api/ # Level 3 output (optional)
└── README.md # HTTP API reference
```
## Execution Commands
### Root Directory (Full Documentation)
```bash
# Level 1 - Module documentation (parallel)
/workflow:execute IMPL-001
/workflow:execute IMPL-002
/workflow:execute IMPL-003
# Level 2 - Project README (after Level 1)
/workflow:execute IMPL-004
# Level 3 - Architecture & Examples (after Level 2, parallel)
/workflow:execute IMPL-005
/workflow:execute IMPL-006
/workflow:execute IMPL-007 # if HTTP API present
```
### Subdirectory (Module Only)
```bash
# Only Level 1 task generated
/workflow:execute IMPL-001
```
## Simple Bash Commands
### Check Documentation Status
```bash
# List existing documentation
bash(find . -name "API.md" -o -name "README.md" -o -name "ARCHITECTURE.md" 2>/dev/null | grep -v ".workflow")
# Count documentation files
bash(find . -name "*.md" 2>/dev/null | grep -v ".workflow" | wc -l)
```
### Analyze Module Structure
```bash
# Discover modules
bash(~/.claude/scripts/get_modules_by_depth.sh)
# Count code files in directory
bash(find src/modules -maxdepth 1 -type f \( -name "*.ts" -o -name "*.js" \) | wc -l)
# Count subdirectories
bash(find src/modules -maxdepth 1 -type d -not -path "src/modules" | wc -l)
```
### Session Management
```bash
# Create session directories
bash(mkdir -p .workflow/WFS-docs-20240120/.{task,process,summaries})
# Mark session as active
bash(touch .workflow/.active-WFS-docs-20240120)
# Read session configuration
bash(cat .workflow/WFS-docs-20240120/.process/config.txt)
# List session tasks
bash(ls .workflow/WFS-docs-20240120/.task/*.json)
```
## Template Reference
**Available Templates**:
- `api.txt`: Unified template for Code API (Part A) and HTTP API (Part B)
- `module-readme.txt`: Module purpose, usage, dependencies
- `folder-navigation.txt`: Navigation README for folders with subdirectories
- `project-readme.txt`: Project overview, getting started, module navigation
- `project-architecture.txt`: System structure, module map, design patterns
- `project-examples.txt`: End-to-end usage examples
**Template Location**: `~/.claude/workflows/cli-templates/prompts/documentation/`
## CLI Generate Mode Summary
| Mode | CLI Placement | CLI MODE | Agent Role |
|------|---------------|----------|------------|
| **Default** | pre_analysis | analysis | Generates documentation files |
| **--cli-generate** | implementation_approach | write | Validates and coordinates CLI output |
## Related Commands
- `/workflow:execute` - Execute documentation tasks
- `/workflow:status` - View task progress
- `/workflow:session:complete` - Mark session complete

View File

@@ -47,48 +47,49 @@ ELSE:
bash(find .workflow -type d -name "design-*" -printf "%T@ %p\n" | sort -nr | head -1 | cut -d' ' -f2)
# Get variant counts
style_variants = --style-variants OR bash(ls {base_path}/style-consolidation/style-* -d | wc -l)
style_variants = --style-variants OR bash(ls {base_path}/style-extraction/style-* -d | wc -l)
layout_variants = --layout-variants OR 3
```
**Output**: `base_path`, `target_list[]`, `target_type`, `device_type`, `style_variants`, `layout_variants`
### Step 2: Detect Token Sources
### Step 2: Validate Design Tokens
```bash
# Check consolidated (priority 1) or proposed (priority 2)
bash(test -f {base_path}/style-consolidation/style-1/design-tokens.json && echo "consolidated")
# OR
bash(test -f {base_path}/style-extraction/style-cards.json && echo "proposed")
# Check design tokens exist
bash(test -f {base_path}/style-extraction/style-1/design-tokens.json && echo "valid")
# If proposed: Create temp consolidation dirs + write tokens
FOR style_id IN 1..style_variants:
bash(mkdir -p {base_path}/style-consolidation/style-{style_id})
Write({base_path}/style-consolidation/style-{style_id}/design-tokens.json, proposed_tokens)
# Load design space analysis (optional)
IF exists({base_path}/style-extraction/design-space-analysis.json):
design_space_analysis = Read({base_path}/style-extraction/design-space-analysis.json)
# Load design space analysis (optional, from intermediates)
IF exists({base_path}/.intermediates/style-analysis/design-space-analysis.json):
design_space_analysis = Read({base_path}/.intermediates/style-analysis/design-space-analysis.json)
```
**Output**: `token_sources{}`, `consolidated_count`, `proposed_count`, `design_space_analysis`
**Output**: `design_tokens_valid`, `design_space_analysis`
### Step 3: Gather Layout Inspiration
### Step 3: Gather Layout Inspiration (Reuse or Create)
```bash
bash(mkdir -p {base_path}/prototypes/_inspirations)
# Check if layout inspirations already exist from layout-extract phase
inspiration_source = "{base_path}/.intermediates/layout-analysis/inspirations"
# For each target: Research via MCP
FOR target IN target_list:
search_query = "{target} {target_type} layout patterns variations"
mcp__exa__web_search_exa(query=search_query, numResults=5)
# Priority 1: Reuse existing inspiration from layout-extract
IF exists({inspiration_source}/{target}-layout-ideas.txt):
# Reuse existing inspiration (no action needed)
REPORT: "Using existing layout inspiration for {target}"
ELSE:
# Priority 2: Generate new inspiration via MCP
bash(mkdir -p {inspiration_source})
search_query = "{target} {target_type} layout patterns variations"
mcp__exa__web_search_exa(query=search_query, numResults=5)
# Extract context from prompt for this target
target_requirements = extract_relevant_context_from_prompt(prompt_text, target)
# Extract context from prompt for this target
target_requirements = extract_relevant_context_from_prompt(prompt_text, target)
# Write simple inspiration file
Write({base_path}/prototypes/_inspirations/{target}-layout-ideas.txt, inspiration_content)
# Write inspiration file to centralized location
Write({inspiration_source}/{target}-layout-ideas.txt, inspiration_content)
REPORT: "Created new layout inspiration for {target}"
```
**Output**: `T` inspiration text files
**Output**: `T` inspiration text files (reused or created in `.intermediates/layout-analysis/inspirations/`)
## Phase 2: Target-Style-Centric Batch Generation (Agent)
@@ -126,8 +127,8 @@ Task(ui-design-agent): `
${design_attributes ? "DESIGN_ATTRIBUTES: " + JSON.stringify(design_attributes) : ""}
## Reference
- Layout inspiration: Read("{base_path}/prototypes/_inspirations/{target}-layout-ideas.txt")
- Design tokens: Read("{base_path}/style-consolidation/style-{style_id}/design-tokens.json")
- Layout inspiration: Read("{base_path}/.intermediates/layout-analysis/inspirations/{target}-layout-ideas.txt")
- Design tokens: Read("{base_path}/style-extraction/style-{style_id}/design-tokens.json")
Parse ALL token values (colors, typography, spacing, borders, shadows, breakpoints)
${design_attributes ? "- Adapt DOM to: density, visual_weight, formality, organic_vs_geometric" : ""}
@@ -242,18 +243,19 @@ Quality:
- Style-aware: {design_space_analysis ? 'HTML adapts to design_attributes' : 'Standard structure'}
- CSS: Self-contained (direct token values, no var())
- Device-optimized: {device_type} layouts
- Tokens: {consolidated_count} consolidated, {proposed_count} proposed
{IF proposed_count > 0: ' 💡 For production: /workflow:ui-design:consolidate'}
- Tokens: Production-ready (WCAG AA compliant)
Generated Files:
{base_path}/prototypes/
├── _inspirations/ ({T} text files)
├── {target}-style-{s}-layout-{l}.html ({S×L×T} prototypes)
├── {target}-style-{s}-layout-{l}.css
├── compare.html (interactive matrix)
├── index.html (navigation)
└── PREVIEW.md (instructions)
Layout Inspirations:
{base_path}/.intermediates/layout-analysis/inspirations/ ({T} text files, reused or created)
Preview:
1. Open compare.html (recommended)
2. Open index.html
@@ -270,11 +272,10 @@ Next: /workflow:ui-design:update
bash(find .workflow -type d -name "design-*" -printf "%T@ %p\n" | sort -nr | head -1 | cut -d' ' -f2)
# Count style variants
bash(ls {base_path}/style-consolidation/style-* -d | wc -l)
bash(ls {base_path}/style-extraction/style-* -d | wc -l)
# Check token sources
bash(test -f {base_path}/style-consolidation/style-1/design-tokens.json && echo "consolidated")
bash(test -f {base_path}/style-extraction/style-cards.json && echo "proposed")
# Check design tokens exist
bash(test -f {base_path}/style-extraction/style-1/design-tokens.json && echo "valid")
```
### Validation Commands
@@ -288,8 +289,11 @@ bash(test -f {base_path}/prototypes/compare.html && echo "exists")
### File Operations
```bash
# Create directories
bash(mkdir -p {base_path}/prototypes/_inspirations)
# Create prototypes directory
bash(mkdir -p {base_path}/prototypes)
# Create inspirations directory (if needed)
bash(mkdir -p {base_path}/.intermediates/layout-analysis/inspirations)
# Run preview script
bash(~/.claude/scripts/ui-generate-preview.sh "{base_path}/prototypes")
@@ -299,15 +303,17 @@ bash(~/.claude/scripts/ui-generate-preview.sh "{base_path}/prototypes")
```
{base_path}/
├── .intermediates/
│ └── layout-analysis/
│ └── inspirations/
│ └── {target}-layout-ideas.txt # Layout inspiration (reused or created)
├── prototypes/
│ ├── _inspirations/
│ │ └── {target}-layout-ideas.txt # Layout inspiration
│ ├── {target}-style-{s}-layout-{l}.html # Final prototypes
│ ├── {target}-style-{s}-layout-{l}.css
│ ├── compare.html
│ ├── index.html
│ └── PREVIEW.md
└── style-consolidation/
└── style-extraction/
└── style-{s}/
├── design-tokens.json
└── style-guide.md
@@ -317,8 +323,8 @@ bash(~/.claude/scripts/ui-generate-preview.sh "{base_path}/prototypes")
### Common Errors
```
ERROR: No token sources found
→ Run /workflow:ui-design:extract or /workflow:ui-design:consolidate
ERROR: No design tokens found
→ Run /workflow:ui-design:style-extract first
ERROR: No targets extracted from prompt
→ Use --targets explicitly or rephrase prompt
@@ -364,9 +370,14 @@ ERROR: Script permission denied
## Integration
**Input**: Prompt, design-tokens.json, design-space-analysis.json (optional)
**Input**:
- Required: Prompt, design-tokens.json
- Optional: design-space-analysis.json (from `.intermediates/style-analysis/`)
- Reuses: Layout inspirations from `.intermediates/layout-analysis/inspirations/` (if available from layout-extract)
**Output**: S×L×T prototypes for `/workflow:ui-design:update`
**Compatible**: extract, consolidate, explore-auto, imitate-auto outputs
**Compatible**: style-extract, explore-auto, imitate-auto outputs
**Optimization**: Reuses layout inspirations from layout-extract phase, avoiding duplicate MCP searches
## Usage Examples

View File

@@ -1,267 +0,0 @@
---
name: consolidate
description: Consolidate style variants into independent design systems and plan layout strategies
argument-hint: /workflow:ui-design:consolidate [--base-path <path>] [--session <id>] [--variants <count>] [--layout-variants <count>]
allowed-tools: TodoWrite(*), Read(*), Write(*), Bash(*), Task(*)
---
# Design System Consolidation Command
## Overview
Consolidate style variants into independent production-ready design systems. Refines raw token proposals from `style-extract` into finalized design tokens and style guides.
**Strategy**: Philosophy-Driven Refinement
- **Agent-Driven**: Uses ui-design-agent for N variant generation
- **Token Refinement**: Transforms proposed_tokens into complete systems
- **Production-Ready**: design-tokens.json + style-guide.md per variant
- **Matrix-Ready**: Provides style variants for generate phase
## Phase 1: Setup & Validation
### Step 1: Resolve Base Path & Load Style Cards
```bash
# Determine base path
bash(find .workflow -type d -name "design-*" | head -1) # Auto-detect
# OR use --base-path / --session parameters
# Load style cards
bash(cat {base_path}/style-extraction/style-cards.json)
```
### Step 2: Memory Check (Skip if Already Done)
```bash
# Check if already consolidated
bash(test -f {base_path}/style-consolidation/style-1/design-tokens.json && echo "exists")
```
**If exists**: Skip to completion message
### Step 3: Select Variants
```bash
# Determine variant count (default: all from style-cards.json)
bash(cat style-cards.json | grep -c "\"id\": \"variant-")
# Validate variant count
# Priority: --variants parameter → total_variants from style-cards.json
```
**Output**: `variants_count`, `selected_variants[]`
### Step 4: Load Design Context (Optional)
```bash
# Load brainstorming context
bash(test -f {base_path}/.brainstorming/synthesis-specification.md && cat it)
# Load design space analysis
bash(test -f {base_path}/style-extraction/design-space-analysis.json && cat it)
```
**Output**: `design_context`, `design_space_analysis`
## Phase 2: Design System Synthesis (Agent)
**Executor**: `Task(ui-design-agent)` for all variants
### Step 1: Create Output Directories
```bash
bash(mkdir -p {base_path}/style-consolidation/style-{{1..{variants_count}}})
```
### Step 2: Launch Agent Task
For all variants (1 to {variants_count}):
```javascript
Task(ui-design-agent): `
[DESIGN_TOKEN_GENERATION_TASK]
Generate {variants_count} independent design systems using philosophy-driven refinement
SESSION: {session_id} | MODE: Philosophy-driven refinement | BASE_PATH: {base_path}
CRITICAL PATH: Use loop index (N) for directories: style-1/, style-2/, ..., style-N/
VARIANT DATA:
{FOR each variant with index N:
VARIANT INDEX: {N} | ID: {variant.id} | NAME: {variant.name}
Design Philosophy: {variant.design_philosophy}
Proposed Tokens: {variant.proposed_tokens}
{IF design_space_analysis: Design Attributes: {attributes}, Anti-keywords: {anti_keywords}}
}
## 参考
- Style cards: Each variant's proposed_tokens and design_philosophy
- Design space analysis: design_attributes (saturation, weight, formality, organic/geometric, innovation, density)
- Anti-keywords: Explicit constraints to avoid
- WCAG AA: 4.5:1 text, 3:1 UI (use built-in AI knowledge)
## 生成
For EACH variant (use loop index N for paths):
1. {base_path}/style-consolidation/style-{N}/design-tokens.json
- Complete token structure: colors, typography, spacing, border_radius, shadows, breakpoints
- All colors in OKLCH format
- Apply design_attributes to token values (saturation→chroma, density→spacing, etc.)
2. {base_path}/style-consolidation/style-{N}/style-guide.md
- Expanded design philosophy
- Complete color system with accessibility notes
- Typography documentation
- Usage guidelines
## 注意
- ✅ Use Write() tool immediately for each file
- ✅ Use loop index N for directory names: style-1/, style-2/, etc.
- ❌ DO NOT use variant.id in paths (metadata only)
- ❌ NO external research or MCP calls (pure AI refinement)
- Apply philosophy-driven refinement: design_attributes guide token values
- Maintain divergence: Use anti_keywords to prevent variant convergence
- Complete each variant's files before moving to next variant
`
```
**Output**: Agent generates `variants_count × 2` files
## Phase 3: Verify Output
### Step 1: Check Files Created
```bash
# Verify all design systems created
bash(ls {base_path}/style-consolidation/style-*/design-tokens.json | wc -l)
# Validate structure
bash(cat {base_path}/style-consolidation/style-1/design-tokens.json | grep -q "colors" && echo "valid")
```
### Step 2: Verify File Sizes
```bash
bash(ls -lh {base_path}/style-consolidation/style-1/)
```
**Output**: `variants_count × 2` files verified
## Completion
### Todo Update
```javascript
TodoWrite({todos: [
{content: "Setup and load style cards", status: "completed", activeForm: "Loading style cards"},
{content: "Select variants and load context", status: "completed", activeForm: "Loading context"},
{content: "Generate design systems (agent)", status: "completed", activeForm: "Generating systems"},
{content: "Verify output files", status: "completed", activeForm: "Verifying files"}
]});
```
### Output Message
```
✅ Design system consolidation complete!
Configuration:
- Session: {session_id}
- Variants: {variants_count}
- Philosophy-driven refinement (zero MCP calls)
Generated Files:
{base_path}/style-consolidation/
├── style-1/ (design-tokens.json, style-guide.md)
├── style-2/ (same structure)
└── style-{variants_count}/ (same structure)
Next: /workflow:ui-design:generate --session {session_id} --style-variants {variants_count} --targets "dashboard,auth"
```
## Simple Bash Commands
### Path Operations
```bash
# Find design directory
bash(find .workflow -type d -name "design-*" | head -1)
# Load style cards
bash(cat {base_path}/style-extraction/style-cards.json)
# Count variants
bash(cat style-cards.json | grep -c "\"id\": \"variant-")
```
### Validation Commands
```bash
# Check if already consolidated
bash(test -f {base_path}/style-consolidation/style-1/design-tokens.json && echo "exists")
# Verify output
bash(ls {base_path}/style-consolidation/style-*/design-tokens.json | wc -l)
# Validate structure
bash(cat design-tokens.json | grep -q "colors" && echo "valid")
```
### File Operations
```bash
# Create directories
bash(mkdir -p {base_path}/style-consolidation/style-{{1..3}})
# Check file sizes
bash(ls -lh {base_path}/style-consolidation/style-1/)
```
## Output Structure
```
{base_path}/
└── style-consolidation/
├── style-1/
│ ├── design-tokens.json
│ └── style-guide.md
├── style-2/
│ ├── design-tokens.json
│ └── style-guide.md
└── style-N/ (same structure)
```
## design-tokens.json Format
```json
{
"colors": {
"brand": {"primary": "oklch(...)", "secondary": "oklch(...)", "accent": "oklch(...)"},
"surface": {"background": "oklch(...)", "elevated": "oklch(...)", "overlay": "oklch(...)"},
"semantic": {"success": "oklch(...)", "warning": "oklch(...)", "error": "oklch(...)", "info": "oklch(...)"},
"text": {"primary": "oklch(...)", "secondary": "oklch(...)", "tertiary": "oklch(...)", "inverse": "oklch(...)"},
"border": {"default": "oklch(...)", "strong": "oklch(...)", "subtle": "oklch(...)"}
},
"typography": {"font_family": {...}, "font_size": {...}, "font_weight": {...}, "line_height": {...}, "letter_spacing": {...}},
"spacing": {"0": "0", "1": "0.25rem", ..., "24": "6rem"},
"border_radius": {"none": "0", "sm": "0.25rem", ..., "full": "9999px"},
"shadows": {"sm": "...", "md": "...", "lg": "...", "xl": "..."},
"breakpoints": {"sm": "640px", ..., "2xl": "1536px"}
}
```
**Requirements**: OKLCH colors, complete coverage, semantic naming
## Error Handling
### Common Errors
```
ERROR: No style cards found
→ Run /workflow:ui-design:style-extract first
ERROR: Invalid variant count
→ List available count, auto-select all
ERROR: Agent file creation failed
→ Check agent output, verify Write() operations
```
## Key Features
- **Philosophy-Driven Refinement** - Pure AI token refinement using design_attributes
- **Agent-Driven** - Uses ui-design-agent for multi-file generation
- **Separate Design Systems** - N independent systems (one per variant)
- **Production-Ready** - WCAG AA compliant, OKLCH colors, semantic naming
- **Zero MCP Calls** - Faster execution, better divergence preservation
## Integration
**Input**: `style-cards.json` from `/workflow:ui-design:style-extract`
**Output**: `style-consolidation/style-{N}/` for each variant
**Next**: `/workflow:ui-design:generate --style-variants N`
**Note**: After consolidate, use `/workflow:ui-design:layout-extract` to generate layout templates before running generate.

View File

@@ -20,11 +20,10 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Glob(*), Write(*
1. User triggers: `/workflow:ui-design:explore-auto [params]`
2. Phase 0c: Target confirmation → User confirms → **IMMEDIATELY triggers Phase 1**
3. Phase 1 (style-extract) → **WAIT for completion** → Auto-continues
4. Phase 2 (style-consolidate) → **WAIT for completion** → Auto-continues
5. Phase 2.5 (layout-extract)**WAIT for completion** → Auto-continues
6. **Phase 3 (ui-assembly)****WAIT for completion** → Auto-continues
7. Phase 4 (design-update) → **WAIT for completion** → Auto-continues
8. Phase 5 (batch-plan, optional) → Reports completion
4. Phase 2 (layout-extract) → **WAIT for completion** → Auto-continues
5. **Phase 3 (ui-assembly)****WAIT for completion** → Auto-continues
6. Phase 4 (design-update)**WAIT for completion** → Auto-continues
7. Phase 5 (batch-plan, optional) → Reports completion
**Phase Transition Mechanism**:
- **Phase 0c (User Interaction)**: User confirms targets → IMMEDIATELY triggers Phase 1
@@ -91,8 +90,8 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Glob(*), Write(*
**Matrix Mode** (style-centric):
- Generates `style_variants × layout_variants × targets` prototypes
- **Phase 1**: `style_variants` style options with design_attributes (extract)
- **Phase 2**: `style_variants` independent design systems (consolidate)
- **Phase 1**: `style_variants` complete design systems (extract)
- **Phase 2**: Layout templates extraction (layout-extract)
- **Phase 3**: Style-centric batch generation (generate)
- Sub-phase 1: `targets × layout_variants` target-specific layout plans
- **Sub-phase 2**: `S` style-centric agents (each handles `L×T` combinations)
@@ -280,18 +279,7 @@ SlashCommand(command)
# Upon completion, IMMEDIATELY execute Phase 2 (auto-continue)
```
### Phase 2: Style Consolidation
```bash
command = "/workflow:ui-design:consolidate --base-path \"{base_path}\" " +
"--variants {style_variants}"
SlashCommand(command)
# Output: {style_variants} independent design systems with tokens.css
# SlashCommand blocks until phase complete
# Upon completion, IMMEDIATELY execute Phase 2.5 (auto-continue)
```
### Phase 2.5: Layout Extraction
### Phase 2: Layout Extraction
```bash
targets_string = ",".join(inferred_target_list)
command = "/workflow:ui-design:layout-extract --base-path \"{base_path}\" " +
@@ -360,7 +348,6 @@ IF --batch-plan:
// Initialize IMMEDIATELY after Phase 0c user confirmation to track multi-phase execution
TodoWrite({todos: [
{"content": "Execute style extraction", "status": "in_progress", "activeForm": "Executing..."},
{"content": "Execute style consolidation", "status": "pending", "activeForm": "Executing..."},
{"content": "Execute layout extraction", "status": "pending", "activeForm": "Executing..."},
{"content": "Execute UI assembly", "status": "pending", "activeForm": "Executing..."},
{"content": "Execute design integration", "status": "pending", "activeForm": "Executing..."}
@@ -442,9 +429,8 @@ Architecture: Style-Centric Batch Generation
Run ID: {run_id} | Session: {session_id or "standalone"}
Type: {icon} {target_type} | Device: {device_type} | Matrix: {s}×{l}×{n} = {total} prototypes
Phase 1: {s} style variants with design_attributes (style-extract)
Phase 2: {s} design systems with tokens.css (consolidate)
Phase 2.5: {n×l} layout templates (layout-extract explore mode)
Phase 1: {s} complete design systems (style-extract)
Phase 2: {n×l} layout templates (layout-extract explore mode)
- Device: {device_type} layouts
- {n} targets × {l} layout variants = {n×l} structural templates
Phase 3: UI Assembly (generate)
@@ -466,11 +452,13 @@ Design Quality:
✅ Device-Optimized: Layouts designed for {device_type}
📂 {base_path}/
├── style-extraction/ ({s} style cards + design-space-analysis.json)
├── style-consolidation/ ({s} design systems with tokens.css)
── layout-extraction/ ({n×l} layout templates + layout-space-analysis.json)
├── prototypes/ ({total} assembled prototypes)
── .run-metadata.json (includes device type)
├── .intermediates/ (Intermediate analysis files)
├── style-analysis/ (computed-styles.json, design-space-analysis.json)
│ └── layout-analysis/ (dom-structure-*.json, inspirations/*.txt)
├── style-extraction/ ({s} complete design systems)
── layout-extraction/ ({n×l} layout templates + layout-space-analysis.json)
├── prototypes/ ({total} assembled prototypes)
└── .run-metadata.json (includes device type)
🌐 Preview: {base_path}/prototypes/compare.html
- Interactive {s}×{l} matrix view

View File

@@ -18,8 +18,7 @@ Pure assembler that combines pre-extracted layout templates with design tokens t
- **Automatic Image Reference**: If source images exist in layout templates, they're automatically used for visual context
**Prerequisite Commands**:
- `/workflow:ui-design:style-extract`Style tokens
- `/workflow:ui-design:consolidate` → Final design tokens
- `/workflow:ui-design:style-extract`Complete design systems (design-tokens.json + style-guide.md)
- `/workflow:ui-design:layout-extract` → Layout structure
## Phase 1: Setup & Validation
@@ -30,7 +29,7 @@ Pure assembler that combines pre-extracted layout templates with design tokens t
bash(find .workflow -type d -name "design-*" | head -1) # Auto-detect
# Get style count
bash(ls {base_path}/style-consolidation/style-* -d | wc -l)
bash(ls {base_path}/style-extraction/style-* -d | wc -l)
# Image reference auto-detected from layout template source_image_path
```
@@ -50,10 +49,10 @@ Read({base_path}/layout-extraction/layout-templates.json)
### Step 3: Validate Design Tokens
```bash
# Check design tokens exist for all styles
bash(test -f {base_path}/style-consolidation/style-1/design-tokens.json && echo "valid")
bash(test -f {base_path}/style-extraction/style-1/design-tokens.json && echo "valid")
# For each style variant: Load design tokens
Read({base_path}/style-consolidation/style-{id}/design-tokens.json)
Read({base_path}/style-extraction/style-{id}/design-tokens.json)
```
**Output**: `design_tokens[]` for all style variants
@@ -84,7 +83,7 @@ Task(ui-design-agent): `
Extract: dom_structure, css_layout_rules, device_type, source_image_path
2. Design Tokens:
Read("{base_path}/style-consolidation/style-{style_id}/design-tokens.json")
Read("{base_path}/style-extraction/style-{style_id}/design-tokens.json")
Extract: ALL token values (colors, typography, spacing, borders, shadows, breakpoints)
3. Reference Image (AUTO-DETECTED):
@@ -192,7 +191,7 @@ Assembly Process:
Quality:
- Structure: From layout-extract (DOM, CSS layout rules)
- Style: From consolidate (design tokens)
- Style: From style-extract (design tokens)
- CSS: Token values directly applied (var() replaced)
- Device-optimized: Layouts match device_type from templates
@@ -222,7 +221,7 @@ Next: /workflow:ui-design:update
bash(find .workflow -type d -name "design-*" | head -1)
# Count style variants
bash(ls {base_path}/style-consolidation/style-* -d | wc -l)
bash(ls {base_path}/style-extraction/style-* -d | wc -l)
```
### Validation Commands
@@ -231,7 +230,7 @@ bash(ls {base_path}/style-consolidation/style-* -d | wc -l)
bash(test -f {base_path}/layout-extraction/layout-templates.json && echo "exists")
# Check design tokens exist
bash(test -f {base_path}/style-consolidation/style-1/design-tokens.json && echo "valid")
bash(test -f {base_path}/style-extraction/style-1/design-tokens.json && echo "valid")
# Count generated files
bash(ls {base_path}/prototypes/{target}-style-*-layout-*.html | wc -l)
@@ -255,9 +254,9 @@ bash(~/.claude/scripts/ui-generate-preview.sh "{base_path}/prototypes")
{base_path}/
├── layout-extraction/
│ └── layout-templates.json # Input (from layout-extract)
├── style-consolidation/
├── style-extraction/
│ └── style-{s}/
│ ├── design-tokens.json # Input (from consolidate)
│ ├── design-tokens.json # Input (from style-extract)
│ └── style-guide.md
└── prototypes/
├── {target}-style-{s}-layout-{l}.html # Assembled prototypes
@@ -275,7 +274,7 @@ ERROR: Layout templates not found
→ Run /workflow:ui-design:layout-extract first
ERROR: Design tokens not found
→ Run /workflow:ui-design:consolidate first
→ Run /workflow:ui-design:style-extract first
ERROR: Agent assembly failed
→ Check inputs exist, validate JSON structure
@@ -311,8 +310,7 @@ ERROR: Script permission denied
## Integration
**Prerequisites**:
- `/workflow:ui-design:style-extract``style-cards.json`
- `/workflow:ui-design:consolidate``design-tokens.json`
- `/workflow:ui-design:style-extract``design-tokens.json` + `style-guide.md`
- `/workflow:ui-design:layout-extract``layout-templates.json`
**Input**: `layout-templates.json` + `design-tokens.json`

View File

@@ -1,7 +1,7 @@
---
name: imitate-auto
description: High-speed multi-page UI replication with batch screenshot and optional token refinement
argument-hint: --url-map "<map>" [--capture-mode <batch|deep>] [--depth <1-5>] [--session <id>] [--refine-tokens] [--prompt "<desc>"]
description: High-speed multi-page UI replication with batch screenshot capture
argument-hint: --url-map "<map>" [--capture-mode <batch|deep>] [--depth <1-5>] [--session <id>] [--prompt "<desc>"]
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Write(*), Bash(*)
---
@@ -19,11 +19,10 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Write(*), Bash(*)
1. User triggers: `/workflow:ui-design:imitate-auto --url-map "..."`
2. Phase 0: Initialize and parse parameters
3. Phase 1: Screenshot capture (batch or deep mode) → **WAIT for completion** → Auto-continues
4. Phase 2: Style extraction (visual tokens) → **WAIT for completion** → Auto-continues
4. Phase 2: Style extraction (complete design systems) → **WAIT for completion** → Auto-continues
5. Phase 2.5: Layout extraction (structure templates) → **WAIT for completion** → Auto-continues
6. Phase 3: Token processing (conditional consolidate)**WAIT for completion** → Auto-continues
7. Phase 4: Batch UI assembly → **WAIT for completion** → Auto-continues
8. Phase 5: Design system integration → Reports completion
6. Phase 3: Batch UI assembly**WAIT for completion** → Auto-continues
7. Phase 4: Design system integration → Reports completion
**Phase Transition Mechanism**:
- `SlashCommand` is BLOCKING - execution pauses until completion
@@ -67,13 +66,10 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Write(*), Bash(*)
- Enable automatic design system integration (Phase 5)
- If not provided: standalone mode (`.workflow/.design/`)
- `--refine-tokens` (Optional, default: false): Enable full token refinement
- `false` (default): Fast path, skip consolidate (~30-60s faster)
- `true`: Production quality, execute full consolidate (philosophy-driven refinement)
- `--prompt "<desc>"` (Optional): Style extraction guidance
- Influences extract command analysis focus
- Example: `"Focus on dark mode"`, `"Emphasize minimalist design"`
- **Note**: Design systems are now production-ready by default (no separate consolidate step)
## Execution Modes
@@ -86,14 +82,11 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Write(*), Bash(*)
- Captures page layers at different depths (1-5)
- Only processes first URL from url-map
**Token Processing Modes**:
- **Fast-Track** (default): Skip consolidate, use proposed tokens directly (~2s)
- Best for rapid prototyping and testing
- Tokens may lack philosophy-driven refinement
- **Production** (--refine-tokens): Full consolidate with WCAG validation (~60s)
- Philosophy-driven token refinement
- Complete accessibility validation
- Production-ready design system
**Token Processing**:
- **Direct Generation**: Complete design systems generated in style-extract phase
- Production-ready design-tokens.json with WCAG compliance
- Complete style-guide.md documentation
- No separate consolidation step required (~30-60s faster)
**Session Integration**:
- `--session` flag determines session integration or standalone execution
@@ -163,7 +156,6 @@ FOR pair IN split(url_map_string, ","):
VALIDATE: len(target_names) > 0, "url-map must contain at least one target:url pair"
primary_target = target_names[0] # First target as primary style source
refine_tokens_mode = --refine-tokens OR false
# Parse capture mode
capture_mode = --capture-mode OR "batch"
@@ -198,7 +190,6 @@ metadata = {
"url_map": url_map,
"capture_mode": capture_mode,
"depth": depth IF capture_mode == "deep" ELSE null,
"refine_tokens": refine_tokens_mode,
"prompt": --prompt OR null
},
"targets": target_names,
@@ -217,7 +208,6 @@ ELSE:
REPORT: " Targets: {len(target_names)} pages"
REPORT: " Primary source: '{primary_target}' ({url_map[primary_target]})"
REPORT: " All targets: {', '.join(target_names)}"
REPORT: " Token refinement: {refine_tokens_mode ? 'Enabled (production quality)' : 'Disabled (fast-track)'}"
IF --prompt:
REPORT: " Prompt guidance: \"{--prompt}\""
REPORT: ""
@@ -226,9 +216,8 @@ REPORT: ""
TodoWrite({todos: [
{content: "Initialize and parse url-map", status: "completed", activeForm: "Initializing"},
{content: capture_mode == "batch" ? f"Batch screenshot capture ({len(target_names)} targets)" : f"Deep exploration (depth {depth})", status: "pending", activeForm: "Capturing screenshots"},
{content: "Extract style (visual tokens)", status: "pending", activeForm: "Extracting style"},
{content: "Extract style (complete design systems)", status: "pending", activeForm: "Extracting style"},
{content: "Extract layout (structure templates)", status: "pending", activeForm: "Extracting layout"},
{content: refine_tokens_mode ? "Refine design tokens via consolidate" : "Fast token adaptation (skip consolidate)", status: "pending", activeForm: "Processing tokens"},
{content: f"Assemble UI for {len(target_names)} targets", status: "pending", activeForm: "Assembling UI"},
{content: session_id ? "Integrate design system" : "Standalone completion", status: "pending", activeForm: "Completing"}
]})
@@ -370,31 +359,31 @@ CATCH error:
ERROR: "Cannot proceed without visual tokens"
EXIT 1
# style-extract outputs to: {base_path}/style-extraction/style-cards.json
# style-extract outputs to: {base_path}/style-extraction/style-1/design-tokens.json
# Verify extraction results
style_cards_path = "{base_path}/style-extraction/style-cards.json"
design_tokens_path = "{base_path}/style-extraction/style-1/design-tokens.json"
style_guide_path = "{base_path}/style-extraction/style-1/style-guide.md"
IF NOT exists(style_cards_path):
ERROR: "style-extract command did not generate style-cards.json"
ERROR: "Expected: {style_cards_path}"
IF NOT exists(design_tokens_path):
ERROR: "style-extract command did not generate design-tokens.json"
ERROR: "Expected: {design_tokens_path}"
EXIT 1
style_cards = Read(style_cards_path)
IF len(style_cards.style_cards) != 1:
ERROR: "Expected single variant in imitate mode, got {len(style_cards.style_cards)}"
IF NOT exists(style_guide_path):
ERROR: "style-extract command did not generate style-guide.md"
ERROR: "Expected: {style_guide_path}"
EXIT 1
extracted_style = style_cards.style_cards[0]
design_tokens = Read(design_tokens_path)
REPORT: ""
REPORT: "✅ Phase 2 complete:"
REPORT: " Style: '{extracted_style.name}'"
REPORT: " Philosophy: {extracted_style.design_philosophy}"
REPORT: " Tokens: {count_tokens(extracted_style.proposed_tokens)} visual tokens"
REPORT: " Output: style-extraction/style-1/"
REPORT: " Files: design-tokens.json, style-guide.md"
REPORT: " Quality: Production-ready (WCAG AA)"
TodoWrite(mark_completed: "Extract style (visual tokens)",
TodoWrite(mark_completed: "Extract style (complete design systems)",
mark_in_progress: "Extract layout (structure templates)")
```
@@ -444,125 +433,20 @@ REPORT: " Templates: {template_count} layout structures"
REPORT: " Targets: {', '.join(target_names)}"
TodoWrite(mark_completed: "Extract layout (structure templates)",
mark_in_progress: refine_tokens_mode ? "Refine design tokens via consolidate" : "Fast token adaptation")
```
### Phase 3: Token Processing (Conditional Consolidate)
```bash
REPORT: ""
REPORT: "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
REPORT: "🚀 Phase 3: Token Processing"
REPORT: "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
IF refine_tokens_mode:
# Path A: Full consolidate (production quality)
REPORT: "🔧 Using full consolidate for production-ready tokens"
REPORT: " Benefits:"
REPORT: " • Philosophy-driven token refinement"
REPORT: " • WCAG AA accessibility validation"
REPORT: " • Complete design system documentation"
REPORT: " • Token gap filling and consistency checks"
REPORT: ""
consolidate_command = f"/workflow:ui-design:consolidate --base-path \"{base_path}\" --variants 1"
REPORT: f"Calling consolidate command..."
TRY:
SlashCommand(consolidate_command)
CATCH error:
ERROR: "Token consolidation failed: {error}"
ERROR: "Cannot proceed without refined tokens"
EXIT 1
# consolidate outputs to: {base_path}/style-consolidation/style-1/design-tokens.json
tokens_path = "{base_path}/style-consolidation/style-1/design-tokens.json"
IF NOT exists(tokens_path):
ERROR: "consolidate command did not generate design-tokens.json"
ERROR: "Expected: {tokens_path}"
EXIT 1
REPORT: ""
REPORT: "✅ Full consolidate complete"
REPORT: " Output: style-consolidation/style-1/"
REPORT: " Quality: Production-ready"
token_quality = "production-ready (consolidated)"
time_spent_estimate = "~60s"
ELSE:
# Path B: Fast Token Adaptation
REPORT: "⚡ Fast token adaptation (skipping consolidate for speed)"
REPORT: " Note: Using proposed tokens from extraction phase"
REPORT: " For production quality, re-run with --refine-tokens flag"
REPORT: ""
# Directly copy proposed_tokens to consolidation directory
style_cards = Read("{base_path}/style-extraction/style-cards.json")
proposed_tokens = style_cards.style_cards[0].proposed_tokens
# Create directory and write tokens
consolidation_dir = "{base_path}/style-consolidation/style-1"
Bash(mkdir -p "{consolidation_dir}")
tokens_path = f"{consolidation_dir}/design-tokens.json"
Write(tokens_path, JSON.stringify(proposed_tokens, null, 2))
# Create simplified style guide
variant = style_cards.style_cards[0]
simple_guide = f"""# Design System: {variant.name}
## Design Philosophy
{variant.design_philosophy}
## Description
{variant.description}
## Design Tokens
All tokens in `design-tokens.json` follow OKLCH color space.
**Note**: Generated in fast-track imitate mode using proposed tokens from extraction.
For production-ready quality with philosophy-driven refinement, re-run with `--refine-tokens` flag.
## Color Preview
- Primary: {variant.preview.primary if variant.preview else "N/A"}
- Background: {variant.preview.background if variant.preview else "N/A"}
## Typography Preview
- Heading Font: {variant.preview.font_heading if variant.preview else "N/A"}
## Token Categories
{list_token_categories(proposed_tokens)}
"""
Write(f"{consolidation_dir}/style-guide.md", simple_guide)
REPORT: "✅ Fast adaptation complete (~2s vs ~60s with consolidate)"
REPORT: " Output: style-consolidation/style-1/"
REPORT: " Quality: Proposed (not refined)"
REPORT: " Time saved: ~30-60s"
token_quality = "proposed (fast-track)"
time_spent_estimate = "~2s"
REPORT: ""
REPORT: "Token processing summary:"
REPORT: " Path: {refine_tokens_mode ? 'Full consolidate' : 'Fast adaptation'}"
REPORT: " Quality: {token_quality}"
REPORT: " Time: {time_spent_estimate}"
TodoWrite(mark_completed: refine_tokens_mode ? "Refine design tokens via consolidate" : "Fast token adaptation",
mark_in_progress: f"Assemble UI for {len(target_names)} targets")
```
### Phase 4: Batch UI Assembly
### Phase 3: Batch UI Assembly
```bash
REPORT: ""
REPORT: "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
REPORT: "🚀 Phase 4: Batch UI Assembly"
REPORT: "🚀 Phase 3: Batch UI Assembly"
REPORT: "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
# Note: design-tokens.json already generated in Phase 2 (style-extract)
# Located at: {base_path}/style-extraction/style-1/design-tokens.json
# Build targets string
targets_string = ",".join(target_names)
@@ -602,12 +486,12 @@ TodoWrite(mark_completed: f"Assemble UI for {len(target_names)} targets",
mark_in_progress: session_id ? "Integrate design system" : "Standalone completion")
```
### Phase 5: Design System Integration
### Phase 4: Design System Integration
```bash
REPORT: ""
REPORT: "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
REPORT: "🚀 Phase 5: Design System Integration"
REPORT: "🚀 Phase 4: Design System Integration"
REPORT: "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
IF session_id:
@@ -636,9 +520,8 @@ metadata.status = "completed"
metadata.completion_time = current_timestamp()
metadata.outputs = {
"screenshots": f"{base_path}/screenshots/",
"style_system": f"{base_path}/style-consolidation/style-1/",
"style_system": f"{base_path}/style-extraction/style-1/",
"prototypes": f"{base_path}/prototypes/",
"token_quality": token_quality,
"captured_count": captured_count,
"generated_count": generated_count
}
@@ -650,9 +533,9 @@ TodoWrite(mark_completed: session_id ? "Integrate design system" : "Standalone c
TodoWrite({todos: [
{content: "Initialize and parse url-map", status: "completed", activeForm: "Initializing"},
{content: capture_mode == "batch" ? f"Batch screenshot capture ({len(target_names)} targets)" : f"Deep exploration (depth {depth})", status: "completed", activeForm: "Capturing"},
{content: "Extract unified design system", status: "completed", activeForm: "Extracting"},
{content: refine_tokens_mode ? "Refine design tokens via consolidate" : "Fast token adaptation", status: "completed", activeForm: "Processing"},
{content: f"Generate UI for {len(target_names)} targets", status: "completed", activeForm: "Generating"},
{content: "Extract style (complete design systems)", status: "completed", activeForm: "Extracting"},
{content: "Extract layout (structure templates)", status: "completed", activeForm: "Extracting layout"},
{content: f"Assemble UI for {len(target_names)} targets", status: "completed", activeForm: "Assembling"},
{content: session_id ? "Integrate design system" : "Standalone completion", status: "completed", activeForm: "Completing"}
]})
```
@@ -674,24 +557,18 @@ Run ID: {run_id}
Phase 1 - Screenshot Capture: ✅ {IF capture_mode == "batch": f"{captured_count}/{total_requested} screenshots" ELSE: f"{captured_count} screenshots ({total_layers} layers)"}
{IF capture_mode == "batch" AND captured_count < total_requested: f"⚠️ {total_requested - captured_count} missing" ELSE: "All targets captured"}
Phase 2 - Style Extraction: ✅ Single unified design system
Style: {extracted_style.name}
Philosophy: {extracted_style.design_philosophy[:80]}...
Phase 2 - Style Extraction: ✅ Production-ready design systems
Output: style-extraction/style-1/ (design-tokens.json + style-guide.md)
Quality: WCAG AA compliant, OKLCH colors
Phase 3 - Token Processing: ✅ {token_quality}
{IF refine_tokens_mode:
"Full consolidate (~60s)"
"Quality: Production-ready with philosophy-driven refinement"
ELSE:
"Fast adaptation (~2s, saved ~30-60s)"
"Quality: Proposed tokens (for production, use --refine-tokens)"
}
Phase 2.5 - Layout Extraction: ✅ Structure templates
Templates: {template_count} layout structures
Phase 4 - UI Generation: ✅ {generated_count} pages generated
Phase 3 - UI Assembly: ✅ {generated_count} pages assembled
Targets: {', '.join(target_names)}
Configuration: 1 style × 1 layout × {generated_count} pages
Phase 5 - Integration: {IF session_id: "✅ Integrated into session" ELSE: "⏭️ Standalone mode"}
Phase 4 - Integration: {IF session_id: "✅ Integrated into session" ELSE: "⏭️ Standalone mode"}
━━━ 📂 Output Structure ━━━
@@ -710,12 +587,12 @@ ELSE:
│ │ └── {layers}.png
│ └── layer-map.json
}
├── style-extraction/ # Design analysis
│ └── style-cards.json
├── style-consolidation/ # {token_quality}
├── style-extraction/ # Production-ready design systems
│ └── style-1/
│ ├── design-tokens.json
│ └── style-guide.md
├── layout-extraction/ # Structure templates
│ └── layout-templates.json
└── prototypes/ # {generated_count} HTML/CSS files
├── {target1}-style-1-layout-1.html + .css
├── {target2}-style-1-layout-1.html + .css
@@ -750,13 +627,6 @@ ELSE:
3. Explore prototypes in {base_path}/prototypes/ directory
}
{IF NOT refine_tokens_mode:
💡 Production Quality Tip:
Fast-track mode used proposed tokens for speed.
For production-ready quality with full token refinement:
/workflow:ui-design:imitate-auto --url-map "{url_map_command_string}" --refine-tokens {f"--session {session_id}" if session_id else ""}
}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
@@ -767,9 +637,9 @@ ELSE:
TodoWrite({todos: [
{content: "Initialize and parse url-map", status: "in_progress", activeForm: "Initializing"},
{content: "Batch screenshot capture", status: "pending", activeForm: "Capturing screenshots"},
{content: "Extract unified design system", status: "pending", activeForm: "Extracting style"},
{content: refine_tokens ? "Refine tokens via consolidate" : "Fast token adaptation", status: "pending", activeForm: "Processing tokens"},
{content: "Generate UI for all targets", status: "pending", activeForm: "Generating UI"},
{content: "Extract style (complete design systems)", status: "pending", activeForm: "Extracting style"},
{content: "Extract layout (structure templates)", status: "pending", activeForm: "Extracting layout"},
{content: "Assemble UI for all targets", status: "pending", activeForm: "Assembling UI"},
{content: "Integrate design system", status: "pending", activeForm: "Integrating"}
]})
@@ -818,7 +688,7 @@ TodoWrite({todos: [
## Key Features
- **🚀 Dual Capture**: Batch mode for speed, deep mode for detail
- **⚡ Fast-Track Option**: Skip consolidate for 30-60s time savings
- **⚡ Production-Ready**: Complete design systems generated directly (~30-60s faster)
- **🎨 High-Fidelity**: Single unified design system from primary target
- **📦 Autonomous**: No user intervention required between phases
- **🔄 Reproducible**: Deterministic flow with isolated run directories
@@ -832,10 +702,10 @@ TodoWrite({todos: [
# Result: 2 prototypes with fast-track tokens
```
### 2. Production-Ready with Session
### 2. Session Integration
```bash
/workflow:ui-design:imitate-auto --session WFS-payment --url-map "pricing:https://stripe.com/pricing" --refine-tokens
# Result: 1 prototype with production tokens, integrated into session
/workflow:ui-design:imitate-auto --session WFS-payment --url-map "pricing:https://stripe.com/pricing"
# Result: 1 prototype with production-ready design system, integrated into session
```
### 3. Deep Exploration Mode
@@ -852,16 +722,16 @@ TodoWrite({todos: [
## Integration Points
- **Input**: `--url-map` (multiple target:url pairs) + optional `--capture-mode`, `--depth`, `--session`, `--refine-tokens`, `--prompt`
- **Output**: Complete design system in `{base_path}/` (screenshots, style-extraction, style-consolidation, prototypes)
- **Input**: `--url-map` (multiple target:url pairs) + optional `--capture-mode`, `--depth`, `--session`, `--prompt`
- **Output**: Complete design system in `{base_path}/` (screenshots, style-extraction, layout-extraction, prototypes)
- **Sub-commands Called**:
1. Phase 1 (conditional):
- `--capture-mode batch`: `/workflow:ui-design:capture` (multi-URL batch)
- `--capture-mode deep`: `/workflow:ui-design:explore-layers` (single-URL depth exploration)
2. `/workflow:ui-design:extract` (Phase 2)
3. `/workflow:ui-design:consolidate` (Phase 3, conditional)
4. `/workflow:ui-design:generate` (Phase 4)
5. `/workflow:ui-design:update` (Phase 5, if --session)
2. `/workflow:ui-design:style-extract` (Phase 2 - complete design systems)
3. `/workflow:ui-design:layout-extract` (Phase 2.5 - structure templates)
4. `/workflow:ui-design:generate` (Phase 3 - pure assembly)
5. `/workflow:ui-design:update` (Phase 4, if --session)
## Completion Output
@@ -872,20 +742,20 @@ Mode: {capture_mode} | Session: {session_id or "standalone"}
Run ID: {run_id}
Phase 1 - Screenshot Capture: ✅ {captured_count} screenshots
Phase 2 - Style Extraction: ✅ Single unified design system
Phase 3 - Token Processing: ✅ {token_quality}
Phase 4 - UI Generation: ✅ {generated_count} pages generated
Phase 5 - Integration: {IF session_id: "✅ Integrated" ELSE: "⏭️ Standalone"}
Phase 2 - Style Extraction: ✅ Production-ready design systems
Phase 2.5 - Layout Extraction: ✅ Structure templates
Phase 3 - UI Assembly: ✅ {generated_count} pages assembled
Phase 4 - Integration: {IF session_id: "✅ Integrated" ELSE: "⏭️ Standalone"}
Design Quality:
✅ High-Fidelity Replication: Accurate style from primary target
✅ Token-Driven Styling: 100% var() usage
{IF refine_tokens: "Production-Ready: WCAG validated" ELSE: "Fast-Track: Proposed tokens"}
✅ Production-Ready: WCAG AA compliant, OKLCH colors
📂 {base_path}/
├── screenshots/ # {captured_count} screenshots
├── style-extraction/ # Design analysis
├── style-consolidation/ # {token_quality}
├── style-extraction/style-1/ # Production-ready design system
├── layout-extraction/ # Structure templates
└── prototypes/ # {generated_count} HTML/CSS files
🌐 Preview: {base_path}/prototypes/compare.html

View File

@@ -8,9 +8,11 @@ allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*), Bash(*), Task(ui-design
# Layout Extraction Command
## Overview
Extract structural layout information from reference images, URLs, or text prompts using AI analysis. This command separates the "scaffolding" (HTML structure and CSS layout) from the "paint" (visual tokens handled by `style-extract`).
**Strategy**: AI-Driven Structural Analysis
- **Agent-Powered**: Uses `ui-design-agent` for deep structural analysis
- **Dual-Mode**:
- `imitate`: High-fidelity replication of single layout structure
@@ -22,6 +24,7 @@ Extract structural layout information from reference images, URLs, or text promp
## Phase 0: Setup & Input Validation
### Step 1: Detect Input, Mode & Targets
```bash
# Detect input source
# Priority: --urls + --images → hybrid | --urls → url | --images → image | --prompt → text
@@ -65,15 +68,59 @@ Read({image_path}) # Load each image
bash(mkdir -p {base_path}/layout-extraction)
```
### Step 3: Memory Check (Skip if Already Done)
### Step 2.5: Extract DOM Structure (URL Mode - Enhancement)
```bash
# Check if layouts already extracted
bash(test -f {base_path}/layout-extraction/layout-templates.json && echo "exists")
# If URLs are available, extract real DOM structure
# This provides accurate layout data to supplement visual analysis
# For each URL in url_list:
IF url_available AND mcp_chrome_devtools_available:
# Read extraction script
Read(~/.claude/scripts/extract-layout-structure.js)
# Open page in Chrome DevTools
mcp__chrome-devtools__navigate_page(url="{target_url}")
# Execute layout extraction script
result = mcp__chrome-devtools__evaluate_script(function="[SCRIPT_CONTENT]")
# Save DOM structure for this target (intermediate file)
bash(mkdir -p {base_path}/.intermediates/layout-analysis)
Write({base_path}/.intermediates/layout-analysis/dom-structure-{target}.json, result)
dom_structure_available = true
ELSE:
dom_structure_available = false
```
**If exists**: Skip to completion message
**Extraction Script Reference**: `~/.claude/scripts/extract-layout-structure.js`
**Output**: `input_mode`, `base_path`, `extraction_mode`, `variants_count`, `targets[]`, `device_type`, loaded inputs
**Usage**: Read the script file and use content directly in `mcp__chrome-devtools__evaluate_script()`
**Script returns**:
- `metadata`: Extraction timestamp, URL, method
- `patterns`: Layout pattern statistics (flexColumn, flexRow, grid counts)
- `structure`: Hierarchical DOM tree with layout properties
**Benefits**:
- ✅ Real flex/grid configuration (justifyContent, alignItems, gap, etc.)
- ✅ Accurate element bounds (x, y, width, height)
- ✅ Structural hierarchy with depth control
- ✅ Layout pattern identification (flex-row, flex-column, grid-NCol)
### Step 3: Memory Check
```bash
# 1. Check if inputs cached in session memory
IF session_has_inputs: SKIP Step 2 file reading
# 2. Check if output already exists
bash(test -f {base_path}/layout-extraction/layout-templates.json && echo "exists")
IF exists: SKIP to completion
```
---
**Phase 0 Output**: `input_mode`, `base_path`, `extraction_mode`, `variants_count`, `targets[]`, `device_type`, loaded inputs
## Phase 1: Layout Research (Explore Mode Only)
@@ -87,13 +134,13 @@ bash(test -f {base_path}/layout-extraction/layout-templates.json && echo "exists
### Step 2: Gather Layout Inspiration (Explore Mode)
```bash
bash(mkdir -p {base_path}/layout-extraction/_inspirations)
bash(mkdir -p {base_path}/.intermediates/layout-analysis/inspirations)
# For each target: Research via MCP
# mcp__exa__web_search_exa(query="{target} layout patterns {device_type}", numResults=5)
# Write inspiration file
Write({base_path}/layout-extraction/_inspirations/{target}-layout-ideas.txt, inspiration_content)
Write({base_path}/.intermediates/layout-analysis/inspirations/{target}-layout-ideas.txt, inspiration_content)
```
**Output**: Inspiration text files for each target (explore mode only)
@@ -115,14 +162,28 @@ Task(ui-design-agent): `
- Targets: {targets} // List of page/component names
- Variants per Target: {variants_count}
- Device Type: {device_type}
${exploration_mode ? "- Layout Inspiration: Read('" + base_path + "/layout-extraction/_inspirations/{target}-layout-ideas.txt')" : ""}
${exploration_mode ? "- Layout Inspiration: Read('" + base_path + "/.intermediates/layout-analysis/inspirations/{target}-layout-ideas.txt')" : ""}
${dom_structure_available ? "- DOM Structure Data: Read('" + base_path + "/.intermediates/layout-analysis/dom-structure-{target}.json') - USE THIS for accurate layout properties" : ""}
## Analysis & Generation
${dom_structure_available ? "IMPORTANT: You have access to real DOM structure data with accurate flex/grid properties, bounds, and hierarchy. Use this data as ground truth for layout analysis." : ""}
For EACH target in {targets}:
For EACH variant (1 to {variants_count}):
1. **Analyze Structure**: Deconstruct reference to understand layout, hierarchy, responsiveness
1. **Analyze Structure**:
${dom_structure_available ?
"- Use DOM structure data as primary source for layout properties" +
"- Extract real flex/grid configurations (display, flexDirection, justifyContent, alignItems, gap)" +
"- Use actual element bounds for responsive breakpoint decisions" +
"- Preserve identified patterns (flex-row, flex-column, grid-NCol)" +
"- Reference screenshots for visual context only" :
"- Deconstruct reference images/URLs to understand layout, hierarchy, responsiveness"}
2. **Define Philosophy**: Short description (e.g., "Asymmetrical grid with overlapping content areas")
3. **Generate DOM Structure**: JSON object representing semantic HTML5 structure
3. **Generate DOM Structure**:
${dom_structure_available ?
"- Base structure on extracted DOM tree from .intermediates" +
"- Preserve semantic tags and hierarchy from dom-structure-{target}.json" +
"- Maintain layout patterns identified in patterns field" :
"- JSON object representing semantic HTML5 structure"}
- Semantic tags: <header>, <nav>, <main>, <aside>, <section>, <footer>
- ARIA roles and accessibility attributes
- Device-specific structure:
@@ -134,7 +195,12 @@ Task(ui-design-agent): `
4. **Define Component Hierarchy**: High-level array of main layout regions
Example: ["header", "main-content", "sidebar", "footer"]
5. **Generate CSS Layout Rules**:
- Focus ONLY on layout (Grid, Flexbox, position, alignment, gap, etc.)
${dom_structure_available ?
"- Use real layout properties from DOM structure data" +
"- Convert extracted flex/grid values to CSS rules" +
"- Preserve actual gap, justifyContent, alignItems values" +
"- Use element bounds to inform responsive breakpoints" :
"- Focus ONLY on layout (Grid, Flexbox, position, alignment, gap, etc.)"}
- Use CSS Custom Properties for spacing/breakpoints: var(--spacing-4), var(--breakpoint-md)
- Device-specific styles (mobile-first @media for responsive)
- NO colors, NO fonts, NO shadows - layout structure only
@@ -225,7 +291,7 @@ bash(find .workflow -type d -name "design-*" | head -1)
# Create output directories
bash(mkdir -p {base_path}/layout-extraction)
bash(mkdir -p {base_path}/layout-extraction/_inspirations) # explore mode only
bash(mkdir -p {base_path}/.intermediates/layout-analysis/inspirations) # explore mode only
```
### Validation Commands
@@ -247,7 +313,7 @@ bash(ls {images_pattern})
Read({image_path})
# Write inspiration files (explore mode)
Write({base_path}/layout-extraction/_inspirations/{target}-layout-ideas.txt, content)
Write({base_path}/.intermediates/layout-analysis/inspirations/{target}-layout-ideas.txt, content)
# Write layout templates
bash(echo '{json}' > {base_path}/layout-extraction/layout-templates.json)
@@ -257,11 +323,14 @@ bash(echo '{json}' > {base_path}/layout-extraction/layout-templates.json)
```
{base_path}/
── layout-extraction/
── layout-templates.json # Structural layout templates
├── layout-space-analysis.json # Layout directions (explore mode only)
└── _inspirations/ # Explore mode only
└── {target}-layout-ideas.txt # Layout inspiration research
── .intermediates/ # Intermediate analysis files
── layout-analysis/
├── dom-structure-{target}.json # Extracted DOM structure (URL mode only)
└── inspirations/ # Explore mode only
└── {target}-layout-ideas.txt # Layout inspiration research
└── layout-extraction/ # Final layout templates
├── layout-templates.json # Structural layout templates
└── layout-space-analysis.json # Layout directions (explore mode only)
```
## layout-templates.json Format
@@ -343,10 +412,13 @@ ERROR: MCP search failed (explore mode)
## Key Features
- **Hybrid Extraction Strategy** - Combines real DOM structure data with AI visual analysis
- **Accurate Layout Properties** - Chrome DevTools extracts real flex/grid configurations, bounds, and hierarchy
- **Separation of Concerns** - Decouples layout (structure) from style (visuals)
- **Structural Exploration** - Explore mode enables A/B testing of different layouts
- **Token-Based Layout** - CSS uses `var()` placeholders for instant design system adaptation
- **Device-Specific** - Tailored structures for different screen sizes
- **Graceful Fallback** - Works with images/text when URL unavailable
- **Foundation for Assembly** - Provides structural blueprint for refactored `generate` command
- **Agent-Powered** - Deep structural analysis with AI
@@ -355,10 +427,9 @@ ERROR: MCP search failed (explore mode)
**Workflow Position**: Between style extraction and prototype generation
**New Workflow**:
1. `/workflow:ui-design:style-extract``style-cards.json` (Visual tokens)
2. `/workflow:ui-design:consolidate``design-tokens.json` (Final visual system)
3. `/workflow:ui-design:layout-extract``layout-templates.json` (Structural templates)
4. `/workflow:ui-design:generate` (Refactored as assembler):
1. `/workflow:ui-design:style-extract``design-tokens.json` + `style-guide.md` (Complete design systems)
2. `/workflow:ui-design:layout-extract``layout-templates.json` (Structural templates)
3. `/workflow:ui-design:generate` (Pure assembler):
- **Reads**: `design-tokens.json` + `layout-templates.json`
- **Action**: For each style × layout combination:
1. Build HTML from `dom_structure`

View File

@@ -8,13 +8,14 @@ allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*)
# Style Extraction Command
## Overview
Extract design style from reference images or text prompts using Claude's built-in analysis. Generates `style-cards.json` with multiple design variants containing token proposals for consolidation.
Extract design style from reference images or text prompts using Claude's built-in analysis. Directly generates production-ready design systems with complete `design-tokens.json` and `style-guide.md` for each variant.
**Strategy**: AI-Driven Design Space Exploration
- **Claude-Native**: 100% Claude analysis, no external tools
- **Single Output**: `style-cards.json` with embedded token proposals
- **Direct Output**: Complete design systems (design-tokens.json + style-guide.md)
- **Flexible Input**: Images, text prompts, or both (hybrid mode)
- **Maximum Contrast**: AI generates maximally divergent design directions
- **Production-Ready**: WCAG AA compliant, OKLCH colors, semantic naming
## Phase 0: Setup & Input Validation
@@ -39,7 +40,48 @@ bash(find .workflow -type d -name "design-*" | head -1) # Auto-detect
# OR use --base-path / --session parameters
```
### Step 2: Load Inputs
### Step 2: Extract Computed Styles (URL Mode - Optional Enhancement)
```bash
# If URL is available from capture metadata, extract real CSS values
# This provides accurate design tokens to supplement visual analysis
# Check for URL metadata from capture phase
Read({base_path}/.metadata/capture-urls.json) # If exists
# For each URL in capture metadata:
IF url_available AND mcp_chrome_devtools_available:
# Read extraction script
Read(~/.claude/scripts/extract-computed-styles.js)
# Open page in Chrome DevTools
mcp__chrome-devtools__navigate_page(url="{target_url}")
# Execute extraction script directly
result = mcp__chrome-devtools__evaluate_script(function="[SCRIPT_CONTENT]")
# Save computed styles to intermediates directory
bash(mkdir -p {base_path}/.intermediates/style-analysis)
Write({base_path}/.intermediates/style-analysis/computed-styles.json, result)
computed_styles_available = true
ELSE:
computed_styles_available = false
```
**Extraction Script Reference**: `~/.claude/scripts/extract-computed-styles.js`
**Usage**: Read the script file and use content directly in `mcp__chrome-devtools__evaluate_script()`
**Script returns**:
- `metadata`: Extraction timestamp, URL, method
- `tokens`: Organized design tokens (colors, borderRadii, shadows, fontSizes, fontWeights, spacing)
**Benefits**:
- ✅ Pixel-perfect accuracy for border-radius, box-shadow, padding, etc.
- ✅ Eliminates guessing from visual analysis
- ✅ Provides ground truth for design tokens
### Step 3: Load Inputs
```bash
# For image mode
bash(ls {images_pattern}) # Expand glob pattern
@@ -52,15 +94,19 @@ Read({image_path}) # Load each image
bash(mkdir -p {base_path}/style-extraction/)
```
### Step 3: Memory Check (Skip if Already Done)
### Step 3: Memory Check
```bash
# Check if already extracted
bash(test -f {base_path}/style-extraction/style-cards.json && echo "exists")
# 1. Check if inputs cached in session memory
IF session_has_inputs: SKIP Step 2 file reading
# 2. Check if output already exists
bash(test -f {base_path}/style-extraction/style-1/design-tokens.json && echo "exists")
IF exists: SKIP to completion
```
**If exists**: Skip to completion message
---
**Output**: `input_mode`, `base_path`, `extraction_mode`, `variants_count`, `loaded_images[]` or `prompt_guidance`
**Phase 0 Output**: `input_mode`, `base_path`, `extraction_mode`, `variants_count`, `loaded_images[]` or `prompt_guidance`
## Phase 1: Design Space Analysis (Explore Mode Only)
@@ -93,42 +139,101 @@ AI analyzes requirements and generates `variants_count` maximally contrasting de
### Step 4: Write Design Space Analysis
```bash
bash(echo '{design_space_analysis}' > {base_path}/style-extraction/design-space-analysis.json)
bash(mkdir -p {base_path}/.intermediates/style-analysis)
bash(echo '{design_space_analysis}' > {base_path}/.intermediates/style-analysis/design-space-analysis.json)
# Verify output
bash(test -f design-space-analysis.json && echo "saved")
bash(test -f {base_path}/.intermediates/style-analysis/design-space-analysis.json && echo "saved")
```
**Output**: `design-space-analysis.json` for consolidation phase
**Output**: `design-space-analysis.json` (intermediate analysis file)
## Phase 2: Style Synthesis & File Write
## Phase 2: Design System Generation (Agent)
### Step 1: Claude Native Analysis
AI generates `variants_count` design style proposals:
**Executor**: `Task(ui-design-agent)` for all variants
**Input**:
- Input mode (image/text/hybrid)
- Visual references or text guidance
- Design space analysis (explore mode only)
- Variant-specific directions (explore mode only)
**Analysis Rules**:
- **Explore mode**: Each variant follows pre-defined philosophy and attributes, maintains maximum contrast
- **Imitate mode**: High-fidelity replication of reference design
- OKLCH color format
- Complete token proposals (colors, typography, spacing, border_radius, shadows, breakpoints)
- WCAG AA accessibility (4.5:1 text, 3:1 UI)
**Output Format**: `style-cards.json` with:
- extraction_metadata (session_id, input_mode, extraction_mode, timestamp, variants_count)
- style_cards[] (id, name, description, design_philosophy, preview, proposed_tokens)
### Step 2: Write Style Cards
### Step 1: Create Output Directories
```bash
bash(echo '{style_cards_json}' > {base_path}/style-extraction/style-cards.json)
bash(mkdir -p {base_path}/style-extraction/style-{{1..{variants_count}}})
```
**Output**: `style-cards.json` with `variants_count` complete variants
### Step 2: Launch Agent Task
For all variants (1 to {variants_count}):
```javascript
Task(ui-design-agent): `
[DESIGN_SYSTEM_GENERATION_TASK]
Generate {variants_count} independent production-ready design systems
SESSION: {session_id} | MODE: {extraction_mode} | BASE_PATH: {base_path}
CRITICAL PATH: Use loop index (N) for directories: style-1/, style-2/, ..., style-N/
VARIANT DATA:
{FOR each variant with index N:
VARIANT INDEX: {N}
{IF extraction_mode == "explore":
Design Philosophy: {divergent_direction[N].philosophy_name}
Design Attributes: {divergent_direction[N].design_attributes}
Search Keywords: {divergent_direction[N].search_keywords}
Anti-keywords: {divergent_direction[N].anti_keywords}
}
}
## Input Analysis
- Input mode: {input_mode} (image/text/hybrid)
- Visual references: {loaded_images OR prompt_guidance}
- Computed styles: {computed_styles if available}
- Design space analysis: {design_space_analysis if explore mode}
## Analysis Rules
- **Explore mode**: Each variant follows pre-defined philosophy and attributes
- **Imitate mode**: High-fidelity replication of reference design
- If computed_styles available: Use as ground truth for border-radius, shadows, spacing, typography, colors
- Otherwise: Visual inference
- OKLCH color format (convert RGB from computed styles)
- WCAG AA compliance: 4.5:1 text, 3:1 UI
## Generate (For EACH variant, use loop index N for paths)
1. {base_path}/style-extraction/style-{N}/design-tokens.json
- Complete token structure: colors, typography, spacing, border_radius, shadows, breakpoints
- All colors in OKLCH format
- {IF explore mode: Apply design_attributes to token values (saturation→chroma, density→spacing, etc.)}
2. {base_path}/style-extraction/style-{N}/style-guide.md
- Expanded design philosophy
- Complete color system with accessibility notes
- Typography documentation
- Usage guidelines
## Critical Requirements
- ✅ Use Write() tool immediately for each file
- ✅ Use loop index N for directory names: style-1/, style-2/, etc.
- ❌ NO external research or MCP calls (pure AI generation)
- {IF explore mode: Apply philosophy-driven refinement using design_attributes}
- {IF explore mode: Maintain divergence using anti_keywords}
- Complete each variant's files before moving to next variant
`
```
**Output**: Agent generates `variants_count × 2` files
## Phase 3: Verify Output
### Step 1: Check Files Created
```bash
# Verify all design systems created
bash(ls {base_path}/style-extraction/style-*/design-tokens.json | wc -l)
# Validate structure
bash(cat {base_path}/style-extraction/style-1/design-tokens.json | grep -q "colors" && echo "valid")
```
### Step 2: Verify File Sizes
```bash
bash(ls -lh {base_path}/style-extraction/style-1/)
```
**Output**: `variants_count × 2` files verified
## Completion
@@ -137,7 +242,8 @@ bash(echo '{style_cards_json}' > {base_path}/style-extraction/style-cards.json)
TodoWrite({todos: [
{content: "Setup and input validation", status: "completed", activeForm: "Validating inputs"},
{content: "Design space analysis (explore mode)", status: "completed", activeForm: "Analyzing design space"},
{content: "Style synthesis and file write", status: "completed", activeForm: "Generating style cards"}
{content: "Design system generation (agent)", status: "completed", activeForm: "Generating design systems"},
{content: "Verify output files", status: "completed", activeForm: "Verifying files"}
]});
```
@@ -150,6 +256,7 @@ Configuration:
- Extraction Mode: {extraction_mode} (imitate/explore)
- Input Mode: {input_mode} (image/text/hybrid)
- Variants: {variants_count}
- Production-Ready: Complete design systems generated
{IF extraction_mode == "explore":
Design Space Analysis:
@@ -157,14 +264,19 @@ Design Space Analysis:
- Min contrast distance: {design_space_analysis.contrast_verification.min_pairwise_distance}
}
Generated Variants:
{FOR each card: - {card.name} - {card.design_philosophy}}
Generated Files:
{base_path}/style-extraction/
├── style-1/ (design-tokens.json, style-guide.md)
├── style-2/ (same structure)
└── style-{variants_count}/ (same structure)
Output Files:
- {base_path}/style-extraction/style-cards.json
{IF extraction_mode == "explore": - {base_path}/style-extraction/design-space-analysis.json}
{IF extraction_mode == "explore":
Intermediate Analysis:
{base_path}/.intermediates/style-analysis/design-space-analysis.json
}
Next: /workflow:ui-design:consolidate --session {session_id} --variants {variants_count}
Next: /workflow:ui-design:layout-extract --session {session_id} --targets "..."
OR: /workflow:ui-design:generate --session {session_id}
```
## Simple Bash Commands
@@ -184,13 +296,13 @@ bash(mkdir -p {base_path}/style-extraction/)
### Validation Commands
```bash
# Check if already extracted
bash(test -f {base_path}/style-extraction/style-cards.json && echo "exists")
bash(test -f {base_path}/style-extraction/style-1/design-tokens.json && echo "exists")
# Count variants
bash(cat style-cards.json | grep -c "\"id\": \"variant-")
bash(ls {base_path}/style-extraction/style-* -d | wc -l)
# Validate JSON
bash(cat style-cards.json | grep -q "extraction_metadata" && echo "valid")
# Validate JSON structure
bash(cat {base_path}/style-extraction/style-1/design-tokens.json | grep -q "colors" && echo "valid")
```
### File Operations
@@ -198,46 +310,50 @@ bash(cat style-cards.json | grep -q "extraction_metadata" && echo "valid")
# Load brainstorming context
bash(test -f .brainstorming/synthesis-specification.md && cat it)
# Write output
bash(echo '{json}' > {base_path}/style-extraction/style-cards.json)
# Create directories
bash(mkdir -p {base_path}/style-extraction/style-{{1..3}})
# Verify output
bash(test -f design-space-analysis.json && echo "saved")
bash(ls {base_path}/style-extraction/style-1/)
bash(test -f {base_path}/.intermediates/style-analysis/design-space-analysis.json && echo "saved")
```
## Output Structure
```
{base_path}/style-extraction/
├── style-cards.json # Complete style variants with token proposals
└── design-space-analysis.json # Design directions (explore mode only)
{base_path}/
├── .intermediates/ # Intermediate analysis files
│ └── style-analysis/
│ ├── computed-styles.json # Extracted CSS values from DOM (if URL available)
│ └── design-space-analysis.json # Design directions (explore mode only)
└── style-extraction/ # Final design systems
├── style-1/
│ ├── design-tokens.json # Production-ready design tokens
│ └── style-guide.md # Design philosophy and usage guide
├── style-2/ (same structure)
└── style-N/ (same structure)
```
## style-cards.json Format
## design-tokens.json Format
```json
{
"extraction_metadata": {"session_id": "...", "input_mode": "image|text|hybrid", "extraction_mode": "imitate|explore", "timestamp": "...", "variants_count": N},
"style_cards": [
{
"id": "variant-1", "name": "Style Name", "description": "...",
"design_philosophy": "Core principles",
"preview": {"primary": "oklch(...)", "background": "oklch(...)", "font_heading": "...", "border_radius": "..."},
"proposed_tokens": {
"colors": {"brand": {...}, "surface": {...}, "semantic": {...}, "text": {...}, "border": {...}},
"typography": {"font_family": {...}, "font_size": {...}, "font_weight": {...}, "line_height": {...}, "letter_spacing": {...}},
"spacing": {"0": "0", ..., "24": "6rem"},
"border_radius": {"none": "0", ..., "full": "9999px"},
"shadows": {"sm": "...", ..., "xl": "..."},
"breakpoints": {"sm": "640px", ..., "2xl": "1536px"}
}
}
// Repeat for all variants
]
"colors": {
"brand": {"primary": "oklch(...)", "secondary": "oklch(...)", "accent": "oklch(...)"},
"surface": {"background": "oklch(...)", "elevated": "oklch(...)", "overlay": "oklch(...)"},
"semantic": {"success": "oklch(...)", "warning": "oklch(...)", "error": "oklch(...)", "info": "oklch(...)"},
"text": {"primary": "oklch(...)", "secondary": "oklch(...)", "tertiary": "oklch(...)", "inverse": "oklch(...)"},
"border": {"default": "oklch(...)", "strong": "oklch(...)", "subtle": "oklch(...)"}
},
"typography": {"font_family": {...}, "font_size": {...}, "font_weight": {...}, "line_height": {...}, "letter_spacing": {...}},
"spacing": {"0": "0", "1": "0.25rem", ..., "24": "6rem"},
"border_radius": {"none": "0", "sm": "0.25rem", ..., "full": "9999px"},
"shadows": {"sm": "...", "md": "...", "lg": "...", "xl": "..."},
"breakpoints": {"sm": "640px", ..., "2xl": "1536px"}
}
```
**Requirements**: All colors in OKLCH format, complete token proposals, semantic naming
**Requirements**: OKLCH colors, complete coverage, semantic naming, WCAG AA compliance
## Error Handling
@@ -255,17 +371,21 @@ ERROR: Claude JSON parsing error
## Key Features
- **Direct Design System Generation** - Complete design-tokens.json + style-guide.md in one step
- **Hybrid Extraction Strategy** - Combines computed CSS values (ground truth) with AI visual analysis
- **Pixel-Perfect Accuracy** - Chrome DevTools extracts exact border-radius, shadows, spacing values
- **AI-Driven Design Space Exploration** - 6D attribute space analysis for maximum contrast
- **Variant-Specific Directions** - Each variant has unique philosophy, keywords, anti-patterns
- **Maximum Contrast Guarantee** - Variants maximally distant in attribute space
- **Claude-Native** - No external tools, fast deterministic synthesis
- **Flexible Input** - Images, text, or hybrid mode
- **Production-Ready** - OKLCH colors, WCAG AA, semantic naming
- **Flexible Input** - Images, text, URLs, or hybrid mode
- **Graceful Fallback** - Falls back to pure visual inference if URL unavailable
- **Production-Ready** - OKLCH colors, WCAG AA compliance, semantic naming
- **Agent-Driven** - Autonomous multi-file generation with ui-design-agent
## Integration
**Input**: Reference images or text prompts
**Output**: `style-cards.json` for consolidation
**Next**: `/workflow:ui-design:consolidate --session {session_id} --variants {count}`
**Output**: `style-extraction/style-{N}/` with design-tokens.json + style-guide.md
**Next**: `/workflow:ui-design:layout-extract --session {session_id}` OR `/workflow:ui-design:generate --session {session_id}`
**Note**: This command only extracts visual style (colors, typography, spacing). For layout structure extraction, use `/workflow:ui-design:layout-extract`.
**Note**: This command extracts visual style (colors, typography, spacing) and generates production-ready design systems. For layout structure extraction, use `/workflow:ui-design:layout-extract`.

View File

@@ -30,13 +30,11 @@ CHECK: .workflow/.active-* marker files; VALIDATE: session_id matches active ses
# Verify design artifacts in latest design run
latest_design = find_latest_path_matching(".workflow/WFS-{session}/design-*")
# Detect design system structure (unified vs separate)
IF exists({latest_design}/style-consolidation/design-tokens.json):
design_system_mode = "unified"; design_tokens_path = "style-consolidation/design-tokens.json"; style_guide_path = "style-consolidation/style-guide.md"
ELSE IF exists({latest_design}/style-consolidation/style-1/design-tokens.json):
design_system_mode = "separate"; design_tokens_path = "style-consolidation/style-1/design-tokens.json"; style_guide_path = "style-consolidation/style-1/style-guide.md"
# Detect design system structure
IF exists({latest_design}/style-extraction/style-1/design-tokens.json):
design_system_mode = "separate"; design_tokens_path = "style-extraction/style-1/design-tokens.json"; style_guide_path = "style-extraction/style-1/style-guide.md"
ELSE:
ERROR: "No design tokens found. Run /workflow:ui-design:consolidate first"
ERROR: "No design tokens found. Run /workflow:ui-design:style-extract first"
VERIFY: {latest_design}/{design_tokens_path}, {latest_design}/{style_guide_path}, {latest_design}/prototypes/*.html
@@ -202,15 +200,15 @@ Next: /workflow:plan [--agent] "<task description>"
**@ Reference Format** (synthesis-specification.md):
```
@../design-{run_id}/style-consolidation/design-tokens.json
@../design-{run_id}/style-consolidation/style-guide.md
@../design-{run_id}/style-extraction/style-1/design-tokens.json
@../design-{run_id}/style-extraction/style-1/style-guide.md
@../design-{run_id}/prototypes/{prototype}.html
```
**@ Reference Format** (ui-designer/style-guide.md):
```
@../../design-{run_id}/style-consolidation/design-tokens.json
@../../design-{run_id}/style-consolidation/style-guide.md
@../../design-{run_id}/style-extraction/style-1/design-tokens.json
@../../design-{run_id}/style-extraction/style-1/style-guide.md
@../../design-{run_id}/prototypes/{prototype}.html
```
@@ -230,8 +228,8 @@ After this update, `/workflow:plan` will discover design assets through:
"task_id": "IMPL-001",
"context": {
"design_system": {
"tokens": "design-{run_id}/style-consolidation/design-tokens.json",
"style_guide": "design-{run_id}/style-consolidation/style-guide.md",
"tokens": "design-{run_id}/style-extraction/style-1/design-tokens.json",
"style_guide": "design-{run_id}/style-extraction/style-1/style-guide.md",
"prototypes": ["design-{run_id}/prototypes/dashboard-variant-1.html"]
}
}
@@ -240,7 +238,7 @@ After this update, `/workflow:plan` will discover design assets through:
## Error Handling
- **Missing design artifacts**: Error with message "Run /workflow:ui-design:consolidate and /workflow:ui-design:generate first"
- **Missing design artifacts**: Error with message "Run /workflow:ui-design:style-extract and /workflow:ui-design:generate first"
- **synthesis-specification.md not found**: Warning, create minimal version with just UI/UX Guidelines
- **ui-designer/ directory missing**: Create directory and file
- **Edit conflicts**: Preserve existing content, append or replace only UI/UX Guidelines section
@@ -265,7 +263,7 @@ After update, verify:
## Integration Points
- **Input**: Design system artifacts from `/workflow:ui-design:consolidate` and `/workflow:ui-design:generate`
- **Input**: Design system artifacts from `/workflow:ui-design:style-extract` and `/workflow:ui-design:generate`
- **Output**: Updated synthesis-specification.md, ui-designer/style-guide.md with @ references
- **Next Phase**: `/workflow:plan` discovers and utilizes design system through @ references
- **Auto Integration**: Automatically triggered by `/workflow:ui-design:auto` workflow

View File

@@ -0,0 +1,35 @@
#!/bin/bash
# Classify folders by type for documentation generation
# Usage: get_modules_by_depth.sh | classify-folders.sh
# Output: folder_path|folder_type|code:N|dirs:N
while IFS='|' read -r depth_info path_info files_info types_info claude_info; do
# Extract folder path from format "path:./src/modules"
folder_path=$(echo "$path_info" | cut -d':' -f2-)
# Skip if path extraction failed
[[ -z "$folder_path" || ! -d "$folder_path" ]] && continue
# Count code files (maxdepth 1)
code_files=$(find "$folder_path" -maxdepth 1 -type f \
\( -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" \
-o -name "*.py" -o -name "*.go" -o -name "*.java" -o -name "*.rs" \
-o -name "*.c" -o -name "*.cpp" -o -name "*.cs" \) \
2>/dev/null | wc -l)
# Count subdirectories
subfolders=$(find "$folder_path" -maxdepth 1 -type d \
-not -path "$folder_path" 2>/dev/null | wc -l)
# Determine folder type
if [[ $code_files -gt 0 ]]; then
folder_type="code" # API.md + README.md
elif [[ $subfolders -gt 0 ]]; then
folder_type="navigation" # README.md only
else
folder_type="skip" # Empty or no relevant content
fi
# Output classification result
echo "${folder_path}|${folder_type}|code:${code_files}|dirs:${subfolders}"
done

View File

@@ -2,32 +2,88 @@
# Detect modules affected by git changes or recent modifications
# Usage: detect_changed_modules.sh [format]
# format: list|grouped|paths (default: paths)
#
# Features:
# - Respects .gitignore patterns (current directory or git root)
# - Detects git changes (staged, unstaged, or last commit)
# - Falls back to recently modified files (last 24 hours)
# Build exclusion filters from .gitignore
build_exclusion_filters() {
local filters=""
# Common system/cache directories to exclude
local system_excludes=(
".git" "__pycache__" "node_modules" ".venv" "venv" "env"
"dist" "build" ".cache" ".pytest_cache" ".mypy_cache"
"coverage" ".nyc_output" "logs" "tmp" "temp"
)
for exclude in "${system_excludes[@]}"; do
filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
done
# Find and parse .gitignore (current dir first, then git root)
local gitignore_file=""
# Check current directory first
if [ -f ".gitignore" ]; then
gitignore_file=".gitignore"
else
# Try to find git root and check for .gitignore there
local git_root=$(git rev-parse --show-toplevel 2>/dev/null)
if [ -n "$git_root" ] && [ -f "$git_root/.gitignore" ]; then
gitignore_file="$git_root/.gitignore"
fi
fi
# Parse .gitignore if found
if [ -n "$gitignore_file" ]; then
while IFS= read -r line; do
# Skip empty lines and comments
[[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue
# Remove trailing slash and whitespace
line=$(echo "$line" | sed 's|/$||' | xargs)
# Skip wildcards patterns (too complex for simple find)
[[ "$line" =~ \* ]] && continue
# Add to filters
filters+=" -not -path '*/$line' -not -path '*/$line/*'"
done < "$gitignore_file"
fi
echo "$filters"
}
detect_changed_modules() {
local format="${1:-paths}"
local changed_files=""
local affected_dirs=""
local exclusion_filters=$(build_exclusion_filters)
# Step 1: Try to get git changes (staged + unstaged)
if git rev-parse --git-dir > /dev/null 2>&1; then
changed_files=$(git diff --name-only HEAD 2>/dev/null; git diff --name-only --cached 2>/dev/null)
# If no changes in working directory, check last commit
if [ -z "$changed_files" ]; then
changed_files=$(git diff --name-only HEAD~1 HEAD 2>/dev/null)
fi
fi
# Step 2: If no git changes, find recently modified source files (last 24 hours)
# Apply exclusion filters from .gitignore
if [ -z "$changed_files" ]; then
changed_files=$(find . -type f \( \
-name "*.md" -o \
-name "*.js" -o -name "*.ts" -o -name "*.jsx" -o -name "*.tsx" -o \
-name "*.py" -o -name "*.go" -o -name "*.rs" -o \
-name "*.java" -o -name "*.cpp" -o -name "*.c" -o -name "*.h" -o \
-name "*.sh" -o -name "*.ps1" -o \
-name "*.json" -o -name "*.yaml" -o -name "*.yml" \
\) -not -path '*/.*' -mtime -1 2>/dev/null)
changed_files=$(eval "find . -type f \( \
-name '*.md' -o \
-name '*.js' -o -name '*.ts' -o -name '*.jsx' -o -name '*.tsx' -o \
-name '*.py' -o -name '*.go' -o -name '*.rs' -o \
-name '*.java' -o -name '*.cpp' -o -name '*.c' -o -name '*.h' -o \
-name '*.sh' -o -name '*.ps1' -o \
-name '*.json' -o -name '*.yaml' -o -name '*.yml' \
\) $exclusion_filters -mtime -1 2>/dev/null")
fi
# Step 3: Extract unique parent directories

View File

@@ -0,0 +1,118 @@
/**
* Extract Computed Styles from DOM
*
* This script extracts real CSS computed styles from a webpage's DOM
* to provide accurate design tokens for UI replication.
*
* Usage: Execute this function via Chrome DevTools evaluate_script
*/
(() => {
/**
* Extract unique values from a set and sort them
*/
const uniqueSorted = (set) => {
return Array.from(set)
.filter(v => v && v !== 'none' && v !== '0px' && v !== 'rgba(0, 0, 0, 0)')
.sort();
};
/**
* Parse rgb/rgba to OKLCH format (placeholder - returns original for now)
*/
const toOKLCH = (color) => {
// TODO: Implement actual RGB to OKLCH conversion
// For now, return the original color with a note
return `${color} /* TODO: Convert to OKLCH */`;
};
/**
* Extract only key styles from an element
*/
const extractKeyStyles = (element) => {
const s = window.getComputedStyle(element);
return {
color: s.color,
bg: s.backgroundColor,
borderRadius: s.borderRadius,
boxShadow: s.boxShadow,
fontSize: s.fontSize,
fontWeight: s.fontWeight,
padding: s.padding,
margin: s.margin
};
};
/**
* Main extraction function - extract all critical design tokens
*/
const extractDesignTokens = () => {
// Include all key UI elements
const selectors = [
'button', '.btn', '[role="button"]',
'input', 'textarea', 'select',
'h1', 'h2', 'h3', 'h4', 'h5', 'h6',
'.card', 'article', 'section',
'a', 'p', 'nav', 'header', 'footer'
];
// Collect all design tokens
const tokens = {
colors: new Set(),
borderRadii: new Set(),
shadows: new Set(),
fontSizes: new Set(),
fontWeights: new Set(),
spacing: new Set()
};
// Extract from all elements
selectors.forEach(selector => {
try {
const elements = document.querySelectorAll(selector);
elements.forEach(element => {
const s = extractKeyStyles(element);
// Collect all tokens (no limits)
if (s.color && s.color !== 'rgba(0, 0, 0, 0)') tokens.colors.add(s.color);
if (s.bg && s.bg !== 'rgba(0, 0, 0, 0)') tokens.colors.add(s.bg);
if (s.borderRadius && s.borderRadius !== '0px') tokens.borderRadii.add(s.borderRadius);
if (s.boxShadow && s.boxShadow !== 'none') tokens.shadows.add(s.boxShadow);
if (s.fontSize) tokens.fontSizes.add(s.fontSize);
if (s.fontWeight) tokens.fontWeights.add(s.fontWeight);
// Extract all spacing values
[s.padding, s.margin].forEach(val => {
if (val && val !== '0px') {
val.split(' ').forEach(v => {
if (v && v !== '0px') tokens.spacing.add(v);
});
}
});
});
} catch (e) {
console.warn(`Error: ${selector}`, e);
}
});
// Return all tokens (no element details to save context)
return {
metadata: {
extractedAt: new Date().toISOString(),
url: window.location.href,
method: 'computed-styles'
},
tokens: {
colors: uniqueSorted(tokens.colors),
borderRadii: uniqueSorted(tokens.borderRadii), // ALL radius values
shadows: uniqueSorted(tokens.shadows), // ALL shadows
fontSizes: uniqueSorted(tokens.fontSizes),
fontWeights: uniqueSorted(tokens.fontWeights),
spacing: uniqueSorted(tokens.spacing)
}
};
};
// Execute and return results
return extractDesignTokens();
})();

View File

@@ -0,0 +1,221 @@
/**
* Extract Layout Structure from DOM
*
* Extracts real layout information from DOM to provide accurate
* structural data for UI replication.
*
* Usage: Execute via Chrome DevTools evaluate_script
*/
(() => {
/**
* Get element's bounding box relative to viewport
*/
const getBounds = (element) => {
const rect = element.getBoundingClientRect();
return {
x: Math.round(rect.x),
y: Math.round(rect.y),
width: Math.round(rect.width),
height: Math.round(rect.height)
};
};
/**
* Extract layout properties from an element
*/
const extractLayoutProps = (element) => {
const s = window.getComputedStyle(element);
return {
// Core layout
display: s.display,
position: s.position,
// Flexbox
flexDirection: s.flexDirection,
justifyContent: s.justifyContent,
alignItems: s.alignItems,
flexWrap: s.flexWrap,
gap: s.gap,
// Grid
gridTemplateColumns: s.gridTemplateColumns,
gridTemplateRows: s.gridTemplateRows,
gridAutoFlow: s.gridAutoFlow,
// Dimensions
width: s.width,
height: s.height,
maxWidth: s.maxWidth,
minWidth: s.minWidth,
// Spacing
padding: s.padding,
margin: s.margin
};
};
/**
* Identify layout pattern for an element
*/
const identifyPattern = (props) => {
const { display, flexDirection, gridTemplateColumns } = props;
if (display === 'flex') {
if (flexDirection === 'column') return 'flex-column';
if (flexDirection === 'row') return 'flex-row';
return 'flex';
}
if (display === 'grid') {
const cols = gridTemplateColumns;
if (cols && cols !== 'none') {
const colCount = cols.split(' ').length;
return `grid-${colCount}col`;
}
return 'grid';
}
if (display === 'inline-flex') return 'inline-flex';
if (display === 'block') return 'block';
return display;
};
/**
* Build layout tree recursively
*/
const buildLayoutTree = (element, depth = 0, maxDepth = 3) => {
if (depth > maxDepth) return null;
const props = extractLayoutProps(element);
const bounds = getBounds(element);
const pattern = identifyPattern(props);
// Get semantic role
const tagName = element.tagName.toLowerCase();
const classes = Array.from(element.classList).slice(0, 3); // Max 3 classes
const role = element.getAttribute('role');
// Build node
const node = {
tag: tagName,
classes: classes,
role: role,
pattern: pattern,
bounds: bounds,
layout: {
display: props.display,
position: props.position
}
};
// Add flex/grid specific properties
if (props.display === 'flex' || props.display === 'inline-flex') {
node.layout.flexDirection = props.flexDirection;
node.layout.justifyContent = props.justifyContent;
node.layout.alignItems = props.alignItems;
node.layout.gap = props.gap;
}
if (props.display === 'grid') {
node.layout.gridTemplateColumns = props.gridTemplateColumns;
node.layout.gridTemplateRows = props.gridTemplateRows;
node.layout.gap = props.gap;
}
// Process children for container elements
if (props.display === 'flex' || props.display === 'grid' || props.display === 'block') {
const children = Array.from(element.children);
if (children.length > 0 && children.length < 50) { // Limit to 50 children
node.children = children
.map(child => buildLayoutTree(child, depth + 1, maxDepth))
.filter(child => child !== null);
}
}
return node;
};
/**
* Find main layout containers
*/
const findMainContainers = () => {
const selectors = [
'body > header',
'body > nav',
'body > main',
'body > footer',
'[role="banner"]',
'[role="navigation"]',
'[role="main"]',
'[role="contentinfo"]'
];
const containers = [];
selectors.forEach(selector => {
const element = document.querySelector(selector);
if (element) {
const tree = buildLayoutTree(element, 0, 3);
if (tree) {
containers.push(tree);
}
}
});
return containers;
};
/**
* Analyze layout patterns
*/
const analyzePatterns = (containers) => {
const patterns = {
flexColumn: 0,
flexRow: 0,
grid: 0,
sticky: 0,
fixed: 0
};
const analyze = (node) => {
if (!node) return;
if (node.pattern === 'flex-column') patterns.flexColumn++;
if (node.pattern === 'flex-row') patterns.flexRow++;
if (node.pattern && node.pattern.startsWith('grid')) patterns.grid++;
if (node.layout.position === 'sticky') patterns.sticky++;
if (node.layout.position === 'fixed') patterns.fixed++;
if (node.children) {
node.children.forEach(analyze);
}
};
containers.forEach(analyze);
return patterns;
};
/**
* Main extraction function
*/
const extractLayout = () => {
const containers = findMainContainers();
const patterns = analyzePatterns(containers);
return {
metadata: {
extractedAt: new Date().toISOString(),
url: window.location.href,
method: 'layout-structure'
},
patterns: patterns,
structure: containers
};
};
// Execute and return results
return extractLayout();
})();

6
.claude/scripts/gemini-wrapper Normal file → Executable file
View File

@@ -52,6 +52,12 @@ GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Respect custom Gemini base URL
if [[ -n "$GOOGLE_GEMINI_BASE_URL" ]]; then
echo -e "${GREEN}🌐 Using custom Gemini base URL: $GOOGLE_GEMINI_BASE_URL${NC}" >&2
export GOOGLE_GEMINI_BASE_URL
fi
# Function to count tokens (approximate: chars/4) - optimized version
count_tokens() {
local total_chars=0

View File

@@ -58,7 +58,7 @@ build_exclusion_filters() {
# OS and editor files
".DS_Store" "Thumbs.db" "desktop.ini"
"*.stackdump" "core" "*.core"
"*.stackdump" "*.core"
# Cloud and deployment
".serverless" ".terraform" "terraform.tfstate"

View File

@@ -329,7 +329,7 @@ for style in $styles; do
### Style ${style}
EOF3
style_guide="../style-consolidation/style-${style}/style-guide.md"
style_guide="../style-extraction/style-${style}/style-guide.md"
if [[ -f "$style_guide" ]]; then
head -n 10 "$style_guide" | tail -n +2 >> PREVIEW.md 2>/dev/null || echo "Design philosophy and tokens" >> PREVIEW.md
else

View File

@@ -52,7 +52,7 @@ auto_detect_pages() {
# Auto-detect style variants count
auto_detect_style_variants() {
local base_path="$1"
local style_dir="$base_path/../style-consolidation"
local style_dir="$base_path/../style-extraction"
if [ ! -d "$style_dir" ]; then
log_warning "Style consolidation directory not found: $style_dir"
@@ -260,7 +260,7 @@ fi
# Validate STYLE_VARIANTS against actual style directories
if [ "$STYLE_VARIANTS" -gt 0 ]; then
style_dir="$BASE_PATH/../style-consolidation"
style_dir="$BASE_PATH/../style-extraction"
if [ ! -d "$style_dir" ]; then
log_error "Style consolidation directory not found: $style_dir"
@@ -337,7 +337,7 @@ for page in "${PAGE_ARRAY[@]}"; do
# Define file paths
TEMPLATE_HTML="_templates/${page}-layout-${l}.html"
STRUCTURAL_CSS="_templates/${page}-layout-${l}.css"
TOKEN_CSS="../style-consolidation/style-${s}/tokens.css"
TOKEN_CSS="../style-extraction/style-${s}/tokens.css"
OUTPUT_HTML="${page}-style-${s}-layout-${l}.html"
# Copy template and replace placeholders
@@ -376,7 +376,7 @@ across all style variants. The HTML structure is identical for all ${page}-layou
prototypes, with only the design tokens (colors, fonts, spacing) varying.
## Design System Reference
Refer to \`../style-consolidation/style-${s}/style-guide.md\` for:
Refer to \`../style-extraction/style-${s}/style-guide.md\` for:
- Design philosophy
- Token usage guidelines
- Component patterns
@@ -742,9 +742,9 @@ for s in $(seq 1 "$STYLE_VARIANTS"); do
cat >> PREVIEW.md <<STYLEEOF
### Style ${s}
- **Tokens:** \`../style-consolidation/style-${s}/design-tokens.json\`
- **CSS Variables:** \`../style-consolidation/style-${s}/tokens.css\`
- **Style Guide:** \`../style-consolidation/style-${s}/style-guide.md\`
- **Tokens:** \`../style-extraction/style-${s}/design-tokens.json\`
- **CSS Variables:** \`../style-extraction/style-${s}/tokens.css\`
- **Style Guide:** \`../style-extraction/style-${s}/style-guide.md\`
STYLEEOF
done

View File

@@ -1,72 +1,101 @@
#!/bin/bash
# Update CLAUDE.md for a specific module with automatic layer detection
# Update CLAUDE.md for a specific module with unified template
# Usage: update_module_claude.sh <module_path> [update_type] [tool]
# module_path: Path to the module directory
# update_type: full|related (default: full)
# tool: gemini|qwen|codex (default: gemini)
# Script automatically detects layer depth and selects appropriate template
#
# Features:
# - Respects .gitignore patterns (current directory or git root)
# - Unified template for all modules (folders and files)
# - Template-based documentation generation
# Build exclusion filters from .gitignore
build_exclusion_filters() {
local filters=""
# Common system/cache directories to exclude
local system_excludes=(
".git" "__pycache__" "node_modules" ".venv" "venv" "env"
"dist" "build" ".cache" ".pytest_cache" ".mypy_cache"
"coverage" ".nyc_output" "logs" "tmp" "temp"
)
for exclude in "${system_excludes[@]}"; do
filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
done
# Find and parse .gitignore (current dir first, then git root)
local gitignore_file=""
# Check current directory first
if [ -f ".gitignore" ]; then
gitignore_file=".gitignore"
else
# Try to find git root and check for .gitignore there
local git_root=$(git rev-parse --show-toplevel 2>/dev/null)
if [ -n "$git_root" ] && [ -f "$git_root/.gitignore" ]; then
gitignore_file="$git_root/.gitignore"
fi
fi
# Parse .gitignore if found
if [ -n "$gitignore_file" ]; then
while IFS= read -r line; do
# Skip empty lines and comments
[[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue
# Remove trailing slash and whitespace
line=$(echo "$line" | sed 's|/$||' | xargs)
# Skip wildcards patterns (too complex for simple find)
[[ "$line" =~ \* ]] && continue
# Add to filters
filters+=" -not -path '*/$line' -not -path '*/$line/*'"
done < "$gitignore_file"
fi
echo "$filters"
}
update_module_claude() {
local module_path="$1"
local update_type="${2:-full}"
local tool="${3:-gemini}"
# Validate parameters
if [ -z "$module_path" ]; then
echo "❌ Error: Module path is required"
echo "Usage: update_module_claude.sh <module_path> [update_type]"
return 1
fi
if [ ! -d "$module_path" ]; then
echo "❌ Error: Directory '$module_path' does not exist"
return 1
fi
# Check if directory has files
local file_count=$(find "$module_path" -maxdepth 1 -type f 2>/dev/null | wc -l)
# Build exclusion filters from .gitignore
local exclusion_filters=$(build_exclusion_filters)
# Check if directory has files (excluding gitignored paths)
local file_count=$(eval "find \"$module_path\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
if [ $file_count -eq 0 ]; then
echo "⚠️ Skipping '$module_path' - no files found"
echo "⚠️ Skipping '$module_path' - no files found (after .gitignore filtering)"
return 0
fi
# Determine documentation layer based on path patterns
local layer=""
local template_path=""
local analysis_strategy=""
# Clean path for pattern matching
local clean_path=$(echo "$module_path" | sed 's|^\./||')
# Pattern-based layer detection
if [ "$module_path" = "." ]; then
# Root directory
layer="Layer 1 (Root)"
template_path="$HOME/.claude/workflows/cli-templates/prompts/memory/claude-layer1-root.txt"
analysis_strategy="--all-files"
elif [[ "$clean_path" =~ ^[^/]+$ ]]; then
# Top-level directories (e.g., .claude, src, tests)
layer="Layer 2 (Domain)"
template_path="$HOME/.claude/workflows/cli-templates/prompts/memory/claude-layer2-domain.txt"
analysis_strategy="@{*/CLAUDE.md}"
elif [[ "$clean_path" =~ ^[^/]+/[^/]+$ ]]; then
# Second-level directories (e.g., .claude/scripts, src/components)
layer="Layer 3 (Module)"
template_path="$HOME/.claude/workflows/cli-templates/prompts/memory/claude-layer3-module.txt"
analysis_strategy="@{*/CLAUDE.md}"
else
# Deeper directories (e.g., .claude/workflows/cli-templates/prompts)
layer="Layer 4 (Sub-Module)"
template_path="$HOME/.claude/workflows/cli-templates/prompts/memory/claude-layer4-submodule.txt"
analysis_strategy="--all-files"
fi
# Use unified template for all modules
local template_path="$HOME/.claude/workflows/cli-templates/prompts/memory/claude-module-unified.txt"
local analysis_strategy="--all-files"
# Prepare logging info
local module_name=$(basename "$module_path")
echo "⚡ Updating: $module_path"
echo " Layer: $layer | Type: $update_type | Tool: $tool | Files: $file_count"
echo " Template: $(basename "$template_path") | Strategy: $analysis_strategy"
echo " Type: $update_type | Tool: $tool | Files: $file_count"
echo " Template: $(basename "$template_path")"
# Generate prompt with template injection
local template_content=""
@@ -74,7 +103,7 @@ update_module_claude() {
template_content=$(cat "$template_path")
else
echo " ⚠️ Template not found: $template_path, using fallback"
template_content="Update CLAUDE.md documentation for this module following hierarchy standards."
template_content="Update CLAUDE.md documentation for this module: document structure, key components, dependencies, and integration points."
fi
local update_context=""
@@ -82,8 +111,7 @@ update_module_claude() {
update_context="
Update Mode: Complete refresh
- Perform comprehensive analysis of all content
- Document patterns, architecture, and purpose
- Consider existing documentation hierarchy
- Document module structure, dependencies, and key components
- Follow template guidelines strictly"
else
update_context="
@@ -96,13 +124,13 @@ update_module_claude() {
local base_prompt="
⚠️ CRITICAL RULES - MUST FOLLOW:
1. ONLY modify CLAUDE.md files at any hierarchy level
1. ONLY modify CLAUDE.md files
2. NEVER modify source code files
3. Focus exclusively on updating documentation
4. Follow the template guidelines exactly
$template_content
$update_context"
# Execute update
@@ -116,38 +144,21 @@ update_module_claude() {
Module Information:
- Name: $module_name
- Path: $module_path
- Layer: $layer
- Tool: $tool
- Analysis Strategy: $analysis_strategy"
- Tool: $tool"
# Execute with selected tool
# Execute with selected tool (always use --all-files)
case "$tool" in
qwen)
if [ "$analysis_strategy" = "--all-files" ]; then
qwen --all-files --yolo -p "$final_prompt" 2>&1
tool_result=$?
else
qwen --yolo -p "$analysis_strategy $final_prompt" 2>&1
tool_result=$?
fi
qwen --all-files --yolo -p "$final_prompt" 2>&1
tool_result=$?
;;
codex)
if [ "$analysis_strategy" = "--all-files" ]; then
codex --full-auto exec "$final_prompt" --skip-git-repo-check -s danger-full-access 2>&1
tool_result=$?
else
codex --full-auto exec "$final_prompt CONTEXT: $analysis_strategy" --skip-git-repo-check -s danger-full-access 2>&1
tool_result=$?
fi
codex --full-auto exec "$final_prompt" --skip-git-repo-check -s danger-full-access 2>&1
tool_result=$?
;;
gemini|*)
if [ "$analysis_strategy" = "--all-files" ]; then
gemini --all-files --yolo -p "$final_prompt" 2>&1
tool_result=$?
else
gemini --yolo -p "$analysis_strategy $final_prompt" 2>&1
tool_result=$?
fi
gemini --all-files --yolo -p "$final_prompt" 2>&1
tool_result=$?
;;
esac

View File

@@ -1,16 +1,29 @@
Analyze system architecture and design decisions:
Analyze system architecture and design decisions.
## Required Analysis:
## CORE CHECKLIST ⚡
□ Analyze system-wide structure, not just isolated components
□ Provide file:line references for key architectural elements
□ Distinguish between intended design and actual implementation
□ Apply RULES template requirements exactly as specified
## REQUIRED ANALYSIS
1. Identify main architectural patterns and design principles
2. Map module dependencies and component relationships
3. Assess integration points and data flow patterns
4. Evaluate scalability and maintainability aspects
5. Document architectural trade-offs and design decisions
## Output Requirements:
## OUTPUT REQUIREMENTS
- Architectural diagrams or textual descriptions
- Dependency mapping with specific file references
- Integration point documentation with examples
- Scalability assessment and bottleneck identification
- Prioritized recommendations for architectural improvement
Focus on high-level design patterns and system-wide architectural concerns.
## VERIFICATION CHECKLIST ✓
□ All major components and their relationships analyzed
□ Key architectural decisions and trade-offs are documented
□ Data flow and integration points are clearly mapped
□ Scalability and maintainability findings are supported by evidence
Focus: High-level design patterns and system-wide architectural concerns.

View File

@@ -1,16 +1,28 @@
Analyze implementation patterns and code structure:
Analyze implementation patterns and code structure.
## Required Analysis:
## CORE CHECKLIST ⚡
□ Analyze ALL files in CONTEXT (not just samples)
□ Provide file:line references for every pattern identified
□ Distinguish between good patterns and anti-patterns
□ Apply RULES template requirements exactly as specified
## REQUIRED ANALYSIS
1. Identify common code patterns and architectural decisions
2. Extract reusable utilities and shared components
3. Document existing conventions and coding standards
4. Assess pattern consistency and identify anti-patterns
5. Suggest improvements and optimization opportunities
## Output Requirements:
## OUTPUT REQUIREMENTS
- Specific file:line references for all findings
- Code snippets demonstrating identified patterns
- Clear recommendations for pattern improvements
- Standards compliance assessment
- Standards compliance assessment with priority levels
Focus on actionable insights and concrete implementation guidance.
## VERIFICATION CHECKLIST ✓
□ All CONTEXT files analyzed (not partial coverage)
□ Every pattern backed by code reference (file:line)
□ Anti-patterns clearly distinguished from good patterns
□ Recommendations prioritized by impact
Focus: Actionable insights with concrete implementation guidance.

View File

@@ -1,16 +1,29 @@
Analyze performance characteristics and optimization opportunities:
Analyze performance characteristics and optimization opportunities.
## Required Analysis:
## CORE CHECKLIST ⚡
□ Focus on measurable metrics (e.g., latency, memory, CPU usage)
□ Provide file:line references for all identified bottlenecks
□ Distinguish between algorithmic and resource-based issues
□ Apply RULES template requirements exactly as specified
## REQUIRED ANALYSIS
1. Identify performance bottlenecks and resource usage patterns
2. Assess algorithm efficiency and data structure choices
3. Evaluate caching strategies and optimization techniques
4. Review memory management and resource cleanup
5. Document performance metrics and improvement opportunities
## Output Requirements:
- Performance bottleneck identification with specific locations
## OUTPUT REQUIREMENTS
- Performance bottleneck identification with specific file:line locations
- Algorithm complexity analysis and optimization suggestions
- Caching pattern documentation and recommendations
- Memory usage patterns and optimization opportunities
- Prioritized list of performance improvements
Focus on measurable performance improvements and concrete optimization strategies.
## VERIFICATION CHECKLIST ✓
□ All CONTEXT files analyzed for performance characteristics
□ Every bottleneck is backed by a code reference (file:line)
□ Both algorithmic and resource-related issues are covered
□ Recommendations are prioritized by potential impact
Focus: Measurable performance improvements and concrete optimization strategies.

View File

@@ -1,16 +1,29 @@
Analyze code quality and maintainability aspects:
Analyze code quality and maintainability aspects.
## Required Analysis:
## CORE CHECKLIST ⚡
□ Analyze against the project's established coding standards
□ Provide file:line references for all quality issues
□ Assess both implementation code and test coverage
□ Apply RULES template requirements exactly as specified
## REQUIRED ANALYSIS
1. Assess code organization and structural quality
2. Evaluate naming conventions and readability standards
3. Review error handling and logging practices
4. Analyze test coverage and testing strategies
5. Document technical debt and improvement priorities
## Output Requirements:
## OUTPUT REQUIREMENTS
- Code quality metrics and specific improvement areas
- Naming convention consistency analysis
- Error handling pattern documentation
- Error handling and logging pattern documentation
- Test coverage assessment with gap identification
- Prioritized list of technical debt to address
Focus on maintainability improvements and long-term code health.
## VERIFICATION CHECKLIST ✓
□ All CONTEXT files analyzed for code quality
□ Every finding is backed by a code reference (file:line)
□ Both code and test quality have been evaluated
□ Recommendations are prioritized by impact on maintainability
Focus: Maintainability improvements and long-term code health.

View File

@@ -1,16 +1,29 @@
Analyze security implementation and potential vulnerabilities:
Analyze security implementation and potential vulnerabilities.
## Required Analysis:
## CORE CHECKLIST ⚡
□ Identify all data entry points and external system interfaces
□ Provide file:line references for all potential vulnerabilities
□ Classify risks by severity and type (e.g., OWASP Top 10)
□ Apply RULES template requirements exactly as specified
## REQUIRED ANALYSIS
1. Identify authentication and authorization mechanisms
2. Assess input validation and sanitization practices
3. Review data encryption and secure storage methods
4. Evaluate API security and access control patterns
5. Document security risks and compliance considerations
## Output Requirements:
## OUTPUT REQUIREMENTS
- Security vulnerability findings with file:line references
- Authentication/authorization pattern documentation
- Input validation examples and gaps
- Input validation examples and identified gaps
- Encryption usage patterns and recommendations
- Prioritized remediation plan based on risk level
Focus on identifying security gaps and providing actionable remediation steps.
## VERIFICATION CHECKLIST ✓
□ All CONTEXT files analyzed for security vulnerabilities
□ Every finding is backed by a code reference (file:line)
□ Both authentication and data handling are covered
□ Recommendations include clear, actionable remediation steps
Focus: Identifying security gaps and providing actionable remediation steps.

View File

@@ -1,37 +1,55 @@
You are tasked with creating a reusable component in this codebase. Follow these guidelines:
Create a reusable component following project conventions and best practices.
## Design Phase:
## CORE CHECKLIST ⚡
□ Analyze existing component patterns BEFORE implementing
□ Follow established naming conventions and prop patterns
□ Include comprehensive tests (unit + visual + accessibility)
□ Provide complete TypeScript types and documentation
## IMPLEMENTATION PHASES
### Design Phase
1. Analyze existing component patterns and structures
2. Identify reusable design principles and styling approaches
3. Review component hierarchy and prop patterns
4. Study existing component documentation and usage
## Development Phase:
### Development Phase
1. Create component with proper TypeScript interfaces
2. Implement following established naming conventions
3. Add appropriate default props and validation
4. Include comprehensive prop documentation
## Styling Phase:
### Styling Phase
1. Follow existing styling methodology (CSS modules, styled-components, etc.)
2. Ensure responsive design principles
3. Add proper theming support if applicable
4. Include accessibility considerations (ARIA, keyboard navigation)
## Testing Phase:
### Testing Phase
1. Write component tests covering all props and states
2. Test accessibility compliance
3. Add visual regression tests if applicable
4. Test component in different contexts and layouts
## Documentation Phase:
### Documentation Phase
1. Create usage examples and code snippets
2. Document all props and their purposes
3. Include accessibility guidelines
4. Add integration examples with other components
## Output Requirements:
- Provide complete component implementation
- Include comprehensive TypeScript types
- Show usage examples and integration patterns
- Document component API and best practices
## OUTPUT REQUIREMENTS
- Complete component implementation with TypeScript types
- Usage examples and integration patterns
- Component API documentation and best practices
- Test suite with accessibility validation
- File:line references for pattern sources
## VERIFICATION CHECKLIST ✓
□ Implementation follows existing component patterns
□ Complete TypeScript types and prop documentation
□ Comprehensive tests (unit + visual + accessibility)
□ Accessibility compliance (ARIA, keyboard navigation)
□ Usage examples and integration documented
Focus: Production-ready reusable component with comprehensive documentation and testing.

View File

@@ -1,37 +1,55 @@
You are tasked with debugging and resolving issues in this codebase. Follow these systematic guidelines:
Debug and resolve issues systematically in the codebase.
## Issue Analysis Phase:
## CORE CHECKLIST ⚡
□ Identify and reproduce the issue completely before fixing
□ Perform root cause analysis (not just symptom treatment)
□ Provide file:line references for all changes
□ Add tests to prevent regression of this specific issue
## IMPLEMENTATION PHASES
### Issue Analysis Phase
1. Identify and reproduce the reported issue
2. Analyze error logs and stack traces
3. Study code flow and identify potential failure points
4. Review recent changes that might have introduced the issue
## Investigation Phase:
### Investigation Phase
1. Add strategic logging and debugging statements
2. Use debugging tools and profilers as appropriate
3. Test with different input conditions and edge cases
4. Isolate the root cause through systematic elimination
## Root Cause Analysis:
### Root Cause Analysis
1. Document the exact cause of the issue
2. Identify contributing factors and conditions
3. Assess impact scope and affected functionality
4. Determine if similar issues exist elsewhere
## Resolution Phase:
### Resolution Phase
1. Implement minimal, targeted fix for the root cause
2. Ensure fix doesn't introduce new issues or regressions
3. Add proper error handling and validation
4. Include defensive programming measures
## Prevention Phase:
### Prevention Phase
1. Add tests to prevent regression of this issue
2. Improve error messages and logging
3. Add monitoring or alerts for early detection
4. Document lessons learned and prevention strategies
## Output Requirements:
- Provide detailed root cause analysis
- Show exact code changes made to resolve the issue
- Include new tests added to prevent regression
- Document debugging process and lessons learned
## OUTPUT REQUIREMENTS
- Detailed root cause analysis with file:line references
- Exact code changes made to resolve the issue
- New tests added to prevent regression
- Debugging process documentation and lessons learned
- Impact assessment and affected functionality
## VERIFICATION CHECKLIST ✓
□ Root cause identified and documented (not just symptoms)
□ Minimal fix applied without introducing new issues
□ Tests added to prevent this specific regression
□ Similar issues checked and addressed if found
□ Prevention measures documented
Focus: Systematic root cause resolution with regression prevention.

View File

@@ -1,31 +1,49 @@
You are tasked with implementing a new feature in this codebase. Follow these guidelines:
Implement a new feature following project conventions and best practices.
## Analysis Phase:
## CORE CHECKLIST ⚡
□ Study existing code patterns BEFORE implementing
□ Follow established project conventions and architecture
□ Include comprehensive tests (unit + integration)
□ Provide file:line references for all changes
## IMPLEMENTATION PHASES
### Analysis Phase
1. Study existing code patterns and conventions
2. Identify similar features and their implementation approaches
3. Review project architecture and design principles
4. Understand dependencies and integration points
## Implementation Phase:
### Implementation Phase
1. Create feature following established patterns
2. Implement with proper error handling and validation
3. Add comprehensive logging for debugging
4. Follow security best practices
## Integration Phase:
### Integration Phase
1. Ensure seamless integration with existing systems
2. Update configuration files as needed
3. Add proper TypeScript types and interfaces
4. Update documentation and comments
## Testing Phase:
### Testing Phase
1. Write unit tests covering edge cases
2. Add integration tests for feature workflows
3. Verify error scenarios are properly handled
4. Test performance and security implications
## Output Requirements:
- Provide file:line references for all changes
- Include code examples demonstrating key patterns
- Explain architectural decisions made
- Document any new dependencies or configurations
## OUTPUT REQUIREMENTS
- File:line references for all changes
- Code examples demonstrating key patterns
- Explanation of architectural decisions made
- Documentation of new dependencies or configurations
- Test coverage summary
## VERIFICATION CHECKLIST ✓
□ Implementation follows existing patterns (no divergence)
□ Complete test coverage (unit + integration)
□ Documentation updated (code comments + external docs)
□ Integration verified (no breaking changes)
□ Security and performance validated
Focus: Production-ready implementation with comprehensive testing and documentation.

View File

@@ -1,37 +1,55 @@
You are tasked with refactoring existing code to improve quality, performance, or maintainability. Follow these guidelines:
Refactor existing code to improve quality, performance, or maintainability.
## Analysis Phase:
## CORE CHECKLIST ⚡
□ Preserve existing functionality (no behavioral changes unless specified)
□ Ensure all existing tests continue to pass
□ Plan incremental changes (avoid big-bang refactoring)
□ Provide file:line references for all modifications
## IMPLEMENTATION PHASES
### Analysis Phase
1. Identify code smells and technical debt
2. Analyze performance bottlenecks and inefficiencies
3. Review code complexity and maintainability metrics
4. Study existing test coverage and identify gaps
## Planning Phase:
### Planning Phase
1. Create refactoring strategy preserving existing functionality
2. Identify breaking changes and migration paths
3. Plan incremental refactoring steps
4. Consider backward compatibility requirements
## Refactoring Phase:
### Refactoring Phase
1. Apply SOLID principles and design patterns
2. Improve code readability and documentation
3. Optimize performance while maintaining functionality
4. Reduce code duplication and improve reusability
## Validation Phase:
### Validation Phase
1. Ensure all existing tests continue to pass
2. Add new tests for improved code coverage
3. Verify performance improvements with benchmarks
4. Test edge cases and error scenarios
## Migration Phase:
### Migration Phase
1. Update dependent code to use refactored interfaces
2. Update documentation and usage examples
3. Provide migration guides for breaking changes
4. Add deprecation warnings for old interfaces
## Output Requirements:
- Provide before/after code comparisons
- Document performance improvements achieved
- Include migration instructions for breaking changes
- Show updated test coverage and quality metrics
## OUTPUT REQUIREMENTS
- Before/after code comparisons with file:line references
- Performance improvements documented with benchmarks
- Migration instructions for breaking changes
- Updated test coverage and quality metrics
- Technical debt reduction summary
## VERIFICATION CHECKLIST ✓
□ All existing tests pass (functionality preserved)
□ New tests added for improved coverage
□ Performance verified with benchmarks (if applicable)
□ Backward compatibility maintained or migration provided
□ Documentation updated with refactoring changes
Focus: Incremental quality improvement while preserving functionality.

View File

@@ -1,43 +1,61 @@
You are tasked with creating comprehensive tests for this codebase. Follow these guidelines:
Create comprehensive tests for the codebase.
## Test Strategy Phase:
## CORE CHECKLIST ⚡
□ Analyze existing test coverage and identify gaps
□ Follow project testing frameworks and conventions
□ Include unit, integration, and end-to-end tests
□ Ensure tests are reliable and deterministic
## IMPLEMENTATION PHASES
### Test Strategy Phase
1. Analyze existing test coverage and identify gaps
2. Study codebase architecture and critical paths
3. Identify edge cases and error scenarios
4. Review testing frameworks and conventions used
## Unit Testing Phase:
### Unit Testing Phase
1. Write tests for individual functions and methods
2. Test all branches and conditional logic
3. Cover edge cases and boundary conditions
4. Mock external dependencies appropriately
## Integration Testing Phase:
### Integration Testing Phase
1. Test interactions between components and modules
2. Verify API endpoints and data flow
3. Test database operations and transactions
4. Validate external service integrations
## End-to-End Testing Phase:
### End-to-End Testing Phase
1. Test complete user workflows and scenarios
2. Verify critical business logic and processes
3. Test error handling and recovery mechanisms
4. Validate performance under load
## Quality Assurance:
### Quality Assurance
1. Ensure tests are reliable and deterministic
2. Make tests readable and maintainable
3. Add proper test documentation and comments
4. Follow testing best practices and conventions
## Test Data Management:
### Test Data Management
1. Create realistic test data and fixtures
2. Ensure test isolation and cleanup
3. Use factories or builders for complex objects
4. Handle sensitive data appropriately in tests
## Output Requirements:
- Provide comprehensive test suite with high coverage
- Include performance benchmarks where relevant
- Document testing strategy and conventions used
- Show test coverage metrics and quality improvements
## OUTPUT REQUIREMENTS
- Comprehensive test suite with high coverage
- Performance benchmarks where relevant
- Testing strategy and conventions documentation
- Test coverage metrics and quality improvements
- File:line references for tested code
## VERIFICATION CHECKLIST ✓
□ Test coverage gaps identified and filled
□ All test types included (unit + integration + e2e)
□ Tests are reliable and deterministic (no flaky tests)
□ Test data properly managed (isolation + cleanup)
□ Testing conventions followed consistently
Focus: High-quality, reliable test suite with comprehensive coverage.

View File

@@ -1,337 +1,15 @@
# Unified API Documentation Template
Generate comprehensive API documentation for code or HTTP services.
Generate comprehensive API documentation. This template supports both **Code API** (for libraries/modules) and **HTTP API** (for web services). Include only the sections relevant to your project type.
## CORE CHECKLIST ⚡
□ Include only sections relevant to the project type (Code API vs. HTTP API)
□ Provide complete and runnable examples for HTTP APIs
□ Use signatures-only for Code API documentation (no implementation)
□ Document all public-facing APIs, not internal ones
---
## UNIFIED API DOCUMENTATION TEMPLATE
## Part A: Code API (For Libraries, Modules, SDKs)
This template supports both **Code API** (for libraries/modules) and **HTTP API** (for web services). Include only the sections relevant to your project type.
**Use this section when documenting**: Exported functions, classes, interfaces, and types from code modules.
---
**Omit this section if**: This is a pure web service with only HTTP endpoints and no programmatic API.
### 1. Exported Functions
For each exported function, provide:
```typescript
/**
* Brief one-line description of what the function does
* @param paramName - Parameter type and description
* @returns Return type and description
* @throws ErrorType - When this error occurs
*/
export function functionName(paramName: ParamType): ReturnType;
```
**Example**:
```typescript
/**
* Authenticates a user with email and password
* @param credentials - User email and password
* @returns Authentication token and user info
* @throws AuthenticationError - When credentials are invalid
*/
export function authenticate(credentials: Credentials): Promise<AuthResult>;
```
### 2. Exported Classes
For each exported class, provide:
```typescript
/**
* Brief one-line description of the class purpose
*/
export class ClassName {
constructor(param: Type);
// Public properties
propertyName: Type;
// Public methods
methodName(param: Type): ReturnType;
}
```
**Example**:
```typescript
/**
* Manages user session lifecycle and token refresh
*/
export class SessionManager {
constructor(config: SessionConfig);
// Current session state
isActive: boolean;
// Refresh the session token
refresh(): Promise<void>;
// Terminate the session
destroy(): void;
}
```
### 3. Exported Interfaces
For each exported interface, provide:
```typescript
/**
* Brief description of what this interface represents
*/
export interface InterfaceName {
field1: Type; // Field description
field2: Type; // Field description
method?: (param: Type) => ReturnType; // Optional method
}
```
### 4. Type Definitions
For each exported type, provide:
```typescript
/**
* Brief description of what this type represents
*/
export type TypeName = string | number | CustomType;
```
---
## Part B: HTTP API (For Web Services, REST APIs, GraphQL)
**Use this section when documenting**: HTTP endpoints, REST APIs, GraphQL APIs, webhooks.
**Omit this section if**: This is a library/SDK with no HTTP interface.
### 1. Overview
[Brief description of the API's purpose and capabilities]
**Base URL**:
```
Production: https://api.example.com/v1
Staging: https://staging.api.example.com/v1
Development: http://localhost:3000/api/v1
```
### 2. Authentication
#### Authentication Method
[e.g., JWT Bearer Token, OAuth2, API Keys]
#### Obtaining Credentials
```bash
# Example: Login to get token
curl -X POST https://api.example.com/v1/auth/login \
-H "Content-Type: application/json" \
-d '{"email": "user@example.com", "password": "password"}'
```
#### Using Credentials
```http
GET /api/resource HTTP/1.1
Authorization: Bearer <token>
```
### 3. Common Response Codes
| Code | Description | Common Causes |
|------|-------------|---------------|
| 200 | Success | Request completed successfully |
| 201 | Created | Resource created successfully |
| 400 | Bad Request | Invalid request body or parameters |
| 401 | Unauthorized | Missing or invalid authentication |
| 403 | Forbidden | Authenticated but not authorized |
| 404 | Not Found | Resource does not exist |
| 429 | Too Many Requests | Rate limit exceeded |
| 500 | Internal Server Error | Server-side error |
### 4. Endpoints
#### Resource: [Resource Name, e.g., Users]
##### GET /resource
**Description**: [What this endpoint does]
**Query Parameters**:
| Parameter | Type | Required | Description | Default |
|-----------|------|----------|-------------|---------|
| `limit` | number | No | Number of results | 20 |
| `offset` | number | No | Pagination offset | 0 |
**Example Request**:
```http
GET /users?limit=10&offset=0 HTTP/1.1
Authorization: Bearer <token>
```
**Response (200 OK)**:
```json
{
"data": [
{
"id": 1,
"name": "John Doe",
"email": "john@example.com"
}
],
"pagination": {
"total": 100,
"limit": 10,
"offset": 0
}
}
```
**Error Response (401 Unauthorized)**:
```json
{
"error": {
"code": "UNAUTHORIZED",
"message": "Invalid or expired token"
}
}
```
---
##### POST /resource
**Description**: [What this endpoint does]
**Request Body**:
```json
{
"field1": "value",
"field2": 123
}
```
**Validation Rules**:
- `field1`: Required, 2-100 characters
- `field2`: Required, positive integer
**Response (201 Created)**:
```json
{
"id": 124,
"field1": "value",
"field2": 123,
"created_at": "2024-01-20T12:00:00Z"
}
```
---
##### PUT /resource/:id
**Description**: [What this endpoint does]
**Path Parameters**:
| Parameter | Type | Description |
|-----------|------|-------------|
| `id` | number | Resource ID |
**Request Body** (all fields optional):
```json
{
"field1": "new value"
}
```
**Response (200 OK)**:
```json
{
"id": 124,
"field1": "new value",
"updated_at": "2024-01-21T09:15:00Z"
}
```
---
##### DELETE /resource/:id
**Description**: [What this endpoint does]
**Path Parameters**:
| Parameter | Type | Description |
|-----------|------|-------------|
| `id` | number | Resource ID |
**Response (204 No Content)**:
[Empty response body]
---
### 5. Rate Limiting
- **Limit**: 1000 requests per hour per API key
- **Headers**:
- `X-RateLimit-Limit`: Total requests allowed
- `X-RateLimit-Remaining`: Requests remaining
- `X-RateLimit-Reset`: Unix timestamp when limit resets
**Example Response Headers**:
```http
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 847
X-RateLimit-Reset: 1640000000
```
### 6. Webhooks (Optional)
[Description of webhook system if applicable]
**Webhook Events**:
- `resource.created` - Triggered when a new resource is created
- `resource.updated` - Triggered when a resource is updated
- `resource.deleted` - Triggered when a resource is deleted
**Webhook Payload Example**:
```json
{
"event": "resource.created",
"timestamp": "2024-01-20T12:00:00Z",
"data": {
"id": 124,
"field1": "value"
}
}
```
---
## Rules
1. **Include only relevant sections** - Skip Part A for pure web services, skip Part B for pure libraries
2. **Signatures only in Part A** - No implementation code in Code API section
3. **Complete examples in Part B** - All HTTP examples should be runnable
4. **Consistent formatting** - Use language-specific code blocks (TypeScript, HTTP, JSON, bash)
5. **Brief descriptions** - One line per item is sufficient for Code API
6. **Detailed explanations for HTTP** - Include request/response examples, validation rules, error cases
7. **Alphabetical order** - Sort items within each section for easy lookup
8. **Public API only** - Do not document internal/private exports or endpoints
---
## Template Usage Guidelines
### For Module-Level API.md (Code API)
**Use**: Part A only
**Location**: `modules/[module-name]/API.md`
**Focus**: Exported functions, classes, interfaces
### For Project-Level HTTP API Documentation
**Use**: Part B only
**Location**: `.workflow/docs/api/README.md`
**Focus**: REST/GraphQL endpoints, authentication
### For Full-Stack Projects (Both Code and HTTP APIs)
**Use**: Both Part A and Part B
**Organization**:
- Part A for SDK/client library APIs
- Part B for HTTP endpoints
---
**Last Updated**: [Auto-generated timestamp]
**API Version**: [Version number]
**Module/Service Path**: [Auto-fill with actual path]
...(content truncated)...

View File

@@ -1,68 +1,27 @@
# Folder Navigation Documentation Template
Generate a navigation README for directories that contain only subdirectories.
Generate a navigation README for directories that contain only subdirectories (no code files). This serves as an index to help readers navigate to specific modules.
## CORE CHECKLIST ⚡
□ Keep the content brief and act as an index
□ Use one-line descriptions for each module
□ Ensure all mentioned modules link to their respective READMEs
□ Use scannable formats like tables and lists
## Structure
## REQUIRED CONTENT
1. **Overview**: Brief description of the directory's purpose.
2. **Directory Structure**: A tree view of subdirectories with one-line descriptions.
3. **Module Quick Reference**: A table with links, purposes, and key features.
4. **How to Navigate**: Guidance on which module to explore for specific needs.
5. **Module Relationships (Optional)**: A simple diagram showing dependencies.
### 1. Overview
## OUTPUT REQUIREMENTS
- A scannable index for navigating subdirectories.
- Links to each submodule's detailed documentation.
- A clear, high-level overview of the directory's contents.
Brief description of what this directory/category contains:
## VERIFICATION CHECKLIST ✓
□ The generated README is brief and serves as a scannable index
□ All submodules are linked correctly
□ Descriptions are concise and clear
□ The structure follows the required content outline
> The `modules/` directory contains the core business logic modules of the application. Each subdirectory represents a self-contained functional module with its own responsibilities.
### 2. Directory Structure
Provide a tree view of the subdirectories with brief descriptions:
```
modules/
├── auth/ - User authentication and authorization
├── api/ - API route handlers and middleware
├── database/ - Database connections and ORM models
└── utils/ - Shared utility functions
```
### 3. Module Quick Reference
Table format for quick scanning:
| Module | Purpose | Key Features |
|--------|---------|--------------|
| [auth](./auth/) | Authentication | JWT tokens, session management |
| [api](./api/) | API routing | REST endpoints, validation |
| [database](./database/) | Data layer | PostgreSQL, migrations |
| [utils](./utils/) | Utilities | Logging, helpers |
### 4. How to Navigate
Guidance on which module to explore based on needs:
- **For authentication logic** → [auth module](./auth/)
- **For API endpoints** → [api module](./api/)
- **For database queries** → [database module](./database/)
- **For helper functions** → [utils module](./utils/)
### 5. Module Relationships (Optional)
If modules have significant dependencies, show them:
```
api → auth (uses for authentication)
api → database (uses for data access)
auth → database (uses for user storage)
```
---
## Rules
1. **Keep it brief** - This is an index, not detailed documentation
2. **One-line descriptions** - Each module gets a concise purpose statement
3. **Scannable format** - Use tables and lists for quick navigation
4. **Link to submodules** - Every module mentioned should link to its README
5. **No code examples** - This is navigation only
---
**Directory Path**: [Auto-fill with actual directory path]
**Last Updated**: [Auto-generated timestamp]
Focus: Creating a clear and concise navigation hub for parent directories.

View File

@@ -1,98 +1,40 @@
# Module README Documentation Template
Generate comprehensive module documentation focused on understanding and usage.
Generate comprehensive module documentation focused on understanding and usage. Explain WHAT the module does, WHY it exists, and HOW to use it. Do NOT duplicate API signatures (those belong in API.md).
## CORE CHECKLIST ⚡
□ Explain WHAT the module does, WHY it exists, and HOW to use it
□ Do NOT duplicate API signatures from API.md; refer to it instead
□ Provide practical, real-world usage examples
□ Clearly define the module's boundaries and dependencies
## Structure
## DOCUMENTATION STRUCTURE
### 1. Purpose
**What**: Clearly state what this module is responsible for
**Why**: Explain why this module exists and what problems it solves
**Boundaries**: Define what is IN scope and OUT of scope
Example:
> The `auth` module handles user authentication and authorization. It exists to centralize security logic and provide a consistent authentication interface across the application. It does NOT handle user profile management or session storage.
- **What**: Clearly state what this module is responsible for.
- **Why**: Explain the problem it solves.
- **Boundaries**: Define what is in and out of scope.
### 2. Core Concepts
List and explain key concepts, patterns, or abstractions used by this module:
- **Concept 1**: [Brief explanation of important concept]
- **Concept 2**: [Another key concept users should understand]
- **Pattern**: [Architectural pattern used, e.g., "Uses middleware pattern for request processing"]
- Explain key concepts, patterns, or abstractions.
### 3. Usage Scenarios
Provide 2-4 common use cases with code examples:
#### Scenario 1: [Common use case title]
```typescript
// Brief example showing how to use the module for this scenario
import { functionName } from './module';
const result = functionName(input);
```
#### Scenario 2: [Another common use case]
```typescript
// Another practical example
```
- Provide 2-4 common use cases with code examples.
### 4. Dependencies
#### Internal Dependencies
List other project modules this module depends on and explain why:
- **[Module Name]** - [Why this dependency exists and what it provides]
#### External Dependencies
List third-party libraries and their purpose:
- **[Library Name]** (`version`) - [What functionality it provides to this module]
- List internal and external dependencies with explanations.
### 5. Configuration
#### Environment Variables
List any environment variables the module uses:
- `ENV_VAR_NAME` - [Description, type, default value]
#### Configuration Options
If the module accepts configuration objects:
```typescript
// Example configuration
const config = {
option1: value, // Description of option1
option2: value, // Description of option2
};
```
- Document environment variables and configuration options.
### 6. Testing
Explain how to test code that uses this module:
```bash
# Command to run tests for this module
npm test -- path/to/module
```
**Test Coverage**: [Brief note on what's tested]
- Explain how to run tests for the module.
### 7. Common Issues
- List common problems and their solutions.
List 2-3 common problems and their solutions:
## VERIFICATION CHECKLIST ✓
□ The module's purpose, scope, and boundaries are clearly defined
□ Core concepts are explained for better understanding
□ Usage examples are practical and demonstrate real-world scenarios
□ All dependencies and configuration options are documented
#### Issue: [Common problem description]
**Solution**: [How to resolve it]
---
## Rules
1. **No API duplication** - Refer to API.md for signatures
2. **Focus on understanding** - Explain concepts, not just code
3. **Practical examples** - Show real usage, not trivial cases
4. **Clear dependencies** - Help readers understand module relationships
5. **Concise** - Each section should be scannable and to-the-point
---
**Module Path**: [Auto-fill with actual module path]
**Last Updated**: [Auto-generated timestamp]
**See also**: [Link to API.md for interface details]
Focus: Explaining the module's purpose and usage, not just its API.

View File

@@ -1,167 +1,41 @@
# Project Architecture Documentation Template
Generate comprehensive architecture documentation for the entire project.
Generate comprehensive architecture documentation that synthesizes information from all module documents. Focus on system-level design, module relationships, and aggregated API overview. This document should be created AFTER all module documentation is complete.
## CORE CHECKLIST ⚡
□ Synthesize information from all modules; do not duplicate content
□ Maintain a system-level perspective, focusing on module interactions
□ Use visual aids (like ASCII diagrams) to clarify structure
□ Explain the WHY behind architectural decisions
## Structure
## DOCUMENTATION STRUCTURE
### 1. System Overview
High-level description of the system architecture:
**Architectural Style**: [e.g., Layered, Microservices, Event-Driven, Hexagonal]
**Core Principles**:
- [Principle 1, e.g., "Separation of concerns"]
- [Principle 2, e.g., "Dependency inversion"]
- [Principle 3, e.g., "Single responsibility"]
**Technology Stack**:
- **Languages**: [Primary programming languages]
- **Frameworks**: [Key frameworks]
- **Databases**: [Data storage solutions]
- **Infrastructure**: [Deployment, hosting, CI/CD]
- Architectural Style, Core Principles, and Technology Stack.
### 2. System Structure
Visual representation of the system structure using text diagrams:
```
┌─────────────────────────────────────┐
│ Application Layer │
│ (API Routes, Controllers) │
└────────────┬────────────────────────┘
┌────────────▼────────────────────────┐
│ Business Logic Layer │
│ (Modules: auth, orders, payments) │
└────────────┬────────────────────────┘
┌────────────▼────────────────────────┐
│ Data Access Layer │
│ (Database, ORM, Repositories) │
└─────────────────────────────────────┘
```
- Visual representation of the system's layers or components.
### 3. Module Map
Comprehensive list of all modules with their responsibilities:
| Module | Layer | Responsibility | Dependencies |
|--------|-------|----------------|--------------|
| `auth` | Business | Authentication & authorization | database, utils |
| `api` | Application | HTTP routing & validation | auth, orders |
| `database` | Data | Data persistence | - |
| `utils` | Infrastructure | Shared utilities | - |
- A table listing all modules, their layers, responsibilities, and dependencies.
### 4. Module Interactions
Describe key interaction patterns between modules:
#### Data Flow Example: User Authentication
```
1. Client → api/login endpoint
2. api → auth.authenticateUser()
3. auth → database.findUser()
4. database → PostgreSQL
5. auth → JWT token generation
6. api → Response with token
```
#### Dependency Graph
```
api ──────┐
├──→ auth ───→ database
orders ───┤ ↑
└──────────────┘
```
- Describe key data flows and show a dependency graph.
### 5. Design Patterns
Document key architectural patterns used:
#### Pattern 1: [Pattern Name, e.g., "Repository Pattern"]
- **Where**: [Which modules use this pattern]
- **Why**: [Reason for using this pattern]
- **How**: [Brief explanation of implementation]
#### Pattern 2: [Another pattern]
[Similar structure]
- Document key architectural patterns used across the project.
### 6. Aggregated API Overview
High-level summary of all public APIs across modules. Group by category:
#### Authentication APIs
- `auth.authenticate(credentials)` - Validate user credentials
- `auth.authorize(user, permission)` - Check user permissions
- `auth.generateToken(userId)` - Create JWT token
#### Data APIs
- `database.findOne(query)` - Find single record
- `database.findMany(query)` - Find multiple records
- `database.insert(data)` - Insert new record
#### Utility APIs
- `utils.logger.log(message)` - Application logging
- `utils.validator.validate(data, schema)` - Data validation
*Note: For detailed API signatures, refer to individual module API.md files*
- A high-level summary of all public APIs, grouped by category.
### 7. Data Flow
- Describe the typical request lifecycle or event flow.
Describe how data moves through the system:
### 8. Security and Scalability
- Overview of security measures and scalability considerations.
**Request Lifecycle**:
1. HTTP request enters through API layer
2. Request validation and authentication (auth module)
3. Business logic processing (domain modules)
4. Data persistence (database module)
5. Response formatting and return
## VERIFICATION CHECKLIST ✓
□ The documentation provides a cohesive, system-level view
□ Module interactions and dependencies are clearly illustrated
□ The rationale behind major design patterns and decisions is explained
□ The document synthesizes, rather than duplicates, module-level details
**Event Flow** (if applicable):
- [Describe event-driven flows if present]
### 8. Security Architecture
Overview of security measures:
- **Authentication**: [JWT, OAuth2, etc.]
- **Authorization**: [RBAC, ACL, etc.]
- **Data Protection**: [Encryption, hashing]
- **API Security**: [Rate limiting, CORS, etc.]
### 9. Scalability Considerations
Architectural decisions that support scalability:
- [Horizontal scaling approach]
- [Caching strategy]
- [Database optimization]
- [Load balancing]
---
## Rules
1. **Synthesize, don't duplicate** - Reference module docs, don't copy them
2. **System-level perspective** - Focus on how modules work together
3. **Visual aids** - Use diagrams for clarity (ASCII art is fine)
4. **Aggregated APIs** - One-line summaries only, link to detailed docs
5. **Design rationale** - Explain WHY decisions were made
6. **Maintain consistency** - Ensure all module docs are considered
---
## Required Inputs
This document requires the following to be available:
- All module README.md files (for understanding module purposes)
- All module API.md files (for aggregating API overview)
- Project README.md (for context and navigation)
---
**Last Updated**: [Auto-generated timestamp]
**See also**:
- [Project README](./README.md) for project overview
- [Module Documentation](./modules/) for detailed module docs
Focus: Providing a holistic, system-level understanding of the project architecture.

View File

@@ -1,205 +1,35 @@
# Project Examples Documentation Template
Generate practical, end-to-end examples demonstrating core project usage.
Generate practical, end-to-end examples demonstrating core usage patterns of the project. Focus on realistic scenarios that span multiple modules. These examples should help users understand how to accomplish common tasks.
## CORE CHECKLIST ⚡
□ Provide complete, runnable code for every example
□ Focus on realistic, real-world scenarios, not trivial cases
□ Explain the flow and how different modules interact
□ Include expected output to verify correctness
## Structure
## EXAMPLES STRUCTURE
### 1. Introduction
Brief overview of what these examples cover:
> This document provides complete, working examples for common use cases. Each example demonstrates how multiple modules work together to accomplish a specific task.
**Prerequisites**:
- [List required setup, e.g., "Project installed and configured"]
- [Environment requirements, e.g., "PostgreSQL running on localhost"]
- Overview of the examples and any prerequisites.
### 2. Quick Start Example
The simplest possible working example to get started:
```typescript
// The minimal example to verify setup
import { App } from './src';
const app = new App();
await app.start();
console.log('Application running!');
```
**What this does**: [Brief explanation]
- The simplest possible working example to verify setup.
### 3. Core Use Cases
- 3-5 complete examples for common scenarios with code, output, and explanations.
Provide 3-5 complete examples for common scenarios:
### 4. Advanced & Integration Examples
- Showcase more complex scenarios or integrations with external systems.
#### Example 1: [Scenario Name, e.g., "User Registration and Login"]
### 5. Testing Examples
- Show how to test code that uses the project.
**Objective**: [What this example accomplishes]
### 6. Best Practices & Troubleshooting
- Demonstrate recommended patterns and provide solutions to common issues.
**Modules involved**: `auth`, `database`, `api`
## VERIFICATION CHECKLIST ✓
□ All examples are complete, runnable, and tested
□ Scenarios are realistic and demonstrate key project features
□ Explanations clarify module interactions and data flow
□ Best practices and error handling are demonstrated
**Complete code**:
```typescript
import { createUser, authenticateUser } from './modules/auth';
import { startServer } from './modules/api';
// Step 1: Initialize the application
const app = await startServer({ port: 3000 });
// Step 2: Register a new user
const user = await createUser({
email: 'user@example.com',
password: 'securePassword123',
name: 'John Doe'
});
// Step 3: Authenticate the user
const session = await authenticateUser({
email: 'user@example.com',
password: 'securePassword123'
});
// Step 4: Use the authentication token
console.log('Login successful, token:', session.token);
```
**Expected output**:
```
Server started on port 3000
User created: { id: 1, email: 'user@example.com', name: 'John Doe' }
Login successful, token: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
```
**Explanation**:
1. [Step-by-step breakdown of what each part does]
2. [How modules interact]
3. [Common variations or customizations]
---
#### Example 2: [Another Common Scenario]
[Similar structure with complete code]
---
#### Example 3: [Another Use Case]
[Similar structure with complete code]
### 4. Advanced Examples
More complex scenarios for power users:
#### Advanced Example 1: [Complex Scenario]
**Objective**: [What this example accomplishes]
**Modules involved**: [List]
```typescript
// Complete working code with detailed comments
```
**Key points**:
- [Important concept or gotcha]
- [Performance consideration]
- [Security note]
### 5. Integration Examples
Examples showing how to integrate with external systems:
#### Integration with [External System]
```typescript
// Example of integrating with a third-party service
```
### 6. Testing Examples
Show how to test code that uses this project:
```typescript
// Example test case using the project
import { describe, it, expect } from 'your-test-framework';
describe('User authentication', () => {
it('should authenticate valid credentials', async () => {
const result = await authenticateUser({
email: 'test@example.com',
password: 'password123'
});
expect(result.success).toBe(true);
expect(result.token).toBeDefined();
});
});
```
### 7. Best Practices
Demonstrate recommended patterns:
#### Pattern 1: [Best Practice Title]
```typescript
// Good: Recommended approach
const result = await handleWithErrorRecovery(operation);
// Bad: Anti-pattern to avoid
const result = operation(); // No error handling
```
**Why**: [Explanation of why this is the best approach]
#### Pattern 2: [Another Best Practice]
[Similar structure]
### 8. Troubleshooting Common Issues
Example-based solutions to common problems:
#### Issue: [Common Problem]
**Symptom**: [What the user experiences]
**Solution**:
```typescript
// Before: Code that causes the issue
const broken = doSomethingWrong();
// After: Corrected code
const fixed = doSomethingRight();
```
---
## Rules
1. **Complete, runnable code** - Every example should be copy-paste ready
2. **Real-world scenarios** - Avoid trivial or contrived examples
3. **Explain the flow** - Show how modules work together
4. **Include output** - Show what users should expect to see
5. **Error handling** - Demonstrate proper error handling patterns
6. **Comment complex parts** - Help readers understand non-obvious code
7. **Version compatibility** - Note if examples are version-specific
---
## Code Quality Standards
All example code should:
- Follow project coding standards
- Include proper error handling
- Use async/await correctly
- Show TypeScript types where applicable
- Be tested and verified to work
---
**Last Updated**: [Auto-generated timestamp]
**See also**:
- [Project README](./README.md) for getting started
- [Architecture](./ARCHITECTURE.md) for understanding system design
- [Module Documentation](./modules/) for API details
Focus: Helping users accomplish common tasks through complete, practical examples.

View File

@@ -1,58 +1,35 @@
# Project-Level Documentation Template
Generate a comprehensive project-level README documentation.
Generate comprehensive project documentation following this structure:
## CORE CHECKLIST ⚡
□ Clearly state the project's purpose and target audience
□ Provide clear, runnable instructions for getting started
□ Outline the development workflow and coding standards
□ Offer a high-level overview of the project structure and architecture
## 1. Overview
- **Purpose**: [High-level mission and goals of the project]
- **Target Audience**: [Primary users, developers, stakeholders]
- **Key Features**: [List of major functionalities and capabilities]
## README STRUCTURE
## 2. System Architecture
- **Architectural Style**: [e.g., Monolith, Microservices, Layered, Event-Driven]
- **Core Components**: [Diagram or list of major system parts and their interactions]
- **Technology Stack**:
- Languages: [Programming languages used]
- Frameworks: [Key frameworks and libraries]
- Databases: [Data storage solutions]
- Infrastructure: [Deployment and hosting]
- **Design Principles**: [Guiding principles like SOLID, DRY, separation of concerns]
### 1. Overview
- Purpose, Target Audience, and Key Features.
## 3. Getting Started
- **Prerequisites**: [Required software, tools, versions]
- **Installation**:
```bash
# Installation commands
```
- **Configuration**: [Environment setup, config files]
- **Running the Project**:
```bash
# Startup commands
```
### 2. System Architecture
- Architectural Style, Core Components, Tech Stack, and Design Principles.
## 4. Development Workflow
- **Branching Strategy**: [e.g., GitFlow, trunk-based]
- **Coding Standards**: [Style guide, linting rules]
- **Testing**:
```bash
# Test commands
```
- **Build & Deployment**: [CI/CD pipeline overview]
### 3. Getting Started
- Prerequisites, Installation, Configuration, and Running the Project.
## 5. Project Structure
```
project-root/
├── src/ # [Description]
├── tests/ # [Description]
├── docs/ # [Description]
└── config/ # [Description]
```
### 4. Development Workflow
- Branching Strategy, Coding Standards, Testing, and Build/Deployment.
## 6. Navigation
- [Module Documentation](./modules/)
- [API Reference](./api/)
- [Architecture Details](./architecture/)
- [Contributing Guidelines](./CONTRIBUTING.md)
### 5. Project Structure
- A high-level tree view of the main directories.
---
**Last Updated**: [Auto-generated timestamp]
**Documentation Version**: [Project version]
### 6. Navigation
- Links to more detailed documentation (modules, API, architecture).
## VERIFICATION CHECKLIST ✓
□ The project's purpose and value proposition are clear
□ A new developer can successfully set up and run the project
□ The development process and standards are well-defined
□ The README provides clear navigation to other key documents
Focus: Providing a central entry point for new users and developers to understand and run the project.

View File

@@ -1,16 +1,28 @@
Guide component implementation and development patterns:
Guide component implementation and development patterns.
## Required Analysis:
## CORE CHECKLIST ⚡
□ Define component interface and API requirements clearly
□ Identify reusable patterns and composition strategies
□ Plan state management and data flow before implementation
□ Design a comprehensive testing and validation approach
## REQUIRED ANALYSIS
1. Define component interface and API requirements
2. Identify reusable patterns and composition strategies
3. Plan state management and data flow implementation
4. Design component testing and validation approach
5. Document integration points and usage examples
## Output Requirements:
## OUTPUT REQUIREMENTS
- Component specification with clear interface definition
- Implementation patterns and best practices
- State management strategy and data flow design
- Testing approach and validation criteria
Focus on reusable, maintainable component design with clear usage patterns.
## VERIFICATION CHECKLIST ✓
□ Component specification includes a clear interface definition
□ Reusable implementation patterns and best practices are documented
□ State management and data flow design is clear and robust
□ A thorough testing and validation approach is defined
Focus: Reusable, maintainable component design with clear usage patterns.

View File

@@ -1,63 +0,0 @@
Create or update root-level CLAUDE.md documentation:
## Layer 1: Root Level Documentation Requirements
### Content Focus (MUST INCLUDE):
1. **Project Overview and Purpose**
- High-level project description and mission
- Target audience and use cases
- Key value propositions
2. **Technology Stack Summary**
- Primary programming languages and frameworks
- Key dependencies and tools
- Platform and runtime requirements
3. **Architecture Decisions and Principles**
- Core architectural patterns used
- Design principles governing the codebase
- Major technical decisions and rationale
4. **Development Workflow Overview**
- Build and deployment processes
- Testing approach and quality standards
- Contributing guidelines and processes
5. **Quick Start Guide**
- Installation prerequisites
- Setup instructions
- Basic usage examples
### Content Restrictions (STRICTLY AVOID):
- Implementation details (belongs in module-level docs)
- Module-specific patterns (belongs in module-level docs)
- Code examples from specific modules (belongs in module-level docs)
- Domain internal architecture (belongs in domain-level docs)
### Documentation Style:
- Use high-level, strategic perspective
- Focus on "what" and "why" rather than "how"
- Reference other documentation layers rather than duplicating content
- Maintain concise, executive-summary style
### Template Structure:
```markdown
# [Project Name] - Development Guidelines
## 1. Project Overview and Purpose
[Brief project description, mission, and target audience]
## 2. Technology Stack Summary
[Primary technologies, frameworks, and tools]
## 3. Architecture Decisions and Principles
[Core architectural patterns and design principles]
## 4. Development Workflow Overview
[Build, test, and deployment processes]
## 5. Quick Start Guide
[Installation and basic usage]
```
Remember: This is Layer 1 - stay at the strategic level and avoid diving into implementation details.

View File

@@ -1,63 +0,0 @@
Create or update domain-level CLAUDE.md documentation:
## Layer 2: Domain Level Documentation Requirements
### Content Focus (MUST INCLUDE):
1. **Domain Architecture and Responsibilities**
- Domain's role within the larger system
- Core responsibilities and boundaries
- Key abstractions and concepts
2. **Module Organization Within Domain**
- How modules are structured within this domain
- Module relationships and hierarchies
- Organizational patterns used
3. **Inter-Module Communication Patterns**
- How modules within this domain communicate
- Data flow patterns and interfaces
- Shared resources and dependencies
4. **Domain-Specific Conventions**
- Coding standards specific to this domain
- Naming conventions and patterns
- Testing approaches for this domain
5. **Integration Points with Other Domains**
- External dependencies and interfaces
- Cross-domain communication protocols
- Shared contracts and data formats
### Content Restrictions (STRICTLY AVOID):
- Duplicating root project overview (belongs in root-level docs)
- Component/function-level details (belongs in module-level docs)
- Specific implementation code (belongs in module-level docs)
- Module internal patterns (belongs in module-level docs)
### Documentation Style:
- Focus on domain-wide patterns and organization
- Explain relationships between modules within the domain
- Describe domain boundaries and responsibilities
- Reference module-level docs for implementation details
### Template Structure:
```markdown
# [Domain Name] - Domain Architecture
## 1. Domain Overview
[Domain's role and core responsibilities]
## 2. Module Organization
[How modules are structured within this domain]
## 3. Inter-Module Communication
[Communication patterns and data flow]
## 4. Domain Conventions
[Domain-specific standards and patterns]
## 5. External Integration
[Integration with other domains and systems]
```
Remember: This is Layer 2 - focus on domain-wide organization and avoid both high-level project details and low-level implementation specifics.

View File

@@ -1,66 +0,0 @@
Create or update module-level CLAUDE.md documentation:
## Layer 3: Module Level Documentation Requirements
### Content Focus (MUST INCLUDE):
1. **Module-Specific Implementation Patterns**
- Implementation patterns used within this module
- Common coding approaches and idioms
- Module-specific design patterns
2. **Internal Architecture and Design Decisions**
- How the module is internally organized
- Key design decisions and their rationale
- Component relationships within the module
3. **API Contracts and Interfaces**
- Public interfaces exposed by this module
- Input/output contracts and data formats
- Integration points with other modules
4. **Module Dependencies and Relationships**
- Direct dependencies of this module
- How this module fits into the larger system
- Data flow in and out of the module
5. **Testing Strategies for This Module**
- Testing approaches specific to this module
- Test coverage strategies and targets
- Module-specific testing tools and frameworks
### Content Restrictions (STRICTLY AVOID):
- Project overview content (belongs in root-level docs)
- Domain-wide architectural patterns (belongs in domain-level docs)
- Detailed function documentation (belongs in sub-module-level docs)
- Configuration specifics (belongs in sub-module-level docs)
### Documentation Style:
- Focus on module-level architecture and patterns
- Explain how components within the module work together
- Document public interfaces and contracts
- Reference sub-module docs for detailed implementation
### Template Structure:
```markdown
# [Module Name] - Module Architecture
## 1. Module Overview
[Module's purpose and responsibilities]
## 2. Implementation Patterns
[Common patterns and approaches used]
## 3. Internal Architecture
[How the module is organized internally]
## 4. Public Interfaces
[APIs and contracts exposed by this module]
## 5. Dependencies and Integration
[Module dependencies and system integration]
## 6. Testing Strategy
[Testing approaches for this module]
```
Remember: This is Layer 3 - focus on module-level architecture and patterns, avoiding both domain-wide and detailed implementation concerns.

View File

@@ -1,69 +0,0 @@
Create or update sub-module-level CLAUDE.md documentation:
## Layer 4: Sub-Module Level Documentation Requirements
### Content Focus (MUST INCLUDE):
1. **Detailed Implementation Specifics**
- Specific implementation details and algorithms
- Code-level patterns and idioms
- Implementation trade-offs and decisions
2. **Component/Function Documentation**
- Individual component descriptions
- Function signatures and behaviors
- Class/struct/interface documentation
3. **Configuration Details and Examples**
- Configuration options and parameters
- Environment-specific settings
- Configuration examples and templates
4. **Usage Examples and Patterns**
- Code examples and usage patterns
- Common use cases and scenarios
- Integration examples with other components
5. **Performance Considerations**
- Performance characteristics and constraints
- Optimization strategies and techniques
- Resource usage and scalability notes
### Content Restrictions (STRICTLY AVOID):
- Architecture decisions (belong in higher levels)
- Module-level organizational patterns (belong in module-level docs)
- Domain or project overview content (belong in higher levels)
- Cross-module architectural concerns (belong in higher levels)
### Documentation Style:
- Focus on detailed, actionable implementation information
- Provide concrete examples and code snippets
- Document specific behaviors and edge cases
- Include troubleshooting and debugging guidance
### Template Structure:
```markdown
# [Sub-Module/Component Name] - Implementation Guide
## 1. Component Overview
[Specific purpose and functionality]
## 2. Implementation Details
[Detailed implementation specifics]
## 3. API Reference
[Function/method documentation]
## 4. Configuration
[Configuration options and examples]
## 5. Usage Examples
[Code examples and patterns]
## 6. Performance and Optimization
[Performance considerations and tips]
## 7. Troubleshooting
[Common issues and solutions]
```
Remember: This is Layer 4 - focus on concrete, actionable implementation details and avoid architectural or organizational concerns.

View File

@@ -0,0 +1,153 @@
Create or update CLAUDE.md documentation using unified module/file template.
## CORE CHECKLIST ⚡
□ MUST include all 6 sections: Purpose, Structure, Components, Dependencies, Integration, Implementation
□ For code files: Document all public/exported APIs with complete parameter details
□ For folders: Reference subdirectory CLAUDE.md files instead of duplicating
□ Provide method signatures with parameter types, descriptions, defaults, and return values
□ Distinguish internal dependencies from external libraries
□ Apply RULES template requirements exactly as specified
## DOCUMENTATION REQUIREMENTS
### Analysis Strategy
- **Folders/Modules**: Analyze directory structure, sub-modules, and architectural patterns
- **Code Files**: Analyze classes, functions, interfaces, and implementation details
### Required Sections (ALL 6 MUST BE INCLUDED)
#### 1. Purpose and Scope
- Clear description of what this module/file does
- Main responsibilities and boundaries
- Role within the larger system
#### 2. Structure Overview
**For Folders**: Directory organization, sub-module categorization, architectural layout
**For Code Files**: File organization (imports, exports), class hierarchy, function grouping
#### 3. Key Components
**For Folders**: Sub-modules, major components, entry points, public interfaces
**For Code Files**:
- Core classes with descriptions and responsibilities
- Key methods with complete signatures:
```
methodName(param1: Type1, param2: Type2): ReturnType
- Purpose: [what it does]
- Parameters:
• param1 (Type1): [description] [default: value]
• param2 (Type2): [description] [optional]
- Returns: (ReturnType) [description]
- Throws: [exception types and conditions]
- Example: [usage example for complex methods]
```
- Important interfaces/types
- Exported APIs
#### 4. Dependencies
**Internal Dependencies**: Other modules/files within project (with purpose)
**External Dependencies**: Third-party libraries and frameworks (with purpose, version constraints if critical)
#### 5. Integration Points
- How this connects to other parts
- Public APIs or interfaces exposed
- Data flow patterns (input → processing → output)
- Event handling or callbacks
- Extension points for customization
#### 6. Implementation Notes
- Key design patterns used (e.g., Singleton, Factory, Observer)
- Important technical decisions and rationale
- Configuration requirements or environment variables
- Performance considerations
- Security considerations if applicable
- Known limitations or caveats
## OUTPUT REQUIREMENTS
### Template Structure
```markdown
# [Module/File Name]
## Purpose and Scope
[Clear description of responsibilities and role]
## Structure Overview
[Directory structure for modules OR File organization for code files]
## Key Components
### [Component/Class Name]
- Description: [Brief description]
- Responsibilities: [What it does]
- Key Methods:
#### `methodName(param1: Type1, param2?: Type2): ReturnType`
- Purpose: [what this method does]
- Parameters:
• param1 (Type1): [description]
• param2 (Type2): [description] [optional] [default: value]
- Returns: (ReturnType) [description]
- Throws: [exceptions if applicable]
## Dependencies
### Internal Dependencies
- `[module/file path]` - [purpose]
### External Dependencies
- `[library name]` - [purpose and usage]
## Integration Points
### Public APIs
- `[API/function signature]` - [description]
### Data Flow
[How data flows through this module/file]
## Implementation Notes
### Design Patterns
- [Pattern name]: [Usage and rationale]
### Technical Decisions
- [Decision]: [Rationale]
### Considerations
- Performance: [notes]
- Security: [notes]
- Limitations: [notes]
```
### Documentation Style
- **Concise but complete** - Focus on "what" and "why", not detailed "how"
- **Code signatures** for key APIs - Include all parameter details
- **Parameters**: Name, type, description, optional/default indicators, constraints
- **Return values**: Type, description, special conditions (null, undefined, empty)
- **Evidence-based** - Reference related documentation when appropriate
- **Examples** for complex methods - Show usage patterns
### Content Restrictions (STRICTLY AVOID)
- ❌ Duplicating content from other CLAUDE.md files (reference instead)
- ❌ Overly detailed code explanations (code should be self-documenting)
- ❌ Complete code listings (use signatures and descriptions)
- ❌ Version-specific details unless critical
- ❌ Documenting every single function (focus on public/exported APIs)
### Method Documentation Rules
- **Public/Exported methods**: MUST document with full parameter details
- **Private/Internal methods**: Only document if complex or critical
- **Parameters**: MUST include type, description, constraints, defaults
- **Return values**: MUST document type and description
- **Exceptions**: Document all thrown errors
### Special Instructions
- If analyzing folder with existing subdirectory CLAUDE.md files → reference them
- For code files → prioritize exported/public APIs
- Keep dependency lists focused on direct dependencies (not transitive)
- Update existing CLAUDE.md files rather than creating duplicate sections
## VERIFICATION CHECKLIST ✓
□ All 6 required sections included (Purpose, Structure, Components, Dependencies, Integration, Implementation)
□ All public/exported APIs documented with complete signatures
□ Parameters documented with types, descriptions, and defaults
□ References used instead of duplicating subdirectory documentation
□ Internal vs external dependencies clearly distinguished
□ Examples provided for non-trivial methods
Focus: Comprehensive yet concise documentation covering all essential aspects without redundancy.

View File

@@ -1,16 +0,0 @@
Analyze project structure for memory hierarchy optimization:
## Required Analysis:
1. Assess project complexity and file organization patterns
2. Identify logical module boundaries and responsibility separation
3. Evaluate cross-module dependencies and integration complexity
4. Determine optimal documentation depth and hierarchy levels
5. Recommend content differentiation strategies across hierarchy levels
## Output Requirements:
- Project complexity classification with supporting metrics
- Module boundary recommendations with responsibility mapping
- Hierarchy depth recommendations with level-specific focus areas
- Content strategy with redundancy elimination guidelines
Focus on intelligent documentation hierarchy that scales with project complexity.

View File

@@ -1,77 +1,78 @@
CONCEPT EVALUATION FRAMEWORK
Conduct comprehensive concept evaluation to assess feasibility, identify risks, and provide optimization recommendations.
## EVALUATION DIRECTIVE
You are conducting a comprehensive concept evaluation to assess feasibility, identify risks, and provide optimization recommendations before formal implementation planning begins.
## CORE CHECKLIST ⚡
□ Evaluate all 6 dimensions: Conceptual, Architectural, Technical, Resource, Risk, Dependency
□ Provide quantified assessment scores (1-5 scale)
□ Classify risks by severity (LOW/MEDIUM/HIGH/CRITICAL)
□ Include specific, actionable recommendations
□ Integrate session context and existing patterns
## CORE EVALUATION DIMENSIONS
## EVALUATION DIMENSIONS
### 1. CONCEPTUAL INTEGRITY
- **Design Coherence**: Are all components logically connected and consistent?
- **Requirement Completeness**: Are all necessary requirements identified and defined?
- **Scope Clarity**: Is the concept scope clearly defined and bounded?
- **Success Criteria**: Are measurable success criteria clearly established?
### 1. Conceptual Integrity
- Design Coherence: Logical component connections
- Requirement Completeness: All requirements identified
- Scope Clarity: Defined and bounded scope
- Success Criteria: Measurable metrics established
### 2. ARCHITECTURAL SOUNDNESS
- **System Integration**: How well does the concept integrate with existing architecture?
- **Design Patterns**: Are appropriate and established design patterns utilized?
- **Modularity**: Is the concept appropriately modular and maintainable?
- **Scalability**: Can the concept scale to meet future requirements?
### 2. Architectural Soundness
- System Integration: Fit with existing architecture
- Design Patterns: Appropriate pattern usage
- Modularity: Maintainable structure
- Scalability: Future requirement capacity
### 3. TECHNICAL FEASIBILITY
- **Implementation Complexity**: What is the technical difficulty level?
- **Technology Maturity**: Are required technologies stable and well-supported?
- **Skill Requirements**: Do we have the necessary technical expertise?
- **Infrastructure Needs**: What infrastructure changes or additions are required?
### 3. Technical Feasibility
- Implementation Complexity: Difficulty level assessment
- Technology Maturity: Stable, supported technologies
- Skill Requirements: Team expertise availability
- Infrastructure Needs: Required changes/additions
### 4. RESOURCE ASSESSMENT
- **Development Time**: Realistic time estimation for implementation
- **Team Resources**: Required team size and skill composition
- **Budget Impact**: Financial implications and resource allocation
- **Opportunity Cost**: What other initiatives might be delayed or cancelled?
### 4. Resource Assessment
- Development Time: Realistic estimation
- Team Resources: Size and skill composition
- Budget Impact: Financial implications
- Opportunity Cost: Delayed initiatives
### 5. RISK IDENTIFICATION
- **Technical Risks**: Technology limitations, complexity, and unknowns
- **Business Risks**: Market timing, user adoption, and business impact
- **Integration Risks**: Compatibility and system integration challenges
- **Resource Risks**: Team availability, skill gaps, and timeline pressures
### 5. Risk Identification
- Technical Risks: Limitations, complexity, unknowns
- Business Risks: Market timing, adoption, impact
- Integration Risks: Compatibility challenges
- Resource Risks: Availability, skills, timeline
### 6. DEPENDENCY ANALYSIS
- **External Dependencies**: Third-party services, libraries, and tools
- **Internal Dependencies**: Other systems, teams, and organizational resources
- **Temporal Dependencies**: Sequence requirements and timing constraints
- **Critical Path**: Essential dependencies that could block progress
### 6. Dependency Analysis
- External Dependencies: Third-party services/tools
- Internal Dependencies: Systems, teams, resources
- Temporal Dependencies: Sequence and timing
- Critical Path: Essential blocking dependencies
## EVALUATION METHODOLOGY
## ASSESSMENT METHODOLOGY
### ASSESSMENT CRITERIA
Rate each dimension on a scale of 1-5:
- **5 - Excellent**: Minimal risk, well-defined, highly feasible
- **4 - Good**: Low risk, mostly clear, feasible with minor adjustments
- **3 - Average**: Moderate risk, some clarification needed, feasible with effort
- **2 - Poor**: High risk, significant issues, major changes required
- **1 - Critical**: Very high risk, fundamental problems, may not be feasible
**Scoring Scale** (1-5):
- 5 - Excellent: Minimal risk, well-defined, highly feasible
- 4 - Good: Low risk, mostly clear, feasible
- 3 - Average: Moderate risk, needs clarification
- 2 - Poor: High risk, major changes required
- 1 - Critical: Very high risk, fundamental problems
### RISK CLASSIFICATION
- **LOW**: Minor issues, easily addressable
- **MEDIUM**: Manageable challenges requiring attention
- **HIGH**: Significant concerns requiring major mitigation
- **CRITICAL**: Fundamental problems threatening concept viability
**Risk Levels**:
- LOW: Minor issues, easily addressable
- MEDIUM: Manageable challenges
- HIGH: Significant concerns, major mitigation needed
- CRITICAL: Fundamental viability threats
### OPTIMIZATION PRIORITIES
- **CRITICAL**: Must be addressed before planning
- **IMPORTANT**: Should be addressed for optimal outcomes
- **OPTIONAL**: Nice-to-have improvements
**Optimization Priorities**:
- CRITICAL: Must address before planning
- IMPORTANT: Should address for optimal outcomes
- OPTIONAL: Nice-to-have improvements
## OUTPUT REQUIREMENTS
### EVALUATION SUMMARY
### Evaluation Summary
```markdown
# Concept Evaluation Summary
## Overall Assessment
- **Feasibility Score**: X/5
- **Risk Level**: LOW/MEDIUM/HIGH/CRITICAL
- **Recommendation**: PROCEED/PROCEED_WITH_MODIFICATIONS/RECONSIDER/REJECT
- Feasibility Score: X/5
- Risk Level: LOW/MEDIUM/HIGH/CRITICAL
- Recommendation: PROCEED/PROCEED_WITH_MODIFICATIONS/RECONSIDER/REJECT
## Dimension Scores
- Conceptual Integrity: X/5
@@ -82,120 +83,45 @@ Rate each dimension on a scale of 1-5:
- Dependency Complexity: X/5
```
### DETAILED ANALYSIS
For each dimension, provide:
1. **Assessment**: Current state evaluation
2. **Strengths**: What works well in the concept
3. **Concerns**: Identified issues and risks
4. **Recommendations**: Specific improvement suggestions
### Detailed Analysis
For each dimension:
1. Assessment: Current state evaluation
2. Strengths: What works well
3. Concerns: Issues and risks
4. Recommendations: Specific improvements
### RISK MATRIX
```markdown
### Risk Matrix
| Risk Category | Level | Impact | Mitigation Strategy |
|---------------|-------|--------|-------------------|
| Technical | HIGH | Delays | Proof of concept |
| Resource | MED | Budget | Phase approach |
```
|---------------|-------|--------|---------------------|
| Technical | HIGH | Delays | Proof of concept |
| Resource | MED | Budget | Phased approach |
### OPTIMIZATION ROADMAP
Prioritized list of improvements:
1. **CRITICAL**: [Issue] - [Recommendation] - [Impact]
2. **IMPORTANT**: [Issue] - [Recommendation] - [Impact]
3. **OPTIONAL**: [Issue] - [Recommendation] - [Impact]
### Optimization Roadmap
1. CRITICAL: [Issue] - [Recommendation] - [Impact]
2. IMPORTANT: [Issue] - [Recommendation] - [Impact]
3. OPTIONAL: [Issue] - [Recommendation] - [Impact]
## CONTEXT INTEGRATION RULES
## CONTEXT INTEGRATION
### CLAUDE CODE MEMORY INTEGRATION
- **Session Context**: Reference current conversation history and decisions made
- **Project Memory**: Leverage knowledge from previous implementations and lessons learned
- **Pattern Recognition**: Use identified successful approaches and anti-patterns from session memory
- **Evaluation History**: Consider previous concept evaluations and their outcomes
- **Technical Evolution**: Build on previous technical decisions and architectural changes
- **Context Continuity**: Maintain consistency with established project direction and decisions
**Session Memory**: Reference current conversation, decisions, patterns from session history
**Existing Patterns**: Identify similar implementations, evaluate success/failure, leverage proven approaches
**Architectural Alignment**: Ensure consistency, consider evolution, apply standards
**Business Context**: Strategic fit, user impact, competitive advantage, timeline alignment
### EXISTING PATTERNS
- **Identify**: Find similar implementations in the codebase
- **Analyze**: Evaluate success/failure patterns
- **Leverage**: Recommend reusing successful approaches
- **Avoid**: Flag problematic patterns to avoid
## PROJECT TYPE CONSIDERATIONS
### ARCHITECTURAL ALIGNMENT
- **Consistency**: Ensure concept aligns with existing architecture
- **Evolution**: Consider architectural evolution and migration paths
- **Standards**: Apply established coding and design standards
- **Integration**: Evaluate integration touchpoints and complexity
**Innovation Projects**: Higher risk tolerance, learning focus, phased approach
**Critical Business**: Lower risk tolerance, reliability focus, comprehensive mitigation
**Integration Projects**: Compatibility focus, minimal disruption, rollback strategies
**Greenfield Projects**: Architectural innovation, scalability, technology standardization
### BUSINESS CONTEXT
- **Strategic Fit**: Alignment with business objectives and priorities
- **User Impact**: Effect on user experience and satisfaction
- **Competitive Advantage**: Differentiation and market positioning
- **Timeline**: Alignment with business timelines and milestones
## VERIFICATION CHECKLIST ✓
□ All 6 evaluation dimensions thoroughly assessed with scores
□ Risk matrix completed with mitigation strategies
□ Optimization recommendations prioritized (CRITICAL/IMPORTANT/OPTIONAL)
□ Integration with existing systems evaluated
□ Resource requirements and timeline implications identified
□ Success criteria and validation metrics defined
□ Next steps and decision points outlined
## QUALITY STANDARDS
### ANALYSIS DEPTH
- Provide specific examples and evidence
- Quantify assessments where possible
- Consider multiple perspectives and scenarios
- Base recommendations on concrete analysis
### ACTIONABILITY
- Make recommendations specific and implementable
- Provide clear next steps and decision points
- Identify responsible parties and timelines
- Include success metrics and validation criteria
### OBJECTIVITY
- Balance optimism with realistic assessment
- Acknowledge uncertainty and assumptions
- Present multiple options where applicable
- Focus on concept improvement rather than criticism
## SPECIAL CONSIDERATIONS
### INNOVATION PROJECTS
- Higher tolerance for technical risk
- Emphasis on learning and experimentation
- Phased approach with validation milestones
- Clear success/failure criteria
### CRITICAL BUSINESS PROJECTS
- Lower risk tolerance
- Emphasis on reliability and predictability
- Comprehensive risk mitigation strategies
- Detailed contingency planning
### INTEGRATION PROJECTS
- Focus on compatibility and interoperability
- Emphasis on minimizing system disruption
- Careful change management planning
- Rollback and recovery strategies
### GREENFIELD PROJECTS
- Opportunity for architectural innovation
- Emphasis on future scalability and flexibility
- Technology stack selection and standardization
- Team skill development considerations
## EVALUATION COMPLETION CHECKLIST
- [ ] All six evaluation dimensions thoroughly assessed
- [ ] Risk matrix completed with mitigation strategies
- [ ] Optimization recommendations prioritized
- [ ] Integration with existing systems evaluated
- [ ] Resource requirements clearly identified
- [ ] Timeline implications considered
- [ ] Success criteria and validation metrics defined
- [ ] Next steps and decision points outlined
## OUTPUT FORMAT
Provide a structured evaluation report that includes:
1. Executive summary with overall recommendation
2. Detailed dimension-by-dimension analysis
3. Risk assessment and mitigation strategies
4. Prioritized optimization recommendations
5. Implementation roadmap and next steps
6. Resource requirements and timeline implications
Focus on providing actionable insights that will improve concept quality and reduce implementation risks during the formal planning phase.
Focus: Actionable insights to improve concept quality and reduce implementation risks.

View File

@@ -1,16 +1,30 @@
Plan system migration and modernization strategies:
Plan system migration and modernization strategies.
## Required Analysis:
## CORE CHECKLIST ⚡
□ Assess current system completely before planning migration
□ Plan incremental migration (avoid big-bang approach)
□ Include rollback plan for every migration step
□ Provide file:line references for all affected code
## REQUIRED ANALYSIS
1. Assess current system architecture and migration requirements
2. Identify migration paths and transformation strategies
3. Plan data migration and system cutover procedures
4. Evaluate compatibility and integration challenges
5. Document rollback plans and risk mitigation strategies
## Output Requirements:
## OUTPUT REQUIREMENTS
- Migration strategy with step-by-step execution plan
- Data migration procedures and validation checkpoints
- Compatibility assessment and integration requirements
- Risk analysis and rollback procedures
- Compatibility assessment with file:line references
- Risk analysis and rollback procedures for each phase
- Testing strategy for migration validation
Focus on low-risk migration strategies with comprehensive fallback options.
## VERIFICATION CHECKLIST ✓
□ Migration planned in incremental phases (not big-bang)
□ Every phase has rollback plan documented
□ Data migration validated with checkpoints
□ Compatibility issues identified and mitigated
□ Testing strategy covers all migration phases
Focus: Low-risk incremental migration with comprehensive fallback options.

View File

@@ -1,16 +1,30 @@
Create detailed task breakdown and implementation planning:
Create detailed task breakdown and implementation planning.
## Required Analysis:
## CORE CHECKLIST ⚡
□ Break down tasks into manageable subtasks (3-8 hours each)
□ Identify all dependencies and execution sequence
□ Provide realistic effort estimates with buffer
□ Document risks for each task
## REQUIRED ANALYSIS
1. Break down complex tasks into manageable subtasks
2. Identify dependencies and execution sequence requirements
3. Estimate effort and resource requirements for each task
4. Map task relationships and critical path analysis
5. Document risks and mitigation strategies
## Output Requirements:
## OUTPUT REQUIREMENTS
- Hierarchical task breakdown with specific deliverables
- Dependency mapping and execution sequence
- Effort estimation and resource allocation
- Risk assessment and mitigation plans
- Effort estimation with confidence levels
- Resource allocation and skill requirements
- Risk assessment and mitigation plans for each task
Focus on actionable task planning with clear deliverables and timelines.
## VERIFICATION CHECKLIST ✓
□ All tasks broken down to manageable size (3-8 hours)
□ Dependencies mapped and critical path identified
□ Effort estimates realistic with buffer included
□ Every task has clear deliverable defined
□ Risks documented with mitigation strategies
Focus: Actionable task planning with clear deliverables, dependencies, and realistic timelines.

View File

@@ -1,16 +1,28 @@
Conduct comprehensive code review and quality assessment:
Conduct comprehensive code review and quality assessment.
## Required Analysis:
## CORE CHECKLIST ⚡
□ Review against established coding standards and conventions
□ Assess logic correctness, including potential edge cases
□ Evaluate security implications and vulnerability risks
□ Check for performance bottlenecks and optimization opportunities
## REQUIRED ANALYSIS
1. Review code against established coding standards and conventions
2. Assess logic correctness and potential edge cases
3. Evaluate security implications and vulnerability risks
4. Check performance characteristics and optimization opportunities
5. Validate test coverage and documentation completeness
## Output Requirements:
## OUTPUT REQUIREMENTS
- Standards compliance assessment with specific violations
- Logic review findings with potential issue identification
- Security assessment with vulnerability documentation
- Performance review with optimization recommendations
Focus on actionable feedback with clear improvement priorities and implementation guidance.
## VERIFICATION CHECKLIST ✓
□ Code is assessed against established standards
□ Logic, including edge cases, is thoroughly reviewed
□ Security and performance have been evaluated
□ Test coverage and documentation are validated
Focus: Actionable feedback with clear improvement priorities and implementation guidance.

View File

@@ -1,59 +1,28 @@
Technical feasibility assessment of workflow implementation plan from code quality and execution perspective:
Assess the technical feasibility of a workflow implementation plan.
## Required Technical Analysis:
1. **Implementation Complexity Assessment**
- Evaluate code complexity and technical difficulty
- Assess required technical skills and expertise levels
- Validate implementation approach feasibility
- Identify technical challenges and solutions
## CORE CHECKLIST ⚡
□ Evaluate implementation complexity and required skills
□ Validate all technical dependencies and prerequisites
Assess the proposed code structure and integration patterns
□ Verify the completeness of the testing strategy
2. **Technical Dependencies Validation**
- Review external library and framework dependencies
- Assess version compatibility and dependency conflicts
- Evaluate build system and deployment requirements
- Identify missing technical prerequisites
## REQUIRED TECHNICAL ANALYSIS
1. **Implementation Complexity**: Evaluate code difficulty and required skills.
2. **Technical Dependencies**: Review libraries, versions, and build systems.
3. **Code Structure**: Assess file organization, naming, and modularity.
4. **Testing Completeness**: Evaluate test coverage, types, and gaps.
5. **Execution Readiness**: Validate control flow, context, and file targets.
3. **Code Structure Assessment**
- Evaluate proposed file organization and structure
- Assess naming conventions and code organization
- Validate integration with existing codebase patterns
- Review modularity and separation of concerns
## OUTPUT REQUIREMENTS
- **Technical Assessment Report**: Grades for implementation, complexity, and quality.
- **Detailed Technical Findings**: Blocking issues, performance concerns, and improvements.
- **Implementation Recommendations**: Prerequisites, best practices, and refactoring.
- **Risk Mitigation**: Technical, dependency, integration, and quality risks.
4. **Testing Completeness Evaluation**
- Assess test coverage and testing strategy completeness
- Evaluate test types and testing approach adequacy
- Review integration testing and end-to-end coverage
- Identify testing gaps and quality assurance needs
## VERIFICATION CHECKLIST ✓
□ Implementation complexity and feasibility have been thoroughly evaluated
□ All technical dependencies and prerequisites are validated
□ The proposed code structure aligns with project standards
□ The testing plan is complete and adequate for the proposed changes
5. **Execution Readiness Verification**
- Validate flow_control definitions and execution paths
- Assess task context completeness and adequacy
- Evaluate target_files specifications and accuracy
- Review implementation prerequisites and setup requirements
## Output Requirements:
### Technical Assessment Report:
- **Implementation Grade** (A-F): Technical approach quality
- **Complexity Score** (1-10): Implementation difficulty level
- **Readiness Level** (1-5): Execution preparation completeness
- **Quality Rating** (1-10): Code quality and maintainability projection
### Detailed Technical Findings:
- **Blocking Issues**: Technical problems that prevent implementation
- **Performance Concerns**: Scalability and performance implications
- **Quality Improvements**: Code quality and maintainability enhancements
- **Testing Enhancements**: Testing strategy and coverage improvements
### Implementation Recommendations:
- **Prerequisites**: Required setup and configuration changes
- **Best Practices**: Code patterns and conventions to follow
- **Tool Requirements**: Additional tools or dependencies needed
- **Refactoring Suggestions**: Code structure and organization improvements
### Risk Mitigation:
- **Technical Risks**: Implementation complexity and technical debt
- **Dependency Risks**: External dependencies and compatibility issues
- **Integration Risks**: Codebase integration and compatibility concerns
- **Quality Risks**: Code quality and maintainability implications
Focus on technical execution details, code quality concerns, and implementation feasibility. Provide specific, actionable recommendations with clear implementation guidance and priority levels.
Focus: Technical execution details, code quality concerns, and implementation feasibility.

View File

@@ -1,65 +1,28 @@
Cross-validation analysis of gemini strategic and codex technical assessments for workflow implementation plan:
Cross-validate strategic (Gemini) and technical (Codex) assessments.
## Required Cross-Validation Analysis:
1. **Consensus Identification**
- Identify areas where both analyses agree on issues or strengths
- Document shared concerns and aligned recommendations
- Highlight consistent risk assessments and mitigation strategies
- Establish priority consensus for implementation focus
## CORE CHECKLIST ⚡
□ Identify both consensus and conflict between the two analyses
□ Synthesize a unified risk profile and recommendation set
□ Resolve conflicting suggestions with a balanced approach
□ Frame final decisions as clear choices for the user
2. **Conflict Resolution Analysis**
- Identify discrepancies between strategic and technical perspectives
- Analyze root causes of conflicting assessments
- Evaluate trade-offs between strategic goals and technical constraints
- Provide balanced recommendations for resolution
## REQUIRED CROSS-VALIDATION ANALYSIS
1. **Consensus Identification**: Find where both analyses agree.
2. **Conflict Resolution**: Analyze and resolve discrepancies.
3. **Risk Level Synthesis**: Combine risk assessments into a single profile.
4. **Recommendation Integration**: Synthesize recommendations into a unified plan.
5. **Quality Assurance Framework**: Establish combined quality metrics.
3. **Risk Level Synthesis**
- Combine strategic and technical risk assessments
- Establish overall implementation risk profile
- Prioritize risks by impact and probability
- Recommend risk mitigation strategies
## OUTPUT REQUIREMENTS
- **Cross-Validation Summary**: Overall grade, confidence score, and risk profile.
- **Synthesis Report**: Consensus areas, conflict areas, and integrated recommendations.
- **User Approval Framework**: A clear breakdown of changes for user approval.
- **Modification Categories**: Classify changes by type (e.g., Task Structure, Technical).
4. **Recommendation Integration**
- Synthesize strategic and technical recommendations
- Resolve conflicting suggestions with balanced approach
- Prioritize recommendations by impact and effort
- Establish implementation sequence and dependencies
## VERIFICATION CHECKLIST ✓
□ Both consensus and conflict between analyses are identified and documented
□ Risks and recommendations are synthesized into a single, coherent plan
□ Conflicting points are resolved with balanced, well-reasoned proposals
□ Final output is structured to facilitate clear user decisions
5. **Quality Assurance Framework**
- Establish combined quality metrics and success criteria
- Define validation checkpoints and review gates
- Recommend monitoring and feedback mechanisms
- Ensure both strategic and technical quality standards
## Input Analysis Sources:
- **Gemini Strategic Analysis**: Architecture, business alignment, strategic risks
- **Codex Technical Analysis**: Implementation feasibility, code quality, technical risks
- **Original Implementation Plan**: IMPL_PLAN.md, task definitions, context
## Output Requirements:
### Cross-Validation Summary:
- **Overall Assessment Grade** (A-F): Combined strategic and technical evaluation
- **Implementation Confidence** (1-10): Combined feasibility assessment
- **Risk Profile**: High/Medium/Low with specific risk categories
- **Recommendation Priority Matrix**: Urgent/Important classification
### Synthesis Report:
- **Consensus Areas**: Shared findings and aligned recommendations
- **Conflict Areas**: Divergent perspectives requiring resolution
- **Integrated Recommendations**: Balanced strategic and technical guidance
- **Implementation Roadmap**: Phased approach balancing strategic and technical needs
### User Approval Framework:
- **Critical Changes**: Must-implement modifications (blocking issues)
- **Important Improvements**: High-value enhancements (quality/efficiency)
- **Optional Enhancements**: Nice-to-have improvements (future consideration)
- **Trade-off Decisions**: User choice between competing approaches
### Modification Categories:
- **Task Structure**: Task merging, splitting, or reordering
- **Dependencies**: Dependency additions, removals, or modifications
- **Context Enhancement**: Requirements, acceptance criteria, or documentation
- **Technical Adjustments**: Implementation approach or tool changes
- **Strategic Alignment**: Business objective or success criteria refinement
Focus on balanced integration of strategic and technical perspectives. Provide clear user decision points with impact analysis and implementation guidance. Prioritize recommendations by combined strategic and technical value.
Focus: A balanced integration of strategic and technical perspectives to produce a single, actionable plan.

View File

@@ -1,52 +1,27 @@
Strategic validation of workflow implementation plan from architectural and business perspective:
Validate the strategic and architectural soundness of a workflow implementation plan.
## Required Strategic Analysis:
1. **Architectural Soundness Assessment**
- Evaluate overall system design coherence
- Assess component interaction patterns
- Validate architectural decision consistency
- Identify potential design conflicts or gaps
## CORE CHECKLIST ⚡
□ Evaluate the plan against high-level system architecture
□ Assess the logic of the task breakdown and dependencies
□ Verify alignment with stated business objectives and success criteria
□ Identify strategic risks, not just low-level technical ones
2. **Task Decomposition Logic Evaluation**
- Review task breakdown methodology and rationale
- Assess functional completeness of each task
- Evaluate task granularity and scope appropriateness
- Identify missing or redundant decomposition elements
## REQUIRED STRATEGIC ANALYSIS
1. **Architectural Soundness**: Assess design coherence and pattern consistency.
2. **Task Decomposition Logic**: Review task breakdown, granularity, and completeness.
3. **Dependency Coherence**: Analyze task interdependencies and logical flow.
4. **Business Alignment**: Validate against business objectives and requirements.
5. **Strategic Risk Identification**: Identify architectural, resource, and timeline risks.
3. **Dependency Coherence Analysis**
- Map task interdependencies and logical flow
- Validate dependency ordering and scheduling
- Assess circular dependency risks
- Evaluate parallel execution opportunities
## OUTPUT REQUIREMENTS
- **Strategic Assessment Report**: Grades for architecture, decomposition, and business alignment.
- **Detailed Recommendations**: Critical issues, improvements, and alternative approaches.
- **Action Items**: A prioritized list of changes (Immediate, Short-term, Long-term).
4. **Business Alignment Verification**
- Validate alignment with stated business objectives
- Assess stakeholder requirement coverage
- Evaluate success criteria completeness and measurability
- Identify business risk and mitigation adequacy
## VERIFICATION CHECKLIST ✓
□ The plan's architectural soundness has been thoroughly assessed
□ Task decomposition and dependencies are logical and coherent
□ The plan is confirmed to be in alignment with business goals
□ Strategic risks are identified with clear recommendations
5. **Strategic Risk Identification**
- Identify architectural risks and technical debt implications
- Assess implementation complexity and resource requirements
- Evaluate timeline feasibility and milestone realism
- Document potential blockers and escalation scenarios
## Output Requirements:
### Strategic Assessment Report:
- **Architecture Grade** (A-F): Overall design quality assessment
- **Decomposition Grade** (A-F): Task breakdown effectiveness
- **Risk Level** (Low/Medium/High): Combined implementation risk
- **Business Alignment Score** (1-10): Requirement satisfaction level
### Detailed Recommendations:
- **Critical Issues**: Must-fix problems that could cause failure
- **Improvement Opportunities**: Non-blocking enhancements
- **Alternative Approaches**: Strategic alternatives worth considering
- **Resource Implications**: Staffing and timeline considerations
### Action Items:
- **Immediate**: Changes needed before implementation starts
- **Short-term**: Improvements for early phases
- **Long-term**: Strategic enhancements for future iterations
Focus on high-level strategic concerns, business alignment, and long-term architectural implications. Provide actionable recommendations with clear rationale and priority levels.
Focus: High-level strategic concerns, business alignment, and long-term architectural implications.

View File

@@ -14,20 +14,35 @@ type: search-guideline
## ⚡ Core Search Tools
**codebase-retrieval**: Semantic file discovery via Gemini CLI with all files analysis
**rg (ripgrep)**: Fast content search with regex support
**find**: File/directory location by name patterns
**grep**: Built-in pattern matching in files
**get_modules_by_depth.sh**: Program architecture analysis and structural discovery
### Decision Principles
- **Use codebase-retrieval for semantic discovery** - Intelligent file discovery based on task context
- **Use rg for content** - Fastest for searching within files
- **Use find for files** - Locate files/directories by name
- **Use grep sparingly** - Only when rg unavailable
- **Use get_modules_by_depth.sh first** - MANDATORY for program architecture analysis before planning
- **Always use Bash commands** - NEVER use Windows cmd/PowerShell commands
### Tool Selection Matrix
| Need | Tool | Use Case |
|------|------|----------|
| **Semantic file discovery** | codebase-retrieval | Find files relevant to task/feature context |
| **Pattern matching** | rg | Search code content with regex |
| **File name lookup** | find | Locate files by name patterns |
| **Architecture analysis** | get_modules_by_depth.sh | Understand program structure |
### Quick Command Reference
```bash
# Semantic File Discovery (codebase-retrieval)
~/.claude/scripts/gemini-wrapper --all-files -p "List all files relevant to: [task/feature description]"
bash(~/.claude/scripts/gemini-wrapper --all-files -p "List all files relevant to: [task/feature description]")
# Program Architecture Analysis (MANDATORY FIRST)
~/.claude/scripts/get_modules_by_depth.sh # Discover program architecture
bash(~/.claude/scripts/get_modules_by_depth.sh) # Analyze structural hierarchy
@@ -49,6 +64,10 @@ grep -n -i "pattern" file.txt # Line numbers, case-insensitive
### Workflow Integration Examples
```bash
# Semantic Discovery → Content Search → Analysis (Recommended Pattern)
~/.claude/scripts/gemini-wrapper --all-files -p "List all files relevant to: [task/feature]" # Get relevant files
rg "[pattern]" --type [filetype] # Then search within discovered files
# Program Architecture Analysis (MANDATORY BEFORE PLANNING)
~/.claude/scripts/get_modules_by_depth.sh # Discover program architecture
bash(~/.claude/scripts/get_modules_by_depth.sh) # Analyze structural hierarchy

View File

@@ -1,534 +0,0 @@
---
name: design-tokens-schema
description: Design tokens JSON schema specification for UI design workflow
type: specification
---
# Design Tokens Schema Specification
## Overview
Standardized JSON schema for design tokens used in `/workflow:design/*` commands. Ensures consistency across style extraction, consolidation, and UI generation phases.
## Core Principles
1. **OKLCH Color Format**: All colors use OKLCH color space for perceptual uniformity
2. **Semantic Naming**: User-centric token names (brand-primary, surface-elevated, not color-1, bg-2)
3. **rem-Based Sizing**: All spacing and typography use rem units for scalability
4. **Comprehensive Coverage**: Complete token set for production-ready design systems
5. **Accessibility First**: WCAG AA compliance validated and documented
## Full Schema
```json
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Design Tokens",
"description": "Design token definitions for UI design workflow",
"type": "object",
"required": ["colors", "typography", "spacing", "border_radius", "shadow"],
"properties": {
"meta": {
"type": "object",
"properties": {
"version": {"type": "string", "pattern": "^\\d+\\.\\d+\\.\\d+$"},
"generated_at": {"type": "string", "format": "date-time"},
"session_id": {"type": "string", "pattern": "^WFS-"},
"description": {"type": "string"}
}
},
"colors": {
"type": "object",
"required": ["brand", "surface", "semantic", "text"],
"properties": {
"brand": {
"type": "object",
"description": "Brand identity colors",
"required": ["primary", "secondary"],
"properties": {
"primary": {"$ref": "#/definitions/color"},
"secondary": {"$ref": "#/definitions/color"},
"accent": {"$ref": "#/definitions/color"}
}
},
"surface": {
"type": "object",
"description": "Surface and background colors",
"required": ["background", "elevated"],
"properties": {
"background": {"$ref": "#/definitions/color"},
"elevated": {"$ref": "#/definitions/color"},
"sunken": {"$ref": "#/definitions/color"},
"overlay": {"$ref": "#/definitions/color"}
}
},
"semantic": {
"type": "object",
"description": "Semantic state colors",
"required": ["success", "warning", "error", "info"],
"properties": {
"success": {"$ref": "#/definitions/color"},
"warning": {"$ref": "#/definitions/color"},
"error": {"$ref": "#/definitions/color"},
"info": {"$ref": "#/definitions/color"}
}
},
"text": {
"type": "object",
"description": "Text colors with WCAG AA validated contrast",
"required": ["primary", "secondary"],
"properties": {
"primary": {"$ref": "#/definitions/color"},
"secondary": {"$ref": "#/definitions/color"},
"tertiary": {"$ref": "#/definitions/color"},
"inverse": {"$ref": "#/definitions/color"},
"disabled": {"$ref": "#/definitions/color"}
}
},
"border": {
"type": "object",
"description": "Border and divider colors",
"properties": {
"subtle": {"$ref": "#/definitions/color"},
"default": {"$ref": "#/definitions/color"},
"strong": {"$ref": "#/definitions/color"}
}
}
}
},
"typography": {
"type": "object",
"required": ["font_family", "font_size", "line_height", "font_weight"],
"properties": {
"font_family": {
"type": "object",
"required": ["heading", "body"],
"properties": {
"heading": {"type": "string"},
"body": {"type": "string"},
"mono": {"type": "string"}
}
},
"font_size": {
"type": "object",
"required": ["xs", "sm", "base", "lg", "xl", "2xl", "3xl", "4xl"],
"properties": {
"xs": {"$ref": "#/definitions/size_rem"},
"sm": {"$ref": "#/definitions/size_rem"},
"base": {"$ref": "#/definitions/size_rem"},
"lg": {"$ref": "#/definitions/size_rem"},
"xl": {"$ref": "#/definitions/size_rem"},
"2xl": {"$ref": "#/definitions/size_rem"},
"3xl": {"$ref": "#/definitions/size_rem"},
"4xl": {"$ref": "#/definitions/size_rem"},
"5xl": {"$ref": "#/definitions/size_rem"}
}
},
"line_height": {
"type": "object",
"required": ["tight", "normal", "relaxed"],
"properties": {
"tight": {"type": "string", "pattern": "^\\d+(\\.\\d+)?$"},
"normal": {"type": "string", "pattern": "^\\d+(\\.\\d+)?$"},
"relaxed": {"type": "string", "pattern": "^\\d+(\\.\\d+)?$"}
}
},
"font_weight": {
"type": "object",
"properties": {
"light": {"type": "integer", "enum": [300]},
"normal": {"type": "integer", "enum": [400]},
"medium": {"type": "integer", "enum": [500]},
"semibold": {"type": "integer", "enum": [600]},
"bold": {"type": "integer", "enum": [700]}
}
}
}
},
"spacing": {
"type": "object",
"description": "Spacing scale (rem-based)",
"required": ["0", "1", "2", "3", "4", "6", "8"],
"patternProperties": {
"^\\d+$": {"$ref": "#/definitions/size_rem"}
}
},
"border_radius": {
"type": "object",
"required": ["none", "sm", "md", "lg", "full"],
"properties": {
"none": {"type": "string", "const": "0"},
"sm": {"$ref": "#/definitions/size_rem"},
"md": {"$ref": "#/definitions/size_rem"},
"lg": {"$ref": "#/definitions/size_rem"},
"xl": {"$ref": "#/definitions/size_rem"},
"2xl": {"$ref": "#/definitions/size_rem"},
"full": {"type": "string", "const": "9999px"}
}
},
"shadow": {
"type": "object",
"required": ["sm", "md", "lg"],
"properties": {
"none": {"type": "string", "const": "none"},
"sm": {"type": "string"},
"md": {"type": "string"},
"lg": {"type": "string"},
"xl": {"type": "string"},
"2xl": {"type": "string"}
}
},
"breakpoint": {
"type": "object",
"description": "Responsive breakpoints",
"properties": {
"sm": {"type": "string", "pattern": "^\\d+px$"},
"md": {"type": "string", "pattern": "^\\d+px$"},
"lg": {"type": "string", "pattern": "^\\d+px$"},
"xl": {"type": "string", "pattern": "^\\d+px$"},
"2xl": {"type": "string", "pattern": "^\\d+px$"}
}
},
"accessibility": {
"type": "object",
"description": "WCAG AA compliance data",
"properties": {
"contrast_ratios": {
"type": "object",
"patternProperties": {
".*": {
"type": "object",
"properties": {
"background": {"type": "string"},
"foreground": {"type": "string"},
"ratio": {"type": "number"},
"wcag_aa_pass": {"type": "boolean"},
"wcag_aaa_pass": {"type": "boolean"}
}
}
}
}
}
}
},
"definitions": {
"color": {
"type": "string",
"pattern": "^oklch\\(\\s*\\d+(\\.\\d+)?\\s+\\d+(\\.\\d+)?\\s+\\d+(\\.\\d+)?\\s*(\\/\\s*\\d+(\\.\\d+)?)?\\s*\\)$",
"description": "OKLCH color format: oklch(L C H / A)"
},
"size_rem": {
"type": "string",
"pattern": "^\\d+(\\.\\d+)?rem$",
"description": "rem-based size value"
}
}
}
```
## Example: Complete Design Tokens
```json
{
"meta": {
"version": "1.0.0",
"generated_at": "2025-10-05T15:30:00Z",
"session_id": "WFS-auth-dashboard",
"description": "Modern minimalist design system with high contrast"
},
"colors": {
"brand": {
"primary": "oklch(0.45 0.20 270 / 1)",
"secondary": "oklch(0.60 0.15 200 / 1)",
"accent": "oklch(0.65 0.18 30 / 1)"
},
"surface": {
"background": "oklch(0.98 0.01 270 / 1)",
"elevated": "oklch(1.00 0.00 0 / 1)",
"sunken": "oklch(0.95 0.02 270 / 1)",
"overlay": "oklch(0.00 0.00 0 / 0.5)"
},
"semantic": {
"success": "oklch(0.55 0.15 150 / 1)",
"warning": "oklch(0.70 0.18 60 / 1)",
"error": "oklch(0.50 0.20 20 / 1)",
"info": "oklch(0.60 0.15 240 / 1)"
},
"text": {
"primary": "oklch(0.20 0.02 270 / 1)",
"secondary": "oklch(0.45 0.02 270 / 1)",
"tertiary": "oklch(0.60 0.02 270 / 1)",
"inverse": "oklch(0.98 0.01 270 / 1)",
"disabled": "oklch(0.70 0.01 270 / 0.5)"
},
"border": {
"subtle": "oklch(0.90 0.01 270 / 1)",
"default": "oklch(0.80 0.02 270 / 1)",
"strong": "oklch(0.60 0.05 270 / 1)"
}
},
"typography": {
"font_family": {
"heading": "Inter, system-ui, -apple-system, sans-serif",
"body": "Inter, system-ui, -apple-system, sans-serif",
"mono": "Fira Code, Consolas, monospace"
},
"font_size": {
"xs": "0.75rem",
"sm": "0.875rem",
"base": "1rem",
"lg": "1.125rem",
"xl": "1.25rem",
"2xl": "1.5rem",
"3xl": "1.875rem",
"4xl": "2.25rem",
"5xl": "3rem"
},
"line_height": {
"tight": "1.25",
"normal": "1.5",
"relaxed": "1.75"
},
"font_weight": {
"light": 300,
"normal": 400,
"medium": 500,
"semibold": 600,
"bold": 700
}
},
"spacing": {
"0": "0",
"1": "0.25rem",
"2": "0.5rem",
"3": "0.75rem",
"4": "1rem",
"5": "1.25rem",
"6": "1.5rem",
"8": "2rem",
"10": "2.5rem",
"12": "3rem",
"16": "4rem",
"20": "5rem",
"24": "6rem"
},
"border_radius": {
"none": "0",
"sm": "0.25rem",
"md": "0.5rem",
"lg": "0.75rem",
"xl": "1rem",
"2xl": "1.5rem",
"full": "9999px"
},
"shadow": {
"none": "none",
"sm": "0 1px 2px oklch(0.00 0.00 0 / 0.05)",
"md": "0 4px 6px oklch(0.00 0.00 0 / 0.07), 0 2px 4px oklch(0.00 0.00 0 / 0.06)",
"lg": "0 10px 15px oklch(0.00 0.00 0 / 0.1), 0 4px 6px oklch(0.00 0.00 0 / 0.05)",
"xl": "0 20px 25px oklch(0.00 0.00 0 / 0.15), 0 10px 10px oklch(0.00 0.00 0 / 0.04)",
"2xl": "0 25px 50px oklch(0.00 0.00 0 / 0.25)"
},
"breakpoint": {
"sm": "640px",
"md": "768px",
"lg": "1024px",
"xl": "1280px",
"2xl": "1536px"
},
"accessibility": {
"contrast_ratios": {
"text_primary_on_background": {
"background": "oklch(0.98 0.01 270 / 1)",
"foreground": "oklch(0.20 0.02 270 / 1)",
"ratio": 14.2,
"wcag_aa_pass": true,
"wcag_aaa_pass": true
},
"brand_primary_on_background": {
"background": "oklch(0.98 0.01 270 / 1)",
"foreground": "oklch(0.45 0.20 270 / 1)",
"ratio": 6.8,
"wcag_aa_pass": true,
"wcag_aaa_pass": false
}
}
}
}
```
## CSS Custom Properties Generation
Design tokens should be converted to CSS custom properties for use in generated UI:
```css
:root {
/* Brand Colors */
--color-brand-primary: oklch(0.45 0.20 270 / 1);
--color-brand-secondary: oklch(0.60 0.15 200 / 1);
--color-brand-accent: oklch(0.65 0.18 30 / 1);
/* Surface Colors */
--color-surface-background: oklch(0.98 0.01 270 / 1);
--color-surface-elevated: oklch(1.00 0.00 0 / 1);
--color-surface-sunken: oklch(0.95 0.02 270 / 1);
/* Semantic Colors */
--color-semantic-success: oklch(0.55 0.15 150 / 1);
--color-semantic-warning: oklch(0.70 0.18 60 / 1);
--color-semantic-error: oklch(0.50 0.20 20 / 1);
--color-semantic-info: oklch(0.60 0.15 240 / 1);
/* Text Colors */
--color-text-primary: oklch(0.20 0.02 270 / 1);
--color-text-secondary: oklch(0.45 0.02 270 / 1);
--color-text-tertiary: oklch(0.60 0.02 270 / 1);
/* Typography */
--font-family-heading: Inter, system-ui, sans-serif;
--font-family-body: Inter, system-ui, sans-serif;
--font-family-mono: Fira Code, Consolas, monospace;
--font-size-xs: 0.75rem;
--font-size-sm: 0.875rem;
--font-size-base: 1rem;
--font-size-lg: 1.125rem;
--font-size-xl: 1.25rem;
--font-size-2xl: 1.5rem;
--font-size-3xl: 1.875rem;
--font-size-4xl: 2.25rem;
--line-height-tight: 1.25;
--line-height-normal: 1.5;
--line-height-relaxed: 1.75;
/* Spacing */
--spacing-0: 0;
--spacing-1: 0.25rem;
--spacing-2: 0.5rem;
--spacing-3: 0.75rem;
--spacing-4: 1rem;
--spacing-6: 1.5rem;
--spacing-8: 2rem;
--spacing-12: 3rem;
--spacing-16: 4rem;
/* Border Radius */
--border-radius-none: 0;
--border-radius-sm: 0.25rem;
--border-radius-md: 0.5rem;
--border-radius-lg: 0.75rem;
--border-radius-xl: 1rem;
--border-radius-full: 9999px;
/* Shadows */
--shadow-sm: 0 1px 2px oklch(0.00 0.00 0 / 0.05);
--shadow-md: 0 4px 6px oklch(0.00 0.00 0 / 0.07), 0 2px 4px oklch(0.00 0.00 0 / 0.06);
--shadow-lg: 0 10px 15px oklch(0.00 0.00 0 / 0.1), 0 4px 6px oklch(0.00 0.00 0 / 0.05);
}
```
## Tailwind Configuration Generation
```javascript
// tailwind.config.js
module.exports = {
theme: {
extend: {
colors: {
brand: {
primary: 'oklch(0.45 0.20 270 / <alpha-value>)',
secondary: 'oklch(0.60 0.15 200 / <alpha-value>)',
accent: 'oklch(0.65 0.18 30 / <alpha-value>)'
},
surface: {
background: 'oklch(0.98 0.01 270 / <alpha-value>)',
elevated: 'oklch(1.00 0.00 0 / <alpha-value>)',
sunken: 'oklch(0.95 0.02 270 / <alpha-value>)'
},
semantic: {
success: 'oklch(0.55 0.15 150 / <alpha-value>)',
warning: 'oklch(0.70 0.18 60 / <alpha-value>)',
error: 'oklch(0.50 0.20 20 / <alpha-value>)',
info: 'oklch(0.60 0.15 240 / <alpha-value>)'
}
},
fontFamily: {
heading: ['Inter', 'system-ui', 'sans-serif'],
body: ['Inter', 'system-ui', 'sans-serif'],
mono: ['Fira Code', 'Consolas', 'monospace']
},
fontSize: {
'xs': '0.75rem',
'sm': '0.875rem',
'base': '1rem',
'lg': '1.125rem',
'xl': '1.25rem',
'2xl': '1.5rem',
'3xl': '1.875rem',
'4xl': '2.25rem'
},
spacing: {
'1': '0.25rem',
'2': '0.5rem',
'3': '0.75rem',
'4': '1rem',
'6': '1.5rem',
'8': '2rem',
'12': '3rem',
'16': '4rem'
},
borderRadius: {
'sm': '0.25rem',
'md': '0.5rem',
'lg': '0.75rem',
'xl': '1rem',
'2xl': '1.5rem'
},
boxShadow: {
'sm': '0 1px 2px oklch(0.00 0.00 0 / 0.05)',
'md': '0 4px 6px oklch(0.00 0.00 0 / 0.07), 0 2px 4px oklch(0.00 0.00 0 / 0.06)',
'lg': '0 10px 15px oklch(0.00 0.00 0 / 0.1), 0 4px 6px oklch(0.00 0.00 0 / 0.05)'
}
}
}
}
```
## Validation Requirements
### Color Validation
- All colors MUST use OKLCH format
- Alpha channel optional (defaults to 1)
- Lightness: 0-1 (0% to 100%)
- Chroma: 0-0.4 (typical range, can exceed for vibrant colors)
- Hue: 0-360 (degrees)
### Accessibility Validation
- Text on background: minimum 4.5:1 contrast (WCAG AA)
- Large text (18pt+ or 14pt+ bold): minimum 3:1 contrast
- UI components: minimum 3:1 contrast
- Non-text focus indicators: minimum 3:1 contrast
### Consistency Validation
- Spacing scale maintains consistent ratio (e.g., 1.5x or 2x progression)
- Typography scale follows modular scale principles
- Border radius values progress logically
- Shadow layers increase in offset and blur systematically
## Usage in Commands
### style-extract Output
Generates initial `design-tokens.json` from visual analysis
### style-consolidate Output
Finalizes validated `design-tokens.json` with accessibility data
### ui-generate Input
Reads `design-tokens.json` to generate CSS custom properties
### design-update Integration
References `design-tokens.json` path in synthesis-specification.md
## Version History
- **1.0.0**: Initial schema with OKLCH colors, semantic naming, WCAG AA validation

View File

@@ -210,6 +210,15 @@ Tools execute in current working directory:
RULES: $(cat "~/.claude/workflows/cli-templates/prompts/[category]/[template].txt") | [constraints]
```
**⚠️ CRITICAL: Command Substitution Rules**
When using `$(cat ...)` for template loading in actual CLI commands:
- **Template reference only, never read**: When user specifies template name, use `$(cat ...)` directly in RULES field, do NOT read template content first
- **NEVER use escape characters**: `\$`, `\"`, `\'` will break command substitution
- **In -p "..." context**: Path in `$(cat ...)` needs NO quotes (tilde expands correctly)
- **Correct**: `RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)`
- **WRONG**: `RULES: \$(cat ...)` or `RULES: $(cat \"...\")` or `RULES: $(cat '...')`
- **Why**: Shell executes `$(...)` in subshell where path is safe without quotes
**Examples**:
- Single template: `$(cat "~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt") | Focus on security`
- Multiple templates: `$(cat "template1.txt") $(cat "template2.txt") | Enterprise standards`
@@ -264,6 +273,7 @@ RULES: Focus on type safety and component composition
| **Planning** | Gemini/Qwen | Task breakdown, migration planning | `planning/task-breakdown.txt` |
| **Security** | Codex | Vulnerability assessment, fixes | `analysis/security.txt` |
| **Refactoring** | Multiple | Gemini/Qwen for analysis, Codex for execution | `development/refactor.txt` |
| **Module Documentation** | Gemini (Qwen fallback) | Universal module/file documentation for all levels | `memory/claude-module-unified.txt` |
### Template System
@@ -281,6 +291,8 @@ prompts/
│ ├── feature.txt - Feature implementation
│ ├── refactor.txt - Refactoring tasks
│ └── testing.txt - Test generation
├── memory/
│ └── claude-module-unified.txt - Universal module/file documentation template
└── planning/
└── task-breakdown.txt - Task decomposition
@@ -317,7 +329,7 @@ TASK: Analyze project structure and identify patterns
MODE: analysis
CONTEXT: @{src/**/*.ts,CLAUDE.md} Previous analysis of auth system
EXPECTED: Architecture overview and integration points
RULES: $(cat '~/.claude/workflows/cli-templates/prompts/analysis/architecture.txt') | Focus on integration points
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/architecture.txt) | Focus on integration points
"
```
@@ -342,7 +354,7 @@ TASK: Review JWT-based auth system design
MODE: analysis
CONTEXT: @{src/auth/**/*} Existing patterns and requirements
EXPECTED: Architecture analysis report with recommendations
RULES: $(cat '~/.claude/workflows/cli-templates/prompts/analysis/architecture.txt') | Focus on security
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/architecture.txt) | Focus on security
"
# Use Qwen only if Gemini unavailable
@@ -352,7 +364,7 @@ TASK: Review JWT-based auth system design
MODE: analysis
CONTEXT: @{src/auth/**/*} Existing patterns and requirements
EXPECTED: Architecture analysis report with recommendations
RULES: $(cat '~/.claude/workflows/cli-templates/prompts/analysis/architecture.txt') | Focus on security
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/architecture.txt) | Focus on security
"
```
@@ -365,7 +377,7 @@ TASK: Create JWT-based authentication system
MODE: auto
CONTEXT: @{src/auth/**/*} Database schema from session memory
EXPECTED: Complete auth module with tests
RULES: $(cat '~/.claude/workflows/cli-templates/prompts/development/feature.txt') | Follow security best practices
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/development/feature.txt) | Follow security best practices
" --skip-git-repo-check -s danger-full-access
# Continue in same session - Add JWT validation
@@ -409,6 +421,7 @@ codex resume --last
- **Discover patterns first** - Use rg/MCP for complex file discovery before CLI execution
- **Build precise CONTEXT** - Convert discovery results to explicit file references
- **Document context** - Always reference CLAUDE.md for context
- **⚠️ No escape characters in CLI commands** - NEVER use `\$`, `\"`, `\'` in actual CLI execution (breaks command substitution and path expansion)
### Context Optimization Strategy
**Directory Navigation**: Use `cd [directory] &&` pattern when analyzing specific areas to reduce irrelevant context

View File

@@ -678,15 +678,20 @@ All workflows use the same file structure definition regardless of complexity. *
├── [.design/] # Standalone UI design outputs (created when needed)
│ └── run-[timestamp]/ # Timestamped design runs without session
│ ├── style-extraction/ # Style analysis results
│ ├── style-consolidation/ # Design system tokens (per-style)
│ ├── .intermediates/ # Intermediate analysis files
├── style-analysis/ # Style analysis data
│ │ │ ├── computed-styles.json # Extracted CSS values
│ │ │ └── design-space-analysis.json # Design directions
│ │ └── layout-analysis/ # Layout analysis data
│ │ ├── dom-structure-{target}.json # DOM extraction
│ │ └── inspirations/ # Layout research
│ │ └── {target}-layout-ideas.txt
│ ├── style-extraction/ # Final design systems
│ │ ├── style-1/ # design-tokens.json, style-guide.md
│ │ └── style-N/
│ ├── layout-extraction/ # Layout templates
│ │ └── layout-templates.json
│ ├── prototypes/ # Generated HTML/CSS prototypes
│ │ ├── _templates/ # Target-specific layout plans and templates
│ │ │ ├── {target}-layout-{n}.json # Layout plan (target-specific)
│ │ │ ├── {target}-layout-{n}.html # HTML template
│ │ │ └── {target}-layout-{n}.css # CSS template
│ │ ├── {target}-style-{s}-layout-{l}.html # Final prototypes
│ │ ├── compare.html # Interactive matrix view
│ │ └── index.html # Navigation page
@@ -706,15 +711,20 @@ All workflows use the same file structure definition regardless of complexity. *
│ ├── IMPL-*-summary.md # Main task summaries
│ └── IMPL-*.*-summary.md # Subtask summaries
├── [design-*/] # UI design outputs (created by ui-design workflows)
│ ├── style-extraction/ # Style analysis results
│ ├── style-consolidation/ # Design system tokens (per-style)
│ ├── .intermediates/ # Intermediate analysis files
├── style-analysis/ # Style analysis data
│ │ │ ├── computed-styles.json # Extracted CSS values
│ │ │ └── design-space-analysis.json # Design directions
│ │ └── layout-analysis/ # Layout analysis data
│ │ ├── dom-structure-{target}.json # DOM extraction
│ │ └── inspirations/ # Layout research
│ │ └── {target}-layout-ideas.txt
│ ├── style-extraction/ # Final design systems
│ │ ├── style-1/ # design-tokens.json, style-guide.md
│ │ └── style-N/
│ ├── layout-extraction/ # Layout templates
│ │ └── layout-templates.json
│ ├── prototypes/ # Generated HTML/CSS prototypes
│ │ ├── _templates/ # Target-specific layout plans and templates
│ │ │ ├── {target}-layout-{n}.json # Layout plan (target-specific)
│ │ │ ├── {target}-layout-{n}.html # HTML template
│ │ │ └── {target}-layout-{n}.css # CSS template
│ │ ├── {target}-style-{s}-layout-{l}.html # Final prototypes
│ │ ├── compare.html # Interactive matrix view
│ │ └── index.html # Navigation page
@@ -730,7 +740,8 @@ All workflows use the same file structure definition regardless of complexity. *
- **Dynamic Files**: Subtask JSON files created during task decomposition
- **Scratchpad Usage**: `.scratchpad/` created when CLI commands run without active session
- **Design Usage**: `design-{timestamp}/` created by UI design workflows, `.design/` for standalone design runs
- **Layout Planning**: `prototypes/_templates/` contains target-specific layout plans (JSON) generated during UI generation phase
- **Intermediate Files**: `.intermediates/` contains analysis data (style/layout) separate from final deliverables
- **Layout Templates**: `layout-extraction/layout-templates.json` contains structural templates for UI assembly
#### Scratchpad Directory (.scratchpad/)
**Purpose**: Centralized location for non-session-specific CLI outputs

View File

@@ -2,13 +2,13 @@
## Overview
**Role**: Codex - autonomous development, implementation, and testing
**Role**: Autonomous development, implementation, and testing specialist
**File**: `d:\Claude_dms3\.codex\AGENTS.md`
## Prompt Structure
### Single-Task Format
**Receive prompts in this format**:
All prompts follow this 6-field format:
```
PURPOSE: [development goal]
@@ -16,87 +16,12 @@ TASK: [specific implementation task]
MODE: [auto|write]
CONTEXT: [file patterns]
EXPECTED: [deliverables]
RULES: [constraints and templates]
RULES: [templates | additional constraints]
```
### Multi-Task Format (Subtask Execution)
**Subtask indicator**: `Subtask N of M: [title]` or `CONTINUE TO NEXT SUBTASK`
**First subtask** (creates new session):
```
PURPOSE: [overall goal]
TASK: [subtask 1 description]
MODE: auto
CONTEXT: [file patterns]
EXPECTED: [subtask deliverables]
RULES: [constraints]
Subtask 1 of N: [subtask title]
```
**Subsequent subtasks** (continues via `resume --last`):
```
CONTINUE TO NEXT SUBTASK:
Subtask N of M: [subtask title]
PURPOSE: [continuation goal]
TASK: [subtask N description]
CONTEXT: Previous work completed, now focus on [new files]
EXPECTED: [subtask deliverables]
RULES: Build on previous subtask, maintain consistency
```
## Execution Requirements
### System Optimization
**Hard Requirement**: Call binaries directly in `functions.shell`, always set `workdir`, and avoid shell wrappers such as `bash -lc`, `sh -lc`, `zsh -lc`, `cmd /c`, `pwsh.exe -NoLogo -NoProfile -Command`, and `powershell.exe -NoLogo -NoProfile -Command`.
**Text Editing Priority**: Use the `apply_patch` tool for all routine text edits; fall back to `sed` for single-line substitutions only if `apply_patch` is unavailable, and avoid `python` editing scripts unless both options fail.
**`apply_patch` Usage**: Invoke `apply_patch` with the patch payload as the second element in the command array (no shell-style flags). Provide `workdir` and, when helpful, a short `justification` alongside the command.
**Example invocation**:
```json
{
"command": ["apply_patch", "*** Begin Patch\n*** Update File: path/to/file\n@@\n- old\n+ new\n*** End Patch\n"],
"workdir": "<workdir>",
"justification": "Brief reason for the change"
}
```
**Windows UTF-8 Encoding**: Before executing commands on Windows systems, ensure proper UTF-8 encoding by running:
```powershell
[Console]::InputEncoding = [Text.UTF8Encoding]::new($false)
[Console]::OutputEncoding = [Text.UTF8Encoding]::new($false)
chcp 65001 > $null
```
### ALWAYS
- **Parse all fields** - Understand PURPOSE, TASK, MODE, CONTEXT, EXPECTED, RULES
- **Detect subtask format** - Check for "Subtask N of M" or "CONTINUE TO NEXT SUBTASK"
- **Follow MODE strictly** - Respect execution boundaries
- **Study CONTEXT files** - Find 3+ similar patterns before implementing
- **Apply RULES** - Follow templates and constraints exactly
- **Test continuously** - Run tests after every change
- **Commit incrementally** - Small, working commits
- **Match project style** - Follow existing patterns exactly
- **Validate EXPECTED** - Ensure all deliverables are met
- **Report context** (subtasks) - Summarize key info for next subtask
- **Use direct binary calls** - Avoid shell wrappers for efficiency
- **Prefer apply_patch** - Use for text edits over Python scripts
- **Configure Windows encoding** - Set UTF-8 for Chinese character support
### NEVER
- **Make assumptions** - Verify with existing code
- **Ignore existing patterns** - Study before implementing
- **Skip tests** - Tests are mandatory
- **Use clever tricks** - Choose boring, obvious solutions
- **Over-engineer** - Simple solutions over complex architectures
- **Break existing code** - Ensure backward compatibility
- **Exceed 3 attempts** - Stop and reassess if blocked 3 times
## MODE Behavior
## MODE Definitions
### MODE: auto (default)
@@ -105,28 +30,17 @@ chcp 65001 > $null
- Run tests and builds
- Commit code incrementally
**Execute (Single Task)**:
**Execute**:
1. Parse PURPOSE and TASK
2. Analyze CONTEXT files - find 3+ similar patterns
3. Plan implementation approach
4. Generate code following RULES and project patterns
5. Write tests alongside code
6. Run tests continuously
7. Commit working code incrementally
8. Validate all EXPECTED deliverables
9. Report results
3. Plan implementation following RULES
4. Generate code with tests
5. Run tests continuously
6. Commit working code incrementally
7. Validate EXPECTED deliverables
8. Report results (with context for next subtask if multi-task)
**Execute (Multi-Task/Subtask)**:
1. **First subtask**: Follow single-task flow above
2. **Subsequent subtasks** (via `resume --last`):
- Recall context from previous subtask(s)
- Build on previous work (don't repeat)
- Maintain consistency with previous decisions
- Focus on current subtask scope only
- Test integration with previous subtasks
- Report subtask completion status
**Use For**: Feature implementation, bug fixes, refactoring, multi-step tasks
**Constraint**: Must test every change
### MODE: write
@@ -141,117 +55,219 @@ chcp 65001 > $null
3. Validate tests pass
4. Report file changes
**Use For**: Test generation, documentation updates, targeted fixes
## Execution Protocol
## RULES Processing
### Core Requirements
- **Parse the RULES field** to identify template content and additional constraints
- **Recognize `|` as separator** between template and additional constraints
- **ALWAYS apply all template guidelines** provided in the prompt
- **ALWAYS apply all additional constraints** specified after `|`
- **Treat all rules as mandatory** - both template and constraints must be followed
- **Failure to follow any rule** constitutes task failure
**ALWAYS**:
- Parse all 6 fields (PURPOSE, TASK, MODE, CONTEXT, EXPECTED, RULES)
- Study CONTEXT files - find 3+ similar patterns before implementing
- Apply RULES (templates + constraints) exactly
- Test continuously after every change
- Commit incrementally with working code
- Match project style and patterns exactly
- List all created/modified files at output beginning
- Use direct binary calls (avoid shell wrappers)
- Prefer apply_patch for text edits
- Configure Windows UTF-8 encoding for Chinese support
**NEVER**:
- Make assumptions without code verification
- Ignore existing patterns
- Skip tests
- Use clever tricks over boring solutions
- Over-engineer solutions
- Break existing code or backward compatibility
- Exceed 3 failed attempts without stopping
### RULES Processing
- Parse RULES field to extract template content and constraints
- Recognize `|` as separator: `template content | additional constraints`
- Apply ALL template guidelines as mandatory
- Apply ALL additional constraints as mandatory
- Treat rule violations as task failures
### Multi-Task Execution (Resume Pattern)
**First subtask**: Standard execution flow above
**Subsequent subtasks** (via `resume --last`):
- Recall context from previous subtasks
- Build on previous work (don't repeat)
- Maintain consistency with established patterns
- Focus on current subtask scope only
- Test integration with previous work
- Report context for next subtask
## System Optimization
**Direct Binary Calls**: Always call binaries directly in `functions.shell`, set `workdir`, avoid shell wrappers (`bash -lc`, `cmd /c`, etc.)
**Text Editing Priority**:
1. Use `apply_patch` tool for all routine text edits
2. Fall back to `sed` for single-line substitutions if unavailable
3. Avoid Python editing scripts unless both fail
**apply_patch invocation**:
```json
{
"command": ["apply_patch", "*** Begin Patch\n*** Update File: path/to/file\n@@\n- old\n+ new\n*** End Patch\n"],
"workdir": "<workdir>",
"justification": "Brief reason"
}
```
**Windows UTF-8 Encoding** (before commands):
```powershell
[Console]::InputEncoding = [Text.UTF8Encoding]::new($false)
[Console]::OutputEncoding = [Text.UTF8Encoding]::new($false)
chcp 65001 > $null
```
## Output Standards
### Format Priority
**If template defines output format** → Follow template format EXACTLY (all sections mandatory)
**If template has no format** → Use default format below based on task type
### Default Output Formats
#### Single Task Implementation
```markdown
# Implementation: [TASK Title]
## Changes
- Created: `path/to/file1.ext` (X lines)
- Modified: `path/to/file2.ext` (+Y/-Z lines)
- Deleted: `path/to/file3.ext`
## Summary
[2-3 sentence overview of what was implemented]
## Key Decisions
1. [Decision] - Rationale and reference to similar pattern
2. [Decision] - path/to/reference:line
## Implementation Details
[Evidence-based description with code references]
## Testing
- Tests written: X new tests
- Tests passing: Y/Z tests
- Coverage: N%
## Validation
✅ Tests: X passing
✅ Coverage: Y%
✅ Build: Success
## Next Steps
[Recommendations or future improvements]
```
#### Multi-Task Execution (with Resume)
**First Subtask**:
```markdown
# Subtask 1/N: [TASK Title]
## Changes
[List of file changes]
## Implementation
[Details with code references]
## Testing
✅ Tests: X passing
✅ Integration: Compatible with existing code
## Context for Next Subtask
- Key decisions: [established patterns]
- Files created: [paths and purposes]
- Integration points: [where next subtask should connect]
```
**Subsequent Subtasks**:
```markdown
# Subtask N/M: [TASK Title]
## Changes
[List of file changes]
## Integration Notes
✅ Compatible with subtask N-1
✅ Maintains established patterns
✅ Tests pass with previous work
## Implementation
[Details with code references]
## Testing
✅ Tests: X passing
✅ Total coverage: Y%
## Context for Next Subtask
[If not final subtask, provide context for continuation]
```
#### Partial Completion
```markdown
# Task Status: Partially Completed
## Completed
- [What worked successfully]
- Files: `path/to/completed.ext`
## Blocked
- **Issue**: [What failed]
- **Root Cause**: [Analysis of failure]
- **Attempted**: [Solutions tried - attempt X of 3]
## Required
[What's needed to proceed]
## Recommendation
[Suggested next steps or alternative approaches]
```
### Code References
**Format**: `path/to/file:line_number`
**Example**: `src/auth/jwt.ts:45` - Implemented token validation following pattern from `src/auth/session.ts:78`
### Related Files Section
**Always include at output beginning** - List ALL files analyzed, created, or modified:
```markdown
## Related Files
- `path/to/file1.ext` - [Role in implementation]
- `path/to/file2.ext` - [Reference pattern used]
- `path/to/file3.ext` - [Modified for X reason]
```
## Error Handling
### Three-Attempt Rule
**On 3rd failed attempt**:
1. **Stop execution**
2. **Report status**: What was attempted, what failed, root cause
3. **Request guidance**: Ask for clarification, suggest alternatives
1. Stop execution
2. Report: What attempted, what failed, root cause
3. Request guidance or suggest alternatives
### Recovery Strategies
**Syntax/Type Errors**:
1. Review and fix errors
2. Re-run tests
3. Validate build succeeds
**Runtime Errors**:
1. Analyze stack trace
2. Add error handling
3. Add tests for error cases
**Test Failures**:
1. Debug in isolation
2. Review test setup
3. Fix implementation or test
**Build Failures**:
1. Check error messages
2. Fix incrementally
3. Validate each fix
## Progress Reporting
### During Execution (Single Task)
```
[1/5] Analyzing existing code patterns...
[2/5] Planning implementation approach...
[3/5] Generating code...
[4/5] Writing tests...
[5/5] Running validation...
```
### During Execution (Subtask)
```
[Subtask N/M: Subtask Title]
[1/4] Recalling context from previous subtasks...
[2/4] Implementing current subtask...
[3/4] Testing integration with previous work...
[4/4] Validating subtask completion...
```
### On Success (Single Task)
```
✅ Task completed
Changes:
- Created: [files with line counts]
- Modified: [files with changes]
Validation:
✅ Tests: [count] passing
✅ Coverage: [percentage]
✅ Build: Success
Next Steps: [recommendations]
```
### On Success (Subtask)
```
✅ Subtask N/M completed
Changes:
- Created: [files]
- Modified: [files]
Integration:
✅ Compatible with previous subtasks
✅ Tests: [count] passing
✅ Build: Success
Context for next subtask:
- [Key decisions made]
- [Files created/modified]
- [Patterns established]
```
### On Partial Completion
```
⚠️ Task partially completed
Completed: [what worked]
Blocked: [what failed and why]
Required: [what's needed]
Recommendation: [next steps]
```
| Error Type | Response |
|------------|----------|
| **Syntax/Type** | Review errors → Fix → Re-run tests → Validate build |
| **Runtime** | Analyze stack trace → Add error handling → Test error cases |
| **Test Failure** | Debug in isolation → Review setup → Fix implementation/test |
| **Build Failure** | Check messages → Fix incrementally → Validate each fix |
## Quality Standards
@@ -274,98 +290,43 @@ Recommendation: [next steps]
- Graceful degradation
- Don't expose sensitive info
## Multi-Step Task Execution
## Core Principles
### Context Continuity via Resume
**Incremental Progress**:
- Small, testable changes
- Commit working code frequently
- Build on previous work (subtasks)
When executing subtasks via `codex exec "..." resume --last`:
**Evidence-Based**:
- Study 3+ similar patterns before implementing
- Match project style exactly
- Verify with existing code
**Advantages**:
- Session memory preserves previous decisions
- Maintains implementation style consistency
- Avoids redundant context re-injection
- Enables incremental testing and validation
**Pragmatic**:
- Boring solutions over clever code
- Simple over complex
- Adapt to project reality
**Best Practices**:
1. **First subtask**: Establish patterns and architecture
2. **Subsequent subtasks**: Build on established patterns
3. **Test integration**: After each subtask, verify compatibility
4. **Report context**: Summarize key decisions for next subtask
5. **Maintain scope**: Focus only on current subtask goals
### Subtask Coordination
**DO**:
- Remember decisions from previous subtasks
- Reuse patterns established earlier
- Test integration with previous work
- Report what's ready for next subtask
**DON'T**:
- Re-implement what previous subtasks completed
- Change patterns established earlier (unless explicitly requested)
- Skip testing integration points
- Assume next subtask's requirements
### Example Flow
```
Subtask 1: Create data models
→ Establishes: Schema patterns, validation approach
→ Delivers: Models with tests
→ Context for next: Model structure, validation rules
Subtask 2: Implement API endpoints (resume --last)
→ Recalls: Model structure from subtask 1
→ Builds on: Uses established models
→ Delivers: API with integration tests
→ Context for next: API patterns, error handling
Subtask 3: Add authentication (resume --last)
→ Recalls: API patterns from subtask 2
→ Integrates: Auth middleware into existing endpoints
→ Delivers: Secured API
→ Final validation: Full integration test
```
## Philosophy
- **Incremental progress over big bangs** - Small, testable changes
- **Learning from existing code** - Study 3+ patterns before implementing
- **Pragmatic over dogmatic** - Adapt to project reality
- **Clear intent over clever code** - Boring, obvious solutions
- **Simple over complex** - Avoid over-engineering
- **Follow existing style** - Match project patterns exactly
- **Context continuity** - Leverage resume for multi-step consistency
**Context Continuity** (Multi-Task):
- Leverage resume for consistency
- Maintain established patterns
- Test integration between subtasks
## Execution Checklist
### Before Implementation
**Before**:
- [ ] Understand PURPOSE and TASK clearly
- [ ] Review all CONTEXT files
- [ ] Find 3+ similar patterns in codebase
- [ ] Review CONTEXT files, find 3+ patterns
- [ ] Check RULES templates and constraints
- [ ] Plan implementation approach
### During Implementation
**During**:
- [ ] Follow existing patterns exactly
- [ ] Write tests alongside code
- [ ] Run tests after every change
- [ ] Commit working code incrementally
- [ ] Handle errors properly
### After Implementation
- [ ] Run full test suite - all pass
- [ ] Check coverage - meets target
- [ ] Run build - succeeds
- [ ] Review EXPECTED - all deliverables met
---
**Version**: 2.2.0
**Last Updated**: 2025-10-04
**Changes**:
- Added system optimization requirements for direct binary calls
- Added apply_patch tool priority for text editing
- Added Windows UTF-8 encoding configuration for Chinese character support
- Previous: Multi-step task execution support with resume mechanism
**After**:
- [ ] All tests pass
- [ ] Coverage meets target
- [ ] Build succeeds
- [ ] All EXPECTED deliverables met

View File

@@ -2,11 +2,11 @@
## Overview
**Role**: Gemini - code analysis and documentation generation
**Role**: Code analysis and documentation generation specialist
## Prompt Structure
**Receive prompts in this format**:
All prompts follow this 6-field format:
```
PURPOSE: [goal statement]
@@ -14,29 +14,10 @@ TASK: [specific task]
MODE: [analysis|write]
CONTEXT: [file patterns]
EXPECTED: [deliverables]
RULES: [constraints and templates]
RULES: [templates | additional constraints]
```
## Execution Requirements
### ALWAYS
- **Parse all six fields** - Understand PURPOSE, TASK, MODE, CONTEXT, EXPECTED, RULES
- **Follow MODE strictly** - Respect permission boundaries
- **Analyze CONTEXT files** - Read all matching patterns thoroughly
- **Apply RULES** - Follow templates and constraints exactly
- **Provide evidence** - Quote code with file:line references
- **Match EXPECTED** - Deliver exactly what's requested
### NEVER
- **Assume behavior** - Verify with actual code
- **Ignore CONTEXT** - Stay within specified file patterns
- **Skip RULES** - Templates are mandatory when provided
- **Make unsubstantiated claims** - Always back with code references
- **Deviate from MODE** - Respect read/write boundaries
## MODE Behavior
## MODE Definitions
### MODE: analysis (default)
@@ -66,13 +47,50 @@ RULES: [constraints and templates]
4. Validate changes
5. Report file changes
## Output Format
## Execution Protocol
### Standard Analysis Structure
### Core Requirements
**ALWAYS**:
- Parse all 6 fields (PURPOSE, TASK, MODE, CONTEXT, EXPECTED, RULES)
- Follow MODE permissions strictly
- Analyze ALL CONTEXT files thoroughly
- Apply RULES (templates + constraints) exactly
- Provide code evidence with `file:line` references
- List all related/analyzed files at output beginning
- Match EXPECTED deliverables precisely
**NEVER**:
- Assume behavior without code verification
- Ignore CONTEXT file patterns
- Skip RULES or templates
- Make unsubstantiated claims
- Deviate from MODE boundaries
### RULES Processing
- Parse RULES field to extract template content and constraints
- Recognize `|` as separator: `template content | additional constraints`
- Apply ALL template guidelines as mandatory
- Apply ALL additional constraints as mandatory
- Treat rule violations as task failures
## Output Standards
### Format Priority
**If template defines output format** → Follow template format EXACTLY (all sections mandatory)
**If template has no format** → Use default format below
```markdown
# Analysis: [TASK Title]
## Related Files
- `path/to/file1.ext` - [Brief description of relevance]
- `path/to/file2.ext` - [Brief description of relevance]
- `path/to/file3.ext` - [Brief description of relevance]
## Summary
[2-3 sentence overview]
@@ -90,18 +108,9 @@ RULES: [constraints and templates]
### Code References
Always use format: `path/to/file:line_number`
**Format**: `path/to/file:line_number`
Example: "Authentication logic at `src/auth/jwt.ts:45` uses deprecated algorithm"
## RULES Processing
- **Parse the RULES field** to identify template content and additional constraints
- **Recognize `|` as separator** between template and additional constraints
- **ALWAYS apply all template guidelines** provided in the prompt
- **ALWAYS apply all additional constraints** specified after `|`
- **Treat all rules as mandatory** - both template and constraints must be followed
- **Failure to follow any rule** constitutes task failure
**Example**: `src/auth/jwt.ts:45` - Authentication uses deprecated algorithm
## Error Handling
@@ -115,29 +124,26 @@ Example: "Authentication logic at `src/auth/jwt.ts:45` uses deprecated algorithm
- Request correction
- Do not guess
## Quality Standards
## Core Principles
### Thoroughness
- Analyze ALL files in CONTEXT
- Check cross-file patterns
- Identify edge cases
- Quantify when possible
**Thoroughness**:
- Analyze ALL CONTEXT files completely
- Check cross-file patterns and dependencies
- Identify edge cases and quantify metrics
### Evidence-Based
- Quote relevant code
- Provide file:line references
- Link related patterns
**Evidence-Based**:
- Quote relevant code with `file:line` references
- Link related patterns across files
- Support all claims with concrete examples
### Actionable
- Clear recommendations
**Actionable**:
- Clear, specific recommendations (not vague)
- Prioritized by impact
- Specific, not vague
- Incremental changes over big rewrites
## Philosophy
- **Incremental over big bangs** - Suggest small, testable changes
- **Learn from existing code** - Reference project patterns
- **Pragmatic over dogmatic** - Adapt to project reality
- **Clear over clever** - Prefer obvious solutions
**Philosophy**:
- **Simple over complex** - Avoid over-engineering
- **Clear over clever** - Prefer obvious solutions
- **Learn from existing** - Reference project patterns
- **Pragmatic over dogmatic** - Adapt to project reality
- **Incremental progress** - Small, testable changes

32
.gitattributes vendored Normal file
View File

@@ -0,0 +1,32 @@
# Auto detect text files and perform LF normalization
* text=auto
# Shell scripts must use LF line endings (Unix style)
*.sh text eol=lf
# Batch/PowerShell scripts can use CRLF (Windows style)
*.bat text eol=crlf
*.cmd text eol=crlf
*.ps1 text eol=crlf
# Python scripts should use LF
*.py text eol=lf
# Common text files
*.md text
*.txt text
*.json text
*.yaml text
*.yml text
*.toml text
# Binary files (no line ending conversion)
*.png binary
*.jpg binary
*.jpeg binary
*.gif binary
*.ico binary
*.zip binary
*.tar binary
*.gz binary
*.pdf binary

View File

@@ -6,7 +6,7 @@
## Prompt Structure
**Receive prompts in this format**:
All prompts follow this 6-field format:
```
PURPOSE: [goal statement]
@@ -14,29 +14,10 @@ TASK: [specific task]
MODE: [analysis|write]
CONTEXT: [file patterns]
EXPECTED: [deliverables]
RULES: [constraints and templates]
RULES: [templates | additional constraints]
```
## Execution Requirements
### ALWAYS
- **Parse all six fields** - Understand PURPOSE, TASK, MODE, CONTEXT, EXPECTED, RULES
- **Follow MODE strictly** - Respect permission boundaries
- **Analyze CONTEXT files** - Read all matching patterns thoroughly
- **Apply RULES** - Follow templates and constraints exactly
- **Provide evidence** - Quote code with file:line references
- **Match EXPECTED** - Deliver exactly what's requested
### NEVER
- **Assume behavior** - Verify with actual code
- **Ignore CONTEXT** - Stay within specified file patterns
- **Skip RULES** - Templates are mandatory when provided
- **Make unsubstantiated claims** - Always back with code references
- **Deviate from MODE** - Respect read/write boundaries
## MODE Behavior
## MODE Definitions
### MODE: analysis (default)
@@ -66,13 +47,50 @@ RULES: [constraints and templates]
4. Validate changes
5. Report file changes
## Output Format
## Execution Protocol
### Standard Analysis Structure
### Core Requirements
**ALWAYS**:
- Parse all 6 fields (PURPOSE, TASK, MODE, CONTEXT, EXPECTED, RULES)
- Follow MODE permissions strictly
- Analyze ALL CONTEXT files thoroughly
- Apply RULES (templates + constraints) exactly
- Provide code evidence with `file:line` references
- List all related/analyzed files at output beginning
- Match EXPECTED deliverables precisely
**NEVER**:
- Assume behavior without code verification
- Ignore CONTEXT file patterns
- Skip RULES or templates
- Make unsubstantiated claims
- Deviate from MODE boundaries
### RULES Processing
- Parse RULES field to extract template content and constraints
- Recognize `|` as separator: `template content | additional constraints`
- Apply ALL template guidelines as mandatory
- Apply ALL additional constraints as mandatory
- Treat rule violations as task failures
## Output Standards
### Format Priority
**If template defines output format** → Follow template format EXACTLY (all sections mandatory)
**If template has no format** → Use default format below
```markdown
# Analysis: [TASK Title]
## Related Files
- `path/to/file1.ext` - [Brief description of relevance]
- `path/to/file2.ext` - [Brief description of relevance]
- `path/to/file3.ext` - [Brief description of relevance]
## Summary
[2-3 sentence overview]
@@ -90,18 +108,9 @@ RULES: [constraints and templates]
### Code References
Always use format: `path/to/file:line_number`
**Format**: `path/to/file:line_number`
Example: "Authentication logic at `src/auth/jwt.ts:45` uses deprecated algorithm"
## RULES Processing
- **Parse the RULES field** to identify template content and additional constraints
- **Recognize `|` as separator** between template and additional constraints
- **ALWAYS apply all template guidelines** provided in the prompt
- **ALWAYS apply all additional constraints** specified after `|`
- **Treat all rules as mandatory** - both template and constraints must be followed
- **Failure to follow any rule** constitutes task failure
**Example**: `src/auth/jwt.ts:45` - Authentication uses deprecated algorithm
## Error Handling
@@ -115,29 +124,26 @@ Example: "Authentication logic at `src/auth/jwt.ts:45` uses deprecated algorithm
- Request correction
- Do not guess
## Quality Standards
## Core Principles
### Thoroughness
- Analyze ALL files in CONTEXT
- Check cross-file patterns
- Identify edge cases
- Quantify when possible
**Thoroughness**:
- Analyze ALL CONTEXT files completely
- Check cross-file patterns and dependencies
- Identify edge cases and quantify metrics
### Evidence-Based
- Quote relevant code
- Provide file:line references
- Link related patterns
**Evidence-Based**:
- Quote relevant code with `file:line` references
- Link related patterns across files
- Support all claims with concrete examples
### Actionable
- Clear recommendations
**Actionable**:
- Clear, specific recommendations (not vague)
- Prioritized by impact
- Specific, not vague
- Incremental changes over big rewrites
## Philosophy
- **Incremental over big bangs** - Suggest small, testable changes
- **Learn from existing code** - Reference project patterns
- **Pragmatic over dogmatic** - Adapt to project reality
- **Clear over clever** - Prefer obvious solutions
**Philosophy**:
- **Simple over complex** - Avoid over-engineering
- **Clear over clever** - Prefer obvious solutions
- **Learn from existing** - Reference project patterns
- **Pragmatic over dogmatic** - Adapt to project reality
- **Incremental progress** - Small, testable changes

View File

@@ -70,6 +70,13 @@ For all CLI tool usage, command syntax, and integration guidelines:
- Learn from existing implementations
- Stop after 3 failed attempts and reassess
## Platform-Specific Guidelines
### Windows Path Format Guidelines
- **MCP Tools**: Use double backslash `D:\\path\\file.txt` (MCP doesn't support POSIX `/d/path`)
- **Bash Commands**: Use forward slash `D:/path/file.txt` or POSIX `/d/path/file.txt`
- **Relative Paths**: No conversion needed `./src`, `../config`
- **Quick Ref**: `C:\Users` → MCP: `C:\\Users` | Bash: `/c/Users` or `C:/Users`
#### **Content Uniqueness Rules**

301
GETTING_STARTED.md Normal file
View File

@@ -0,0 +1,301 @@
# 🚀 Claude Code Workflow (CCW) - Getting Started Guide
Welcome to Claude Code Workflow (CCW)! This guide will help you get up and running in 5 minutes and experience AI-driven automated software development.
---
## ⏱️ 5-Minute Quick Start
Let's build a "Hello World" web application from scratch with a simple example.
### Step 1: Install CCW
First, make sure you have installed CCW according to the [Installation Guide](INSTALL.md).
### Step 2: Start a Workflow Session
Think of a "session" as a dedicated project folder. CCW will store all files related to your current task here.
```bash
/workflow:session:start "My First Web App"
```
You will see that the system has created a new session, for example, `WFS-my-first-web-app`.
### Step 3: Create an Execution Plan
Now, tell CCW what you want to do. CCW will analyze your request and automatically generate a detailed, executable task plan.
```bash
/workflow:plan "Create a simple Express API that returns Hello World at the root path"
```
This command kicks off a fully automated planning process, which includes:
1. **Context Gathering**: Analyzing your project environment.
2. **Agent Analysis**: AI agents think about the best implementation path.
3. **Task Generation**: Creating specific task files (in `.json` format).
### Step 4: Execute the Plan
Once the plan is created, you can command the AI agents to start working.
```bash
/workflow:execute
```
You will see CCW's agents (like `@code-developer`) begin to execute tasks one by one. It will automatically create files, write code, and install dependencies.
### Step 5: Check the Status
Want to know the progress? You can check the status of the current workflow at any time.
```bash
/workflow:status
```
This will show the completion status of tasks, the currently executing task, and the next steps.
---
## 🧠 Core Concepts Explained
Understanding these concepts will help you use CCW more effectively:
- **Workflow Session**
> Like an independent sandbox or project space, used to isolate the context, files, and history of different tasks. All related files are stored in the `.workflow/WFS-<session-name>/` directory.
- **Task**
> An atomic unit of work, such as "create API route" or "write test case." Each task is a `.json` file that defines the goal, context, and execution steps in detail.
- **Agent**
> An AI assistant specialized in a specific domain. For example:
> - `@code-developer`: Responsible for writing and implementing code.
> - `@test-fix-agent`: Responsible for running tests and automatically fixing failures.
> - `@ui-design-agent`: Responsible for UI design and prototype creation.
- **Workflow**
> A series of predefined, collaborative commands used to orchestrate different agents and tools to achieve a complex development goal (e.g., `plan`, `execute`, `test-gen`).
---
## 🛠️ Common Scenarios
### Scenario 1: Developing a New Feature (as shown above)
This is the most common use case, following the "start session → plan → execute" pattern.
```bash
# 1. Start a session
/workflow:session:start "User Login Feature"
# 2. Create a plan
/workflow:plan "Implement JWT-based user login and registration"
# 3. Execute
/workflow:execute
```
### Scenario 2: UI Design
CCW has powerful UI design capabilities, capable of generating complex UI prototypes from simple text descriptions.
```bash
# 1. Start a UI design workflow
/workflow:ui-design:explore-auto --prompt "A modern, clean admin dashboard login page with username, password fields and a login button"
# 2. View the generated prototype
# After the command finishes, it will provide a path to a compare.html file. Open it in your browser to preview.
```
### Scenario 3: Fixing a Bug
CCW can help you analyze and fix bugs.
```bash
# 1. Use the bug-index command to analyze the problem
/cli:mode:bug-index "Incorrect success message even with wrong password on login"
# 2. The AI will analyze the relevant code and generate a fix plan. You can then execute this plan.
/workflow:execute
```
---
## 🔧 Workflow-Free Usage: Standalone Tools
Beyond the full workflow mode, CCW provides standalone CLI tools and commands suitable for quick analysis, ad-hoc queries, and routine maintenance tasks.
### Direct CLI Tool Invocation
CCW supports direct invocation of external AI tools (Gemini, Qwen, Codex) through a unified CLI interface without creating workflow sessions.
#### Code Analysis
Quickly analyze project code structure and architectural patterns:
```bash
# Code analysis with Gemini
/cli:analyze --tool gemini "Analyze authentication module architecture"
# Code quality analysis with Qwen
/cli:analyze --tool qwen "Review database model design for best practices"
```
#### Interactive Chat
Direct interactive dialogue with AI tools:
```bash
# Chat with Gemini
/cli:chat --tool gemini "Explain React Hook use cases"
# Discuss implementation with Codex
/cli:chat --tool codex "How to optimize this query performance"
```
#### Specialized Analysis Modes
Use specific analysis modes for in-depth exploration:
```bash
# Architecture planning mode
/cli:mode:plan --tool gemini "Design a scalable microservices architecture"
# Deep code analysis
/cli:mode:code-analysis --tool qwen "Analyze utility functions in src/utils/"
# Bug analysis mode
/cli:mode:bug-index --tool gemini "Analyze potential causes of memory leak"
```
### Semantic Tool Invocation
Users can tell Claude to use specific tools through natural language, and Claude will understand the intent and automatically execute the appropriate commands.
#### Semantic Invocation Examples
Describe needs directly in conversation using natural language:
**Example 1: Code Analysis**
```
User: "Use gemini to analyze the modular architecture of this project"
→ Claude will automatically execute gemini-wrapper for analysis
```
**Example 2: Document Generation**
```
User: "Use gemini to generate API documentation with all endpoint descriptions"
→ Claude will understand the need and automatically invoke gemini's write mode
```
**Example 3: Code Implementation**
```
User: "Use codex to implement user login functionality"
→ Claude will invoke the codex tool for autonomous development
```
#### Advantages of Semantic Invocation
- **Natural Interaction**: No need to memorize complex command syntax
- **Intelligent Understanding**: Claude selects appropriate tools and parameters based on context
- **Automatic Optimization**: Claude automatically adds necessary context and configuration
### Memory Management: CLAUDE.md Updates
CCW uses a hierarchical CLAUDE.md documentation system to maintain project context. Regular updates to these documents are critical for ensuring high-quality AI outputs.
#### Full Project Index Rebuild
Suitable for large-scale refactoring, architectural changes, or first-time CCW usage:
```bash
# Rebuild entire project documentation index
/update-memory-full
# Use specific tool for indexing
/update-memory-full --tool gemini # Comprehensive analysis (recommended)
/update-memory-full --tool qwen # Architecture focus
/update-memory-full --tool codex # Implementation details
```
**When to Execute**:
- During project initialization
- After major architectural changes
- Weekly routine maintenance
- When AI output drift is detected
#### Incremental Related Module Updates
Suitable for daily development, updating only modules affected by changes:
```bash
# Update recently modified related documentation
/update-memory-related
# Specify tool for update
/update-memory-related --tool gemini
```
**When to Execute**:
- After feature development completion
- After module refactoring
- After API interface updates
- After data model modifications
#### Memory Quality Impact
| Update Frequency | Result |
|-----------------|--------|
| ❌ Never update | Outdated API references, incorrect architectural assumptions, low-quality output |
| ⚠️ Occasional updates | Partial context accuracy, potential inconsistencies |
| ✅ Timely updates | High-quality output, precise context, correct pattern references |
### CLI Tool Initialization
When using external CLI tools for the first time, initialization commands provide quick configuration:
```bash
# Auto-configure all tools
/cli:cli-init
# Configure specific tools only
/cli:cli-init --tool gemini
/cli:cli-init --tool qwen
```
This command will:
- Analyze project structure
- Generate tool configuration files
- Set up `.geminiignore` / `.qwenignore`
- Create context file references
---
## ❓ Troubleshooting
- **Problem: Prompt shows "No active session found"**
> **Reason**: You haven't started a workflow session, or the current session is complete.
> **Solution**: Use `/workflow:session:start "Your task description"` to start a new session.
- **Problem: Command execution fails or gets stuck**
> **Reason**: It could be a network issue, AI model limitation, or the task is too complex.
> **Solution**:
> 1. First, try using `/workflow:status` to check the current state.
> 2. Check the log files in the `.workflow/WFS-<session-name>/.chat/` directory for detailed error messages.
> 3. If the task is too complex, try breaking it down into smaller tasks and then use `/workflow:plan` to create a new plan.
---
## 📚 Next Steps for Advanced Learning
Once you've mastered the basics, you can explore CCW's more powerful features:
1. **Test-Driven Development (TDD)**: Use `/workflow:tdd-plan` to create a complete TDD workflow. The AI will first write failing tests, then write code to make them pass, and finally refactor.
2. **Multi-Agent Brainstorming**: Use `/workflow:brainstorm:auto-parallel` to have multiple AI agents with different roles (like System Architect, Product Manager, Security Expert) analyze a topic simultaneously and generate a comprehensive report.
3. **Custom Agents and Commands**: You can modify the files in the `.claude/agents/` and `.claude/commands/` directories to customize agent behavior and workflows to fit your team's specific needs.
Hope this guide helps you get started smoothly with CCW!

301
GETTING_STARTED_CN.md Normal file
View File

@@ -0,0 +1,301 @@
# 🚀 Claude Code Workflow (CCW) - 快速上手指南
欢迎来到 Claude Code Workflow (CCW)!本指南将帮助您在 5 分钟内快速入门,体验由 AI 驱动的自动化软件开发流程。
---
## ⏱️ 5 分钟快速入门
让我们通过一个简单的例子,从零开始构建一个 "Hello World" Web 应用。
### 第 1 步:安装 CCW
首先,请确保您已经根据 [安装指南](INSTALL_CN.md) 完成了 CCW 的安装。
### 第 2 步:启动一个工作流会话
把“会话”想象成一个专门的项目文件夹。CCW 会在这里存放所有与您当前任务相关的文件。
```bash
/workflow:session:start "我的第一个 Web 应用"
```
您会看到系统创建了一个新的会话,例如 `WFS-我的第一个-web-应用`
### 第 3 步:创建执行计划
现在,告诉 CCW 您想做什么。CCW 会分析您的需求,并自动生成一个详细的、可执行的任务计划。
```bash
/workflow:plan "创建一个简单的 Express API在根路径返回 Hello World"
```
这个命令会启动一个完全自动化的规划流程,包括:
1. **上下文收集**:分析您的项目环境。
2. **智能体分析**AI 智能体思考最佳实现路径。
3. **任务生成**:创建具体的任务文件(`.json` 格式)。
### 第 4 步:执行计划
当计划创建完毕后,您就可以命令 AI 智能体开始工作了。
```bash
/workflow:execute
```
您会看到 CCW 的智能体(如 `@code-developer`)开始逐一执行任务。它会自动创建文件、编写代码、安装依赖。
### 第 5 步:查看状态
想知道进展如何?随时可以查看当前工作流的状态。
```bash
/workflow:status
```
这会显示任务的完成情况、当前正在执行的任务以及下一步计划。
---
## 🧠 核心概念解析
理解这几个概念,能帮助您更好地使用 CCW
- **工作流会话 (Workflow Session)**
> 就像一个独立的沙盒或项目空间,用于隔离不同任务的上下文、文件和历史记录。所有相关文件都存放在 `.workflow/WFS-<会话名>/` 目录下。
- **任务 (Task)**
> 一个原子化的工作单元,例如“创建 API 路由”、“编写测试用例”。每个任务都是一个 `.json` 文件,详细定义了目标、上下文和执行步骤。
- **智能体 (Agent)**
> 专门负责特定领域工作的 AI 助手。例如:
> - `@code-developer`: 负责编写和实现代码。
> - `@test-fix-agent`: 负责运行测试并自动修复失败的用例。
> - `@ui-design-agent`: 负责 UI 设计和原型创建。
- **工作流 (Workflow)**
> 一系列预定义的、相互协作的命令,用于编排不同的智能体和工具,以完成一个复杂的开发目标(如 `plan`、`execute`、`test-gen`)。
---
## 🛠️ 常见场景示例
### 场景 1开发一个新功能如上所示
这是最常见的用法,遵循“启动会话 → 规划 → 执行”的模式。
```bash
# 1. 启动会话
/workflow:session:start "用户登录功能"
# 2. 创建计划
/workflow:plan "实现基于 JWT 的用户登录和注册功能"
# 3. 执行
/workflow:execute
```
### 场景 2进行 UI 设计
CCW 拥有强大的 UI 设计能力,可以从简单的文本描述生成复杂的 UI 原型。
```bash
# 1. 启动一个 UI 设计工作流
/workflow:ui-design:explore-auto --prompt "一个现代、简洁的管理后台登录页面,包含用户名、密码输入框和登录按钮"
# 2. 查看生成的原型
# 命令执行完毕后,会提供一个 compare.html 文件的路径,在浏览器中打开即可预览。
```
### 场景 3修复一个 Bug
CCW 可以帮助您分析并修复 Bug。
```bash
# 1. 使用 bug-index 命令分析问题
/cli:mode:bug-index "用户登录时,即使密码错误也提示成功"
# 2. AI 会分析相关代码,并生成一个修复计划。然后您可以执行这个计划。
/workflow:execute
```
---
## 🔧 无工作流协作:独立工具使用
除了完整的工作流模式CCW 还提供独立的 CLI 工具和命令,适合快速分析、临时查询和日常维护任务。
### CLI 工具直接调用
CCW 支持通过统一的 CLI 接口直接调用外部 AI 工具Gemini、Qwen、Codex无需创建工作流会话。
#### 代码分析
快速分析项目代码结构和架构模式:
```bash
# 使用 Gemini 进行代码分析
/cli:analyze --tool gemini "分析认证模块的架构设计"
# 使用 Qwen 分析代码质量
/cli:analyze --tool qwen "检查数据库模型的设计是否合理"
```
#### 交互式对话
与 AI 工具进行直接交互式对话:
```bash
# 与 Gemini 交互
/cli:chat --tool gemini "解释一下 React Hook 的使用场景"
# 与 Codex 交互讨论实现方案
/cli:chat --tool codex "如何优化这个查询性能"
```
#### 专业模式分析
使用特定的分析模式进行深度探索:
```bash
# 架构分析模式
/cli:mode:plan --tool gemini "设计一个可扩展的微服务架构"
# 深度代码分析
/cli:mode:code-analysis --tool qwen "分析 src/utils/ 目录下的工具函数"
# Bug 分析模式
/cli:mode:bug-index --tool gemini "分析内存泄漏问题的可能原因"
```
### 工具语义调用
用户可以通过自然语言告诉 Claude 使用特定工具完成任务Claude 会理解意图并自动执行相应的命令。
#### 语义调用示例
直接在对话中使用自然语言描述需求:
**示例 1代码分析**
```
用户:"使用 gemini 分析一下这个项目的模块化架构"
→ Claude 会自动执行 gemini-wrapper 进行分析
```
**示例 2文档生成**
```
用户:"用 gemini 生成 API 文档,包含所有端点的说明"
→ Claude 会理解需求,自动调用 gemini 的写入模式生成文档
```
**示例 3代码实现**
```
用户:"使用 codex 实现用户登录功能"
→ Claude 会调用 codex 工具进行自主开发
```
#### 语义调用的优势
- **自然交互**:无需记忆复杂的命令语法
- **智能理解**Claude 会根据上下文选择合适的工具和参数
- **自动优化**Claude 会自动添加必要的上下文和配置
### 内存管理CLAUDE.md 更新
CCW 使用分层的 CLAUDE.md 文档系统维护项目上下文。定期更新这些文档对保证 AI 输出质量至关重要。
#### 完整项目重建索引
适用于大规模重构、架构变更或初次使用 CCW
```bash
# 重建整个项目的文档索引
/update-memory-full
# 使用特定工具进行索引
/update-memory-full --tool gemini # 全面分析(推荐)
/update-memory-full --tool qwen # 架构重点
/update-memory-full --tool codex # 实现细节
```
**执行时机**
- 项目初始化时
- 架构重大变更后
- 每周定期维护
- 发现 AI 输出偏差时
#### 增量更新相关模块
适用于日常开发,只更新变更影响的模块:
```bash
# 更新最近修改相关的文档
/update-memory-related
# 指定工具进行更新
/update-memory-related --tool gemini
```
**执行时机**
- 完成功能开发后
- 重构某个模块后
- 更新 API 接口后
- 修改数据模型后
#### 内存质量的影响
| 更新频率 | 结果 |
|---------|------|
| ❌ 从不更新 | 过时的 API 引用、错误的架构假设、低质量输出 |
| ⚠️ 偶尔更新 | 部分上下文准确、可能出现不一致 |
| ✅ 及时更新 | 高质量输出、精确的上下文、正确的模式引用 |
### CLI 工具初始化
首次使用外部 CLI 工具时,可以使用初始化命令快速配置:
```bash
# 自动配置所有工具
/cli:cli-init
# 只配置特定工具
/cli:cli-init --tool gemini
/cli:cli-init --tool qwen
```
该命令会:
- 分析项目结构
- 生成工具配置文件
- 设置 `.geminiignore` / `.qwenignore`
- 创建上下文文件引用
---
## ❓ 常见问题排查 (Troubleshooting)
- **问题:提示 "No active session found" (未找到活动会话)**
> **原因**:您还没有启动一个工作流会话,或者当前会话已完成。
> **解决方法**:使用 `/workflow:session:start "您的任务描述"` 来开始一个新会话。
- **问题:命令执行失败或卡住**
> **原因**可能是网络问题、AI 模型限制或任务过于复杂。
> **解决方法**
> 1. 首先尝试使用 `/workflow:status` 检查当前状态。
> 2. 查看 `.workflow/WFS-<会话名>/.chat/` 目录下的日志文件,获取详细错误信息。
> 3. 如果任务过于复杂,尝试将其分解为更小的任务,然后使用 `/workflow:plan` 重新规划。
---
## 📚 进阶学习路径
当您掌握了基础用法后,可以探索 CCW 更强大的功能:
1. **测试驱动开发 (TDD)**: 使用 `/workflow:tdd-plan` 来创建一个完整的 TDD 工作流AI 会先编写失败的测试,然后编写代码让测试通过,最后进行重构。
2. **多智能体头脑风暴**: 使用 `/workflow:brainstorm:auto-parallel` 让多个不同角色的 AI 智能体(如系统架构师、产品经理、安全专家)同时对一个主题进行分析,并生成一份综合报告。
3. **自定义智能体和命令**: 您可以修改 `.claude/agents/``.claude/commands/` 目录下的文件,来定制符合您团队特定需求的智能体行为和工作流。
希望本指南能帮助您顺利开启 CCW 之旅!

139
README.md
View File

@@ -13,7 +13,7 @@
---
**Claude Code Workflow (CCW)** is a next-generation multi-agent automation framework that orchestrates complex software development tasks through intelligent workflow management and autonomous execution.
**Claude Code Workflow (CCW)** transforms AI development from simple prompt chaining into a robust, context-first orchestration system. It solves execution uncertainty and error accumulation through structured planning, deterministic execution, and intelligent multi-model orchestration.
> **🎉 Latest: v4.4.0** - UI Design Workflow V3 with Layout/Style Separation Architecture. See [CHANGELOG.md](CHANGELOG.md) for details.
>
@@ -25,17 +25,41 @@
---
## ✨ Key Features
> 📚 **New to CCW?** Check out the [**Getting Started Guide**](GETTING_STARTED.md) for a beginner-friendly 5-minute tutorial!
- **🎯 Context-First Architecture**: Pre-defined context gathering eliminates execution uncertainty and error accumulation.
- **🤖 Multi-Agent System**: Specialized agents (`@code-developer`, `@test-fix-agent`) with tech-stack awareness and automated test validation.
- **🔄 End-to-End Workflow Automation**: From brainstorming to deployment with multi-phase orchestration.
- **📋 JSON-First Task Model**: Structured task definitions with `pre_analysis` steps for deterministic execution.
- **🧪 TDD Workflow Support**: Complete Test-Driven Development with Red-Green-Refactor cycle enforcement.
- **🧠 Multi-Model Orchestration**: Leverages Gemini (analysis), Qwen (architecture), and Codex (implementation) strengths.
- **✅ Pre-execution Verification**: Validates plans with both strategic (Gemini) and technical (Codex) analysis.
- **🔧 Unified CLI**: A single, powerful `/cli:*` command set for interacting with various AI tools.
- **📦 Smart Context Package**: `context-package.json` links tasks to relevant codebase files and external examples.
---
## ✨ Core Differentiators
#### **1. Context-First Architecture** 🎯
Pre-defined context gathering via `context-package.json` and `flow_control.pre_analysis` eliminates execution uncertainty. Agents load correct context **before** implementation, solving the "1-to-N" development drift problem.
#### **2. JSON-First State Management** 📋
Task states live in `.task/IMPL-*.json` (single source of truth). Markdown files are read-only views. Separates data from presentation, enabling programmatic orchestration without state drift.
#### **3. Autonomous Multi-Phase Orchestration** 🔄
Commands like `/workflow:plan` chain specialized sub-commands (session → context → analysis → tasks) with zero user intervention. `flow_control` mechanism creates executable "programs" for agents with step dependencies.
#### **4. Multi-Model Strategic Orchestration** 🧠
- **Gemini/Qwen**: Analysis, exploration, documentation (large context)
- **Codex**: Implementation, autonomous execution (`resume --last` maintains context)
- **Result**: 5-10x better task handling vs single-model approaches
#### **5. Hierarchical Memory System** 🧬
4-layer documentation (Root → Domain → Module → Sub-Module) provides context at appropriate abstraction levels, preventing information overload while maintaining precision.
#### **6. Specialized Role-Based Agents** 🤖
Dedicated agents mirror real software teams: `@action-planning-agent`, `@code-developer`, `@test-fix-agent`, `@ui-design-agent`. Includes TDD workflows (Red-Green-Refactor), UI design (layout/style separation), and automated QA.
---
### **Additional Features**
- **✅ Pre-execution Verification**: Quality gates (`/workflow:concept-clarify`, `/workflow:action-plan-verify`)
- **🔧 Unified CLI**: `/cli:*` commands with multi-tool support (`--tool gemini|qwen|codex`)
- **📦 Smart Context Package**: Links tasks to relevant code and external examples
- **🎨 UI Design Workflow**: Claude-native style/layout extraction, zero dependencies
- **🔄 Progressive Enhancement**: Optional tools extend capabilities, Claude-only mode works out-of-box
---
@@ -642,22 +666,26 @@ URL/Images → capture/explore-layers
#### Preview System
After running `ui-generate`, you get interactive preview tools:
After running `ui-generate`, you get interactive preview tools that **work directly in the browser without requiring a server**:
**Quick Preview** (Direct Browser):
**Direct Browser Preview** (Recommended - No Server Required):
```bash
# Navigate to prototypes directory
cd .workflow/WFS-auth/.design/prototypes
# Open index.html in browser (double-click or):
# Open in browser (double-click the file, or use command):
open index.html # macOS
start index.html # Windows
xdg-open index.html # Linux
# Both index.html and compare.html work directly without a server
open compare.html # Open comparison view directly
```
**Full Preview** (Local Server - Recommended):
**Optional: Local Server** (For advanced features):
```bash
cd .workflow/WFS-auth/.design/prototypes
# Choose one:
# Choose one if you need server-side features:
python -m http.server 8080 # Python
npx http-server -p 8080 # Node.js
php -S localhost:8080 # PHP
@@ -665,11 +693,12 @@ php -S localhost:8080 # PHP
```
**Preview Features**:
- `index.html`: Master navigation with all prototypes
- `compare.html`: Side-by-side comparison with viewport controls (Desktop/Tablet/Mobile)
- `index.html`: Master navigation with all prototypes (works offline)
- `compare.html`: Side-by-side comparison with viewport controls (works offline)
- Synchronized scrolling for layout comparison
- Dynamic page switching
- Real-time responsive testing
- **No server required** - all features work by opening files directly in browser
#### 📦 Output Structure
@@ -739,6 +768,80 @@ All UI workflow outputs are organized in the `.design` directory within your ses
---
## 🧠 Memory Management: Foundation of Context Quality
CCW's hierarchical memory system enables accurate context gathering and prevents execution drift. Regular updates are critical for high-quality AI outputs.
#### **Memory Hierarchy**
```
CLAUDE.md (Root) → domain/CLAUDE.md (Domain) → module/CLAUDE.md (Module) → submodule/CLAUDE.md (Sub-Module)
```
#### **When to Update**
| Trigger | Command | Purpose |
|---------|---------|---------|
| After major features | `/update-memory-related` | Update affected modules/dependencies |
| After architecture changes | `/update-memory-full` | Re-index entire project |
| Before complex planning | `/update-memory-related` | Ensure latest patterns in context-package.json |
| After refactoring | `/update-memory-related` | Update implementation patterns/APIs |
| Weekly maintenance | `/update-memory-full` | Keep docs synchronized |
#### **Quality Impact**
**Without updates:** ❌ Outdated patterns, deprecated APIs, incorrect context, architecture drift
**With updates:** ✅ Current patterns, latest APIs, accurate context, architecture alignment
#### **Best Practices**
1. Update immediately after significant changes
2. Use `/update-memory-related` frequently (faster, targeted)
3. Schedule `/update-memory-full` weekly (catches drift)
4. Review generated CLAUDE.md files
5. Integrate into CI/CD pipelines
> 💡 Memory quality determines context-package.json quality and execution accuracy. Treat as critical maintenance, not optional docs.
#### **Tool Integration**
```bash
/update-memory-full --tool gemini # Comprehensive analysis (default)
/update-memory-full --tool qwen # Architecture focus
/update-memory-full --tool codex # Implementation details
```
---
## 🏗️ Technical Architecture
Complete, self-documenting system for orchestrating AI agents with professional software engineering practices.
### **Project Organization**
```
.claude/
├── agents/ # Specialized AI agents (action-planning, code-developer, test-fix)
├── commands/ # User-facing and internal commands (workflow:*, cli:*, task:*)
├── workflows/ # Strategic framework (architecture, strategies, schemas, templates)
├── scripts/ # Utility automation (module analysis, file watching)
└── prompt-templates/# Standardized AI prompts
```
**Principles:** Separation of concerns (agents/commands/workflows), hierarchical commands, self-documenting, extensible templates
### **Execution Model**
```
User Command → Orchestrator → Phase 1-4 (Context → Analysis → Planning → Execution)
Specialized Agents (pre_analysis → implementation → validation)
```
**Example:** `/workflow:plan "Build auth"` → session:start → context-gather → concept-enhanced → task-generate
---
## 🧩 How It Works: Design Philosophy
### The Core Problem

View File

@@ -13,7 +13,7 @@
---
**Claude Code Workflow (CCW)** 是一个新一代的多智能体自动化开发框架,通过智能工作流管理和自主执行来协调复杂的软件开发任务
**Claude Code Workflow (CCW)** 将 AI 开发从简单提示词链接转变为强大的上下文优先编排系统。通过结构化规划、确定性执行和智能多模型编排,解决执行不确定性和误差累积问题
> **🎉 最新版本: v4.4.0** - UI 设计工作流 V3 布局/样式分离架构。详见 [CHANGELOG.md](CHANGELOG.md)。
>
@@ -25,17 +25,41 @@
---
## ✨ 核心特性
> 📚 **第一次使用 CCW** 查看 [**快速上手指南**](GETTING_STARTED_CN.md) 获取新手友好的 5 分钟教程!
- **🎯 上下文优先架构**: 预定义上下文收集消除执行不确定性和误差累积。
- **🤖 多智能体系统**: 专用智能体(`@code-developer``@code-review-test-agent`)具备技术栈感知能力。
- **🔄 端到端工作流自动化**: 从头脑风暴到部署的多阶段编排。
- **📋 JSON 优先任务模型**: 结构化任务定义,包含 `pre_analysis` 步骤实现确定性执行。
- **🧪 TDD 工作流支持**: 完整的测试驱动开发,包含 Red-Green-Refactor 循环强制执行。
- **🧠 多模型编排**: 发挥 Gemini分析、Qwen架构和 Codex实现各自优势
- **✅ 执行前验证**: 通过战略Gemini和技术Codex双重分析验证计划。
- **🔧 统一 CLI**: 一个强大、统一的 `/cli:*` 命令集,用于与各种 AI 工具交互。
- **📦 智能上下文包**: `context-package.json` 将任务链接到相关代码库文件和外部示例
---
## ✨ 核心差异化特性
#### **1. 上下文优先架构** 🎯
通过 `context-package.json``flow_control.pre_analysis` 预定义上下文收集,消除执行不确定性。智能体在实现**前**加载正确上下文,解决"1到N"开发漂移问题
#### **2. JSON 优先状态管理** 📋
任务状态存于 `.task/IMPL-*.json`单一事实来源Markdown 为只读视图。数据与表现分离,实现程序化编排无状态漂移
#### **3. 自主多阶段编排** 🔄
`/workflow:plan` 等命令链接专用子命令(会话 → 上下文 → 分析 → 任务),零用户干预。`flow_control` 机制创建带步骤依赖的可执行"程序"。
#### **4. 多模型战略编排** 🧠
- **Gemini/Qwen**:分析、探索、文档(大上下文)
- **Codex**:实现、自主执行(`resume --last` 保持上下文)
- **结果**:比单模型方法任务处理提升 5-10 倍
#### **5. 分层内存系统** 🧬
4 层文档(根 → 领域 → 模块 → 子模块)在适当抽象级别提供上下文,防止信息过载并保持精确性。
#### **6. 专用基于角色的智能体** 🤖
专用智能体镜像真实软件团队:`@action-planning-agent``@code-developer``@test-fix-agent``@ui-design-agent`。包含 TDD 工作流Red-Green-Refactor、UI 设计(布局/样式分离)和自动 QA。
---
### **其他特性**
- **✅ 执行前验证**:质量关卡(`/workflow:concept-clarify``/workflow:action-plan-verify`
- **🔧 统一 CLI**`/cli:*` 命令支持多工具(`--tool gemini|qwen|codex`
- **📦 智能上下文包**:链接任务到相关代码和外部示例
- **🎨 UI 设计工作流**Claude 原生样式/布局提取,零依赖
- **🔄 渐进式增强**:可选工具扩展能力,纯 Claude 模式开箱即用
---
@@ -642,22 +666,26 @@ URL/图像 → capture/explore-layers
#### 预览系统
运行 `ui-generate` 后,您将获得交互式预览工具:
运行 `ui-generate` 后,您将获得交互式预览工具**无需启动服务器即可直接在浏览器中使用**
**快速预览**直接浏览器):
**直接浏览器预览**推荐 - 无需服务器):
```bash
# 导航到原型目录
cd .workflow/WFS-auth/.design/prototypes
# 在浏览器中打开 index.html双击或执行:
# 在浏览器中打开(双击文件或使用命令):
open index.html # macOS
start index.html # Windows
xdg-open index.html # Linux
# index.html 和 compare.html 都可以直接打开,无需服务器
open compare.html # 直接打开对比视图
```
**完整预览**(本地服务器 - 推荐:
**可选:本地服务器**(用于高级功能:
```bash
cd .workflow/WFS-auth/.design/prototypes
# 选择一个:
# 如果需要服务器端功能,可选择一个:
python -m http.server 8080 # Python
npx http-server -p 8080 # Node.js
php -S localhost:8080 # PHP
@@ -665,11 +693,12 @@ php -S localhost:8080 # PHP
```
**预览功能**:
- `index.html`: 包含所有原型的主导航
- `compare.html`: 带视口控制的并排对比(桌面/平板/移动
- `index.html`: 包含所有原型的主导航(离线可用)
- `compare.html`: 带视口控制的并排对比(离线可用
- 同步滚动用于布局对比
- 动态页面切换
- 实时响应式测试
- **无需服务器** - 所有功能都可通过直接在浏览器中打开文件来使用
#### 📦 输出结构
@@ -739,6 +768,80 @@ php -S localhost:8080 # PHP
---
## 🧠 内存管理:上下文质量基础
分层内存系统实现精确上下文收集和防止执行漂移。定期更新是高质量 AI 输出的关键。
#### **内存层次**
```
CLAUDE.md→ domain/CLAUDE.md领域→ module/CLAUDE.md模块→ submodule/CLAUDE.md子模块
```
#### **更新时机**
| 触发条件 | 命令 | 目的 |
|---------|---------|---------|
| 主要功能实现后 | `/update-memory-related` | 更新受影响模块/依赖 |
| 架构变更后 | `/update-memory-full` | 重新索引整个项目 |
| 复杂规划前 | `/update-memory-related` | 确保 context-package.json 最新模式 |
| 重构后 | `/update-memory-related` | 更新实现模式/API |
| 每周维护 | `/update-memory-full` | 保持文档同步 |
#### **质量影响**
**无更新:** ❌ 过时模式、弃用 API、错误上下文、架构漂移
**有更新:** ✅ 当前模式、最新 API、准确上下文、架构对齐
#### **最佳实践**
1. 重大变更后立即更新
2. 频繁使用 `/update-memory-related`(更快、有针对性)
3. 每周安排 `/update-memory-full`(捕获漂移)
4. 审查生成的 CLAUDE.md 文件
5. 集成到 CI/CD 流水线
> 💡 内存质量决定 context-package.json 质量和执行准确性。视为关键维护,非可选文档。
#### **工具集成**
```bash
/update-memory-full --tool gemini # 全面分析(默认)
/update-memory-full --tool qwen # 架构聚焦
/update-memory-full --tool codex # 实现细节
```
---
## 🏗️ 技术架构
完整的自文档化系统,用专业软件工程实践编排 AI 智能体。
### **项目组织**
```
.claude/
├── agents/ # 专用 AI 智能体action-planning、code-developer、test-fix
├── commands/ # 面向用户和内部命令workflow:*、cli:*、task:*
├── workflows/ # 战略框架(架构、策略、模式、模板)
├── scripts/ # 实用工具自动化(模块分析、文件监控)
└── prompt-templates/# 标准化 AI 提示词
```
**原则:** 关注点分离agents/commands/workflows、层次化命令、自文档化、可扩展模板
### **执行模型**
```
用户命令 → 编排器 → 阶段 1-4上下文 → 分析 → 规划 → 执行)
专用智能体pre_analysis → 实现 → 验证)
```
**示例:** `/workflow:plan "构建认证"` → session:start → context-gather → concept-enhanced → task-generate
---
## 🧩 工作原理:设计理念
### 核心问题

View File

@@ -127,17 +127,36 @@ function Test-Prerequisites {
Write-ColorOutput "Current version: $($PSVersionTable.PSVersion)" $ColorError
return $false
}
# Check for optional but recommended commands
$gitAvailable = $false
try {
$gitVersion = git --version 2>$null
if ($LASTEXITCODE -eq 0) {
Write-ColorOutput "✓ Git available" $ColorSuccess
$gitAvailable = $true
}
} catch {
# Git not found
}
if (-not $gitAvailable) {
Write-ColorOutput "WARNING: 'git' not found - version detection may be limited" $ColorWarning
Write-ColorOutput "Hint: Install Git for Windows for better version tracking" $ColorInfo
Write-ColorOutput " Download from: https://git-scm.com/download/win" $ColorInfo
Write-Host ""
}
# Test internet connectivity
try {
$null = Invoke-WebRequest -Uri "https://github.com" -Method Head -TimeoutSec 10 -UseBasicParsing
Write-ColorOutput "Network connection OK" $ColorSuccess
Write-ColorOutput "Network connection OK" $ColorSuccess
} catch {
Write-ColorOutput "ERROR: Cannot connect to GitHub" $ColorError
Write-ColorOutput "Please check your network connection: $($_.Exception.Message)" $ColorError
return $false
}
return $true
}
@@ -566,20 +585,52 @@ function Main {
# Get commit SHA from the downloaded repository first
$commitSha = ""
Write-ColorOutput "Detecting version information..." $ColorInfo
# Try to get from git repository if git is available
$gitAvailable = $false
try {
Push-Location $repoDir
$commitSha = (git rev-parse --short HEAD 2>$null)
if (-not $commitSha) {
# Fallback: try to get from GitHub API
$null = git --version 2>$null
if ($LASTEXITCODE -eq 0) {
$gitAvailable = $true
}
} catch {
# Git not available
}
if ($gitAvailable) {
try {
Push-Location $repoDir
$commitSha = (git rev-parse --short HEAD 2>$null)
Pop-Location
if ($commitSha) {
Write-ColorOutput "✓ Version detected from git: $commitSha" $ColorSuccess
}
} catch {
Pop-Location
# Continue to fallback
}
}
# Fallback: try to get from GitHub API
if (-not $commitSha) {
try {
Write-ColorOutput "Fetching version from GitHub API..." $ColorInfo
$commitUrl = "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/$Branch"
$commitResponse = Invoke-RestMethod -Uri $commitUrl -UseBasicParsing -TimeoutSec 5 -ErrorAction SilentlyContinue
$commitResponse = Invoke-RestMethod -Uri $commitUrl -UseBasicParsing -TimeoutSec 10 -ErrorAction Stop
if ($commitResponse.sha) {
$commitSha = $commitResponse.sha.Substring(0, 7)
Write-ColorOutput "✓ Version detected from API: $commitSha" $ColorSuccess
}
} catch {
Write-ColorOutput "WARNING: Could not detect version, using 'unknown'" $ColorWarning
$commitSha = "unknown"
}
Pop-Location
} catch {
Pop-Location
}
if (-not $commitSha) {
$commitSha = "unknown"
}

View File

@@ -58,6 +58,17 @@ function test_prerequisites() {
fi
done
# Check for optional but recommended commands
if ! command -v git &> /dev/null; then
write_color "WARNING: 'git' not found - version detection may be limited" "$COLOR_WARNING"
write_color "Hint: Install git for better version tracking" "$COLOR_INFO"
write_color " On Ubuntu/Debian: sudo apt-get install git" "$COLOR_INFO"
write_color " On WSL: sudo apt-get update && sudo apt-get install git" "$COLOR_INFO"
echo ""
else
write_color "✓ Git available" "$COLOR_SUCCESS"
fi
# Test internet connectivity
if curl -sSf --connect-timeout 10 "https://github.com" &> /dev/null; then
write_color "✓ Network connection OK" "$COLOR_SUCCESS"
@@ -650,14 +661,39 @@ function main() {
# Get commit SHA from the downloaded repository first
local commit_sha=""
write_color "Detecting version information..." "$COLOR_INFO"
if command -v git &> /dev/null && [ -d "$repo_dir/.git" ]; then
commit_sha=$(cd "$repo_dir" && git rev-parse --short HEAD 2>/dev/null || echo "unknown")
else
# Try to get from git repository
commit_sha=$(cd "$repo_dir" && git rev-parse --short HEAD 2>/dev/null || echo "")
if [ -n "$commit_sha" ]; then
write_color "✓ Version detected from git: $commit_sha" "$COLOR_SUCCESS"
fi
fi
if [ -z "$commit_sha" ]; then
# Fallback: try to get from GitHub API
write_color "Fetching version from GitHub API..." "$COLOR_INFO"
local temp_branch="main"
[ "$VERSION_TYPE" = "branch" ] && temp_branch="$BRANCH"
commit_sha=$(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/$temp_branch" 2>/dev/null | grep -o '"sha": *"[^"]*"' | head -1 | cut -d'"' -f4 | cut -c1-7)
[ -z "$commit_sha" ] && commit_sha="unknown"
local commit_data
commit_data=$(curl -fsSL --connect-timeout 10 "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/$temp_branch" 2>/dev/null)
if [ -n "$commit_data" ]; then
if command -v jq &> /dev/null; then
commit_sha=$(echo "$commit_data" | jq -r '.sha' 2>/dev/null | cut -c1-7)
else
commit_sha=$(echo "$commit_data" | grep -o '"sha": *"[^"]*"' | head -1 | cut -d'"' -f4 | cut -c1-7)
fi
fi
if [ -n "$commit_sha" ] && [ "$commit_sha" != "null" ]; then
write_color "✓ Version detected from API: $commit_sha" "$COLOR_SUCCESS"
else
write_color "WARNING: Could not detect version, using 'unknown'" "$COLOR_WARNING"
commit_sha="unknown"
fi
fi
# Determine version and branch information to pass