Compare commits

...

29 Commits

Author SHA1 Message Date
catlog22
72f27fb2f8 feat: add implementation_approach command field execution mode
- Add CLI mode example to implementation_approach showing optional command field
- Document two execution modes:
  1. Default Mode (agent execution): No command field, agent interprets steps
  2. CLI Mode (command execution): With command field, uses CLI tools (codex/gemini/qwen)

- Add Implementation Approach Execution Modes section with:
  - Mode descriptions and use cases
  - Required fields for each mode
  - Command patterns for CLI mode (codex, gemini)
  - Mode selection strategy

- Simplify implementation_approach examples to use generic placeholders
- Emphasize command field is optional, agent decides based on task complexity

Benefits:
- Clear documentation of both execution patterns
- Agents understand when to use CLI tools vs direct execution
- Pattern examples show structure without over-specifying content
- Supports both autonomous agent work and CLI tool delegation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 19:42:08 +08:00
catlog22
be129f5821 refactor: simplify pre_analysis JSON examples to show patterns only
- Simplify pre_analysis JSON examples to focus on structure patterns
- Replace specific commands with generic placeholders ([pattern], [lang], [N])
- Reduce OPTIONAL section to 5 key pattern examples (vs 12 detailed examples)
- Condense dynamic adaptation guide to essential principles
- Remove overly detailed command examples that could mislead

Benefits:
- Clearer focus on structure patterns rather than specific implementations
- Reduced cognitive load - easier to understand the schema
- Less risk of agents copying specific examples blindly
- More flexibility for agents to create task-appropriate steps

Pattern examples now show:
- Project structure analysis pattern
- Local search pattern (bash/rg/find)
- Gemini CLI pattern
- Qwen CLI pattern
- MCP tools pattern

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 19:39:34 +08:00
catlog22
b1bb74af0d feat: enhance pre_analysis with comprehensive CLI examples and dynamic guidance
- Add extensive pre_analysis examples organized by category:
  - Required context loading steps
  - Project structure analysis
  - Local codebase exploration (bash/rg/find)
  - Gemini CLI analysis (architecture, execution flow)
  - Qwen CLI analysis (code quality, fallback)
  - MCP tools integration
  - Tech-specific searches (React, database, etc.)

- Add "举一反三" (draw inferences) principle with detailed guidance:
  - Dynamic step selection based on task type
  - Tool selection strategy (Gemini/Qwen/Bash/MCP)
  - Task-specific adaptation examples (security, performance, API)
  - Command composition patterns

- Emphasize that examples are reference templates, not fixed requirements
- Agent must dynamically select and adapt steps based on actual task needs
- Provide clear guidelines for when to use each type of analysis tool

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 19:36:47 +08:00
catlog22
a7a654805c refactor: remove context_signature field from task JSON schema
- Remove context_signature field from meta object
- Simplify meta object to focus on essential fields
- Keep execution_group for parallelization support

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 19:33:29 +08:00
catlog22
c0c894ced1 feat: enhance action-planning-agent task JSON schema definition
- Add missing top-level field: context_package_path
- Add missing context fields: parent, inherited, shared_context
- Add meta fields for parallelization: execution_group, context_signature
- Fix status enumeration: pending|active|completed|blocked|container
- Fix meta.type enumeration: distinguish test-gen from test-fix
- Expand meta.agent options: add action-planning-agent, test-fix-agent, universal-executor
- Enhance artifacts structure: add source, usage, contains fields
- Restructure schema presentation into 4 clear sections
- Add practical pre_analysis examples with CLI bash search commands
- Emphasize quantification requirements for requirements/acceptance fields

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 19:24:46 +08:00
catlog22
7517f4f8ec docs: clarify manifest.json location in plan.md Phase 1
Add explicit note in Phase 1 validation to prevent main Claude from
incorrectly looking for manifest.json in .workflow/active/WFS-xxx/.

manifest.json only exists in .workflow/archives/ (for archived sessions).
Active sessions only have workflow-session.json (metadata).

This prevents confusion during plan execution before agent invocation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 13:44:02 +08:00
catlog22
0b45ff7345 docs: clarify optional historical archive analysis in context-search-agent
Add note in Phase 2 to clarify that historical archive analysis
(querying .workflow/archives/manifest.json) is optional, addressing
discrepancy between context-gather.md (4 tracks) and agent implementation
(3 tracks).

This prevents confusion about manifest.json location and makes the
optional nature of archive querying explicit.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 13:39:36 +08:00
catlog22
0416b23186 feat: add context-package.json sync to synthesis and design-sync workflows
Add Phase 6 to both synthesis.md and design-sync.md to automatically
update context-package.json when artifacts are modified, preventing
stale cache issues when transitioning to /workflow:plan.

Changes:
- synthesis.md: Add Phase 6 to sync updated role analyses to context-package
- design-sync.md: Add Phase 5 to sync design system refs + role analyses

This ensures:
- synthesis → plan: context-package reflects synthesis enhancements
- ui-design → plan: context-package includes design system references
- No manual context-gather re-run needed after artifact updates

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 12:53:08 +08:00
catlog22
948cf3fcd7 fix: prevent unwanted auto-commits during CLI tool execution 2025-11-23 12:29:44 +08:00
catlog22
4272ca9ebd Add CLI commands for full and related documentation generation
- Implemented `/memory:docs-full-cli` for comprehensive project documentation generation using CLI execution with batched agents and tool fallback.
- Introduced `/memory:docs-related-cli` to generate/update documentation for git-changed modules, using direct parallel execution for small module sets and agent batch processing for larger ones.
- Defined execution flows, strategies, and error handling for both commands, ensuring robust documentation processes.
2025-11-23 11:27:23 +08:00
catlog22
73fed4893b docs: Generate ARCHITECTURE.md and EXAMPLES.md
Generated system design and usage examples documentation based on project
context and provided templates.

ARCHITECTURE.md provides an overview of the system's architecture,
including core principles, structure, module map, interactions, design patterns,
API overview, data flow, security, and scalability.

EXAMPLES.md offers end-to-end usage examples, covering quick start, core use cases,
advanced scenarios, testing examples, and best practices.
2025-11-23 11:23:21 +08:00
catlog22
f09c6e2a7a docs: Generate comprehensive project overview README.md 2025-11-23 11:20:35 +08:00
catlog22
65a204a563 fix: enforce synchronous bash execution in memory update workflows
Explicitly set run_in_background: false for all Bash commands in update-full and update-related workflows to prevent unwanted background execution. This ensures synchronous execution without requiring BashOutput polling.

Changes:
- Phase 1-4: Add run_in_background: false to all Bash calls
- Simplify code examples and remove redundant explanations
- Update both direct execution and agent batch execution patterns

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-23 10:23:30 +08:00
catlog22
ffbc440a7e feat: optimize file tracking in installation process with bulk operations 2025-11-23 09:50:51 +08:00
catlog22
3c28c61bea fix: update minimal documentation output guideline for clarity 2025-11-22 23:21:26 +08:00
catlog22
b0b99a4217 Enhance installation and uninstallation processes
- Added optional quiet mode for backup notifications to reduce output clutter.
- Improved file merging with progress reporting and summary of backed up files.
- Implemented a cleanup function to move old installations to a backup directory before new installations.
- Enhanced manifest handling to distinguish between Global and Path installations.
- Updated uninstallation process to preserve global files if a Global installation exists.
- Improved error handling and user feedback during file operations.
2025-11-22 23:20:39 +08:00
catlog22
4f533f6fd5 feat(lite-execute): intelligent task grouping with parallel/sequential execution
Major improvements:
- Implement dependency-aware task grouping algorithm
  * Same file modifications → sequential batch
  * Different files + no dependencies → parallel batch
  * Keyword-based dependency inference (use/integrate/call/import)
  * Batch limits: Agent 7 tasks, CLI 4 tasks

- Add parallel execution strategy
  * All parallel batches launch concurrently (single Claude message)
  * Sequential batches execute in order
  * No concurrency limit for CLI tools
  * Execution flow: parallel-first → sequential-after

- Enhance code review with task.json acceptance criteria
  * Unified review template for all tools (Gemini/Qwen/Codex/Agent)
  * task.json in CONTEXT field for CLI tools
  * Explicit acceptance criteria verification

- Clarify execution requirements
  * Remove ambiguous strategy instructions
  * Add explicit "MUST complete ALL tasks" requirement
  * Prevent CLI tools from partial execution misunderstanding

- Simplify documentation (~260 lines reduced)
  * Remove redundant examples and explanations
  * Keep core algorithm logic with inline comments
  * Eliminate duplicate descriptions across sections

Breaking changes: None (backward compatible)
2025-11-22 21:05:43 +08:00
catlog22
530c348e95 feat: enhance task execution with intelligent grouping and dependency analysis 2025-11-22 20:57:16 +08:00
catlog22
a98b26b111 feat: add session folder structure for lite-plan artifacts
- Create dedicated session folder (.workflow/.lite-plan/{task-slug}-{timestamp}/)
  for each lite-plan execution to organize all planning artifacts
- Always export task.json (removed optional export question from Phase 4)
- Save exploration.json, plan.json, and task.json to session folder
- Add session artifact paths to executionContext for lite-execute delegation
- Update lite-execute to use artifact file paths for CLI/agent context
- Enable CLI tools (Gemini/Qwen/Codex) and agents to access detailed
  planning context via file references

Benefits:
- Clean separation between different task executions
- All artifacts automatically saved for reusability
- Enhanced context available for execution phase
- Natural audit trail of planning sessions
2025-11-22 20:32:01 +08:00
catlog22
9f7e33cbde Merge branch 'main' of https://github.com/catlog22/Claude-Code-Workflow 2025-11-22 20:22:21 +08:00
catlog22
a25464ce28 Merge pull request #29 from catlog22/claude/analyze-workflow-logic-015GvXNP62281T3ogUSaJdP4
Analyze workflow execution logic and document findings
2025-11-20 20:16:28 +08:00
catlog22
0a3f2a5b03 Merge pull request #28 from catlog22/claude/check-active-workflow-files-01S7quRK9xHWStgpQ9WeTnF2
Check for .active identifier in workflow files
2025-11-20 20:15:39 +08:00
Claude
1929b7f72d docs: remove obsolete .active marker file references
Remove outdated references to .active-session marker files that are no longer used in the workflow implementation. The system now uses directory-based session management, where active sessions are identified by their location in the .workflow/active/ directory.

Changes:
- WORKFLOW_DIAGRAMS.md: Replace .active-session marker with actual directory structure
- COMMAND_SPEC.md: Update session:complete description to reflect directory-based archival

The .archiving marker is still valid and used for transactional session completion.
2025-11-20 12:13:45 +00:00
Claude
b8889d99c9 refactor: simplify session selection logic expression
- Condense Case A/B/C headers for clarity
- Merge bash if-else into single line using && ||
- Combine steps 1-4 in Case C into compact flow
- Remove redundant explanations while keeping key info
- Reduce from 64 lines to 39 lines (39% reduction)
2025-11-20 12:12:56 +00:00
Claude
a79a3221ce fix: enhance workflow execute session selection with user interaction
- Add detailed session discovery logic with 3 cases (none, single, multiple)
- Implement AskUserQuestion for multiple active sessions
- Display rich session metadata (project name, task progress, completion %)
- Support flexible user input (session number, full ID, or partial ID)
- Maintain backward compatibility for single-session scenarios
- Improve user experience with clear confirmation messages

This minimal change addresses session conflict issues without complex
locking mechanisms, focusing on explicit user choice when ambiguity exists.
2025-11-20 12:08:35 +00:00
catlog22
67c18d1b03 Merge pull request #27 from catlog22/claude/cleanup-root-docs-01SAjhmQa3Wc4EWxZoCnFb77
Remove unnecessary documentation from root directory
2025-11-20 19:55:15 +08:00
catlog22
8d828e8762 Merge branch 'main' of https://github.com/catlog22/Claude-Code-Workflow 2025-11-20 19:44:31 +08:00
catlog22
07caf20e0d Merge branch 'main' of https://github.com/catlog22/Claude-Code-Workflow 2025-11-20 18:51:24 +08:00
catlog22
d7bee9bdf2 docs: clarify path parameter description in /memory:docs command
Improve the path parameter documentation to eliminate ambiguity:
- Change "Target directory" to "Source directory to analyze"
- Explicitly state documentation is generated in .workflow/docs/{project_name}/ at workspace root
- Emphasize docs are NOT created within the source path itself
- Add concrete example showing path mirroring behavior

This resolves potential confusion about where documentation files are created.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 15:48:06 +08:00
23 changed files with 3265 additions and 2358 deletions

View File

@@ -137,19 +137,44 @@ Break work into 3-5 logical implementation stages with:
- Dependencies on previous stages
- Estimated complexity and time requirements
### 2. Task JSON Generation (5-Field Schema + Artifacts)
### 2. Task JSON Generation (6-Field Schema + Artifacts)
Generate individual `.task/IMPL-*.json` files with:
**Required Fields**:
#### Top-Level Fields
```json
{
"id": "IMPL-N[.M]",
"title": "Descriptive task name",
"status": "pending",
"status": "pending|active|completed|blocked|container",
"context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json"
}
```
**Field Descriptions**:
- `id`: Task identifier (format: `IMPL-N` or `IMPL-N.M` for subtasks, max 2 levels)
- `title`: Descriptive task name summarizing the work
- `status`: Task state - `pending` (not started), `active` (in progress), `completed` (done), `blocked` (waiting on dependencies), `container` (has subtasks, cannot be executed directly)
- `context_package_path`: Path to smart context package containing project structure, dependencies, and brainstorming artifacts catalog
#### Meta Object
```json
{
"meta": {
"type": "feature|bugfix|refactor|test|docs",
"agent": "@code-developer"
},
"type": "feature|bugfix|refactor|test-gen|test-fix|docs",
"agent": "@code-developer|@action-planning-agent|@test-fix-agent|@universal-executor",
"execution_group": "parallel-abc123|null"
}
}
```
**Field Descriptions**:
- `type`: Task category - `feature` (new functionality), `bugfix` (fix defects), `refactor` (restructure code), `test-gen` (generate tests), `test-fix` (fix failing tests), `docs` (documentation)
- `agent`: Assigned agent for execution
- `execution_group`: Parallelization group ID (tasks with same ID can run concurrently) or `null` for sequential tasks
#### Context Object
```json
{
"context": {
"requirements": [
"Implement 3 features: [authentication, authorization, session management]",
@@ -162,43 +187,131 @@ Generate individual `.task/IMPL-*.json` files with:
"5 files created: verify by ls src/auth/*.ts | wc -l = 5",
"Test coverage >=80%: verify by npm test -- --coverage | grep auth"
],
"parent": "IMPL-N",
"depends_on": ["IMPL-N"],
"inherited": {
"from": "IMPL-N",
"context": ["Authentication system design completed", "JWT strategy defined"]
},
"shared_context": {
"tech_stack": ["Node.js", "TypeScript", "Express"],
"auth_strategy": "JWT with refresh tokens",
"conventions": ["Follow existing auth patterns in src/auth/legacy/"]
},
"artifacts": [
{
"type": "synthesis_specification",
"type": "synthesis_specification|topic_framework|individual_role_analysis",
"source": "brainstorm_clarification|brainstorm_framework|brainstorm_roles",
"path": "{from artifacts_inventory}",
"priority": "highest"
"priority": "highest|high|medium|low",
"usage": "Architecture decisions and API specifications",
"contains": "role_specific_requirements_and_design"
}
]
},
}
}
```
**Field Descriptions**:
- `requirements`: **QUANTIFIED** implementation requirements (MUST include explicit counts and enumerated lists, e.g., "5 files: [list]")
- `focus_paths`: Target directories/files (concrete paths without wildcards)
- `acceptance`: **MEASURABLE** acceptance criteria (MUST include verification commands, e.g., "verify by ls ... | wc -l = N")
- `parent`: Parent task ID for subtasks (establishes container/subtask hierarchy)
- `depends_on`: Prerequisite task IDs that must complete before this task starts
- `inherited`: Context, patterns, and dependencies passed from parent task
- `shared_context`: Tech stack, conventions, and architectural strategies for the task
- `artifacts`: Referenced brainstorming outputs with detailed metadata
#### Flow Control Object
**IMPORTANT**: The `pre_analysis` examples below are **reference templates only**. Agent MUST dynamically select, adapt, and expand steps based on actual task requirements. Apply the principle of **"举一反三"** (draw inferences from examples) - use these patterns as inspiration to create task-specific analysis steps.
**Dynamic Step Selection Guidelines**:
- **Context Loading**: Always include context package and role analysis loading
- **Architecture Analysis**: Add module structure analysis for complex projects
- **Pattern Discovery**: Use CLI tools (gemini/qwen/bash) based on task complexity and available tools
- **Tech-Specific Analysis**: Add language/framework-specific searches for specialized tasks
- **MCP Integration**: Utilize MCP tools when available for enhanced context
```json
{
"flow_control": {
"pre_analysis": [
// === REQUIRED: Context Package Loading (Always Include) ===
{
"step": "load_synthesis_specification",
"commands": ["bash(ls {path} 2>/dev/null)", "Read({path})"],
"output_to": "synthesis_specification",
"on_error": "skip_optional"
"step": "load_context_package",
"action": "Load context package for artifact paths and smart context",
"commands": ["Read({{context_package_path}})"],
"output_to": "context_package",
"on_error": "fail"
},
{
"step": "mcp_codebase_exploration",
"command": "mcp__code-index__find_files() && mcp__code-index__search_code_advanced()",
"output_to": "codebase_structure"
"step": "load_role_analysis_artifacts",
"action": "Load role analyses from context-package.json",
"commands": [
"Read({{context_package_path}})",
"Extract(brainstorm_artifacts.role_analyses[].files[].path)",
"Read(each extracted path)"
],
"output_to": "role_analysis_artifacts",
"on_error": "skip_optional"
},
// === OPTIONAL: Select and adapt based on task needs ===
// Pattern: Project structure analysis
{
"step": "analyze_project_architecture",
"commands": ["bash(~/.claude/scripts/get_modules_by_depth.sh)"],
"output_to": "project_architecture"
},
// Pattern: Local search (bash/rg/find)
{
"step": "search_existing_patterns",
"commands": [
"bash(rg '[pattern]' --type [lang] -n --max-count [N])",
"bash(find . -name '[pattern]' -type f | head -[N])"
],
"output_to": "search_results"
},
// Pattern: Gemini CLI deep analysis
{
"step": "gemini_analyze_[aspect]",
"command": "bash(cd [path] && gemini -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY')",
"output_to": "analysis_result"
},
// Pattern: Qwen CLI analysis (fallback/alternative)
{
"step": "qwen_analyze_[aspect]",
"command": "bash(cd [path] && qwen -p '[similar to gemini pattern]')",
"output_to": "analysis_result"
},
// Pattern: MCP tools
{
"step": "mcp_search_[target]",
"command": "mcp__[tool]__[function](parameters)",
"output_to": "mcp_results"
}
],
"implementation_approach": [
// === DEFAULT MODE: Agent Execution (no command field) ===
{
"step": 1,
"title": "Load and analyze role analyses",
"description": "Load 3 role analysis files and extract quantified requirements",
"description": "Load role analysis files and extract quantified requirements",
"modification_points": [
"Load 3 role analysis files: [system-architect/analysis.md, product-manager/analysis.md, ui-designer/analysis.md]",
"Extract 15 requirements from role analyses",
"Parse 8 architecture decisions from system-architect analysis"
"Load N role analysis files: [list]",
"Extract M requirements from role analyses",
"Parse K architecture decisions"
],
"logic_flow": [
"Read 3 role analyses from artifacts inventory",
"Parse architecture decisions (8 total)",
"Extract implementation requirements (15 total)",
"Read role analyses from artifacts inventory",
"Parse architecture decisions",
"Extract implementation requirements",
"Build consolidated requirements list"
],
"depends_on": [],
@@ -207,21 +320,33 @@ Generate individual `.task/IMPL-*.json` files with:
{
"step": 2,
"title": "Implement following specification",
"description": "Implement 3 features across 5 files following consolidated role analyses",
"description": "Implement features following consolidated role analyses",
"modification_points": [
"Create 5 new files in src/auth/: [auth.service.ts (180 lines), auth.controller.ts (120 lines), auth.middleware.ts (60 lines), auth.types.ts (40 lines), auth.test.ts (200 lines)]",
"Modify 2 functions: [validateUser() in users.service.ts lines 45-60, hashPassword() in utils.ts lines 120-135]",
"Implement 3 core features: [JWT authentication, role-based authorization, session management]"
"Create N new files: [list with line counts]",
"Modify M functions: [func() in file lines X-Y]",
"Implement K core features: [list]"
],
"logic_flow": [
"Apply 15 requirements from [synthesis_requirements]",
"Implement 3 features across 5 new files (600 total lines)",
"Modify 2 existing functions (30 lines total)",
"Write 25 test cases covering all features",
"Validate against 3 acceptance criteria"
"Apply requirements from [synthesis_requirements]",
"Implement features across new files",
"Modify existing functions",
"Write test cases covering all features",
"Validate against acceptance criteria"
],
"depends_on": [1],
"output": "implementation"
},
// === CLI MODE: Command Execution (optional command field) ===
{
"step": 3,
"title": "Execute implementation using CLI tool",
"description": "Use Codex/Gemini for complex autonomous execution",
"command": "bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)",
"modification_points": ["[Same as default mode]"],
"logic_flow": ["[Same as default mode]"],
"depends_on": [1, 2],
"output": "cli_implementation"
}
],
"target_files": [
@@ -237,6 +362,72 @@ Generate individual `.task/IMPL-*.json` files with:
}
```
**Field Descriptions**:
- `pre_analysis`: Context loading and preparation steps (executed sequentially before implementation)
- `implementation_approach`: Implementation steps with dependency management (array of step objects)
- `target_files`: Specific files/functions/lines to modify (format: `file:function:lines` for existing, `file` for new)
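For illustration, `target_files` entries might look like this (paths are hypothetical):
```javascript
const target_files = [
  "src/auth/auth.service.ts",                       // new file: plain path
  "src/users/users.service.ts:validateUser:45-60",  // existing: file:function:lines
];
```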
**Implementation Approach Execution Modes**:
The `implementation_approach` supports **two execution modes** based on the presence of the `command` field:
1. **Default Mode (Agent Execution)** - `command` field **omitted**:
- Agent interprets `modification_points` and `logic_flow` autonomously
- Direct agent execution with full context awareness
- No external tool overhead
- **Use for**: Standard implementation tasks where agent capability is sufficient
- **Required fields**: `step`, `title`, `description`, `modification_points`, `logic_flow`, `depends_on`, `output`
2. **CLI Mode (Command Execution)** - `command` field **included**:
- Specified command executes the step directly
- Leverages specialized CLI tools (codex/gemini/qwen) for complex reasoning
- **Use for**: Large-scale features, complex refactoring, or when user explicitly requests CLI tool usage
- **Required fields**: Same as default mode **PLUS** `command`
- **Command patterns**:
- `bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)`
- `bash(codex --full-auto exec '[task]' resume --last --skip-git-repo-check -s danger-full-access)` (multi-step)
- `bash(cd [path] && gemini -p '[prompt]' --approval-mode yolo)` (write mode)
**Mode Selection Strategy**:
- **Default to agent execution** for most tasks
- **Use CLI mode** when:
- User explicitly requests CLI tool (codex/gemini/qwen)
- Task requires multi-step autonomous reasoning beyond agent capability
- Complex refactoring needs specialized tool analysis
- Building on previous CLI execution context (use `resume --last`)
**Key Principle**: The `command` field is **optional**. Agent must decide based on task complexity and user preference.
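As a sketch of this branching (the `runCliCommand` and `runAgentStep` helper names are assumptions, not part of the schema):
```javascript
// Minimal dispatch on the optional `command` field of an implementation step.
async function executeStep(step) {
  if (step.command) {
    // CLI mode: the step carries an explicit command (codex/gemini/qwen)
    return runCliCommand(step.command);
  }
  // Default mode: the agent interprets modification_points and logic_flow
  return runAgentStep(step);
}
```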
**Pre-Analysis Step Selection Guide (举一反三, "Draw Inferences" Principle)**:
The examples above demonstrate **patterns**, not fixed requirements. Agent MUST:
1. **Always Include** (Required):
- `load_context_package` - Essential for all tasks
- `load_role_analysis_artifacts` - Critical for accessing brainstorming insights
2. **Selectively Include Based on Task Type**:
- **Architecture tasks**: Project structure + Gemini architecture analysis
- **Refactoring tasks**: Gemini execution flow tracing + code quality analysis
- **Frontend tasks**: React/Vue component searches + UI pattern analysis
- **Backend tasks**: Database schema + API endpoint searches
- **Security tasks**: Vulnerability scans + security pattern analysis
- **Performance tasks**: Bottleneck identification + profiling data
3. **Tool Selection Strategy**:
- **Gemini CLI**: Deep analysis (architecture, execution flow, patterns)
- **Qwen CLI**: Fallback or code quality analysis
- **Bash/rg/find**: Quick pattern matching and file discovery
- **MCP tools**: Semantic search and external research
4. **Command Composition Patterns**:
- **Single command**: `bash([simple_search])`
- **Multiple commands**: `["bash([cmd1])", "bash([cmd2])"]`
- **CLI analysis**: `bash(cd [path] && gemini -p '[prompt]')`
- **MCP integration**: `mcp__[tool]__[function]([params])`
**Key Principle**: Examples show **structure patterns**, not specific implementations. Agent must create task-appropriate steps dynamically.
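As one example of composing a task-appropriate step from these patterns (a sketch; the values are placeholders):
```javascript
// Build a local-search pre_analysis step from the composition patterns above.
function makeSearchStep(pattern, lang, maxCount) {
  return {
    step: `search_${pattern}_usages`,
    commands: [
      `bash(rg '${pattern}' --type ${lang} -n --max-count ${maxCount})`,
      `bash(find . -name '*${pattern}*' -type f | head -${maxCount})`,
    ],
    output_to: "search_results",
    on_error: "skip_optional",
  };
}
```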
**Artifact Mapping**:
- Use `artifacts_inventory` from context package
- Highest priority: synthesis_specification

View File

@@ -102,6 +102,8 @@ if (!memory.has("README.md")) Read(README.md)
Execute all 3 tracks in parallel for comprehensive coverage.
**Note**: Historical archive analysis (querying `.workflow/archives/manifest.json`) is optional and should be performed if the manifest exists. Inject findings into `conflict_detection.historical_conflicts[]`.
#### Track 1: Reference Documentation
Extract from Phase 0 loaded docs:

View File

@@ -0,0 +1,472 @@
---
name: docs-full-cli
description: Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback; fewer than 20 modules use direct parallel execution
argument-hint: "[path] [--tool <gemini|qwen|codex>]"
---
# Full Documentation Generation - CLI Mode (/memory:docs-full-cli)
## Overview
Orchestrates project-wide documentation generation using CLI-based execution with batched agents and automatic tool fallback.
**Parameters**:
- `path`: Target directory (default: current directory)
- `--tool <gemini|qwen|codex>`: Primary tool (default: gemini)
**Execution Flow**: Discovery → Plan Presentation → Execution → Verification
## 3-Layer Architecture & Auto-Strategy Selection
### Layer Definition & Strategy Assignment
| Layer | Depth | Strategy | Purpose | Context Pattern |
|-------|-------|----------|---------|----------------|
| **Layer 3** (Deepest) | ≥3 | `full` | Generate docs for all subdirectories with code | `@**/*` (all files) |
| **Layer 2** (Middle) | 1-2 | `single` | Current dir + child docs | `@*/API.md @*/README.md @*.{ts,tsx,js,...}` |
| **Layer 1** (Top) | 0 | `single` | Current dir + child docs | `@*/API.md @*/README.md @*.{ts,tsx,js,...}` |
**Generation Direction**: Layer 3 → Layer 2 → Layer 1 (bottom-up dependency flow)
**Strategy Auto-Selection**: Strategies are automatically determined by directory depth - no user configuration needed.
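A minimal sketch of this depth-based selection, assuming the depth values shown in the table:
```javascript
// Map directory depth to layer and strategy (mirrors the table above).
function layerForDepth(depth) {
  if (depth >= 3) return 3;  // deepest: full strategy
  if (depth >= 1) return 2;  // middle
  return 1;                  // top (project root)
}

function strategyForDepth(depth) {
  return depth >= 3 ? "full" : "single";
}
```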
### Strategy Details
#### Full Strategy (Layer 3 Only)
- **Use Case**: Deepest directories with comprehensive file coverage
- **Behavior**: Generates API.md + README.md for current directory AND subdirectories containing code
- **Context**: All files in current directory tree (`@**/*`)
- **Output**: `.workflow/docs/{project_name}/{path}/API.md` + `README.md`
- **Benefits**: Creates foundation documentation for upper layers to reference
#### Single Strategy (Layers 1-2)
- **Use Case**: Upper layers that aggregate from existing documentation
- **Behavior**: Generates API.md + README.md only in current directory
- **Context**: Direct children docs + current directory code files
- **Output**: `.workflow/docs/{project_name}/{path}/API.md` + `README.md`
- **Benefits**: Minimal context consumption, clear layer separation
### Example Flow
```
src/auth/handlers/ (depth 3) → FULL STRATEGY
CONTEXT: @**/* (all files in handlers/ and subdirs)
GENERATES: .workflow/docs/project/src/auth/handlers/{API.md,README.md} + subdirs
src/auth/ (depth 2) → SINGLE STRATEGY
CONTEXT: @*/API.md @*/README.md @*.ts (handlers docs + current code)
GENERATES: .workflow/docs/project/src/auth/{API.md,README.md} only
src/ (depth 1) → SINGLE STRATEGY
CONTEXT: @*/API.md @*/README.md (auth docs, utils docs)
GENERATES: .workflow/docs/project/src/{API.md,README.md} only
./ (depth 0) → SINGLE STRATEGY
CONTEXT: @*/API.md @*/README.md (src docs, tests docs)
GENERATES: .workflow/docs/project/{API.md,README.md} only
```
## Core Execution Rules
1. **Analyze First**: Module discovery + folder classification before generation
2. **Wait for Approval**: Present plan, no execution without user confirmation
3. **Execution Strategy**:
- **<20 modules**: Direct parallel execution (max 4 concurrent per layer)
- **≥20 modules**: Agent batch processing (4 modules/agent, 73% overhead reduction)
4. **Tool Fallback**: Auto-retry with fallback tools on failure
5. **Layer Sequential**: Process layers 3→2→1 (bottom-up), parallel batches within layer
6. **Safety Check**: Verify only docs files modified in .workflow/docs/
7. **Layer-based Grouping**: Group modules by LAYER (not depth) for execution
## Tool Fallback Hierarchy
```javascript
--tool gemini [gemini, qwen, codex] // default
--tool qwen [qwen, gemini, codex]
--tool codex [codex, gemini, qwen]
```
**Trigger**: Non-zero exit code from generation script
| Tool | Best For | Fallback To |
|--------|--------------------------------|----------------|
| gemini | Documentation, patterns | qwen → codex |
| qwen | Architecture, system design | gemini → codex |
| codex | Implementation, code quality | gemini → qwen |
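The `construct_tool_order` helper used in the execution pseudocode below could be as simple as this sketch:
```javascript
// Build the fallback order from the primary tool (assumed helper shape).
function construct_tool_order(primary = "gemini") {
  const all = ["gemini", "qwen", "codex"];
  return [primary, ...all.filter(tool => tool !== primary)];
}

construct_tool_order("qwen");  // → ["qwen", "gemini", "codex"]
```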
## Execution Phases
### Phase 1: Discovery & Analysis
```javascript
// Get project metadata
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});
// Get module structure with classification
Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});
// OR with path parameter
Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|type:<code|navigation>|...` to extract module paths, types, and count.
**Smart filter**: Auto-detect and skip tests/build/config/vendor based on project tech stack.
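A hypothetical parser for one line of the classifier output (field names follow the `depth:N|path:<PATH>|type:<code|navigation>|...` format above):
```javascript
// Split "depth:2|path:./src/auth|type:code|..." into an object.
function parseModuleLine(line) {
  const fields = {};
  for (const pair of line.split("|")) {
    const i = pair.indexOf(":");
    fields[pair.slice(0, i)] = pair.slice(i + 1);
  }
  return { depth: Number(fields.depth), path: fields.path, type: fields.type };
}
```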
### Phase 2: Plan Presentation
**For <20 modules**:
```
Documentation Generation Plan:
Tool: gemini (fallback: qwen → codex)
Total: 7 modules
Execution: Direct parallel (< 20 modules threshold)
Project: myproject
Output: .workflow/docs/myproject/
Will generate docs for:
- ./core/interfaces (12 files, type: code) - depth 2 [Layer 2] - single strategy
- ./core (22 files, type: code) - depth 1 [Layer 2] - single strategy
- ./models (9 files, type: code) - depth 1 [Layer 2] - single strategy
- ./utils (12 files, type: navigation) - depth 1 [Layer 2] - single strategy
- . (5 files, type: code) - depth 0 [Layer 1] - single strategy
Documentation Strategy (Auto-Selected):
- Layer 2 (depth 1-2): API.md + README.md (current dir only, reference child docs)
- Layer 1 (depth 0): API.md + README.md (current dir only, reference child docs)
Output Structure:
- Code folders: API.md + README.md
- Navigation folders: README.md only
Auto-skipped: ./tests, __pycache__, node_modules (15 paths)
Execution order: Layer 2 → Layer 1
Estimated time: ~5-10 minutes
Confirm execution? (y/n)
```
**For ≥20 modules**:
```
Documentation Generation Plan:
Tool: gemini (fallback: qwen → codex)
Total: 31 modules
Execution: Agent batch processing (4 modules/agent)
Project: myproject
Output: .workflow/docs/myproject/
Will generate docs for:
- ./src/features/auth (12 files, type: code) - depth 3 [Layer 3] - full strategy
- ./.claude/commands/cli (6 files, type: code) - depth 3 [Layer 3] - full strategy
- ./src/utils (8 files, type: code) - depth 2 [Layer 2] - single strategy
...
Documentation Strategy (Auto-Selected):
- Layer 3 (depth ≥3): API.md + README.md (all subdirs with code)
- Layer 2 (depth 1-2): API.md + README.md (current dir only)
- Layer 1 (depth 0): API.md + README.md (current dir only)
Output Structure:
- Code folders: API.md + README.md
- Navigation folders: README.md only
Auto-skipped: ./tests, __pycache__, node_modules (15 paths)
Execution order: Layer 3 → Layer 2 → Layer 1
Agent allocation (by LAYER):
- Layer 3 (14 modules, depth ≥3): 4 agents [4, 4, 4, 2]
- Layer 2 (15 modules, depth 1-2): 4 agents [4, 4, 4, 3]
- Layer 1 (2 modules, depth 0): 1 agent [2]
Estimated time: ~15-25 minutes
Confirm execution? (y/n)
```
### Phase 3A: Direct Execution (<20 modules)
**Strategy**: Parallel execution within layer (max 4 concurrent), no agent overhead.
**CRITICAL**: All Bash commands use `run_in_background: false` for synchronous execution.
```javascript
let project_name = detect_project_name();
for (let layer of [3, 2, 1]) {
if (modules_by_layer[layer].length === 0) continue;
let batches = batch_modules(modules_by_layer[layer], 4);
for (let batch of batches) {
let parallel_tasks = batch.map(module => {
return async () => {
let strategy = module.depth >= 3 ? "full" : "single";
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "${strategy}" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
report(`${module.path} (Layer ${layer}) docs generated with ${tool}`);
return true;
}
}
report(`❌ FAILED: ${module.path} (Layer ${layer}) failed all tools`);
return false;
};
});
await Promise.all(parallel_tasks.map(task => task()));
}
}
```
### Phase 3B: Agent Batch Execution (≥20 modules)
**Strategy**: Batch modules into groups of 4, spawn memory-bridge agents per batch.
```javascript
// Group modules by LAYER and batch within each layer
let modules_by_layer = group_by_layer(module_list);
let tool_order = construct_tool_order(primary_tool);
let project_name = detect_project_name();
for (let layer of [3, 2, 1]) {
if (modules_by_layer[layer].length === 0) continue;
let batches = batch_modules(modules_by_layer[layer], 4);
let worker_tasks = [];
for (let batch of batches) {
worker_tasks.push(
Task(
subagent_type="memory-bridge",
description=`Generate docs for ${batch.length} modules in Layer ${layer}`,
prompt=generate_batch_worker_prompt(batch, tool_order, layer, project_name)
)
);
}
await parallel_execute(worker_tasks);
}
```
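The helpers above are not defined in this file; plausible shapes, assuming each module carries a `depth` field:
```javascript
// Assumed helper shapes (sketches, not the shipped implementations).
function group_by_layer(modules) {
  const byLayer = { 1: [], 2: [], 3: [] };
  for (const m of modules) {
    const layer = m.depth >= 3 ? 3 : m.depth >= 1 ? 2 : 1;
    byLayer[layer].push(m);
  }
  return byLayer;
}

function batch_modules(modules, batchSize = 4) {
  const batches = [];
  for (let i = 0; i < modules.length; i += batchSize) {
    batches.push(modules.slice(i, i + batchSize));
  }
  return batches;
}

const parallel_execute = (tasks) => Promise.all(tasks);
```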
**Batch Worker Prompt Template**:
```
PURPOSE: Generate documentation for assigned modules with tool fallback
TASK: Generate API.md + README.md for assigned modules using specified strategies.
PROJECT: {{project_name}}
OUTPUT: .workflow/docs/{{project_name}}/
MODULES:
{{module_path_1}} (strategy: {{strategy_1}}, type: {{folder_type_1}})
{{module_path_2}} (strategy: {{strategy_2}}, type: {{folder_type_2}})
...
TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}
EXECUTION SCRIPT: ~/.claude/scripts/generate_module_docs.sh
- Accepts strategy parameter: full | single
- Accepts folder type detection: code | navigation
- Tool execution via direct CLI commands (gemini/qwen/codex)
- Output path: .workflow/docs/{{project_name}}/{module_path}/
EXECUTION FLOW (for each module):
1. Tool fallback loop (exit on first success):
for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "{{strategy}}" "." "{{project_name}}" "${tool}"`,
run_in_background: false
})
exit_code=$?
if [ $exit_code -eq 0 ]; then
report "✅ {{module_path}} docs generated with $tool"
break
else
report "⚠️ {{module_path}} failed with $tool, trying next..."
continue
fi
done
2. Handle complete failure (all tools failed):
if [ $exit_code -ne 0 ]; then
report "❌ FAILED: {{module_path}} - all tools exhausted"
# Continue to next module (do not abort batch)
fi
FOLDER TYPE HANDLING:
- code: Generate API.md + README.md
- navigation: Generate README.md only
FAILURE HANDLING:
- Module-level isolation: One module's failure does not affect others
- Exit code detection: Non-zero exit code triggers next tool
- Exhaustion reporting: Log modules where all tools failed
- Batch continuation: Always process remaining modules
REPORTING FORMAT:
Per-module status:
✅ path/to/module docs generated with {tool}
⚠️ path/to/module failed with {tool}, trying next...
❌ FAILED: path/to/module - all tools exhausted
```
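One way `generate_batch_worker_prompt` might fill this template (abridged to the header fields; a sketch, not the actual generator):
```javascript
function generate_batch_worker_prompt(batch, tool_order, layer, project_name) {
  const modules = batch
    .map(m => `${m.path} (strategy: ${m.depth >= 3 ? "full" : "single"}, type: ${m.type})`)
    .join("\n");
  return [
    "PURPOSE: Generate documentation for assigned modules with tool fallback",
    `PROJECT: ${project_name}`,
    `OUTPUT: .workflow/docs/${project_name}/`,
    `MODULES:\n${modules}`,
    `TOOLS (try in order): ${tool_order.join(", ")}`,
    // ...remaining template sections as shown above
  ].join("\n");
}
```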
### Phase 4: Project-Level Documentation
**After all module documentation is generated, create project-level documentation files.**
```javascript
let project_name = detect_project_name();
let project_root = get_project_root();
// Step 1: Generate Project README
report("Generating project README.md...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-readme" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
report(`✅ Project README generated with ${tool}`);
break;
}
}
// Step 2: Generate Architecture & Examples
report("Generating ARCHITECTURE.md and EXAMPLES.md...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-architecture" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
report(`✅ Architecture docs generated with ${tool}`);
break;
}
}
// Step 3: Generate HTTP API documentation (if API routes detected)
Bash({command: 'rg "router\\.|@Get|@Post" -g "*.{ts,js,py}" 2>/dev/null && echo "API_FOUND" || echo "NO_API"', run_in_background: false});
if (bash_result.stdout.includes("API_FOUND")) {
report("Generating HTTP API documentation...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "http-api" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
report(`✅ HTTP API docs generated with ${tool}`);
break;
}
}
}
```
**Expected Output**:
```
Project-Level Documentation:
✅ README.md (project root overview)
✅ ARCHITECTURE.md (system design)
✅ EXAMPLES.md (usage examples)
✅ api/README.md (HTTP API reference) [optional]
```
### Phase 5: Verification
```javascript
// Check documentation files created
Bash({command: 'find .workflow/docs -type f -name "*.md" 2>/dev/null | wc -l', run_in_background: false});
// Display structure
Bash({command: 'tree -L 3 .workflow/docs/', run_in_background: false});
```
**Result Summary**:
```
Documentation Generation Summary:
Total: 31 | Success: 29 | Failed: 2
Tool usage: gemini: 25, qwen: 4, codex: 0
Failed: path1, path2
Generated documentation:
.workflow/docs/myproject/
├── src/
│ ├── auth/
│ │ ├── API.md
│ │ └── README.md
│ └── utils/
│ └── README.md
└── README.md
```
## Error Handling
**Batch Worker**: Tool fallback per module, batch isolation, clear status reporting
**Coordinator**: Invalid path abort, user decline handling, verification with cleanup
**Fallback Triggers**: Non-zero exit code, script timeout, unexpected output
## Output Structure
```
.workflow/docs/{project_name}/
├── src/ # Mirrors source structure
│ ├── modules/
│ │ ├── README.md # Navigation
│ │ ├── auth/
│ │ │ ├── API.md # API signatures
│ │ │ ├── README.md # Module docs
│ │ │ └── middleware/
│ │ │ ├── API.md
│ │ │ └── README.md
│ │ └── api/
│ │ ├── API.md
│ │ └── README.md
│ └── utils/
│ └── README.md
├── lib/
│ └── core/
│ ├── API.md
│ └── README.md
├── README.md # ✨ Project root overview (auto-generated)
├── ARCHITECTURE.md # ✨ System design (auto-generated)
├── EXAMPLES.md # ✨ Usage examples (auto-generated)
└── api/ # ✨ Optional (auto-generated if HTTP API detected)
└── README.md # HTTP API reference
```
## Usage Examples
```bash
# Full project documentation generation
/memory:docs-full-cli
# Target specific directory
/memory:docs-full-cli src/features/auth
/memory:docs-full-cli .claude
# Use specific tool
/memory:docs-full-cli --tool qwen
/memory:docs-full-cli src --tool qwen
```
## Key Advantages
- **Efficiency**: 30 modules → 8 agents (73% reduction from sequential)
- **Resilience**: 3-tier tool fallback per module
- **Performance**: Parallel batches, no concurrency limits
- **Observability**: Per-module tool usage, batch-level metrics
- **Automation**: Zero configuration - strategy auto-selected by directory depth
- **Path Mirroring**: Clear 1:1 mapping between source and documentation structure
## Template Reference
Templates used from `~/.claude/workflows/cli-templates/prompts/documentation/`:
- `api.txt`: Code API documentation (Part A: Code API, Part B: HTTP API)
- `module-readme.txt`: Module purpose, usage, dependencies
- `folder-navigation.txt`: Navigation README for folders with subdirectories
## Related Commands
- `/memory:docs` - Agent-based documentation planning workflow
- `/memory:docs-related-cli` - Update docs for changed modules only
- `/workflow:execute` - Execute documentation tasks (when using agent mode)

View File

@@ -0,0 +1,386 @@
---
name: docs-related-cli
description: Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback; fewer than 15 modules use direct parallel execution
argument-hint: "[--tool <gemini|qwen|codex>]"
---
# Related Documentation Generation - CLI Mode (/memory:docs-related-cli)
## Overview
Orchestrates context-aware documentation generation/update for changed modules using CLI-based execution with batched agents and automatic tool fallback (gemini→qwen→codex).
**Parameters**:
- `--tool <gemini|qwen|codex>`: Primary tool (default: gemini)
**Execution Flow**:
1. Change Detection → 2. Plan Presentation → 3. Batched Execution → 4. Verification
## Core Rules
1. **Detect Changes First**: Use git diff to identify affected modules
2. **Wait for Approval**: Present plan, no execution without user confirmation
3. **Execution Strategy**:
- **<15 modules**: Direct parallel execution (max 4 concurrent per depth, no agent overhead)
- **≥15 modules**: Agent batch processing (4 modules/agent, 73% overhead reduction)
4. **Tool Fallback**: Auto-retry with fallback tools on failure
5. **Depth Sequential**: Process depths N→0, parallel batches within depth (both modes)
6. **Related Mode**: Generate/update only changed modules and their parent contexts
7. **Single Strategy**: Always use `single` strategy (incremental update)
## Tool Fallback Hierarchy
```javascript
--tool gemini [gemini, qwen, codex] // default
--tool qwen [qwen, gemini, codex]
--tool codex [codex, gemini, qwen]
```
**Trigger**: Non-zero exit code from generation script
| Tool | Best For | Fallback To |
|--------|--------------------------------|----------------|
| gemini | Documentation, patterns | qwen → codex |
| qwen | Architecture, system design | gemini → codex |
| codex | Implementation, code quality | gemini → qwen |
## Phase 1: Change Detection & Analysis
```javascript
// Get project metadata
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});
// Detect changed modules
Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});
// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|change:<TYPE>|type:<code|navigation>` to extract affected modules.
**Smart filter**: Auto-detect and skip tests/build/config/vendor based on project tech stack (Node.js/Python/Go/Rust/etc).
**Fallback**: If no changes detected, use recent modules (first 10 by depth).
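The plan in Phase 2 below also lists parent modules of each change; a sketch of that expansion, assuming `./`-style relative paths:
```javascript
// Expand changed module paths to include their parent chain up to the root.
function withParentContexts(paths) {
  const all = new Set();
  for (let p of paths) {
    while (true) {
      all.add(p);
      if (p === ".") break;
      const slash = p.lastIndexOf("/");
      p = slash > 0 ? p.slice(0, slash) : ".";
    }
  }
  return [...all];
}

withParentContexts(["./src/api/auth"]);
// → ["./src/api/auth", "./src/api", "./src", "."]
```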
## Phase 2: Plan Presentation
**Present filtered plan**:
```
Related Documentation Generation Plan:
Tool: gemini (fallback: qwen → codex)
Changed: 4 modules | Batching: 4 modules/agent
Project: myproject
Output: .workflow/docs/myproject/
Will generate/update docs for:
- ./src/api/auth (5 files, type: code) [new module]
- ./src/api (12 files, type: code) [parent of changed auth/]
- ./src (8 files, type: code) [parent context]
- . (14 files, type: code) [root level]
Documentation Strategy:
- Strategy: single (all modules - incremental update)
- Output: API.md + README.md (code folders), README.md only (navigation folders)
- Context: Current dir code + child docs
Auto-skipped (12 paths):
- Tests: ./src/api/auth.test.ts (8 paths)
- Config: tsconfig.json (3 paths)
- Other: node_modules (1 path)
Agent allocation:
- Depth 3 (1 module): 1 agent [1]
- Depth 2 (1 module): 1 agent [1]
- Depth 1 (1 module): 1 agent [1]
- Depth 0 (1 module): 1 agent [1]
Estimated time: ~5-10 minutes
Confirm execution? (y/n)
```
**Decision logic**:
- User confirms "y": Proceed with execution
- User declines "n": Abort, no changes
- <15 modules: Direct execution
- ≥15 modules: Agent batch execution
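A sketch of this dispatch (helper names are assumptions):
```javascript
async function runRelatedDocsGeneration(modules, confirmed) {
  if (!confirmed) return;              // user declined: no changes
  if (modules.length < 15) {
    await directExecution(modules);    // Phase 3A
  } else {
    await agentBatchExecution(modules); // Phase 3B
  }
}
```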
## Phase 3A: Direct Execution (<15 modules)
**Strategy**: Parallel execution within depth (max 4 concurrent), no agent overhead.
**CRITICAL**: All Bash commands use `run_in_background: false` for synchronous execution.
```javascript
let project_name = detect_project_name();
for (let depth of sorted_depths.reverse()) { // N → 0
let batches = batch_modules(modules_by_depth[depth], 4);
for (let batch of batches) {
let parallel_tasks = batch.map(module => {
return async () => {
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "single" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
report(`${module.path} docs generated with ${tool}`);
return true;
}
}
report(`❌ FAILED: ${module.path} failed all tools`);
return false;
};
});
await Promise.all(parallel_tasks.map(task => task()));
}
}
```
## Phase 3B: Agent Batch Execution (≥15 modules)
### Batching Strategy
```javascript
// Batch modules into groups of 4
function batch_modules(modules, batch_size = 4) {
let batches = [];
for (let i = 0; i < modules.length; i += batch_size) {
batches.push(modules.slice(i, i + batch_size));
}
return batches;
}
// Examples: 10→[4,4,2] | 8→[4,4] | 3→[3]
```
### Coordinator Orchestration
```javascript
let modules_by_depth = group_by_depth(changed_modules);
let tool_order = construct_tool_order(primary_tool);
let project_name = detect_project_name();
for (let depth of sorted_depths.reverse()) { // N → 0
let batches = batch_modules(modules_by_depth[depth], 4);
let worker_tasks = [];
for (let batch of batches) {
worker_tasks.push(
Task(
subagent_type="memory-bridge",
description=`Generate docs for ${batch.length} modules at depth ${depth}`,
prompt=generate_batch_worker_prompt(batch, tool_order, depth, project_name, "related")
)
);
}
await parallel_execute(worker_tasks); // Batches run in parallel
}
```
### Batch Worker Prompt Template
```
PURPOSE: Generate/update documentation for assigned modules with tool fallback (related mode)
TASK:
Generate documentation for the following modules based on recent changes. For each module, try tools in order until success.
PROJECT: {{project_name}}
OUTPUT: .workflow/docs/{{project_name}}/
MODULES:
{{module_path_1}} (type: {{folder_type_1}})
{{module_path_2}} (type: {{folder_type_2}})
{{module_path_3}} (type: {{folder_type_3}})
{{module_path_4}} (type: {{folder_type_4}})
TOOLS (try in order):
1. {{tool_1}}
2. {{tool_2}}
3. {{tool_3}}
EXECUTION:
For each module above:
1. Try tool 1:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_1}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_1}}", proceed to next module
→ Failure: Try tool 2
2. Try tool 2:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_2}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_2}}", proceed to next module
→ Failure: Try tool 3
3. Try tool 3:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_3}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_3}}", proceed to next module
→ Failure: Report "❌ FAILED: {{module_path}} failed all tools", proceed to next module
FOLDER TYPE HANDLING:
- code: Generate API.md + README.md
- navigation: Generate README.md only
REPORTING:
Report final summary with:
- Total processed: X modules
- Successful: Y modules
- Failed: Z modules
- Tool usage: {{tool_1}}:X, {{tool_2}}:Y, {{tool_3}}:Z
```
## Phase 4: Verification
```javascript
// Check documentation files created/updated
Bash({command: 'find .workflow/docs -type f -name "*.md" 2>/dev/null | wc -l', run_in_background: false});
// Display recent changes
Bash({command: 'find .workflow/docs -type f -name "*.md" -mmin -60 2>/dev/null', run_in_background: false});
```
**Aggregate results**:
```
Documentation Generation Summary:
Total: 4 | Success: 4 | Failed: 0
Tool usage:
- gemini: 4 modules
- qwen: 0 modules (fallback)
- codex: 0 modules
Changes:
.workflow/docs/myproject/src/api/auth/API.md (new)
.workflow/docs/myproject/src/api/auth/README.md (new)
.workflow/docs/myproject/src/api/API.md (updated)
.workflow/docs/myproject/src/api/README.md (updated)
.workflow/docs/myproject/src/API.md (updated)
.workflow/docs/myproject/src/README.md (updated)
.workflow/docs/myproject/API.md (updated)
.workflow/docs/myproject/README.md (updated)
```
## Execution Summary
**Module Count Threshold**:
- **<15 modules**: Coordinator executes Phase 3A (Direct Execution)
- **≥15 modules**: Coordinator executes Phase 3B (Agent Batch Execution)
**Agent Hierarchy** (for ≥15 modules):
- **Coordinator**: Handles batch division, spawns worker agents per depth
- **Worker Agents**: Each processes 4 modules with tool fallback (related mode)
## Error Handling
**Batch Worker**:
- Tool fallback per module (auto-retry)
- Batch isolation (failures don't propagate)
- Clear per-module status reporting
**Coordinator**:
- No changes: Use fallback (recent 10 modules)
- User decline: No execution
- Verification fail: Report incomplete modules
- Partial failures: Continue execution, report failed modules
**Fallback Triggers**:
- Non-zero exit code
- Script timeout
- Unexpected output
## Output Structure
```
.workflow/docs/{project_name}/
├── src/ # Mirrors source structure
│ ├── modules/
│ │ ├── README.md
│ │ ├── auth/
│ │ │ ├── API.md # Updated based on code changes
│ │ │ └── README.md # Updated based on code changes
│ │ └── api/
│ │ ├── API.md
│ │ └── README.md
│ └── utils/
│ └── README.md
└── README.md
```
## Usage Examples
```bash
# Daily development documentation update
/memory:docs-related-cli
# After feature work with specific tool
/memory:docs-related-cli --tool qwen
# Code quality documentation review after implementation
/memory:docs-related-cli --tool codex
```
## Key Advantages
**Efficiency**: 30 modules → 8 agents (73% reduction)
**Resilience**: 3-tier fallback per module
**Performance**: Parallel batches, no concurrency limits
**Context-aware**: Updates based on actual git changes
**Fast**: Only affected modules, not entire project
**Incremental**: Single strategy for focused updates
## Coordinator Checklist
- Parse `--tool` (default: gemini)
- Get project metadata (name, root)
- Detect changed modules via detect_changed_modules.sh
- **Smart filter modules** (auto-detect tech stack, skip tests/build/config/vendor)
- Cache git changes
- Apply fallback if no changes (recent 10 modules)
- Construct tool fallback order
- **Present filtered plan** with skip reasons and change types
- **Wait for y/n confirmation**
- Determine execution mode:
- **<15 modules**: Direct execution (Phase 3A)
- For each depth (N→0): Sequential module updates with tool fallback
- **≥15 modules**: Agent batch execution (Phase 3B)
- For each depth (N→0): Batch modules (4 per batch), spawn batch workers in parallel
- Wait for depth/batch completion
- Aggregate results
- Verification check (documentation files created/updated)
- Display summary + recent changes
## Comparison with Full Documentation Generation
| Aspect | Related Generation | Full Generation |
|--------|-------------------|-----------------|
| **Scope** | Changed modules only | All project modules |
| **Speed** | Fast (minutes) | Slower (10-30 min) |
| **Use case** | Daily development | Initial setup, major refactoring |
| **Strategy** | `single` (all) | `full` (L3) + `single` (L1-2) |
| **Trigger** | After commits | After setup or major changes |
| **Batching** | 4 modules/agent | 4 modules/agent |
| **Fallback** | gemini→qwen→codex | gemini→qwen→codex |
| **Direct-execution threshold** | <15 modules | <20 modules |
## Template Reference
Templates used from `~/.claude/workflows/cli-templates/prompts/documentation/`:
- `api.txt`: Code API documentation
- `module-readme.txt`: Module purpose, usage, dependencies
- `folder-navigation.txt`: Navigation README for folders
## Related Commands
- `/memory:docs-full-cli` - Full project documentation generation
- `/memory:docs` - Agent-based documentation planning workflow
- `/memory:update-related` - Update CLAUDE.md for changed modules

View File

@@ -44,7 +44,11 @@ Lightweight planner that analyzes project structure, decomposes documentation wo
/memory:docs [path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]
```
- **path**: Target directory (default: current directory)
- **path**: Source directory to analyze (default: current directory)
- Specifies the source code directory to be documented
- Documentation is generated in a separate `.workflow/docs/{project_name}/` directory at the workspace root, **not** within the source `path` itself
- The source path's structure is mirrored within the project-specific documentation folder
- Example: analyzing `src/modules` produces documentation at `.workflow/docs/{project_name}/src/modules/`
- **--mode**: Documentation generation mode (default: full)
- `full`: Complete documentation (modules + README + ARCHITECTURE + EXAMPLES + HTTP API)
- `partial`: Module documentation only (API.md + README.md)

View File

@@ -95,14 +95,15 @@ src/ (depth 1) → SINGLE-LAYER STRATEGY
### Phase 1: Discovery & Analysis
```javascript
// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
// Get module structure
Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list", run_in_background: false});
// OR with --path
Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|...` to extract module paths and count.
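For example, a minimal parsing sketch (assuming the pipe-delimited field layout above):
```javascript
// Split each "depth:N|path:<PATH>|..." line into named fields
const modules = output.trim().split("\n").map(line => {
  const fields = Object.fromEntries(line.split("|").map(f => {
    const idx = f.indexOf(":");
    return [f.slice(0, idx), f.slice(idx + 1)];
  }));
  return { depth: Number(fields.depth), path: fields.path };
});
```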
@@ -172,26 +173,23 @@ Update Plan:
**Strategy**: Parallel execution within layer (max 4 concurrent), no agent overhead.
**CRITICAL**: All Bash commands use `run_in_background: false` for synchronous execution.
```javascript
// Group modules by LAYER (not depth)
let modules_by_layer = group_by_layer(module_list);
let tool_order = construct_tool_order(primary_tool);
// Process by LAYER (3 → 2 → 1), not by depth
for (let layer of [3, 2, 1]) {
  if (modules_by_layer[layer].length === 0) continue;
  let batches = batch_modules(modules_by_layer[layer], 4);
  for (let batch of batches) {
    let parallel_tasks = batch.map(module => {
      return async () => {
        // Auto-determine strategy based on depth
        let strategy = module.depth >= 3 ? "multi-layer" : "single-layer";
        for (let tool of tool_order) {
          Bash({
            command: `cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "${strategy}" "." "${tool}"`,
            run_in_background: false
          });
          if (bash_result.exit_code === 0) {
            report(`${module.path} (Layer ${layer}) updated with ${tool}`);
            return true;
          }
        }
        return false;
      };
    });
    await Promise.all(parallel_tasks.map(task => task()));
  }
}
```
@@ -255,7 +252,10 @@ EXECUTION SCRIPT: ~/.claude/scripts/update_module_claude.sh
EXECUTION FLOW (for each module):
1. Tool fallback loop (exit on first success):
for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
bash(cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "{{strategy}}" "." "${tool}")
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "{{strategy}}" "." "${tool}"`,
run_in_background: false
})
exit_code=$?
if [ $exit_code -eq 0 ]; then
@@ -287,12 +287,12 @@ REPORTING FORMAT:
```
### Phase 4: Safety Verification
```javascript
// Check only CLAUDE.md files modified
Bash({command: 'git diff --cached --name-only | grep -v "CLAUDE.md" || echo "Only CLAUDE.md files modified"', run_in_background: false});
// Display status
Bash({command: "git status --short", run_in_background: false});
```
**Result Summary**:

View File

@@ -39,12 +39,12 @@ Orchestrates context-aware CLAUDE.md updates for changed modules using batched a
## Phase 1: Change Detection & Analysis
```javascript
// Detect changed modules
Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});
// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|change:<TYPE>` to extract affected modules.
@@ -89,47 +89,36 @@ Related Update Plan:
## Phase 3A: Direct Execution (<15 modules)
**Strategy**: Parallel execution within depth (max 4 concurrent), no agent overhead.
**CRITICAL**: All Bash commands use `run_in_background: false` for synchronous execution.
```javascript
let modules_by_depth = group_by_depth(changed_modules);
let tool_order = construct_tool_order(primary_tool);
for (let depth of sorted_depths.reverse()) { // N → 0
  let batches = batch_modules(modules_by_depth[depth], 4);
  for (let batch of batches) {
    // Execute batch in parallel (max 4 concurrent)
    let parallel_tasks = batch.map(module => {
      return async () => {
        for (let tool of tool_order) {
          Bash({
            command: `cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "${tool}"`,
            run_in_background: false
          });
          if (bash_result.exit_code === 0) {
            report(`${module.path} updated with ${tool}`);
            return true;
          }
        }
        report(`❌ FAILED: ${module.path} failed all tools`);
        return false;
      };
    });
    await Promise.all(parallel_tasks.map(task => task()));
  }
}
```
**Benefits**:
- No agent startup overhead
- Parallel execution within depth (max 4 concurrent)
- Tool fallback still applies per module
- Faster for small changesets (<15 modules)
- Same batching strategy as Phase 3B but without agent layer
---
## Phase 3B: Agent Batch Execution (≥15 modules)
@@ -193,19 +182,27 @@ TOOLS (try in order):
EXECUTION:
For each module above:
1. cd "{{module_path}}"
2. Try tool 1:
bash(cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_1}}")
→ Success: Report "{{module_path}} updated with {{tool_1}}", proceed to next module
1. Try tool 1:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_1}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} updated with {{tool_1}}", proceed to next module
→ Failure: Try tool 2
3. Try tool 2:
bash(cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_2}}")
→ Success: Report "{{module_path}} updated with {{tool_2}}", proceed to next module
2. Try tool 2:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_2}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} updated with {{tool_2}}", proceed to next module
→ Failure: Try tool 3
4. Try tool 3:
bash(cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_3}}")
→ Success: Report "{{module_path}} updated with {{tool_3}}", proceed to next module
→ Failure: Report "FAILED: {{module_path}} failed all tools", proceed to next module
3. Try tool 3:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_3}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} updated with {{tool_3}}", proceed to next module
→ Failure: Report "❌ FAILED: {{module_path}} failed all tools", proceed to next module
REPORTING:
Report final summary with:
@@ -213,30 +210,16 @@ Report final summary with:
- Successful: Y modules
- Failed: Z modules
- Tool usage: {{tool_1}}:X, {{tool_2}}:Y, {{tool_3}}:Z
- Detailed results for each module
```
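For reference, a final summary under this format might read (numbers hypothetical):
```
Summary: 12 modules processed
- Successful: 11 modules
- Failed: 1 module
- Tool usage: gemini:8, qwen:2, codex:1
```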
### Example Execution
**Depth 3 (new module)**:
```javascript
Task(subagent_type="memory-bridge", batch=[./src/api/auth], mode="related")
```
**Benefits**:
- 4 modules → 1 agent (75% reduction)
- Parallel batches, sequential within batch
- Each module gets full fallback chain
- Context-aware updates based on git changes
## Phase 4: Safety Verification
```javascript
// Check only CLAUDE.md modified
Bash({command: 'git diff --cached --name-only | grep -v "CLAUDE.md" || echo "Only CLAUDE.md files modified"', run_in_background: false});
// Display statistics
Bash({command: "git diff --stat", run_in_background: false});
```
**Aggregate results**:

View File

@@ -381,6 +381,64 @@ Updated {role2}/analysis.md with Clarifications section + enhanced content
- Ambiguities resolved, placeholders removed
- Consistent terminology
### Phase 6: Update Context Package
**Purpose**: Sync updated role analyses to context-package.json to avoid stale cache
**Operations**:
```bash
context_pkg_path = ".workflow/active/WFS-{session}/.process/context-package.json"
# 1. Read existing package
context_pkg = Read(context_pkg_path)
# 2. Re-read brainstorm artifacts (now with synthesis enhancements)
brainstorm_dir = ".workflow/active/WFS-{session}/.brainstorming"
# 2.1 Update guidance-specification if exists
IF exists({brainstorm_dir}/guidance-specification.md):
context_pkg.brainstorm_artifacts.guidance_specification.content = Read({brainstorm_dir}/guidance-specification.md)
context_pkg.brainstorm_artifacts.guidance_specification.updated_at = NOW()
# 2.2 Update synthesis-specification if exists
IF exists({brainstorm_dir}/synthesis-specification.md):
IF context_pkg.brainstorm_artifacts.synthesis_output:
context_pkg.brainstorm_artifacts.synthesis_output.content = Read({brainstorm_dir}/synthesis-specification.md)
context_pkg.brainstorm_artifacts.synthesis_output.updated_at = NOW()
# 2.3 Re-read all role analysis files
role_analysis_files = Glob({brainstorm_dir}/*/analysis*.md)
context_pkg.brainstorm_artifacts.role_analyses = []
FOR file IN role_analysis_files:
role_name = extract_role_from_path(file) # e.g., "ui-designer"
relative_path = file.replace({brainstorm_dir}/, "")
context_pkg.brainstorm_artifacts.role_analyses.push({
"role": role_name,
"files": [{
"path": relative_path,
"type": "primary",
"content": Read(file),
"updated_at": NOW()
}]
})
# 3. Update metadata
context_pkg.metadata.updated_at = NOW()
context_pkg.metadata.synthesis_timestamp = NOW()
# 4. Write back
Write(context_pkg_path, JSON.stringify(context_pkg, indent=2))
REPORT: "✅ Updated context-package.json with synthesis results"
```
**TodoWrite Update**:
```json
{"content": "Update context package with synthesis results", "status": "completed", "activeForm": "Updating context package"}
```
## Session Metadata
Update `workflow-session.json`:

View File

@@ -54,13 +54,64 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag
### Phase 1: Discovery
**Applies to**: Normal mode only (skipped in resume mode)
**Purpose**: Find and select active workflow session with user confirmation when multiple sessions exist
**Resume Mode**: This phase is completely skipped when `--resume-session="session-id"` flag is provided.
**Process**:
#### Step 1.1: Count Active Sessions
```bash
bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | wc -l)
```
#### Step 1.2: Handle Session Selection
**Case A: No Sessions** (count = 0)
```
ERROR: No active workflow sessions found
Run /workflow:plan "task description" to create a session
```
**Case B: Single Session** (count = 1)
```bash
bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | head -1 | xargs basename)
```
Auto-select and continue to Phase 2.
**Case C: Multiple Sessions** (count > 1)
List sessions with metadata and prompt user selection:
```bash
bash(for dir in .workflow/active/WFS-*/; do
session=$(basename "$dir")
project=$(jq -r '.project // "Unknown"' "$dir/workflow-session.json" 2>/dev/null)
total=$(grep -c "^- \[" "$dir/TODO_LIST.md" 2>/dev/null || echo "0")
completed=$(grep -c "^- \[x\]" "$dir/TODO_LIST.md" 2>/dev/null || echo "0")
[ "$total" -gt 0 ] && progress=$((completed * 100 / total)) || progress=0
echo "${session} | ${project} | ${completed}/${total} tasks (${progress}%)"
done)
```
Use AskUserQuestion to present formatted options:
```
Multiple active workflow sessions detected. Please select one:
1. WFS-auth-system | Authentication System | 3/5 tasks (60%)
2. WFS-payment-module | Payment Integration | 0/8 tasks (0%)
Enter number, full session ID, or partial match:
```
Parse user input (supports: number "1", full ID "WFS-auth-system", or partial "auth"), validate selection, and continue to Phase 2.
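A small matching sketch for this input handling (function name illustrative):
```javascript
// Resolve "1", "WFS-auth-system", or "auth" to a session ID
function resolveSession(input, sessionIds) {
  if (/^\d+$/.test(input)) return sessionIds[Number(input) - 1];   // number
  if (sessionIds.includes(input)) return input;                    // full ID
  return sessionIds.find(id => id.includes(input));                // partial match
}
```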
#### Step 1.3: Load Session Metadata
```bash
bash(cat .workflow/active/${sessionId}/workflow-session.json)
```
**Output**: Store session metadata in memory
**DO NOT read task JSONs yet** - defer until execution phase (lazy loading)
### Phase 2: Planning Document Analysis
**Applies to**: Normal mode only (skipped in resume mode)

View File

@@ -185,86 +185,104 @@ Execution Complete
previousExecutionResults = []
```
### Step 2: Task Grouping & Batch Creation
**Dependency Analysis & Grouping Algorithm**:
```javascript
// Infer dependencies: same file → sequential, keywords (use/integrate) → sequential
function inferDependencies(tasks) {
  return tasks.map((task, i) => {
    const deps = []
    const file = task.file || task.title.match(/in\s+([^\s:]+)/)?.[1]
    const keywords = (task.description || task.title).toLowerCase()
    for (let j = 0; j < i; j++) {
      const prevFile = tasks[j].file || tasks[j].title.match(/in\s+([^\s:]+)/)?.[1]
      if (file && prevFile === file) deps.push(j) // Same file
      else if (/use|integrate|call|import/.test(keywords)) deps.push(j) // Keyword dependency
    }
    return { ...task, taskIndex: i, dependencies: deps }
  })
}
// Group into batches: independent → parallel [P1,P2...], dependent → sequential [S1,S2...]
function createExecutionCalls(tasks, executionMethod) {
  const tasksWithDeps = inferDependencies(tasks)
  const maxBatch = executionMethod === "Codex" ? 4 : 7
  const calls = []
  const processed = new Set()
  // Parallel: independent tasks, different files, max batch size
  const parallelGroups = []
  tasksWithDeps.forEach(t => {
    if (t.dependencies.length === 0 && !processed.has(t.taskIndex)) {
      const group = [t]
      processed.add(t.taskIndex)
      tasksWithDeps.forEach(o => {
        if (!o.dependencies.length && !processed.has(o.taskIndex) &&
            group.length < maxBatch && t.file !== o.file) {
          group.push(o)
          processed.add(o.taskIndex)
        }
      })
      parallelGroups.push(group)
    }
  })
  // Sequential: dependent tasks, batch when deps satisfied
  const remaining = tasksWithDeps.filter(t => !processed.has(t.taskIndex))
  while (remaining.length > 0) {
    const batch = remaining.filter((t, i) =>
      i < maxBatch && t.dependencies.every(d => processed.has(d))
    )
    if (!batch.length) break
    batch.forEach(t => processed.add(t.taskIndex))
    calls.push({ executionType: "sequential", groupId: `S${calls.length + 1}`, tasks: batch })
    remaining.splice(0, remaining.length, ...remaining.filter(t => !processed.has(t.taskIndex)))
  }
  // Combine results
  return [
    ...parallelGroups.map((g, i) => ({
      method: executionMethod, executionType: "parallel", groupId: `P${i+1}`,
      taskSummary: g.map(t => t.title).join(' | '), tasks: g
    })),
    ...calls.map(c => ({ ...c, method: executionMethod, taskSummary: c.tasks.map(t => t.title).join(' → ') }))
  ]
}
executionCalls = createExecutionCalls(planObject.tasks, executionMethod).map(c => ({ ...c, id: `[${c.groupId}]` }))
// Create TodoWrite list
TodoWrite({
  todos: executionCalls.map(c => ({
    content: `${c.executionType === "parallel" ? "⚡" : "→"} ${c.id} (${c.tasks.length} tasks)`,
    status: "pending",
    activeForm: `Executing ${c.id}`
  }))
})
```
**Example Execution Lists**:
```
Parallel batches (independent tasks, launched concurrently):
[ ] ⚡ [P1] (3 tasks)
[ ] ⚡ [P2] (2 tasks)
Sequential batches (dependent tasks, run in order):
[ ] → [S1] (2 tasks)
```
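As an illustration, a hypothetical three-task plan (file paths assumed) would group as follows:
```javascript
const tasks = [
  { title: "Create AuthService in src/auth/service.ts", file: "src/auth/service.ts" },
  { title: "Add JWT utilities in src/auth/jwt.ts", file: "src/auth/jwt.ts" },
  { title: "Integrate AuthService into routes", description: "use AuthService", file: "src/routes.ts" }
]
// createExecutionCalls(tasks, "Agent") →
//   [P1] parallel: tasks 1-2 (independent, different files)
//   [S1] sequential: task 3 (keyword "use" marks it dependent on earlier tasks)
```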
### Step 3: Launch Execution
**IMPORTANT**: CLI execution MUST run in foreground (no background execution)
**Execution Flow**: Parallel batches concurrently → Sequential batches in order
```javascript
const parallel = executionCalls.filter(c => c.executionType === "parallel")
const sequential = executionCalls.filter(c => c.executionType === "sequential")
// Phase 1: Launch all parallel batches (single message with multiple tool calls)
if (parallel.length > 0) {
  TodoWrite({ todos: executionCalls.map(c => ({ status: c.executionType === "parallel" ? "in_progress" : "pending" })) })
  parallelResults = await Promise.all(parallel.map(c => executeBatch(c)))
  previousExecutionResults.push(...parallelResults)
  TodoWrite({ todos: executionCalls.map(c => ({ status: parallel.includes(c) ? "completed" : "pending" })) })
}
// Phase 2: Execute sequential batches one by one
// isDone(c): illustrative helper — true once batch c has finished executing
for (const call of sequential) {
  TodoWrite({ todos: executionCalls.map(c => ({ status: c === call ? "in_progress" : isDone(c) ? "completed" : "pending" })) })
  result = await executeBatch(call)
  previousExecutionResults.push(result)
  TodoWrite({ todos: executionCalls.map(c => ({ status: isDone(c) ? "completed" : "pending" })) })
}
```
@@ -323,12 +341,17 @@ ${result.notes ? `Notes: ${result.notes}` : ''}
${clarificationContext ? `\n## Clarifications\n${JSON.stringify(clarificationContext, null, 2)}` : ''}
## Instructions
- Reference original request to ensure alignment
- Review previous results to understand completed work
- Build on previous work, avoid duplication
- Test functionality as you implement
- Complete all assigned tasks
${executionContext?.session?.artifacts ? `\n## Planning Artifacts
Detailed planning context available in:
${executionContext.session.artifacts.exploration ? `- Exploration: ${executionContext.session.artifacts.exploration}` : ''}
- Plan: ${executionContext.session.artifacts.plan}
- Task: ${executionContext.session.artifacts.task}
Read these files for detailed architecture, patterns, and constraints.` : ''}
## Requirements
MUST complete ALL ${planObject.tasks.length} tasks listed above in this single execution.
Return only after all tasks are fully implemented and tested.
`
)
```
@@ -341,6 +364,11 @@ When to use:
- `executionMethod = "Codex"`
- `executionMethod = "Auto" AND complexity = "Medium" or "High"`
**Artifact Path Delegation**:
- Include artifact file paths in CLI prompt for enhanced context
- Codex can read artifact files for detailed planning information
- Example: Reference exploration.json for architecture patterns
Command format:
```bash
function formatTaskForCodex(task, index) {
@@ -390,12 +418,18 @@ Constraints: ${explorationContext.constraints || 'None'}
${clarificationContext ? `\n### User Clarifications\n${Object.entries(clarificationContext).map(([q, a]) => `${q}: ${a}`).join('\n')}` : ''}
## Execution Instructions
- Reference original request to ensure alignment
- Review previous results for context continuity
- Build on previous work, don't duplicate completed tasks
- Complete all assigned tasks in single execution
- Test functionality as you implement
${executionContext?.session?.artifacts ? `\n### Planning Artifact Files
Detailed planning context available in session folder:
${executionContext.session.artifacts.exploration ? `- Exploration: ${executionContext.session.artifacts.exploration}` : ''}
- Plan: ${executionContext.session.artifacts.plan}
- Task: ${executionContext.session.artifacts.task}
Read these files for complete architecture details, code patterns, and integration constraints.
` : ''}
## Requirements
MUST complete ALL ${planObject.tasks.length} tasks listed above in this single execution.
Return only after all tasks are fully implemented and tested.
Complexity: ${planObject.complexity}
" --skip-git-repo-check -s danger-full-access
@@ -414,105 +448,72 @@ bash_result = Bash(
**Result Collection**: After completion, analyze output and collect result following `executionResult` structure
### Step 4: Progress Tracking
Progress tracked at batch level (not individual task level). Icons: ⚡ (parallel, concurrent), → (sequential, one-by-one)
### Step 5: Code Review (Optional)
**Skip Condition**: Only run if `codeReviewTool ≠ "Skip"`
**Operations**:
- Agent Review: Current agent performs direct review (read task.json for acceptance criteria)
- Gemini Review: Execute gemini CLI with review prompt (task.json in CONTEXT)
- Custom tool: Execute specified CLI tool (qwen, codex, etc.) with task.json reference
**Unified Review Template** (All tools use same standard):
**Review Criteria**:
- **Acceptance Criteria**: Verify each criterion from task.json `context.acceptance`
- **Code Quality**: Analyze quality, identify issues, suggest improvements
- **Plan Alignment**: Validate implementation matches planned approach
**Shared Prompt Template** (used by all CLI tools):
```
PURPOSE: Code review for implemented changes against task.json acceptance criteria
TASK: • Verify task.json acceptance criteria fulfillment • Analyze code quality • Identify issues • Suggest improvements • Validate plan adherence
MODE: analysis
CONTEXT: @**/* @{task.json} @{plan.json} [@{exploration.json}] | Memory: Review lite-execute changes against task.json requirements
EXPECTED: Quality assessment with acceptance criteria verification, issue identification, and recommendations. Explicitly check each acceptance criterion from task.json.
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-quality.txt) | Focus on task.json acceptance criteria and plan adherence | analysis=READ-ONLY
```
**Tool-Specific Execution** (apply the shared prompt template above):
```bash
# Method 1: Agent Review (current agent)
# - Read task.json: ${executionContext.session.artifacts.task}
# - Apply unified review criteria (see Shared Prompt Template)
# - Report findings directly
# Method 2: Gemini Review (recommended)
gemini -p "[Shared Prompt Template with artifacts]"
# CONTEXT includes: @**/* @${task.json} @${plan.json} [@${exploration.json}]
# Method 3: Qwen Review (alternative)
qwen -p "[Shared Prompt Template with artifacts]"
# Same prompt as Gemini, different execution engine
# Method 4: Codex Review (autonomous)
codex --full-auto exec "[Verify task.json acceptance criteria at ${task.json}]" --skip-git-repo-check -s danger-full-access
```
**Implementation Note**: Replace `[Shared Prompt Template with artifacts]` placeholder with actual template content, substituting:
- `@{task.json}` → `@${executionContext.session.artifacts.task}`
- `@{plan.json}` → `@${executionContext.session.artifacts.plan}`
- `[@{exploration.json}]` → `@${executionContext.session.artifacts.exploration}` (if exists)
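For instance, a substituted Gemini invocation might look like this (session folder name hypothetical):
```bash
gemini -p "PURPOSE: Code review for implemented changes against task.json acceptance criteria ... CONTEXT: @**/* @.workflow/.lite-plan/implement-user-auth-2025-01-15-14-30-45/task.json @.workflow/.lite-plan/implement-user-auth-2025-01-15-14-30-45/plan.json | Memory: Review lite-execute changes against task.json requirements ..."
```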
## Best Practices
### Execution Intelligence
1. **Context Continuity**: Each execution call receives previous results
- Prevents duplication across multiple executions
- Maintains coherent implementation flow
- Builds on completed work
**Input Modes**: In-memory (lite-plan), prompt (standalone), file (JSON/text)
**Batch Limits**: Agent 7 tasks, CLI 4 tasks
**Execution**: Parallel batches use single Claude message with multiple tool calls (no concurrency limit)
## Error Handling
@@ -546,10 +547,26 @@ Passed from lite-plan via global variable:
clarificationContext: {...} | null,
executionMethod: "Agent" | "Codex" | "Auto",
codeReviewTool: "Skip" | "Gemini Review" | "Agent Review" | string,
originalUserInput: string,
// Session artifacts location (saved by lite-plan)
session: {
id: string, // Session identifier: {taskSlug}-{shortTimestamp}
folder: string, // Session folder path: .workflow/.lite-plan/{session-id}
artifacts: {
exploration: string | null, // exploration.json path (if exploration performed)
plan: string, // plan.json path (always present)
task: string // task.json path (always exported)
}
}
}
```
**Artifact Usage**:
- Artifact files contain detailed planning context
- Pass artifact paths to CLI tools and agents for enhanced context
- See execution options below for usage examples
### executionResult (Output)
Collected after each execution call completes:

View File

@@ -130,6 +130,13 @@ needsExploration = (
**Exploration Execution** (if needed):
```javascript
// Generate session identifiers for artifact storage
const taskSlug = task_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
const timestamp = new Date().toISOString().replace(/[:.]/g, '-')
const shortTimestamp = timestamp.substring(0, 19).replace('T', '-') // YYYY-MM-DD-HH-mm-ss
const sessionId = `${taskSlug}-${shortTimestamp}`
const sessionFolder = `.workflow/.lite-plan/${sessionId}`
Task(
subagent_type="cli-explore-agent",
description="Analyze codebase for task context",
@@ -149,9 +156,14 @@ Task(
Output Format: JSON-like structured object
`
)
// Save exploration results for CLI/agent access in lite-execute
const explorationFile = `${sessionFolder}/exploration.json`
Write(explorationFile, JSON.stringify(explorationContext, null, 2))
```
**Output**: `explorationContext` (in-memory, see Data Structures section)
**Artifact**: Saved to `{sessionFolder}/exploration.json` for CLI/agent use
**Progress Tracking**:
- Mark Phase 1 completed
@@ -228,6 +240,14 @@ Current Claude generates plan directly:
- Estimated Time: Total implementation time
- Recommended Execution: "Agent"
```javascript
// Save planning results to session folder (same as Option B)
const planFile = `${sessionFolder}/plan.json`
Write(planFile, JSON.stringify(planObject, null, 2))
```
**Artifact**: Saved to `{sessionFolder}/plan.json` for CLI/agent use
**Option B: Agent-Based Planning (Medium/High Complexity)**
Delegate to cli-lite-planning-agent:
@@ -270,9 +290,14 @@ Task(
Format: "{Action} in {file_path}: {details} following {pattern}"
`
)
// Save planning results to session folder
const planFile = `${sessionFolder}/plan.json`
Write(planFile, JSON.stringify(planObject, null, 2))
```
**Output**: `planObject` (see Data Structures section)
**Artifact**: Saved to `{sessionFolder}/plan.json` for CLI/agent use
**Progress Tracking**:
- Mark Phase 3 completed
@@ -315,7 +340,7 @@ ${i+1}. **${task.title}** (${task.file})
**Step 4.2: Collect User Confirmation**
Three questions via single AskUserQuestion call:
```javascript
AskUserQuestion({
@@ -353,15 +378,6 @@ Confirm plan? (Multi-select: can supplement via "Other")`,
{ label: "Agent Review", description: "@code-reviewer agent" },
{ label: "Skip", description: "No review" }
]
}
]
})
@@ -384,10 +400,6 @@ Code Review (after execution):
├─ Gemini Review → gemini CLI analysis
├─ Agent Review → Current Claude review
└─ Other → Custom tool (e.g., qwen, codex)
```
**Progress Tracking**:
@@ -398,48 +410,48 @@ Export JSON:
### Phase 5: Dispatch to Execution
**Step 5.1: Export Enhanced Task JSON**
Always export Enhanced Task JSON to session folder:
```javascript
const taskId = `LP-${shortTimestamp}`
const filename = `${sessionFolder}/task.json`
const enhancedTaskJson = {
  id: taskId,
  title: original_task_description,
  status: "pending",
  meta: {
    type: "planning",
    created_at: new Date().toISOString(),
    complexity: planObject.complexity,
    estimated_time: planObject.estimated_time,
    recommended_execution: planObject.recommended_execution,
    workflow: "lite-plan",
    session_id: sessionId,
    session_folder: sessionFolder
  },
  context: {
    requirements: [original_task_description],
    plan: {
      summary: planObject.summary,
      approach: planObject.approach,
      tasks: planObject.tasks
    },
    exploration: explorationContext || null,
    clarifications: clarificationContext || null,
    focus_paths: explorationContext?.relevant_files || [],
    acceptance: planObject.tasks.flatMap(t => t.acceptance)
  }
}
Write(filename, JSON.stringify(enhancedTaskJson, null, 2))
console.log(`Enhanced Task JSON exported to: ${filename}`)
console.log(`Session folder: ${sessionFolder}`)
console.log(`Reuse with: /workflow:lite-execute ${filename}`)
```
**Step 5.2: Store Execution Context**
@@ -451,7 +463,18 @@ executionContext = {
clarificationContext: clarificationContext || null,
executionMethod: userSelection.execution_method,
codeReviewTool: userSelection.code_review_tool,
originalUserInput: original_task_description,
// Session artifacts location
session: {
id: sessionId,
folder: sessionFolder,
artifacts: {
exploration: explorationContext ? `${sessionFolder}/exploration.json` : null,
plan: `${sessionFolder}/plan.json`,
task: `${sessionFolder}/task.json` // Always exported
}
}
}
```
@@ -462,7 +485,11 @@ SlashCommand(command="/workflow:lite-execute --in-memory")
```
**Execution Handoff**:
- lite-execute reads `executionContext` variable from memory
- `executionContext.session.artifacts` contains file paths to saved planning artifacts:
- `exploration` - exploration.json (if exploration performed)
- `plan` - plan.json (always exists)
- `task` - task.json (always exported)
- All execution logic handled by lite-execute
- lite-plan completes after successful handoff
@@ -502,7 +529,7 @@ SlashCommand(command="/workflow:lite-execute --in-memory")
- Plan confirmation (multi-select with supplements)
- Execution method selection
- Code review tool selection (custom via "Other")
- Enhanced Task JSON always exported to session folder
- Allows plan refinement without re-selecting execution method
### Task Management
@@ -519,11 +546,11 @@ SlashCommand(command="/workflow:lite-execute --in-memory")
- Medium: 5-7 tasks (detailed)
- High: 7-10 tasks (comprehensive)
3. **Session Artifact Management**:
- All planning artifacts saved to dedicated session folder
- Enhanced Task JSON always exported for reusability
- Plan context passed to execution via memory and files
- Clean organization with session-based folder structure
### Planning Standards
@@ -550,6 +577,39 @@ SlashCommand(command="/workflow:lite-execute --in-memory")
| Phase 4 Confirmation Timeout | User no response > 5 minutes | Save context to temp var, display resume instructions, exit gracefully |
| Phase 4 Modification Loop | User requests modify > 3 times | Suggest breaking task into smaller pieces or using `/workflow:plan` |
## Session Folder Structure
Each lite-plan execution creates a dedicated session folder to organize all artifacts:
```
.workflow/.lite-plan/{task-slug}-{short-timestamp}/
├── exploration.json # Exploration results (if exploration performed)
├── plan.json # Planning results (always created)
└── task.json # Enhanced Task JSON (always created)
```
**Folder Naming Convention**:
- `{task-slug}`: First 40 characters of task description, lowercased, non-alphanumeric replaced with `-`
- `{short-timestamp}`: YYYY-MM-DD-HH-mm-ss format
- Example: `.workflow/.lite-plan/implement-user-auth-jwt-2025-01-15-14-30-45/`
**File Contents**:
- `exploration.json`: Complete explorationContext object (if exploration performed, see Data Structures)
- `plan.json`: Complete planObject (always created, see Data Structures)
- `task.json`: Enhanced Task JSON with all context (always created, see Data Structures)
**Access Patterns**:
- **lite-plan**: Creates folder and writes all artifacts during execution, passes paths via `executionContext.session.artifacts`
- **lite-execute**: Reads artifact paths from `executionContext.session.artifacts` (see lite-execute.md for usage details)
- **User**: Can inspect artifacts for debugging or reference
- **Reuse**: Pass `task.json` path to `/workflow:lite-execute {path}` for re-execution
**Benefits**:
- Clean separation between different task executions
- Easy to find and inspect artifacts for specific tasks
- Natural history/audit trail of planning sessions
- Supports concurrent lite-plan executions without conflicts
## Data Structures
### explorationContext
@@ -621,7 +681,18 @@ Context passed to lite-execute via --in-memory (Phase 5):
clarificationContext: {...} | null, // User responses from Phase 2
executionMethod: "Agent" | "Codex" | "Auto",
codeReviewTool: "Skip" | "Gemini Review" | "Agent Review" | string,
originalUserInput: string, // User's original task description
// Session artifacts location (for lite-execute to access saved files)
session: {
id: string, // Session identifier: {taskSlug}-{shortTimestamp}
folder: string, // Session folder path: .workflow/.lite-plan/{session-id}
artifacts: {
exploration: string | null, // exploration.json path (if exploration performed)
plan: string, // plan.json path (always present)
task: string // task.json path (always exported)
}
}
}
```

View File

@@ -72,6 +72,8 @@ CONTEXT: Existing user database schema, REST API endpoints
- Session ID successfully extracted
- Session directory `.workflow/active/[sessionId]/` exists
**Note**: Session directory contains `workflow-session.json` (metadata). Do NOT look for `manifest.json` here - it only exists in `.workflow/archives/` for archived sessions.
**TodoWrite**: Mark phase 1 completed, phase 2 in_progress
**After Phase 1**: Return to user showing Phase 1 results, then auto-continue to Phase 2

View File

@@ -213,8 +213,6 @@ Refer to: @.claude/agents/action-planning-agent.md for:
Generate all three documents and report completion status:
- Task JSON files created: N files
- Artifacts integrated: synthesis-spec, guidance-specification, N role analyses
- MCP enhancements: code-index, exa-research
- Session ready for execution: /workflow:execute
`
)
```

View File

@@ -227,7 +227,68 @@ Write(file_path=".workflow/active/WFS-{session}/.brainstorming/ui-designer/desig
content="[generated content with @ references]")
```
### Phase 5: Update Context Package
**Purpose**: Sync design system references to context-package.json
**Operations**:
```bash
context_pkg_path = ".workflow/active/WFS-{session}/.process/context-package.json"
# 1. Read existing package
context_pkg = Read(context_pkg_path)
# 2. Update brainstorm_artifacts (role analyses now contain @ design references)
brainstorm_dir = ".workflow/active/WFS-{session}/.brainstorming"
role_analysis_files = Glob({brainstorm_dir}/*/analysis*.md)
context_pkg.brainstorm_artifacts.role_analyses = []
FOR file IN role_analysis_files:
role_name = extract_role_from_path(file)
relative_path = file.replace({brainstorm_dir}/, "")
context_pkg.brainstorm_artifacts.role_analyses.push({
"role": role_name,
"files": [{
"path": relative_path,
"type": "primary",
"content": Read(file), # Contains @ design system references
"updated_at": NOW()
}]
})
# 3. Add design_system_references field
context_pkg.design_system_references = {
"design_run_id": design_id,
"tokens": `${design_id}/${design_tokens_path}`,
"style_guide": `${design_id}/${style_guide_path}`,
"prototypes": selected_list.map(p => `${design_id}/prototypes/${p}.html`),
"updated_at": NOW()
}
# 4. Optional: Add animations and layouts if they exist
IF exists({latest_design}/animation-extraction/animation-tokens.json):
context_pkg.design_system_references.animations = `${design_id}/animation-extraction/animation-tokens.json`
IF exists({latest_design}/layout-extraction/layout-templates.json):
context_pkg.design_system_references.layouts = `${design_id}/layout-extraction/layout-templates.json`
# 5. Update metadata
context_pkg.metadata.updated_at = NOW()
context_pkg.metadata.design_sync_timestamp = NOW()
# 6. Write back
Write(context_pkg_path, JSON.stringify(context_pkg, indent=2))
REPORT: "✅ Updated context-package.json with design system references"
```
**TodoWrite Update**:
```json
{"content": "Update context package with design references", "status": "completed", "activeForm": "Updating context package"}
```
### Phase 6: Completion
```javascript
TodoWrite({todos: [

View File

@@ -0,0 +1,713 @@
#!/bin/bash
# Generate documentation for modules and projects with multiple strategies
# Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]
# strategy: full|single|project-readme|project-architecture|http-api
# source_path: Path to the source module directory (or project root for project-level docs)
# project_name: Project name for output path (e.g., "myproject")
# tool: gemini|qwen|codex (default: gemini)
# model: Model name (optional, uses tool defaults)
#
# Default Models:
# gemini: gemini-2.5-flash
# qwen: coder-model
# codex: gpt5-codex
#
# Module-Level Strategies:
# full: Full documentation generation
# - Read: All files in current and subdirectories (@**/*)
# - Generate: API.md + README.md for each directory containing code files
# - Use: Deep directories (Layer 3), comprehensive documentation
#
# single: Single-layer documentation
# - Read: Current directory code + child API.md/README.md files
# - Generate: API.md + README.md only in current directory
# - Use: Upper layers (Layer 1-2), incremental updates
#
# Project-Level Strategies:
# project-readme: Project overview documentation
# - Read: All module API.md and README.md files
# - Generate: README.md (project root)
# - Use: After all module docs are generated
#
# project-architecture: System design documentation
# - Read: All module docs + project README
# - Generate: ARCHITECTURE.md + EXAMPLES.md
# - Use: After project README is generated
#
# http-api: HTTP API documentation
# - Read: API route files + existing docs
# - Generate: api/README.md
# - Use: For projects with HTTP APIs
#
# Output Structure:
# Module docs: .workflow/docs/{project_name}/{source_path}/API.md
# Module docs: .workflow/docs/{project_name}/{source_path}/README.md
# Project docs: .workflow/docs/{project_name}/README.md
# Project docs: .workflow/docs/{project_name}/ARCHITECTURE.md
# Project docs: .workflow/docs/{project_name}/EXAMPLES.md
# API docs: .workflow/docs/{project_name}/api/README.md
#
# Features:
# - Path mirroring: source structure → docs structure
# - Template-driven generation
# - Respects .gitignore patterns
# - Detects code vs navigation folders
# - Tool fallback support
# Build exclusion filters from .gitignore
build_exclusion_filters() {
local filters=""
# Common system/cache directories to exclude
local system_excludes=(
".git" "__pycache__" "node_modules" ".venv" "venv" "env"
"dist" "build" ".cache" ".pytest_cache" ".mypy_cache"
"coverage" ".nyc_output" "logs" "tmp" "temp" ".workflow"
)
for exclude in "${system_excludes[@]}"; do
filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
done
# Find and parse .gitignore (current dir first, then git root)
local gitignore_file=""
# Check current directory first
if [ -f ".gitignore" ]; then
gitignore_file=".gitignore"
else
# Try to find git root and check for .gitignore there
local git_root=$(git rev-parse --show-toplevel 2>/dev/null)
if [ -n "$git_root" ] && [ -f "$git_root/.gitignore" ]; then
gitignore_file="$git_root/.gitignore"
fi
fi
# Parse .gitignore if found
if [ -n "$gitignore_file" ]; then
while IFS= read -r line; do
# Skip empty lines and comments
[[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue
# Remove trailing slash and whitespace
line=$(echo "$line" | sed 's|/$||' | xargs)
# Skip wildcards patterns (too complex for simple find)
[[ "$line" =~ \* ]] && continue
# Add to filters
filters+=" -not -path '*/$line' -not -path '*/$line/*'"
done < "$gitignore_file"
fi
echo "$filters"
}
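# Usage sketch (mirrors the callers below): capture the filters once,
# then splice them into find via eval:
#   exclusion_filters=$(build_exclusion_filters)
#   eval "find \"$target_path\" -type f $exclusion_filters 2>/dev/null" | wc -l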
# Detect folder type (code vs navigation)
detect_folder_type() {
local target_path="$1"
local exclusion_filters="$2"
# Count code files (primary indicators)
local code_count=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' -o -name '*.go' -o -name '*.rs' \\) $exclusion_filters 2>/dev/null" | wc -l)
if [ $code_count -gt 0 ]; then
echo "code"
else
echo "navigation"
fi
}
# Scan directory structure and generate structured information
scan_directory_structure() {
local target_path="$1"
local strategy="$2"
if [ ! -d "$target_path" ]; then
echo "Directory not found: $target_path"
return 1
fi
local exclusion_filters=$(build_exclusion_filters)
local structure_info=""
# Get basic directory info
local dir_name=$(basename "$target_path")
local total_files=$(eval "find \"$target_path\" -type f $exclusion_filters 2>/dev/null" | wc -l)
local total_dirs=$(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null" | wc -l)
local folder_type=$(detect_folder_type "$target_path" "$exclusion_filters")
structure_info+="Directory: $dir_name\n"
structure_info+="Total files: $total_files\n"
structure_info+="Total directories: $total_dirs\n"
structure_info+="Folder type: $folder_type\n\n"
if [ "$strategy" = "full" ]; then
# For full: show all subdirectories with file counts
structure_info+="Subdirectories with files:\n"
while IFS= read -r dir; do
if [ -n "$dir" ] && [ "$dir" != "$target_path" ]; then
local rel_path=${dir#$target_path/}
local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
if [ $file_count -gt 0 ]; then
local subdir_type=$(detect_folder_type "$dir" "$exclusion_filters")
structure_info+=" - $rel_path/ ($file_count files, type: $subdir_type)\n"
fi
fi
done < <(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null")
else
# For single: show direct children only
structure_info+="Direct subdirectories:\n"
while IFS= read -r dir; do
if [ -n "$dir" ]; then
local dir_name=$(basename "$dir")
local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
local has_api=$([ -f "$dir/API.md" ] && echo " [has API.md]" || echo "")
local has_readme=$([ -f "$dir/README.md" ] && echo " [has README.md]" || echo "")
structure_info+=" - $dir_name/ ($file_count files)$has_api$has_readme\n"
fi
done < <(eval "find \"$target_path\" -maxdepth 1 -type d $exclusion_filters 2>/dev/null" | grep -v "^$target_path$")
fi
# Show main file types in current directory
structure_info+="\nCurrent directory files:\n"
local code_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' -o -name '*.go' -o -name '*.rs' \\) $exclusion_filters 2>/dev/null" | wc -l)
local config_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.json' -o -name '*.yaml' -o -name '*.yml' -o -name '*.toml' \\) $exclusion_filters 2>/dev/null" | wc -l)
local doc_files=$(eval "find \"$target_path\" -maxdepth 1 -type f -name '*.md' $exclusion_filters 2>/dev/null" | wc -l)
structure_info+=" - Code files: $code_files\n"
structure_info+=" - Config files: $config_files\n"
structure_info+=" - Documentation: $doc_files\n"
printf "%b" "$structure_info"
}
# Calculate output path based on source path and project name
calculate_output_path() {
local source_path="$1"
local project_name="$2"
local project_root="$3"
# Get absolute path of source (normalize to Unix-style path)
local abs_source=$(cd "$source_path" && pwd)
# Normalize project root to same format
local norm_project_root=$(cd "$project_root" && pwd)
# Calculate relative path from project root
local rel_path="${abs_source#$norm_project_root}"
# Remove leading slash if present
rel_path="${rel_path#/}"
# If source is project root, use project name directly
if [ "$abs_source" = "$norm_project_root" ] || [ -z "$rel_path" ]; then
echo "$norm_project_root/.workflow/docs/$project_name"
else
echo "$norm_project_root/.workflow/docs/$project_name/$rel_path"
fi
}
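# Example (hypothetical paths): calculate_output_path "src/auth" "myproject" "/repo"
#   → /repo/.workflow/docs/myproject/src/auth
# When source equals the project root, output falls back to /repo/.workflow/docs/myproject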
generate_module_docs() {
local strategy="$1"
local source_path="$2"
local project_name="$3"
local tool="${4:-gemini}"
local model="$5"
# Validate parameters
if [ -z "$strategy" ] || [ -z "$source_path" ] || [ -z "$project_name" ]; then
echo "❌ Error: Strategy, source path, and project name are required"
echo "Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]"
echo "Module strategies: full, single"
echo "Project strategies: project-readme, project-architecture, http-api"
return 1
fi
# Validate strategy
local valid_strategies=("full" "single" "project-readme" "project-architecture" "http-api")
local strategy_valid=false
for valid_strategy in "${valid_strategies[@]}"; do
if [ "$strategy" = "$valid_strategy" ]; then
strategy_valid=true
break
fi
done
if [ "$strategy_valid" = false ]; then
echo "❌ Error: Invalid strategy '$strategy'"
echo "Valid module strategies: full, single"
echo "Valid project strategies: project-readme, project-architecture, http-api"
return 1
fi
if [ ! -d "$source_path" ]; then
echo "❌ Error: Source directory '$source_path' does not exist"
return 1
fi
# Set default models if not specified
if [ -z "$model" ]; then
case "$tool" in
gemini)
model="gemini-2.5-flash"
;;
qwen)
model="coder-model"
;;
codex)
model="gpt5-codex"
;;
*)
model=""
;;
esac
fi
# Build exclusion filters
local exclusion_filters=$(build_exclusion_filters)
# Get project root
local project_root=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
# Determine if this is a project-level strategy
local is_project_level=false
if [[ "$strategy" =~ ^project- ]] || [ "$strategy" = "http-api" ]; then
is_project_level=true
fi
# Calculate output path
local output_path
if [ "$is_project_level" = true ]; then
# Project-level docs go to project root
if [ "$strategy" = "http-api" ]; then
output_path="$project_root/.workflow/docs/$project_name/api"
else
output_path="$project_root/.workflow/docs/$project_name"
fi
else
output_path=$(calculate_output_path "$source_path" "$project_name" "$project_root")
fi
# Create output directory
mkdir -p "$output_path"
# Detect folder type (only for module-level strategies)
local folder_type=""
if [ "$is_project_level" = false ]; then
folder_type=$(detect_folder_type "$source_path" "$exclusion_filters")
fi
# Load templates based on strategy
local api_template=""
local readme_template=""
local template_content=""
if [ "$is_project_level" = true ]; then
# Project-level templates
case "$strategy" in
project-readme)
local proj_readme_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-readme.txt"
if [ -f "$proj_readme_path" ]; then
template_content=$(cat "$proj_readme_path")
echo " 📋 Loaded Project README template: $(wc -l < "$proj_readme_path") lines"
fi
;;
project-architecture)
local arch_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-architecture.txt"
local examples_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-examples.txt"
if [ -f "$arch_path" ]; then
template_content=$(cat "$arch_path")
echo " 📋 Loaded Architecture template: $(wc -l < "$arch_path") lines"
fi
if [ -f "$examples_path" ]; then
template_content="$template_content
EXAMPLES TEMPLATE:
$(cat "$examples_path")"
echo " 📋 Loaded Examples template: $(wc -l < "$examples_path") lines"
fi
;;
http-api)
local api_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/api.txt"
if [ -f "$api_path" ]; then
template_content=$(cat "$api_path")
echo " 📋 Loaded HTTP API template: $(wc -l < "$api_path") lines"
fi
;;
esac
else
# Module-level templates
local api_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/api.txt"
local readme_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/module-readme.txt"
local nav_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/folder-navigation.txt"
if [ "$folder_type" = "code" ]; then
if [ -f "$api_template_path" ]; then
api_template=$(cat "$api_template_path")
echo " 📋 Loaded API template: $(wc -l < "$api_template_path") lines"
fi
if [ -f "$readme_template_path" ]; then
readme_template=$(cat "$readme_template_path")
echo " 📋 Loaded README template: $(wc -l < "$readme_template_path") lines"
fi
else
# Navigation folder uses navigation template
if [ -f "$nav_template_path" ]; then
readme_template=$(cat "$nav_template_path")
echo " 📋 Loaded Navigation template: $(wc -l < "$nav_template_path") lines"
fi
fi
fi
# Scan directory structure (only for module-level strategies)
local structure_info=""
if [ "$is_project_level" = false ]; then
echo " 🔍 Scanning directory structure..."
structure_info=$(scan_directory_structure "$source_path" "$strategy")
fi
# Prepare logging info
local module_name=$(basename "$source_path")
echo "⚡ Generating docs: $source_path$output_path"
echo " Strategy: $strategy | Tool: $tool | Model: $model | Type: $folder_type"
echo " Output: $output_path"
# Build strategy-specific prompt
local final_prompt=""
# Project-level strategies
if [ "$strategy" = "project-readme" ]; then
final_prompt="PURPOSE: Generate comprehensive project overview documentation
PROJECT: $project_name
OUTPUT: Current directory (file will be moved to final location)
Read: @.workflow/docs/$project_name/**/*.md
Context: All module documentation files from the project
Generate ONE documentation file in current directory:
- README.md - Project root documentation
Template:
$template_content
Instructions:
- Create README.md in CURRENT DIRECTORY
- Synthesize information from all module docs
- Include project overview, getting started, and navigation
- Create clear module navigation with links
- Follow template structure exactly"
elif [ "$strategy" = "project-architecture" ]; then
final_prompt="PURPOSE: Generate system design and usage examples documentation
PROJECT: $project_name
OUTPUT: Current directory (files will be moved to final location)
Read: @.workflow/docs/$project_name/**/*.md
Context: All project documentation including module docs and project README
Generate TWO documentation files in current directory:
1. ARCHITECTURE.md - System architecture and design patterns
2. EXAMPLES.md - End-to-end usage examples
Template:
$template_content
Instructions:
- Create both ARCHITECTURE.md and EXAMPLES.md in CURRENT DIRECTORY
- Synthesize architectural patterns from module documentation
- Document system structure, module relationships, and design decisions
- Provide practical code examples and usage scenarios
- Follow template structure for both files"
elif [ "$strategy" = "http-api" ]; then
final_prompt="PURPOSE: Generate HTTP API reference documentation
PROJECT: $project_name
OUTPUT: Current directory (file will be moved to final location)
Read: @**/*.{ts,js,py,go,rs} @.workflow/docs/$project_name/**/*.md
Context: API route files and existing documentation
Generate ONE documentation file in current directory:
- README.md - HTTP API documentation (in api/ subdirectory)
Template:
$template_content
Instructions:
- Create README.md in CURRENT DIRECTORY
- Document all HTTP endpoints (routes, methods, parameters, responses)
- Include authentication requirements and error codes
- Provide request/response examples
- Follow template structure (Part B: HTTP API documentation)"
# Module-level strategies
elif [ "$strategy" = "full" ]; then
# Full strategy: read all files, generate for each directory
if [ "$folder_type" = "code" ]; then
final_prompt="PURPOSE: Generate comprehensive API and module documentation
Directory Structure Analysis:
$structure_info
SOURCE: $source_path
OUTPUT: Current directory (files will be moved to final location)
Read: @**/*
Generate TWO documentation files in current directory:
1. API.md - Code API documentation (functions, classes, interfaces)
Template:
$api_template
2. README.md - Module overview documentation
Template:
$readme_template
Instructions:
- Generate both API.md and README.md in CURRENT DIRECTORY
- If subdirectories contain code files, generate their docs too (recursive)
- Work bottom-up: deepest directories first
- Follow template structure exactly
- Use structure analysis for context"
else
# Navigation folder - README only
final_prompt="PURPOSE: Generate navigation documentation for folder structure
Directory Structure Analysis:
$structure_info
SOURCE: $source_path
OUTPUT: Current directory (file will be moved to final location)
Read: @**/*
Generate ONE documentation file in current directory:
- README.md - Navigation and folder overview
Template:
$readme_template
Instructions:
- Create README.md in CURRENT DIRECTORY
- Focus on folder structure and navigation
- Link to subdirectory documentation
- Use structure analysis for context"
fi
else
# Single strategy: read current + child docs only
if [ "$folder_type" = "code" ]; then
final_prompt="PURPOSE: Generate API and module documentation for current directory
Directory Structure Analysis:
$structure_info
SOURCE: $source_path
OUTPUT: Current directory (files will be moved to final location)
Read: @*/API.md @*/README.md @*.ts @*.tsx @*.js @*.jsx @*.py @*.sh @*.go @*.rs @*.md @*.json @*.yaml @*.yml
Generate TWO documentation files in current directory:
1. API.md - Code API documentation
Template:
$api_template
2. README.md - Module overview
Template:
$readme_template
Instructions:
- Generate both API.md and README.md in CURRENT DIRECTORY
- Reference child documentation, do not duplicate
- Follow template structure
- Use structure analysis for current directory context"
else
# Navigation folder - README only
final_prompt="PURPOSE: Generate navigation documentation
Directory Structure Analysis:
$structure_info
SOURCE: $source_path
OUTPUT: Current directory (file will be moved to final location)
Read: @*/API.md @*/README.md @*.md
Generate ONE documentation file in current directory:
- README.md - Navigation and overview
Template:
$readme_template
Instructions:
- Create README.md in CURRENT DIRECTORY
- Link to child documentation
- Use structure analysis for navigation context"
fi
fi
# Execute documentation generation
local start_time=$(date +%s)
echo " 🔄 Starting documentation generation..."
if cd "$source_path" 2>/dev/null; then
local tool_result=0
# Store current output path for CLI context
export DOC_OUTPUT_PATH="$output_path"
# Record git HEAD before CLI execution (to detect unwanted auto-commits)
local git_head_before=""
if git rev-parse --git-dir >/dev/null 2>&1; then
git_head_before=$(git rev-parse HEAD 2>/dev/null)
fi
# Execute with selected tool
case "$tool" in
qwen)
if [ "$model" = "coder-model" ]; then
qwen -p "$final_prompt" --yolo 2>&1
else
qwen -p "$final_prompt" -m "$model" --yolo 2>&1
fi
tool_result=$?
;;
codex)
codex --full-auto exec "$final_prompt" -m "$model" --skip-git-repo-check -s danger-full-access 2>&1
tool_result=$?
;;
gemini)
gemini -p "$final_prompt" -m "$model" --yolo 2>&1
tool_result=$?
;;
*)
echo " ⚠️ Unknown tool: $tool, defaulting to gemini"
gemini -p "$final_prompt" -m "$model" --yolo 2>&1
tool_result=$?
;;
esac
# Move generated files to output directory
local docs_created=0
local moved_files=""
if [ $tool_result -eq 0 ]; then
if [ "$is_project_level" = true ]; then
# Project-level documentation files
case "$strategy" in
project-readme)
if [ -f "README.md" ]; then
mv "README.md" "$output_path/README.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="README.md "
}
fi
;;
project-architecture)
if [ -f "ARCHITECTURE.md" ]; then
mv "ARCHITECTURE.md" "$output_path/ARCHITECTURE.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="ARCHITECTURE.md "
}
fi
if [ -f "EXAMPLES.md" ]; then
mv "EXAMPLES.md" "$output_path/EXAMPLES.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="EXAMPLES.md "
}
fi
;;
http-api)
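# Note: $output_path is expected to already point at the api/ subdirectory (hence the api/README.md label below)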
if [ -f "README.md" ]; then
mv "README.md" "$output_path/README.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="api/README.md "
}
fi
;;
esac
else
# Module-level documentation files
# Check and move API.md if it exists
if [ "$folder_type" = "code" ] && [ -f "API.md" ]; then
mv "API.md" "$output_path/API.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="API.md "
}
fi
# Check and move README.md if it exists
if [ -f "README.md" ]; then
mv "README.md" "$output_path/README.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="README.md "
}
fi
fi
fi
# Check if CLI tool auto-committed (and revert if needed)
if [ -n "$git_head_before" ]; then
local git_head_after=$(git rev-parse HEAD 2>/dev/null)
if [ "$git_head_before" != "$git_head_after" ]; then
echo " ⚠️ Detected unwanted auto-commit by CLI tool, reverting..."
git reset --soft "$git_head_before" 2>/dev/null
echo " ✅ Auto-commit reverted (files remain staged)"
fi
fi
if [ $docs_created -gt 0 ]; then
local end_time=$(date +%s)
local duration=$((end_time - start_time))
echo " ✅ Generated $docs_created doc(s) in ${duration}s: $moved_files"
cd - > /dev/null
return 0
else
echo " ❌ Documentation generation failed for $source_path"
cd - > /dev/null
return 1
fi
else
echo " ❌ Cannot access directory: $source_path"
return 1
fi
}
# Execute function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
# Show help if no arguments or help requested
if [ $# -eq 0 ] || [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
echo "Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]"
echo ""
echo "Module-Level Strategies:"
echo " full - Generate docs for all subdirectories with code"
echo " single - Generate docs only for current directory"
echo ""
echo "Project-Level Strategies:"
echo " project-readme - Generate project root README.md"
echo " project-architecture - Generate ARCHITECTURE.md + EXAMPLES.md"
echo " http-api - Generate HTTP API documentation (api/README.md)"
echo ""
echo "Tools: gemini (default), qwen, codex"
echo "Models: Use tool defaults if not specified"
echo ""
echo "Module Examples:"
echo " ./generate_module_docs.sh full ./src/auth myproject"
echo " ./generate_module_docs.sh single ./components myproject gemini"
echo ""
echo "Project Examples:"
echo " ./generate_module_docs.sh project-readme . myproject"
echo " ./generate_module_docs.sh project-architecture . myproject qwen"
echo " ./generate_module_docs.sh http-api . myproject"
exit 0
fi
generate_module_docs "$@"
fi


@@ -1,567 +0,0 @@
# 🏗️ Claude Code Workflow (CCW) - Architecture Overview
This document provides a high-level overview of CCW's architecture, design principles, and system components.
---
## 📋 Table of Contents
- [Design Philosophy](#design-philosophy)
- [System Architecture](#system-architecture)
- [Core Components](#core-components)
- [Data Flow](#data-flow)
- [Multi-Agent System](#multi-agent-system)
- [CLI Tool Integration](#cli-tool-integration)
- [Session Management](#session-management)
- [Memory System](#memory-system)
---
## 🎯 Design Philosophy
CCW is built on several core design principles that differentiate it from traditional AI-assisted development tools:
### 1. **Context-First Architecture**
- Pre-defined context gathering eliminates execution uncertainty
- Agents receive the correct information *before* implementation
- Context is loaded dynamically based on task requirements
### 2. **JSON-First State Management**
- Task states live in `.task/IMPL-*.json` files as the single source of truth
- Markdown documents are read-only generated views
- Eliminates state drift and synchronization complexity
- Enables programmatic orchestration
### 3. **Autonomous Multi-Phase Orchestration**
- Commands chain specialized sub-commands and agents
- Automates complex workflows with zero user intervention
- Each phase validates its output before proceeding
### 4. **Multi-Model Strategy**
- Leverages unique strengths of different AI models
- Gemini for analysis and exploration
- Codex for implementation
- Qwen for architecture and planning
### 5. **Hierarchical Memory System**
- 4-layer documentation system (CLAUDE.md files)
- Provides context at the appropriate level of abstraction
- Prevents information overload
### 6. **Specialized Role-Based Agents**
- Suite of agents mirrors a real software team
- Each agent has specific responsibilities
- Agents collaborate to complete complex tasks
---
## 🏛️ System Architecture
```mermaid
graph TB
subgraph "User Interface Layer"
CLI[Slash Commands]
CHAT[Natural Language]
end
subgraph "Orchestration Layer"
WF[Workflow Engine]
SM[Session Manager]
TM[Task Manager]
end
subgraph "Agent Layer"
AG1[@code-developer]
AG2[@test-fix-agent]
AG3[@ui-design-agent]
AG4[@cli-execution-agent]
AG5[More Agents...]
end
subgraph "Tool Layer"
GEMINI[Gemini CLI]
QWEN[Qwen CLI]
CODEX[Codex CLI]
BASH[Bash/System]
end
subgraph "Data Layer"
JSON[Task JSON Files]
MEM[CLAUDE.md Memory]
STATE[Session State]
end
CLI --> WF
CHAT --> WF
WF --> SM
WF --> TM
SM --> STATE
TM --> JSON
WF --> AG1
WF --> AG2
WF --> AG3
WF --> AG4
AG1 --> GEMINI
AG1 --> QWEN
AG1 --> CODEX
AG2 --> BASH
AG3 --> GEMINI
AG4 --> CODEX
GEMINI --> MEM
QWEN --> MEM
CODEX --> JSON
```
---
## 🔧 Core Components
### 1. **Workflow Engine**
The workflow engine orchestrates complex development processes through multiple phases:
- **Planning Phase**: Analyzes requirements and generates implementation plans
- **Execution Phase**: Coordinates agents to implement tasks
- **Verification Phase**: Validates implementation quality
- **Testing Phase**: Generates and executes tests
- **Review Phase**: Performs code review and quality analysis
**Key Features**:
- Multi-phase orchestration
- Automatic session management
- Context propagation between phases
- Quality gates at each phase transition
### 2. **Session Manager**
Manages isolated workflow contexts:
```
.workflow/
├── active/ # Active sessions
│ ├── WFS-user-auth/ # User authentication session
│ ├── WFS-payment/ # Payment integration session
│ └── WFS-dashboard/ # Dashboard redesign session
└── archives/ # Completed sessions
└── WFS-old-feature/ # Archived session
```
**Capabilities**:
- Directory-based session tracking
- Session state persistence
- Parallel session support
- Session archival and resumption
### 3. **Task Manager**
Handles hierarchical task structures:
```json
{
  "id": "IMPL-1.2",
  "title": "Implement JWT authentication",
  "status": "pending",
  "meta": {
    "type": "feature",
    "agent": "code-developer"
  },
  "context": {
    "requirements": ["JWT authentication", "OAuth2 support"],
    "focus_paths": ["src/auth", "tests/auth"],
    "acceptance": ["JWT validation works", "OAuth flow complete"]
  },
  "flow_control": {
    "pre_analysis": [...],
    "implementation_approach": {...}
  }
}
```
**Features**:
- JSON-first data model
- Hierarchical task decomposition (max 2 levels)
- Dynamic subtask creation
- Dependency tracking
### 4. **Memory System**
Four-layer hierarchical documentation:
```
CLAUDE.md (Project root - high-level overview)
├── src/CLAUDE.md (Source layer - module summaries)
│ ├── auth/CLAUDE.md (Module layer - component details)
│ │ └── jwt/CLAUDE.md (Component layer - implementation details)
```
**Memory Commands**:
- `/memory:update-full` - Complete project rebuild
- `/memory:update-related` - Incremental updates for changed modules
- `/memory:load` - Quick context loading for specific tasks
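A typical cadence for the three commands above (illustrative, not a mandated sequence):
```bash
/memory:update-full --tool gemini   # first run, or after large refactors
/memory:update-related              # after day-to-day changes
/memory:load "Fix token refresh"    # fast, read-only context for one task
```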
---
## 🔄 Data Flow
### Typical Workflow Execution Flow
```mermaid
sequenceDiagram
participant User
participant CLI
participant Workflow
participant Agent
participant Tool
participant Data
User->>CLI: /workflow:plan "Feature description"
CLI->>Workflow: Initialize planning workflow
Workflow->>Data: Create session
Workflow->>Agent: @action-planning-agent
Agent->>Tool: gemini-wrapper analyze
Tool->>Data: Update CLAUDE.md
Agent->>Data: Generate IMPL-*.json
Workflow->>User: Plan complete
User->>CLI: /workflow:execute
CLI->>Workflow: Start execution
Workflow->>Data: Load tasks from JSON
Workflow->>Agent: @code-developer
Agent->>Tool: Read context
Agent->>Tool: Implement code
Agent->>Data: Update task status
Workflow->>User: Execution complete
```
### Context Flow
```mermaid
graph LR
A[User Request] --> B[Context Gathering]
B --> C[CLAUDE.md Memory]
B --> D[Task JSON]
B --> E[Session State]
C --> F[Agent Context]
D --> F
E --> F
F --> G[Tool Execution]
G --> H[Implementation]
H --> I[Update State]
```
---
## 🤖 Multi-Agent System
### Agent Specialization
CCW uses specialized agents for different types of tasks:
| Agent | Responsibility | Tools Used |
|-------|---------------|------------|
| **@code-developer** | Code implementation | Gemini, Qwen, Codex, Bash |
| **@test-fix-agent** | Test generation and fixing | Codex, Bash |
| **@ui-design-agent** | UI design and prototyping | Gemini, Claude Vision |
| **@action-planning-agent** | Task planning and decomposition | Gemini |
| **@cli-execution-agent** | Autonomous CLI task handling | Codex, Gemini, Qwen |
| **@cli-explore-agent** | Codebase exploration | ripgrep, find |
| **@context-search-agent** | Context gathering | Grep, Glob |
| **@doc-generator** | Documentation generation | Gemini, Qwen |
| **@memory-bridge** | Memory system updates | Gemini, Qwen |
| **@universal-executor** | General task execution | All tools |
### Agent Communication
Agents communicate through:
1. **Shared Session State**: All agents can read/write session JSON
2. **Task JSON Files**: Tasks contain context for agent handoffs (see the sketch after this list)
3. **CLAUDE.md Memory**: Shared project knowledge base
4. **Flow Control**: Pre-analysis and implementation approach definitions
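As a concrete sketch of points 1 and 2, an agent can pull its handoff context straight from a task's JSON file. The jq filter and paths below are illustrative, not a documented CCW API:
```bash
# Hypothetical: extract handoff context for one task from its JSON file
session_dir=".workflow/active/WFS-user-auth"
task_id="IMPL-1.2"
jq '{requirements: .context.requirements,
     focus_paths: .context.focus_paths,
     acceptance: .context.acceptance}' \
  "${session_dir}/.task/${task_id}.json"
```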
---
## 🛠️ CLI Tool Integration
### Three CLI Tools
CCW integrates three external AI tools, each optimized for specific tasks:
#### 1. **Gemini CLI** - Deep Analysis
- **Strengths**: Pattern recognition, architecture understanding, comprehensive analysis
- **Use Cases**:
- Codebase exploration
- Architecture analysis
- Bug diagnosis
- Memory system updates
#### 2. **Qwen CLI** - Architecture & Planning
- **Strengths**: System design, code generation, architectural planning
- **Use Cases**:
- Architecture design
- System planning
- Code generation
- Refactoring strategies
#### 3. **Codex CLI** - Autonomous Development
- **Strengths**: Self-directed implementation, error fixing, test generation
- **Use Cases**:
- Feature implementation
- Bug fixes
- Test generation
- Autonomous development
### Tool Selection Strategy
CCW automatically selects the best tool based on task type:
```
Analysis Task → Gemini CLI
Planning Task → Qwen CLI
Implementation Task → Codex CLI
```
Users can override with `--tool` parameter:
```bash
/cli:analyze --tool codex "Analyze authentication flow"
```
---
## 📦 Session Management
### Session Lifecycle
```mermaid
stateDiagram-v2
[*] --> Creating: /workflow:session:start
Creating --> Active: Session initialized
Active --> Paused: User pauses
Paused --> Active: /workflow:session:resume
Active --> Completed: /workflow:session:complete
Completed --> Archived: Move to archives/
Archived --> [*]
```
### Session Structure
```
.workflow/active/WFS-feature-name/
├── workflow-session.json # Session metadata
├── .task/ # Task JSON files
│ ├── IMPL-1.json
│ ├── IMPL-1.1.json
│ └── IMPL-2.json
├── .chat/ # Chat logs
├── brainstorming/ # Brainstorm artifacts
│ ├── guidance-specification.md
│ └── system-architect/analysis.md
└── artifacts/ # Generated files
├── IMPL_PLAN.md
└── verification-report.md
```
---
## 💾 Memory System
### Hierarchical CLAUDE.md Structure
The memory system maintains project knowledge across four layers:
#### **Layer 1: Project Root**
```markdown
# Project Overview
- High-level architecture
- Technology stack
- Key design decisions
- Entry points
```
#### **Layer 2: Source Directory**
```markdown
# Source Code Structure
- Module summaries
- Dependency relationships
- Common patterns
```
#### **Layer 3: Module Directory**
```markdown
# Module Details
- Component responsibilities
- API interfaces
- Internal structure
```
#### **Layer 4: Component Directory**
```markdown
# Component Implementation
- Function signatures
- Implementation details
- Usage examples
```
### Memory Update Strategies
#### Full Update (`/memory:update-full`)
- Rebuilds entire project documentation
- Uses layer-based execution (Layer 3 → 1)
- Batch processing (4 modules/agent)
- Fallback mechanism (gemini → qwen → codex), sketched below
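A minimal sketch of that fallback order, assuming each CLI reports failure through its exit code (flags mirror the invocations used elsewhere in this repository):
```bash
# Hypothetical fallback chain: gemini first, then qwen, then codex
prompt="Update CLAUDE.md for the auth module"
gemini -p "$prompt" --yolo \
  || qwen -p "$prompt" --yolo \
  || codex --full-auto exec "$prompt" --skip-git-repo-check
```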
#### Incremental Update (`/memory:update-related`)
- Updates only changed modules
- Analyzes git changes
- Efficient for daily development
#### Quick Load (`/memory:load`)
- No file updates
- Task-specific context gathering
- Returns JSON context package
- Fast context injection (example below)
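A hypothetical call and the rough shape of the returned package (field names are illustrative, not a documented schema):
```bash
/memory:load "Add rate limiting to the auth endpoints"
# Returns a JSON context package along the lines of:
# { "modules": ["src/auth"], "focus_paths": [...], "summaries": [...] }
```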
---
## 🔐 Quality Assurance
### Quality Gates
CCW enforces quality at multiple levels:
1. **Planning Phase**:
- Requirements coverage check
- Dependency validation
- Task specification quality assessment
2. **Execution Phase**:
- Context validation before implementation
- Pattern consistency checks
- Test generation
3. **Review Phase**:
- Code quality analysis
- Security review
- Architecture review
### Verification Commands
- `/workflow:action-plan-verify` - Validates plan quality before execution
- `/workflow:tdd-verify` - Verifies TDD cycle compliance
- `/workflow:review` - Post-implementation review
---
## 🚀 Performance Optimizations
### 1. **Lazy Loading**
- Files created only when needed
- On-demand document generation
- Minimal upfront cost
### 2. **Parallel Execution**
- Independent tasks run concurrently
- Multi-agent parallel brainstorming
- Batch processing for memory updates
### 3. **Context Caching**
- CLAUDE.md acts as knowledge cache
- Reduces redundant analysis
- Faster context retrieval
### 4. **Atomic Session Management**
- Ultra-fast session switching (<10ms)
- Simple file marker system
- No database overhead (sketch below)
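The marker approach can be pictured as a single file swap; the marker path and name below are assumptions for illustration:
```bash
# Hypothetical: switch the active session by swapping one marker file
rm -f .workflow/active/.active-*
touch .workflow/active/.active-WFS-payment
```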
---
## 📊 Scalability
### Horizontal Scalability
- **Multiple Sessions**: Run parallel workflows for different features
- **Team Collaboration**: Session-based isolation prevents conflicts
- **Incremental Updates**: Only update affected modules
### Vertical Scalability
- **Hierarchical Tasks**: Efficient task decomposition (max 2 levels)
- **Selective Context**: Load only relevant context for each task
- **Batch Processing**: Process multiple modules per agent invocation
---
## 🔮 Extensibility
### Adding New Agents
Create agent definition in `.claude/agents/`:
```markdown
# Agent Name
## Role
Agent description
## Tools Available
- Tool 1
- Tool 2
## Prompt
Agent instructions...
```
### Adding New Commands
Create command in `.claude/commands/`:
```bash
#!/usr/bin/env bash
# Command implementation
```
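A minimal sketch of what such a command script might contain; the session-discovery logic is an assumption based on the Session Structure section above, not a documented command contract:
```bash
#!/usr/bin/env bash
# Hypothetical command: report the currently active workflow session
active=$(find .workflow/active -maxdepth 1 -type d -name 'WFS-*' 2>/dev/null | head -n 1)
if [ -n "$active" ]; then
  echo "Active session: $(basename "$active")"
else
  echo "No active session"
fi
```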
### Custom Workflows
Combine existing commands to create custom workflows:
```bash
/workflow:brainstorm:auto-parallel "Topic"
/workflow:plan
/workflow:action-plan-verify
/workflow:execute
/workflow:review
```
---
## 🎓 Best Practices
### For Users
1. **Keep Memory Updated**: Run `/memory:update-related` after major changes
2. **Use Quality Gates**: Run `/workflow:action-plan-verify` before execution
3. **Session Management**: Complete sessions with `/workflow:session:complete`
4. **Tool Selection**: Let CCW auto-select tools unless you have specific needs
### For Developers
1. **Follow JSON-First**: Never modify markdown documents directly
2. **Agent Context**: Provide complete context in task JSON
3. **Error Handling**: Implement graceful fallbacks
4. **Testing**: Test agents independently before integration
---
## 📚 Further Reading
- [Getting Started Guide](GETTING_STARTED.md) - Quick start tutorial
- [Command Reference](COMMAND_REFERENCE.md) - All available commands
- [Command Specification](COMMAND_SPEC.md) - Detailed command specs
- [Workflow Diagrams](WORKFLOW_DIAGRAMS.md) - Visual workflow representations
- [Contributing Guide](CONTRIBUTING.md) - How to contribute
- [Examples](EXAMPLES.md) - Real-world use cases
---
**Last Updated**: 2025-11-20
**Version**: 5.8.1


@@ -29,6 +29,7 @@ For all CLI tool usage, command syntax, and integration guidelines:
- **Clear intent over clever code** - Be boring and obvious
- **Follow existing code style** - Match import patterns, naming conventions, and formatting of existing codebase
- **No unsolicited reports** - Task summaries can be performed internally, but NEVER generate additional reports, documentation files, or summary files without explicit user permission
- **Minimal documentation output** - Avoid unnecessary documentation. If required, save to .workflow/.scratchpad/
### Simplicity Means


@@ -180,7 +180,7 @@ Commands for creating, listing, and managing workflow sessions.
- **Syntax**: `/workflow:session:complete [--detailed]`
- **Parameters**:
- `--detailed` (Flag): Shows a more detailed completion summary.
- **Responsibilities**: Marks the currently active session as "completed", records timestamps, and removes the `.active-*` marker file.
- **Responsibilities**: Marks the currently active session as "completed", records timestamps, and moves the session from `.workflow/active/` to `.workflow/archives/`.
- **Agent Calls**: None.
- **Example**:
```bash


@@ -1,823 +0,0 @@
# 📖 Claude Code Workflow - Real-World Examples
This document provides practical, real-world examples of using CCW for common development tasks.
---
## 📋 Table of Contents
- [Quick Start Examples](#quick-start-examples)
- [Web Development](#web-development)
- [API Development](#api-development)
- [Testing & Quality Assurance](#testing--quality-assurance)
- [Refactoring](#refactoring)
- [UI/UX Design](#uiux-design)
- [Bug Fixes](#bug-fixes)
- [Documentation](#documentation)
- [DevOps & Automation](#devops--automation)
- [Complex Projects](#complex-projects)
---
## 🚀 Quick Start Examples
### Example 1: Simple Express API
**Objective**: Create a basic Express.js API with CRUD operations
```bash
# Option 1: Lite workflow (fastest)
/workflow:lite-plan "Create Express API with CRUD endpoints for users (GET, POST, PUT, DELETE)"
# Option 2: Full workflow (more structured)
/workflow:plan "Create Express API with CRUD endpoints for users"
/workflow:execute
```
**What CCW does**:
1. Analyzes your project structure
2. Creates Express app setup
3. Implements CRUD routes
4. Adds error handling middleware
5. Creates basic tests
**Result**:
```
src/
├── app.js # Express app setup
├── routes/
│ └── users.js # User CRUD routes
├── controllers/
│ └── userController.js
└── tests/
└── users.test.js
```
### Example 2: React Component
**Objective**: Create a React login form component
```bash
/workflow:lite-plan "Create a React login form component with email and password fields, validation, and submit handling"
```
**What CCW does**:
1. Creates LoginForm component
2. Adds form validation (email format, password requirements)
3. Implements state management
4. Adds error display
5. Creates component tests
**Result**:
```jsx
// components/LoginForm.jsx
import React, { useState } from 'react';
export function LoginForm({ onSubmit }) {
  const [email, setEmail] = useState('');
  const [password, setPassword] = useState('');
  const [errors, setErrors] = useState({});
  // ... validation and submit logic
}
```
---
## 🌐 Web Development
### Example 3: Full-Stack Todo Application
**Objective**: Build a complete todo application with React frontend and Express backend
#### Phase 1: Planning with Brainstorming
```bash
# Multi-perspective analysis
/workflow:brainstorm:auto-parallel "Full-stack todo application with user authentication, real-time updates, and dark mode"
# Review brainstorming artifacts
# Then create implementation plan
/workflow:plan
# Verify plan quality
/workflow:action-plan-verify
```
**Brainstorming generates**:
- System architecture analysis
- UI/UX design recommendations
- Data model design
- Security considerations
- API design patterns
#### Phase 2: Implementation
```bash
# Execute the plan
/workflow:execute
# Monitor progress
/workflow:status
```
**What CCW implements**:
**Backend** (`server/`):
- Express server setup
- MongoDB/PostgreSQL integration
- JWT authentication
- RESTful API endpoints
- WebSocket for real-time updates
- Input validation middleware
**Frontend** (`client/`):
- React app with routing
- Authentication flow
- Todo CRUD operations
- Real-time updates via WebSocket
- Dark mode toggle
- Responsive design
#### Phase 3: Testing
```bash
# Generate comprehensive tests
/workflow:test-gen WFS-todo-application
# Execute test tasks
/workflow:execute
# Run iterative test-fix cycle
/workflow:test-cycle-execute
```
**Tests created**:
- Unit tests for components
- Integration tests for API
- E2E tests for user flows
- Authentication tests
- WebSocket connection tests
#### Phase 4: Quality Review
```bash
# Security review
/workflow:review --type security
# Architecture review
/workflow:review --type architecture
# General quality review
/workflow:review
```
**Complete session**:
```bash
/workflow:session:complete
```
---
### Example 4: E-commerce Product Catalog
**Objective**: Build product catalog with search, filters, and pagination
```bash
# Start with UI design exploration
/workflow:ui-design:explore-auto --prompt "Modern e-commerce product catalog with grid layout, filters sidebar, and search bar" --targets "catalog,product-card" --style-variants 3
# Review designs in compare.html
# Sync selected designs
/workflow:ui-design:design-sync --session <session-id> --selected-prototypes "catalog-v2,product-card-v1"
# Create implementation plan
/workflow:plan
# Execute
/workflow:execute
```
**Features implemented**:
- Product grid with responsive layout
- Search functionality with debounce
- Category/price/rating filters
- Pagination with infinite scroll option
- Product card with image, title, price, rating
- Sort options (price, popularity, newest)
---
## 🔌 API Development
### Example 5: RESTful API with Authentication
**Objective**: Create RESTful API with JWT authentication and role-based access control
```bash
# Detailed planning
/workflow:plan "RESTful API with JWT authentication, role-based access control (admin, user), and protected endpoints for posts resource"
# Verify plan
/workflow:action-plan-verify
# Execute
/workflow:execute
```
**Implementation includes**:
**Authentication**:
```javascript
// routes/auth.js
POST /api/auth/register
POST /api/auth/login
POST /api/auth/refresh
POST /api/auth/logout
```
**Protected Resources**:
```javascript
// routes/posts.js
GET /api/posts # Public
GET /api/posts/:id # Public
POST /api/posts # Authenticated
PUT /api/posts/:id # Authenticated (owner or admin)
DELETE /api/posts/:id # Authenticated (owner or admin)
```
**Middleware**:
- `authenticate` - Verifies JWT token
- `authorize(['admin'])` - Role-based access
- `validateRequest` - Input validation
- `errorHandler` - Centralized error handling
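A quick shell smoke test for this flow (the port, payloads, and `.token` response field are assumptions for illustration):
```bash
# Hypothetical: log in, then hit a protected endpoint with the JWT
token=$(curl -s -X POST http://localhost:3000/api/auth/login \
  -H 'Content-Type: application/json' \
  -d '{"email":"user@example.com","password":"secret"}' | jq -r '.token')

curl -s -X POST http://localhost:3000/api/posts \
  -H "Authorization: Bearer $token" \
  -H 'Content-Type: application/json' \
  -d '{"title":"Hello","body":"First post"}'
```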
### Example 6: GraphQL API
**Objective**: Convert REST API to GraphQL
```bash
# Analyze existing REST API
/cli:analyze "Analyze REST API structure in src/routes/"
# Plan GraphQL migration
/workflow:plan "Migrate REST API to GraphQL with queries, mutations, and subscriptions for posts and users"
# Execute migration
/workflow:execute
```
**GraphQL schema created**:
```graphql
type Query {
  posts(limit: Int, offset: Int): [Post!]!
  post(id: ID!): Post
  user(id: ID!): User
}
type Mutation {
  createPost(input: CreatePostInput!): Post!
  updatePost(id: ID!, input: UpdatePostInput!): Post!
  deletePost(id: ID!): Boolean!
}
type Subscription {
  postCreated: Post!
  postUpdated: Post!
}
```
---
## 🧪 Testing & Quality Assurance
### Example 7: Test-Driven Development (TDD)
**Objective**: Implement user authentication using TDD approach
```bash
# Start TDD workflow
/workflow:tdd-plan "User authentication with email/password login, registration, and password reset"
# Execute (Red-Green-Refactor cycles)
/workflow:execute
# Verify TDD compliance
/workflow:tdd-verify
```
**TDD cycle tasks created**:
**Cycle 1: Registration**
1. `IMPL-1.1` - Write failing test for user registration
2. `IMPL-1.2` - Implement registration to pass test
3. `IMPL-1.3` - Refactor registration code
**Cycle 2: Login**
1. `IMPL-2.1` - Write failing test for login
2. `IMPL-2.2` - Implement login to pass test
3. `IMPL-2.3` - Refactor login code
**Cycle 3: Password Reset**
1. `IMPL-3.1` - Write failing test for password reset
2. `IMPL-3.2` - Implement password reset
3. `IMPL-3.3` - Refactor password reset
### Example 8: Adding Tests to Existing Code
**Objective**: Generate comprehensive tests for existing authentication module
```bash
# Create test generation workflow from existing code
/workflow:test-gen WFS-authentication-implementation
# Execute test tasks
/workflow:execute
# Run test-fix cycle until all tests pass
/workflow:test-cycle-execute --max-iterations 5
```
**Tests generated**:
- Unit tests for each function
- Integration tests for auth flow
- Edge case tests (invalid input, expired tokens, etc.)
- Security tests (SQL injection, XSS, etc.)
- Performance tests (load testing, rate limiting)
**Test coverage**: Aims for 80%+ coverage
---
## 🔄 Refactoring
### Example 9: Monolith to Microservices
**Objective**: Refactor monolithic application to microservices architecture
#### Phase 1: Analysis
```bash
# Deep architecture analysis
/cli:mode:plan --tool gemini "Analyze current monolithic architecture and create microservices migration strategy"
# Multi-role brainstorming
/workflow:brainstorm:auto-parallel "Migrate monolith to microservices with API gateway, service discovery, and message queue" --count 5
```
#### Phase 2: Planning
```bash
# Create detailed migration plan
/workflow:plan "Phase 1 microservices migration: Extract user service and auth service from monolith"
# Verify plan
/workflow:action-plan-verify
```
#### Phase 3: Implementation
```bash
# Execute migration
/workflow:execute
# Review architecture
/workflow:review --type architecture
```
**Microservices created**:
```
services/
├── user-service/
│ ├── src/
│ ├── Dockerfile
│ └── package.json
├── auth-service/
│ ├── src/
│ ├── Dockerfile
│ └── package.json
├── api-gateway/
│ ├── src/
│ └── config/
└── docker-compose.yml
```
### Example 10: Code Optimization
**Objective**: Optimize database queries for performance
```bash
# Analyze current performance
/cli:mode:code-analysis "Analyze database query performance in src/repositories/"
# Create optimization plan
/workflow:plan "Optimize database queries with indexing, query optimization, and caching"
# Execute optimizations
/workflow:execute
```
**Optimizations implemented**:
- Database indexing strategy
- N+1 query elimination
- Query result caching (Redis)
- Connection pooling
- Pagination for large datasets
- Database query monitoring
---
## 🎨 UI/UX Design
### Example 11: Design System Creation
**Objective**: Create a complete design system for a SaaS application
```bash
# Extract design from local reference images
/workflow:ui-design:imitate-auto --input "design-refs/*.png"
# Or import from existing code
/workflow:ui-design:imitate-auto --input "./src/components"
# Or create from scratch
/workflow:ui-design:explore-auto --prompt "Modern SaaS design system with primary components: buttons, inputs, cards, modals, navigation" --targets "button,input,card,modal,navbar" --style-variants 3
```
**Design system includes**:
- Color palette (primary, secondary, accent, neutral)
- Typography scale (headings, body, captions)
- Spacing system (4px grid)
- Component library:
- Buttons (primary, secondary, outline, ghost)
- Form inputs (text, select, checkbox, radio)
- Cards (basic, elevated, outlined)
- Modals (small, medium, large)
- Navigation (sidebar, topbar, breadcrumbs)
- Animation patterns
- Responsive breakpoints
**Output**:
```
design-system/
├── tokens/
│ ├── colors.json
│ ├── typography.json
│ └── spacing.json
├── components/
│ ├── Button.jsx
│ ├── Input.jsx
│ └── ...
└── documentation/
└── design-system.html
```
### Example 12: Responsive Landing Page
**Objective**: Design and implement a marketing landing page
```bash
# Design exploration
/workflow:ui-design:explore-auto --prompt "Modern SaaS landing page with hero section, features grid, pricing table, testimonials, and CTA" --targets "hero,features,pricing,testimonials" --style-variants 2 --layout-variants 3 --device-type responsive
# Select best designs and sync
/workflow:ui-design:design-sync --session <session-id> --selected-prototypes "hero-v2,features-v1,pricing-v3"
# Implement
/workflow:plan
/workflow:execute
```
**Sections implemented**:
- Hero section with animated background
- Feature cards with icons
- Pricing comparison table
- Customer testimonials carousel
- FAQ accordion
- Contact form
- Responsive navigation
- Dark mode support
---
## 🐛 Bug Fixes
### Example 13: Quick Bug Fix
**Objective**: Fix login button not working on mobile
```bash
# Analyze bug
/cli:mode:bug-diagnosis "Login button click event not firing on mobile Safari"
# Claude analyzes and implements fix
```
**Fix implemented**:
```javascript
// Before
button.onclick = handleLogin;
// After (adds touch event support)
button.addEventListener('click', handleLogin);
button.addEventListener('touchend', (e) => {
  e.preventDefault();
  handleLogin(e);
});
```
### Example 14: Complex Bug Investigation
**Objective**: Debug memory leak in React application
#### Investigation
```bash
# Start session for thorough investigation
/workflow:session:start "Memory Leak Investigation"
# Deep bug analysis
/cli:mode:bug-diagnosis --tool gemini "Memory leak in React components - event listeners not cleaned up"
# Create fix plan
/workflow:plan "Fix memory leaks in React components: cleanup event listeners and cancel subscriptions"
```
#### Implementation
```bash
# Execute fixes
/workflow:execute
# Generate tests to prevent regression
/workflow:test-gen WFS-memory-leak-investigation
# Execute tests
/workflow:execute
```
**Issues found and fixed**:
1. Missing cleanup in `useEffect` hooks
2. Event listeners not removed
3. Uncancelled API requests on unmount
4. Large state objects not cleared
---
## 📝 Documentation
### Example 15: API Documentation Generation
**Objective**: Generate comprehensive API documentation
```bash
# Analyze existing API
/memory:load "Generate API documentation for all endpoints"
# Create documentation
/workflow:plan "Generate OpenAPI/Swagger documentation for REST API with examples and authentication info"
# Execute
/workflow:execute
```
**Documentation includes**:
- OpenAPI 3.0 specification
- Interactive Swagger UI
- Request/response examples
- Authentication guide
- Rate limiting info
- Error codes reference
### Example 16: Project README Generation
**Objective**: Create comprehensive README for open-source project
```bash
# Update project memory first
/memory:update-full --tool gemini
# Generate README
/workflow:plan "Create comprehensive README.md with installation, usage, examples, API reference, and contributing guidelines"
/workflow:execute
```
**README sections**:
- Project overview
- Features
- Installation instructions
- Quick start guide
- Usage examples
- API reference
- Configuration
- Contributing guidelines
- License
---
## ⚙️ DevOps & Automation
### Example 17: CI/CD Pipeline Setup
**Objective**: Set up GitHub Actions CI/CD pipeline
```bash
/workflow:plan "Create GitHub Actions workflow for Node.js app with linting, testing, building, and deployment to AWS"
/workflow:execute
```
**Pipeline created**:
```yaml
# .github/workflows/ci-cd.yml
name: CI/CD
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run tests
        run: npm test
  build:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - name: Build
        run: npm run build
  deploy:
    needs: build
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to AWS
        run: npm run deploy
```
### Example 18: Docker Containerization
**Objective**: Dockerize full-stack application
```bash
# Plan containerization
/workflow:plan "Dockerize full-stack app with React frontend, Express backend, PostgreSQL database, and Redis cache using docker-compose"
# Execute
/workflow:execute
# Review
/workflow:review --type architecture
```
**Created files**:
```
├── docker-compose.yml
├── frontend/
│ └── Dockerfile
├── backend/
│ └── Dockerfile
├── .dockerignore
└── README.docker.md
```
---
## 🏗️ Complex Projects
### Example 19: Real-Time Chat Application
**Objective**: Build real-time chat with WebSocket, message history, and file sharing
#### Complete Workflow
```bash
# 1. Brainstorm
/workflow:brainstorm:auto-parallel "Real-time chat application with WebSocket, message history, file upload, user presence, typing indicators" --count 5
# 2. UI Design
/workflow:ui-design:explore-auto --prompt "Modern chat interface with message list, input box, user sidebar, file preview" --targets "chat-window,message-bubble,user-list" --style-variants 2
# 3. Sync designs
/workflow:ui-design:design-sync --session <session-id>
# 4. Plan implementation
/workflow:plan
# 5. Verify plan
/workflow:action-plan-verify
# 6. Execute
/workflow:execute
# 7. Generate tests
/workflow:test-gen <session-id>
# 8. Execute tests
/workflow:execute
# 9. Review
/workflow:review --type security
/workflow:review --type architecture
# 10. Complete
/workflow:session:complete
```
**Features implemented**:
- WebSocket server (Socket.io)
- Real-time messaging
- Message persistence (MongoDB)
- File upload (S3/local storage)
- User authentication
- Typing indicators
- Read receipts
- User presence (online/offline)
- Message search
- Emoji support
- Mobile responsive
### Example 20: Data Analytics Dashboard
**Objective**: Build interactive dashboard with charts and real-time data
```bash
# Brainstorm data viz approach
/workflow:brainstorm:auto-parallel "Data analytics dashboard with real-time metrics, interactive charts, filters, and export functionality"
# Plan implementation
/workflow:plan "Analytics dashboard with Chart.js/D3.js, real-time data updates via WebSocket, date range filters, and CSV export"
# Execute
/workflow:execute
```
**Dashboard features**:
- Real-time metric cards (users, revenue, conversions)
- Line charts (trends over time)
- Bar charts (comparisons)
- Pie charts (distributions)
- Data tables with sorting/filtering
- Date range picker
- Export to CSV/PDF
- Responsive grid layout
- Dark mode
- WebSocket updates every 5 seconds
---
## 💡 Tips for Effective Examples
### Best Practices
1. **Start with clear objectives**
- Define what you want to build
- List key features
- Specify technologies if needed
2. **Use appropriate workflow**
- Simple tasks: `/workflow:lite-plan`
- Complex features: `/workflow:brainstorm` → `/workflow:plan`
- Existing code: `/workflow:test-gen` or `/cli:analyze`
3. **Leverage quality gates**
- Run `/workflow:action-plan-verify` before execution
- Use `/workflow:review` after implementation
- Generate tests with `/workflow:test-gen`
4. **Maintain memory**
- Update memory after major changes
- Use `/memory:load` for quick context
- Keep CLAUDE.md files up to date
5. **Complete sessions**
- Always run `/workflow:session:complete`
- Generates lessons learned
- Archives session for reference
---
## 🔗 Related Resources
- [Getting Started Guide](GETTING_STARTED.md) - Basics
- [Architecture](ARCHITECTURE.md) - How it works
- [Command Reference](COMMAND_REFERENCE.md) - All commands
- [FAQ](FAQ.md) - Common questions
- [Contributing](CONTRIBUTING.md) - How to contribute
---
## 📬 Share Your Examples
Have a great example to share? Contribute to this document!
See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
---
**Last Updated**: 2025-11-20
**Version**: 5.8.1

File diff suppressed because it is too large


@@ -225,6 +225,7 @@ function get_backup_directory() {
function backup_file_to_folder() {
local file_path="$1"
local backup_folder="$2"
local quiet="${3:-}" # Optional quiet mode
if [ ! -f "$file_path" ]; then
return 1
@@ -249,10 +250,16 @@ function backup_file_to_folder() {
local backup_file_path="${backup_sub_dir}/${file_name}"
if cp "$file_path" "$backup_file_path"; then
write_color "Backed up: $file_name" "$COLOR_INFO"
# Only output if not in quiet mode
if [ "$quiet" != "quiet" ]; then
write_color "Backed up: $file_name" "$COLOR_INFO"
fi
return 0
else
write_color "WARNING: Failed to backup file $file_path" "$COLOR_WARNING"
# Warnings are also suppressed in quiet mode
if [ "$quiet" != "quiet" ]; then
write_color "WARNING: Failed to backup file $file_path" "$COLOR_WARNING"
fi
return 1
fi
}
@@ -443,14 +450,25 @@ function merge_directory_contents() {
return 1
fi
mkdir -p "$destination"
write_color "Created destination directory: $destination" "$COLOR_INFO"
# Create destination directory if it doesn't exist
if [ ! -d "$destination" ]; then
mkdir -p "$destination"
write_color "Created destination directory: $destination" "$COLOR_INFO"
fi
# Count total files first
local total_files=$(find "$source" -type f | wc -l)
local merged_count=0
local skipped_count=0
local backed_up_count=0
local processed_count=0
write_color "Processing $total_files files in $description..." "$COLOR_INFO"
# Find all files recursively
while IFS= read -r -d '' file; do
((processed_count++))
local relative_path="${file#$source/}"
local dest_path="${destination}/${relative_path}"
local dest_dir=$(dirname "$dest_path")
@@ -458,41 +476,58 @@ function merge_directory_contents() {
mkdir -p "$dest_dir"
if [ -f "$dest_path" ]; then
local file_name=$(basename "$relative_path")
# Use BackupAll mode for automatic backup without confirmation (default behavior)
if [ "$BACKUP_ALL" = true ] && [ "$NO_BACKUP" = false ]; then
if [ -n "$backup_folder" ]; then
backup_file_to_folder "$dest_path" "$backup_folder"
write_color "Auto-backed up: $file_name" "$COLOR_INFO"
# Quiet backup - no individual file output
if backup_file_to_folder "$dest_path" "$backup_folder" "quiet"; then
((backed_up_count++))
fi
fi
cp "$file" "$dest_path"
((merged_count++))
elif [ "$NO_BACKUP" = true ]; then
# No backup mode - ask for confirmation
if confirm_action "File '$relative_path' already exists. Replace it? (NO BACKUP)" false; then
cp "$file" "$dest_path"
((merged_count++))
else
write_color "Skipped $file_name (no backup)" "$COLOR_WARNING"
((skipped_count++))
fi
elif confirm_action "File '$relative_path' already exists. Replace it?" false; then
if [ -n "$backup_folder" ]; then
backup_file_to_folder "$dest_path" "$backup_folder"
write_color "Backed up existing $file_name" "$COLOR_INFO"
# Quiet backup - no individual file output
if backup_file_to_folder "$dest_path" "$backup_folder" "quiet"; then
((backed_up_count++))
fi
fi
cp "$file" "$dest_path"
((merged_count++))
else
write_color "Skipped $file_name" "$COLOR_WARNING"
((skipped_count++))
fi
else
cp "$file" "$dest_path"
((merged_count++))
fi
# Show progress every 100 files (optimized for performance)
if [ $((processed_count % 100)) -eq 0 ] || [ "$processed_count" -eq "$total_files" ]; then
local percent=$((processed_count * 100 / total_files))
echo -ne "\rMerging $description: $processed_count/$total_files files ($percent%)..."
fi
done < <(find "$source" -type f -print0)
write_color "✓ Merged $merged_count files, skipped $skipped_count files" "$COLOR_SUCCESS"
# Clear progress line
echo -ne "\r\033[K"
# Show summary
if [ "$backed_up_count" -gt 0 ]; then
write_color "✓ Merged $merged_count files ($backed_up_count backed up), skipped $skipped_count files" "$COLOR_SUCCESS"
else
write_color "✓ Merged $merged_count files, skipped $skipped_count files" "$COLOR_SUCCESS"
fi
return 0
}
@@ -508,6 +543,10 @@ function install_global() {
write_color "Global installation path: $user_home" "$COLOR_INFO"
# Clean up old installation before proceeding (fast move operation)
echo ""
move_old_installation "$user_home" "Global"
# Initialize manifest
local manifest_file=$(new_install_manifest "Global" "$user_home")
@@ -548,12 +587,8 @@ function install_global() {
# Track .claude directory in manifest
add_manifest_entry "$manifest_file" "$global_claude_dir" "Directory"
# Track files from SOURCE directory, not destination
while IFS= read -r -d '' source_file; do
local relative_path="${source_file#$source_claude_dir}"
local target_path="${global_claude_dir}${relative_path}"
add_manifest_entry "$manifest_file" "$target_path" "File"
done < <(find "$source_claude_dir" -type f -print0)
# Track files from SOURCE directory using bulk operation
add_manifest_entries_bulk "$manifest_file" "$source_claude_dir" "$global_claude_dir" "File"
fi
# Handle CLAUDE.md file
@@ -572,12 +607,8 @@ function install_global() {
# Track .codex directory in manifest
add_manifest_entry "$manifest_file" "$global_codex_dir" "Directory"
# Track files from SOURCE directory
while IFS= read -r -d '' source_file; do
local relative_path="${source_file#$source_codex_dir}"
local target_path="${global_codex_dir}${relative_path}"
add_manifest_entry "$manifest_file" "$target_path" "File"
done < <(find "$source_codex_dir" -type f -print0)
# Track files from SOURCE directory using bulk operation
add_manifest_entries_bulk "$manifest_file" "$source_codex_dir" "$global_codex_dir" "File"
fi
# Backup critical config files in .gemini directory before installation
@@ -589,12 +620,8 @@ function install_global() {
# Track .gemini directory in manifest
add_manifest_entry "$manifest_file" "$global_gemini_dir" "Directory"
# Track files from SOURCE directory
while IFS= read -r -d '' source_file; do
local relative_path="${source_file#$source_gemini_dir}"
local target_path="${global_gemini_dir}${relative_path}"
add_manifest_entry "$manifest_file" "$target_path" "File"
done < <(find "$source_gemini_dir" -type f -print0)
# Track files from SOURCE directory using bulk operation
add_manifest_entries_bulk "$manifest_file" "$source_gemini_dir" "$global_gemini_dir" "File"
fi
# Backup critical config files in .qwen directory before installation
@@ -606,12 +633,8 @@ function install_global() {
# Track .qwen directory in manifest
add_manifest_entry "$manifest_file" "$global_qwen_dir" "Directory"
# Track files from SOURCE directory
while IFS= read -r -d '' source_file; do
local relative_path="${source_file#$source_qwen_dir}"
local target_path="${global_qwen_dir}${relative_path}"
add_manifest_entry "$manifest_file" "$target_path" "File"
done < <(find "$source_qwen_dir" -type f -print0)
# Track files from SOURCE directory using bulk operation
add_manifest_entries_bulk "$manifest_file" "$source_qwen_dir" "$global_qwen_dir" "File"
fi
# Remove empty backup folder
@@ -627,7 +650,7 @@ function install_global() {
create_version_json "$global_claude_dir" "Global"
# Save installation manifest
save_install_manifest "$manifest_file" "$user_home"
save_install_manifest "$manifest_file" "$user_home" "Global"
return 0
}
@@ -642,6 +665,10 @@ function install_path() {
local global_claude_dir="${user_home}/.claude"
write_color "Global path: $user_home" "$COLOR_INFO"
# Clean up old installation before proceeding (fast move operation)
echo ""
move_old_installation "$target_dir" "Path"
# Initialize manifest
local manifest_file=$(new_install_manifest "Path" "$target_dir")
@@ -687,12 +714,8 @@ function install_path() {
# Track local folder in manifest
add_manifest_entry "$manifest_file" "$dest_folder" "Directory"
# Track files from SOURCE directory
while IFS= read -r -d '' source_file; do
local relative_path="${source_file#$source_folder}"
local target_path="${dest_folder}${relative_path}"
add_manifest_entry "$manifest_file" "$target_path" "File"
done < <(find "$source_folder" -type f -print0)
# Track files from SOURCE directory using bulk operation
add_manifest_entries_bulk "$manifest_file" "$source_folder" "$dest_folder" "File"
fi
write_color "✓ Installed local folder: $folder" "$COLOR_SUCCESS"
else
@@ -700,11 +723,15 @@ function install_path() {
fi
done
# Global components - exclude local folders
# Global components - exclude local folders (use same efficient method as Global mode)
write_color "Installing global components to $global_claude_dir..." "$COLOR_INFO"
local merged_count=0
# Create temporary directory for global files only
local temp_global_dir="/tmp/claude-global-$$"
mkdir -p "$temp_global_dir"
# Copy global files to temp directory (excluding local folders)
write_color "Preparing global components..." "$COLOR_INFO"
while IFS= read -r -d '' file; do
local relative_path="${file#$source_claude_dir/}"
local top_folder=$(echo "$relative_path" | cut -d'/' -f1)
@@ -714,37 +741,24 @@ function install_path() {
continue
fi
local dest_path="${global_claude_dir}/${relative_path}"
local dest_dir=$(dirname "$dest_path")
local temp_dest_path="${temp_global_dir}/${relative_path}"
local temp_dest_dir=$(dirname "$temp_dest_path")
mkdir -p "$dest_dir"
if [ -f "$dest_path" ]; then
if [ "$BACKUP_ALL" = true ] && [ "$NO_BACKUP" = false ]; then
if [ -n "$backup_folder" ]; then
backup_file_to_folder "$dest_path" "$backup_folder"
fi
cp "$file" "$dest_path"
((merged_count++))
elif [ "$NO_BACKUP" = true ]; then
if confirm_action "File '$relative_path' already exists in global location. Replace it? (NO BACKUP)" false; then
cp "$file" "$dest_path"
((merged_count++))
fi
elif confirm_action "File '$relative_path' already exists in global location. Replace it?" false; then
if [ -n "$backup_folder" ]; then
backup_file_to_folder "$dest_path" "$backup_folder"
fi
cp "$file" "$dest_path"
((merged_count++))
fi
else
cp "$file" "$dest_path"
((merged_count++))
fi
mkdir -p "$temp_dest_dir"
cp "$file" "$temp_dest_path"
done < <(find "$source_claude_dir" -type f -print0)
write_color "✓ Merged $merged_count files to global location" "$COLOR_SUCCESS"
# Use bulk merge method (same as Global mode - fast!)
if merge_directory_contents "$temp_global_dir" "$global_claude_dir" "global components" "$backup_folder"; then
# Track global files in manifest using bulk method (fast!)
add_manifest_entry "$manifest_file" "$global_claude_dir" "Directory"
# Track files from TEMP directory using bulk operation
add_manifest_entries_bulk "$manifest_file" "$temp_global_dir" "$global_claude_dir" "File"
fi
# Clean up temp directory
rm -rf "$temp_global_dir"
# Handle CLAUDE.md file in global .claude directory
local global_claude_md="${global_claude_dir}/CLAUDE.md"
@@ -763,12 +777,8 @@ function install_path() {
# Track .codex directory in manifest
add_manifest_entry "$manifest_file" "$local_codex_dir" "Directory"
# Track files from SOURCE directory
while IFS= read -r -d '' source_file; do
local relative_path="${source_file#$source_codex_dir}"
local target_path="${local_codex_dir}${relative_path}"
add_manifest_entry "$manifest_file" "$target_path" "File"
done < <(find "$source_codex_dir" -type f -print0)
# Track files from SOURCE directory using bulk operation
add_manifest_entries_bulk "$manifest_file" "$source_codex_dir" "$local_codex_dir" "File"
fi
# Backup critical config files in .gemini directory before installation
@@ -780,12 +790,8 @@ function install_path() {
# Track .gemini directory in manifest
add_manifest_entry "$manifest_file" "$local_gemini_dir" "Directory"
# Track files from SOURCE directory
while IFS= read -r -d '' source_file; do
local relative_path="${source_file#$source_gemini_dir}"
local target_path="${local_gemini_dir}${relative_path}"
add_manifest_entry "$manifest_file" "$target_path" "File"
done < <(find "$source_gemini_dir" -type f -print0)
# Track files from SOURCE directory using bulk operation
add_manifest_entries_bulk "$manifest_file" "$source_gemini_dir" "$local_gemini_dir" "File"
fi
# Backup critical config files in .qwen directory before installation
@@ -797,12 +803,8 @@ function install_path() {
# Track .qwen directory in manifest
add_manifest_entry "$manifest_file" "$local_qwen_dir" "Directory"
# Track files from SOURCE directory
while IFS= read -r -d '' source_file; do
local relative_path="${source_file#$source_qwen_dir}"
local target_path="${local_qwen_dir}${relative_path}"
add_manifest_entry "$manifest_file" "$target_path" "File"
done < <(find "$source_qwen_dir" -type f -print0)
# Track files from SOURCE directory using bulk operation
add_manifest_entries_bulk "$manifest_file" "$source_qwen_dir" "$local_qwen_dir" "File"
fi
# Remove empty backup folder
@@ -822,7 +824,7 @@ function install_path() {
create_version_json "$global_claude_dir" "Global"
# Save installation manifest
save_install_manifest "$manifest_file" "$target_dir"
save_install_manifest "$manifest_file" "$target_dir" "Path"
return 0
}
@@ -911,8 +913,15 @@ function new_install_manifest() {
mkdir -p "$MANIFEST_DIR"
# Generate unique manifest ID based on timestamp and mode
# Distinguish between Global and Path installations with clear naming
local timestamp=$(date +"%Y%m%d-%H%M%S")
local manifest_id="install-${installation_mode}-${timestamp}"
local mode_prefix
if [ "$installation_mode" = "Global" ]; then
mode_prefix="manifest-global"
else
mode_prefix="manifest-path"
fi
local manifest_id="${mode_prefix}-${timestamp}"
# Create manifest file path
local manifest_file="${MANIFEST_DIR}/${manifest_id}.json"
@@ -971,12 +980,88 @@ EOF
jq --argjson entry "$entry_json" '.directories += [$entry]' "$manifest_file" > "$temp_file"
fi
mv "$temp_file" "$manifest_file"
# Only replace manifest if jq succeeded
if [ -s "$temp_file" ]; then
mv "$temp_file" "$manifest_file"
else
write_color "WARNING: Failed to add manifest entry (jq error)" "$COLOR_WARNING"
rm -f "$temp_file"
fi
}
function add_manifest_entries_bulk() {
local manifest_file="$1"
local source_dir="$2"
local target_base="$3"
local entry_type="$4"
if [ ! -f "$manifest_file" ]; then
write_color "WARNING: Manifest file not found: $manifest_file" "$COLOR_WARNING"
return 1
fi
if [ ! -d "$source_dir" ]; then
return 0
fi
local timestamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
local temp_file="${manifest_file}.tmp"
local paths_file=$(mktemp)
local entries_file=$(mktemp)
# Collect all file paths and compute target paths using bash string operations
# This mimics the original while loop logic
while IFS= read -r -d '' source_file; do
local relative_path="${source_file#$source_dir}"
local target_path="${target_base}${relative_path}"
echo "$target_path"
done < <(find "$source_dir" -type f -print0) > "$paths_file"
# Check if paths_file has content
if [ ! -s "$paths_file" ]; then
rm -f "$paths_file" "$entries_file"
return 0
fi
# Generate JSON entries from paths (filter empty lines)
grep -v '^$' "$paths_file" | jq -R --arg date "$timestamp" --arg type "$entry_type" '
{
"path": .,
"type": $type,
"timestamp": $date
}
' | jq -s '.' > "$entries_file"
# Check if entries_file has valid content
if [ ! -s "$entries_file" ]; then
rm -f "$paths_file" "$entries_file"
return 0
fi
# Add all entries to manifest using --slurpfile to avoid argument length limit
if [ "$entry_type" = "File" ]; then
jq --slurpfile entries "$entries_file" '.files += $entries[0]' "$manifest_file" > "$temp_file"
else
jq --slurpfile entries "$entries_file" '.directories += $entries[0]' "$manifest_file" > "$temp_file"
fi
# Only replace manifest if jq succeeded and temp_file has content
if [ -s "$temp_file" ]; then
mv "$temp_file" "$manifest_file"
else
write_color "WARNING: Failed to update manifest (jq error), keeping original" "$COLOR_WARNING"
rm -f "$temp_file"
fi
rm -f "$paths_file" "$entries_file"
return 0
}
function remove_old_manifests_for_path() {
local installation_path="$1"
local current_manifest_file="$2" # Optional: exclude this file from deletion
local installation_mode="$2"
local current_manifest_file="$3" # Optional: exclude this file from deletion
if [ ! -d "$MANIFEST_DIR" ]; then
return 0
@@ -986,7 +1071,8 @@ function remove_old_manifests_for_path() {
local target_path=$(echo "$installation_path" | sed 's:/*$::' | tr '[:upper:]' '[:lower:]')
local removed_count=0
# Find and remove old manifests for the same installation path
# Find and remove old manifests for the same installation path and mode
# Support both new (manifest-*) and old (install-*) format
while IFS= read -r -d '' file; do
# Skip the current manifest file if specified
if [ -n "$current_manifest_file" ] && [ "$file" = "$current_manifest_file" ]; then
@@ -994,19 +1080,20 @@ function remove_old_manifests_for_path() {
fi
local manifest_path=$(jq -r '.installation_path // ""' "$file" 2>/dev/null)
local manifest_mode=$(jq -r '.installation_mode // "Global"' "$file" 2>/dev/null)
if [ -n "$manifest_path" ]; then
# Normalize manifest path
local normalized_manifest_path=$(echo "$manifest_path" | sed 's:/*$::' | tr '[:upper:]' '[:lower:]')
# Only remove if BOTH path and mode match
if [ "$normalized_manifest_path" = "$target_path" ] && [ "$manifest_mode" = "$installation_mode" ]; then
rm -f "$file"
write_color "Removed old manifest: $(basename "$file")" "$COLOR_INFO"
((removed_count++))
fi
fi
done < <(find "$MANIFEST_DIR" -name "install-*.json" -type f -print0 2>/dev/null)
done < <(find "$MANIFEST_DIR" \( -name "manifest-*.json" -o -name "install-*.json" \) -type f -print0 2>/dev/null)
if [ "$removed_count" -gt 0 ]; then
write_color "Removed $removed_count old manifest(s) for installation path: $installation_path" "$COLOR_SUCCESS"
@@ -1018,10 +1105,11 @@ function remove_old_manifests_for_path() {
function save_install_manifest() {
local manifest_file="$1"
local installation_path="$2"
local installation_mode="$3"
# Remove old manifests for the same installation path and mode (excluding current one)
if [ -n "$installation_path" ] && [ -n "$installation_mode" ]; then
remove_old_manifests_for_path "$installation_path" "$installation_mode" "$manifest_file"
fi
if [ -f "$manifest_file" ]; then
@@ -1045,10 +1133,16 @@ function migrate_legacy_manifest() {
# Create manifest directory if it doesn't exist
mkdir -p "$MANIFEST_DIR"
# Read legacy manifest and generate new manifest ID with new naming convention
local mode=$(jq -r '.installation_mode // "Global"' "$legacy_manifest")
local timestamp=$(date +"%Y%m%d-%H%M%S")
local manifest_id="install-${mode}-${timestamp}-migrated"
local mode_prefix
if [ "$mode" = "Global" ]; then
mode_prefix="manifest-global"
else
mode_prefix="manifest-path"
fi
local manifest_id="${mode_prefix}-${timestamp}-migrated"
# Create new manifest file
local new_manifest="${MANIFEST_DIR}/${manifest_id}.json"
@@ -1072,8 +1166,8 @@ function get_all_install_manifests() {
return
fi
# Check if any manifest files exist (both new and old formats)
local manifest_count=$(find "$MANIFEST_DIR" \( -name "manifest-*.json" -o -name "install-*.json" \) -type f 2>/dev/null | wc -l)
if [ "$manifest_count" -eq 0 ]; then
echo "[]"
@@ -1102,7 +1196,7 @@ function get_all_install_manifests() {
manifest_content=$(echo "$manifest_content" | jq --argjson fc "$files_count" --argjson dc "$dirs_count" '. + {files_count: $fc, directories_count: $dc}')
all_manifests+="$manifest_content"
done < <(find "$MANIFEST_DIR" -name "install-*.json" -type f -print0 | sort -z)
done < <(find "$MANIFEST_DIR" \( -name "manifest-*.json" -o -name "install-*.json" \) -type f -print0 | sort -z)
all_manifests+="]"
@@ -1128,6 +1222,112 @@ function get_all_install_manifests() {
echo "$latest_manifests"
}
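Editor's note: since `get_all_install_manifests` emits a JSON array on stdout, callers can filter it with ordinary jq. A hypothetical query (field names taken from the manifest schema above):

```bash
# Hypothetical: list each installation as "mode path (N files)" for a picker menu.
get_all_install_manifests | jq -r '
  .[] | "\(.installation_mode) \(.installation_path) (\(.files_count) files)"'
```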
function move_old_installation() {
local installation_path="$1"
local installation_mode="$2"
write_color "Checking for previous installation..." "$COLOR_INFO"
# Find existing manifest for this installation path and mode
local manifests_json=$(get_all_install_manifests)
local target_path=$(echo "$installation_path" | sed 's:/*$::' | tr '[:upper:]' '[:lower:]')
local old_manifest=$(echo "$manifests_json" | jq --arg path "$target_path" --arg mode "$installation_mode" '
.[] | select(
(.installation_path | ascii_downcase | sub("/+$"; "")) == $path and
.installation_mode == $mode
)
')
if [ -z "$old_manifest" ] || [ "$old_manifest" = "null" ]; then
write_color "No previous $installation_mode installation found at this path" "$COLOR_INFO"
return 0
fi
local install_date=$(echo "$old_manifest" | jq -r '.installation_date')
local files_count=$(echo "$old_manifest" | jq -r '.files_count')
local dirs_count=$(echo "$old_manifest" | jq -r '.directories_count')
write_color "Found previous installation from $install_date" "$COLOR_INFO"
write_color "Files: $files_count, Directories: $dirs_count" "$COLOR_INFO"
# Create backup folder
local timestamp=$(date +"%Y%m%d-%H%M%S")
local backup_dir="${installation_path}/claude-backup-old-${timestamp}"
mkdir -p "$backup_dir"
write_color "Created backup folder: $backup_dir" "$COLOR_SUCCESS"
local moved_files=0
local removed_dirs=0
local failed_items=()
# Move files first (from manifest)
write_color "Moving old installation files to backup..." "$COLOR_INFO"
while IFS= read -r file_path; do
if [ -z "$file_path" ] || [ "$file_path" = "null" ]; then
continue
fi
if [ -f "$file_path" ]; then
# Calculate relative path from installation root
local relative_path="${file_path#$installation_path}"
relative_path="${relative_path#/}"
if [ -z "$relative_path" ]; then
relative_path=$(basename "$file_path")
fi
local backup_dest_dir=$(dirname "${backup_dir}/${relative_path}")
mkdir -p "$backup_dest_dir"
if mv "$file_path" "${backup_dest_dir}/" 2>/dev/null; then
((moved_files++))
else
write_color " WARNING: Failed to move file: $file_path" "$COLOR_WARNING"
failed_items+=("$file_path")
fi
fi
done <<< "$(echo "$old_manifest" | jq -r '.files[].path')"
# Remove empty directories (in reverse order to handle nested dirs)
write_color "Cleaning up empty directories..." "$COLOR_INFO"
while IFS= read -r dir_path; do
if [ -z "$dir_path" ] || [ "$dir_path" = "null" ]; then
continue
fi
if [ -d "$dir_path" ]; then
# Check if directory is empty
if [ -z "$(ls -A "$dir_path" 2>/dev/null)" ]; then
if rmdir "$dir_path" 2>/dev/null; then
write_color " Removed empty directory: $dir_path" "$COLOR_INFO"
((removed_dirs++))
fi
else
write_color " Directory not empty (preserved): $dir_path" "$COLOR_INFO"
fi
fi
done <<< "$(echo "$old_manifest" | jq -r '.directories[].path' | awk '{ print length, $0 }' | sort -rn | cut -d' ' -f2-)"
# Note: Old manifest will be automatically removed by save_install_manifest
# via remove_old_manifests_for_path to ensure robust cleanup
echo ""
write_color "Old installation cleanup summary:" "$COLOR_INFO"
echo " Files moved: $moved_files"
echo " Directories removed: $removed_dirs"
echo " Backup location: $backup_dir"
if [ ${#failed_items[@]} -gt 0 ]; then
write_color " Failed items: ${#failed_items[@]}" "$COLOR_WARNING"
fi
echo ""
# Return backup path for reference
return 0
}
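Editor's note: sorting directory paths by descending length guarantees children come before their parents (a parent path is always shorter than any path nested under it), so `rmdir` can peel empty directories bottom-up. A minimal sketch of the awk/sort/cut trick in isolation:

```bash
# Illustrative: deepest paths first, so /a/b/c is removed before /a/b.
printf '%s\n' "/a" "/a/b/c" "/a/b" |
  awk '{ print length, $0 }' |   # prefix each line with its character length
  sort -rn |                     # numeric sort, longest first
  cut -d' ' -f2-                 # strip the length prefix again
# Output:
# /a/b/c
# /a/b
# /a
```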
# ============================================================================
# UNINSTALLATION FUNCTIONS
# ============================================================================
@@ -1173,26 +1373,50 @@ function uninstall_claude_workflow() {
if [ "$manifests_count" -eq 1 ]; then
selected_manifest=$(echo "$manifests_json" | jq '.[0]')
write_color "Only one installation found, will uninstall:" "$COLOR_INFO"
# Read version from version.json
local install_path=$(echo "$selected_manifest" | jq -r '.installation_path // ""')
local install_mode=$(echo "$selected_manifest" | jq -r '.installation_mode // "Unknown"')
local version_str="Version Unknown"
# Determine version.json path
local version_json_path="${install_path}/.claude/version.json"
if [ -f "$version_json_path" ]; then
local ver=$(jq -r '.version // ""' "$version_json_path" 2>/dev/null)
if [ -n "$ver" ] && [ "$ver" != "unknown" ]; then
version_str="v$ver"
fi
fi
write_color "Found installation: $version_str - $install_path" "$COLOR_INFO"
else
# Multiple manifests - let user choose (simplified: only version and path)
local options=()
for i in $(seq 0 $((manifests_count - 1))); do
local m=$(echo "$manifests_json" | jq ".[$i]")
local path_info=$(echo "$m" | jq -r '.installation_path // ""')
local install_mode=$(echo "$m" | jq -r '.installation_mode // "Unknown"')
local version_str="Version Unknown"
if [ -n "$path_info" ]; then
# Read version from version.json
local version_json_path="${path_info}/.claude/version.json"
if [ -f "$version_json_path" ]; then
local ver=$(jq -r '.version // ""' "$version_json_path" 2>/dev/null)
if [ -n "$ver" ] && [ "$ver" != "unknown" ]; then
version_str="v$ver"
fi
fi
fi
local path_str="Path Unknown"
if [ -n "$path_info" ]; then
path_str="$path_info"
fi
options+=("$((i + 1)). $version_str - $path_str")
done
options+=("Cancel - Don't uninstall anything")
@@ -1210,16 +1434,24 @@ function uninstall_claude_workflow() {
selected_manifest=$(echo "$manifests_json" | jq ".[$selected_index]")
fi
# Display selected installation info (simplified: only version and path)
local final_path=$(echo "$selected_manifest" | jq -r '.installation_path // ""')
local final_mode=$(echo "$selected_manifest" | jq -r '.installation_mode // "Unknown"')
local final_version="Version Unknown"
# Read version from version.json
local final_version_path="${final_path}/.claude/version.json"
if [ -f "$final_version_path" ]; then
local ver=$(jq -r '.version // ""' "$final_version_path" 2>/dev/null)
if [ -n "$ver" ] && [ "$ver" != "unknown" ]; then
final_version="v$ver"
fi
fi
echo ""
write_color "Installation Information:" "$COLOR_INFO"
echo " Manifest ID: $(echo "$selected_manifest" | jq -r '.manifest_id')"
echo " Mode: $(echo "$selected_manifest" | jq -r '.installation_mode')"
echo " Path: $(echo "$selected_manifest" | jq -r '.installation_path')"
echo " Date: $(echo "$selected_manifest" | jq -r '.installation_date')"
echo " Installer Version: $(echo "$selected_manifest" | jq -r '.installer_version')"
echo " Files tracked: $(echo "$selected_manifest" | jq -r '.files_count')"
echo " Directories tracked: $(echo "$selected_manifest" | jq -r '.directories_count')"
write_color "Uninstallation Target:" "$COLOR_INFO"
echo " $final_version"
echo " Path: $final_path"
echo ""
# Confirm uninstallation
@@ -1229,55 +1461,64 @@ function uninstall_claude_workflow() {
fi
local removed_files=0
local failed_items=()
local skipped_files=0
# Check if this is a Path mode uninstallation and if Global installation exists
local is_path_mode=false
local has_global_installation=false
if [ "$final_mode" = "Path" ]; then
is_path_mode=true
# Check if any Global installation manifest exists
if [ -d "$MANIFEST_DIR" ]; then
local global_manifest_count=$(find "$MANIFEST_DIR" -name "manifest-global-*.json" -type f 2>/dev/null | wc -l)
if [ "$global_manifest_count" -gt 0 ]; then
has_global_installation=true
write_color "Found Global installation, global files will be preserved" "$COLOR_WARNING"
echo ""
fi
fi
fi
# Only remove files listed in manifest - do NOT remove directories
write_color "Removing installed files..." "$COLOR_INFO"
local files_array=$(echo "$selected_manifest" | jq -c '.files[]' 2>/dev/null)
if [ -n "$files_array" ]; then
while IFS= read -r file_entry; do
local file_path=$(echo "$file_entry" | jq -r '.path')
# For Path mode uninstallation, skip global files if Global installation exists
if [ "$is_path_mode" = true ] && [ "$has_global_installation" = true ]; then
local global_claude_dir="${HOME}/.claude"
# Skip files under global .claude directory
if [[ "$file_path" == "$global_claude_dir"* ]]; then
((skipped_files++))
continue
fi
fi
if [ -f "$file_path" ]; then
if rm -f "$file_path" 2>/dev/null; then
((removed_files++))
else
write_color " WARNING: Failed to remove: $file_path" "$COLOR_WARNING"
failed_items+=("$file_path")
fi
fi
done <<< "$files_array"
fi
# Display removal summary
if [ "$skipped_files" -gt 0 ]; then
write_color "Removed $removed_files files, skipped $skipped_files global files" "$COLOR_SUCCESS"
else
write_color "Removed $removed_files files" "$COLOR_SUCCESS"
fi
# Remove manifest file
local manifest_file=$(echo "$selected_manifest" | jq -r '.manifest_file')
@@ -1295,7 +1536,12 @@ function uninstall_claude_workflow() {
write_color "========================================" "$COLOR_INFO"
write_color "Uninstallation Summary:" "$COLOR_INFO"
echo " Files removed: $removed_files"
echo " Directories removed: $removed_dirs"
if [ "$skipped_files" -gt 0 ]; then
echo " Files skipped (global files preserved): $skipped_files"
echo ""
write_color "Note: $skipped_files global files were preserved due to existing Global installation" "$COLOR_INFO"
fi
if [ ${#failed_items[@]} -gt 0 ]; then
echo ""
@@ -1307,7 +1553,11 @@ function uninstall_claude_workflow() {
echo ""
if [ ${#failed_items[@]} -eq 0 ]; then
write_color "✓ Claude Code Workflow has been successfully uninstalled!" "$COLOR_SUCCESS"
if [ "$skipped_files" -gt 0 ]; then
write_color "✓ Uninstallation complete! Removed $removed_files files, preserved $skipped_files global files." "$COLOR_SUCCESS"
else
write_color "✓ Claude Code Workflow has been successfully uninstalled!" "$COLOR_SUCCESS"
fi
else
write_color "Uninstallation completed with warnings." "$COLOR_WARNING"
write_color "Please manually remove the failed items listed above." "$COLOR_INFO"

README.md

@@ -1,198 +0,0 @@
# 🚀 Claude Code Workflow (CCW)
<div align="center">
[![Version](https://img.shields.io/badge/version-v5.8.1-blue.svg)](https://github.com/catlog22/Claude-Code-Workflow/releases)
[![License](https://img.shields.io/badge/license-MIT-green.svg)](LICENSE)
[![Platform](https://img.shields.io/badge/platform-Windows%20%7C%20Linux%20%7C%20macOS-lightgrey.svg)]()
**Languages:** [English](README.md) | [中文](README_CN.md)
</div>
---
**Claude Code Workflow (CCW)** transforms AI development from simple prompt chaining into a robust, context-first orchestration system. It solves execution uncertainty and error accumulation through structured planning, deterministic execution, and intelligent multi-model orchestration.
> **🎉 Version 5.8.1: Lite-Plan Workflow & CLI Tools Enhancement**
>
> **Core Improvements**:
> - ✨ **Lite-Plan Workflow** (`/workflow:lite-plan`) - Lightweight interactive planning with intelligent automation
> - **Three-Dimensional Multi-Select Confirmation**: Task approval + Execution method + Code review tool
> - **Smart Code Exploration**: Auto-detects when codebase context is needed (use `-e` flag to force)
> - **Parallel Task Execution**: Identifies independent tasks for concurrent execution
> - **Flexible Execution**: Choose between Agent (@code-developer) or CLI (Gemini/Qwen/Codex)
> - **Optional Post-Review**: Built-in code quality analysis with your choice of AI tool
> - ✨ **CLI Tools Optimization** - Simplified command syntax with auto-model-selection
> - Removed `-m` parameter requirement for Gemini, Qwen, and Codex (auto-selects best model)
> - Clearer command structure and improved documentation
> - 🔄 **Execution Workflow Enhancement** - Streamlined phases with lazy loading strategy
> - 🎨 **CLI Explore Agent** - Improved visibility with yellow color scheme
>
> See [CHANGELOG.md](CHANGELOG.md) for full details.
> 📚 **New to CCW?** Check out the [**Getting Started Guide**](GETTING_STARTED.md) for a beginner-friendly 5-minute tutorial!
---
## ✨ Core Concepts
CCW is built on a set of core principles that differentiate it from traditional AI development approaches:
- **Context-First Architecture**: Pre-defined context gathering eliminates execution uncertainty by ensuring agents have the correct information *before* implementation.
- **JSON-First State Management**: Task states live in `.task/IMPL-*.json` files as the single source of truth, enabling programmatic orchestration without state drift.
- **Autonomous Multi-Phase Orchestration**: Commands chain specialized sub-commands and agents to automate complex workflows with zero user intervention.
- **Multi-Model Strategy**: Leverages the unique strengths of different AI models (Gemini for analysis, Codex for implementation) for superior results.
- **Hierarchical Memory System**: A 4-layer documentation system provides context at the appropriate level of abstraction, preventing information overload.
- **Specialized Role-Based Agents**: A suite of agents (`@code-developer`, `@test-fix-agent`, etc.) mirrors a real software team to handle diverse tasks.
---
## ⚙️ Installation
For detailed installation instructions, please refer to the [**INSTALL.md**](INSTALL.md) guide.
### **🚀 Quick One-Line Installation**
**Windows (PowerShell):**
```powershell
Invoke-Expression (Invoke-WebRequest -Uri "https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1" -UseBasicParsing).Content
```
**Linux/macOS (Bash/Zsh):**
```bash
bash <(curl -fsSL https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.sh)
```
### **✅ Verify Installation**
After installation, open **Claude Code** and check if the workflow commands are available by running:
```bash
/workflow:session:list
```
If the slash commands (e.g., `/workflow:*`) are recognized, the installation was successful.
---
## 🛠️ Command Reference
CCW provides a rich set of commands for managing workflows, tasks, and interacting with AI tools. For a complete list and detailed descriptions of all available commands, please see the [**COMMAND_REFERENCE.md**](COMMAND_REFERENCE.md) file.
For a detailed technical specification of every command, see the [**COMMAND_SPEC.md**](COMMAND_SPEC.md).
---
### 💡 **Need Help? Use the Interactive Command Guide**
CCW includes a built-in **command-guide skill** to help you discover and use commands effectively:
- **`CCW-help`** - Get interactive help and command recommendations
- **`CCW-issue`** - Report bugs or request features with guided templates
The command guide provides:
- 🔍 **Smart Command Search** - Find commands by keyword, category, or use-case
- 🤖 **Next-Step Recommendations** - Get suggestions for what to do after any command
- 📖 **Detailed Documentation** - View parameters, examples, and best practices
- 🎓 **Beginner Onboarding** - Learn the top 14 essential commands with a guided learning path
- 📝 **Issue Reporting** - Generate standardized bug reports and feature requests
**Example Usage**:
```
User: "CCW-help"
→ Interactive menu with command search, recommendations, and documentation
User: "What's next after /workflow:plan?"
→ Recommends /workflow:execute, /workflow:action-plan-verify, with workflow patterns
User: "CCW-issue"
→ Guided template generation for bugs, features, or questions
```
---
## 🚀 Getting Started
The best way to get started is to follow the 5-minute tutorial in the [**Getting Started Guide**](GETTING_STARTED.md).
Here is a quick example of a common development workflow:
### **Option 1: Lite-Plan Workflow** (⚡ Recommended for Quick Tasks)
Lightweight interactive workflow with in-memory planning and immediate execution:
```bash
# Basic usage with auto-detection
/workflow:lite-plan "Add JWT authentication to user login"
# Force code exploration
/workflow:lite-plan -e "Refactor logging module for better performance"
# Another quick task
/workflow:lite-plan "Add unit tests for auth service"
```
**Interactive Flow**:
1. **Phase 1**: Automatic task analysis and smart code exploration (if needed)
2. **Phase 2**: Answer clarification questions (if any)
3. **Phase 3**: Review generated plan with task breakdown
4. **Phase 4**: Three-dimensional confirmation:
- ✅ Confirm/Modify/Cancel task
- 🔧 Choose execution: Agent / Provide Plan / CLI (Gemini/Qwen/Codex)
- 🔍 Optional code review: No / Claude / Gemini / Qwen / Codex
5. **Phase 5**: Watch real-time execution with live task tracking
### **Option 2: Full Workflow** (Comprehensive Planning)
Traditional multi-phase workflow for complex projects:
1. **Create a Plan** (automatically starts a session):
```bash
/workflow:plan "Implement JWT-based user login and registration"
```
2. **Execute the Plan**:
```bash
/workflow:execute
```
3. **Check Status** (optional):
```bash
/workflow:status
```
---
## 📚 Documentation
CCW provides comprehensive documentation to help you get started and master advanced features:
### 📖 **Getting Started**
- [**Getting Started Guide**](GETTING_STARTED.md) - 5-minute quick start tutorial
- [**Installation Guide**](INSTALL.md) - Detailed installation instructions ([中文](INSTALL_CN.md))
- [**Workflow Decision Guide**](WORKFLOW_DECISION_GUIDE_EN.md) - 🌳 Interactive flowchart for choosing the right commands
- [**Examples**](EXAMPLES.md) - Real-world use cases and practical examples
- [**FAQ**](FAQ.md) - Frequently asked questions and troubleshooting
### 🏗️ **Architecture & Design**
- [**Architecture Overview**](ARCHITECTURE.md) - System design and core components
- [**Project Introduction**](PROJECT_INTRODUCTION.md) - Detailed project overview (中文)
- [**Workflow Diagrams**](WORKFLOW_DIAGRAMS.md) - Visual workflow representations
### 📋 **Command Reference**
- [**Command Reference**](COMMAND_REFERENCE.md) - Complete list of all commands
- [**Command Specification**](COMMAND_SPEC.md) - Detailed technical specifications
- [**Command Flow Standard**](COMMAND_FLOW_STANDARD.md) - Command design patterns
### 🤝 **Contributing**
- [**Contributing Guide**](CONTRIBUTING.md) - How to contribute to CCW
- [**Changelog**](CHANGELOG.md) - Version history and release notes
---
## 🤝 Contributing & Support
- **Repository**: [GitHub - Claude-Code-Workflow](https://github.com/catlog22/Claude-Code-Workflow)
- **Issues**: Report bugs or request features on [GitHub Issues](https://github.com/catlog22/Claude-Code-Workflow/issues).
- **Discussions**: Join the [Community Forum](https://github.com/catlog22/Claude-Code-Workflow/discussions).
- **Contributing**: See [CONTRIBUTING.md](CONTRIBUTING.md) for contribution guidelines.
## 📄 License
This project is licensed under the **MIT License**. See the [LICENSE](LICENSE) file for details.


@@ -14,9 +14,10 @@ graph TB
end
subgraph "Session Management"
SESSION["workflow-session.json"]
ACTIVE_DIR[".workflow/active/"]
ARCHIVE_DIR[".workflow/archives/"]
end
subgraph "Task System"
@@ -124,9 +125,7 @@ stateDiagram-v2
CreateStructure --> CreateJSON: Create workflow-session.json
CreateJSON --> CreatePlan: Create IMPL_PLAN.md
CreatePlan --> CreateTasks: Create .task/ directory
CreateTasks --> Active: Session Ready in .workflow/active/
Active --> Paused: Switch to Another Session
Active --> Working: Execute Tasks