mirror of https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-02-08 02:14:08 +08:00

Compare commits (14 commits):

| SHA1 |
|---|
| 72f27fb2f8 |
| be129f5821 |
| b1bb74af0d |
| a7a654805c |
| c0c894ced1 |
| 7517f4f8ec |
| 0b45ff7345 |
| 0416b23186 |
| 948cf3fcd7 |
| 4272ca9ebd |
| 73fed4893b |
| f09c6e2a7a |
| 65a204a563 |
| ffbc440a7e |
Break work into 3-5 logical implementation stages with:

- Dependencies on previous stages
- Estimated complexity and time requirements
### 2. Task JSON Generation (6-Field Schema + Artifacts)

Generate individual `.task/IMPL-*.json` files with:

#### Top-Level Fields

```json
{
  "id": "IMPL-N[.M]",
  "title": "Descriptive task name",
  "status": "pending|active|completed|blocked|container",
  "context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json"
}
```

**Field Descriptions**:

- `id`: Task identifier (format: `IMPL-N` or `IMPL-N.M` for subtasks, max 2 levels)
- `title`: Descriptive task name summarizing the work
- `status`: Task state - `pending` (not started), `active` (in progress), `completed` (done), `blocked` (waiting on dependencies), `container` (has subtasks, cannot be executed directly)
- `context_package_path`: Path to the smart context package containing project structure, dependencies, and the brainstorming artifacts catalog
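These constraints can be checked mechanically. A minimal Python sketch (a hypothetical helper, not part of the workflow itself) that validates the top-level fields described above:

```python
import re

# Allowed status values and id format from the schema above
STATUSES = {"pending", "active", "completed", "blocked", "container"}
ID_PATTERN = re.compile(r"^IMPL-\d+(\.\d+)?$")  # IMPL-N or IMPL-N.M (max 2 levels)

def validate_task(task):
    """Return a list of schema problems (an empty list means the task is valid)."""
    problems = []
    for field in ("id", "title", "status", "context_package_path"):
        if field not in task:
            problems.append("missing field: " + field)
    if "id" in task and not ID_PATTERN.match(task["id"]):
        problems.append("bad id format: " + task["id"])
    if "status" in task and task["status"] not in STATUSES:
        problems.append("unknown status: " + task["status"])
    return problems
```

A task like `IMPL-1.2.3` would be rejected, since subtask nesting is capped at two levels.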
#### Meta Object

```json
{
  "meta": {
    "type": "feature|bugfix|refactor|test-gen|test-fix|docs",
    "agent": "@code-developer|@action-planning-agent|@test-fix-agent|@universal-executor",
    "execution_group": "parallel-abc123|null"
  }
}
```

**Field Descriptions**:

- `type`: Task category - `feature` (new functionality), `bugfix` (fix defects), `refactor` (restructure code), `test-gen` (generate tests), `test-fix` (fix failing tests), `docs` (documentation)
- `agent`: Assigned agent for execution
- `execution_group`: Parallelization group ID (tasks with the same ID can run concurrently) or `null` for sequential tasks
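The `execution_group` semantics can be sketched as a simple wave scheduler. This is an illustrative Python sketch (the function name and wave representation are assumptions, not part of the workflow):

```python
from itertools import groupby

def schedule_waves(tasks):
    """Group an ordered task list into execution waves: consecutive tasks that
    share a non-null execution_group form one parallel wave; tasks whose
    execution_group is null each run alone, in order."""
    waves = []
    for group, members in groupby(tasks, key=lambda t: t["meta"]["execution_group"]):
        ids = [t["id"] for t in members]
        if group is None:
            waves.extend([tid] for tid in ids)  # sequential: one task per wave
        else:
            waves.append(ids)                   # same group: one parallel wave
    return waves
```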
#### Context Object

```json
{
  "context": {
    "requirements": [
      "Implement 3 features: [authentication, authorization, session management]",
      ...
    ],
    ...
    "acceptance": [
      "5 files created: verify by ls src/auth/*.ts | wc -l = 5",
      "Test coverage >=80%: verify by npm test -- --coverage | grep auth"
    ],
    "parent": "IMPL-N",
    "depends_on": ["IMPL-N"],
    "inherited": {
      "from": "IMPL-N",
      "context": ["Authentication system design completed", "JWT strategy defined"]
    },
    "shared_context": {
      "tech_stack": ["Node.js", "TypeScript", "Express"],
      "auth_strategy": "JWT with refresh tokens",
      "conventions": ["Follow existing auth patterns in src/auth/legacy/"]
    },
    "artifacts": [
      {
        "type": "synthesis_specification|topic_framework|individual_role_analysis",
        "source": "brainstorm_clarification|brainstorm_framework|brainstorm_roles",
        "path": "{from artifacts_inventory}",
        "priority": "highest|high|medium|low",
        "usage": "Architecture decisions and API specifications",
        "contains": "role_specific_requirements_and_design"
      }
    ]
  }
}
```

**Field Descriptions**:

- `requirements`: **QUANTIFIED** implementation requirements (MUST include explicit counts and enumerated lists, e.g., "5 files: [list]")
- `focus_paths`: Target directories/files (concrete paths without wildcards)
- `acceptance`: **MEASURABLE** acceptance criteria (MUST include verification commands, e.g., "verify by ls ... | wc -l = N")
- `parent`: Parent task ID for subtasks (establishes the container/subtask hierarchy)
- `depends_on`: Prerequisite task IDs that must complete before this task starts
- `inherited`: Context, patterns, and dependencies passed down from the parent task
- `shared_context`: Tech stack, conventions, and architectural strategies for the task
- `artifacts`: Referenced brainstorming outputs with detailed metadata
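The `priority` ladder on artifacts implies a load order. A minimal sketch of that ordering (the helper name is an assumption for illustration):

```python
# Priority ranks implied by the artifacts field above
PRIORITY_RANK = {"highest": 0, "high": 1, "medium": 2, "low": 3}

def order_artifacts(artifacts):
    """Return artifacts sorted highest-priority first (stable for ties)."""
    return sorted(artifacts, key=lambda a: PRIORITY_RANK[a["priority"]])
```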
#### Flow Control Object

**IMPORTANT**: The `pre_analysis` examples below are **reference templates only**. The agent MUST dynamically select, adapt, and expand steps based on actual task requirements. Apply the principle of **"举一反三"** (draw inferences from examples): use these patterns as inspiration to create task-specific analysis steps.

**Dynamic Step Selection Guidelines**:

- **Context Loading**: Always include context package and role analysis loading
- **Architecture Analysis**: Add module structure analysis for complex projects
- **Pattern Discovery**: Use CLI tools (gemini/qwen/bash) based on task complexity and available tools
- **Tech-Specific Analysis**: Add language/framework-specific searches for specialized tasks
- **MCP Integration**: Utilize MCP tools, when available, for enhanced context
```json
{
  "flow_control": {
    "pre_analysis": [
      // === REQUIRED: Context Package Loading (Always Include) ===
      {
        "step": "load_context_package",
        "action": "Load context package for artifact paths and smart context",
        "commands": ["Read({{context_package_path}})"],
        "output_to": "context_package",
        "on_error": "fail"
      },
      {
        "step": "load_role_analysis_artifacts",
        "action": "Load role analyses from context-package.json",
        "commands": [
          "Read({{context_package_path}})",
          "Extract(brainstorm_artifacts.role_analyses[].files[].path)",
          "Read(each extracted path)"
        ],
        "output_to": "role_analysis_artifacts",
        "on_error": "skip_optional"
      },

      // === OPTIONAL: Select and adapt based on task needs ===

      // Pattern: Project structure analysis
      {
        "step": "analyze_project_architecture",
        "commands": ["bash(~/.claude/scripts/get_modules_by_depth.sh)"],
        "output_to": "project_architecture"
      },

      // Pattern: Local search (bash/rg/find)
      {
        "step": "search_existing_patterns",
        "commands": [
          "bash(rg '[pattern]' --type [lang] -n --max-count [N])",
          "bash(find . -name '[pattern]' -type f | head -[N])"
        ],
        "output_to": "search_results"
      },

      // Pattern: Gemini CLI deep analysis
      {
        "step": "gemini_analyze_[aspect]",
        "command": "bash(cd [path] && gemini -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY')",
        "output_to": "analysis_result"
      },

      // Pattern: Qwen CLI analysis (fallback/alternative)
      {
        "step": "qwen_analyze_[aspect]",
        "command": "bash(cd [path] && qwen -p '[similar to gemini pattern]')",
        "output_to": "analysis_result"
      },

      // Pattern: MCP tools
      {
        "step": "mcp_search_[target]",
        "command": "mcp__[tool]__[function](parameters)",
        "output_to": "mcp_results"
      }
    ],
    "implementation_approach": [
      // === DEFAULT MODE: Agent Execution (no command field) ===
      {
        "step": 1,
        "title": "Load and analyze role analyses",
        "description": "Load role analysis files and extract quantified requirements",
        "modification_points": [
          "Load N role analysis files: [list]",
          "Extract M requirements from role analyses",
          "Parse K architecture decisions"
        ],
        "logic_flow": [
          "Read role analyses from artifacts inventory",
          "Parse architecture decisions",
          "Extract implementation requirements",
          "Build consolidated requirements list"
        ],
        "depends_on": [],
        ...
      },
      {
        "step": 2,
        "title": "Implement following specification",
        "description": "Implement features following consolidated role analyses",
        "modification_points": [
          "Create N new files: [list with line counts]",
          "Modify M functions: [func() in file lines X-Y]",
          "Implement K core features: [list]"
        ],
        "logic_flow": [
          "Apply requirements from [synthesis_requirements]",
          "Implement features across new files",
          "Modify existing functions",
          "Write test cases covering all features",
          "Validate against acceptance criteria"
        ],
        "depends_on": [1],
        "output": "implementation"
      },

      // === CLI MODE: Command Execution (optional command field) ===
      {
        "step": 3,
        "title": "Execute implementation using CLI tool",
        "description": "Use Codex/Gemini for complex autonomous execution",
        "command": "bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)",
        "modification_points": ["[Same as default mode]"],
        "logic_flow": ["[Same as default mode]"],
        "depends_on": [1, 2],
        "output": "cli_implementation"
      }
    ],
    "target_files": [
      ...
    ]
  }
}
```
**Field Descriptions**:

- `pre_analysis`: Context loading and preparation steps (executed sequentially before implementation)
- `implementation_approach`: Implementation steps with dependency management (array of step objects)
- `target_files`: Specific files/functions/lines to modify (format: `file:function:lines` for existing files, `file` for new files)

**Implementation Approach Execution Modes**:

The `implementation_approach` supports **two execution modes**, based on the presence of the `command` field:

1. **Default Mode (Agent Execution)** - `command` field **omitted**:
   - Agent interprets `modification_points` and `logic_flow` autonomously
   - Direct agent execution with full context awareness
   - No external tool overhead
   - **Use for**: Standard implementation tasks where agent capability is sufficient
   - **Required fields**: `step`, `title`, `description`, `modification_points`, `logic_flow`, `depends_on`, `output`

2. **CLI Mode (Command Execution)** - `command` field **included**:
   - The specified command executes the step directly
   - Leverages specialized CLI tools (codex/gemini/qwen) for complex reasoning
   - **Use for**: Large-scale features, complex refactoring, or when the user explicitly requests CLI tool usage
   - **Required fields**: Same as default mode **PLUS** `command`
   - **Command patterns**:
     - `bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)`
     - `bash(codex --full-auto exec '[task]' resume --last --skip-git-repo-check -s danger-full-access)` (multi-step)
     - `bash(cd [path] && gemini -p '[prompt]' --approval-mode yolo)` (write mode)

**Mode Selection Strategy**:

- **Default to agent execution** for most tasks
- **Use CLI mode** when:
  - The user explicitly requests a CLI tool (codex/gemini/qwen)
  - The task requires multi-step autonomous reasoning beyond agent capability
  - Complex refactoring needs specialized tool analysis
  - Building on previous CLI execution context (use `resume --last`)

**Key Principle**: The `command` field is **optional**. The agent must decide based on task complexity and user preference.
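The two execution modes and the `depends_on` ordering can be sketched together. This Python fragment is illustrative only (function names are assumptions; the real dispatch is performed by the agent):

```python
def execution_mode(step):
    """The optional 'command' field selects CLI mode; otherwise the agent
    interprets modification_points / logic_flow itself."""
    return "cli" if "command" in step else "agent"

def ready_steps(steps, completed):
    """Step numbers not yet done whose depends_on entries are all completed."""
    return [s["step"] for s in steps
            if s["step"] not in completed
            and all(d in completed for d in s.get("depends_on", []))]
```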
**Pre-Analysis Step Selection Guide (举一反三 Principle)**:

The examples above demonstrate **patterns**, not fixed requirements. The agent MUST:

1. **Always Include** (Required):
   - `load_context_package` - Essential for all tasks
   - `load_role_analysis_artifacts` - Critical for accessing brainstorming insights

2. **Selectively Include Based on Task Type**:
   - **Architecture tasks**: Project structure + Gemini architecture analysis
   - **Refactoring tasks**: Gemini execution flow tracing + code quality analysis
   - **Frontend tasks**: React/Vue component searches + UI pattern analysis
   - **Backend tasks**: Database schema + API endpoint searches
   - **Security tasks**: Vulnerability scans + security pattern analysis
   - **Performance tasks**: Bottleneck identification + profiling data

3. **Tool Selection Strategy**:
   - **Gemini CLI**: Deep analysis (architecture, execution flow, patterns)
   - **Qwen CLI**: Fallback or code quality analysis
   - **Bash/rg/find**: Quick pattern matching and file discovery
   - **MCP tools**: Semantic search and external research

4. **Command Composition Patterns**:
   - **Single command**: `bash([simple_search])`
   - **Multiple commands**: `["bash([cmd1])", "bash([cmd2])"]`
   - **CLI analysis**: `bash(cd [path] && gemini -p '[prompt]')`
   - **MCP integration**: `mcp__[tool]__[function]([params])`

**Key Principle**: Examples show **structure patterns**, not specific implementations. The agent must create task-appropriate steps dynamically.

**Artifact Mapping**:

- Use `artifacts_inventory` from the context package
- Highest priority: synthesis_specification
Execute all 3 tracks in parallel for comprehensive coverage.

**Note**: Historical archive analysis (querying `.workflow/archives/manifest.json`) is optional and should be performed if the manifest exists. Inject findings into `conflict_detection.historical_conflicts[]`.

#### Track 1: Reference Documentation

Extract from Phase 0 loaded docs:
New file: `.claude/commands/memory/docs-full-cli.md` (472 lines)
---
name: docs-full-cli
description: Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback; <20 modules uses direct parallel execution
argument-hint: "[path] [--tool <gemini|qwen|codex>]"
---

# Full Documentation Generation - CLI Mode (/memory:docs-full-cli)

## Overview

Orchestrates project-wide documentation generation using CLI-based execution with batched agents and automatic tool fallback.

**Parameters**:

- `path`: Target directory (default: current directory)
- `--tool <gemini|qwen|codex>`: Primary tool (default: gemini)

**Execution Flow**: Discovery → Plan Presentation → Execution → Verification
## 3-Layer Architecture & Auto-Strategy Selection

### Layer Definition & Strategy Assignment

| Layer | Depth | Strategy | Purpose | Context Pattern |
|-------|-------|----------|---------|----------------|
| **Layer 3** (Deepest) | ≥3 | `full` | Generate docs for all subdirectories with code | `@**/*` (all files) |
| **Layer 2** (Middle) | 1-2 | `single` | Current dir + child docs | `@*/API.md @*/README.md @*.{ts,tsx,js,...}` |
| **Layer 1** (Top) | 0 | `single` | Current dir + child docs | `@*/API.md @*/README.md @*.{ts,tsx,js,...}` |

**Generation Direction**: Layer 3 → Layer 2 → Layer 1 (bottom-up dependency flow)

**Strategy Auto-Selection**: Strategies are determined automatically by directory depth - no user configuration needed.

### Strategy Details

#### Full Strategy (Layer 3 Only)

- **Use Case**: Deepest directories with comprehensive file coverage
- **Behavior**: Generates API.md + README.md for the current directory AND subdirectories containing code
- **Context**: All files in the current directory tree (`@**/*`)
- **Output**: `.workflow/docs/{project_name}/{path}/API.md` + `README.md`
- **Benefits**: Creates foundation documentation for upper layers to reference

#### Single Strategy (Layers 1-2)

- **Use Case**: Upper layers that aggregate from existing documentation
- **Behavior**: Generates API.md + README.md only in the current directory
- **Context**: Direct children's docs + current directory code files
- **Output**: `.workflow/docs/{project_name}/{path}/API.md` + `README.md`
- **Benefits**: Minimal context consumption, clear layer separation
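The depth-to-layer mapping in the table above is small enough to express directly. A Python sketch (helper names are assumptions, mirroring the table rather than any shipped script):

```python
def layer_for(depth):
    """Map directory depth to its documentation layer, per the table above."""
    if depth >= 3:
        return 3
    return 2 if depth >= 1 else 1

def strategy_for(depth):
    """Only Layer 3 uses the 'full' strategy; Layers 1-2 use 'single'."""
    return "full" if layer_for(depth) == 3 else "single"
```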
### Example Flow

```
src/auth/handlers/ (depth 3) → FULL STRATEGY
CONTEXT: @**/* (all files in handlers/ and subdirs)
GENERATES: .workflow/docs/project/src/auth/handlers/{API.md,README.md} + subdirs
↓
src/auth/ (depth 2) → SINGLE STRATEGY
CONTEXT: @*/API.md @*/README.md @*.ts (handlers docs + current code)
GENERATES: .workflow/docs/project/src/auth/{API.md,README.md} only
↓
src/ (depth 1) → SINGLE STRATEGY
CONTEXT: @*/API.md @*/README.md (auth docs, utils docs)
GENERATES: .workflow/docs/project/src/{API.md,README.md} only
↓
./ (depth 0) → SINGLE STRATEGY
CONTEXT: @*/API.md @*/README.md (src docs, tests docs)
GENERATES: .workflow/docs/project/{API.md,README.md} only
```
## Core Execution Rules

1. **Analyze First**: Module discovery + folder classification before generation
2. **Wait for Approval**: Present the plan; no execution without user confirmation
3. **Execution Strategy**:
   - **<20 modules**: Direct parallel execution (max 4 concurrent per layer)
   - **≥20 modules**: Agent batch processing (4 modules/agent, 73% overhead reduction)
4. **Tool Fallback**: Auto-retry with fallback tools on failure
5. **Layer Sequential**: Process layers 3→2→1 (bottom-up), parallel batches within each layer
6. **Safety Check**: Verify that only docs files in .workflow/docs/ were modified
7. **Layer-based Grouping**: Group modules by LAYER (not raw depth) for execution
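Rule 3's batching (4 modules per agent) is plain chunking. A minimal sketch of the `batch_modules` helper used in the phase pseudocode below (signature assumed from its call sites):

```python
def batch_modules(modules, size=4):
    """Chunk one layer's module list into agent batches of at most `size`."""
    return [modules[i:i + size] for i in range(0, len(modules), size)]
```

For 14 modules this yields batch sizes [4, 4, 4, 2], matching the agent-allocation examples in the plan output.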
## Tool Fallback Hierarchy

```javascript
--tool gemini → [gemini, qwen, codex]  // default
--tool qwen   → [qwen, gemini, codex]
--tool codex  → [codex, gemini, qwen]
```

**Trigger**: Non-zero exit code from the generation script

| Tool   | Best For                     | Fallback To    |
|--------|------------------------------|----------------|
| gemini | Documentation, patterns      | qwen → codex   |
| qwen   | Architecture, system design  | gemini → codex |
| codex  | Implementation, code quality | gemini → qwen  |
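The hierarchy above reduces to "primary first, then the remaining tools in default order". A sketch of the `construct_tool_order` helper referenced in the phase pseudocode (signature assumed from its call site):

```python
DEFAULT_ORDER = ["gemini", "qwen", "codex"]

def construct_tool_order(primary):
    """Primary tool first; remaining tools keep the default fallback order."""
    return [primary] + [t for t in DEFAULT_ORDER if t != primary]
```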
## Execution Phases

### Phase 1: Discovery & Analysis

```javascript
// Get project metadata
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});

// Get module structure with classification
Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});

// OR with a path parameter
Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});
```

**Parse output** `depth:N|path:<PATH>|type:<code|navigation>|...` to extract module paths, types, and count.

**Smart filter**: Auto-detect and skip tests/build/config/vendor directories based on the project tech stack.
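Parsing that record format is a pipe-and-colon split. A Python sketch (assuming concrete values like `type:code` in real records, with extra fields kept as strings):

```python
def parse_module_line(line):
    """Parse one 'depth:N|path:<PATH>|type:<code|navigation>|...' record
    into a dict; 'depth' is converted to int, other fields stay strings."""
    fields = dict(part.split(":", 1) for part in line.strip().split("|") if ":" in part)
    fields["depth"] = int(fields["depth"])
    return fields
```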
### Phase 2: Plan Presentation

**For <20 modules**:

```
Documentation Generation Plan:
Tool: gemini (fallback: qwen → codex)
Total: 7 modules
Execution: Direct parallel (< 20 modules threshold)
Project: myproject
Output: .workflow/docs/myproject/

Will generate docs for:
- ./core/interfaces (12 files, type: code) - depth 2 [Layer 2] - single strategy
- ./core (22 files, type: code) - depth 1 [Layer 2] - single strategy
- ./models (9 files, type: code) - depth 1 [Layer 2] - single strategy
- ./utils (12 files, type: navigation) - depth 1 [Layer 2] - single strategy
- . (5 files, type: code) - depth 0 [Layer 1] - single strategy

Documentation Strategy (Auto-Selected):
- Layer 2 (depth 1-2): API.md + README.md (current dir only, reference child docs)
- Layer 1 (depth 0): API.md + README.md (current dir only, reference child docs)

Output Structure:
- Code folders: API.md + README.md
- Navigation folders: README.md only

Auto-skipped: ./tests, __pycache__, node_modules (15 paths)
Execution order: Layer 2 → Layer 1
Estimated time: ~5-10 minutes

Confirm execution? (y/n)
```

**For ≥20 modules**:

```
Documentation Generation Plan:
Tool: gemini (fallback: qwen → codex)
Total: 31 modules
Execution: Agent batch processing (4 modules/agent)
Project: myproject
Output: .workflow/docs/myproject/

Will generate docs for:
- ./src/features/auth (12 files, type: code) - depth 3 [Layer 3] - full strategy
- ./.claude/commands/cli (6 files, type: code) - depth 3 [Layer 3] - full strategy
- ./src/utils (8 files, type: code) - depth 2 [Layer 2] - single strategy
...

Documentation Strategy (Auto-Selected):
- Layer 3 (depth ≥3): API.md + README.md (all subdirs with code)
- Layer 2 (depth 1-2): API.md + README.md (current dir only)
- Layer 1 (depth 0): API.md + README.md (current dir only)

Output Structure:
- Code folders: API.md + README.md
- Navigation folders: README.md only

Auto-skipped: ./tests, __pycache__, node_modules (15 paths)
Execution order: Layer 3 → Layer 2 → Layer 1

Agent allocation (by LAYER):
- Layer 3 (14 modules, depth ≥3): 4 agents [4, 4, 4, 2]
- Layer 2 (15 modules, depth 1-2): 4 agents [4, 4, 4, 3]
- Layer 1 (2 modules, depth 0): 1 agent [2]

Estimated time: ~15-25 minutes

Confirm execution? (y/n)
```
### Phase 3A: Direct Execution (<20 modules)

**Strategy**: Parallel execution within each layer (max 4 concurrent), no agent overhead.

**CRITICAL**: All Bash commands use `run_in_background: false` for synchronous execution.

```javascript
let project_name = detect_project_name();

for (let layer of [3, 2, 1]) {
  if (modules_by_layer[layer].length === 0) continue;
  let batches = batch_modules(modules_by_layer[layer], 4);

  for (let batch of batches) {
    let parallel_tasks = batch.map(module => {
      return async () => {
        let strategy = module.depth >= 3 ? "full" : "single";
        for (let tool of tool_order) {
          let bash_result = Bash({
            command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "${strategy}" "." "${project_name}" "${tool}"`,
            run_in_background: false
          });
          if (bash_result.exit_code === 0) {
            report(`✅ ${module.path} (Layer ${layer}) docs generated with ${tool}`);
            return true;
          }
        }
        report(`❌ FAILED: ${module.path} (Layer ${layer}) failed all tools`);
        return false;
      };
    });
    await Promise.all(parallel_tasks.map(task => task()));
  }
}
```
### Phase 3B: Agent Batch Execution (≥20 modules)

**Strategy**: Batch modules into groups of 4, spawn memory-bridge agents per batch.

```javascript
// Group modules by LAYER and batch within each layer
let modules_by_layer = group_by_layer(module_list);
let tool_order = construct_tool_order(primary_tool);
let project_name = detect_project_name();

for (let layer of [3, 2, 1]) {
  if (modules_by_layer[layer].length === 0) continue;

  let batches = batch_modules(modules_by_layer[layer], 4);
  let worker_tasks = [];

  for (let batch of batches) {
    worker_tasks.push(
      Task(
        subagent_type="memory-bridge",
        description=`Generate docs for ${batch.length} modules in Layer ${layer}`,
        prompt=generate_batch_worker_prompt(batch, tool_order, layer, project_name)
      )
    );
  }

  await parallel_execute(worker_tasks);
}
```
**Batch Worker Prompt Template**:

```
PURPOSE: Generate documentation for assigned modules with tool fallback

TASK: Generate API.md + README.md for assigned modules using specified strategies.

PROJECT: {{project_name}}
OUTPUT: .workflow/docs/{{project_name}}/

MODULES:
{{module_path_1}} (strategy: {{strategy_1}}, type: {{folder_type_1}})
{{module_path_2}} (strategy: {{strategy_2}}, type: {{folder_type_2}})
...

TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}

EXECUTION SCRIPT: ~/.claude/scripts/generate_module_docs.sh
- Accepts strategy parameter: full | single
- Accepts folder type detection: code | navigation
- Tool execution via direct CLI commands (gemini/qwen/codex)
- Output path: .workflow/docs/{{project_name}}/{module_path}/

EXECUTION FLOW (for each module):
1. Tool fallback loop (exit on first success):
   for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
     Bash({
       command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "{{strategy}}" "." "{{project_name}}" "${tool}"`,
       run_in_background: false
     })
     exit_code=$?

     if [ $exit_code -eq 0 ]; then
       report "✅ {{module_path}} docs generated with $tool"
       break
     else
       report "⚠️ {{module_path}} failed with $tool, trying next..."
       continue
     fi
   done

2. Handle complete failure (all tools failed):
   if [ $exit_code -ne 0 ]; then
     report "❌ FAILED: {{module_path}} - all tools exhausted"
     # Continue to next module (do not abort batch)
   fi

FOLDER TYPE HANDLING:
- code: Generate API.md + README.md
- navigation: Generate README.md only

FAILURE HANDLING:
- Module-level isolation: One module's failure does not affect others
- Exit code detection: Non-zero exit code triggers next tool
- Exhaustion reporting: Log modules where all tools failed
- Batch continuation: Always process remaining modules

REPORTING FORMAT:
Per-module status:
✅ path/to/module docs generated with {tool}
⚠️ path/to/module failed with {tool}, trying next...
❌ FAILED: path/to/module - all tools exhausted
```
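The per-module fallback loop described in the prompt template can be sketched as a small reusable helper. This is an illustrative sketch only: `generateWithFallback` and `runTool` are hypothetical names, and in the real command the work happens through the `Bash` tool invoking `generate_module_docs.sh`.

```javascript
// Sketch of the per-module tool fallback loop from the prompt template above.
// runTool is assumed to return an exit code: 0 on success, non-zero on failure.
function generateWithFallback(modulePath, toolOrder, runTool) {
  for (const tool of toolOrder) {
    if (runTool(modulePath, tool) === 0) {
      return { ok: true, tool };    // exit on first success
    }
  }
  return { ok: false, tool: null }; // all tools exhausted
}

// Example: gemini fails, qwen succeeds, codex is never tried
const fakeRun = (path, tool) => (tool === "qwen" ? 0 : 1);
console.log(generateWithFallback("src/auth", ["gemini", "qwen", "codex"], fakeRun));
```

The first success wins, mirroring the `break` in the shell loop; exhaustion is reported but does not abort the batch.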
### Phase 4: Project-Level Documentation

**After all module documentation is generated, create project-level documentation files.**

```javascript
let project_name = detect_project_name();
let project_root = get_project_root();

// Step 1: Generate Project README
report("Generating project README.md...");
for (let tool of tool_order) {
  Bash({
    command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-readme" "." "${project_name}" "${tool}"`,
    run_in_background: false
  });
  if (bash_result.exit_code === 0) {
    report(`✅ Project README generated with ${tool}`);
    break;
  }
}

// Step 2: Generate Architecture & Examples
report("Generating ARCHITECTURE.md and EXAMPLES.md...");
for (let tool of tool_order) {
  Bash({
    command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-architecture" "." "${project_name}" "${tool}"`,
    run_in_background: false
  });
  if (bash_result.exit_code === 0) {
    report(`✅ Architecture docs generated with ${tool}`);
    break;
  }
}

// Step 3: Generate HTTP API documentation (if API routes detected)
Bash({command: 'rg "router\\.|@Get|@Post" -g "*.{ts,js,py}" 2>/dev/null && echo "API_FOUND" || echo "NO_API"', run_in_background: false});
if (bash_result.stdout.includes("API_FOUND")) {
  report("Generating HTTP API documentation...");
  for (let tool of tool_order) {
    Bash({
      command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "http-api" "." "${project_name}" "${tool}"`,
      run_in_background: false
    });
    if (bash_result.exit_code === 0) {
      report(`✅ HTTP API docs generated with ${tool}`);
      break;
    }
  }
}
```
**Expected Output**:
```
Project-Level Documentation:
✅ README.md (project root overview)
✅ ARCHITECTURE.md (system design)
✅ EXAMPLES.md (usage examples)
✅ api/README.md (HTTP API reference) [optional]
```

### Phase 5: Verification

```javascript
// Check documentation files created
Bash({command: 'find .workflow/docs -type f -name "*.md" 2>/dev/null | wc -l', run_in_background: false});

// Display structure
Bash({command: 'tree -L 3 .workflow/docs/', run_in_background: false});
```
**Result Summary**:
```
Documentation Generation Summary:
Total: 31 | Success: 29 | Failed: 2
Tool usage: gemini: 25, qwen: 4, codex: 0
Failed: path1, path2

Generated documentation:
.workflow/docs/myproject/
├── src/
│   ├── auth/
│   │   ├── API.md
│   │   └── README.md
│   └── utils/
│       └── README.md
└── README.md
```
## Error Handling

**Batch Worker**: Tool fallback per module, batch isolation, clear status reporting
**Coordinator**: Invalid path abort, user decline handling, verification with cleanup
**Fallback Triggers**: Non-zero exit code, script timeout, unexpected output
## Output Structure

```
.workflow/docs/{project_name}/
├── src/                        # Mirrors source structure
│   ├── modules/
│   │   ├── README.md           # Navigation
│   │   ├── auth/
│   │   │   ├── API.md          # API signatures
│   │   │   ├── README.md       # Module docs
│   │   │   └── middleware/
│   │   │       ├── API.md
│   │   │       └── README.md
│   │   └── api/
│   │       ├── API.md
│   │       └── README.md
│   └── utils/
│       └── README.md
├── lib/
│   └── core/
│       ├── API.md
│       └── README.md
├── README.md                   # ✨ Project root overview (auto-generated)
├── ARCHITECTURE.md             # ✨ System design (auto-generated)
├── EXAMPLES.md                 # ✨ Usage examples (auto-generated)
└── api/                        # ✨ Optional (auto-generated if HTTP API detected)
    └── README.md               # HTTP API reference
```
## Usage Examples

```bash
# Full project documentation generation
/memory:docs-full-cli

# Target specific directory
/memory:docs-full-cli src/features/auth
/memory:docs-full-cli .claude

# Use specific tool
/memory:docs-full-cli --tool qwen
/memory:docs-full-cli src --tool qwen
```
## Key Advantages

- **Efficiency**: 30 modules → 8 agents (73% reduction from sequential)
- **Resilience**: 3-tier tool fallback per module
- **Performance**: Parallel batches, no concurrency limits
- **Observability**: Per-module tool usage, batch-level metrics
- **Automation**: Zero configuration - strategy auto-selected by directory depth
- **Path Mirroring**: Clear 1:1 mapping between source and documentation structure
## Template Reference

Templates used from `~/.claude/workflows/cli-templates/prompts/documentation/`:
- `api.txt`: Code API documentation (Part A: Code API, Part B: HTTP API)
- `module-readme.txt`: Module purpose, usage, dependencies
- `folder-navigation.txt`: Navigation README for folders with subdirectories

## Related Commands

- `/memory:docs` - Agent-based documentation planning workflow
- `/memory:docs-related-cli` - Update docs for changed modules only
- `/workflow:execute` - Execute documentation tasks (when using agent mode)
386 .claude/commands/memory/docs-related-cli.md (Normal file)
@@ -0,0 +1,386 @@
---
name: docs-related-cli
description: Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback; fewer than 15 modules run via direct parallel execution
argument-hint: "[--tool <gemini|qwen|codex>]"
---

# Related Documentation Generation - CLI Mode (/memory:docs-related-cli)

## Overview

Orchestrates context-aware documentation generation/update for changed modules using CLI-based execution with batched agents and automatic tool fallback (gemini→qwen→codex).

**Parameters**:
- `--tool <gemini|qwen|codex>`: Primary tool (default: gemini)

**Execution Flow**:
1. Change Detection → 2. Plan Presentation → 3. Batched Execution → 4. Verification
## Core Rules

1. **Detect Changes First**: Use git diff to identify affected modules
2. **Wait for Approval**: Present plan, no execution without user confirmation
3. **Execution Strategy**:
   - **<15 modules**: Direct parallel execution (max 4 concurrent per depth, no agent overhead)
   - **≥15 modules**: Agent batch processing (4 modules/agent, 73% overhead reduction)
4. **Tool Fallback**: Auto-retry with fallback tools on failure
5. **Depth Sequential**: Process depths N→0, parallel batches within depth (both modes)
6. **Related Mode**: Generate/update only changed modules and their parent contexts
7. **Single Strategy**: Always use `single` strategy (incremental update)
## Tool Fallback Hierarchy

```javascript
--tool gemini → [gemini, qwen, codex]  // default
--tool qwen   → [qwen, gemini, codex]
--tool codex  → [codex, gemini, qwen]
```
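The mapping above reduces to "primary tool first, then the remaining defaults in their default order". A minimal sketch under that assumption (the real `construct_tool_order` used later in this document is not shown, so this implementation is assumed):

```javascript
// Sketch of construct_tool_order: primary tool first, then the remaining
// tools in the default (gemini, qwen, codex) order.
function construct_tool_order(primary = "gemini") {
  const defaults = ["gemini", "qwen", "codex"];
  return [primary, ...defaults.filter(t => t !== primary)];
}

console.log(construct_tool_order("qwen"));  // ["qwen", "gemini", "codex"]
```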
**Trigger**: Non-zero exit code from generation script

| Tool   | Best For                     | Fallback To    |
|--------|------------------------------|----------------|
| gemini | Documentation, patterns      | qwen → codex   |
| qwen   | Architecture, system design  | gemini → codex |
| codex  | Implementation, code quality | gemini → qwen  |
## Phase 1: Change Detection & Analysis

```javascript
// Get project metadata
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});

// Detect changed modules
Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});

// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|change:<TYPE>|type:<code|navigation>` to extract affected modules.

**Smart filter**: Auto-detect and skip tests/build/config/vendor based on project tech stack (Node.js/Python/Go/Rust/etc).

**Fallback**: If no changes detected, use recent modules (first 10 by depth).
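The output line format above can be parsed field by field on the `|` separator. A minimal sketch (the `parseModuleLine` helper is illustrative, not part of the shipped scripts):

```javascript
// Sketch of parsing detect_changed_modules.sh output lines of the form
// depth:N|path:<PATH>|change:<TYPE>|type:<code|navigation>.
function parseModuleLine(line) {
  const fields = {};
  for (const part of line.split("|")) {
    const i = part.indexOf(":");          // split on the first colon only
    fields[part.slice(0, i)] = part.slice(i + 1);
  }
  return { ...fields, depth: Number(fields.depth) };
}

console.log(parseModuleLine("depth:2|path:./src/api|change:modified|type:code"));
```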
## Phase 2: Plan Presentation

**Present filtered plan**:
```
Related Documentation Generation Plan:
Tool: gemini (fallback: qwen → codex)
Changed: 4 modules | Batching: 4 modules/agent
Project: myproject
Output: .workflow/docs/myproject/

Will generate/update docs for:
- ./src/api/auth (5 files, type: code) [new module]
- ./src/api (12 files, type: code) [parent of changed auth/]
- ./src (8 files, type: code) [parent context]
- . (14 files, type: code) [root level]

Documentation Strategy:
- Strategy: single (all modules - incremental update)
- Output: API.md + README.md (code folders), README.md only (navigation folders)
- Context: Current dir code + child docs

Auto-skipped (12 paths):
- Tests: ./src/api/auth.test.ts (8 paths)
- Config: tsconfig.json (3 paths)
- Other: node_modules (1 path)

Agent allocation:
- Depth 3 (1 module): 1 agent [1]
- Depth 2 (1 module): 1 agent [1]
- Depth 1 (1 module): 1 agent [1]
- Depth 0 (1 module): 1 agent [1]

Estimated time: ~5-10 minutes

Confirm execution? (y/n)
```
**Decision logic**:
- User confirms "y": Proceed with execution
- User declines "n": Abort, no changes
- <15 modules: Direct execution
- ≥15 modules: Agent batch execution
## Phase 3A: Direct Execution (<15 modules)

**Strategy**: Parallel execution within depth (max 4 concurrent), no agent overhead.

**CRITICAL**: All Bash commands use `run_in_background: false` for synchronous execution.

```javascript
let project_name = detect_project_name();

for (let depth of sorted_depths.reverse()) { // N → 0
  let batches = batch_modules(modules_by_depth[depth], 4);

  for (let batch of batches) {
    let parallel_tasks = batch.map(module => {
      return async () => {
        for (let tool of tool_order) {
          Bash({
            command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "single" "." "${project_name}" "${tool}"`,
            run_in_background: false
          });
          if (bash_result.exit_code === 0) {
            report(`✅ ${module.path} docs generated with ${tool}`);
            return true;
          }
        }
        report(`❌ FAILED: ${module.path} failed all tools`);
        return false;
      };
    });
    await Promise.all(parallel_tasks.map(task => task()));
  }
}
```
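The "max 4 concurrent" behavior above can be sketched as a generic batch runner: split the work into groups of 4 and await each group before starting the next. `runInBatches` is an illustrative name, not part of the command; it only demonstrates the concurrency pattern.

```javascript
// Sketch of capped parallelism via fixed-size batches: at most batchSize
// workers run at once, and a batch must finish before the next starts.
async function runInBatches(items, worker, batchSize = 4) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    results.push(...await Promise.all(batch.map(worker)));
  }
  return results;
}

// Example: 10 items are processed as batches of 4, 4, and 2
runInBatches([...Array(10).keys()], async n => n * 2).then(r => console.log(r.length)); // 10
```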
## Phase 3B: Agent Batch Execution (≥15 modules)

### Batching Strategy

```javascript
// Batch modules into groups of 4
function batch_modules(modules, batch_size = 4) {
  let batches = [];
  for (let i = 0; i < modules.length; i += batch_size) {
    batches.push(modules.slice(i, i + batch_size));
  }
  return batches;
}
// Examples: 10→[4,4,2] | 8→[4,4] | 3→[3]
```
### Coordinator Orchestration

```javascript
let modules_by_depth = group_by_depth(changed_modules);
let tool_order = construct_tool_order(primary_tool);
let project_name = detect_project_name();

for (let depth of sorted_depths.reverse()) { // N → 0
  let batches = batch_modules(modules_by_depth[depth], 4);
  let worker_tasks = [];

  for (let batch of batches) {
    worker_tasks.push(
      Task(
        subagent_type="memory-bridge",
        description=`Generate docs for ${batch.length} modules at depth ${depth}`,
        prompt=generate_batch_worker_prompt(batch, tool_order, depth, project_name, "related")
      )
    );
  }

  await parallel_execute(worker_tasks); // Batches run in parallel
}
```
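The coordinator code assumes a `group_by_depth` helper that is never defined in this document. A minimal sketch of what it could look like, bucketing parsed modules by their `depth` field:

```javascript
// Sketch of the assumed group_by_depth helper: buckets modules by depth
// so depths can then be processed deepest-first (N → 0).
function group_by_depth(modules) {
  const byDepth = {};
  for (const m of modules) {
    (byDepth[m.depth] ||= []).push(m);
  }
  return byDepth;
}

const grouped = group_by_depth([
  { depth: 2, path: "./src/api" },
  { depth: 2, path: "./src/utils" },
  { depth: 0, path: "." },
]);
console.log(Object.keys(grouped)); // ["0", "2"]
```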
### Batch Worker Prompt Template

```
PURPOSE: Generate/update documentation for assigned modules with tool fallback (related mode)

TASK:
Generate documentation for the following modules based on recent changes. For each module, try tools in order until success.

PROJECT: {{project_name}}
OUTPUT: .workflow/docs/{{project_name}}/

MODULES:
{{module_path_1}} (type: {{folder_type_1}})
{{module_path_2}} (type: {{folder_type_2}})
{{module_path_3}} (type: {{folder_type_3}})
{{module_path_4}} (type: {{folder_type_4}})

TOOLS (try in order):
1. {{tool_1}}
2. {{tool_2}}
3. {{tool_3}}

EXECUTION:
For each module above:
1. Try tool 1:
   Bash({
     command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_1}}"`,
     run_in_background: false
   })
   → Success: Report "✅ {{module_path}} docs generated with {{tool_1}}", proceed to next module
   → Failure: Try tool 2
2. Try tool 2:
   Bash({
     command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_2}}"`,
     run_in_background: false
   })
   → Success: Report "✅ {{module_path}} docs generated with {{tool_2}}", proceed to next module
   → Failure: Try tool 3
3. Try tool 3:
   Bash({
     command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_3}}"`,
     run_in_background: false
   })
   → Success: Report "✅ {{module_path}} docs generated with {{tool_3}}", proceed to next module
   → Failure: Report "❌ FAILED: {{module_path}} failed all tools", proceed to next module

FOLDER TYPE HANDLING:
- code: Generate API.md + README.md
- navigation: Generate README.md only

REPORTING:
Report final summary with:
- Total processed: X modules
- Successful: Y modules
- Failed: Z modules
- Tool usage: {{tool_1}}:X, {{tool_2}}:Y, {{tool_3}}:Z
```
## Phase 4: Verification

```javascript
// Check documentation files created/updated
Bash({command: 'find .workflow/docs -type f -name "*.md" 2>/dev/null | wc -l', run_in_background: false});

// Display recent changes
Bash({command: 'find .workflow/docs -type f -name "*.md" -mmin -60 2>/dev/null', run_in_background: false});
```
**Aggregate results**:
```
Documentation Generation Summary:
Total: 4 | Success: 4 | Failed: 0

Tool usage:
- gemini: 4 modules
- qwen: 0 modules (fallback)
- codex: 0 modules

Changes:
.workflow/docs/myproject/src/api/auth/API.md (new)
.workflow/docs/myproject/src/api/auth/README.md (new)
.workflow/docs/myproject/src/api/API.md (updated)
.workflow/docs/myproject/src/api/README.md (updated)
.workflow/docs/myproject/src/API.md (updated)
.workflow/docs/myproject/src/README.md (updated)
.workflow/docs/myproject/API.md (updated)
.workflow/docs/myproject/README.md (updated)
```
## Execution Summary

**Module Count Threshold**:
- **<15 modules**: Coordinator executes Phase 3A (Direct Execution)
- **≥15 modules**: Coordinator executes Phase 3B (Agent Batch Execution)

**Agent Hierarchy** (for ≥15 modules):
- **Coordinator**: Handles batch division, spawns worker agents per depth
- **Worker Agents**: Each processes 4 modules with tool fallback (related mode)
## Error Handling

**Batch Worker**:
- Tool fallback per module (auto-retry)
- Batch isolation (failures don't propagate)
- Clear per-module status reporting

**Coordinator**:
- No changes: Use fallback (recent 10 modules)
- User decline: No execution
- Verification fail: Report incomplete modules
- Partial failures: Continue execution, report failed modules

**Fallback Triggers**:
- Non-zero exit code
- Script timeout
- Unexpected output
## Output Structure

```
.workflow/docs/{project_name}/
├── src/                        # Mirrors source structure
│   ├── modules/
│   │   ├── README.md
│   │   ├── auth/
│   │   │   ├── API.md          # Updated based on code changes
│   │   │   └── README.md       # Updated based on code changes
│   │   └── api/
│   │       ├── API.md
│   │       └── README.md
│   └── utils/
│       └── README.md
└── README.md
```
## Usage Examples

```bash
# Daily development documentation update
/memory:docs-related-cli

# After feature work with specific tool
/memory:docs-related-cli --tool qwen

# Code quality documentation review after implementation
/memory:docs-related-cli --tool codex
```
## Key Advantages

- **Efficiency**: 30 modules → 8 agents (73% reduction)
- **Resilience**: 3-tier fallback per module
- **Performance**: Parallel batches, no concurrency limits
- **Context-aware**: Updates based on actual git changes
- **Fast**: Only affected modules, not entire project
- **Incremental**: Single strategy for focused updates
## Coordinator Checklist

- Parse `--tool` (default: gemini)
- Get project metadata (name, root)
- Detect changed modules via detect_changed_modules.sh
- **Smart filter modules** (auto-detect tech stack, skip tests/build/config/vendor)
- Cache git changes
- Apply fallback if no changes (recent 10 modules)
- Construct tool fallback order
- **Present filtered plan** with skip reasons and change types
- **Wait for y/n confirmation**
- Determine execution mode:
  - **<15 modules**: Direct execution (Phase 3A)
    - For each depth (N→0): Sequential module updates with tool fallback
  - **≥15 modules**: Agent batch execution (Phase 3B)
    - For each depth (N→0): Batch modules (4 per batch), spawn batch workers in parallel
- Wait for depth/batch completion
- Aggregate results
- Verification check (documentation files created/updated)
- Display summary + recent changes
## Comparison with Full Documentation Generation

| Aspect | Related Generation | Full Generation |
|--------|--------------------|-----------------|
| **Scope** | Changed modules only | All project modules |
| **Speed** | Fast (minutes) | Slower (10-30 min) |
| **Use case** | Daily development | Initial setup, major refactoring |
| **Strategy** | `single` (all) | `full` (L3) + `single` (L1-2) |
| **Trigger** | After commits | After setup or major changes |
| **Batching** | 4 modules/agent | 4 modules/agent |
| **Fallback** | gemini→qwen→codex | gemini→qwen→codex |
| **Complexity threshold** | ≤15 modules | ≤20 modules |
## Template Reference

Templates used from `~/.claude/workflows/cli-templates/prompts/documentation/`:
- `api.txt`: Code API documentation
- `module-readme.txt`: Module purpose, usage, dependencies
- `folder-navigation.txt`: Navigation README for folders
## Related Commands

- `/memory:docs-full-cli` - Full project documentation generation
- `/memory:docs` - Agent-based documentation planning workflow
- `/memory:update-related` - Update CLAUDE.md for changed modules
@@ -95,14 +95,15 @@ src/ (depth 1) → SINGLE-LAYER STRATEGY

### Phase 1: Discovery & Analysis

```javascript
// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});

// Get module structure
Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list", run_in_background: false});

// OR with --path
Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list", run_in_background: false});
```

**Parse output** `depth:N|path:<PATH>|...` to extract module paths and count.
@@ -172,26 +173,23 @@ Update Plan:

**Strategy**: Parallel execution within layer (max 4 concurrent), no agent overhead.

**CRITICAL**: All Bash commands use `run_in_background: false` for synchronous execution.

```javascript
for (let layer of [3, 2, 1]) {
  if (modules_by_layer[layer].length === 0) continue;

  let batches = batch_modules(modules_by_layer[layer], 4);

  for (let batch of batches) {
    let parallel_tasks = batch.map(module => {
      return async () => {
        let strategy = module.depth >= 3 ? "multi-layer" : "single-layer";

        for (let tool of tool_order) {
          Bash({
            command: `cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "${strategy}" "." "${tool}"`,
            run_in_background: false
          });
          if (bash_result.exit_code === 0) {
            report(`✅ ${module.path} (Layer ${layer}) updated with ${tool}`);
            return true;
          }
@@ -200,7 +198,6 @@ for (let layer of [3, 2, 1]) {
        return false;
      };
    });
    await Promise.all(parallel_tasks.map(task => task()));
  }
}
@@ -255,7 +252,10 @@ EXECUTION SCRIPT: ~/.claude/scripts/update_module_claude.sh

EXECUTION FLOW (for each module):
1. Tool fallback loop (exit on first success):
   for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
     Bash({
       command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "{{strategy}}" "." "${tool}"`,
       run_in_background: false
     })
     exit_code=$?

     if [ $exit_code -eq 0 ]; then
@@ -287,12 +287,12 @@ REPORTING FORMAT:
```

### Phase 4: Safety Verification

```javascript
// Check only CLAUDE.md files modified
Bash({command: 'git diff --cached --name-only | grep -v "CLAUDE.md" || echo "Only CLAUDE.md files modified"', run_in_background: false});

// Display status
Bash({command: "git status --short", run_in_background: false});
```

**Result Summary**:
@@ -39,12 +39,12 @@ Orchestrates context-aware CLAUDE.md updates for changed modules using batched a

## Phase 1: Change Detection & Analysis

```javascript
// Detect changed modules
Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});

// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
```

**Parse output** `depth:N|path:<PATH>|change:<TYPE>` to extract affected modules.
@@ -89,47 +89,36 @@ Related Update Plan:

## Phase 3A: Direct Execution (<15 modules)

**Strategy**: Parallel execution within depth (max 4 concurrent), no agent overhead.

**CRITICAL**: All Bash commands use `run_in_background: false` for synchronous execution.

```javascript
for (let depth of sorted_depths.reverse()) { // N → 0
  let batches = batch_modules(modules_by_depth[depth], 4);

  for (let batch of batches) {
    let parallel_tasks = batch.map(module => {
      return async () => {
        for (let tool of tool_order) {
          Bash({
            command: `cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "${tool}"`,
            run_in_background: false
          });
          if (bash_result.exit_code === 0) {
            report(`✅ ${module.path} updated with ${tool}`);
            return true;
          }
        }
        report(`❌ FAILED: ${module.path} failed all tools`);
        return false;
      };
    });
    await Promise.all(parallel_tasks.map(task => task()));
  }
}
```

---

## Phase 3B: Agent Batch Execution (≥15 modules)
@@ -193,19 +182,27 @@ TOOLS (try in order):

EXECUTION:
For each module above:
1. Try tool 1:
   Bash({
     command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_1}}"`,
     run_in_background: false
   })
   → Success: Report "✅ {{module_path}} updated with {{tool_1}}", proceed to next module
   → Failure: Try tool 2
2. Try tool 2:
   Bash({
     command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_2}}"`,
     run_in_background: false
   })
   → Success: Report "✅ {{module_path}} updated with {{tool_2}}", proceed to next module
   → Failure: Try tool 3
3. Try tool 3:
   Bash({
     command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_3}}"`,
     run_in_background: false
   })
   → Success: Report "✅ {{module_path}} updated with {{tool_3}}", proceed to next module
   → Failure: Report "❌ FAILED: {{module_path}} failed all tools", proceed to next module

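The per-module fallback chain described above can be sketched as a plain function (the `runTool` callback standing in for the `Bash` call is an assumption):

```javascript
// Sketch of the tool fallback chain: try each tool in order and stop at the
// first one whose exit code is 0. runTool stands in for the Bash invocation.
function updateWithFallback(modulePath, toolOrder, runTool) {
  for (const tool of toolOrder) {
    if (runTool(modulePath, tool) === 0) {
      return { modulePath, tool, ok: true };
    }
  }
  return { modulePath, tool: null, ok: false };
}
```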
REPORTING:
Report final summary with:
@@ -213,30 +210,16 @@ Report final summary with:
- Successful: Y modules
- Failed: Z modules
- Tool usage: {{tool_1}}:X, {{tool_2}}:Y, {{tool_3}}:Z
- Detailed results for each module
```

### Example Execution

**Depth 3 (new module)**:
```javascript
Task(subagent_type="memory-bridge", batch=[./src/api/auth], mode="related")
```

**Benefits**:
- 4 modules → 1 agent (75% reduction)
- Parallel batches, sequential within batch
- Each module gets full fallback chain
- Context-aware updates based on git changes

## Phase 4: Safety Verification

```javascript
// Check only CLAUDE.md modified
Bash({command: 'git diff --cached --name-only | grep -v "CLAUDE.md" || echo "Only CLAUDE.md files modified"', run_in_background: false});

// Display statistics
Bash({command: "git diff --stat", run_in_background: false});
```
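The intent of the first check is that the staged changeset touches nothing but CLAUDE.md files; the same predicate as a hedged sketch in plain code (the function name is an assumption):

```javascript
// Sketch: returns true when every staged path reported by
// `git diff --cached --name-only` ends in CLAUDE.md.
function onlyClaudeMdModified(nameOnlyOutput) {
  return nameOnlyOutput
    .split("\n")
    .map(p => p.trim())
    .filter(Boolean)
    .every(p => p.endsWith("CLAUDE.md"));
}
```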

**Aggregate results**:

@@ -381,6 +381,64 @@ Updated {role2}/analysis.md with Clarifications section + enhanced content
- Ambiguities resolved, placeholders removed
- Consistent terminology

### Phase 6: Update Context Package

**Purpose**: Sync updated role analyses to context-package.json to avoid stale cache

**Operations**:
```bash
context_pkg_path = ".workflow/active/WFS-{session}/.process/context-package.json"

# 1. Read existing package
context_pkg = Read(context_pkg_path)

# 2. Re-read brainstorm artifacts (now with synthesis enhancements)
brainstorm_dir = ".workflow/active/WFS-{session}/.brainstorming"

# 2.1 Update guidance-specification if exists
IF exists({brainstorm_dir}/guidance-specification.md):
    context_pkg.brainstorm_artifacts.guidance_specification.content = Read({brainstorm_dir}/guidance-specification.md)
    context_pkg.brainstorm_artifacts.guidance_specification.updated_at = NOW()

# 2.2 Update synthesis-specification if exists
IF exists({brainstorm_dir}/synthesis-specification.md):
    IF context_pkg.brainstorm_artifacts.synthesis_output:
        context_pkg.brainstorm_artifacts.synthesis_output.content = Read({brainstorm_dir}/synthesis-specification.md)
        context_pkg.brainstorm_artifacts.synthesis_output.updated_at = NOW()

# 2.3 Re-read all role analysis files
role_analysis_files = Glob({brainstorm_dir}/*/analysis*.md)
context_pkg.brainstorm_artifacts.role_analyses = []

FOR file IN role_analysis_files:
    role_name = extract_role_from_path(file)  # e.g., "ui-designer"
    relative_path = file.replace({brainstorm_dir}/, "")

    context_pkg.brainstorm_artifacts.role_analyses.push({
        "role": role_name,
        "files": [{
            "path": relative_path,
            "type": "primary",
            "content": Read(file),
            "updated_at": NOW()
        }]
    })

# 3. Update metadata
context_pkg.metadata.updated_at = NOW()
context_pkg.metadata.synthesis_timestamp = NOW()

# 4. Write back
Write(context_pkg_path, JSON.stringify(context_pkg, indent=2))

REPORT: "✅ Updated context-package.json with synthesis results"
```

**TodoWrite Update**:
```json
{"content": "Update context package with synthesis results", "status": "completed", "activeForm": "Updating context package"}
```
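Step 2.3 above rebuilds the `role_analyses` catalog from analysis file paths; a minimal sketch of that mapping (the `readFile` callback and the first-path-segment role rule are assumptions based on the pseudocode):

```javascript
// Sketch: derive the role name from the first path segment under the
// brainstorm directory and rebuild one catalog entry per analysis file.
function rebuildRoleAnalyses(files, brainstormDir, readFile, now) {
  return files.map(file => {
    const rel = file.startsWith(brainstormDir + "/")
      ? file.slice(brainstormDir.length + 1)
      : file;
    return {
      role: rel.split("/")[0], // e.g. "ui-designer"
      files: [{ path: rel, type: "primary", content: readFile(file), updated_at: now }]
    };
  });
}
```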

## Session Metadata

Update `workflow-session.json`:
|
- Session ID successfully extracted
- Session directory `.workflow/active/[sessionId]/` exists

**Note**: Session directory contains `workflow-session.json` (metadata). Do NOT look for `manifest.json` here - it only exists in `.workflow/archives/` for archived sessions.

**TodoWrite**: Mark phase 1 completed, phase 2 in_progress

**After Phase 1**: Return to user showing Phase 1 results, then auto-continue to Phase 2
@@ -213,8 +213,6 @@ Refer to: @.claude/agents/action-planning-agent.md for:
Generate all three documents and report completion status:
- Task JSON files created: N files
- Artifacts integrated: synthesis-spec, guidance-specification, N role analyses
- MCP enhancements: code-index, exa-research
- Session ready for execution: /workflow:execute
`
)
```
@@ -227,7 +227,68 @@ Write(file_path=".workflow/active/WFS-{session}/.brainstorming/ui-designer/desig
content="[generated content with @ references]")
```

### Phase 5: Update Context Package

**Purpose**: Sync design system references to context-package.json

**Operations**:
```bash
context_pkg_path = ".workflow/active/WFS-{session}/.process/context-package.json"

# 1. Read existing package
context_pkg = Read(context_pkg_path)

# 2. Update brainstorm_artifacts (role analyses now contain @ design references)
brainstorm_dir = ".workflow/active/WFS-{session}/.brainstorming"
role_analysis_files = Glob({brainstorm_dir}/*/analysis*.md)

context_pkg.brainstorm_artifacts.role_analyses = []
FOR file IN role_analysis_files:
    role_name = extract_role_from_path(file)
    relative_path = file.replace({brainstorm_dir}/, "")

    context_pkg.brainstorm_artifacts.role_analyses.push({
        "role": role_name,
        "files": [{
            "path": relative_path,
            "type": "primary",
            "content": Read(file),  # Contains @ design system references
            "updated_at": NOW()
        }]
    })

# 3. Add design_system_references field
context_pkg.design_system_references = {
    "design_run_id": design_id,
    "tokens": `${design_id}/${design_tokens_path}`,
    "style_guide": `${design_id}/${style_guide_path}`,
    "prototypes": selected_list.map(p => `${design_id}/prototypes/${p}.html`),
    "updated_at": NOW()
}

# 4. Optional: Add animations and layouts if they exist
IF exists({latest_design}/animation-extraction/animation-tokens.json):
    context_pkg.design_system_references.animations = `${design_id}/animation-extraction/animation-tokens.json`

IF exists({latest_design}/layout-extraction/layout-templates.json):
    context_pkg.design_system_references.layouts = `${design_id}/layout-extraction/layout-templates.json`

# 5. Update metadata
context_pkg.metadata.updated_at = NOW()
context_pkg.metadata.design_sync_timestamp = NOW()

# 6. Write back
Write(context_pkg_path, JSON.stringify(context_pkg, indent=2))

REPORT: "✅ Updated context-package.json with design system references"
```

**TodoWrite Update**:
```json
{"content": "Update context package with design references", "status": "completed", "activeForm": "Updating context package"}
```
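The `design_system_references` entry assembled in step 3 can be sketched as a pure function (the function name is an assumption; the field names follow the pseudocode above):

```javascript
// Sketch: build the design_system_references object from a design run id,
// token/style-guide paths, and the list of selected prototype names.
function buildDesignRefs(designId, tokensPath, styleGuidePath, selected, now) {
  return {
    design_run_id: designId,
    tokens: `${designId}/${tokensPath}`,
    style_guide: `${designId}/${styleGuidePath}`,
    prototypes: selected.map(p => `${designId}/prototypes/${p}.html`),
    updated_at: now
  };
}
```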

### Phase 6: Completion

```javascript
TodoWrite({todos: [

.claude/scripts/generate_module_docs.sh (new file, 713 lines)
@@ -0,0 +1,713 @@
|
#!/bin/bash
|
||||||
|
# Generate documentation for modules and projects with multiple strategies
|
||||||
|
# Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]
|
||||||
|
# strategy: full|single|project-readme|project-architecture|http-api
|
||||||
|
# source_path: Path to the source module directory (or project root for project-level docs)
|
||||||
|
# project_name: Project name for output path (e.g., "myproject")
|
||||||
|
# tool: gemini|qwen|codex (default: gemini)
|
||||||
|
# model: Model name (optional, uses tool defaults)
|
||||||
|
#
|
||||||
|
# Default Models:
|
||||||
|
# gemini: gemini-2.5-flash
|
||||||
|
# qwen: coder-model
|
||||||
|
# codex: gpt5-codex
|
||||||
|
#
|
||||||
|
# Module-Level Strategies:
|
||||||
|
# full: Full documentation generation
|
||||||
|
# - Read: All files in current and subdirectories (@**/*)
|
||||||
|
# - Generate: API.md + README.md for each directory containing code files
|
||||||
|
# - Use: Deep directories (Layer 3), comprehensive documentation
|
||||||
|
#
|
||||||
|
# single: Single-layer documentation
|
||||||
|
# - Read: Current directory code + child API.md/README.md files
|
||||||
|
# - Generate: API.md + README.md only in current directory
|
||||||
|
# - Use: Upper layers (Layer 1-2), incremental updates
|
||||||
|
#
|
||||||
|
# Project-Level Strategies:
|
||||||
|
# project-readme: Project overview documentation
|
||||||
|
# - Read: All module API.md and README.md files
|
||||||
|
# - Generate: README.md (project root)
|
||||||
|
# - Use: After all module docs are generated
|
||||||
|
#
|
||||||
|
# project-architecture: System design documentation
|
||||||
|
# - Read: All module docs + project README
|
||||||
|
# - Generate: ARCHITECTURE.md + EXAMPLES.md
|
||||||
|
# - Use: After project README is generated
|
||||||
|
#
|
||||||
|
# http-api: HTTP API documentation
|
||||||
|
# - Read: API route files + existing docs
|
||||||
|
# - Generate: api/README.md
|
||||||
|
# - Use: For projects with HTTP APIs
|
||||||
|
#
|
||||||
|
# Output Structure:
|
||||||
|
# Module docs: .workflow/docs/{project_name}/{source_path}/API.md
|
||||||
|
# Module docs: .workflow/docs/{project_name}/{source_path}/README.md
|
||||||
|
# Project docs: .workflow/docs/{project_name}/README.md
|
||||||
|
# Project docs: .workflow/docs/{project_name}/ARCHITECTURE.md
|
||||||
|
# Project docs: .workflow/docs/{project_name}/EXAMPLES.md
|
||||||
|
# API docs: .workflow/docs/{project_name}/api/README.md
|
||||||
|
#
|
||||||
|
# Features:
|
||||||
|
# - Path mirroring: source structure → docs structure
|
||||||
|
# - Template-driven generation
|
||||||
|
# - Respects .gitignore patterns
|
||||||
|
# - Detects code vs navigation folders
|
||||||
|
# - Tool fallback support
|
||||||
|
|
||||||
|
# Build exclusion filters from .gitignore
|
||||||
|
build_exclusion_filters() {
|
||||||
|
local filters=""
|
||||||
|
|
||||||
|
# Common system/cache directories to exclude
|
||||||
|
local system_excludes=(
|
||||||
|
".git" "__pycache__" "node_modules" ".venv" "venv" "env"
|
||||||
|
"dist" "build" ".cache" ".pytest_cache" ".mypy_cache"
|
||||||
|
"coverage" ".nyc_output" "logs" "tmp" "temp" ".workflow"
|
||||||
|
)
|
||||||
|
|
||||||
|
for exclude in "${system_excludes[@]}"; do
|
||||||
|
filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
|
||||||
|
done
|
||||||
|
|
||||||
|
# Find and parse .gitignore (current dir first, then git root)
|
||||||
|
local gitignore_file=""
|
||||||
|
|
||||||
|
# Check current directory first
|
||||||
|
if [ -f ".gitignore" ]; then
|
||||||
|
gitignore_file=".gitignore"
|
||||||
|
else
|
||||||
|
# Try to find git root and check for .gitignore there
|
||||||
|
local git_root=$(git rev-parse --show-toplevel 2>/dev/null)
|
||||||
|
if [ -n "$git_root" ] && [ -f "$git_root/.gitignore" ]; then
|
||||||
|
gitignore_file="$git_root/.gitignore"
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Parse .gitignore if found
|
||||||
|
if [ -n "$gitignore_file" ]; then
|
||||||
|
while IFS= read -r line; do
|
||||||
|
# Skip empty lines and comments
|
||||||
|
[[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue
|
||||||
|
|
||||||
|
# Remove trailing slash and whitespace
|
||||||
|
line=$(echo "$line" | sed 's|/$||' | xargs)
|
||||||
|
|
||||||
|
# Skip wildcards patterns (too complex for simple find)
|
||||||
|
[[ "$line" =~ \* ]] && continue
|
||||||
|
|
||||||
|
# Add to filters
|
||||||
|
filters+=" -not -path '*/$line' -not -path '*/$line/*'"
|
||||||
|
done < "$gitignore_file"
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo "$filters"
|
||||||
|
}
|
||||||
|
|
||||||
|
# Detect folder type (code vs navigation)
|
||||||
|
detect_folder_type() {
|
||||||
|
local target_path="$1"
|
||||||
|
local exclusion_filters="$2"
|
||||||
|
|
||||||
|
# Count code files (primary indicators)
|
||||||
|
local code_count=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' -o -name '*.go' -o -name '*.rs' \\) $exclusion_filters 2>/dev/null" | wc -l)
|
||||||
|
|
||||||
|
if [ $code_count -gt 0 ]; then
|
||||||
|
echo "code"
|
||||||
|
else
|
||||||
|
echo "navigation"
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
# Scan directory structure and generate structured information
|
||||||
|
scan_directory_structure() {
|
||||||
|
local target_path="$1"
|
||||||
|
local strategy="$2"
|
||||||
|
|
||||||
|
if [ ! -d "$target_path" ]; then
|
||||||
|
echo "Directory not found: $target_path"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
local exclusion_filters=$(build_exclusion_filters)
|
||||||
|
local structure_info=""
|
||||||
|
|
||||||
|
# Get basic directory info
|
||||||
|
local dir_name=$(basename "$target_path")
|
||||||
|
local total_files=$(eval "find \"$target_path\" -type f $exclusion_filters 2>/dev/null" | wc -l)
|
||||||
|
local total_dirs=$(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null" | wc -l)
|
||||||
|
local folder_type=$(detect_folder_type "$target_path" "$exclusion_filters")
|
||||||
|
|
||||||
|
structure_info+="Directory: $dir_name\n"
|
||||||
|
structure_info+="Total files: $total_files\n"
|
||||||
|
structure_info+="Total directories: $total_dirs\n"
|
||||||
|
structure_info+="Folder type: $folder_type\n\n"
|
||||||
|
|
||||||
|
if [ "$strategy" = "full" ]; then
|
||||||
|
# For full: show all subdirectories with file counts
|
||||||
|
structure_info+="Subdirectories with files:\n"
|
||||||
|
while IFS= read -r dir; do
|
||||||
|
if [ -n "$dir" ] && [ "$dir" != "$target_path" ]; then
|
||||||
|
local rel_path=${dir#$target_path/}
|
||||||
|
local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
|
||||||
|
if [ $file_count -gt 0 ]; then
|
||||||
|
local subdir_type=$(detect_folder_type "$dir" "$exclusion_filters")
|
||||||
|
structure_info+=" - $rel_path/ ($file_count files, type: $subdir_type)\n"
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
done < <(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null")
|
||||||
|
else
|
||||||
|
# For single: show direct children only
|
||||||
|
structure_info+="Direct subdirectories:\n"
|
||||||
|
while IFS= read -r dir; do
|
||||||
|
if [ -n "$dir" ]; then
|
||||||
|
local dir_name=$(basename "$dir")
|
||||||
|
local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
|
||||||
|
local has_api=$([ -f "$dir/API.md" ] && echo " [has API.md]" || echo "")
|
||||||
|
local has_readme=$([ -f "$dir/README.md" ] && echo " [has README.md]" || echo "")
|
||||||
|
structure_info+=" - $dir_name/ ($file_count files)$has_api$has_readme\n"
|
||||||
|
fi
|
||||||
|
done < <(eval "find \"$target_path\" -maxdepth 1 -type d $exclusion_filters 2>/dev/null" | grep -v "^$target_path$")
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Show main file types in current directory
|
||||||
|
structure_info+="\nCurrent directory files:\n"
|
||||||
|
local code_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' -o -name '*.go' -o -name '*.rs' \\) $exclusion_filters 2>/dev/null" | wc -l)
|
||||||
|
local config_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.json' -o -name '*.yaml' -o -name '*.yml' -o -name '*.toml' \\) $exclusion_filters 2>/dev/null" | wc -l)
|
||||||
|
local doc_files=$(eval "find \"$target_path\" -maxdepth 1 -type f -name '*.md' $exclusion_filters 2>/dev/null" | wc -l)
|
||||||
|
|
||||||
|
structure_info+=" - Code files: $code_files\n"
|
||||||
|
structure_info+=" - Config files: $config_files\n"
|
||||||
|
structure_info+=" - Documentation: $doc_files\n"
|
||||||
|
|
||||||
|
printf "%b" "$structure_info"
|
||||||
|
}
|
||||||
|
|
||||||
|
# Calculate output path based on source path and project name
|
||||||
|
calculate_output_path() {
|
||||||
|
local source_path="$1"
|
||||||
|
local project_name="$2"
|
||||||
|
local project_root="$3"
|
||||||
|
|
||||||
|
# Get absolute path of source (normalize to Unix-style path)
|
||||||
|
local abs_source=$(cd "$source_path" && pwd)
|
||||||
|
|
||||||
|
# Normalize project root to same format
|
||||||
|
local norm_project_root=$(cd "$project_root" && pwd)
|
||||||
|
|
||||||
|
# Calculate relative path from project root
|
||||||
|
local rel_path="${abs_source#$norm_project_root}"
|
||||||
|
|
||||||
|
# Remove leading slash if present
|
||||||
|
rel_path="${rel_path#/}"
|
||||||
|
|
||||||
|
# If source is project root, use project name directly
|
||||||
|
if [ "$abs_source" = "$norm_project_root" ] || [ -z "$rel_path" ]; then
|
||||||
|
echo "$norm_project_root/.workflow/docs/$project_name"
|
||||||
|
else
|
||||||
|
echo "$norm_project_root/.workflow/docs/$project_name/$rel_path"
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
generate_module_docs() {
|
||||||
|
local strategy="$1"
|
||||||
|
local source_path="$2"
|
||||||
|
local project_name="$3"
|
||||||
|
local tool="${4:-gemini}"
|
||||||
|
local model="$5"
|
||||||
|
|
||||||
|
# Validate parameters
|
||||||
|
if [ -z "$strategy" ] || [ -z "$source_path" ] || [ -z "$project_name" ]; then
|
||||||
|
echo "❌ Error: Strategy, source path, and project name are required"
|
||||||
|
echo "Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]"
|
||||||
|
echo "Module strategies: full, single"
|
||||||
|
echo "Project strategies: project-readme, project-architecture, http-api"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Validate strategy
|
||||||
|
local valid_strategies=("full" "single" "project-readme" "project-architecture" "http-api")
|
||||||
|
local strategy_valid=false
|
||||||
|
for valid_strategy in "${valid_strategies[@]}"; do
|
||||||
|
if [ "$strategy" = "$valid_strategy" ]; then
|
||||||
|
strategy_valid=true
|
||||||
|
break
|
||||||
|
fi
|
||||||
|
done
|
||||||
|
|
||||||
|
if [ "$strategy_valid" = false ]; then
|
||||||
|
echo "❌ Error: Invalid strategy '$strategy'"
|
||||||
|
echo "Valid module strategies: full, single"
|
||||||
|
echo "Valid project strategies: project-readme, project-architecture, http-api"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [ ! -d "$source_path" ]; then
|
||||||
|
echo "❌ Error: Source directory '$source_path' does not exist"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Set default models if not specified
|
||||||
|
if [ -z "$model" ]; then
|
||||||
|
case "$tool" in
|
||||||
|
gemini)
|
||||||
|
model="gemini-2.5-flash"
|
||||||
|
;;
|
||||||
|
qwen)
|
||||||
|
model="coder-model"
|
||||||
|
;;
|
||||||
|
codex)
|
||||||
|
model="gpt5-codex"
|
||||||
|
;;
|
||||||
|
*)
|
||||||
|
model=""
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Build exclusion filters
|
||||||
|
local exclusion_filters=$(build_exclusion_filters)
|
||||||
|
|
||||||
|
# Get project root
|
||||||
|
local project_root=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
|
||||||
|
|
||||||
|
# Determine if this is a project-level strategy
|
||||||
|
local is_project_level=false
|
||||||
|
if [[ "$strategy" =~ ^project- ]] || [ "$strategy" = "http-api" ]; then
|
||||||
|
is_project_level=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Calculate output path
|
||||||
|
local output_path
|
||||||
|
if [ "$is_project_level" = true ]; then
|
||||||
|
# Project-level docs go to project root
|
||||||
|
if [ "$strategy" = "http-api" ]; then
|
||||||
|
output_path="$project_root/.workflow/docs/$project_name/api"
|
||||||
|
else
|
||||||
|
output_path="$project_root/.workflow/docs/$project_name"
|
||||||
|
fi
|
||||||
|
else
|
||||||
|
output_path=$(calculate_output_path "$source_path" "$project_name" "$project_root")
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Create output directory
|
||||||
|
mkdir -p "$output_path"
|
||||||
|
|
||||||
|
# Detect folder type (only for module-level strategies)
|
||||||
|
local folder_type=""
|
||||||
|
if [ "$is_project_level" = false ]; then
|
||||||
|
folder_type=$(detect_folder_type "$source_path" "$exclusion_filters")
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Load templates based on strategy
|
||||||
|
local api_template=""
|
||||||
|
local readme_template=""
|
||||||
|
local template_content=""
|
||||||
|
|
||||||
|
if [ "$is_project_level" = true ]; then
|
||||||
|
# Project-level templates
|
||||||
|
case "$strategy" in
|
||||||
|
project-readme)
|
||||||
|
local proj_readme_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-readme.txt"
|
||||||
|
if [ -f "$proj_readme_path" ]; then
|
||||||
|
template_content=$(cat "$proj_readme_path")
|
||||||
|
echo " 📋 Loaded Project README template: $(wc -l < "$proj_readme_path") lines"
|
||||||
|
fi
|
||||||
|
;;
|
||||||
|
project-architecture)
|
||||||
|
local arch_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-architecture.txt"
|
||||||
|
local examples_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-examples.txt"
|
||||||
|
if [ -f "$arch_path" ]; then
|
||||||
|
template_content=$(cat "$arch_path")
|
||||||
|
echo " 📋 Loaded Architecture template: $(wc -l < "$arch_path") lines"
|
||||||
|
fi
|
||||||
|
if [ -f "$examples_path" ]; then
|
||||||
|
template_content="$template_content
|
||||||
|
|
||||||
|
EXAMPLES TEMPLATE:
|
||||||
|
$(cat "$examples_path")"
|
||||||
|
echo " 📋 Loaded Examples template: $(wc -l < "$examples_path") lines"
|
||||||
|
fi
|
||||||
|
;;
|
||||||
|
http-api)
|
||||||
|
local api_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/api.txt"
|
||||||
|
if [ -f "$api_path" ]; then
|
||||||
|
template_content=$(cat "$api_path")
|
||||||
|
echo " 📋 Loaded HTTP API template: $(wc -l < "$api_path") lines"
|
||||||
|
fi
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
else
|
||||||
|
# Module-level templates
|
||||||
|
local api_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/api.txt"
|
||||||
|
local readme_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/module-readme.txt"
|
||||||
|
local nav_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/folder-navigation.txt"
|
||||||
|
|
||||||
|
if [ "$folder_type" = "code" ]; then
|
||||||
|
if [ -f "$api_template_path" ]; then
|
||||||
|
api_template=$(cat "$api_template_path")
|
||||||
|
echo " 📋 Loaded API template: $(wc -l < "$api_template_path") lines"
|
||||||
|
fi
|
||||||
|
if [ -f "$readme_template_path" ]; then
|
||||||
|
readme_template=$(cat "$readme_template_path")
|
||||||
|
echo " 📋 Loaded README template: $(wc -l < "$readme_template_path") lines"
|
||||||
|
fi
|
||||||
|
else
|
||||||
|
# Navigation folder uses navigation template
|
||||||
|
if [ -f "$nav_template_path" ]; then
|
||||||
|
readme_template=$(cat "$nav_template_path")
|
||||||
|
echo " 📋 Loaded Navigation template: $(wc -l < "$nav_template_path") lines"
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Scan directory structure (only for module-level strategies)
|
||||||
|
local structure_info=""
|
||||||
|
if [ "$is_project_level" = false ]; then
|
||||||
|
echo " 🔍 Scanning directory structure..."
|
||||||
|
structure_info=$(scan_directory_structure "$source_path" "$strategy")
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Prepare logging info
|
||||||
|
local module_name=$(basename "$source_path")
|
||||||
|
|
||||||
|
echo "⚡ Generating docs: $source_path → $output_path"
|
||||||
|
echo " Strategy: $strategy | Tool: $tool | Model: $model | Type: $folder_type"
|
||||||
|
echo " Output: $output_path"
|
||||||
|
|
||||||
|
# Build strategy-specific prompt
|
||||||
|
local final_prompt=""
|
||||||
|
|
||||||
|
# Project-level strategies
|
||||||
|
if [ "$strategy" = "project-readme" ]; then
|
||||||
|
final_prompt="PURPOSE: Generate comprehensive project overview documentation
|
||||||
|
|
||||||
|
PROJECT: $project_name
|
||||||
|
OUTPUT: Current directory (file will be moved to final location)
|
||||||
|
|
||||||
|
Read: @.workflow/docs/$project_name/**/*.md
|
||||||
|
|
||||||
|
Context: All module documentation files from the project
|
||||||
|
|
||||||
|
Generate ONE documentation file in current directory:
|
||||||
|
- README.md - Project root documentation
|
||||||
|
|
||||||
|
Template:
|
||||||
|
$template_content
|
||||||
|
|
||||||
|
Instructions:
|
||||||
|
- Create README.md in CURRENT DIRECTORY
|
||||||
|
- Synthesize information from all module docs
|
||||||
|
- Include project overview, getting started, and navigation
|
||||||
|
- Create clear module navigation with links
|
||||||
|
- Follow template structure exactly"
|
||||||
|
|
||||||
|
elif [ "$strategy" = "project-architecture" ]; then
|
||||||
|
final_prompt="PURPOSE: Generate system design and usage examples documentation
|
||||||
|
|
||||||
|
PROJECT: $project_name
|
||||||
|
OUTPUT: Current directory (files will be moved to final location)
|
||||||
|
|
||||||
|
Read: @.workflow/docs/$project_name/**/*.md
|
||||||
|
|
||||||
|
Context: All project documentation including module docs and project README
|
||||||
|
|
||||||
|
Generate TWO documentation files in current directory:
|
||||||
|
1. ARCHITECTURE.md - System architecture and design patterns
|
||||||
|
2. EXAMPLES.md - End-to-end usage examples
|
||||||
|
|
||||||
|
Template:
|
||||||
|
$template_content
|
||||||
|
|
||||||
|
Instructions:
|
||||||
|
- Create both ARCHITECTURE.md and EXAMPLES.md in CURRENT DIRECTORY
|
||||||
|
- Synthesize architectural patterns from module documentation
|
||||||
|
- Document system structure, module relationships, and design decisions
|
||||||
|
- Provide practical code examples and usage scenarios
|
||||||
|
- Follow template structure for both files"
|
||||||
|
|
||||||
|
elif [ "$strategy" = "http-api" ]; then
|
||||||
|
final_prompt="PURPOSE: Generate HTTP API reference documentation
|
||||||
|
|
||||||
|
PROJECT: $project_name
|
||||||
|
OUTPUT: Current directory (file will be moved to final location)
|
||||||
|
|
||||||
|
Read: @**/*.{ts,js,py,go,rs} @.workflow/docs/$project_name/**/*.md
|
||||||
|
|
||||||
|
Context: API route files and existing documentation
|
||||||
|
|
||||||
|
Generate ONE documentation file in current directory:
|
||||||
|
- README.md - HTTP API documentation (in api/ subdirectory)
|
||||||
|
|
||||||
|
Template:
|
||||||
|
$template_content
|
||||||
|
|
||||||
|
Instructions:
|
||||||
|
- Create README.md in CURRENT DIRECTORY
|
||||||
|
- Document all HTTP endpoints (routes, methods, parameters, responses)
|
||||||
|
- Include authentication requirements and error codes
|
||||||
|
- Provide request/response examples
|
||||||
|
- Follow template structure (Part B: HTTP API documentation)"
|
||||||
|
|
||||||
|
    # Module-level strategies
    elif [ "$strategy" = "full" ]; then
        # Full strategy: read all files, generate for each directory
        if [ "$folder_type" = "code" ]; then
            final_prompt="PURPOSE: Generate comprehensive API and module documentation

Directory Structure Analysis:
$structure_info

SOURCE: $source_path
OUTPUT: Current directory (files will be moved to final location)

Read: @**/*

Generate TWO documentation files in current directory:
1. API.md - Code API documentation (functions, classes, interfaces)
Template:
$api_template

2. README.md - Module overview documentation
Template:
$readme_template

Instructions:
- Generate both API.md and README.md in CURRENT DIRECTORY
- If subdirectories contain code files, generate their docs too (recursive)
- Work bottom-up: deepest directories first
- Follow template structure exactly
- Use structure analysis for context"
        else
            # Navigation folder - README only
            final_prompt="PURPOSE: Generate navigation documentation for folder structure

Directory Structure Analysis:
$structure_info

SOURCE: $source_path
OUTPUT: Current directory (file will be moved to final location)

Read: @**/*

Generate ONE documentation file in current directory:
- README.md - Navigation and folder overview

Template:
$readme_template

Instructions:
- Create README.md in CURRENT DIRECTORY
- Focus on folder structure and navigation
- Link to subdirectory documentation
- Use structure analysis for context"
        fi
    else
        # Single strategy: read current + child docs only
        if [ "$folder_type" = "code" ]; then
            final_prompt="PURPOSE: Generate API and module documentation for current directory

Directory Structure Analysis:
$structure_info

SOURCE: $source_path
OUTPUT: Current directory (files will be moved to final location)

Read: @*/API.md @*/README.md @*.ts @*.tsx @*.js @*.jsx @*.py @*.sh @*.go @*.rs @*.md @*.json @*.yaml @*.yml

Generate TWO documentation files in current directory:
1. API.md - Code API documentation
Template:
$api_template

2. README.md - Module overview
Template:
$readme_template

Instructions:
- Generate both API.md and README.md in CURRENT DIRECTORY
- Reference child documentation, do not duplicate
- Follow template structure
- Use structure analysis for current directory context"
        else
            # Navigation folder - README only
            final_prompt="PURPOSE: Generate navigation documentation

Directory Structure Analysis:
$structure_info

SOURCE: $source_path
OUTPUT: Current directory (file will be moved to final location)

Read: @*/API.md @*/README.md @*.md

Generate ONE documentation file in current directory:
- README.md - Navigation and overview

Template:
$readme_template

Instructions:
- Create README.md in CURRENT DIRECTORY
- Link to child documentation
- Use structure analysis for navigation context"
        fi
    fi

    # Execute documentation generation
    local start_time=$(date +%s)
    echo " 🔄 Starting documentation generation..."

    if cd "$source_path" 2>/dev/null; then
        local tool_result=0

        # Store current output path for CLI context
        export DOC_OUTPUT_PATH="$output_path"

        # Record git HEAD before CLI execution (to detect unwanted auto-commits)
        local git_head_before=""
        if git rev-parse --git-dir >/dev/null 2>&1; then
            git_head_before=$(git rev-parse HEAD 2>/dev/null)
        fi

        # Execute with selected tool
        case "$tool" in
            qwen)
                if [ "$model" = "coder-model" ]; then
                    qwen -p "$final_prompt" --yolo 2>&1
                else
                    qwen -p "$final_prompt" -m "$model" --yolo 2>&1
                fi
                tool_result=$?
                ;;
            codex)
                codex --full-auto exec "$final_prompt" -m "$model" --skip-git-repo-check -s danger-full-access 2>&1
                tool_result=$?
                ;;
            gemini)
                gemini -p "$final_prompt" -m "$model" --yolo 2>&1
                tool_result=$?
                ;;
            *)
                echo " ⚠️ Unknown tool: $tool, defaulting to gemini"
                gemini -p "$final_prompt" -m "$model" --yolo 2>&1
                tool_result=$?
                ;;
        esac

        # Move generated files to output directory
        local docs_created=0
        local moved_files=""

        if [ $tool_result -eq 0 ]; then
            if [ "$is_project_level" = true ]; then
                # Project-level documentation files
                case "$strategy" in
                    project-readme)
                        if [ -f "README.md" ]; then
                            mv "README.md" "$output_path/README.md" 2>/dev/null && {
                                docs_created=$((docs_created + 1))
                                moved_files+="README.md "
                            }
                        fi
                        ;;
                    project-architecture)
                        if [ -f "ARCHITECTURE.md" ]; then
                            mv "ARCHITECTURE.md" "$output_path/ARCHITECTURE.md" 2>/dev/null && {
                                docs_created=$((docs_created + 1))
                                moved_files+="ARCHITECTURE.md "
                            }
                        fi
                        if [ -f "EXAMPLES.md" ]; then
                            mv "EXAMPLES.md" "$output_path/EXAMPLES.md" 2>/dev/null && {
                                docs_created=$((docs_created + 1))
                                moved_files+="EXAMPLES.md "
                            }
                        fi
                        ;;
                    http-api)
                        if [ -f "README.md" ]; then
                            mv "README.md" "$output_path/README.md" 2>/dev/null && {
                                docs_created=$((docs_created + 1))
                                moved_files+="api/README.md "
                            }
                        fi
                        ;;
                esac
            else
                # Module-level documentation files
                # Check and move API.md if it exists
                if [ "$folder_type" = "code" ] && [ -f "API.md" ]; then
                    mv "API.md" "$output_path/API.md" 2>/dev/null && {
                        docs_created=$((docs_created + 1))
                        moved_files+="API.md "
                    }
                fi

                # Check and move README.md if it exists
                if [ -f "README.md" ]; then
                    mv "README.md" "$output_path/README.md" 2>/dev/null && {
                        docs_created=$((docs_created + 1))
                        moved_files+="README.md "
                    }
                fi
            fi
        fi

        # Check if CLI tool auto-committed (and revert if needed)
        if [ -n "$git_head_before" ]; then
            local git_head_after=$(git rev-parse HEAD 2>/dev/null)
            if [ "$git_head_before" != "$git_head_after" ]; then
                echo " ⚠️ Detected unwanted auto-commit by CLI tool, reverting..."
                git reset --soft "$git_head_before" 2>/dev/null
                echo " ✅ Auto-commit reverted (files remain staged)"
            fi
        fi

        if [ $docs_created -gt 0 ]; then
            local end_time=$(date +%s)
            local duration=$((end_time - start_time))
            echo " ✅ Generated $docs_created doc(s) in ${duration}s: $moved_files"
            cd - > /dev/null
            return 0
        else
            echo " ❌ Documentation generation failed for $source_path"
            cd - > /dev/null
            return 1
        fi
    else
        echo " ❌ Cannot access directory: $source_path"
        return 1
    fi
}

# Execute function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    # Show help if no arguments or help requested
    if [ $# -eq 0 ] || [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
        echo "Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]"
        echo ""
        echo "Module-Level Strategies:"
        echo "  full   - Generate docs for all subdirectories with code"
        echo "  single - Generate docs only for current directory"
        echo ""
        echo "Project-Level Strategies:"
        echo "  project-readme       - Generate project root README.md"
        echo "  project-architecture - Generate ARCHITECTURE.md + EXAMPLES.md"
        echo "  http-api             - Generate HTTP API documentation (api/README.md)"
        echo ""
        echo "Tools: gemini (default), qwen, codex"
        echo "Models: Use tool defaults if not specified"
        echo ""
        echo "Module Examples:"
        echo "  ./generate_module_docs.sh full ./src/auth myproject"
        echo "  ./generate_module_docs.sh single ./components myproject gemini"
        echo ""
        echo "Project Examples:"
        echo "  ./generate_module_docs.sh project-readme . myproject"
        echo "  ./generate_module_docs.sh project-architecture . myproject qwen"
        echo "  ./generate_module_docs.sh http-api . myproject"
        exit 0
    fi

    generate_module_docs "$@"
fi
ARCHITECTURE.md (deleted, 567 lines)
@@ -1,567 +0,0 @@
# 🏗️ Claude Code Workflow (CCW) - Architecture Overview

This document provides a high-level overview of CCW's architecture, design principles, and system components.

---

## 📋 Table of Contents

- [Design Philosophy](#design-philosophy)
- [System Architecture](#system-architecture)
- [Core Components](#core-components)
- [Data Flow](#data-flow)
- [Multi-Agent System](#multi-agent-system)
- [CLI Tool Integration](#cli-tool-integration)
- [Session Management](#session-management)
- [Memory System](#memory-system)

---

## 🎯 Design Philosophy

CCW is built on several core design principles that differentiate it from traditional AI-assisted development tools:

### 1. **Context-First Architecture**
- Pre-defined context gathering eliminates execution uncertainty
- Agents receive the correct information *before* implementation
- Context is loaded dynamically based on task requirements

### 2. **JSON-First State Management**
- Task states live in `.task/IMPL-*.json` files as the single source of truth
- Markdown documents are read-only generated views
- Eliminates state drift and synchronization complexity
- Enables programmatic orchestration
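Because every task's state is plain JSON on disk, orchestration logic can query and update it with ordinary shell tools. A minimal sketch, assuming the one-key-per-line layout used in the Task Manager example below (the file path and GNU `sed -i` usage are illustrative):

```shell
# Create a sample task file in a temporary directory (path is illustrative)
tmpdir=$(mktemp -d)
cat > "$tmpdir/IMPL-1.json" <<'EOF'
{
  "id": "IMPL-1",
  "title": "Sample task",
  "status": "pending"
}
EOF

# Read the status field (assumes one key per line, as in the examples)
task_status=$(sed -n 's/.*"status": *"\([^"]*\)".*/\1/p' "$tmpdir/IMPL-1.json")
echo "IMPL-1 status: $task_status"

# Flip the task to active by rewriting the field in place (GNU sed -i)
sed -i 's/"status": *"pending"/"status": "active"/' "$tmpdir/IMPL-1.json"
```

A real orchestrator would use a proper JSON parser such as `jq`; the point is only that no database or server sits between the workflow and its state.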
### 3. **Autonomous Multi-Phase Orchestration**
- Commands chain specialized sub-commands and agents
- Automates complex workflows with zero user intervention
- Each phase validates its output before proceeding

### 4. **Multi-Model Strategy**
- Leverages unique strengths of different AI models
- Gemini for analysis and exploration
- Codex for implementation
- Qwen for architecture and planning

### 5. **Hierarchical Memory System**
- 4-layer documentation system (CLAUDE.md files)
- Provides context at the appropriate level of abstraction
- Prevents information overload

### 6. **Specialized Role-Based Agents**
- Suite of agents mirrors a real software team
- Each agent has specific responsibilities
- Agents collaborate to complete complex tasks

---

## 🏛️ System Architecture

```mermaid
graph TB
    subgraph "User Interface Layer"
        CLI[Slash Commands]
        CHAT[Natural Language]
    end

    subgraph "Orchestration Layer"
        WF[Workflow Engine]
        SM[Session Manager]
        TM[Task Manager]
    end

    subgraph "Agent Layer"
        AG1[@code-developer]
        AG2[@test-fix-agent]
        AG3[@ui-design-agent]
        AG4[@cli-execution-agent]
        AG5[More Agents...]
    end

    subgraph "Tool Layer"
        GEMINI[Gemini CLI]
        QWEN[Qwen CLI]
        CODEX[Codex CLI]
        BASH[Bash/System]
    end

    subgraph "Data Layer"
        JSON[Task JSON Files]
        MEM[CLAUDE.md Memory]
        STATE[Session State]
    end

    CLI --> WF
    CHAT --> WF
    WF --> SM
    WF --> TM
    SM --> STATE
    TM --> JSON
    WF --> AG1
    WF --> AG2
    WF --> AG3
    WF --> AG4
    AG1 --> GEMINI
    AG1 --> QWEN
    AG1 --> CODEX
    AG2 --> BASH
    AG3 --> GEMINI
    AG4 --> CODEX
    GEMINI --> MEM
    QWEN --> MEM
    CODEX --> JSON
```

---

## 🔧 Core Components

### 1. **Workflow Engine**

The workflow engine orchestrates complex development processes through multiple phases:

- **Planning Phase**: Analyzes requirements and generates implementation plans
- **Execution Phase**: Coordinates agents to implement tasks
- **Verification Phase**: Validates implementation quality
- **Testing Phase**: Generates and executes tests
- **Review Phase**: Performs code review and quality analysis

**Key Features**:
- Multi-phase orchestration
- Automatic session management
- Context propagation between phases
- Quality gates at each phase transition

### 2. **Session Manager**

Manages isolated workflow contexts:

```
.workflow/
├── active/              # Active sessions
│   ├── WFS-user-auth/   # User authentication session
│   ├── WFS-payment/     # Payment integration session
│   └── WFS-dashboard/   # Dashboard redesign session
└── archives/            # Completed sessions
    └── WFS-old-feature/ # Archived session
```

**Capabilities**:
- Directory-based session tracking
- Session state persistence
- Parallel session support
- Session archival and resumption

### 3. **Task Manager**

Handles hierarchical task structures:

```json
{
  "id": "IMPL-1.2",
  "title": "Implement JWT authentication",
  "status": "pending",
  "meta": {
    "type": "feature",
    "agent": "code-developer"
  },
  "context": {
    "requirements": ["JWT authentication", "OAuth2 support"],
    "focus_paths": ["src/auth", "tests/auth"],
    "acceptance": ["JWT validation works", "OAuth flow complete"]
  },
  "flow_control": {
    "pre_analysis": [...],
    "implementation_approach": {...}
  }
}
```

**Features**:
- JSON-first data model
- Hierarchical task decomposition (max 2 levels)
- Dynamic subtask creation
- Dependency tracking
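Since each task is a standalone file, a status sweep is just a loop over the task directory. A sketch, assuming the flat `"status": "..."` layout above (the directory path and file contents are illustrative):

```shell
# Build a throwaway task directory with three tasks (illustrative data)
taskdir=$(mktemp -d)
printf '{ "id": "IMPL-1", "status": "completed" }\n' > "$taskdir/IMPL-1.json"
printf '{ "id": "IMPL-2", "status": "pending" }\n'   > "$taskdir/IMPL-2.json"
printf '{ "id": "IMPL-2.1", "status": "pending" }\n' > "$taskdir/IMPL-2.1.json"

# Collect the IDs of tasks that are still pending
pending=""
for f in "$taskdir"/IMPL-*.json; do
  if grep -q '"status": "pending"' "$f"; then
    id=$(sed -n 's/.*"id": *"\([^"]*\)".*/\1/p' "$f")
    pending="$pending$id "
  fi
done
echo "Pending tasks: $pending"
```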
### 4. **Memory System**

Four-layer hierarchical documentation:

```
CLAUDE.md (Project root - high-level overview)
├── src/CLAUDE.md (Source layer - module summaries)
│   ├── auth/CLAUDE.md (Module layer - component details)
│   │   └── jwt/CLAUDE.md (Component layer - implementation details)
```

**Memory Commands**:
- `/memory:update-full` - Complete project rebuild
- `/memory:update-related` - Incremental updates for changed modules
- `/memory:load` - Quick context loading for specific tasks

---

## 🔄 Data Flow

### Typical Workflow Execution Flow

```mermaid
sequenceDiagram
    participant User
    participant CLI
    participant Workflow
    participant Agent
    participant Tool
    participant Data

    User->>CLI: /workflow:plan "Feature description"
    CLI->>Workflow: Initialize planning workflow
    Workflow->>Data: Create session
    Workflow->>Agent: @action-planning-agent
    Agent->>Tool: gemini-wrapper analyze
    Tool->>Data: Update CLAUDE.md
    Agent->>Data: Generate IMPL-*.json
    Workflow->>User: Plan complete

    User->>CLI: /workflow:execute
    CLI->>Workflow: Start execution
    Workflow->>Data: Load tasks from JSON
    Workflow->>Agent: @code-developer
    Agent->>Tool: Read context
    Agent->>Tool: Implement code
    Agent->>Data: Update task status
    Workflow->>User: Execution complete
```

### Context Flow

```mermaid
graph LR
    A[User Request] --> B[Context Gathering]
    B --> C[CLAUDE.md Memory]
    B --> D[Task JSON]
    B --> E[Session State]
    C --> F[Agent Context]
    D --> F
    E --> F
    F --> G[Tool Execution]
    G --> H[Implementation]
    H --> I[Update State]
```

---

## 🤖 Multi-Agent System

### Agent Specialization

CCW uses specialized agents for different types of tasks:

| Agent | Responsibility | Tools Used |
|-------|---------------|------------|
| **@code-developer** | Code implementation | Gemini, Qwen, Codex, Bash |
| **@test-fix-agent** | Test generation and fixing | Codex, Bash |
| **@ui-design-agent** | UI design and prototyping | Gemini, Claude Vision |
| **@action-planning-agent** | Task planning and decomposition | Gemini |
| **@cli-execution-agent** | Autonomous CLI task handling | Codex, Gemini, Qwen |
| **@cli-explore-agent** | Codebase exploration | ripgrep, find |
| **@context-search-agent** | Context gathering | Grep, Glob |
| **@doc-generator** | Documentation generation | Gemini, Qwen |
| **@memory-bridge** | Memory system updates | Gemini, Qwen |
| **@universal-executor** | General task execution | All tools |

### Agent Communication

Agents communicate through:
1. **Shared Session State**: All agents can read/write session JSON
2. **Task JSON Files**: Tasks contain context for agent handoffs
3. **CLAUDE.md Memory**: Shared project knowledge base
4. **Flow Control**: Pre-analysis and implementation approach definitions

---

## 🛠️ CLI Tool Integration

### Three CLI Tools

CCW integrates three external AI tools, each optimized for specific tasks:

#### 1. **Gemini CLI** - Deep Analysis
- **Strengths**: Pattern recognition, architecture understanding, comprehensive analysis
- **Use Cases**:
  - Codebase exploration
  - Architecture analysis
  - Bug diagnosis
  - Memory system updates

#### 2. **Qwen CLI** - Architecture & Planning
- **Strengths**: System design, code generation, architectural planning
- **Use Cases**:
  - Architecture design
  - System planning
  - Code generation
  - Refactoring strategies

#### 3. **Codex CLI** - Autonomous Development
- **Strengths**: Self-directed implementation, error fixing, test generation
- **Use Cases**:
  - Feature implementation
  - Bug fixes
  - Test generation
  - Autonomous development

### Tool Selection Strategy

CCW automatically selects the best tool based on task type:

```
Analysis Task       → Gemini CLI
Planning Task       → Qwen CLI
Implementation Task → Codex CLI
```

Users can override with the `--tool` parameter:
```bash
/cli:analyze --tool codex "Analyze authentication flow"
```

---

## 📦 Session Management

### Session Lifecycle

```mermaid
stateDiagram-v2
    [*] --> Creating: /workflow:session:start
    Creating --> Active: Session initialized
    Active --> Paused: User pauses
    Paused --> Active: /workflow:session:resume
    Active --> Completed: /workflow:session:complete
    Completed --> Archived: Move to archives/
    Archived --> [*]
```

### Session Structure

```
.workflow/active/WFS-feature-name/
├── workflow-session.json # Session metadata
├── .task/                # Task JSON files
│   ├── IMPL-1.json
│   ├── IMPL-1.1.json
│   └── IMPL-2.json
├── .chat/                # Chat logs
├── brainstorming/        # Brainstorm artifacts
│   ├── guidance-specification.md
│   └── system-architect/analysis.md
└── artifacts/            # Generated files
    ├── IMPL_PLAN.md
    └── verification-report.md
```

---

## 💾 Memory System

### Hierarchical CLAUDE.md Structure

The memory system maintains project knowledge across four layers:

#### **Layer 1: Project Root**
```markdown
# Project Overview
- High-level architecture
- Technology stack
- Key design decisions
- Entry points
```

#### **Layer 2: Source Directory**
```markdown
# Source Code Structure
- Module summaries
- Dependency relationships
- Common patterns
```

#### **Layer 3: Module Directory**
```markdown
# Module Details
- Component responsibilities
- API interfaces
- Internal structure
```

#### **Layer 4: Component Directory**
```markdown
# Component Implementation
- Function signatures
- Implementation details
- Usage examples
```

### Memory Update Strategies

#### Full Update (`/memory:update-full`)
- Rebuilds entire project documentation
- Uses layer-based execution (Layer 3 → 1)
- Batch processing (4 modules/agent)
- Fallback mechanism (gemini → qwen → codex)
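The fallback mechanism amounts to trying each tool in order and stopping at the first success. A sketch of that loop; `gemini`/`qwen`/`codex` are the real CLI names used throughout this document, but the `run_with_fallback` helper is hypothetical, and real invocations use per-tool flag sets (see the `case` statement in the script earlier), while this sketch passes the prompt as one bare argument:

```shell
# Try each candidate tool in order; print and return the first that
# exists on PATH and exits successfully.
run_with_fallback() {
  prompt="$1"; shift
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1 && "$tool" "$prompt" >/dev/null 2>&1; then
      echo "$tool"
      return 0
    fi
  done
  return 1   # every tool missing or failed
}

# Intended use: run_with_fallback "$final_prompt" gemini qwen codex
```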
#### Incremental Update (`/memory:update-related`)
- Updates only changed modules
- Analyzes git changes
- Efficient for daily development
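Changed-module detection can be sketched with `git diff`: list the files modified since the last commit and reduce them to their top-level module directories (the repository layout below is illustrative):

```shell
# Set up a throwaway repo with two modules (layout is illustrative)
repo=$(mktemp -d)
cd "$repo"
git init -q
mkdir -p src/auth src/ui
echo 'export {};' > src/auth/jwt.ts
echo 'export {};' > src/ui/button.tsx
git add .
git -c user.name=demo -c user.email=demo@example.com commit -qm "baseline"

# Touch one module, then derive the module dirs with uncommitted changes
echo '// touched' >> src/auth/jwt.ts
changed_modules=$(git diff --name-only | sed -n 's|^\(src/[^/]*\)/.*|\1|p' | sort -u)
echo "Changed modules: $changed_modules"
```

Only `src/auth` appears, so an incremental update would regenerate that module's CLAUDE.md and leave `src/ui` untouched.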
#### Quick Load (`/memory:load`)
- No file updates
- Task-specific context gathering
- Returns JSON context package
- Fast context injection

---

## 🔐 Quality Assurance

### Quality Gates

CCW enforces quality at multiple levels:

1. **Planning Phase**:
   - Requirements coverage check
   - Dependency validation
   - Task specification quality assessment

2. **Execution Phase**:
   - Context validation before implementation
   - Pattern consistency checks
   - Test generation

3. **Review Phase**:
   - Code quality analysis
   - Security review
   - Architecture review

### Verification Commands

- `/workflow:action-plan-verify` - Validates plan quality before execution
- `/workflow:tdd-verify` - Verifies TDD cycle compliance
- `/workflow:review` - Post-implementation review

---

## 🚀 Performance Optimizations

### 1. **Lazy Loading**
- Files created only when needed
- On-demand document generation
- Minimal upfront cost

### 2. **Parallel Execution**
- Independent tasks run concurrently
- Multi-agent parallel brainstorming
- Batch processing for memory updates
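Fanning out independent work needs nothing beyond shell job control. A sketch where the per-module job is a stand-in `sleep` (module names and output layout are illustrative):

```shell
# Run one background job per independent module, then wait for all of them
outdir=$(mktemp -d)
for module in auth payments dashboard; do
  (
    # Stand-in for real per-module work (doc generation, task execution, ...)
    sleep 1
    echo "done" > "$outdir/$module.status"
  ) &
done
wait   # blocks until every background job has finished
echo "completed: $(ls "$outdir" | wc -l) modules"
```

The three jobs overlap, so total wall time is roughly one `sleep` instead of three; the same shape applies to batched memory updates.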
### 3. **Context Caching**
- CLAUDE.md acts as knowledge cache
- Reduces redundant analysis
- Faster context retrieval

### 4. **Atomic Session Management**
- Ultra-fast session switching (<10ms)
- Simple file marker system
- No database overhead
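A file-marker switch can be made atomic with a single rename: write the new marker to a temp file, then `mv` it into place, so readers never observe a partial write. A sketch of the idea only; the marker filename and layout here are illustrative, not CCW's actual on-disk format:

```shell
# Throwaway workflow root with two sessions (illustrative layout)
wf=$(mktemp -d)
mkdir -p "$wf/active/WFS-user-auth" "$wf/active/WFS-payment"

printf 'WFS-payment\n' > "$wf/.current.tmp"   # stage the new marker
mv "$wf/.current.tmp" "$wf/.current"          # rename is atomic within one filesystem

current_session=$(cat "$wf/.current")
echo "active session: $current_session"
```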
---

## 📊 Scalability

### Horizontal Scalability

- **Multiple Sessions**: Run parallel workflows for different features
- **Team Collaboration**: Session-based isolation prevents conflicts
- **Incremental Updates**: Only update affected modules

### Vertical Scalability

- **Hierarchical Tasks**: Efficient task decomposition (max 2 levels)
- **Selective Context**: Load only relevant context for each task
- **Batch Processing**: Process multiple modules per agent invocation

---

## 🔮 Extensibility

### Adding New Agents

Create agent definition in `.claude/agents/`:

```markdown
# Agent Name

## Role
Agent description

## Tools Available
- Tool 1
- Tool 2

## Prompt
Agent instructions...
```

### Adding New Commands

Create command in `.claude/commands/`:

```bash
#!/usr/bin/env bash
# Command implementation
```

### Custom Workflows

Combine existing commands to create custom workflows:

```bash
/workflow:brainstorm:auto-parallel "Topic"
/workflow:plan
/workflow:action-plan-verify
/workflow:execute
/workflow:review
```

---

## 🎓 Best Practices

### For Users

1. **Keep Memory Updated**: Run `/memory:update-related` after major changes
2. **Use Quality Gates**: Run `/workflow:action-plan-verify` before execution
3. **Session Management**: Complete sessions with `/workflow:session:complete`
4. **Tool Selection**: Let CCW auto-select tools unless you have specific needs

### For Developers

1. **Follow JSON-First**: Never modify markdown documents directly
2. **Agent Context**: Provide complete context in task JSON
3. **Error Handling**: Implement graceful fallbacks
4. **Testing**: Test agents independently before integration

---

## 📚 Further Reading

- [Getting Started Guide](GETTING_STARTED.md) - Quick start tutorial
- [Command Reference](COMMAND_REFERENCE.md) - All available commands
- [Command Specification](COMMAND_SPEC.md) - Detailed command specs
- [Workflow Diagrams](WORKFLOW_DIAGRAMS.md) - Visual workflow representations
- [Contributing Guide](CONTRIBUTING.md) - How to contribute
- [Examples](EXAMPLES.md) - Real-world use cases

---

**Last Updated**: 2025-11-20
**Version**: 5.8.1
EXAMPLES.md (deleted, 823 lines)
@@ -1,823 +0,0 @@
# 📖 Claude Code Workflow - Real-World Examples
|
|
||||||
|
|
||||||
This document provides practical, real-world examples of using CCW for common development tasks.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📋 Table of Contents
|
|
||||||
|
|
||||||
- [Quick Start Examples](#quick-start-examples)
|
|
||||||
- [Web Development](#web-development)
|
|
||||||
- [API Development](#api-development)
|
|
||||||
- [Testing & Quality Assurance](#testing--quality-assurance)
|
|
||||||
- [Refactoring](#refactoring)
|
|
||||||
- [UI/UX Design](#uiux-design)
|
|
||||||
- [Bug Fixes](#bug-fixes)
|
|
||||||
- [Documentation](#documentation)
|
|
||||||
- [DevOps & Automation](#devops--automation)
|
|
||||||
- [Complex Projects](#complex-projects)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🚀 Quick Start Examples
|
|
||||||
|
|
||||||
### Example 1: Simple Express API
|
|
||||||
|
|
||||||
**Objective**: Create a basic Express.js API with CRUD operations
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Option 1: Lite workflow (fastest)
|
|
||||||
/workflow:lite-plan "Create Express API with CRUD endpoints for users (GET, POST, PUT, DELETE)"
|
|
||||||
|
|
||||||
# Option 2: Full workflow (more structured)
|
|
||||||
/workflow:plan "Create Express API with CRUD endpoints for users"
|
|
||||||
/workflow:execute
|
|
||||||
```
|
|
||||||
|
|
||||||
**What CCW does**:
|
|
||||||
1. Analyzes your project structure
|
|
||||||
2. Creates Express app setup
|
|
||||||
3. Implements CRUD routes
|
|
||||||
4. Adds error handling middleware
|
|
||||||
5. Creates basic tests
|
|
||||||
|
|
||||||
**Result**:
|
|
||||||
```
|
|
||||||
src/
|
|
||||||
├── app.js # Express app setup
|
|
||||||
├── routes/
|
|
||||||
│ └── users.js # User CRUD routes
|
|
||||||
├── controllers/
|
|
||||||
│ └── userController.js
|
|
||||||
└── tests/
|
|
||||||
└── users.test.js
|
|
||||||
```
|
|
||||||
|
|
||||||
### Example 2: React Component

**Objective**: Create a React login form component

```bash
/workflow:lite-plan "Create a React login form component with email and password fields, validation, and submit handling"
```

**What CCW does**:
1. Creates LoginForm component
2. Adds form validation (email format, password requirements)
3. Implements state management
4. Adds error display
5. Creates component tests

**Result**:
```jsx
// components/LoginForm.jsx
import React, { useState } from 'react';

export function LoginForm({ onSubmit }) {
  const [email, setEmail] = useState('');
  const [password, setPassword] = useState('');
  const [errors, setErrors] = useState({});

  // ... validation and submit logic
}
```
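The elided validation logic could, for instance, check email format and a minimum password length. The exact rules CCW generates will vary; the helper below is a minimal sketch with assumed rules:

```javascript
// Hypothetical validation helper the form could call before submitting.
// The regex and length rule are illustrative assumptions.
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function validateLogin({ email, password }) {
  const errors = {};
  if (!EMAIL_RE.test(email)) {
    errors.email = 'Enter a valid email address';
  }
  if (!password || password.length < 8) {
    errors.password = 'Password must be at least 8 characters';
  }
  return errors; // an empty object means the form may submit
}
```

Inside the component, `setErrors(validateLogin({ email, password }))` would run on submit, and `onSubmit` would only fire when the returned object is empty.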
---

## 🌐 Web Development

### Example 3: Full-Stack Todo Application

**Objective**: Build a complete todo application with React frontend and Express backend

#### Phase 1: Planning with Brainstorming

```bash
# Multi-perspective analysis
/workflow:brainstorm:auto-parallel "Full-stack todo application with user authentication, real-time updates, and dark mode"

# Review brainstorming artifacts
# Then create implementation plan
/workflow:plan

# Verify plan quality
/workflow:action-plan-verify
```

**Brainstorming generates**:
- System architecture analysis
- UI/UX design recommendations
- Data model design
- Security considerations
- API design patterns

#### Phase 2: Implementation

```bash
# Execute the plan
/workflow:execute

# Monitor progress
/workflow:status
```

**What CCW implements**:

**Backend** (`server/`):
- Express server setup
- MongoDB/PostgreSQL integration
- JWT authentication
- RESTful API endpoints
- WebSocket for real-time updates
- Input validation middleware

**Frontend** (`client/`):
- React app with routing
- Authentication flow
- Todo CRUD operations
- Real-time updates via WebSocket
- Dark mode toggle
- Responsive design

#### Phase 3: Testing

```bash
# Generate comprehensive tests
/workflow:test-gen WFS-todo-application

# Execute test tasks
/workflow:execute

# Run iterative test-fix cycle
/workflow:test-cycle-execute
```

**Tests created**:
- Unit tests for components
- Integration tests for API
- E2E tests for user flows
- Authentication tests
- WebSocket connection tests

#### Phase 4: Quality Review

```bash
# Security review
/workflow:review --type security

# Architecture review
/workflow:review --type architecture

# General quality review
/workflow:review
```

**Complete session**:
```bash
/workflow:session:complete
```
---

### Example 4: E-commerce Product Catalog

**Objective**: Build product catalog with search, filters, and pagination

```bash
# Start with UI design exploration
/workflow:ui-design:explore-auto --prompt "Modern e-commerce product catalog with grid layout, filters sidebar, and search bar" --targets "catalog,product-card" --style-variants 3

# Review designs in compare.html
# Sync selected designs
/workflow:ui-design:design-sync --session <session-id> --selected-prototypes "catalog-v2,product-card-v1"

# Create implementation plan
/workflow:plan

# Execute
/workflow:execute
```

**Features implemented**:
- Product grid with responsive layout
- Search functionality with debounce
- Category/price/rating filters
- Pagination with infinite scroll option
- Product card with image, title, price, rating
- Sort options (price, popularity, newest)
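The filter and sort behaviors above boil down to a small query function over the product list. As a sketch (field names such as `sales` and `createdAt` are assumptions about the data model, not part of the generated code):

```javascript
// Illustrative catalog query: filter by category and price range, then sort.
function queryProducts(products, { category, minPrice = 0, maxPrice = Infinity, sort = 'newest' } = {}) {
  const comparators = {
    price: (a, b) => a.price - b.price,
    popularity: (a, b) => b.sales - a.sales,
    newest: (a, b) => b.createdAt - a.createdAt,
  };
  return products
    .filter(p => !category || p.category === category)
    .filter(p => p.price >= minPrice && p.price <= maxPrice)
    .sort(comparators[sort] ?? comparators.newest); // sorts the filtered copy
}
```

The debounced search box would call a function like this (or a server endpoint with the same parameters) after the user stops typing.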
---

## 🔌 API Development

### Example 5: RESTful API with Authentication

**Objective**: Create RESTful API with JWT authentication and role-based access control

```bash
# Detailed planning
/workflow:plan "RESTful API with JWT authentication, role-based access control (admin, user), and protected endpoints for posts resource"

# Verify plan
/workflow:action-plan-verify

# Execute
/workflow:execute
```

**Implementation includes**:

**Authentication**:
```javascript
// routes/auth.js
POST /api/auth/register
POST /api/auth/login
POST /api/auth/refresh
POST /api/auth/logout
```

**Protected Resources**:
```javascript
// routes/posts.js
GET    /api/posts          # Public
GET    /api/posts/:id      # Public
POST   /api/posts          # Authenticated
PUT    /api/posts/:id      # Authenticated (owner or admin)
DELETE /api/posts/:id      # Authenticated (owner or admin)
```

**Middleware**:
- `authenticate` - Verifies JWT token
- `authorize(['admin'])` - Role-based access
- `validateRequest` - Input validation
- `errorHandler` - Centralized error handling
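Express-style middleware are plain `(req, res, next)` functions, so the `authorize` factory in the list above can be sketched without any framework. This is an illustrative shape (it assumes an earlier `authenticate` middleware has attached `req.user`), not the exact code CCW produces:

```javascript
// Sketch of a role-based authorization middleware factory.
// Assumes `req.user` was set by a preceding `authenticate` middleware.
function authorize(allowedRoles) {
  return function (req, res, next) {
    const role = req.user && req.user.role;
    if (!role || !allowedRoles.includes(role)) {
      res.status(403).json({ error: 'Forbidden' });
      return; // stop the chain: do not call next()
    }
    next();
  };
}
```

A route would then use it as `router.delete('/api/posts/:id', authenticate, authorize(['admin']), handler)`.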
### Example 6: GraphQL API

**Objective**: Convert REST API to GraphQL

```bash
# Analyze existing REST API
/cli:analyze "Analyze REST API structure in src/routes/"

# Plan GraphQL migration
/workflow:plan "Migrate REST API to GraphQL with queries, mutations, and subscriptions for posts and users"

# Execute migration
/workflow:execute
```

**GraphQL schema created**:
```graphql
type Query {
  posts(limit: Int, offset: Int): [Post!]!
  post(id: ID!): Post
  user(id: ID!): User
}

type Mutation {
  createPost(input: CreatePostInput!): Post!
  updatePost(id: ID!, input: UpdatePostInput!): Post!
  deletePost(id: ID!): Boolean!
}

type Subscription {
  postCreated: Post!
  postUpdated: Post!
}
```
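Each schema field is backed by a resolver function taking `(parent, args)` (plus context and info in a full setup). A minimal sketch of the resolver map, with a faked data layer standing in for the real database:

```javascript
// Illustrative resolvers for the schema above; the in-memory `db`
// and id scheme are assumptions, not the migrated code itself.
const db = {
  posts: [
    { id: '1', title: 'Hello', authorId: '10' },
    { id: '2', title: 'GraphQL', authorId: '10' },
  ],
};

const resolvers = {
  Query: {
    posts: (_parent, { limit = 10, offset = 0 }) =>
      db.posts.slice(offset, offset + limit),
    post: (_parent, { id }) => db.posts.find(p => p.id === id) ?? null,
  },
  Mutation: {
    createPost: (_parent, { input }) => {
      const post = { id: String(db.posts.length + 1), ...input };
      db.posts.push(post);
      return post; // a real server would also publish to Subscription.postCreated
    },
  },
};
```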
---

## 🧪 Testing & Quality Assurance

### Example 7: Test-Driven Development (TDD)

**Objective**: Implement user authentication using TDD approach

```bash
# Start TDD workflow
/workflow:tdd-plan "User authentication with email/password login, registration, and password reset"

# Execute (Red-Green-Refactor cycles)
/workflow:execute

# Verify TDD compliance
/workflow:tdd-verify
```

**TDD cycle tasks created**:

**Cycle 1: Registration**
1. `IMPL-1.1` - Write failing test for user registration
2. `IMPL-1.2` - Implement registration to pass test
3. `IMPL-1.3` - Refactor registration code

**Cycle 2: Login**
1. `IMPL-2.1` - Write failing test for login
2. `IMPL-2.2` - Implement login to pass test
3. `IMPL-2.3` - Refactor login code

**Cycle 3: Password Reset**
1. `IMPL-3.1` - Write failing test for password reset
2. `IMPL-3.2` - Implement password reset
3. `IMPL-3.3` - Refactor password reset
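One red-green step from Cycle 1 can be sketched concretely: the assertions are written first (red), and the minimal implementation below makes them pass (green). Function name and rules are illustrative assumptions:

```javascript
// Green step for Cycle 1: the smallest registerUser that satisfies
// the failing assertions written in IMPL-1.1.
const accounts = new Map();

function registerUser(email, password) {
  if (accounts.has(email)) {
    return { ok: false, error: 'email already registered' };
  }
  if (password.length < 8) {
    return { ok: false, error: 'password too short' };
  }
  accounts.set(email, { email, password }); // real code would hash the password
  return { ok: true };
}
```

The refactor step (IMPL-1.3) would then extract the validation rules and swap the Map for a repository, with the same tests still passing.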
### Example 8: Adding Tests to Existing Code

**Objective**: Generate comprehensive tests for existing authentication module

```bash
# Create test generation workflow from existing code
/workflow:test-gen WFS-authentication-implementation

# Execute test tasks
/workflow:execute

# Run test-fix cycle until all tests pass
/workflow:test-cycle-execute --max-iterations 5
```

**Tests generated**:
- Unit tests for each function
- Integration tests for auth flow
- Edge case tests (invalid input, expired tokens, etc.)
- Security tests (SQL injection, XSS, etc.)
- Performance tests (load testing, rate limiting)

**Test coverage**: Aims for 80%+ coverage

---
## 🔄 Refactoring

### Example 9: Monolith to Microservices

**Objective**: Refactor monolithic application to microservices architecture

#### Phase 1: Analysis

```bash
# Deep architecture analysis
/cli:mode:plan --tool gemini "Analyze current monolithic architecture and create microservices migration strategy"

# Multi-role brainstorming
/workflow:brainstorm:auto-parallel "Migrate monolith to microservices with API gateway, service discovery, and message queue" --count 5
```

#### Phase 2: Planning

```bash
# Create detailed migration plan
/workflow:plan "Phase 1 microservices migration: Extract user service and auth service from monolith"

# Verify plan
/workflow:action-plan-verify
```

#### Phase 3: Implementation

```bash
# Execute migration
/workflow:execute

# Review architecture
/workflow:review --type architecture
```

**Microservices created**:
```
services/
├── user-service/
│   ├── src/
│   ├── Dockerfile
│   └── package.json
├── auth-service/
│   ├── src/
│   ├── Dockerfile
│   └── package.json
├── api-gateway/
│   ├── src/
│   └── config/
└── docker-compose.yml
```

### Example 10: Code Optimization

**Objective**: Optimize database queries for performance

```bash
# Analyze current performance
/cli:mode:code-analysis "Analyze database query performance in src/repositories/"

# Create optimization plan
/workflow:plan "Optimize database queries with indexing, query optimization, and caching"

# Execute optimizations
/workflow:execute
```

**Optimizations implemented**:
- Database indexing strategy
- N+1 query elimination
- Query result caching (Redis)
- Connection pooling
- Pagination for large datasets
- Database query monitoring
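The N+1 elimination is the classic pattern of collecting ids and fetching them in one batched query instead of one query per row. A minimal sketch with a simulated query layer (table contents and function names are assumptions for illustration):

```javascript
// Batched lookup instead of one query per post (N+1 elimination).
const authorTable = { 1: 'Ada', 2: 'Lin' }; // stands in for an authors table
let queryCount = 0;

function findAuthorsByIds(ids) {
  queryCount++; // one query serves the whole batch
  return ids.map(id => ({ id, name: authorTable[id] }));
}

function attachAuthors(posts) {
  // Collect distinct author ids, fetch once, then join in memory.
  const ids = [...new Set(posts.map(p => p.authorId))];
  const byId = new Map(findAuthorsByIds(ids).map(a => [a.id, a]));
  return posts.map(p => ({ ...p, author: byId.get(p.authorId) }));
}
```

The naive version would call `findAuthorsByIds([p.authorId])` inside the map, issuing one query per post; here the query count stays at one regardless of how many posts are joined.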
---

## 🎨 UI/UX Design

### Example 11: Design System Creation

**Objective**: Create a complete design system for a SaaS application

```bash
# Extract design from local reference images
/workflow:ui-design:imitate-auto --input "design-refs/*.png"

# Or import from existing code
/workflow:ui-design:imitate-auto --input "./src/components"

# Or create from scratch
/workflow:ui-design:explore-auto --prompt "Modern SaaS design system with primary components: buttons, inputs, cards, modals, navigation" --targets "button,input,card,modal,navbar" --style-variants 3
```

**Design system includes**:
- Color palette (primary, secondary, accent, neutral)
- Typography scale (headings, body, captions)
- Spacing system (4px grid)
- Component library:
  - Buttons (primary, secondary, outline, ghost)
  - Form inputs (text, select, checkbox, radio)
  - Cards (basic, elevated, outlined)
  - Modals (small, medium, large)
  - Navigation (sidebar, topbar, breadcrumbs)
- Animation patterns
- Responsive breakpoints

**Output**:
```
design-system/
├── tokens/
│   ├── colors.json
│   ├── typography.json
│   └── spacing.json
├── components/
│   ├── Button.jsx
│   ├── Input.jsx
│   └── ...
└── documentation/
    └── design-system.html
```

### Example 12: Responsive Landing Page

**Objective**: Design and implement a marketing landing page

```bash
# Design exploration
/workflow:ui-design:explore-auto --prompt "Modern SaaS landing page with hero section, features grid, pricing table, testimonials, and CTA" --targets "hero,features,pricing,testimonials" --style-variants 2 --layout-variants 3 --device-type responsive

# Select best designs and sync
/workflow:ui-design:design-sync --session <session-id> --selected-prototypes "hero-v2,features-v1,pricing-v3"

# Implement
/workflow:plan
/workflow:execute
```

**Sections implemented**:
- Hero section with animated background
- Feature cards with icons
- Pricing comparison table
- Customer testimonials carousel
- FAQ accordion
- Contact form
- Responsive navigation
- Dark mode support

---
## 🐛 Bug Fixes

### Example 13: Quick Bug Fix

**Objective**: Fix login button not working on mobile

```bash
# Analyze bug
/cli:mode:bug-diagnosis "Login button click event not firing on mobile Safari"

# Claude analyzes and implements fix
```

**Fix implemented**:
```javascript
// Before
button.onclick = handleLogin;

// After (adds touch event support)
button.addEventListener('click', handleLogin);
button.addEventListener('touchend', (e) => {
  e.preventDefault();
  handleLogin(e);
});
```

### Example 14: Complex Bug Investigation

**Objective**: Debug memory leak in React application

#### Investigation

```bash
# Start session for thorough investigation
/workflow:session:start "Memory Leak Investigation"

# Deep bug analysis
/cli:mode:bug-diagnosis --tool gemini "Memory leak in React components - event listeners not cleaned up"

# Create fix plan
/workflow:plan "Fix memory leaks in React components: cleanup event listeners and cancel subscriptions"
```

#### Implementation

```bash
# Execute fixes
/workflow:execute

# Generate tests to prevent regression
/workflow:test-gen WFS-memory-leak-investigation

# Execute tests
/workflow:execute
```

**Issues found and fixed**:
1. Missing cleanup in `useEffect` hooks
2. Event listeners not removed
3. Uncancelled API requests on unmount
4. Large state objects not cleared
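The fix for issues 1 and 2 follows one pattern: every subscription returns a cleanup function, and that cleanup always runs on unmount. A framework-free sketch (the tiny emitter stands in for `window`, a socket, or any event source; in React the `unmount` function corresponds to the function returned from `useEffect`):

```javascript
// Leak-free subscription pattern: subscribing returns its own cleanup.
function createEmitter() {
  const listeners = new Set();
  return {
    on(fn) {
      listeners.add(fn);
      return () => listeners.delete(fn); // cleanup closure
    },
    emit(value) { listeners.forEach(fn => fn(value)); },
    count() { return listeners.size; },
  };
}

function mountComponent(emitter, onEvent) {
  const unsubscribe = emitter.on(onEvent);
  // React equivalent: useEffect(() => { const u = emitter.on(onEvent); return u; }, []);
  return function unmount() { unsubscribe(); };
}
```

The leak in the original code was the missing `return unsubscribe` from the effect: the component unmounted but its listener stayed registered, keeping the component's closures alive.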
---

## 📝 Documentation

### Example 15: API Documentation Generation

**Objective**: Generate comprehensive API documentation

```bash
# Analyze existing API
/memory:load "Generate API documentation for all endpoints"

# Create documentation
/workflow:plan "Generate OpenAPI/Swagger documentation for REST API with examples and authentication info"

# Execute
/workflow:execute
```

**Documentation includes**:
- OpenAPI 3.0 specification
- Interactive Swagger UI
- Request/response examples
- Authentication guide
- Rate limiting info
- Error codes reference

### Example 16: Project README Generation

**Objective**: Create comprehensive README for open-source project

```bash
# Update project memory first
/memory:update-full --tool gemini

# Generate README
/workflow:plan "Create comprehensive README.md with installation, usage, examples, API reference, and contributing guidelines"

/workflow:execute
```

**README sections**:
- Project overview
- Features
- Installation instructions
- Quick start guide
- Usage examples
- API reference
- Configuration
- Contributing guidelines
- License

---
## ⚙️ DevOps & Automation

### Example 17: CI/CD Pipeline Setup

**Objective**: Set up GitHub Actions CI/CD pipeline

```bash
/workflow:plan "Create GitHub Actions workflow for Node.js app with linting, testing, building, and deployment to AWS"

/workflow:execute
```

**Pipeline created**:
```yaml
# .github/workflows/ci-cd.yml
name: CI/CD

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run tests
        run: npm test

  build:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - name: Build
        run: npm run build

  deploy:
    needs: build
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to AWS
        run: npm run deploy
```

### Example 18: Docker Containerization

**Objective**: Dockerize full-stack application

```bash
# Plan containerization
/workflow:plan "Dockerize full-stack app with React frontend, Express backend, PostgreSQL database, and Redis cache using docker-compose"

# Execute
/workflow:execute

# Review
/workflow:review --type architecture
```

**Created files**:
```
├── docker-compose.yml
├── frontend/
│   └── Dockerfile
├── backend/
│   └── Dockerfile
├── .dockerignore
└── README.docker.md
```

---
## 🏗️ Complex Projects

### Example 19: Real-Time Chat Application

**Objective**: Build real-time chat with WebSocket, message history, and file sharing

#### Complete Workflow

```bash
# 1. Brainstorm
/workflow:brainstorm:auto-parallel "Real-time chat application with WebSocket, message history, file upload, user presence, typing indicators" --count 5

# 2. UI Design
/workflow:ui-design:explore-auto --prompt "Modern chat interface with message list, input box, user sidebar, file preview" --targets "chat-window,message-bubble,user-list" --style-variants 2

# 3. Sync designs
/workflow:ui-design:design-sync --session <session-id>

# 4. Plan implementation
/workflow:plan

# 5. Verify plan
/workflow:action-plan-verify

# 6. Execute
/workflow:execute

# 7. Generate tests
/workflow:test-gen <session-id>

# 8. Execute tests
/workflow:execute

# 9. Review
/workflow:review --type security
/workflow:review --type architecture

# 10. Complete
/workflow:session:complete
```

**Features implemented**:
- WebSocket server (Socket.io)
- Real-time messaging
- Message persistence (MongoDB)
- File upload (S3/local storage)
- User authentication
- Typing indicators
- Read receipts
- User presence (online/offline)
- Message search
- Emoji support
- Mobile responsive

### Example 20: Data Analytics Dashboard

**Objective**: Build interactive dashboard with charts and real-time data

```bash
# Brainstorm data viz approach
/workflow:brainstorm:auto-parallel "Data analytics dashboard with real-time metrics, interactive charts, filters, and export functionality"

# Plan implementation
/workflow:plan "Analytics dashboard with Chart.js/D3.js, real-time data updates via WebSocket, date range filters, and CSV export"

# Execute
/workflow:execute
```

**Dashboard features**:
- Real-time metric cards (users, revenue, conversions)
- Line charts (trends over time)
- Bar charts (comparisons)
- Pie charts (distributions)
- Data tables with sorting/filtering
- Date range picker
- Export to CSV/PDF
- Responsive grid layout
- Dark mode
- WebSocket updates every 5 seconds
---

## 💡 Tips for Effective Examples

### Best Practices

1. **Start with clear objectives**
   - Define what you want to build
   - List key features
   - Specify technologies if needed

2. **Use appropriate workflow**
   - Simple tasks: `/workflow:lite-plan`
   - Complex features: `/workflow:brainstorm` → `/workflow:plan`
   - Existing code: `/workflow:test-gen` or `/cli:analyze`

3. **Leverage quality gates**
   - Run `/workflow:action-plan-verify` before execution
   - Use `/workflow:review` after implementation
   - Generate tests with `/workflow:test-gen`

4. **Maintain memory**
   - Update memory after major changes
   - Use `/memory:load` for quick context
   - Keep CLAUDE.md files up to date

5. **Complete sessions**
   - Always run `/workflow:session:complete`
   - Generates lessons learned
   - Archives session for reference

---

## 🔗 Related Resources

- [Getting Started Guide](GETTING_STARTED.md) - Basics
- [Architecture](ARCHITECTURE.md) - How it works
- [Command Reference](COMMAND_REFERENCE.md) - All commands
- [FAQ](FAQ.md) - Common questions
- [Contributing](CONTRIBUTING.md) - How to contribute

---

## 📬 Share Your Examples

Have a great example to share? Contribute to this document!

See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.

---

**Last Updated**: 2025-11-20

**Version**: 5.8.1
@@ -511,8 +511,8 @@ function merge_directory_contents() {
             ((merged_count++))
         fi

-        # Show progress every 20 files
-        if [ $((processed_count % 20)) -eq 0 ] || [ "$processed_count" -eq "$total_files" ]; then
+        # Show progress every 100 files (optimized for performance)
+        if [ $((processed_count % 100)) -eq 0 ] || [ "$processed_count" -eq "$total_files" ]; then
             local percent=$((processed_count * 100 / total_files))
             echo -ne "\rMerging $description: $processed_count/$total_files files ($percent%)..."
         fi
@@ -587,12 +587,8 @@ function install_global() {
         # Track .claude directory in manifest
         add_manifest_entry "$manifest_file" "$global_claude_dir" "Directory"

-        # Track files from SOURCE directory, not destination
-        while IFS= read -r -d '' source_file; do
-            local relative_path="${source_file#$source_claude_dir}"
-            local target_path="${global_claude_dir}${relative_path}"
-            add_manifest_entry "$manifest_file" "$target_path" "File"
-        done < <(find "$source_claude_dir" -type f -print0)
+        # Track files from SOURCE directory using bulk operation
+        add_manifest_entries_bulk "$manifest_file" "$source_claude_dir" "$global_claude_dir" "File"
     fi

     # Handle CLAUDE.md file
@@ -611,12 +607,8 @@ function install_global() {
         # Track .codex directory in manifest
         add_manifest_entry "$manifest_file" "$global_codex_dir" "Directory"

-        # Track files from SOURCE directory
-        while IFS= read -r -d '' source_file; do
-            local relative_path="${source_file#$source_codex_dir}"
-            local target_path="${global_codex_dir}${relative_path}"
-            add_manifest_entry "$manifest_file" "$target_path" "File"
-        done < <(find "$source_codex_dir" -type f -print0)
+        # Track files from SOURCE directory using bulk operation
+        add_manifest_entries_bulk "$manifest_file" "$source_codex_dir" "$global_codex_dir" "File"
     fi

     # Backup critical config files in .gemini directory before installation
@@ -628,12 +620,8 @@ function install_global() {
         # Track .gemini directory in manifest
         add_manifest_entry "$manifest_file" "$global_gemini_dir" "Directory"

-        # Track files from SOURCE directory
-        while IFS= read -r -d '' source_file; do
-            local relative_path="${source_file#$source_gemini_dir}"
-            local target_path="${global_gemini_dir}${relative_path}"
-            add_manifest_entry "$manifest_file" "$target_path" "File"
-        done < <(find "$source_gemini_dir" -type f -print0)
+        # Track files from SOURCE directory using bulk operation
+        add_manifest_entries_bulk "$manifest_file" "$source_gemini_dir" "$global_gemini_dir" "File"
     fi

     # Backup critical config files in .qwen directory before installation
@@ -645,12 +633,8 @@ function install_global() {
         # Track .qwen directory in manifest
         add_manifest_entry "$manifest_file" "$global_qwen_dir" "Directory"

-        # Track files from SOURCE directory
-        while IFS= read -r -d '' source_file; do
-            local relative_path="${source_file#$source_qwen_dir}"
-            local target_path="${global_qwen_dir}${relative_path}"
-            add_manifest_entry "$manifest_file" "$target_path" "File"
-        done < <(find "$source_qwen_dir" -type f -print0)
+        # Track files from SOURCE directory using bulk operation
+        add_manifest_entries_bulk "$manifest_file" "$source_qwen_dir" "$global_qwen_dir" "File"
     fi

     # Remove empty backup folder
@@ -730,12 +714,8 @@ function install_path() {
         # Track local folder in manifest
         add_manifest_entry "$manifest_file" "$dest_folder" "Directory"

-        # Track files from SOURCE directory
-        while IFS= read -r -d '' source_file; do
-            local relative_path="${source_file#$source_folder}"
-            local target_path="${dest_folder}${relative_path}"
-            add_manifest_entry "$manifest_file" "$target_path" "File"
-        done < <(find "$source_folder" -type f -print0)
+        # Track files from SOURCE directory using bulk operation
+        add_manifest_entries_bulk "$manifest_file" "$source_folder" "$dest_folder" "File"
     fi
     write_color "✓ Installed local folder: $folder" "$COLOR_SUCCESS"
 else
@@ -773,12 +753,8 @@ function install_path() {
         # Track global files in manifest using bulk method (fast!)
         add_manifest_entry "$manifest_file" "$global_claude_dir" "Directory"

-        # Track files from TEMP directory
-        while IFS= read -r -d '' source_file; do
-            local relative_path="${source_file#$temp_global_dir}"
-            local target_path="${global_claude_dir}${relative_path}"
-            add_manifest_entry "$manifest_file" "$target_path" "File"
-        done < <(find "$temp_global_dir" -type f -print0)
+        # Track files from TEMP directory using bulk operation
+        add_manifest_entries_bulk "$manifest_file" "$temp_global_dir" "$global_claude_dir" "File"
     fi

     # Clean up temp directory
@@ -801,12 +777,8 @@ function install_path() {
         # Track .codex directory in manifest
         add_manifest_entry "$manifest_file" "$local_codex_dir" "Directory"

-        # Track files from SOURCE directory
-        while IFS= read -r -d '' source_file; do
-            local relative_path="${source_file#$source_codex_dir}"
-            local target_path="${local_codex_dir}${relative_path}"
-            add_manifest_entry "$manifest_file" "$target_path" "File"
-        done < <(find "$source_codex_dir" -type f -print0)
+        # Track files from SOURCE directory using bulk operation
+        add_manifest_entries_bulk "$manifest_file" "$source_codex_dir" "$local_codex_dir" "File"
     fi

     # Backup critical config files in .gemini directory before installation
@@ -818,12 +790,8 @@ function install_path() {
         # Track .gemini directory in manifest
         add_manifest_entry "$manifest_file" "$local_gemini_dir" "Directory"

-        # Track files from SOURCE directory
-        while IFS= read -r -d '' source_file; do
-            local relative_path="${source_file#$source_gemini_dir}"
-            local target_path="${local_gemini_dir}${relative_path}"
-            add_manifest_entry "$manifest_file" "$target_path" "File"
-        done < <(find "$source_gemini_dir" -type f -print0)
+        # Track files from SOURCE directory using bulk operation
+        add_manifest_entries_bulk "$manifest_file" "$source_gemini_dir" "$local_gemini_dir" "File"
     fi

     # Backup critical config files in .qwen directory before installation
@@ -835,12 +803,8 @@ function install_path() {
         # Track .qwen directory in manifest
         add_manifest_entry "$manifest_file" "$local_qwen_dir" "Directory"

-        # Track files from SOURCE directory
-        while IFS= read -r -d '' source_file; do
-            local relative_path="${source_file#$source_qwen_dir}"
-            local target_path="${local_qwen_dir}${relative_path}"
-            add_manifest_entry "$manifest_file" "$target_path" "File"
-        done < <(find "$source_qwen_dir" -type f -print0)
+        # Track files from SOURCE directory using bulk operation
+        add_manifest_entries_bulk "$manifest_file" "$source_qwen_dir" "$local_qwen_dir" "File"
     fi

     # Remove empty backup folder
@@ -1016,7 +980,82 @@ EOF
     jq --argjson entry "$entry_json" '.directories += [$entry]' "$manifest_file" > "$temp_file"
|
jq --argjson entry "$entry_json" '.directories += [$entry]' "$manifest_file" > "$temp_file"
|
||||||
fi
|
fi
|
||||||
|
|
||||||
mv "$temp_file" "$manifest_file"
|
# Only replace manifest if jq succeeded
|
||||||
|
if [ -s "$temp_file" ]; then
|
||||||
|
mv "$temp_file" "$manifest_file"
|
||||||
|
else
|
||||||
|
write_color "WARNING: Failed to add manifest entry (jq error)" "$COLOR_WARNING"
|
||||||
|
rm -f "$temp_file"
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
function add_manifest_entries_bulk() {
|
||||||
|
local manifest_file="$1"
|
||||||
|
local source_dir="$2"
|
||||||
|
local target_base="$3"
|
||||||
|
local entry_type="$4"
|
||||||
|
|
||||||
|
if [ ! -f "$manifest_file" ]; then
|
||||||
|
write_color "WARNING: Manifest file not found: $manifest_file" "$COLOR_WARNING"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [ ! -d "$source_dir" ]; then
|
||||||
|
return 0
|
||||||
|
fi
|
||||||
|
|
||||||
|
local timestamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
|
||||||
|
local temp_file="${manifest_file}.tmp"
|
||||||
|
local paths_file=$(mktemp)
|
||||||
|
local entries_file=$(mktemp)
|
||||||
|
|
||||||
|
# Collect all file paths and compute target paths using bash string operations
|
||||||
|
# This mimics the original while loop logic
|
||||||
|
while IFS= read -r -d '' source_file; do
|
||||||
|
local relative_path="${source_file#$source_dir}"
|
||||||
|
local target_path="${target_base}${relative_path}"
|
||||||
|
echo "$target_path"
|
||||||
|
done < <(find "$source_dir" -type f -print0) > "$paths_file"
|
||||||
|
|
||||||
|
# Check if paths_file has content
|
||||||
|
if [ ! -s "$paths_file" ]; then
|
||||||
|
rm -f "$paths_file" "$entries_file"
|
||||||
|
return 0
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Generate JSON entries from paths (filter empty lines)
|
||||||
|
grep -v '^$' "$paths_file" | jq -R --arg date "$timestamp" --arg type "$entry_type" '
|
||||||
|
{
|
||||||
|
"path": .,
|
||||||
|
"type": $type,
|
||||||
|
"timestamp": $date
|
||||||
|
}
|
||||||
|
' | jq -s '.' > "$entries_file"
|
||||||
|
|
||||||
|
# Check if entries_file has valid content
|
||||||
|
if [ ! -s "$entries_file" ]; then
|
||||||
|
rm -f "$paths_file" "$entries_file"
|
||||||
|
return 0
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Add all entries to manifest using --slurpfile to avoid argument length limit
|
||||||
|
if [ "$entry_type" = "File" ]; then
|
||||||
|
jq --slurpfile entries "$entries_file" '.files += $entries[0]' "$manifest_file" > "$temp_file"
|
||||||
|
else
|
||||||
|
jq --slurpfile entries "$entries_file" '.directories += $entries[0]' "$manifest_file" > "$temp_file"
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Only replace manifest if jq succeeded and temp_file has content
|
||||||
|
if [ -s "$temp_file" ]; then
|
||||||
|
mv "$temp_file" "$manifest_file"
|
||||||
|
else
|
||||||
|
write_color "WARNING: Failed to update manifest (jq error), keeping original" "$COLOR_WARNING"
|
||||||
|
rm -f "$temp_file"
|
||||||
|
fi
|
||||||
|
|
||||||
|
rm -f "$paths_file" "$entries_file"
|
||||||
|
|
||||||
|
return 0
|
||||||
}
|
}
|
||||||
|
|
||||||
function remove_old_manifests_for_path() {
|
function remove_old_manifests_for_path() {
|
||||||
|
|||||||
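Both the removed per-file loops and the new `add_manifest_entries_bulk` helper rely on the same bash prefix-stripping trick to map each installed file onto its manifest target path. A minimal standalone sketch of that substitution (directory names here are hypothetical, not taken from the installer):

```shell
#!/usr/bin/env bash
# Sketch of the path mapping used when tracking installed files:
# strip the source prefix from a found file, then graft the
# remainder onto the target base directory.
source_dir="/tmp/ccw-staging"        # hypothetical staging dir
target_base="/home/user/.claude"     # hypothetical install target

map_target() {
    local source_file="$1"
    # "#" drops the shortest matching prefix, leaving "/agents/..."
    local relative_path="${source_file#$source_dir}"
    printf '%s%s\n' "$target_base" "$relative_path"
}

map_target "$source_dir/agents/code-developer.md"
# → /home/user/.claude/agents/code-developer.md
```

Because `${var#pattern}` is plain string manipulation, the mapping works the same whether it runs once per file (old loop) or over a batch of `find` results (new bulk helper).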
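The `[ -s "$temp_file" ]` guard added around each jq call follows a general pattern: write the transformed JSON to a temp file and replace the manifest only when the transform produced non-empty output, so a failed jq run can never truncate the manifest. A jq-free sketch of the pattern (the deliberately failing `false` command stands in for a broken transform):

```shell
#!/usr/bin/env bash
# Write-to-temp-then-mv update: a failed transform leaves the
# original file untouched instead of replacing it with empty output.
manifest_file="$(mktemp)"
printf '{"files": []}\n' > "$manifest_file"
temp_file="${manifest_file}.tmp"

false > "$temp_file"   # simulate a failing transform: temp file stays empty

if [ -s "$temp_file" ]; then
    mv "$temp_file" "$manifest_file"                      # transform succeeded
else
    echo "WARNING: transform failed, keeping original" >&2
    rm -f "$temp_file"                                    # discard, manifest intact
fi

cat "$manifest_file"   # → {"files": []}
```

The same shape protects both `add_manifest_entry` and `add_manifest_entries_bulk` in the diff: the manifest is mutated in exactly one place, and only after the new content is known to exist.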
**README.md** (deleted, 198 lines):

```diff
@@ -1,198 +0,0 @@
-# 🚀 Claude Code Workflow (CCW)
-
-<div align="center">
-
-[](https://github.com/catlog22/Claude-Code-Workflow/releases)
-[](LICENSE)
-[]()
-
-**Languages:** [English](README.md) | [中文](README_CN.md)
-
-</div>
-
----
-
-**Claude Code Workflow (CCW)** transforms AI development from simple prompt chaining into a robust, context-first orchestration system. It solves execution uncertainty and error accumulation through structured planning, deterministic execution, and intelligent multi-model orchestration.
-
-> **🎉 Version 5.8.1: Lite-Plan Workflow & CLI Tools Enhancement**
->
-> **Core Improvements**:
-> - ✨ **Lite-Plan Workflow** (`/workflow:lite-plan`) - Lightweight interactive planning with intelligent automation
->   - **Three-Dimensional Multi-Select Confirmation**: Task approval + Execution method + Code review tool
->   - **Smart Code Exploration**: Auto-detects when codebase context is needed (use `-e` flag to force)
->   - **Parallel Task Execution**: Identifies independent tasks for concurrent execution
->   - **Flexible Execution**: Choose between Agent (@code-developer) or CLI (Gemini/Qwen/Codex)
->   - **Optional Post-Review**: Built-in code quality analysis with your choice of AI tool
-> - ✨ **CLI Tools Optimization** - Simplified command syntax with auto-model-selection
->   - Removed `-m` parameter requirement for Gemini, Qwen, and Codex (auto-selects best model)
->   - Clearer command structure and improved documentation
-> - 🔄 **Execution Workflow Enhancement** - Streamlined phases with lazy loading strategy
-> - 🎨 **CLI Explore Agent** - Improved visibility with yellow color scheme
->
-> See [CHANGELOG.md](CHANGELOG.md) for full details.
-
-> 📚 **New to CCW?** Check out the [**Getting Started Guide**](GETTING_STARTED.md) for a beginner-friendly 5-minute tutorial!
-
----
-
-## ✨ Core Concepts
-
-CCW is built on a set of core principles that differentiate it from traditional AI development approaches:
-
-- **Context-First Architecture**: Pre-defined context gathering eliminates execution uncertainty by ensuring agents have the correct information *before* implementation.
-- **JSON-First State Management**: Task states live in `.task/IMPL-*.json` files as the single source of truth, enabling programmatic orchestration without state drift.
-- **Autonomous Multi-Phase Orchestration**: Commands chain specialized sub-commands and agents to automate complex workflows with zero user intervention.
-- **Multi-Model Strategy**: Leverages the unique strengths of different AI models (Gemini for analysis, Codex for implementation) for superior results.
-- **Hierarchical Memory System**: A 4-layer documentation system provides context at the appropriate level of abstraction, preventing information overload.
-- **Specialized Role-Based Agents**: A suite of agents (`@code-developer`, `@test-fix-agent`, etc.) mirrors a real software team to handle diverse tasks.
-
----
-
-## ⚙️ Installation
-
-For detailed installation instructions, please refer to the [**INSTALL.md**](INSTALL.md) guide.
-
-### **🚀 Quick One-Line Installation**
-
-**Windows (PowerShell):**
-```powershell
-Invoke-Expression (Invoke-WebRequest -Uri "https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1" -UseBasicParsing).Content
-```
-
-**Linux/macOS (Bash/Zsh):**
-```bash
-bash <(curl -fsSL https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.sh)
-```
-
-### **✅ Verify Installation**
-After installation, open **Claude Code** and check if the workflow commands are available by running:
-```bash
-/workflow:session:list
-```
-If the slash commands (e.g., `/workflow:*`) are recognized, the installation was successful.
-
----
-
-## 🛠️ Command Reference
-
-CCW provides a rich set of commands for managing workflows, tasks, and interacting with AI tools. For a complete list and detailed descriptions of all available commands, please see the [**COMMAND_REFERENCE.md**](COMMAND_REFERENCE.md) file.
-
-For a detailed technical specification of every command, see the [**COMMAND_SPEC.md**](COMMAND_SPEC.md).
-
----
-
-### 💡 **Need Help? Use the Interactive Command Guide**
-
-CCW includes a built-in **command-guide skill** to help you discover and use commands effectively:
-
-- **`CCW-help`** - Get interactive help and command recommendations
-- **`CCW-issue`** - Report bugs or request features with guided templates
-
-The command guide provides:
-- 🔍 **Smart Command Search** - Find commands by keyword, category, or use-case
-- 🤖 **Next-Step Recommendations** - Get suggestions for what to do after any command
-- 📖 **Detailed Documentation** - View parameters, examples, and best practices
-- 🎓 **Beginner Onboarding** - Learn the top 14 essential commands with a guided learning path
-- 📝 **Issue Reporting** - Generate standardized bug reports and feature requests
-
-**Example Usage**:
-```
-User: "CCW-help"
-→ Interactive menu with command search, recommendations, and documentation
-
-User: "What's next after /workflow:plan?"
-→ Recommends /workflow:execute, /workflow:action-plan-verify, with workflow patterns
-
-User: "CCW-issue"
-→ Guided template generation for bugs, features, or questions
-```
-
----
-
-## 🚀 Getting Started
-
-The best way to get started is to follow the 5-minute tutorial in the [**Getting Started Guide**](GETTING_STARTED.md).
-
-Here is a quick example of a common development workflow:
-
-### **Option 1: Lite-Plan Workflow** (⚡ Recommended for Quick Tasks)
-
-Lightweight interactive workflow with in-memory planning and immediate execution:
-
-```bash
-# Basic usage with auto-detection
-/workflow:lite-plan "Add JWT authentication to user login"
-
-# Force code exploration
-/workflow:lite-plan -e "Refactor logging module for better performance"
-
-# Basic usage
-/workflow:lite-plan "Add unit tests for auth service"
-```
-
-**Interactive Flow**:
-1. **Phase 1**: Automatic task analysis and smart code exploration (if needed)
-2. **Phase 2**: Answer clarification questions (if any)
-3. **Phase 3**: Review generated plan with task breakdown
-4. **Phase 4**: Three-dimensional confirmation:
-   - ✅ Confirm/Modify/Cancel task
-   - 🔧 Choose execution: Agent / Provide Plan / CLI (Gemini/Qwen/Codex)
-   - 🔍 Optional code review: No / Claude / Gemini / Qwen / Codex
-5. **Phase 5**: Watch real-time execution with live task tracking
-
-### **Option 2: Full Workflow** (Comprehensive Planning)
-
-Traditional multi-phase workflow for complex projects:
-
-1. **Create a Plan** (automatically starts a session):
-   ```bash
-   /workflow:plan "Implement JWT-based user login and registration"
-   ```
-2. **Execute the Plan**:
-   ```bash
-   /workflow:execute
-   ```
-3. **Check Status** (optional):
-   ```bash
-   /workflow:status
-   ```
-
----
-
-## 📚 Documentation
-
-CCW provides comprehensive documentation to help you get started and master advanced features:
-
-### 📖 **Getting Started**
-- [**Getting Started Guide**](GETTING_STARTED.md) - 5-minute quick start tutorial
-- [**Installation Guide**](INSTALL.md) - Detailed installation instructions ([中文](INSTALL_CN.md))
-- [**Workflow Decision Guide**](WORKFLOW_DECISION_GUIDE_EN.md) - 🌳 Interactive flowchart for choosing the right commands
-- [**Examples**](EXAMPLES.md) - Real-world use cases and practical examples
-- [**FAQ**](FAQ.md) - Frequently asked questions and troubleshooting
-
-### 🏗️ **Architecture & Design**
-- [**Architecture Overview**](ARCHITECTURE.md) - System design and core components
-- [**Project Introduction**](PROJECT_INTRODUCTION.md) - Detailed project overview (Chinese)
-- [**Workflow Diagrams**](WORKFLOW_DIAGRAMS.md) - Visual workflow representations
-
-### 📋 **Command Reference**
-- [**Command Reference**](COMMAND_REFERENCE.md) - Complete list of all commands
-- [**Command Specification**](COMMAND_SPEC.md) - Detailed technical specifications
-- [**Command Flow Standard**](COMMAND_FLOW_STANDARD.md) - Command design patterns
-
-### 🤝 **Contributing**
-- [**Contributing Guide**](CONTRIBUTING.md) - How to contribute to CCW
-- [**Changelog**](CHANGELOG.md) - Version history and release notes
-
----
-
-## 🤝 Contributing & Support
-
-- **Repository**: [GitHub - Claude-Code-Workflow](https://github.com/catlog22/Claude-Code-Workflow)
-- **Issues**: Report bugs or request features on [GitHub Issues](https://github.com/catlog22/Claude-Code-Workflow/issues).
-- **Discussions**: Join the [Community Forum](https://github.com/catlog22/Claude-Code-Workflow/discussions).
-- **Contributing**: See [CONTRIBUTING.md](CONTRIBUTING.md) for contribution guidelines.
-
-## 📄 License
-
-This project is licensed under the **MIT License**. See the [LICENSE](LICENSE) file for details.
```