Add CLI commands for full and related documentation generation

- Implemented `/memory:docs-full-cli` for comprehensive project documentation generation using CLI execution with batched agents and tool fallback.
- Introduced `/memory:docs-related-cli` to generate/update documentation for git-changed modules, using direct parallel execution for small change sets and agent batch processing for larger ones.
- Defined execution flows, strategies, and error handling for both commands, ensuring robust documentation processes.
Author: catlog22
Date: 2025-11-23 11:27:23 +08:00
Parent: 73fed4893b
Commit: 4272ca9ebd
6 changed files with 1555 additions and 797 deletions


@@ -0,0 +1,472 @@
---
name: docs-full-cli
description: Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel
argument-hint: "[path] [--tool <gemini|qwen|codex>]"
---
# Full Documentation Generation - CLI Mode (/memory:docs-full-cli)
## Overview
Orchestrates project-wide documentation generation using CLI-based execution with batched agents and automatic tool fallback.
**Parameters**:
- `path`: Target directory (default: current directory)
- `--tool <gemini|qwen|codex>`: Primary tool (default: gemini)
**Execution Flow**: Discovery → Plan Presentation → Execution → Verification
## 3-Layer Architecture & Auto-Strategy Selection
### Layer Definition & Strategy Assignment
| Layer | Depth | Strategy | Purpose | Context Pattern |
|-------|-------|----------|---------|----------------|
| **Layer 3** (Deepest) | ≥3 | `full` | Generate docs for all subdirectories with code | `@**/*` (all files) |
| **Layer 2** (Middle) | 1-2 | `single` | Current dir + child docs | `@*/API.md @*/README.md @*.{ts,tsx,js,...}` |
| **Layer 1** (Top) | 0 | `single` | Current dir + child docs | `@*/API.md @*/README.md @*.{ts,tsx,js,...}` |
**Generation Direction**: Layer 3 → Layer 2 → Layer 1 (bottom-up dependency flow)
**Strategy Auto-Selection**: Strategies are automatically determined by directory depth - no user configuration needed.
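The depth-to-strategy mapping can be sketched as a tiny helper (a hypothetical sketch mirroring the table above, not the actual script logic):

```shell
# Hypothetical sketch: strategy is a pure function of directory depth.
select_strategy() {
  local depth="$1"
  if [ "$depth" -ge 3 ]; then
    echo "full"    # Layer 3: deepest directories, comprehensive coverage
  else
    echo "single"  # Layers 1-2: current dir only, reference child docs
  fi
}
```

This mirrors the `module.depth >= 3 ? "full" : "single"` check used later in Phase 3A.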
### Strategy Details
#### Full Strategy (Layer 3 Only)
- **Use Case**: Deepest directories with comprehensive file coverage
- **Behavior**: Generates API.md + README.md for current directory AND subdirectories containing code
- **Context**: All files in current directory tree (`@**/*`)
- **Output**: `.workflow/docs/{project_name}/{path}/API.md` + `README.md`
- **Benefits**: Creates foundation documentation for upper layers to reference
#### Single Strategy (Layers 1-2)
- **Use Case**: Upper layers that aggregate from existing documentation
- **Behavior**: Generates API.md + README.md only in current directory
- **Context**: Direct children docs + current directory code files
- **Output**: `.workflow/docs/{project_name}/{path}/API.md` + `README.md`
- **Benefits**: Minimal context consumption, clear layer separation
### Example Flow
```
src/auth/handlers/ (depth 3) → FULL STRATEGY
CONTEXT: @**/* (all files in handlers/ and subdirs)
GENERATES: .workflow/docs/project/src/auth/handlers/{API.md,README.md} + subdirs
src/auth/ (depth 2) → SINGLE STRATEGY
CONTEXT: @*/API.md @*/README.md @*.ts (handlers docs + current code)
GENERATES: .workflow/docs/project/src/auth/{API.md,README.md} only
src/ (depth 1) → SINGLE STRATEGY
CONTEXT: @*/API.md @*/README.md (auth docs, utils docs)
GENERATES: .workflow/docs/project/src/{API.md,README.md} only
./ (depth 0) → SINGLE STRATEGY
CONTEXT: @*/API.md @*/README.md (src docs, tests docs)
GENERATES: .workflow/docs/project/{API.md,README.md} only
```
## Core Execution Rules
1. **Analyze First**: Module discovery + folder classification before generation
2. **Wait for Approval**: Present plan, no execution without user confirmation
3. **Execution Strategy**:
- **<20 modules**: Direct parallel execution (max 4 concurrent per layer)
- **≥20 modules**: Agent batch processing (4 modules/agent, 73% overhead reduction)
4. **Tool Fallback**: Auto-retry with fallback tools on failure
5. **Layer Sequential**: Process layers 3→2→1 (bottom-up), parallel batches within layer
6. **Safety Check**: Verify only docs files modified in .workflow/docs/
7. **Layer-based Grouping**: Group modules by LAYER (not depth) for execution
## Tool Fallback Hierarchy
```javascript
--tool gemini [gemini, qwen, codex] // default
--tool qwen [qwen, gemini, codex]
--tool codex [codex, gemini, qwen]
```
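Constructing the fallback order from the `--tool` flag can be sketched as follows (hypothetical helper name, matching the table below):

```shell
# Hypothetical sketch of the fallback-order construction from --tool.
construct_tool_order() {
  case "$1" in
    qwen)  echo "qwen gemini codex" ;;
    codex) echo "codex gemini qwen" ;;
    *)     echo "gemini qwen codex" ;;  # default: gemini first
  esac
}
```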
**Trigger**: Non-zero exit code from generation script
| Tool | Best For | Fallback To |
|--------|--------------------------------|----------------|
| gemini | Documentation, patterns | qwen → codex |
| qwen | Architecture, system design | gemini → codex |
| codex | Implementation, code quality | gemini → qwen |
## Execution Phases
### Phase 1: Discovery & Analysis
```javascript
// Get project metadata
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});
// Get module structure with classification
Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});
// OR with path parameter
Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|type:<code|navigation>|...` to extract module paths, types, and count.
**Smart filter**: Auto-detect and skip tests/build/config/vendor based on project tech stack.
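Extracting fields from one discovery line can be sketched with plain parameter expansion (a hypothetical parser, assuming the `depth:N|path:...|type:...` format above):

```shell
# Hypothetical parser for one discovery line, e.g.
#   depth:2|path:./src/auth|type:code|files:12
parse_module_line() {
  local line="$1" depth path type
  depth="${line#depth:}";  depth="${depth%%|*}"
  path="${line#*|path:}";  path="${path%%|*}"
  type="${line#*|type:}";  type="${type%%|*}"
  printf '%s\t%s\t%s\n' "$depth" "$path" "$type"
}
```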
### Phase 2: Plan Presentation
**For <20 modules**:
```
Documentation Generation Plan:
Tool: gemini (fallback: qwen → codex)
Total: 7 modules
Execution: Direct parallel (< 20 modules threshold)
Project: myproject
Output: .workflow/docs/myproject/
Will generate docs for:
- ./core/interfaces (12 files, type: code) - depth 2 [Layer 2] - single strategy
- ./core (22 files, type: code) - depth 1 [Layer 2] - single strategy
- ./models (9 files, type: code) - depth 1 [Layer 2] - single strategy
- ./utils (12 files, type: navigation) - depth 1 [Layer 2] - single strategy
- . (5 files, type: code) - depth 0 [Layer 1] - single strategy
Documentation Strategy (Auto-Selected):
- Layer 2 (depth 1-2): API.md + README.md (current dir only, reference child docs)
- Layer 1 (depth 0): API.md + README.md (current dir only, reference child docs)
Output Structure:
- Code folders: API.md + README.md
- Navigation folders: README.md only
Auto-skipped: ./tests, __pycache__, node_modules (15 paths)
Execution order: Layer 2 → Layer 1
Estimated time: ~5-10 minutes
Confirm execution? (y/n)
```
**For ≥20 modules**:
```
Documentation Generation Plan:
Tool: gemini (fallback: qwen → codex)
Total: 31 modules
Execution: Agent batch processing (4 modules/agent)
Project: myproject
Output: .workflow/docs/myproject/
Will generate docs for:
- ./src/features/auth (12 files, type: code) - depth 3 [Layer 3] - full strategy
- ./.claude/commands/cli (6 files, type: code) - depth 3 [Layer 3] - full strategy
- ./src/utils (8 files, type: code) - depth 2 [Layer 2] - single strategy
...
Documentation Strategy (Auto-Selected):
- Layer 3 (depth ≥3): API.md + README.md (all subdirs with code)
- Layer 2 (depth 1-2): API.md + README.md (current dir only)
- Layer 1 (depth 0): API.md + README.md (current dir only)
Output Structure:
- Code folders: API.md + README.md
- Navigation folders: README.md only
Auto-skipped: ./tests, __pycache__, node_modules (15 paths)
Execution order: Layer 3 → Layer 2 → Layer 1
Agent allocation (by LAYER):
- Layer 3 (14 modules, depth ≥3): 4 agents [4, 4, 4, 2]
- Layer 2 (15 modules, depth 1-2): 4 agents [4, 4, 4, 3]
- Layer 1 (2 modules, depth 0): 1 agent [2]
Estimated time: ~15-25 minutes
Confirm execution? (y/n)
```
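The per-layer agent allocations shown above (e.g. 14 modules → [4, 4, 4, 2]) follow from fixed-size batching; a minimal sketch:

```shell
# Hypothetical sketch: fixed-size batching (4 modules per agent).
batch_sizes() {
  local n="$1" size=4 out=""
  while [ "$n" -gt 0 ]; do
    if [ "$n" -ge "$size" ]; then
      out="${out}${size} "
      n=$((n - size))
    else
      out="${out}${n} "
      n=0
    fi
  done
  echo "${out% }"
}
```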
### Phase 3A: Direct Execution (<20 modules)
**Strategy**: Parallel execution within layer (max 4 concurrent), no agent overhead.
**CRITICAL**: All Bash commands use `run_in_background: false` for synchronous execution.
```javascript
let project_name = detect_project_name();
for (let layer of [3, 2, 1]) {
if (modules_by_layer[layer].length === 0) continue;
let batches = batch_modules(modules_by_layer[layer], 4);
for (let batch of batches) {
let parallel_tasks = batch.map(module => {
return async () => {
let strategy = module.depth >= 3 ? "full" : "single";
for (let tool of tool_order) {
let bash_result = Bash({
command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "${strategy}" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
report(`${module.path} (Layer ${layer}) docs generated with ${tool}`);
return true;
}
}
report(`❌ FAILED: ${module.path} (Layer ${layer}) failed all tools`);
return false;
};
});
await Promise.all(parallel_tasks.map(task => task()));
}
}
```
### Phase 3B: Agent Batch Execution (≥20 modules)
**Strategy**: Batch modules into groups of 4, spawn memory-bridge agents per batch.
```javascript
// Group modules by LAYER and batch within each layer
let modules_by_layer = group_by_layer(module_list);
let tool_order = construct_tool_order(primary_tool);
let project_name = detect_project_name();
for (let layer of [3, 2, 1]) {
if (modules_by_layer[layer].length === 0) continue;
let batches = batch_modules(modules_by_layer[layer], 4);
let worker_tasks = [];
for (let batch of batches) {
worker_tasks.push(
Task(
subagent_type="memory-bridge",
description=`Generate docs for ${batch.length} modules in Layer ${layer}`,
prompt=generate_batch_worker_prompt(batch, tool_order, layer, project_name)
)
);
}
await parallel_execute(worker_tasks);
}
```
**Batch Worker Prompt Template**:
```
PURPOSE: Generate documentation for assigned modules with tool fallback
TASK: Generate API.md + README.md for assigned modules using specified strategies.
PROJECT: {{project_name}}
OUTPUT: .workflow/docs/{{project_name}}/
MODULES:
{{module_path_1}} (strategy: {{strategy_1}}, type: {{folder_type_1}})
{{module_path_2}} (strategy: {{strategy_2}}, type: {{folder_type_2}})
...
TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}
EXECUTION SCRIPT: ~/.claude/scripts/generate_module_docs.sh
- Accepts strategy parameter: full | single
- Accepts folder type detection: code | navigation
- Tool execution via direct CLI commands (gemini/qwen/codex)
- Output path: .workflow/docs/{{project_name}}/{module_path}/
EXECUTION FLOW (for each module):
1. Tool fallback loop (exit on first success):
for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "{{strategy}}" "." "{{project_name}}" "${tool}"`,
run_in_background: false
})
exit_code=$?
if [ $exit_code -eq 0 ]; then
report "✅ {{module_path}} docs generated with $tool"
break
else
report "⚠️ {{module_path}} failed with $tool, trying next..."
continue
fi
done
2. Handle complete failure (all tools failed):
if [ $exit_code -ne 0 ]; then
report "❌ FAILED: {{module_path}} - all tools exhausted"
# Continue to next module (do not abort batch)
fi
FOLDER TYPE HANDLING:
- code: Generate API.md + README.md
- navigation: Generate README.md only
FAILURE HANDLING:
- Module-level isolation: One module's failure does not affect others
- Exit code detection: Non-zero exit code triggers next tool
- Exhaustion reporting: Log modules where all tools failed
- Batch continuation: Always process remaining modules
REPORTING FORMAT:
Per-module status:
✅ path/to/module docs generated with {tool}
⚠️ path/to/module failed with {tool}, trying next...
❌ FAILED: path/to/module - all tools exhausted
```
### Phase 4: Project-Level Documentation
**After all module documentation is generated, create project-level documentation files.**
```javascript
let project_name = detect_project_name();
let project_root = get_project_root();
// Step 1: Generate Project README
report("Generating project README.md...");
for (let tool of tool_order) {
let bash_result = Bash({
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-readme" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
report(`✅ Project README generated with ${tool}`);
break;
}
}
// Step 2: Generate Architecture & Examples
report("Generating ARCHITECTURE.md and EXAMPLES.md...");
for (let tool of tool_order) {
let bash_result = Bash({
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-architecture" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
report(`✅ Architecture docs generated with ${tool}`);
break;
}
}
// Step 3: Generate HTTP API documentation (if API routes detected)
let bash_result = Bash({command: 'rg "router\\.|@Get|@Post" -g "*.{ts,js,py}" 2>/dev/null && echo "API_FOUND" || echo "NO_API"', run_in_background: false});
if (bash_result.stdout.includes("API_FOUND")) {
report("Generating HTTP API documentation...");
for (let tool of tool_order) {
let bash_result = Bash({
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "http-api" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
report(`✅ HTTP API docs generated with ${tool}`);
break;
}
}
}
```
**Expected Output**:
```
Project-Level Documentation:
✅ README.md (project root overview)
✅ ARCHITECTURE.md (system design)
✅ EXAMPLES.md (usage examples)
✅ api/README.md (HTTP API reference) [optional]
```
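The HTTP-API gate in Step 3 can be approximated without `rg`; a grep-based stand-in (hypothetical helper, same pattern, applied recursively to a directory):

```shell
# grep-based stand-in for the rg route check above (hypothetical helper).
has_http_api() {
  grep -rEq 'router\.|@Get|@Post' \
    --include='*.ts' --include='*.js' --include='*.py' "$1" 2>/dev/null
}
```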
### Phase 5: Verification
```javascript
// Check documentation files created
Bash({command: 'find .workflow/docs -type f -name "*.md" 2>/dev/null | wc -l', run_in_background: false});
// Display structure
Bash({command: 'tree -L 3 .workflow/docs/', run_in_background: false});
```
**Result Summary**:
```
Documentation Generation Summary:
Total: 31 | Success: 29 | Failed: 2
Tool usage: gemini: 25, qwen: 4, codex: 0
Failed: path1, path2
Generated documentation:
.workflow/docs/myproject/
├── src/
│ ├── auth/
│ │ ├── API.md
│ │ └── README.md
│ └── utils/
│ └── README.md
└── README.md
```
## Error Handling
**Batch Worker**: Tool fallback per module, batch isolation, clear status reporting
**Coordinator**: Invalid path abort, user decline handling, verification with cleanup
**Fallback Triggers**: Non-zero exit code, script timeout, unexpected output
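The per-module fallback loop these triggers feed can be sketched generically (hypothetical helper; the real flow invokes `generate_module_docs.sh` in place of `$cmd`):

```shell
# Hypothetical helper: run "$cmd <tool>" per tool until one exits 0.
try_tools() {
  local cmd="$1"; shift
  local tool
  for tool in "$@"; do
    if "$cmd" "$tool"; then
      echo "generated with $tool"
      return 0
    fi
  done
  echo "all tools exhausted" >&2
  return 1
}
```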
## Output Structure
```
.workflow/docs/{project_name}/
├── src/ # Mirrors source structure
│ ├── modules/
│ │ ├── README.md # Navigation
│ │ ├── auth/
│ │ │ ├── API.md # API signatures
│ │ │ ├── README.md # Module docs
│ │ │ └── middleware/
│ │ │ ├── API.md
│ │ │ └── README.md
│ │ └── api/
│ │ ├── API.md
│ │ └── README.md
│ └── utils/
│ └── README.md
├── lib/
│ └── core/
│ ├── API.md
│ └── README.md
├── README.md # ✨ Project root overview (auto-generated)
├── ARCHITECTURE.md # ✨ System design (auto-generated)
├── EXAMPLES.md # ✨ Usage examples (auto-generated)
└── api/ # ✨ Optional (auto-generated if HTTP API detected)
└── README.md # HTTP API reference
```
## Usage Examples
```bash
# Full project documentation generation
/memory:docs-full-cli
# Target specific directory
/memory:docs-full-cli src/features/auth
/memory:docs-full-cli .claude
# Use specific tool
/memory:docs-full-cli --tool qwen
/memory:docs-full-cli src --tool qwen
```
## Key Advantages
- **Efficiency**: 30 modules → 8 agents (73% reduction from sequential)
- **Resilience**: 3-tier tool fallback per module
- **Performance**: Parallel batches within each layer (capped at 4 concurrent modules in direct mode)
- **Observability**: Per-module tool usage, batch-level metrics
- **Automation**: Zero configuration - strategy auto-selected by directory depth
- **Path Mirroring**: Clear 1:1 mapping between source and documentation structure
## Template Reference
Templates used from `~/.claude/workflows/cli-templates/prompts/documentation/`:
- `api.txt`: Code API documentation (Part A: Code API, Part B: HTTP API)
- `module-readme.txt`: Module purpose, usage, dependencies
- `folder-navigation.txt`: Navigation README for folders with subdirectories
## Related Commands
- `/memory:docs` - Agent-based documentation planning workflow
- `/memory:docs-related-cli` - Update docs for changed modules only
- `/workflow:execute` - Execute documentation tasks (when using agent mode)


@@ -0,0 +1,386 @@
---
name: docs-related-cli
description: Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel
argument-hint: "[--tool <gemini|qwen|codex>]"
---
# Related Documentation Generation - CLI Mode (/memory:docs-related-cli)
## Overview
Orchestrates context-aware documentation generation/update for changed modules using CLI-based execution with batched agents and automatic tool fallback (gemini→qwen→codex).
**Parameters**:
- `--tool <gemini|qwen|codex>`: Primary tool (default: gemini)
**Execution Flow**:
1. Change Detection → 2. Plan Presentation → 3. Batched Execution → 4. Verification
## Core Rules
1. **Detect Changes First**: Use git diff to identify affected modules
2. **Wait for Approval**: Present plan, no execution without user confirmation
3. **Execution Strategy**:
- **<15 modules**: Direct parallel execution (max 4 concurrent per depth, no agent overhead)
- **≥15 modules**: Agent batch processing (4 modules/agent, 73% overhead reduction)
4. **Tool Fallback**: Auto-retry with fallback tools on failure
5. **Depth Sequential**: Process depths N→0, parallel batches within depth (both modes)
6. **Related Mode**: Generate/update only changed modules and their parent contexts
7. **Single Strategy**: Always use `single` strategy (incremental update)
## Tool Fallback Hierarchy
```javascript
--tool gemini [gemini, qwen, codex] // default
--tool qwen [qwen, gemini, codex]
--tool codex [codex, gemini, qwen]
```
**Trigger**: Non-zero exit code from generation script
| Tool | Best For | Fallback To |
|--------|--------------------------------|----------------|
| gemini | Documentation, patterns | qwen → codex |
| qwen | Architecture, system design | gemini → codex |
| codex | Implementation, code quality | gemini → qwen |
## Phase 1: Change Detection & Analysis
```javascript
// Get project metadata
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});
// Detect changed modules
Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});
// Stage all changes so new/untracked files are visible to change detection
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|change:<TYPE>|type:<code|navigation>` to extract affected modules.
**Smart filter**: Auto-detect and skip tests/build/config/vendor based on project tech stack (Node.js/Python/Go/Rust/etc).
**Fallback**: If no changes detected, use recent modules (first 10 by depth).
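One plausible reduction from changed files to affected module directories (an assumption about what `detect_changed_modules.sh` does, not its actual implementation):

```shell
# Assumption: changed files map to their containing directories, deduplicated.
dirs_of_changes() {
  # stdin: one changed file path per line (as from `git diff --name-only`)
  while IFS= read -r f; do
    dirname "$f"
  done | sort -u
}
```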
## Phase 2: Plan Presentation
**Present filtered plan**:
```
Related Documentation Generation Plan:
Tool: gemini (fallback: qwen → codex)
Changed: 4 modules | Batching: 4 modules/agent
Project: myproject
Output: .workflow/docs/myproject/
Will generate/update docs for:
- ./src/api/auth (5 files, type: code) [new module]
- ./src/api (12 files, type: code) [parent of changed auth/]
- ./src (8 files, type: code) [parent context]
- . (14 files, type: code) [root level]
Documentation Strategy:
- Strategy: single (all modules - incremental update)
- Output: API.md + README.md (code folders), README.md only (navigation folders)
- Context: Current dir code + child docs
Auto-skipped (12 paths):
- Tests: ./src/api/auth.test.ts (8 paths)
- Config: tsconfig.json (3 paths)
- Other: node_modules (1 path)
Agent allocation:
- Depth 3 (1 module): 1 agent [1]
- Depth 2 (1 module): 1 agent [1]
- Depth 1 (1 module): 1 agent [1]
- Depth 0 (1 module): 1 agent [1]
Estimated time: ~5-10 minutes
Confirm execution? (y/n)
```
**Decision logic**:
- User confirms "y": Proceed with execution
- User declines "n": Abort, no changes
- <15 modules: Direct execution
- ≥15 modules: Agent batch execution
## Phase 3A: Direct Execution (<15 modules)
**Strategy**: Parallel execution within depth (max 4 concurrent), no agent overhead.
**CRITICAL**: All Bash commands use `run_in_background: false` for synchronous execution.
```javascript
let project_name = detect_project_name();
for (let depth of sorted_depths.reverse()) { // N → 0
let batches = batch_modules(modules_by_depth[depth], 4);
for (let batch of batches) {
let parallel_tasks = batch.map(module => {
return async () => {
for (let tool of tool_order) {
let bash_result = Bash({
command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "single" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
report(`${module.path} docs generated with ${tool}`);
return true;
}
}
report(`❌ FAILED: ${module.path} failed all tools`);
return false;
};
});
await Promise.all(parallel_tasks.map(task => task()));
}
}
```
## Phase 3B: Agent Batch Execution (≥15 modules)
### Batching Strategy
```javascript
// Batch modules into groups of 4
function batch_modules(modules, batch_size = 4) {
let batches = [];
for (let i = 0; i < modules.length; i += batch_size) {
batches.push(modules.slice(i, i + batch_size));
}
return batches;
}
// Examples: 10→[4,4,2] | 8→[4,4] | 3→[3]
```
### Coordinator Orchestration
```javascript
let modules_by_depth = group_by_depth(changed_modules);
let tool_order = construct_tool_order(primary_tool);
let project_name = detect_project_name();
for (let depth of sorted_depths.reverse()) { // N → 0
let batches = batch_modules(modules_by_depth[depth], 4);
let worker_tasks = [];
for (let batch of batches) {
worker_tasks.push(
Task(
subagent_type="memory-bridge",
description=`Generate docs for ${batch.length} modules at depth ${depth}`,
prompt=generate_batch_worker_prompt(batch, tool_order, depth, project_name, "related")
)
);
}
await parallel_execute(worker_tasks); // Batches run in parallel
}
```
### Batch Worker Prompt Template
```
PURPOSE: Generate/update documentation for assigned modules with tool fallback (related mode)
TASK:
Generate documentation for the following modules based on recent changes. For each module, try tools in order until success.
PROJECT: {{project_name}}
OUTPUT: .workflow/docs/{{project_name}}/
MODULES:
{{module_path_1}} (type: {{folder_type_1}})
{{module_path_2}} (type: {{folder_type_2}})
{{module_path_3}} (type: {{folder_type_3}})
{{module_path_4}} (type: {{folder_type_4}})
TOOLS (try in order):
1. {{tool_1}}
2. {{tool_2}}
3. {{tool_3}}
EXECUTION:
For each module above:
1. Try tool 1:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_1}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_1}}", proceed to next module
→ Failure: Try tool 2
2. Try tool 2:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_2}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_2}}", proceed to next module
→ Failure: Try tool 3
3. Try tool 3:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_3}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_3}}", proceed to next module
→ Failure: Report "❌ FAILED: {{module_path}} failed all tools", proceed to next module
FOLDER TYPE HANDLING:
- code: Generate API.md + README.md
- navigation: Generate README.md only
REPORTING:
Report final summary with:
- Total processed: X modules
- Successful: Y modules
- Failed: Z modules
- Tool usage: {{tool_1}}:X, {{tool_2}}:Y, {{tool_3}}:Z
```
## Phase 4: Verification
```javascript
// Check documentation files created/updated
Bash({command: 'find .workflow/docs -type f -name "*.md" 2>/dev/null | wc -l', run_in_background: false});
// Display recent changes
Bash({command: 'find .workflow/docs -type f -name "*.md" -mmin -60 2>/dev/null', run_in_background: false});
```
**Aggregate results**:
```
Documentation Generation Summary:
Total: 4 | Success: 4 | Failed: 0
Tool usage:
- gemini: 4 modules
- qwen: 0 modules (fallback)
- codex: 0 modules
Changes:
.workflow/docs/myproject/src/api/auth/API.md (new)
.workflow/docs/myproject/src/api/auth/README.md (new)
.workflow/docs/myproject/src/api/API.md (updated)
.workflow/docs/myproject/src/api/README.md (updated)
.workflow/docs/myproject/src/API.md (updated)
.workflow/docs/myproject/src/README.md (updated)
.workflow/docs/myproject/API.md (updated)
.workflow/docs/myproject/README.md (updated)
```
## Execution Summary
**Module Count Threshold**:
- **<15 modules**: Coordinator executes Phase 3A (Direct Execution)
- **≥15 modules**: Coordinator executes Phase 3B (Agent Batch Execution)
**Agent Hierarchy** (for ≥15 modules):
- **Coordinator**: Handles batch division, spawns worker agents per depth
- **Worker Agents**: Each processes 4 modules with tool fallback (related mode)
## Error Handling
**Batch Worker**:
- Tool fallback per module (auto-retry)
- Batch isolation (failures don't propagate)
- Clear per-module status reporting
**Coordinator**:
- No changes: Use fallback (recent 10 modules)
- User decline: No execution
- Verification fail: Report incomplete modules
- Partial failures: Continue execution, report failed modules
**Fallback Triggers**:
- Non-zero exit code
- Script timeout
- Unexpected output
## Output Structure
```
.workflow/docs/{project_name}/
├── src/ # Mirrors source structure
│ ├── modules/
│ │ ├── README.md
│ │ ├── auth/
│ │ │ ├── API.md # Updated based on code changes
│ │ │ └── README.md # Updated based on code changes
│ │ └── api/
│ │ ├── API.md
│ │ └── README.md
│ └── utils/
│ └── README.md
└── README.md
```
## Usage Examples
```bash
# Daily development documentation update
/memory:docs-related-cli
# After feature work with specific tool
/memory:docs-related-cli --tool qwen
# Code quality documentation review after implementation
/memory:docs-related-cli --tool codex
```
## Key Advantages
**Efficiency**: 30 modules → 8 agents (73% reduction)
**Resilience**: 3-tier fallback per module
**Performance**: Parallel batches within each depth (capped at 4 concurrent modules in direct mode)
**Context-aware**: Updates based on actual git changes
**Fast**: Only affected modules, not entire project
**Incremental**: Single strategy for focused updates
## Coordinator Checklist
- Parse `--tool` (default: gemini)
- Get project metadata (name, root)
- Detect changed modules via detect_changed_modules.sh
- **Smart filter modules** (auto-detect tech stack, skip tests/build/config/vendor)
- Cache git changes
- Apply fallback if no changes (recent 10 modules)
- Construct tool fallback order
- **Present filtered plan** with skip reasons and change types
- **Wait for y/n confirmation**
- Determine execution mode:
- **<15 modules**: Direct execution (Phase 3A)
- For each depth (N→0): Sequential module updates with tool fallback
- **≥15 modules**: Agent batch execution (Phase 3B)
- For each depth (N→0): Batch modules (4 per batch), spawn batch workers in parallel
- Wait for depth/batch completion
- Aggregate results
- Verification check (documentation files created/updated)
- Display summary + recent changes
## Comparison with Full Documentation Generation
| Aspect | Related Generation | Full Generation |
|--------|-------------------|-----------------|
| **Scope** | Changed modules only | All project modules |
| **Speed** | Fast (minutes) | Slower (10-30 min) |
| **Use case** | Daily development | Initial setup, major refactoring |
| **Strategy** | `single` (all) | `full` (L3) + `single` (L1-2) |
| **Trigger** | After commits | After setup or major changes |
| **Batching** | 4 modules/agent | 4 modules/agent |
| **Fallback** | gemini→qwen→codex | gemini→qwen→codex |
| **Direct-execution threshold** | <15 modules | <20 modules |
## Template Reference
Templates used from `~/.claude/workflows/cli-templates/prompts/documentation/`:
- `api.txt`: Code API documentation
- `module-readme.txt`: Module purpose, usage, dependencies
- `folder-navigation.txt`: Navigation README for folders
## Related Commands
- `/memory:docs-full-cli` - Full project documentation generation
- `/memory:docs` - Agent-based documentation planning workflow
- `/memory:update-related` - Update CLAUDE.md for changed modules


@@ -0,0 +1,697 @@
#!/bin/bash
# Generate documentation for modules and projects with multiple strategies
# Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]
# strategy: full|single|project-readme|project-architecture|http-api
# source_path: Path to the source module directory (or project root for project-level docs)
# project_name: Project name for output path (e.g., "myproject")
# tool: gemini|qwen|codex (default: gemini)
# model: Model name (optional, uses tool defaults)
#
# Default Models:
# gemini: gemini-2.5-flash
# qwen: coder-model
# codex: gpt5-codex
#
# Module-Level Strategies:
# full: Full documentation generation
# - Read: All files in current and subdirectories (@**/*)
# - Generate: API.md + README.md for each directory containing code files
# - Use: Deep directories (Layer 3), comprehensive documentation
#
# single: Single-layer documentation
# - Read: Current directory code + child API.md/README.md files
# - Generate: API.md + README.md only in current directory
# - Use: Upper layers (Layer 1-2), incremental updates
#
# Project-Level Strategies:
# project-readme: Project overview documentation
# - Read: All module API.md and README.md files
# - Generate: README.md (project root)
# - Use: After all module docs are generated
#
# project-architecture: System design documentation
# - Read: All module docs + project README
# - Generate: ARCHITECTURE.md + EXAMPLES.md
# - Use: After project README is generated
#
# http-api: HTTP API documentation
# - Read: API route files + existing docs
# - Generate: api/README.md
# - Use: For projects with HTTP APIs
#
# Output Structure:
# Module docs: .workflow/docs/{project_name}/{source_path}/API.md
# Module docs: .workflow/docs/{project_name}/{source_path}/README.md
# Project docs: .workflow/docs/{project_name}/README.md
# Project docs: .workflow/docs/{project_name}/ARCHITECTURE.md
# Project docs: .workflow/docs/{project_name}/EXAMPLES.md
# API docs: .workflow/docs/{project_name}/api/README.md
#
# Features:
# - Path mirroring: source structure → docs structure
# - Template-driven generation
# - Respects .gitignore patterns
# - Detects code vs navigation folders
# - Tool fallback support
# Build exclusion filters from .gitignore
build_exclusion_filters() {
local filters=""
# Common system/cache directories to exclude
local system_excludes=(
".git" "__pycache__" "node_modules" ".venv" "venv" "env"
"dist" "build" ".cache" ".pytest_cache" ".mypy_cache"
"coverage" ".nyc_output" "logs" "tmp" "temp" ".workflow"
)
for exclude in "${system_excludes[@]}"; do
filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
done
# Find and parse .gitignore (current dir first, then git root)
local gitignore_file=""
# Check current directory first
if [ -f ".gitignore" ]; then
gitignore_file=".gitignore"
else
# Try to find git root and check for .gitignore there
local git_root=$(git rev-parse --show-toplevel 2>/dev/null)
if [ -n "$git_root" ] && [ -f "$git_root/.gitignore" ]; then
gitignore_file="$git_root/.gitignore"
fi
fi
# Parse .gitignore if found
if [ -n "$gitignore_file" ]; then
while IFS= read -r line; do
# Skip empty lines and comments
[[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue
# Remove trailing slash and whitespace
line=$(echo "$line" | sed 's|/$||' | xargs)
# Skip wildcards patterns (too complex for simple find)
[[ "$line" =~ \* ]] && continue
# Add to filters
filters+=" -not -path '*/$line' -not -path '*/$line/*'"
done < "$gitignore_file"
fi
echo "$filters"
}
# Detect folder type (code vs navigation)
detect_folder_type() {
local target_path="$1"
local exclusion_filters="$2"
# Count code files (primary indicators)
local code_count=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' -o -name '*.go' -o -name '*.rs' \\) $exclusion_filters 2>/dev/null" | wc -l)
if [ $code_count -gt 0 ]; then
echo "code"
else
echo "navigation"
fi
}
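# Usage sketch (editorial example; never called by the script): the heuristic
# restated in self-contained form — a directory is "code" iff it directly
# contains at least one recognized source file. The extension list is trimmed
# to three entries here for brevity; sample paths are invented.
demo_detect_folder_type() {
  local dir="$1" n
  n=$(find "$dir" -maxdepth 1 -type f \
    \( -name '*.ts' -o -name '*.py' -o -name '*.go' \) 2>/dev/null | wc -l)
  if [ "$n" -gt 0 ]; then echo "code"; else echo "navigation"; fi
}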
# Scan directory structure and generate structured information
scan_directory_structure() {
local target_path="$1"
local strategy="$2"
if [ ! -d "$target_path" ]; then
echo "Directory not found: $target_path"
return 1
fi
local exclusion_filters=$(build_exclusion_filters)
local structure_info=""
# Get basic directory info
local dir_name=$(basename "$target_path")
local total_files=$(eval "find \"$target_path\" -type f $exclusion_filters 2>/dev/null" | wc -l)
local total_dirs=$(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null" | wc -l)
local folder_type=$(detect_folder_type "$target_path" "$exclusion_filters")
structure_info+="Directory: $dir_name\n"
structure_info+="Total files: $total_files\n"
structure_info+="Total directories: $total_dirs\n"
structure_info+="Folder type: $folder_type\n\n"
if [ "$strategy" = "full" ]; then
# For full: show all subdirectories with file counts
structure_info+="Subdirectories with files:\n"
while IFS= read -r dir; do
if [ -n "$dir" ] && [ "$dir" != "$target_path" ]; then
local rel_path=${dir#$target_path/}
local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
if [ $file_count -gt 0 ]; then
local subdir_type=$(detect_folder_type "$dir" "$exclusion_filters")
structure_info+=" - $rel_path/ ($file_count files, type: $subdir_type)\n"
fi
fi
done < <(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null")
else
# For single: show direct children only
structure_info+="Direct subdirectories:\n"
while IFS= read -r dir; do
if [ -n "$dir" ]; then
local dir_name=$(basename "$dir")
local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
local has_api=$([ -f "$dir/API.md" ] && echo " [has API.md]" || echo "")
local has_readme=$([ -f "$dir/README.md" ] && echo " [has README.md]" || echo "")
structure_info+=" - $dir_name/ ($file_count files)$has_api$has_readme\n"
fi
done < <(eval "find \"$target_path\" -maxdepth 1 -type d $exclusion_filters 2>/dev/null" | grep -v "^$target_path$")
fi
# Show main file types in current directory
structure_info+="\nCurrent directory files:\n"
local code_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' -o -name '*.go' -o -name '*.rs' \\) $exclusion_filters 2>/dev/null" | wc -l)
local config_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.json' -o -name '*.yaml' -o -name '*.yml' -o -name '*.toml' \\) $exclusion_filters 2>/dev/null" | wc -l)
local doc_files=$(eval "find \"$target_path\" -maxdepth 1 -type f -name '*.md' $exclusion_filters 2>/dev/null" | wc -l)
structure_info+=" - Code files: $code_files\n"
structure_info+=" - Config files: $config_files\n"
structure_info+=" - Documentation: $doc_files\n"
printf "%b" "$structure_info"
}
# Calculate output path based on source path and project name
calculate_output_path() {
local source_path="$1"
local project_name="$2"
local project_root="$3"
# Get absolute path of source (normalize to Unix-style path)
local abs_source=$(cd "$source_path" && pwd)
# Normalize project root to same format
local norm_project_root=$(cd "$project_root" && pwd)
# Calculate relative path from project root
local rel_path="${abs_source#$norm_project_root}"
# Remove leading slash if present
rel_path="${rel_path#/}"
# If source is project root, use project name directly
if [ "$abs_source" = "$norm_project_root" ] || [ -z "$rel_path" ]; then
echo "$norm_project_root/.workflow/docs/$project_name"
else
echo "$norm_project_root/.workflow/docs/$project_name/$rel_path"
fi
}
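# Worked example (editorial; paths are hypothetical): the mirroring rule in
# calculate_output_path maps <root>/<rel> to <root>/.workflow/docs/<project>/<rel>,
# and the project root itself to the docs root. This standalone restatement
# skips the cd/pwd normalization the real function performs.
demo_mirror_path() {
  local abs_source="$1" project_name="$2" root="$3"
  local rel="${abs_source#$root}"
  rel="${rel#/}"
  if [ -z "$rel" ]; then
    echo "$root/.workflow/docs/$project_name"
  else
    echo "$root/.workflow/docs/$project_name/$rel"
  fi
}
# demo_mirror_path /repo/src/auth myproject /repo
#   -> /repo/.workflow/docs/myproject/src/auth
# demo_mirror_path /repo myproject /repo
#   -> /repo/.workflow/docs/myproject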
generate_module_docs() {
local strategy="$1"
local source_path="$2"
local project_name="$3"
local tool="${4:-gemini}"
local model="$5"
# Validate parameters
if [ -z "$strategy" ] || [ -z "$source_path" ] || [ -z "$project_name" ]; then
echo "❌ Error: Strategy, source path, and project name are required"
echo "Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]"
echo "Module strategies: full, single"
echo "Project strategies: project-readme, project-architecture, http-api"
return 1
fi
# Validate strategy
local valid_strategies=("full" "single" "project-readme" "project-architecture" "http-api")
local strategy_valid=false
for valid_strategy in "${valid_strategies[@]}"; do
if [ "$strategy" = "$valid_strategy" ]; then
strategy_valid=true
break
fi
done
if [ "$strategy_valid" = false ]; then
echo "❌ Error: Invalid strategy '$strategy'"
echo "Valid module strategies: full, single"
echo "Valid project strategies: project-readme, project-architecture, http-api"
return 1
fi
if [ ! -d "$source_path" ]; then
echo "❌ Error: Source directory '$source_path' does not exist"
return 1
fi
# Set default models if not specified
if [ -z "$model" ]; then
case "$tool" in
gemini)
model="gemini-2.5-flash"
;;
qwen)
model="coder-model"
;;
codex)
model="gpt5-codex"
;;
*)
model=""
;;
esac
fi
# Build exclusion filters
local exclusion_filters=$(build_exclusion_filters)
# Get project root
local project_root=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
# Determine if this is a project-level strategy
local is_project_level=false
if [[ "$strategy" =~ ^project- ]] || [ "$strategy" = "http-api" ]; then
is_project_level=true
fi
# Calculate output path
local output_path
if [ "$is_project_level" = true ]; then
# Project-level docs go to project root
if [ "$strategy" = "http-api" ]; then
output_path="$project_root/.workflow/docs/$project_name/api"
else
output_path="$project_root/.workflow/docs/$project_name"
fi
else
output_path=$(calculate_output_path "$source_path" "$project_name" "$project_root")
fi
# Create output directory
mkdir -p "$output_path"
# Detect folder type (only for module-level strategies)
local folder_type=""
if [ "$is_project_level" = false ]; then
folder_type=$(detect_folder_type "$source_path" "$exclusion_filters")
fi
# Load templates based on strategy
local api_template=""
local readme_template=""
local template_content=""
if [ "$is_project_level" = true ]; then
# Project-level templates
case "$strategy" in
project-readme)
local proj_readme_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-readme.txt"
if [ -f "$proj_readme_path" ]; then
template_content=$(cat "$proj_readme_path")
echo " 📋 Loaded Project README template: $(wc -l < "$proj_readme_path") lines"
fi
;;
project-architecture)
local arch_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-architecture.txt"
local examples_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-examples.txt"
if [ -f "$arch_path" ]; then
template_content=$(cat "$arch_path")
echo " 📋 Loaded Architecture template: $(wc -l < "$arch_path") lines"
fi
if [ -f "$examples_path" ]; then
template_content="$template_content
EXAMPLES TEMPLATE:
$(cat "$examples_path")"
echo " 📋 Loaded Examples template: $(wc -l < "$examples_path") lines"
fi
;;
http-api)
local api_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/api.txt"
if [ -f "$api_path" ]; then
template_content=$(cat "$api_path")
echo " 📋 Loaded HTTP API template: $(wc -l < "$api_path") lines"
fi
;;
esac
else
# Module-level templates
local api_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/api.txt"
local readme_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/module-readme.txt"
local nav_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/folder-navigation.txt"
if [ "$folder_type" = "code" ]; then
if [ -f "$api_template_path" ]; then
api_template=$(cat "$api_template_path")
echo " 📋 Loaded API template: $(wc -l < "$api_template_path") lines"
fi
if [ -f "$readme_template_path" ]; then
readme_template=$(cat "$readme_template_path")
echo " 📋 Loaded README template: $(wc -l < "$readme_template_path") lines"
fi
else
# Navigation folder uses navigation template
if [ -f "$nav_template_path" ]; then
readme_template=$(cat "$nav_template_path")
echo " 📋 Loaded Navigation template: $(wc -l < "$nav_template_path") lines"
fi
fi
fi
# Scan directory structure (only for module-level strategies)
local structure_info=""
if [ "$is_project_level" = false ]; then
echo " 🔍 Scanning directory structure..."
structure_info=$(scan_directory_structure "$source_path" "$strategy")
fi
# Prepare logging info
local module_name=$(basename "$source_path")
echo "⚡ Generating docs: $source_path$output_path"
echo " Strategy: $strategy | Tool: $tool | Model: $model | Type: $folder_type"
echo " Output: $output_path"
# Build strategy-specific prompt
local final_prompt=""
# Project-level strategies
if [ "$strategy" = "project-readme" ]; then
final_prompt="PURPOSE: Generate comprehensive project overview documentation
PROJECT: $project_name
OUTPUT: Current directory (file will be moved to final location)
Read: @.workflow/docs/$project_name/**/*.md
Context: All module documentation files from the project
Generate ONE documentation file in current directory:
- README.md - Project root documentation
Template:
$template_content
Instructions:
- Create README.md in CURRENT DIRECTORY
- Synthesize information from all module docs
- Include project overview, getting started, and navigation
- Create clear module navigation with links
- Follow template structure exactly"
elif [ "$strategy" = "project-architecture" ]; then
final_prompt="PURPOSE: Generate system design and usage examples documentation
PROJECT: $project_name
OUTPUT: Current directory (files will be moved to final location)
Read: @.workflow/docs/$project_name/**/*.md
Context: All project documentation including module docs and project README
Generate TWO documentation files in current directory:
1. ARCHITECTURE.md - System architecture and design patterns
2. EXAMPLES.md - End-to-end usage examples
Template:
$template_content
Instructions:
- Create both ARCHITECTURE.md and EXAMPLES.md in CURRENT DIRECTORY
- Synthesize architectural patterns from module documentation
- Document system structure, module relationships, and design decisions
- Provide practical code examples and usage scenarios
- Follow template structure for both files"
elif [ "$strategy" = "http-api" ]; then
final_prompt="PURPOSE: Generate HTTP API reference documentation
PROJECT: $project_name
OUTPUT: Current directory (file will be moved to final location)
Read: @**/*.{ts,js,py,go,rs} @.workflow/docs/$project_name/**/*.md
Context: API route files and existing documentation
Generate ONE documentation file in current directory:
- README.md - HTTP API documentation (in api/ subdirectory)
Template:
$template_content
Instructions:
- Create README.md in CURRENT DIRECTORY
- Document all HTTP endpoints (routes, methods, parameters, responses)
- Include authentication requirements and error codes
- Provide request/response examples
- Follow template structure (Part B: HTTP API documentation)"
# Module-level strategies
elif [ "$strategy" = "full" ]; then
# Full strategy: read all files, generate for each directory
if [ "$folder_type" = "code" ]; then
final_prompt="PURPOSE: Generate comprehensive API and module documentation
Directory Structure Analysis:
$structure_info
SOURCE: $source_path
OUTPUT: Current directory (files will be moved to final location)
Read: @**/*
Generate TWO documentation files in current directory:
1. API.md - Code API documentation (functions, classes, interfaces)
Template:
$api_template
2. README.md - Module overview documentation
Template:
$readme_template
Instructions:
- Generate both API.md and README.md in CURRENT DIRECTORY
- If subdirectories contain code files, generate their docs too (recursive)
- Work bottom-up: deepest directories first
- Follow template structure exactly
- Use structure analysis for context"
else
# Navigation folder - README only
final_prompt="PURPOSE: Generate navigation documentation for folder structure
Directory Structure Analysis:
$structure_info
SOURCE: $source_path
OUTPUT: Current directory (file will be moved to final location)
Read: @**/*
Generate ONE documentation file in current directory:
- README.md - Navigation and folder overview
Template:
$readme_template
Instructions:
- Create README.md in CURRENT DIRECTORY
- Focus on folder structure and navigation
- Link to subdirectory documentation
- Use structure analysis for context"
fi
else
# Single strategy: read current + child docs only
if [ "$folder_type" = "code" ]; then
final_prompt="PURPOSE: Generate API and module documentation for current directory
Directory Structure Analysis:
$structure_info
SOURCE: $source_path
OUTPUT: Current directory (files will be moved to final location)
Read: @*/API.md @*/README.md @*.ts @*.tsx @*.js @*.jsx @*.py @*.sh @*.go @*.rs @*.md @*.json @*.yaml @*.yml
Generate TWO documentation files in current directory:
1. API.md - Code API documentation
Template:
$api_template
2. README.md - Module overview
Template:
$readme_template
Instructions:
- Generate both API.md and README.md in CURRENT DIRECTORY
- Reference child documentation, do not duplicate
- Follow template structure
- Use structure analysis for current directory context"
else
# Navigation folder - README only
final_prompt="PURPOSE: Generate navigation documentation
Directory Structure Analysis:
$structure_info
SOURCE: $source_path
OUTPUT: Current directory (file will be moved to final location)
Read: @*/API.md @*/README.md @*.md
Generate ONE documentation file in current directory:
- README.md - Navigation and overview
Template:
$readme_template
Instructions:
- Create README.md in CURRENT DIRECTORY
- Link to child documentation
- Use structure analysis for navigation context"
fi
fi
# Execute documentation generation
local start_time=$(date +%s)
echo " 🔄 Starting documentation generation..."
if cd "$source_path" 2>/dev/null; then
local tool_result=0
# Store current output path for CLI context
export DOC_OUTPUT_PATH="$output_path"
# Execute with selected tool
case "$tool" in
qwen)
if [ "$model" = "coder-model" ]; then
qwen -p "$final_prompt" --yolo 2>&1
else
qwen -p "$final_prompt" -m "$model" --yolo 2>&1
fi
tool_result=$?
;;
codex)
codex --full-auto exec "$final_prompt" -m "$model" --skip-git-repo-check -s danger-full-access 2>&1
tool_result=$?
;;
gemini)
gemini -p "$final_prompt" -m "$model" --yolo 2>&1
tool_result=$?
;;
*)
echo " ⚠️ Unknown tool: $tool, defaulting to gemini"
gemini -p "$final_prompt" -m "${model:-gemini-2.5-flash}" --yolo 2>&1
tool_result=$?
;;
esac
# Move generated files to output directory
local docs_created=0
local moved_files=""
if [ $tool_result -eq 0 ]; then
if [ "$is_project_level" = true ]; then
# Project-level documentation files
case "$strategy" in
project-readme)
if [ -f "README.md" ]; then
mv "README.md" "$output_path/README.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="README.md "
}
fi
;;
project-architecture)
if [ -f "ARCHITECTURE.md" ]; then
mv "ARCHITECTURE.md" "$output_path/ARCHITECTURE.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="ARCHITECTURE.md "
}
fi
if [ -f "EXAMPLES.md" ]; then
mv "EXAMPLES.md" "$output_path/EXAMPLES.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="EXAMPLES.md "
}
fi
;;
http-api)
if [ -f "README.md" ]; then
mv "README.md" "$output_path/README.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="api/README.md "
}
fi
;;
esac
else
# Module-level documentation files
# Check and move API.md if it exists
if [ "$folder_type" = "code" ] && [ -f "API.md" ]; then
mv "API.md" "$output_path/API.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="API.md "
}
fi
# Check and move README.md if it exists
if [ -f "README.md" ]; then
mv "README.md" "$output_path/README.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="README.md "
}
fi
fi
fi
if [ $docs_created -gt 0 ]; then
local end_time=$(date +%s)
local duration=$((end_time - start_time))
echo " ✅ Generated $docs_created doc(s) in ${duration}s: $moved_files"
cd - > /dev/null
return 0
else
echo " ❌ Documentation generation failed for $source_path"
cd - > /dev/null
return 1
fi
else
echo " ❌ Cannot access directory: $source_path"
return 1
fi
}
# Execute function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
# Show help if no arguments or help requested
if [ $# -eq 0 ] || [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
echo "Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]"
echo ""
echo "Module-Level Strategies:"
echo " full - Generate docs for all subdirectories with code"
echo " single - Generate docs only for current directory"
echo ""
echo "Project-Level Strategies:"
echo " project-readme - Generate project root README.md"
echo " project-architecture - Generate ARCHITECTURE.md + EXAMPLES.md"
echo " http-api - Generate HTTP API documentation (api/README.md)"
echo ""
echo "Tools: gemini (default), qwen, codex"
echo "Models: Use tool defaults if not specified"
echo ""
echo "Module Examples:"
echo " ./generate_module_docs.sh full ./src/auth myproject"
echo " ./generate_module_docs.sh single ./components myproject gemini"
echo ""
echo "Project Examples:"
echo " ./generate_module_docs.sh project-readme . myproject"
echo " ./generate_module_docs.sh project-architecture . myproject qwen"
echo " ./generate_module_docs.sh http-api . myproject"
exit 0
fi
generate_module_docs "$@"
fi


@@ -1,170 +0,0 @@
# Architecture: Claude_dms3 System Architecture
## Related Files
- `workflow-architecture.md` - Core system, session, and task architecture.
- `intelligent-tools-strategy.md` - Describes tool selection, prompt structure, and execution modes.
- `mcp-tool-strategy.md` - Details triggering mechanisms for external tools like Exa.
- `action-planning-agent.md` - Explains the role and process of the planning agent.
- `code-developer.md` - Explains the role and process of the code implementation agent.
## Summary
The Claude_dms3 project is a sophisticated CLI-based system designed for autonomous software engineering. It leverages a "JSON-only data model" for task state management, "directory-based session management," and a "unified file structure" to organize workflows. The system employs multiple specialized AI agents (e.g., action-planning-agent, code-developer) and integrates external tools (Gemini, Qwen, Codex, Exa) through a "universal prompt template" and "intelligent tool selection strategy." Its core philosophy emphasizes incremental progress, context-driven execution, and strict quality standards, supported by a hierarchical task system with dynamic decomposition.
## Key Findings
1. **JSON-Only Data Model for Task State**: The system's task state is stored exclusively in JSON files (`.task/IMPL-*.json`), which are the single source of truth. All markdown documents are read-only generated views, eliminating bidirectional sync complexity. (`workflow-architecture.md`)
2. **Directory-Based Session Management**: Workflow sessions are managed through a simple directory structure (`.workflow/active/` for active, `.workflow/archives/` for completed), where the location of a session directory determines its state. (`workflow-architecture.md`)
3. **Hierarchical Task Structure with Flow Control**: Tasks are organized hierarchically (max 2 levels: IMPL-N, IMPL-N.M) and include a `flow_control` field within their JSON schema to define sequential `pre_analysis` and `implementation_approach` steps with dependency management. (`workflow-architecture.md`)
4. **Intelligent Tool Selection and Universal Prompt Template**: The system dynamically selects AI models (Gemini, Qwen, Codex) based on the task type (analysis vs. implementation) and uses a consistent "Universal Prompt Template" structure for all CLI tools, standardizing interaction and context passing. (`intelligent-tools-strategy.md`)
5. **Agent-Based Execution**: Specialized agents like the `action-planning-agent` (for generating plans and tasks) and `code-developer` (for implementing code and tests) process tasks based on provided context packages, ensuring clear separation of concerns and automated execution. (`action-planning-agent.md`, `code-developer.md`)
6. **Quantification Requirements**: A mandatory rule enforces explicit counts and enumerations in all task specifications, requirements, and acceptance criteria to eliminate ambiguity and ensure measurable outcomes. (`action-planning-agent.md`)
## Detailed Analysis
### 1. System Overview
The Claude_dms3 system is a CLI-driven, multi-agent orchestrator for software development. Its core principles revolve around **autonomy, consistency, and traceability**. The architectural style can be described as an **Agent-Oriented Architecture** interacting with external LLM-based tools, governed by a **Command-Query Responsibility Segregation (CQRS)**-like approach where JSON files are the command/state store and markdown files are generated views.
**Core Principles**:
* **JSON-Only Data Model**: `.task/IMPL-*.json` files are the single source of truth for task states, ensuring consistency and avoiding synchronization issues. (`workflow-architecture.md`)
* **Context-Driven Execution**: Agents and tools operate with rich context, including session metadata, analysis results, and project-specific artifacts, passed via "context packages." (`action-planning-agent.md`, `code-developer.md`)
* **Incremental Progress**: The `code-developer` agent, for instance, emphasizes small, testable changes. (`code-developer.md`)
* **Quantification**: Explicit counts and measurable acceptance criteria are mandatory in task definitions. (`action-planning-agent.md`)
**Technology Stack**:
* **Core**: Shell scripting (Bash/PowerShell), `jq` for JSON processing, `find`, `grep`, `rg` for file system operations.
* **AI Models**: Gemini (primary for analysis), Qwen (fallback for analysis), Codex (for implementation and testing).
* **External Tools**: Exa (for code context and web search).
### 2. System Structure
The system follows a layered architecture, conceptually:
```
+--------------------------+
| User Interface | (CLI Commands: /cli:chat, /workflow:plan, etc.)
+------------+-------------+
|
v
+------------+-------------+
| Command Layer | (Parses user input, prepares context for agents)
+------------+-------------+
|
v
+------------+-------------+
| Agent Orchestration | (Selects & coordinates agents, manages workflow sessions)
+------------+-------------+
|
v
+------------+-------------+ +-------------------+
| Specialized AI Agents |<----> External AI Tools |
| (action-planning, code- | | (Gemini, Qwen, |
| developer, test-fix) | | Codex, Exa) |
+------------+-------------+ +-------------------+
^
| (Reads/Writes Task JSONs)
+------------+-------------+
| Data / State Layer | (JSON-only task files, workflow-session.json)
+--------------------------+
```
### 3. Module Map
| Module / Component | Layer | Responsibilities | Dependencies |
| :-------------------------- | :------------------- | :---------------------------------------------------- | :-------------------------------------------------------------------------------- |
| **User Interface** | Presentation | User command input, display of CLI output. | Command Layer |
| **Command Layer** | Application | Parse user commands, prepare context for agents. | Agent Orchestration, Data Layer |
| **Agent Orchestration** | Application | Manage workflow sessions, select and invoke agents. | Specialized AI Agents, Data Layer, External AI Tools |
| `action-planning-agent` | Specialized AI Agent | Create implementation plans, generate task JSONs. | External AI Tools (Exa), Data Layer |
| `code-developer` | Specialized AI Agent | Implement code, write tests, follow tech stack. | External AI Tools (Exa, Codex), Data Layer |
| `workflow-architecture` | Core System Logic | Defines system-wide conventions for data model, session, and task management. | N/A (foundational) |
| `intelligent-tools-strategy`| Core System Logic | Defines tool selection logic, prompt templates. | External AI Tools (Gemini, Qwen, Codex) |
| `mcp-tool-strategy` | Core System Logic | Defines conditions for triggering Exa tools. | External AI Tools (Exa) |
| **Data / State Layer** | Persistence | Store task definitions (`.task/*.json`), session metadata (`workflow-session.json`). | Specialized AI Agents, Agent Orchestration |
| **External AI Tools** | External Service | Provide LLM capabilities (analysis, code gen, search).| Specialized AI Agents |
### 4. Module Interactions
**Core Data Flow (Workflow Execution Example)**:
1. **User Initiates Workflow**: User executes a CLI command (e.g., `/workflow:plan "Implement feature X"`).
2. **Command Layer Processes**: The command layer translates the user's request into a structured input for the agent orchestration.
3. **Agent Orchestration Invokes Planning Agent**: The system determines that a planning task is needed and invokes the `action-planning-agent`.
4. **Planning Agent's Context Assessment**: The `action-planning-agent` receives a "context package" (JSON) containing session metadata, analysis results (if any), and an inventory of brainstorming artifacts. It can optionally use MCP tools (Exa) for further context enhancement. (`action-planning-agent.md`)
5. **Planning Agent Generates Tasks**: Based on the context, the `action-planning-agent` generates:
* Multiple task JSON files (`.task/IMPL-*.json`) adhering to the 6-field schema and quantification requirements.
* An overall `IMPL_PLAN.md` document.
* A `TODO_LIST.md` for progress tracking.
All these are stored in the respective session directory within `.workflow/active/WFS-[topic-slug]`. (`action-planning-agent.md`)
6. **User/System Initiates Implementation**: Once planning is complete, the user or system might trigger an implementation phase (e.g., `/cli:execute IMPL-1`).
7. **Agent Orchestration Invokes Code Developer**: The `code-developer` agent is invoked for the specified task.
8. **Code Developer's Context Assessment**: The `code-developer` receives the task's JSON (which includes `flow_control` for `pre_analysis` steps and `implementation_approach`) and assesses the context, potentially loading tech stack guidelines. It can use local search tools (`rg`, `find`) and MCP tools (Exa) for further information gathering. (`code-developer.md`)
9. **Code Developer Executes Task**: The `code-developer` executes the `implementation_approach` steps sequentially, respecting `depends_on` relationships and using external AI tools (Codex for implementation, Gemini/Qwen for analysis). It performs incremental changes, ensuring code quality and test pass rates. (`code-developer.md`)
10. **Task Completion and Summary**: Upon successful completion of a task, the `code-developer` updates the `TODO_LIST.md` and generates a detailed summary (`.summaries/IMPL-*-summary.md`) within the session directory. (`code-developer.md`)
**Dependency Graph (High-Level)**:
```mermaid
graph TD
User[User Input/CLI] --> CommandLayer
CommandLayer --> AgentOrchestration
AgentOrchestration --> PlanningAgent
AgentOrchestration --> CodeDeveloper
AgentOrchestration --> OtherAgents[Other Specialized Agents]
PlanningAgent --> DataLayer[Data Layer: .task/*.json, workflow-session.json]
PlanningAgent --> ExternalTools[External AI Tools: Gemini, Qwen, Codex, Exa]
CodeDeveloper --> DataLayer
CodeDeveloper --> ExternalTools
OtherAgents --> DataLayer
OtherAgents --> ExternalTools
DataLayer --> AgentOrchestration
ExternalTools --> AgentOrchestration
```
### 5. Design Patterns
* **Agent-Oriented Programming**: The system is composed of autonomous, specialized agents that interact to achieve complex goals. Each agent (e.g., `action-planning-agent`, `code-developer`) has a defined role, input, and output.
* **Single Source of Truth**: The "JSON-only data model" for task states is a strict application of this pattern, simplifying data consistency.
* **Command Pattern**: CLI commands abstract complex operations, encapsulating requests as objects to be passed to agents.
* **Strategy Pattern**: The "Intelligent Tools Selection Strategy" dynamically chooses the appropriate AI model (Gemini, Qwen, Codex) based on the task's needs.
* **Template Method Pattern**: The "Universal Prompt Template" and various task-specific templates provide a skeletal structure for commands, allowing agents to fill in details.
* **Observer Pattern (Implicit)**: Changes in `.task/*.json` or `workflow-session.json` (the state) implicitly trigger updates in derived views like `TODO_LIST.md`.
* **Context Object Pattern**: The "context package" passed to agents bundles all relevant information needed for task execution.
### 6. Aggregated API Overview
The system's "API" is primarily its command-line interface and the structured inputs/outputs (JSON files) that agents process. The core system exposes no traditional RESTful API; instead, agents make internal "tool calls" to external AI services.
**Key "API" (CLI Commands) Categories**:
* **Workflow Management**: `workflow:plan`, `workflow:execute`, `workflow:status`, `workflow:session:*`
* **CLI Utilities**: `cli:analyze`, `cli:chat`, `cli:codex-execute`
* **Memory Management**: `memory:load`, `memory:update`
* **Task Management**: `task:breakdown`, `task:create`, `task:replan`
* **Brainstorming**: `workflow:brainstorm:*`
* **UI Design**: `workflow:ui-design:*`
**Internal API (Agent Inputs/Outputs)**:
* **Context Package (Input to Agents)**: A JSON object containing `session_id`, `session_metadata`, `analysis_results`, `artifacts_inventory`, `context_package`, `mcp_capabilities`, `mcp_analysis`. (`action-planning-agent.md`)
* **Task JSON (`.task/IMPL-*.json`)**: Standardized 6-field JSON schema (`id`, `title`, `status`, `meta`, `context`, `flow_control`). (`workflow-architecture.md`)
* **`flow_control` object**: Contains `pre_analysis` (context gathering) and `implementation_approach` (implementation steps) arrays, with support for variable references and dependency management. (`workflow-architecture.md`)
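
As a hedged sketch, a task file matching that schema could look like the following — only the six top-level keys and the `flow_control` sub-arrays come from the description above; every value and step name is invented for illustration:

```shell
# Write an illustrative .task/IMPL-1.json; field values are invented, and
# only the six top-level keys and the flow_control shape follow the schema.
task_file=$(mktemp)
cat > "$task_file" <<'EOF'
{
  "id": "IMPL-1",
  "title": "Add login endpoint",
  "status": "pending",
  "meta": { "complexity": "medium" },
  "context": { "artifacts": [] },
  "flow_control": {
    "pre_analysis": [
      { "step": "scan_routes", "output": "route_list" }
    ],
    "implementation_approach": [
      { "step": "implement_endpoint", "depends_on": ["scan_routes"] }
    ]
  }
}
EOF
grep -c '"step"' "$task_file"  # counts the two defined steps
```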
### 7. Data Flow
The data flow is highly structured and context-rich, moving between the command layer, agent orchestration, specialized agents, and external AI tools. A typical flow for implementing a feature involves:
1. **Plan Generation**: User request -> Command Layer -> Agent Orchestration -> `action-planning-agent`.
2. **Context Loading**: `action-planning-agent` loads `context package` (session state, existing analysis, artifacts).
3. **Task & Plan Output**: `action-planning-agent` writes `.task/IMPL-*.json`, `IMPL_PLAN.md`, `TODO_LIST.md`.
4. **Task Execution**: Agent Orchestration selects an `IMPL-N` task -> `code-developer` agent.
5. **Pre-Analysis**: `code-developer` executes `flow_control.pre_analysis` steps (e.g., `bash()` commands, `Read()`, `Glob()`, `Grep()`, `mcp__exa__*()` calls, `gemini` for pattern analysis).
6. **Implementation**: `code-developer` executes `flow_control.implementation_approach` steps (e.g., `codex` for code generation, `bash()` for tests, `gemini` for quality review). Outputs from earlier steps feed into later ones via `[variable_name]` references.
7. **Status Update**: `code-developer` updates task status in `.task/IMPL-*.json`, `TODO_LIST.md`, and generates `.summaries/IMPL-*-summary.md`.
### 8. Security and Scalability
**Security**:
* **Strict Permission Framework**: Each CLI execution requires explicit user authorization. `analysis` mode is read-only, `write` and `auto` modes require explicit `--approval-mode yolo` (Gemini/Qwen) or `--skip-git-repo-check -s danger-full-access` (Codex). This prevents unauthorized modifications. (`intelligent-tools-strategy.md`)
* **No File Modifications in Analysis Mode**: By design, analysis agents cannot modify the file system, reducing risk. (`intelligent-tools-strategy.md`)
* **Context Scope Limitation**: Use of `cd` and `--include-directories` limits the context provided to agents, preventing agents from accessing unrelated parts of the codebase. (`intelligent-tools-strategy.md`)
* **Quantification Requirements**: The strict quantification and explicit listing of modification points provide transparency and auditability for agent actions. (`action-planning-agent.md`)
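A minimal sketch of the mode-to-flags mapping described above, as a shell helper. The helper itself is hypothetical; the flags are the ones quoted from `intelligent-tools-strategy.md`.

```shell
# Hypothetical helper: map an execution mode to the authorization flags
# required before invoking a CLI tool. Not part of CCW itself.
cli_flags() {
  local tool="$1" mode="$2"
  case "$mode" in
    analysis)
      echo "" ;;   # read-only: no write authorization needed
    write|auto)
      case "$tool" in
        gemini|qwen) echo "--approval-mode yolo" ;;
        codex)       echo "--skip-git-repo-check -s danger-full-access" ;;
      esac ;;
  esac
}

cli_flags gemini write   # prints: --approval-mode yolo
cli_flags codex auto     # prints: --skip-git-repo-check -s danger-full-access
```

The key property is that `analysis` mode yields no write flags at all, so a missing authorization fails safe.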
**Scalability**:
* **On-Demand Resource Creation**: Directories and files (like subtask JSONs) are created only when needed, avoiding unnecessary resource allocation. (`workflow-architecture.md`)
* **Distributed AI Processing**: Leveraging external AI services (Gemini, Qwen, Codex) offloads heavy computational tasks, allowing the core CLI system to stay lightweight and focus on orchestration.
* **Hierarchical Task Decomposition**: The ability to break down complex problems into smaller, manageable subtasks (max 2 levels) inherently supports handling larger projects by distributing work. (`workflow-architecture.md`)
* **Task Complexity Classification**: The system classifies tasks (simple, medium, complex) and applies appropriate timeout allocations and strategies, ensuring efficient resource utilization. (`workflow-architecture.md`)
* **Session Management**: The directory-based session management allows for multiple concurrent workflows by separating their states. (`workflow-architecture.md`)

# EXAMPLES: Claude_dms3 Usage Examples
## Related Files
- `GETTING_STARTED.md` - Initial setup and quick start guide.
- `EXAMPLES.md` (root) - Comprehensive real-world examples across various development phases.
- `.claude/skills/command-guide/guides/examples.md` - Specific command usage examples across different modes.
## Introduction
This document provides practical, end-to-end examples demonstrating the core usage of the Claude_dms3 system. It covers everything from quick start guides to complex development workflows, illustrating how specialized AI agents and integrated tools can automate and streamline software engineering tasks.
**Prerequisites**: Ensure Claude_dms3 is installed and configured as per the [Installation Guide](INSTALL.md) and that you have a basic understanding of the core concepts explained in [GETTING_STARTED.md](GETTING_STARTED.md).
## Quick Start Example
Let's create a "Hello World" web application using a simple Express API.
### Step 1: Create an Execution Plan
Tell Claude_dms3 what you want to build. The system will analyze your request and automatically generate a detailed, executable task plan.
```bash
/workflow:plan "Create a simple Express API that returns Hello World at the root path"
```
* **Explanation**: The `/workflow:plan` command initiates a fully automated planning process. This includes context gathering from your project, analysis by AI agents to determine the best implementation path, and the generation of specific task files (in `.json` format) in a new workflow session (`.workflow/active/WFS-create-a-simple-express-api/`).
### Step 2: Execute the Plan
Once the plan is created, command the AI agents to start working.
```bash
/workflow:execute
```
* **Explanation**: Claude_dms3's agents, such as `@code-developer`, will begin executing the planned tasks one by one. This involves creating files, writing code, and installing necessary dependencies to fulfill the "Hello World" API request.
### Step 3: Check the Status (Optional)
Monitor the progress of the current workflow at any time.
```bash
/workflow:status
```
* **Explanation**: This command provides an overview of task completion, the currently executing task, and the upcoming steps in the workflow.
## Core Use Cases
### 1. Full-Stack Todo Application Development
**Objective**: Build a complete todo application with a React frontend and an Express backend, including user authentication, real-time updates, and dark mode.
#### Phase 1: Planning with Multi-Agent Brainstorming
Utilize brainstorming to analyze the complex requirements from multiple perspectives before implementation.
```bash
# Multi-perspective analysis for the full-stack application
/workflow:brainstorm:auto-parallel "Full-stack todo application with user authentication, real-time updates, and dark mode"
# Review brainstorming artifacts, then create the implementation plan
/workflow:plan
# Verify the plan quality
/workflow:action-plan-verify
```
#### Phase 2: Implementation
Execute the generated plan to build the application components.
```bash
# Execute the plan
/workflow:execute
# Monitor progress
/workflow:status
```
#### Phase 3: Testing
Generate and execute comprehensive tests for the implemented features.
```bash
# Generate comprehensive tests
/workflow:test-gen WFS-todo-application # WFS-todo-application is the session ID
# Execute test tasks
/workflow:execute
# Run an iterative test-fix cycle if needed
/workflow:test-cycle-execute
```
#### Phase 4: Quality Review & Completion
Review the implemented solution for security, architecture, and overall quality, then complete the session.
```bash
# Security review
/workflow:review --type security
# Architecture review
/workflow:review --type architecture
# General quality review
/workflow:review
# Complete the session
/workflow:session:complete
```
### 2. RESTful API with Authentication
**Objective**: Create a RESTful API with JWT authentication and role-based access control for a `posts` resource.
```bash
# Initiate detailed planning for the API
/workflow:plan "RESTful API with JWT authentication, role-based access control (admin, user), and protected endpoints for posts resource"
# Verify the plan for consistency and completeness
/workflow:action-plan-verify
# Execute the implementation plan
/workflow:execute
```
* **Implementation includes**:
* **Authentication Endpoints**: `POST /api/auth/register`, `POST /api/auth/login`, `POST /api/auth/refresh`, `POST /api/auth/logout`.
* **Protected Resources**: `GET /api/posts` (public), `GET /api/posts/:id` (public), `POST /api/posts` (authenticated), `PUT /api/posts/:id` (authenticated, owner or admin), `DELETE /api/posts/:id` (authenticated, owner or admin).
* **Middleware**: `authenticate` (verifies JWT token), `authorize(['admin'])` (role-based access), `validateRequest` (input validation), `errorHandler` (centralized error handling).
### 3. Test-Driven Development (TDD)
**Objective**: Implement user authentication (login, registration, password reset) using a TDD approach.
```bash
# Start the TDD workflow for user authentication
/workflow:tdd-plan "User authentication with email/password login, registration, and password reset"
# Execute the TDD cycles (Red-Green-Refactor)
/workflow:execute
# Verify TDD compliance (optional)
/workflow:tdd-verify
```
* **TDD cycle tasks created**: Claude_dms3 will create tasks in cycles (e.g., Registration, Login, Password Reset), where each cycle involves writing a failing test, implementing the feature to pass the test, and then refactoring the code.
## Advanced & Integration Examples
### 1. Monolith to Microservices Refactoring
**Objective**: Refactor a monolithic application into a microservices architecture with an API gateway, service discovery, and message queue.
#### Phase 1: Analysis
Perform deep architecture analysis and multi-role brainstorming.
```bash
# Deep architecture analysis to create a migration strategy
/cli:mode:plan --tool gemini "Analyze current monolithic architecture and create microservices migration strategy"
# Multi-role brainstorming for microservices design
/workflow:brainstorm:auto-parallel "Migrate monolith to microservices with API gateway, service discovery, and message queue" --count 5
```
#### Phase 2: Planning
Create a detailed migration plan based on the analysis.
```bash
# Create a detailed migration plan for the first phase
/workflow:plan "Phase 1 microservices migration: Extract user service and auth service from monolith"
# Verify the plan
/workflow:action-plan-verify
```
#### Phase 3: Implementation
Execute the migration plan and review the architecture.
```bash
# Execute the migration tasks
/workflow:execute
# Review the new microservices architecture
/workflow:review --type architecture
```
### 2. Real-Time Chat Application
**Objective**: Build a real-time chat application with WebSocket, message history, and file sharing.
#### Complete Workflow
This example combines brainstorming, UI design, planning, implementation, testing, and review.
```bash
# 1. Brainstorm for comprehensive feature specification
/workflow:brainstorm:auto-parallel "Real-time chat application with WebSocket, message history, file upload, user presence, typing indicators" --count 5
# 2. UI Design exploration
/workflow:ui-design:explore-auto --prompt "Modern chat interface with message list, input box, user sidebar, file preview" --targets "chat-window,message-bubble,user-list" --style-variants 2
# 3. Sync selected designs (assuming a session ID from the UI design step)
/workflow:ui-design:design-sync --session <session-id>
# 4. Plan the implementation
/workflow:plan
# 5. Verify the plan
/workflow:action-plan-verify
# 6. Execute the implementation
/workflow:execute
# 7. Generate tests for the application
/workflow:test-gen <session-id>
# 8. Execute the generated tests
/workflow:execute
# 9. Review the security and architecture
/workflow:review --type security
/workflow:review --type architecture
# 10. Complete the session
/workflow:session:complete
```
## Testing Examples
### 1. Adding Tests to Existing Code
**Objective**: Generate comprehensive tests for an existing authentication module.
```bash
# Create a test generation workflow for the authentication implementation
/workflow:test-gen WFS-authentication-implementation # WFS-authentication-implementation is the session ID
# Execute the test tasks (generate and run tests)
/workflow:execute
# Run a test-fix cycle until all tests pass
/workflow:test-cycle-execute --max-iterations 5
```
* **Tests generated**: Unit tests for each function, integration tests for the auth flow, edge case tests (invalid input, expired tokens), security tests (SQL injection, XSS), and performance tests.
### 2. Bug Fixing - Complex Bug Investigation
**Objective**: Debug a memory leak in a React application caused by uncleared event listeners.
#### Investigation
Start a dedicated session for thorough investigation.
```bash
# Start a new session for memory leak investigation
/workflow:session:start "Memory Leak Investigation"
# Perform deep bug analysis using Gemini
/cli:mode:bug-diagnosis --tool gemini "Memory leak in React components - event listeners not cleaned up"
# Create a fix plan based on the analysis
/workflow:plan "Fix memory leaks in React components: cleanup event listeners and cancel subscriptions"
```
#### Implementation
Execute the fixes and generate tests to prevent regression.
```bash
# Execute the memory leak fixes
/workflow:execute
# Generate tests to prevent future regressions
/workflow:test-gen WFS-memory-leak-investigation
# Execute the generated tests
/workflow:execute
```
## Best Practices & Troubleshooting
### Best Practices for Effective Usage
1. **Start with clear objectives**: Define what you want to build, list key features, and specify technologies.
2. **Use appropriate workflow**:
* Simple tasks: `/workflow:lite-plan`
* Complex features: `/workflow:brainstorm` → `/workflow:plan`
* Existing code: `/workflow:test-gen` or `/cli:analyze`
3. **Leverage quality gates**:
* Run `/workflow:action-plan-verify` before execution.
* Use `/workflow:review` after implementation.
* Generate tests with `/workflow:test-gen`.
4. **Maintain memory**:
* Update memory after major changes with `/memory:update-full` or `/memory:update-related`.
* Use `/memory:load` for quick, task-specific context.
5. **Complete sessions**: Always run `/workflow:session:complete` to generate lessons learned and archive the session.
### Troubleshooting Common Issues
* **Problem: Prompt shows "No active session found"**
* **Reason**: You haven't started a workflow session, or the current session is complete.
* **Solution**: Use `/workflow:session:start "Your task description"` to start a new session.
* **Problem: Command execution fails or gets stuck**
* **Reason**: A network issue, an AI model limitation, or a task that is too complex.
* **Solution**:
1. First, try `/workflow:status` to check the current state.
2. Check log files in the `.workflow/WFS-<session-name>/.chat/` directory for detailed error messages.
3. If the task is too complex, break it down into smaller tasks and use `/workflow:plan` to create a new plan.
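When checking log files, a quick scan for error lines across all session chat directories can save digging through each log by hand. This is a sketch assuming the session layout described above; the `mkdir`/`printf` lines only create sample data so the snippet is self-contained.

```shell
# Sample data so the snippet runs standalone; skip these two lines in a real project.
mkdir -p .workflow/WFS-demo/.chat
printf 'INFO: task started\nERROR: model timeout\n' > .workflow/WFS-demo/.chat/run.log

# Show the most recent error lines across all session chat logs
grep -rh "ERROR" .workflow/WFS-*/.chat/ | tail -n 5
```

Adjust the glob if your sessions live under `.workflow/active/` instead.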
## Conclusion
This document provides a foundational understanding of how to leverage Claude_dms3 for various software development tasks, from initial planning to complex refactoring and comprehensive testing. By following these examples and best practices, users can effectively harness the power of AI-driven automation to enhance their development workflows.

# 🚀 Claude Code Workflow (CCW)
<div align="center">
[![Version](https://img.shields.io/badge/version-v5.8.1-blue.svg)](https://github.com/catlog22/Claude-Code-Workflow/releases)
[![License](https://img.shields.io/badge/license-MIT-green.svg)](LICENSE)
[![Platform](https://img.shields.io/badge/platform-Windows%20%7C%20Linux%20%7C%20macOS-lightgrey.svg)]()
**Languages:** [English](README.md) | [中文](README_CN.md)
</div>
---
## 1. Overview
Claude Code Workflow (CCW) transforms AI development from simple prompt chaining into a robust, context-first orchestration system. It solves execution uncertainty and error accumulation through structured planning, deterministic execution, and intelligent multi-model orchestration. CCW is designed for AI developers and software engineers seeking to streamline their development processes.
**Key Features**:
* **Context-First Architecture**: Ensures agents receive correct information before implementation.
* **JSON-First State Management**: Task states use `.task/IMPL-*.json` files as the single source of truth for programmatic orchestration.
* **Autonomous Multi-Phase Orchestration**: Commands chain specialized sub-commands and agents to automate complex workflows.
* **Multi-Model Strategy**: Leverages Gemini for analysis, Codex for implementation, and Qwen for architecture/planning.
* **Hierarchical Memory System**: A 4-layer CLAUDE.md documentation system provides context at appropriate abstraction levels.
* **Specialized Role-Based Agents**: A suite of agents (e.g., `@code-developer`, `@test-fix-agent`) mirrors a real software team.
* **Lite-Plan Workflow**: A lightweight interactive planning and execution workflow with in-memory planning, smart code exploration, three-dimensional multi-select confirmation, and parallel task execution support.
* **CLI Tools Optimization**: Simplified command syntax with auto-model-selection for Gemini, Qwen, and Codex.
---
## 2. System Architecture
CCW is built on a foundation of robust design principles and a multi-layered architecture to facilitate AI-driven software development.
### Architectural Style and Design Principles
**Design Philosophy**:
* **Context-First Architecture**: Pre-defined context gathering eliminates execution uncertainty.
* **JSON-First State Management**: Task states live in `.task/IMPL-*.json` files as the single source of truth.
* **Autonomous Multi-Phase Orchestration**: Commands chain specialized sub-commands and agents.
* **Multi-Model Strategy**: Leverages the unique strengths of different AI models (Gemini for analysis, Codex for implementation, Qwen for architecture and planning).
* **Hierarchical Memory System**: A 4-layer documentation system provides context at the appropriate level of abstraction.
* **Specialized Role-Based Agents**: A suite of agents mirrors a real software team.
**Core Beliefs**:
* **Pursue good taste**: Eliminate edge cases to make code logic natural and elegant.
* **Embrace extreme simplicity**: Complexity is the root of all evil.
* **Be pragmatic**: Code must solve real-world problems, not hypothetical ones.
* **Data structures first**: Good programmers worry about data structures and their relationships.
* **Never break backward compatibility**: Existing functionality is sacred and inviolable.
* **Incremental progress over big bangs**: Small changes that compile and pass tests.
* **Learning from existing code**: Study and plan before implementing.
* **Clear intent over clever code**: Be boring and obvious.
* **Follow existing code style**: Match import patterns, naming conventions, and formatting of existing codebase.
### System Architecture Diagram
```mermaid
graph TB
subgraph "User Interface Layer"
CLI[Slash Commands]
CHAT[Natural Language]
end
subgraph "Orchestration Layer"
WF[Workflow Engine]
SM[Session Manager]
TM[Task Manager]
end
subgraph "Agent Layer"
AG1[@code-developer]
AG2[@test-fix-agent]
AG3[@ui-design-agent]
AG4[@cli-execution-agent]
AG5[More Agents...]
end
subgraph "Tool Layer"
GEMINI[Gemini CLI]
QWEN[Qwen CLI]
CODEX[Codex CLI]
BASH[Bash/System]
end
subgraph "Data Layer"
JSON[Task JSON Files]
MEM[CLAUDE.md Memory]
STATE[Session State]
end
CLI --> WF
CHAT --> WF
WF --> SM
WF --> TM
SM --> STATE
TM --> JSON
WF --> AG1
WF --> AG2
WF --> AG3
WF --> AG4
AG1 --> GEMINI
AG1 --> QWEN
AG1 --> CODEX
AG2 --> BASH
AG3 --> GEMINI
AG4 --> CODEX
GEMINI --> MEM
QWEN --> MEM
CODEX --> JSON
```
### Core Components
1. **Workflow Engine**: Orchestrates complex development processes through planning, execution, verification, testing, and review phases.
2. **Session Manager**: Manages isolated workflow contexts, providing directory-based session tracking, persistence, parallel support, and archival.
3. **Task Manager**: Handles hierarchical task structures using a JSON-first data model, dynamic subtask creation, and dependency tracking.
4. **Memory System**: A four-layer hierarchical CLAUDE.md documentation system for project knowledge (`CLAUDE.md`, `src/CLAUDE.md`, `auth/CLAUDE.md`, `jwt/CLAUDE.md`).
5. **Multi-Agent System**: Specialized agents for different types of tasks, such as `@code-developer` (implementation), `@test-fix-agent` (testing), `@ui-design-agent` (UI design), `@action-planning-agent` (planning), and `@cli-execution-agent` (CLI task handling).
6. **CLI Tool Integration**: Seamlessly integrates Gemini CLI (for deep analysis), Qwen CLI (for architecture and planning), and Codex CLI (for autonomous development).
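The four-layer memory hierarchy in component 4 can be sketched as nested `CLAUDE.md` files, each documenting its own directory with progressively narrower context. The directory names below are illustrative.

```shell
# Illustrative 4-layer CLAUDE.md hierarchy in a scratch directory
mkdir -p demo/src/auth/jwt
touch demo/CLAUDE.md \
      demo/src/CLAUDE.md \
      demo/src/auth/CLAUDE.md \
      demo/src/auth/jwt/CLAUDE.md

# One memory file per layer, from project-wide down to module-specific
find demo -name CLAUDE.md | sort
```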
---
## 3. Getting Started
This section provides a quick guide to installing, configuring, and running Claude Code Workflow.
### Prerequisites
Before you begin, ensure you have the following:
* **Claude Code**: The latest version installed.
* **Git**: For version control.
* **Text Editor**: VS Code, Vim, or your preferred editor.
* **Basic Knowledge**: Familiarity with Bash scripting, Markdown formatting, JSON structure, and Git workflow.
### Installation
For detailed installation instructions, please refer to the [INSTALL.md](INSTALL.md) guide.
#### 🚀 Quick One-Line Installation
**Windows (PowerShell):**
```powershell
Invoke-Expression (Invoke-WebRequest -Uri "https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1" -UseBasicParsing).Content
```
**Linux/macOS (Bash/Zsh):**
```bash
bash <(curl -fsSL https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.sh)
```
The installer provides an interactive menu for version selection (Latest Stable Release, Latest Development Version, or Specific Release Version).
#### ✅ Verify Installation
After installation, open **Claude Code** and check if the workflow commands are available by running:
```bash
/workflow:session:list
```
If slash commands like `/workflow:*` are recognized, the installation was successful.
### Configuration
CCW uses a **configuration-based tool control system** that makes external CLI tools optional, enabling progressive enhancement and graceful degradation when a tool is unavailable.
**Configuration File**: `~/.claude/workflows/tool-control.yaml`
**Optional CLI Tools** (for enhanced capabilities):
* **System Utilities**: `ripgrep (rg)` for fast code search, `jq` for JSON processing.
* **External AI Tools**: Gemini CLI, Codex CLI, Qwen Code. These need to be configured in `tool-control.yaml` after installation.
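A hypothetical sketch of what `~/.claude/workflows/tool-control.yaml` might contain. The keys are assumptions for illustration; consult the copy generated by your installation for the actual schema.

```yaml
# Illustrative only: enable/disable optional external tools per installation
tools:
  gemini:
    enabled: true
  qwen:
    enabled: false
  codex:
    enabled: true
  ripgrep:
    enabled: true
```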
### Running the Project (Quick Start)
Let's walk through a simple example: building a "Hello World" web application from scratch. For a more detailed tutorial, see the [Getting Started Guide](GETTING_STARTED.md).
#### Step 1: Create an Execution Plan (Automatically Starts a Session)
Tell CCW what you want to do. CCW will analyze your request and automatically generate a detailed, executable task plan.
```bash
/workflow:plan "Create a simple Express API that returns Hello World at the root path"
```
This command initiates a planning process including context gathering, AI agent analysis, and task generation.
#### Step 2: Execute the Plan
Once the plan is created, command the AI agents to start working.
```bash
/workflow:execute
```
CCW's agents (e.g., `@code-developer`) will execute tasks, create files, write code, and install dependencies.
#### Step 3: Check the Status
Check the progress of the current workflow at any time.
```bash
/workflow:status
```
This command shows the completion status of tasks, the currently executing task, and next steps.
---
## 4. Development Workflow
This section outlines the development processes, coding standards, testing practices, and guidelines for contributing to CCW.
### Development Setup
1. **Fork and Clone**: Fork the repository on GitHub and then clone your fork.
2. **Set Up Upstream Remote**: Add the upstream remote to keep your fork in sync.
3. **Create Development Branch**: Create a feature branch for your work.
4. **Install CCW for Testing**: Install your development version of CCW.
### How to Contribute
CCW welcomes contributions in the form of bug fixes, new features, documentation improvements, new commands, and new agents. Refer to the [CONTRIBUTING.md](CONTRIBUTING.md) for detailed instructions.
### Coding Standards
**General Principles**:
* Follow the project's core beliefs (simplicity, clear intent, pragmatic solutions).
* Ensure single responsibility per function/class.
* Avoid premature abstractions and clever tricks.
* Maintain backward compatibility.
* Make incremental progress with small, testable changes.
* Learn from existing code and follow existing style/patterns.
**Specific Standards**:
* **Bash Script Standards**: Use `set -euo pipefail`, function definitions, and a `main` function structure.
* **JSON Standards**: Use 2-space indentation, validate syntax, and include all required fields.
* **Markdown Standards**: Use clear headings, bullet points, code blocks, and proper emphasis.
* **File Organization**: Follow the established directory structure for agents, commands, skills, and workflows within `.claude/`.
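A minimal skeleton illustrating the Bash standards above: strict mode, small single-purpose functions, and a `main` entry point.

```shell
#!/usr/bin/env bash
# Skeleton following the project's Bash standards.
set -euo pipefail

greet() {
  local name="$1"
  printf 'Hello, %s\n' "$name"
}

main() {
  greet "${1:-world}"
}

main "$@"
```

`set -euo pipefail` makes the script exit on errors, unset variables, and pipeline failures, so bugs surface immediately instead of propagating.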
### Testing Guidelines
* **Manual Testing**: Test happy paths, error handling, and edge cases.
* **Integration Testing**: Verify how your changes interact with existing commands and workflows.
* **Testing Checklist**: Ensure commands execute without errors, error messages are clear, session state is preserved, and documentation is accurate.
### Submitting Changes
* **Commit Message Guidelines**: Follow the Conventional Commits specification (e.g., `feat(workflow): add new command`).
* **Pull Request Process**: Create a pull request on GitHub, fill out the PR template, link related issues, and address review comments.
---
## 5. Project Structure
The project follows a modular and organized structure to manage agents, commands, skills, and workflows effectively.
```
.
├── .claude/ # Internal Claude configurations, agents, commands, skills, and workflows
│ ├── agents/ # Definitions and roles of AI agents
│ ├── commands/ # Implementations of CCW slash commands
│ ├── scripts/ # Utility scripts for various tasks
│ ├── skills/ # Modular, reusable AI capabilities (e.g., command-guide)
│ ├── templates/ # Generic templates for different workflows
│ └── workflows/ # Workflow definitions, strategies, and related documentation
├── .git/ # Git repository data
├── .gemini/ # Gemini CLI tool configurations
├── .codex/ # Codex CLI tool configurations
├── ARCHITECTURE.md # High-level system architecture overview
├── CHANGELOG.md # Detailed history of project changes and releases
├── CLAUDE.md # Core development guidelines and philosophical principles
├── COMMAND_REFERENCE.md # Comprehensive reference for all CCW commands
├── COMMAND_SPEC.md # Detailed technical specifications for each command
├── CONTRIBUTING.md # Guidelines for contributing to the project
├── GETTING_STARTED.md # Quick start guide for new users
├── INSTALL.md # Instructions for installing CCW
├── LICENSE # Project's open-source license information
├── README.md # This overview documentation file
└── WORKFLOW_DIAGRAMS.md # Visual diagrams illustrating CCW workflows and architecture
```
---
## 6. Navigation
This section provides links to more detailed documentation for various aspects of Claude Code Workflow.
### 📖 Documentation
* [**Getting Started Guide**](GETTING_STARTED.md) - A 5-minute quick start tutorial for new users.
* [**Installation Guide**](INSTALL.md) - Detailed instructions on how to install CCW.
* [**Command Reference**](COMMAND_REFERENCE.md) - A complete list of all available CCW commands with brief descriptions.
* [**Command Specification**](COMMAND_SPEC.md) - Detailed technical specifications for every command.
* [**Architecture Overview**](ARCHITECTURE.md) - An in-depth look at the system's design and core components.
* [**Workflow Decision Guide**](WORKFLOW_DECISION_GUIDE.md) - An interactive flowchart to help choose the right commands and workflows.
* [**Changelog**](CHANGELOG.md) - A history of all notable changes and releases.
* [**Contributing Guide**](CONTRIBUTING.md) - Guidelines and instructions for contributing to the project.
* [**Examples**](EXAMPLES.md) - Real-world use cases and practical examples of CCW in action.
* [**FAQ**](FAQ.md) - Frequently asked questions and troubleshooting tips.
* [**Workflow Diagrams**](WORKFLOW_DIAGRAMS.md) - Visual representations of CCW's various workflows.
* [**Development Guidelines**](CLAUDE.md) - Core development principles and coding standards.
### 💡 Need Help? Use the Interactive Command Guide
CCW includes a built-in **command-guide skill** to help you discover and use commands effectively:
* **`CCW-help`**: Get interactive help and command recommendations.
* **`CCW-issue`**: Report bugs or request features with guided templates.
**Example Usage**:
```
User: "CCW-help"
→ Interactive menu with command search, recommendations, and documentation
User: "What's next after /workflow:plan?"
→ Recommends /workflow:execute, /workflow:action-plan-verify, with workflow patterns
User: "CCW-issue"
→ Guided template generation for bugs, features, or questions
```
---
## 🤝 Contributing & Support
* **Repository**: [GitHub - Claude-Code-Workflow](https://github.com/catlog22/Claude-Code-Workflow)
* **Issues**: Report bugs or request features on [GitHub Issues](https://github.com/catlog22/Claude-Code-Workflow/issues).
* **Discussions**: Join the [Community Forum](https://github.com/catlog22/Claude-Code-Workflow/discussions).
* **Contributing**: See [CONTRIBUTING.md](CONTRIBUTING.md) for contribution guidelines.
## 📄 License
This project is licensed under the **MIT License**. See the [LICENSE](LICENSE) file for details.