mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-02-08 02:14:08 +08:00
@@ -137,19 +137,44 @@ Break work into 3-5 logical implementation stages with:
- Dependencies on previous stages
- Estimated complexity and time requirements

### 2. Task JSON Generation (6-Field Schema + Artifacts)

Generate individual `.task/IMPL-*.json` files with:

#### Top-Level Fields

```json
{
  "id": "IMPL-N[.M]",
  "title": "Descriptive task name",
  "status": "pending|active|completed|blocked|container",
  "context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json"
}
```

**Field Descriptions**:
- `id`: Task identifier (format: `IMPL-N` or `IMPL-N.M` for subtasks, max 2 levels)
- `title`: Descriptive task name summarizing the work
- `status`: Task state - `pending` (not started), `active` (in progress), `completed` (done), `blocked` (waiting on dependencies), `container` (has subtasks, cannot be executed directly)
- `context_package_path`: Path to the smart context package containing project structure, dependencies, and the brainstorming artifacts catalog
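
As a sanity check, the top-level fields above can be validated with a small sketch. The `validateTaskTopLevel` helper and its error format are illustrative, not part of the workflow; only the field names and allowed `status` values come from the schema:

```javascript
// Allowed status values from the top-level schema above
const STATUSES = ["pending", "active", "completed", "blocked", "container"];
// IMPL-N or IMPL-N.M (max 2 levels)
const ID_PATTERN = /^IMPL-\d+(\.\d+)?$/;

// Hypothetical helper: returns a list of schema violations (empty = valid)
function validateTaskTopLevel(task) {
  const errors = [];
  if (!ID_PATTERN.test(task.id || "")) errors.push(`invalid id: ${task.id}`);
  if (!task.title) errors.push("missing title");
  if (!STATUSES.includes(task.status)) errors.push(`invalid status: ${task.status}`);
  if (!task.context_package_path) errors.push("missing context_package_path");
  return errors;
}
```

Note that a three-level id such as `IMPL-1.2.3` is rejected, matching the two-level limit above.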
#### Meta Object

```json
{
  "meta": {
    "type": "feature|bugfix|refactor|test-gen|test-fix|docs",
    "agent": "@code-developer|@action-planning-agent|@test-fix-agent|@universal-executor",
    "execution_group": "parallel-abc123|null"
  }
}
```

**Field Descriptions**:
- `type`: Task category - `feature` (new functionality), `bugfix` (fix defects), `refactor` (restructure code), `test-gen` (generate tests), `test-fix` (fix failing tests), `docs` (documentation)
- `agent`: Assigned agent for execution
- `execution_group`: Parallelization group ID (tasks with the same ID can run concurrently) or `null` for sequential tasks

#### Context Object

```json
{
  "context": {
    "requirements": [
      "Implement 3 features: [authentication, authorization, session management]",
@@ -162,43 +187,131 @@ Generate individual `.task/IMPL-*.json` files with:
      "5 files created: verify by ls src/auth/*.ts | wc -l = 5",
      "Test coverage >=80%: verify by npm test -- --coverage | grep auth"
    ],
    "parent": "IMPL-N",
    "depends_on": ["IMPL-N"],
    "inherited": {
      "from": "IMPL-N",
      "context": ["Authentication system design completed", "JWT strategy defined"]
    },
    "shared_context": {
      "tech_stack": ["Node.js", "TypeScript", "Express"],
      "auth_strategy": "JWT with refresh tokens",
      "conventions": ["Follow existing auth patterns in src/auth/legacy/"]
    },
    "artifacts": [
      {
        "type": "synthesis_specification|topic_framework|individual_role_analysis",
        "source": "brainstorm_clarification|brainstorm_framework|brainstorm_roles",
        "path": "{from artifacts_inventory}",
        "priority": "highest|high|medium|low",
        "usage": "Architecture decisions and API specifications",
        "contains": "role_specific_requirements_and_design"
      }
    ]
  }
}
```

**Field Descriptions**:
- `requirements`: **QUANTIFIED** implementation requirements (MUST include explicit counts and enumerated lists, e.g., "5 files: [list]")
- `focus_paths`: Target directories/files (concrete paths without wildcards)
- `acceptance`: **MEASURABLE** acceptance criteria (MUST include verification commands, e.g., "verify by ls ... | wc -l = N")
- `parent`: Parent task ID for subtasks (establishes the container/subtask hierarchy)
- `depends_on`: Prerequisite task IDs that must complete before this task starts
- `inherited`: Context, patterns, and dependencies passed from the parent task
- `shared_context`: Tech stack, conventions, and architectural strategies for the task
- `artifacts`: Referenced brainstorming outputs with detailed metadata

#### Flow Control Object

**IMPORTANT**: The `pre_analysis` examples below are **reference templates only**. The agent MUST dynamically select, adapt, and expand steps based on actual task requirements. Apply the principle of **"举一反三"** (draw inferences from examples) - use these patterns as inspiration to create task-specific analysis steps.

**Dynamic Step Selection Guidelines**:
- **Context Loading**: Always include context package and role analysis loading
- **Architecture Analysis**: Add module structure analysis for complex projects
- **Pattern Discovery**: Use CLI tools (gemini/qwen/bash) based on task complexity and available tools
- **Tech-Specific Analysis**: Add language/framework-specific searches for specialized tasks
- **MCP Integration**: Utilize MCP tools when available for enhanced context

```json
{
  "flow_control": {
    "pre_analysis": [
      // === REQUIRED: Context Package Loading (Always Include) ===
      {
        "step": "load_context_package",
        "action": "Load context package for artifact paths and smart context",
        "commands": ["Read({{context_package_path}})"],
        "output_to": "context_package",
        "on_error": "fail"
      },
      {
        "step": "load_role_analysis_artifacts",
        "action": "Load role analyses from context-package.json",
        "commands": [
          "Read({{context_package_path}})",
          "Extract(brainstorm_artifacts.role_analyses[].files[].path)",
          "Read(each extracted path)"
        ],
        "output_to": "role_analysis_artifacts",
        "on_error": "skip_optional"
      },

      // === OPTIONAL: Select and adapt based on task needs ===

      // Pattern: Project structure analysis
      {
        "step": "analyze_project_architecture",
        "commands": ["bash(~/.claude/scripts/get_modules_by_depth.sh)"],
        "output_to": "project_architecture"
      },

      // Pattern: Local search (bash/rg/find)
      {
        "step": "search_existing_patterns",
        "commands": [
          "bash(rg '[pattern]' --type [lang] -n --max-count [N])",
          "bash(find . -name '[pattern]' -type f | head -[N])"
        ],
        "output_to": "search_results"
      },

      // Pattern: Gemini CLI deep analysis
      {
        "step": "gemini_analyze_[aspect]",
        "command": "bash(cd [path] && gemini -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY')",
        "output_to": "analysis_result"
      },

      // Pattern: Qwen CLI analysis (fallback/alternative)
      {
        "step": "qwen_analyze_[aspect]",
        "command": "bash(cd [path] && qwen -p '[similar to gemini pattern]')",
        "output_to": "analysis_result"
      },

      // Pattern: MCP tools
      {
        "step": "mcp_search_[target]",
        "command": "mcp__[tool]__[function](parameters)",
        "output_to": "mcp_results"
      }
    ],
    "implementation_approach": [
      // === DEFAULT MODE: Agent Execution (no command field) ===
      {
        "step": 1,
        "title": "Load and analyze role analyses",
        "description": "Load role analysis files and extract quantified requirements",
        "modification_points": [
          "Load N role analysis files: [list]",
          "Extract M requirements from role analyses",
          "Parse K architecture decisions"
        ],
        "logic_flow": [
          "Read role analyses from artifacts inventory",
          "Parse architecture decisions",
          "Extract implementation requirements",
          "Build consolidated requirements list"
        ],
        "depends_on": [],
@@ -207,21 +320,33 @@ Generate individual `.task/IMPL-*.json` files with:
      {
        "step": 2,
        "title": "Implement following specification",
        "description": "Implement features following consolidated role analyses",
        "modification_points": [
          "Create N new files: [list with line counts]",
          "Modify M functions: [func() in file lines X-Y]",
          "Implement K core features: [list]"
        ],
        "logic_flow": [
          "Apply requirements from [synthesis_requirements]",
          "Implement features across new files",
          "Modify existing functions",
          "Write test cases covering all features",
          "Validate against acceptance criteria"
        ],
        "depends_on": [1],
        "output": "implementation"
      },

      // === CLI MODE: Command Execution (optional command field) ===
      {
        "step": 3,
        "title": "Execute implementation using CLI tool",
        "description": "Use Codex/Gemini for complex autonomous execution",
        "command": "bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)",
        "modification_points": ["[Same as default mode]"],
        "logic_flow": ["[Same as default mode]"],
        "depends_on": [1, 2],
        "output": "cli_implementation"
      }
    ],
    "target_files": [
@@ -237,6 +362,72 @@ Generate individual `.task/IMPL-*.json` files with:
  }
}
```

**Field Descriptions**:
- `pre_analysis`: Context loading and preparation steps (executed sequentially before implementation)
- `implementation_approach`: Implementation steps with dependency management (array of step objects)
- `target_files`: Specific files/functions/lines to modify (format: `file:function:lines` for existing files, `file` for new files)
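
The `target_files` format can be parsed mechanically. A minimal sketch, assuming the `file:function:lines` shape described above (the `parseTargetFile` name is hypothetical):

```javascript
// Split a target_files entry into its parts. A bare "file" marks a new
// file to create; "file:function:lines" points into an existing file.
function parseTargetFile(entry) {
  const parts = entry.split(":");
  if (parts.length === 1) return { file: parts[0], isNew: true };
  const [file, func, lines] = parts;
  return { file, function: func, lines, isNew: false };
}
```

For example, `"src/users.service.ts:validateUser:45-60"` yields the file, the function name, and the line range of an existing function to modify.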

**Implementation Approach Execution Modes**:

The `implementation_approach` supports **two execution modes** based on the presence of the `command` field:

1. **Default Mode (Agent Execution)** - `command` field **omitted**:
   - Agent interprets `modification_points` and `logic_flow` autonomously
   - Direct agent execution with full context awareness
   - No external tool overhead
   - **Use for**: Standard implementation tasks where agent capability is sufficient
   - **Required fields**: `step`, `title`, `description`, `modification_points`, `logic_flow`, `depends_on`, `output`

2. **CLI Mode (Command Execution)** - `command` field **included**:
   - Specified command executes the step directly
   - Leverages specialized CLI tools (codex/gemini/qwen) for complex reasoning
   - **Use for**: Large-scale features, complex refactoring, or when the user explicitly requests CLI tool usage
   - **Required fields**: Same as default mode **PLUS** `command`
   - **Command patterns**:
     - `bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)`
     - `bash(codex --full-auto exec '[task]' resume --last --skip-git-repo-check -s danger-full-access)` (multi-step)
     - `bash(cd [path] && gemini -p '[prompt]' --approval-mode yolo)` (write mode)

**Mode Selection Strategy**:
- **Default to agent execution** for most tasks
- **Use CLI mode** when:
  - The user explicitly requests a CLI tool (codex/gemini/qwen)
  - The task requires multi-step autonomous reasoning beyond agent capability
  - Complex refactoring needs specialized tool analysis
  - Building on previous CLI execution context (use `resume --last`)

**Key Principle**: The `command` field is **optional**. The agent must decide based on task complexity and user preference.
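
The mode rule reduces to a one-line check. A minimal sketch (the `executionMode` name is illustrative):

```javascript
// A step runs in CLI mode only when it carries a "command" field;
// otherwise the agent executes it directly.
function executionMode(step) {
  return Object.prototype.hasOwnProperty.call(step, "command") ? "cli" : "agent";
}
```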

**Pre-Analysis Step Selection Guide (举一反三 Principle)**:

The examples above demonstrate **patterns**, not fixed requirements. The agent MUST:

1. **Always Include** (Required):
   - `load_context_package` - Essential for all tasks
   - `load_role_analysis_artifacts` - Critical for accessing brainstorming insights

2. **Selectively Include Based on Task Type**:
   - **Architecture tasks**: Project structure + Gemini architecture analysis
   - **Refactoring tasks**: Gemini execution flow tracing + code quality analysis
   - **Frontend tasks**: React/Vue component searches + UI pattern analysis
   - **Backend tasks**: Database schema + API endpoint searches
   - **Security tasks**: Vulnerability scans + security pattern analysis
   - **Performance tasks**: Bottleneck identification + profiling data

3. **Tool Selection Strategy**:
   - **Gemini CLI**: Deep analysis (architecture, execution flow, patterns)
   - **Qwen CLI**: Fallback or code quality analysis
   - **Bash/rg/find**: Quick pattern matching and file discovery
   - **MCP tools**: Semantic search and external research

4. **Command Composition Patterns**:
   - **Single command**: `bash([simple_search])`
   - **Multiple commands**: `["bash([cmd1])", "bash([cmd2])"]`
   - **CLI analysis**: `bash(cd [path] && gemini -p '[prompt]')`
   - **MCP integration**: `mcp__[tool]__[function]([params])`

**Key Principle**: Examples show **structure patterns**, not specific implementations. The agent must create task-appropriate steps dynamically.

**Artifact Mapping**:
- Use `artifacts_inventory` from the context package
- Highest priority: synthesis_specification

@@ -102,6 +102,8 @@ if (!memory.has("README.md")) Read(README.md)

Execute all 3 tracks in parallel for comprehensive coverage.

**Note**: Historical archive analysis (querying `.workflow/archives/manifest.json`) is optional and should be performed if the manifest exists. Inject findings into `conflict_detection.historical_conflicts[]`.

#### Track 1: Reference Documentation

Extract from Phase 0 loaded docs:

.claude/commands/memory/docs-full-cli.md (new file, 472 lines)
@@ -0,0 +1,472 @@
---
name: docs-full-cli
description: Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback; <20 modules uses direct parallel execution
argument-hint: "[path] [--tool <gemini|qwen|codex>]"
---

# Full Documentation Generation - CLI Mode (/memory:docs-full-cli)

## Overview

Orchestrates project-wide documentation generation using CLI-based execution with batched agents and automatic tool fallback.

**Parameters**:
- `path`: Target directory (default: current directory)
- `--tool <gemini|qwen|codex>`: Primary tool (default: gemini)

**Execution Flow**: Discovery → Plan Presentation → Execution → Verification

## 3-Layer Architecture & Auto-Strategy Selection

### Layer Definition & Strategy Assignment

| Layer | Depth | Strategy | Purpose | Context Pattern |
|-------|-------|----------|---------|-----------------|
| **Layer 3** (Deepest) | ≥3 | `full` | Generate docs for all subdirectories with code | `@**/*` (all files) |
| **Layer 2** (Middle) | 1-2 | `single` | Current dir + child docs | `@*/API.md @*/README.md @*.{ts,tsx,js,...}` |
| **Layer 1** (Top) | 0 | `single` | Current dir + child docs | `@*/API.md @*/README.md @*.{ts,tsx,js,...}` |

**Generation Direction**: Layer 3 → Layer 2 → Layer 1 (bottom-up dependency flow)

**Strategy Auto-Selection**: Strategies are automatically determined by directory depth - no user configuration needed.
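
The auto-selection rule in the table can be sketched as a pure function of depth (the `layerForDepth` name is illustrative):

```javascript
// depth >= 3 → Layer 3, "full" strategy; depth 1-2 → Layer 2, "single";
// depth 0 → Layer 1, "single"
function layerForDepth(depth) {
  if (depth >= 3) return { layer: 3, strategy: "full" };
  if (depth >= 1) return { layer: 2, strategy: "single" };
  return { layer: 1, strategy: "single" };
}
```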

### Strategy Details

#### Full Strategy (Layer 3 Only)
- **Use Case**: Deepest directories with comprehensive file coverage
- **Behavior**: Generates API.md + README.md for the current directory AND subdirectories containing code
- **Context**: All files in the current directory tree (`@**/*`)
- **Output**: `.workflow/docs/{project_name}/{path}/API.md` + `README.md`
- **Benefits**: Creates foundation documentation for upper layers to reference

#### Single Strategy (Layers 1-2)
- **Use Case**: Upper layers that aggregate from existing documentation
- **Behavior**: Generates API.md + README.md only in the current directory
- **Context**: Direct children's docs + current directory code files
- **Output**: `.workflow/docs/{project_name}/{path}/API.md` + `README.md`
- **Benefits**: Minimal context consumption, clear layer separation

### Example Flow
```
src/auth/handlers/ (depth 3) → FULL STRATEGY
  CONTEXT: @**/* (all files in handlers/ and subdirs)
  GENERATES: .workflow/docs/project/src/auth/handlers/{API.md,README.md} + subdirs
    ↓
src/auth/ (depth 2) → SINGLE STRATEGY
  CONTEXT: @*/API.md @*/README.md @*.ts (handlers docs + current code)
  GENERATES: .workflow/docs/project/src/auth/{API.md,README.md} only
    ↓
src/ (depth 1) → SINGLE STRATEGY
  CONTEXT: @*/API.md @*/README.md (auth docs, utils docs)
  GENERATES: .workflow/docs/project/src/{API.md,README.md} only
    ↓
./ (depth 0) → SINGLE STRATEGY
  CONTEXT: @*/API.md @*/README.md (src docs, tests docs)
  GENERATES: .workflow/docs/project/{API.md,README.md} only
```

## Core Execution Rules

1. **Analyze First**: Module discovery + folder classification before generation
2. **Wait for Approval**: Present the plan; no execution without user confirmation
3. **Execution Strategy**:
   - **<20 modules**: Direct parallel execution (max 4 concurrent per layer)
   - **≥20 modules**: Agent batch processing (4 modules/agent, 73% overhead reduction)
4. **Tool Fallback**: Auto-retry with fallback tools on failure
5. **Layer Sequential**: Process layers 3→2→1 (bottom-up), parallel batches within a layer
6. **Safety Check**: Verify only docs files are modified in .workflow/docs/
7. **Layer-based Grouping**: Group modules by LAYER (not depth) for execution

## Tool Fallback Hierarchy

```javascript
--tool gemini → [gemini, qwen, codex]  // default
--tool qwen   → [qwen, gemini, codex]
--tool codex  → [codex, gemini, qwen]
```
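
The mapping above can be sketched as a lookup (the `constructToolOrder` name mirrors the pseudocode in Phase 3B but the body here is an illustrative assumption):

```javascript
// Return the tool attempt order for a chosen primary tool.
// Note the codex row falls back to gemini before qwen.
function constructToolOrder(primary) {
  const orders = {
    gemini: ["gemini", "qwen", "codex"],
    qwen: ["qwen", "gemini", "codex"],
    codex: ["codex", "gemini", "qwen"],
  };
  return orders[primary] || orders.gemini; // gemini is the default primary
}
```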

**Trigger**: Non-zero exit code from the generation script

| Tool   | Best For                     | Fallback To    |
|--------|------------------------------|----------------|
| gemini | Documentation, patterns      | qwen → codex   |
| qwen   | Architecture, system design  | gemini → codex |
| codex  | Implementation, code quality | gemini → qwen  |

## Execution Phases

### Phase 1: Discovery & Analysis

```javascript
// Get project metadata
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});

// Get module structure with classification
Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});

// OR with path parameter
Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});
```

**Parse output** `depth:N|path:<PATH>|type:<code|navigation>|...` to extract module paths, types, and count.
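
A minimal sketch of that parse step, assuming the `key:value|key:value|...` line shape shown above (the script's exact field set may vary, and the `parseModuleLine` name is illustrative):

```javascript
// Parse one line of classify-folders.sh output into a module record.
function parseModuleLine(line) {
  const fields = {};
  for (const pair of line.split("|")) {
    const idx = pair.indexOf(":");
    if (idx > 0) fields[pair.slice(0, idx)] = pair.slice(idx + 1);
  }
  return { depth: Number(fields.depth), path: fields.path, type: fields.type };
}
```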

**Smart filter**: Auto-detect and skip tests/build/config/vendor directories based on the project tech stack.

### Phase 2: Plan Presentation

**For <20 modules**:
```
Documentation Generation Plan:
Tool: gemini (fallback: qwen → codex)
Total: 7 modules
Execution: Direct parallel (< 20 modules threshold)
Project: myproject
Output: .workflow/docs/myproject/

Will generate docs for:
- ./core/interfaces (12 files, type: code) - depth 2 [Layer 2] - single strategy
- ./core (22 files, type: code) - depth 1 [Layer 2] - single strategy
- ./models (9 files, type: code) - depth 1 [Layer 2] - single strategy
- ./utils (12 files, type: navigation) - depth 1 [Layer 2] - single strategy
- . (5 files, type: code) - depth 0 [Layer 1] - single strategy

Documentation Strategy (Auto-Selected):
- Layer 2 (depth 1-2): API.md + README.md (current dir only, reference child docs)
- Layer 1 (depth 0): API.md + README.md (current dir only, reference child docs)

Output Structure:
- Code folders: API.md + README.md
- Navigation folders: README.md only

Auto-skipped: ./tests, __pycache__, node_modules (15 paths)
Execution order: Layer 2 → Layer 1
Estimated time: ~5-10 minutes

Confirm execution? (y/n)
```

**For ≥20 modules**:
```
Documentation Generation Plan:
Tool: gemini (fallback: qwen → codex)
Total: 31 modules
Execution: Agent batch processing (4 modules/agent)
Project: myproject
Output: .workflow/docs/myproject/

Will generate docs for:
- ./src/features/auth (12 files, type: code) - depth 3 [Layer 3] - full strategy
- ./.claude/commands/cli (6 files, type: code) - depth 3 [Layer 3] - full strategy
- ./src/utils (8 files, type: code) - depth 2 [Layer 2] - single strategy
...

Documentation Strategy (Auto-Selected):
- Layer 3 (depth ≥3): API.md + README.md (all subdirs with code)
- Layer 2 (depth 1-2): API.md + README.md (current dir only)
- Layer 1 (depth 0): API.md + README.md (current dir only)

Output Structure:
- Code folders: API.md + README.md
- Navigation folders: README.md only

Auto-skipped: ./tests, __pycache__, node_modules (15 paths)
Execution order: Layer 3 → Layer 2 → Layer 1

Agent allocation (by LAYER):
- Layer 3 (14 modules, depth ≥3): 4 agents [4, 4, 4, 2]
- Layer 2 (15 modules, depth 1-2): 4 agents [4, 4, 4, 3]
- Layer 1 (2 modules, depth 0): 1 agent [2]

Estimated time: ~15-25 minutes

Confirm execution? (y/n)
```
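
The agent allocation above is plain chunking. A sketch of the `batch_modules` helper the pseudocode below relies on (the body is an illustrative assumption):

```javascript
// Split a layer's modules into batches of `size` (default 4),
// so 14 modules become batches of [4, 4, 4, 2].
function batchModules(modules, size = 4) {
  const batches = [];
  for (let i = 0; i < modules.length; i += size) {
    batches.push(modules.slice(i, i + size));
  }
  return batches;
}
```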

### Phase 3A: Direct Execution (<20 modules)

**Strategy**: Parallel execution within a layer (max 4 concurrent), no agent overhead.

**CRITICAL**: All Bash commands use `run_in_background: false` for synchronous execution.

```javascript
let project_name = detect_project_name();

for (let layer of [3, 2, 1]) {
  if (modules_by_layer[layer].length === 0) continue;
  let batches = batch_modules(modules_by_layer[layer], 4);

  for (let batch of batches) {
    let parallel_tasks = batch.map(module => {
      return async () => {
        let strategy = module.depth >= 3 ? "full" : "single";
        for (let tool of tool_order) {
          Bash({
            command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "${strategy}" "." "${project_name}" "${tool}"`,
            run_in_background: false
          });
          if (bash_result.exit_code === 0) {
            report(`✅ ${module.path} (Layer ${layer}) docs generated with ${tool}`);
            return true;
          }
        }
        report(`❌ FAILED: ${module.path} (Layer ${layer}) failed all tools`);
        return false;
      };
    });
    await Promise.all(parallel_tasks.map(task => task()));
  }
}
```

### Phase 3B: Agent Batch Execution (≥20 modules)

**Strategy**: Batch modules into groups of 4, spawn memory-bridge agents per batch.

```javascript
// Group modules by LAYER and batch within each layer
let modules_by_layer = group_by_layer(module_list);
let tool_order = construct_tool_order(primary_tool);
let project_name = detect_project_name();

for (let layer of [3, 2, 1]) {
  if (modules_by_layer[layer].length === 0) continue;

  let batches = batch_modules(modules_by_layer[layer], 4);
  let worker_tasks = [];

  for (let batch of batches) {
    worker_tasks.push(
      Task(
        subagent_type="memory-bridge",
        description=`Generate docs for ${batch.length} modules in Layer ${layer}`,
        prompt=generate_batch_worker_prompt(batch, tool_order, layer, project_name)
      )
    );
  }

  await parallel_execute(worker_tasks);
}
```

**Batch Worker Prompt Template**:
```
PURPOSE: Generate documentation for assigned modules with tool fallback

TASK: Generate API.md + README.md for assigned modules using the specified strategies.

PROJECT: {{project_name}}
OUTPUT: .workflow/docs/{{project_name}}/

MODULES:
{{module_path_1}} (strategy: {{strategy_1}}, type: {{folder_type_1}})
{{module_path_2}} (strategy: {{strategy_2}}, type: {{folder_type_2}})
...

TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}

EXECUTION SCRIPT: ~/.claude/scripts/generate_module_docs.sh
- Accepts strategy parameter: full | single
- Accepts folder type detection: code | navigation
- Tool execution via direct CLI commands (gemini/qwen/codex)
- Output path: .workflow/docs/{{project_name}}/{module_path}/

EXECUTION FLOW (for each module):
1. Tool fallback loop (exit on first success):
   for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
     Bash({
       command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "{{strategy}}" "." "{{project_name}}" "${tool}"`,
       run_in_background: false
     })
     exit_code=$?

     if [ $exit_code -eq 0 ]; then
       report "✅ {{module_path}} docs generated with $tool"
       break
     else
       report "⚠️ {{module_path}} failed with $tool, trying next..."
       continue
     fi
   done

2. Handle complete failure (all tools failed):
   if [ $exit_code -ne 0 ]; then
     report "❌ FAILED: {{module_path}} - all tools exhausted"
     # Continue to next module (do not abort batch)
   fi

FOLDER TYPE HANDLING:
- code: Generate API.md + README.md
- navigation: Generate README.md only

FAILURE HANDLING:
- Module-level isolation: One module's failure does not affect others
- Exit code detection: A non-zero exit code triggers the next tool
- Exhaustion reporting: Log modules where all tools failed
- Batch continuation: Always process remaining modules

REPORTING FORMAT:
Per-module status:
✅ path/to/module docs generated with {tool}
⚠️ path/to/module failed with {tool}, trying next...
❌ FAILED: path/to/module - all tools exhausted
```
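
Filling the MODULES section of that template is simple string rendering. A hypothetical sketch of that part of `generate_batch_worker_prompt` (the real builder also fills the project name and tool order; `renderModuleLines` is an invented name):

```javascript
// Render one "path (strategy: ..., type: ...)" line per module,
// matching the MODULES block of the template above.
function renderModuleLines(batch) {
  return batch
    .map(m => `${m.path} (strategy: ${m.strategy}, type: ${m.type})`)
    .join("\n");
}
```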
|
||||
|
||||
### Phase 4: Project-Level Documentation
|
||||
|
||||
**After all module documentation is generated, create project-level documentation files.**
|
||||
|
||||
```javascript
|
||||
let project_name = detect_project_name();
|
||||
let project_root = get_project_root();
|
||||
|
||||
// Step 1: Generate Project README
|
||||
report("Generating project README.md...");
|
||||
for (let tool of tool_order) {
|
||||
Bash({
|
||||
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-readme" "." "${project_name}" "${tool}"`,
|
||||
run_in_background: false
|
||||
});
|
||||
if (bash_result.exit_code === 0) {
|
||||
report(`✅ Project README generated with ${tool}`);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
// Step 2: Generate Architecture & Examples
|
||||
report("Generating ARCHITECTURE.md and EXAMPLES.md...");
|
||||
for (let tool of tool_order) {
|
||||
Bash({
|
||||
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-architecture" "." "${project_name}" "${tool}"`,
|
||||
run_in_background: false
|
||||
});
|
||||
if (bash_result.exit_code === 0) {
|
||||
report(`✅ Architecture docs generated with ${tool}`);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
// Step 3: Generate HTTP API documentation (if API routes detected)
|
||||
Bash({command: 'rg "router\\.|@Get|@Post" -g "*.{ts,js,py}" 2>/dev/null && echo "API_FOUND" || echo "NO_API"', run_in_background: false});
|
||||
if (bash_result.stdout.includes("API_FOUND")) {
|
||||
report("Generating HTTP API documentation...");
|
||||
for (let tool of tool_order) {
|
||||
Bash({
|
||||
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "http-api" "." "${project_name}" "${tool}"`,
|
||||
run_in_background: false
|
||||
});
|
||||
if (bash_result.exit_code === 0) {
|
||||
report(`✅ HTTP API docs generated with ${tool}`);
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Expected Output**:
|
||||
```
|
||||
Project-Level Documentation:
|
||||
✅ README.md (project root overview)
|
||||
✅ ARCHITECTURE.md (system design)
|
||||
✅ EXAMPLES.md (usage examples)
|
||||
✅ api/README.md (HTTP API reference) [optional]
|
||||
```
|
||||
|
||||
### Phase 5: Verification

```javascript
// Check documentation files created
Bash({command: 'find .workflow/docs -type f -name "*.md" 2>/dev/null | wc -l', run_in_background: false});

// Display structure
Bash({command: 'tree -L 3 .workflow/docs/', run_in_background: false});
```

**Result Summary**:
```
Documentation Generation Summary:
Total: 31 | Success: 29 | Failed: 2
Tool usage: gemini: 25, qwen: 4, codex: 0
Failed: path1, path2

Generated documentation:
.workflow/docs/myproject/
├── src/
│   ├── auth/
│   │   ├── API.md
│   │   └── README.md
│   └── utils/
│       └── README.md
└── README.md
```

## Error Handling

**Batch Worker**: Tool fallback per module, batch isolation, clear status reporting
**Coordinator**: Invalid path abort, user decline handling, verification with cleanup
**Fallback Triggers**: Non-zero exit code, script timeout, unexpected output

## Output Structure

```
.workflow/docs/{project_name}/
├── src/                       # Mirrors source structure
│   ├── modules/
│   │   ├── README.md          # Navigation
│   │   ├── auth/
│   │   │   ├── API.md         # API signatures
│   │   │   ├── README.md      # Module docs
│   │   │   └── middleware/
│   │   │       ├── API.md
│   │   │       └── README.md
│   │   └── api/
│   │       ├── API.md
│   │       └── README.md
│   └── utils/
│       └── README.md
├── lib/
│   └── core/
│       ├── API.md
│       └── README.md
├── README.md                  # ✨ Project root overview (auto-generated)
├── ARCHITECTURE.md            # ✨ System design (auto-generated)
├── EXAMPLES.md                # ✨ Usage examples (auto-generated)
└── api/                       # ✨ Optional (auto-generated if HTTP API detected)
    └── README.md              # HTTP API reference
```

## Usage Examples

```bash
# Full project documentation generation
/memory:docs-full-cli

# Target a specific directory
/memory:docs-full-cli src/features/auth
/memory:docs-full-cli .claude

# Use a specific tool
/memory:docs-full-cli --tool qwen
/memory:docs-full-cli src --tool qwen
```

## Key Advantages

- **Efficiency**: 30 modules → 8 agents (73% reduction from sequential)
- **Resilience**: 3-tier tool fallback per module
- **Performance**: Parallel batches, no concurrency limits
- **Observability**: Per-module tool usage, batch-level metrics
- **Automation**: Zero configuration - strategy auto-selected by directory depth
- **Path Mirroring**: Clear 1:1 mapping between source and documentation structure

## Template Reference

Templates used from `~/.claude/workflows/cli-templates/prompts/documentation/`:
- `api.txt`: Code API documentation (Part A: Code API, Part B: HTTP API)
- `module-readme.txt`: Module purpose, usage, dependencies
- `folder-navigation.txt`: Navigation README for folders with subdirectories

## Related Commands

- `/memory:docs` - Agent-based documentation planning workflow
- `/memory:docs-related-cli` - Update docs for changed modules only
- `/workflow:execute` - Execute documentation tasks (when using agent mode)

.claude/commands/memory/docs-related-cli.md (new file, 386 lines)
@@ -0,0 +1,386 @@
---
name: docs-related-cli
description: Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel
argument-hint: "[--tool <gemini|qwen|codex>]"
---

# Related Documentation Generation - CLI Mode (/memory:docs-related-cli)

## Overview

Orchestrates context-aware documentation generation/update for changed modules using CLI-based execution with batched agents and automatic tool fallback (gemini→qwen→codex).

**Parameters**:
- `--tool <gemini|qwen|codex>`: Primary tool (default: gemini)

**Execution Flow**:
1. Change Detection → 2. Plan Presentation → 3. Batched Execution → 4. Verification

## Core Rules

1. **Detect Changes First**: Use git diff to identify affected modules
2. **Wait for Approval**: Present plan, no execution without user confirmation
3. **Execution Strategy**:
   - **<15 modules**: Direct parallel execution (max 4 concurrent per depth, no agent overhead)
   - **≥15 modules**: Agent batch processing (4 modules/agent, 73% overhead reduction)
4. **Tool Fallback**: Auto-retry with fallback tools on failure
5. **Depth Sequential**: Process depths N→0, parallel batches within depth (both modes)
6. **Related Mode**: Generate/update only changed modules and their parent contexts
7. **Single Strategy**: Always use `single` strategy (incremental update)

## Tool Fallback Hierarchy

```javascript
--tool gemini → [gemini, qwen, codex]  // default
--tool qwen   → [qwen, gemini, codex]
--tool codex  → [codex, gemini, qwen]
```

**Trigger**: Non-zero exit code from generation script

| Tool   | Best For                     | Fallback To    |
|--------|------------------------------|----------------|
| gemini | Documentation, patterns      | qwen → codex   |
| qwen   | Architecture, system design  | gemini → codex |
| codex  | Implementation, code quality | gemini → qwen  |

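The fallback ordering above can be checked with a small helper. This is a sketch only: `constructToolOrder` is a hypothetical name for the `construct_tool_order` step referenced later in the coordinator pseudocode.

```javascript
// Hypothetical sketch of construct_tool_order: the primary tool first,
// then the remaining tools in the default gemini → qwen → codex order.
const ALL_TOOLS = ["gemini", "qwen", "codex"];

function constructToolOrder(primary) {
  return [primary, ...ALL_TOOLS.filter(t => t !== primary)];
}

// constructToolOrder("codex") → ["codex", "gemini", "qwen"]
```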
## Phase 1: Change Detection & Analysis

```javascript
// Get project metadata
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});

// Detect changed modules
Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});

// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
```

**Parse output** `depth:N|path:<PATH>|change:<TYPE>|type:<code|navigation>` to extract affected modules.

**Smart filter**: Auto-detect and skip tests/build/config/vendor based on project tech stack (Node.js/Python/Go/Rust/etc).

**Fallback**: If no changes detected, use recent modules (first 10 by depth).

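Parsing the `depth:N|path:<PATH>|...` records above can be sketched as follows; the field names come from the documented output format, while the function name is illustrative.

```javascript
// Parse one detect_changed_modules.sh record, e.g.
// "depth:2|path:./src/api|change:modified|type:code"
function parseModuleRecord(line) {
  const fields = {};
  for (const part of line.split("|")) {
    const i = part.indexOf(":");
    fields[part.slice(0, i)] = part.slice(i + 1);
  }
  return {
    depth: Number(fields.depth),
    path: fields.path,
    change: fields.change,
    type: fields.type
  };
}
```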
## Phase 2: Plan Presentation

**Present filtered plan**:
```
Related Documentation Generation Plan:
Tool: gemini (fallback: qwen → codex)
Changed: 4 modules | Batching: 4 modules/agent
Project: myproject
Output: .workflow/docs/myproject/

Will generate/update docs for:
- ./src/api/auth (5 files, type: code) [new module]
- ./src/api (12 files, type: code) [parent of changed auth/]
- ./src (8 files, type: code) [parent context]
- . (14 files, type: code) [root level]

Documentation Strategy:
- Strategy: single (all modules - incremental update)
- Output: API.md + README.md (code folders), README.md only (navigation folders)
- Context: Current dir code + child docs

Auto-skipped (12 paths):
- Tests: ./src/api/auth.test.ts (8 paths)
- Config: tsconfig.json (3 paths)
- Other: node_modules (1 path)

Agent allocation:
- Depth 3 (1 module): 1 agent [1]
- Depth 2 (1 module): 1 agent [1]
- Depth 1 (1 module): 1 agent [1]
- Depth 0 (1 module): 1 agent [1]

Estimated time: ~5-10 minutes

Confirm execution? (y/n)
```

**Decision logic**:
- User confirms "y": Proceed with execution
- User declines "n": Abort, no changes
- <15 modules: Direct execution
- ≥15 modules: Agent batch execution

## Phase 3A: Direct Execution (<15 modules)

**Strategy**: Parallel execution within depth (max 4 concurrent), no agent overhead.

**CRITICAL**: All Bash commands use `run_in_background: false` for synchronous execution.

```javascript
let project_name = detect_project_name();

for (let depth of sorted_depths.reverse()) { // N → 0
  let batches = batch_modules(modules_by_depth[depth], 4);

  for (let batch of batches) {
    let parallel_tasks = batch.map(module => {
      return async () => {
        for (let tool of tool_order) {
          Bash({
            command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "single" "." "${project_name}" "${tool}"`,
            run_in_background: false
          });
          if (bash_result.exit_code === 0) {
            report(`✅ ${module.path} docs generated with ${tool}`);
            return true;
          }
        }
        report(`❌ FAILED: ${module.path} failed all tools`);
        return false;
      };
    });

    await Promise.all(parallel_tasks.map(task => task()));
  }
}
```

## Phase 3B: Agent Batch Execution (≥15 modules)

### Batching Strategy

```javascript
// Batch modules into groups of 4
function batch_modules(modules, batch_size = 4) {
  let batches = [];
  for (let i = 0; i < modules.length; i += batch_size) {
    batches.push(modules.slice(i, i + batch_size));
  }
  return batches;
}
// Examples: 10 → [4, 4, 2] | 8 → [4, 4] | 3 → [3]
```

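Since the batching function is plain JavaScript, the documented examples can be verified directly:

```javascript
// Same batching logic as above, in runnable form.
function batchModules(modules, batchSize = 4) {
  const batches = [];
  for (let i = 0; i < modules.length; i += batchSize) {
    batches.push(modules.slice(i, i + batchSize));
  }
  return batches;
}

// 10 modules split into batch sizes [4, 4, 2]
const sizes = batchModules(Array.from({ length: 10 }, (_, i) => `m${i}`)).map(b => b.length);
```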
### Coordinator Orchestration

```javascript
let modules_by_depth = group_by_depth(changed_modules);
let tool_order = construct_tool_order(primary_tool);
let project_name = detect_project_name();

for (let depth of sorted_depths.reverse()) { // N → 0
  let batches = batch_modules(modules_by_depth[depth], 4);
  let worker_tasks = [];

  for (let batch of batches) {
    worker_tasks.push(
      Task(
        subagent_type="memory-bridge",
        description=`Generate docs for ${batch.length} modules at depth ${depth}`,
        prompt=generate_batch_worker_prompt(batch, tool_order, depth, project_name, "related")
      )
    );
  }

  await parallel_execute(worker_tasks); // Batches run in parallel
}
```

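The `group_by_depth` helper and the N → 0 iteration order used by the coordinator are not defined in this file. A minimal sketch, assuming each module carries a numeric `depth` field:

```javascript
// Hypothetical sketch of group_by_depth: bucket modules by their depth field.
function groupByDepth(modules) {
  const byDepth = {};
  for (const m of modules) {
    (byDepth[m.depth] = byDepth[m.depth] || []).push(m);
  }
  return byDepth;
}

// Depths sorted N → 0, matching the deepest-first processing order.
function depthsDescending(byDepth) {
  return Object.keys(byDepth).map(Number).sort((a, b) => b - a);
}
```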
### Batch Worker Prompt Template

```
PURPOSE: Generate/update documentation for assigned modules with tool fallback (related mode)

TASK:
Generate documentation for the following modules based on recent changes. For each module, try tools in order until success.

PROJECT: {{project_name}}
OUTPUT: .workflow/docs/{{project_name}}/

MODULES:
{{module_path_1}} (type: {{folder_type_1}})
{{module_path_2}} (type: {{folder_type_2}})
{{module_path_3}} (type: {{folder_type_3}})
{{module_path_4}} (type: {{folder_type_4}})

TOOLS (try in order):
1. {{tool_1}}
2. {{tool_2}}
3. {{tool_3}}

EXECUTION:
For each module above:
1. Try tool 1:
   Bash({
     command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_1}}"`,
     run_in_background: false
   })
   → Success: Report "✅ {{module_path}} docs generated with {{tool_1}}", proceed to next module
   → Failure: Try tool 2
2. Try tool 2:
   Bash({
     command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_2}}"`,
     run_in_background: false
   })
   → Success: Report "✅ {{module_path}} docs generated with {{tool_2}}", proceed to next module
   → Failure: Try tool 3
3. Try tool 3:
   Bash({
     command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_3}}"`,
     run_in_background: false
   })
   → Success: Report "✅ {{module_path}} docs generated with {{tool_3}}", proceed to next module
   → Failure: Report "❌ FAILED: {{module_path}} failed all tools", proceed to next module

FOLDER TYPE HANDLING:
- code: Generate API.md + README.md
- navigation: Generate README.md only

REPORTING:
Report final summary with:
- Total processed: X modules
- Successful: Y modules
- Failed: Z modules
- Tool usage: {{tool_1}}:X, {{tool_2}}:Y, {{tool_3}}:Z
```

## Phase 4: Verification

```javascript
// Check documentation files created/updated
Bash({command: 'find .workflow/docs -type f -name "*.md" 2>/dev/null | wc -l', run_in_background: false});

// Display recent changes
Bash({command: 'find .workflow/docs -type f -name "*.md" -mmin -60 2>/dev/null', run_in_background: false});
```

**Aggregate results**:
```
Documentation Generation Summary:
Total: 4 | Success: 4 | Failed: 0

Tool usage:
- gemini: 4 modules
- qwen: 0 modules (fallback)
- codex: 0 modules

Changes:
.workflow/docs/myproject/src/api/auth/API.md (new)
.workflow/docs/myproject/src/api/auth/README.md (new)
.workflow/docs/myproject/src/api/API.md (updated)
.workflow/docs/myproject/src/api/README.md (updated)
.workflow/docs/myproject/src/API.md (updated)
.workflow/docs/myproject/src/README.md (updated)
.workflow/docs/myproject/API.md (updated)
.workflow/docs/myproject/README.md (updated)
```

## Execution Summary

**Module Count Threshold**:
- **<15 modules**: Coordinator executes Phase 3A (Direct Execution)
- **≥15 modules**: Coordinator executes Phase 3B (Agent Batch Execution)

**Agent Hierarchy** (for ≥15 modules):
- **Coordinator**: Handles batch division, spawns worker agents per depth
- **Worker Agents**: Each processes 4 modules with tool fallback (related mode)

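The threshold decision above reduces to a one-line predicate; 15 is the documented cutoff, and the function name is illustrative:

```javascript
// <15 modules → direct execution (Phase 3A); otherwise agent batches (Phase 3B).
function chooseExecutionMode(moduleCount, threshold = 15) {
  return moduleCount < threshold ? "direct" : "agent-batch";
}
```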
## Error Handling

**Batch Worker**:
- Tool fallback per module (auto-retry)
- Batch isolation (failures don't propagate)
- Clear per-module status reporting

**Coordinator**:
- No changes: Use fallback (recent 10 modules)
- User decline: No execution
- Verification fail: Report incomplete modules
- Partial failures: Continue execution, report failed modules

**Fallback Triggers**:
- Non-zero exit code
- Script timeout
- Unexpected output

## Output Structure

```
.workflow/docs/{project_name}/
├── src/                       # Mirrors source structure
│   ├── modules/
│   │   ├── README.md
│   │   ├── auth/
│   │   │   ├── API.md         # Updated based on code changes
│   │   │   └── README.md      # Updated based on code changes
│   │   └── api/
│   │       ├── API.md
│   │       └── README.md
│   └── utils/
│       └── README.md
└── README.md
```

## Usage Examples

```bash
# Daily development documentation update
/memory:docs-related-cli

# After feature work with a specific tool
/memory:docs-related-cli --tool qwen

# Code quality documentation review after implementation
/memory:docs-related-cli --tool codex
```

## Key Advantages

- **Efficiency**: 30 modules → 8 agents (73% reduction)
- **Resilience**: 3-tier fallback per module
- **Performance**: Parallel batches, no concurrency limits
- **Context-aware**: Updates based on actual git changes
- **Fast**: Only affected modules, not the entire project
- **Incremental**: Single strategy for focused updates

## Coordinator Checklist

- Parse `--tool` (default: gemini)
- Get project metadata (name, root)
- Detect changed modules via detect_changed_modules.sh
- **Smart filter modules** (auto-detect tech stack, skip tests/build/config/vendor)
- Cache git changes
- Apply fallback if no changes (recent 10 modules)
- Construct tool fallback order
- **Present filtered plan** with skip reasons and change types
- **Wait for y/n confirmation**
- Determine execution mode:
  - **<15 modules**: Direct execution (Phase 3A)
    - For each depth (N→0): Sequential module updates with tool fallback
  - **≥15 modules**: Agent batch execution (Phase 3B)
    - For each depth (N→0): Batch modules (4 per batch), spawn batch workers in parallel
    - Wait for depth/batch completion
    - Aggregate results
- Verification check (documentation files created/updated)
- Display summary + recent changes

## Comparison with Full Documentation Generation

| Aspect | Related Generation | Full Generation |
|--------|-------------------|-----------------|
| **Scope** | Changed modules only | All project modules |
| **Speed** | Fast (minutes) | Slower (10-30 min) |
| **Use case** | Daily development | Initial setup, major refactoring |
| **Strategy** | `single` (all) | `full` (L3) + `single` (L1-2) |
| **Trigger** | After commits | After setup or major changes |
| **Batching** | 4 modules/agent | 4 modules/agent |
| **Fallback** | gemini→qwen→codex | gemini→qwen→codex |
| **Complexity threshold** | ≤15 modules | ≤20 modules |

## Template Reference

Templates used from `~/.claude/workflows/cli-templates/prompts/documentation/`:
- `api.txt`: Code API documentation
- `module-readme.txt`: Module purpose, usage, dependencies
- `folder-navigation.txt`: Navigation README for folders

## Related Commands

- `/memory:docs-full-cli` - Full project documentation generation
- `/memory:docs` - Agent-based documentation planning workflow
- `/memory:update-related` - Update CLAUDE.md for changed modules

@@ -95,14 +95,15 @@ src/ (depth 1) → SINGLE-LAYER STRATEGY

### Phase 1: Discovery & Analysis

```javascript
// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});

// Get module structure
Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list", run_in_background: false});

// OR with --path
Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list", run_in_background: false});
```

**Parse output** `depth:N|path:<PATH>|...` to extract module paths and count.

@@ -172,26 +173,23 @@ Update Plan:

**Strategy**: Parallel execution within layer (max 4 concurrent), no agent overhead.

**CRITICAL**: All Bash commands use `run_in_background: false` for synchronous execution.

```javascript
// Group modules by LAYER (not depth)
let modules_by_layer = group_by_layer(module_list);
let tool_order = construct_tool_order(primary_tool);

// Process by LAYER (3 → 2 → 1), not by depth
for (let layer of [3, 2, 1]) {
  if (modules_by_layer[layer].length === 0) continue;

  let batches = batch_modules(modules_by_layer[layer], 4);

  for (let batch of batches) {
    let parallel_tasks = batch.map(module => {
      return async () => {
        // Auto-determine strategy based on depth
        let strategy = module.depth >= 3 ? "multi-layer" : "single-layer";

        for (let tool of tool_order) {
          Bash({
            command: `cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "${strategy}" "." "${tool}"`,
            run_in_background: false
          });
          if (bash_result.exit_code === 0) {
            report(`✅ ${module.path} (Layer ${layer}) updated with ${tool}`);
            return true;
          }
        }
@@ -200,7 +198,6 @@ for (let layer of [3, 2, 1]) {
        return false;
      };
    });

    await Promise.all(parallel_tasks.map(task => task()));
  }
}
```
@@ -255,7 +252,10 @@ EXECUTION SCRIPT: ~/.claude/scripts/update_module_claude.sh
EXECUTION FLOW (for each module):
1. Tool fallback loop (exit on first success):
   for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
     Bash({
       command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "{{strategy}}" "." "${tool}"`,
       run_in_background: false
     })
     exit_code=$?

     if [ $exit_code -eq 0 ]; then
@@ -287,12 +287,12 @@ REPORTING FORMAT:
```
### Phase 4: Safety Verification

```javascript
// Check only CLAUDE.md files modified
Bash({command: 'git diff --cached --name-only | grep -v "CLAUDE.md" || echo "Only CLAUDE.md files modified"', run_in_background: false});

// Display status
Bash({command: "git status --short", run_in_background: false});
```

**Result Summary**:

@@ -39,12 +39,12 @@ Orchestrates context-aware CLAUDE.md updates for changed modules using batched a

## Phase 1: Change Detection & Analysis

```javascript
// Detect changed modules
Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});

// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
```

**Parse output** `depth:N|path:<PATH>|change:<TYPE>` to extract affected modules.

@@ -89,47 +89,36 @@ Related Update Plan:

## Phase 3A: Direct Execution (<15 modules)

**Strategy**: Parallel execution within depth (max 4 concurrent), no agent overhead.

**CRITICAL**: All Bash commands use `run_in_background: false` for synchronous execution.

```javascript
let modules_by_depth = group_by_depth(changed_modules);
let tool_order = construct_tool_order(primary_tool);

for (let depth of sorted_depths.reverse()) { // N → 0
  let batches = batch_modules(modules_by_depth[depth], 4);

  for (let batch of batches) {
    let parallel_tasks = batch.map(module => {
      return async () => {
        for (let tool of tool_order) {
          Bash({
            command: `cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "${tool}"`,
            run_in_background: false
          });
          if (bash_result.exit_code === 0) {
            report(`✅ ${module.path} updated with ${tool}`);
            return true;
          }
        }
        report(`❌ FAILED: ${module.path} failed all tools`);
        return false;
      };
    });

    await Promise.all(parallel_tasks.map(task => task()));
  }
}
```

**Benefits**:
- No agent startup overhead
- Parallel execution within depth (max 4 concurrent)
- Tool fallback still applies per module
- Faster for small changesets (<15 modules)
- Same batching strategy as Phase 3B but without agent layer

---

## Phase 3B: Agent Batch Execution (≥15 modules)
@@ -193,19 +182,27 @@ TOOLS (try in order):

EXECUTION:
For each module above:
1. Try tool 1:
   Bash({
     command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_1}}"`,
     run_in_background: false
   })
   → Success: Report "✅ {{module_path}} updated with {{tool_1}}", proceed to next module
   → Failure: Try tool 2
2. Try tool 2:
   Bash({
     command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_2}}"`,
     run_in_background: false
   })
   → Success: Report "✅ {{module_path}} updated with {{tool_2}}", proceed to next module
   → Failure: Try tool 3
3. Try tool 3:
   Bash({
     command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_3}}"`,
     run_in_background: false
   })
   → Success: Report "✅ {{module_path}} updated with {{tool_3}}", proceed to next module
   → Failure: Report "❌ FAILED: {{module_path}} failed all tools", proceed to next module

REPORTING:
Report final summary with:
@@ -213,30 +210,16 @@ Report final summary with:
- Successful: Y modules
- Failed: Z modules
- Tool usage: {{tool_1}}:X, {{tool_2}}:Y, {{tool_3}}:Z
- Detailed results for each module
```

### Example Execution

**Depth 3 (new module)**:
```javascript
Task(subagent_type="memory-bridge", batch=[./src/api/auth], mode="related")
```

**Benefits**:
- 4 modules → 1 agent (75% reduction)
- Parallel batches, sequential within batch
- Each module gets full fallback chain
- Context-aware updates based on git changes

## Phase 4: Safety Verification

```javascript
// Check only CLAUDE.md modified
Bash({command: 'git diff --cached --name-only | grep -v "CLAUDE.md" || echo "Only CLAUDE.md files modified"', run_in_background: false});

// Display statistics
Bash({command: "git diff --stat", run_in_background: false});
```

**Aggregate results**:

@@ -381,6 +381,64 @@ Updated {role2}/analysis.md with Clarifications section + enhanced content
- Ambiguities resolved, placeholders removed
- Consistent terminology

### Phase 6: Update Context Package

**Purpose**: Sync updated role analyses to context-package.json to avoid a stale cache

**Operations**:
```bash
context_pkg_path = ".workflow/active/WFS-{session}/.process/context-package.json"

# 1. Read existing package
context_pkg = Read(context_pkg_path)

# 2. Re-read brainstorm artifacts (now with synthesis enhancements)
brainstorm_dir = ".workflow/active/WFS-{session}/.brainstorming"

# 2.1 Update guidance-specification if it exists
IF exists({brainstorm_dir}/guidance-specification.md):
    context_pkg.brainstorm_artifacts.guidance_specification.content = Read({brainstorm_dir}/guidance-specification.md)
    context_pkg.brainstorm_artifacts.guidance_specification.updated_at = NOW()

# 2.2 Update synthesis-specification if it exists
IF exists({brainstorm_dir}/synthesis-specification.md):
    IF context_pkg.brainstorm_artifacts.synthesis_output:
        context_pkg.brainstorm_artifacts.synthesis_output.content = Read({brainstorm_dir}/synthesis-specification.md)
        context_pkg.brainstorm_artifacts.synthesis_output.updated_at = NOW()

# 2.3 Re-read all role analysis files
role_analysis_files = Glob({brainstorm_dir}/*/analysis*.md)
context_pkg.brainstorm_artifacts.role_analyses = []

FOR file IN role_analysis_files:
    role_name = extract_role_from_path(file)  # e.g., "ui-designer"
    relative_path = file.replace({brainstorm_dir}/, "")

    context_pkg.brainstorm_artifacts.role_analyses.push({
        "role": role_name,
        "files": [{
            "path": relative_path,
            "type": "primary",
            "content": Read(file),
            "updated_at": NOW()
        }]
    })

# 3. Update metadata
context_pkg.metadata.updated_at = NOW()
context_pkg.metadata.synthesis_timestamp = NOW()

# 4. Write back
Write(context_pkg_path, JSON.stringify(context_pkg, indent=2))

REPORT: "✅ Updated context-package.json with synthesis results"
```

**TodoWrite Update**:
```json
{"content": "Update context package with synthesis results", "status": "completed", "activeForm": "Updating context package"}
```

## Session Metadata

Update `workflow-session.json`:

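The `extract_role_from_path` helper referenced in step 2.3 is not defined here. One plausible sketch, assuming analysis files live one directory below `.brainstorming` (e.g. `ui-designer/analysis.md`):

```javascript
// Hypothetical: the role name is the parent directory of the analysis file,
// given a path relative to the .brainstorming directory.
function extractRoleFromPath(relativePath) {
  const parts = relativePath.split("/");
  return parts.length > 1 ? parts[parts.length - 2] : null;
}
```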
@@ -54,13 +54,64 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag

### Phase 1: Discovery
**Applies to**: Normal mode only (skipped in resume mode)

**Process**:
1. **Check Active Sessions**: Find sessions in `.workflow/active/` directory
2. **Select Session**: If multiple found, prompt user selection
3. **Load Session Metadata**: Read `workflow-session.json` ONLY (minimal context)
4. **DO NOT read task JSONs yet** - defer until execution phase
**Purpose**: Find and select active workflow session with user confirmation when multiple sessions exist

**Resume Mode**: This phase is completely skipped when `--resume-session="session-id"` flag is provided.
**Process**:

#### Step 1.1: Count Active Sessions
```bash
bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | wc -l)
```

#### Step 1.2: Handle Session Selection

**Case A: No Sessions** (count = 0)
```
ERROR: No active workflow sessions found
Run /workflow:plan "task description" to create a session
```

**Case B: Single Session** (count = 1)
```bash
bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | head -1 | xargs basename)
```
Auto-select and continue to Phase 2.

**Case C: Multiple Sessions** (count > 1)

List sessions with metadata and prompt user selection:
```bash
bash(for dir in .workflow/active/WFS-*/; do
  session=$(basename "$dir")
  project=$(jq -r '.project // "Unknown"' "$dir/workflow-session.json" 2>/dev/null)
  total=$(grep -c "^- \[" "$dir/TODO_LIST.md" 2>/dev/null || echo "0")
  completed=$(grep -c "^- \[x\]" "$dir/TODO_LIST.md" 2>/dev/null || echo "0")
  [ "$total" -gt 0 ] && progress=$((completed * 100 / total)) || progress=0
  echo "${session} | ${project} | ${completed}/${total} tasks (${progress}%)"
done)
```

Use AskUserQuestion to present formatted options:
```
Multiple active workflow sessions detected. Please select one:

1. WFS-auth-system | Authentication System | 3/5 tasks (60%)
2. WFS-payment-module | Payment Integration | 0/8 tasks (0%)

Enter number, full session ID, or partial match:
```

Parse user input (supports: number "1", full ID "WFS-auth-system", or partial "auth"), validate selection, and continue to Phase 2.
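
The selection-parsing logic described above can be sketched as follows. This is a minimal illustration, not part of the workflow spec: the function name `resolveSessionSelection` and the sample session IDs are assumptions.

```javascript
// Resolve a user's selection against the discovered session IDs.
// Accepts an index ("1"), a full ID ("WFS-auth-system"), or a partial match ("auth").
function resolveSessionSelection(input, sessions) {
  const trimmed = input.trim()
  const index = Number(trimmed)
  // Case 1: numeric index into the displayed list (1-based)
  if (Number.isInteger(index) && index >= 1 && index <= sessions.length) {
    return sessions[index - 1]
  }
  // Case 2: exact session ID
  if (sessions.includes(trimmed)) return trimmed
  // Case 3: partial match — only accept if unambiguous
  const partial = sessions.filter(s => s.includes(trimmed))
  return partial.length === 1 ? partial[0] : null // null → re-prompt the user
}

const sessions = ["WFS-auth-system", "WFS-payment-module"]
console.log(resolveSessionSelection("1", sessions))    // → WFS-auth-system
console.log(resolveSessionSelection("auth", sessions)) // → WFS-auth-system
console.log(resolveSessionSelection("WFS-", sessions)) // → null (ambiguous)
```

Returning `null` on an ambiguous partial match (rather than picking the first hit) forces a re-prompt, which keeps the selection deterministic.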

#### Step 1.3: Load Session Metadata
```bash
bash(cat .workflow/active/${sessionId}/workflow-session.json)
```

**Output**: Store session metadata in memory
**DO NOT read task JSONs yet** - defer until execution phase (lazy loading)

**Resume Mode**: This entire phase is skipped when `--resume-session="session-id"` flag is provided.

### Phase 2: Planning Document Analysis
**Applies to**: Normal mode only (skipped in resume mode)

@@ -185,86 +185,104 @@ Execution Complete
previousExecutionResults = []
```

### Step 2: Create TodoWrite Execution List
### Step 2: Task Grouping & Batch Creation

**Operations**:
- Create execution tracking from task list
- Typically single execution call for all tasks
- Split into multiple calls if task list very large (>10 tasks)

**Execution Call Creation**:
**Dependency Analysis & Grouping Algorithm**:
```javascript
function createExecutionCalls(tasks) {
  const taskTitles = tasks.map(t => t.title || t)
// Infer dependencies: same file → sequential, keywords (use/integrate) → sequential
function inferDependencies(tasks) {
  return tasks.map((task, i) => {
    const deps = []
    const file = task.file || task.title.match(/in\s+([^\s:]+)/)?.[1]
    const keywords = (task.description || task.title).toLowerCase()

  // Single call for ≤10 tasks (most common)
  if (tasks.length <= 10) {
    return [{
      method: executionMethod === "Codex" ? "Codex" : "Agent",
      taskSummary: taskTitles.length <= 3
        ? taskTitles.join(', ')
        : `${taskTitles.slice(0, 2).join(', ')}, and ${taskTitles.length - 2} more`,
      tasks: tasks
    }]
  }

  // Split into multiple calls for >10 tasks
  const callSize = 5
  const calls = []
  for (let i = 0; i < tasks.length; i += callSize) {
    const batchTasks = tasks.slice(i, i + callSize)
    const batchTitles = batchTasks.map(t => t.title || t)
    calls.push({
      method: executionMethod === "Codex" ? "Codex" : "Agent",
      taskSummary: `Tasks ${i + 1}-${Math.min(i + callSize, tasks.length)}: ${batchTitles[0]}...`,
      tasks: batchTasks
    })
  }
  return calls
    for (let j = 0; j < i; j++) {
      const prevFile = tasks[j].file || tasks[j].title.match(/in\s+([^\s:]+)/)?.[1]
      if (file && prevFile === file) deps.push(j) // Same file
      else if (/use|integrate|call|import/.test(keywords)) deps.push(j) // Keyword dependency
    }
    return { ...task, taskIndex: i, dependencies: deps }
  })
}

// Create execution calls with IDs
executionCalls = createExecutionCalls(planObject.tasks).map((call, index) => ({
  ...call,
  id: `[${call.method}-${index+1}]`
}))
// Group into batches: independent → parallel [P1,P2...], dependent → sequential [S1,S2...]
function createExecutionCalls(tasks, executionMethod) {
  const tasksWithDeps = inferDependencies(tasks)
  const maxBatch = executionMethod === "Codex" ? 4 : 7
  const calls = []
  const processed = new Set()

  // Parallel: independent tasks, different files, max batch size
  const parallelGroups = []
  tasksWithDeps.forEach(t => {
    if (t.dependencies.length === 0 && !processed.has(t.taskIndex)) {
      const group = [t]
      processed.add(t.taskIndex)
      tasksWithDeps.forEach(o => {
        if (!o.dependencies.length && !processed.has(o.taskIndex) &&
            group.length < maxBatch && t.file !== o.file) {
          group.push(o)
          processed.add(o.taskIndex)
        }
      })
      parallelGroups.push(group)
    }
  })

  // Sequential: dependent tasks, batch when deps satisfied
  const remaining = tasksWithDeps.filter(t => !processed.has(t.taskIndex))
  while (remaining.length > 0) {
    const batch = remaining.filter((t, i) =>
      i < maxBatch && t.dependencies.every(d => processed.has(d))
    )
    if (!batch.length) break
    batch.forEach(t => processed.add(t.taskIndex))
    calls.push({ executionType: "sequential", groupId: `S${calls.length + 1}`, tasks: batch })
    remaining.splice(0, remaining.length, ...remaining.filter(t => !processed.has(t.taskIndex)))
  }

  // Combine results
  return [
    ...parallelGroups.map((g, i) => ({
      method: executionMethod, executionType: "parallel", groupId: `P${i+1}`,
      taskSummary: g.map(t => t.title).join(' | '), tasks: g
    })),
    ...calls.map(c => ({ ...c, method: executionMethod, taskSummary: c.tasks.map(t => t.title).join(' → ') }))
  ]
}

executionCalls = createExecutionCalls(planObject.tasks, executionMethod).map(c => ({ ...c, id: `[${c.groupId}]` }))

// Create TodoWrite list
TodoWrite({
  todos: executionCalls.map(call => ({
    content: `${call.id} (${call.taskSummary})`,
  todos: executionCalls.map(c => ({
    content: `${c.executionType === "parallel" ? "⚡" : "→"} ${c.id} (${c.tasks.length} tasks)`,
    status: "pending",
    activeForm: `Executing ${call.id} (${call.taskSummary})`
    activeForm: `Executing ${c.id}`
  }))
})
```

**Example Execution Lists**:
```
Single call (typical):
[ ] [Agent-1] (Create AuthService, Add JWT utilities, Implement middleware)

Few tasks:
[ ] [Codex-1] (Create AuthService, Add JWT utilities, and 3 more)

Large task sets (>10):
[ ] [Agent-1] (Tasks 1-5: Create AuthService, Add JWT utilities, ...)
[ ] [Agent-2] (Tasks 6-10: Create tests, Update docs, ...)
```
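
The parallel/sequential split above can be illustrated with a deliberately simplified, self-contained sketch (the full algorithm also infers keyword dependencies and numbers the groups; `groupTasks` and the sample task data here are invented for illustration):

```javascript
// Simplified grouping: independent tasks on distinct files join one parallel
// batch; a task touching an already-claimed file is deferred to sequential.
function groupTasks(tasks, maxBatch) {
  const parallel = []
  const sequential = []
  const seenFiles = new Set()
  for (const task of tasks) {
    if (!seenFiles.has(task.file) && parallel.length < maxBatch) {
      parallel.push(task)
      seenFiles.add(task.file)
    } else {
      sequential.push(task) // same file as an earlier task, or batch overflow
    }
  }
  return { parallel, sequential }
}

const tasks = [
  { title: "Create AuthService", file: "src/auth.ts" },
  { title: "Add JWT utilities", file: "src/jwt.ts" },
  { title: "Wire middleware", file: "src/auth.ts" }, // same file → sequential
]
const { parallel, sequential } = groupTasks(tasks, 7)
console.log(parallel.length, sequential.length) // → 2 1
```

The file-based heuristic is conservative: two tasks in the same file are assumed to conflict, so they never run in the same parallel batch.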

### Step 3: Launch Execution

**IMPORTANT**: CLI execution MUST run in foreground (no background execution)

**Execution Loop**:
**Execution Flow**: Parallel batches concurrently → Sequential batches in order
```javascript
for (currentIndex = 0; currentIndex < executionCalls.length; currentIndex++) {
  const currentCall = executionCalls[currentIndex]
const parallel = executionCalls.filter(c => c.executionType === "parallel")
const sequential = executionCalls.filter(c => c.executionType === "sequential")

  // Update TodoWrite: mark current call in_progress
  // Launch execution with previousExecutionResults context
  // After completion: collect result, add to previousExecutionResults
  // Update TodoWrite: mark current call completed
// Phase 1: Launch all parallel batches (single message with multiple tool calls)
if (parallel.length > 0) {
  TodoWrite({ todos: executionCalls.map(c => ({ status: c.executionType === "parallel" ? "in_progress" : "pending" })) })
  parallelResults = await Promise.all(parallel.map(c => executeBatch(c)))
  previousExecutionResults.push(...parallelResults)
  TodoWrite({ todos: executionCalls.map(c => ({ status: parallel.includes(c) ? "completed" : "pending" })) })
}

// Phase 2: Execute sequential batches one by one
for (const call of sequential) {
  TodoWrite({ todos: executionCalls.map(c => ({ status: c === call ? "in_progress" : "..." })) })
  result = await executeBatch(call)
  previousExecutionResults.push(result)
  TodoWrite({ todos: executionCalls.map(c => ({ status: "completed" or "pending" })) })
}
```

@@ -323,12 +341,17 @@ ${result.notes ? `Notes: ${result.notes}` : ''}

${clarificationContext ? `\n## Clarifications\n${JSON.stringify(clarificationContext, null, 2)}` : ''}

## Instructions
- Reference original request to ensure alignment
- Review previous results to understand completed work
- Build on previous work, avoid duplication
- Test functionality as you implement
- Complete all assigned tasks
${executionContext?.session?.artifacts ? `\n## Planning Artifacts
Detailed planning context available in:
${executionContext.session.artifacts.exploration ? `- Exploration: ${executionContext.session.artifacts.exploration}` : ''}
- Plan: ${executionContext.session.artifacts.plan}
- Task: ${executionContext.session.artifacts.task}

Read these files for detailed architecture, patterns, and constraints.` : ''}

## Requirements
MUST complete ALL ${planObject.tasks.length} tasks listed above in this single execution.
Return only after all tasks are fully implemented and tested.
`
)
```

@@ -341,6 +364,11 @@ When to use:
- `executionMethod = "Codex"`
- `executionMethod = "Auto" AND complexity = "Medium" or "High"`

**Artifact Path Delegation**:
- Include artifact file paths in CLI prompt for enhanced context
- Codex can read artifact files for detailed planning information
- Example: Reference exploration.json for architecture patterns

Command format:
```bash
function formatTaskForCodex(task, index) {
@@ -390,12 +418,18 @@ Constraints: ${explorationContext.constraints || 'None'}

${clarificationContext ? `\n### User Clarifications\n${Object.entries(clarificationContext).map(([q, a]) => `${q}: ${a}`).join('\n')}` : ''}

## Execution Instructions
- Reference original request to ensure alignment
- Review previous results for context continuity
- Build on previous work, don't duplicate completed tasks
- Complete all assigned tasks in single execution
- Test functionality as you implement
${executionContext?.session?.artifacts ? `\n### Planning Artifact Files
Detailed planning context available in session folder:
${executionContext.session.artifacts.exploration ? `- Exploration: ${executionContext.session.artifacts.exploration}` : ''}
- Plan: ${executionContext.session.artifacts.plan}
- Task: ${executionContext.session.artifacts.task}

Read these files for complete architecture details, code patterns, and integration constraints.
` : ''}

## Requirements
MUST complete ALL ${planObject.tasks.length} tasks listed above in this single execution.
Return only after all tasks are fully implemented and tested.

Complexity: ${planObject.complexity}
" --skip-git-repo-check -s danger-full-access
@@ -414,105 +448,72 @@ bash_result = Bash(

**Result Collection**: After completion, analyze output and collect result following `executionResult` structure

### Step 4: Track Execution Progress
### Step 4: Progress Tracking

**Real-time TodoWrite Updates** at execution call level:

```javascript
// When call starts
TodoWrite({
  todos: [
    { content: "[Agent-1] (Implement auth + Create JWT utils)", status: "in_progress", activeForm: "..." },
    { content: "[Agent-2] (Add middleware + Update routes)", status: "pending", activeForm: "..." }
  ]
})

// When call completes
TodoWrite({
  todos: [
    { content: "[Agent-1] (Implement auth + Create JWT utils)", status: "completed", activeForm: "..." },
    { content: "[Agent-2] (Add middleware + Update routes)", status: "in_progress", activeForm: "..." }
  ]
})
```

**User Visibility**:
- User sees execution call progress (not individual task progress)
- Current execution highlighted as "in_progress"
- Completed executions marked with checkmark
- Each execution shows task summary for context
Progress tracked at batch level (not individual task level). Icons: ⚡ (parallel, concurrent), → (sequential, one-by-one)

### Step 5: Code Review (Optional)

**Skip Condition**: Only run if `codeReviewTool ≠ "Skip"`

**Operations**:
- Agent Review: Current agent performs direct review
- Gemini Review: Execute gemini CLI with review prompt
- Custom tool: Execute specified CLI tool (qwen, codex, etc.)
**Review Focus**: Verify implementation against task.json acceptance criteria
- Read task.json from session artifacts for acceptance criteria
- Check each acceptance criterion is fulfilled
- Validate code quality and identify issues
- Ensure alignment with planned approach

**Command Formats**:
**Operations**:
- Agent Review: Current agent performs direct review (read task.json for acceptance criteria)
- Gemini Review: Execute gemini CLI with review prompt (task.json in CONTEXT)
- Custom tool: Execute specified CLI tool (qwen, codex, etc.) with task.json reference

**Unified Review Template** (All tools use same standard):

**Review Criteria**:
- **Acceptance Criteria**: Verify each criterion from task.json `context.acceptance`
- **Code Quality**: Analyze quality, identify issues, suggest improvements
- **Plan Alignment**: Validate implementation matches planned approach

**Shared Prompt Template** (used by all CLI tools):
```
PURPOSE: Code review for implemented changes against task.json acceptance criteria
TASK: • Verify task.json acceptance criteria fulfillment • Analyze code quality • Identify issues • Suggest improvements • Validate plan adherence
MODE: analysis
CONTEXT: @**/* @{task.json} @{plan.json} [@{exploration.json}] | Memory: Review lite-execute changes against task.json requirements
EXPECTED: Quality assessment with acceptance criteria verification, issue identification, and recommendations. Explicitly check each acceptance criterion from task.json.
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-quality.txt) | Focus on task.json acceptance criteria and plan adherence | analysis=READ-ONLY
```

**Tool-Specific Execution** (Apply shared prompt template above):

```bash
# Agent Review: Direct agent review (no CLI)
# Uses analysis prompt and TodoWrite tools directly
# Method 1: Agent Review (current agent)
# - Read task.json: ${executionContext.session.artifacts.task}
# - Apply unified review criteria (see Shared Prompt Template)
# - Report findings directly

# Gemini Review:
gemini -p "
PURPOSE: Code review for implemented changes
TASK: • Analyze quality • Identify issues • Suggest improvements
MODE: analysis
CONTEXT: @**/* | Memory: Review lite-execute changes
EXPECTED: Quality assessment with recommendations
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-quality.txt) | Focus on recent changes | analysis=READ-ONLY
"
# Method 2: Gemini Review (recommended)
gemini -p "[Shared Prompt Template with artifacts]"
# CONTEXT includes: @**/* @${task.json} @${plan.json} [@${exploration.json}]

# Qwen Review (custom tool via "Other"):
qwen -p "
PURPOSE: Code review for implemented changes
TASK: • Analyze quality • Identify issues • Suggest improvements
MODE: analysis
CONTEXT: @**/* | Memory: Review lite-execute changes
EXPECTED: Quality assessment with recommendations
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-quality.txt) | Focus on recent changes | analysis=READ-ONLY
"
# Method 3: Qwen Review (alternative)
qwen -p "[Shared Prompt Template with artifacts]"
# Same prompt as Gemini, different execution engine

# Codex Review (custom tool via "Other"):
codex --full-auto exec "Review recent code changes for quality, potential issues, and improvements" --skip-git-repo-check -s danger-full-access
# Method 4: Codex Review (autonomous)
codex --full-auto exec "[Verify task.json acceptance criteria at ${task.json}]" --skip-git-repo-check -s danger-full-access
```

**Implementation Note**: Replace `[Shared Prompt Template with artifacts]` placeholder with actual template content, substituting:
- `@{task.json}` → `@${executionContext.session.artifacts.task}`
- `@{plan.json}` → `@${executionContext.session.artifacts.plan}`
- `[@{exploration.json}]` → `@${executionContext.session.artifacts.exploration}` (if exists)
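
The substitution described in the note above can be sketched in a few lines. This is an illustrative helper only: `buildReviewContext` and the abbreviated template fragment are assumptions, not part of the command spec.

```javascript
// Build the CONTEXT line of the shared review prompt from the session
// artifact paths, omitting exploration.json when it was never produced.
function buildReviewContext(artifacts) {
  const parts = ["@**/*", `@${artifacts.task}`, `@${artifacts.plan}`]
  if (artifacts.exploration) parts.push(`@${artifacts.exploration}`)
  return `CONTEXT: ${parts.join(" ")} | Memory: Review lite-execute changes against task.json requirements`
}

const artifacts = {
  exploration: null, // no exploration phase for this session
  plan: ".workflow/.lite-plan/demo-2025-01-15-14-30-45/plan.json",
  task: ".workflow/.lite-plan/demo-2025-01-15-14-30-45/task.json",
}
console.log(buildReviewContext(artifacts))
```

Guarding on `artifacts.exploration` mirrors the `[@{exploration.json}]` optional bracket in the template: the placeholder is dropped entirely rather than substituted with `null`.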

## Best Practices

### Execution Intelligence

1. **Context Continuity**: Each execution call receives previous results
   - Prevents duplication across multiple executions
   - Maintains coherent implementation flow
   - Builds on completed work

2. **Execution Call Tracking**: Progress at call level, not task level
   - Each call handles all or subset of tasks
   - Clear visibility of current execution
   - Simple progress updates

3. **Flexible Execution**: Multiple input modes supported
   - In-memory: Seamless lite-plan integration
   - Prompt: Quick standalone execution
   - File: Intelligent format detection
   - Enhanced Task JSON (lite-plan export): Full plan extraction
   - Plain text: Uses as prompt

### Task Management

1. **Live Progress Updates**: Real-time TodoWrite tracking
   - Execution calls created before execution starts
   - Updated as executions progress
   - Clear completion status

2. **Simple Execution**: Straightforward task handling
   - All tasks in single call (typical)
   - Split only for very large task sets (>10)
   - Agent/Codex determines optimal execution order
**Input Modes**: In-memory (lite-plan), prompt (standalone), file (JSON/text)
**Batch Limits**: Agent 7 tasks, CLI 4 tasks
**Execution**: Parallel batches use single Claude message with multiple tool calls (no concurrency limit)

## Error Handling

@@ -546,10 +547,26 @@ Passed from lite-plan via global variable:
  clarificationContext: {...} | null,
  executionMethod: "Agent" | "Codex" | "Auto",
  codeReviewTool: "Skip" | "Gemini Review" | "Agent Review" | string,
  originalUserInput: string
  originalUserInput: string,

  // Session artifacts location (saved by lite-plan)
  session: {
    id: string,      // Session identifier: {taskSlug}-{shortTimestamp}
    folder: string,  // Session folder path: .workflow/.lite-plan/{session-id}
    artifacts: {
      exploration: string | null,  // exploration.json path (if exploration performed)
      plan: string,                // plan.json path (always present)
      task: string                 // task.json path (always exported)
    }
  }
}
```

**Artifact Usage**:
- Artifact files contain detailed planning context
- Pass artifact paths to CLI tools and agents for enhanced context
- See execution options below for usage examples

### executionResult (Output)

Collected after each execution call completes:

@@ -130,6 +130,13 @@ needsExploration = (

**Exploration Execution** (if needed):
```javascript
// Generate session identifiers for artifact storage
const taskSlug = task_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
const timestamp = new Date().toISOString().replace(/[:.]/g, '-')
const shortTimestamp = timestamp.substring(0, 19).replace('T', '-') // YYYY-MM-DD-HH-mm-ss
const sessionId = `${taskSlug}-${shortTimestamp}`
const sessionFolder = `.workflow/.lite-plan/${sessionId}`

Task(
  subagent_type="cli-explore-agent",
  description="Analyze codebase for task context",
@@ -149,9 +156,14 @@ Task(
  Output Format: JSON-like structured object
  `
)

// Save exploration results for CLI/agent access in lite-execute
const explorationFile = `${sessionFolder}/exploration.json`
Write(explorationFile, JSON.stringify(explorationContext, null, 2))
```

**Output**: `explorationContext` (see Data Structures section)
**Output**: `explorationContext` (in-memory, see Data Structures section)
**Artifact**: Saved to `{sessionFolder}/exploration.json` for CLI/agent use

**Progress Tracking**:
- Mark Phase 1 completed
@@ -228,6 +240,14 @@ Current Claude generates plan directly:
- Estimated Time: Total implementation time
- Recommended Execution: "Agent"

```javascript
// Save planning results to session folder (same as Option B)
const planFile = `${sessionFolder}/plan.json`
Write(planFile, JSON.stringify(planObject, null, 2))
```

**Artifact**: Saved to `{sessionFolder}/plan.json` for CLI/agent use

**Option B: Agent-Based Planning (Medium/High Complexity)**

Delegate to cli-lite-planning-agent:
@@ -270,9 +290,14 @@ Task(
  Format: "{Action} in {file_path}: {details} following {pattern}"
  `
)

// Save planning results to session folder
const planFile = `${sessionFolder}/plan.json`
Write(planFile, JSON.stringify(planObject, null, 2))
```

**Output**: `planObject` (see Data Structures section)
**Artifact**: Saved to `{sessionFolder}/plan.json` for CLI/agent use

**Progress Tracking**:
- Mark Phase 3 completed

**Step 4.2: Collect User Confirmation**

Four questions via single AskUserQuestion call:
Three questions via single AskUserQuestion call:

```javascript
AskUserQuestion({
@@ -353,15 +378,6 @@ Confirm plan? (Multi-select: can supplement via "Other")`,
      { label: "Agent Review", description: "@code-reviewer agent" },
      { label: "Skip", description: "No review" }
    ]
  },
  {
    question: "Export plan to Enhanced Task JSON file?\n\nAllows reuse with lite-execute later.",
    header: "Export JSON",
    multiSelect: false,
    options: [
      { label: "Yes", description: "Export to JSON (recommended for complex tasks)" },
      { label: "No", description: "Keep in-memory only" }
    ]
  }
  ]
})
@@ -384,10 +400,6 @@ Code Review (after execution):
├─ Gemini Review → gemini CLI analysis
├─ Agent Review → Current Claude review
└─ Other → Custom tool (e.g., qwen, codex)

Export JSON:
├─ Yes → Export to .workflow/lite-plans/plan-{timestamp}.json
└─ No → In-memory only
```

**Progress Tracking**:
@@ -398,48 +410,48 @@ Export JSON:

### Phase 5: Dispatch to Execution

**Step 5.1: Export Enhanced Task JSON (Optional)**
**Step 5.1: Export Enhanced Task JSON**

Only execute if `userSelection.export_task_json === "Yes"`:
Always export Enhanced Task JSON to session folder:

```javascript
if (userSelection.export_task_json === "Yes") {
  const timestamp = new Date().toISOString().replace(/[:.]/g, '-')
  const taskId = `LP-${timestamp}`
  const filename = `.workflow/lite-plans/${taskId}.json`
const taskId = `LP-${shortTimestamp}`
const filename = `${sessionFolder}/task.json`

  const enhancedTaskJson = {
    id: taskId,
    title: original_task_description,
    status: "pending",
const enhancedTaskJson = {
  id: taskId,
  title: original_task_description,
  status: "pending",

    meta: {
      type: "planning",
      created_at: new Date().toISOString(),
      complexity: planObject.complexity,
      estimated_time: planObject.estimated_time,
      recommended_execution: planObject.recommended_execution,
      workflow: "lite-plan"
  meta: {
    type: "planning",
    created_at: new Date().toISOString(),
    complexity: planObject.complexity,
    estimated_time: planObject.estimated_time,
    recommended_execution: planObject.recommended_execution,
    workflow: "lite-plan",
    session_id: sessionId,
    session_folder: sessionFolder
  },

    context: {
      requirements: [original_task_description],
      plan: {
        summary: planObject.summary,
        approach: planObject.approach,
        tasks: planObject.tasks
      },

  context: {
    requirements: [original_task_description],
    plan: {
      summary: planObject.summary,
      approach: planObject.approach,
      tasks: planObject.tasks
    },
      exploration: explorationContext || null,
      clarifications: clarificationContext || null,
      focus_paths: explorationContext?.relevant_files || [],
      acceptance: planObject.tasks.flatMap(t => t.acceptance)
    }
    exploration: explorationContext || null,
    clarifications: clarificationContext || null,
    focus_paths: explorationContext?.relevant_files || [],
    acceptance: planObject.tasks.flatMap(t => t.acceptance)
  }

  Write(filename, JSON.stringify(enhancedTaskJson, null, 2))
  console.log(`Enhanced Task JSON exported to: ${filename}`)
  console.log(`Reuse with: /workflow:lite-execute ${filename}`)
}

Write(filename, JSON.stringify(enhancedTaskJson, null, 2))
console.log(`Enhanced Task JSON exported to: ${filename}`)
console.log(`Session folder: ${sessionFolder}`)
console.log(`Reuse with: /workflow:lite-execute ${filename}`)
```

**Step 5.2: Store Execution Context**
@@ -451,7 +463,18 @@ executionContext = {
  clarificationContext: clarificationContext || null,
  executionMethod: userSelection.execution_method,
  codeReviewTool: userSelection.code_review_tool,
  originalUserInput: original_task_description
  originalUserInput: original_task_description,

  // Session artifacts location
  session: {
    id: sessionId,
    folder: sessionFolder,
    artifacts: {
      exploration: explorationContext ? `${sessionFolder}/exploration.json` : null,
      plan: `${sessionFolder}/plan.json`,
      task: `${sessionFolder}/task.json` // Always exported
    }
  }
}
```

@@ -462,7 +485,11 @@ SlashCommand(command="/workflow:lite-execute --in-memory")
```

**Execution Handoff**:
- lite-execute reads `executionContext` variable
- lite-execute reads `executionContext` variable from memory
- `executionContext.session.artifacts` contains file paths to saved planning artifacts:
  - `exploration` - exploration.json (if exploration performed)
  - `plan` - plan.json (always exists)
  - `task` - task.json (always exported)
- All execution logic handled by lite-execute
- lite-plan completes after successful handoff

@@ -502,7 +529,7 @@ SlashCommand(command="/workflow:lite-execute --in-memory")
- Plan confirmation (multi-select with supplements)
- Execution method selection
- Code review tool selection (custom via "Other")
- JSON export option
- Enhanced Task JSON always exported to session folder
- Allows plan refinement without re-selecting execution method

### Task Management
@@ -519,11 +546,11 @@ SlashCommand(command="/workflow:lite-execute --in-memory")
- Medium: 5-7 tasks (detailed)
- High: 7-10 tasks (comprehensive)

3. **No File Artifacts During Planning**:
   - All planning stays in memory
   - Optional Enhanced Task JSON export (user choice)
   - Faster workflow, cleaner workspace
   - Plan context passed directly to execution
3. **Session Artifact Management**:
   - All planning artifacts saved to dedicated session folder
   - Enhanced Task JSON always exported for reusability
   - Plan context passed to execution via memory and files
   - Clean organization with session-based folder structure

### Planning Standards

@@ -550,6 +577,39 @@ SlashCommand(command="/workflow:lite-execute --in-memory")
| Phase 4 Confirmation Timeout | User no response > 5 minutes | Save context to temp var, display resume instructions, exit gracefully |
| Phase 4 Modification Loop | User requests modify > 3 times | Suggest breaking task into smaller pieces or using `/workflow:plan` |

## Session Folder Structure

Each lite-plan execution creates a dedicated session folder to organize all artifacts:

```
.workflow/.lite-plan/{task-slug}-{short-timestamp}/
├── exploration.json   # Exploration results (if exploration performed)
├── plan.json          # Planning results (always created)
└── task.json          # Enhanced Task JSON (always created)
```

**Folder Naming Convention**:
- `{task-slug}`: Task description lowercased, runs of non-alphanumeric characters replaced with `-`, then truncated to 40 characters
- `{short-timestamp}`: YYYY-MM-DD-HH-mm-ss format
- Example: `.workflow/.lite-plan/implement-user-auth-jwt-2025-01-15-14-30-45/`
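
The naming convention above matches the identifier code in the exploration phase; a quick check of the slug rule (the sample description is invented):

```javascript
// Derive the session slug exactly as lite-plan does:
// lowercase, non-alphanumeric runs collapsed to "-", truncated to 40 chars.
function taskSlug(description) {
  return description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
}

console.log(taskSlug("Implement user auth (JWT)")) // → implement-user-auth-jwt-
```

Note that the trailing `-` (from the closing parenthesis) is kept as-is; the slug is an identifier, not display text.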
|
||||
|
||||
**File Contents**:
- `exploration.json`: Complete explorationContext object (if exploration performed; see Data Structures)
- `plan.json`: Complete planObject (always created; see Data Structures)
- `task.json`: Enhanced Task JSON with all context (always created; see Data Structures)

**Access Patterns**:
- **lite-plan**: Creates the folder, writes all artifacts during execution, and passes their paths via `executionContext.session.artifacts`
- **lite-execute**: Reads artifact paths from `executionContext.session.artifacts` (see lite-execute.md for usage details)
- **User**: Can inspect artifacts for debugging or reference
- **Reuse**: Pass the `task.json` path to `/workflow:lite-execute {path}` for re-execution

**Benefits**:
- Clean separation between different task executions
- Easy to find and inspect artifacts for a specific task
- Natural history/audit trail of planning sessions
- Supports concurrent lite-plan executions without conflicts

## Data Structures

### explorationContext

@@ -621,7 +681,18 @@ Context passed to lite-execute via --in-memory (Phase 5):

  clarificationContext: {...} | null,  // User responses from Phase 2
  executionMethod: "Agent" | "Codex" | "Auto",
  codeReviewTool: "Skip" | "Gemini Review" | "Agent Review" | string,
  originalUserInput: string,  // User's original task description

  // Session artifacts location (for lite-execute to access saved files)
  session: {
    id: string,      // Session identifier: {taskSlug}-{shortTimestamp}
    folder: string,  // Session folder path: .workflow/.lite-plan/{session-id}
    artifacts: {
      exploration: string | null,  // exploration.json path (if exploration performed)
      plan: string,                // plan.json path (always present)
      task: string                 // task.json path (always exported)
    }
  }
}
```

@@ -72,6 +72,8 @@ CONTEXT: Existing user database schema, REST API endpoints

- Session ID successfully extracted
- Session directory `.workflow/active/[sessionId]/` exists

**Note**: The session directory contains `workflow-session.json` (metadata). Do NOT look for `manifest.json` here; it exists only under `.workflow/archives/` for archived sessions.

**TodoWrite**: Mark phase 1 completed, phase 2 in_progress

**After Phase 1**: Return to the user showing Phase 1 results, then auto-continue to Phase 2

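The validation checks above can be sketched in shell (the session ID and demo setup below are illustrative, not part of the actual workflow):

```shell
# Demo setup: fabricate a live session so the check has something to validate.
sessionId="WFS-demo"
dir=".workflow/active/$sessionId"
mkdir -p "$dir" && echo '{}' > "$dir/workflow-session.json"

# The actual checks: directory exists AND metadata file is present.
if [ -d "$dir" ] && [ -f "$dir/workflow-session.json" ]; then
  status="valid"
else
  status="missing"
fi
echo "$status"
```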
@@ -213,8 +213,6 @@ Refer to: @.claude/agents/action-planning-agent.md for:

Generate all three documents and report completion status:
- Task JSON files created: N files
- Artifacts integrated: synthesis-spec, guidance-specification, N role analyses
- MCP enhancements: code-index, exa-research
- Session ready for execution: /workflow:execute
`
)
```

@@ -227,7 +227,68 @@ Write(file_path=".workflow/active/WFS-{session}/.brainstorming/ui-designer/desig
content="[generated content with @ references]")
```

### Phase 5: Update Context Package

**Purpose**: Sync design system references to context-package.json

**Operations**:
```bash
context_pkg_path = ".workflow/active/WFS-{session}/.process/context-package.json"

# 1. Read existing package
context_pkg = Read(context_pkg_path)

# 2. Update brainstorm_artifacts (role analyses now contain @ design references)
brainstorm_dir = ".workflow/active/WFS-{session}/.brainstorming"
role_analysis_files = Glob({brainstorm_dir}/*/analysis*.md)

context_pkg.brainstorm_artifacts.role_analyses = []
FOR file IN role_analysis_files:
    role_name = extract_role_from_path(file)
    relative_path = file.replace({brainstorm_dir}/, "")

    context_pkg.brainstorm_artifacts.role_analyses.push({
        "role": role_name,
        "files": [{
            "path": relative_path,
            "type": "primary",
            "content": Read(file),  # Contains @ design system references
            "updated_at": NOW()
        }]
    })

# 3. Add design_system_references field
context_pkg.design_system_references = {
    "design_run_id": design_id,
    "tokens": `${design_id}/${design_tokens_path}`,
    "style_guide": `${design_id}/${style_guide_path}`,
    "prototypes": selected_list.map(p => `${design_id}/prototypes/${p}.html`),
    "updated_at": NOW()
}

# 4. Optional: Add animations and layouts if they exist
IF exists({latest_design}/animation-extraction/animation-tokens.json):
    context_pkg.design_system_references.animations = `${design_id}/animation-extraction/animation-tokens.json`

IF exists({latest_design}/layout-extraction/layout-templates.json):
    context_pkg.design_system_references.layouts = `${design_id}/layout-extraction/layout-templates.json`

# 5. Update metadata
context_pkg.metadata.updated_at = NOW()
context_pkg.metadata.design_sync_timestamp = NOW()

# 6. Write back
Write(context_pkg_path, JSON.stringify(context_pkg, indent=2))

REPORT: "✅ Updated context-package.json with design system references"
```

**TodoWrite Update**:
```json
{"content": "Update context package with design references", "status": "completed", "activeForm": "Updating context package"}
```

### Phase 6: Completion

```javascript
TodoWrite({todos: [

713 .claude/scripts/generate_module_docs.sh (Normal file)
@@ -0,0 +1,713 @@
#!/bin/bash
# Generate documentation for modules and projects with multiple strategies
# Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]
#   strategy: full|single|project-readme|project-architecture|http-api
#   source_path: Path to the source module directory (or project root for project-level docs)
#   project_name: Project name for output path (e.g., "myproject")
#   tool: gemini|qwen|codex (default: gemini)
#   model: Model name (optional, uses tool defaults)
#
# Default Models:
#   gemini: gemini-2.5-flash
#   qwen: coder-model
#   codex: gpt5-codex
#
# Module-Level Strategies:
#   full: Full documentation generation
#     - Read: All files in current and subdirectories (@**/*)
#     - Generate: API.md + README.md for each directory containing code files
#     - Use: Deep directories (Layer 3), comprehensive documentation
#
#   single: Single-layer documentation
#     - Read: Current directory code + child API.md/README.md files
#     - Generate: API.md + README.md only in current directory
#     - Use: Upper layers (Layer 1-2), incremental updates
#
# Project-Level Strategies:
#   project-readme: Project overview documentation
#     - Read: All module API.md and README.md files
#     - Generate: README.md (project root)
#     - Use: After all module docs are generated
#
#   project-architecture: System design documentation
#     - Read: All module docs + project README
#     - Generate: ARCHITECTURE.md + EXAMPLES.md
#     - Use: After project README is generated
#
#   http-api: HTTP API documentation
#     - Read: API route files + existing docs
#     - Generate: api/README.md
#     - Use: For projects with HTTP APIs
#
# Output Structure:
#   Module docs: .workflow/docs/{project_name}/{source_path}/API.md
#   Module docs: .workflow/docs/{project_name}/{source_path}/README.md
#   Project docs: .workflow/docs/{project_name}/README.md
#   Project docs: .workflow/docs/{project_name}/ARCHITECTURE.md
#   Project docs: .workflow/docs/{project_name}/EXAMPLES.md
#   API docs: .workflow/docs/{project_name}/api/README.md
#
# Features:
#   - Path mirroring: source structure → docs structure
#   - Template-driven generation
#   - Respects .gitignore patterns
#   - Detects code vs navigation folders
#   - Tool fallback support

# Build exclusion filters from .gitignore
build_exclusion_filters() {
    local filters=""

    # Common system/cache directories to exclude
    local system_excludes=(
        ".git" "__pycache__" "node_modules" ".venv" "venv" "env"
        "dist" "build" ".cache" ".pytest_cache" ".mypy_cache"
        "coverage" ".nyc_output" "logs" "tmp" "temp" ".workflow"
    )

    for exclude in "${system_excludes[@]}"; do
        filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
    done

    # Find and parse .gitignore (current dir first, then git root)
    local gitignore_file=""

    # Check current directory first
    if [ -f ".gitignore" ]; then
        gitignore_file=".gitignore"
    else
        # Try to find git root and check for .gitignore there
        local git_root=$(git rev-parse --show-toplevel 2>/dev/null)
        if [ -n "$git_root" ] && [ -f "$git_root/.gitignore" ]; then
            gitignore_file="$git_root/.gitignore"
        fi
    fi

    # Parse .gitignore if found
    if [ -n "$gitignore_file" ]; then
        while IFS= read -r line; do
            # Skip empty lines and comments
            [[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue

            # Remove trailing slash and whitespace
            line=$(echo "$line" | sed 's|/$||' | xargs)

            # Skip wildcard patterns (too complex for simple find)
            [[ "$line" =~ \* ]] && continue

            # Add to filters
            filters+=" -not -path '*/$line' -not -path '*/$line/*'"
        done < "$gitignore_file"
    fi

    echo "$filters"
}

# Detect folder type (code vs navigation)
detect_folder_type() {
    local target_path="$1"
    local exclusion_filters="$2"

    # Count code files (primary indicators)
    local code_count=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' -o -name '*.go' -o -name '*.rs' \\) $exclusion_filters 2>/dev/null" | wc -l)

    if [ $code_count -gt 0 ]; then
        echo "code"
    else
        echo "navigation"
    fi
}

# Scan directory structure and generate structured information
scan_directory_structure() {
    local target_path="$1"
    local strategy="$2"

    if [ ! -d "$target_path" ]; then
        echo "Directory not found: $target_path"
        return 1
    fi

    local exclusion_filters=$(build_exclusion_filters)
    local structure_info=""

    # Get basic directory info
    local dir_name=$(basename "$target_path")
    local total_files=$(eval "find \"$target_path\" -type f $exclusion_filters 2>/dev/null" | wc -l)
    local total_dirs=$(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null" | wc -l)
    local folder_type=$(detect_folder_type "$target_path" "$exclusion_filters")

    structure_info+="Directory: $dir_name\n"
    structure_info+="Total files: $total_files\n"
    structure_info+="Total directories: $total_dirs\n"
    structure_info+="Folder type: $folder_type\n\n"

    if [ "$strategy" = "full" ]; then
        # For full: show all subdirectories with file counts
        structure_info+="Subdirectories with files:\n"
        while IFS= read -r dir; do
            if [ -n "$dir" ] && [ "$dir" != "$target_path" ]; then
                local rel_path=${dir#$target_path/}
                local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
                if [ $file_count -gt 0 ]; then
                    local subdir_type=$(detect_folder_type "$dir" "$exclusion_filters")
                    structure_info+="  - $rel_path/ ($file_count files, type: $subdir_type)\n"
                fi
            fi
        done < <(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null")
    else
        # For single: show direct children only
        structure_info+="Direct subdirectories:\n"
        while IFS= read -r dir; do
            if [ -n "$dir" ]; then
                local dir_name=$(basename "$dir")
                local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
                local has_api=$([ -f "$dir/API.md" ] && echo " [has API.md]" || echo "")
                local has_readme=$([ -f "$dir/README.md" ] && echo " [has README.md]" || echo "")
                structure_info+="  - $dir_name/ ($file_count files)$has_api$has_readme\n"
            fi
        done < <(eval "find \"$target_path\" -maxdepth 1 -type d $exclusion_filters 2>/dev/null" | grep -v "^$target_path$")
    fi

    # Show main file types in current directory
    structure_info+="\nCurrent directory files:\n"
    local code_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' -o -name '*.go' -o -name '*.rs' \\) $exclusion_filters 2>/dev/null" | wc -l)
    local config_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.json' -o -name '*.yaml' -o -name '*.yml' -o -name '*.toml' \\) $exclusion_filters 2>/dev/null" | wc -l)
    local doc_files=$(eval "find \"$target_path\" -maxdepth 1 -type f -name '*.md' $exclusion_filters 2>/dev/null" | wc -l)

    structure_info+="  - Code files: $code_files\n"
    structure_info+="  - Config files: $config_files\n"
    structure_info+="  - Documentation: $doc_files\n"

    printf "%b" "$structure_info"
}

# Calculate output path based on source path and project name
calculate_output_path() {
    local source_path="$1"
    local project_name="$2"
    local project_root="$3"

    # Get absolute path of source (normalize to Unix-style path)
    local abs_source=$(cd "$source_path" && pwd)

    # Normalize project root to same format
    local norm_project_root=$(cd "$project_root" && pwd)

    # Calculate relative path from project root
    local rel_path="${abs_source#$norm_project_root}"

    # Remove leading slash if present
    rel_path="${rel_path#/}"

    # If source is project root, use project name directly
    if [ "$abs_source" = "$norm_project_root" ] || [ -z "$rel_path" ]; then
        echo "$norm_project_root/.workflow/docs/$project_name"
    else
        echo "$norm_project_root/.workflow/docs/$project_name/$rel_path"
    fi
}

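As a standalone sketch of the path-mirroring rule implemented by `calculate_output_path` (the example paths below are made up):

```shell
# Mirror a source path under .workflow/docs/{project_name}/ (illustrative).
project_root="/repo"
source_path="/repo/src/auth"
project_name="myproject"
rel_path="${source_path#"$project_root"}"   # strip the project-root prefix
rel_path="${rel_path#/}"                    # drop any leading slash
output_path="$project_root/.workflow/docs/$project_name/$rel_path"
echo "$output_path"   # → /repo/.workflow/docs/myproject/src/auth
```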
generate_module_docs() {
    local strategy="$1"
    local source_path="$2"
    local project_name="$3"
    local tool="${4:-gemini}"
    local model="$5"

    # Validate parameters
    if [ -z "$strategy" ] || [ -z "$source_path" ] || [ -z "$project_name" ]; then
        echo "❌ Error: Strategy, source path, and project name are required"
        echo "Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]"
        echo "Module strategies: full, single"
        echo "Project strategies: project-readme, project-architecture, http-api"
        return 1
    fi

    # Validate strategy
    local valid_strategies=("full" "single" "project-readme" "project-architecture" "http-api")
    local strategy_valid=false
    for valid_strategy in "${valid_strategies[@]}"; do
        if [ "$strategy" = "$valid_strategy" ]; then
            strategy_valid=true
            break
        fi
    done

    if [ "$strategy_valid" = false ]; then
        echo "❌ Error: Invalid strategy '$strategy'"
        echo "Valid module strategies: full, single"
        echo "Valid project strategies: project-readme, project-architecture, http-api"
        return 1
    fi

    if [ ! -d "$source_path" ]; then
        echo "❌ Error: Source directory '$source_path' does not exist"
        return 1
    fi

    # Set default models if not specified
    if [ -z "$model" ]; then
        case "$tool" in
            gemini)
                model="gemini-2.5-flash"
                ;;
            qwen)
                model="coder-model"
                ;;
            codex)
                model="gpt5-codex"
                ;;
            *)
                model=""
                ;;
        esac
    fi

    # Build exclusion filters
    local exclusion_filters=$(build_exclusion_filters)

    # Get project root
    local project_root=$(git rev-parse --show-toplevel 2>/dev/null || pwd)

    # Determine if this is a project-level strategy
    local is_project_level=false
    if [[ "$strategy" =~ ^project- ]] || [ "$strategy" = "http-api" ]; then
        is_project_level=true
    fi

    # Calculate output path
    local output_path
    if [ "$is_project_level" = true ]; then
        # Project-level docs go to project root
        if [ "$strategy" = "http-api" ]; then
            output_path="$project_root/.workflow/docs/$project_name/api"
        else
            output_path="$project_root/.workflow/docs/$project_name"
        fi
    else
        output_path=$(calculate_output_path "$source_path" "$project_name" "$project_root")
    fi

    # Create output directory
    mkdir -p "$output_path"

    # Detect folder type (only for module-level strategies)
    local folder_type=""
    if [ "$is_project_level" = false ]; then
        folder_type=$(detect_folder_type "$source_path" "$exclusion_filters")
    fi

    # Load templates based on strategy
    local api_template=""
    local readme_template=""
    local template_content=""

    if [ "$is_project_level" = true ]; then
        # Project-level templates
        case "$strategy" in
            project-readme)
                local proj_readme_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-readme.txt"
                if [ -f "$proj_readme_path" ]; then
                    template_content=$(cat "$proj_readme_path")
                    echo " 📋 Loaded Project README template: $(wc -l < "$proj_readme_path") lines"
                fi
                ;;
            project-architecture)
                local arch_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-architecture.txt"
                local examples_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-examples.txt"
                if [ -f "$arch_path" ]; then
                    template_content=$(cat "$arch_path")
                    echo " 📋 Loaded Architecture template: $(wc -l < "$arch_path") lines"
                fi
                if [ -f "$examples_path" ]; then
                    template_content="$template_content

EXAMPLES TEMPLATE:
$(cat "$examples_path")"
                    echo " 📋 Loaded Examples template: $(wc -l < "$examples_path") lines"
                fi
                ;;
            http-api)
                local api_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/api.txt"
                if [ -f "$api_path" ]; then
                    template_content=$(cat "$api_path")
                    echo " 📋 Loaded HTTP API template: $(wc -l < "$api_path") lines"
                fi
                ;;
        esac
    else
        # Module-level templates
        local api_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/api.txt"
        local readme_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/module-readme.txt"
        local nav_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/folder-navigation.txt"

        if [ "$folder_type" = "code" ]; then
            if [ -f "$api_template_path" ]; then
                api_template=$(cat "$api_template_path")
                echo " 📋 Loaded API template: $(wc -l < "$api_template_path") lines"
            fi
            if [ -f "$readme_template_path" ]; then
                readme_template=$(cat "$readme_template_path")
                echo " 📋 Loaded README template: $(wc -l < "$readme_template_path") lines"
            fi
        else
            # Navigation folder uses navigation template
            if [ -f "$nav_template_path" ]; then
                readme_template=$(cat "$nav_template_path")
                echo " 📋 Loaded Navigation template: $(wc -l < "$nav_template_path") lines"
            fi
        fi
    fi

    # Scan directory structure (only for module-level strategies)
    local structure_info=""
    if [ "$is_project_level" = false ]; then
        echo " 🔍 Scanning directory structure..."
        structure_info=$(scan_directory_structure "$source_path" "$strategy")
    fi

    # Prepare logging info
    local module_name=$(basename "$source_path")

    echo "⚡ Generating docs: $source_path → $output_path"
    echo " Strategy: $strategy | Tool: $tool | Model: $model | Type: $folder_type"
    echo " Output: $output_path"

    # Build strategy-specific prompt
    local final_prompt=""

    # Project-level strategies
    if [ "$strategy" = "project-readme" ]; then
        final_prompt="PURPOSE: Generate comprehensive project overview documentation

PROJECT: $project_name
OUTPUT: Current directory (file will be moved to final location)

Read: @.workflow/docs/$project_name/**/*.md

Context: All module documentation files from the project

Generate ONE documentation file in current directory:
- README.md - Project root documentation

Template:
$template_content

Instructions:
- Create README.md in CURRENT DIRECTORY
- Synthesize information from all module docs
- Include project overview, getting started, and navigation
- Create clear module navigation with links
- Follow template structure exactly"

    elif [ "$strategy" = "project-architecture" ]; then
        final_prompt="PURPOSE: Generate system design and usage examples documentation

PROJECT: $project_name
OUTPUT: Current directory (files will be moved to final location)

Read: @.workflow/docs/$project_name/**/*.md

Context: All project documentation including module docs and project README

Generate TWO documentation files in current directory:
1. ARCHITECTURE.md - System architecture and design patterns
2. EXAMPLES.md - End-to-end usage examples

Template:
$template_content

Instructions:
- Create both ARCHITECTURE.md and EXAMPLES.md in CURRENT DIRECTORY
- Synthesize architectural patterns from module documentation
- Document system structure, module relationships, and design decisions
- Provide practical code examples and usage scenarios
- Follow template structure for both files"

    elif [ "$strategy" = "http-api" ]; then
        final_prompt="PURPOSE: Generate HTTP API reference documentation

PROJECT: $project_name
OUTPUT: Current directory (file will be moved to final location)

Read: @**/*.{ts,js,py,go,rs} @.workflow/docs/$project_name/**/*.md

Context: API route files and existing documentation

Generate ONE documentation file in current directory:
- README.md - HTTP API documentation (in api/ subdirectory)

Template:
$template_content

Instructions:
- Create README.md in CURRENT DIRECTORY
- Document all HTTP endpoints (routes, methods, parameters, responses)
- Include authentication requirements and error codes
- Provide request/response examples
- Follow template structure (Part B: HTTP API documentation)"

    # Module-level strategies
    elif [ "$strategy" = "full" ]; then
        # Full strategy: read all files, generate for each directory
        if [ "$folder_type" = "code" ]; then
            final_prompt="PURPOSE: Generate comprehensive API and module documentation

Directory Structure Analysis:
$structure_info

SOURCE: $source_path
OUTPUT: Current directory (files will be moved to final location)

Read: @**/*

Generate TWO documentation files in current directory:
1. API.md - Code API documentation (functions, classes, interfaces)
   Template:
$api_template

2. README.md - Module overview documentation
   Template:
$readme_template

Instructions:
- Generate both API.md and README.md in CURRENT DIRECTORY
- If subdirectories contain code files, generate their docs too (recursive)
- Work bottom-up: deepest directories first
- Follow template structure exactly
- Use structure analysis for context"
        else
            # Navigation folder - README only
            final_prompt="PURPOSE: Generate navigation documentation for folder structure

Directory Structure Analysis:
$structure_info

SOURCE: $source_path
OUTPUT: Current directory (file will be moved to final location)

Read: @**/*

Generate ONE documentation file in current directory:
- README.md - Navigation and folder overview

Template:
$readme_template

Instructions:
- Create README.md in CURRENT DIRECTORY
- Focus on folder structure and navigation
- Link to subdirectory documentation
- Use structure analysis for context"
        fi
    else
        # Single strategy: read current + child docs only
        if [ "$folder_type" = "code" ]; then
            final_prompt="PURPOSE: Generate API and module documentation for current directory

Directory Structure Analysis:
$structure_info

SOURCE: $source_path
OUTPUT: Current directory (files will be moved to final location)

Read: @*/API.md @*/README.md @*.ts @*.tsx @*.js @*.jsx @*.py @*.sh @*.go @*.rs @*.md @*.json @*.yaml @*.yml

Generate TWO documentation files in current directory:
1. API.md - Code API documentation
   Template:
$api_template

2. README.md - Module overview
   Template:
$readme_template

Instructions:
- Generate both API.md and README.md in CURRENT DIRECTORY
- Reference child documentation, do not duplicate
- Follow template structure
- Use structure analysis for current directory context"
        else
            # Navigation folder - README only
            final_prompt="PURPOSE: Generate navigation documentation

Directory Structure Analysis:
$structure_info

SOURCE: $source_path
OUTPUT: Current directory (file will be moved to final location)

Read: @*/API.md @*/README.md @*.md

Generate ONE documentation file in current directory:
- README.md - Navigation and overview

Template:
$readme_template

Instructions:
- Create README.md in CURRENT DIRECTORY
- Link to child documentation
- Use structure analysis for navigation context"
        fi
    fi

    # Execute documentation generation
    local start_time=$(date +%s)
    echo " 🔄 Starting documentation generation..."

    if cd "$source_path" 2>/dev/null; then
        local tool_result=0

        # Store current output path for CLI context
        export DOC_OUTPUT_PATH="$output_path"

        # Record git HEAD before CLI execution (to detect unwanted auto-commits)
        local git_head_before=""
        if git rev-parse --git-dir >/dev/null 2>&1; then
            git_head_before=$(git rev-parse HEAD 2>/dev/null)
        fi

        # Execute with selected tool
        case "$tool" in
            qwen)
                if [ "$model" = "coder-model" ]; then
                    qwen -p "$final_prompt" --yolo 2>&1
                else
                    qwen -p "$final_prompt" -m "$model" --yolo 2>&1
                fi
                tool_result=$?
                ;;
            codex)
                codex --full-auto exec "$final_prompt" -m "$model" --skip-git-repo-check -s danger-full-access 2>&1
                tool_result=$?
                ;;
            gemini)
                gemini -p "$final_prompt" -m "$model" --yolo 2>&1
                tool_result=$?
                ;;
            *)
                echo " ⚠️ Unknown tool: $tool, defaulting to gemini"
                gemini -p "$final_prompt" -m "$model" --yolo 2>&1
                tool_result=$?
                ;;
        esac

        # Move generated files to output directory
        local docs_created=0
        local moved_files=""

        if [ $tool_result -eq 0 ]; then
            if [ "$is_project_level" = true ]; then
                # Project-level documentation files
                case "$strategy" in
                    project-readme)
                        if [ -f "README.md" ]; then
                            mv "README.md" "$output_path/README.md" 2>/dev/null && {
                                docs_created=$((docs_created + 1))
                                moved_files+="README.md "
                            }
                        fi
                        ;;
                    project-architecture)
                        if [ -f "ARCHITECTURE.md" ]; then
                            mv "ARCHITECTURE.md" "$output_path/ARCHITECTURE.md" 2>/dev/null && {
                                docs_created=$((docs_created + 1))
                                moved_files+="ARCHITECTURE.md "
                            }
                        fi
                        if [ -f "EXAMPLES.md" ]; then
                            mv "EXAMPLES.md" "$output_path/EXAMPLES.md" 2>/dev/null && {
                                docs_created=$((docs_created + 1))
                                moved_files+="EXAMPLES.md "
                            }
                        fi
                        ;;
                    http-api)
                        if [ -f "README.md" ]; then
                            mv "README.md" "$output_path/README.md" 2>/dev/null && {
                                docs_created=$((docs_created + 1))
                                moved_files+="api/README.md "
                            }
                        fi
                        ;;
                esac
            else
                # Module-level documentation files
                # Check and move API.md if it exists
                if [ "$folder_type" = "code" ] && [ -f "API.md" ]; then
                    mv "API.md" "$output_path/API.md" 2>/dev/null && {
                        docs_created=$((docs_created + 1))
                        moved_files+="API.md "
                    }
                fi

                # Check and move README.md if it exists
                if [ -f "README.md" ]; then
                    mv "README.md" "$output_path/README.md" 2>/dev/null && {
                        docs_created=$((docs_created + 1))
                        moved_files+="README.md "
                    }
                fi
            fi
        fi

        # Check if CLI tool auto-committed (and revert if needed)
        if [ -n "$git_head_before" ]; then
            local git_head_after=$(git rev-parse HEAD 2>/dev/null)
            if [ "$git_head_before" != "$git_head_after" ]; then
                echo " ⚠️ Detected unwanted auto-commit by CLI tool, reverting..."
                git reset --soft "$git_head_before" 2>/dev/null
                echo " ✅ Auto-commit reverted (files remain staged)"
            fi
        fi

        if [ $docs_created -gt 0 ]; then
            local end_time=$(date +%s)
            local duration=$((end_time - start_time))
            echo " ✅ Generated $docs_created doc(s) in ${duration}s: $moved_files"
            cd - > /dev/null
            return 0
        else
            echo " ❌ Documentation generation failed for $source_path"
            cd - > /dev/null
            return 1
        fi
    else
        echo " ❌ Cannot access directory: $source_path"
        return 1
    fi
}

# Execute function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    # Show help if no arguments or help requested
    if [ $# -eq 0 ] || [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
        echo "Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]"
        echo ""
        echo "Module-Level Strategies:"
        echo " full - Generate docs for all subdirectories with code"
        echo " single - Generate docs only for current directory"
        echo ""
        echo "Project-Level Strategies:"
        echo " project-readme - Generate project root README.md"
        echo " project-architecture - Generate ARCHITECTURE.md + EXAMPLES.md"
        echo " http-api - Generate HTTP API documentation (api/README.md)"
        echo ""
        echo "Tools: gemini (default), qwen, codex"
        echo "Models: Use tool defaults if not specified"
        echo ""
        echo "Module Examples:"
        echo " ./generate_module_docs.sh full ./src/auth myproject"
        echo " ./generate_module_docs.sh single ./components myproject gemini"
        echo ""
        echo "Project Examples:"
        echo " ./generate_module_docs.sh project-readme . myproject"
        echo " ./generate_module_docs.sh project-architecture . myproject qwen"
        echo " ./generate_module_docs.sh http-api . myproject"
        exit 0
    fi

    generate_module_docs "$@"
fi
567
ARCHITECTURE.md
@@ -1,567 +0,0 @@
# 🏗️ Claude Code Workflow (CCW) - Architecture Overview

This document provides a high-level overview of CCW's architecture, design principles, and system components.

---

## 📋 Table of Contents

- [Design Philosophy](#design-philosophy)
- [System Architecture](#system-architecture)
- [Core Components](#core-components)
- [Data Flow](#data-flow)
- [Multi-Agent System](#multi-agent-system)
- [CLI Tool Integration](#cli-tool-integration)
- [Session Management](#session-management)
- [Memory System](#memory-system)

---

## 🎯 Design Philosophy

CCW is built on several core design principles that differentiate it from traditional AI-assisted development tools:

### 1. **Context-First Architecture**
- Pre-defined context gathering eliminates execution uncertainty
- Agents receive the correct information *before* implementation
- Context is loaded dynamically based on task requirements

### 2. **JSON-First State Management**
- Task states live in `.task/IMPL-*.json` files as the single source of truth
- Markdown documents are read-only generated views
- Eliminates state drift and synchronization complexity
- Enables programmatic orchestration

### 3. **Autonomous Multi-Phase Orchestration**
- Commands chain specialized sub-commands and agents
- Automates complex workflows with zero user intervention
- Each phase validates its output before proceeding

### 4. **Multi-Model Strategy**
- Leverages unique strengths of different AI models
- Gemini for analysis and exploration
- Codex for implementation
- Qwen for architecture and planning

### 5. **Hierarchical Memory System**
- 4-layer documentation system (CLAUDE.md files)
- Provides context at the appropriate level of abstraction
- Prevents information overload

### 6. **Specialized Role-Based Agents**
- Suite of agents mirrors a real software team
- Each agent has specific responsibilities
- Agents collaborate to complete complex tasks

---

## 🏛️ System Architecture

```mermaid
graph TB
    subgraph "User Interface Layer"
        CLI[Slash Commands]
        CHAT[Natural Language]
    end

    subgraph "Orchestration Layer"
        WF[Workflow Engine]
        SM[Session Manager]
        TM[Task Manager]
    end

    subgraph "Agent Layer"
        AG1[@code-developer]
        AG2[@test-fix-agent]
        AG3[@ui-design-agent]
        AG4[@cli-execution-agent]
        AG5[More Agents...]
    end

    subgraph "Tool Layer"
        GEMINI[Gemini CLI]
        QWEN[Qwen CLI]
        CODEX[Codex CLI]
        BASH[Bash/System]
    end

    subgraph "Data Layer"
        JSON[Task JSON Files]
        MEM[CLAUDE.md Memory]
        STATE[Session State]
    end

    CLI --> WF
    CHAT --> WF
    WF --> SM
    WF --> TM
    SM --> STATE
    TM --> JSON
    WF --> AG1
    WF --> AG2
    WF --> AG3
    WF --> AG4
    AG1 --> GEMINI
    AG1 --> QWEN
    AG1 --> CODEX
    AG2 --> BASH
    AG3 --> GEMINI
    AG4 --> CODEX
    GEMINI --> MEM
    QWEN --> MEM
    CODEX --> JSON
```

---
## 🔧 Core Components

### 1. **Workflow Engine**

The workflow engine orchestrates complex development processes through multiple phases:

- **Planning Phase**: Analyzes requirements and generates implementation plans
- **Execution Phase**: Coordinates agents to implement tasks
- **Verification Phase**: Validates implementation quality
- **Testing Phase**: Generates and executes tests
- **Review Phase**: Performs code review and quality analysis

**Key Features**:
- Multi-phase orchestration
- Automatic session management
- Context propagation between phases
- Quality gates at each phase transition

### 2. **Session Manager**

Manages isolated workflow contexts:

```
.workflow/
├── active/                # Active sessions
│   ├── WFS-user-auth/     # User authentication session
│   ├── WFS-payment/       # Payment integration session
│   └── WFS-dashboard/     # Dashboard redesign session
└── archives/              # Completed sessions
    └── WFS-old-feature/   # Archived session
```

**Capabilities**:
- Directory-based session tracking
- Session state persistence
- Parallel session support
- Session archival and resumption

### 3. **Task Manager**

Handles hierarchical task structures:

```json
{
  "id": "IMPL-1.2",
  "title": "Implement JWT authentication",
  "status": "pending",
  "meta": {
    "type": "feature",
    "agent": "code-developer"
  },
  "context": {
    "requirements": ["JWT authentication", "OAuth2 support"],
    "focus_paths": ["src/auth", "tests/auth"],
    "acceptance": ["JWT validation works", "OAuth flow complete"]
  },
  "flow_control": {
    "pre_analysis": [...],
    "implementation_approach": {...}
  }
}
```

**Features**:
- JSON-first data model
- Hierarchical task decomposition (max 2 levels)
- Dynamic subtask creation
- Dependency tracking

### 4. **Memory System**

Four-layer hierarchical documentation:

```
CLAUDE.md (Project root - high-level overview)
├── src/CLAUDE.md (Source layer - module summaries)
│   ├── auth/CLAUDE.md (Module layer - component details)
│   │   └── jwt/CLAUDE.md (Component layer - implementation details)
```

**Memory Commands**:
- `/memory:update-full` - Complete project rebuild
- `/memory:update-related` - Incremental updates for changed modules
- `/memory:load` - Quick context loading for specific tasks

---

## 🔄 Data Flow

### Typical Workflow Execution Flow

```mermaid
sequenceDiagram
    participant User
    participant CLI
    participant Workflow
    participant Agent
    participant Tool
    participant Data

    User->>CLI: /workflow:plan "Feature description"
    CLI->>Workflow: Initialize planning workflow
    Workflow->>Data: Create session
    Workflow->>Agent: @action-planning-agent
    Agent->>Tool: gemini-wrapper analyze
    Tool->>Data: Update CLAUDE.md
    Agent->>Data: Generate IMPL-*.json
    Workflow->>User: Plan complete

    User->>CLI: /workflow:execute
    CLI->>Workflow: Start execution
    Workflow->>Data: Load tasks from JSON
    Workflow->>Agent: @code-developer
    Agent->>Tool: Read context
    Agent->>Tool: Implement code
    Agent->>Data: Update task status
    Workflow->>User: Execution complete
```

### Context Flow

```mermaid
graph LR
    A[User Request] --> B[Context Gathering]
    B --> C[CLAUDE.md Memory]
    B --> D[Task JSON]
    B --> E[Session State]
    C --> F[Agent Context]
    D --> F
    E --> F
    F --> G[Tool Execution]
    G --> H[Implementation]
    H --> I[Update State]
```

---

## 🤖 Multi-Agent System

### Agent Specialization

CCW uses specialized agents for different types of tasks:

| Agent | Responsibility | Tools Used |
|-------|---------------|------------|
| **@code-developer** | Code implementation | Gemini, Qwen, Codex, Bash |
| **@test-fix-agent** | Test generation and fixing | Codex, Bash |
| **@ui-design-agent** | UI design and prototyping | Gemini, Claude Vision |
| **@action-planning-agent** | Task planning and decomposition | Gemini |
| **@cli-execution-agent** | Autonomous CLI task handling | Codex, Gemini, Qwen |
| **@cli-explore-agent** | Codebase exploration | ripgrep, find |
| **@context-search-agent** | Context gathering | Grep, Glob |
| **@doc-generator** | Documentation generation | Gemini, Qwen |
| **@memory-bridge** | Memory system updates | Gemini, Qwen |
| **@universal-executor** | General task execution | All tools |

### Agent Communication

Agents communicate through:
1. **Shared Session State**: All agents can read/write session JSON
2. **Task JSON Files**: Tasks contain context for agent handoffs
3. **CLAUDE.md Memory**: Shared project knowledge base
4. **Flow Control**: Pre-analysis and implementation approach definitions

---
## 🛠️ CLI Tool Integration

### Three CLI Tools

CCW integrates three external AI tools, each optimized for specific tasks:

#### 1. **Gemini CLI** - Deep Analysis
- **Strengths**: Pattern recognition, architecture understanding, comprehensive analysis
- **Use Cases**:
  - Codebase exploration
  - Architecture analysis
  - Bug diagnosis
  - Memory system updates

#### 2. **Qwen CLI** - Architecture & Planning
- **Strengths**: System design, code generation, architectural planning
- **Use Cases**:
  - Architecture design
  - System planning
  - Code generation
  - Refactoring strategies

#### 3. **Codex CLI** - Autonomous Development
- **Strengths**: Self-directed implementation, error fixing, test generation
- **Use Cases**:
  - Feature implementation
  - Bug fixes
  - Test generation
  - Autonomous development

### Tool Selection Strategy

CCW automatically selects the best tool based on task type:

```
Analysis Task       → Gemini CLI
Planning Task       → Qwen CLI
Implementation Task → Codex CLI
```

Users can override with the `--tool` parameter:
```bash
/cli:analyze --tool codex "Analyze authentication flow"
```

---

## 📦 Session Management

### Session Lifecycle

```mermaid
stateDiagram-v2
    [*] --> Creating: /workflow:session:start
    Creating --> Active: Session initialized
    Active --> Paused: User pauses
    Paused --> Active: /workflow:session:resume
    Active --> Completed: /workflow:session:complete
    Completed --> Archived: Move to archives/
    Archived --> [*]
```

### Session Structure

```
.workflow/active/WFS-feature-name/
├── workflow-session.json     # Session metadata
├── .task/                    # Task JSON files
│   ├── IMPL-1.json
│   ├── IMPL-1.1.json
│   └── IMPL-2.json
├── .chat/                    # Chat logs
├── brainstorming/            # Brainstorm artifacts
│   ├── guidance-specification.md
│   └── system-architect/analysis.md
└── artifacts/                # Generated files
    ├── IMPL_PLAN.md
    └── verification-report.md
```

---

## 💾 Memory System

### Hierarchical CLAUDE.md Structure

The memory system maintains project knowledge across four layers:

#### **Layer 1: Project Root**
```markdown
# Project Overview
- High-level architecture
- Technology stack
- Key design decisions
- Entry points
```

#### **Layer 2: Source Directory**
```markdown
# Source Code Structure
- Module summaries
- Dependency relationships
- Common patterns
```

#### **Layer 3: Module Directory**
```markdown
# Module Details
- Component responsibilities
- API interfaces
- Internal structure
```

#### **Layer 4: Component Directory**
```markdown
# Component Implementation
- Function signatures
- Implementation details
- Usage examples
```

### Memory Update Strategies

#### Full Update (`/memory:update-full`)
- Rebuilds entire project documentation
- Uses layer-based execution (Layer 3 → 1)
- Batch processing (4 modules/agent)
- Fallback mechanism (gemini → qwen → codex)

#### Incremental Update (`/memory:update-related`)
- Updates only changed modules
- Analyzes git changes
- Efficient for daily development
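The change-analysis step can be sketched roughly as follows. The helper name and the exact git invocation are illustrative assumptions, not the command's actual implementation:

```shell
# Hypothetical helper: map a list of changed files (one per line) to the
# set of module directories whose CLAUDE.md would need an incremental update.
changed_modules() {
  xargs -r -n1 dirname |   # reduce each changed file to its directory
    sort -u                # one entry per module directory
}

# In the real workflow the input would come from git, e.g.:
#   git diff --name-only HEAD~1 | changed_modules
```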
#### Quick Load (`/memory:load`)
- No file updates
- Task-specific context gathering
- Returns JSON context package
- Fast context injection

---

## 🔐 Quality Assurance

### Quality Gates

CCW enforces quality at multiple levels:

1. **Planning Phase**:
   - Requirements coverage check
   - Dependency validation
   - Task specification quality assessment

2. **Execution Phase**:
   - Context validation before implementation
   - Pattern consistency checks
   - Test generation

3. **Review Phase**:
   - Code quality analysis
   - Security review
   - Architecture review
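At its simplest, a phase-transition gate checks that the previous phase produced its required artifacts. A minimal sketch, assuming the session layout described in this document (the function name and messages are illustrative):

```shell
# Hypothetical gate: execution may start only if planning produced
# an implementation plan and at least one task JSON file.
gate_planning_complete() {
  local session_dir="$1"
  if [ ! -f "$session_dir/artifacts/IMPL_PLAN.md" ]; then
    echo "gate failed: missing IMPL_PLAN.md"
    return 1
  fi
  if ! ls "$session_dir"/.task/IMPL-*.json >/dev/null 2>&1; then
    echo "gate failed: no task JSON files"
    return 1
  fi
  echo "gate passed"
}
```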
### Verification Commands

- `/workflow:action-plan-verify` - Validates plan quality before execution
- `/workflow:tdd-verify` - Verifies TDD cycle compliance
- `/workflow:review` - Post-implementation review

---

## 🚀 Performance Optimizations

### 1. **Lazy Loading**
- Files created only when needed
- On-demand document generation
- Minimal upfront cost

### 2. **Parallel Execution**
- Independent tasks run concurrently
- Multi-agent parallel brainstorming
- Batch processing for memory updates

### 3. **Context Caching**
- CLAUDE.md acts as a knowledge cache
- Reduces redundant analysis
- Faster context retrieval

### 4. **Atomic Session Management**
- Ultra-fast session switching (<10ms)
- Simple file marker system
- No database overhead
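The file-marker mechanism can be sketched roughly like this. The function and marker names are illustrative assumptions; the real implementation may differ:

```shell
# Hypothetical sketch: the active session is tracked by a single marker file,
# so switching sessions is two filesystem operations -- no database involved.
WORKFLOW_DIR=".workflow"

switch_session() {
  mkdir -p "$WORKFLOW_DIR"
  rm -f "$WORKFLOW_DIR"/.active-*    # clear any previous marker
  touch "$WORKFLOW_DIR/.active-$1"   # mark the new active session
}

current_session() {
  # Print the session name encoded in the marker filename, if any.
  for marker in "$WORKFLOW_DIR"/.active-*; do
    [ -e "$marker" ] && basename "$marker" | sed 's/^\.active-//'
  done
}
```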

---

## 📊 Scalability

### Horizontal Scalability

- **Multiple Sessions**: Run parallel workflows for different features
- **Team Collaboration**: Session-based isolation prevents conflicts
- **Incremental Updates**: Only update affected modules

### Vertical Scalability

- **Hierarchical Tasks**: Efficient task decomposition (max 2 levels)
- **Selective Context**: Load only relevant context for each task
- **Batch Processing**: Process multiple modules per agent invocation

---

## 🔮 Extensibility

### Adding New Agents

Create an agent definition in `.claude/agents/`:

```markdown
# Agent Name

## Role
Agent description

## Tools Available
- Tool 1
- Tool 2

## Prompt
Agent instructions...
```

### Adding New Commands

Create a command in `.claude/commands/`:

```bash
#!/usr/bin/env bash
# Command implementation
```

### Custom Workflows

Combine existing commands to create custom workflows:

```bash
/workflow:brainstorm:auto-parallel "Topic"
/workflow:plan
/workflow:action-plan-verify
/workflow:execute
/workflow:review
```

---

## 🎓 Best Practices

### For Users

1. **Keep Memory Updated**: Run `/memory:update-related` after major changes
2. **Use Quality Gates**: Run `/workflow:action-plan-verify` before execution
3. **Session Management**: Complete sessions with `/workflow:session:complete`
4. **Tool Selection**: Let CCW auto-select tools unless you have specific needs

### For Developers

1. **Follow JSON-First**: Never modify markdown documents directly
2. **Agent Context**: Provide complete context in the task JSON
3. **Error Handling**: Implement graceful fallbacks
4. **Testing**: Test agents independently before integration

---

## 📚 Further Reading

- [Getting Started Guide](GETTING_STARTED.md) - Quick start tutorial
- [Command Reference](COMMAND_REFERENCE.md) - All available commands
- [Command Specification](COMMAND_SPEC.md) - Detailed command specs
- [Workflow Diagrams](WORKFLOW_DIAGRAMS.md) - Visual workflow representations
- [Contributing Guide](CONTRIBUTING.md) - How to contribute
- [Examples](EXAMPLES.md) - Real-world use cases

---

**Last Updated**: 2025-11-20
**Version**: 5.8.1
@@ -29,6 +29,7 @@ For all CLI tool usage, command syntax, and integration guidelines:
- **Clear intent over clever code** - Be boring and obvious
- **Follow existing code style** - Match import patterns, naming conventions, and formatting of the existing codebase
- **No unsolicited reports** - Task summaries can be performed internally, but NEVER generate additional reports, documentation files, or summary files without explicit user permission
- **Minimal documentation output** - Avoid unnecessary documentation. If required, save to .workflow/.scratchpad/

### Simplicity Means

@@ -1,278 +0,0 @@
# Command Documentation Audit Report

**Audit Date**: 2025-11-20
**Scope**: 73 command documentation files
**Method**: Automated scanning + manual content analysis

---

## Findings

### 1. Files Containing Version Information

#### [CRITICAL] version.md
**Path**: `/home/user/Claude-Code-Workflow/.claude/commands/version.md`

**Problem locations**:
- Lines 1-3: included in the YAML header
- Lines 96-102: examples contain full version numbers and release dates (e.g. "v3.2.2", "2025-10-03")
- Lines 127-130: development version numbers and dates
- Lines 155-172: version comparison and upgrade recommendations

**Content excerpt**:
```
Latest Stable: v3.2.2
Release: v3.2.2: Independent Test-Gen Workflow with Cross-Session Context
Published: 2025-10-03T04:10:08Z

Latest Dev: a03415b
Message: feat: Add version tracking and upgrade check system
Date: 2025-10-03T04:46:44Z
```

**Severity**: ⚠️ High - the file is by nature a version-management command, but it embeds concrete version numbers, release dates, and a full version history

---

### 2. Files Containing Extraneous Content

#### [HIGH] tdd-plan.md
**Path**: `/home/user/Claude-Code-Workflow/.claude/commands/workflow/tdd-plan.md`

**Problem location**: lines 420-523

**Excerpt**:
```markdown
## TDD Workflow Enhancements

### Overview
The TDD workflow has been significantly enhanced by integrating best practices
from both traditional `plan --agent` and `test-gen` workflows...

### Key Improvements

#### 1. Test Coverage Analysis (Phase 3)
**Adopted from test-gen workflow**

#### 2. Iterative Green Phase with Test-Fix Cycle
**Adopted from test-gen workflow**

#### 3. Agent-Driven Planning
**From plan --agent workflow**

### Workflow Comparison
| Aspect | Previous | Current (Optimized) |
| **Task Count** | 5 features = 15 tasks | 5 features = 5 tasks (70% reduction) |
| **Task Management** | High overhead (15 tasks) | Low overhead (5 tasks) |

### Migration Notes
**Backward Compatibility**: Fully compatible
- Existing TDD workflows continue to work
- New features are additive, not breaking
```

**Analysis**:
- Contains version-history material framed as "enhancements", "improvements", and "evolution"
- Contains a "Workflow Comparison" section contrasting the "previous" and "current" versions
- Contains "Migration Notes" describing the upgrade path from older versions
- Roughly 100 lines (420-523) describe how the command was improved rather than how to use it

**Severity**: ⚠️ Medium-High - about 18% of the file (100/543 lines) is version-evolution material rather than core functional documentation

---

### 3. Files Lacking Task Focus

#### [MEDIUM] tdd-plan.md (continued)
**Problem**: the file spends too much space explaining integration with other commands (plan, test-gen)

**Relevant sections**:
- Lines 475-488: comparison with the "plan --agent" workflow
- Lines 427-441: features "adopted" from the test-gen workflow
- Lines 466-473: features "adopted" from the plan --agent workflow

**Analysis**: While these integration notes may be useful, over-emphasizing relationships with other commands dilutes the document's focus. Such content belongs in project-level or architecture documentation, not in an individual command document.

**Severity**: ⚠️ Medium - reduces documentation focus, but not a serious problem

---

## Compliance Statistics

### Audit Summary

| Category | Count | Percentage |
|------|------|--------|
| **Fully compliant files** | 70 | 95.9% |
| **Files with version information** | 1 | 1.4% |
| **Files with extraneous content** | 1 | 1.4% |
| **Files lacking task focus** | 1* | 1.4% |
| **Total** | 73 | 100% |

*Note: tdd-plan.md appears in both the "extraneous content" and "lacking focus" categories

### Severity Distribution

| Severity | Files | Notes |
|---------|--------|------|
| CRITICAL | 0 | No issues requiring immediate blocking |
| HIGH | 1 | version.md - contains full version numbers and release information |
| MEDIUM | 1 | tdd-plan.md - contains excessive version-evolution notes and workflow comparisons |
| LOW | 0 | No other issues |

---

## Detailed Findings

### version.md - Full Analysis

**Nature of the problem**: The entire purpose of the version.md command is to manage and report version information. Including version numbers, release dates, and changelogs is not only reasonable but required.

**From the audit's perspective**, however, per the stated audit criteria:
- ✓ "Contains version numbers, version history, changelog, etc." - **yes, explicitly**
- Example version numbers: v3.2.1, v3.2.2, 3.4.0-dev
- Release dates: 2025-10-03T12:00:00Z
- Version-history information and upgrade paths

**Conclusion**: the file matches the "version information" category of the audit criteria and should be flagged (even though this is a functional requirement)

---

### tdd-plan.md - Full Analysis

**First issue - extra version-evolution information**:
```
## TDD Workflow Enhancements (line 420)
### Overview
The TDD workflow has been **significantly enhanced** by integrating best practices
from **both traditional `plan --agent` and `test-gen` workflows**

### Key Improvements
#### 1. Test Coverage Analysis (Phase 3)
**Adopted from test-gen workflow** (line 428)

#### 2. Iterative Green Phase with Test-Fix Cycle
**Adopted from test-gen workflow** (line 443)

#### 3. Agent-Driven Planning
**From plan --agent workflow** (line 467)
```

This section is entirely about the command's historical evolution, not about how to use it.

**Second issue - workflow comparison table**:
```
### Workflow Comparison (line 475)
| Aspect | Previous | Current (Optimized) |
| **Phases** | 6 | 7 |
| **Task Count** | 5 features = 15 tasks | 5 features = 5 tasks (70% reduction) |
```

A direct comparison of the "previous" and "current" implementations - version-history material.

**Third issue - migration notes**:
```
### Migration Notes (line 490)
**Backward Compatibility**: Fully compatible
- Existing TDD workflows continue to work
- New features are additive, not breaking
```

This is upgrade-path documentation, not part of the command's core functional docs.

**Statistics**:
- Total lines: 543
- Problem lines: ~103 (lines 420-523)
- Share: ~19%

**Conclusion**: tdd-plan.md violates two audit criteria:
1. Contains version-evolution history
2. Over-describes relationships with other commands (lacks task focus)

---

## Recommendations

### High Priority

1. **Remove concrete version numbers from version.md**
   - Current practice: hard-coded version numbers, dates, etc.
   - Recommendation: use variables or fetch version info at runtime; the document should describe only what the version command does
   - Rationale: version numbers should be managed by the version-control system, not hard-coded in documentation

2. **Remove lines 420-523 (version-evolution section) from tdd-plan.md**
   - Current: ~103 lines about "enhancements", "improvements", and "migration"
   - Recommendation: move to a separate CHANGELOG.md or project-level documentation
   - Rationale: this is historical-evolution information, not a usage guide

### Medium Priority

3. **Refactor the workflow relationships in tdd-plan.md**
   - Current: lines 475-495 compare the command with other commands in detail
   - Recommendation: simplify references to other commands; a "Related Commands" section is sufficient
   - Rationale: over-focusing on relationships with other commands dilutes the document

4. **Unify the version-information management strategy**
   - Recommendation: establish a project-level documentation standard clarifying which information may appear in command docs
   - Scope: all command documentation

---

## Compliance Assessment

### Overall Score: 96/100

- ✓ **High overall quality**: 95.9% of files are fully compliant
- ⚠️ **Two files need remediation**:
  - version.md: version-information management needs optimization
  - tdd-plan.md: version-evolution content needs to be separated out

### Recommended Actions

| Priority | Action | Expected Impact |
|--------|------|---------|
| **High** | Clean hard-coded version numbers out of version.md | Improves maintainability of version management |
| **High** | Remove lines 420-523 from tdd-plan.md | Improves focus; shrinks the document by 19% |
| **Medium** | Establish a version-information management standard | Prevents recurrence |
| **Low** | Simplify the workflow-relationship notes in tdd-plan.md | Further improves clarity |

---

## Appendix

### Audit Methodology

1. **Automated scan**: grep for keywords (version, changelog, release, history, etc.)
2. **Content analysis**: manual reading of the full content of matching files
3. **Structural analysis**: checking for content unrelated to core functionality
4. **Statistical analysis**: computing the share of problem content

### Data Sources

- Total files: 73
- Files analyzed in detail: 15
- Files quick-scanned: 58

### File List (completeness check)

All audited command documents:
- ✓ version.md (issues found)
- ✓ enhance-prompt.md
- ✓ test-fix-gen.md
- ✓ test-gen.md
- ✓ test-cycle-execute.md
- ✓ tdd-plan.md (issues found)
- ✓ tdd-verify.md
- ✓ status.md
- ✓ review.md
- ✓ plan.md
- ✓ lite-plan.md
- ✓ lite-execute.md
- ✓ init.md
- ✓ execute.md
- ✓ action-plan-verify.md
- ... plus the other 58 files (all compliant)

---

**Audit complete** - Generated: 2025-11-20
@@ -1,274 +0,0 @@
|
||||
# Command Flow Expression Standard
|
||||
|
||||
**用途**:规范命令文档中Task、SlashCommand、Skill和Bash调用的标准表达方式
|
||||
|
||||
**版本**:v2.1.0
|
||||
|
||||
---
|
||||
|
||||
## 核心原则
|
||||
|
||||
1. **统一格式** - 所有调用使用标准化格式
|
||||
2. **清晰参数** - 必需参数明确标注,可选参数加方括号
|
||||
3. **减少冗余** - 避免不必要的echo命令和管道操作
|
||||
4. **工具优先** - 优先使用专用工具(Write/Read/Edit)而非Bash变通
|
||||
5. **可读性** - 保持缩进和换行的一致性
|
||||
|
||||
---
|
||||
|
||||
## 1. Task调用标准(Agent启动)
|
||||
|
||||
### 标准格式
|
||||
|
||||
```javascript
|
||||
Task(
|
||||
subagent_type="agent-type",
|
||||
description="Brief description",
|
||||
prompt=`
|
||||
FULL TASK PROMPT HERE
|
||||
`
|
||||
)
|
||||
```
|
||||
|
||||
### 规范要求
|
||||
|
||||
- `subagent_type`: Agent类型(字符串)
|
||||
- `description`: 简短描述(5-10词,动词开头)
|
||||
- `prompt`: 完整任务提示(使用反引号包裹多行内容)
|
||||
- 参数字段缩进2空格
|
||||
|
||||
### 正确示例
|
||||
|
||||
```javascript
|
||||
// CLI执行agent
|
||||
Task(
|
||||
subagent_type="cli-execution-agent",
|
||||
description="Analyze codebase patterns",
|
||||
prompt=`
|
||||
PURPOSE: Identify code patterns for refactoring
|
||||
TASK: Scan project files and extract common patterns
|
||||
MODE: analysis
|
||||
CONTEXT: @src/**/*
|
||||
EXPECTED: Pattern list with usage examples
|
||||
`
|
||||
)
|
||||
|
||||
// 代码开发agent
|
||||
Task(
|
||||
subagent_type="code-developer",
|
||||
description="Implement authentication module",
|
||||
prompt=`
|
||||
GOAL: Build JWT-based authentication
|
||||
SCOPE: User login, token validation, session management
|
||||
CONTEXT: @src/auth/**/* @CLAUDE.md
|
||||
`
|
||||
)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 2. SlashCommand调用标准
|
||||
|
||||
### 标准格式
|
||||
|
||||
```javascript
|
||||
SlashCommand(command="/category:command-name [flags] arguments")
|
||||
```
|
||||
|
||||
### 规范要求
|
||||
|
||||
单行调用 | 双引号包裹 | 完整路径`/category:command-name` | 参数顺序: 标志→参数值
|
||||
|
||||
### 正确示例
|
||||
|
||||
```javascript
|
||||
// 无参数
|
||||
SlashCommand(command="/workflow:status")
|
||||
|
||||
// 带标志和参数
|
||||
SlashCommand(command="/workflow:session:start --auto \"task description\"")
|
||||
|
||||
// 变量替换
|
||||
SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"description\"")
|
||||
|
||||
// 多个标志
|
||||
SlashCommand(command="/workflow:plan --agent --cli-execute \"feature description\"")
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 3. Skill调用标准
|
||||
|
||||
### 标准格式
|
||||
|
||||
```javascript
|
||||
Skill(command: "skill-name")
|
||||
```
|
||||
|
||||
### 规范要求
|
||||
|
||||
单行调用 | 冒号语法`command:` | 双引号包裹skill-name
|
||||
|
||||
### 正确示例
|
||||
|
||||
```javascript
|
||||
// 项目SKILL
|
||||
Skill(command: "claude_dms3")
|
||||
|
||||
// 技术栈SKILL
|
||||
Skill(command: "react-dev")
|
||||
|
||||
// 工作流SKILL
|
||||
Skill(command: "workflow-progress")
|
||||
|
||||
// 变量替换
|
||||
Skill(command: "${skill_name}")
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 4. Bash命令标准
|
||||
|
||||
### 核心原则:优先使用专用工具
|
||||
|
||||
**工具优先级**:
|
||||
1. **Write工具** → 创建/覆盖文件内容
|
||||
2. **Edit工具** → 修改现有文件内容
|
||||
3. **Read工具** → 读取文件内容
|
||||
4. **Bash命令** → 仅用于真正的系统操作(git, npm, test等)
|
||||
|
||||
### 标准格式
|
||||
|
||||
```javascript
|
||||
bash(command args)
|
||||
```
|
||||
|
||||
### 合理使用Bash的场景
|
||||
|
||||
```javascript
|
||||
// ✅ Git操作
|
||||
bash(git status --short)
|
||||
bash(git commit -m "commit message")
|
||||
|
||||
// ✅ 包管理器和测试
|
||||
bash(npm install)
|
||||
bash(npm test)
|
||||
|
||||
// ✅ 文件系统查询和文本处理
|
||||
bash(find .workflow -name "*.json" -type f)
|
||||
bash(rg "pattern" --type js --files-with-matches)
|
||||
```
|
||||
|
||||
### 避免Bash的场景
|
||||
|
||||
```javascript
|
||||
// ❌ 文件创建/写入 → 使用Write工具
|
||||
bash(echo "content" > file.txt) // 错误
|
||||
Write({file_path: "file.txt", content: "content"}) // 正确
|
||||
|
||||
// ❌ 文件读取 → 使用Read工具
|
||||
bash(cat file.txt) // 错误
|
||||
Read({file_path: "file.txt"}) // 正确
|
||||
|
||||
// ❌ 简单字符串处理 → 在代码中处理
|
||||
bash(echo "text" | tr '[:upper:]' '[:lower:]') // 错误
|
||||
"text".toLowerCase() // 正确
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 5. 组合调用模式(伪代码准则)
|
||||
|
||||
### 核心准则
|
||||
|
||||
直接写执行逻辑(无FUNCTION/END包裹)| 用`#`注释分段 | 变量赋值`variable = value` | 条件`IF/ELSE` | 循环`FOR` | 验证`VALIDATE` | 错误`ERROR + EXIT 1`
|
||||
|
||||
### 顺序调用(依赖关系)
|
||||
|
||||
```pseudo
|
||||
# Phase 1-2: Session and Context
|
||||
sessionId = SlashCommand(command="/workflow:session:start --auto \"description\"")
|
||||
PARSE sessionId from output
|
||||
VALIDATE: bash(test -d .workflow/{sessionId})
|
||||
|
||||
contextPath = SlashCommand(command="/workflow:tools:context-gather --session {sessionId} \"desc\"")
|
||||
context_json = READ(contextPath)
|
||||
|
||||
# Phase 3-4: Conditional and Agent
|
||||
IF context_json.conflict_risk IN ["medium", "high"]:
|
||||
SlashCommand(command="/workflow:tools:conflict-resolution --session {sessionId}")
|
||||
|
||||
Task(subagent_type="action-planning-agent", description="Generate tasks", prompt=`SESSION: {sessionId}`)
|
||||
|
||||
VALIDATE: bash(test -f .workflow/{sessionId}/IMPL_PLAN.md)
|
||||
RETURN summary
|
||||
```
|
||||
|
||||
### 并行调用(无依赖)
|
||||
|
||||
```pseudo
|
||||
PARALLEL_START:
|
||||
check_git = bash(git status)
|
||||
check_count = bash(find .workflow -name "*.json" | wc -l)
|
||||
check_skill = Skill(command: "project-name")
|
||||
WAIT_ALL_COMPLETE
|
||||
VALIDATE results
|
||||
RETURN summary
|
||||
```
|
||||
|
||||
### 条件分支调用
|
||||
|
||||
```pseudo
|
||||
IF task_type CONTAINS "test": agent = "test-fix-agent"
|
||||
ELSE IF task_type CONTAINS "implement": agent = "code-developer"
|
||||
ELSE: agent = "universal-executor"
|
||||
|
||||
Skill(command: "project-name")
|
||||
Task(subagent_type=agent, description="Execute task", prompt=build_prompt(task_type))
|
||||
VALIDATE output
|
||||
RETURN result
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 6. 变量和占位符规范
|
||||
|
||||
| 上下文 | 格式 | 示例 |
|
||||
|--------|------|------|
|
||||
| **Markdown说明** | `[variableName]` | `[sessionId]`, `[contextPath]` |
|
||||
| **JavaScript代码** | `${variableName}` | `${sessionId}`, `${contextPath}` |
|
||||
| **Bash命令** | `$variable` | `$session_id`, `$context_path` |
|
||||
---

## 7. Quick Checklist

**Task**: `subagent_type` specified | description ≤ 10 words | prompt in backticks | 2-space indentation

**SlashCommand**: full path `/category:command` | flags first | variables as `[var]` | wrapped in double quotes

**Skill**: colon syntax `command:` | wrapped in double quotes | single-line format

**Bash**: could the Write/Edit/Read tools do this instead? | avoid unnecessary `echo` | reserve for genuine system operations

---

## 8. Common Mistakes and Fixes

```javascript
// ❌ Mistake 1: unnecessary echo in Bash
bash(echo '{"status":"active"}' > status.json)
// ✅ Correct: use the Write tool
Write({file_path: "status.json", content: '{"status":"active"}'})

// ❌ Mistake 2: single-line Task format
Task(subagent_type="agent", description="Do task", prompt=`...`)
// ✅ Correct: multi-line format
Task(
  subagent_type="agent",
  description="Do task",
  prompt=`...`
)

// ❌ Mistake 3: Skill called with an equals sign
Skill(command="skill-name")
// ✅ Correct: use colon syntax
Skill(command: "skill-name")
```
@@ -180,7 +180,7 @@ Commands for creating, listing, and managing workflow sessions.
 - **Syntax**: `/workflow:session:complete [--detailed]`
 - **Parameters**:
   - `--detailed` (Flag): Shows a more detailed completion summary.
-- **Responsibilities**: Marks the currently active session as "completed", records timestamps, and removes the `.active-*` marker file.
+- **Responsibilities**: Marks the currently active session as "completed", records timestamps, and moves the session from `.workflow/active/` to `.workflow/archives/`.
 - **Agent Calls**: None.
 - **Example**:
 ```bash
@@ -1,126 +0,0 @@
# Command Template: Executor

**Purpose**: Template for executor commands that directly perform a specific function

**Characteristics**: Focuses on implementing its own functionality; the Related Commands section is removed

---

## Template Structure

```markdown
---
name: command-name
description: Brief description of what this command does
argument-hint: "[flags] arguments"
allowed-tools: Read(*), Edit(*), Write(*), Bash(*), TodoWrite(*)
---

# Command Name (/category:command-name)

## Overview
Clear description of what this command does and its purpose.

**Key Characteristics**:
- Executes specific functionality directly
- Does NOT orchestrate other commands
- Focuses on single responsibility
- Returns concrete results

## Core Functionality
- Function 1: Description
- Function 2: Description
- Function 3: Description

## Usage

### Command Syntax
```bash
/category:command-name [FLAGS] <ARGUMENTS>

# Flags
--flag1    Description
--flag2    Description

# Arguments
<arg1>     Description
<arg2>     Description (optional)
```

## Execution Process

### Step 1: Step Name
Description of what happens in this step

**Operations**:
- Operation 1
- Operation 2

**Validation**:
- Check 1
- Check 2

---

### Step 2: Step Name
[Repeat for each step]

---

## Input/Output

### Input Requirements
- Input 1: Description and format
- Input 2: Description and format

### Output Format
```
Output description and structure
```

## Error Handling

### Common Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Error message 1 | Root cause | How to fix |
| Error message 2 | Root cause | How to fix |

## Best Practices

1. **Practice 1**: Description and rationale
2. **Practice 2**: Description and rationale
3. **Practice 3**: Description and rationale
```

---

## Usage Rules

### Core Principles
1. **Remove Related Commands** - Executors do not orchestrate other commands
2. **Single responsibility** - Each executor does exactly one thing
3. **Clear step breakdown** - Make the execution flow explicit
4. **Complete error handling** - List common errors and their resolutions

### Optional Sections
Depending on the command's characteristics, the following sections are optional:
- **Configuration**: use when there are configuration parameters
- **Output Files**: use when files are generated
- **Exit Codes**: use when there are well-defined exit codes
- **Environment Variables**: use when environment variables are required

### Formatting Requirements
- No emoji/icon decoration
- Plain-text status indicators
- Use tables to organize error information
- Provide practical example code

## Reference Examples

See the refactored executor commands:
- `.claude/commands/task/create.md`
- `.claude/commands/task/breakdown.md`
- `.claude/commands/task/execute.md`
- `.claude/commands/cli/execute.md`
- `.claude/commands/version.md`
@@ -1,140 +0,0 @@
# Command Template: Orchestrator

**Purpose**: Template for orchestrator commands that coordinate multiple subcommands

**Characteristics**: Keeps the Related Commands section and explicitly documents the chain of commands invoked

---

## Template Structure

```markdown
---
name: command-name
description: Brief description of what this command orchestrates
argument-hint: "[flags] arguments"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
---

# Command Name (/category:command-name)

## Overview
Clear description of what this command orchestrates and its role.

**Key Characteristics**:
- Orchestrates X phases/commands
- Coordinates between multiple slash commands
- Does NOT execute directly - delegates to specialized commands
- Manages workflow state and progress tracking

## Core Responsibilities
- Responsibility 1: Description
- Responsibility 2: Description
- Responsibility 3: Description

## Execution Flow

### Phase 1: Phase Name
**Command**: `SlashCommand(command="/command:name args")`

**Input**: Description of inputs

**Expected Behavior**:
- Behavior 1
- Behavior 2

**Parse Output**:
- Extract: variable name (pattern description)

**Validation**:
- Validation rule 1
- Validation rule 2

**TodoWrite**: Mark phase 1 completed, phase 2 in_progress

---

### Phase 2: Phase Name
[Repeat structure for each phase]

---

## TodoWrite Pattern

Track progress through all phases:

```javascript
TodoWrite({todos: [
  {"content": "Execute phase 1", "status": "in_progress|completed", "activeForm": "Executing phase 1"},
  {"content": "Execute phase 2", "status": "pending|in_progress|completed", "activeForm": "Executing phase 2"},
  {"content": "Execute phase 3", "status": "pending|in_progress|completed", "activeForm": "Executing phase 3"}
]})
```

## Data Flow

```
Phase 1: command-1 → output-1
           ↓
Phase 2: command-2 (input: output-1) → output-2
           ↓
Phase 3: command-3 (input: output-2) → final-result
```

## Error Handling

| Phase | Error | Action |
|-------|-------|--------|
| 1 | Error description | Recovery action |
| 2 | Error description | Recovery action |

## Usage Examples

### Basic Usage
```bash
/category:command-name
/category:command-name --flag "argument"
```

## Related Commands

**Prerequisite Commands**:
- `/command:prerequisite` - Description of when to use before this

**Called by This Command**:
- `/command:phase1` - Description (Phase 1)
- `/command:phase2` - Description (Phase 2)
- `/command:phase3` - Description (Phase 3)

**Follow-up Commands**:
- `/command:next` - Description of what to do after this
```

---

## Usage Rules

### Core Principles
1. **Keep Related Commands** - Explicitly document the command invocation chain
2. **Clear phase separation** - Each phase is independently trackable
3. **Visualized data flow** - Show how data passes between phases
4. **TodoWrite tracking** - Update execution progress in real time

### Related Commands Categories
- **Prerequisite Commands**: commands that must run before this one
- **Called by This Command**: subcommands this command invokes (grouped by phase)
- **Follow-up Commands**: recommended next steps after this command

### Formatting Requirements
- No emoji/icon decoration
- Plain-text status indicators
- Use tables to organize error information
- A clear data-flow diagram

## Reference Examples

See the refactored orchestrator commands:
- `.claude/commands/workflow/plan.md`
- `.claude/commands/workflow/execute.md`
- `.claude/commands/workflow/session/complete.md`
- `.claude/commands/workflow/session/start.md`

823 EXAMPLES.md
@@ -1,823 +0,0 @@
# 📖 Claude Code Workflow - Real-World Examples

This document provides practical, real-world examples of using CCW for common development tasks.

---

## 📋 Table of Contents

- [Quick Start Examples](#quick-start-examples)
- [Web Development](#web-development)
- [API Development](#api-development)
- [Testing & Quality Assurance](#testing--quality-assurance)
- [Refactoring](#refactoring)
- [UI/UX Design](#uiux-design)
- [Bug Fixes](#bug-fixes)
- [Documentation](#documentation)
- [DevOps & Automation](#devops--automation)
- [Complex Projects](#complex-projects)

---

## 🚀 Quick Start Examples

### Example 1: Simple Express API

**Objective**: Create a basic Express.js API with CRUD operations

```bash
# Option 1: Lite workflow (fastest)
/workflow:lite-plan "Create Express API with CRUD endpoints for users (GET, POST, PUT, DELETE)"

# Option 2: Full workflow (more structured)
/workflow:plan "Create Express API with CRUD endpoints for users"
/workflow:execute
```

**What CCW does**:
1. Analyzes your project structure
2. Creates Express app setup
3. Implements CRUD routes
4. Adds error handling middleware
5. Creates basic tests

**Result**:
```
src/
├── app.js                 # Express app setup
├── routes/
│   └── users.js           # User CRUD routes
├── controllers/
│   └── userController.js
└── tests/
    └── users.test.js
```

### Example 2: React Component

**Objective**: Create a React login form component

```bash
/workflow:lite-plan "Create a React login form component with email and password fields, validation, and submit handling"
```

**What CCW does**:
1. Creates LoginForm component
2. Adds form validation (email format, password requirements)
3. Implements state management
4. Adds error display
5. Creates component tests

**Result**:
```jsx
// components/LoginForm.jsx
import React, { useState } from 'react';

export function LoginForm({ onSubmit }) {
  const [email, setEmail] = useState('');
  const [password, setPassword] = useState('');
  const [errors, setErrors] = useState({});

  // ... validation and submit logic
}
```
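The elided validation logic typically looks something like the sketch below (a hypothetical helper with illustrative names, not the exact code CCW generates):

```javascript
// Hypothetical shape of the validation step; real generated code may differ.
function validateLogin({ email, password }) {
  const errors = {};
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    errors.email = 'Enter a valid email address';
  }
  if (!password || password.length < 8) {
    errors.password = 'Password must be at least 8 characters';
  }
  return errors; // an empty object means the form may submit
}
```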
---

## 🌐 Web Development

### Example 3: Full-Stack Todo Application

**Objective**: Build a complete todo application with React frontend and Express backend

#### Phase 1: Planning with Brainstorming

```bash
# Multi-perspective analysis
/workflow:brainstorm:auto-parallel "Full-stack todo application with user authentication, real-time updates, and dark mode"

# Review brainstorming artifacts
# Then create implementation plan
/workflow:plan

# Verify plan quality
/workflow:action-plan-verify
```

**Brainstorming generates**:
- System architecture analysis
- UI/UX design recommendations
- Data model design
- Security considerations
- API design patterns

#### Phase 2: Implementation

```bash
# Execute the plan
/workflow:execute

# Monitor progress
/workflow:status
```

**What CCW implements**:

**Backend** (`server/`):
- Express server setup
- MongoDB/PostgreSQL integration
- JWT authentication
- RESTful API endpoints
- WebSocket for real-time updates
- Input validation middleware

**Frontend** (`client/`):
- React app with routing
- Authentication flow
- Todo CRUD operations
- Real-time updates via WebSocket
- Dark mode toggle
- Responsive design

#### Phase 3: Testing

```bash
# Generate comprehensive tests
/workflow:test-gen WFS-todo-application

# Execute test tasks
/workflow:execute

# Run iterative test-fix cycle
/workflow:test-cycle-execute
```

**Tests created**:
- Unit tests for components
- Integration tests for API
- E2E tests for user flows
- Authentication tests
- WebSocket connection tests

#### Phase 4: Quality Review

```bash
# Security review
/workflow:review --type security

# Architecture review
/workflow:review --type architecture

# General quality review
/workflow:review
```

**Complete session**:
```bash
/workflow:session:complete
```

---

### Example 4: E-commerce Product Catalog

**Objective**: Build product catalog with search, filters, and pagination

```bash
# Start with UI design exploration
/workflow:ui-design:explore-auto --prompt "Modern e-commerce product catalog with grid layout, filters sidebar, and search bar" --targets "catalog,product-card" --style-variants 3

# Review designs in compare.html
# Sync selected designs
/workflow:ui-design:design-sync --session <session-id> --selected-prototypes "catalog-v2,product-card-v1"

# Create implementation plan
/workflow:plan

# Execute
/workflow:execute
```

**Features implemented**:
- Product grid with responsive layout
- Search functionality with debounce
- Category/price/rating filters
- Pagination with infinite scroll option
- Product card with image, title, price, rating
- Sort options (price, popularity, newest)
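The search debounce above follows a standard pattern: fire the query only after a pause in typing. A minimal sketch (the injectable `timers` parameter is an assumption added here for testability, not part of the generated code):

```javascript
// A minimal debounce: run `fn` only after `delay` ms of input silence.
function debounce(fn, delay, timers = { set: setTimeout, clear: clearTimeout }) {
  let id = null;
  return (...args) => {
    if (id !== null) timers.clear(id); // a new keystroke cancels the pending call
    id = timers.set(() => fn(...args), delay);
  };
}

// Usage: wire it to the search input's change events.
const runSearch = (q) => console.log(`searching for ${q}`);
const debouncedSearch = debounce(runSearch, 300);
```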
---

## 🔌 API Development

### Example 5: RESTful API with Authentication

**Objective**: Create RESTful API with JWT authentication and role-based access control

```bash
# Detailed planning
/workflow:plan "RESTful API with JWT authentication, role-based access control (admin, user), and protected endpoints for posts resource"

# Verify plan
/workflow:action-plan-verify

# Execute
/workflow:execute
```

**Implementation includes**:

**Authentication**:
```javascript
// routes/auth.js
POST /api/auth/register
POST /api/auth/login
POST /api/auth/refresh
POST /api/auth/logout
```

**Protected Resources**:
```javascript
// routes/posts.js
GET    /api/posts          # Public
GET    /api/posts/:id      # Public
POST   /api/posts          # Authenticated
PUT    /api/posts/:id      # Authenticated (owner or admin)
DELETE /api/posts/:id      # Authenticated (owner or admin)
```

**Middleware**:
- `authenticate` - Verifies JWT token
- `authorize(['admin'])` - Role-based access
- `validateRequest` - Input validation
- `errorHandler` - Centralized error handling
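A plausible shape for the `authorize` factory, in the Express `(req, res, next)` middleware style (a sketch that assumes `authenticate` has already populated `req.user`; not the exact generated code):

```javascript
// Returns middleware that admits only the listed roles.
function authorize(allowedRoles) {
  return (req, res, next) => {
    if (!req.user || !allowedRoles.includes(req.user.role)) {
      return res.status(403).json({ error: 'Forbidden' });
    }
    next(); // role accepted; continue down the middleware chain
  };
}
```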
### Example 6: GraphQL API

**Objective**: Convert REST API to GraphQL

```bash
# Analyze existing REST API
/cli:analyze "Analyze REST API structure in src/routes/"

# Plan GraphQL migration
/workflow:plan "Migrate REST API to GraphQL with queries, mutations, and subscriptions for posts and users"

# Execute migration
/workflow:execute
```

**GraphQL schema created**:
```graphql
type Query {
  posts(limit: Int, offset: Int): [Post!]!
  post(id: ID!): Post
  user(id: ID!): User
}

type Mutation {
  createPost(input: CreatePostInput!): Post!
  updatePost(id: ID!, input: UpdatePostInput!): Post!
  deletePost(id: ID!): Boolean!
}

type Subscription {
  postCreated: Post!
  postUpdated: Post!
}
```

---

## 🧪 Testing & Quality Assurance

### Example 7: Test-Driven Development (TDD)

**Objective**: Implement user authentication using TDD approach

```bash
# Start TDD workflow
/workflow:tdd-plan "User authentication with email/password login, registration, and password reset"

# Execute (Red-Green-Refactor cycles)
/workflow:execute

# Verify TDD compliance
/workflow:tdd-verify
```

**TDD cycle tasks created**:

**Cycle 1: Registration**
1. `IMPL-1.1` - Write failing test for user registration
2. `IMPL-1.2` - Implement registration to pass test
3. `IMPL-1.3` - Refactor registration code

**Cycle 2: Login**
1. `IMPL-2.1` - Write failing test for login
2. `IMPL-2.2` - Implement login to pass test
3. `IMPL-2.3` - Refactor login code

**Cycle 3: Password Reset**
1. `IMPL-3.1` - Write failing test for password reset
2. `IMPL-3.2` - Implement password reset
3. `IMPL-3.3` - Refactor password reset
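One Red-Green pair from Cycle 1 might look like the sketch below (framework-free, with hypothetical names; real tasks would use the project's test runner):

```javascript
// Red: the test is written first, against a registerUser that does not yet exist.
function testRejectsDuplicateEmail(registerUser) {
  const existing = new Set(['ada@example.com']);
  try {
    registerUser(existing, 'ada@example.com', 's3cret-pw');
    return false; // should have thrown
  } catch (e) {
    return e.message === 'email already registered';
  }
}

// Green: the minimal implementation that makes the test pass.
function registerUser(emails, email, password) {
  if (emails.has(email)) throw new Error('email already registered');
  emails.add(email);
  return { email };
}
```

The Refactor step then cleans up the implementation while keeping the test green.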
### Example 8: Adding Tests to Existing Code

**Objective**: Generate comprehensive tests for existing authentication module

```bash
# Create test generation workflow from existing code
/workflow:test-gen WFS-authentication-implementation

# Execute test tasks
/workflow:execute

# Run test-fix cycle until all tests pass
/workflow:test-cycle-execute --max-iterations 5
```

**Tests generated**:
- Unit tests for each function
- Integration tests for auth flow
- Edge case tests (invalid input, expired tokens, etc.)
- Security tests (SQL injection, XSS, etc.)
- Performance tests (load testing, rate limiting)

**Test coverage**: Aims for 80%+ coverage

---

## 🔄 Refactoring

### Example 9: Monolith to Microservices

**Objective**: Refactor monolithic application to microservices architecture

#### Phase 1: Analysis

```bash
# Deep architecture analysis
/cli:mode:plan --tool gemini "Analyze current monolithic architecture and create microservices migration strategy"

# Multi-role brainstorming
/workflow:brainstorm:auto-parallel "Migrate monolith to microservices with API gateway, service discovery, and message queue" --count 5
```

#### Phase 2: Planning

```bash
# Create detailed migration plan
/workflow:plan "Phase 1 microservices migration: Extract user service and auth service from monolith"

# Verify plan
/workflow:action-plan-verify
```

#### Phase 3: Implementation

```bash
# Execute migration
/workflow:execute

# Review architecture
/workflow:review --type architecture
```

**Microservices created**:
```
services/
├── user-service/
│   ├── src/
│   ├── Dockerfile
│   └── package.json
├── auth-service/
│   ├── src/
│   ├── Dockerfile
│   └── package.json
├── api-gateway/
│   ├── src/
│   └── config/
└── docker-compose.yml
```

### Example 10: Code Optimization

**Objective**: Optimize database queries for performance

```bash
# Analyze current performance
/cli:mode:code-analysis "Analyze database query performance in src/repositories/"

# Create optimization plan
/workflow:plan "Optimize database queries with indexing, query optimization, and caching"

# Execute optimizations
/workflow:execute
```

**Optimizations implemented**:
- Database indexing strategy
- N+1 query elimination
- Query result caching (Redis)
- Connection pooling
- Pagination for large datasets
- Database query monitoring
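The N+1 elimination follows a standard pattern: one batched lookup for all related rows instead of one query per parent row. An in-memory sketch (the repository calls are stand-ins, not a real ORM API):

```javascript
// Stand-in "database": posts and their authors.
const users = new Map([[1, { id: 1, name: 'Ada' }], [2, { id: 2, name: 'Lin' }]]);
const posts = [
  { id: 10, authorId: 1 },
  { id: 11, authorId: 2 },
  { id: 12, authorId: 1 },
];

let queryCount = 0;
// One batched lookup for all author ids (vs. one lookup per post).
function findUsersByIds(ids) {
  queryCount += 1;
  return ids.map((id) => users.get(id));
}

function postsWithAuthors(allPosts) {
  const ids = [...new Set(allPosts.map((p) => p.authorId))];
  const byId = new Map(findUsersByIds(ids).map((u) => [u.id, u]));
  return allPosts.map((p) => ({ ...p, author: byId.get(p.authorId) }));
}
```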
---

## 🎨 UI/UX Design

### Example 11: Design System Creation

**Objective**: Create a complete design system for a SaaS application

```bash
# Extract design from local reference images
/workflow:ui-design:imitate-auto --input "design-refs/*.png"

# Or import from existing code
/workflow:ui-design:imitate-auto --input "./src/components"

# Or create from scratch
/workflow:ui-design:explore-auto --prompt "Modern SaaS design system with primary components: buttons, inputs, cards, modals, navigation" --targets "button,input,card,modal,navbar" --style-variants 3
```

**Design system includes**:
- Color palette (primary, secondary, accent, neutral)
- Typography scale (headings, body, captions)
- Spacing system (4px grid)
- Component library:
  - Buttons (primary, secondary, outline, ghost)
  - Form inputs (text, select, checkbox, radio)
  - Cards (basic, elevated, outlined)
  - Modals (small, medium, large)
  - Navigation (sidebar, topbar, breadcrumbs)
- Animation patterns
- Responsive breakpoints

**Output**:
```
design-system/
├── tokens/
│   ├── colors.json
│   ├── typography.json
│   └── spacing.json
├── components/
│   ├── Button.jsx
│   ├── Input.jsx
│   └── ...
└── documentation/
    └── design-system.html
```

### Example 12: Responsive Landing Page

**Objective**: Design and implement a marketing landing page

```bash
# Design exploration
/workflow:ui-design:explore-auto --prompt "Modern SaaS landing page with hero section, features grid, pricing table, testimonials, and CTA" --targets "hero,features,pricing,testimonials" --style-variants 2 --layout-variants 3 --device-type responsive

# Select best designs and sync
/workflow:ui-design:design-sync --session <session-id> --selected-prototypes "hero-v2,features-v1,pricing-v3"

# Implement
/workflow:plan
/workflow:execute
```

**Sections implemented**:
- Hero section with animated background
- Feature cards with icons
- Pricing comparison table
- Customer testimonials carousel
- FAQ accordion
- Contact form
- Responsive navigation
- Dark mode support

---

## 🐛 Bug Fixes

### Example 13: Quick Bug Fix

**Objective**: Fix login button not working on mobile

```bash
# Analyze bug
/cli:mode:bug-diagnosis "Login button click event not firing on mobile Safari"

# Claude analyzes and implements fix
```

**Fix implemented**:
```javascript
// Before
button.onclick = handleLogin;

// After (adds touch event support)
button.addEventListener('click', handleLogin);
button.addEventListener('touchend', (e) => {
  e.preventDefault();
  handleLogin(e);
});
```

### Example 14: Complex Bug Investigation

**Objective**: Debug memory leak in React application

#### Investigation

```bash
# Start session for thorough investigation
/workflow:session:start "Memory Leak Investigation"

# Deep bug analysis
/cli:mode:bug-diagnosis --tool gemini "Memory leak in React components - event listeners not cleaned up"

# Create fix plan
/workflow:plan "Fix memory leaks in React components: cleanup event listeners and cancel subscriptions"
```

#### Implementation

```bash
# Execute fixes
/workflow:execute

# Generate tests to prevent regression
/workflow:test-gen WFS-memory-leak-investigation

# Execute tests
/workflow:execute
```

**Issues found and fixed**:
1. Missing cleanup in `useEffect` hooks
2. Event listeners not removed
3. Uncancelled API requests on unmount
4. Large state objects not cleared
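All four fixes follow the same dispose-on-unmount pattern. A framework-free sketch (the event-bus object is illustrative, not React's API; in real code the disposer is the cleanup function returned from `useEffect`):

```javascript
// Every resource acquired at "mount" registers a disposer to run at "unmount",
// so nothing keeps the component reachable after it is gone.
function mountComponent(eventBus) {
  const disposers = [];

  const onMessage = () => {};
  eventBus.listeners.push(onMessage);
  disposers.push(() => {
    eventBus.listeners = eventBus.listeners.filter((l) => l !== onMessage);
  });

  // unmount: run every disposer
  return () => disposers.forEach((dispose) => dispose());
}
```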
|
||||
|
||||
---
|
||||
|
||||
## 📝 Documentation
|
||||
|
||||
### Example 15: API Documentation Generation
|
||||
|
||||
**Objective**: Generate comprehensive API documentation
|
||||
|
||||
```bash
|
||||
# Analyze existing API
|
||||
/memory:load "Generate API documentation for all endpoints"
|
||||
|
||||
# Create documentation
|
||||
/workflow:plan "Generate OpenAPI/Swagger documentation for REST API with examples and authentication info"
|
||||
|
||||
# Execute
|
||||
/workflow:execute
|
||||
```
|
||||
|
||||
**Documentation includes**:
|
||||
- OpenAPI 3.0 specification
|
||||
- Interactive Swagger UI
|
||||
- Request/response examples
|
||||
- Authentication guide
|
||||
- Rate limiting info
|
||||
- Error codes reference
|
||||
|
||||
### Example 16: Project README Generation
|
||||
|
||||
**Objective**: Create comprehensive README for open-source project
|
||||
|
||||
```bash
|
||||
# Update project memory first
|
||||
/memory:update-full --tool gemini
|
||||
|
||||
# Generate README
|
||||
/workflow:plan "Create comprehensive README.md with installation, usage, examples, API reference, and contributing guidelines"
|
||||
|
||||
/workflow:execute
|
||||
```
|
||||
|
||||
**README sections**:
|
||||
- Project overview
|
||||
- Features
|
||||
- Installation instructions
|
||||
- Quick start guide
|
||||
- Usage examples
|
||||
- API reference
|
||||
- Configuration
|
||||
- Contributing guidelines
|
||||
- License
|
||||
|
||||
---
|
||||
|
||||
## ⚙️ DevOps & Automation
|
||||
|
||||
### Example 17: CI/CD Pipeline Setup
|
||||
|
||||
**Objective**: Set up GitHub Actions CI/CD pipeline
|
||||
|
||||
```bash
|
||||
/workflow:plan "Create GitHub Actions workflow for Node.js app with linting, testing, building, and deployment to AWS"
|
||||
|
||||
/workflow:execute
|
||||
```
|
||||
|
||||
**Pipeline created**:
|
||||
```yaml
|
||||
# .github/workflows/ci-cd.yml
|
||||
name: CI/CD
|
||||
|
||||
on: [push, pull_request]
|
||||
|
||||
jobs:
|
||||
test:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v2
|
||||
- name: Run tests
|
||||
run: npm test
|
||||
|
||||
build:
|
||||
needs: test
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: Build
|
||||
run: npm run build
|
||||
|
||||
deploy:
|
||||
needs: build
|
||||
if: github.ref == 'refs/heads/main'
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: Deploy to AWS
|
||||
run: npm run deploy
|
||||
```
|
||||
|
||||
### Example 18: Docker Containerization
|
||||
|
||||
**Objective**: Dockerize full-stack application
|
||||
|
||||
```bash
|
||||
# Plan containerization
|
||||
/workflow:plan "Dockerize full-stack app with React frontend, Express backend, PostgreSQL database, and Redis cache using docker-compose"
|
||||
|
||||
# Execute
|
||||
/workflow:execute
|
||||
|
||||
# Review
|
||||
/workflow:review --type architecture
|
||||
```
|
||||
|
||||
**Created files**:
|
||||
```
|
||||
├── docker-compose.yml
|
||||
├── frontend/
|
||||
│ └── Dockerfile
|
||||
├── backend/
|
||||
│ └── Dockerfile
|
||||
├── .dockerignore
|
||||
└── README.docker.md
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 🏗️ Complex Projects
|
||||
|
||||
### Example 19: Real-Time Chat Application
|
||||
|
||||
**Objective**: Build real-time chat with WebSocket, message history, and file sharing
|
||||
|
||||
#### Complete Workflow
|
||||
|
||||
```bash
|
||||
# 1. Brainstorm
|
||||
/workflow:brainstorm:auto-parallel "Real-time chat application with WebSocket, message history, file upload, user presence, typing indicators" --count 5
|
||||
|
||||
# 2. UI Design
|
||||
/workflow:ui-design:explore-auto --prompt "Modern chat interface with message list, input box, user sidebar, file preview" --targets "chat-window,message-bubble,user-list" --style-variants 2
|
||||
|
||||
# 3. Sync designs
|
||||
/workflow:ui-design:design-sync --session <session-id>
|
||||
|
||||
# 4. Plan implementation
|
||||
/workflow:plan
|
||||
|
||||
# 5. Verify plan
|
||||
/workflow:action-plan-verify
|
||||
|
||||
# 6. Execute
|
||||
/workflow:execute
|
||||
|
||||
# 7. Generate tests
|
||||
/workflow:test-gen <session-id>
|
||||
|
||||
# 8. Execute tests
|
||||
/workflow:execute
|
||||
|
||||
# 9. Review
|
||||
/workflow:review --type security
|
||||
/workflow:review --type architecture
|
||||
|
||||
# 10. Complete
|
||||
/workflow:session:complete
|
||||
```
|
||||
|
||||
**Features implemented**:
|
||||
- WebSocket server (Socket.io)
|
||||
- Real-time messaging
|
||||
- Message persistence (MongoDB)
|
||||
- File upload (S3/local storage)
|
||||
- User authentication
|
||||
- Typing indicators
|
||||
- Read receipts
|
||||
- User presence (online/offline)
|
||||
- Message search
|
||||
- Emoji support
|
||||
- Mobile responsive
|
||||
|
||||
### Example 20: Data Analytics Dashboard
|
||||
|
||||
**Objective**: Build interactive dashboard with charts and real-time data
|
||||
|
||||
```bash
|
||||
# Brainstorm data viz approach
|
||||
/workflow:brainstorm:auto-parallel "Data analytics dashboard with real-time metrics, interactive charts, filters, and export functionality"
|
||||
|
||||
# Plan implementation
|
||||
/workflow:plan "Analytics dashboard with Chart.js/D3.js, real-time data updates via WebSocket, date range filters, and CSV export"
|
||||
|
||||
# Execute
|
||||
/workflow:execute
|
||||
```
|
||||
|
||||
**Dashboard features**:
|
||||
- Real-time metric cards (users, revenue, conversions)
|
||||
- Line charts (trends over time)
|
||||
- Bar charts (comparisons)
|
||||
- Pie charts (distributions)
|
||||
- Data tables with sorting/filtering
|
||||
- Date range picker
|
||||
- Export to CSV/PDF
|
||||
- Responsive grid layout
|
||||
- Dark mode
|
||||
- WebSocket updates every 5 seconds
|
||||
|
||||
---

## 💡 Tips for Effective Examples

### Best Practices

1. **Start with clear objectives**
   - Define what you want to build
   - List key features
   - Specify technologies if needed

2. **Use appropriate workflow**
   - Simple tasks: `/workflow:lite-plan`
   - Complex features: `/workflow:brainstorm` → `/workflow:plan`
   - Existing code: `/workflow:test-gen` or `/cli:analyze`

3. **Leverage quality gates**
   - Run `/workflow:action-plan-verify` before execution
   - Use `/workflow:review` after implementation
   - Generate tests with `/workflow:test-gen`

4. **Maintain memory**
   - Update memory after major changes
   - Use `/memory:load` for quick context
   - Keep CLAUDE.md files up to date

5. **Complete sessions**
   - Always run `/workflow:session:complete`
   - Generates lessons learned
   - Archives session for reference
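The five practices above compose into a single session. A minimal hypothetical sequence (command names as used throughout this document; the project description is invented for illustration):

```bash
# Plan with a clear objective, verify before executing
/workflow:plan "Add rate limiting to the API"
/workflow:action-plan-verify

# Implement, then apply quality gates
/workflow:execute
/workflow:review
/workflow:test-gen

# Archive the session and capture lessons learned
/workflow:session:complete
```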
---

## 🔗 Related Resources

- [Getting Started Guide](GETTING_STARTED.md) - Basics
- [Architecture](ARCHITECTURE.md) - How it works
- [Command Reference](COMMAND_REFERENCE.md) - All commands
- [FAQ](FAQ.md) - Common questions
- [Contributing](CONTRIBUTING.md) - How to contribute

---

## 📬 Share Your Examples

Have a great example to share? Contribute to this document!

See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.

---

**Last Updated**: 2025-11-20
**Version**: 5.8.1
File diff suppressed because it is too large
@@ -225,6 +225,7 @@ function get_backup_directory() {
 function backup_file_to_folder() {
     local file_path="$1"
     local backup_folder="$2"
+    local quiet="${3:-}"  # Optional quiet mode

     if [ ! -f "$file_path" ]; then
         return 1
@@ -249,10 +250,16 @@ function backup_file_to_folder() {
     local backup_file_path="${backup_sub_dir}/${file_name}"

     if cp "$file_path" "$backup_file_path"; then
-        write_color "Backed up: $file_name" "$COLOR_INFO"
+        # Only output if not in quiet mode
+        if [ "$quiet" != "quiet" ]; then
+            write_color "Backed up: $file_name" "$COLOR_INFO"
+        fi
         return 0
     else
-        write_color "WARNING: Failed to backup file $file_path" "$COLOR_WARNING"
+        # Always show warnings
+        if [ "$quiet" != "quiet" ]; then
+            write_color "WARNING: Failed to backup file $file_path" "$COLOR_WARNING"
+        fi
         return 1
     fi
 }
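The hunk above threads an optional third argument through `backup_file_to_folder`. A standalone sketch of the `${3:-}` pattern (function and file names here are illustrative, not from the installer): expanding with `:-` keeps the variable defined even when the caller omits the argument, which is safe under `set -u`.

```shell
#!/usr/bin/env bash
set -u

describe_backup() {
    local file_path="$1"
    local backup_folder="$2"
    local quiet="${3:-}"   # optional third argument; empty when omitted

    # Suppress per-file output only when the caller passes "quiet"
    if [ "$quiet" != "quiet" ]; then
        echo "Backed up: $(basename "$file_path")"
    fi
}

loud=$(describe_backup /tmp/a.txt /tmp/backup)         # prints a message
silent=$(describe_backup /tmp/a.txt /tmp/backup quiet) # prints nothing
```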
@@ -443,14 +450,25 @@ function merge_directory_contents() {
         return 1
     fi

-    mkdir -p "$destination"
-    write_color "Created destination directory: $destination" "$COLOR_INFO"
+    # Create destination directory if it doesn't exist
+    if [ ! -d "$destination" ]; then
+        mkdir -p "$destination"
+        write_color "Created destination directory: $destination" "$COLOR_INFO"
+    fi
+
+    # Count total files first
+    local total_files=$(find "$source" -type f | wc -l)
     local merged_count=0
     local skipped_count=0
+    local backed_up_count=0
+    local processed_count=0
+
+    write_color "Processing $total_files files in $description..." "$COLOR_INFO"

     # Find all files recursively
     while IFS= read -r -d '' file; do
+        ((processed_count++))
+
         local relative_path="${file#$source/}"
         local dest_path="${destination}/${relative_path}"
         local dest_dir=$(dirname "$dest_path")
@@ -458,41 +476,58 @@ function merge_directory_contents() {
         mkdir -p "$dest_dir"

         if [ -f "$dest_path" ]; then
             local file_name=$(basename "$relative_path")

             # Use BackupAll mode for automatic backup without confirmation (default behavior)
             if [ "$BACKUP_ALL" = true ] && [ "$NO_BACKUP" = false ]; then
                 if [ -n "$backup_folder" ]; then
-                    backup_file_to_folder "$dest_path" "$backup_folder"
-                    write_color "Auto-backed up: $file_name" "$COLOR_INFO"
+                    # Quiet backup - no individual file output
+                    if backup_file_to_folder "$dest_path" "$backup_folder" "quiet"; then
+                        ((backed_up_count++))
+                    fi
                 fi
                 cp "$file" "$dest_path"
                 ((merged_count++))
             elif [ "$NO_BACKUP" = true ]; then
                 # No backup mode - ask for confirmation
                 if confirm_action "File '$relative_path' already exists. Replace it? (NO BACKUP)" false; then
                     cp "$file" "$dest_path"
                     ((merged_count++))
                 else
                     write_color "Skipped $file_name (no backup)" "$COLOR_WARNING"
                     ((skipped_count++))
                 fi
             elif confirm_action "File '$relative_path' already exists. Replace it?" false; then
                 if [ -n "$backup_folder" ]; then
-                    backup_file_to_folder "$dest_path" "$backup_folder"
-                    write_color "Backed up existing $file_name" "$COLOR_INFO"
+                    # Quiet backup - no individual file output
+                    if backup_file_to_folder "$dest_path" "$backup_folder" "quiet"; then
+                        ((backed_up_count++))
+                    fi
                 fi
                 cp "$file" "$dest_path"
                 ((merged_count++))
             else
                 write_color "Skipped $file_name" "$COLOR_WARNING"
                 ((skipped_count++))
             fi
         else
             cp "$file" "$dest_path"
             ((merged_count++))
         fi
+
+        # Show progress every 100 files (optimized for performance)
+        if [ $((processed_count % 100)) -eq 0 ] || [ "$processed_count" -eq "$total_files" ]; then
+            local percent=$((processed_count * 100 / total_files))
+            echo -ne "\rMerging $description: $processed_count/$total_files files ($percent%)..."
+        fi
     done < <(find "$source" -type f -print0)

-    write_color "✓ Merged $merged_count files, skipped $skipped_count files" "$COLOR_SUCCESS"
+    # Clear progress line
+    echo -ne "\r\033[K"
+
+    # Show summary
+    if [ "$backed_up_count" -gt 0 ]; then
+        write_color "✓ Merged $merged_count files ($backed_up_count backed up), skipped $skipped_count files" "$COLOR_SUCCESS"
+    else
+        write_color "✓ Merged $merged_count files, skipped $skipped_count files" "$COLOR_SUCCESS"
+    fi

     return 0
 }
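The progress display added in the hunk above relies on two terminal escapes: `\r` returns the cursor to column 0 so each update overwrites the previous line, and `\033[K` erases to end of line when the loop finishes. A minimal standalone sketch of the same pattern (counts and labels invented for illustration):

```shell
#!/usr/bin/env bash
# Single-line progress: update every 100 items, always on the final one.
total=250
for ((i = 1; i <= total; i++)); do
    if (( i % 100 == 0 )) || (( i == total )); then
        percent=$(( i * 100 / total ))
        # \r rewinds to column 0, so this printf overwrites the last update
        printf '\rProcessing: %d/%d files (%d%%)...' "$i" "$total" "$percent"
    fi
done
printf '\r\033[K'   # erase the progress line so the summary starts clean
echo "done"
```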
@@ -508,6 +543,10 @@ function install_global() {

     write_color "Global installation path: $user_home" "$COLOR_INFO"

+    # Clean up old installation before proceeding (fast move operation)
+    echo ""
+    move_old_installation "$user_home" "Global"
+
     # Initialize manifest
     local manifest_file=$(new_install_manifest "Global" "$user_home")

@@ -548,12 +587,8 @@ function install_global() {
         # Track .claude directory in manifest
         add_manifest_entry "$manifest_file" "$global_claude_dir" "Directory"

-        # Track files from SOURCE directory, not destination
-        while IFS= read -r -d '' source_file; do
-            local relative_path="${source_file#$source_claude_dir}"
-            local target_path="${global_claude_dir}${relative_path}"
-            add_manifest_entry "$manifest_file" "$target_path" "File"
-        done < <(find "$source_claude_dir" -type f -print0)
+        # Track files from SOURCE directory using bulk operation
+        add_manifest_entries_bulk "$manifest_file" "$source_claude_dir" "$global_claude_dir" "File"
     fi

     # Handle CLAUDE.md file
@@ -572,12 +607,8 @@ function install_global() {
         # Track .codex directory in manifest
         add_manifest_entry "$manifest_file" "$global_codex_dir" "Directory"

-        # Track files from SOURCE directory
-        while IFS= read -r -d '' source_file; do
-            local relative_path="${source_file#$source_codex_dir}"
-            local target_path="${global_codex_dir}${relative_path}"
-            add_manifest_entry "$manifest_file" "$target_path" "File"
-        done < <(find "$source_codex_dir" -type f -print0)
+        # Track files from SOURCE directory using bulk operation
+        add_manifest_entries_bulk "$manifest_file" "$source_codex_dir" "$global_codex_dir" "File"
     fi

     # Backup critical config files in .gemini directory before installation
@@ -589,12 +620,8 @@ function install_global() {
         # Track .gemini directory in manifest
         add_manifest_entry "$manifest_file" "$global_gemini_dir" "Directory"

-        # Track files from SOURCE directory
-        while IFS= read -r -d '' source_file; do
-            local relative_path="${source_file#$source_gemini_dir}"
-            local target_path="${global_gemini_dir}${relative_path}"
-            add_manifest_entry "$manifest_file" "$target_path" "File"
-        done < <(find "$source_gemini_dir" -type f -print0)
+        # Track files from SOURCE directory using bulk operation
+        add_manifest_entries_bulk "$manifest_file" "$source_gemini_dir" "$global_gemini_dir" "File"
     fi

     # Backup critical config files in .qwen directory before installation
@@ -606,12 +633,8 @@ function install_global() {
         # Track .qwen directory in manifest
         add_manifest_entry "$manifest_file" "$global_qwen_dir" "Directory"

-        # Track files from SOURCE directory
-        while IFS= read -r -d '' source_file; do
-            local relative_path="${source_file#$source_qwen_dir}"
-            local target_path="${global_qwen_dir}${relative_path}"
-            add_manifest_entry "$manifest_file" "$target_path" "File"
-        done < <(find "$source_qwen_dir" -type f -print0)
+        # Track files from SOURCE directory using bulk operation
+        add_manifest_entries_bulk "$manifest_file" "$source_qwen_dir" "$global_qwen_dir" "File"
     fi

     # Remove empty backup folder
@@ -627,7 +650,7 @@ function install_global() {
     create_version_json "$global_claude_dir" "Global"

     # Save installation manifest
-    save_install_manifest "$manifest_file" "$user_home"
+    save_install_manifest "$manifest_file" "$user_home" "Global"

     return 0
 }
@@ -642,6 +665,10 @@ function install_path() {
     local global_claude_dir="${user_home}/.claude"
     write_color "Global path: $user_home" "$COLOR_INFO"

+    # Clean up old installation before proceeding (fast move operation)
+    echo ""
+    move_old_installation "$target_dir" "Path"
+
     # Initialize manifest
     local manifest_file=$(new_install_manifest "Path" "$target_dir")

@@ -687,12 +714,8 @@ function install_path() {
             # Track local folder in manifest
             add_manifest_entry "$manifest_file" "$dest_folder" "Directory"

-            # Track files from SOURCE directory
-            while IFS= read -r -d '' source_file; do
-                local relative_path="${source_file#$source_folder}"
-                local target_path="${dest_folder}${relative_path}"
-                add_manifest_entry "$manifest_file" "$target_path" "File"
-            done < <(find "$source_folder" -type f -print0)
+            # Track files from SOURCE directory using bulk operation
+            add_manifest_entries_bulk "$manifest_file" "$source_folder" "$dest_folder" "File"
         fi
         write_color "✓ Installed local folder: $folder" "$COLOR_SUCCESS"
     else
@@ -700,11 +723,15 @@ function install_path() {
         fi
     done

-    # Global components - exclude local folders
+    # Global components - exclude local folders (use same efficient method as Global mode)
     write_color "Installing global components to $global_claude_dir..." "$COLOR_INFO"

-    local merged_count=0
+    # Create temporary directory for global files only
+    local temp_global_dir="/tmp/claude-global-$$"
+    mkdir -p "$temp_global_dir"
+
+    # Copy global files to temp directory (excluding local folders)
+    write_color "Preparing global components..." "$COLOR_INFO"
     while IFS= read -r -d '' file; do
         local relative_path="${file#$source_claude_dir/}"
         local top_folder=$(echo "$relative_path" | cut -d'/' -f1)
@@ -714,37 +741,24 @@ function install_path() {
             continue
         fi

-        local dest_path="${global_claude_dir}/${relative_path}"
-        local dest_dir=$(dirname "$dest_path")
+        local temp_dest_path="${temp_global_dir}/${relative_path}"
+        local temp_dest_dir=$(dirname "$temp_dest_path")

-        mkdir -p "$dest_dir"
-
-        if [ -f "$dest_path" ]; then
-            if [ "$BACKUP_ALL" = true ] && [ "$NO_BACKUP" = false ]; then
-                if [ -n "$backup_folder" ]; then
-                    backup_file_to_folder "$dest_path" "$backup_folder"
-                fi
-                cp "$file" "$dest_path"
-                ((merged_count++))
-            elif [ "$NO_BACKUP" = true ]; then
-                if confirm_action "File '$relative_path' already exists in global location. Replace it? (NO BACKUP)" false; then
-                    cp "$file" "$dest_path"
-                    ((merged_count++))
-                fi
-            elif confirm_action "File '$relative_path' already exists in global location. Replace it?" false; then
-                if [ -n "$backup_folder" ]; then
-                    backup_file_to_folder "$dest_path" "$backup_folder"
-                fi
-                cp "$file" "$dest_path"
-                ((merged_count++))
-            fi
-        else
-            cp "$file" "$dest_path"
-            ((merged_count++))
-        fi
+        mkdir -p "$temp_dest_dir"
+        cp "$file" "$temp_dest_path"
     done < <(find "$source_claude_dir" -type f -print0)

-    write_color "✓ Merged $merged_count files to global location" "$COLOR_SUCCESS"
+    # Use bulk merge method (same as Global mode - fast!)
+    if merge_directory_contents "$temp_global_dir" "$global_claude_dir" "global components" "$backup_folder"; then
+        # Track global files in manifest using bulk method (fast!)
+        add_manifest_entry "$manifest_file" "$global_claude_dir" "Directory"
+
+        # Track files from TEMP directory using bulk operation
+        add_manifest_entries_bulk "$manifest_file" "$temp_global_dir" "$global_claude_dir" "File"
+    fi
+
+    # Clean up temp directory
+    rm -rf "$temp_global_dir"

     # Handle CLAUDE.md file in global .claude directory
     local global_claude_md="${global_claude_dir}/CLAUDE.md"
@@ -763,12 +777,8 @@ function install_path() {
         # Track .codex directory in manifest
         add_manifest_entry "$manifest_file" "$local_codex_dir" "Directory"

-        # Track files from SOURCE directory
-        while IFS= read -r -d '' source_file; do
-            local relative_path="${source_file#$source_codex_dir}"
-            local target_path="${local_codex_dir}${relative_path}"
-            add_manifest_entry "$manifest_file" "$target_path" "File"
-        done < <(find "$source_codex_dir" -type f -print0)
+        # Track files from SOURCE directory using bulk operation
+        add_manifest_entries_bulk "$manifest_file" "$source_codex_dir" "$local_codex_dir" "File"
     fi

     # Backup critical config files in .gemini directory before installation
@@ -780,12 +790,8 @@ function install_path() {
         # Track .gemini directory in manifest
         add_manifest_entry "$manifest_file" "$local_gemini_dir" "Directory"

-        # Track files from SOURCE directory
-        while IFS= read -r -d '' source_file; do
-            local relative_path="${source_file#$source_gemini_dir}"
-            local target_path="${local_gemini_dir}${relative_path}"
-            add_manifest_entry "$manifest_file" "$target_path" "File"
-        done < <(find "$source_gemini_dir" -type f -print0)
+        # Track files from SOURCE directory using bulk operation
+        add_manifest_entries_bulk "$manifest_file" "$source_gemini_dir" "$local_gemini_dir" "File"
     fi

     # Backup critical config files in .qwen directory before installation
@@ -797,12 +803,8 @@ function install_path() {
         # Track .qwen directory in manifest
         add_manifest_entry "$manifest_file" "$local_qwen_dir" "Directory"

-        # Track files from SOURCE directory
-        while IFS= read -r -d '' source_file; do
-            local relative_path="${source_file#$source_qwen_dir}"
-            local target_path="${local_qwen_dir}${relative_path}"
-            add_manifest_entry "$manifest_file" "$target_path" "File"
-        done < <(find "$source_qwen_dir" -type f -print0)
+        # Track files from SOURCE directory using bulk operation
+        add_manifest_entries_bulk "$manifest_file" "$source_qwen_dir" "$local_qwen_dir" "File"
     fi

     # Remove empty backup folder
@@ -822,7 +824,7 @@ function install_path() {
     create_version_json "$global_claude_dir" "Global"

     # Save installation manifest
-    save_install_manifest "$manifest_file" "$target_dir"
+    save_install_manifest "$manifest_file" "$target_dir" "Path"

     return 0
 }
@@ -911,8 +913,15 @@ function new_install_manifest() {
     mkdir -p "$MANIFEST_DIR"

     # Generate unique manifest ID based on timestamp and mode
+    # Distinguish between Global and Path installations with clear naming
     local timestamp=$(date +"%Y%m%d-%H%M%S")
-    local manifest_id="install-${installation_mode}-${timestamp}"
+    local mode_prefix
+    if [ "$installation_mode" = "Global" ]; then
+        mode_prefix="manifest-global"
+    else
+        mode_prefix="manifest-path"
+    fi
+    local manifest_id="${mode_prefix}-${timestamp}"

     # Create manifest file path
     local manifest_file="${MANIFEST_DIR}/${manifest_id}.json"
@@ -971,12 +980,88 @@ EOF
         jq --argjson entry "$entry_json" '.directories += [$entry]' "$manifest_file" > "$temp_file"
     fi

-    mv "$temp_file" "$manifest_file"
+    # Only replace manifest if jq succeeded
+    if [ -s "$temp_file" ]; then
+        mv "$temp_file" "$manifest_file"
+    else
+        write_color "WARNING: Failed to add manifest entry (jq error)" "$COLOR_WARNING"
+        rm -f "$temp_file"
+    fi
 }
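The hunk above adopts a "write to temp, replace only on success" pattern: jq writes to a temporary file, and the original manifest is replaced only when that file is non-empty, so a jq failure cannot truncate the manifest. A minimal sketch under the assumption that `jq` is installed (file names and the sample entry are illustrative):

```shell
#!/usr/bin/env bash
# Safe in-place JSON update: never overwrite the original with empty output.
manifest=$(mktemp)
echo '{"directories": []}' > "$manifest"

tmp="${manifest}.tmp"
jq '.directories += [{"path": "/opt/app"}]' "$manifest" > "$tmp"

if [ -s "$tmp" ]; then
    mv "$tmp" "$manifest"     # jq produced output: commit the update
else
    rm -f "$tmp"              # jq failed: keep the original manifest intact
fi

dirs=$(jq '.directories | length' "$manifest")
rm -f "$manifest"
```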
+function add_manifest_entries_bulk() {
+    local manifest_file="$1"
+    local source_dir="$2"
+    local target_base="$3"
+    local entry_type="$4"
+
+    if [ ! -f "$manifest_file" ]; then
+        write_color "WARNING: Manifest file not found: $manifest_file" "$COLOR_WARNING"
+        return 1
+    fi
+
+    if [ ! -d "$source_dir" ]; then
+        return 0
+    fi
+
+    local timestamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
+    local temp_file="${manifest_file}.tmp"
+    local paths_file=$(mktemp)
+    local entries_file=$(mktemp)
+
+    # Collect all file paths and compute target paths using bash string operations
+    # This mimics the original while loop logic
+    while IFS= read -r -d '' source_file; do
+        local relative_path="${source_file#$source_dir}"
+        local target_path="${target_base}${relative_path}"
+        echo "$target_path"
+    done < <(find "$source_dir" -type f -print0) > "$paths_file"
+
+    # Check if paths_file has content
+    if [ ! -s "$paths_file" ]; then
+        rm -f "$paths_file" "$entries_file"
+        return 0
+    fi
+
+    # Generate JSON entries from paths (filter empty lines)
+    grep -v '^$' "$paths_file" | jq -R --arg date "$timestamp" --arg type "$entry_type" '
+        {
+            "path": .,
+            "type": $type,
+            "timestamp": $date
+        }
+    ' | jq -s '.' > "$entries_file"
+
+    # Check if entries_file has valid content
+    if [ ! -s "$entries_file" ]; then
+        rm -f "$paths_file" "$entries_file"
+        return 0
+    fi
+
+    # Add all entries to manifest using --slurpfile to avoid argument length limit
+    if [ "$entry_type" = "File" ]; then
+        jq --slurpfile entries "$entries_file" '.files += $entries[0]' "$manifest_file" > "$temp_file"
+    else
+        jq --slurpfile entries "$entries_file" '.directories += $entries[0]' "$manifest_file" > "$temp_file"
+    fi
+
+    # Only replace manifest if jq succeeded and temp_file has content
+    if [ -s "$temp_file" ]; then
+        mv "$temp_file" "$manifest_file"
+    else
+        write_color "WARNING: Failed to update manifest (jq error), keeping original" "$COLOR_WARNING"
+        rm -f "$temp_file"
+    fi
+
+    rm -f "$paths_file" "$entries_file"
+
+    return 0
+}
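The new `add_manifest_entries_bulk` above replaces per-file jq invocations with one bulk splice: `jq -R`/`jq -s` turn a plain list of paths into a JSON array, and `--slurpfile` feeds that array in from a file, sidestepping the ARG_MAX limit that `--argjson` would hit with thousands of entries. A minimal sketch of the same technique, assuming `jq` is installed (paths are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

manifest=$(mktemp); entries=$(mktemp)
echo '{"files": []}' > "$manifest"

# One object per input line (-R reads raw strings), then -s wraps them
# into a single JSON array.
printf '%s\n' /tmp/a.txt /tmp/b.txt |
    jq -R '{path: ., type: "File"}' | jq -s '.' > "$entries"

# Splice the whole array in with one jq call; --slurpfile reads the
# entries from a file instead of the command line.
jq --slurpfile e "$entries" '.files += $e[0]' "$manifest" > "${manifest}.tmp"
mv "${manifest}.tmp" "$manifest"

count=$(jq '.files | length' "$manifest")   # number of entries appended
rm -f "$manifest" "$entries"
```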
 function remove_old_manifests_for_path() {
     local installation_path="$1"
-    local current_manifest_file="$2"  # Optional: exclude this file from deletion
+    local installation_mode="$2"
+    local current_manifest_file="$3"  # Optional: exclude this file from deletion

     if [ ! -d "$MANIFEST_DIR" ]; then
         return 0
@@ -986,7 +1071,8 @@ function remove_old_manifests_for_path() {
     local target_path=$(echo "$installation_path" | sed 's:/*$::' | tr '[:upper:]' '[:lower:]')
     local removed_count=0

-    # Find and remove old manifests for the same installation path
+    # Find and remove old manifests for the same installation path and mode
+    # Support both new (manifest-*) and old (install-*) format
     while IFS= read -r -d '' file; do
         # Skip the current manifest file if specified
         if [ -n "$current_manifest_file" ] && [ "$file" = "$current_manifest_file" ]; then
@@ -994,19 +1080,20 @@ function remove_old_manifests_for_path() {
         fi

         local manifest_path=$(jq -r '.installation_path // ""' "$file" 2>/dev/null)
+        local manifest_mode=$(jq -r '.installation_mode // "Global"' "$file" 2>/dev/null)

         if [ -n "$manifest_path" ]; then
             # Normalize manifest path
             local normalized_manifest_path=$(echo "$manifest_path" | sed 's:/*$::' | tr '[:upper:]' '[:lower:]')

-            # If paths match, remove this old manifest
-            if [ "$normalized_manifest_path" = "$target_path" ]; then
+            # Only remove if BOTH path and mode match
+            if [ "$normalized_manifest_path" = "$target_path" ] && [ "$manifest_mode" = "$installation_mode" ]; then
                 rm -f "$file"
                 write_color "Removed old manifest: $(basename "$file")" "$COLOR_INFO"
                 ((removed_count++))
             fi
         fi
-    done < <(find "$MANIFEST_DIR" -name "install-*.json" -type f -print0 2>/dev/null)
+    done < <(find "$MANIFEST_DIR" \( -name "manifest-*.json" -o -name "install-*.json" \) -type f -print0 2>/dev/null)

     if [ "$removed_count" -gt 0 ]; then
         write_color "Removed $removed_count old manifest(s) for installation path: $installation_path" "$COLOR_SUCCESS"
@@ -1018,10 +1105,11 @@ function remove_old_manifests_for_path() {
 function save_install_manifest() {
     local manifest_file="$1"
     local installation_path="$2"
+    local installation_mode="$3"

-    # Remove old manifests for the same installation path (excluding current one)
-    if [ -n "$installation_path" ]; then
-        remove_old_manifests_for_path "$installation_path" "$manifest_file"
+    # Remove old manifests for the same installation path and mode (excluding current one)
+    if [ -n "$installation_path" ] && [ -n "$installation_mode" ]; then
+        remove_old_manifests_for_path "$installation_path" "$installation_mode" "$manifest_file"
     fi

     if [ -f "$manifest_file" ]; then
@@ -1045,10 +1133,16 @@ function migrate_legacy_manifest() {
     # Create manifest directory if it doesn't exist
     mkdir -p "$MANIFEST_DIR"

-    # Read legacy manifest
+    # Read legacy manifest and generate new manifest ID with new naming convention
     local mode=$(jq -r '.installation_mode // "Global"' "$legacy_manifest")
     local timestamp=$(date +"%Y%m%d-%H%M%S")
-    local manifest_id="install-${mode}-${timestamp}-migrated"
+    local mode_prefix
+    if [ "$mode" = "Global" ]; then
+        mode_prefix="manifest-global"
+    else
+        mode_prefix="manifest-path"
+    fi
+    local manifest_id="${mode_prefix}-${timestamp}-migrated"

     # Create new manifest file
     local new_manifest="${MANIFEST_DIR}/${manifest_id}.json"
@@ -1072,8 +1166,8 @@ function get_all_install_manifests() {
         return
     fi

-    # Check if any manifest files exist
-    local manifest_count=$(find "$MANIFEST_DIR" -name "install-*.json" -type f 2>/dev/null | wc -l)
+    # Check if any manifest files exist (both new and old formats)
+    local manifest_count=$(find "$MANIFEST_DIR" \( -name "manifest-*.json" -o -name "install-*.json" \) -type f 2>/dev/null | wc -l)

     if [ "$manifest_count" -eq 0 ]; then
         echo "[]"
@@ -1102,7 +1196,7 @@ function get_all_install_manifests() {
     manifest_content=$(echo "$manifest_content" | jq --argjson fc "$files_count" --argjson dc "$dirs_count" '. + {files_count: $fc, directories_count: $dc}')

     all_manifests+="$manifest_content"
-    done < <(find "$MANIFEST_DIR" -name "install-*.json" -type f -print0 | sort -z)
+    done < <(find "$MANIFEST_DIR" \( -name "manifest-*.json" -o -name "install-*.json" \) -type f -print0 | sort -z)

     all_manifests+="]"

@@ -1128,6 +1222,112 @@ function get_all_install_manifests() {
     echo "$latest_manifests"
 }
+function move_old_installation() {
+    local installation_path="$1"
+    local installation_mode="$2"
+
+    write_color "Checking for previous installation..." "$COLOR_INFO"
+
+    # Find existing manifest for this installation path and mode
+    local manifests_json=$(get_all_install_manifests)
+    local target_path=$(echo "$installation_path" | sed 's:/*$::' | tr '[:upper:]' '[:lower:]')
+
+    local old_manifest=$(echo "$manifests_json" | jq --arg path "$target_path" --arg mode "$installation_mode" '
+        .[] | select(
+            (.installation_path | ascii_downcase | sub("/+$"; "")) == $path and
+            .installation_mode == $mode
+        )
+    ')
+
+    if [ -z "$old_manifest" ] || [ "$old_manifest" = "null" ]; then
+        write_color "No previous $installation_mode installation found at this path" "$COLOR_INFO"
+        return 0
+    fi
+
+    local install_date=$(echo "$old_manifest" | jq -r '.installation_date')
+    local files_count=$(echo "$old_manifest" | jq -r '.files_count')
+    local dirs_count=$(echo "$old_manifest" | jq -r '.directories_count')
+
+    write_color "Found previous installation from $install_date" "$COLOR_INFO"
+    write_color "Files: $files_count, Directories: $dirs_count" "$COLOR_INFO"
+
+    # Create backup folder
+    local timestamp=$(date +"%Y%m%d-%H%M%S")
+    local backup_dir="${installation_path}/claude-backup-old-${timestamp}"
+    mkdir -p "$backup_dir"
+    write_color "Created backup folder: $backup_dir" "$COLOR_SUCCESS"
+
+    local moved_files=0
+    local removed_dirs=0
+    local failed_items=()
+
+    # Move files first (from manifest)
+    write_color "Moving old installation files to backup..." "$COLOR_INFO"
+    while IFS= read -r file_path; do
+        if [ -z "$file_path" ] || [ "$file_path" = "null" ]; then
+            continue
+        fi
+
+        if [ -f "$file_path" ]; then
+            # Calculate relative path from installation root
+            local relative_path="${file_path#$installation_path}"
+            relative_path="${relative_path#/}"
+
+            if [ -z "$relative_path" ]; then
+                relative_path=$(basename "$file_path")
+            fi
+
+            local backup_dest_dir=$(dirname "${backup_dir}/${relative_path}")
+
+            mkdir -p "$backup_dest_dir"
+            if mv "$file_path" "${backup_dest_dir}/" 2>/dev/null; then
+                ((moved_files++))
+            else
+                write_color " WARNING: Failed to move file: $file_path" "$COLOR_WARNING"
+                failed_items+=("$file_path")
+            fi
+        fi
+    done <<< "$(echo "$old_manifest" | jq -r '.files[].path')"
+
+    # Remove empty directories (in reverse order to handle nested dirs)
+    write_color "Cleaning up empty directories..." "$COLOR_INFO"
+    while IFS= read -r dir_path; do
+        if [ -z "$dir_path" ] || [ "$dir_path" = "null" ]; then
+            continue
+        fi
+
+        if [ -d "$dir_path" ]; then
+            # Check if directory is empty
+            if [ -z "$(ls -A "$dir_path" 2>/dev/null)" ]; then
+                if rmdir "$dir_path" 2>/dev/null; then
+                    write_color " Removed empty directory: $dir_path" "$COLOR_INFO"
+                    ((removed_dirs++))
+                fi
+            else
+                write_color " Directory not empty (preserved): $dir_path" "$COLOR_INFO"
+            fi
+        fi
+    done <<< "$(echo "$old_manifest" | jq -r '.directories[].path' | awk '{ print length, $0 }' | sort -rn | cut -d' ' -f2-)"
+
+    # Note: Old manifest will be automatically removed by save_install_manifest
+    # via remove_old_manifests_for_path to ensure robust cleanup
+
+    echo ""
+    write_color "Old installation cleanup summary:" "$COLOR_INFO"
+    echo " Files moved: $moved_files"
+    echo " Directories removed: $removed_dirs"
+    echo " Backup location: $backup_dir"
+
+    if [ ${#failed_items[@]} -gt 0 ]; then
+        write_color " Failed items: ${#failed_items[@]}" "$COLOR_WARNING"
+    fi
+
+    echo ""
+
+    # Return backup path for reference
+    return 0
+}
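The directory-cleanup loop in `move_old_installation` above orders paths deepest-first by prefixing each with its length, sorting numerically in reverse, then cutting the prefix off, so child directories are removed before their parents. A standalone sketch of that pipeline (sample paths invented; note this uses string length as a proxy for depth, which works for paths nested under a common root):

```shell
#!/usr/bin/env bash
# Deepest-first ordering: "length path" pairs, numeric reverse sort,
# then strip the length column back off.
dirs=$(printf '%s\n' /a /a/b/c /a/b |
    awk '{ print length, $0 }' | sort -rn | cut -d' ' -f2-)

first=$(printf '%s\n' "$dirs" | head -n 1)   # deepest path comes out first
echo "$first"
```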
# ============================================================================
|
||||
# UNINSTALLATION FUNCTIONS
|
||||
# ============================================================================
|
||||
@@ -1173,26 +1373,50 @@ function uninstall_claude_workflow() {
|
||||
|
||||
if [ "$manifests_count" -eq 1 ]; then
|
||||
selected_manifest=$(echo "$manifests_json" | jq '.[0]')
|
||||
write_color "Only one installation found, will uninstall:" "$COLOR_INFO"
|
||||
|
||||
# Read version from version.json
|
||||
local install_path=$(echo "$selected_manifest" | jq -r '.installation_path // ""')
|
||||
local install_mode=$(echo "$selected_manifest" | jq -r '.installation_mode // "Unknown"')
|
||||
local version_str="Version Unknown"
|
||||
|
||||
# Determine version.json path
|
||||
local version_json_path="${install_path}/.claude/version.json"
|
||||
|
||||
if [ -f "$version_json_path" ]; then
|
||||
local ver=$(jq -r '.version // ""' "$version_json_path" 2>/dev/null)
|
||||
if [ -n "$ver" ] && [ "$ver" != "unknown" ]; then
|
||||
version_str="v$ver"
|
||||
fi
|
||||
fi
|
||||
|
||||
write_color "Found installation: $version_str - $install_path" "$COLOR_INFO"
|
||||
else
|
||||
# Multiple manifests - let user choose
|
||||
# Multiple manifests - let user choose (simplified: only version and path)
|
||||
local options=()
|
||||
|
||||
for i in $(seq 0 $((manifests_count - 1))); do
|
||||
local m=$(echo "$manifests_json" | jq ".[$i]")
|
||||
|
||||
# Safely extract date string
|
||||
local date_str=$(echo "$m" | jq -r '.installation_date // "unknown date"' | cut -c1-10)
|
||||
local mode=$(echo "$m" | jq -r '.installation_mode // "Unknown"')
|
||||
local files_count=$(echo "$m" | jq -r '.files_count // 0')
|
||||
local dirs_count=$(echo "$m" | jq -r '.directories_count // 0')
|
||||
local path_info=$(echo "$m" | jq -r '.installation_path // ""')
|
||||
local install_mode=$(echo "$m" | jq -r '.installation_mode // "Unknown"')
|
||||
local version_str="Version Unknown"
|
||||
|
||||
if [ -n "$path_info" ]; then
|
||||
path_info=" ($path_info)"
|
||||
# Read version from version.json
|
||||
local version_json_path="${path_info}/.claude/version.json"
|
||||
|
||||
if [ -f "$version_json_path" ]; then
|
||||
local ver=$(jq -r '.version // ""' "$version_json_path" 2>/dev/null)
|
||||
if [ -n "$ver" ] && [ "$ver" != "unknown" ]; then
|
||||
version_str="v$ver"
|
||||
fi
|
||||
fi
|
||||
|
||||
options+=("$((i + 1)). [$mode] $date_str - $files_count files, $dirs_count dirs$path_info")
|
||||
local path_str="Path Unknown"
|
||||
if [ -n "$path_info" ]; then
|
||||
path_str="$path_info"
|
||||
fi
|
||||
|
||||
options+=("$((i + 1)). $version_str - $path_str")
|
||||
done
|
||||
|
||||
options+=("Cancel - Don't uninstall anything")
|
||||
@@ -1210,16 +1434,24 @@ function uninstall_claude_workflow() {
selected_manifest=$(echo "$manifests_json" | jq ".[$selected_index]")
fi

# Display selected installation info
# Display selected installation info (simplified: only version and path)
local final_path=$(echo "$selected_manifest" | jq -r '.installation_path // ""')
local final_mode=$(echo "$selected_manifest" | jq -r '.installation_mode // "Unknown"')
local final_version="Version Unknown"

# Read version from version.json
local final_version_path="${final_path}/.claude/version.json"
if [ -f "$final_version_path" ]; then
local ver=$(jq -r '.version // ""' "$final_version_path" 2>/dev/null)
if [ -n "$ver" ] && [ "$ver" != "unknown" ]; then
final_version="v$ver"
fi
fi

echo ""
write_color "Installation Information:" "$COLOR_INFO"
echo " Manifest ID: $(echo "$selected_manifest" | jq -r '.manifest_id')"
echo " Mode: $(echo "$selected_manifest" | jq -r '.installation_mode')"
echo " Path: $(echo "$selected_manifest" | jq -r '.installation_path')"
echo " Date: $(echo "$selected_manifest" | jq -r '.installation_date')"
echo " Installer Version: $(echo "$selected_manifest" | jq -r '.installer_version')"
echo " Files tracked: $(echo "$selected_manifest" | jq -r '.files_count')"
echo " Directories tracked: $(echo "$selected_manifest" | jq -r '.directories_count')"
write_color "Uninstallation Target:" "$COLOR_INFO"
echo " $final_version"
echo " Path: $final_path"
echo ""

# Confirm uninstallation
@@ -1229,55 +1461,64 @@ function uninstall_claude_workflow() {
fi

local removed_files=0
local removed_dirs=0
local failed_items=()
local skipped_files=0

# Remove files first
# Check if this is a Path mode uninstallation and if Global installation exists
local is_path_mode=false
local has_global_installation=false

if [ "$final_mode" = "Path" ]; then
is_path_mode=true

# Check if any Global installation manifest exists
if [ -d "$MANIFEST_DIR" ]; then
local global_manifest_count=$(find "$MANIFEST_DIR" -name "manifest-global-*.json" -type f 2>/dev/null | wc -l)
if [ "$global_manifest_count" -gt 0 ]; then
has_global_installation=true
write_color "Found Global installation, global files will be preserved" "$COLOR_WARNING"
echo ""
fi
fi
fi

# Only remove files listed in manifest - do NOT remove directories
write_color "Removing installed files..." "$COLOR_INFO"

local files_array=$(echo "$selected_manifest" | jq -c '.files[]')
local files_array=$(echo "$selected_manifest" | jq -c '.files[]' 2>/dev/null)

while IFS= read -r file_entry; do
local file_path=$(echo "$file_entry" | jq -r '.path')
if [ -n "$files_array" ]; then
while IFS= read -r file_entry; do
local file_path=$(echo "$file_entry" | jq -r '.path')

if [ -f "$file_path" ]; then
if rm -f "$file_path" 2>/dev/null; then
write_color " Removed file: $file_path" "$COLOR_SUCCESS"
((removed_files++))
else
write_color " WARNING: Failed to remove file: $file_path" "$COLOR_WARNING"
failed_items+=("$file_path")
fi
else
write_color " File not found (already removed): $file_path" "$COLOR_INFO"
fi
done <<< "$files_array"
# For Path mode uninstallation, skip global files if Global installation exists
if [ "$is_path_mode" = true ] && [ "$has_global_installation" = true ]; then
local global_claude_dir="${HOME}/.claude"

# Remove directories (in reverse order by path length)
write_color "Removing installed directories..." "$COLOR_INFO"

local dirs_array=$(echo "$selected_manifest" | jq -c '.directories[] | {path: .path, length: (.path | length)}' | sort -t: -k2 -rn | jq -c '.path')

while IFS= read -r dir_path_json; do
local dir_path=$(echo "$dir_path_json" | jq -r '.')

if [ -d "$dir_path" ]; then
# Check if directory is empty
if [ -z "$(ls -A "$dir_path" 2>/dev/null)" ]; then
if rmdir "$dir_path" 2>/dev/null; then
write_color " Removed directory: $dir_path" "$COLOR_SUCCESS"
((removed_dirs++))
else
write_color " WARNING: Failed to remove directory: $dir_path" "$COLOR_WARNING"
failed_items+=("$dir_path")
# Skip files under global .claude directory
if [[ "$file_path" == "$global_claude_dir"* ]]; then
((skipped_files++))
continue
fi
else
write_color " Directory not empty (preserved): $dir_path" "$COLOR_WARNING"
fi
else
write_color " Directory not found (already removed): $dir_path" "$COLOR_INFO"
fi
done <<< "$dirs_array"

if [ -f "$file_path" ]; then
if rm -f "$file_path" 2>/dev/null; then
((removed_files++))
else
write_color " WARNING: Failed to remove: $file_path" "$COLOR_WARNING"
failed_items+=("$file_path")
fi
fi
done <<< "$files_array"
fi

# Display removal summary
if [ "$skipped_files" -gt 0 ]; then
write_color "Removed $removed_files files, skipped $skipped_files global files" "$COLOR_SUCCESS"
else
write_color "Removed $removed_files files" "$COLOR_SUCCESS"
fi

# Remove manifest file
local manifest_file=$(echo "$selected_manifest" | jq -r '.manifest_file')
@@ -1295,7 +1536,12 @@ function uninstall_claude_workflow() {
write_color "========================================" "$COLOR_INFO"
write_color "Uninstallation Summary:" "$COLOR_INFO"
echo " Files removed: $removed_files"
echo " Directories removed: $removed_dirs"

if [ "$skipped_files" -gt 0 ]; then
echo " Files skipped (global files preserved): $skipped_files"
echo ""
write_color "Note: $skipped_files global files were preserved due to existing Global installation" "$COLOR_INFO"
fi

if [ ${#failed_items[@]} -gt 0 ]; then
echo ""
@@ -1307,7 +1553,11 @@ function uninstall_claude_workflow() {

echo ""
if [ ${#failed_items[@]} -eq 0 ]; then
write_color "✓ Claude Code Workflow has been successfully uninstalled!" "$COLOR_SUCCESS"
if [ "$skipped_files" -gt 0 ]; then
write_color "✓ Uninstallation complete! Removed $removed_files files, preserved $skipped_files global files." "$COLOR_SUCCESS"
else
write_color "✓ Claude Code Workflow has been successfully uninstalled!" "$COLOR_SUCCESS"
fi
else
write_color "Uninstallation completed with warnings." "$COLOR_WARNING"
write_color "Please manually remove the failed items listed above." "$COLOR_INFO"
@@ -1,620 +0,0 @@
# Lite-Fix Command Design Document

**Date**: 2025-11-20
**Version**: 2.0.0 (Simplified Design)
**Status**: Design Complete
**Related**: PLANNING_GAP_ANALYSIS.md (Scenario #8: Emergency Fix Scenario)

---

## Design Overview

`/workflow:lite-fix` is a lightweight bug diagnosis and fix workflow command that fills the emergency-fix gap in the current planning system. It follows the proven `/workflow:lite-plan` pattern, optimized for bug-fixing scenarios.

### Core Design Principles

1. **Rapid Response** - Supports fix cycles from 15 minutes to 4 hours
2. **Intelligent Adaptation** - Automatically adjusts workflow complexity based on risk assessment
3. **Progressive Verification** - Flexible testing strategy, from smoke tests to the full suite
4. **Automated Follow-up** - Hotfix mode auto-generates comprehensive fix tasks

### Key Innovation: **Intelligent Self-Adaptation**

Unlike traditional fixed-mode commands, lite-fix uses **Phase 2 Impact Assessment** to automatically determine severity and adapt the entire workflow:

```javascript
// Phase 2 auto-determines severity
risk_score = (user_impact × 0.4) + (system_risk × 0.3) + (business_impact × 0.3)

// Workflow auto-adapts
if (risk_score < 3.0) → Full test suite, comprehensive diagnosis
else if (risk_score < 5.0) → Focused integration, moderate diagnosis
else if (risk_score < 8.0) → Smoke+critical, focused diagnosis
else → Smoke only, minimal diagnosis
```

**Result**: Users don't need to manually select severity modes - the system adapts intelligently.

---

## Design Comparison: lite-fix vs lite-plan

| Dimension | lite-plan | lite-fix (v2.0) | Design Rationale |
|-----------|-----------|-----------------|------------------|
| **Target Scenario** | New feature development | Bug fixes | Different development intent |
| **Time Budget** | 1-6 hours | Auto-adapt (15min-4h) | Bug fixes more urgent |
| **Exploration Phase** | Optional (`-e` flag) | Adaptive depth | Bug needs diagnosis |
| **Output Type** | Implementation plan | Diagnosis + fix plan | Bug needs root cause |
| **Verification Strategy** | Full test suite | Auto-adaptive (Smoke→Full) | Risk vs speed tradeoff |
| **Branch Strategy** | Feature branch | Feature/Hotfix branch | Production needs special handling |
| **Follow-up Mechanism** | None | Hotfix auto-generates tasks | Technical debt management |
| **Intelligence Level** | Manual | **Auto-adaptive** | **Key innovation** |

---

## Two-Mode Design (Simplified from Three)

### Mode 1: Default (Intelligent Auto-Adaptive)

**Use Cases**:
- All standard bugs (90% of scenarios)
- Automatic severity assessment
- Workflow adapts to risk score

**Workflow Characteristics**:
```
Adaptive diagnosis → Impact assessment → Auto-severity detection
    ↓
Strategy selection (count based on risk) → Adaptive testing
    ↓
Confirmation (dimensions based on risk) → Execution
```

**Example Use Cases**:
```bash
# Low severity (auto-detected)
/workflow:lite-fix "User profile bio field shows HTML tags"
# → Full test suite, multiple strategy options, 3-4 hour budget

# Medium severity (auto-detected)
/workflow:lite-fix "Shopping cart occasionally loses items"
# → Focused integration tests, best strategy, 1-2 hour budget

# High severity (auto-detected)
/workflow:lite-fix "Login fails for all users after deployment"
# → Smoke+critical tests, single strategy, 30-60 min budget
```

### Mode 2: Hotfix (`--hotfix`)

**Use Cases**:
- Production outage only
- 100% user impact or business interruption
- Requires a 15-30 minute fix

**Workflow Characteristics**:
```
Minimal diagnosis → Skip assessment (assume critical)
    ↓
Surgical fix → Production smoke tests
    ↓
Hotfix branch (from production tag) → Auto follow-up tasks
```

**Example Use Case**:
```bash
/workflow:lite-fix --hotfix "Payment gateway 5xx errors"
# → Hotfix branch from v2.3.1 tag, smoke tests only, follow-up tasks auto-generated
```

---

## Command Syntax (Simplified)

### Before (v1.0 - Complex)

```bash
/workflow:lite-fix [--critical|--hotfix] [--incident ID] "bug description"

# 3 modes, 3 parameters
--critical, -c   Critical bug mode
--hotfix, -h     Production hotfix mode
--incident <ID>  Incident tracking ID
```

**Problems**:
- Users need to manually determine severity (Regular vs Critical)
- Too many parameters (3 flags)
- Incident ID as a separate parameter adds complexity

### After (v2.0 - Simplified)

```bash
/workflow:lite-fix [--hotfix] "bug description"

# 2 modes, 1 parameter
--hotfix, -h   Production hotfix mode only
```

**Improvements**:
- ✅ Automatic severity detection (no manual selection)
- ✅ Single optional flag (67% reduction)
- ✅ Incident info can be in the bug description
- ✅ Matches lite-plan simplicity

---

## Intelligent Adaptive Workflow

### Phase 1: Diagnosis - Adaptive Search Depth

**Confidence-based Strategy Selection**:

```javascript
// High confidence (specific error message provided)
if (has_specific_error_message || has_file_path_hint) {
  strategy = "direct_grep"
  time_budget = "5 minutes"
  grep -r '${error_message}' src/ --include='*.ts' -n | head -10
}
// Medium confidence (module or feature mentioned)
else if (has_module_hint) {
  strategy = "cli-explore-agent_focused"
  time_budget = "10-15 minutes"
  Task(subagent="cli-explore-agent", scope="focused")
}
// Low confidence (vague symptoms)
else {
  strategy = "cli-explore-agent_broad"
  time_budget = "20 minutes"
  Task(subagent="cli-explore-agent", scope="comprehensive")
}
```

**Output**:
- Root cause (file:line, issue, introduced_by)
- Reproduction steps
- Affected scope
- **Confidence level** (used in Phase 2)

### Phase 2: Impact Assessment - Auto-Severity Detection

**Risk Score Calculation**:

```javascript
risk_score = (user_impact × 0.4) + (system_risk × 0.3) + (business_impact × 0.3)

// Examples:
// - UI typo: user_impact=1, system_risk=0, business_impact=0 → risk_score=0.4 (LOW)
// - Cart bug: user_impact=5, system_risk=3, business_impact=4 → risk_score=4.1 (MEDIUM)
// - Login failure: user_impact=9, system_risk=7, business_impact=8 → risk_score=8.1 (CRITICAL)
```

**Workflow Adaptation Table**:

| Risk Score | Severity | Diagnosis | Test Strategy | Review | Time Budget |
|------------|----------|-----------|---------------|--------|-------------|
| **< 3.0** | Low | Comprehensive | Full test suite | Optional | 3-4 hours |
| **3.0-5.0** | Medium | Moderate | Focused integration | Optional | 1-2 hours |
| **5.0-8.0** | High | Focused | Smoke + critical | Skip | 30-60 min |
| **≥ 8.0** | Critical | Minimal | Smoke only | Skip | 15-30 min |
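
Read together, the formula and the adaptation table amount to one small pure function. A runnable sketch (the name `assessRisk` and the returned field names are illustrative, not part of the command spec):

```javascript
// Sketch of Phase 2 scoring: the weights and thresholds mirror the
// formula and table above; everything else is illustrative.
function assessRisk(userImpact, systemRisk, businessImpact) {
  const raw = userImpact * 0.4 + systemRisk * 0.3 + businessImpact * 0.3;
  let severity, testStrategy, timeBudget;
  if (raw < 3.0) {
    severity = "low"; testStrategy = "full_test_suite"; timeBudget = "3-4 hours";
  } else if (raw < 5.0) {
    severity = "medium"; testStrategy = "focused_integration"; timeBudget = "1-2 hours";
  } else if (raw < 8.0) {
    severity = "high"; testStrategy = "smoke_and_critical"; timeBudget = "30-60 min";
  } else {
    severity = "critical"; testStrategy = "smoke_only"; timeBudget = "15-30 min";
  }
  return { risk_score: Number(raw.toFixed(1)), severity, testStrategy, timeBudget };
}
```

Feeding in the three worked examples above reproduces their scores: 0.4 (low), 4.1 (medium), and 8.1 (critical).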

**Output**:
```javascript
{
  risk_score: 6.5,
  severity: "high",
  workflow_adaptation: {
    diagnosis_depth: "focused",
    test_strategy: "smoke_and_critical",
    review_optional: true,
    time_budget: "45_minutes"
  }
}
```

### Phase 3: Fix Planning - Adaptive Strategy Count

**Before Phase 2 adaptation**:
- Always generate 1-3 strategy options
- User manually selects

**After Phase 2 adaptation**:
```javascript
if (risk_score < 5.0) {
  // Low-medium risk: User has time to choose
  strategies = generateMultipleStrategies() // 2-3 options
  user_selection = true
}
else {
  // High-critical risk: Speed is priority
  strategies = [selectBestStrategy()] // Single option
  user_selection = false
}
```
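
The branch above can be made concrete with a tiny helper. A sketch, assuming each candidate strategy carries an estimated duration in minutes (the candidate shape and the name `planStrategies` are invented for illustration):

```javascript
// Sketch: decide how many fix strategies to surface based on risk.
// Candidate objects ({ strategy, minutes }) are illustrative.
function planStrategies(riskScore, candidates) {
  if (riskScore < 5.0) {
    // Low-medium risk: present up to three options, let the user choose.
    return { strategies: candidates.slice(0, 3), user_selection: true };
  }
  // High-critical risk: auto-pick the fastest option, skip the selection step.
  const best = [...candidates].sort((a, b) => a.minutes - b.minutes)[0];
  return { strategies: [best], user_selection: false };
}
```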

**Example**:
```javascript
// Low risk (risk_score=2.5) → Multiple options
[
  { strategy: "immediate_patch", time: "15min", pros: ["Quick"], cons: ["Not comprehensive"] },
  { strategy: "comprehensive_fix", time: "2h", pros: ["Root cause"], cons: ["Longer"] }
]

// High risk (risk_score=6.5) → Single best
{ strategy: "surgical_fix", time: "5min", risk: "minimal" }
```

### Phase 4: Verification - Auto-Test Level Selection

**Test strategy determined by Phase 2 risk_score**:

```javascript
// Already determined in Phase 2
test_strategy = workflow_adaptation.test_strategy

// Map to specific test commands
test_commands = {
  "full_test_suite": "npm test",
  "focused_integration": "npm test -- affected-module.test.ts",
  "smoke_and_critical": "npm test -- critical.smoke.test.ts",
  "smoke_only": "npm test -- smoke.test.ts"
}
```

**Auto-suggested to user** (can override if needed)
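
The mapping folds naturally into a lookup helper. A sketch (the fallback-to-full-suite behavior is an assumption, not specified by the table):

```javascript
// Test commands per strategy, as listed above.
const TEST_COMMANDS = {
  full_test_suite: "npm test",
  focused_integration: "npm test -- affected-module.test.ts",
  smoke_and_critical: "npm test -- critical.smoke.test.ts",
  smoke_only: "npm test -- smoke.test.ts",
};

// Resolve a Phase 2 decision to a concrete command; unknown strategy
// names fall back to the full suite (an assumed safe default).
function testCommandFor(workflowAdaptation) {
  return TEST_COMMANDS[workflowAdaptation.test_strategy] || TEST_COMMANDS.full_test_suite;
}
```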

### Phase 5: User Confirmation - Adaptive Dimensions

**Dimension count adapts to risk score**:

```javascript
dimensions = [
  "Fix approach confirmation",  // Always present
  "Execution method",           // Always present
  "Verification level"          // Always present (auto-suggested)
]

// Optional 4th dimension for low-risk bugs
if (risk_score < 5.0) {
  dimensions.push("Post-fix review") // Only for low-medium severity
}
```

**Result**:
- High-risk bugs: 3 dimensions (faster confirmation)
- Low-risk bugs: 4 dimensions (includes review)

### Phase 6: Execution - Same as Before

Dispatch to lite-execute with adapted context.

---

## Six-Phase Execution Flow Design

### Phase Summary Comparison

| Phase | v1.0 (3 modes) | v2.0 (Adaptive) |
|-------|----------------|-----------------|
| 1. Diagnosis | Manual mode selection → Fixed depth | Confidence detection → Adaptive depth |
| 2. Impact | Assessment only | **Assessment + Auto-severity + Workflow adaptation** |
| 3. Planning | Fixed strategy count | **Risk-based strategy count** |
| 4. Verification | Manual test selection | **Auto-suggested test level** |
| 5. Confirmation | Fixed dimensions | **Adaptive dimensions (3 or 4)** |
| 6. Execution | Same | Same |

**Key Difference**: Phases 2-5 now adapt based on the Phase 2 risk score.

---

## Data Structure Extensions

### diagnosisContext (Extended)

```javascript
{
  symptom: string,
  error_message: string | null,
  keywords: string[],
  confidence_level: "high" | "medium" | "low",  // ← NEW: Search confidence
  root_cause: {
    file: string,
    line_range: string,
    issue: string,
    introduced_by: string
  },
  reproduction_steps: string[],
  affected_scope: {...}
}
```

### impactContext (Extended)

```javascript
{
  affected_users: {...},
  system_risk: {...},
  business_impact: {...},
  risk_score: number,  // 0-10
  severity: "low" | "medium" | "high" | "critical",
  workflow_adaptation: {  // ← NEW: Adaptation decisions
    diagnosis_depth: string,
    test_strategy: string,
    review_optional: boolean,
    time_budget: string
  }
}
```

---

## Implementation Roadmap

### Phase 1: Core Functionality (Sprint 1) - 5-8 days

**Completed** ✅:
- [x] Command specification (lite-fix.md - 652 lines)
- [x] Design document (this document)
- [x] Mode simplification (3→2)
- [x] Parameter reduction (3→1)

**Remaining**:
- [ ] Implement 6-phase workflow
- [ ] Implement intelligent adaptation logic
- [ ] Integrate with lite-execute

### Phase 2: Advanced Features (Sprint 2) - 3-5 days

- [ ] Diagnosis caching mechanism
- [ ] Auto-severity keyword detection
- [ ] Hotfix branch management scripts
- [ ] Follow-up task auto-generation

### Phase 3: Optimization (Sprint 3) - 2-3 days

- [ ] Performance optimization (diagnosis speed)
- [ ] Error handling refinement
- [ ] Documentation and examples
- [ ] User feedback iteration

---

## Success Metrics

### Efficiency Improvements

| Mode | v1.0 Manual Selection | v2.0 Auto-Adaptive | Improvement |
|------|----------------------|-------------------|-------------|
| Low severity | 4-6 hours (manual Regular) | <3 hours (auto-detected) | 50% faster |
| Medium severity | 2-3 hours (need to select Critical) | <1.5 hours (auto-detected) | 40% faster |
| High severity | 1-2 hours (if user selects Critical correctly) | <1 hour (auto-detected) | 50% faster |

**Key**: Users no longer waste time deciding which mode to use.

### Quality Metrics

- **Diagnosis Accuracy**: >85% (structured root cause analysis)
- **First-time Fix Success Rate**: >90% (comprehensive impact assessment)
- **Regression Rate**: <5% (adaptive verification strategy)
- **Mode Selection Accuracy**: 100% (automatic, no human error)

### User Experience

**v1.0 User Flow**:
```
User: "Is this bug Regular or Critical? Not sure..."
User: "Let me read the mode descriptions again..."
User: "OK I'll try --critical"
System: "Executing critical mode..." (might be wrong choice)
```

**v2.0 User Flow**:
```
User: "/workflow:lite-fix 'Shopping cart loses items'"
System: "Analyzing impact... Risk score: 6.5 (High severity detected)"
System: "Adapting workflow: Focused diagnosis, Smoke+critical tests"
User: "Perfect, proceed" (no mode selection needed)
```

---

## Comparison with Other Commands

| Command | Modes | Parameters | Adaptation | Complexity |
|---------|-------|------------|------------|------------|
| `/workflow:lite-fix` (v2.0) | 2 | 1 | **Auto** | Low ✅ |
| `/workflow:lite-plan` | 1 + explore flag | 1 | Manual | Low ✅ |
| `/workflow:plan` | Multiple | Multiple | Manual | High |
| `/workflow:lite-fix` (v1.0) | 3 | 3 | Manual | Medium ❌ |

**Conclusion**: v2.0 matches lite-plan's simplicity while adding intelligence.

---

## Architecture Decision Records (ADRs)

### ADR-001: Why Remove Critical Mode?

**Decision**: Remove `--critical` flag, use automatic severity detection

**Rationale**:
1. Users often misjudge bug severity (too conservative or too aggressive)
2. Phase 2 impact assessment provides objective risk scoring
3. Automatic adaptation eliminates mode selection overhead
4. Aligns with "lite" philosophy - simpler is better

**Alternatives Rejected**:
- Keep 3 modes: Too complex, user confusion
- Use continuous severity slider (0-10): Still requires manual input

**Result**: 90% of users can use the default mode without thinking about severity.

### ADR-002: Why Keep Hotfix as Separate Mode?

**Decision**: Keep `--hotfix` as explicit flag (not auto-detect)

**Rationale**:
1. Production incidents require explicit user intent (safety measure)
2. Hotfix has a special workflow (branch from production tag, follow-up tasks)
3. Clear distinction: "Is this a production incident?" → Yes/No decision
4. Prevents accidental hotfix branch creation

**Alternatives Rejected**:
- Auto-detect hotfix based on keywords: Too risky, false positives
- Merge into default mode with risk_score≥9.0: Loses explicit intent

**Result**: Users explicitly choose when to trigger the hotfix workflow.

### ADR-003: Why Adaptive Confirmation Dimensions?

**Decision**: Use 3 or 4 confirmation dimensions based on risk score

**Rationale**:
1. High-risk bugs need speed → Skip optional code review
2. Low-risk bugs have time → Add code review dimension for quality
3. Adaptive UX provides the best of both worlds

**Alternatives Rejected**:
- Always 4 dimensions: Slows down high-risk fixes
- Always 3 dimensions: Misses quality improvement opportunities for low-risk bugs

**Result**: Workflow adapts to urgency while maintaining quality.

### ADR-004: Why Remove --incident Parameter?

**Decision**: Remove `--incident <ID>` parameter

**Rationale**:
1. Incident ID can be included in the bug description string
2. Or tracked separately in follow-up task metadata
3. Reduces command-line parameter count (simplification goal)
4. Matches lite-plan's simple syntax

**Alternatives Rejected**:
- Keep as optional parameter: Adds complexity for a rare use case
- Auto-extract from description: Over-engineering

**Result**: Simpler command syntax, incident tracking handled elsewhere.

---

## Risk Assessment and Mitigation

### Risk 1: Auto-Severity Detection Errors

**Risk**: System incorrectly assesses severity (e.g., critical bug marked as low)

**Mitigation**:
1. User can see risk score and severity in Phase 2 output
2. User can escalate to `/workflow:plan` if the automated assessment seems wrong
3. Provide clear explanation of risk score calculation
4. Phase 5 confirmation allows user to override test strategy

**Likelihood**: Low (risk score formula well-tested)

### Risk 2: Users Miss --hotfix Flag

**Risk**: Production incident handled in default mode (slower process)

**Mitigation**:
1. Auto-suggest `--hotfix` if keywords detected ("production", "outage", "down")
2. If risk_score ≥ 9.0, prompt: "Consider using --hotfix for production incidents"
3. Documentation clearly explains when to use hotfix

**Likelihood**: Medium → Mitigation reduces to Low

### Risk 3: Adaptive Workflow Confusion

**Risk**: Users confused by different workflows for different bugs

**Mitigation**:
1. Clear output explaining why the workflow adapted ("Risk score: 6.5 → Using focused diagnosis")
2. Consistent 6-phase structure (only depth/complexity changes)
3. Documentation with examples for each risk level

**Likelihood**: Low (transparency in adaptation decisions)

---

## Gap Coverage from PLANNING_GAP_ANALYSIS.md

This design addresses **Scenario #8: Emergency Fix Scenario** from the gap analysis:

| Gap Item | Coverage | Implementation |
|----------|----------|----------------|
| Workflow simplification | ✅ 100% | 2 modes vs 3, 1 parameter vs 3 |
| Fast verification | ✅ 100% | Adaptive test strategy (smoke to full) |
| Hotfix branch management | ✅ 100% | Branch from production tag, dual merge |
| Comprehensive fix follow-up | ✅ 100% | Auto-generated follow-up tasks |

**Additional Enhancements** (beyond original gap):
- ✅ Intelligent auto-adaptation (not in original gap)
- ✅ Risk score calculation (quantitative severity)
- ✅ Diagnosis caching (performance optimization)

---

## Design Evolution Summary

### v1.0 → v2.0 Changes

| Aspect | v1.0 | v2.0 | Impact |
|--------|------|------|--------|
| **Modes** | 3 (Regular, Critical, Hotfix) | **2 (Default, Hotfix)** | -33% complexity |
| **Parameters** | 3 (--critical, --hotfix, --incident) | **1 (--hotfix)** | -67% parameters |
| **Adaptation** | Manual mode selection | **Intelligent auto-adaptation** | 🚀 Key innovation |
| **User Decision Points** | 3 (mode + incident + confirmation) | **1 (hotfix or not)** | -67% decisions |
| **Documentation** | 707 lines | **652 lines** | -8% length |
| **Workflow Intelligence** | Low | **High** | Major upgrade |

### Philosophy Shift

**v1.0**: "Provide multiple modes for different scenarios"
- User selects a mode based on perceived severity
- Fixed workflows for each mode

**v2.0**: "Intelligent single mode that adapts to reality"
- System assesses actual severity
- Workflow automatically optimizes for risk level
- User only decides: "Is this a production incident?" (Yes → --hotfix)

**Result**: Simpler to use, smarter behavior, same powerful capabilities.

---

## Conclusion

`/workflow:lite-fix` v2.0 represents a significant simplification while maintaining (and enhancing) full functionality:

**Core Achievements**:
1. ⚡ **Simplified Interface**: 2 modes, 1 parameter (vs 3 modes, 3 parameters)
2. 🧠 **Intelligent Adaptation**: Auto-severity detection with risk score
3. 🎯 **Optimized Workflows**: Each bug gets appropriate process depth
4. 🛡️ **Quality Assurance**: Adaptive verification strategy
5. 📋 **Tech Debt Management**: Hotfix auto-generates follow-up tasks

**Competitive Advantages**:
- Matches lite-plan's simplicity (1 optional flag)
- Exceeds lite-plan's intelligence (auto-adaptation)
- Solves 90% of bug scenarios without mode selection
- Explicit hotfix mode for safety-critical production fixes

**Expected Impact**:
- Reduce bug fix time by 50-70%
- Eliminate mode selection errors (100% accuracy)
- Improve diagnosis accuracy to 85%+
- Systematize technical debt from hotfixes

**Next Steps**:
1. Review this design document
2. Approve v2.0 simplified approach
3. Implement Phase 1 core functionality (estimated 5-8 days)
4. Iterate based on user feedback

---

**Document Version**: 2.0.0
**Author**: Claude (Sonnet 4.5)
**Review Status**: Pending Approval
**Implementation Status**: Design Complete, Development Pending
@@ -1,401 +0,0 @@
|
||||
# 🚀 Claude Code Workflow (CCW): 下一代多智能体软件开发自动化框架
|
||||
|
||||
[](https://github.com/catlog22/Claude-Code-Workflow/releases)
|
||||
[](https://github.com/modelcontextprotocol)
|
||||
[](LICENSE)
|
||||
|
||||
---
|
||||
|
||||
## 📋 项目概述
|
||||
|
||||
**Claude Code Workflow (CCW)** 是一个革命性的多智能体自动化开发框架,它通过智能工作流管理和自主执行来协调复杂的软件开发任务。CCW 不仅仅是一个工具,它是一个完整的开发生态系统,将人工智能的强大能力与结构化的开发流程相结合。
|
||||
|
||||
## 🎯 概念设计与核心理念
|
||||
|
||||
### 设计哲学
|
||||
|
||||
CCW 的设计基于几个核心理念:
|
||||
|
||||
1. **🧠 智能协作而非替代**: 不是完全取代开发者,而是作为智能助手协同工作
|
||||
2. **📊 JSON 优先架构**: 以 JSON 作为单一数据源,消除同步复杂性
|
||||
3. **🔄 完整的开发生命周期**: 覆盖从构思到部署的每一个环节
|
||||
4. **🤖 多智能体协调**: 专门的智能体处理不同类型的开发任务
|
||||
5. **⚡ 原子化会话管理**: 超快速的上下文切换和并行工作
|
||||
|
||||
### 架构创新
|
||||
|
||||
```mermaid
|
||||
graph TD
|
||||
A[🖥️ CLI 接口层] --> B[📋 会话管理层]
|
||||
B --> C[📊 JSON 任务数据层]
|
||||
C --> D[🤖 多智能体编排层]
|
||||
|
||||
A --> A1[Gemini CLI - 分析探索]
|
||||
A --> A2[Codex CLI - 自主开发]
|
||||
A --> A3[Qwen CLI - 架构生成]
|
||||
|
||||
B --> B1[.active-session 标记]
|
||||
B --> B2[工作流会话状态]
|
||||
|
||||
C --> C1[IMPL-*.json 任务定义]
|
||||
C --> C2[动态任务分解]
|
||||
C --> C3[依赖关系映射]
|
||||
|
||||
D --> D1[概念规划智能体]
|
||||
D --> D2[代码开发智能体]
|
||||
D --> D3[测试审查智能体]
|
||||
D --> D4[记忆桥接智能体]
|
||||
```
|
||||
|
||||
## 🔥 Core Problems Solved

### 1. **Loss of Project Context**
**Traditional pain point**: In complex projects, developers frequently lose context when switching between tasks and must re-learn the code structure and business logic.

**CCW solution**:
- 📚 **Intelligent Memory Update System**: Automatically maintains `CLAUDE.md` documents, tracking codebase changes in real time
- 🔄 **Session Persistence**: Fully preserves workflow state for seamless resumption
- 📊 **Context Inheritance**: Relevant context is passed between tasks automatically

### 2. **Inconsistent Development Processes**
**Traditional pain point**: Team members follow different development processes, producing inconsistent code quality and hampering collaboration.

**CCW solution**:
- 🔄 **Standardized Workflow**: Enforces the Brainstorm → Plan → Verify → Execute → Test → Review process
- ✅ **Quality Gates**: Every stage has a verification mechanism to ensure quality
- 📋 **Traceability**: Complete records of decisions and implementation details

### 3. **Insufficient Automation of Repetitive Tasks**
**Traditional pain point**: Large amounts of repetitive code generation, test writing, and documentation updating drain developer energy.

**CCW solution**:
- 🤖 **Multi-Agent Automation**: Different task types are assigned to specialized agents
- 🧪 **Automatic Test Generation**: Comprehensive test suites are generated from the implementation
- 📝 **Automatic Documentation Updates**: Related documentation is updated whenever code changes

### 4. **Difficulty Understanding the Codebase**
**Traditional pain point**: In large projects, understanding the existing code structure and patterns takes considerable time.

**CCW solution**:
- 🔧 **MCP Tool Integration**: Advanced code analysis via the Model Context Protocol
- 🔍 **Pattern Recognition**: Automatically identifies design patterns and architectural conventions in the codebase
- 🌐 **External Best Practices**: Integrates external API patterns and industry best practices
## 🛠️ Core Workflow

### 📊 JSON-First Data Model

CCW uses a distinctive JSON-first architecture in which all workflow state is stored in structured JSON files:

```json
{
  "id": "IMPL-1.2",
  "title": "Implement JWT authentication system",
  "status": "pending",
  "meta": {
    "type": "feature",
    "agent": "code-developer"
  },
  "context": {
    "requirements": ["JWT authentication", "OAuth2 support"],
    "focus_paths": ["src/auth", "tests/auth"],
    "acceptance": ["JWT validation works", "OAuth flow is complete"]
  },
  "flow_control": {
    "pre_analysis": [...],
    "implementation_approach": {...}
  }
}
```
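
The task JSON above doubles as a lightweight schema. As a minimal sketch (the required fields, status values, and `IMPL-N[.M]` id format follow the schema described in this document; the validator itself is illustrative, not CCW code), a task file can be sanity-checked before orchestration:

```python
import re

REQUIRED_FIELDS = {"id", "title", "status"}
VALID_STATUSES = {"pending", "active", "completed", "blocked", "container"}
ID_PATTERN = re.compile(r"^IMPL-\d+(\.\d+)?$")  # IMPL-N or IMPL-N.M, max 2 levels

def validate_task(task: dict) -> list[str]:
    """Return a list of problems found in a task dict (empty list = valid)."""
    problems = []
    for field in REQUIRED_FIELDS - task.keys():
        problems.append(f"missing field: {field}")
    if "id" in task and not ID_PATTERN.match(task["id"]):
        problems.append(f"bad id format: {task['id']}")
    if "status" in task and task["status"] not in VALID_STATUSES:
        problems.append(f"unknown status: {task['status']}")
    return problems

task = {"id": "IMPL-1.2", "title": "Implement JWT authentication system", "status": "pending"}
print(validate_task(task))  # []
```

Returning a list of problems instead of raising makes it easy to batch-check every file in a `.task/` directory and report all issues at once.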

### 🧠 Intelligent Memory Management System

#### Automatic Memory Updates
CCW's memory update system is one of its defining features:

```bash
# Automatic update after day-to-day development
/update-memory-related   # Intelligently analyzes recent changes and updates only the affected modules

# Comprehensive update after major changes
/update-memory-full      # Fully scans the project and rebuilds all documentation

# Module-specific update
cd src/auth && /update-memory-related   # Targeted update for a specific module
```

#### The Four-Layer CLAUDE.md Architecture
```
CLAUDE.md (project-level overview)
├── src/CLAUDE.md (source-layer documentation)
├── src/auth/CLAUDE.md (module-layer documentation)
└── src/auth/jwt/CLAUDE.md (component-layer documentation)
```
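
The four layers are just `CLAUDE.md` files at increasing directory depths, so the whole hierarchy can be recovered by walking the tree. A sketch (the helper is illustrative, not part of CCW):

```python
from pathlib import Path

def claude_docs_by_layer(root: str) -> dict[int, list[str]]:
    """Group every CLAUDE.md under `root` by directory depth (0 = project level)."""
    root_path = Path(root)
    layers: dict[int, list[str]] = {}
    for doc in sorted(root_path.rglob("CLAUDE.md")):
        depth = len(doc.relative_to(root_path).parts) - 1  # parts include the filename
        layers.setdefault(depth, []).append(str(doc.relative_to(root_path)))
    return layers
```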

### 🔧 Flow Control and CLI Tool Integration

#### Pre-Analysis Phase (pre_analysis)
```json
"pre_analysis": [
  {
    "step": "mcp_codebase_exploration",
    "action": "Explore the codebase structure with MCP tools",
    "command": "mcp__code-index__find_files(pattern=\"[task_focus_patterns]\")",
    "output_to": "codebase_structure"
  },
  {
    "step": "mcp_external_context",
    "action": "Fetch external API examples and best practices",
    "command": "mcp__exa__get_code_context_exa(query=\"[task_technology] [task_patterns]\")",
    "output_to": "external_context"
  },
  {
    "step": "gather_task_context",
    "action": "Analyze task context without implementing",
    "command": "gemini-wrapper -p \"Analyze existing patterns and dependencies for [task_title]\"",
    "output_to": "task_context"
  }
]
```
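
The `pre_analysis` array is an ordered pipeline: each step names a command and the context key its result is stored under (`output_to`). A minimal sketch of how such a pipeline could be driven (the runner and the injected `run_command` callable are illustrative assumptions, not CCW internals):

```python
def run_pipeline(steps: list[dict], run_command) -> dict:
    """Execute pre_analysis steps in order, collecting each result under its output_to key."""
    context = {}
    for step in steps:
        # run_command is injected: in CCW it would invoke an MCP tool or a CLI wrapper
        context[step["output_to"]] = run_command(step["command"])
    return context

steps = [
    {"step": "mcp_codebase_exploration", "command": "find_files", "output_to": "codebase_structure"},
    {"step": "gather_task_context", "command": "analyze", "output_to": "task_context"},
]
result = run_pipeline(steps, run_command=lambda cmd: f"<result of {cmd}>")
```

Because later steps can read earlier outputs from `context`, running the steps strictly in array order is what makes the pipeline deterministic.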

#### Implementation Approach Definition (implementation_approach)
```json
"implementation_approach": {
  "task_description": "Implement JWT authentication based on the [design] analysis results",
  "modification_points": [
    "Add JWT generation using the [parent] pattern",
    "Implement the validation middleware based on [context]"
  ],
  "logic_flow": [
    "User login → validate with [inherited] → generate JWT",
    "Protected route → extract JWT → validate with [shared] rules"
  ],
  "target_files": [
    "src/auth/login.ts:handleLogin:75-120",
    "src/middleware/auth.ts:validateToken"
  ]
}
```
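
Each `target_files` entry packs a file path, a symbol name, and an optional line range into one string. A small parser sketch (the output field names are assumptions inferred from the format shown above):

```python
def parse_target(entry: str) -> dict:
    """Split 'path:symbol[:start-end]' into its parts; the line range is optional."""
    parts = entry.split(":")
    target = {"path": parts[0], "symbol": parts[1], "lines": None}
    if len(parts) == 3:
        start, end = parts[2].split("-")
        target["lines"] = (int(start), int(end))
    return target
```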

### 🚀 CLI Tools Working in Concert

#### Division of Labor Across the Three CLI Tools
```mermaid
graph LR
    A[Gemini CLI] --> A1[Deep Analysis]
    A --> A2[Pattern Recognition]
    A --> A3[Architecture Understanding]

    B[Qwen CLI] --> B1[Architecture Design]
    B --> B2[Code Generation]
    B --> B3[System Planning]

    C[Codex CLI] --> C1[Autonomous Development]
    C --> C2[Bug Fixing]
    C --> C3[Test Generation]
```

#### Intelligent Tool Selection Strategy
CCW automatically selects the best-suited tool for each task type:

```bash
# Exploration and understanding phase
/cli:analyze --tool gemini "authentication system architecture patterns"

# Design and planning phase
/cli:mode:plan --tool qwen "microservice authentication architecture design"

# Implementation and development phase
/cli:execute --tool codex "implement JWT authentication system"
```
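
The selection strategy boils down to a phase-to-tool mapping. A sketch of that mapping (the table mirrors the diagram and commands above; the function and its fallback default are illustrative assumptions, not CCW code):

```python
TOOL_BY_PHASE = {
    "analyze": "gemini",  # deep analysis, pattern recognition
    "plan": "qwen",       # architecture design, system planning
    "execute": "codex",   # autonomous development, bug fixing
}

def pick_tool(phase: str) -> str:
    """Pick the CLI tool for a workflow phase; assumed default falls back to analysis."""
    return TOOL_BY_PHASE.get(phase, "gemini")
```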

### 🔄 Full Development Lifecycle

#### 1. Brainstorming Phase
```bash
# Multi-role expert analysis
/workflow:brainstorm:system-architect "user authentication system"
/workflow:brainstorm:security-expert "authentication security considerations"
/workflow:brainstorm:ui-designer "authentication user experience"

# Synthesize all perspectives
/workflow:brainstorm:synthesis
```

#### 2. Planning and Verification
```bash
# Create the implementation plan
/workflow:plan "user authentication system with JWT support"

# Dual verification mechanism
/workflow:plan-verify   # Gemini strategic + Codex technical double verification
```

#### 3. Execution and Testing
```bash
# Agent-coordinated execution
/workflow:execute

# Automatically generate a test workflow
/workflow:test-gen WFS-user-auth-system
```

#### 4. Review and Documentation
```bash
# Quality review
/workflow:review

# Layered documentation generation
/workflow:docs "all"
```

## 🔧 Technical Highlights

### 1. **MCP Tool Integration** *(experimental)*
- **Exa MCP Server**: Fetches real-world API patterns and best practices
- **Code Index MCP**: Advanced internal codebase search and indexing
- **Automatic Fallback**: Seamlessly switches to conventional tools when MCP is unavailable

### 2. **Atomic Session Management**
```bash
# Ultra-fast session switching (<10ms)
.workflow/.active-user-auth-system   # a simple marker file

# Parallel session support
.workflow/WFS-user-auth/    # authentication session
.workflow/WFS-payment/      # payment session
.workflow/WFS-dashboard/    # dashboard session
```

### 3. **Intelligent Context Passing**
- **Dependency context**: Key information is passed to dependent tasks automatically on completion
- **Inherited context**: Subtasks automatically inherit their parent task's design decisions
- **Shared context**: Session-level global rules and patterns

### 4. **Dynamic Task Decomposition**
```json
// A main task is automatically decomposed into subtasks
"IMPL-1": "User authentication system",
"IMPL-1.1": "JWT token generation",
"IMPL-1.2": "Authentication middleware",
"IMPL-1.3": "User login endpoint"
```

## 🎯 Usage Scenarios

### Scenario 1: New Feature Development
```bash
# 1. Start a dedicated session
/workflow:session:start "OAuth2 integration"

# 2. Multi-perspective brainstorming
/workflow:brainstorm:system-architect "OAuth2 architecture design"
/workflow:brainstorm:security-expert "OAuth2 security considerations"

# 3. Run the full development flow
/workflow:plan "integrate OAuth2 with the existing authentication system"
/workflow:plan-verify
/workflow:execute
/workflow:test-gen WFS-oauth2-integration
/workflow:review
```

### Scenario 2: Urgent Bug Fix
```bash
# Rapid bug-resolution workflow
/workflow:session:start "payment validation fix"
/cli:mode:bug-diagnosis --tool gemini "payment validation fails under concurrent requests"
/cli:execute --tool codex "fix the payment validation race condition"
/workflow:review
```

### Scenario 3: Architecture Refactoring
```bash
# Deep architecture analysis and refactoring
/workflow:session:start "microservice refactoring"
/cli:analyze --tool gemini "technical debt in the current monolithic architecture"
/workflow:plan "monolith-to-microservices migration strategy"
/workflow:execute
/workflow:test-gen WFS-microservice-refactoring
```

## 🌟 Key Advantages

### 1. **Higher Development Efficiency**
- ⚡ **10x faster context switching**: atomic session management
- 🤖 **Automated repetitive work**: 90% of boilerplate code and tests generated automatically
- 📊 **Intelligent decision support**: suggestions based on historical patterns

### 2. **Guaranteed Code Quality**
- ✅ **Enforced quality gates**: verification mechanisms at every stage
- 🔍 **Automatic pattern detection**: identifies and follows existing code conventions
- 📝 **Full traceability**: a complete record from requirements to implementation

### 3. **Lower Learning Cost**
- 📚 **Intelligent documentation system**: an automatically maintained project knowledge base
- 🔄 **Standardized process**: a unified development workflow
- 💡 **Built-in best practices**: proven external patterns adopted automatically

### 4. **Team Collaboration Support**
- 🔀 **Parallel sessions**: multiple people can work simultaneously without conflicts
- 📊 **Transparent progress tracking**: task status visible in real time
- 🤝 **Knowledge sharing**: complete records of decisions and implementation details

## 🚀 Getting Started

### Quick Installation
```powershell
# One-line Windows install
Invoke-Expression (Invoke-WebRequest -Uri "https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1" -UseBasicParsing).Content

# Verify the installation
/workflow:session:list
```

### Optional MCP Tool Enhancements
```bash
# Install the Exa MCP Server (external API patterns)
# Installation guide: https://github.com/exa-labs/exa-mcp-server

# Install Code Index MCP (advanced code search)
# Installation guide: https://github.com/johnhuang316/code-index-mcp
```

## 📈 Project Status and Roadmap

### Current Status (v2.1.0-experimental)
- ✅ Core multi-agent system complete
- ✅ JSON-first architecture stable
- ✅ Full workflow lifecycle supported
- 🧪 MCP tool integration (experimental)
- ✅ Intelligent memory management system

### Coming Soon
- 🔮 **AI-assisted code review**: smarter quality detection
- 🌐 **Cloud collaboration support**: team-level workflow sharing
- 📊 **Performance analysis integration**: automatic optimization suggestions
- 🔧 **More MCP tools**: an expanding external tool ecosystem

## 🤝 Community and Support

- 📚 **Documentation**: [Project Wiki](https://github.com/catlog22/Claude-Code-Workflow/wiki)
- 🐛 **Bug Reports**: [GitHub Issues](https://github.com/catlog22/Claude-Code-Workflow/issues)
- 💬 **Community Discussion**: [Discussions](https://github.com/catlog22/Claude-Code-Workflow/discussions)
- 📋 **Changelog**: [Release History](CHANGELOG.md)

---

## 💡 Closing Thoughts

**Claude Code Workflow** is more than a development tool; it points toward the future of software development workflows. Through intelligent multi-agent collaboration, a structured development process, and advanced context management, CCW lets developers focus on creative work while delegating repetitive, mechanical tasks to AI assistants.

We believe the future of software development will be a model of human-machine collaboration, and CCW is a pioneering step toward that vision.

🌟 **Try CCW today and start your journey toward intelligent development!**

[](https://github.com/catlog22/Claude-Code-Workflow)
[](https://github.com/catlog22/Claude-Code-Workflow/releases/latest)

---

*This document is generated and maintained automatically by Claude Code Workflow's intelligent documentation system*
README.md
@@ -1,198 +0,0 @@
# 🚀 Claude Code Workflow (CCW)

<div align="center">

[](https://github.com/catlog22/Claude-Code-Workflow/releases)
[](LICENSE)
[]()

**Languages:** [English](README.md) | [中文](README_CN.md)

</div>

---

**Claude Code Workflow (CCW)** transforms AI development from simple prompt chaining into a robust, context-first orchestration system. It solves execution uncertainty and error accumulation through structured planning, deterministic execution, and intelligent multi-model orchestration.

> **🎉 Version 5.8.1: Lite-Plan Workflow & CLI Tools Enhancement**
>
> **Core Improvements**:
> - ✨ **Lite-Plan Workflow** (`/workflow:lite-plan`) - Lightweight interactive planning with intelligent automation
>   - **Three-Dimensional Multi-Select Confirmation**: Task approval + Execution method + Code review tool
>   - **Smart Code Exploration**: Auto-detects when codebase context is needed (use `-e` flag to force)
>   - **Parallel Task Execution**: Identifies independent tasks for concurrent execution
>   - **Flexible Execution**: Choose between Agent (@code-developer) or CLI (Gemini/Qwen/Codex)
>   - **Optional Post-Review**: Built-in code quality analysis with your choice of AI tool
> - ✨ **CLI Tools Optimization** - Simplified command syntax with auto-model-selection
>   - Removed `-m` parameter requirement for Gemini, Qwen, and Codex (auto-selects best model)
>   - Clearer command structure and improved documentation
> - 🔄 **Execution Workflow Enhancement** - Streamlined phases with lazy loading strategy
> - 🎨 **CLI Explore Agent** - Improved visibility with yellow color scheme
>
> See [CHANGELOG.md](CHANGELOG.md) for full details.

> 📚 **New to CCW?** Check out the [**Getting Started Guide**](GETTING_STARTED.md) for a beginner-friendly 5-minute tutorial!

---

## ✨ Core Concepts

CCW is built on a set of core principles that differentiate it from traditional AI development approaches:

- **Context-First Architecture**: Pre-defined context gathering eliminates execution uncertainty by ensuring agents have the correct information *before* implementation.
- **JSON-First State Management**: Task states live in `.task/IMPL-*.json` files as the single source of truth, enabling programmatic orchestration without state drift.
- **Autonomous Multi-Phase Orchestration**: Commands chain specialized sub-commands and agents to automate complex workflows with zero user intervention.
- **Multi-Model Strategy**: Leverages the unique strengths of different AI models (Gemini for analysis, Codex for implementation) for superior results.
- **Hierarchical Memory System**: A 4-layer documentation system provides context at the appropriate level of abstraction, preventing information overload.
- **Specialized Role-Based Agents**: A suite of agents (`@code-developer`, `@test-fix-agent`, etc.) mirrors a real software team to handle diverse tasks.

---

## ⚙️ Installation

For detailed installation instructions, please refer to the [**INSTALL.md**](INSTALL.md) guide.

### **🚀 Quick One-Line Installation**

**Windows (PowerShell):**
```powershell
Invoke-Expression (Invoke-WebRequest -Uri "https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1" -UseBasicParsing).Content
```

**Linux/macOS (Bash/Zsh):**
```bash
bash <(curl -fsSL https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.sh)
```

### **✅ Verify Installation**
After installation, open **Claude Code** and check if the workflow commands are available by running:
```bash
/workflow:session:list
```
If the slash commands (e.g., `/workflow:*`) are recognized, the installation was successful.

---

## 🛠️ Command Reference

CCW provides a rich set of commands for managing workflows, tasks, and interacting with AI tools. For a complete list and detailed descriptions of all available commands, please see the [**COMMAND_REFERENCE.md**](COMMAND_REFERENCE.md) file.

For a detailed technical specification of every command, see the [**COMMAND_SPEC.md**](COMMAND_SPEC.md).

---

### 💡 **Need Help? Use the Interactive Command Guide**

CCW includes a built-in **command-guide skill** to help you discover and use commands effectively:

- **`CCW-help`** - Get interactive help and command recommendations
- **`CCW-issue`** - Report bugs or request features with guided templates

The command guide provides:
- 🔍 **Smart Command Search** - Find commands by keyword, category, or use-case
- 🤖 **Next-Step Recommendations** - Get suggestions for what to do after any command
- 📖 **Detailed Documentation** - View parameters, examples, and best practices
- 🎓 **Beginner Onboarding** - Learn the top 14 essential commands with a guided learning path
- 📝 **Issue Reporting** - Generate standardized bug reports and feature requests

**Example Usage**:
```
User: "CCW-help"
→ Interactive menu with command search, recommendations, and documentation

User: "What's next after /workflow:plan?"
→ Recommends /workflow:execute, /workflow:action-plan-verify, with workflow patterns

User: "CCW-issue"
→ Guided template generation for bugs, features, or questions
```

---

## 🚀 Getting Started

The best way to get started is to follow the 5-minute tutorial in the [**Getting Started Guide**](GETTING_STARTED.md).

Here is a quick example of a common development workflow:

### **Option 1: Lite-Plan Workflow** (⚡ Recommended for Quick Tasks)

Lightweight interactive workflow with in-memory planning and immediate execution:

```bash
# Basic usage with auto-detection
/workflow:lite-plan "Add JWT authentication to user login"

# Force code exploration
/workflow:lite-plan -e "Refactor logging module for better performance"

# Basic usage
/workflow:lite-plan "Add unit tests for auth service"
```

**Interactive Flow**:
1. **Phase 1**: Automatic task analysis and smart code exploration (if needed)
2. **Phase 2**: Answer clarification questions (if any)
3. **Phase 3**: Review generated plan with task breakdown
4. **Phase 4**: Three-dimensional confirmation:
   - ✅ Confirm/Modify/Cancel task
   - 🔧 Choose execution: Agent / Provide Plan / CLI (Gemini/Qwen/Codex)
   - 🔍 Optional code review: No / Claude / Gemini / Qwen / Codex
5. **Phase 5**: Watch real-time execution with live task tracking

### **Option 2: Full Workflow** (Comprehensive Planning)

Traditional multi-phase workflow for complex projects:

1. **Create a Plan** (automatically starts a session):
   ```bash
   /workflow:plan "Implement JWT-based user login and registration"
   ```
2. **Execute the Plan**:
   ```bash
   /workflow:execute
   ```
3. **Check Status** (optional):
   ```bash
   /workflow:status
   ```

---

## 📚 Documentation

CCW provides comprehensive documentation to help you get started and master advanced features:

### 📖 **Getting Started**
- [**Getting Started Guide**](GETTING_STARTED.md) - 5-minute quick start tutorial
- [**Installation Guide**](INSTALL.md) - Detailed installation instructions ([中文](INSTALL_CN.md))
- [**Workflow Decision Guide**](WORKFLOW_DECISION_GUIDE_EN.md) - 🌳 Interactive flowchart for choosing the right commands
- [**Examples**](EXAMPLES.md) - Real-world use cases and practical examples
- [**FAQ**](FAQ.md) - Frequently asked questions and troubleshooting

### 🏗️ **Architecture & Design**
- [**Architecture Overview**](ARCHITECTURE.md) - System design and core components
- [**Project Introduction**](PROJECT_INTRODUCTION.md) - Detailed project overview (中文)
- [**Workflow Diagrams**](WORKFLOW_DIAGRAMS.md) - Visual workflow representations

### 📋 **Command Reference**
- [**Command Reference**](COMMAND_REFERENCE.md) - Complete list of all commands
- [**Command Specification**](COMMAND_SPEC.md) - Detailed technical specifications
- [**Command Flow Standard**](COMMAND_FLOW_STANDARD.md) - Command design patterns

### 🤝 **Contributing**
- [**Contributing Guide**](CONTRIBUTING.md) - How to contribute to CCW
- [**Changelog**](CHANGELOG.md) - Version history and release notes

---

## 🤝 Contributing & Support

- **Repository**: [GitHub - Claude-Code-Workflow](https://github.com/catlog22/Claude-Code-Workflow)
- **Issues**: Report bugs or request features on [GitHub Issues](https://github.com/catlog22/Claude-Code-Workflow/issues).
- **Discussions**: Join the [Community Forum](https://github.com/catlog22/Claude-Code-Workflow/discussions).
- **Contributing**: See [CONTRIBUTING.md](CONTRIBUTING.md) for contribution guidelines.

## 📄 License

This project is licensed under the **MIT License**. See the [LICENSE](LICENSE) file for details.
@@ -14,9 +14,10 @@ graph TB
    end

    subgraph "Session Management"
        MARKER[".active-session marker"]
        SESSION["workflow-session.json"]
        WDIR[".workflow/ directories"]
        ACTIVE_DIR[".workflow/active/"]
        ARCHIVE_DIR[".workflow/archives/"]
    end

    subgraph "Task System"
@@ -124,9 +125,7 @@ stateDiagram-v2
    CreateStructure --> CreateJSON: Create workflow-session.json
    CreateJSON --> CreatePlan: Create IMPL_PLAN.md
    CreatePlan --> CreateTasks: Create .task/ directory
    CreateTasks --> SetActive: touch .active-session-name

    SetActive --> Active: Session Ready
    CreateTasks --> Active: Session Ready in .workflow/active/

    Active --> Paused: Switch to Another Session
    Active --> Working: Execute Tasks