Mirror of https://github.com/catlog22/Claude-Code-Workflow.git (synced 2026-02-06 01:54:11 +08:00)

Compare commits: 88 commits
**Commits (SHA1)**: bb6f74f44b, 986eb31c03, 4f0edb27ff, 3e83f77304, 18d369e871, c363b5dd0e, 692a68da6f, 89f22ec3cf, b7db6c86bd, 71138a95e1,
ecccae1664, 642d25f161, 20d53bbd8e, 9a63512256, 080c8be87f, a208af22af, 7701bbd28c, 7f82d0da86, 2b3541941e, 04373ee368,
4dd1ae5a9e, acc792907c, b849dac618, c3d05826ef, bd9ae8b200, da908d8db4, 3068c2ca83, ee7ffdae67, 1f070638b4, 57fa379e45,
ef187d3a4b, 9cc2994509, db8f90428e, 047d809e23, 69a654170a, b9fc1ea8e1, a73a51355e, 12d010c1d8, d9cee7f17a, 598efea8f6,
8b8c2e1208, d3f8d012a1, 6fdcf9b8cc, 632a6e474a, 6a321c5ad6, e3a6c885db, eb9b10c96b, 804617d8cd, b6c1880abf, 7783ee0ac5,
de3dc35c5b, c640cfefe8, d3ddfadf16, 2072ddfa6e, 9e584d911b, b30a5269d2, 5046565d4c, 8ebae76b74, 83664cb777, 360a2b9edc,
5123675fbf, 967490dcf6, e15da0e461, 51a0cb3a3c, 436c7909b0, f8d5d908ea, ac8c3b3d0c, 423289c539, 21ea77bdf3, 03ffc91764,
ee3a420f60, 9151a82d1d, 24aad6238a, 44734a447c, 99cb29ed23, b8935777e7, 49c2b189d4, 1324fb8c2a, 1073e43c0b, 393b2f480f,
3b0f067f0b, 0130a66642, e2711a7797, 3a6e88c0df, 199585b29c, e94b2a250b, 4193a17c27, f063fb0cde
@@ -26,9 +26,9 @@ You are a pure execution agent specialized in creating actionable implementation
- `session_id`: Workflow session identifier (WFS-[topic])
- `session_metadata`: Session configuration and state
- `analysis_results`: Analysis recommendations and task breakdown
- `artifacts_inventory`: Detected brainstorming outputs (synthesis-spec, topic-framework, role analyses)
- `artifacts_inventory`: Detected brainstorming outputs (role analyses, guidance-specification)
- `context_package`: Project context and assets
- `mcp_capabilities`: Available MCP tools (code-index, exa-code, exa-web)
- `mcp_capabilities`: Available MCP tools (exa-code, exa-web)
- `mcp_analysis`: Optional pre-executed MCP analysis results

**Legacy Support** (backward compatibility):
@@ -46,8 +46,8 @@ Phase 1: Context Validation & Enhancement (Discovery Results Provided)
   → artifacts_inventory: Use provided list (from memory or scan)
   → mcp_analysis: Use provided results (optional)
3. Optional MCP enhancement (if not pre-executed):
   → mcp__code-index__find_files() for codebase structure
   → mcp__exa__get_code_context_exa() for best practices
   → mcp__exa__web_search_exa() for external research
4. Assess task complexity (simple/medium/complex) from analysis

Phase 2: Document Generation (Autonomous Output)
|
||||
@@ -77,8 +77,8 @@ Phase 2: Document Generation (Autonomous Output)
|
||||
"dependencies": [...]
|
||||
},
|
||||
"artifacts_inventory": {
|
||||
"synthesis_specification": ".workflow/WFS-auth/.brainstorming/synthesis-specification.md",
|
||||
"topic_framework": ".workflow/WFS-auth/.brainstorming/topic-framework.md",
|
||||
"synthesis_specification": ".workflow/WFS-auth/.brainstorming/role analysis documents",
|
||||
"topic_framework": ".workflow/WFS-auth/.brainstorming/guidance-specification.md",
|
||||
"role_analyses": [
|
||||
".workflow/WFS-auth/.brainstorming/system-architect/analysis.md",
|
||||
".workflow/WFS-auth/.brainstorming/subject-matter-expert/analysis.md"
|
||||
@@ -89,12 +89,10 @@ Phase 2: Document Generation (Autonomous Output)
|
||||
"focus_areas": [...]
|
||||
},
|
||||
"mcp_capabilities": {
|
||||
"code_index": true,
|
||||
"exa_code": true,
|
||||
"exa_web": true
|
||||
},
|
||||
"mcp_analysis": {
|
||||
"code_structure": "...",
|
||||
"external_research": "..."
|
||||
}
|
||||
}
|
||||
@@ -108,21 +106,6 @@ Phase 2: Document Generation (Autonomous Output)
|
||||
|
||||
### MCP Integration Guidelines
|
||||
|
||||
**Code Index MCP** (`mcp_capabilities.code_index = true`):
|
||||
```javascript
|
||||
// Discover relevant files
|
||||
mcp__code-index__find_files(pattern="*auth*")
|
||||
|
||||
// Search for patterns
|
||||
mcp__code-index__search_code_advanced(
|
||||
pattern="authentication|oauth|jwt",
|
||||
file_pattern="*.{ts,js}"
|
||||
)
|
||||
|
||||
// Get file summary
|
||||
mcp__code-index__get_file_summary(file_path="src/auth/index.ts")
|
||||
```
|
||||
|
||||
**Exa Code Context** (`mcp_capabilities.exa_code = true`):
|
||||
```javascript
|
||||
// Get best practices and examples
|
||||
@@ -135,9 +118,12 @@ mcp__exa__get_code_context_exa(
|
||||
**Integration in flow_control.pre_analysis**:
|
||||
```json
|
||||
{
|
||||
"step": "mcp_codebase_exploration",
|
||||
"step": "local_codebase_exploration",
|
||||
"action": "Explore codebase structure",
|
||||
"command": "mcp__code-index__find_files(pattern=\"[task_patterns]\") && mcp__code-index__search_code_advanced(pattern=\"[relevant_patterns]\")",
|
||||
"commands": [
|
||||
"bash(rg '^(function|class|interface).*[task_keyword]' --type ts -n --max-count 15)",
|
||||
"bash(find . -name '*[task_keyword]*' -type f | grep -v node_modules | head -10)"
|
||||
],
|
||||
"output_to": "codebase_structure"
|
||||
}
|
||||
```
|
||||
@@ -194,17 +180,17 @@ Generate individual `.task/IMPL-*.json` files with:
|
||||
"implementation_approach": [
|
||||
{
|
||||
"step": 1,
|
||||
"title": "Load and analyze synthesis specification",
|
||||
"description": "Load synthesis specification from artifacts and extract requirements",
|
||||
"modification_points": ["Load synthesis specification", "Extract requirements and design patterns"],
|
||||
"logic_flow": ["Read synthesis specification from artifacts", "Parse architecture decisions", "Extract implementation requirements"],
|
||||
"title": "Load and analyze role analyses",
|
||||
"description": "Load role analyses from artifacts and extract requirements",
|
||||
"modification_points": ["Load role analyses", "Extract requirements and design patterns"],
|
||||
"logic_flow": ["Read role analyses from artifacts", "Parse architecture decisions", "Extract implementation requirements"],
|
||||
"depends_on": [],
|
||||
"output": "synthesis_requirements"
|
||||
},
|
||||
{
|
||||
"step": 2,
|
||||
"title": "Implement following specification",
|
||||
"description": "Implement task requirements following consolidated synthesis specification",
|
||||
"description": "Implement task requirements following consolidated role analyses",
|
||||
"modification_points": ["Apply requirements from [synthesis_requirements]", "Modify target files", "Integrate with existing code"],
|
||||
"logic_flow": ["Apply changes based on [synthesis_requirements]", "Implement core logic", "Validate against acceptance criteria"],
|
||||
"depends_on": [1],
|
||||
@@ -282,7 +268,7 @@ Generate `TODO_LIST.md` at `.workflow/{session_id}/TODO_LIST.md`:
|
||||
- Completed tasks → summaries: `[✅](./.summaries/IMPL-XXX-summary.md)`
|
||||
- Consistent ID schemes: IMPL-XXX, IMPL-XXX.Y (max 2 levels)
|
||||
|
||||
**Format Specifications**: @~/.claude/workflows/workflow-architecture.md
|
||||
|
||||
|
||||
### 5. Complexity Assessment & Document Structure
|
||||
Use `analysis_results.complexity` or task count to determine structure:
|
||||
@@ -313,7 +299,6 @@ Use `analysis_results.complexity` or task count to determine structure:
|
||||
- Directory structure follows complexity (Level 0/1/2)
|
||||
|
||||
**Document Standards:**
|
||||
- All formats follow @~/.claude/workflows/workflow-architecture.md
|
||||
- Proper linking between documents
|
||||
- Consistent navigation and references
|
||||
|
||||
|
||||
@@ -1,23 +1,23 @@
|
||||
---
|
||||
name: cli-execution-agent
|
||||
description: |
|
||||
Intelligent CLI execution agent with automated context discovery and smart tool selection. Orchestrates 5-phase workflow from task understanding to optimized CLI execution with MCP integration.
|
||||
Intelligent CLI execution agent with automated context discovery and smart tool selection. Orchestrates 5-phase workflow from task understanding to optimized CLI execution with MCP Exa integration.
|
||||
|
||||
Examples:
|
||||
- Context: User provides task without context
|
||||
user: "Implement user authentication"
|
||||
assistant: "I'll discover relevant context, enhance the task description, select optimal tool, and execute"
|
||||
commentary: Agent autonomously discovers context via MCP code-index, researches best practices, builds enhanced prompt, selects Codex for complex implementation
|
||||
commentary: Agent autonomously discovers context via ripgrep/find, researches best practices via MCP Exa, builds enhanced prompt, selects Codex for complex implementation
|
||||
|
||||
- Context: User provides analysis task
|
||||
user: "Analyze API architecture patterns"
|
||||
assistant: "I'll gather API-related files, analyze patterns, and execute with Gemini for comprehensive analysis"
|
||||
commentary: Agent discovers API files, identifies patterns, selects Gemini for architecture analysis
|
||||
commentary: Agent discovers API files via local search, identifies patterns, selects Gemini for architecture analysis
|
||||
|
||||
- Context: User provides task with session context
|
||||
user: "Execute IMPL-001 from active workflow"
|
||||
assistant: "I'll load task context, discover implementation files, enhance requirements, and execute"
|
||||
commentary: Agent loads task JSON, discovers code context, routes output to workflow session
|
||||
commentary: Agent loads task JSON, discovers code context via local search, routes output to workflow session
|
||||
color: purple
|
||||
---
|
||||
|
||||
@@ -88,7 +88,7 @@ Score < 2 → Simple
|
||||
|
||||
## Phase 2: Context Discovery
|
||||
|
||||
### Multi-Tool Parallel Strategy
|
||||
### Context Discovery Strategy
|
||||
|
||||
**1. Project Structure Analysis**:
|
||||
```bash
|
||||
@@ -96,27 +96,7 @@ Score < 2 → Simple
|
||||
```
|
||||
Output: Module hierarchy and organization
|
||||
|
||||
**2. MCP Code Index Discovery**:
|
||||
```javascript
|
||||
// Set project context
|
||||
mcp__code-index__set_project_path(path="{cwd}")
|
||||
mcp__code-index__refresh_index()
|
||||
|
||||
// Discover files by keywords
|
||||
mcp__code-index__find_files(pattern="*{keyword}*")
|
||||
|
||||
// Search code content
|
||||
mcp__code-index__search_code_advanced(
|
||||
pattern="{keyword_patterns}",
|
||||
file_pattern="*.{ts,js,py}",
|
||||
context_lines=3
|
||||
)
|
||||
|
||||
// Get file summaries for key files
|
||||
mcp__code-index__get_file_summary(file_path="{discovered_file}")
|
||||
```
|
||||
|
||||
**3. Content Search (ripgrep fallback)**:
|
||||
**2. Content Search (ripgrep)**:
|
||||
```bash
|
||||
# Function/class definitions
|
||||
rg "^(function|def|func|class|interface).*{keyword}" \
|
||||
@@ -130,7 +110,7 @@ find . \( -name "*{keyword}*test*" -o -name "*{keyword}*spec*" \) \
|
||||
-type f | grep -E "\.(js|ts|py|go)$" | head -10
|
||||
```
|
||||
|
||||
**4. External Research (MCP Exa - Optional)**:
|
||||
**3. External Research (MCP Exa - Optional)**:
|
||||
```javascript
|
||||
// Best practices for complex tasks
|
||||
mcp__exa__get_code_context_exa(
|
||||
@@ -172,9 +152,17 @@ score = 0
|
||||
"analyze" → "Code understanding and pattern identification"
|
||||
```
|
||||
|
||||
|
||||
**2. Context Assembly**:
|
||||
```bash
|
||||
CONTEXT: @{CLAUDE.md} @{discovered_file1} @{discovered_file2} ...
|
||||
# Default: comprehensive context
|
||||
CONTEXT: @**/*
|
||||
|
||||
# Or specific patterns
|
||||
CONTEXT: @CLAUDE.md @{discovered_file1} @{discovered_file2} ...
|
||||
|
||||
# Cross-directory references (requires --include-directories)
|
||||
CONTEXT: @**/* @../shared/**/* @../types/**/*
|
||||
|
||||
## Discovered Context
|
||||
- **Project Structure**: {module_summary}
|
||||
@@ -187,6 +175,12 @@ CONTEXT: @{CLAUDE.md} @{discovered_file1} @{discovered_file2} ...
|
||||
{optional_best_practices_from_exa}
|
||||
```
|
||||
|
||||
**Context Pattern Guidelines**:
|
||||
- **Default**: Use `@**/*` for comprehensive context
|
||||
- **Specific files**: `@src/**/*` or `@*.ts @*.tsx`
|
||||
- **With docs**: `@CLAUDE.md @**/*CLAUDE.md`
|
||||
- **Cross-directory**: Must use `--include-directories` parameter (see Command Construction)
|
||||
|
||||
**3. Template Selection**:
|
||||
```
|
||||
intent=analyze → ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt
|
||||
@@ -194,6 +188,14 @@ intent=execute + complex → ~/.claude/workflows/cli-templates/prompts/developme
|
||||
intent=plan → ~/.claude/workflows/cli-templates/prompts/planning/task-breakdown.txt
|
||||
```
|
||||
|
||||
**3a. RULES Field Guidelines**:

When using `$(cat ...)` for template loading:
- **Template reference only**: Use `$(cat ...)` directly, do NOT read template content first
- **NEVER use escape characters**: `\$`, `\"`, `\'` will break command substitution
- **Correct**: `RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)`
- **Wrong**: `RULES: \$(cat ...)` or `RULES: $(cat \"...\")`
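
Putting these rules together, a minimal sketch of a full command that loads RULES via command substitution; the working directory, task wording, and context pattern are illustrative:

```bash
# Sketch only: template loaded via $(cat ...), no escape characters.
cd src/auth && gemini -p "
PURPOSE: Review the authentication flow
TASK: Identify patterns, risks, and improvement opportunities
MODE: analysis
CONTEXT: @**/*
EXPECTED: Findings and concrete recommendations
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)
"
```
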
**4. Structured Prompt**:
|
||||
```bash
|
||||
PURPOSE: {enhanced_intent}
|
||||
@@ -234,36 +236,91 @@ ELSE IF intent = 'discuss':
|
||||
# User --tool flag overrides auto-selection
|
||||
```
|
||||
|
||||
### Model Selection
|
||||
|
||||
**Gemini Models**:
|
||||
- `gemini-2.5-pro` - Analysis tasks (default)
|
||||
- `gemini-2.5-flash` - Documentation updates
|
||||
|
||||
**Qwen Models**:
|
||||
- `coder-model` - Code analysis (default, -m optional)
|
||||
- `vision-model` - Image analysis (rare usage)
|
||||
|
||||
**Codex Models**:
|
||||
- `gpt-5` - Analysis & execution (default)
|
||||
- `gpt5-codex` - Large context tasks
|
||||
|
||||
**Parameter Position**: `-m` must be placed AFTER prompt string
|
||||
|
||||
### Command Construction
|
||||
|
||||
**Gemini/Qwen (Analysis Mode)**:
|
||||
```bash
|
||||
cd {directory} && ~/.claude/scripts/{tool}-wrapper -p "
|
||||
# Use 'gemini' (primary) or 'qwen' (fallback)
|
||||
cd {directory} && gemini -p "
|
||||
{enhanced_prompt}
|
||||
"
|
||||
|
||||
# With model selection (NOTE: -m placed AFTER prompt)
|
||||
cd {directory} && gemini -p "{enhanced_prompt}" -m gemini-2.5-pro
|
||||
cd {directory} && qwen -p "{enhanced_prompt}" # coder-model default
|
||||
```
|
||||
|
||||
**Gemini/Qwen (Write Mode)**:
|
||||
```bash
|
||||
cd {directory} && ~/.claude/scripts/{tool}-wrapper --approval-mode yolo -p "
|
||||
# NOTE: --approval-mode yolo must be placed AFTER the prompt
|
||||
cd {directory} && gemini -p "
|
||||
{enhanced_prompt}
|
||||
"
|
||||
" -m gemini-2.5-flash --approval-mode yolo
|
||||
|
||||
# Fallback to Qwen
|
||||
cd {directory} && qwen -p "{enhanced_prompt}" --approval-mode yolo
|
||||
```
|
||||
|
||||
**Codex (Auto Mode)**:
|
||||
```bash
|
||||
# NOTE: -m, --skip-git-repo-check and -s danger-full-access must be placed at command END
|
||||
codex -C {directory} --full-auto exec "
|
||||
{enhanced_prompt}
|
||||
" --skip-git-repo-check -s danger-full-access
|
||||
" -m gpt-5 --skip-git-repo-check -s danger-full-access
|
||||
```
|
||||
|
||||
**Codex (Resume for Related Tasks)**:
|
||||
```bash
|
||||
# Parameter Position: resume --last must be placed AFTER prompt at command END
|
||||
codex --full-auto exec "
|
||||
{continuation_prompt}
|
||||
" resume --last --skip-git-repo-check -s danger-full-access
|
||||
```
|
||||
|
||||
**Cross-Directory Context (Gemini/Qwen)**:
|
||||
```bash
|
||||
# When CONTEXT references external directories, use --include-directories
|
||||
# TWO-STEP REQUIREMENT:
|
||||
# Step 1: Reference in CONTEXT (@../shared/**/*)
|
||||
# Step 2: Add --include-directories parameter
|
||||
cd src/auth && gemini -p "
|
||||
PURPOSE: {goal}
|
||||
CONTEXT: @**/* @../shared/**/* @../types/**/*
|
||||
...
|
||||
" --include-directories ../shared,../types
|
||||
```
|
||||
|
||||
### Directory Scope Rules
|
||||
|
||||
**Once `cd` to a directory**:
|
||||
- **@ references ONLY apply to current directory and its subdirectories**
|
||||
- `@**/*` = All files within current directory tree
|
||||
- `@*.ts` = TypeScript files in current directory tree
|
||||
- `@src/**/*` = Files within src subdirectory (if exists)
|
||||
- **CANNOT reference parent or sibling directories via @ alone**
|
||||
|
||||
**To reference files outside current directory**:
|
||||
- **Step 1**: Add `--include-directories` parameter
|
||||
- **Step 2**: Explicitly reference in CONTEXT field with @ patterns
|
||||
- **⚠️ BOTH steps are MANDATORY**
|
||||
- **Rule**: If CONTEXT contains `@../dir/**/*`, command MUST include `--include-directories ../dir`
|
||||
|
||||
### Timeout Configuration
|
||||
|
||||
```javascript
|
||||
@@ -361,30 +418,6 @@ if (activeSession.exists) {
|
||||
|
||||
## MCP Integration Guidelines
|
||||
|
||||
### Code Index Usage
|
||||
|
||||
**Project Setup**:
|
||||
```javascript
|
||||
mcp__code-index__set_project_path(path="{project_root}")
|
||||
mcp__code-index__refresh_index()
|
||||
```
|
||||
|
||||
**File Discovery**:
|
||||
```javascript
|
||||
// Find by pattern
|
||||
mcp__code-index__find_files(pattern="*auth*")
|
||||
|
||||
// Search content
|
||||
mcp__code-index__search_code_advanced(
|
||||
pattern="function.*authenticate",
|
||||
file_pattern="*.ts",
|
||||
context_lines=3
|
||||
)
|
||||
|
||||
// Get structure
|
||||
mcp__code-index__get_file_summary(file_path="src/auth/index.ts")
|
||||
```
|
||||
|
||||
### Exa Research Usage
|
||||
|
||||
**Best Practices**:
|
||||
@@ -407,13 +440,11 @@ mcp__exa__get_code_context_exa(
|
||||
|
||||
### Graceful Degradation
|
||||
|
||||
**MCP Unavailable**:
|
||||
**MCP Exa Unavailable**:
|
||||
```bash
|
||||
# Fallback to ripgrep + find
|
||||
if ! mcp__code-index__find_files; then
|
||||
find . -name "*{keyword}*" -type f | grep -v node_modules
|
||||
rg "{keyword}" --type ts --max-count 20
|
||||
fi
|
||||
# Fallback to local search only
|
||||
find . -name "*{keyword}*" -type f | grep -v node_modules
|
||||
rg "{keyword}" --type ts --max-count 20
|
||||
```
|
||||
|
||||
**Tool Unavailable**:
|
||||
@@ -476,3 +507,5 @@ Before completing execution:
|
||||
- Leave partial results without documentation
|
||||
|
||||
|
||||
### Windows Path Format Guidelines
|
||||
- **Quick Ref**: `C:\Users` → MCP: `C:\\Users` | Bash: `/c/Users` or `C:/Users`
|
||||
@@ -33,6 +33,14 @@ You are a code execution specialist focused on implementing high-quality, produc
|
||||
- User-provided task description and context
|
||||
- Existing documentation and code examples
|
||||
- Project CLAUDE.md standards
|
||||
- **context-package.json** (when available in workflow tasks)

**Context Package** (CCW Workflow):
`context-package.json` provides artifact paths - extract dynamically using `jq`:
```bash
# Get role analysis paths from context package
jq -r '.brainstorm_artifacts.role_analyses[].files[].path' context-package.json
```
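
A possible follow-up, assuming the extracted paths are readable relative to the current working directory:

```bash
# Read each role analysis referenced by the context package.
jq -r '.brainstorm_artifacts.role_analyses[].files[].path' context-package.json \
  | while read -r p; do
      echo "== $p =="
      cat "$p"
    done
```
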
**Pre-Analysis: Smart Tech Stack Loading**:
|
||||
```bash
|
||||
@@ -84,11 +92,14 @@ ELIF context insufficient OR task has flow control marker:
|
||||
|
||||
**Rule**: Before referencing modules/components, use `rg` or search to verify existence first.
|
||||
|
||||
**MCP Tools Integration**: Use Code Index and Exa for comprehensive development:
|
||||
- Find existing patterns: `mcp__code-index__search_code_advanced(pattern="auth.*function")`
|
||||
- Locate files: `mcp__code-index__find_files(pattern="src/**/*.ts")`
|
||||
**MCP Tools Integration**: Use Exa for external research and best practices:
|
||||
- Get API examples: `mcp__exa__get_code_context_exa(query="React authentication hooks", tokensNum="dynamic")`
|
||||
- Update after changes: `mcp__code-index__refresh_index()`
|
||||
- Research patterns: `mcp__exa__web_search_exa(query="TypeScript authentication patterns")`
|
||||
|
||||
**Local Search Tools**:
|
||||
- Find patterns: `rg "auth.*function" --type ts -n`
|
||||
- Locate files: `find . -name "*.ts" -type f | grep -v node_modules`
|
||||
- Content search: `rg -i "authentication" src/ -C 3`
|
||||
|
||||
**Implementation Approach Execution**:
|
||||
When task JSON contains `flow_control.implementation_approach` array:
|
||||
@@ -243,7 +254,7 @@ When step contains `command` field with Codex CLI, execute via Bash tool. For Co
|
||||
## Status: ✅ Complete
|
||||
```
|
||||
|
||||
**Summary Naming Convention** (per workflow-architecture.md):
**Summary Naming Convention**:
- **Main tasks**: `IMPL-[task-id]-summary.md` (e.g., `IMPL-001-summary.md`)
- **Subtasks**: `IMPL-[task-id].[subtask-id]-summary.md` (e.g., `IMPL-001.1-summary.md`)
- **Location**: Always in `.summaries/` directory within session workflow folder
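
As a small illustration, a sketch of resolving a summary path from these rules; the session and task IDs are hypothetical:

```bash
# Sketch: build the summary path for a subtask (IDs are hypothetical).
session_id="WFS-auth"
task_id="IMPL-001.1"
summary_path=".workflow/${session_id}/.summaries/${task_id}-summary.md"
mkdir -p "$(dirname "$summary_path")"
printf '# %s Summary\n\n## Status: ✅ Complete\n' "$task_id" > "$summary_path"
```
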
@@ -297,3 +308,5 @@ Before completing any task, verify:
|
||||
- Keep functions small and focused
|
||||
- Generate detailed summary documents with complete component/method listings
|
||||
- Document all new interfaces, types, and constants for dependent task reference
|
||||
### Windows Path Format Guidelines
|
||||
- **Quick Ref**: `C:\Users` → MCP: `C:\\Users` | Bash: `/c/Users` or `C:/Users`
|
||||
@@ -14,11 +14,11 @@ description: |
|
||||
Examples:
|
||||
- Context: Auto brainstorm assigns system-architect role
|
||||
auto.md: Assigns dedicated agent with ASSIGNED_ROLE: system-architect
|
||||
agent: "I'll execute system-architect analysis for this topic, creating architecture-focused conceptual analysis in .brainstorming/system-architect/ directory"
|
||||
agent: "I'll execute system-architect analysis for this topic, creating architecture-focused conceptual analysis in OUTPUT_LOCATION"
|
||||
|
||||
- Context: Auto brainstorm assigns ui-designer role
|
||||
auto.md: Assigns dedicated agent with ASSIGNED_ROLE: ui-designer
|
||||
agent: "I'll execute ui-designer analysis for this topic, creating UX-focused conceptual analysis in .brainstorming/ui-designer/ directory"
|
||||
agent: "I'll execute ui-designer analysis for this topic, creating UX-focused conceptual analysis in OUTPUT_LOCATION"
|
||||
|
||||
color: purple
|
||||
---
|
||||
@@ -99,7 +99,7 @@ This agent processes **simplified inline [FLOW_CONTROL]** format from brainstorm
|
||||
### Flow Control Steps
|
||||
1. **load_topic_framework**
|
||||
- Action: Load structured topic framework
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/topic-framework.md)
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Output: topic_framework
|
||||
|
||||
2. **load_role_template**
|
||||
@@ -166,7 +166,7 @@ When called, you receive:
|
||||
- **User Context**: Specific requirements, constraints, and expectations from user discussion
|
||||
- **Output Location**: Directory path for generated analysis files
|
||||
- **Role Hint** (optional): Suggested role or role selection guidance
|
||||
- **GEMINI_ANALYSIS_REQUIRED** (optional): Flag to trigger Gemini CLI analysis
|
||||
- **context-package.json** (CCW Workflow): Artifact paths catalog - extract using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
|
||||
- **ASSIGNED_ROLE** (optional): Specific role assignment
|
||||
- **ANALYSIS_DIMENSIONS** (optional): Role-specific analysis dimensions
|
||||
|
||||
@@ -231,18 +231,24 @@ Generate documents according to loaded role template specifications:

**Required Files**:
- **analysis.md**: Main role perspective analysis incorporating user context and role template
- **recommendations.md**: Role-specific strategic recommendations and action items
- **[role-deliverables]/**: Directory for specialized role outputs as defined in planning role template
- **File Naming**: MUST start with `analysis` prefix (e.g., `analysis.md`, `analysis-1.md`, `analysis-2.md`)
- **FORBIDDEN**: Never create `recommendations.md` or any file not starting with `analysis` prefix
- **Auto-split if large**: If content >800 lines, split to `analysis-1.md`, `analysis-2.md` (max 3 files: analysis.md, analysis-1.md, analysis-2.md)
- **Content**: Includes both analysis AND recommendations sections within analysis files
- **[role-deliverables]/**: Directory for specialized role outputs as defined in planning role template (optional)

**File Structure Example**:
```
.workflow/WFS-[session]/.brainstorming/system-architect/
├── analysis.md                    # Main system architecture analysis
├── recommendations.md             # Architecture recommendations
└── deliverables/
├── analysis.md                    # Main system architecture analysis with recommendations
├── analysis-1.md                  # (Optional) Continuation if content >800 lines
└── deliverables/                  # (Optional) Additional role-specific outputs
    ├── technical-architecture.md  # System design specifications
    ├── technology-stack.md        # Technology selection rationale
    └── scalability-plan.md        # Scaling strategy

NOTE: ALL brainstorming output files MUST start with 'analysis' prefix
FORBIDDEN: recommendations.md, recommendations-*.md, or any non-'analysis' prefixed files
```
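
The >800-line auto-split rule can be applied mechanically. A minimal sketch, assuming GNU `split` and a draft `analysis.md` already present in the role directory:

```bash
# Sketch: enforce the 800-line auto-split (max 3 analysis files).
role_dir=".workflow/WFS-auth/.brainstorming/system-architect"   # hypothetical session/role
if [ "$(wc -l < "${role_dir}/analysis.md")" -gt 800 ]; then
  split -l 800 -d -a 1 "${role_dir}/analysis.md" "${role_dir}/analysis-part-"
  mv "${role_dir}/analysis-part-0" "${role_dir}/analysis.md"
  [ -f "${role_dir}/analysis-part-1" ] && mv "${role_dir}/analysis-part-1" "${role_dir}/analysis-1.md"
  [ -f "${role_dir}/analysis-part-2" ] && mv "${role_dir}/analysis-part-2" "${role_dir}/analysis-2.md"
fi
```
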
## Role-Specific Planning Process
|
||||
@@ -263,9 +269,13 @@ Generate documents according to loaded role template specifications:
|
||||
|
||||
### 3. Brainstorming Documentation Phase
|
||||
- **Create analysis.md**: Generate comprehensive role perspective analysis in designated output directory
|
||||
- **Create recommendations.md**: Generate role-specific strategic recommendations and action items
|
||||
- **Generate Role Deliverables**: Create specialized outputs as defined in planning role template
|
||||
- **File Naming**: MUST start with `analysis` prefix (e.g., `analysis.md`, `analysis-1.md`, `analysis-2.md`)
|
||||
- **FORBIDDEN**: Never create `recommendations.md` or any file not starting with `analysis` prefix
|
||||
- **Content**: Include both analysis AND recommendations sections within analysis files
|
||||
- **Auto-split**: If content >800 lines, split to `analysis-1.md`, `analysis-2.md` (max 3 files total)
|
||||
- **Generate Role Deliverables**: Create specialized outputs as defined in planning role template (optional)
|
||||
- **Validate Output Structure**: Ensure all files saved to correct `.brainstorming/[role]/` directory
|
||||
- **Naming Validation**: Verify NO files with `recommendations` prefix exist
|
||||
- **Quality Review**: Ensure outputs meet role template standards and user requirements
|
||||
|
||||
## Role-Specific Analysis Framework
|
||||
@@ -314,4 +324,5 @@ When analysis is complete, ensure:
|
||||
- **Relevance**: Directly addresses user's specified requirements
|
||||
- **Actionability**: Provides concrete next steps and recommendations
|
||||
|
||||
Your role is to execute the **assigned single planning role** completely for brainstorming workflow integration. Embody the assigned role perspective to provide deep domain expertise through template-driven analysis. Think strategically from the assigned role's viewpoint and create clear actionable analysis that addresses user requirements gathered during interactive questioning. Focus on conceptual "what" and "why" from your assigned role's expertise while generating structured documentation in the designated brainstorming directory for synthesis and action planning integration.
|
||||
### Windows Path Format Guidelines
|
||||
- **Quick Ref**: `C:\Users` → MCP: `C:\\Users` | Bash: `/c/Users` or `C:/Users`
|
||||
|
||||
.claude/agents/context-search-agent.md (new file, 509 lines)
@@ -0,0 +1,509 @@
|
||||
---
|
||||
name: context-search-agent
|
||||
description: |
|
||||
Intelligent context collector for development tasks. Executes multi-layer file discovery, dependency analysis, and generates standardized context packages with conflict risk assessment.
|
||||
|
||||
Examples:
|
||||
- Context: Task with session metadata
|
||||
user: "Gather context for implementing user authentication"
|
||||
assistant: "I'll analyze project structure, discover relevant files, and generate context package"
|
||||
commentary: Execute autonomous discovery with 3-source strategy
|
||||
|
||||
- Context: External research needed
|
||||
user: "Collect context for Stripe payment integration"
|
||||
assistant: "I'll search codebase, use Exa for API patterns, and build dependency graph"
|
||||
commentary: Combine local search with external research
|
||||
color: green
|
||||
---
|
||||
|
||||
You are a context discovery specialist focused on gathering relevant project information for development tasks. Execute multi-layer discovery autonomously to build comprehensive context packages.
|
||||
|
||||
## Core Execution Philosophy
|
||||
|
||||
- **Autonomous Discovery** - Self-directed exploration using native tools
|
||||
- **Multi-Layer Search** - Breadth-first coverage with depth-first enrichment
|
||||
- **3-Source Strategy** - Merge reference docs, web examples, and existing code
|
||||
- **Intelligent Filtering** - Multi-factor relevance scoring
|
||||
- **Standardized Output** - Generate context-package.json
|
||||
|
||||
## Tool Arsenal
|
||||
|
||||
### 1. Reference Documentation (Project Standards)
|
||||
**Tools**:
|
||||
- `Read()` - Load CLAUDE.md, README.md, architecture docs
|
||||
- `Bash(~/.claude/scripts/get_modules_by_depth.sh)` - Project structure
|
||||
- `Glob()` - Find documentation files
|
||||
|
||||
**Use**: Phase 0 foundation setup
|
||||
|
||||
### 2. Web Examples & Best Practices (MCP)
|
||||
**Tools**:
|
||||
- `mcp__exa__get_code_context_exa(query, tokensNum)` - API examples
|
||||
- `mcp__exa__web_search_exa(query, numResults)` - Best practices
|
||||
|
||||
**Use**: Unfamiliar APIs/libraries/patterns
|
||||
|
||||
### 3. Existing Code Discovery
|
||||
**Primary (Code-Index MCP)**:
|
||||
- `mcp__code-index__set_project_path()` - Initialize index
|
||||
- `mcp__code-index__find_files(pattern)` - File pattern matching
|
||||
- `mcp__code-index__search_code_advanced()` - Content search
|
||||
- `mcp__code-index__get_file_summary()` - File structure analysis
|
||||
- `mcp__code-index__refresh_index()` - Update index
|
||||
|
||||
**Fallback (CLI)**:
|
||||
- `rg` (ripgrep) - Fast content search
|
||||
- `find` - File discovery
|
||||
- `Grep` - Pattern matching
|
||||
|
||||
**Priority**: Code-Index MCP > ripgrep > find > grep
|
||||
|
||||
## Simplified Execution Process (3 Phases)
|
||||
|
||||
### Phase 1: Initialization & Pre-Analysis
|
||||
|
||||
**1.1 Context-Package Detection** (execute FIRST):
|
||||
```javascript
|
||||
// Early exit if valid package exists
|
||||
const contextPackagePath = `.workflow/${session_id}/.process/context-package.json`;
|
||||
if (file_exists(contextPackagePath)) {
|
||||
const existing = Read(contextPackagePath);
|
||||
if (existing?.metadata?.session_id === session_id) {
|
||||
console.log("✅ Valid context-package found, returning existing");
|
||||
return existing; // Immediate return, skip all processing
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**1.2 Foundation Setup**:
|
||||
```javascript
|
||||
// 1. Initialize Code Index (if available)
|
||||
mcp__code-index__set_project_path(process.cwd())
|
||||
mcp__code-index__refresh_index()
|
||||
|
||||
// 2. Project Structure
|
||||
bash(~/.claude/scripts/get_modules_by_depth.sh)
|
||||
|
||||
// 3. Load Documentation (if not in memory)
|
||||
if (!memory.has("CLAUDE.md")) Read(CLAUDE.md)
|
||||
if (!memory.has("README.md")) Read(README.md)
|
||||
```
|
||||
|
||||
**1.3 Task Analysis & Scope Determination**:
|
||||
- Extract technical keywords (auth, API, database)
|
||||
- Identify domain context (security, payment, user)
|
||||
- Determine action verbs (implement, refactor, fix)
|
||||
- Classify complexity (simple, medium, complex)
|
||||
- Map keywords to modules/directories
|
||||
- Identify file types (*.ts, *.py, *.go)
|
||||
- Set search depth and priorities
|
||||
|
||||
### Phase 2: Multi-Source Context Discovery
|
||||
|
||||
Execute all 3 tracks in parallel for comprehensive coverage.
|
||||
|
||||
#### Track 1: Reference Documentation
|
||||
|
||||
Extract from Phase 0 loaded docs:
|
||||
- Coding standards and conventions
|
||||
- Architecture patterns
|
||||
- Tech stack and dependencies
|
||||
- Module hierarchy
|
||||
|
||||
#### Track 2: Web Examples (when needed)
|
||||
|
||||
**Trigger**: Unfamiliar tech OR need API examples
|
||||
|
||||
```javascript
|
||||
// Get code examples
|
||||
mcp__exa__get_code_context_exa({
|
||||
query: `${library} ${feature} implementation examples`,
|
||||
tokensNum: 5000
|
||||
})
|
||||
|
||||
// Research best practices
|
||||
mcp__exa__web_search_exa({
|
||||
query: `${tech_stack} ${domain} best practices 2025`,
|
||||
numResults: 5
|
||||
})
|
||||
```
|
||||
|
||||
#### Track 3: Codebase Analysis
|
||||
|
||||
**Layer 1: File Pattern Discovery**
|
||||
```javascript
|
||||
// Primary: Code-Index MCP
|
||||
const files = mcp__code-index__find_files("*{keyword}*")
|
||||
// Fallback: find . -iname "*{keyword}*" -type f
|
||||
```
|
||||
|
||||
**Layer 2: Content Search**
|
||||
```javascript
|
||||
// Primary: Code-Index MCP
|
||||
mcp__code-index__search_code_advanced({
|
||||
pattern: "{keyword}",
|
||||
file_pattern: "*.ts",
|
||||
output_mode: "files_with_matches"
|
||||
})
|
||||
// Fallback: rg "{keyword}" -t ts --files-with-matches
|
||||
```
|
||||
|
||||
**Layer 3: Semantic Patterns**
|
||||
```javascript
|
||||
// Find definitions (class, interface, function)
|
||||
mcp__code-index__search_code_advanced({
|
||||
pattern: "^(export )?(class|interface|type|function) .*{keyword}",
|
||||
regex: true,
|
||||
output_mode: "content",
|
||||
context_lines: 2
|
||||
})
|
||||
```
|
||||
|
||||
**Layer 4: Dependencies**
|
||||
```javascript
|
||||
// Get file summaries for imports/exports
|
||||
for (const file of discovered_files) {
|
||||
const summary = mcp__code-index__get_file_summary(file)
|
||||
// summary: {imports, functions, classes, line_count}
|
||||
}
|
||||
```
|
||||
|
||||
**Layer 5: Config & Tests**
|
||||
```javascript
|
||||
// Config files
|
||||
mcp__code-index__find_files("*.config.*")
|
||||
mcp__code-index__find_files("package.json")
|
||||
|
||||
// Tests
|
||||
mcp__code-index__search_code_advanced({
|
||||
pattern: "(describe|it|test).*{keyword}",
|
||||
file_pattern: "*.{test,spec}.*"
|
||||
})
|
||||
```
|
||||
|
||||
### Phase 3: Synthesis, Assessment & Packaging

**3.1 Relevance Scoring**

```javascript
score = (0.4 × direct_match) +    // Filename/path match
        (0.3 × content_density) + // Keyword frequency
        (0.2 × structural_pos) +  // Architecture role
        (0.1 × dependency_link)   // Connection strength

// Filter: Include only score > 0.5
```
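
For concreteness, a worked example of the weighted score with assumed factor values for one candidate file:

```bash
# Assumed factors for src/auth/AuthService.ts:
# direct_match=1.0, content_density=0.7, structural_pos=0.8, dependency_link=0.5
awk 'BEGIN {
  score = 0.4*1.0 + 0.3*0.7 + 0.2*0.8 + 0.1*0.5   # = 0.82
  print (score > 0.5 ? "include" : "exclude"), score
}'
```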

**3.2 Dependency Graph**

Build directed graph (an import-edge extraction sketch follows this list):
- Direct dependencies (explicit imports)
- Transitive dependencies (max 2 levels)
- Optional dependencies (type-only, dev)
- Integration points (shared modules)
- Circular dependencies (flag as risk)
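
A minimal sketch of extracting direct import edges with ripgrep, assuming ES-module relative imports and a `discovered_files.txt` list produced in Layer 1 (both assumptions):

```bash
# Sketch: emit one "file -> imported-path" edge per relative import.
while read -r f; do
  rg -o "from ['\"](\.[^'\"]+)['\"]" -r '$1' "$f" | while read -r dep; do
    echo "$f -> $dep"
  done
done < discovered_files.txt
```
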
**3.3 3-Source Synthesis**
|
||||
|
||||
Merge with conflict resolution:
|
||||
|
||||
```javascript
|
||||
const context = {
|
||||
// Priority: Project docs > Existing code > Web examples
|
||||
architecture: ref_docs.patterns || code.structure,
|
||||
|
||||
conventions: {
|
||||
naming: ref_docs.standards || code.actual_patterns,
|
||||
error_handling: ref_docs.standards || code.patterns || web.best_practices
|
||||
},
|
||||
|
||||
tech_stack: {
|
||||
// Actual (package.json) takes precedence
|
||||
language: code.actual.language,
|
||||
frameworks: merge_unique([ref_docs.declared, code.actual]),
|
||||
libraries: code.actual.libraries
|
||||
},
|
||||
|
||||
// Web examples fill gaps
|
||||
supplemental: web.examples,
|
||||
best_practices: web.industry_standards
|
||||
}
|
||||
```
|
||||
|
||||
**Conflict Resolution**:
|
||||
1. Architecture: Docs > Code > Web
|
||||
2. Conventions: Declared > Actual > Industry
|
||||
3. Tech Stack: Actual (package.json) > Declared
|
||||
4. Missing: Use web examples
|
||||
|
||||
**3.5 Brainstorm Artifacts Integration**
|
||||
|
||||
If `.workflow/{session}/.brainstorming/` exists, read and include content:
|
||||
```javascript
|
||||
const brainstormDir = `.workflow/${session}/.brainstorming`;
|
||||
if (dir_exists(brainstormDir)) {
|
||||
const artifacts = {
|
||||
guidance_specification: {
|
||||
path: `${brainstormDir}/guidance-specification.md`,
|
||||
exists: file_exists(`${brainstormDir}/guidance-specification.md`),
|
||||
content: Read(`${brainstormDir}/guidance-specification.md`) || null
|
||||
},
|
||||
role_analyses: glob(`${brainstormDir}/*/analysis*.md`).map(file => ({
|
||||
role: extract_role_from_path(file),
|
||||
files: [{
|
||||
path: file,
|
||||
type: file.includes('analysis.md') ? 'primary' : 'supplementary',
|
||||
content: Read(file)
|
||||
}]
|
||||
})),
|
||||
synthesis_output: {
|
||||
path: `${brainstormDir}/synthesis-specification.md`,
|
||||
exists: file_exists(`${brainstormDir}/synthesis-specification.md`),
|
||||
content: Read(`${brainstormDir}/synthesis-specification.md`) || null
|
||||
}
|
||||
};
|
||||
}
|
||||
```

**3.6 Conflict Detection**

Calculate risk level based on (a scoring sketch follows):
- Existing file count (<5: low, 5-15: medium, >15: high)
- API/architecture/data model changes
- Breaking changes identification
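
One way to operationalize this, with the file-count thresholds as stated; treating any structural change as at least medium risk is an assumption:

```bash
# Sketch: derive risk_level from the factors above (inputs are illustrative).
existing_count=7; api_changes=true; data_model_changes=true; breaking_count=2
risk="low"
[ "$existing_count" -ge 5 ]  && risk="medium"
[ "$existing_count" -gt 15 ] && risk="high"
if { [ "$api_changes" = true ] || [ "$data_model_changes" = true ] || [ "$breaking_count" -gt 0 ]; } \
   && [ "$risk" = "low" ]; then
  risk="medium"
fi
echo "risk_level=$risk"
```
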
**3.7 Context Packaging & Output**
|
||||
|
||||
**Output**: `.workflow/{session-id}/.process/context-package.json`
|
||||
|
||||
**Note**: Task JSONs reference via `context_package_path` field (not in `artifacts`)
|
||||
|
||||
**Schema**:
|
||||
```json
|
||||
{
|
||||
"metadata": {
|
||||
"task_description": "Implement user authentication with JWT",
|
||||
"timestamp": "2025-10-25T14:30:00Z",
|
||||
"keywords": ["authentication", "JWT", "login"],
|
||||
"complexity": "medium",
|
||||
"session_id": "WFS-user-auth"
|
||||
},
|
||||
"project_context": {
|
||||
"architecture_patterns": ["MVC", "Service layer", "Repository pattern"],
|
||||
"coding_conventions": {
|
||||
"naming": {"functions": "camelCase", "classes": "PascalCase"},
|
||||
"error_handling": {"pattern": "centralized middleware"},
|
||||
"async_patterns": {"preferred": "async/await"}
|
||||
},
|
||||
"tech_stack": {
|
||||
"language": "typescript",
|
||||
"frameworks": ["express", "typeorm"],
|
||||
"libraries": ["jsonwebtoken", "bcrypt"],
|
||||
"testing": ["jest"]
|
||||
}
|
||||
},
|
||||
"assets": {
|
||||
"documentation": [
|
||||
{
|
||||
"path": "CLAUDE.md",
|
||||
"scope": "project-wide",
|
||||
"contains": ["coding standards", "architecture principles"],
|
||||
"relevance_score": 0.95
|
||||
},
|
||||
{"path": "docs/api/auth.md", "scope": "api-spec", "relevance_score": 0.92}
|
||||
],
|
||||
"source_code": [
|
||||
{
|
||||
"path": "src/auth/AuthService.ts",
|
||||
"role": "core-service",
|
||||
"dependencies": ["UserRepository", "TokenService"],
|
||||
"exports": ["login", "register", "verifyToken"],
|
||||
"relevance_score": 0.99
|
||||
},
|
||||
{
|
||||
"path": "src/models/User.ts",
|
||||
"role": "data-model",
|
||||
"exports": ["User", "UserSchema"],
|
||||
"relevance_score": 0.94
|
||||
}
|
||||
],
|
||||
"config": [
|
||||
{"path": "package.json", "relevance_score": 0.80},
|
||||
{"path": ".env.example", "relevance_score": 0.78}
|
||||
],
|
||||
"tests": [
|
||||
{"path": "tests/auth/login.test.ts", "relevance_score": 0.95}
|
||||
]
|
||||
},
|
||||
"dependencies": {
|
||||
"internal": [
|
||||
{
|
||||
"from": "AuthController.ts",
|
||||
"to": "AuthService.ts",
|
||||
"type": "service-dependency"
|
||||
}
|
||||
],
|
||||
"external": [
|
||||
{
|
||||
"package": "jsonwebtoken",
|
||||
"version": "^9.0.0",
|
||||
"usage": "JWT token operations"
|
||||
},
|
||||
{
|
||||
"package": "bcrypt",
|
||||
"version": "^5.1.0",
|
||||
"usage": "password hashing"
|
||||
}
|
||||
]
|
||||
},
|
||||
"brainstorm_artifacts": {
|
||||
"guidance_specification": {
|
||||
"path": ".workflow/WFS-xxx/.brainstorming/guidance-specification.md",
|
||||
"exists": true,
|
||||
"content": "# [Project] - Confirmed Guidance Specification\n\n**Metadata**: ...\n\n## 1. Project Positioning & Goals\n..."
|
||||
},
|
||||
"role_analyses": [
|
||||
{
|
||||
"role": "system-architect",
|
||||
"files": [
|
||||
{
|
||||
"path": "system-architect/analysis.md",
|
||||
"type": "primary",
|
||||
"content": "# System Architecture Analysis\n\n## Overview\n..."
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"synthesis_output": {
|
||||
"path": ".workflow/WFS-xxx/.brainstorming/synthesis-specification.md",
|
||||
"exists": true,
|
||||
"content": "# Synthesis Specification\n\n## Cross-Role Integration\n..."
|
||||
}
|
||||
},
|
||||
"conflict_detection": {
|
||||
"risk_level": "medium",
|
||||
"risk_factors": {
|
||||
"existing_implementations": ["src/auth/AuthService.ts", "src/models/User.ts"],
|
||||
"api_changes": true,
|
||||
"architecture_changes": false,
|
||||
"data_model_changes": true,
|
||||
"breaking_changes": ["Login response format changes", "User schema modification"]
|
||||
},
|
||||
"affected_modules": ["auth", "user-model", "middleware"],
|
||||
"mitigation_strategy": "Incremental refactoring with backward compatibility"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Execution Mode: Brainstorm vs Plan
|
||||
|
||||
### Brainstorm Mode (Lightweight)
|
||||
**Purpose**: Provide high-level context for generating brainstorming questions
|
||||
**Execution**: Phase 1-2 only (skip deep analysis)
|
||||
**Output**:
|
||||
- Lightweight context-package with:
|
||||
- Project structure overview
|
||||
- Tech stack identification
|
||||
- High-level existing module names
|
||||
- Basic conflict risk (file count only)
|
||||
- Skip: Detailed dependency graphs, deep code analysis, web research
|
||||
|
||||
### Plan Mode (Comprehensive)
|
||||
**Purpose**: Detailed implementation planning with conflict detection
|
||||
**Execution**: Full Phase 1-3 (complete discovery + analysis)
|
||||
**Output**:
|
||||
- Comprehensive context-package with:
|
||||
- Detailed dependency graphs
|
||||
- Deep code structure analysis
|
||||
- Conflict detection with mitigation strategies
|
||||
- Web research for unfamiliar tech
|
||||
- Include: All discovery tracks, relevance scoring, 3-source synthesis
|
||||
|
||||
## Quality Validation

Before completion verify (a jq validation sketch follows the checklist):
- [ ] context-package.json in `.workflow/{session}/.process/`
- [ ] Valid JSON with all required fields
- [ ] Metadata complete (description, keywords, complexity)
- [ ] Project context documented (patterns, conventions, tech stack)
- [ ] Assets organized by type with metadata
- [ ] Dependencies mapped (internal + external)
- [ ] Conflict detection with risk level and mitigation
- [ ] File relevance >80%
- [ ] No sensitive data exposed
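
A hedged sketch of the structural checks with `jq`, using the field names from the schema above; the session path is illustrative:

```bash
pkg=".workflow/WFS-user-auth/.process/context-package.json"
jq -e '.metadata.session_id and .project_context.tech_stack and .assets
       and .dependencies and .conflict_detection.risk_level' "$pkg" >/dev/null \
  && echo "context-package: required fields present" \
  || echo "context-package: missing required fields"
```
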

## Performance Limits

**File Counts**:
- Max 30 high-priority (score >0.8)
- Max 20 medium-priority (score 0.5-0.8)
- Total limit: 50 files

**Size Filtering** (see the find sketch below):
- Skip files >10MB
- Flag files >1MB for review
- Prioritize files <100KB

**Depth Control**:
- Direct dependencies: Always include
- Transitive: Max 2 levels
- Optional: Only if score >0.7

**Tool Priority**: Code-Index > ripgrep > find > grep
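
A sketch of the size limits using `find` size predicates (GNU find assumed); node_modules is excluded as elsewhere in this document:

```bash
# Candidates: skip files larger than 10MB outright.
find . -type f ! -size +10M ! -path '*/node_modules/*' > candidates.txt
# Review list: files between 1MB and 10MB are flagged, not skipped.
find . -type f -size +1M ! -size +10M ! -path '*/node_modules/*' > review_flagged.txt
```
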
## Output Report
|
||||
|
||||
```
|
||||
✅ Context Gathering Complete
|
||||
|
||||
Task: {description}
|
||||
Keywords: {keywords}
|
||||
Complexity: {level}
|
||||
|
||||
Assets:
|
||||
- Documentation: {count}
|
||||
- Source Code: {high}/{medium} priority
|
||||
- Configuration: {count}
|
||||
- Tests: {count}
|
||||
|
||||
Dependencies:
|
||||
- Internal: {count}
|
||||
- External: {count}
|
||||
|
||||
Conflict Detection:
|
||||
- Risk: {level}
|
||||
- Affected: {modules}
|
||||
- Mitigation: {strategy}
|
||||
|
||||
Output: .workflow/{session}/.process/context-package.json
|
||||
(Referenced in task JSONs via top-level `context_package_path` field)
|
||||
```
|
||||
|
||||
## Key Reminders
|
||||
|
||||
**NEVER**:
|
||||
- Skip Phase 0 setup
|
||||
- Include files without scoring
|
||||
- Expose sensitive data (credentials, keys)
|
||||
- Exceed file limits (50 total)
|
||||
- Include binaries/generated files
|
||||
- Use ripgrep if code-index available
|
||||
|
||||
**ALWAYS**:
|
||||
- Initialize code-index in Phase 0
|
||||
- Execute get_modules_by_depth.sh
|
||||
- Load CLAUDE.md/README.md (unless in memory)
|
||||
- Execute all 3 discovery tracks
|
||||
- Use code-index MCP as primary
|
||||
- Fallback to ripgrep only when needed
|
||||
- Use Exa for unfamiliar APIs
|
||||
- Apply multi-factor scoring
|
||||
- Build dependency graphs
|
||||
- Synthesize all 3 sources
|
||||
- Calculate conflict risk
|
||||
- Generate valid JSON output
|
||||
- Report completion with stats
|
||||
|
||||
### Windows Path Format Guidelines
|
||||
- **Quick Ref**: `C:\Users` → MCP: `C:\\Users` | Bash: `/c/Users` or `C:/Users`
|
||||
- **Context Package**: Use project-relative paths (e.g., `src/auth/service.ts`)
|
||||
@@ -53,8 +53,7 @@ You are an expert technical documentation specialist. Your responsibility is to
|
||||
{
|
||||
"step": "analyze_module_structure",
|
||||
"action": "Deep analysis of module structure and API",
|
||||
"command": "bash(cd src/auth && ~/.claude/scripts/gemini-wrapper -p \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @{**/*}
|
||||
System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\")",
|
||||
"command": "bash(cd src/auth && gemini \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\")",
|
||||
"output_to": "module_analysis",
|
||||
"on_error": "fail"
|
||||
}
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
---
|
||||
name: general-purpose
|
||||
name: universal-executor
|
||||
description: |
|
||||
Versatile execution agent for implementing any task efficiently. Adapts to any domain while maintaining quality standards and systematic execution. Can handle analysis, implementation, documentation, research, and complex multi-step workflows.
|
||||
|
||||
|
||||
@@ -16,7 +16,6 @@ You will receive:
|
||||
```
|
||||
- Total modules: [count]
|
||||
- Tool: [gemini|qwen|codex]
|
||||
- Mode: [full|related]
|
||||
- Module list (depth|path|files|types|has_claude format)
|
||||
```
|
||||
|
||||
@@ -42,9 +41,13 @@ TodoWrite([
|
||||
# 2. Extract module paths for current depth
|
||||
# 3. Launch parallel jobs (max 4)
|
||||
|
||||
# Depth 5 example:
|
||||
~/.claude/scripts/update_module_claude.sh "./.claude/workflows/cli-templates/prompts/analysis" "full" "gemini" &
|
||||
~/.claude/scripts/update_module_claude.sh "./.claude/workflows/cli-templates/prompts/development" "full" "gemini" &
|
||||
# Depth 5 example (Layer 3 - use multi-layer):
|
||||
~/.claude/scripts/update_module_claude.sh "multi-layer" "./.claude/workflows/cli-templates/prompts/analysis" "gemini" &
|
||||
~/.claude/scripts/update_module_claude.sh "multi-layer" "./.claude/workflows/cli-templates/prompts/development" "gemini" &
|
||||
|
||||
# Depth 1 example (Layer 2 - use single-layer):
|
||||
~/.claude/scripts/update_module_claude.sh "single-layer" "./src/auth" "gemini" &
|
||||
~/.claude/scripts/update_module_claude.sh "single-layer" "./src/api" "gemini" &
|
||||
# ... up to 4 concurrent jobs
|
||||
|
||||
# 4. Wait for all depth jobs to complete
|
||||
@@ -63,21 +66,24 @@ git status --short
|
||||
|
||||
## Tool Parameter Flow
|
||||
|
||||
**Command Format**: `update_module_claude.sh <path> <mode> <tool>`
|
||||
**Command Format**: `update_module_claude.sh <strategy> <path> <tool>`
|
||||
|
||||
Examples:
|
||||
- Gemini: `update_module_claude.sh "./.claude/agents" "full" "gemini" &`
|
||||
- Qwen: `update_module_claude.sh "./src/api" "full" "qwen" &`
|
||||
- Codex: `update_module_claude.sh "./tests" "full" "codex" &`
|
||||
- Layer 3 (depth ≥3): `update_module_claude.sh "multi-layer" "./.claude/agents" "gemini" &`
|
||||
- Layer 2 (depth 1-2): `update_module_claude.sh "single-layer" "./src/api" "qwen" &`
|
||||
- Layer 1 (depth 0): `update_module_claude.sh "single-layer" "./tests" "codex" &`
|
||||
|
||||
## Execution Rules

1. **Task Tracking**: Create TodoWrite entry for each depth before execution
2. **Parallelism**: Max 4 jobs per depth, sequential across depths
3. **Tool Passing**: Always pass tool parameter as 3rd argument
4. **Path Accuracy**: Extract exact path from `depth:N|path:X|...` format
5. **Completion**: Mark todo completed only after all depth jobs finish
6. **No Skipping**: Process every module from input list
3. **Strategy Assignment**: Assign strategy based on depth (see the sketch after this list):
   - Depth ≥3 (Layer 3): Use "multi-layer" strategy
   - Depth 0-2 (Layers 1-2): Use "single-layer" strategy
4. **Tool Passing**: Always pass tool parameter as 3rd argument
5. **Path Accuracy**: Extract exact path from `depth:N|path:X|...` format
6. **Completion**: Mark todo completed only after all depth jobs finish
7. **No Skipping**: Process every module from input list
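
A minimal sketch of the depth-to-strategy mapping, parsing one line of the assumed `depth:N|path:X|...` input format:

```bash
line="depth:3|path:./.claude/agents|files:12|types:md|has_claude:true"   # hypothetical input line
depth=$(printf '%s' "$line" | sed -n 's/^depth:\([0-9]*\)|.*/\1/p')
path=$(printf '%s' "$line" | sed -n 's/.*|path:\([^|]*\)|.*/\1/p')
if [ "$depth" -ge 3 ]; then strategy="multi-layer"; else strategy="single-layer"; fi
~/.claude/scripts/update_module_claude.sh "$strategy" "$path" "gemini" &
```
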
## Concise Output
|
||||
|
||||
|
||||
@@ -68,6 +68,7 @@ When task JSON contains implementation_approach array:
|
||||
### 1. Context Assessment & Test Discovery
|
||||
- Analyze task context to identify test files and source code paths
|
||||
- Load test framework configuration (Jest, Pytest, Mocha, etc.)
|
||||
- **context-package.json** (CCW Workflow): Extract artifact paths using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
|
||||
- Identify test command from project configuration
|
||||
|
||||
```bash
|
||||
@@ -212,3 +213,5 @@ All tests pass - code is ready for deployment.
|
||||
**Your ultimate responsibility**: Ensure all tests pass. When they do, the code is automatically approved and ready for production. You are the final quality gate.
|
||||
|
||||
**Tests passing = Code approved = Mission complete** ✅
|
||||
### Windows Path Format Guidelines
|
||||
- **Quick Ref**: `C:\Users` → MCP: `C:\\Users` | Bash: `/c/Users` or `C:/Users`
|
||||
@@ -35,7 +35,7 @@ You are a specialized **UI Design Agent** that executes design generation tasks
|
||||
### 2. Layout Strategy Generation
|
||||
|
||||
**Invoked by**: `consolidate.md` Phase 2.5
|
||||
**Input**: Project context from synthesis-specification.md
|
||||
**Input**: Project context from role analysis documents
|
||||
**Task**: Research and generate adaptive layout strategies via Exa MCP (2024-2025 trends)
|
||||
|
||||
**Output**: layout-strategies.json with strategy definitions and rationale
|
||||
|
||||
@@ -67,24 +67,24 @@ The agent handles all phases internally (understanding, discovery, enhancement,
|
||||
|
||||
## File Pattern Auto-Detection

Keywords trigger specific file patterns:
- "auth" → `@{**/*auth*,**/*user*}`
- "component" → `@{src/components/**/*,**/*.component.*}`
- "API" → `@{**/api/**/*,**/routes/**/*}`
- "test" → `@{**/*.test.*,**/*.spec.*}`
- "config" → `@{*.config.*,**/config/**/*}`
- Generic → `@{src/**/*}`
Keywords trigger specific file patterns (each @ references one pattern):
- "auth" → `@**/*auth* @**/*user*`
- "component" → `@src/components/**/* @**/*.component.*`
- "API" → `@**/api/**/* @**/routes/**/*`
- "test" → `@**/*.test.* @**/*.spec.*`
- "config" → `@*.config.* @**/config/**/*`
- Generic → `@src/**/*`

For complex patterns, use `rg` or MCP tools to discover files first, then execute CLI with precise file references.
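
Applied directly, the new space-separated form can be selected with a simple keyword switch (the detected keyword is illustrative):

```bash
keyword="auth"
case "$keyword" in
  auth)      ctx='@**/*auth* @**/*user*' ;;
  component) ctx='@src/components/**/* @**/*.component.*' ;;
  API)       ctx='@**/api/**/* @**/routes/**/*' ;;
  test)      ctx='@**/*.test.* @**/*.spec.*' ;;
  config)    ctx='@*.config.* @**/config/**/*' ;;
  *)         ctx='@src/**/*' ;;
esac
echo "CONTEXT: @CLAUDE.md $ctx"
```
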
## Command Template
|
||||
|
||||
```bash
|
||||
cd . && ~/.claude/scripts/gemini-wrapper -p "
|
||||
cd . && gemini -p "
|
||||
PURPOSE: [analysis goal from target]
|
||||
TASK: [auto-detected analysis type]
|
||||
MODE: analysis
|
||||
CONTEXT: @{CLAUDE.md} [auto-detected file patterns]
|
||||
CONTEXT: @CLAUDE.md [auto-detected file patterns]
|
||||
EXPECTED: Insights, patterns, recommendations (NO code modification)
|
||||
RULES: [auto-selected template] | Focus on [analysis aspect]
|
||||
"
|
||||
@@ -112,7 +112,7 @@ RULES: [auto-selected template] | Focus on [analysis aspect]
|
||||
|
||||
**Architecture Analysis**:
|
||||
```bash
|
||||
/cli:analyze --tool qwen "component architecture"
|
||||
/cli:analyze --tool qwen -p "component architecture"
|
||||
# Executes: Qwen with component file patterns
|
||||
# Returns: Architecture review, design patterns, improvement suggestions
|
||||
```
|
||||
@@ -147,5 +147,4 @@ RULES: [auto-selected template] | Focus on [analysis aspect]
|
||||
## Notes
|
||||
|
||||
- Command templates, file patterns, and best practices: see intelligent-tools-strategy.md (loaded in memory)
|
||||
- Scratchpad directory details: see workflow-architecture.md
|
||||
- Scratchpad files can be promoted to workflow sessions if analysis proves valuable
|
||||
|
||||
@@ -27,7 +27,6 @@ Direct Q&A interaction with CLI tools for codebase analysis. **Analysis only - d
|
||||
- `--agent` - Use cli-execution-agent for automated context discovery (5-phase intelligent mode)
|
||||
- `--tool <codex|gemini|qwen>` - Select CLI tool (default: gemini, ignored in agent mode)
|
||||
- `--enhance` - Enhance inquiry with `/enhance-prompt` first
|
||||
- `--all-files` - Include entire codebase in context
|
||||
- `--save-session` - Save interaction to workflow session
|
||||
|
||||
## Execution Flow
|
||||
@@ -36,7 +35,7 @@ Direct Q&A interaction with CLI tools for codebase analysis. **Analysis only - d
|
||||
|
||||
1. Parse tool selection (default: gemini)
|
||||
2. If `--enhance`: Execute `/enhance-prompt` to expand user intent
|
||||
3. Assemble context: `@{CLAUDE.md}` + user-specified files or `--all-files`
|
||||
3. Assemble context: `@CLAUDE.md` + user-specified files or `@**/*` for entire codebase
|
||||
4. Execute CLI tool with assembled context (read-only, analysis mode)
|
||||
5. Return explanations and insights (NO code changes)
|
||||
6. Optionally save to workflow session
|
||||
@@ -54,7 +53,6 @@ Task(
|
||||
Task: ${inquiry}
|
||||
Mode: analyze (Q&A)
|
||||
Tool Preference: ${tool_flag || 'auto-select'}
|
||||
${all_files_flag ? 'Scope: all-files' : ''}
|
||||
|
||||
Agent will autonomously:
|
||||
- Discover files relevant to the question
|
||||
@@ -69,22 +67,24 @@ The agent handles all phases internally.
|
||||
|
||||
## Context Assembly
|
||||
|
||||
**Always included**: `@{CLAUDE.md,**/*CLAUDE.md}` (project guidelines)
|
||||
**Always included**: `@CLAUDE.md @**/*CLAUDE.md` (project guidelines, space-separated)
|
||||
|
||||
**Optional**:
|
||||
- User-explicit files from inquiry keywords
|
||||
- `--all-files` flag includes entire codebase (`--all-files` wrapper parameter)
|
||||
- Use `@**/*` in CONTEXT for entire codebase
|
||||
|
||||
For targeted analysis, use `rg` or MCP tools to discover relevant files first, then build precise CONTEXT field.
|
||||
|
||||
## Command Template
|
||||
|
||||
```bash
|
||||
cd . && ~/.claude/scripts/gemini-wrapper -p "
|
||||
INQUIRY: [user question]
|
||||
CONTEXT: @{CLAUDE.md,**/*CLAUDE.md} [inferred or --all-files]
|
||||
cd . && gemini -p "
|
||||
PURPOSE: Answer user inquiry about codebase
|
||||
TASK: [user question]
|
||||
MODE: analysis
|
||||
RESPONSE: Direct answer, explanation, insights (NO code modification)
|
||||
CONTEXT: @CLAUDE.md @**/*CLAUDE.md [inferred files or @**/* for all files]
|
||||
EXPECTED: Direct answer, explanation, insights (NO code modification)
|
||||
RULES: Focus on clarity and accuracy
|
||||
"
|
||||
```
|
||||
|
||||
@@ -110,7 +110,7 @@ RESPONSE: Direct answer, explanation, insights (NO code modification)
|
||||
|
||||
**Architecture Question**:
|
||||
```bash
|
||||
/cli:chat --tool qwen "how does React component optimization work here"
|
||||
/cli:chat --tool qwen -p "how does React component optimization work here"
|
||||
# Executes: Qwen architecture analysis
|
||||
# Returns: Component structure explanation, optimization patterns used
|
||||
```
|
||||
@@ -130,13 +130,6 @@ RESPONSE: Direct answer, explanation, insights (NO code modification)
|
||||
# Returns: Detailed explanation of login flow and potential issues
|
||||
```
|
||||
|
||||
**Broad Context**:
|
||||
```bash
|
||||
/cli:chat --all-files "find all API endpoints"
|
||||
# Executes: Analysis across entire codebase
|
||||
# Returns: List and explanation of API endpoints (NO code generation)
|
||||
```
|
||||
|
||||
## Output Routing
|
||||
|
||||
**Output Destination Logic**:
|
||||
@@ -152,5 +145,4 @@ RESPONSE: Direct answer, explanation, insights (NO code modification)
|
||||
## Notes
|
||||
|
||||
- Command templates and file patterns: see intelligent-tools-strategy.md (loaded in memory)
|
||||
- Scratchpad directory details: see workflow-architecture.md
|
||||
- Scratchpad conversations preserved for future reference
|
||||
|
||||
@@ -32,7 +32,7 @@ Creates tool-specific configuration directories:
|
||||
- `.gemini/settings.json`:
|
||||
```json
|
||||
{
|
||||
"contextfilename": "CLAUDE.md"
|
||||
"contextfilename": ["CLAUDE.md","GEMINI.md"]
|
||||
}
|
||||
```
|
||||
|
||||
@@ -40,7 +40,7 @@ Creates tool-specific configuration directories:
|
||||
- `.qwen/settings.json`:
|
||||
```json
|
||||
{
|
||||
"contextfilename": "CLAUDE.md"
|
||||
"contextfilename": ["CLAUDE.md","QWEN.md"]
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
@@ -130,7 +130,7 @@ git status --short
|
||||
codex -C [dir] --full-auto exec "
|
||||
PURPOSE: [group goal]
|
||||
TASK: [subtask description - first in group]
|
||||
CONTEXT: @{relevant_files} @{CLAUDE.md}
|
||||
CONTEXT: @{relevant_files} @CLAUDE.md
|
||||
EXPECTED: [specific deliverables]
|
||||
RULES: [constraints]
|
||||
Group [X]: [group name] - Subtask 1 of N in this group
|
||||
@@ -164,7 +164,7 @@ git add -A
|
||||
codex -C [dir] --full-auto exec "
|
||||
PURPOSE: [new group goal]
|
||||
TASK: [subtask description - first in new group]
|
||||
CONTEXT: @{different_files} @{CLAUDE.md}
|
||||
CONTEXT: @{different_files} @CLAUDE.md
|
||||
EXPECTED: [specific deliverables]
|
||||
RULES: [constraints]
|
||||
Group [Y]: [new group name] - Subtask 1 of N in this group
|
||||
@@ -515,6 +515,5 @@ AskUserQuestion({
|
||||
**Context Window**: `codex exec "..." resume --last` maintains conversation history, ensuring consistency across subtasks without redundant context injection.
|
||||
|
||||
**Output Details**:
|
||||
- Output routing and scratchpad details: see workflow-architecture.md
|
||||
- Session management: see intelligent-tools-strategy.md
|
||||
- **⚠️ Code Modification**: This command performs multi-stage code modifications - execution log tracks all changes
|
||||
|
||||
@@ -69,11 +69,11 @@ Gemini analyzes the topic and proposes preliminary plan.
|
||||
```bash
|
||||
# Round 1: CONTEXT_INPUT is the initial topic
|
||||
# Subsequent rounds: CONTEXT_INPUT is the synthesis from previous round
|
||||
~/.claude/scripts/gemini-wrapper -p "
|
||||
gemini -p "
|
||||
PURPOSE: Analyze and propose a plan for '[topic]'
|
||||
TASK: Provide initial analysis, identify key modules, and draft implementation plan
|
||||
MODE: analysis
|
||||
CONTEXT: @{CLAUDE.md} [auto-detected files]
|
||||
CONTEXT: @CLAUDE.md [auto-detected files]
|
||||
INPUT: [CONTEXT_INPUT]
|
||||
EXPECTED: Structured analysis and draft plan for discussion
|
||||
RULES: Focus on technical depth and practical considerations
|
||||
@@ -90,7 +90,7 @@ codex --full-auto exec "
|
||||
PURPOSE: Critically review technical plan
|
||||
TASK: Review the provided plan, identify weaknesses, suggest alternatives, reason about trade-offs
|
||||
MODE: analysis
|
||||
CONTEXT: @{CLAUDE.md} [relevant files]
|
||||
CONTEXT: @CLAUDE.md [relevant files]
|
||||
INPUT_PLAN: [Output from Gemini's analysis]
|
||||
EXPECTED: Critical review with alternative ideas and risk analysis
|
||||
RULES: Focus on architectural soundness and implementation feasibility
|
||||
@@ -317,5 +317,4 @@ Each round's output is structured as:
|
||||
- **Priority System**: Ensures Gemini leads analysis, Codex provides critique, Claude synthesizes
|
||||
- **Output Quality**: Multi-perspective discussion produces more robust plans than single-model analysis
|
||||
- Command patterns and session management: see intelligent-tools-strategy.md (loaded in memory)
|
||||
- Output routing details: see workflow-architecture.md
|
||||
- For implementation after discussion, use `/cli:execute` or `/cli:codex-execute` separately
|
||||
|
||||
@@ -45,11 +45,11 @@ Auto-approves: file pattern inference, execution, **file modifications**, summar
|
||||
|
||||
### Context Inference
|
||||
|
||||
Auto-selects files based on keywords and technology:
|
||||
- "auth" → `@{**/*auth*,**/*user*}`
|
||||
- "React" → `@{src/**/*.{jsx,tsx}}`
|
||||
- "api" → `@{**/api/**/*,**/routes/**/*}`
|
||||
- Always includes: `@{CLAUDE.md,**/*CLAUDE.md}`
|
||||
Auto-selects files based on keywords and technology (each @ references one pattern):
|
||||
- "auth" → `@**/*auth* @**/*user*`
|
||||
- "React" → `@src/**/*.jsx @src/**/*.tsx`
|
||||
- "api" → `@**/api/**/* @**/routes/**/*`
|
||||
- Always includes: `@CLAUDE.md @**/*CLAUDE.md`
|
||||
|
||||
For precise file targeting, use `rg` or MCP tools to discover files first.
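As a rough illustration, the keyword-to-pattern mapping above could look like this small helper (a sketch only; the command's actual inference logic may differ):

```bash
# Map a task keyword to context patterns (one @ per pattern), always including CLAUDE.md files.
infer_context() {
  case "$1" in
    *auth*)     extra="@**/*auth* @**/*user*" ;;
    *[Rr]eact*) extra="@src/**/*.jsx @src/**/*.tsx" ;;
    *api*)      extra="@**/api/**/* @**/routes/**/*" ;;
    *)          extra="" ;;
  esac
  echo "@CLAUDE.md @**/*CLAUDE.md $extra"
}

infer_context "refactor api rate limiting"
# → @CLAUDE.md @**/*CLAUDE.md @**/api/**/* @**/routes/**/*
```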
|
||||
|
||||
@@ -111,11 +111,11 @@ Use `resume --last` when current task extends/relates to previous execution. See
|
||||
### Standard Mode (Default)
|
||||
```bash
|
||||
# Gemini/Qwen: MODE=write with --approval-mode yolo
|
||||
cd . && ~/.claude/scripts/gemini-wrapper --approval-mode yolo -p "
|
||||
cd . && gemini --approval-mode yolo "
|
||||
PURPOSE: [implementation goal]
|
||||
TASK: [specific implementation]
|
||||
MODE: write
|
||||
CONTEXT: @{CLAUDE.md} [auto-detected files]
|
||||
CONTEXT: @CLAUDE.md [auto-detected files]
|
||||
EXPECTED: Working implementation with code changes
|
||||
RULES: [constraints] | Auto-approve all changes
|
||||
"
|
||||
@@ -218,5 +218,4 @@ The agent handles all phases internally, including complexity-based tool selecti
|
||||
## Notes
|
||||
|
||||
- Command templates, YOLO mode details, and session management: see intelligent-tools-strategy.md (loaded in memory)
|
||||
- Output routing and scratchpad details: see workflow-architecture.md
|
||||
- **⚠️ Code Modification**: This command modifies code - execution logs document changes made
|
||||
|
||||
@@ -79,11 +79,11 @@ The agent handles all phases internally.
|
||||
## Command Template
|
||||
|
||||
```bash
|
||||
cd [directory] && ~/.claude/scripts/gemini-wrapper --all-files -p "
|
||||
cd [directory] && gemini -p "
|
||||
PURPOSE: [bug analysis goal]
|
||||
TASK: Systematic bug analysis and fix recommendations
|
||||
MODE: analysis
|
||||
CONTEXT: @{CLAUDE.md,**/*CLAUDE.md} [entire codebase in directory]
|
||||
CONTEXT: @CLAUDE.md @**/*CLAUDE.md [entire codebase in directory]
|
||||
EXPECTED: Root cause analysis, code path tracing, targeted fix suggestions
|
||||
RULES: $(cat ~/.claude/prompt-templates/bug-fix.md) | Bug: [description]
|
||||
"
|
||||
@@ -111,11 +111,11 @@ RULES: $(cat ~/.claude/prompt-templates/bug-fix.md) | Bug: [description]
|
||||
|
||||
**Standard Template Example**:
|
||||
```bash
|
||||
cd . && ~/.claude/scripts/gemini-wrapper --all-files -p "
|
||||
cd . && gemini -p "
|
||||
PURPOSE: Debug authentication null pointer error
|
||||
TASK: Identify root cause and provide fix recommendations
|
||||
MODE: analysis
|
||||
CONTEXT: @{CLAUDE.md,**/*CLAUDE.md}
|
||||
CONTEXT: @CLAUDE.md @**/*CLAUDE.md
|
||||
EXPECTED: Root cause, code path, minimal fix suggestion, impact assessment
|
||||
RULES: $(cat ~/.claude/prompt-templates/bug-fix.md) | Bug: null pointer in login flow
|
||||
"
|
||||
@@ -123,11 +123,11 @@ RULES: $(cat ~/.claude/prompt-templates/bug-fix.md) | Bug: null pointer in login
|
||||
|
||||
**Directory-Specific**:
|
||||
```bash
|
||||
cd src/auth && ~/.claude/scripts/gemini-wrapper --all-files -p "
|
||||
cd src/auth && gemini -p "
|
||||
PURPOSE: Fix token validation failure
|
||||
TASK: Analyze token validation bug in auth module
|
||||
MODE: analysis
|
||||
CONTEXT: @{CLAUDE.md,**/*CLAUDE.md}
|
||||
CONTEXT: @CLAUDE.md @**/*CLAUDE.md
|
||||
EXPECTED: Validation logic analysis, fix recommendation with minimal changes
|
||||
RULES: $(cat ~/.claude/prompt-templates/bug-fix.md) | Bug: token validation fails intermittently
|
||||
"
|
||||
@@ -138,7 +138,7 @@ RULES: $(cat ~/.claude/prompt-templates/bug-fix.md) | Bug: token validation fail
|
||||
```bash
|
||||
# 1. Find bug-related files
|
||||
rg "error_keyword" --files-with-matches
|
||||
mcp__code-index__search_code_advanced(pattern="error|exception", file_pattern="*.ts")
|
||||
rg "error|exception" -g "*.ts"
|
||||
|
||||
# 2. Execute bug analysis with focused context (analysis only, no code changes)
|
||||
/cli:mode:bug-index --cd "src/module" "specific error description"
|
||||
@@ -159,6 +159,5 @@ mcp__code-index__search_code_advanced(pattern="error|exception", file_pattern="*
|
||||
## Notes
|
||||
|
||||
- Command templates and file patterns: see intelligent-tools-strategy.md (loaded in memory)
|
||||
- Scratchpad directory details: see workflow-architecture.md
|
||||
- Template path: `~/.claude/prompt-templates/bug-fix.md`
|
||||
- Always uses `--all-files` for comprehensive codebase context
|
||||
- Uses `@**/*` in the CONTEXT field for comprehensive codebase context
|
||||
|
||||
@@ -82,11 +82,11 @@ The agent handles all phases internally.
|
||||
## Command Template
|
||||
|
||||
```bash
|
||||
cd [directory] && ~/.claude/scripts/gemini-wrapper --all-files -p "
|
||||
cd [directory] && gemini -p "
|
||||
PURPOSE: [analysis goal]
|
||||
TASK: Systematic code analysis and execution path tracing
|
||||
MODE: analysis
|
||||
CONTEXT: @{CLAUDE.md,**/*CLAUDE.md} [entire codebase in directory]
|
||||
CONTEXT: @CLAUDE.md @**/*CLAUDE.md [entire codebase in directory]
|
||||
EXPECTED: Execution trace, call flow diagram, debugging insights
|
||||
RULES: $(cat ~/.claude/prompt-templates/code-analysis.md) | Focus on [aspect]
|
||||
"
|
||||
@@ -114,11 +114,11 @@ RULES: $(cat ~/.claude/prompt-templates/code-analysis.md) | Focus on [aspect]
|
||||
|
||||
**Standard Template Example**:
|
||||
```bash
|
||||
cd . && ~/.claude/scripts/gemini-wrapper --all-files -p "
|
||||
cd . && gemini -p "
|
||||
PURPOSE: Trace authentication execution flow
|
||||
TASK: Analyze complete auth flow from request to response
|
||||
MODE: analysis
|
||||
CONTEXT: @{CLAUDE.md,**/*CLAUDE.md}
|
||||
CONTEXT: @CLAUDE.md @**/*CLAUDE.md
|
||||
EXPECTED: Step-by-step execution trace with call diagram, variable states
|
||||
RULES: $(cat ~/.claude/prompt-templates/code-analysis.md) | Focus on control flow
|
||||
"
|
||||
@@ -126,11 +126,11 @@ RULES: $(cat ~/.claude/prompt-templates/code-analysis.md) | Focus on control flo
|
||||
|
||||
**Directory-Specific Analysis**:
|
||||
```bash
|
||||
cd src/auth && ~/.claude/scripts/gemini-wrapper --all-files -p "
|
||||
cd src/auth && gemini -p "
|
||||
PURPOSE: Understand JWT token validation logic
|
||||
TASK: Trace JWT validation from middleware to service layer
|
||||
MODE: analysis
|
||||
CONTEXT: @{CLAUDE.md,**/*CLAUDE.md}
|
||||
CONTEXT: @CLAUDE.md @**/*CLAUDE.md
|
||||
EXPECTED: Validation flow diagram, token lifecycle analysis
|
||||
RULES: $(cat ~/.claude/prompt-templates/code-analysis.md) | Focus on security
|
||||
"
|
||||
@@ -141,7 +141,7 @@ RULES: $(cat ~/.claude/prompt-templates/code-analysis.md) | Focus on security
|
||||
```bash
|
||||
# 1. Find entry points and related files
|
||||
rg "function.*authenticate|class.*AuthService" --files-with-matches
|
||||
mcp__code-index__search_code_advanced(pattern="authenticate|login", file_pattern="*.ts")
|
||||
rg "authenticate|login" -g "*.ts"
|
||||
|
||||
# 2. Build call graph understanding
|
||||
# entry → middleware → service → repository
|
||||
@@ -165,6 +165,5 @@ mcp__code-index__search_code_advanced(pattern="authenticate|login", file_pattern
|
||||
## Notes
|
||||
|
||||
- Command templates and file patterns: see intelligent-tools-strategy.md (loaded in memory)
|
||||
- Scratchpad directory details: see workflow-architecture.md
|
||||
- Template path: `~/.claude/prompt-templates/code-analysis.md`
|
||||
- Always uses `--all-files` for comprehensive code context
|
||||
- Uses `@**/*` in the CONTEXT field for comprehensive code context
|
||||
|
||||
@@ -80,11 +80,11 @@ The agent handles all phases internally.
|
||||
## Command Template
|
||||
|
||||
```bash
|
||||
cd [directory] && ~/.claude/scripts/gemini-wrapper --all-files -p "
|
||||
cd [directory] && gemini -p "
|
||||
PURPOSE: [planning goal from topic]
|
||||
TASK: Comprehensive planning and architecture analysis
|
||||
MODE: analysis
|
||||
CONTEXT: @{CLAUDE.md,**/*CLAUDE.md} [entire codebase in directory]
|
||||
CONTEXT: @CLAUDE.md @**/*CLAUDE.md [entire codebase in directory]
|
||||
EXPECTED: Strategic insights, implementation recommendations, key decisions
|
||||
RULES: $(cat ~/.claude/prompt-templates/plan.md) | Focus on [topic area]
|
||||
"
|
||||
@@ -112,11 +112,11 @@ RULES: $(cat ~/.claude/prompt-templates/plan.md) | Focus on [topic area]
|
||||
|
||||
**Standard Template Example**:
|
||||
```bash
|
||||
cd . && ~/.claude/scripts/gemini-wrapper --all-files -p "
|
||||
cd . && gemini -p "
|
||||
PURPOSE: Design user dashboard architecture
|
||||
TASK: Plan dashboard component structure and data flow
|
||||
MODE: analysis
|
||||
CONTEXT: @{CLAUDE.md,**/*CLAUDE.md}
|
||||
CONTEXT: @CLAUDE.md @**/*CLAUDE.md
|
||||
EXPECTED: Architecture recommendations, component design, data flow diagram
|
||||
RULES: $(cat ~/.claude/prompt-templates/plan.md) | Focus on scalability
|
||||
"
|
||||
@@ -124,11 +124,11 @@ RULES: $(cat ~/.claude/prompt-templates/plan.md) | Focus on scalability
|
||||
|
||||
**Directory-Specific Planning**:
|
||||
```bash
|
||||
cd src/api && ~/.claude/scripts/gemini-wrapper --all-files -p "
|
||||
cd src/api && gemini -p "
|
||||
PURPOSE: Plan API refactoring strategy
|
||||
TASK: Analyze current API structure and recommend improvements
|
||||
MODE: analysis
|
||||
CONTEXT: @{CLAUDE.md,**/*CLAUDE.md}
|
||||
CONTEXT: @CLAUDE.md @**/*CLAUDE.md
|
||||
EXPECTED: Refactoring roadmap, breaking change analysis, migration plan
|
||||
RULES: $(cat ~/.claude/prompt-templates/plan.md) | Maintain backward compatibility
|
||||
"
|
||||
@@ -139,7 +139,7 @@ RULES: $(cat ~/.claude/prompt-templates/plan.md) | Maintain backward compatibili
|
||||
```bash
|
||||
# 1. Discover project structure
|
||||
~/.claude/scripts/get_modules_by_depth.sh
|
||||
mcp__code-index__find_files(pattern="*.ts")
|
||||
find . -name "*.ts" -type f
|
||||
|
||||
# 2. Gather existing architecture info
|
||||
rg "architecture|design" --files-with-matches
|
||||
@@ -163,6 +163,5 @@ rg "architecture|design" --files-with-matches
|
||||
## Notes
|
||||
|
||||
- Command templates and file patterns: see intelligent-tools-strategy.md (loaded in memory)
|
||||
- Scratchpad directory details: see workflow-architecture.md
|
||||
- Template path: `~/.claude/prompt-templates/plan.md`
|
||||
- Always uses `--all-files` for comprehensive project context
|
||||
- Uses `@**/*` in the CONTEXT field for comprehensive project context
|
||||
|
||||
@@ -9,9 +9,10 @@ argument-hint: "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--c
|
||||
## Overview
|
||||
Lightweight planner that analyzes project structure, decomposes documentation work into tasks, and generates execution plans. Does NOT generate documentation content itself - delegates to doc-generator agent.
|
||||
|
||||
**Documentation Output**: All generated documentation is placed in `.workflow/docs/` directory with **mirrored project structure**. For example:
|
||||
- Source: `src/modules/auth/index.ts` → Docs: `.workflow/docs/src/modules/auth/API.md`
|
||||
- Source: `lib/core/utils.js` → Docs: `.workflow/docs/lib/core/README.md`
|
||||
**Documentation Output**: All generated documentation is placed in `.workflow/docs/{project_name}/` directory with **mirrored project structure**. For example:
|
||||
- Project: `my_app`
|
||||
- Source: `my_app/src/core/` → Docs: `.workflow/docs/my_app/src/core/API.md`
|
||||
- Source: `my_app/src/modules/auth/` → Docs: `.workflow/docs/my_app/src/modules/auth/API.md`
|
||||
|
||||
**Two Execution Modes**:
|
||||
- **Default**: CLI analyzes in `pre_analysis` (MODE=analysis), agent writes docs in `implementation_approach`
|
||||
@@ -19,14 +20,13 @@ Lightweight planner that analyzes project structure, decomposes documentation wo
|
||||
|
||||
## Path Mirroring Strategy
|
||||
|
||||
**Principle**: Documentation structure **mirrors** source code structure.
|
||||
**Principle**: Documentation structure **mirrors** source code structure under project-specific directory.
|
||||
|
||||
| Source Path | Documentation Path |
|
||||
|------------|-------------------|
|
||||
| `src/modules/auth/index.ts` | `.workflow/docs/src/modules/auth/API.md` |
|
||||
| `src/modules/auth/middleware/` | `.workflow/docs/src/modules/auth/middleware/README.md` |
|
||||
| `lib/core/utils.js` | `.workflow/docs/lib/core/API.md` |
|
||||
| `lib/core/helpers/` | `.workflow/docs/lib/core/helpers/README.md` |
|
||||
| Source Path | Project Name | Documentation Path |
|
||||
|------------|--------------|-------------------|
|
||||
| `my_app/src/core/` | `my_app` | `.workflow/docs/my_app/src/core/API.md` |
|
||||
| `my_app/src/modules/auth/` | `my_app` | `.workflow/docs/my_app/src/modules/auth/API.md` |
|
||||
| `another_project/lib/utils/` | `another_project` | `.workflow/docs/another_project/lib/utils/API.md` |
|
||||
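Because the mapping is mechanical, the output path can be derived with a tiny helper (a sketch; assumes `project_name` was extracted with `basename "$target_path"` during session setup):

```bash
# Derive the mirrored documentation path for a source folder.
doc_path() {
  local project_name="$1" source_path="${2#./}"   # strip a leading ./ if present
  echo ".workflow/docs/${project_name}/${source_path}"
}

doc_path "my_app" "src/modules/auth"
# → .workflow/docs/my_app/src/modules/auth
```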
|
||||
**Benefits**:
|
||||
- Easy to locate documentation for any source file
|
||||
@@ -92,6 +92,9 @@ bash(
|
||||
target_path=$(cd "$path" 2>/dev/null && pwd || echo "$PWD/$path")
|
||||
fi
|
||||
|
||||
# Extract project name from target_path
|
||||
project_name=$(basename "$target_path")
|
||||
|
||||
# Create session
|
||||
timestamp=$(date +%Y%m%d-%H%M%S)
|
||||
session="WFS-docs-${timestamp}"
|
||||
@@ -106,6 +109,7 @@ bash(
|
||||
"path": "${path}",
|
||||
"target_path": "${target_path}",
|
||||
"project_root": "${project_root}",
|
||||
"project_name": "${project_name}",
|
||||
"mode": "${mode}",
|
||||
"tool": "${tool}",
|
||||
"cli_generate": ${cli_generate}
|
||||
@@ -129,9 +133,11 @@ EOF
|
||||
|
||||
### Phase 2: Analyze Structure
|
||||
|
||||
**Smart filter**: Auto-detect and skip tests/build/config/vendor based on project tech stack (Node.js/Python/Go/Rust/etc).
|
||||
|
||||
#### Step 1: Discover and Classify Folders
|
||||
```bash
|
||||
# Run analysis pipeline (module discovery + folder classification)
|
||||
# Run analysis pipeline (module discovery + folder classification + smart filtering)
|
||||
bash(~/.claude/scripts/get_modules_by_depth.sh | ~/.claude/scripts/classify-folders.sh > .workflow/WFS-docs-20240120/.process/folder-analysis.txt)
|
||||
```
|
||||
|
||||
@@ -142,6 +148,12 @@ bash(~/.claude/scripts/get_modules_by_depth.sh | ~/.claude/scripts/classify-fold
|
||||
./src/utils|navigation|code:0|dirs:4
|
||||
```
|
||||
|
||||
**Auto-skipped**:
|
||||
- Tests: `**/test/**`, `**/*.test.*`, `**/__tests__/**`
|
||||
- Build: `**/node_modules/**`, `**/dist/**`, `**/build/**`
|
||||
- Config: Root-level config files (package.json, tsconfig.json, etc)
|
||||
- Vendor: Language-specific dependency directories
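For reference, a rough standalone equivalent of that skip filter (illustrative only; the real, tech-stack-aware logic lives in `classify-folders.sh`):

```bash
# List candidate folders while pruning common test/build/vendor directories.
find . -type d \
  \( -name node_modules -o -name dist -o -name build \
     -o -name test -o -name tests -o -name __tests__ -o -name __pycache__ \) -prune \
  -o -type d -print
```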
|
||||
|
||||
#### Step 2: Extract Top-Level Directories
|
||||
```bash
|
||||
# Group folders by top-level directory
|
||||
@@ -183,41 +195,48 @@ bash(jq '. + {analysis: {total: "15", code: "8", navigation: "7", top_level: "3"
|
||||
|
||||
### Phase 3: Detect Update Mode
|
||||
|
||||
#### Step 1: Count Existing Documentation in .workflow/docs/
|
||||
#### Step 1: Count Existing Documentation in .workflow/docs/{project_name}/
|
||||
```bash
|
||||
# Check .workflow/docs/ directory and count existing files
|
||||
bash(if [[ -d ".workflow/docs" ]]; then
|
||||
find .workflow/docs -name "*.md" 2>/dev/null | wc -l
|
||||
else
|
||||
echo "0"
|
||||
fi)
|
||||
# Check .workflow/docs/{project_name}/ directory and count existing files
|
||||
bash(
|
||||
project_name=$(jq -r '.project_name' .workflow/WFS-docs-20240120/.process/config.json)
|
||||
if [[ -d ".workflow/docs/${project_name}" ]]; then
|
||||
find .workflow/docs/${project_name} -name "*.md" 2>/dev/null | wc -l
|
||||
else
|
||||
echo "0"
|
||||
fi
|
||||
)
|
||||
```
|
||||
|
||||
**Output**: `5` (existing docs in .workflow/docs/)
|
||||
**Output**: `5` (existing docs in .workflow/docs/{project_name}/)
|
||||
|
||||
#### Step 2: List Existing Documentation
|
||||
```bash
|
||||
# List existing files in .workflow/docs/ (for task context)
|
||||
bash(if [[ -d ".workflow/docs" ]]; then
|
||||
find .workflow/docs -name "*.md" 2>/dev/null > .workflow/WFS-docs-20240120/.process/existing-docs.txt
|
||||
else
|
||||
touch .workflow/WFS-docs-20240120/.process/existing-docs.txt
|
||||
fi)
|
||||
# List existing files in .workflow/docs/{project_name}/ (for task context)
|
||||
bash(
|
||||
project_name=$(jq -r '.project_name' .workflow/WFS-docs-20240120/.process/config.json)
|
||||
if [[ -d ".workflow/docs/${project_name}" ]]; then
|
||||
find .workflow/docs/${project_name} -name "*.md" 2>/dev/null > .workflow/WFS-docs-20240120/.process/existing-docs.txt
|
||||
else
|
||||
touch .workflow/WFS-docs-20240120/.process/existing-docs.txt
|
||||
fi
|
||||
)
|
||||
```
|
||||
|
||||
**Output** (existing-docs.txt):
|
||||
```
|
||||
.workflow/docs/src/modules/auth/API.md
|
||||
.workflow/docs/src/modules/auth/README.md
|
||||
.workflow/docs/lib/core/README.md
|
||||
.workflow/docs/README.md
|
||||
.workflow/docs/my_app/src/modules/auth/API.md
|
||||
.workflow/docs/my_app/src/modules/auth/README.md
|
||||
.workflow/docs/my_app/lib/core/README.md
|
||||
.workflow/docs/my_app/README.md
|
||||
```
|
||||
|
||||
#### Step 3: Update Config with Update Status
|
||||
```bash
|
||||
# Determine update status (create or update) and update config
|
||||
bash(
|
||||
existing_count=$(find .workflow/docs -name "*.md" 2>/dev/null | wc -l)
|
||||
project_name=$(jq -r '.project_name' .workflow/WFS-docs-20240120/.process/config.json)
|
||||
existing_count=$(find .workflow/docs/${project_name} -name "*.md" 2>/dev/null | wc -l)
|
||||
if [[ $existing_count -gt 0 ]]; then
|
||||
jq ". + {update_mode: \"update\", existing_docs: $existing_count}" .workflow/WFS-docs-20240120/.process/config.json > .workflow/WFS-docs-20240120/.process/config.json.tmp && mv .workflow/WFS-docs-20240120/.process/config.json.tmp .workflow/WFS-docs-20240120/.process/config.json
|
||||
else
|
||||
@@ -345,7 +364,8 @@ bash(
|
||||
if [[ "$tool" == "codex" ]]; then
|
||||
echo "codex -C \${dir} --full-auto exec \"...\" --skip-git-repo-check -s danger-full-access"
|
||||
else
|
||||
echo "bash(cd \${dir} && ~/.claude/scripts/${tool}-wrapper ${approval_flag} -p \"...\")"
|
||||
echo "bash(cd \${dir} && ${tool} ${approval_flag} -p \"...\")"
|
||||
# Direct CLI commands for gemini/qwen
|
||||
fi
|
||||
)
|
||||
```
|
||||
@@ -354,7 +374,10 @@ bash(
|
||||
|
||||
### Level 1: Module Tree Task
|
||||
|
||||
**Path Mapping**: Source `src/modules/` → Output `.workflow/docs/src/modules/`
|
||||
**Path Mapping**:
|
||||
- Project: `{project_name}` (extracted from target_path)
|
||||
- Source: `{project_name}/src/modules/`
|
||||
- Output: `.workflow/docs/{project_name}/src/modules/`
|
||||
|
||||
**Default Mode (cli_generate=false)**:
|
||||
```json
|
||||
@@ -368,12 +391,12 @@ bash(
|
||||
"tool": "gemini",
|
||||
"cli_generate": false,
|
||||
"source_path": "src/modules",
|
||||
"output_path": ".workflow/docs/src/modules"
|
||||
"output_path": ".workflow/docs/${project_name}/src/modules"
|
||||
},
|
||||
"context": {
|
||||
"requirements": [
|
||||
"Analyze source code in src/modules/",
|
||||
"Generate docs to .workflow/docs/src/modules/ (mirrored structure)",
|
||||
"Generate docs to .workflow/docs/${project_name}/src/modules/ (mirrored structure)",
|
||||
"For code folders: generate API.md + README.md",
|
||||
"For navigation folders: generate README.md only"
|
||||
],
|
||||
@@ -384,7 +407,7 @@ bash(
|
||||
"pre_analysis": [
|
||||
{
|
||||
"step": "load_existing_docs",
|
||||
"command": "bash(find .workflow/docs/${top_dir} -name '*.md' 2>/dev/null | xargs cat || echo 'No existing docs')",
|
||||
"command": "bash(find .workflow/docs/${project_name}/${top_dir} -name '*.md' 2>/dev/null | xargs cat || echo 'No existing docs')",
|
||||
"output_to": "existing_module_docs"
|
||||
},
|
||||
{
|
||||
@@ -394,7 +417,7 @@ bash(
|
||||
},
|
||||
{
|
||||
"step": "analyze_module_tree",
|
||||
"command": "bash(cd src/modules && ~/.claude/scripts/gemini-wrapper -p \"PURPOSE: Analyze module structure\\nTASK: Generate documentation outline\\nMODE: analysis\\nCONTEXT: @{**/*} [target_folders]\\nEXPECTED: Structure outline\\nRULES: Analyze only\")",
|
||||
"command": "bash(cd src/modules && gemini \"PURPOSE: Analyze module structure\\nTASK: Generate documentation outline\\nMODE: analysis\\nCONTEXT: @**/* [target_folders]\\nEXPECTED: Structure outline\\nRULES: Analyze only\")",
|
||||
"output_to": "tree_outline",
|
||||
"note": "CLI for analysis only"
|
||||
}
|
||||
@@ -407,7 +430,7 @@ bash(
|
||||
"modification_points": [
|
||||
"Parse folder types from [target_folders]",
|
||||
"Parse structure from [tree_outline]",
|
||||
"For src/modules/auth/ → write to .workflow/docs/src/modules/auth/",
|
||||
"For src/modules/auth/ → write to .workflow/docs/${project_name}/src/modules/auth/",
|
||||
"Generate API.md for code folders",
|
||||
"Generate README.md for all folders"
|
||||
],
|
||||
@@ -415,7 +438,7 @@ bash(
|
||||
"Parse [target_folders] to get folder types",
|
||||
"Parse [tree_outline] for structure",
|
||||
"For each folder in source:",
|
||||
" - Map source_path to .workflow/docs/{source_path}",
|
||||
" - Map source_path to .workflow/docs/${project_name}/{source_path}",
|
||||
" - If type == 'code': Generate API.md + README.md",
|
||||
" - Elif type == 'navigation': Generate README.md only"
|
||||
],
|
||||
@@ -424,8 +447,8 @@ bash(
|
||||
}
|
||||
],
|
||||
"target_files": [
|
||||
".workflow/docs/${top_dir}/*/API.md",
|
||||
".workflow/docs/${top_dir}/*/README.md"
|
||||
".workflow/docs/${project_name}/${top_dir}/*/API.md",
|
||||
".workflow/docs/${project_name}/${top_dir}/*/README.md"
|
||||
]
|
||||
}
|
||||
}
|
||||
@@ -443,12 +466,12 @@ bash(
|
||||
"tool": "gemini",
|
||||
"cli_generate": true,
|
||||
"source_path": "src/modules",
|
||||
"output_path": ".workflow/docs/src/modules"
|
||||
"output_path": ".workflow/docs/${project_name}/src/modules"
|
||||
},
|
||||
"context": {
|
||||
"requirements": [
|
||||
"Analyze source code in src/modules/",
|
||||
"Generate docs to .workflow/docs/src/modules/ (mirrored structure)",
|
||||
"Generate docs to .workflow/docs/${project_name}/src/modules/ (mirrored structure)",
|
||||
"CLI generates documentation files directly"
|
||||
],
|
||||
"focus_paths": ["src/modules"]
|
||||
@@ -457,7 +480,7 @@ bash(
|
||||
"pre_analysis": [
|
||||
{
|
||||
"step": "load_existing_docs",
|
||||
"command": "bash(find .workflow/docs/${top_dir} -name '*.md' 2>/dev/null | xargs cat || echo 'No existing docs')",
|
||||
"command": "bash(find .workflow/docs/${project_name}/${top_dir} -name '*.md' 2>/dev/null | xargs cat || echo 'No existing docs')",
|
||||
"output_to": "existing_module_docs"
|
||||
},
|
||||
{
|
||||
@@ -482,22 +505,22 @@ bash(
|
||||
"description": "Call CLI to generate docs to .workflow/docs/ with mirrored structure using MODE=write",
|
||||
"modification_points": [
|
||||
"Execute CLI generation command",
|
||||
"Generate files to .workflow/docs/src/modules/ (mirrored path)",
|
||||
"Generate files to .workflow/docs/${project_name}/src/modules/ (mirrored path)",
|
||||
"Generate API.md and README.md files"
|
||||
],
|
||||
"logic_flow": [
|
||||
"CLI analyzes source code in src/modules/",
|
||||
"CLI writes documentation to .workflow/docs/src/modules/",
|
||||
"CLI writes documentation to .workflow/docs/${project_name}/src/modules/",
|
||||
"Maintains directory structure mirroring"
|
||||
],
|
||||
"command": "bash(cd src/modules && ~/.claude/scripts/gemini-wrapper --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation files in .workflow/docs/src/modules/\\nMODE: write\\nCONTEXT: @{**/*} [target_folders] [existing_module_docs]\\nEXPECTED: API.md and README.md in .workflow/docs/src/modules/\\nRULES: Mirror source structure, generate complete docs\")",
|
||||
"command": "bash(cd src/modules && gemini --approval-mode yolo \"PURPOSE: Generate module docs\\nTASK: Create documentation files in .workflow/docs/${project_name}/src/modules/\\nMODE: write\\nCONTEXT: @**/* [target_folders] [existing_module_docs]\\nEXPECTED: API.md and README.md in .workflow/docs/${project_name}/src/modules/\\nRULES: Mirror source structure, generate complete docs\")",
|
||||
"depends_on": [1],
|
||||
"output": "generated_docs"
|
||||
}
|
||||
],
|
||||
"target_files": [
|
||||
".workflow/docs/${top_dir}/*/API.md",
|
||||
".workflow/docs/${top_dir}/*/README.md"
|
||||
".workflow/docs/${project_name}/${top_dir}/*/API.md",
|
||||
".workflow/docs/${project_name}/${top_dir}/*/README.md"
|
||||
]
|
||||
}
|
||||
}
|
||||
@@ -522,18 +545,18 @@ bash(
|
||||
"pre_analysis": [
|
||||
{
|
||||
"step": "load_existing_readme",
|
||||
"command": "bash(cat .workflow/docs/README.md 2>/dev/null || echo 'No existing README')",
|
||||
"command": "bash(cat .workflow/docs/${project_name}/README.md 2>/dev/null || echo 'No existing README')",
|
||||
"output_to": "existing_readme"
|
||||
},
|
||||
{
|
||||
"step": "load_module_docs",
|
||||
"command": "bash(find .workflow/docs -type f -name '*.md' ! -path '.workflow/docs/README.md' ! -path '.workflow/docs/ARCHITECTURE.md' ! -path '.workflow/docs/EXAMPLES.md' ! -path '.workflow/docs/api/*' | xargs cat)",
|
||||
"command": "bash(find .workflow/docs/${project_name} -type f -name '*.md' ! -path '.workflow/docs/${project_name}/README.md' ! -path '.workflow/docs/${project_name}/ARCHITECTURE.md' ! -path '.workflow/docs/${project_name}/EXAMPLES.md' ! -path '.workflow/docs/${project_name}/api/*' | xargs cat)",
|
||||
"output_to": "all_module_docs",
|
||||
"note": "Load all module docs from mirrored structure"
|
||||
},
|
||||
{
|
||||
"step": "analyze_project",
|
||||
"command": "bash(~/.claude/scripts/gemini-wrapper -p \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\")",
|
||||
"command": "bash(gemini \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\")",
|
||||
"output_to": "project_outline"
|
||||
}
|
||||
],
|
||||
@@ -548,7 +571,7 @@ bash(
|
||||
"output": "project_readme"
|
||||
}
|
||||
],
|
||||
"target_files": [".workflow/docs/README.md"]
|
||||
"target_files": [".workflow/docs/${project_name}/README.md"]
|
||||
}
|
||||
}
|
||||
```
|
||||
@@ -572,18 +595,18 @@ bash(
|
||||
"pre_analysis": [
|
||||
{
|
||||
"step": "load_existing_docs",
|
||||
"command": "bash(cat .workflow/docs/ARCHITECTURE.md 2>/dev/null || echo 'No existing ARCHITECTURE'; echo '---SEPARATOR---'; cat .workflow/docs/EXAMPLES.md 2>/dev/null || echo 'No existing EXAMPLES')",
|
||||
"command": "bash(cat .workflow/docs/${project_name}/ARCHITECTURE.md 2>/dev/null || echo 'No existing ARCHITECTURE'; echo '---SEPARATOR---'; cat .workflow/docs/${project_name}/EXAMPLES.md 2>/dev/null || echo 'No existing EXAMPLES')",
|
||||
"output_to": "existing_arch_examples"
|
||||
},
|
||||
{
|
||||
"step": "load_all_docs",
|
||||
"command": "bash(cat .workflow/docs/README.md && find .workflow/docs -type f -name '*.md' ! -path '.workflow/docs/README.md' ! -path '.workflow/docs/ARCHITECTURE.md' ! -path '.workflow/docs/EXAMPLES.md' ! -path '.workflow/docs/api/*' | xargs cat)",
|
||||
"command": "bash(cat .workflow/docs/${project_name}/README.md && find .workflow/docs/${project_name} -type f -name '*.md' ! -path '.workflow/docs/${project_name}/README.md' ! -path '.workflow/docs/${project_name}/ARCHITECTURE.md' ! -path '.workflow/docs/${project_name}/EXAMPLES.md' ! -path '.workflow/docs/${project_name}/api/*' | xargs cat)",
|
||||
"output_to": "all_docs",
|
||||
"note": "Load README + all module docs from mirrored structure"
|
||||
},
|
||||
{
|
||||
"step": "analyze_architecture_and_examples",
|
||||
"command": "bash(~/.claude/scripts/gemini-wrapper -p \"PURPOSE: Analyze system architecture and generate examples\\nTASK: Synthesize architectural overview and usage patterns\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture outline + Examples outline\")",
|
||||
"command": "bash(gemini \"PURPOSE: Analyze system architecture and generate examples\\nTASK: Synthesize architectural overview and usage patterns\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture outline + Examples outline\")",
|
||||
"output_to": "arch_examples_outline"
|
||||
}
|
||||
],
|
||||
@@ -609,8 +632,8 @@ bash(
|
||||
}
|
||||
],
|
||||
"target_files": [
|
||||
".workflow/docs/ARCHITECTURE.md",
|
||||
".workflow/docs/EXAMPLES.md"
|
||||
".workflow/docs/${project_name}/ARCHITECTURE.md",
|
||||
".workflow/docs/${project_name}/EXAMPLES.md"
|
||||
]
|
||||
}
|
||||
}
|
||||
@@ -635,17 +658,17 @@ bash(
|
||||
"pre_analysis": [
|
||||
{
|
||||
"step": "discover_api_endpoints",
|
||||
"command": "mcp__code-index__search_code_advanced(pattern='router\\.|@(Get|Post)', file_pattern='*.{ts,js}')",
|
||||
"command": "bash(rg 'router\\.| @(Get|Post)' -g '*.{ts,js}')",
|
||||
"output_to": "endpoint_discovery"
|
||||
},
|
||||
{
|
||||
"step": "load_existing_api_docs",
|
||||
"command": "bash(cat .workflow/docs/api/README.md 2>/dev/null || echo 'No existing API docs')",
|
||||
"command": "bash(cat .workflow/docs/${project_name}/api/README.md 2>/dev/null || echo 'No existing API docs')",
|
||||
"output_to": "existing_api_docs"
|
||||
},
|
||||
{
|
||||
"step": "analyze_api",
|
||||
"command": "bash(~/.claude/scripts/gemini-wrapper -p \"PURPOSE: Document HTTP API\\nTASK: Analyze API endpoints\\nMODE: analysis\\nCONTEXT: @{src/api/**/*} [endpoint_discovery]\\nEXPECTED: API outline\")",
|
||||
"command": "bash(gemini \"PURPOSE: Document HTTP API\\nTASK: Analyze API endpoints\\nMODE: analysis\\nCONTEXT: @src/api/**/* [endpoint_discovery]\\nEXPECTED: API outline\")",
|
||||
"output_to": "api_outline"
|
||||
}
|
||||
],
|
||||
@@ -660,7 +683,7 @@ bash(
|
||||
"output": "api_docs"
|
||||
}
|
||||
],
|
||||
"target_files": [".workflow/docs/api/README.md"]
|
||||
"target_files": [".workflow/docs/${project_name}/api/README.md"]
|
||||
}
|
||||
}
|
||||
```
|
||||
@@ -693,8 +716,9 @@ bash(
|
||||
"session_id": "WFS-docs-20240120-143022",
|
||||
"timestamp": "2024-01-20T14:30:22+08:00",
|
||||
"path": ".",
|
||||
"target_path": "/d/Claude_dms3",
|
||||
"project_root": "/d/Claude_dms3",
|
||||
"target_path": "/home/user/projects/my_app",
|
||||
"project_root": "/home/user/projects",
|
||||
"project_name": "my_app",
|
||||
"mode": "full",
|
||||
"tool": "gemini",
|
||||
"cli_generate": false,
|
||||
@@ -711,33 +735,34 @@ bash(
|
||||
|
||||
## Generated Documentation
|
||||
|
||||
**Structure mirrors project source directories**:
|
||||
**Structure mirrors project source directories under project-specific folder**:
|
||||
|
||||
```
|
||||
.workflow/docs/
|
||||
├── src/ # Mirrors src/ directory
|
||||
│ ├── modules/ # Level 1 output
|
||||
│ │ ├── README.md # Navigation for src/modules/
|
||||
│ │ ├── auth/
|
||||
│ │ │ ├── API.md # Auth module API signatures
|
||||
│ │ │ ├── README.md # Auth module documentation
|
||||
│ │ │ └── middleware/
|
||||
│ │ │ ├── API.md # Middleware API
|
||||
│ │ │ └── README.md # Middleware docs
|
||||
│ │ └── api/
|
||||
│ │ ├── API.md # API module signatures
|
||||
│ │ └── README.md # API module docs
|
||||
│ └── utils/ # Level 1 output
|
||||
│ └── README.md # Utils navigation
|
||||
├── lib/ # Mirrors lib/ directory
|
||||
│ └── core/
|
||||
│ ├── API.md
|
||||
│ └── README.md
|
||||
├── README.md # Level 2 output (root only)
|
||||
├── ARCHITECTURE.md # Level 3 output (root only)
|
||||
├── EXAMPLES.md # Level 3 output (root only)
|
||||
└── api/ # Level 3 output (optional)
|
||||
└── README.md # HTTP API reference
|
||||
└── {project_name}/ # Project-specific root (e.g., my_app/)
|
||||
├── src/ # Mirrors src/ directory
|
||||
│ ├── modules/ # Level 1 output
|
||||
│ │ ├── README.md # Navigation for src/modules/
|
||||
│ │ ├── auth/
|
||||
│ │ │ ├── API.md # Auth module API signatures
|
||||
│ │ │ ├── README.md # Auth module documentation
|
||||
│ │ │ └── middleware/
|
||||
│ │ │ ├── API.md # Middleware API
|
||||
│ │ │ └── README.md # Middleware docs
|
||||
│ │ └── api/
|
||||
│ │ ├── API.md # API module signatures
|
||||
│ │ └── README.md # API module docs
|
||||
│ └── utils/ # Level 1 output
|
||||
│ └── README.md # Utils navigation
|
||||
├── lib/ # Mirrors lib/ directory
|
||||
│ └── core/
|
||||
│ ├── API.md
|
||||
│ └── README.md
|
||||
├── README.md # Level 2 output (project root only)
|
||||
├── ARCHITECTURE.md # Level 3 output (project root only)
|
||||
├── EXAMPLES.md # Level 3 output (project root only)
|
||||
└── api/ # Level 3 output (optional)
|
||||
└── README.md # HTTP API reference
|
||||
```
|
||||
|
||||
## Execution Commands
|
||||
@@ -784,11 +809,11 @@ bash(ls .workflow/WFS-docs-20240120/.task/*.json)
|
||||
# Discover and classify folders (scans project source)
|
||||
bash(~/.claude/scripts/get_modules_by_depth.sh | ~/.claude/scripts/classify-folders.sh)
|
||||
|
||||
# Count existing docs (in .workflow/docs/ directory)
|
||||
bash(if [[ -d ".workflow/docs" ]]; then find .workflow/docs -name "*.md" 2>/dev/null | wc -l; else echo "0"; fi)
|
||||
# Count existing docs (in .workflow/docs/{project_name}/ directory)
|
||||
bash(if [[ -d ".workflow/docs/${project_name}" ]]; then find .workflow/docs/${project_name} -name "*.md" 2>/dev/null | wc -l; else echo "0"; fi)
|
||||
|
||||
# List existing documentation (in .workflow/docs/ directory)
|
||||
bash(if [[ -d ".workflow/docs" ]]; then find .workflow/docs -name "*.md" 2>/dev/null; fi)
|
||||
# List existing documentation (in .workflow/docs/{project_name}/ directory)
|
||||
bash(if [[ -d ".workflow/docs/${project_name}" ]]; then find .workflow/docs/${project_name} -name "*.md" 2>/dev/null; fi)
|
||||
```
|
||||
|
||||
## Template Reference
|
||||
|
||||
240
.claude/commands/memory/load.md
Normal file
@@ -0,0 +1,240 @@
|
||||
---
|
||||
name: load
|
||||
description: Load project memory by delegating to agent, returns structured core content package for subsequent operations
|
||||
argument-hint: "[--tool gemini|qwen] \"task context description\""
|
||||
allowed-tools: Task(*), Bash(*)
|
||||
examples:
|
||||
- /memory:load "在当前前端基础上开发用户认证功能"
|
||||
- /memory:load --tool qwen -p "重构支付模块API"
|
||||
---
|
||||
|
||||
# Memory Load Command (/memory:load)
|
||||
|
||||
## 1. Overview
|
||||
|
||||
The `memory:load` command **delegates to a universal-executor agent** to analyze the project and return a structured "Core Content Package". This package is loaded into the main thread's memory, providing essential context for subsequent agent operations while minimizing token consumption.
|
||||
|
||||
**Core Philosophy**:
|
||||
- **Agent-Driven**: Fully delegates execution to universal-executor agent
|
||||
- **Read-Only Analysis**: Does not modify code, only extracts context
|
||||
- **Structured Output**: Returns standardized JSON content package
|
||||
- **Memory Optimization**: Package loaded directly into main thread memory
|
||||
- **Token Efficiency**: CLI analysis executed within agent to save tokens
|
||||
|
||||
## 2. Parameters
|
||||
|
||||
- `"task context description"` (Required): Task description to guide context extraction
|
||||
- Example: "在当前前端基础上开发用户认证功能"
|
||||
- Example: "重构支付模块API"
|
||||
- Example: "修复数据库查询性能问题"
|
||||
|
||||
- `--tool <gemini|qwen>` (Optional): Specify CLI tool for agent to use (default: gemini)
|
||||
- gemini: Large context window, suitable for complex project analysis
|
||||
- qwen: Alternative to Gemini with similar capabilities
|
||||
|
||||
## 3. Agent-Driven Execution Flow
|
||||
|
||||
The command fully delegates to **universal-executor agent**, which autonomously:
|
||||
|
||||
1. **Analyzes Project Structure**: Executes `get_modules_by_depth.sh` to understand architecture
|
||||
2. **Loads Documentation**: Reads CLAUDE.md, README.md and other key docs
|
||||
3. **Extracts Keywords**: Derives core keywords from task description
|
||||
4. **Discovers Files**: Uses MCP code-index or rg/find to locate relevant files
|
||||
5. **CLI Deep Analysis**: Executes Gemini/Qwen CLI for deep context analysis
|
||||
6. **Generates Content Package**: Returns structured JSON core content package
|
||||
|
||||
## 4. Core Content Package Structure
|
||||
|
||||
**Output Format** - Loaded into main thread memory for subsequent use:
|
||||
|
||||
```json
|
||||
{
|
||||
"task_context": "在当前前端基础上开发用户认证功能",
|
||||
"keywords": ["前端", "用户", "认证", "auth", "login"],
|
||||
"project_summary": {
|
||||
"architecture": "TypeScript + React frontend with Vite build system",
|
||||
"tech_stack": ["React", "TypeScript", "Vite", "TailwindCSS"],
|
||||
"key_patterns": [
|
||||
"State management via Context API",
|
||||
"Functional components with Hooks pattern",
|
||||
"API calls encapsulated in custom hooks"
|
||||
]
|
||||
},
|
||||
"relevant_files": [
|
||||
{
|
||||
"path": "src/components/Auth/LoginForm.tsx",
|
||||
"relevance": "Existing login form component",
|
||||
"priority": "high"
|
||||
},
|
||||
{
|
||||
"path": "src/contexts/AuthContext.tsx",
|
||||
"relevance": "Authentication state management context",
|
||||
"priority": "high"
|
||||
},
|
||||
{
|
||||
"path": "CLAUDE.md",
|
||||
"relevance": "Project development standards",
|
||||
"priority": "high"
|
||||
}
|
||||
],
|
||||
"integration_points": [
|
||||
"Must integrate with existing AuthContext",
|
||||
"Follow component organization pattern: src/components/[Feature]/",
|
||||
"API calls should use src/hooks/useApi.ts wrapper"
|
||||
],
|
||||
"constraints": [
|
||||
"Maintain backward compatibility",
|
||||
"Follow TypeScript strict mode",
|
||||
"Use existing UI component library"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## 5. Agent Invocation
|
||||
|
||||
```javascript
|
||||
Task(
|
||||
subagent_type="universal-executor",
|
||||
description="Load project memory: ${task_description}",
|
||||
prompt=`
|
||||
## Mission: Load Project Memory Context
|
||||
|
||||
**Task**: Load project memory context for: "${task_description}"
|
||||
**Mode**: analysis
|
||||
**Tool Preference**: ${tool || 'gemini'}
|
||||
|
||||
## Execution Steps
|
||||
|
||||
### Step 1: Foundation Analysis
|
||||
|
||||
1. **Project Structure**
|
||||
\`\`\`bash
|
||||
bash(~/.claude/scripts/get_modules_by_depth.sh)
|
||||
\`\`\`
|
||||
|
||||
2. **Core Documentation**
|
||||
\`\`\`javascript
|
||||
Read(CLAUDE.md)
|
||||
Read(README.md)
|
||||
\`\`\`
|
||||
|
||||
### Step 2: Keyword Extraction & File Discovery
|
||||
|
||||
1. Extract core keywords from task description
|
||||
2. Discover relevant files using ripgrep and find:
|
||||
\`\`\`bash
|
||||
# Find files by name
|
||||
find . -name "*{keyword}*" -type f
|
||||
|
||||
# Search content with ripgrep
|
||||
rg "{keyword}" --type ts --type md -C 2
|
||||
rg -l "{keyword}" --type ts --type md # List files only
|
||||
\`\`\`
|
||||
|
||||
### Step 3: Deep Analysis via CLI
|
||||
|
||||
Execute Gemini/Qwen CLI for deep analysis (saves main thread tokens):
|
||||
|
||||
\`\`\`bash
|
||||
cd . && ${tool} -p "
|
||||
PURPOSE: Extract project core context for task: ${task_description}
|
||||
TASK: Analyze project architecture, tech stack, key patterns, relevant files
|
||||
MODE: analysis
|
||||
CONTEXT: @CLAUDE.md @README.md @${discovered_files}
|
||||
EXPECTED: Structured project summary and integration point analysis
|
||||
RULES:
|
||||
- Focus on task-relevant core information
|
||||
- Identify key architecture patterns and technical constraints
|
||||
- Extract integration points and development standards
|
||||
- Output concise, structured format
|
||||
"
|
||||
\`\`\`
|
||||
|
||||
### Step 4: Generate Core Content Package
|
||||
|
||||
Generate structured JSON content package (format shown above)
|
||||
|
||||
**Required Fields**:
|
||||
- task_context: Original task description
|
||||
- keywords: Extracted keyword array
|
||||
- project_summary: Architecture, tech stack, key patterns
|
||||
- relevant_files: File list with path, relevance, priority
|
||||
- integration_points: Integration guidance
|
||||
- constraints: Development constraints
|
||||
|
||||
### Step 5: Return Content Package
|
||||
|
||||
Return JSON content package as final output for main thread to load into memory.
|
||||
|
||||
## Quality Checklist
|
||||
|
||||
Before returning:
|
||||
- [ ] Valid JSON format
|
||||
- [ ] All required fields complete
|
||||
- [ ] relevant_files contains 3-10 files minimum
|
||||
- [ ] project_summary accurately reflects architecture
|
||||
- [ ] integration_points clearly specify integration paths
|
||||
- [ ] keywords accurately extracted (3-8 keywords)
|
||||
- [ ] Content concise, avoiding redundancy (< 5KB total)
|
||||
|
||||
`
|
||||
)
|
||||
```
|
||||
|
||||
## 6. Usage Examples
|
||||
|
||||
### Example 1: Load Context for New Feature
|
||||
|
||||
```bash
|
||||
/memory:load "在当前前端基础上开发用户认证功能"
|
||||
```
|
||||
|
||||
**Agent Execution**:
|
||||
1. Analyzes project structure (`get_modules_by_depth.sh`)
|
||||
2. Reads CLAUDE.md, README.md
|
||||
3. Extracts keywords: ["前端", "用户", "认证", "auth"]
|
||||
4. Uses MCP to search relevant files
|
||||
5. Executes Gemini CLI for deep analysis
|
||||
6. Returns core content package
|
||||
|
||||
**Returned Package** (loaded into memory):
|
||||
```json
|
||||
{
|
||||
"task_context": "在当前前端基础上开发用户认证功能",
|
||||
"keywords": ["前端", "认证", "auth", "login"],
|
||||
"project_summary": { ... },
|
||||
"relevant_files": [ ... ],
|
||||
"integration_points": [ ... ],
|
||||
"constraints": [ ... ]
|
||||
}
|
||||
```
|
||||
|
||||
### Example 2: Using Qwen Tool
|
||||
|
||||
```bash
|
||||
/memory:load --tool qwen -p "重构支付模块API"
|
||||
```
|
||||
|
||||
Agent uses Qwen CLI for analysis, returns same structured package.
|
||||
|
||||
### Example 3: Bug Fix Context
|
||||
|
||||
```bash
|
||||
/memory:load "修复登录验证错误"
|
||||
```
|
||||
|
||||
Returns core context related to login validation, including test files and validation logic.
|
||||
|
||||
## 7. Memory Persistence
|
||||
|
||||
- **Session-Scoped**: Content package valid for current session
|
||||
- **Subsequent Reference**: All subsequent agents/commands can access
|
||||
- **Reload Required**: New sessions need to re-execute /memory:load
|
||||
|
||||
## 8. Notes
|
||||
|
||||
- **Read-Only**: Does not modify any code, pure analysis
|
||||
- **Token Optimization**: CLI analysis executed within agent, saves main thread tokens
|
||||
- **Memory Loading**: Returned JSON loaded directly into main thread memory
|
||||
- **Subsequent Use**: Other commands/agents can reference this package for development
|
||||
- **Session-Level**: Content package valid for current session
|
||||
333
.claude/commands/memory/update-full.md
Normal file
@@ -0,0 +1,333 @@
|
||||
---
|
||||
name: update-full
|
||||
description: Complete project-wide CLAUDE.md documentation update with agent-based parallel execution and tool fallback
|
||||
argument-hint: "[--tool gemini|qwen|codex] [--path <directory>]"
|
||||
---
|
||||
|
||||
# Full Documentation Update (/memory:update-full)
|
||||
|
||||
## Overview
|
||||
|
||||
Orchestrates project-wide CLAUDE.md updates using batched agent execution with automatic tool fallback and 3-layer architecture support.
|
||||
|
||||
**Parameters**:
|
||||
- `--tool <gemini|qwen|codex>`: Primary tool (default: gemini)
|
||||
- `--path <directory>`: Target specific directory (default: entire project)
|
||||
|
||||
**Execution Flow**: Discovery → Plan Presentation → Execution → Safety Verification
|
||||
|
||||
## 3-Layer Architecture & Auto-Strategy Selection
|
||||
|
||||
### Layer Definition & Strategy Assignment
|
||||
|
||||
| Layer | Depth | Strategy | Purpose | Context Pattern |
|
||||
|-------|-------|----------|---------|----------------|
|
||||
| **Layer 3** (Deepest) | ≥3 | `multi-layer` | Handle unstructured files, generate docs for all subdirectories | `@**/*` (all files) |
|
||||
| **Layer 2** (Middle) | 1-2 | `single-layer` | Aggregate from children + current code | `@*/CLAUDE.md @*.{ts,tsx,js,...}` |
|
||||
| **Layer 1** (Top) | 0 | `single-layer` | Aggregate from children + current code | `@*/CLAUDE.md @*.{ts,tsx,js,...}` |
|
||||
|
||||
**Update Direction**: Layer 3 → Layer 2 → Layer 1 (bottom-up dependency flow)
|
||||
|
||||
**Strategy Auto-Selection**: Strategies are automatically determined by directory depth - no user configuration needed.
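In shell terms the selection reduces to a depth check (a sketch mirroring the Phase 3 logic; `depth` comes from `get_modules_by_depth.sh`):

```bash
# Pick the documentation strategy for a module from its directory depth.
if [ "$depth" -ge 3 ]; then
  strategy="multi-layer"    # Layer 3: document current dir and every subdir with files
else
  strategy="single-layer"   # Layers 1-2: aggregate children CLAUDE.md + current code
fi
```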
|
||||
|
||||
### Strategy Details
|
||||
|
||||
#### Multi-Layer Strategy (Layer 3 Only)
|
||||
- **Use Case**: Deepest directories with unstructured file layouts
|
||||
- **Behavior**: Generates CLAUDE.md for current directory AND each subdirectory containing files
|
||||
- **Context**: All files in current directory tree (`@**/*`)
|
||||
- **Benefits**: Creates foundation documentation for upper layers to reference
|
||||
|
||||
#### Single-Layer Strategy (Layers 1-2)
|
||||
- **Use Case**: Upper layers that aggregate from existing documentation
|
||||
- **Behavior**: Generates CLAUDE.md only for current directory
|
||||
- **Context**: Direct children CLAUDE.md files + current directory code files
|
||||
- **Benefits**: Minimal context consumption, clear layer separation
|
||||
|
||||
### Example Flow
|
||||
```
|
||||
src/auth/handlers/ (depth 3) → MULTI-LAYER STRATEGY
|
||||
CONTEXT: @**/* (all files in handlers/ and subdirs)
|
||||
GENERATES: ./CLAUDE.md + CLAUDE.md in each subdir with files
|
||||
↓
|
||||
src/auth/ (depth 2) → SINGLE-LAYER STRATEGY
|
||||
CONTEXT: @*/CLAUDE.md @*.ts (handlers/CLAUDE.md + current code)
|
||||
GENERATES: ./CLAUDE.md only
|
||||
↓
|
||||
src/ (depth 1) → SINGLE-LAYER STRATEGY
|
||||
CONTEXT: @*/CLAUDE.md (auth/CLAUDE.md, utils/CLAUDE.md)
|
||||
GENERATES: ./CLAUDE.md only
|
||||
↓
|
||||
./ (depth 0) → SINGLE-LAYER STRATEGY
|
||||
CONTEXT: @*/CLAUDE.md (src/CLAUDE.md, tests/CLAUDE.md)
|
||||
GENERATES: ./CLAUDE.md only
|
||||
```
|
||||
|
||||
## Core Execution Rules
|
||||
|
||||
1. **Analyze First**: Git cache + module discovery before updates
|
||||
2. **Wait for Approval**: Present plan, no execution without user confirmation
|
||||
3. **Execution Strategy**:
|
||||
- **<20 modules**: Direct parallel execution (max 4 concurrent per layer)
|
||||
- **≥20 modules**: Agent batch processing (4 modules/agent, 73% overhead reduction)
|
||||
4. **Tool Fallback**: Auto-retry with fallback tools on failure
|
||||
5. **Layer Sequential**: Process layers 3→2→1 (bottom-up), parallel batches within layer
|
||||
6. **Safety Check**: Verify only CLAUDE.md files modified
|
||||
7. **Layer-based Grouping**: Group modules by LAYER (not depth) for execution
|
||||
|
||||
## Tool Fallback Hierarchy
|
||||
|
||||
```javascript
|
||||
--tool gemini → [gemini, qwen, codex] // default
|
||||
--tool qwen → [qwen, gemini, codex]
|
||||
--tool codex → [codex, gemini, qwen]
|
||||
```
|
||||
|
||||
**Trigger**: Non-zero exit code from update script
|
||||
|
||||
| Tool | Best For | Fallback To |
|
||||
|--------|--------------------------------|----------------|
|
||||
| gemini | Documentation, patterns | qwen → codex |
|
||||
| qwen | Architecture, system design | gemini → codex |
|
||||
| codex | Implementation, code quality | gemini → qwen |
|
||||
|
||||
## Execution Phases
|
||||
|
||||
### Phase 1: Discovery & Analysis
|
||||
|
||||
```bash
|
||||
# Cache git changes
|
||||
bash(git add -A 2>/dev/null || true)
|
||||
|
||||
# Get module structure
|
||||
bash(~/.claude/scripts/get_modules_by_depth.sh list)
|
||||
# OR with --path
|
||||
bash(cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list)
|
||||
```
|
||||
|
||||
**Parse output** `depth:N|path:<PATH>|...` to extract module paths and count.
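A minimal parsing sketch (assumes every field follows the same `key:value` form shown above):

```bash
# Split each "depth:N|path:<PATH>|..." line into its depth and path fields.
~/.claude/scripts/get_modules_by_depth.sh list | while IFS='|' read -r d p _rest; do
  depth="${d#depth:}"
  path="${p#path:}"
  echo "depth=$depth path=$path"
done
```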
|
||||
|
||||
**Smart filter**: Auto-detect and skip tests/build/config/docs based on project tech stack.
|
||||
|
||||
### Phase 2: Plan Presentation
|
||||
|
||||
**For <20 modules**:
|
||||
```
|
||||
Update Plan:
|
||||
Tool: gemini (fallback: qwen → codex)
|
||||
Total: 7 modules
|
||||
Execution: Direct parallel (< 20 modules threshold)
|
||||
|
||||
Will update:
|
||||
- ./core/interfaces (12 files) - depth 2 [Layer 2] - single-layer strategy
|
||||
- ./core (22 files) - depth 1 [Layer 2] - single-layer strategy
|
||||
- ./models (9 files) - depth 1 [Layer 2] - single-layer strategy
|
||||
- ./utils (12 files) - depth 1 [Layer 2] - single-layer strategy
|
||||
- . (5 files) - depth 0 [Layer 1] - single-layer strategy
|
||||
|
||||
Context Strategy (Auto-Selected):
|
||||
- Layer 2 (depth 1-2): @*/CLAUDE.md + current code files
|
||||
- Layer 1 (depth 0): @*/CLAUDE.md + current code files
|
||||
|
||||
Auto-skipped: ./tests, __pycache__, setup.py (15 paths)
|
||||
Execution order: Layer 2 → Layer 1
|
||||
Estimated time: ~5-10 minutes
|
||||
|
||||
Confirm execution? (y/n)
|
||||
```
|
||||
|
||||
**For ≥20 modules**:
|
||||
```
|
||||
Update Plan:
|
||||
Tool: gemini (fallback: qwen → codex)
|
||||
Total: 31 modules
|
||||
Execution: Agent batch processing (4 modules/agent)
|
||||
|
||||
Will update:
|
||||
- ./src/features/auth (12 files) - depth 3 [Layer 3] - multi-layer strategy
|
||||
- ./.claude/commands/cli (6 files) - depth 3 [Layer 3] - multi-layer strategy
|
||||
- ./src/utils (8 files) - depth 2 [Layer 2] - single-layer strategy
|
||||
...
|
||||
|
||||
Context Strategy (Auto-Selected):
|
||||
- Layer 3 (depth ≥3): @**/* (all files)
|
||||
- Layer 2 (depth 1-2): @*/CLAUDE.md + current code files
|
||||
- Layer 1 (depth 0): @*/CLAUDE.md + current code files
|
||||
|
||||
Auto-skipped: ./tests, __pycache__, setup.py (15 paths)
|
||||
Execution order: Layer 3 → Layer 2 → Layer 1
|
||||
|
||||
Agent allocation (by LAYER):
|
||||
- Layer 3 (14 modules, depth ≥3): 4 agents [4, 4, 4, 2]
|
||||
- Layer 2 (15 modules, depth 1-2): 4 agents [4, 4, 4, 3]
|
||||
- Layer 1 (2 modules, depth 0): 1 agent [2]
|
||||
|
||||
Estimated time: ~15-25 minutes
|
||||
|
||||
Confirm execution? (y/n)
|
||||
```
|
||||
|
||||
### Phase 3A: Direct Execution (<20 modules)
|
||||
|
||||
**Strategy**: Parallel execution within layer (max 4 concurrent), no agent overhead.
|
||||
|
||||
```javascript
|
||||
// Group modules by LAYER (not depth)
|
||||
let modules_by_layer = group_by_layer(module_list);
|
||||
let tool_order = construct_tool_order(primary_tool);
|
||||
|
||||
// Process by LAYER (3 → 2 → 1), not by depth
|
||||
for (let layer of [3, 2, 1]) {
|
||||
if (modules_by_layer[layer].length === 0) continue;
|
||||
|
||||
let batches = batch_modules(modules_by_layer[layer], 4);
|
||||
|
||||
for (let batch of batches) {
|
||||
let parallel_tasks = batch.map(module => {
|
||||
return async () => {
|
||||
// Auto-determine strategy based on depth
|
||||
let strategy = module.depth >= 3 ? "multi-layer" : "single-layer";
|
||||
|
||||
for (let tool of tool_order) {
|
||||
let exit_code = bash(`cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "${strategy}" "." "${tool}"`);
|
||||
if (exit_code === 0) {
|
||||
report(`✅ ${module.path} (Layer ${layer}) updated with ${tool}`);
|
||||
return true;
|
||||
}
|
||||
}
|
||||
report(`❌ FAILED: ${module.path} (Layer ${layer}) failed all tools`);
|
||||
return false;
|
||||
};
|
||||
});
|
||||
|
||||
await Promise.all(parallel_tasks.map(task => task()));
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Phase 3B: Agent Batch Execution (≥20 modules)
|
||||
|
||||
**Strategy**: Batch modules into groups of 4, spawn memory-bridge agents per batch.
|
||||
|
||||
```javascript
// Group modules by LAYER and batch within each layer
let modules_by_layer = group_by_layer(module_list);
let tool_order = construct_tool_order(primary_tool);

for (let layer of [3, 2, 1]) {
  if (modules_by_layer[layer].length === 0) continue;

  let batches = batch_modules(modules_by_layer[layer], 4);
  let worker_tasks = [];

  for (let batch of batches) {
    worker_tasks.push(
      Task(
        subagent_type="memory-bridge",
        description=`Update ${batch.length} modules in Layer ${layer}`,
        prompt=generate_batch_worker_prompt(batch, tool_order, layer)
      )
    );
  }

  await parallel_execute(worker_tasks);
}
```
|
||||
|
||||
**Batch Worker Prompt Template**:
|
||||
```
PURPOSE: Update CLAUDE.md for assigned modules with tool fallback

TASK: Update documentation for assigned modules using specified strategies.

MODULES:
{{module_path_1}} (strategy: {{strategy_1}})
{{module_path_2}} (strategy: {{strategy_2}})
...

TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}

EXECUTION SCRIPT: ~/.claude/scripts/update_module_claude.sh
- Accepts strategy parameter: multi-layer | single-layer
- Tool execution via direct CLI commands (gemini/qwen/codex)

EXECUTION FLOW (for each module):
1. Tool fallback loop (exit on first success):
   for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
     bash(cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "{{strategy}}" "." "${tool}")
     exit_code=$?

     if [ $exit_code -eq 0 ]; then
       report "✅ {{module_path}} updated with $tool"
       break
     else
       report "⚠️ {{module_path}} failed with $tool, trying next..."
       continue
     fi
   done

2. Handle complete failure (all tools failed):
   if [ $exit_code -ne 0 ]; then
     report "❌ FAILED: {{module_path}} - all tools exhausted"
     # Continue to next module (do not abort batch)
   fi

FAILURE HANDLING:
- Module-level isolation: One module's failure does not affect others
- Exit code detection: Non-zero exit code triggers next tool
- Exhaustion reporting: Log modules where all tools failed
- Batch continuation: Always process remaining modules

REPORTING FORMAT:
Per-module status:
✅ path/to/module updated with {tool}
⚠️ path/to/module failed with {tool}, trying next...
❌ FAILED: path/to/module - all tools exhausted
```
|
||||
### Phase 4: Safety Verification
|
||||
|
||||
```bash
# Check only CLAUDE.md modified
bash(git diff --cached --name-only | grep -v "CLAUDE.md" || echo "Only CLAUDE.md files modified")

# Display status
bash(git status --short)
```
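The error-handling notes below mention a safety check with auto-revert, but the recovery step itself is not spelled out here. A hedged sketch of what that revert could look like, reusing the `git restore --staged .` hint that appears elsewhere in these docs:

```bash
# Hypothetical auto-revert sketch: if anything other than CLAUDE.md is staged,
# unstage everything and list the offending files for manual review.
non_claude=$(git diff --cached --name-only | grep -v "CLAUDE.md" || true)
if [ -n "$non_claude" ]; then
  echo "⚠️ Non-CLAUDE.md files were modified:"
  echo "$non_claude"
  git restore --staged .
fi
```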
|
||||
|
||||
**Result Summary**:
|
||||
```
|
||||
Update Summary:
|
||||
Total: 31 | Success: 29 | Failed: 2
|
||||
Tool usage: gemini: 25, qwen: 4, codex: 0
|
||||
Failed: path1, path2
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
**Batch Worker**: Tool fallback per module, batch isolation, clear status reporting
|
||||
**Coordinator**: Invalid path abort, user decline handling, safety check with auto-revert
|
||||
**Fallback Triggers**: Non-zero exit code, script timeout, unexpected output
|
||||
|
||||
## Usage Examples
|
||||
|
||||
```bash
|
||||
# Full project update (auto-strategy selection)
|
||||
/memory:update-full
|
||||
|
||||
# Target specific directory
|
||||
/memory:update-full --path .claude
|
||||
/memory:update-full --path src/features/auth
|
||||
|
||||
# Use specific tool
|
||||
/memory:update-full --tool qwen
|
||||
/memory:update-full --path .claude --tool qwen
|
||||
```
|
||||
|
||||
## Key Advantages
|
||||
|
||||
- **Efficiency**: 30 modules → 8 agents (73% reduction from sequential)
|
||||
- **Resilience**: 3-tier tool fallback per module
|
||||
- **Performance**: Parallel batches, no concurrency limits
|
||||
- **Observability**: Per-module tool usage, batch-level metrics
|
||||
- **Automation**: Zero configuration - strategy auto-selected by directory depth
|
||||
@@ -1,330 +0,0 @@
|
||||
---
|
||||
name: update-full
|
||||
description: Complete project-wide CLAUDE.md documentation update
|
||||
argument-hint: "[--tool gemini|qwen|codex] [--path <directory>]"
|
||||
---
|
||||
|
||||
# Full Documentation Update (/memory:update-full)
|
||||
|
||||
## Coordinator Role
|
||||
|
||||
**This command orchestrates project-wide CLAUDE.md updates** using depth-parallel execution strategy with intelligent complexity detection.
|
||||
|
||||
**Execution Model**:
|
||||
|
||||
1. **Initial Analysis**: Cache git changes, discover module structure
|
||||
2. **Complexity Detection**: Analyze module count, determine strategy
|
||||
3. **Plan Presentation**: Show user exactly what will be updated
|
||||
4. **Depth-Parallel Execution**: Update modules by depth (highest to lowest)
|
||||
5. **Safety Verification**: Ensure only CLAUDE.md files modified
|
||||
|
||||
**Tool Selection**:
|
||||
- `--tool gemini` (default): Documentation generation, pattern recognition
|
||||
- `--tool qwen`: Architecture analysis, system design docs
|
||||
- `--tool codex`: Implementation validation, code quality analysis
|
||||
|
||||
**Path Parameter**:
|
||||
- `--path <directory>` (optional): Target specific directory for updates
|
||||
- If not specified: Updates entire project from current directory
|
||||
- If specified: Changes to target directory before discovery
|
||||
|
||||
## Core Rules
|
||||
|
||||
1. **Analyze First**: Run git cache and module discovery before any updates
|
||||
2. **Scope Control**: Use --path to target specific directories, default is entire project
|
||||
3. **Wait for Approval**: Present plan, no execution without user confirmation
|
||||
4. **Depth-Parallel**: Same depth runs parallel (max 4 jobs), different depths sequential
|
||||
5. **Safety Check**: Verify only CLAUDE.md files modified, revert if source files touched
|
||||
6. **Independent Commands**: Each update is a separate bash() call
|
||||
7. **No Background Bash Tool**: Never use `run_in_background` parameter in bash() calls; use shell `&` for parallelism
|
||||
|
||||
## Execution Workflow
|
||||
|
||||
### Phase 1: Discovery & Analysis
|
||||
|
||||
**Cache git changes:**
|
||||
```bash
|
||||
bash(git add -A 2>/dev/null || true)
|
||||
```
|
||||
|
||||
**Get module structure:**
|
||||
|
||||
*If no --path parameter:*
|
||||
```bash
|
||||
bash(~/.claude/scripts/get_modules_by_depth.sh list)
|
||||
```
|
||||
|
||||
*If --path parameter specified:*
|
||||
```bash
|
||||
bash(cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list)
|
||||
```
|
||||
|
||||
**Example with path:**
|
||||
```bash
|
||||
# Update only .claude directory
|
||||
bash(cd .claude && ~/.claude/scripts/get_modules_by_depth.sh list)
|
||||
|
||||
# Update specific feature directory
|
||||
bash(cd src/features/auth && ~/.claude/scripts/get_modules_by_depth.sh list)
|
||||
```
|
||||
|
||||
**Parse Output**:
|
||||
- Extract module paths from `depth:N|path:<PATH>|...` format
|
||||
- Count total modules
|
||||
- Identify which modules have/need CLAUDE.md
|
||||
|
||||
**Example output:**
|
||||
```
|
||||
depth:5|path:./.claude/workflows/cli-templates/prompts/analysis|files:5|has_claude:no
|
||||
depth:4|path:./.claude/commands/cli/mode|files:3|has_claude:no
|
||||
depth:3|path:./.claude/commands/cli|files:6|has_claude:no
|
||||
depth:0|path:.|files:14|has_claude:yes
|
||||
```
|
||||
|
||||
**Validation**:
|
||||
- If --path specified, directory exists and is accessible
|
||||
- Module list contains depth and path information
|
||||
- At least one module exists
|
||||
- All paths are relative to target directory (if --path used)
|
||||
|
||||
---
|
||||
|
||||
### Phase 2: Plan Presentation
|
||||
|
||||
**Decision Logic**:
|
||||
- **Simple projects (≤20 modules)**: Present plan to user, wait for approval
|
||||
- **Complex projects (>20 modules)**: Delegate to memory-bridge agent
|
||||
|
||||
**Plan format:**
|
||||
```
|
||||
📋 Update Plan:
|
||||
Tool: gemini
|
||||
Total modules: 31
|
||||
|
||||
NEW CLAUDE.md files (30):
|
||||
- ./.claude/workflows/cli-templates/prompts/analysis/CLAUDE.md
|
||||
- ./.claude/commands/cli/mode/CLAUDE.md
|
||||
- ... (28 more)
|
||||
|
||||
UPDATE existing CLAUDE.md files (1):
|
||||
- ./CLAUDE.md
|
||||
|
||||
⚠️ Confirm execution? (y/n)
|
||||
```
|
||||
|
||||
**User Confirmation Required**: No execution without explicit approval
|
||||
|
||||
---
|
||||
|
||||
### Phase 3: Depth-Parallel Execution
|
||||
|
||||
**Pattern**: Process highest depth first, parallel within depth, sequential across depths.
|
||||
|
||||
**Command structure:**
|
||||
```bash
|
||||
bash(cd <module-path> && ~/.claude/scripts/update_module_claude.sh "." "full" "<tool>" &)
|
||||
```
|
||||
|
||||
**Example - Depth 5 (8 modules):**
|
||||
```bash
|
||||
bash(cd ./.claude/workflows/cli-templates/prompts/analysis && ~/.claude/scripts/update_module_claude.sh "." "full" "gemini" &)
|
||||
```
|
||||
```bash
|
||||
bash(cd ./.claude/workflows/cli-templates/prompts/development && ~/.claude/scripts/update_module_claude.sh "." "full" "gemini" &)
|
||||
```
|
||||
```bash
|
||||
bash(cd ./.claude/workflows/cli-templates/prompts/documentation && ~/.claude/scripts/update_module_claude.sh "." "full" "gemini" &)
|
||||
```
|
||||
```bash
|
||||
bash(cd ./.claude/workflows/cli-templates/prompts/implementation && ~/.claude/scripts/update_module_claude.sh "." "full" "gemini" &)
|
||||
```
|
||||
|
||||
*Wait for depth 5 completion...*
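Waiting for a depth to complete is just a plain `wait`, which blocks until every backgrounded job started for that depth has exited. A minimal sketch using two of the example paths above (illustrative only):

```bash
# Depth 5: launch background updates, then block before touching depth 4
cd ./.claude/workflows/cli-templates/prompts/analysis && ~/.claude/scripts/update_module_claude.sh "." "full" "gemini" &
cd ./.claude/workflows/cli-templates/prompts/development && ~/.claude/scripts/update_module_claude.sh "." "full" "gemini" &
wait   # do not start depth 4 until every depth-5 job has exited
```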
|
||||
|
||||
**Example - Depth 4 (7 modules):**
|
||||
```bash
|
||||
bash(cd ./.claude/commands/cli/mode && ~/.claude/scripts/update_module_claude.sh "." "full" "gemini" &)
|
||||
```
|
||||
```bash
|
||||
bash(cd ./.claude/commands/workflow/brainstorm && ~/.claude/scripts/update_module_claude.sh "." "full" "gemini" &)
|
||||
```
|
||||
|
||||
*Continue for remaining depths (3 → 2 → 1 → 0)...*
|
||||
|
||||
**Execution Rules**:
|
||||
- Each command is separate bash() call
|
||||
- Up to 4 concurrent jobs per depth
|
||||
- Wait for all jobs in current depth before proceeding
|
||||
- Extract path from `depth:N|path:<PATH>|...` format
|
||||
- All paths relative to target directory (current dir or --path value)
|
||||
|
||||
**Path Context**:
|
||||
- Without --path: Paths relative to current directory
|
||||
- With --path: Paths relative to specified target directory
|
||||
- Module discovery runs in target directory context
|
||||
|
||||
---
|
||||
|
||||
### Phase 4: Safety Verification
|
||||
|
||||
**Check modified files:**
|
||||
```bash
|
||||
bash(git diff --cached --name-only | grep -v "CLAUDE.md" || echo "✅ Only CLAUDE.md files modified")
|
||||
```
|
||||
|
||||
**Expected output:**
|
||||
```
|
||||
✅ Only CLAUDE.md files modified
|
||||
```
|
||||
|
||||
**If non-CLAUDE.md files detected:**
|
||||
```
|
||||
⚠️ Warning: Non-CLAUDE.md files were modified
|
||||
Modified files: src/index.ts, package.json
|
||||
→ Run: git restore --staged .
|
||||
```
|
||||
|
||||
**Display final status:**
|
||||
```bash
|
||||
bash(git status --short)
|
||||
```
|
||||
|
||||
**Example output:**
|
||||
```
|
||||
A .claude/workflows/cli-templates/prompts/analysis/CLAUDE.md
|
||||
A .claude/commands/cli/mode/CLAUDE.md
|
||||
M CLAUDE.md
|
||||
... (30 more files)
|
||||
```
|
||||
|
||||
## Command Pattern Reference
|
||||
|
||||
**Single module update:**
|
||||
```bash
|
||||
bash(cd <module-path> && ~/.claude/scripts/update_module_claude.sh "." "full" "<tool>" &)
|
||||
```
|
||||
|
||||
**Components**:
|
||||
- `cd <module-path>` - Navigate to module (from `path:` field)
|
||||
- `&&` - Ensure cd succeeds
|
||||
- `update_module_claude.sh` - Update script
|
||||
- `"."` - Current directory
|
||||
- `"full"` - Full update mode
|
||||
- `"<tool>"` - gemini/qwen/codex
|
||||
- `&` - Background execution
|
||||
|
||||
**Path extraction:**
|
||||
```bash
|
||||
# From: depth:5|path:./src/auth|files:10|has_claude:no
|
||||
# Extract: ./src/auth
|
||||
# Command: bash(cd ./src/auth && ~/.claude/scripts/update_module_claude.sh "." "full" "gemini" &)
|
||||
```
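One concrete way to pull those fields out of each discovery line in the shell (a minimal sketch; it assumes the field order shown in the examples above):

```bash
# Extract depth and path from "depth:N|path:<PATH>|files:M|has_claude:..."
while IFS='|' read -r depth_field path_field _; do
  module_depth="${depth_field#depth:}"
  module_path="${path_field#path:}"
  echo "depth=$module_depth path=$module_path"
done < <(~/.claude/scripts/get_modules_by_depth.sh list)
```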
|
||||
|
||||
## Complex Projects Strategy
|
||||
|
||||
For projects >20 modules, delegate to memory-bridge agent:
|
||||
|
||||
```javascript
|
||||
Task(
|
||||
subagent_type="memory-bridge",
|
||||
description="Complex project full update",
|
||||
prompt=`
|
||||
CONTEXT:
|
||||
- Total modules: ${module_count}
|
||||
- Tool: ${tool}
|
||||
- Mode: full
|
||||
|
||||
MODULE LIST:
|
||||
${modules_output}
|
||||
|
||||
REQUIREMENTS:
|
||||
1. Use TodoWrite to track each depth level
|
||||
2. Process depths N→0 sequentially, max 4 parallel per depth
|
||||
3. Command: cd "<path>" && update_module_claude.sh "." "full" "${tool}" &
|
||||
4. Extract path from "depth:N|path:<PATH>|..." format
|
||||
5. Verify all modules processed
|
||||
6. Run safety check
|
||||
7. Display git status
|
||||
`
|
||||
)
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
- **Invalid path parameter**: Report error if --path directory doesn't exist, abort execution
|
||||
- **Module discovery failure**: Report error, abort execution
|
||||
- **User declines approval**: Abort execution, no changes made
|
||||
- **Safety check failure**: Automatic staging revert, report modified files
|
||||
- **Update script failure**: Report failed modules, continue with remaining
|
||||
|
||||
## Coordinator Checklist
|
||||
|
||||
✅ Parse `--tool` parameter (default: gemini)
|
||||
✅ Parse `--path` parameter (optional, default: current directory)
|
||||
✅ Execute git cache in current directory
|
||||
✅ Execute module discovery (with cd if --path specified)
|
||||
✅ Parse module list, count total modules
|
||||
✅ Determine strategy based on module count (≤20 vs >20)
|
||||
✅ Present plan with exact file paths
|
||||
✅ **Wait for user confirmation** (simple projects only)
|
||||
✅ Organize modules by depth
|
||||
✅ For each depth (highest to lowest):
|
||||
- Launch up to 4 parallel updates
|
||||
- Wait for depth completion
|
||||
- Proceed to next depth
|
||||
✅ Run safety check after all updates
|
||||
✅ Display git status
|
||||
✅ Report completion summary
|
||||
|
||||
|
||||
|
||||
## Tool Parameter Reference
|
||||
|
||||
**Gemini** (default):
|
||||
- Best for: Documentation generation, pattern recognition, architecture review
|
||||
- Context window: Large, handles complex codebases
|
||||
- Output style: Comprehensive, detailed explanations
|
||||
|
||||
**Qwen**:
|
||||
- Best for: Architecture analysis, system design documentation
|
||||
- Context window: Large, similar to Gemini
|
||||
- Output style: Structured, systematic analysis
|
||||
|
||||
**Codex**:
|
||||
- Best for: Implementation validation, code quality analysis
|
||||
- Capabilities: Mathematical reasoning, autonomous development
|
||||
- Output style: Technical, implementation-focused
|
||||
|
||||
## Path Parameter Reference
|
||||
|
||||
**Use Cases**:
|
||||
|
||||
**Update configuration directory only:**
|
||||
```bash
|
||||
/memory:update-full --path .claude
|
||||
```
|
||||
- Updates only .claude directory and subdirectories
|
||||
- Useful after workflow or command modifications
|
||||
- Faster than full project update
|
||||
|
||||
**Update specific feature module:**
|
||||
```bash
|
||||
/memory:update-full --path src/features/auth
|
||||
```
|
||||
- Updates authentication feature and sub-modules
|
||||
- Ideal for feature-specific documentation
|
||||
- Isolates scope for targeted updates
|
||||
|
||||
**Update nested structure:**
|
||||
```bash
|
||||
/memory:update-full --path .claude/workflows/cli-templates
|
||||
```
|
||||
- Updates deeply nested directory tree
|
||||
- Maintains relative path structure in output
|
||||
- All module paths relative to specified directory
|
||||
|
||||
**Best Practices**:
|
||||
- Use `--path` when working on specific features/modules
|
||||
- Omit `--path` for project-wide architectural changes
|
||||
- Combine with `--tool` for specialized documentation needs
|
||||
- Verify directory exists before execution (automatic validation)
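The automatic validation mentioned above is not shown in this document; a minimal pre-flight check could look like the following sketch (the variable handling is an assumption, not part of the shipped command):

```bash
# Sketch of the --path validation described above
target_path="${1:-.}"   # value passed via --path, defaulting to the current directory
if [ ! -d "$target_path" ]; then
  echo "❌ Error: --path directory '$target_path' does not exist" >&2
  exit 1
fi
cd "$target_path" && ~/.claude/scripts/get_modules_by_depth.sh list
```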
|
||||
@@ -1,306 +0,0 @@
|
||||
---
|
||||
name: update-related
|
||||
description: Context-aware CLAUDE.md documentation updates based on recent changes
|
||||
argument-hint: "[--tool gemini|qwen|codex]"
|
||||
---
|
||||
|
||||
# Related Documentation Update (/memory:update-related)
|
||||
|
||||
## Coordinator Role
|
||||
|
||||
**This command orchestrates context-aware CLAUDE.md updates** for modules affected by recent changes using intelligent change detection.
|
||||
|
||||
**Execution Model**:
|
||||
|
||||
1. **Change Detection**: Analyze git changes to identify affected modules
|
||||
2. **Complexity Analysis**: Evaluate change count and determine strategy
|
||||
3. **Plan Presentation**: Show user which modules need updates
|
||||
4. **Depth-Parallel Execution**: Update affected modules by depth (highest to lowest)
|
||||
5. **Safety Verification**: Ensure only CLAUDE.md files modified
|
||||
|
||||
**Tool Selection**:
|
||||
- `--tool gemini` (default): Documentation generation, pattern recognition
|
||||
- `--tool qwen`: Architecture analysis, system design docs
|
||||
- `--tool codex`: Implementation validation, code quality analysis
|
||||
|
||||
## Core Rules
|
||||
|
||||
1. **Detect Changes First**: Use git diff to identify affected modules before updates
|
||||
2. **Wait for Approval**: Present plan, no execution without user confirmation
|
||||
3. **Related Mode**: Update only changed modules and their parent contexts
|
||||
4. **Depth-Parallel**: Same depth runs parallel (max 4 jobs), different depths sequential
|
||||
5. **Safety Check**: Verify only CLAUDE.md files modified, revert if source files touched
|
||||
6. **No Background Bash Tool**: Never use `run_in_background` parameter in bash() calls; use shell `&` for parallelism
|
||||
|
||||
## Execution Workflow
|
||||
|
||||
### Phase 1: Change Detection & Analysis
|
||||
|
||||
**Refresh code index:**
|
||||
```bash
|
||||
bash(mcp__code-index__refresh_index)
|
||||
```
|
||||
|
||||
**Detect changed modules:**
|
||||
```bash
|
||||
bash(~/.claude/scripts/detect_changed_modules.sh list)
|
||||
```
|
||||
|
||||
**Cache git changes:**
|
||||
```bash
|
||||
bash(git add -A 2>/dev/null || true)
|
||||
```
|
||||
|
||||
**Parse Output**:
|
||||
- Extract changed module paths from `depth:N|path:<PATH>|...` format
|
||||
- Count affected modules
|
||||
- Identify which modules have/need CLAUDE.md updates
|
||||
|
||||
**Example output:**
|
||||
```
|
||||
depth:3|path:./src/api/auth|files:5|types:[ts]|has_claude:no|change:new
|
||||
depth:2|path:./src/api|files:12|types:[ts]|has_claude:yes|change:modified
|
||||
depth:1|path:./src|files:8|types:[ts]|has_claude:yes|change:parent
|
||||
depth:0|path:.|files:14|has_claude:yes|change:parent
|
||||
```
|
||||
|
||||
**Fallback behavior**:
|
||||
- If no git changes detected, use recent modules (first 10 by depth)
|
||||
|
||||
**Validation**:
|
||||
- Changed module list contains valid paths
|
||||
- At least one affected module exists
|
||||
|
||||
---
|
||||
|
||||
### Phase 2: Plan Presentation
|
||||
|
||||
**Decision Logic**:
|
||||
- **Simple changes (≤15 modules)**: Present plan to user, wait for approval
|
||||
- **Complex changes (>15 modules)**: Delegate to memory-bridge agent
|
||||
|
||||
**Plan format:**
|
||||
```
|
||||
📋 Related Update Plan:
|
||||
Tool: gemini
|
||||
Changed modules: 4
|
||||
|
||||
NEW CLAUDE.md files (1):
|
||||
- ./src/api/auth/CLAUDE.md [new module]
|
||||
|
||||
UPDATE existing CLAUDE.md files (3):
|
||||
- ./src/api/CLAUDE.md [parent of changed auth/]
|
||||
- ./src/CLAUDE.md [parent context]
|
||||
- ./CLAUDE.md [root level]
|
||||
|
||||
⚠️ Confirm execution? (y/n)
|
||||
```
|
||||
|
||||
**User Confirmation Required**: No execution without explicit approval
|
||||
|
||||
---
|
||||
|
||||
### Phase 3: Depth-Parallel Execution
|
||||
|
||||
**Pattern**: Process highest depth first, parallel within depth, sequential across depths.
|
||||
|
||||
**Command structure:**
|
||||
```bash
|
||||
bash(cd <module-path> && ~/.claude/scripts/update_module_claude.sh "." "related" "<tool>" &)
|
||||
```
|
||||
|
||||
**Example - Depth 3 (new module):**
|
||||
```bash
|
||||
bash(cd ./src/api/auth && ~/.claude/scripts/update_module_claude.sh "." "related" "gemini" &)
|
||||
```
|
||||
|
||||
*Wait for depth 3 completion...*
|
||||
|
||||
**Example - Depth 2 (modified parent):**
|
||||
```bash
|
||||
bash(cd ./src/api && ~/.claude/scripts/update_module_claude.sh "." "related" "gemini" &)
|
||||
```
|
||||
|
||||
*Wait for depth 2 completion...*
|
||||
|
||||
**Example - Depth 1 & 0 (parent contexts):**
|
||||
```bash
|
||||
bash(cd ./src && ~/.claude/scripts/update_module_claude.sh "." "related" "gemini" &)
|
||||
```
|
||||
```bash
|
||||
bash(cd . && ~/.claude/scripts/update_module_claude.sh "." "related" "gemini" &)
|
||||
```
|
||||
|
||||
*Wait for all depths completion...*
|
||||
|
||||
**Execution Rules**:
|
||||
- Each command is separate bash() call
|
||||
- Up to 4 concurrent jobs per depth
|
||||
- Wait for all jobs in current depth before proceeding
|
||||
- Use "related" mode (not "full") for context-aware updates
|
||||
- Extract path from `depth:N|path:<PATH>|...` format
|
||||
|
||||
**Related Mode Behavior**:
|
||||
- Updates module based on recent git changes
|
||||
- Includes parent context for better documentation coherence
|
||||
- More efficient than full updates for iterative development
|
||||
|
||||
---
|
||||
|
||||
### Phase 4: Safety Verification
|
||||
|
||||
**Check modified files:**
|
||||
```bash
|
||||
bash(git diff --cached --name-only | grep -v "CLAUDE.md" || echo "✅ Only CLAUDE.md files modified")
|
||||
```
|
||||
|
||||
**Expected output:**
|
||||
```
|
||||
✅ Only CLAUDE.md files modified
|
||||
```
|
||||
|
||||
**If non-CLAUDE.md files detected:**
|
||||
```
|
||||
⚠️ Warning: Non-CLAUDE.md files were modified
|
||||
Modified files: src/api/auth/index.ts, package.json
|
||||
→ Run: git restore --staged .
|
||||
```
|
||||
|
||||
**Display final statistics:**
|
||||
```bash
|
||||
bash(git diff --stat)
|
||||
```
|
||||
|
||||
**Example output:**
|
||||
```
|
||||
.claude/workflows/cli-templates/prompts/analysis/CLAUDE.md | 45 +++++++++++++++++++++
|
||||
src/api/CLAUDE.md | 23 +++++++++--
|
||||
src/CLAUDE.md | 12 ++++--
|
||||
CLAUDE.md | 8 ++--
|
||||
4 files changed, 82 insertions(+), 6 deletions(-)
|
||||
```
|
||||
|
||||
## Command Pattern Reference
|
||||
|
||||
**Single module update:**
|
||||
```bash
|
||||
bash(cd <module-path> && ~/.claude/scripts/update_module_claude.sh "." "related" "<tool>" &)
|
||||
```
|
||||
|
||||
**Components**:
|
||||
- `cd <module-path>` - Navigate to module (from `path:` field)
|
||||
- `&&` - Ensure cd succeeds
|
||||
- `update_module_claude.sh` - Update script
|
||||
- `"."` - Current directory
|
||||
- `"related"` - Related mode (context-aware, change-based)
|
||||
- `"<tool>"` - gemini/qwen/codex
|
||||
- `&` - Background execution
|
||||
|
||||
**Path extraction:**
|
||||
```bash
|
||||
# From: depth:3|path:./src/api/auth|files:5|change:new|has_claude:no
|
||||
# Extract: ./src/api/auth
|
||||
# Command: bash(cd ./src/api/auth && ~/.claude/scripts/update_module_claude.sh "." "related" "gemini" &)
|
||||
```
|
||||
|
||||
**Mode comparison:**
|
||||
- `"full"` - Complete module documentation regeneration
|
||||
- `"related"` - Context-aware update based on recent changes (faster)
|
||||
|
||||
## Complex Changes Strategy
|
||||
|
||||
For changes affecting >15 modules, delegate to memory-bridge agent:
|
||||
|
||||
```javascript
|
||||
Task(
|
||||
subagent_type="memory-bridge",
|
||||
description="Complex project related update",
|
||||
prompt=`
|
||||
CONTEXT:
|
||||
- Total modules: ${change_count}
|
||||
- Tool: ${tool}
|
||||
- Mode: related
|
||||
|
||||
MODULE LIST:
|
||||
${changed_modules_output}
|
||||
|
||||
REQUIREMENTS:
|
||||
1. Use TodoWrite to track each depth level
|
||||
2. Process depths N→0 sequentially, max 4 parallel per depth
|
||||
3. Command: cd "<path>" && update_module_claude.sh "." "related" "${tool}" &
|
||||
4. Extract path from "depth:N|path:<PATH>|..." format
|
||||
5. Verify all ${change_count} modules processed
|
||||
6. Run safety check
|
||||
7. Display git diff --stat
|
||||
`
|
||||
)
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
- **No changes detected**: Use fallback mode (recent 10 modules)
|
||||
- **Change detection failure**: Report error, abort execution
|
||||
- **User declines approval**: Abort execution, no changes made
|
||||
- **Safety check failure**: Automatic staging revert, report modified files
|
||||
- **Update script failure**: Report failed modules, continue with remaining
|
||||
|
||||
## Coordinator Checklist
|
||||
|
||||
✅ Parse `--tool` parameter (default: gemini)
|
||||
✅ Refresh code index for accurate change detection
|
||||
✅ Detect changed modules via detect_changed_modules.sh
|
||||
✅ Cache git changes to protect current state
|
||||
✅ Parse changed module list, count affected modules
|
||||
✅ Apply fallback if no changes detected (recent 10 modules)
|
||||
✅ Determine strategy based on change count (≤15 vs >15)
|
||||
✅ Present plan with exact file paths and change types
|
||||
✅ **Wait for user confirmation** (simple changes only)
|
||||
✅ Organize modules by depth
|
||||
✅ For each depth (highest to lowest):
|
||||
- Launch up to 4 parallel updates with "related" mode
|
||||
- Wait for depth completion
|
||||
- Proceed to next depth
|
||||
✅ Run safety check after all updates
|
||||
✅ Display git diff statistics
|
||||
✅ Report completion summary
|
||||
|
||||
## Usage Examples
|
||||
|
||||
```bash
|
||||
# Daily development update (default: gemini)
|
||||
/memory:update-related
|
||||
|
||||
# After feature work with specific tool
|
||||
/memory:update-related --tool qwen
|
||||
|
||||
# Code quality review after implementation
|
||||
/memory:update-related --tool codex
|
||||
```
|
||||
|
||||
## Tool Parameter Reference
|
||||
|
||||
**Gemini** (default):
|
||||
- Best for: Documentation generation, pattern recognition
|
||||
- Use case: Daily development updates, feature documentation
|
||||
- Output style: Comprehensive, contextual explanations
|
||||
|
||||
**Qwen**:
|
||||
- Best for: Architecture analysis, system design
|
||||
- Use case: Structural changes, API design updates
|
||||
- Output style: Structured, systematic documentation
|
||||
|
||||
**Codex**:
|
||||
- Best for: Implementation validation, code quality
|
||||
- Use case: After implementation, refactoring work
|
||||
- Output style: Technical, implementation-focused
|
||||
|
||||
## Comparison with Full Update
|
||||
|
||||
| Aspect | Related Update | Full Update |
|
||||
|--------|----------------|-------------|
|
||||
| **Scope** | Changed modules only | All project modules |
|
||||
| **Speed** | Fast (minutes) | Slower (10-30 min) |
|
||||
| **Use case** | Daily development | Major refactoring |
|
||||
| **Mode** | `"related"` | `"full"` |
|
||||
| **Trigger** | After commits | After major changes |
|
||||
| **Complexity threshold** | ≤15 modules | ≤20 modules |
|
||||
.claude/commands/memory/update-related.md (new file, 349 lines)
@@ -0,0 +1,349 @@
|
||||
---
|
||||
name: update-related
|
||||
description: Context-aware CLAUDE.md documentation updates based on recent changes with agent-based execution and tool fallback
|
||||
argument-hint: "[--tool gemini|qwen|codex]"
|
||||
---
|
||||
|
||||
# Related Documentation Update (/memory:update-related)
|
||||
|
||||
## Overview
|
||||
|
||||
Orchestrates context-aware CLAUDE.md updates for changed modules using batched agent execution with automatic tool fallback (gemini→qwen→codex).
|
||||
|
||||
**Parameters**:
|
||||
- `--tool <gemini|qwen|codex>`: Primary tool (default: gemini)
|
||||
|
||||
**Execution Flow**:
|
||||
1. Change Detection → 2. Plan Presentation → 3. Batched Agent Execution → 4. Safety Verification
|
||||
|
||||
## Core Rules
|
||||
|
||||
1. **Detect Changes First**: Use git diff to identify affected modules
|
||||
2. **Wait for Approval**: Present plan, no execution without user confirmation
|
||||
3. **Execution Strategy**:
|
||||
- <15 modules: Direct parallel execution (max 4 concurrent per depth, no agent overhead)
|
||||
- ≥15 modules: Agent batch processing (4 modules/agent, 73% overhead reduction)
|
||||
4. **Tool Fallback**: Auto-retry with fallback tools on failure
|
||||
5. **Depth Sequential**: Process depths N→0, parallel batches within depth (both modes)
|
||||
6. **Related Mode**: Update only changed modules and their parent contexts
|
||||
|
||||
## Tool Fallback Hierarchy
|
||||
|
||||
```javascript
|
||||
--tool gemini → [gemini, qwen, codex] // default
|
||||
--tool qwen → [qwen, gemini, codex]
|
||||
--tool codex → [codex, gemini, qwen]
|
||||
```
|
||||
|
||||
**Trigger**: Non-zero exit code from update script
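`construct_tool_order` only appears in the pseudocode later in this file; a bash equivalent of the same mapping might look like this (hypothetical helper, not shipped with the scripts):

```bash
# Hypothetical helper mirroring the fallback table above
construct_tool_order() {
  case "$1" in
    qwen)  echo "qwen gemini codex" ;;
    codex) echo "codex gemini qwen" ;;
    *)     echo "gemini qwen codex" ;;   # default: --tool gemini
  esac
}

tool_order=$(construct_tool_order "${PRIMARY_TOOL:-gemini}")
```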
|
||||
|
||||
## Phase 1: Change Detection & Analysis
|
||||
|
||||
```bash
|
||||
# Detect changed modules (no index refresh needed)
|
||||
bash(~/.claude/scripts/detect_changed_modules.sh list)
|
||||
|
||||
# Cache git changes
|
||||
bash(git add -A 2>/dev/null || true)
|
||||
```
|
||||
|
||||
**Parse output** `depth:N|path:<PATH>|change:<TYPE>` to extract affected modules.
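Because the field order can vary between the short and long forms of these lines, selecting fields by name is safer than by position. A minimal sketch:

```bash
# Pull the path and change type out of each detection line by field name
~/.claude/scripts/detect_changed_modules.sh list | while IFS= read -r line; do
  path=$(printf '%s\n' "$line" | tr '|' '\n' | sed -n 's/^path://p')
  change=$(printf '%s\n' "$line" | tr '|' '\n' | sed -n 's/^change://p')
  echo "$path (${change:-unchanged})"
done
```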
|
||||
|
||||
**Smart filter**: Auto-detect and skip tests/build/config/docs based on project tech stack (Node.js/Python/Go/Rust/etc).
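The smart filter itself is not defined in this document; one hedged sketch of how skipping test, build, config, and docs directories could work (the path pattern below is a Node.js/Python-flavoured assumption, whereas the real filter is tech-stack aware):

```bash
# Hypothetical smart filter: drop test/build/config/doc directories from the module list
filter_modules() {
  grep -vE 'path:\./((tests?|__tests__|__pycache__|node_modules|dist|build|docs)(/|\|))'
}
~/.claude/scripts/detect_changed_modules.sh list | filter_modules
```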
|
||||
|
||||
**Fallback**: If no changes detected, use recent modules (first 10 by depth).
|
||||
|
||||
## Phase 2: Plan Presentation
|
||||
|
||||
**Present filtered plan**:
|
||||
```
|
||||
Related Update Plan:
|
||||
Tool: gemini (fallback: qwen → codex)
|
||||
Changed: 4 modules | Batching: 4 modules/agent
|
||||
|
||||
Will update:
|
||||
- ./src/api/auth (5 files) [new module]
|
||||
- ./src/api (12 files) [parent of changed auth/]
|
||||
- ./src (8 files) [parent context]
|
||||
- . (14 files) [root level]
|
||||
|
||||
Auto-skipped (12 paths):
|
||||
- Tests: ./src/api/auth.test.ts (8 paths)
|
||||
- Config: tsconfig.json (3 paths)
|
||||
- Other: node_modules (1 path)
|
||||
|
||||
Agent allocation:
|
||||
- Depth 3 (1 module): 1 agent [1]
|
||||
- Depth 2 (1 module): 1 agent [1]
|
||||
- Depth 1 (1 module): 1 agent [1]
|
||||
- Depth 0 (1 module): 1 agent [1]
|
||||
|
||||
Confirm execution? (y/n)
|
||||
```
|
||||
|
||||
**Decision logic**:
|
||||
- User confirms "y": Proceed with execution
|
||||
- User declines "n": Abort, no changes
|
||||
- <15 modules: Direct execution
|
||||
- ≥15 modules: Agent batch execution
|
||||
|
||||
## Phase 3A: Direct Execution (<15 modules)
|
||||
|
||||
**Strategy**: Parallel execution within depth (max 4 concurrent), no agent overhead, tool fallback per module.
|
||||
|
||||
```javascript
let modules_by_depth = group_by_depth(changed_modules);
let tool_order = construct_tool_order(primary_tool);

for (let depth of sorted_depths.reverse()) { // N → 0
  let modules = modules_by_depth[depth];
  let batches = batch_modules(modules, 4); // Split into groups of 4

  for (let batch of batches) {
    // Execute batch in parallel (max 4 concurrent)
    let parallel_tasks = batch.map(module => {
      return async () => {
        let success = false;
        for (let tool of tool_order) {
          let exit_code = bash(`cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "${tool}"`);
          if (exit_code === 0) {
            report(`${module.path} updated with ${tool}`);
            success = true;
            break;
          }
        }
        if (!success) {
          report(`FAILED: ${module.path} failed all tools`);
        }
      };
    });

    await Promise.all(parallel_tasks.map(task => task())); // Run batch in parallel
  }
}
```
|
||||
|
||||
**Benefits**:
|
||||
- No agent startup overhead
|
||||
- Parallel execution within depth (max 4 concurrent)
|
||||
- Tool fallback still applies per module
|
||||
- Faster for small changesets (<15 modules)
|
||||
- Same batching strategy as Phase 3B but without agent layer
|
||||
|
||||
---
|
||||
|
||||
## Phase 3B: Agent Batch Execution (≥15 modules)
|
||||
|
||||
### Batching Strategy
|
||||
|
||||
```javascript
// Batch modules into groups of 4
function batch_modules(modules, batch_size = 4) {
  let batches = [];
  for (let i = 0; i < modules.length; i += batch_size) {
    batches.push(modules.slice(i, i + batch_size));
  }
  return batches;
}
// Examples: 10→[4,4,2] | 8→[4,4] | 3→[3]
```
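When no JavaScript runtime is involved, the same batching can be done with plain bash arrays. A short sketch, assuming `MODULES` holds the module paths for one depth:

```bash
# Split a bash array of module paths into groups of 4
MODULES=(./src/a ./src/b ./src/c ./src/d ./src/e ./src/f)
batch_size=4
for ((i = 0; i < ${#MODULES[@]}; i += batch_size)); do
  batch=("${MODULES[@]:i:batch_size}")
  echo "batch: ${batch[*]}"   # 6 modules → [4, 2], matching the examples above
done
```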
|
||||
|
||||
### Coordinator Orchestration
|
||||
|
||||
```javascript
let modules_by_depth = group_by_depth(changed_modules);
let tool_order = construct_tool_order(primary_tool);

for (let depth of sorted_depths.reverse()) { // N → 0
  let batches = batch_modules(modules_by_depth[depth], 4);
  let worker_tasks = [];

  for (let batch of batches) {
    worker_tasks.push(
      Task(
        subagent_type="memory-bridge",
        description=`Update ${batch.length} modules at depth ${depth}`,
        prompt=generate_batch_worker_prompt(batch, tool_order, "related")
      )
    );
  }

  await parallel_execute(worker_tasks); // Batches run in parallel
}
```
|
||||
|
||||
### Batch Worker Prompt Template
|
||||
|
||||
```
|
||||
PURPOSE: Update CLAUDE.md for assigned modules with tool fallback (related mode)
|
||||
|
||||
TASK:
|
||||
Update documentation for the following modules based on recent changes. For each module, try tools in order until success.
|
||||
|
||||
MODULES:
|
||||
{{module_path_1}}
|
||||
{{module_path_2}}
|
||||
{{module_path_3}}
|
||||
{{module_path_4}}
|
||||
|
||||
TOOLS (try in order):
|
||||
1. {{tool_1}}
|
||||
2. {{tool_2}}
|
||||
3. {{tool_3}}
|
||||
|
||||
EXECUTION:
|
||||
For each module above:
|
||||
1. cd "{{module_path}}"
|
||||
2. Try tool 1:
|
||||
bash(cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_1}}")
|
||||
→ Success: Report "{{module_path}} updated with {{tool_1}}", proceed to next module
|
||||
→ Failure: Try tool 2
|
||||
3. Try tool 2:
|
||||
bash(cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_2}}")
|
||||
→ Success: Report "{{module_path}} updated with {{tool_2}}", proceed to next module
|
||||
→ Failure: Try tool 3
|
||||
4. Try tool 3:
|
||||
bash(cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_3}}")
|
||||
→ Success: Report "{{module_path}} updated with {{tool_3}}", proceed to next module
|
||||
→ Failure: Report "FAILED: {{module_path}} failed all tools", proceed to next module
|
||||
|
||||
REPORTING:
|
||||
Report final summary with:
|
||||
- Total processed: X modules
|
||||
- Successful: Y modules
|
||||
- Failed: Z modules
|
||||
- Tool usage: {{tool_1}}:X, {{tool_2}}:Y, {{tool_3}}:Z
|
||||
- Detailed results for each module
|
||||
```
|
||||
|
||||
### Example Execution
|
||||
|
||||
**Depth 3 (new module)**:
|
||||
```javascript
Task(subagent_type="memory-bridge", batch=["./src/api/auth"], mode="related")
```
|
||||
|
||||
**Benefits**:
|
||||
- 4 modules → 1 agent (75% reduction)
|
||||
- Parallel batches, sequential within batch
|
||||
- Each module gets full fallback chain
|
||||
- Context-aware updates based on git changes
|
||||
|
||||
## Phase 4: Safety Verification
|
||||
|
||||
```bash
|
||||
# Check only CLAUDE.md modified
|
||||
bash(git diff --cached --name-only | grep -v "CLAUDE.md" || echo "Only CLAUDE.md files modified")
|
||||
|
||||
# Display statistics
|
||||
bash(git diff --stat)
|
||||
```
|
||||
|
||||
**Aggregate results**:
|
||||
```
|
||||
Update Summary:
|
||||
Total: 4 | Success: 4 | Failed: 0
|
||||
|
||||
Tool usage:
|
||||
- gemini: 4 modules
|
||||
- qwen: 0 modules (fallback)
|
||||
- codex: 0 modules
|
||||
|
||||
Changes:
|
||||
src/api/auth/CLAUDE.md | 45 +++++++++++++++++++++
|
||||
src/api/CLAUDE.md | 23 +++++++++--
|
||||
src/CLAUDE.md | 12 ++++--
|
||||
CLAUDE.md | 8 ++--
|
||||
4 files changed, 82 insertions(+), 6 deletions(-)
|
||||
```
|
||||
|
||||
## Execution Summary
|
||||
|
||||
**Module Count Threshold**:
|
||||
- **<15 modules**: Coordinator executes Phase 3A (Direct Execution)
|
||||
- **≥15 modules**: Coordinator executes Phase 3B (Agent Batch Execution)
|
||||
|
||||
**Agent Hierarchy** (for ≥15 modules):
|
||||
- **Coordinator**: Handles batch division, spawns worker agents per depth
|
||||
- **Worker Agents**: Each processes 4 modules with tool fallback (related mode)
|
||||
|
||||
## Error Handling
|
||||
|
||||
**Batch Worker**:
|
||||
- Tool fallback per module (auto-retry)
|
||||
- Batch isolation (failures don't propagate)
|
||||
- Clear per-module status reporting
|
||||
|
||||
**Coordinator**:
|
||||
- No changes: Use fallback (recent 10 modules)
|
||||
- User decline: No execution
|
||||
- Safety check fail: Auto-revert staging
|
||||
- Partial failures: Continue execution, report failed modules
|
||||
|
||||
**Fallback Triggers**:
|
||||
- Non-zero exit code
|
||||
- Script timeout
|
||||
- Unexpected output
|
||||
|
||||
## Tool Reference
|
||||
|
||||
| Tool | Best For | Fallback To |
|
||||
|--------|--------------------------------|----------------|
|
||||
| gemini | Documentation, patterns | qwen → codex |
|
||||
| qwen | Architecture, system design | gemini → codex |
|
||||
| codex | Implementation, code quality | gemini → qwen |
|
||||
|
||||
## Usage Examples
|
||||
|
||||
```bash
|
||||
# Daily development update
|
||||
/memory:update-related
|
||||
|
||||
# After feature work with specific tool
|
||||
/memory:update-related --tool qwen
|
||||
|
||||
# Code quality review after implementation
|
||||
/memory:update-related --tool codex
|
||||
```
|
||||
|
||||
## Key Advantages
|
||||
|
||||
**Efficiency**: 30 modules → 8 agents (73% reduction)
|
||||
**Resilience**: 3-tier fallback per module
|
||||
**Performance**: Parallel batches, no concurrency limits
|
||||
**Context-aware**: Updates based on actual git changes
|
||||
**Fast**: Only affected modules, not entire project
|
||||
|
||||
## Coordinator Checklist
|
||||
|
||||
- Parse `--tool` (default: gemini)
|
||||
- Confirm change detection prerequisites (no code index refresh needed)
|
||||
- Detect changed modules via detect_changed_modules.sh
|
||||
- **Smart filter modules** (auto-detect tech stack, skip tests/build/config/docs)
|
||||
- Cache git changes
|
||||
- Apply fallback if no changes (recent 10 modules)
|
||||
- Construct tool fallback order
|
||||
- **Present filtered plan** with skip reasons and change types
|
||||
- **Wait for y/n confirmation**
|
||||
- Determine execution mode:
|
||||
- **<15 modules**: Direct execution (Phase 3A)
|
||||
- For each depth (N→0): Sequential module updates with tool fallback
|
||||
- **≥15 modules**: Agent batch execution (Phase 3B)
|
||||
- For each depth (N→0): Batch modules (4 per batch), spawn batch workers in parallel
|
||||
- Wait for depth/batch completion
|
||||
- Aggregate results
|
||||
- Safety check (only CLAUDE.md modified)
|
||||
- Display git diff statistics + summary
|
||||
|
||||
## Comparison with Full Update
|
||||
|
||||
| Aspect | Related Update | Full Update |
|
||||
|--------|----------------|-------------|
|
||||
| **Scope** | Changed modules only | All project modules |
|
||||
| **Speed** | Fast (minutes) | Slower (10-30 min) |
|
||||
| **Use case** | Daily development | Major refactoring |
|
||||
| **Mode** | `"related"` | `"full"` |
|
||||
| **Trigger** | After commits | After major changes |
|
||||
| **Batching** | 4 modules/agent | 4 modules/agent |
|
||||
| **Fallback** | gemini→qwen→codex | gemini→qwen→codex |
|
||||
| **Complexity threshold** | ≤15 modules | ≤20 modules |
|
||||
@@ -10,7 +10,6 @@ argument-hint: "task-id"
|
||||
Breaks down complex tasks into executable subtasks with context inheritance and agent assignment.
|
||||
|
||||
## Core Principles
|
||||
**Task System:** @~/.claude/workflows/workflow-architecture.md
|
||||
**File Cohesion:** Related files must stay in same task
|
||||
**10-Task Limit:** Total tasks cannot exceed 10 (triggers re-scoping)
|
||||
|
||||
@@ -99,7 +98,7 @@ Files updated: .task/IMPL-1.json + 2 subtask files + TODO_LIST.md
|
||||
- **Implementation** → `@code-developer`
|
||||
- **Testing** → `@code-developer` (type: "test-gen")
|
||||
- **Test Validation** → `@test-fix-agent` (type: "test-fix")
|
||||
- **Review** → `@general-purpose` (optional)
|
||||
- **Review** → `@universal-executor` (optional)
|
||||
|
||||
### Context Inheritance
|
||||
- Subtasks inherit parent requirements
|
||||
@@ -138,7 +137,6 @@ Files updated: .task/IMPL-1.json + 2 subtask files + TODO_LIST.md
|
||||
|
||||
## Implementation Details
|
||||
|
||||
See @~/.claude/workflows/workflow-architecture.md for:
|
||||
- Complete task JSON schema
|
||||
- Implementation field structure
|
||||
- Context inheritance rules
|
||||
|
||||
@@ -104,7 +104,7 @@ Based on task type and title keywords:
|
||||
- **Design/Plan** → @planning-agent
|
||||
- **Test Generation** → @code-developer (type: "test-gen")
|
||||
- **Test Execution/Fix** → @test-fix-agent (type: "test-fix")
|
||||
- **Review/Audit** → @general-purpose (optional, only when explicitly requested)
|
||||
- **Review/Audit** → @universal-executor (optional, only when explicitly requested)
|
||||
|
||||
## Validation Rules
|
||||
|
||||
|
||||
@@ -7,7 +7,7 @@ argument-hint: "task-id"
|
||||
### 🚀 **Command Overview: `/task:execute`**
|
||||
|
||||
- **Purpose**: Executes tasks using intelligent agent selection, context preparation, and progress tracking.
|
||||
- **Core Principles**: @~/.claude/workflows/workflow-architecture.md
|
||||
|
||||
|
||||
### ⚙️ **Execution Modes**
|
||||
|
||||
@@ -19,7 +19,7 @@ argument-hint: "task-id"
|
||||
- Executes step-by-step, requiring user confirmation at each checkpoint.
|
||||
- Allows for dynamic adjustments and manual review during the process.
|
||||
- **review**
|
||||
- Optional manual review using `@general-purpose`.
|
||||
- Optional manual review using `@universal-executor`.
|
||||
- Used only when explicitly requested by user.
|
||||
|
||||
### 🤖 **Agent Selection Logic**
|
||||
@@ -45,7 +45,7 @@ FUNCTION select_agent(task, agent_override):
|
||||
WHEN CONTAINS "Execute tests", "Fix tests", "Validate":
|
||||
RETURN "@test-fix-agent" // type: test-fix
|
||||
WHEN CONTAINS "Review code":
|
||||
RETURN "@general-purpose" // Optional manual review
|
||||
RETURN "@universal-executor" // Optional manual review
|
||||
DEFAULT:
|
||||
RETURN "@code-developer" // Default agent
|
||||
END CASE
|
||||
@@ -236,7 +236,7 @@ Different agents receive context tailored to their function, including implement
|
||||
- Error conditions to validate from implementation.context_notes.error_handling
|
||||
- Performance requirements from implementation.context_notes.performance_considerations
|
||||
|
||||
**`@general-purpose`**:
|
||||
**`@universal-executor`**:
|
||||
- Used for optional manual reviews when explicitly requested
|
||||
- Code quality standards and implementation patterns
|
||||
- Security considerations from implementation.context_notes.risks
|
||||
|
||||
@@ -15,13 +15,13 @@ You **MUST** consider the user input before proceeding (if not empty).
|
||||
|
||||
## Goal
|
||||
|
||||
Identify inconsistencies, duplications, ambiguities, and underspecified items between action planning artifacts (`IMPL_PLAN.md`, `task.json`) and brainstorming artifacts (`synthesis-specification.md`) before implementation. This command MUST run only after `/workflow:plan` has successfully produced complete `IMPL_PLAN.md` and task JSON files.
|
||||
Identify inconsistencies, duplications, ambiguities, and underspecified items between action planning artifacts (`IMPL_PLAN.md`, `task.json`) and brainstorming artifacts (`role analysis documents`) before implementation. This command MUST run only after `/workflow:plan` has successfully produced complete `IMPL_PLAN.md` and task JSON files.
|
||||
|
||||
## Operating Constraints
|
||||
|
||||
**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (user must explicitly approve before any follow-up editing commands).
|
||||
|
||||
**Synthesis Authority**: The `synthesis-specification.md` is **authoritative** for requirements and design decisions. Any conflicts between IMPL_PLAN/tasks and synthesis are automatically CRITICAL and require adjustment of the plan/tasks—not reinterpretation of requirements.
|
||||
**Synthesis Authority**: The `role analysis documents` are **authoritative** for requirements and design decisions. Any conflicts between IMPL_PLAN/tasks and these analyses are automatically CRITICAL and require adjustment of the plan/tasks—not reinterpretation of requirements.
|
||||
|
||||
## Execution Steps
|
||||
|
||||
@@ -45,13 +45,13 @@ brainstorm_dir = session_dir/.brainstorming
|
||||
task_dir = session_dir/.task
|
||||
|
||||
# Validate required artifacts
|
||||
SYNTHESIS = brainstorm_dir/synthesis-specification.md
|
||||
SYNTHESIS = brainstorm_dir/role analysis documents
|
||||
IMPL_PLAN = session_dir/IMPL_PLAN.md
|
||||
TASK_FILES = Glob(task_dir/*.json)
|
||||
|
||||
# Abort if missing
|
||||
IF NOT EXISTS(SYNTHESIS):
|
||||
ERROR: "synthesis-specification.md not found. Run /workflow:brainstorm:synthesis first"
|
||||
ERROR: "role analysis documents not found. Run /workflow:brainstorm:synthesis first"
|
||||
EXIT
|
||||
|
||||
IF NOT EXISTS(IMPL_PLAN):
|
||||
@@ -67,7 +67,12 @@ IF TASK_FILES.count == 0:
|
||||
|
||||
Load only minimal necessary context from each artifact:
|
||||
|
||||
**From synthesis-specification.md**:
|
||||
**From workflow-session.json** (NEW - PRIMARY REFERENCE):
|
||||
- Original user prompt/intent (project or description field)
|
||||
- User's stated goals and objectives
|
||||
- User's scope definition
|
||||
|
||||
**From role analysis documents**:
|
||||
- Functional Requirements (IDs, descriptions, acceptance criteria)
|
||||
- Non-Functional Requirements (IDs, targets)
|
||||
- Business Requirements (IDs, success metrics)
|
||||
@@ -117,7 +122,14 @@ Create internal representations (do not include raw artifacts in output):
|
||||
|
||||
Focus on high-signal findings. Limit to 50 findings total; aggregate remainder in overflow summary.
|
||||
|
||||
#### A. Requirements Coverage Analysis
|
||||
#### A. User Intent Alignment (NEW - CRITICAL)
|
||||
|
||||
- **Goal Alignment**: IMPL_PLAN objectives match user's original intent
|
||||
- **Scope Drift**: Plan covers user's stated scope without unauthorized expansion
|
||||
- **Success Criteria Match**: Plan's success criteria reflect user's expectations
|
||||
- **Intent Conflicts**: Tasks contradicting user's original objectives
|
||||
|
||||
#### B. Requirements Coverage Analysis
|
||||
|
||||
- **Orphaned Requirements**: Requirements in synthesis with zero associated tasks
|
||||
- **Unmapped Tasks**: Tasks with no clear requirement linkage
|
||||
@@ -167,6 +179,7 @@ Focus on high-signal findings. Limit to 50 findings total; aggregate remainder i
|
||||
Use this heuristic to prioritize findings:
|
||||
|
||||
- **CRITICAL**:
|
||||
- Violates user's original intent (goal misalignment, scope drift)
|
||||
- Violates synthesis authority (requirement conflict)
|
||||
- Core requirement with zero coverage
|
||||
- Circular dependencies
|
||||
@@ -197,7 +210,7 @@ Output a Markdown report (no file writes) with the following structure:
|
||||
|
||||
**Session**: WFS-{session-id}
|
||||
**Generated**: {timestamp}
|
||||
**Artifacts Analyzed**: synthesis-specification.md, IMPL_PLAN.md, {N} task files
|
||||
**Artifacts Analyzed**: role analysis documents, IMPL_PLAN.md, {N} task files
|
||||
|
||||
---
|
||||
|
||||
@@ -311,44 +324,40 @@ Output a Markdown report (no file writes) with the following structure:
|
||||
|
||||
**If CRITICAL Issues Exist**:
|
||||
- ❌ **BLOCK EXECUTION** - Resolve critical issues before proceeding
|
||||
- Use `/task:create` for missing requirements coverage
|
||||
- Use TodoWrite to track all required fixes
|
||||
- Fix broken dependencies and circular references
|
||||
|
||||
**If Only HIGH/MEDIUM/LOW Issues**:
|
||||
- ⚠️ **PROCEED WITH CAUTION** - Fix high-priority issues first
|
||||
- Use batch replan mode to apply all task improvements systematically
|
||||
- Use TodoWrite to systematically track and complete all improvements
|
||||
|
||||
#### Batch Remediation
|
||||
#### TodoWrite-Based Remediation Workflow
|
||||
|
||||
**Report Location**: `.workflow/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md`
|
||||
|
||||
**Apply All Task Improvements** (Recommended):
|
||||
```bash
|
||||
# Batch process all task replan recommendations
|
||||
/task:replan --batch .workflow/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md
|
||||
**Recommended Workflow**:
|
||||
1. **Create TodoWrite Task List**: Extract all findings from report
|
||||
2. **Process by Priority**: CRITICAL → HIGH → MEDIUM → LOW
|
||||
3. **Complete Each Fix**: Mark tasks as in_progress/completed as you work
|
||||
4. **Validate Changes**: Verify each modification against requirements
|
||||
|
||||
# Or with auto-confirmation (no prompts)
|
||||
/task:replan --batch ACTION_PLAN_VERIFICATION.md --auto-confirm
|
||||
```
|
||||
|
||||
**Manual Selective Fixes**:
|
||||
```bash
|
||||
# Fix critical coverage gaps first
|
||||
/task:create "Implement user authentication (FR-03)"
|
||||
/task:create "Add performance optimization (NFR-01)"
|
||||
|
||||
# Then apply task refinements individually
|
||||
/task:replan IMPL-1.2 "Add context.artifacts and target_files"
|
||||
**TodoWrite Task Structure Example**:
|
||||
```markdown
|
||||
Priority Order:
|
||||
1. Fix coverage gaps (CRITICAL)
|
||||
2. Resolve consistency conflicts (CRITICAL/HIGH)
|
||||
3. Add missing specifications (MEDIUM)
|
||||
4. Improve task quality (LOW)
|
||||
```
|
||||
|
||||
**Notes**:
|
||||
- Batch mode extracts all `/task:replan` commands from report
|
||||
- Processes by priority: CRITICAL → HIGH → MEDIUM → LOW
|
||||
- Creates TodoWrite tracking for all modifications
|
||||
- TodoWrite provides real-time progress tracking
|
||||
- Each finding becomes a trackable todo item
|
||||
- User can monitor progress throughout remediation
|
||||
- Architecture drift in IMPL_PLAN requires manual editing
|
||||
```
|
||||
|
||||
### 7. Save Report and Provide Remediation Options
|
||||
### 7. Save Report and Execute TodoWrite-Based Remediation
|
||||
|
||||
**Save Analysis Report**:
|
||||
```bash
|
||||
@@ -356,87 +365,53 @@ report_path = ".workflow/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md"
|
||||
Write(report_path, full_report_content)
|
||||
```
|
||||
|
||||
At end of report, provide batch remediation guidance:
|
||||
**After Report Generation**:
|
||||
|
||||
1. **Extract Findings**: Parse all issues by severity
|
||||
2. **Create TodoWrite Task List**: Convert findings to actionable todos
|
||||
3. **Execute Fixes**: Process each todo systematically
|
||||
4. **Update Task Files**: Apply modifications directly to task JSON files
|
||||
5. **Update IMPL_PLAN**: Apply strategic changes if needed
|
||||
|
||||
At end of report, provide remediation guidance:
|
||||
|
||||
```markdown
|
||||
### 🔧 Remediation Options
|
||||
### 🔧 Remediation Workflow
|
||||
|
||||
**Recommended Workflow**:
|
||||
1. **Batch Mode** (Fastest): Apply all task improvements automatically
|
||||
```bash
|
||||
/task:replan --batch .workflow/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md
|
||||
```
|
||||
**Recommended Approach**:
|
||||
1. **Initialize TodoWrite**: Create comprehensive task list from all findings
|
||||
2. **Process by Severity**: Start with CRITICAL, then HIGH, MEDIUM, LOW
|
||||
3. **Apply Fixes Directly**: Modify task.json files and IMPL_PLAN.md as needed
|
||||
4. **Track Progress**: Mark todos as completed after each fix
|
||||
|
||||
2. **Manual Review**: Examine each issue before applying
|
||||
- Review findings in this report
|
||||
- Execute specific `/task:create` or `/task:replan` commands individually
|
||||
**TodoWrite Execution Pattern**:
|
||||
```bash
|
||||
# Step 1: Create task list from verification report
|
||||
TodoWrite([
|
||||
{ content: "Fix FR-03 coverage gap - add authentication task", status: "pending", activeForm: "Fixing FR-03 coverage gap" },
|
||||
{ content: "Fix IMPL-1.2 consistency - align with ADR-02", status: "pending", activeForm: "Fixing IMPL-1.2 consistency" },
|
||||
{ content: "Add context.artifacts to IMPL-1.2", status: "pending", activeForm: "Adding context.artifacts to IMPL-1.2" },
|
||||
# ... additional todos for each finding
|
||||
])
|
||||
|
||||
3. **Architecture Changes**: Update IMPL_PLAN.md manually if architecture drift detected
|
||||
|
||||
**Note**: This is read-only analysis. All fixes require explicit execution.
|
||||
# Step 2: Process each todo systematically
|
||||
# Mark as in_progress when starting
|
||||
# Apply fix using Read/Edit tools
|
||||
# Mark as completed when done
|
||||
# Move to next priority item
|
||||
```
|
||||
|
||||
### 8. Update Session Metadata
|
||||
**File Modification Workflow**:
|
||||
```bash
|
||||
# For task JSON modifications:
|
||||
1. Read(.workflow/WFS-{session}/.task/IMPL-X.Y.json)
|
||||
2. Edit() to apply fixes
|
||||
3. Mark todo as completed
|
||||
|
||||
```json
|
||||
{
|
||||
"phases": {
|
||||
"PLAN": {
|
||||
"status": "completed",
|
||||
"action_plan_verification": {
|
||||
"completed": true,
|
||||
"completed_at": "timestamp",
|
||||
"overall_risk_level": "HIGH",
|
||||
"recommendation": "PROCEED_WITH_FIXES",
|
||||
"issues": {
|
||||
"critical": 2,
|
||||
"high": 5,
|
||||
"medium": 8,
|
||||
"low": 3
|
||||
},
|
||||
"coverage": {
|
||||
"functional_requirements": 0.85,
|
||||
"non_functional_requirements": 0.40,
|
||||
"business_requirements": 1.00
|
||||
},
|
||||
"report_path": ".workflow/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
# For IMPL_PLAN modifications:
|
||||
1. Read(.workflow/WFS-{session}/IMPL_PLAN.md)
|
||||
2. Edit() to apply strategic changes
|
||||
3. Mark todo as completed
|
||||
```
|
||||
|
||||
## Operating Principles
|
||||
|
||||
### Context Efficiency
|
||||
- **Minimal high-signal tokens**: Focus on actionable findings
|
||||
- **Progressive disclosure**: Load artifacts incrementally
|
||||
- **Token-efficient output**: Limit findings table to 50 rows; summarize overflow
|
||||
- **Deterministic results**: Rerunning without changes produces consistent IDs and counts
|
||||
|
||||
### Analysis Guidelines
|
||||
- **NEVER modify files** (this is read-only analysis)
|
||||
- **NEVER hallucinate missing sections** (if absent, report them accurately)
|
||||
- **Prioritize synthesis violations** (these are always CRITICAL)
|
||||
- **Use examples over exhaustive rules** (cite specific instances)
|
||||
- **Report zero issues gracefully** (emit success report with coverage statistics)
|
||||
|
||||
### Verification Taxonomy
|
||||
- **Coverage**: Requirements → Tasks mapping
|
||||
- **Consistency**: Cross-artifact alignment
|
||||
- **Dependencies**: Task ordering and relationships
|
||||
- **Synthesis Alignment**: Adherence to authoritative requirements
|
||||
- **Task Quality**: Specification completeness
|
||||
- **Feasibility**: Implementation risks
|
||||
|
||||
## Behavior Rules
|
||||
|
||||
- **If no issues found**: Report "✅ Action plan verification passed. No issues detected." and suggest proceeding to `/workflow:execute`.
|
||||
- **If CRITICAL issues exist**: Recommend blocking execution until resolved.
|
||||
- **If only HIGH/MEDIUM issues**: User may proceed with caution, but provide improvement suggestions.
|
||||
- **If IMPL_PLAN.md or task files missing**: Instruct user to run `/workflow:plan` first.
|
||||
- **Always provide actionable remediation suggestions**: Don't just identify problems—suggest solutions.
|
||||
|
||||
## Context
|
||||
|
||||
{ARGS}
|
||||
**Note**: All fixes execute immediately after user confirmation without additional commands.
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
---
|
||||
name: api-designer
|
||||
description: Generate or update api-designer/analysis.md addressing topic-framework discussion points
|
||||
description: Generate or update api-designer/analysis.md addressing guidance-specification discussion points
|
||||
argument-hint: "optional topic - uses existing framework if available"
|
||||
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
---
|
||||
@@ -8,10 +8,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
## 🔌 **API Designer Analysis Generator**
|
||||
|
||||
### Purpose
|
||||
**Specialized command for generating api-designer/analysis.md** that addresses topic-framework.md discussion points from backend API design perspective. Creates or updates role-specific analysis with framework references.
|
||||
**Specialized command for generating api-designer/analysis.md** that addresses guidance-specification.md discussion points from backend API design perspective. Creates or updates role-specific analysis with framework references.
|
||||
|
||||
### Core Function
|
||||
- **Framework-based Analysis**: Address each discussion point in topic-framework.md
|
||||
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
|
||||
- **API Design Focus**: RESTful/GraphQL API design, endpoint structure, and contract definition
|
||||
- **Update Mechanism**: Create new or update existing analysis.md
|
||||
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
|
||||
@@ -51,7 +51,7 @@ IF active_session EXISTS:
|
||||
session_id = get_active_session()
|
||||
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
|
||||
|
||||
CHECK: brainstorm_dir/topic-framework.md
|
||||
CHECK: brainstorm_dir/guidance-specification.md
|
||||
IF EXISTS:
|
||||
framework_mode = true
|
||||
load_framework = true
|
||||
@@ -78,20 +78,20 @@ ELSE:
|
||||
```
|
||||
|
||||
### Phase 3: Agent Task Generation
|
||||
**Framework-Based Analysis** (when topic-framework.md exists):
|
||||
**Framework-Based Analysis** (when guidance-specification.md exists):
|
||||
```bash
|
||||
Task(subagent_type="conceptual-planning-agent",
|
||||
prompt="Generate API designer analysis addressing topic framework
|
||||
|
||||
## Framework Integration Required
|
||||
**MANDATORY**: Load and address topic-framework.md discussion points
|
||||
**Framework Reference**: @{session.brainstorm_dir}/topic-framework.md
|
||||
**MANDATORY**: Load and address guidance-specification.md discussion points
|
||||
**Framework Reference**: @{session.brainstorm_dir}/guidance-specification.md
|
||||
**Output Location**: {session.brainstorm_dir}/api-designer/analysis.md
|
||||
|
||||
## Analysis Requirements
|
||||
1. **Load Topic Framework**: Read topic-framework.md completely
|
||||
1. **Load Topic Framework**: Read guidance-specification.md completely
|
||||
2. **Address Each Discussion Point**: Respond to all 5 framework sections from API design perspective
|
||||
3. **Include Framework Reference**: Start analysis.md with @../topic-framework.md
|
||||
3. **Include Framework Reference**: Start analysis.md with @../guidance-specification.md
|
||||
4. **API Design Focus**: Emphasize endpoint structure, data contracts, versioning strategies
|
||||
5. **Structured Response**: Use framework structure for analysis organization
|
||||
|
||||
@@ -106,7 +106,7 @@ Task(subagent_type="conceptual-planning-agent",
|
||||
```markdown
|
||||
# API Designer Analysis: [Topic]
|
||||
|
||||
**Framework Reference**: @../topic-framework.md
|
||||
**Framework Reference**: @../guidance-specification.md
|
||||
**Role Focus**: Backend API Design and Contract Definition
|
||||
|
||||
## Core Requirements Analysis
|
||||
@@ -140,14 +140,14 @@ IF update_mode = "incremental":
|
||||
|
||||
## Current Analysis Context
|
||||
**Existing Analysis**: @{session.brainstorm_dir}/api-designer/analysis.md
|
||||
**Framework Reference**: @{session.brainstorm_dir}/topic-framework.md
|
||||
**Framework Reference**: @{session.brainstorm_dir}/guidance-specification.md
|
||||
|
||||
## Update Requirements
|
||||
1. **Preserve Structure**: Maintain existing analysis structure
|
||||
2. **Add New Insights**: Integrate new API design insights and recommendations
|
||||
3. **Framework Alignment**: Ensure continued alignment with topic framework
|
||||
4. **API Updates**: Add new endpoint patterns, versioning strategies, documentation improvements
|
||||
5. **Maintain References**: Keep @../topic-framework.md reference
|
||||
5. **Maintain References**: Keep @../guidance-specification.md reference
|
||||
|
||||
## Update Instructions
|
||||
- Read existing analysis completely
|
||||
@@ -163,14 +163,14 @@ IF update_mode = "incremental":
|
||||
### Output Files
|
||||
```
|
||||
.workflow/WFS-[topic]/.brainstorming/
|
||||
├── topic-framework.md # Input: Framework (if exists)
|
||||
├── guidance-specification.md # Input: Framework (if exists)
|
||||
└── api-designer/
|
||||
└── analysis.md # ★ OUTPUT: Framework-based analysis
|
||||
```
|
||||
|
||||
### Analysis Structure
|
||||
**Required Elements**:
|
||||
- **Framework Reference**: @../topic-framework.md (if framework exists)
|
||||
- **Framework Reference**: @../guidance-specification.md (if framework exists)
|
||||
- **Role Focus**: Backend API Design and Contract Definition perspective
|
||||
- **5 Framework Sections**: Address each framework discussion point
|
||||
- **API Design Recommendations**: Endpoint-specific insights and solutions
|
||||
|
||||
@@ -1,366 +1,605 @@
|
||||
---
|
||||
name: artifacts
|
||||
description: Generate role-specific topic-framework.md dynamically based on selected roles
|
||||
argument-hint: "topic or challenge description for framework generation"
|
||||
allowed-tools: TodoWrite(*), Read(*), Write(*), Bash(*), Glob(*)
|
||||
description: Interactive clarification generating confirmed guidance specification
|
||||
argument-hint: "topic or challenge description [--count N]"
|
||||
allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*)
|
||||
---
|
||||
|
||||
# Topic Framework Generator Command
|
||||
## Overview
|
||||
|
||||
## Usage
|
||||
```bash
|
||||
/workflow:brainstorm:artifacts "<topic>" [--roles "role1,role2,role3"]
|
||||
Six-phase workflow: **Automatic project context collection** → Extract topic challenges → Select roles → Generate task-specific questions → Detect conflicts → Generate confirmed guidance (declarative statements only).
|
||||
|
||||
**Input**: `"GOAL: [objective] SCOPE: [boundaries] CONTEXT: [background]" [--count N]`
|
||||
**Output**: `.workflow/WFS-{topic}/.brainstorming/guidance-specification.md` (CONFIRMED/SELECTED format)
|
||||
**Core Principle**: Questions dynamically generated from project context + topic keywords/challenges, NOT from generic templates
|
||||
|
||||
**Parameters**:
|
||||
- `topic` (required): Topic or challenge description (structured format recommended)
|
||||
- `--count N` (optional): Number of roles user WANTS to select (system will recommend N+2 options for user to choose from, default: 3)
|
||||
|
||||
## Task Tracking
|
||||
|
||||
**⚠️ TodoWrite Rule**: EXTEND auto-parallel's task list (NOT replace/overwrite)
|
||||
|
||||
**When called from auto-parallel**:
|
||||
- Find the artifacts parent task: "Execute artifacts command for interactive framework generation"
|
||||
- Mark parent task as "in_progress"
|
||||
- APPEND artifacts sub-tasks AFTER the parent task (Phase 0-5)
|
||||
- Mark each sub-task as it completes
|
||||
- When Phase 5 completes, mark parent task as "completed"
|
||||
- **PRESERVE all other auto-parallel tasks** (role agents, synthesis)
|
||||
|
||||
**Standalone Mode**:
|
||||
```json
|
||||
[
|
||||
{"content": "Initialize session (.workflow/.active-* check, parse --count parameter)", "status": "pending", "activeForm": "Initializing"},
|
||||
{"content": "Phase 0: Automatic project context collection (call context-gather)", "status": "pending", "activeForm": "Phase 0 context collection"},
|
||||
{"content": "Phase 1: Extract challenges, output 2-4 task-specific questions, wait for user input", "status": "pending", "activeForm": "Phase 1 topic analysis"},
|
||||
{"content": "Phase 2: Recommend count+2 roles, output role selection, wait for user input", "status": "pending", "activeForm": "Phase 2 role selection"},
|
||||
{"content": "Phase 3: Generate 3-4 questions per role, output and wait for answers (max 10 per round)", "status": "pending", "activeForm": "Phase 3 role questions"},
|
||||
{"content": "Phase 4: Detect conflicts, output clarifications, wait for answers (max 10 per round)", "status": "pending", "activeForm": "Phase 4 conflict resolution"},
|
||||
{"content": "Phase 5: Transform Q&A to declarative statements, write guidance-specification.md", "status": "pending", "activeForm": "Phase 5 document generation"}
|
||||
]
|
||||
```
|
||||
|
||||
## Purpose
|
||||
**Generate dynamic topic-framework.md tailored to selected roles**. Creates role-specific discussion frameworks that address relevant perspectives. If no roles specified, generates comprehensive framework covering common analysis areas.
|
||||
## User Interaction Protocol
|
||||
|
||||
## Role-Based Framework Generation
|
||||
### Question Output Format
|
||||
|
||||
**Dynamic Generation**: Framework content adapts based on selected roles
|
||||
- **With roles**: Generate targeted discussion points for specified roles only
|
||||
- **Without roles**: Generate comprehensive framework covering all common areas
|
||||
All questions output as structured text (detailed format with descriptions):
|
||||
|
||||
## Core Workflow
|
||||
|
||||
### Topic Framework Generation Process
|
||||
|
||||
**Phase 1: Session Management** ⚠️ FIRST STEP
|
||||
- **Active session detection**: Check `.workflow/.active-*` markers
|
||||
- **Session selection**: Prompt user if multiple active sessions found
|
||||
- **Auto-creation**: Create `WFS-[topic-slug]` only if no active session exists
|
||||
- **Framework check**: Check if `topic-framework.md` exists (update vs create mode)
|
||||
|
||||
**Phase 2: Role Analysis** ⚠️ NEW
|
||||
- **Parse roles parameter**: Extract roles from `--roles "role1,role2,role3"` if provided
|
||||
- **Role validation**: Verify each role is valid (matches available role commands)
|
||||
- **Store role list**: Save selected roles to session metadata for reference
|
||||
- **Default behavior**: If no roles specified, use comprehensive coverage
|
||||
|
||||
**Phase 3: Dynamic Topic Analysis**
|
||||
- **Scope definition**: Define topic boundaries and objectives
|
||||
- **Stakeholder identification**: Identify key users and stakeholders based on selected roles
|
||||
- **Requirements gathering**: Extract requirements relevant to selected roles
|
||||
- **Context collection**: Gather context appropriate for role perspectives
|
||||
|
||||
**Phase 4: Role-Specific Framework Generation**
|
||||
- **Discussion points creation**: Generate 3-5 discussion areas **tailored to selected roles**
|
||||
- **Role-targeted questions**: Create questions specifically for chosen roles
|
||||
- **Framework document**: Generate `topic-framework.md` with role-specific sections
|
||||
- **Validation check**: Ensure framework addresses all selected role perspectives
|
||||
|
||||
**Phase 5: Metadata Storage**
|
||||
- **Save role assignment**: Store selected roles in session metadata
|
||||
- **Framework versioning**: Track which roles framework addresses
|
||||
- **Update tracking**: Maintain role evolution if framework updated
|
||||
|
||||
## Implementation Standards
|
||||
|
||||
### Discussion-Driven Analysis
|
||||
**Interactive Approach**: Direct conversation and exploration without predefined role constraints
|
||||
|
||||
**Process Flow**:
|
||||
1. **Topic introduction**: Understanding scope and context
|
||||
2. **Exploratory questioning**: Open-ended investigation
|
||||
3. **Component identification**: Breaking down into manageable pieces
|
||||
4. **Relationship analysis**: Understanding connections and dependencies
|
||||
5. **Documentation generation**: Structured capture of insights
|
||||
|
||||
**Key Areas of Investigation**:
|
||||
- **Functional aspects**: What the topic needs to accomplish
|
||||
- **Technical considerations**: Implementation constraints and requirements
|
||||
- **User perspectives**: How different stakeholders are affected
|
||||
- **Business implications**: Cost, timeline, and strategic considerations
|
||||
- **Risk assessment**: Potential challenges and mitigation strategies
|
||||
|
||||
### Document Generation Standards
|
||||
|
||||
**Always Created**:
|
||||
- **discussion-summary.md**: Main conversation points and key insights
|
||||
- **component-analysis.md**: Detailed breakdown of topic components
|
||||
|
||||
## Document Generation
|
||||
|
||||
**Primary Output**: Single structured `topic-framework.md` document
|
||||
|
||||
**Document Structure**:
|
||||
```
|
||||
.workflow/WFS-[topic]/.brainstorming/
|
||||
└── topic-framework.md # ★ STRUCTURED FRAMEWORK DOCUMENT
|
||||
```
|
||||
|
||||
**Note**: `workflow-session.json` is located at `.workflow/WFS-[topic]/workflow-session.json` (session root), not inside `.brainstorming/`.
|
||||
|
||||
## Framework Template Structures
|
||||
|
||||
### Dynamic Role-Based Framework
|
||||
|
||||
Framework content adapts based on `--roles` parameter:
|
||||
|
||||
#### Option 1: Specific Roles Provided
|
||||
```markdown
|
||||
# [Topic] - Discussion Framework
|
||||
【问题{N} - {短标签}】{问题文本}
|
||||
a) {选项标签}
|
||||
说明:{选项说明和影响}
|
||||
b) {选项标签}
|
||||
说明:{选项说明和影响}
|
||||
c) {选项标签}
|
||||
说明:{选项说明和影响}
|
||||
|
||||
## Topic Overview
|
||||
- **Scope**: [Topic boundaries relevant to selected roles]
|
||||
- **Objectives**: [Goals from perspective of selected roles]
|
||||
- **Context**: [Background focusing on role-specific concerns]
|
||||
- **Target Roles**: ui-designer, system-architect, subject-matter-expert
|
||||
|
||||
## Role-Specific Discussion Points
|
||||
|
||||
### For UI Designer
|
||||
1. **User Interface Requirements**
|
||||
- What interface components are needed?
|
||||
- What user interactions must be supported?
|
||||
- What visual design considerations apply?
|
||||
|
||||
2. **User Experience Challenges**
|
||||
- What are the key user journeys?
|
||||
- What accessibility requirements exist?
|
||||
- How to balance aesthetics with functionality?
|
||||
|
||||
### For System Architect
|
||||
1. **Architecture Decisions**
|
||||
- What architectural patterns fit this solution?
|
||||
- What scalability requirements exist?
|
||||
- How does this integrate with existing systems?
|
||||
|
||||
2. **Technical Implementation**
|
||||
- What technology stack is appropriate?
|
||||
- What are the performance requirements?
|
||||
- What dependencies must be managed?
|
||||
|
||||
### For Subject Matter Expert
|
||||
1. **Domain Expertise & Standards**
|
||||
- What industry standards and best practices apply?
|
||||
- What regulatory compliance requirements exist?
|
||||
- What domain-specific patterns should be followed?
|
||||
|
||||
2. **Technical Quality & Risk**
|
||||
- What technical debt considerations exist?
|
||||
- What scalability and maintenance implications apply?
|
||||
- What knowledge transfer and documentation is needed?
|
||||
|
||||
## Cross-Role Integration Points
|
||||
- How do UI decisions impact architecture?
|
||||
- How does architecture constrain UI possibilities?
|
||||
- What domain standards affect both UI and architecture?
|
||||
|
||||
## Framework Usage
|
||||
**For Role Agents**: Address your specific section + integration points
|
||||
**Reference Format**: @../topic-framework.md in your analysis.md
|
||||
**Update Process**: Use /workflow:brainstorm:artifacts to update
|
||||
|
||||
---
|
||||
*Generated for roles: ui-designer, system-architect, subject-matter-expert*
|
||||
*Last updated: [timestamp]*
|
||||
请回答:{N}a 或 {N}b 或 {N}c
|
||||
```
|
||||
|
||||
#### Option 2: No Roles Specified (Comprehensive)
|
||||
**Multi-select format** (Phase 2 role selection):
|
||||
```markdown
|
||||
# [Topic] - Discussion Framework
|
||||
【角色选择】请选择 {count} 个角色参与头脑风暴分析
|
||||
|
||||
## Topic Overview
|
||||
- **Scope**: [Comprehensive topic boundaries]
|
||||
- **Objectives**: [All-encompassing goals]
|
||||
- **Context**: [Full background and constraints]
|
||||
- **Stakeholders**: [All relevant parties]
|
||||
a) {role-name} ({中文名})
|
||||
推荐理由:{基于topic的相关性说明}
|
||||
b) {role-name} ({中文名})
|
||||
推荐理由:{基于topic的相关性说明}
|
||||
...
|
||||
|
||||
## Core Discussion Areas
|
||||
支持格式:
|
||||
- 分别选择:2a 2c 2d (选择第2题的a、c、d选项)
|
||||
- 合并语法:2acd (选择a、c、d)
|
||||
- 逗号分隔:2a,c,d
|
||||
|
||||
### 1. Requirements & Objectives
|
||||
- What are the fundamental requirements?
|
||||
- What are the critical success factors?
|
||||
- What constraints must be considered?
|
||||
|
||||
### 2. Technical & Architecture
|
||||
- What are the technical challenges?
|
||||
- What architectural decisions are needed?
|
||||
- What integration points exist?
|
||||
|
||||
### 3. User Experience & Design
|
||||
- Who are the primary users?
|
||||
- What are the key user journeys?
|
||||
- What usability requirements exist?
|
||||
|
||||
### 4. Security & Compliance
|
||||
- What security requirements exist?
|
||||
- What compliance considerations apply?
|
||||
- What data protection is needed?
|
||||
|
||||
### 5. Implementation & Operations
|
||||
- What are the implementation risks?
|
||||
- What resources are required?
|
||||
- How will this be maintained?
|
||||
|
||||
## Available Role Perspectives
|
||||
Framework supports analysis from any of these roles:
|
||||
- **Technical**: system-architect, data-architect, subject-matter-expert
|
||||
- **Product & Design**: ui-designer, ux-expert, product-manager, product-owner
|
||||
- **Agile & Quality**: scrum-master, test-strategist
|
||||
|
||||
---
|
||||
*Comprehensive framework - adaptable to any role*
|
||||
*Last updated: [timestamp]*
|
||||
请输入选择:
|
||||
```
|
||||
|
||||
## Role-Specific Content Generation
|
||||
### Input Parsing Rules
|
||||
|
||||
### Available Roles and Their Focus Areas
|
||||
**Supported formats** (intelligent parsing):
|
||||
|
||||
**Technical Roles**:
|
||||
- `system-architect`: Architecture patterns, scalability, technology stack, integration
|
||||
- `data-architect`: Data modeling, processing workflows, analytics, storage
|
||||
- `subject-matter-expert`: Domain expertise, industry standards, compliance, best practices
|
||||
1. **Space-separated**: `1a 2b 3c` → Q1:a, Q2:b, Q3:c
|
||||
2. **Comma-separated**: `1a,2b,3c` → Q1:a, Q2:b, Q3:c
|
||||
3. **Multi-select combined**: `2abc` → Q2: options a,b,c
|
||||
4. **Multi-select spaces**: `2 a b c` → Q2: options a,b,c
|
||||
5. **Multi-select comma**: `2a,b,c` → Q2: options a,b,c
|
||||
6. **Natural language**: `问题1选a` → 1a (fallback parsing)
|
||||
|
||||
**Product & Design Roles**:
|
||||
- `ui-designer`: User interface, visual design, interaction patterns, accessibility
|
||||
- `ux-expert`: User experience optimization, usability testing, interaction design, design systems
|
||||
- `product-manager`: Business value, feature prioritization, market positioning, roadmap
|
||||
- `product-owner`: Backlog management, user stories, acceptance criteria, stakeholder alignment
|
||||
**Parsing algorithm**:
|
||||
- Extract question numbers and option letters
|
||||
- Validate question numbers match output
|
||||
- Validate option letters exist for each question
|
||||
- If ambiguous/invalid, output example format and request re-input
|
||||
|
||||
**Agile & Quality Roles**:
|
||||
- `scrum-master`: Sprint planning, team dynamics, process optimization, delivery management
|
||||
- `test-strategist`: Testing strategies, quality assurance, test automation, validation approaches
|
||||
**Error handling** (lenient):
|
||||
- Recognize common variations automatically
|
||||
- If parsing fails, show example and wait for clarification
|
||||
- Support re-input without penalty
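
The parsing rules above can be captured in a small normalizer. A minimal sketch, assuming the answer arrives as one text line and that the valid option letters per question were recorded when the questions were printed; function and variable names here are illustrative, and the natural-language fallback (e.g. `问题1选a`) is deliberately left to the error path:

```javascript
// Parse answer strings such as "1a 2b", "1a,2c", "2acd", "2 a b c", or "2a,c,d"
// into a map of question number → selected option letters.
function parseAnswers(input, questions) {
  // questions: { [questionNumber]: ["a", "b", ...] } — valid options per question
  const selections = {};
  let currentQuestion = null;

  for (const token of input.trim().toLowerCase().split(/\s+/)) {
    for (const part of token.split(",")) {
      const match = part.match(/^(\d+)?([a-z]*)$/);
      if (!match) return { error: `Unrecognized fragment: "${part}"` }; // natural-language fallback not handled here
      const [, num, letters] = match;
      if (num) currentQuestion = Number(num);
      if (currentQuestion === null) return { error: `Option "${part}" has no question number` };
      const valid = questions[currentQuestion];
      if (!valid) return { error: `Unknown question ${currentQuestion}` };
      for (const letter of letters) {
        if (!valid.includes(letter)) return { error: `Question ${currentQuestion} has no option "${letter}"` };
        (selections[currentQuestion] ||= []).push(letter);
      }
    }
  }
  return { selections };
}

// Both "2acd" and "2a 2c 2d" yield { 2: ["a", "c", "d"] }
console.log(parseAnswers("1a 2acd", { 1: ["a", "b"], 2: ["a", "b", "c", "d"] }));
```
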
|
||||
|
||||
### Dynamic Discussion Point Generation
|
||||
### Batching Strategy
|
||||
|
||||
**For each selected role, generate**:
|
||||
1. **2-3 core discussion areas** specific to that role's perspective
|
||||
2. **3-5 targeted questions** per discussion area
|
||||
3. **Cross-role integration points** showing how roles interact
|
||||
**Batch limits**:
|
||||
- **Default**: Maximum 10 questions per round
|
||||
- **Phase 2 (role selection)**: Display all recommended roles at once (count+2 roles)
|
||||
- **Auto-split**: If questions > 10, split into multiple rounds with clear round indicators
|
||||
|
||||
**Example mapping**:
|
||||
**Round indicators**:
|
||||
```markdown
|
||||
===== 第 1 轮问题 (共2轮) =====
|
||||
【问题1 - ...】...
|
||||
【问题2 - ...】...
|
||||
...
|
||||
【问题10 - ...】...
|
||||
|
||||
请回答 (格式: 1a 2b ... 10c):
|
||||
```
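
The auto-split rule above can be expressed directly. A minimal sketch, assuming the questions already exist as an array; the 10-per-round cap comes from the batch limits, and the helper name is illustrative:

```javascript
// Split generated questions into rounds of at most 10 and attach the round indicator.
function splitIntoRounds(questions, maxPerRound = 10) {
  const rounds = [];
  for (let i = 0; i < questions.length; i += maxPerRound) {
    rounds.push(questions.slice(i, i + maxPerRound));
  }
  return rounds.map((items, index) => ({
    header: `===== 第 ${index + 1} 轮问题 (共${rounds.length}轮) =====`,
    items,
  }));
}

// 14 questions → round 1 holds 10, round 2 holds the remaining 4
console.log(splitIntoRounds(Array.from({ length: 14 }, (_, i) => `Q${i + 1}`)).map(r => r.items.length));
```
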
|
||||
|
||||
### Interaction Flow
|
||||
|
||||
**Standard flow**:
|
||||
1. Output questions in formatted text
|
||||
2. Output expected input format example
|
||||
3. Wait for user input
|
||||
4. Parse input with intelligent matching
|
||||
5. If parsing succeeds → Store answers and continue
|
||||
6. If parsing fails → Show error, example, and wait for re-input
|
||||
|
||||
**No question/option limits**: Text-based interaction removes previous 4-question and 4-option restrictions
|
||||
|
||||
## Execution Phases
|
||||
|
||||
### Session Management
|
||||
- Check `.workflow/.active-*` markers first
|
||||
- Multiple sessions → Prompt selection | Single → Use it | None → Create `WFS-[topic-slug]`
|
||||
- Parse `--count N` parameter from user input (default: 3 if not specified)
|
||||
- Store decisions in `workflow-session.json` including count parameter
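
A minimal sketch of the `--count` handling described above, assuming the raw argument string is available as plain text; clamping to 9 mirrors the number of available roles and is an assumption of this sketch:

```javascript
// Extract the --count N flag from the command arguments; fall back to 3.
function parseCount(args, defaultCount = 3, maxCount = 9) {
  const match = args.match(/--count\s+(\d+)/);
  if (!match) return defaultCount;
  return Math.min(Math.max(Number(match[1]), 1), maxCount);
}

console.log(parseCount('"GOAL: realtime editor SCOPE: web CONTEXT: MVP" --count 4')); // 4
console.log(parseCount('"Improve onboarding"'));                                      // 3
```
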
|
||||
|
||||
### Phase 0: Automatic Project Context Collection
|
||||
|
||||
**Goal**: Gather project architecture, documentation, and relevant code context BEFORE user interaction
|
||||
|
||||
**Detection Mechanism** (execute first):
|
||||
```javascript
|
||||
// If roles = ["ui-designer", "system-architect"]
|
||||
Generate:
|
||||
- UI Designer section: UI Requirements, UX Challenges
|
||||
- System Architect section: Architecture Decisions, Technical Implementation
|
||||
- Integration Points: UI↔Architecture dependencies
|
||||
// Check if context-package already exists
|
||||
const contextPackagePath = `.workflow/WFS-{session-id}/.process/context-package.json`;
|
||||
|
||||
if (file_exists(contextPackagePath)) {
|
||||
// Validate package
|
||||
const package = Read(contextPackagePath);
|
||||
if (package.metadata.session_id === session_id) {
|
||||
console.log("✅ Valid context-package found, skipping Phase 0");
|
||||
return; // Skip to Phase 1
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Framework Generation Examples
|
||||
**Implementation**: Invoke `context-search-agent` only if package doesn't exist
|
||||
|
||||
#### Example 1: Architecture-Heavy Topic
|
||||
```bash
|
||||
/workflow:brainstorm:artifacts "Design scalable microservices platform" --roles "system-architect,data-architect,subject-matter-expert"
|
||||
```javascript
|
||||
Task(
|
||||
subagent_type="context-search-agent",
|
||||
description="Gather project context for brainstorm",
|
||||
prompt=`
|
||||
You are executing as context-search-agent (.claude/agents/context-search-agent.md).
|
||||
|
||||
## Execution Mode
|
||||
**BRAINSTORM MODE** (Lightweight) - Phase 1-2 only (skip deep analysis)
|
||||
|
||||
## Session Information
|
||||
- **Session ID**: ${session_id}
|
||||
- **Task Description**: ${task_description}
|
||||
- **Output Path**: .workflow/${session_id}/.process/context-package.json
|
||||
|
||||
## Mission
|
||||
Execute complete context-search-agent workflow for implementation planning:
|
||||
|
||||
### Phase 1: Initialization & Pre-Analysis
|
||||
1. **Detection**: Check for existing context-package (early exit if valid)
|
||||
2. **Foundation**: Initialize code-index, get project structure, load docs
|
||||
3. **Analysis**: Extract keywords, determine scope, classify complexity
|
||||
|
||||
### Phase 2: Multi-Source Context Discovery
|
||||
Execute all 3 discovery tracks:
|
||||
- **Track 1**: Reference documentation (CLAUDE.md, architecture docs)
|
||||
- **Track 2**: Web examples (use Exa MCP for unfamiliar tech/APIs)
|
||||
- **Track 3**: Codebase analysis (5-layer discovery: files, content, patterns, deps, config/tests)
|
||||
|
||||
### Phase 3: Synthesis, Assessment & Packaging
|
||||
1. Apply relevance scoring and build dependency graph
|
||||
2. Synthesize 3-source data (docs > code > web)
|
||||
3. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
|
||||
4. Perform conflict detection with risk assessment
|
||||
5. Generate and validate context-package.json
|
||||
|
||||
## Output Requirements
|
||||
Complete context-package.json with:
|
||||
- **metadata**: task_description, keywords, complexity, tech_stack, session_id
|
||||
- **project_context**: architecture_patterns, coding_conventions, tech_stack
|
||||
- **assets**: {documentation[], source_code[], config[], tests[]} with relevance scores
|
||||
- **dependencies**: {internal[], external[]} with dependency graph
|
||||
- **brainstorm_artifacts**: {guidance_specification, role_analyses[], synthesis_output} with content
|
||||
- **conflict_detection**: {risk_level, risk_factors, affected_modules[], mitigation_strategy}
|
||||
|
||||
## Quality Validation
|
||||
Before completion verify:
|
||||
- [ ] Valid JSON format with all required fields
|
||||
- [ ] File relevance accuracy >80%
|
||||
- [ ] Dependency graph complete (max 2 transitive levels)
|
||||
- [ ] Conflict risk level calculated correctly
|
||||
- [ ] No sensitive data exposed
|
||||
- [ ] Total files ≤50 (prioritize high-relevance)
|
||||
|
||||
Execute autonomously following agent documentation.
|
||||
Report completion with statistics.
|
||||
`
|
||||
)
|
||||
```
|
||||
**Generated framework focuses on**:
|
||||
- Service architecture and communication patterns
|
||||
- Data flow and storage strategies
|
||||
- Domain standards and best practices
|
||||
|
||||
#### Example 2: User-Focused Topic
|
||||
```bash
|
||||
/workflow:brainstorm:artifacts "Improve user onboarding experience" --roles "ui-designer,ux-expert,product-manager"
|
||||
**Graceful Degradation**:
|
||||
- If agent fails: Log warning, continue to Phase 1 without project context
|
||||
- If package invalid: Re-run context-search-agent
|
||||
|
||||
### Phase 1: Topic Analysis & Intent Classification
|
||||
|
||||
**Goal**: Extract keywords/challenges to drive all subsequent question generation, **enriched by Phase 0 project context**
|
||||
|
||||
**Steps**:
|
||||
1. **Load Phase 0 context** (if available):
|
||||
- Read `.workflow/WFS-{session-id}/.process/context-package.json`
|
||||
- Extract: tech_stack, existing modules, conflict_risk, relevant files
|
||||
|
||||
2. **Deep topic analysis** (context-aware):
|
||||
- Extract technical entities from topic + existing codebase
|
||||
- Identify core challenges considering existing architecture
|
||||
- Consider constraints (timeline/budget/compliance)
|
||||
- Define success metrics based on current project state
|
||||
|
||||
3. **Generate 2-4 context-aware probing questions**:
|
||||
- Reference existing tech stack in questions
|
||||
- Consider integration with existing modules
|
||||
- Address identified conflict risks from Phase 0
|
||||
- Target root challenges and trade-off priorities
|
||||
|
||||
4. **User interaction**: Output questions using text format (see User Interaction Protocol), wait for user input
|
||||
|
||||
5. **Parse user answers**: Use intelligent parsing to extract answers from user input (support multiple formats)
|
||||
|
||||
6. **Storage**: Store answers to `session.intent_context` with `{extracted_keywords, identified_challenges, user_answers, project_context_used}`
|
||||
|
||||
**Example Output**:
|
||||
```markdown
|
||||
===== Phase 1: 项目意图分析 =====
|
||||
|
||||
【问题1 - 核心挑战】实时协作平台的主要技术挑战?
|
||||
a) 实时数据同步
|
||||
说明:100+用户同时在线,状态同步复杂度高
|
||||
b) 可扩展性架构
|
||||
说明:用户规模增长时的系统扩展能力
|
||||
c) 冲突解决机制
|
||||
说明:多用户同时编辑的冲突处理策略
|
||||
|
||||
【问题2 - 优先级】MVP阶段最关注的指标?
|
||||
a) 功能完整性
|
||||
说明:实现所有核心功能
|
||||
b) 用户体验
|
||||
说明:流畅的交互体验和响应速度
|
||||
c) 系统稳定性
|
||||
说明:高可用性和数据一致性
|
||||
|
||||
请回答 (格式: 1a 2b):
|
||||
```
|
||||
**Generated framework focuses on**:
|
||||
- Onboarding flow and UI components
|
||||
- User experience optimization and usability
|
||||
- Business value and success metrics
|
||||
|
||||
#### Example 3: Agile Delivery Topic
|
||||
```bash
|
||||
/workflow:brainstorm:artifacts "Optimize sprint delivery process" --roles "scrum-master,product-owner,test-strategist"
|
||||
**User input examples**:
|
||||
- `1a 2c` → Q1:a, Q2:c
|
||||
- `1a,2c` → Q1:a, Q2:c
|
||||
|
||||
**⚠️ CRITICAL**: Questions MUST reference topic keywords. Generic "Project type?" violates dynamic generation.
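
As a concrete illustration of step 6 above, the stored intent context might look like the following sketch; the field names come from the step definition, while the values are hypothetical:

```javascript
// Hypothetical shape of the Phase 1 result stored on the session object
const intentContext = {
  extracted_keywords: ["real-time collaboration", "100+ users", "low latency"],
  identified_challenges: ["state synchronization", "conflict resolution"],
  user_answers: { "1": "a", "2": "c" },
  project_context_used: true,
};
// Stored as session.intent_context before Phase 2 begins
```
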
|
||||
|
||||
### Phase 2: Role Selection
|
||||
|
||||
**⚠️ CRITICAL**: User MUST interact to select roles. NEVER auto-select without user confirmation.
|
||||
|
||||
**Available Roles**:
|
||||
- data-architect (数据架构师)
|
||||
- product-manager (产品经理)
|
||||
- product-owner (产品负责人)
|
||||
- scrum-master (敏捷教练)
|
||||
- subject-matter-expert (领域专家)
|
||||
- system-architect (系统架构师)
|
||||
- test-strategist (测试策略师)
|
||||
- ui-designer (UI 设计师)
|
||||
- ux-expert (UX 专家)
|
||||
|
||||
**Steps**:
|
||||
1. **Intelligent role recommendation** (AI analysis):
|
||||
- Analyze Phase 1 extracted keywords and challenges
|
||||
- Use AI reasoning to determine most relevant roles for the specific topic
|
||||
- Recommend count+2 roles (e.g., if user wants 3 roles, recommend 5 options)
|
||||
- Provide clear rationale for each recommended role based on topic context
|
||||
|
||||
2. **User selection** (text interaction):
|
||||
- Output all recommended roles at once (no batching needed for count+2 roles)
|
||||
- Display roles with labels and relevance rationale
|
||||
- Wait for user input in multi-select format
|
||||
- Parse user input (support multiple formats)
|
||||
- **Storage**: Store selections to `session.selected_roles`
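
A minimal sketch of mapping the parsed multi-select letters back to role names, assuming the display order of the a) b) c) options from the recommendation output is preserved; the helper name is illustrative:

```javascript
// Map the user's multi-select letters back to the displayed role options.
function resolveSelectedRoles(selectedLetters, displayedRoles) {
  // displayedRoles keeps the a), b), c), ... order shown in the Phase 2 output
  return selectedLetters.map(letter => displayedRoles[letter.charCodeAt(0) - "a".charCodeAt(0)]);
}

const displayed = ["system-architect", "ui-designer", "product-manager", "data-architect", "ux-expert"];
console.log(resolveSelectedRoles(["a", "c", "d"], displayed));
// ["system-architect", "product-manager", "data-architect"]
```
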
|
||||
|
||||
**Example Output**:
|
||||
```markdown
|
||||
===== Phase 2: 角色选择 =====
|
||||
|
||||
【角色选择】请选择 3 个角色参与头脑风暴分析
|
||||
|
||||
a) system-architect (系统架构师)
|
||||
推荐理由:实时同步架构设计和技术选型的核心角色
|
||||
b) ui-designer (UI设计师)
|
||||
推荐理由:协作界面用户体验和实时状态展示
|
||||
c) product-manager (产品经理)
|
||||
推荐理由:功能优先级和MVP范围决策
|
||||
d) data-architect (数据架构师)
|
||||
推荐理由:数据同步模型和存储方案设计
|
||||
e) ux-expert (UX专家)
|
||||
推荐理由:多用户协作交互流程优化
|
||||
|
||||
支持格式:
|
||||
- 分别选择:2a 2c 2d (选择a、c、d)
|
||||
- 合并语法:2acd (选择a、c、d)
|
||||
- 逗号分隔:2a,c,d (选择a、c、d)
|
||||
|
||||
请输入选择:
|
||||
```
|
||||
**Generated framework focuses on**:
|
||||
- Sprint planning and team collaboration
|
||||
- Backlog management and prioritization
|
||||
- Quality assurance and testing strategies
|
||||
|
||||
#### Example 4: Comprehensive Analysis
|
||||
```bash
|
||||
/workflow:brainstorm:artifacts "Build real-time collaboration feature"
|
||||
**User input examples**:
|
||||
- `2acd` → Roles: a, c, d (system-architect, product-manager, data-architect)
|
||||
- `2a 2c 2d` → Same result
|
||||
- `2a,c,d` → Same result
|
||||
|
||||
**Role Recommendation Rules**:
|
||||
- NO hardcoded keyword-to-role mappings
|
||||
- Use intelligent analysis of topic, challenges, and requirements
|
||||
- Consider role synergies and coverage gaps
|
||||
- Explain WHY each role is relevant to THIS specific topic
|
||||
- Default recommendation: count+2 roles for user to choose from
|
||||
|
||||
### Phase 3: Role-Specific Questions (Dynamic Generation)
|
||||
|
||||
**Goal**: Generate deep questions mapping role expertise to Phase 1 challenges
|
||||
|
||||
**Algorithm**:
|
||||
```
|
||||
**Generated framework covers** all aspects (no roles specified)
|
||||
FOR each selected role:
|
||||
1. Map Phase 1 challenges to role domain:
|
||||
- "real-time sync" + system-architect → State management pattern
|
||||
- "100 users" + system-architect → Communication protocol
|
||||
- "low latency" + system-architect → Conflict resolution
|
||||
|
||||
## Session Management ⚠️ CRITICAL
|
||||
- **⚡ FIRST ACTION**: Check for all `.workflow/.active-*` markers before processing
|
||||
- **Multiple sessions support**: Different Claude instances can have different active sessions
|
||||
- **User selection**: If multiple active sessions found, prompt user to select which one to work with
|
||||
- **Auto-session creation**: `WFS-[topic-slug]` only if no active session exists
|
||||
- **Session continuity**: MUST use selected active session for all processing
|
||||
- **Context preservation**: All discussion and analysis stored in session directory
|
||||
- **Session isolation**: Each session maintains independent state
|
||||
2. Generate 3-4 questions per role probing implementation depth, trade-offs, edge cases:
|
||||
Q: "How handle real-time state sync for 100+ users?" (explores approach)
|
||||
Q: "How resolve conflicts when 2 users edit simultaneously?" (explores edge case)
|
||||
Options: [Event Sourcing/Centralized/CRDT] (concrete, explain trade-offs for THIS use case)
|
||||
|
||||
## Discussion Areas
|
||||
3. Output questions in text format per role:
|
||||
- Display all questions for current role (3-4 questions, no 10-question limit)
|
||||
- Questions in Chinese (用中文提问)
|
||||
- Wait for user input
|
||||
- Parse answers using intelligent parsing
|
||||
- Store answers to session.role_decisions[role]
|
||||
```
|
||||
|
||||
### Core Investigation Topics
|
||||
- **Purpose & Goals**: What are we trying to achieve?
|
||||
- **Scope & Boundaries**: What's included and excluded?
|
||||
- **Success Criteria**: How do we measure success?
|
||||
- **Constraints**: What limitations exist?
|
||||
- **Stakeholders**: Who is affected or involved?
|
||||
**Batching Strategy**:
|
||||
- Each role outputs all its questions at once (typically 3-4 questions)
|
||||
- No need to split per role (within 10-question batch limit)
|
||||
- Multiple roles processed sequentially (one role at a time for clarity)
|
||||
|
||||
### Technical Considerations
|
||||
- **Requirements**: What must the solution provide?
|
||||
- **Dependencies**: What does it rely on?
|
||||
- **Integration**: How does it connect to existing systems?
|
||||
- **Performance**: What are the speed/scale requirements?
|
||||
- **Security**: What protection is needed?
|
||||
**Output Format**: Follow standard format from "User Interaction Protocol" section (single-choice question format)
|
||||
|
||||
### Implementation Factors
|
||||
- **Timeline**: When is it needed?
|
||||
- **Resources**: What people/budget/tools are available?
|
||||
- **Risks**: What could go wrong?
|
||||
- **Alternatives**: What other approaches exist?
|
||||
- **Migration**: How do we transition from current state?
|
||||
**Example Topic-Specific Questions** (system-architect role for "real-time collaboration platform"):
|
||||
- "100+ 用户实时状态同步方案?" → Options: Event Sourcing / 集中式状态管理 / CRDT
|
||||
- "两个用户同时编辑冲突如何解决?" → Options: 自动合并 / 手动解决 / 版本控制
|
||||
- "低延迟通信协议选择?" → Options: WebSocket / SSE / 轮询
|
||||
- "系统扩展性架构方案?" → Options: 微服务 / 单体+缓存 / Serverless
|
||||
|
||||
## Update Mechanism ⚠️ SMART UPDATES
|
||||
**Quality Requirements**: See "Question Generation Guidelines" section for detailed rules
|
||||
|
||||
### Framework Update Logic
|
||||
```bash
|
||||
# Check existing framework
|
||||
IF topic-framework.md EXISTS:
|
||||
SHOW current framework to user
|
||||
ASK: "Framework exists. Do you want to:"
|
||||
OPTIONS:
|
||||
1. "Replace completely" → Generate new framework
|
||||
2. "Add discussion points" → Append to existing
|
||||
3. "Refine existing points" → Interactive editing
|
||||
4. "Cancel" → Exit without changes
|
||||
### Phase 4: Cross-Role Clarification (Conflict Detection)
|
||||
|
||||
**Goal**: Resolve ACTUAL conflicts from Phase 3 answers, not pre-defined relationships
|
||||
|
||||
**Algorithm**:
|
||||
```
|
||||
1. Analyze Phase 3 answers for conflicts:
|
||||
- Contradictory choices: product-manager "fast iteration" vs system-architect "complex Event Sourcing"
|
||||
- Missing integration: ui-designer "Optimistic updates" but system-architect didn't address conflict handling
|
||||
- Implicit dependencies: ui-designer "Live cursors" but no auth approach defined
|
||||
|
||||
2. FOR each detected conflict:
|
||||
Generate clarification questions referencing SPECIFIC Phase 3 choices
|
||||
|
||||
3. Output clarification questions in text format:
|
||||
- Batch conflicts into rounds (max 10 questions per round)
|
||||
- Display questions with context from Phase 3 answers
|
||||
- Questions in Chinese (用中文提问)
|
||||
- Wait for user input
|
||||
- Parse answers using intelligent parsing
|
||||
- Store answers to session.cross_role_decisions
|
||||
|
||||
4. If NO conflicts: Skip Phase 4 (inform user: "未检测到跨角色冲突,跳过Phase 4")
|
||||
```
|
||||
|
||||
**Batching Strategy**:
|
||||
- Maximum 10 clarification questions per round
|
||||
- If conflicts > 10, split into multiple rounds
|
||||
- Prioritize most critical conflicts first
|
||||
|
||||
**Output Format**: Follow standard format from "User Interaction Protocol" section (single-choice question format with background context)
|
||||
|
||||
**Example Conflict Detection** (from Phase 3 answers):
|
||||
- **Architecture Conflict**: "CRDT 与 UI 回滚期望冲突,如何解决?"
|
||||
- Background: system-architect chose CRDT, ui-designer expects rollback UI
|
||||
- Options: 采用 CRDT / 显示合并界面 / 切换到 OT
|
||||
- **Integration Gap**: "实时光标功能缺少身份认证方案"
|
||||
- Background: ui-designer chose live cursors, no auth defined
|
||||
- Options: OAuth 2.0 / JWT Token / Session-based
|
||||
|
||||
**Quality Requirements**: See "Question Generation Guidelines" section for conflict-specific rules
|
||||
|
||||
### Phase 5: Generate Guidance Specification
|
||||
|
||||
**Steps**:
|
||||
1. Load all decisions: `intent_context` + `selected_roles` + `role_decisions` + `cross_role_decisions`
|
||||
2. Transform Q&A pairs to declarative: Questions → Headers, Answers → CONFIRMED/SELECTED statements
|
||||
3. Generate guidance-specification.md (template below) - **PRIMARY OUTPUT FILE**
|
||||
4. Update workflow-session.json with **METADATA ONLY**:
|
||||
- session_id (e.g., "WFS-topic-slug")
|
||||
- selected_roles[] (array of role names, e.g., ["system-architect", "ui-designer", "product-manager"])
|
||||
- topic (original user input string)
|
||||
- timestamp (ISO-8601 format)
|
||||
- phase_completed: "artifacts"
|
||||
- count_parameter (number from --count flag)
|
||||
5. Validate: No interrogative sentences in .md file, all decisions traceable, no content duplication in .json
|
||||
|
||||
**⚠️ CRITICAL OUTPUT SEPARATION**:
|
||||
- **guidance-specification.md**: Full guidance content (decisions, rationale, integration points)
|
||||
- **workflow-session.json**: Session metadata ONLY (no guidance content, no decisions, no Q&A pairs)
|
||||
- **NO content duplication**: Guidance stays in .md, metadata stays in .json
|
||||
|
||||
## Output Document Template
|
||||
|
||||
**File**: `.workflow/WFS-{topic}/.brainstorming/guidance-specification.md`
|
||||
|
||||
```markdown
|
||||
# [Project] - Confirmed Guidance Specification
|
||||
|
||||
**Metadata**: [timestamp, type, focus, roles]
|
||||
|
||||
## 1. Project Positioning & Goals
|
||||
**CONFIRMED Objectives**: [from topic + Phase 1]
|
||||
**CONFIRMED Success Criteria**: [from Phase 1 answers]
|
||||
|
||||
## 2-N. [Role] Decisions
|
||||
### SELECTED Choices
|
||||
**[Question topic]**: [User's answer]
|
||||
- **Rationale**: [From option description]
|
||||
- **Impact**: [Implications]
|
||||
|
||||
### Cross-Role Considerations
|
||||
**[Conflict resolved]**: [Resolution from Phase 4]
|
||||
- **Affected Roles**: [Roles involved]
|
||||
|
||||
## Cross-Role Integration
|
||||
**CONFIRMED Integration Points**: [API/Data/Auth from multiple roles]
|
||||
|
||||
## Risks & Constraints
|
||||
**Identified Risks**: [From answers] → Mitigation: [Approach]
|
||||
|
||||
## Next Steps
|
||||
**⚠️ Automatic Continuation** (when called from auto-parallel):
|
||||
- auto-parallel will assign agents to generate role-specific analysis documents
|
||||
- Each selected role gets dedicated conceptual-planning-agent
|
||||
- Agents read this guidance-specification.md for framework context
|
||||
|
||||
## Appendix: Decision Tracking
|
||||
| Decision ID | Category | Question | Selected | Phase | Rationale |
|
||||
|-------------|----------|----------|----------|-------|-----------|
|
||||
| D-001 | Intent | [Q] | [A] | 1 | [Why] |
|
||||
| D-002 | Roles | [Selected] | [Roles] | 2 | [Why] |
|
||||
| D-003+ | [Role] | [Q] | [A] | 3 | [Why] |
|
||||
```
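
The decision-tracking appendix can be assembled mechanically from the recorded decisions. A minimal sketch, assuming each decision records its category, phase, and rationale; zero-padded IDs are an assumption of this sketch, not a requirement:

```javascript
// Build the "Appendix: Decision Tracking" rows from all recorded decisions.
function buildDecisionTable(decisions) {
  const header = "| Decision ID | Category | Question | Selected | Phase | Rationale |\n" +
                 "|-------------|----------|----------|----------|-------|-----------|";
  const rows = decisions.map((d, i) =>
    `| D-${String(i + 1).padStart(3, "0")} | ${d.category} | ${d.question} | ${d.selected} | ${d.phase} | ${d.rationale} |`
  );
  return [header, ...rows].join("\n");
}

console.log(buildDecisionTable([
  { category: "Intent", question: "核心挑战?", selected: "实时数据同步", phase: 1, rationale: "100+ users online" },
  { category: "Roles", question: "角色选择", selected: "system-architect, ui-designer", phase: 2, rationale: "Topic relevance" },
]));
```
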
|
||||
|
||||
## Question Generation Guidelines
|
||||
|
||||
### Core Principle: Developer-Facing Questions with User Context
|
||||
|
||||
**Target Audience**: 开发者(理解技术但需要从用户需求出发)
|
||||
|
||||
**Generation Philosophy**:
|
||||
1. **Phase 1**: 用户场景、业务约束、优先级(建立上下文)
|
||||
2. **Phase 2**: 基于话题分析的智能角色推荐(非关键词映射)
|
||||
3. **Phase 3**: 业务需求 + 技术选型(需求驱动的技术决策)
|
||||
4. **Phase 4**: 技术冲突的业务权衡(帮助开发者理解影响)
|
||||
|
||||
### Universal Quality Rules
|
||||
|
||||
**Question Structure** (all phases):
|
||||
```
|
||||
[业务场景/需求前提] + [技术关注点]
|
||||
```
|
||||
|
||||
**Option Structure** (all phases):
|
||||
```
|
||||
标签:[技术方案简称] + (业务特征)
|
||||
说明:[业务影响] + [技术权衡]
|
||||
```
|
||||
|
||||
**MUST Include** (all phases):
|
||||
- ✅ All questions in Chinese (用中文提问)
|
||||
- ✅ 业务场景作为问题前提
|
||||
- ✅ 技术选项的业务影响说明
|
||||
- ✅ 量化指标和约束条件
|
||||
|
||||
**MUST Avoid** (all phases):
|
||||
- ❌ 纯技术选型无业务上下文
|
||||
- ❌ 过度抽象的用户体验问题
|
||||
- ❌ 脱离话题的通用架构问题
|
||||
|
||||
### Phase-Specific Requirements
|
||||
|
||||
**Phase 1 Requirements**:
|
||||
- Questions MUST reference topic keywords (NOT generic "Project type?")
|
||||
- Focus: 用户使用场景(谁用?怎么用?多频繁?)、业务约束(预算、时间、团队、合规)
|
||||
- Success metrics: 性能指标、用户体验目标
|
||||
- Priority ranking: MVP vs 长期规划
|
||||
|
||||
**Phase 3 Requirements**:
|
||||
- Questions MUST reference Phase 1 keywords (e.g., "real-time", "100 users")
|
||||
- Options MUST be concrete approaches with relevance to topic
|
||||
- Each option includes trade-offs specific to this use case
|
||||
- Include 业务需求驱动的技术问题、量化指标(并发数、延迟、可用性)
|
||||
|
||||
**Phase 4 Requirements**:
|
||||
- Questions MUST reference SPECIFIC Phase 3 choices in background context
|
||||
- Options address the detected conflict directly
|
||||
- Each option explains impact on both conflicting roles
|
||||
- NEVER use static "Cross-Role Matrix" - ALWAYS analyze actual Phase 3 answers
|
||||
- Focus: 技术冲突的业务权衡、帮助开发者理解不同选择的影响
|
||||
|
||||
## Validation Checklist
|
||||
|
||||
Generated guidance-specification.md MUST:
|
||||
- ✅ No interrogative sentences (use CONFIRMED/SELECTED)
|
||||
- ✅ Every decision traceable to user answer
|
||||
- ✅ Cross-role conflicts resolved or documented
|
||||
- ✅ Next steps concrete and specific
|
||||
- ✅ All Phase 1-4 decisions in session metadata
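
A minimal sketch of the first checklist item, assuming the generated markdown is available as a string; it only flags lines ending in a question mark, which is a simplification of the full checklist:

```javascript
// Flag interrogative sentences in guidance-specification.md content.
function findInterrogatives(markdown) {
  return markdown
    .split("\n")
    .map((line, index) => ({ line, number: index + 1 }))
    .filter(({ line }) => /[?？]\s*$/.test(line.trim()));
}

const issues = findInterrogatives("**CONFIRMED Objectives**: Build MVP\nWhich protocol should we use?");
console.log(issues); // [{ line: "Which protocol should we use?", number: 2 }]
```
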
|
||||
|
||||
## Update Mechanism
|
||||
|
||||
```
|
||||
IF guidance-specification.md EXISTS:
|
||||
Prompt: "Regenerate completely / Update sections / Cancel"
|
||||
ELSE:
|
||||
CREATE new framework
|
||||
Run full Phase 1-5 flow
|
||||
```
|
||||
|
||||
### Update Strategies
|
||||
## Governance Rules
|
||||
|
||||
**1. Complete Replacement**
|
||||
- Backup existing framework as `topic-framework-[timestamp].md.backup`
|
||||
- Generate completely new framework
|
||||
- Preserve role-specific analysis points from previous version
|
||||
**Output Requirements**:
|
||||
- All decisions MUST use CONFIRMED/SELECTED (NO "?" in decision sections)
|
||||
- Every decision MUST trace to user answer
|
||||
- Conflicts MUST be resolved (not marked "TBD")
|
||||
- Next steps MUST be actionable
|
||||
- Topic preserved as authoritative reference in session
|
||||
|
||||
**2. Incremental Addition**
|
||||
- Load existing framework
|
||||
- Identify new discussion areas through user interaction
|
||||
- Add new sections while preserving existing structure
|
||||
- Update framework usage instructions
|
||||
**CRITICAL**: Guidance is single source of truth for downstream phases. Ambiguity violates governance.
|
||||
|
||||
**3. Refinement Mode**
|
||||
- Interactive editing of existing discussion points
|
||||
- Allow modification of scope, objectives, and questions
|
||||
- Preserve framework structure and role assignments
|
||||
- Update timestamp and version info
|
||||
## Storage Validation
|
||||
|
||||
### Version Control
|
||||
- **Backup Creation**: Always backup before major changes
|
||||
- **Change Tracking**: Include change summary in framework footer
|
||||
- **Rollback Support**: Keep previous version accessible
|
||||
**workflow-session.json** (metadata only):
|
||||
```json
|
||||
{
|
||||
"session_id": "WFS-{topic-slug}",
|
||||
"type": "brainstorming",
|
||||
"topic": "{original user input}",
|
||||
"selected_roles": ["system-architect", "ui-designer", "product-manager"],
|
||||
"phase_completed": "artifacts",
|
||||
"timestamp": "2025-10-24T10:30:00Z",
|
||||
"count_parameter": 3
|
||||
}
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
- **Session creation failure**: Provide clear error message and retry option
|
||||
- **Discussion stalling**: Offer structured prompts to continue exploration
|
||||
- **Documentation issues**: Graceful handling of file creation problems
|
||||
- **Missing context**: Prompt for additional information when needed
|
||||
**⚠️ Rule**: Session JSON stores ONLY metadata (session_id, selected_roles[], topic, timestamps). All guidance content goes to guidance-specification.md.
|
||||
|
||||
## Reference Information
|
||||
## File Structure
|
||||
|
||||
### File Structure Reference
|
||||
**Architecture**: @~/.claude/workflows/workflow-architecture.md
|
||||
**Session Management**: Standard workflow session protocols
|
||||
```
|
||||
.workflow/WFS-[topic]/
|
||||
├── .active-brainstorming
|
||||
├── workflow-session.json # Session metadata ONLY
|
||||
└── .brainstorming/
|
||||
└── guidance-specification.md # Full guidance content
|
||||
```
|
||||
|
||||
### Integration Points
|
||||
- **Compatible with**: Other brainstorming commands in the same session
|
||||
- **Builds foundation for**: More detailed planning and implementation phases
|
||||
- **Outputs used by**: `/workflow:brainstorm:synthesis` command for cross-analysis integration
|
||||
|
||||
@@ -7,359 +7,334 @@ allowed-tools: SlashCommand(*), Task(*), TodoWrite(*), Read(*), Write(*), Bash(*
|
||||
|
||||
# Workflow Brainstorm Parallel Auto Command
|
||||
|
||||
## Coordinator Role
|
||||
|
||||
**This command is a pure orchestrator**: Execute 3 phases in sequence (interactive framework → parallel role analysis → synthesis), delegate to specialized commands/agents, and ensure complete execution through **automatic continuation**.
|
||||
|
||||
**Execution Model - Auto-Continue Workflow**:
|
||||
|
||||
This workflow runs **fully autonomously** once triggered: Phase 1 (artifacts) handles user interaction, and Phase 2 (role agents) runs in parallel.
|
||||
|
||||
1. **User triggers**: `/workflow:brainstorm:auto-parallel "topic" [--count N]`
|
||||
2. **Phase 1 executes** → artifacts command (interactive framework) → Auto-continues
|
||||
3. **Phase 2 executes** → Parallel role agents (N agents run concurrently) → Auto-continues
|
||||
4. **Phase 3 executes** → Synthesis command → Reports final summary
|
||||
|
||||
**Auto-Continue Mechanism**:
|
||||
- TodoList tracks current phase status
|
||||
- After Phase 1 (artifacts) completion, automatically load roles and launch Phase 2 agents
|
||||
- After Phase 2 (all agents) completion, automatically execute Phase 3 synthesis
|
||||
- Progress updates shown at each phase for visibility
|
||||
|
||||
## Core Rules
|
||||
|
||||
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 command execution
|
||||
2. **No Preliminary Analysis**: Do not analyze topic before Phase 1 - artifacts handles all analysis
|
||||
3. **Parse Every Output**: Extract selected_roles from workflow-session.json after Phase 1
|
||||
4. **Auto-Continue via TodoList**: Check TodoList status to execute next pending phase automatically
|
||||
5. **Track Progress**: Update TodoWrite after every phase completion
|
||||
6. **TodoWrite Extension**: artifacts command EXTENDS parent TodoList (NOT replaces)
|
||||
|
||||
## Usage
|
||||
|
||||
```bash
|
||||
/workflow:brainstorm:auto-parallel "<topic>" [--count N]
|
||||
```
|
||||
|
||||
**Parameters**:
|
||||
- `topic` (required): Topic or challenge description
|
||||
- `--count N` (optional): Number of roles to auto-select (default: 3, max: 9)
|
||||
|
||||
## Role Selection Logic
|
||||
- **Technical & Architecture**: `architecture|system|performance|database|security` → system-architect, data-architect, security-expert, subject-matter-expert
|
||||
- **API & Backend**: `api|endpoint|rest|graphql|backend|interface|contract|service` → api-designer, system-architect, data-architect
|
||||
- **Product & UX**: `user|ui|ux|interface|design|product|feature|experience` → ui-designer, user-researcher, product-manager, ux-expert, product-owner
|
||||
- **Business & Process**: `business|process|workflow|cost|innovation|testing` → business-analyst, innovation-lead, test-strategist
|
||||
- **Agile & Delivery**: `agile|sprint|scrum|team|collaboration|delivery` → scrum-master, product-owner
|
||||
- **Domain Expertise**: `domain|standard|compliance|expertise|regulation` → subject-matter-expert
|
||||
- **Multi-role**: Complex topics automatically select N complementary roles (N specified by --count, default 3)
|
||||
- **Default**: `product-manager` if no clear match
|
||||
- **Count Parameter**: `--count N` determines number of roles to auto-select (default: 3, max: 9)
|
||||
|
||||
**Template Loading**: `bash($(cat "~/.claude/workflows/cli-templates/planning-roles/<role-name>.md"))`
|
||||
**Template Source**: `.claude/workflows/cli-templates/planning-roles/`
|
||||
**Available Roles**: api-designer, data-architect, product-manager, product-owner, scrum-master, subject-matter-expert, system-architect, test-strategist, ui-designer, ux-expert
|
||||
|
||||
**Example**:
|
||||
**Recommended Structured Format**:
|
||||
```bash
|
||||
bash($(cat "~/.claude/workflows/cli-templates/planning-roles/system-architect.md"))
|
||||
bash($(cat "~/.claude/workflows/cli-templates/planning-roles/ui-designer.md"))
|
||||
/workflow:brainstorm:auto-parallel "GOAL: [objective] SCOPE: [boundaries] CONTEXT: [background]" [--count N]
|
||||
```
|
||||
|
||||
## Core Workflow
|
||||
**Parameters**:
|
||||
- `topic` (required): Topic or challenge description (structured format recommended)
|
||||
- `--count N` (optional): Number of roles to select (default: 3, max: 9)
|
||||
|
||||
### Structured Topic Processing → Role Analysis → Synthesis
|
||||
The command follows a structured three-phase approach with dedicated document types:
|
||||
## 3-Phase Execution
|
||||
|
||||
**Phase 1: Framework Generation** ⚠️ COMMAND EXECUTION
|
||||
- **Role selection**: Auto-select N roles based on topic keywords and --count parameter (default: 3, see Role Selection Logic)
|
||||
- **Call artifacts command**: Execute `/workflow:brainstorm:artifacts "{topic}" --roles "{role1,role2,...,roleN}"` using SlashCommand tool
|
||||
- **Role-specific framework**: Generate framework with sections tailored to selected roles
|
||||
### Phase 1: Interactive Framework Generation
|
||||
|
||||
**Phase 2: Role Analysis Execution** ⚠️ PARALLEL AGENT ANALYSIS
|
||||
- **Parallel execution**: Multiple roles execute simultaneously for faster completion
|
||||
- **Independent agents**: Each role gets dedicated conceptual-planning-agent running in parallel
|
||||
- **Shared framework**: All roles reference the same topic framework for consistency
|
||||
- **Concurrent generation**: Role-specific analysis documents generated simultaneously
|
||||
- **Progress tracking**: Parallel agents update progress independently
|
||||
**Command**: `SlashCommand(command="/workflow:brainstorm:artifacts \"{topic}\" --count {N}")`
|
||||
|
||||
**Phase 3: Synthesis Generation** ⚠️ COMMAND EXECUTION
|
||||
- **Call synthesis command**: Execute `/workflow:brainstorm:synthesis` using SlashCommand tool
|
||||
**What It Does**:
|
||||
- Topic analysis: Extract challenges, generate task-specific questions
|
||||
- Role selection: Recommend count+2 roles, user selects via AskUserQuestion
|
||||
- Role questions: Generate 3-4 questions per role, collect user decisions
|
||||
- Conflict resolution: Detect and resolve cross-role conflicts
|
||||
- Guidance generation: Transform Q&A to declarative guidance-specification.md
|
||||
|
||||
## Implementation Standards
|
||||
**Parse Output**:
|
||||
- **⚠️ Memory Check**: If `selected_roles[]` already in conversation memory from previous load, skip file read
|
||||
- Extract: `selected_roles[]` from workflow-session.json (if not in memory)
|
||||
- Extract: `session_id` from workflow-session.json (if not in memory)
|
||||
- Verify: guidance-specification.md exists
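
A minimal sketch of this parse step, assuming the session file has already been read into a string by the Read tool, so only the extraction itself is shown:

```javascript
// Extract the Phase 1 outputs needed to launch Phase 2 role agents.
function extractPhaseOneOutputs(workflowSessionJson) {
  const session = JSON.parse(workflowSessionJson);
  return {
    sessionId: session.session_id,
    selectedRoles: session.selected_roles ?? [],
  };
}

const outputs = extractPhaseOneOutputs(
  '{"session_id":"WFS-realtime-editor","selected_roles":["system-architect","ui-designer","product-manager"]}'
);
console.log(outputs.selectedRoles.length); // 3
```
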
|
||||
|
||||
### Simplified Command Orchestration ⚠️ STREAMLINED
|
||||
Auto command coordinates independent specialized commands:
|
||||
**Validation**:
|
||||
- guidance-specification.md created with confirmed decisions
|
||||
- workflow-session.json contains selected_roles[] (metadata only, no content duplication)
|
||||
- Session directory `.workflow/WFS-{topic}/.brainstorming/` exists
|
||||
|
||||
**Command Sequence**:
|
||||
1. **Role Selection**: Auto-select N relevant roles based on topic keywords and --count parameter (default: 3)
|
||||
2. **Generate Role-Specific Framework**: Use SlashCommand to execute `/workflow:brainstorm:artifacts "{topic}" --roles "{role1,role2,...,roleN}"`
|
||||
3. **Parallel Role Analysis**: Execute selected role agents in parallel, each reading their specific framework section
|
||||
4. **Generate Synthesis**: Use SlashCommand to execute `/workflow:brainstorm:synthesis`
|
||||
**TodoWrite**: Mark phase 1 completed, phase 2 in_progress
|
||||
|
||||
**SlashCommand Integration**:
|
||||
1. **artifacts command**: Called via SlashCommand tool with `--roles` parameter for role-specific framework generation
|
||||
2. **role agents**: Each agent reads its dedicated section in the role-specific framework
|
||||
3. **synthesis command**: Called via SlashCommand tool for final integration with role-targeted insights
|
||||
4. **Command coordination**: SlashCommand handles execution and validation
|
||||
**After Phase 1**: Auto-continue to Phase 2 (role agent assignment)
|
||||
|
||||
**Role Selection Logic**:
|
||||
- **Technical**: `architecture|system|performance|database` → system-architect, data-architect, subject-matter-expert
|
||||
- **API & Backend**: `api|endpoint|rest|graphql|backend|interface|contract|service` → api-designer, system-architect, data-architect
|
||||
- **Product & UX**: `user|ui|ux|interface|design|product|feature|experience` → ui-designer, ux-expert, product-manager, product-owner
|
||||
- **Agile & Delivery**: `agile|sprint|scrum|team|collaboration|delivery` → scrum-master, product-owner
|
||||
- **Domain Expertise**: `domain|standard|compliance|expertise|regulation` → subject-matter-expert
|
||||
- **Auto-select**: N most relevant roles based on topic analysis (N from --count parameter, default: 3)
|
||||
**⚠️ TodoWrite Coordination**: artifacts EXTENDS parent TodoList by:
|
||||
- Marking parent task "Execute artifacts..." as in_progress
|
||||
- APPENDING artifacts sub-tasks (Phase 1-5) after parent task
|
||||
- PRESERVING all other auto-parallel tasks (role agents, synthesis)
|
||||
- When artifacts Phase 5 completes, marking parent task as completed
|
||||
|
||||
### Parameter Parsing
|
||||
---
|
||||
|
||||
**Count Parameter Handling**:
|
||||
### Phase 2: Parallel Role Analysis Execution
|
||||
|
||||
**For Each Selected Role**:
|
||||
```bash
|
||||
# Parse --count parameter from user input
|
||||
Task(conceptual-planning-agent): "
|
||||
[FLOW_CONTROL]
|
||||
|
||||
Execute {role-name} analysis for existing topic framework
|
||||
|
||||
## Context Loading
|
||||
ASSIGNED_ROLE: {role-name}
|
||||
OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/{role}/
|
||||
TOPIC: {user-provided-topic}
|
||||
|
||||
## Flow Control Steps
|
||||
1. **load_topic_framework**
|
||||
- Action: Load structured topic discussion framework
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Output: topic_framework_content
|
||||
|
||||
2. **load_role_template**
|
||||
- Action: Load {role-name} planning template
|
||||
- Command: Read(~/.claude/workflows/cli-templates/planning-roles/{role}.md)
|
||||
- Output: role_template_guidelines
|
||||
|
||||
3. **load_session_metadata**
|
||||
- Action: Load session metadata and original user intent
|
||||
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
|
||||
- Output: session_context (contains original user prompt as PRIMARY reference)
|
||||
|
||||
## Analysis Requirements
|
||||
**Primary Reference**: Original user prompt from workflow-session.json is authoritative
|
||||
**Framework Source**: Address all discussion points in guidance-specification.md from {role-name} perspective
|
||||
**Role Focus**: {role-name} domain expertise aligned with user intent
|
||||
**Structured Approach**: Create analysis.md addressing framework discussion points
|
||||
**Template Integration**: Apply role template guidelines within framework structure
|
||||
|
||||
## Expected Deliverables
|
||||
1. **analysis.md**: Comprehensive {role-name} analysis addressing all framework discussion points
|
||||
- **File Naming**: MUST start with `analysis` prefix (e.g., `analysis.md`, `analysis-1.md`, `analysis-2.md`)
|
||||
- **FORBIDDEN**: Never use `recommendations.md` or any filename not starting with `analysis`
|
||||
- **Auto-split if large**: If content >800 lines, split to `analysis-1.md`, `analysis-2.md` (max 3 files: analysis.md, analysis-1.md, analysis-2.md)
|
||||
- **Content**: Includes both analysis AND recommendations sections within analysis files
|
||||
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
|
||||
3. **User Intent Alignment**: Validate analysis aligns with original user objectives from session_context
|
||||
|
||||
## Completion Criteria
|
||||
- Address each discussion point from guidance-specification.md with {role-name} expertise
|
||||
- Provide actionable recommendations from {role-name} perspective within analysis files
|
||||
- All output files MUST start with `analysis` prefix (no recommendations.md or other naming)
|
||||
- Reference framework document using @ notation for integration
|
||||
- Update workflow-session.json with completion status
|
||||
"
|
||||
```
|
||||
|
||||
**Parallel Execution**:
|
||||
- Launch N agents simultaneously (one message with multiple Task calls)
|
||||
- Each agent operates independently reading same guidance-specification.md
|
||||
- All agents update progress concurrently
|
||||
|
||||
**Input**:
|
||||
- `selected_roles[]` from Phase 1
|
||||
- `session_id` from Phase 1
|
||||
- guidance-specification.md path
|
||||
|
||||
**Validation**:
|
||||
- Each role creates `.workflow/WFS-{topic}/.brainstorming/{role}/analysis.md` (primary file)
|
||||
- If content is large (>800 lines), may split to `analysis-1.md`, `analysis-2.md` (max 3 files total)
|
||||
- **File naming pattern**: ALL files MUST start with `analysis` prefix (use `analysis*.md` for globbing)
|
||||
- **FORBIDDEN naming**: No `recommendations.md`, `recommendations-*.md`, or any non-`analysis` prefixed files
|
||||
- All N role analyses completed
|
||||
|
||||
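A sketch of how these naming rules could be verified for one role directory (illustrative; assumes Node.js):

```javascript
// Illustrative check of the analysis*.md naming rules for a single role directory.
const fs = require("fs");
const path = require("path");

function checkRoleOutput(sessionId, role) {
  const dir = path.join(".workflow", sessionId, ".brainstorming", role);
  if (!fs.existsSync(dir)) return { ok: false, reason: "role directory missing" };

  const mdFiles = fs.readdirSync(dir).filter((f) => f.endsWith(".md"));
  const analysisFiles = mdFiles.filter((f) => f.startsWith("analysis"));
  const forbidden = mdFiles.filter((f) => !f.startsWith("analysis"));

  if (analysisFiles.length === 0) return { ok: false, reason: "no analysis*.md file" };
  if (analysisFiles.length > 3) return { ok: false, reason: "more than 3 analysis files" };
  if (forbidden.length > 0) return { ok: false, reason: `forbidden file names: ${forbidden.join(", ")}` };
  return { ok: true, files: analysisFiles };
}
```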
**TodoWrite**: Mark all N role agent tasks completed, phase 3 in_progress
|
||||
|
||||
**After Phase 2**: Auto-continue to Phase 3 (synthesis)
|
||||
|
||||
---
|
||||
|
||||
### Phase 3: Synthesis Generation
|
||||
|
||||
**Command**: `SlashCommand(command="/workflow:brainstorm:synthesis --session {sessionId}")`
|
||||
|
||||
**What It Does**:
|
||||
- Load original user intent from workflow-session.json
|
||||
- Read all role analysis.md files
|
||||
- Integrate role insights into synthesis-specification.md
|
||||
- Validate alignment with user's original objectives
|
||||
|
||||
**Input**: `sessionId` from Phase 1
|
||||
|
||||
**Validation**:
|
||||
- `.workflow/WFS-{topic}/.brainstorming/synthesis-specification.md` exists
|
||||
- Synthesis references all role analyses
|
||||
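A sketch of this check (illustrative; assumes the synthesis references role analyses with `{role}/analysis`-style paths):

```javascript
// Illustrative check that the synthesis exists and references every role analysis.
const fs = require("fs");
const path = require("path");

function validateSynthesis(sessionId, roles) {
  const file = path.join(".workflow", sessionId, ".brainstorming", "synthesis-specification.md");
  if (!fs.existsSync(file)) return { ok: false, reason: "synthesis-specification.md missing" };

  const text = fs.readFileSync(file, "utf8");
  const missing = roles.filter((role) => !text.includes(`${role}/analysis`));
  return missing.length === 0
    ? { ok: true }
    : { ok: false, reason: `synthesis does not reference: ${missing.join(", ")}` };
}
```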
|
||||
**TodoWrite**: Mark phase 3 completed
|
||||
|
||||
**Return to User**:
|
||||
```
|
||||
Brainstorming complete for session: {sessionId}
|
||||
Roles analyzed: {count}
|
||||
Synthesis: .workflow/WFS-{topic}/.brainstorming/synthesis-specification.md
|
||||
|
||||
✅ Next Steps:
|
||||
1. /workflow:concept-clarify --session {sessionId} # Optional refinement
|
||||
2. /workflow:plan --session {sessionId} # Generate implementation plan
|
||||
```
|
||||
|
||||
## TodoWrite Pattern
|
||||
|
||||
```javascript
|
||||
// Initialize (before Phase 1)
|
||||
TodoWrite({todos: [
|
||||
{"content": "Parse --count parameter from user input", "status": "in_progress", "activeForm": "Parsing count parameter"},
|
||||
{"content": "Execute artifacts command for interactive framework generation", "status": "pending", "activeForm": "Executing artifacts interactive framework"},
|
||||
{"content": "Load selected_roles from workflow-session.json", "status": "pending", "activeForm": "Loading selected roles"},
|
||||
// Role agent tasks added dynamically after Phase 1 based on selected_roles count
|
||||
{"content": "Execute synthesis command for final integration", "status": "pending", "activeForm": "Executing synthesis integration"}
|
||||
]})
|
||||
|
||||
// After Phase 1 (artifacts completes, roles loaded)
|
||||
// Note: artifacts EXTENDS this list by appending its Phase 1-5 sub-tasks
|
||||
TodoWrite({todos: [
|
||||
{"content": "Parse --count parameter from user input", "status": "completed", "activeForm": "Parsing count parameter"},
|
||||
{"content": "Execute artifacts command for interactive framework generation", "status": "completed", "activeForm": "Executing artifacts interactive framework"},
|
||||
{"content": "Load selected_roles from workflow-session.json", "status": "in_progress", "activeForm": "Loading selected roles"},
|
||||
{"content": "Execute system-architect analysis [conceptual-planning-agent]", "status": "pending", "activeForm": "Executing system-architect analysis"},
|
||||
{"content": "Execute ui-designer analysis [conceptual-planning-agent]", "status": "pending", "activeForm": "Executing ui-designer analysis"},
|
||||
{"content": "Execute product-manager analysis [conceptual-planning-agent]", "status": "pending", "activeForm": "Executing product-manager analysis"},
|
||||
// ... (N role tasks based on --count parameter)
|
||||
{"content": "Execute synthesis command for final integration", "status": "pending", "activeForm": "Executing synthesis integration"}
|
||||
]})
|
||||
|
||||
// After Phase 2 (all agents launched in parallel)
|
||||
TodoWrite({todos: [
|
||||
// ... previous completed tasks
|
||||
{"content": "Load selected_roles from workflow-session.json", "status": "completed", "activeForm": "Loading selected roles"},
|
||||
{"content": "Execute system-architect analysis [conceptual-planning-agent]", "status": "in_progress", "activeForm": "Executing system-architect analysis"},
|
||||
{"content": "Execute ui-designer analysis [conceptual-planning-agent]", "status": "in_progress", "activeForm": "Executing ui-designer analysis"},
|
||||
{"content": "Execute product-manager analysis [conceptual-planning-agent]", "status": "in_progress", "activeForm": "Executing product-manager analysis"},
|
||||
// ... (all N agents in_progress simultaneously)
|
||||
{"content": "Execute synthesis command for final integration", "status": "pending", "activeForm": "Executing synthesis integration"}
|
||||
]})
|
||||
|
||||
// After Phase 2 (all agents complete)
|
||||
TodoWrite({todos: [
|
||||
// ... previous completed tasks
|
||||
{"content": "Execute system-architect analysis [conceptual-planning-agent]", "status": "completed", "activeForm": "Executing system-architect analysis"},
|
||||
{"content": "Execute ui-designer analysis [conceptual-planning-agent]", "status": "completed", "activeForm": "Executing ui-designer analysis"},
|
||||
{"content": "Execute product-manager analysis [conceptual-planning-agent]", "status": "completed", "activeForm": "Executing product-manager analysis"},
|
||||
{"content": "Execute synthesis command for final integration", "status": "in_progress", "activeForm": "Executing synthesis integration"}
|
||||
]})
|
||||
```
|
||||
|
||||
## Input Processing
|
||||
|
||||
**Count Parameter Parsing**:
|
||||
```javascript
|
||||
// Extract --count from user input
|
||||
IF user_input CONTAINS "--count":
|
||||
EXTRACT count_value FROM "--count N" pattern
|
||||
IF count_value > 9:
|
||||
        count_value = 9  # Cap at maximum 9 roles
    END IF
ELSE:
    count_value = 3  # Default to 3 roles
END IF
|
||||
|
||||
// Pass to artifacts command
|
||||
EXECUTE: /workflow:brainstorm:artifacts "{topic}" --count {count_value}
|
||||
```
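
The same behaviour expressed as a small JavaScript helper (a sketch; the pseudocode above remains the authoritative description):

```javascript
// Sketch of --count parsing with the same cap and default as above.
function parseCount(userInput, { defaultCount = 3, maxCount = 9 } = {}) {
  const match = /--count\s+(\d+)/.exec(userInput);
  if (!match) return defaultCount;
  return Math.min(parseInt(match[1], 10), maxCount);
}

// parseCount("Build platform --count 5")  -> 5
// parseCount("Build platform --count 20") -> 9 (capped)
// parseCount("Build platform")            -> 3 (default)
```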
|
||||
|
||||
**Role Selection with Count**:
|
||||
1. **Analyze topic keywords**: Identify relevant role categories
|
||||
2. **Rank roles by relevance**: Score based on keyword matches
|
||||
3. **Select top N roles**: Pick N most relevant roles (N = count_value)
|
||||
4. **Ensure diversity**: Balance across different expertise areas
|
||||
5. **Minimum guarantee**: Always include at least one role (default to product-manager if no matches)
|
||||
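Steps 2-5 could be sketched as follows, reusing the hypothetical `scoreRoles` helper shown earlier (actual ranking heuristics may differ):

```javascript
// Sketch: rank roles by keyword-match score and keep the top N.
// (Step 4, diversity balancing across expertise areas, is omitted here.)
function selectTopNRoles(topic, n, scoreRoles) {
  const scores = scoreRoles(topic); // Map<roleName, matchCount>
  const ranked = [...scores.entries()]
    .sort((a, b) => b[1] - a[1]) // highest score first
    .map(([role]) => role);

  // Minimum guarantee: fall back to product-manager when nothing matches.
  if (ranked.length === 0) return ["product-manager"];
  return ranked.slice(0, n);
}
```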
**Topic Structuring**:
|
||||
1. **Already Structured** → Pass directly to artifacts
|
||||
```
|
||||
User: "GOAL: Build platform SCOPE: 100 users CONTEXT: Real-time"
|
||||
→ Pass as-is to artifacts
|
||||
```
|
||||
|
||||
2. **Simple Text** → Pass directly (artifacts handles structuring)

```
User: "Build collaboration platform"
→ artifacts will analyze and structure
```

### Simplified Processing Standards

**Core Principles**:
|
||||
1. **Minimal preprocessing** - Only workflow-session.json and basic role selection
|
||||
2. **Agent autonomy** - Agents handle their own context and validation
|
||||
3. **Parallel execution** - Multiple agents can work simultaneously
|
||||
4. **Post-processing synthesis** - Integration happens after agent completion
|
||||
5. **TodoWrite control** - Progress tracking throughout all phases
|
||||
## Session Management
|
||||
|
||||
**Implementation Rules**:
|
||||
- **Role count**: N roles auto-selected based on --count parameter (default: 3, max: 9) and keyword mapping
|
||||
- **No upfront validation**: Agents handle their own context requirements
|
||||
- **Parallel execution**: Each agent operates concurrently without dependencies
|
||||
- **Synthesis at end**: Integration only after all agents complete
|
||||
**⚡ FIRST ACTION**: Check for `.workflow/.active-*` markers before Phase 1
|
||||
|
||||
**Agent Self-Management** (Agents decide their own approach):
|
||||
- **Context gathering**: Agents determine what questions to ask
|
||||
- **Template usage**: Agents load and apply their own role templates
|
||||
- **Analysis depth**: Agents determine appropriate level of detail
|
||||
- **Documentation**: Agents create their own file structure and content
|
||||
**Multiple Sessions Support**:
|
||||
- Different Claude instances can have different active brainstorming sessions
|
||||
- If multiple active sessions found, prompt user to select
|
||||
- If single active session found, use it
|
||||
- If no active session exists, create `WFS-[topic-slug]`
|
||||
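A sketch of this detection (illustrative; it assumes the `.active-*` marker sits inside each session directory, as shown in the file-structure reference later in this document):

```javascript
// Illustrative scan for sessions that carry an .active-* marker file.
const fs = require("fs");
const path = require("path");

function findActiveSessions(workflowDir = ".workflow") {
  if (!fs.existsSync(workflowDir)) return [];
  return fs
    .readdirSync(workflowDir, { withFileTypes: true })
    .filter((entry) => entry.isDirectory() && entry.name.startsWith("WFS-"))
    .filter((entry) =>
      fs.readdirSync(path.join(workflowDir, entry.name)).some((f) => f.startsWith(".active-"))
    )
    .map((entry) => entry.name);
}

// Resolution rules from the list above:
//   0 active sessions  -> create WFS-[topic-slug]
//   1 active session   -> use it
//   2+ active sessions -> prompt the user to select one
```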
|
||||
**Session Continuity**:
|
||||
- MUST use selected active session for all phases
|
||||
- Each role's context stored in session directory
|
||||
- Session isolation: Each session maintains independent state
|
||||
|
||||
## Output Structure
|
||||
|
||||
**Command Coordination Workflow**: artifacts → parallel role analysis → synthesis
|
||||
**Phase 1 Output**:
|
||||
- `.workflow/WFS-{topic}/.brainstorming/guidance-specification.md` (framework content)
|
||||
- `.workflow/WFS-{topic}/workflow-session.json` (metadata: selected_roles[], topic, timestamps)
|
||||
|
||||
**Phase 2 Output**:

- `.workflow/WFS-{topic}/.brainstorming/{role}/analysis.md` (one per role)

**Phase 3 Output**:

- `.workflow/WFS-{topic}/.brainstorming/synthesis-specification.md` (integrated analysis)

Coordinated commands generate the framework, role analyses, and synthesis documents as defined in their respective command specifications.
|
||||
|
||||
## Available Roles

- data-architect (Data Architect)
- product-manager (Product Manager)
- product-owner (Product Owner)
- scrum-master (Agile Coach)
- subject-matter-expert (Domain Expert)
- system-architect (System Architect)
- test-strategist (Test Strategist)
- ui-designer (UI Designer)
- ux-expert (UX Expert)

**Role Selection**: Handled by artifacts command (intelligent recommendation + user selection)

## Agent Prompt Templates

**⚠️ Storage Separation**: Guidance content in .md files, metadata in .json (no duplication)

### Task Agent Invocation Template

```bash
Task(subagent_type="conceptual-planning-agent",
prompt="Execute brainstorming analysis: {role-name} perspective for {topic}

## Role Assignment
**ASSIGNED_ROLE**: {role-name}
**TOPIC**: {user-provided-topic}
**OUTPUT_LOCATION**: .workflow/WFS-{topic}/.brainstorming/{role}/

## Execution Instructions
[FLOW_CONTROL]
|
||||
|
||||
### Flow Control Steps
|
||||
**AGENT RESPONSIBILITY**: Execute these pre_analysis steps sequentially with context accumulation:
|
||||
|
||||
1. **load_topic_framework**
|
||||
- Action: Load structured topic discussion framework
|
||||
- Command: bash(cat .workflow/WFS-{topic}/.brainstorming/guidance-specification.md 2>/dev/null || echo 'Guidance specification not found')
|
||||
- Output: topic_framework
|
||||
|
||||
2. **load_role_template**
|
||||
- Action: Load {role-name} planning template
|
||||
- Command: bash(cat ~/.claude/workflows/cli-templates/planning-roles/{role}.md)
|
||||
- Output: role_template
|
||||
|
||||
3. **load_session_metadata**
|
||||
- Action: Load session metadata and topic description
|
||||
- Command: bash(cat .workflow/WFS-{topic}/workflow-session.json 2>/dev/null || echo '{}')
|
||||
- Output: session_metadata
|
||||
|
||||
### Implementation Context
|
||||
**Topic Framework**: Use loaded guidance-specification.md for structured analysis
|
||||
**Role Focus**: {role-name} domain expertise and perspective
|
||||
**Analysis Type**: Address framework discussion points from role perspective
|
||||
**Template Framework**: Combine role template with topic framework structure
|
||||
**Structured Approach**: Create analysis.md addressing all topic framework points
|
||||
|
||||
### Session Context
|
||||
**Workflow Directory**: .workflow/WFS-{topic}/.brainstorming/
|
||||
**Output Directory**: .workflow/WFS-{topic}/.brainstorming/{role}/
|
||||
**Session JSON**: .workflow/WFS-{topic}/workflow-session.json
|
||||
|
||||
### Dependencies & Context
|
||||
**Topic**: {user-provided-topic}
|
||||
**Role Template**: "~/.claude/workflows/cli-templates/planning-roles/{role}.md"
|
||||
**User Requirements**: To be gathered through interactive questioning
|
||||
|
||||
## Completion Requirements
|
||||
1. Execute all flow control steps in sequence (load topic framework, role template, session metadata)
|
||||
2. **Address Topic Framework**: Respond to all discussion points in guidance-specification.md from role perspective
|
||||
3. Apply role template guidelines within topic framework structure
|
||||
4. Generate structured role analysis addressing framework points
|
||||
5. Create single comprehensive deliverable in OUTPUT_LOCATION:
|
||||
- analysis.md (structured analysis addressing all topic framework points with role-specific insights)
|
||||
6. Include framework reference: @../guidance-specification.md in analysis.md
|
||||
7. Update workflow-session.json with completion status",
|
||||
description="Execute {role-name} brainstorming analysis")
|
||||
```
|
||||
|
||||
### Parallel Role Agent Invocation Example
|
||||
```bash
|
||||
# Execute N roles in parallel using single message with multiple Task calls
|
||||
# (N determined by --count parameter, default 3, shown below with 3 roles as example)
|
||||
|
||||
Task(subagent_type="conceptual-planning-agent",
|
||||
prompt="Execute brainstorming analysis: {role-1} perspective for {topic}...",
|
||||
description="Execute {role-1} brainstorming analysis")
|
||||
|
||||
Task(subagent_type="conceptual-planning-agent",
|
||||
prompt="Execute brainstorming analysis: {role-2} perspective for {topic}...",
|
||||
description="Execute {role-2} brainstorming analysis")
|
||||
|
||||
Task(subagent_type="conceptual-planning-agent",
|
||||
prompt="Execute brainstorming analysis: {role-3} perspective for {topic}...",
|
||||
description="Execute {role-3} brainstorming analysis")
|
||||
|
||||
# ... repeat for remaining N-3 roles if --count > 3
|
||||
```
|
||||
|
||||
### Direct Synthesis Process (Command-Driven)
|
||||
**Synthesis execution**: Use SlashCommand to execute `/workflow:brainstorm:synthesis` after role completion
|
||||
|
||||
|
||||
## TodoWrite Control Flow ⚠️ CRITICAL
|
||||
|
||||
### Workflow Progress Tracking
|
||||
**MANDATORY**: Use Claude Code's built-in TodoWrite tool throughout entire brainstorming workflow:
|
||||
|
||||
```javascript
|
||||
// Phase 1: Create initial todo list for command-coordinated brainstorming workflow
|
||||
TodoWrite({
|
||||
todos: [
|
||||
{
|
||||
content: "Initialize brainstorming session and detect active sessions",
|
||||
status: "pending",
|
||||
activeForm: "Initializing brainstorming session"
|
||||
},
|
||||
{
|
||||
content: "Parse --count parameter and select N roles based on topic keyword analysis",
|
||||
status: "pending",
|
||||
activeForm: "Parsing parameters and selecting roles for brainstorming"
|
||||
},
|
||||
{
|
||||
content: "Execute artifacts command with selected roles for role-specific framework",
|
||||
status: "pending",
|
||||
activeForm: "Generating role-specific topic framework"
|
||||
},
|
||||
{
|
||||
content: "Execute [role-1] analysis [conceptual-planning-agent] [FLOW_CONTROL] addressing framework",
|
||||
status: "pending",
|
||||
activeForm: "Executing [role-1] structured framework analysis"
|
||||
},
|
||||
{
|
||||
content: "Execute [role-2] analysis [conceptual-planning-agent] [FLOW_CONTROL] addressing framework",
|
||||
status: "pending",
|
||||
activeForm: "Executing [role-2] structured framework analysis"
|
||||
},
|
||||
// ... repeat for N roles (N determined by --count parameter, default 3)
|
||||
{
|
||||
content: "Execute [role-N] analysis [conceptual-planning-agent] [FLOW_CONTROL] addressing framework",
|
||||
status: "pending",
|
||||
activeForm: "Executing [role-N] structured framework analysis"
|
||||
},
|
||||
{
|
||||
content: "Execute synthesis command using SlashCommand for final integration",
|
||||
status: "pending",
|
||||
activeForm: "Executing synthesis command for integrated analysis"
|
||||
}
|
||||
]
|
||||
});
|
||||
|
||||
// Phase 2: Update status as workflow progresses - only ONE sequential task in_progress at a time (parallel role-agent tasks in Phase 3 are the exception)
|
||||
TodoWrite({
|
||||
todos: [
|
||||
{
|
||||
content: "Initialize brainstorming session and detect active sessions",
|
||||
status: "completed", // Mark completed preprocessing
|
||||
activeForm: "Initializing brainstorming session"
|
||||
},
|
||||
{
|
||||
content: "Select roles for topic analysis and create workflow-session.json",
|
||||
status: "in_progress", // Mark current task as in_progress
|
||||
activeForm: "Selecting roles and creating session metadata"
|
||||
},
|
||||
// ... other tasks remain pending
|
||||
]
|
||||
});
|
||||
|
||||
// Phase 3: Parallel agent execution tracking (N roles, N from --count parameter)
|
||||
TodoWrite({
|
||||
todos: [
|
||||
// ... previous completed tasks
|
||||
{
|
||||
content: "Execute [role-1] analysis [conceptual-planning-agent] [FLOW_CONTROL]",
|
||||
status: "in_progress", // Executing in parallel
|
||||
activeForm: "Executing [role-1] brainstorming analysis"
|
||||
},
|
||||
{
|
||||
content: "Execute [role-2] analysis [conceptual-planning-agent] [FLOW_CONTROL]",
|
||||
status: "in_progress", // Executing in parallel
|
||||
activeForm: "Executing [role-2] brainstorming analysis"
|
||||
},
|
||||
// ... repeat for remaining N-2 roles
|
||||
{
|
||||
content: "Execute [role-N] analysis [conceptual-planning-agent] [FLOW_CONTROL]",
|
||||
status: "in_progress", // Executing in parallel
|
||||
activeForm: "Executing [role-N] brainstorming analysis"
|
||||
}
|
||||
]
|
||||
});
|
||||
```
|
||||
|
||||
**TodoWrite Integration Rules**:
|
||||
1. **Create initial todos**: All workflow phases at start
|
||||
2. **Mark in_progress**: Multiple parallel tasks can be in_progress simultaneously
|
||||
3. **Update immediately**: After each task completion
|
||||
4. **Track agent execution**: Include [agent-type] and [FLOW_CONTROL] markers for parallel agents
|
||||
5. **Final synthesis**: Mark synthesis as in_progress only after all parallel agents complete
|
||||
|
||||
|
||||
## Reference Information
|
||||
|
||||
### Structured Processing Schema
|
||||
Each role processing follows structured framework pattern:
|
||||
- **topic_framework**: Structured discussion framework document
|
||||
- **role**: Selected planning role name with framework reference
|
||||
- **agent**: Dedicated conceptual-planning-agent instance
|
||||
- **structured_analysis**: Agent addresses all framework discussion points
|
||||
- **output**: Role-specific analysis.md addressing topic framework structure
|
||||
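As a concrete (hypothetical) illustration, one role's processing record might look like this, using a hypothetical `WFS-auth` session id:

```javascript
// Hypothetical processing record for a single role; field names follow the schema above.
const roleProcessing = {
  topic_framework: ".workflow/WFS-auth/.brainstorming/guidance-specification.md",
  role: "system-architect",
  agent: "conceptual-planning-agent",
  structured_analysis: "addresses every discussion point in the framework",
  output: ".workflow/WFS-auth/.brainstorming/system-architect/analysis.md",
};
```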
**File Structure**:
|
||||
```
|
||||
.workflow/WFS-[topic]/
|
||||
├── .active-brainstorming
|
||||
├── workflow-session.json # Session metadata ONLY
|
||||
└── .brainstorming/
|
||||
├── guidance-specification.md # Framework (Phase 1)
|
||||
├── {role-1}/
|
||||
│ └── analysis.md # Role analysis (Phase 2)
|
||||
├── {role-2}/
|
||||
│ └── analysis.md
|
||||
├── {role-N}/
|
||||
│ └── analysis.md
|
||||
└── synthesis-specification.md # Integration (Phase 3)
|
||||
```
|
||||
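Given that layout, the participating roles can be discovered by scanning for role directories that contain an `analysis*.md` file (a sketch, not the synthesis command's actual implementation):

```javascript
// Sketch: discover which roles actually produced an analysis in this session.
const fs = require("fs");
const path = require("path");

function discoverParticipatingRoles(sessionId) {
  const brainstormDir = path.join(".workflow", sessionId, ".brainstorming");
  if (!fs.existsSync(brainstormDir)) return [];
  return fs
    .readdirSync(brainstormDir, { withFileTypes: true })
    .filter((entry) => entry.isDirectory())
    .filter((entry) =>
      fs.readdirSync(path.join(brainstormDir, entry.name)).some((f) => /^analysis.*\.md$/.test(f))
    )
    .map((entry) => entry.name);
}
```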
|
||||
### File Structure Reference
|
||||
**Architecture**: @~/.claude/workflows/workflow-architecture.md
|
||||
**Role Templates**: @~/.claude/workflows/cli-templates/planning-roles/
|
||||
|
||||
### Execution Integration
|
||||
Command coordination model: artifacts command → parallel role analysis → synthesis command
|
||||
|
||||
|
||||
## Error Handling
|
||||
- **Role selection failure**: Default to `product-manager` with explanation
|
||||
- **Agent execution failure**: Agent-specific retry with minimal dependencies
|
||||
- **Template loading issues**: Agent handles graceful degradation
|
||||
- **Synthesis conflicts**: Synthesis agent highlights disagreements without resolution
|
||||
|
||||
## Quality Standards
|
||||
|
||||
### Agent Autonomy Excellence
|
||||
- **Single role focus**: Each agent handles exactly one role independently
|
||||
- **Self-contained execution**: Agent manages own context, validation, and output
|
||||
- **Parallel processing**: Multiple agents can execute simultaneously
|
||||
- **Complete ownership**: Agent produces entire role-specific analysis package
|
||||
|
||||
### Minimal Coordination Excellence
|
||||
- **Lightweight handoff**: Only topic and role assignment provided
|
||||
- **Agent self-management**: Agents handle their own workflow and validation
|
||||
- **Concurrent operation**: No inter-agent dependencies enabling parallel execution
|
||||
- **Reference-based synthesis**: Post-processing integration without content duplication
|
||||
- **TodoWrite orchestration**: Progress tracking and workflow control throughout entire process
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
---
|
||||
name: data-architect
|
||||
description: Generate or update data-architect/analysis.md addressing topic-framework discussion points
|
||||
description: Generate or update data-architect/analysis.md addressing guidance-specification discussion points
|
||||
argument-hint: "optional topic - uses existing framework if available"
|
||||
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
---
|
||||
@@ -8,10 +8,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
## 📊 **Data Architect Analysis Generator**
|
||||
|
||||
### Purpose
|
||||
**Specialized command for generating data-architect/analysis.md** that addresses topic-framework.md discussion points from data architecture perspective. Creates or updates role-specific analysis with framework references.
|
||||
**Specialized command for generating data-architect/analysis.md** that addresses guidance-specification.md discussion points from data architecture perspective. Creates or updates role-specific analysis with framework references.
|
||||
|
||||
### Core Function
|
||||
- **Framework-based Analysis**: Address each discussion point in topic-framework.md
|
||||
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
|
||||
- **Data Architecture Focus**: Data models, pipelines, governance, and analytics perspective
|
||||
- **Update Mechanism**: Create new or update existing analysis.md
|
||||
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
|
||||
@@ -52,7 +52,7 @@ IF active_session EXISTS:
|
||||
session_id = get_active_session()
|
||||
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
|
||||
|
||||
CHECK: brainstorm_dir/topic-framework.md
|
||||
CHECK: brainstorm_dir/guidance-specification.md
|
||||
IF EXISTS:
|
||||
framework_mode = true
|
||||
load_framework = true
|
||||
@@ -93,7 +93,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
## Flow Control Steps
|
||||
1. **load_topic_framework**
|
||||
- Action: Load structured topic discussion framework
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/topic-framework.md)
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Output: topic_framework_content
|
||||
|
||||
2. **load_role_template**
|
||||
@@ -107,17 +107,17 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
- Output: session_context
|
||||
|
||||
## Analysis Requirements
|
||||
**Framework Reference**: Address all discussion points in topic-framework.md from data architecture perspective
|
||||
**Framework Reference**: Address all discussion points in guidance-specification.md from data architecture perspective
|
||||
**Role Focus**: Data models, pipelines, governance, analytics platforms
|
||||
**Structured Approach**: Create analysis.md addressing framework discussion points
|
||||
**Template Integration**: Apply role template guidelines within framework structure
|
||||
|
||||
## Expected Deliverables
|
||||
1. **analysis.md**: Comprehensive data architecture analysis addressing all framework discussion points
|
||||
2. **Framework Reference**: Include @../topic-framework.md reference in analysis
|
||||
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
|
||||
|
||||
## Completion Criteria
|
||||
- Address each discussion point from topic-framework.md with data architecture expertise
|
||||
- Address each discussion point from guidance-specification.md with data architecture expertise
|
||||
- Provide data model designs, pipeline architectures, and governance strategies
|
||||
- Include scalability, performance, and quality considerations
|
||||
- Reference framework document using @ notation for integration
|
||||
@@ -136,7 +136,7 @@ TodoWrite({
|
||||
activeForm: "Detecting session and framework"
|
||||
},
|
||||
{
|
||||
content: "Load topic-framework.md and session metadata for context",
|
||||
content: "Load guidance-specification.md and session metadata for context",
|
||||
status: "pending",
|
||||
activeForm: "Loading framework and session context"
|
||||
},
|
||||
@@ -164,7 +164,7 @@ TodoWrite({
|
||||
### Framework-Based Analysis
|
||||
```
|
||||
.workflow/WFS-{session}/.brainstorming/data-architect/
|
||||
└── analysis.md # Structured analysis addressing topic-framework.md discussion points
|
||||
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
|
||||
```
|
||||
|
||||
### Analysis Document Structure
|
||||
@@ -172,11 +172,11 @@ TodoWrite({
|
||||
# Data Architect Analysis: [Topic from Framework]
|
||||
|
||||
## Framework Reference
|
||||
**Topic Framework**: @../topic-framework.md
|
||||
**Topic Framework**: @../guidance-specification.md
|
||||
**Role Focus**: Data Architecture perspective
|
||||
|
||||
## Discussion Points Analysis
|
||||
[Address each point from topic-framework.md with data architecture expertise]
|
||||
[Address each point from guidance-specification.md with data architecture expertise]
|
||||
|
||||
### Core Requirements (from framework)
|
||||
[Data architecture perspective on requirements]
|
||||
@@ -209,12 +209,12 @@ TodoWrite({
|
||||
"status": "completed",
|
||||
"framework_addressed": true,
|
||||
"output_location": ".workflow/WFS-{session}/.brainstorming/data-architect/analysis.md",
|
||||
"framework_reference": "@../topic-framework.md"
|
||||
"framework_reference": "@../guidance-specification.md"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Integration Points
|
||||
- **Framework Reference**: @../topic-framework.md for structured discussion points
|
||||
- **Framework Reference**: @../guidance-specification.md for structured discussion points
|
||||
- **Cross-Role Synthesis**: Data architecture insights available for synthesis-report.md integration
|
||||
- **Agent Autonomy**: Independent execution with framework guidance
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
---
|
||||
name: product-manager
|
||||
description: Generate or update product-manager/analysis.md addressing topic-framework discussion points
|
||||
description: Generate or update product-manager/analysis.md addressing guidance-specification discussion points
|
||||
argument-hint: "optional topic - uses existing framework if available"
|
||||
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
---
|
||||
@@ -8,10 +8,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
## 🎯 **Product Manager Analysis Generator**
|
||||
|
||||
### Purpose
|
||||
**Specialized command for generating product-manager/analysis.md** that addresses topic-framework.md discussion points from product strategy perspective. Creates or updates role-specific analysis with framework references.
|
||||
**Specialized command for generating product-manager/analysis.md** that addresses guidance-specification.md discussion points from product strategy perspective. Creates or updates role-specific analysis with framework references.
|
||||
|
||||
### Core Function
|
||||
- **Framework-based Analysis**: Address each discussion point in topic-framework.md
|
||||
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
|
||||
- **Product Strategy Focus**: User needs, business value, and market positioning
|
||||
- **Update Mechanism**: Create new or update existing analysis.md
|
||||
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
|
||||
@@ -32,7 +32,7 @@ IF active_session EXISTS:
|
||||
session_id = get_active_session()
|
||||
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
|
||||
|
||||
CHECK: brainstorm_dir/topic-framework.md
|
||||
CHECK: brainstorm_dir/guidance-specification.md
|
||||
IF EXISTS:
|
||||
framework_mode = true
|
||||
load_framework = true
|
||||
@@ -73,7 +73,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
## Flow Control Steps
|
||||
1. **load_topic_framework**
|
||||
- Action: Load structured topic discussion framework
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/topic-framework.md)
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Output: topic_framework_content
|
||||
|
||||
2. **load_role_template**
|
||||
@@ -87,17 +87,17 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
- Output: session_context
|
||||
|
||||
## Analysis Requirements
|
||||
**Framework Reference**: Address all discussion points in topic-framework.md from product strategy perspective
|
||||
**Framework Reference**: Address all discussion points in guidance-specification.md from product strategy perspective
|
||||
**Role Focus**: User value, business impact, market positioning, product strategy
|
||||
**Structured Approach**: Create analysis.md addressing framework discussion points
|
||||
**Template Integration**: Apply role template guidelines within framework structure
|
||||
|
||||
## Expected Deliverables
|
||||
1. **analysis.md**: Comprehensive product strategy analysis addressing all framework discussion points
|
||||
2. **Framework Reference**: Include @../topic-framework.md reference in analysis
|
||||
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
|
||||
|
||||
## Completion Criteria
|
||||
- Address each discussion point from topic-framework.md with product management expertise
|
||||
- Address each discussion point from guidance-specification.md with product management expertise
|
||||
- Provide actionable business strategies and user value propositions
|
||||
- Include market analysis and competitive positioning insights
|
||||
- Reference framework document using @ notation for integration
|
||||
@@ -116,7 +116,7 @@ TodoWrite({
|
||||
activeForm: "Detecting session and framework"
|
||||
},
|
||||
{
|
||||
content: "Load topic-framework.md and session metadata for context",
|
||||
content: "Load guidance-specification.md and session metadata for context",
|
||||
status: "pending",
|
||||
activeForm: "Loading framework and session context"
|
||||
},
|
||||
@@ -144,7 +144,7 @@ TodoWrite({
|
||||
### Framework-Based Analysis
|
||||
```
|
||||
.workflow/WFS-{session}/.brainstorming/product-manager/
|
||||
└── analysis.md # Structured analysis addressing topic-framework.md discussion points
|
||||
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
|
||||
```
|
||||
|
||||
### Analysis Document Structure
|
||||
@@ -152,11 +152,11 @@ TodoWrite({
|
||||
# Product Manager Analysis: [Topic from Framework]
|
||||
|
||||
## Framework Reference
|
||||
**Topic Framework**: @../topic-framework.md
|
||||
**Topic Framework**: @../guidance-specification.md
|
||||
**Role Focus**: Product Strategy perspective
|
||||
|
||||
## Discussion Points Analysis
|
||||
[Address each point from topic-framework.md with product management expertise]
|
||||
[Address each point from guidance-specification.md with product management expertise]
|
||||
|
||||
### Core Requirements (from framework)
|
||||
[Product strategy perspective on user needs and requirements]
|
||||
@@ -189,12 +189,12 @@ TodoWrite({
|
||||
"status": "completed",
|
||||
"framework_addressed": true,
|
||||
"output_location": ".workflow/WFS-{session}/.brainstorming/product-manager/analysis.md",
|
||||
"framework_reference": "@../topic-framework.md"
|
||||
"framework_reference": "@../guidance-specification.md"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Integration Points
|
||||
- **Framework Reference**: @../topic-framework.md for structured discussion points
|
||||
- **Framework Reference**: @../guidance-specification.md for structured discussion points
|
||||
- **Cross-Role Synthesis**: Product strategy insights available for synthesis-report.md integration
|
||||
- **Agent Autonomy**: Independent execution with framework guidance
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
---
|
||||
name: product-owner
|
||||
description: Generate or update product-owner/analysis.md addressing topic-framework discussion points
|
||||
description: Generate or update product-owner/analysis.md addressing guidance-specification discussion points
|
||||
argument-hint: "optional topic - uses existing framework if available"
|
||||
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
---
|
||||
@@ -8,10 +8,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
## 🎯 **Product Owner Analysis Generator**
|
||||
|
||||
### Purpose
|
||||
**Specialized command for generating product-owner/analysis.md** that addresses topic-framework.md discussion points from product backlog and feature prioritization perspective. Creates or updates role-specific analysis with framework references.
|
||||
**Specialized command for generating product-owner/analysis.md** that addresses guidance-specification.md discussion points from product backlog and feature prioritization perspective. Creates or updates role-specific analysis with framework references.
|
||||
|
||||
### Core Function
|
||||
- **Framework-based Analysis**: Address each discussion point in topic-framework.md
|
||||
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
|
||||
- **Product Backlog Focus**: Feature prioritization, user stories, and acceptance criteria
|
||||
- **Update Mechanism**: Create new or update existing analysis.md
|
||||
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
|
||||
@@ -32,7 +32,7 @@ IF active_session EXISTS:
|
||||
session_id = get_active_session()
|
||||
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
|
||||
|
||||
CHECK: brainstorm_dir/topic-framework.md
|
||||
CHECK: brainstorm_dir/guidance-specification.md
|
||||
IF EXISTS:
|
||||
framework_mode = true
|
||||
load_framework = true
|
||||
@@ -73,7 +73,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
## Flow Control Steps
|
||||
1. **load_topic_framework**
|
||||
- Action: Load structured topic discussion framework
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/topic-framework.md)
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Output: topic_framework_content
|
||||
|
||||
2. **load_role_template**
|
||||
@@ -87,17 +87,17 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
- Output: session_context
|
||||
|
||||
## Analysis Requirements
|
||||
**Framework Reference**: Address all discussion points in topic-framework.md from product backlog and feature prioritization perspective
|
||||
**Framework Reference**: Address all discussion points in guidance-specification.md from product backlog and feature prioritization perspective
|
||||
**Role Focus**: Backlog management, stakeholder alignment, feature prioritization, acceptance criteria
|
||||
**Structured Approach**: Create analysis.md addressing framework discussion points
|
||||
**Template Integration**: Apply role template guidelines within framework structure
|
||||
|
||||
## Expected Deliverables
|
||||
1. **analysis.md**: Comprehensive product ownership analysis addressing all framework discussion points
|
||||
2. **Framework Reference**: Include @../topic-framework.md reference in analysis
|
||||
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
|
||||
|
||||
## Completion Criteria
|
||||
- Address each discussion point from topic-framework.md with product ownership expertise
|
||||
- Address each discussion point from guidance-specification.md with product ownership expertise
|
||||
- Provide actionable user stories and acceptance criteria definitions
|
||||
- Include feature prioritization and stakeholder alignment strategies
|
||||
- Reference framework document using @ notation for integration
|
||||
@@ -116,7 +116,7 @@ TodoWrite({
|
||||
activeForm: "Detecting session and framework"
|
||||
},
|
||||
{
|
||||
content: "Load topic-framework.md and session metadata for context",
|
||||
content: "Load guidance-specification.md and session metadata for context",
|
||||
status: "pending",
|
||||
activeForm: "Loading framework and session context"
|
||||
},
|
||||
@@ -144,7 +144,7 @@ TodoWrite({
|
||||
### Framework-Based Analysis
|
||||
```
|
||||
.workflow/WFS-{session}/.brainstorming/product-owner/
|
||||
└── analysis.md # Structured analysis addressing topic-framework.md discussion points
|
||||
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
|
||||
```
|
||||
|
||||
### Analysis Document Structure
|
||||
@@ -152,11 +152,11 @@ TodoWrite({
|
||||
# Product Owner Analysis: [Topic from Framework]
|
||||
|
||||
## Framework Reference
|
||||
**Topic Framework**: @../topic-framework.md
|
||||
**Topic Framework**: @../guidance-specification.md
|
||||
**Role Focus**: Product Backlog & Feature Prioritization perspective
|
||||
|
||||
## Discussion Points Analysis
|
||||
[Address each point from topic-framework.md with product ownership expertise]
|
||||
[Address each point from guidance-specification.md with product ownership expertise]
|
||||
|
||||
### Core Requirements (from framework)
|
||||
[User story formulation and backlog refinement perspective]
|
||||
@@ -189,12 +189,12 @@ TodoWrite({
|
||||
"status": "completed",
|
||||
"framework_addressed": true,
|
||||
"output_location": ".workflow/WFS-{session}/.brainstorming/product-owner/analysis.md",
|
||||
"framework_reference": "@../topic-framework.md"
|
||||
"framework_reference": "@../guidance-specification.md"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Integration Points
|
||||
- **Framework Reference**: @../topic-framework.md for structured discussion points
|
||||
- **Framework Reference**: @../guidance-specification.md for structured discussion points
|
||||
- **Cross-Role Synthesis**: Product ownership insights available for synthesis-report.md integration
|
||||
- **Agent Autonomy**: Independent execution with framework guidance
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
---
|
||||
name: scrum-master
|
||||
description: Generate or update scrum-master/analysis.md addressing topic-framework discussion points
|
||||
description: Generate or update scrum-master/analysis.md addressing guidance-specification discussion points
|
||||
argument-hint: "optional topic - uses existing framework if available"
|
||||
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
---
|
||||
@@ -8,10 +8,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
## 🎯 **Scrum Master Analysis Generator**
|
||||
|
||||
### Purpose
|
||||
**Specialized command for generating scrum-master/analysis.md** that addresses topic-framework.md discussion points from agile process and team collaboration perspective. Creates or updates role-specific analysis with framework references.
|
||||
**Specialized command for generating scrum-master/analysis.md** that addresses guidance-specification.md discussion points from agile process and team collaboration perspective. Creates or updates role-specific analysis with framework references.
|
||||
|
||||
### Core Function
|
||||
- **Framework-based Analysis**: Address each discussion point in topic-framework.md
|
||||
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
|
||||
- **Agile Process Focus**: Sprint planning, team dynamics, and delivery optimization
|
||||
- **Update Mechanism**: Create new or update existing analysis.md
|
||||
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
|
||||
@@ -32,7 +32,7 @@ IF active_session EXISTS:
|
||||
session_id = get_active_session()
|
||||
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
|
||||
|
||||
CHECK: brainstorm_dir/topic-framework.md
|
||||
CHECK: brainstorm_dir/guidance-specification.md
|
||||
IF EXISTS:
|
||||
framework_mode = true
|
||||
load_framework = true
|
||||
@@ -73,7 +73,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
## Flow Control Steps
|
||||
1. **load_topic_framework**
|
||||
- Action: Load structured topic discussion framework
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/topic-framework.md)
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Output: topic_framework_content
|
||||
|
||||
2. **load_role_template**
|
||||
@@ -87,17 +87,17 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
- Output: session_context
|
||||
|
||||
## Analysis Requirements
|
||||
**Framework Reference**: Address all discussion points in topic-framework.md from agile process and team collaboration perspective
|
||||
**Framework Reference**: Address all discussion points in guidance-specification.md from agile process and team collaboration perspective
|
||||
**Role Focus**: Sprint planning, team dynamics, process optimization, delivery management
|
||||
**Structured Approach**: Create analysis.md addressing framework discussion points
|
||||
**Template Integration**: Apply role template guidelines within framework structure
|
||||
|
||||
## Expected Deliverables
|
||||
1. **analysis.md**: Comprehensive agile process analysis addressing all framework discussion points
|
||||
2. **Framework Reference**: Include @../topic-framework.md reference in analysis
|
||||
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
|
||||
|
||||
## Completion Criteria
|
||||
- Address each discussion point from topic-framework.md with scrum mastery expertise
|
||||
- Address each discussion point from guidance-specification.md with scrum mastery expertise
|
||||
- Provide actionable sprint planning and team facilitation strategies
|
||||
- Include process optimization and impediment removal insights
|
||||
- Reference framework document using @ notation for integration
|
||||
@@ -116,7 +116,7 @@ TodoWrite({
|
||||
activeForm: "Detecting session and framework"
|
||||
},
|
||||
{
|
||||
content: "Load topic-framework.md and session metadata for context",
|
||||
content: "Load guidance-specification.md and session metadata for context",
|
||||
status: "pending",
|
||||
activeForm: "Loading framework and session context"
|
||||
},
|
||||
@@ -144,7 +144,7 @@ TodoWrite({
|
||||
### Framework-Based Analysis
|
||||
```
|
||||
.workflow/WFS-{session}/.brainstorming/scrum-master/
|
||||
└── analysis.md # Structured analysis addressing topic-framework.md discussion points
|
||||
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
|
||||
```
|
||||
|
||||
### Analysis Document Structure
|
||||
@@ -152,11 +152,11 @@ TodoWrite({
|
||||
# Scrum Master Analysis: [Topic from Framework]
|
||||
|
||||
## Framework Reference
|
||||
**Topic Framework**: @../topic-framework.md
|
||||
**Topic Framework**: @../guidance-specification.md
|
||||
**Role Focus**: Agile Process & Team Collaboration perspective
|
||||
|
||||
## Discussion Points Analysis
|
||||
[Address each point from topic-framework.md with scrum mastery expertise]
|
||||
[Address each point from guidance-specification.md with scrum mastery expertise]
|
||||
|
||||
### Core Requirements (from framework)
|
||||
[Sprint planning and iteration breakdown perspective]
|
||||
@@ -189,12 +189,12 @@ TodoWrite({
|
||||
"status": "completed",
|
||||
"framework_addressed": true,
|
||||
"output_location": ".workflow/WFS-{session}/.brainstorming/scrum-master/analysis.md",
|
||||
"framework_reference": "@../topic-framework.md"
|
||||
"framework_reference": "@../guidance-specification.md"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Integration Points
|
||||
- **Framework Reference**: @../topic-framework.md for structured discussion points
|
||||
- **Framework Reference**: @../guidance-specification.md for structured discussion points
|
||||
- **Cross-Role Synthesis**: Agile process insights available for synthesis-report.md integration
|
||||
- **Agent Autonomy**: Independent execution with framework guidance
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
---
|
||||
name: subject-matter-expert
|
||||
description: Generate or update subject-matter-expert/analysis.md addressing topic-framework discussion points
|
||||
description: Generate or update subject-matter-expert/analysis.md addressing guidance-specification discussion points
|
||||
argument-hint: "optional topic - uses existing framework if available"
|
||||
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
---
|
||||
@@ -8,10 +8,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
## 🎯 **Subject Matter Expert Analysis Generator**
|
||||
|
||||
### Purpose
|
||||
**Specialized command for generating subject-matter-expert/analysis.md** that addresses topic-framework.md discussion points from domain knowledge and technical expertise perspective. Creates or updates role-specific analysis with framework references.
|
||||
**Specialized command for generating subject-matter-expert/analysis.md** that addresses guidance-specification.md discussion points from domain knowledge and technical expertise perspective. Creates or updates role-specific analysis with framework references.
|
||||
|
||||
### Core Function
|
||||
- **Framework-based Analysis**: Address each discussion point in topic-framework.md
|
||||
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
|
||||
- **Domain Expertise Focus**: Deep technical knowledge, industry standards, and best practices
|
||||
- **Update Mechanism**: Create new or update existing analysis.md
|
||||
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
|
||||
@@ -32,7 +32,7 @@ IF active_session EXISTS:
|
||||
session_id = get_active_session()
|
||||
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
|
||||
|
||||
CHECK: brainstorm_dir/topic-framework.md
|
||||
CHECK: brainstorm_dir/guidance-specification.md
|
||||
IF EXISTS:
|
||||
framework_mode = true
|
||||
load_framework = true
|
||||
@@ -73,7 +73,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
## Flow Control Steps
|
||||
1. **load_topic_framework**
|
||||
- Action: Load structured topic discussion framework
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/topic-framework.md)
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Output: topic_framework_content
|
||||
|
||||
2. **load_role_template**
|
||||
@@ -87,17 +87,17 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
- Output: session_context
|
||||
|
||||
## Analysis Requirements
|
||||
**Framework Reference**: Address all discussion points in topic-framework.md from domain expertise and technical standards perspective
|
||||
**Framework Reference**: Address all discussion points in guidance-specification.md from domain expertise and technical standards perspective
|
||||
**Role Focus**: Domain knowledge, technical standards, risk assessment, knowledge transfer
|
||||
**Structured Approach**: Create analysis.md addressing framework discussion points
|
||||
**Template Integration**: Apply role template guidelines within framework structure
|
||||
|
||||
## Expected Deliverables
|
||||
1. **analysis.md**: Comprehensive domain expertise analysis addressing all framework discussion points
|
||||
2. **Framework Reference**: Include @../topic-framework.md reference in analysis
|
||||
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
|
||||
|
||||
## Completion Criteria
|
||||
- Address each discussion point from topic-framework.md with subject matter expertise
|
||||
- Address each discussion point from guidance-specification.md with subject matter expertise
|
||||
- Provide actionable technical standards and best practices recommendations
|
||||
- Include risk assessment and compliance considerations
|
||||
- Reference framework document using @ notation for integration
|
||||
@@ -116,7 +116,7 @@ TodoWrite({
|
||||
activeForm: "Detecting session and framework"
|
||||
},
|
||||
{
|
||||
content: "Load topic-framework.md and session metadata for context",
|
||||
content: "Load guidance-specification.md and session metadata for context",
|
||||
status: "pending",
|
||||
activeForm: "Loading framework and session context"
|
||||
},
|
||||
@@ -144,7 +144,7 @@ TodoWrite({
|
||||
### Framework-Based Analysis
|
||||
```
|
||||
.workflow/WFS-{session}/.brainstorming/subject-matter-expert/
|
||||
└── analysis.md # Structured analysis addressing topic-framework.md discussion points
|
||||
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
|
||||
```
|
||||
|
||||
### Analysis Document Structure
|
||||
@@ -152,11 +152,11 @@ TodoWrite({
|
||||
# Subject Matter Expert Analysis: [Topic from Framework]
|
||||
|
||||
## Framework Reference
|
||||
**Topic Framework**: @../topic-framework.md
|
||||
**Topic Framework**: @../guidance-specification.md
|
||||
**Role Focus**: Domain Expertise & Technical Standards perspective
|
||||
|
||||
## Discussion Points Analysis
|
||||
[Address each point from topic-framework.md with subject matter expertise]
|
||||
[Address each point from guidance-specification.md with subject matter expertise]
|
||||
|
||||
### Core Requirements (from framework)
|
||||
[Domain-specific requirements and industry standards perspective]
|
||||
@@ -189,12 +189,12 @@ TodoWrite({
|
||||
"status": "completed",
|
||||
"framework_addressed": true,
|
||||
"output_location": ".workflow/WFS-{session}/.brainstorming/subject-matter-expert/analysis.md",
|
||||
"framework_reference": "@../topic-framework.md"
|
||||
"framework_reference": "@../guidance-specification.md"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Integration Points
|
||||
- **Framework Reference**: @../topic-framework.md for structured discussion points
|
||||
- **Framework Reference**: @../guidance-specification.md for structured discussion points
|
||||
- **Cross-Role Synthesis**: Domain expertise insights available for synthesis-report.md integration
|
||||
- **Agent Autonomy**: Independent execution with framework guidance
|
||||
|
||||
@@ -1,346 +1,438 @@
|
||||
---
|
||||
name: synthesis
|
||||
description: Generate synthesis-specification.md from topic-framework and role analyses with @ references using conceptual-planning-agent
|
||||
argument-hint: "no arguments required - synthesizes existing framework and role analyses"
|
||||
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
description: Clarify and refine role analyses through intelligent Q&A and targeted updates
|
||||
argument-hint: "[optional: --session session-id]"
|
||||
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*), Edit(*), Glob(*)
|
||||
---
|
||||
|
||||
## 🧩 **Synthesis Document Generator**
|
||||
## Overview
|
||||
|
||||
### Core Function
|
||||
**Specialized command for generating synthesis-specification.md** that integrates topic-framework.md and all role analysis.md files using @ reference system. Creates comprehensive implementation specification with cross-role insights.
|
||||
Three-phase workflow to eliminate ambiguities and enhance conceptual depth in role analyses:
|
||||
|
||||
**Dynamic Role Discovery**: Automatically detects which roles participated in brainstorming by scanning for `*/analysis.md` files. Synthesizes only actual participating roles, not predefined lists.
|
||||
**Phase 1-2 (Main Flow)**: Session detection → File discovery → Path preparation
|
||||
|
||||
### Primary Capabilities
|
||||
- **Dynamic Role Discovery**: Automatically identifies participating roles at runtime
|
||||
- **Framework Integration**: Reference topic-framework.md discussion points across all discovered roles
|
||||
- **Role Analysis Integration**: Consolidate all discovered role/analysis.md files using @ references
|
||||
- **Cross-Framework Comparison**: Compare how each participating role addressed framework discussion points
|
||||
- **@ Reference System**: Create structured references to source documents
|
||||
- **Update Detection**: Smart updates when new role analyses are added
|
||||
- **Flexible Participation**: Supports any subset of available roles (1 to 9+)
|
||||
**Phase 3A (Analysis Agent)**: Cross-role analysis → Generate recommendations
|
||||
|
||||
### Document Integration Model
|
||||
**Three-Document Reference System**:
|
||||
1. **topic-framework.md** → Structured discussion framework (input)
|
||||
2. **[role]/analysis.md** → Role-specific analyses addressing framework (input)
|
||||
3. **synthesis-specification.md** → Complete integrated specification (output)
|
||||
**Phase 4 (Main Flow)**: User selects enhancements → User answers clarifications → Build update plan
|
||||
|
||||
## ⚙️ **Execution Protocol**
|
||||
**Phase 5 (Parallel Update Agents)**: Each agent updates ONE role document → Parallel execution
|
||||
|
||||
### ⚠️ Agent Execution with Flow Control
|
||||
**Execution Model**: Uses conceptual-planning-agent for synthesis generation with structured file loading.
|
||||
**Phase 6 (Main Flow)**: Metadata update → Completion report
|
||||
|
||||
**Rationale**:
|
||||
- **Autonomous Execution**: Agent independently loads and processes all required documents
|
||||
- **Flow Control**: Structured document loading ensures systematic analysis
|
||||
- **Complex Cognitive Analysis**: Leverages agent's analytical capabilities for cross-role synthesis
|
||||
- **Conceptual Focus**: Agent specializes in conceptual analysis and multi-perspective integration
|
||||
**Key Features**:
|
||||
- Multi-agent architecture (analysis agent + parallel update agents)
|
||||
- Clear separation: Agent analysis vs Main flow interaction
|
||||
- Parallel document updates (one agent per role)
|
||||
- User intent alignment validation
|
||||
|
||||
**Agent Responsibility**: All file reading and synthesis generation performed by conceptual-planning-agent with FLOW_CONTROL instructions.
|
||||
**Document Flow**:
|
||||
- Input: `[role]/analysis*.md`, `guidance-specification.md`, session metadata
|
||||
- Output: Updated `[role]/analysis*.md` with Enhancements + Clarifications sections
|
||||
|
||||
## Task Tracking
|
||||
|
||||
### 📋 Task Tracking Protocol
|
||||
Initialize synthesis task tracking using TodoWrite at command start:
|
||||
```json
|
||||
[
|
||||
{"content": "Detect active session and validate topic-framework.md existence", "status": "in_progress", "activeForm": "Detecting session and validating framework"},
|
||||
{"content": "Discover participating role analyses dynamically", "status": "pending", "activeForm": "Discovering role analyses"},
|
||||
{"content": "Check existing synthesis and confirm user action", "status": "pending", "activeForm": "Checking update mechanism"},
|
||||
{"content": "Execute synthesis generation using conceptual-planning-agent with FLOW_CONTROL", "status": "pending", "activeForm": "Executing agent-based synthesis generation"},
|
||||
{"content": "Agent performs cross-role analysis and generates synthesis-specification.md", "status": "pending", "activeForm": "Agent analyzing and generating synthesis"},
|
||||
{"content": "Update workflow-session.json with synthesis completion status", "status": "pending", "activeForm": "Updating session metadata"}
|
||||
{"content": "Detect session and validate analyses", "status": "in_progress", "activeForm": "Detecting session"},
|
||||
{"content": "Discover role analysis file paths", "status": "pending", "activeForm": "Discovering paths"},
|
||||
{"content": "Execute analysis agent (cross-role analysis)", "status": "pending", "activeForm": "Executing analysis agent"},
|
||||
{"content": "Present enhancements for user selection", "status": "pending", "activeForm": "Presenting enhancements"},
|
||||
{"content": "Generate and present clarification questions", "status": "pending", "activeForm": "Clarifying with user"},
|
||||
{"content": "Build update plan from user input", "status": "pending", "activeForm": "Building update plan"},
|
||||
{"content": "Execute parallel update agents (one per role)", "status": "pending", "activeForm": "Updating documents in parallel"},
|
||||
{"content": "Update session metadata and generate report", "status": "pending", "activeForm": "Finalizing session"}
|
||||
]
|
||||
```
|
||||
|
||||
### Phase 1: Document Discovery & Validation
|
||||
```bash
|
||||
# Detect active brainstorming session
|
||||
CHECK: .workflow/.active-* marker files
|
||||
IF active_session EXISTS:
|
||||
session_id = get_active_session()
|
||||
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
|
||||
ELSE:
|
||||
ERROR: "No active brainstorming session found"
|
||||
EXIT
|
||||
## Execution Phases
|
||||
|
||||
# Validate required documents
|
||||
CHECK: brainstorm_dir/topic-framework.md
|
||||
IF NOT EXISTS:
|
||||
ERROR: "topic-framework.md not found. Run /workflow:brainstorm:artifacts first"
|
||||
EXIT
|
||||
```
|
||||
### Phase 1: Discovery & Validation
|
||||
|
||||
### Phase 2: Role Analysis Discovery
|
||||
```bash
|
||||
# Dynamically discover available role analyses
|
||||
SCAN_DIRECTORY: .workflow/WFS-{session}/.brainstorming/
|
||||
FIND_ANALYSES: [
|
||||
Scan all subdirectories for */analysis.md files
|
||||
Extract role names from directory names
|
||||
]
|
||||
1. **Detect Session**: Use `--session` parameter or `.workflow/.active-*` marker
|
||||
2. **Validate Files**:
|
||||
- `guidance-specification.md` (optional, warn if missing)
|
||||
- `*/analysis*.md` (required, error if empty)
|
||||
3. **Load User Intent**: Extract from `workflow-session.json` (project/description field)
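A minimal shell sketch of this detection step, assuming the `.workflow/.active-*` marker file name encodes the session id (the exact marker naming is an assumption, not part of the spec):

```bash
# Hypothetical sketch: resolve session id from --session or the .workflow/.active-* marker
if [ -n "$SESSION_ARG" ]; then
  session_id="$SESSION_ARG"
else
  marker=$(ls .workflow/.active-* 2>/dev/null | head -n 1)
  [ -n "$marker" ] || { echo "ERROR: No active workflow session found"; exit 1; }
  session_id="${marker##*/.active-}"    # assumes marker is named .active-<session-id>
fi
brainstorm_dir=".workflow/${session_id}/.brainstorming"
[ -f "${brainstorm_dir}/guidance-specification.md" ] || echo "WARN: guidance-specification.md not found"
```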
|
||||
|
||||
# Available roles (for reference, actual participation is dynamic):
|
||||
# - product-manager
|
||||
# - product-owner
|
||||
# - scrum-master
|
||||
# - system-architect
|
||||
# - ui-designer
|
||||
# - ux-expert
|
||||
# - data-architect
|
||||
# - subject-matter-expert
|
||||
# - test-strategist
|
||||
### Phase 2: Role Discovery & Path Preparation
|
||||
|
||||
LOAD_DOCUMENTS: {
|
||||
"topic_framework": topic-framework.md,
|
||||
"role_analyses": [dynamically discovered analysis.md files],
|
||||
"participating_roles": [extract role names from discovered directories],
|
||||
"existing_synthesis": synthesis-specification.md (if exists)
|
||||
}
|
||||
**Main flow prepares file paths for Agent**:
|
||||
|
||||
# Note: Not all roles participate in every brainstorming session
|
||||
# Only synthesize roles that actually produced analysis.md files
|
||||
```
|
||||
1. **Discover Analysis Files**:
|
||||
- Glob(.workflow/WFS-{session}/.brainstorming/*/analysis*.md)
|
||||
- Supports: analysis.md, analysis-1.md, analysis-2.md, analysis-3.md
|
||||
- Validate: At least one file exists (error if empty)
|
||||
|
||||
### Phase 3: Update Mechanism Check
|
||||
```bash
|
||||
# Check for existing synthesis
|
||||
IF synthesis-specification.md EXISTS:
|
||||
SHOW current synthesis summary to user
|
||||
ASK: "Synthesis exists. Do you want to:"
|
||||
OPTIONS:
|
||||
1. "Regenerate completely" → Create new synthesis
|
||||
2. "Update with new analyses" → Integrate new role analyses
|
||||
3. "Preserve existing" → Exit without changes
|
||||
ELSE:
|
||||
CREATE new synthesis
|
||||
```
|
||||
2. **Extract Role Information**:
|
||||
- `role_analysis_paths`: Relative paths from brainstorm_dir
|
||||
- `participating_roles`: Role names extracted from directory paths
|
||||
|
||||
### Phase 4: Agent Execution with Flow Control
|
||||
**Synthesis Generation using conceptual-planning-agent**
|
||||
3. **Pass to Agent** (Phase 3):
|
||||
- `session_id`
|
||||
- `brainstorm_dir`: .workflow/WFS-{session}/.brainstorming/
|
||||
- `role_analysis_paths`: ["product-manager/analysis.md", "system-architect/analysis-1.md", ...]
|
||||
- `participating_roles`: ["product-manager", "system-architect", ...]
|
||||
|
||||
Delegate synthesis generation to conceptual-planning-agent with structured file loading:
|
||||
**Main Flow Responsibility**: File discovery and path preparation only (NO file content reading)
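As an illustrative sketch of steps 1–2 above (directory layout taken from the session structure; variable names are not prescriptive):

```bash
# Hypothetical sketch: discover role analysis files and derive participating roles
brainstorm_dir=".workflow/WFS-${session}/.brainstorming"
role_analysis_paths=()
for f in "${brainstorm_dir}"/*/analysis*.md; do
  [ -e "$f" ] || continue                               # glob matched nothing -> skip
  role_analysis_paths+=("${f#${brainstorm_dir}/}")      # e.g. product-manager/analysis.md
done
[ ${#role_analysis_paths[@]} -gt 0 ] || { echo "ERROR: No role analyses found"; exit 1; }
# Role names are the directory components, de-duplicated (analysis-1.md and analysis-2.md map to one role)
participating_roles=$(printf '%s\n' "${role_analysis_paths[@]}" | cut -d/ -f1 | sort -u)
```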
|
||||
|
||||
### Phase 3A: Analysis & Enhancement Agent
|
||||
|
||||
**First agent call**: Cross-role analysis and generate enhancement recommendations
|
||||
|
||||
```bash
|
||||
Task(conceptual-planning-agent): "
|
||||
## Agent Mission
|
||||
Analyze role documents, identify conflicts/gaps, and generate enhancement recommendations
|
||||
|
||||
## Input from Main Flow
|
||||
- brainstorm_dir: {brainstorm_dir}
|
||||
- role_analysis_paths: {role_analysis_paths}
|
||||
- participating_roles: {participating_roles}
|
||||
|
||||
## Execution Instructions
|
||||
[FLOW_CONTROL]
|
||||
|
||||
Execute comprehensive synthesis generation from topic framework and role analyses
|
||||
### Flow Control Steps
|
||||
**AGENT RESPONSIBILITY**: Execute these analysis steps sequentially with context accumulation:
|
||||
|
||||
## Context Loading
|
||||
OUTPUT_FILE: synthesis-specification.md
|
||||
OUTPUT_PATH: .workflow/WFS-{session}/.brainstorming/synthesis-specification.md
|
||||
SESSION_ID: {session_id}
|
||||
ANALYSIS_MODE: cross_role_synthesis
|
||||
1. **load_session_metadata**
|
||||
- Action: Load original user intent as primary reference
|
||||
- Command: Read({brainstorm_dir}/../workflow-session.json)
|
||||
- Output: original_user_intent (from project/description field)
|
||||
|
||||
## Flow Control Steps
|
||||
1. **load_topic_framework**
|
||||
- Action: Load structured topic discussion framework
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/topic-framework.md)
|
||||
- Output: topic_framework_content
|
||||
2. **load_role_analyses**
|
||||
- Action: Load all role analysis documents
|
||||
- Command: For each path in role_analysis_paths: Read({brainstorm_dir}/{path})
|
||||
- Output: role_analyses_content_map = {role_name: content}
|
||||
|
||||
2. **discover_role_analyses**
|
||||
- Action: Dynamically discover all participating role analysis files
|
||||
- Command: Glob(.workflow/WFS-{session}/.brainstorming/*/analysis.md)
|
||||
- Output: role_analysis_paths, participating_roles
|
||||
3. **cross_role_analysis**
|
||||
- Action: Identify consensus themes, conflicts, gaps, underspecified areas
|
||||
- Output: consensus_themes, conflicting_views, gaps_list, ambiguities
|
||||
|
||||
3. **load_role_analyses**
|
||||
- Action: Load all discovered role analysis documents
|
||||
- Command: Read(each path from role_analysis_paths)
|
||||
- Output: role_analyses_content
|
||||
4. **generate_recommendations**
|
||||
- Action: Convert cross-role analysis findings into structured enhancement recommendations
|
||||
- Format: EP-001, EP-002, ... (sequential numbering)
|
||||
- Fields: id, title, affected_roles, category, current_state, enhancement, rationale, priority
|
||||
- Taxonomy: Map to 9 categories (User Intent, Requirements, Architecture, UX, Feasibility, Risk, Process, Decisions, Terminology)
|
||||
- Output: enhancement_recommendations (JSON array)
|
||||
|
||||
4. **check_existing_synthesis**
|
||||
- Action: Check if synthesis-specification.md already exists
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/synthesis-specification.md) [if exists]
|
||||
- Output: existing_synthesis_content [optional]
|
||||
### Output to Main Flow
|
||||
Return JSON array:
|
||||
[
|
||||
{
|
||||
\"id\": \"EP-001\",
|
||||
\"title\": \"API Contract Specification\",
|
||||
\"affected_roles\": [\"system-architect\", \"api-designer\"],
|
||||
\"category\": \"Architecture\",
|
||||
\"current_state\": \"High-level API descriptions\",
|
||||
\"enhancement\": \"Add detailed contract definitions with request/response schemas\",
|
||||
\"rationale\": \"Enables precise implementation and testing\",
|
||||
\"priority\": \"High\"
|
||||
},
|
||||
...
|
||||
]
|
||||
|
||||
5. **load_session_metadata**
|
||||
- Action: Load session metadata and context
|
||||
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
|
||||
- Output: session_context
|
||||
|
||||
6. **load_synthesis_template**
|
||||
- Action: Load synthesis role template for structure and guidelines
|
||||
- Command: Read(~/.claude/workflows/cli-templates/planning-roles/synthesis-role.md)
|
||||
- Output: synthesis_template_guidelines
|
||||
|
||||
## Synthesis Requirements
|
||||
|
||||
### Core Integration
|
||||
**Cross-Role Analysis**: Integrate all discovered role analyses with comprehensive coverage
|
||||
**Framework Integration**: Address how each role responded to topic-framework.md discussion points
|
||||
**Decision Transparency**: Document both adopted solutions and rejected alternatives with rationale
|
||||
**Process Integration**: Include team capability gaps, process risks, and collaboration patterns
|
||||
**Visual Documentation**: Include key diagrams (architecture, data model, user journey) via Mermaid
|
||||
**Priority Matrix**: Create quantified recommendation matrix with multi-dimensional evaluation
|
||||
**Actionable Plan**: Provide phased implementation roadmap with clear next steps
|
||||
|
||||
### Cross-Role Analysis Process (Agent Internal Execution)
|
||||
Perform systematic cross-role analysis following these steps:
|
||||
|
||||
1. **Data Collection**: Extract key insights, recommendations, concerns, and diagrams from each discovered role analysis
|
||||
2. **Consensus Identification**: Identify common themes and agreement areas across all participating roles
|
||||
3. **Disagreement Analysis**: Document conflicting views and track which specific roles disagree on each point
|
||||
4. **Innovation Extraction**: Identify breakthrough ideas and cross-role synergy opportunities
|
||||
5. **Priority Scoring**: Calculate multi-dimensional priority scores (impact, feasibility, effort, risk) for each recommendation
|
||||
6. **Decision Matrix**: Create comprehensive evaluation matrix and sort recommendations by priority
|
||||
|
||||
## Synthesis Quality Standards
|
||||
Follow synthesis-specification.md structure defined in synthesis-role.md template:
|
||||
- Use template structure for comprehensive document organization
|
||||
- Apply analysis guidelines for cross-role synthesis process
|
||||
- Include all required sections from template (Executive Summary, Key Designs, Requirements, etc.)
|
||||
- Follow @ reference system for traceability to source role analyses
|
||||
- Apply quality standards from template (completeness, visual clarity, decision transparency)
|
||||
- Validate output against template's output validation checklist
|
||||
|
||||
## Expected Deliverables
|
||||
1. **synthesis-specification.md**: Complete integrated specification consolidating all role perspectives
|
||||
2. **@ References**: Include cross-references to source role analyses
|
||||
3. **Session Metadata Update**: Update workflow-session.json with synthesis completion status
|
||||
|
||||
## Completion Criteria
|
||||
- All discovered role analyses integrated without gaps
|
||||
- Framework discussion points addressed across all roles
|
||||
- Controversial points documented with dissenting roles identified
|
||||
- Process concerns (team capabilities, risks, collaboration) captured
|
||||
- Quantified priority recommendations with evaluation criteria
|
||||
- Actionable implementation plan with phased approach
|
||||
- Comprehensive risk assessment with mitigation strategies
|
||||
|
||||
## Execution Notes
|
||||
- Dynamic role participation: Only synthesize roles that produced analysis.md files
|
||||
- Update mechanism: If synthesis exists, prompt user for regenerate/update/preserve decision
|
||||
- Timeout allocation: Complex synthesis task (60-90 min recommended)
|
||||
- Reference @intelligent-tools-strategy.md for timeout guidelines
|
||||
"
|
||||
```
|
||||
|
||||
## 📊 **Output Specification**
|
||||
### Phase 4: Main Flow User Interaction
|
||||
|
||||
### Output Location
|
||||
The synthesis process creates **one consolidated document** that integrates all role perspectives:
|
||||
**Main flow handles all user interaction via text output**:
|
||||
|
||||
```
|
||||
.workflow/WFS-{topic-slug}/.brainstorming/
|
||||
├── topic-framework.md # Input: Framework structure
|
||||
├── [role]/analysis.md # Input: Role analyses (multiple)
|
||||
└── synthesis-specification.md # ★ OUTPUT: Complete integrated specification
|
||||
**⚠️ CRITICAL**: ALL questions MUST use Chinese (所有问题必须用中文) for better user understanding
|
||||
|
||||
1. **Present Enhancement Options** (multi-select):
|
||||
```markdown
|
||||
===== Enhancement 选择 =====
|
||||
|
||||
请选择要应用的改进建议(可多选):
|
||||
|
||||
a) EP-001: API Contract Specification
|
||||
影响角色:system-architect, api-designer
|
||||
说明:添加详细的请求/响应 schema 定义
|
||||
|
||||
b) EP-002: User Intent Validation
|
||||
影响角色:product-manager, ux-expert
|
||||
说明:明确用户需求优先级和验收标准
|
||||
|
||||
c) EP-003: Error Handling Strategy
|
||||
影响角色:system-architect
|
||||
说明:统一异常处理和降级方案
|
||||
|
||||
支持格式:1abc 或 1a 1b 1c 或 1a,b,c
|
||||
请输入选择(可跳过输入 skip):
|
||||
```
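For illustration only, a hypothetical helper that normalizes the three accepted input formats to one selected letter per line (the literal answer `skip` would be handled before this is called):

```bash
# Accepts "1abc", "1a 1b 1c", or "1a,b,c"
parse_selection() {
  echo "$1" | tr -d '0-9, ' | grep -o '[a-z]' | sort -u
}
parse_selection "1a,b,c"   # -> a, b, c (one letter per line)
```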
|
||||
|
||||
#### synthesis-specification.md Structure
|
||||
2. **Generate Clarification Questions** (based on analysis agent output):
|
||||
- ✅ **ALL questions in Chinese (所有问题必须用中文)**
|
||||
- Use 9-category taxonomy scan results
|
||||
- Prioritize most critical questions (no hard limit)
|
||||
- Each with 2-4 options + descriptions
|
||||
|
||||
**Document Purpose**: Defines **"WHAT"** to build - comprehensive requirements and design blueprint.
|
||||
**Scope**: High-level features, requirements, and design specifications. Does NOT include executable task breakdown (that's IMPL_PLAN.md's responsibility).
|
||||
3. **Interactive Clarification Loop** (max 10 questions per round):
|
||||
```markdown
|
||||
===== Clarification 问题 (第 1/2 轮) =====
|
||||
|
||||
**Template Reference**: Complete document structure and content guidelines available in `synthesis-role.md` template, including:
|
||||
- Executive Summary with strategic overview
|
||||
- Key Designs & Decisions (architecture diagrams, ADRs, user journeys)
|
||||
- Controversial Points & Alternatives (decision transparency)
|
||||
- Requirements & Acceptance Criteria (Functional, Non-Functional, Business)
|
||||
- Design Specifications (UI/UX, Architecture, Domain Expertise)
|
||||
- Process & Collaboration Concerns (team skills, risks, patterns, constraints)
|
||||
- Implementation Roadmap (high-level phases)
|
||||
- Risk Assessment & Mitigation strategies
|
||||
【问题1 - 用户意图】MVP 阶段的核心目标是什么?
|
||||
a) 快速验证市场需求
|
||||
说明:最小功能集,快速上线获取反馈
|
||||
b) 建立技术壁垒
|
||||
说明:完善架构,为长期发展打基础
|
||||
c) 实现功能完整性
|
||||
说明:覆盖所有规划功能,延迟上线
|
||||
|
||||
**Agent Usage**: The conceptual-planning-agent loads this template to understand expected structure and quality standards.
|
||||
【问题2 - 架构决策】技术栈选择的优先考虑因素?
|
||||
a) 团队熟悉度
|
||||
说明:使用现有技术栈,降低学习成本
|
||||
b) 技术先进性
|
||||
说明:采用新技术,提升竞争力
|
||||
c) 生态成熟度
|
||||
说明:选择成熟方案,保证稳定性
|
||||
|
||||
## 🔄 **Session Integration**
|
||||
...(最多10个问题)
|
||||
|
||||
### Streamlined Status Synchronization
|
||||
Upon completion, update `workflow-session.json`:
|
||||
请回答 (格式: 1a 2b 3c...):
|
||||
```
|
||||
|
||||
**Dynamic Role Participation**: The `participating_roles` and `roles_synthesized` values are determined at runtime based on actual analysis.md files discovered.
|
||||
Wait for user input → Parse all answers in batch → Continue to next round if needed
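A hypothetical way to split such a batched reply into question/option pairs (input format assumed from the prompt above):

```bash
parse_answers() {
  # Emits "question_number option_letter" pairs, one per line
  echo "$1" | grep -oE '[0-9]+[a-z]' | sed -E 's/([0-9]+)([a-z])/\1 \2/'
}
parse_answers "1a 2b 3c"
# 1 a
# 2 b
# 3 c
```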
|
||||
|
||||
4. **Build Update Plan**:
|
||||
```
|
||||
update_plan = {
|
||||
"role1": {
|
||||
"enhancements": [EP-001, EP-003],
|
||||
"clarifications": [
|
||||
{"question": "...", "answer": "...", "category": "..."},
|
||||
...
|
||||
]
|
||||
},
|
||||
"role2": {
|
||||
"enhancements": [EP-002],
|
||||
"clarifications": [...]
|
||||
},
|
||||
...
|
||||
}
|
||||
```
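One possible way the main flow could assemble this structure from the user's selections; `affected_roles_of` is a stand-in for a lookup into the analysis agent's EP output, not a real command:

```bash
# Hypothetical sketch: group user-selected enhancements by affected role (bash 4+ associative arrays)
affected_roles_of() {
  case "$1" in
    EP-001) echo "system-architect api-designer" ;;
    EP-003) echo "system-architect" ;;
  esac
}
selected_eps="EP-001 EP-003"
declare -A plan
for ep in $selected_eps; do
  for role in $(affected_roles_of "$ep"); do
    plan[$role]="${plan[$role]} $ep"
  done
done
for role in "${!plan[@]}"; do echo "${role}:${plan[$role]}"; done
# system-architect: EP-001 EP-003
# api-designer: EP-001
```

Clarification answers would be routed the same way, keyed by each question's category and the roles it affects.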
|
||||
|
||||
### Phase 5: Parallel Document Update Agents
|
||||
|
||||
**Parallel agent calls** (one per role needing updates):
|
||||
|
||||
```bash
|
||||
# Execute in parallel using single message with multiple Task calls
|
||||
|
||||
Task(conceptual-planning-agent): "
|
||||
## Agent Mission
|
||||
Apply user-confirmed enhancements and clarifications to {role1} analysis document
|
||||
|
||||
## Agent Intent
|
||||
- **Goal**: Integrate synthesis results into role-specific analysis
|
||||
- **Scope**: Update ONLY {role1}/analysis.md (isolated, no cross-role dependencies)
|
||||
- **Constraints**: Preserve original insights, add refinements without deletion
|
||||
|
||||
## Input from Main Flow
|
||||
- role: {role1}
|
||||
- analysis_path: {brainstorm_dir}/{role1}/analysis.md
|
||||
- enhancements: [EP-001, EP-003] (user-selected improvements)
|
||||
- clarifications: [{question, answer, category}, ...] (user-confirmed answers)
|
||||
- original_user_intent: {from session metadata}
|
||||
|
||||
## Execution Instructions
|
||||
[FLOW_CONTROL]
|
||||
|
||||
### Flow Control Steps
|
||||
**AGENT RESPONSIBILITY**: Execute these update steps sequentially:
|
||||
|
||||
1. **load_current_analysis**
|
||||
- Action: Load existing role analysis document
|
||||
- Command: Read({brainstorm_dir}/{role1}/analysis.md)
|
||||
- Output: current_analysis_content
|
||||
|
||||
2. **add_clarifications_section**
|
||||
- Action: Insert Clarifications section with Q&A
|
||||
- Format: \"## Clarifications\\n### Session {date}\\n- **Q**: {question} (Category: {category})\\n **A**: {answer}\"
|
||||
- Output: analysis_with_clarifications
|
||||
|
||||
3. **apply_enhancements**
|
||||
- Action: Integrate EP-001, EP-003 into relevant sections
|
||||
- Strategy: Locate section by category (Architecture → Architecture section, UX → User Experience section)
|
||||
- Output: analysis_with_enhancements
|
||||
|
||||
4. **resolve_contradictions**
|
||||
- Action: Remove conflicts between original content and clarifications/enhancements
|
||||
- Output: contradiction_free_analysis
|
||||
|
||||
5. **enforce_terminology_consistency**
|
||||
- Action: Align all terminology with user-confirmed choices from clarifications
|
||||
- Output: terminology_consistent_analysis
|
||||
|
||||
6. **validate_user_intent_alignment**
|
||||
- Action: Verify all updates support original_user_intent
|
||||
- Output: validated_analysis
|
||||
|
||||
7. **write_updated_file**
|
||||
- Action: Save final analysis document
|
||||
- Command: Write({brainstorm_dir}/{role1}/analysis.md, validated_analysis)
|
||||
- Output: File update confirmation
|
||||
|
||||
### Output
|
||||
Updated {role1}/analysis.md with Clarifications section + enhanced content
|
||||
")
|
||||
|
||||
Task(conceptual-planning-agent): "
|
||||
## Agent Mission
|
||||
Apply user-confirmed enhancements and clarifications to {role2} analysis document
|
||||
|
||||
## Agent Intent
|
||||
- **Goal**: Integrate synthesis results into role-specific analysis
|
||||
- **Scope**: Update ONLY {role2}/analysis.md (isolated, no cross-role dependencies)
|
||||
- **Constraints**: Preserve original insights, add refinements without deletion
|
||||
|
||||
## Input from Main Flow
|
||||
- role: {role2}
|
||||
- analysis_path: {brainstorm_dir}/{role2}/analysis.md
|
||||
- enhancements: [EP-002] (user-selected improvements)
|
||||
- clarifications: [{question, answer, category}, ...] (user-confirmed answers)
|
||||
- original_user_intent: {from session metadata}
|
||||
|
||||
## Execution Instructions
|
||||
[FLOW_CONTROL]
|
||||
|
||||
### Flow Control Steps
|
||||
**AGENT RESPONSIBILITY**: Execute same 7 update steps as {role1} agent (load → clarifications → enhancements → contradictions → terminology → validation → write)
|
||||
|
||||
### Output
|
||||
Updated {role2}/analysis.md with Clarifications section + enhanced content
|
||||
")
|
||||
|
||||
# ... repeat for each role in update_plan
|
||||
```
|
||||
|
||||
**Agent Characteristics**:
|
||||
- **Intent**: Integrate user-confirmed synthesis results (NOT generate new analysis)
|
||||
- **Isolation**: Each agent updates exactly ONE role (parallel execution safe)
|
||||
- **Context**: Minimal - receives only role-specific enhancements + clarifications
|
||||
- **Dependencies**: Zero cross-agent dependencies (full parallelism)
|
||||
- **Validation**: All updates must align with original_user_intent
|
||||
|
||||
### Phase 6: Completion & Metadata Update
|
||||
|
||||
**Main flow finalizes**:
|
||||
|
||||
1. Wait for all parallel agents to complete
|
||||
2. Update workflow-session.json:
|
||||
```json
|
||||
{
|
||||
"phases": {
|
||||
"BRAINSTORM": {
|
||||
"status": "completed",
|
||||
"synthesis_completed": true,
|
||||
"status": "clarification_completed",
|
||||
"clarification_completed": true,
|
||||
"completed_at": "timestamp",
|
||||
"participating_roles": ["<dynamically-discovered-role-1>", "<dynamically-discovered-role-2>", "..."],
|
||||
"available_roles": ["product-manager", "product-owner", "scrum-master", "system-architect", "ui-designer", "ux-expert", "data-architect", "subject-matter-expert", "test-strategist"],
|
||||
"consolidated_output": {
|
||||
"synthesis_specification": ".workflow/WFS-{topic}/.brainstorming/synthesis-specification.md"
|
||||
"participating_roles": [...],
|
||||
"clarification_results": {
|
||||
"enhancements_applied": ["EP-001", "EP-002", ...],
|
||||
"questions_asked": 3,
|
||||
"categories_clarified": ["Architecture", "UX", ...],
|
||||
"roles_updated": ["role1", "role2", ...],
|
||||
"outstanding_items": []
|
||||
},
|
||||
"synthesis_quality": {
|
||||
"role_integration": "complete",
|
||||
"quality_metrics": {
|
||||
"user_intent_alignment": "validated",
|
||||
"requirement_coverage": "comprehensive",
|
||||
"decision_transparency": "alternatives_documented",
|
||||
"process_risks_identified": true,
|
||||
"implementation_readiness": "ready"
|
||||
},
|
||||
"content_metrics": {
|
||||
"roles_synthesized": "<COUNT(participating_roles)>",
|
||||
"functional_requirements": "<dynamic-count>",
|
||||
"non_functional_requirements": "<dynamic-count>",
|
||||
"business_requirements": "<dynamic-count>",
|
||||
"architecture_decisions": "<dynamic-count>",
|
||||
"controversial_points": "<dynamic-count>",
|
||||
"diagrams_included": "<dynamic-count>",
|
||||
"process_risks": "<dynamic-count>",
|
||||
"team_skill_gaps": "<dynamic-count>",
|
||||
"implementation_phases": "<dynamic-count>",
|
||||
"risk_factors_identified": "<dynamic-count>"
|
||||
"ambiguity_resolution": "complete",
|
||||
"terminology_consistency": "enforced"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Example with actual values**:
|
||||
3. Generate completion report (show to user):
|
||||
```markdown
|
||||
## ✅ Clarification Complete
|
||||
|
||||
**Enhancements Applied**: EP-001, EP-002, EP-003
|
||||
**Questions Answered**: 3/5
|
||||
**Roles Updated**: role1, role2, role3
|
||||
|
||||
### Next Steps
|
||||
✅ PROCEED: `/workflow:plan --session WFS-{session-id}`
|
||||
```
|
||||
|
||||
## Output
|
||||
|
||||
**Location**: `.workflow/WFS-{session}/.brainstorming/[role]/analysis*.md` (in-place updates)
|
||||
|
||||
**Updated Structure**:
|
||||
```markdown
|
||||
## Clarifications
|
||||
### Session {date}
|
||||
- **Q**: {question} (Category: {category})
|
||||
**A**: {answer}
|
||||
|
||||
## {Existing Sections}
|
||||
{Refined content based on clarifications}
|
||||
```
|
||||
|
||||
**Changes**:
|
||||
- User intent validated/corrected
|
||||
- Requirements more specific/measurable
|
||||
- Architecture decisions documented with rationale
|
||||
- Ambiguities resolved, placeholders removed
|
||||
- Consistent terminology
|
||||
|
||||
## Session Metadata
|
||||
|
||||
Update `workflow-session.json`:
|
||||
|
||||
```json
|
||||
{
|
||||
"phases": {
|
||||
"BRAINSTORM": {
|
||||
"status": "completed",
|
||||
"participating_roles": ["product-manager", "system-architect", "ui-designer", "ux-expert", "scrum-master"],
|
||||
"content_metrics": {
|
||||
"roles_synthesized": 5,
|
||||
"functional_requirements": 18,
|
||||
"controversial_points": 2
|
||||
"status": "clarification_completed",
|
||||
"clarification_completed": true,
|
||||
"completed_at": "timestamp",
|
||||
"participating_roles": ["product-manager", "system-architect", ...],
|
||||
"clarification_results": {
|
||||
"questions_asked": 3,
|
||||
"categories_clarified": ["Architecture & Design", ...],
|
||||
"roles_updated": ["system-architect", "ui-designer", ...],
|
||||
"outstanding_items": []
|
||||
},
|
||||
"quality_metrics": {
|
||||
"user_intent_alignment": "validated",
|
||||
"requirement_coverage": "comprehensive",
|
||||
"ambiguity_resolution": "complete",
|
||||
"terminology_consistency": "enforced",
|
||||
"decision_transparency": "documented"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## ✅ **Quality Assurance**
|
||||
## Quality Checklist
|
||||
|
||||
Verify synthesis output meets these standards (detailed criteria in `synthesis-role.md` template):
|
||||
**Content**:
|
||||
- All role analyses loaded/analyzed
|
||||
- Cross-role analysis (consensus, conflicts, gaps)
|
||||
- 9-category ambiguity scan
|
||||
- Questions prioritized
|
||||
- Clarifications documented
|
||||
|
||||
### Content Completeness
|
||||
- [ ] All discovered role analyses integrated without gaps
|
||||
- [ ] Key designs documented (architecture diagrams, ADRs, user journeys via Mermaid)
|
||||
- [ ] Controversial points captured with alternatives and rationale
|
||||
- [ ] Process concerns included (team skills, risks, collaboration patterns)
|
||||
- [ ] Requirements documented (Functional, Non-Functional, Business) with sources
|
||||
**Analysis**:
|
||||
- User intent validated
|
||||
- Cross-role synthesis complete
|
||||
- Ambiguities resolved
|
||||
- Correct roles updated
|
||||
- Terminology consistent
|
||||
- Contradictions removed
|
||||
|
||||
### Analysis Quality
|
||||
- [ ] Cross-role synthesis identifies consensus and conflicts
|
||||
- [ ] Decision transparency documents both adopted and rejected alternatives
|
||||
- [ ] Priority recommendations with multi-dimensional evaluation
|
||||
- [ ] Implementation roadmap with phased approach
|
||||
- [ ] Risk assessment with mitigation strategies
|
||||
- [ ] @ references to source role analyses throughout
|
||||
**Documents**:
|
||||
- Clarifications section formatted
|
||||
- Sections reflect answers
|
||||
- No placeholders (TODO/TBD)
|
||||
- Valid Markdown
|
||||
- Cross-references maintained
|
||||
|
||||
## 🚀 **Recommended Next Steps**
|
||||
|
||||
After synthesis completion, proceed to action planning:
|
||||
|
||||
### Standard Workflow (Recommended)
|
||||
```bash
|
||||
/workflow:concept-clarify --session WFS-{session-id} # Optional: Clarify ambiguities
|
||||
/workflow:plan --session WFS-{session-id} # Generate IMPL_PLAN.md and tasks
|
||||
/workflow:action-plan-verify --session WFS-{session-id} # Optional: Verify plan quality
|
||||
/workflow:execute --session WFS-{session-id} # Start implementation
|
||||
```
|
||||
|
||||
### TDD Workflow
|
||||
```bash
|
||||
/workflow:concept-clarify --session WFS-{session-id} # Optional: Clarify ambiguities
|
||||
/workflow:tdd-plan --session WFS-{session-id} "Feature description"
|
||||
/workflow:action-plan-verify --session WFS-{session-id} # Optional: Verify plan quality
|
||||
/workflow:execute --session WFS-{session-id}
|
||||
```
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
---
|
||||
name: system-architect
|
||||
description: Generate or update system-architect/analysis.md addressing topic-framework discussion points
|
||||
description: Generate or update system-architect/analysis.md addressing guidance-specification discussion points
|
||||
argument-hint: "optional topic - uses existing framework if available"
|
||||
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
---
|
||||
@@ -8,10 +8,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
## 🏗️ **System Architect Analysis Generator**
|
||||
|
||||
### Purpose
|
||||
**Specialized command for generating system-architect/analysis.md** that addresses topic-framework.md discussion points from system architecture perspective. Creates or updates role-specific analysis with framework references.
|
||||
**Specialized command for generating system-architect/analysis.md** that addresses guidance-specification.md discussion points from system architecture perspective. Creates or updates role-specific analysis with framework references.
|
||||
|
||||
### Core Function
|
||||
- **Framework-based Analysis**: Address each discussion point in topic-framework.md
|
||||
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
|
||||
- **Architecture Focus**: Technical architecture, scalability, and system design perspective
|
||||
- **Update Mechanism**: Create new or update existing analysis.md
|
||||
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
|
||||
@@ -51,7 +51,7 @@ IF active_session EXISTS:
|
||||
session_id = get_active_session()
|
||||
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
|
||||
|
||||
CHECK: brainstorm_dir/topic-framework.md
|
||||
CHECK: brainstorm_dir/guidance-specification.md
|
||||
IF EXISTS:
|
||||
framework_mode = true
|
||||
load_framework = true
|
||||
@@ -78,20 +78,20 @@ ELSE:
|
||||
```
|
||||
|
||||
### Phase 3: Agent Task Generation
|
||||
**Framework-Based Analysis** (when topic-framework.md exists):
|
||||
**Framework-Based Analysis** (when guidance-specification.md exists):
|
||||
```bash
|
||||
Task(subagent_type="conceptual-planning-agent",
|
||||
prompt="Generate system architect analysis addressing topic framework
|
||||
|
||||
## Framework Integration Required
|
||||
**MANDATORY**: Load and address topic-framework.md discussion points
|
||||
**Framework Reference**: @{session.brainstorm_dir}/topic-framework.md
|
||||
**MANDATORY**: Load and address guidance-specification.md discussion points
|
||||
**Framework Reference**: @{session.brainstorm_dir}/guidance-specification.md
|
||||
**Output Location**: {session.brainstorm_dir}/system-architect/analysis.md
|
||||
|
||||
## Analysis Requirements
|
||||
1. **Load Topic Framework**: Read topic-framework.md completely
|
||||
1. **Load Topic Framework**: Read guidance-specification.md completely
|
||||
2. **Address Each Discussion Point**: Respond to all 5 framework sections from system architecture perspective
|
||||
3. **Include Framework Reference**: Start analysis.md with @../topic-framework.md
|
||||
3. **Include Framework Reference**: Start analysis.md with @../guidance-specification.md
|
||||
4. **Technical Focus**: Emphasize scalability, architecture patterns, technology decisions
|
||||
5. **Structured Response**: Use framework structure for analysis organization
|
||||
|
||||
@@ -106,7 +106,7 @@ Task(subagent_type="conceptual-planning-agent",
|
||||
```markdown
|
||||
# System Architect Analysis: [Topic]
|
||||
|
||||
**Framework Reference**: @../topic-framework.md
|
||||
**Framework Reference**: @../guidance-specification.md
|
||||
**Role Focus**: System Architecture and Technical Design
|
||||
|
||||
## Core Requirements Analysis
|
||||
@@ -140,14 +140,14 @@ IF update_mode = "incremental":
|
||||
|
||||
## Current Analysis Context
|
||||
**Existing Analysis**: @{session.brainstorm_dir}/system-architect/analysis.md
|
||||
**Framework Reference**: @{session.brainstorm_dir}/topic-framework.md
|
||||
**Framework Reference**: @{session.brainstorm_dir}/guidance-specification.md
|
||||
|
||||
## Update Requirements
|
||||
1. **Preserve Structure**: Maintain existing analysis structure
|
||||
2. **Add New Insights**: Integrate new technical insights and recommendations
|
||||
3. **Framework Alignment**: Ensure continued alignment with topic framework
|
||||
4. **Technical Updates**: Add new architecture patterns, technology considerations
|
||||
5. **Maintain References**: Keep @../topic-framework.md reference
|
||||
5. **Maintain References**: Keep @../guidance-specification.md reference
|
||||
|
||||
## Update Instructions
|
||||
- Read existing analysis completely
|
||||
@@ -163,14 +163,14 @@ IF update_mode = "incremental":
|
||||
### Output Files
|
||||
```
|
||||
.workflow/WFS-[topic]/.brainstorming/
|
||||
├── topic-framework.md # Input: Framework (if exists)
|
||||
├── guidance-specification.md # Input: Framework (if exists)
|
||||
└── system-architect/
|
||||
└── analysis.md # ★ OUTPUT: Framework-based analysis
|
||||
```
|
||||
|
||||
### Analysis Structure
|
||||
**Required Elements**:
|
||||
- **Framework Reference**: @../topic-framework.md (if framework exists)
|
||||
- **Framework Reference**: @../guidance-specification.md (if framework exists)
|
||||
- **Role Focus**: System Architecture and Technical Design perspective
|
||||
- **5 Framework Sections**: Address each framework discussion point
|
||||
- **Technical Recommendations**: Architecture-specific insights and solutions
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
---
|
||||
name: ui-designer
|
||||
description: Generate or update ui-designer/analysis.md addressing topic-framework discussion points
|
||||
description: Generate or update ui-designer/analysis.md addressing guidance-specification discussion points
|
||||
argument-hint: "optional topic - uses existing framework if available"
|
||||
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
---
|
||||
@@ -8,10 +8,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
## 🎨 **UI Designer Analysis Generator**
|
||||
|
||||
### Purpose
|
||||
**Specialized command for generating ui-designer/analysis.md** that addresses topic-framework.md discussion points from UI/UX design perspective. Creates or updates role-specific analysis with framework references.
|
||||
**Specialized command for generating ui-designer/analysis.md** that addresses guidance-specification.md discussion points from UI/UX design perspective. Creates or updates role-specific analysis with framework references.
|
||||
|
||||
### Core Function
|
||||
- **Framework-based Analysis**: Address each discussion point in topic-framework.md
|
||||
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
|
||||
- **UI/UX Focus**: User experience, interface design, and accessibility perspective
|
||||
- **Update Mechanism**: Create new or update existing analysis.md
|
||||
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
|
||||
@@ -53,7 +53,7 @@ IF active_session EXISTS:
|
||||
session_id = get_active_session()
|
||||
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
|
||||
|
||||
CHECK: brainstorm_dir/topic-framework.md
|
||||
CHECK: brainstorm_dir/guidance-specification.md
|
||||
IF EXISTS:
|
||||
framework_mode = true
|
||||
load_framework = true
|
||||
@@ -94,7 +94,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
## Flow Control Steps
|
||||
1. **load_topic_framework**
|
||||
- Action: Load structured topic discussion framework
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/topic-framework.md)
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Output: topic_framework_content
|
||||
|
||||
2. **load_role_template**
|
||||
@@ -108,17 +108,17 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
- Output: session_context
|
||||
|
||||
## Analysis Requirements
|
||||
**Framework Reference**: Address all discussion points in topic-framework.md from UI/UX perspective
|
||||
**Framework Reference**: Address all discussion points in guidance-specification.md from UI/UX perspective
|
||||
**Role Focus**: User experience design, interface optimization, accessibility compliance
|
||||
**Structured Approach**: Create analysis.md addressing framework discussion points
|
||||
**Template Integration**: Apply role template guidelines within framework structure
|
||||
|
||||
## Expected Deliverables
|
||||
1. **analysis.md**: Comprehensive UI/UX analysis addressing all framework discussion points
|
||||
2. **Framework Reference**: Include @../topic-framework.md reference in analysis
|
||||
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
|
||||
|
||||
## Completion Criteria
|
||||
- Address each discussion point from topic-framework.md with UI/UX design expertise
|
||||
- Address each discussion point from guidance-specification.md with UI/UX design expertise
|
||||
- Provide actionable design recommendations and interface solutions
|
||||
- Include accessibility considerations and WCAG compliance planning
|
||||
- Reference framework document using @ notation for integration
|
||||
@@ -137,7 +137,7 @@ TodoWrite({
|
||||
activeForm: "Detecting session and framework"
|
||||
},
|
||||
{
|
||||
content: "Load topic-framework.md and session metadata for context",
|
||||
content: "Load guidance-specification.md and session metadata for context",
|
||||
status: "pending",
|
||||
activeForm: "Loading framework and session context"
|
||||
},
|
||||
@@ -165,7 +165,7 @@ TodoWrite({
|
||||
### Framework-Based Analysis
|
||||
```
|
||||
.workflow/WFS-{session}/.brainstorming/ui-designer/
|
||||
└── analysis.md # Structured analysis addressing topic-framework.md discussion points
|
||||
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
|
||||
```
|
||||
|
||||
### Analysis Document Structure
|
||||
@@ -173,11 +173,11 @@ TodoWrite({
|
||||
# UI Designer Analysis: [Topic from Framework]
|
||||
|
||||
## Framework Reference
|
||||
**Topic Framework**: @../topic-framework.md
|
||||
**Topic Framework**: @../guidance-specification.md
|
||||
**Role Focus**: UI/UX Design perspective
|
||||
|
||||
## Discussion Points Analysis
|
||||
[Address each point from topic-framework.md with UI/UX expertise]
|
||||
[Address each point from guidance-specification.md with UI/UX expertise]
|
||||
|
||||
### Core Requirements (from framework)
|
||||
[UI/UX perspective on requirements]
|
||||
@@ -210,12 +210,12 @@ TodoWrite({
|
||||
"status": "completed",
|
||||
"framework_addressed": true,
|
||||
"output_location": ".workflow/WFS-{session}/.brainstorming/ui-designer/analysis.md",
|
||||
"framework_reference": "@../topic-framework.md"
|
||||
"framework_reference": "@../guidance-specification.md"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Integration Points
|
||||
- **Framework Reference**: @../topic-framework.md for structured discussion points
|
||||
- **Framework Reference**: @../guidance-specification.md for structured discussion points
|
||||
- **Cross-Role Synthesis**: UI/UX insights available for synthesis-report.md integration
|
||||
- **Agent Autonomy**: Independent execution with framework guidance
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
---
|
||||
name: ux-expert
|
||||
description: Generate or update ux-expert/analysis.md addressing topic-framework discussion points
|
||||
description: Generate or update ux-expert/analysis.md addressing guidance-specification discussion points
|
||||
argument-hint: "optional topic - uses existing framework if available"
|
||||
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
---
|
||||
@@ -8,10 +8,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
## 🎯 **UX Expert Analysis Generator**
|
||||
|
||||
### Purpose
|
||||
**Specialized command for generating ux-expert/analysis.md** that addresses topic-framework.md discussion points from user experience and interface design perspective. Creates or updates role-specific analysis with framework references.
|
||||
**Specialized command for generating ux-expert/analysis.md** that addresses guidance-specification.md discussion points from user experience and interface design perspective. Creates or updates role-specific analysis with framework references.
|
||||
|
||||
### Core Function
|
||||
- **Framework-based Analysis**: Address each discussion point in topic-framework.md
|
||||
- **Framework-based Analysis**: Address each discussion point in guidance-specification.md
|
||||
- **UX Design Focus**: User interface, interaction patterns, and usability optimization
|
||||
- **Update Mechanism**: Create new or update existing analysis.md
|
||||
- **Agent Delegation**: Use conceptual-planning-agent for analysis generation
|
||||
@@ -53,7 +53,7 @@ IF active_session EXISTS:
|
||||
session_id = get_active_session()
|
||||
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
|
||||
|
||||
CHECK: brainstorm_dir/topic-framework.md
|
||||
CHECK: brainstorm_dir/guidance-specification.md
|
||||
IF EXISTS:
|
||||
framework_mode = true
|
||||
load_framework = true
|
||||
@@ -94,7 +94,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
## Flow Control Steps
|
||||
1. **load_topic_framework**
|
||||
- Action: Load structured topic discussion framework
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/topic-framework.md)
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Output: topic_framework_content
|
||||
|
||||
2. **load_role_template**
|
||||
@@ -108,17 +108,17 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
- Output: session_context
|
||||
|
||||
## Analysis Requirements
|
||||
**Framework Reference**: Address all discussion points in topic-framework.md from user experience and interface design perspective
|
||||
**Framework Reference**: Address all discussion points in guidance-specification.md from user experience and interface design perspective
|
||||
**Role Focus**: UI design, interaction patterns, usability optimization, design systems
|
||||
**Structured Approach**: Create analysis.md addressing framework discussion points
|
||||
**Template Integration**: Apply role template guidelines within framework structure
|
||||
|
||||
## Expected Deliverables
|
||||
1. **analysis.md**: Comprehensive UX design analysis addressing all framework discussion points
|
||||
2. **Framework Reference**: Include @../topic-framework.md reference in analysis
|
||||
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
|
||||
|
||||
## Completion Criteria
|
||||
- Address each discussion point from topic-framework.md with UX design expertise
|
||||
- Address each discussion point from guidance-specification.md with UX design expertise
|
||||
- Provide actionable interface design and usability optimization strategies
|
||||
- Include accessibility considerations and interaction pattern recommendations
|
||||
- Reference framework document using @ notation for integration
|
||||
@@ -137,7 +137,7 @@ TodoWrite({
|
||||
activeForm: "Detecting session and framework"
|
||||
},
|
||||
{
|
||||
content: "Load topic-framework.md and session metadata for context",
|
||||
content: "Load guidance-specification.md and session metadata for context",
|
||||
status: "pending",
|
||||
activeForm: "Loading framework and session context"
|
||||
},
|
||||
@@ -165,7 +165,7 @@ TodoWrite({
|
||||
### Framework-Based Analysis
|
||||
```
|
||||
.workflow/WFS-{session}/.brainstorming/ux-expert/
|
||||
└── analysis.md # Structured analysis addressing topic-framework.md discussion points
|
||||
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
|
||||
```
|
||||
|
||||
### Analysis Document Structure
|
||||
@@ -173,11 +173,11 @@ TodoWrite({
|
||||
# UX Expert Analysis: [Topic from Framework]
|
||||
|
||||
## Framework Reference
|
||||
**Topic Framework**: @../topic-framework.md
|
||||
**Topic Framework**: @../guidance-specification.md
|
||||
**Role Focus**: User Experience & Interface Design perspective
|
||||
|
||||
## Discussion Points Analysis
|
||||
[Address each point from topic-framework.md with UX design expertise]
|
||||
[Address each point from guidance-specification.md with UX design expertise]
|
||||
|
||||
### Core Requirements (from framework)
|
||||
[User interface and interaction design requirements perspective]
|
||||
@@ -210,12 +210,12 @@ TodoWrite({
|
||||
"status": "completed",
|
||||
"framework_addressed": true,
|
||||
"output_location": ".workflow/WFS-{session}/.brainstorming/ux-expert/analysis.md",
|
||||
"framework_reference": "@../topic-framework.md"
|
||||
"framework_reference": "@../guidance-specification.md"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Integration Points
|
||||
- **Framework Reference**: @../topic-framework.md for structured discussion points
|
||||
- **Framework Reference**: @../guidance-specification.md for structured discussion points
|
||||
- **Cross-Role Synthesis**: UX design insights available for synthesis-report.md integration
|
||||
- **Agent Autonomy**: Independent execution with framework guidance
|
||||
|
||||
@@ -1,307 +0,0 @@
|
||||
---
|
||||
name: concept-clarify
|
||||
description: Identify underspecified areas in brainstorming artifacts through targeted clarification questions before action planning
|
||||
argument-hint: "[optional: --session session-id]"
|
||||
allowed-tools: Read(*), Write(*), Edit(*), TodoWrite(*), Glob(*), Bash(*)
|
||||
---
|
||||
|
||||
## User Input
|
||||
|
||||
```text
|
||||
$ARGUMENTS
|
||||
```
|
||||
|
||||
You **MUST** consider the user input before proceeding (if not empty).
|
||||
|
||||
## Outline
|
||||
|
||||
**Goal**: Detect and reduce ambiguity or missing decision points in brainstorming artifacts (synthesis-specification.md, topic-framework.md, role analyses) before moving to action planning phase.
|
||||
|
||||
**Timing**: This command runs AFTER `/workflow:brainstorm:synthesis` and BEFORE `/workflow:plan`. It serves as a quality gate to ensure conceptual clarity before detailed task planning.
|
||||
|
||||
**Execution steps**:
|
||||
|
||||
1. **Session Detection & Validation**
|
||||
```bash
|
||||
# Detect active workflow session
|
||||
IF --session parameter provided:
|
||||
session_id = provided session
|
||||
ELSE:
|
||||
CHECK: .workflow/.active-* marker files
|
||||
IF active_session EXISTS:
|
||||
session_id = get_active_session()
|
||||
ELSE:
|
||||
ERROR: "No active workflow session found. Use --session <session-id> or start a session."
|
||||
EXIT
|
||||
|
||||
# Validate brainstorming completion
|
||||
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
|
||||
|
||||
CHECK: brainstorm_dir/synthesis-specification.md
|
||||
IF NOT EXISTS:
|
||||
ERROR: "synthesis-specification.md not found. Run /workflow:brainstorm:synthesis first"
|
||||
EXIT
|
||||
|
||||
CHECK: brainstorm_dir/topic-framework.md
|
||||
IF NOT EXISTS:
|
||||
WARN: "topic-framework.md not found. Verification will be limited."
|
||||
```
|
||||
|
||||
2. **Load Brainstorming Artifacts**
|
||||
```bash
|
||||
# Load primary artifacts
|
||||
synthesis_spec = Read(brainstorm_dir + "/synthesis-specification.md")
|
||||
topic_framework = Read(brainstorm_dir + "/topic-framework.md") # if exists
|
||||
|
||||
# Discover role analyses
|
||||
role_analyses = Glob(brainstorm_dir + "/*/analysis.md")
|
||||
participating_roles = extract_role_names(role_analyses)
|
||||
```
|
||||
|
||||
3. **Ambiguity & Coverage Scan**
|
||||
|
||||
Perform a structured scan using this taxonomy. For each category, mark its status: **Clear** / **Partial** / **Missing**.
|
||||
|
||||
**Requirements Clarity**:
|
||||
- Functional requirements specificity and measurability
|
||||
- Non-functional requirements with quantified targets
|
||||
- Business requirements with success metrics
|
||||
- Acceptance criteria completeness
|
||||
|
||||
**Architecture & Design Clarity**:
|
||||
- Architecture decisions with rationale
|
||||
- Data model completeness (entities, relationships, constraints)
|
||||
- Technology stack justification
|
||||
- Integration points and API contracts
|
||||
|
||||
**User Experience & Interface**:
|
||||
- User journey completeness
|
||||
- Critical interaction flows
|
||||
- Error/edge case handling
|
||||
- Accessibility and localization considerations
|
||||
|
||||
**Implementation Feasibility**:
|
||||
- Team capability vs. required skills
|
||||
- External dependencies and failure modes
|
||||
- Resource constraints (timeline, personnel)
|
||||
- Technical constraints and tradeoffs
|
||||
|
||||
**Risk & Mitigation**:
|
||||
- Critical risks identified
|
||||
- Mitigation strategies defined
|
||||
- Success factors clarity
|
||||
- Monitoring and quality gates
|
||||
|
||||
**Process & Collaboration**:
|
||||
- Role responsibilities and handoffs
|
||||
- Collaboration patterns defined
|
||||
- Timeline and milestone clarity
|
||||
- Dependency management strategy
|
||||
|
||||
**Decision Traceability**:
|
||||
- Controversial points documented
|
||||
- Alternatives considered and rejected
|
||||
- Decision rationale clarity
|
||||
- Consensus vs. dissent tracking
|
||||
|
||||
**Terminology & Consistency**:
|
||||
- Canonical terms defined
|
||||
- Consistent naming across artifacts
|
||||
- No unresolved placeholders (TODO, TBD, ???)
|
||||
|
||||
For each category with **Partial** or **Missing** status, add to candidate question queue unless:
|
||||
- Clarification would not materially change implementation strategy
|
||||
- Information is better deferred to planning phase
|
||||
|
||||
4. **Generate Prioritized Question Queue**
|
||||
|
||||
Internally generate a prioritized queue of candidate questions (maximum 5):
|
||||
|
||||
**Constraints**:
|
||||
- Maximum 5 questions per session
|
||||
- Each question must be answerable with:
|
||||
* Multiple-choice (2-5 mutually exclusive options), OR
|
||||
* Short answer (≤5 words)
|
||||
- Only include questions whose answers materially impact:
|
||||
* Architecture decisions
|
||||
* Data modeling
|
||||
* Task decomposition
|
||||
* Risk mitigation
|
||||
* Success criteria
|
||||
- Ensure category coverage balance
|
||||
- Favor clarifications that reduce downstream rework risk
|
||||
|
||||
**Prioritization Heuristic**:
|
||||
```
|
||||
priority_score = (impact_on_planning * 0.4) +
|
||||
(uncertainty_level * 0.3) +
|
||||
(risk_if_unresolved * 0.3)
|
||||
```
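For example, a candidate question scoring 0.9 on planning impact, 0.7 on uncertainty, and 0.8 on risk-if-unresolved yields (0.9 × 0.4) + (0.7 × 0.3) + (0.8 × 0.3) = 0.81, placing it near the top of the queue.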
|
||||
|
||||
If zero high-impact ambiguities found, proceed to **Step 8** (report success).
|
||||
|
||||
5. **Sequential Question Loop** (Interactive)
|
||||
|
||||
Present **EXACTLY ONE** question at a time:
|
||||
|
||||
**Multiple-choice format**:
|
||||
```markdown
|
||||
**Question {N}/5**: {Question text}
|
||||
|
||||
| Option | Description |
|
||||
|--------|-------------|
|
||||
| A | {Option A description} |
|
||||
| B | {Option B description} |
|
||||
| C | {Option C description} |
|
||||
| D | {Option D description} |
|
||||
| Short | Provide different answer (≤5 words) |
|
||||
```
|
||||
|
||||
**Short-answer format**:
|
||||
```markdown
|
||||
**Question {N}/5**: {Question text}
|
||||
|
||||
Format: Short answer (≤5 words)
|
||||
```
|
||||
|
||||
**Answer Validation**:
|
||||
- Validate answer maps to option or fits ≤5 word constraint
|
||||
- If the answer is ambiguous, ask a quick disambiguation follow-up (does not count as a new question)
|
||||
- Once satisfactory, record in working memory and proceed to next question
|
||||
|
||||
**Stop Conditions**:
|
||||
- All critical ambiguities resolved
|
||||
- User signals completion ("done", "no more", "proceed")
|
||||
- Reached 5 questions
|
||||
|
||||
**Never reveal future queued questions in advance**.
|
||||
|
||||
6. **Integration After Each Answer** (Incremental Update)
|
||||
|
||||
After each accepted answer:
|
||||
|
||||
```bash
# Ensure Clarifications section exists
IF synthesis_spec NOT contains "## Clarifications":
    Insert "## Clarifications" section after "# [Topic]" heading

# Create session subsection
IF NOT contains "### Session YYYY-MM-DD":
    Create "### Session {today's date}" under "## Clarifications"

# Append clarification entry
APPEND: "- Q: {question} → A: {answer}"

# Apply clarification to appropriate section
CASE category:
    Functional Requirements → Update "## Requirements & Acceptance Criteria"
    Architecture → Update "## Key Designs & Decisions" or "## Design Specifications"
    User Experience → Update "## Design Specifications > UI/UX Guidelines"
    Risk → Update "## Risk Assessment & Mitigation"
    Process → Update "## Process & Collaboration Concerns"
    Data Model → Update "## Key Designs & Decisions > Data Model Overview"
    Non-Functional → Update "## Requirements & Acceptance Criteria > Non-Functional Requirements"

# Remove obsolete/contradictory statements
IF clarification invalidates existing statement:
    Replace statement instead of duplicating

# Save immediately
Write(synthesis_specification.md)
```
|
||||
|
||||
7. **Validation After Each Write**
|
||||
|
||||
- [ ] Clarifications section contains exactly one bullet per accepted answer
|
||||
- [ ] Total asked questions ≤ 5
|
||||
- [ ] Updated sections contain no lingering placeholders
|
||||
- [ ] No contradictory earlier statements remain
|
||||
- [ ] Markdown structure valid
|
||||
- [ ] Terminology consistent across all updated sections
|
||||
|
||||
8. **Completion Report**
|
||||
|
||||
After questioning loop ends or early termination:
|
||||
|
||||
```markdown
|
||||
## ✅ Concept Verification Complete
|
||||
|
||||
**Session**: WFS-{session-id}
|
||||
**Questions Asked**: {count}/5
|
||||
**Artifacts Updated**: synthesis-specification.md
|
||||
**Sections Touched**: {list section names}
|
||||
|
||||
### Coverage Summary
|
||||
|
||||
| Category | Status | Notes |
|
||||
|----------|--------|-------|
|
||||
| Requirements Clarity | ✅ Resolved | Acceptance criteria quantified |
|
||||
| Architecture & Design | ✅ Clear | No ambiguities found |
|
||||
| Implementation Feasibility | ⚠️ Deferred | Team training plan to be defined in IMPL_PLAN |
|
||||
| Risk & Mitigation | ✅ Resolved | Critical risks now have mitigation strategies |
|
||||
| ... | ... | ... |
|
||||
|
||||
**Legend**:
|
||||
- ✅ Resolved: Was Partial/Missing, now addressed
|
||||
- ✅ Clear: Already sufficient
|
||||
- ⚠️ Deferred: Low impact, better suited for planning phase
|
||||
- ❌ Outstanding: Still Partial/Missing but question quota reached
|
||||
|
||||
### Recommendations
|
||||
|
||||
- ✅ **PROCEED to /workflow:plan**: Conceptual foundation is clear
|
||||
- OR ⚠️ **Address Outstanding Items First**: {list critical outstanding items}
|
||||
- OR 🔄 **Run /workflow:concept-clarify Again**: If new information available
|
||||
|
||||
### Next Steps
|
||||
```bash
|
||||
/workflow:plan # Generate IMPL_PLAN.md and task.json
|
||||
```
|
||||
```
|
||||
|
||||
9. **Update Session Metadata**
|
||||
|
||||
```json
{
  "phases": {
    "BRAINSTORM": {
      "status": "completed",
      "concept_verification": {
        "completed": true,
        "completed_at": "timestamp",
        "questions_asked": 3,
        "categories_clarified": ["Requirements", "Risk", "Architecture"],
        "outstanding_items": [],
        "recommendation": "PROCEED_TO_PLANNING"
      }
    }
  }
}
```
|
||||
|
||||
## Behavior Rules
|
||||
|
||||
- **If no meaningful ambiguities found**: Report "No critical ambiguities detected. Conceptual foundation is clear." and suggest proceeding to `/workflow:plan`.
|
||||
- **If synthesis-specification.md missing**: Instruct user to run `/workflow:brainstorm:synthesis` first.
|
||||
- **Never exceed 5 questions** (disambiguation retries don't count as new questions).
|
||||
- **Respect user early termination**: Signals like "stop", "done", "proceed" should stop questioning.
|
||||
- **If quota reached with high-impact items unresolved**: Explicitly flag them under "Outstanding" with recommendation to address before planning.
|
||||
- **Avoid speculative tech stack questions** unless absence blocks conceptual clarity.
|
||||
|
||||
## Operating Principles
|
||||
|
||||
### Context Efficiency
|
||||
- **Minimal high-signal tokens**: Focus on actionable clarifications
|
||||
- **Progressive disclosure**: Load artifacts incrementally
|
||||
- **Deterministic results**: Rerunning without changes produces consistent analysis
|
||||
|
||||
### Verification Guidelines
|
||||
- **NEVER hallucinate missing sections**: Report them accurately
|
||||
- **Prioritize high-impact ambiguities**: Focus on what affects planning
|
||||
- **Use examples over exhaustive rules**: Cite specific instances
|
||||
- **Report zero issues gracefully**: Emit success report with coverage statistics
|
||||
- **Update incrementally**: Save after each answer to minimize context loss
|
||||
|
||||
## Context
|
||||
|
||||
{ARGS}
|
||||
@@ -11,6 +11,17 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag
|
||||
|
||||
**Resume Mode**: When called with `--resume-session` flag, skips discovery phase and directly enters TodoWrite generation and agent execution for the specified session.
|
||||
|
||||
## Performance Optimization Strategy
|
||||
|
||||
**Lazy Loading**: Task JSONs read **on-demand** during execution, not upfront. TODO_LIST.md + IMPL_PLAN.md provide metadata for planning.
|
||||
|
||||
| Metric | Before | After | Improvement |
|
||||
|--------|--------|-------|-------------|
|
||||
| **Initial Load** | All task JSONs (~2,300 lines) | TODO_LIST.md only (~650 lines) | **72% reduction** |
|
||||
| **Startup Time** | Seconds | Milliseconds | **~90% faster** |
|
||||
| **Memory** | All tasks | 1-2 tasks | **90% less** |
|
||||
| **Scalability** | 10-20 tasks | 100+ tasks | **5-10x** |
|
||||
|
||||
## Core Rules
|
||||
**Complete entire workflow autonomously without user interruption, using TodoWrite for comprehensive progress tracking.**
|
||||
**Execute all discovered pending tasks sequentially until workflow completion or blocking dependency.**
|
||||
@@ -35,28 +46,21 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag
|
||||
- **Autonomous completion**: **Execute all tasks without user interruption until workflow complete**
|
||||
|
||||
## Flow Control Execution
|
||||
**[FLOW_CONTROL]** marker indicates sequential step execution required for context gathering and preparation. **These steps are executed BY THE AGENT, not by the workflow:execute command.**
|
||||
**[FLOW_CONTROL]** marker indicates task JSON contains `flow_control.pre_analysis` steps for context preparation.
|
||||
|
||||
### Flow Control Rules
|
||||
1. **Auto-trigger**: When `task.flow_control.pre_analysis` array exists in task JSON, agents execute these steps
|
||||
2. **Sequential Processing**: Agents execute steps in order, accumulating context including artifacts
|
||||
3. **Variable Passing**: Agents use `[variable_name]` syntax to reference step outputs including artifact content
|
||||
4. **Error Handling**: Agents follow step-specific error strategies (`fail`, `skip_optional`, `retry_once`)
|
||||
5. **Artifacts Priority**: When artifacts exist in task.context.artifacts, load synthesis specifications first
|
||||
### Orchestrator Responsibility
|
||||
- Pass complete task JSON to agent (including `flow_control` block)
|
||||
- Provide session paths for artifact access
|
||||
- Monitor agent completion
|
||||
|
||||
### Execution Pattern
|
||||
```
Step 1: load_dependencies → dependency_context
Step 2: analyze_patterns [dependency_context] → pattern_analysis
Step 3: implement_solution [pattern_analysis] [dependency_context] → implementation
```
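
A simplified sketch of how an agent might chain these steps, substituting `[variable_name]` references with earlier outputs and honoring `on_error`. It assumes a `runCommand` helper and handles only the single-`command` form (the `commands` array form works the same way, one entry at a time):

```javascript
async function runPreAnalysis(steps, runCommand) {
  const context = {}; // accumulated step outputs
  for (const step of steps) {
    // Replace [variable_name] references with previously stored outputs
    const command = step.command.replace(/\[(\w+)\]/g, (_, name) => context[name] ?? "");
    try {
      context[step.output_to] = await runCommand(command);
    } catch (err) {
      if (step.on_error === "fail") throw err;
      if (step.on_error === "retry_once") {
        context[step.output_to] = await runCommand(command);
      }
      // "skip_optional": leave the variable unset and continue
    }
  }
  return context;
}
```
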
### Agent Responsibility
|
||||
- Parse `flow_control.pre_analysis` array from JSON
|
||||
- Execute steps sequentially with variable substitution
|
||||
- Accumulate context from artifacts and dependencies
|
||||
- Follow error handling per `step.on_error`
|
||||
- Complete implementation using accumulated context
|
||||
|
||||
### Context Accumulation Process (Executed by Agents)
|
||||
- **Load Artifacts**: Agents retrieve synthesis specifications and brainstorming outputs from `context.artifacts`
|
||||
- **Load Dependencies**: Agents retrieve summaries from `context.depends_on` tasks
|
||||
- **Execute Analysis**: Agents run CLI tools with accumulated context including artifacts
|
||||
- **Prepare Implementation**: Agents build comprehensive context for implementation
|
||||
- **Continue Implementation**: Agents use all accumulated context including artifacts for task execution
|
||||
**Orchestrator does NOT execute flow control steps - Agent interprets and executes them from JSON.**
|
||||
|
||||
## Execution Lifecycle
|
||||
|
||||
@@ -70,40 +74,69 @@ Step 3: implement_solution [pattern_analysis] [dependency_context] → implement
|
||||
### Phase 1: Discovery (Normal Mode Only)
|
||||
1. **Check Active Sessions**: Find `.workflow/.active-*` markers
|
||||
2. **Select Session**: If multiple found, prompt user selection
|
||||
3. **Load Session State**: Read `workflow-session.json` and `IMPL_PLAN.md`
|
||||
4. **Scan Tasks**: Analyze `.task/*.json` files for ready tasks
|
||||
3. **Load Session Metadata**: Read `workflow-session.json` ONLY (minimal context)
|
||||
4. **DO NOT read task JSONs yet** - defer until execution phase
|
||||
|
||||
**Note**: In resume mode, this phase is completely skipped.
|
||||
|
||||
### Phase 2: Analysis (Normal Mode Only)
|
||||
1. **Dependency Resolution**: Build execution order based on `depends_on`
|
||||
2. **Status Validation**: Filter tasks with `status: "pending"` and met dependencies
|
||||
3. **Agent Assignment**: Determine agent type from `meta.agent` or `meta.type`
|
||||
4. **Context Preparation**: Load dependency summaries and inherited context
|
||||
### Phase 2: Planning Document Analysis (Normal Mode Only)
|
||||
**Optimized to avoid reading all task JSONs upfront**
|
||||
|
||||
1. **Read IMPL_PLAN.md**: Understand overall strategy, task breakdown summary, dependencies
|
||||
2. **Read TODO_LIST.md**: Get current task statuses and execution progress
|
||||
3. **Extract Task Metadata**: Parse task IDs, titles, and dependency relationships from TODO_LIST.md
|
||||
4. **Build Execution Queue**: Determine ready tasks based on TODO_LIST.md status and dependencies
|
||||
|
||||
**Key Optimization**: Use IMPL_PLAN.md and TODO_LIST.md as primary sources instead of reading all task JSONs
|
||||
|
||||
**Note**: In resume mode, this phase is also skipped as session analysis was already completed by `/workflow:status`.
|
||||
|
||||
### Phase 3: Planning (Resume Mode Entry Point)
|
||||
### Phase 3: TodoWrite Generation (Resume Mode Entry Point)
|
||||
**This is where resume mode directly enters after skipping Phases 1 & 2**
|
||||
|
||||
1. **Create TodoWrite List**: Generate task list with status markers from session state
|
||||
2. **Mark Initial Status**: Set first pending task as `in_progress`
|
||||
1. **Create TodoWrite List**: Generate task list from TODO_LIST.md (not from task JSONs)
|
||||
- Parse TODO_LIST.md to extract all tasks with current statuses
|
||||
- Identify first pending task with met dependencies
|
||||
- Generate comprehensive TodoWrite covering entire workflow
|
||||
2. **Mark Initial Status**: Set first ready task as `in_progress` in TodoWrite
|
||||
3. **Prepare Session Context**: Inject workflow paths for agent use (using provided session-id)
|
||||
4. **Prepare Complete Task JSON**: Include pre_analysis and flow control steps for agent consumption
|
||||
5. **Validate Prerequisites**: Ensure all required context is available from existing session
|
||||
4. **Validate Prerequisites**: Ensure IMPL_PLAN.md and TODO_LIST.md exist and are valid
|
||||
|
||||
**Resume Mode Behavior**:
|
||||
- Load existing session state directly from `.workflow/{session-id}/`
|
||||
- Use session's task files and summaries without discovery
|
||||
- Generate TodoWrite from current session progress
|
||||
- Proceed immediately to agent execution
|
||||
- Load existing TODO_LIST.md directly from `.workflow/{session-id}/`
|
||||
- Extract current progress from TODO_LIST.md
|
||||
- Generate TodoWrite from TODO_LIST.md state
|
||||
- Proceed immediately to agent execution (Phase 4)
|
||||
|
||||
### Phase 4: Execution
|
||||
1. **Pass Task with Flow Control**: Include complete task JSON with `pre_analysis` steps for agent execution
|
||||
2. **Launch Agent**: Invoke specialized agent with complete context including flow control steps
|
||||
3. **Monitor Progress**: Track agent execution and handle errors without user interruption
|
||||
4. **Collect Results**: Gather implementation results and outputs
|
||||
5. **Continue Workflow**: Automatically proceed to next pending task until completion
|
||||
### Phase 4: Execution (Lazy Task Loading)
|
||||
**Key Optimization**: Read task JSON **only when needed** for execution
|
||||
|
||||
1. **Identify Next Task**: From TodoWrite, get the next `in_progress` task ID
|
||||
2. **Load Task JSON on Demand**: Read `.task/{task-id}.json` for current task ONLY
|
||||
3. **Validate Task Structure**: Ensure all 5 required fields exist (id, title, status, meta, context, flow_control)
|
||||
4. **Pass Task with Flow Control**: Include complete task JSON with `pre_analysis` steps for agent execution
|
||||
5. **Launch Agent**: Invoke specialized agent with complete context including flow control steps
|
||||
6. **Monitor Progress**: Track agent execution and handle errors without user interruption
|
||||
7. **Collect Results**: Gather implementation results and outputs
|
||||
8. **Update TODO_LIST.md**: Mark current task as completed in TODO_LIST.md
|
||||
9. **Continue Workflow**: Identify next pending task from TODO_LIST.md and repeat from step 1
|
||||
|
||||
**Execution Loop Pattern**:
|
||||
```
while (TODO_LIST.md has pending tasks) {
  next_task_id = getTodoWriteInProgressTask()
  task_json = Read(.workflow/{session}/.task/{next_task_id}.json)  // Lazy load
  executeTaskWithAgent(task_json)
  updateTodoListMarkCompleted(next_task_id)
  advanceTodoWriteToNextTask()
}
```
|
||||
|
||||
**Benefits**:
|
||||
- Reduces initial context loading by ~90%
|
||||
- Only reads task JSON when actually executing
|
||||
- Scales better for workflows with many tasks
|
||||
- Faster startup time for workflow execution
|
||||
|
||||
### Phase 5: Completion
|
||||
1. **Update Task Status**: Mark completed tasks in JSON files
|
||||
@@ -115,27 +148,33 @@ Step 3: implement_solution [pattern_analysis] [dependency_context] → implement
|
||||
|
||||
## Task Discovery & Queue Building
|
||||
|
||||
### Session Discovery Process (Normal Mode)
|
||||
### Session Discovery Process (Normal Mode - Optimized)
|
||||
```
|
||||
├── Check for .active-* markers in .workflow/
|
||||
├── If multiple active sessions found → Prompt user to select
|
||||
├── Locate selected session's workflow folder
|
||||
├── Load selected session's workflow-session.json and IMPL_PLAN.md
|
||||
├── Scan selected session's .task/ directory for task JSON files
|
||||
├── Analyze task statuses and dependencies for selected session only
|
||||
└── Build execution queue of ready tasks from selected session
|
||||
├── Load session metadata: workflow-session.json (minimal context)
|
||||
├── Read IMPL_PLAN.md (strategy overview and task summary)
|
||||
├── Read TODO_LIST.md (current task statuses and dependencies)
|
||||
├── Parse TODO_LIST.md to extract task metadata (NO JSON loading)
|
||||
├── Build execution queue from TODO_LIST.md
|
||||
└── Generate TodoWrite from TODO_LIST.md state
|
||||
```
|
||||
|
||||
### Resume Mode Process (--resume-session flag)
|
||||
**Key Change**: Task JSONs are NOT loaded during discovery - they are loaded lazily during execution
|
||||
|
||||
### Resume Mode Process (--resume-session flag - Optimized)
|
||||
```
|
||||
├── Use provided session-id directly (skip discovery)
|
||||
├── Validate .workflow/{session-id}/ directory exists
|
||||
├── Load session's workflow-session.json and IMPL_PLAN.md directly
|
||||
├── Scan session's .task/ directory for task JSON files
|
||||
├── Use existing task statuses and dependencies (no re-analysis needed)
|
||||
└── Build execution queue from session state (prioritize pending/in-progress tasks)
|
||||
├── Read TODO_LIST.md for current progress
|
||||
├── Parse TODO_LIST.md to extract task IDs and statuses
|
||||
├── Generate TodoWrite from TODO_LIST.md (prioritize in-progress/pending tasks)
|
||||
└── Enter Phase 4 (Execution) with lazy task JSON loading
|
||||
```
|
||||
|
||||
**Key Change**: Completely skip IMPL_PLAN.md and task JSON loading - use TODO_LIST.md only
|
||||
|
||||
### Task Status Logic
|
||||
```
pending + dependencies_met → executable
@@ -143,6 +182,122 @@ completed → skip
blocked → skip until dependencies clear
```
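
Expressed as code, the same status logic might look like this (the task shape follows the `parseTodoListTasks` output shown later; `dependencies` as a list of task IDs is an assumption):

```javascript
function isExecutable(task, completedIds) {
  if (task.status !== "pending") return false; // completed and blocked tasks are skipped
  return (task.dependencies ?? []).every(id => completedIds.has(id));
}
```
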
## Batch Execution with Dependency Graph
|
||||
|
||||
### Parallel Execution Algorithm
|
||||
**Core principle**: Execute independent tasks concurrently in batches based on dependency graph.
|
||||
|
||||
#### Algorithm Steps (Optimized with Lazy Loading)
|
||||
```javascript
async function executeBatchWorkflow(sessionId) {
  // 1. Build dependency graph from TODO_LIST.md (NOT task JSONs)
  const graph = buildDependencyGraphFromTodoList(`.workflow/${sessionId}/TODO_LIST.md`);

  // 2. Process batches until graph is empty
  while (!graph.isEmpty()) {
    // 3. Identify current batch (tasks with in-degree = 0)
    const batch = graph.getNodesWithInDegreeZero();

    // 4. Load task JSONs ONLY for current batch (lazy loading)
    const batchTaskJsons = batch.map(taskId =>
      Read(`.workflow/${sessionId}/.task/${taskId}.json`)
    );

    // 5. Check for parallel execution opportunities
    const parallelGroups = groupByExecutionGroup(batchTaskJsons);

    // 6. Execute batch concurrently
    await Promise.all(
      parallelGroups.map(group => executeBatch(group))
    );

    // 7. Update graph: remove completed tasks and their edges
    graph.removeNodes(batch);

    // 8. Update TODO_LIST.md and TodoWrite to reflect completed batch
    updateTodoListAfterBatch(batch);
    updateTodoWriteAfterBatch(batch);
  }

  // 9. All tasks complete - auto-complete session
  SlashCommand("/workflow:session:complete");
}

function buildDependencyGraphFromTodoList(todoListPath) {
  const todoContent = Read(todoListPath);
  const tasks = parseTodoListTasks(todoContent);
  const graph = new DirectedGraph();

  tasks.forEach(task => {
    graph.addNode(task.id, { id: task.id, title: task.title, status: task.status });
    task.dependencies?.forEach(depId => graph.addEdge(depId, task.id));
  });

  return graph;
}

function parseTodoListTasks(todoContent) {
  // Parse: - [ ] **IMPL-001**: Task title → [📋](./.task/IMPL-001.json)
  const taskPattern = /- \[([ x])\] \*\*([A-Z]+-\d+(?:\.\d+)?)\*\*: (.+?) →/g;
  const tasks = [];
  let match;

  while ((match = taskPattern.exec(todoContent)) !== null) {
    tasks.push({
      status: match[1] === 'x' ? 'completed' : 'pending',
      id: match[2],
      title: match[3]
    });
  }

  return tasks;
}

function groupByExecutionGroup(tasks) {
  const groups = {};

  tasks.forEach(task => {
    const groupId = task.meta.execution_group || task.id;
    if (!groups[groupId]) groups[groupId] = [];
    groups[groupId].push(task);
  });

  return Object.values(groups);
}

async function executeBatch(tasks) {
  // Execute all tasks in batch concurrently
  return Promise.all(
    tasks.map(task => executeTask(task))
  );
}
```
|
||||
|
||||
#### Execution Group Rules
|
||||
1. **Same `execution_group` ID** → Execute in parallel (independent, different contexts)
|
||||
2. **No `execution_group` (null)** → Execute sequentially (has dependencies)
|
||||
3. **Different `execution_group` IDs** → Execute in parallel (independent batches)
|
||||
4. **Same `context_signature`** → Should have been merged (warning if not)
|
||||
|
||||
#### Parallel Execution Example
|
||||
```
|
||||
Batch 1 (no dependencies):
|
||||
- IMPL-1.1 (execution_group: "parallel-auth-api") → Agent 1
|
||||
- IMPL-1.2 (execution_group: "parallel-ui-comp") → Agent 2
|
||||
- IMPL-1.3 (execution_group: "parallel-db-schema") → Agent 3
|
||||
|
||||
Wait for Batch 1 completion...
|
||||
|
||||
Batch 2 (depends on Batch 1):
|
||||
- IMPL-2.1 (execution_group: null, depends_on: [IMPL-1.1, IMPL-1.2]) → Agent 1
|
||||
|
||||
Wait for Batch 2 completion...
|
||||
|
||||
Batch 3 (independent of Batch 2):
|
||||
- IMPL-3.1 (execution_group: "parallel-tests-1") → Agent 1
|
||||
- IMPL-3.2 (execution_group: "parallel-tests-2") → Agent 2
|
||||
```
|
||||
|
||||
## TodoWrite Coordination
|
||||
**Comprehensive workflow tracking** with immediate status updates throughout entire execution without user interruption:
|
||||
|
||||
@@ -150,8 +305,11 @@ blocked → skip until dependencies clear
|
||||
1. **Initial Creation**: Generate TodoWrite from discovered pending tasks for entire workflow
|
||||
- **Normal Mode**: Create from discovery results
|
||||
- **Resume Mode**: Create from existing session state and current progress
|
||||
2. **Single In-Progress**: Mark ONLY ONE task as `in_progress` at a time
|
||||
3. **Immediate Updates**: Update status after each task completion without user interruption
|
||||
2. **Parallel Task Support**:
|
||||
- **Single-task execution**: Mark ONLY ONE task as `in_progress` at a time
|
||||
- **Batch execution**: Mark ALL tasks in current batch as `in_progress` simultaneously
|
||||
- **Execution group indicator**: Show `[execution_group: group-id]` for parallel tasks
|
||||
3. **Immediate Updates**: Update status after each task/batch completion without user interruption
|
||||
4. **Status Synchronization**: Sync with JSON task files after updates
|
||||
5. **Continuous Tracking**: Maintain TodoWrite throughout entire workflow execution until completion
|
||||
|
||||
@@ -167,36 +325,71 @@ blocked → skip until dependencies clear
|
||||
**Use Claude Code's built-in TodoWrite tool** to track workflow progress in real-time:
|
||||
|
||||
```javascript
|
||||
// Create initial todo list from discovered pending tasks
|
||||
// Example 1: Sequential execution (traditional)
|
||||
TodoWrite({
|
||||
todos: [
|
||||
{
|
||||
content: "Execute IMPL-1.1: Design auth schema [code-developer] [FLOW_CONTROL]",
|
||||
status: "pending",
|
||||
status: "in_progress", // Single task in progress
|
||||
activeForm: "Executing IMPL-1.1: Design auth schema"
|
||||
},
|
||||
{
|
||||
content: "Execute IMPL-1.2: Implement auth logic [code-developer] [FLOW_CONTROL]",
|
||||
status: "pending",
|
||||
activeForm: "Executing IMPL-1.2: Implement auth logic"
|
||||
},
|
||||
{
|
||||
content: "Execute TEST-FIX-1: Validate implementation tests [test-fix-agent]",
|
||||
status: "pending",
|
||||
activeForm: "Executing TEST-FIX-1: Validate implementation tests"
|
||||
}
|
||||
]
|
||||
});
|
||||
|
||||
// Update status as tasks progress - ONLY ONE task should be in_progress at a time
|
||||
// Example 2: Batch execution (parallel tasks with execution_group)
|
||||
TodoWrite({
|
||||
todos: [
|
||||
{
|
||||
content: "Execute IMPL-1.1: Design auth schema [code-developer] [FLOW_CONTROL]",
|
||||
status: "in_progress", // Mark current task as in_progress
|
||||
activeForm: "Executing IMPL-1.1: Design auth schema"
|
||||
content: "Execute IMPL-1.1: Build Auth API [code-developer] [execution_group: parallel-auth-api]",
|
||||
status: "in_progress", // Batch task 1
|
||||
activeForm: "Executing IMPL-1.1: Build Auth API"
|
||||
},
|
||||
// ... other tasks remain pending
|
||||
{
|
||||
content: "Execute IMPL-1.2: Build User UI [code-developer] [execution_group: parallel-ui-comp]",
|
||||
status: "in_progress", // Batch task 2 (running concurrently)
|
||||
activeForm: "Executing IMPL-1.2: Build User UI"
|
||||
},
|
||||
{
|
||||
content: "Execute IMPL-1.3: Setup Database [code-developer] [execution_group: parallel-db-schema]",
|
||||
status: "in_progress", // Batch task 3 (running concurrently)
|
||||
activeForm: "Executing IMPL-1.3: Setup Database"
|
||||
},
|
||||
{
|
||||
content: "Execute IMPL-2.1: Integration Tests [test-fix-agent] [depends_on: IMPL-1.1, IMPL-1.2, IMPL-1.3]",
|
||||
status: "pending", // Next batch (waits for current batch completion)
|
||||
activeForm: "Executing IMPL-2.1: Integration Tests"
|
||||
}
|
||||
]
|
||||
});
|
||||
|
||||
// Example 3: After batch completion
|
||||
TodoWrite({
|
||||
todos: [
|
||||
{
|
||||
content: "Execute IMPL-1.1: Build Auth API [code-developer] [execution_group: parallel-auth-api]",
|
||||
status: "completed", // Batch completed
|
||||
activeForm: "Executing IMPL-1.1: Build Auth API"
|
||||
},
|
||||
{
|
||||
content: "Execute IMPL-1.2: Build User UI [code-developer] [execution_group: parallel-ui-comp]",
|
||||
status: "completed", // Batch completed
|
||||
activeForm: "Executing IMPL-1.2: Build User UI"
|
||||
},
|
||||
{
|
||||
content: "Execute IMPL-1.3: Setup Database [code-developer] [execution_group: parallel-db-schema]",
|
||||
status: "completed", // Batch completed
|
||||
activeForm: "Executing IMPL-1.3: Setup Database"
|
||||
},
|
||||
{
|
||||
content: "Execute IMPL-2.1: Integration Tests [test-fix-agent]",
|
||||
status: "in_progress", // Next batch started
|
||||
activeForm: "Executing IMPL-2.1: Integration Tests"
|
||||
}
|
||||
]
|
||||
});
|
||||
```
|
||||
@@ -211,18 +404,19 @@ TodoWrite({
|
||||
- **Workflow Completion Check**: When all tasks marked `completed`, auto-call `/workflow:session:complete`
|
||||
|
||||
#### TODO_LIST.md Update Timing
|
||||
- **Before Agent Launch**: Update TODO_LIST.md to mark task as `in_progress` (⚠️)
|
||||
- **After Task Complete**: Update TODO_LIST.md to mark as `completed` (✅), advance to next
|
||||
- **On Error**: Keep as `in_progress` in TODO_LIST.md, add error note
|
||||
- **Workflow Complete**: When all tasks completed, call `/workflow:session:complete`
|
||||
- **Session End**: Sync all TODO_LIST.md statuses with JSON task files
|
||||
**Single source of truth for task status** - enables lazy loading by providing task metadata without reading JSONs
|
||||
|
||||
- **Before Agent Launch**: Mark task as `in_progress` (⚠️)
|
||||
- **After Task Complete**: Mark as `completed` (✅), advance to next
|
||||
- **On Error**: Keep as `in_progress`, add error note
|
||||
- **Workflow Complete**: Call `/workflow:session:complete`
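
A minimal sketch of the `updateTodoListMarkCompleted` step used by the execution loop, assuming the `- [ ] **TASK-ID**: ...` line format shown elsewhere in this document (the explicit path parameter is added here for self-containment):

```javascript
const fs = require("fs");

function updateTodoListMarkCompleted(todoListPath, taskId) {
  const content = fs.readFileSync(todoListPath, "utf8");
  // Flip "- [ ] **IMPL-001**:" to "- [x] **IMPL-001**:" for the completed task only
  const pattern = new RegExp(`- \\[ \\] \\*\\*${taskId}\\*\\*:`);
  fs.writeFileSync(todoListPath, content.replace(pattern, `- [x] **${taskId}**:`));
}
```
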
### 3. Agent Context Management
|
||||
**Comprehensive context preparation** for autonomous agent execution:
|
||||
|
||||
#### Context Sources (Priority Order)
|
||||
1. **Complete Task JSON**: Full task definition including all fields and artifacts
|
||||
2. **Artifacts Context**: Brainstorming outputs and synthesis specifications from task.context.artifacts
|
||||
2. **Artifacts Context**: Brainstorming outputs and role analyses from task.context.artifacts
|
||||
3. **Flow Control Context**: Accumulated outputs from pre_analysis steps (including artifact loading)
|
||||
4. **Dependency Summaries**: Previous task completion summaries
|
||||
5. **Session Context**: Workflow paths and session metadata
|
||||
@@ -243,10 +437,10 @@ TodoWrite({
|
||||
{
|
||||
"task": { /* Complete task JSON with artifacts array */ },
|
||||
"artifacts": {
|
||||
"synthesis_specification": { "path": ".workflow/WFS-session/.brainstorming/synthesis-specification.md", "priority": "highest" },
|
||||
"topic_framework": { "path": ".workflow/WFS-session/.brainstorming/topic-framework.md", "priority": "medium" },
|
||||
"role_analyses": [ /* Individual role analysis files */ ],
|
||||
"available_artifacts": [ /* All detected brainstorming artifacts */ ]
|
||||
"synthesis_specification": { "path": "{{from context-package.json → brainstorm_artifacts.synthesis_output.path}}", "priority": "highest" },
|
||||
"guidance_specification": { "path": "{{from context-package.json → brainstorm_artifacts.guidance_specification.path}}", "priority": "medium" },
|
||||
"role_analyses": [ /* From context-package.json → brainstorm_artifacts.role_analyses[] */ ],
|
||||
"conflict_resolution": { "path": "{{from context-package.json → brainstorm_artifacts.conflict_resolution.path}}", "conditional": true }
|
||||
},
|
||||
"flow_context": {
|
||||
"step_outputs": {
|
||||
@@ -258,7 +452,7 @@ TodoWrite({
|
||||
},
|
||||
"session": {
|
||||
"workflow_dir": ".workflow/WFS-session/",
|
||||
"brainstorming_dir": ".workflow/WFS-session/.brainstorming/",
|
||||
"context_package_path": ".workflow/WFS-session/.process/context-package.json",
|
||||
"todo_list_path": ".workflow/WFS-session/TODO_LIST.md",
|
||||
"summaries_dir": ".workflow/WFS-session/.summaries/",
|
||||
"task_json_path": ".workflow/WFS-session/.task/IMPL-1.1.json"
|
||||
@@ -270,10 +464,10 @@ TodoWrite({
|
||||
|
||||
#### Context Validation Rules
|
||||
- **Task JSON Complete**: All six fields present and valid, including artifacts array in context
|
||||
- **Artifacts Available**: Synthesis specifications and brainstorming outputs accessible
|
||||
- **Artifacts Available**: All artifacts loaded from context-package.json
|
||||
- **Flow Control Ready**: All pre_analysis steps completed including artifact loading steps
|
||||
- **Dependencies Loaded**: All depends_on summaries available
|
||||
- **Session Paths Valid**: All workflow paths exist and accessible, including .brainstorming directory
|
||||
- **Session Paths Valid**: All workflow paths exist and accessible (verified via context-package.json)
|
||||
- **Agent Assignment**: Valid agent type specified in meta.agent
|
||||
|
||||
### 4. Agent Execution Pattern
|
||||
@@ -282,82 +476,40 @@ TodoWrite({
|
||||
#### Agent Prompt Template
|
||||
```bash
|
||||
Task(subagent_type="{meta.agent}",
|
||||
prompt="**TASK EXECUTION WITH FULL JSON LOADING**
|
||||
prompt="**EXECUTE TASK FROM JSON**
|
||||
|
||||
## STEP 1: Load Complete Task JSON
|
||||
**MANDATORY**: First load the complete task JSON from: {session.task_json_path}
|
||||
## Task JSON Location
|
||||
{session.task_json_path}
|
||||
|
||||
cat {session.task_json_path}
|
||||
## Instructions
|
||||
1. **Load Complete Task JSON**: Read and validate all fields (id, title, status, meta, context, flow_control)
|
||||
2. **Execute Flow Control**: If `flow_control.pre_analysis` exists, execute steps sequentially:
|
||||
   - Load artifacts (role analysis documents and other brainstorming outputs) using commands in each step
|
||||
- Accumulate context from step outputs using variable substitution [variable_name]
|
||||
- Handle errors per step.on_error (skip_optional | fail | retry_once)
|
||||
3. **Implement Solution**: Follow `flow_control.implementation_approach` using accumulated context
|
||||
4. **Complete Task**:
|
||||
- Update task status: `jq '.status = \"completed\"' {session.task_json_path} > temp.json && mv temp.json {session.task_json_path}`
|
||||
- Update TODO_LIST.md: Mark task as [x] completed in {session.todo_list_path}
|
||||
- Generate summary: {session.summaries_dir}/{task.id}-summary.md
|
||||
- Check workflow completion and call `/workflow:session:complete` if all tasks done
|
||||
|
||||
**CRITICAL**: Validate all six required fields are present:
|
||||
- id, title, status, meta, context, flow_control
|
||||
## Context Sources (All from JSON)
|
||||
- Requirements: `context.requirements`
|
||||
- Focus Paths: `context.focus_paths`
|
||||
- Acceptance: `context.acceptance`
|
||||
- Artifacts: `context.artifacts` (synthesis specs, brainstorming outputs)
|
||||
- Dependencies: `context.depends_on`
|
||||
- Target Files: `flow_control.target_files`
|
||||
|
||||
## STEP 2: Task Definition (From Loaded JSON)
|
||||
**ID**: Use id field from JSON
|
||||
**Title**: Use title field from JSON
|
||||
**Type**: Use meta.type field from JSON
|
||||
**Agent**: Use meta.agent field from JSON
|
||||
**Status**: Verify status is pending or active
|
||||
## Session Paths
|
||||
- Workflow Dir: {session.workflow_dir}
|
||||
- TODO List: {session.todo_list_path}
|
||||
- Summaries: {session.summaries_dir}
|
||||
- Flow Context: {flow_context.step_outputs}
|
||||
|
||||
## STEP 3: Flow Control Execution (if flow_control.pre_analysis exists)
|
||||
**AGENT RESPONSIBILITY**: Execute pre_analysis steps sequentially from loaded JSON:
|
||||
|
||||
**PRIORITY: Artifact Loading Steps First**
|
||||
1. **Load Synthesis Specification** (if present): Priority artifact loading for consolidated design
|
||||
2. **Load Individual Artifacts** (fallback): Load role-specific brainstorming outputs if synthesis unavailable
|
||||
3. **Execute Remaining Steps**: Continue with other pre_analysis steps
|
||||
|
||||
For each step in flow_control.pre_analysis array:
|
||||
1. Execute step.command/commands with variable substitution (support both single command and commands array)
|
||||
2. Store output to step.output_to variable
|
||||
3. Handle errors per step.on_error strategy (skip_optional, fail, retry_once)
|
||||
4. Pass accumulated variables to next step including artifact context
|
||||
|
||||
**Special Artifact Loading Commands**:
|
||||
- Use `bash(ls path 2>/dev/null || echo 'file not found')` for artifact existence checks
|
||||
- Use `Read(path)` for loading artifact content
|
||||
- Use `find` commands for discovering multiple artifact files
|
||||
- Reference artifacts in subsequent steps using output variables: [synthesis_specification], [individual_artifacts]
|
||||
|
||||
## STEP 4: Implementation Context (From JSON context field)
|
||||
**Requirements**: Use context.requirements array from JSON
|
||||
**Focus Paths**: Use context.focus_paths array from JSON
|
||||
**Acceptance Criteria**: Use context.acceptance array from JSON
|
||||
**Dependencies**: Use context.depends_on array from JSON
|
||||
**Parent Context**: Use context.inherited object from JSON
|
||||
**Artifacts**: Use context.artifacts array from JSON (synthesis specifications, brainstorming outputs)
|
||||
**Target Files**: Use flow_control.target_files array from JSON
|
||||
**Implementation Approach**: Use flow_control.implementation_approach object from JSON (with artifact integration)
|
||||
|
||||
## STEP 5: Session Context (Provided by workflow:execute)
|
||||
**Workflow Directory**: {session.workflow_dir}
|
||||
**TODO List Path**: {session.todo_list_path}
|
||||
**Summaries Directory**: {session.summaries_dir}
|
||||
**Task JSON Path**: {session.task_json_path}
|
||||
**Flow Context**: {flow_context.step_outputs}
|
||||
|
||||
## STEP 6: Agent Completion Requirements
|
||||
1. **Load Task JSON**: Read and validate complete task structure
|
||||
2. **Execute Flow Control**: Run all pre_analysis steps if present
|
||||
3. **Implement Solution**: Follow implementation_approach from JSON
|
||||
4. **Update Progress**: Mark task status in JSON as completed
|
||||
5. **Update TODO List**: Update TODO_LIST.md at provided path
|
||||
6. **Generate Summary**: Create completion summary in summaries directory
|
||||
7. **Check Workflow Complete**: After task completion, check if all workflow tasks done
|
||||
8. **Auto-Complete Session**: If all tasks completed, call SlashCommand(\"/workflow:session:complete\")
|
||||
|
||||
**JSON UPDATE COMMAND**:
|
||||
Update task status to completed using jq:
|
||||
jq '.status = \"completed\"' {session.task_json_path} > temp.json && mv temp.json {session.task_json_path}
|
||||
|
||||
**WORKFLOW COMPLETION CHECK**:
|
||||
After updating task status, check if workflow is complete:
|
||||
total_tasks=\$(find .workflow/*/\.task/ -name "*.json" -type f 2>/dev/null | wc -l)
|
||||
completed_tasks=\$(find .workflow/*/\.summaries/ -name "*.md" -type f 2>/dev/null | wc -l)
|
||||
if [ \$total_tasks -eq \$completed_tasks ]; then
|
||||
SlashCommand(command=\"/workflow:session:complete\")
|
||||
fi"),
|
||||
description="Execute task with full JSON loading and validation")
|
||||
**Complete JSON structure is authoritative - load and follow it exactly.**"),
|
||||
description="Execute task: {task.id}")
|
||||
```
|
||||
|
||||
#### Agent JSON Loading Specification
|
||||
@@ -381,7 +533,7 @@ Task(subagent_type="{meta.agent}",
|
||||
"status": "pending|active|completed|blocked",
|
||||
"meta": {
|
||||
"type": "feature|bugfix|refactor|test-gen|test-fix|docs",
|
||||
"agent": "@code-developer|@test-fix-agent|@general-purpose"
|
||||
"agent": "@code-developer|@test-fix-agent|@universal-executor"
|
||||
},
|
||||
"context": {
|
||||
"requirements": ["req1", "req2"],
|
||||
@@ -392,15 +544,16 @@ Task(subagent_type="{meta.agent}",
|
||||
"artifacts": [
|
||||
{
|
||||
"type": "synthesis_specification",
|
||||
"source": "brainstorm_synthesis",
|
||||
"path": ".workflow/WFS-[session]/.brainstorming/synthesis-specification.md",
|
||||
"source": "context-package.json → brainstorm_artifacts.synthesis_output",
|
||||
"path": "{{loaded dynamically from context-package.json}}",
|
||||
"priority": "highest",
|
||||
"contains": "complete_integrated_specification"
|
||||
},
|
||||
{
|
||||
"type": "individual_role_analysis",
|
||||
"source": "brainstorm_roles",
|
||||
"path": ".workflow/WFS-[session]/.brainstorming/[role]/analysis.md",
|
||||
"source": "context-package.json → brainstorm_artifacts.role_analyses[]",
|
||||
"path": "{{loaded dynamically from context-package.json}}",
|
||||
"note": "Supports analysis*.md pattern (analysis.md, analysis-01.md, analysis-api.md, etc.)",
|
||||
"priority": "low",
|
||||
"contains": "role_specific_analysis_fallback"
|
||||
}
|
||||
@@ -410,10 +563,11 @@ Task(subagent_type="{meta.agent}",
|
||||
"pre_analysis": [
|
||||
{
|
||||
"step": "load_synthesis_specification",
|
||||
"action": "Load consolidated synthesis specification from brainstorming",
|
||||
"action": "Load synthesis specification from context-package.json",
|
||||
"commands": [
|
||||
"bash(ls .workflow/WFS-[session]/.brainstorming/synthesis-specification.md 2>/dev/null || echo 'synthesis specification not found')",
|
||||
"Read(.workflow/WFS-[session]/.brainstorming/synthesis-specification.md)"
|
||||
"Read(.workflow/WFS-[session]/.process/context-package.json)",
|
||||
"Extract(brainstorm_artifacts.synthesis_output.path)",
|
||||
"Read(extracted path)"
|
||||
],
|
||||
"output_to": "synthesis_specification",
|
||||
"on_error": "skip_optional"
|
||||
@@ -428,16 +582,16 @@ Task(subagent_type="{meta.agent}",
|
||||
"implementation_approach": [
|
||||
{
|
||||
"step": 1,
|
||||
"title": "Implement task following synthesis specification",
|
||||
"description": "Implement '[title]' following synthesis specification. PRIORITY: Use synthesis-specification.md as primary requirement source. When implementation needs technical details (e.g., API schemas, caching configs, design tokens), refer to artifacts[] for detailed specifications from original role analyses.",
|
||||
"title": "Implement task following role analyses",
|
||||
"description": "Implement '[title]' following role analyses. PRIORITY: Use role analysis documents as primary requirement source. When implementation needs technical details (e.g., API schemas, caching configs, design tokens), refer to artifacts[] for detailed specifications from original role analyses.",
|
||||
"modification_points": [
|
||||
"Apply consolidated requirements from synthesis-specification.md",
|
||||
"Apply consolidated requirements from role analysis documents",
|
||||
"Follow technical guidelines from synthesis",
|
||||
"Consult artifacts for implementation details when needed",
|
||||
"Integrate with existing patterns"
|
||||
],
|
||||
"logic_flow": [
|
||||
"Load synthesis specification",
|
||||
"Load role analyses",
|
||||
"Parse architecture and requirements",
|
||||
"Implement following specification",
|
||||
"Consult artifacts for technical details when needed",
|
||||
@@ -467,7 +621,7 @@ meta.agent missing → Infer from meta.type:
|
||||
- "feature" → @code-developer
|
||||
- "test-gen" → @code-developer
|
||||
- "test-fix" → @test-fix-agent
|
||||
- "review" → @general-purpose
|
||||
- "review" → @universal-executor
|
||||
- "docs" → @doc-generator
|
||||
```
|
||||
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
---
|
||||
name: plan
|
||||
description: Orchestrate 4-phase planning workflow by executing commands and passing context between phases
|
||||
description: Orchestrate 5-phase planning workflow with quality gate, executing commands and passing context between phases
|
||||
argument-hint: "[--agent] [--cli-execute] \"text description\"|file.md"
|
||||
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
|
||||
---
|
||||
@@ -9,22 +9,23 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
|
||||
|
||||
## Coordinator Role
|
||||
|
||||
**This command is a pure orchestrator**: Execute 4 slash commands in sequence, parse their outputs, pass context between them, and ensure complete execution through **automatic continuation**.
|
||||
**This command is a pure orchestrator**: Execute 5 slash commands in sequence (including a quality gate), parse their outputs, pass context between them, and ensure complete execution through **automatic continuation**.
|
||||
|
||||
**Execution Model - Auto-Continue Workflow**:
|
||||
**Execution Model - Auto-Continue Workflow with Quality Gate**:
|
||||
|
||||
This workflow runs **fully autonomously** once triggered. Phase 3 (conflict resolution) and Phase 4 (task generation) are delegated to specialized agents.
|
||||
|
||||
This workflow runs **fully autonomously** once triggered. Each phase completes, reports its output to you, then **immediately and automatically** proceeds to the next phase without requiring any user intervention.
|
||||
|
||||
1. **User triggers**: `/workflow:plan "task"`
|
||||
2. **Phase 1 executes** → Reports output to user → Auto-continues
|
||||
3. **Phase 2 executes** → Reports output to user → Auto-continues
|
||||
4. **Phase 3 executes** → Reports output to user → Auto-continues
|
||||
5. **Phase 4 executes** → Reports final summary
|
||||
2. **Phase 1 executes** → Session discovery → Auto-continues
|
||||
3. **Phase 2 executes** → Context gathering → Auto-continues
|
||||
4. **Phase 3 executes** (optional, if conflict_risk ≥ medium) → Conflict resolution → Auto-continues
|
||||
5. **Phase 4 executes** (task-generate-agent if --agent) → Task generation → Reports final summary
|
||||
|
||||
**Auto-Continue Mechanism**:
|
||||
- TodoList tracks current phase status
|
||||
- After each phase completion, automatically executes next pending phase
|
||||
- **No user action required** - workflow runs end-to-end autonomously
|
||||
- All phases run autonomously without user interaction (clarification handled in brainstorm phase)
|
||||
- Progress updates shown at each phase for visibility
|
||||
|
||||
**Execution Modes**:
|
||||
@@ -36,11 +37,12 @@ This workflow runs **fully autonomously** once triggered. Each phase completes,
|
||||
|
||||
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 command execution
|
||||
2. **No Preliminary Analysis**: Do not read files, analyze structure, or gather context before Phase 1
|
||||
3. **Parse Every Output**: Extract required data from each command's output for next phase
|
||||
3. **Parse Every Output**: Extract required data from each command/agent output for next phase
|
||||
4. **Auto-Continue via TodoList**: Check TodoList status to execute next pending phase automatically
|
||||
5. **Track Progress**: Update TodoWrite after every phase completion
|
||||
6. **Agent Delegation**: Phase 3 uses cli-execution-agent for autonomous intelligent analysis
|
||||
|
||||
## 4-Phase Execution
|
||||
## 5-Phase Execution
|
||||
|
||||
### Phase 1: Session Discovery
|
||||
**Command**: `SlashCommand(command="/workflow:session:start --auto \"[structured-task-description]\"")`
|
||||
@@ -81,7 +83,7 @@ CONTEXT: Existing user database schema, REST API endpoints
|
||||
|
||||
**Parse Output**:
|
||||
- Extract: context-package.json path (store as `contextPath`)
|
||||
- Typical pattern: `.workflow/[sessionId]/.context/context-package.json`
|
||||
- Typical pattern: `.workflow/[sessionId]/.process/context-package.json`
|
||||
|
||||
**Validation**:
|
||||
- Context package path extracted
|
||||
@@ -93,39 +95,69 @@ CONTEXT: Existing user database schema, REST API endpoints
|
||||
|
||||
---
|
||||
|
||||
### Phase 3: Intelligent Analysis
|
||||
**Command**: `SlashCommand(command="/workflow:tools:concept-enhanced --session [sessionId] --context [contextPath]")`
|
||||
### Phase 3: Conflict Resolution (Optional - auto-triggered by conflict risk)
|
||||
|
||||
**Input**: `sessionId` from Phase 1, `contextPath` from Phase 2
|
||||
**Trigger**: Only execute when context-package.json indicates conflict_risk is "medium" or "high"
|
||||
|
||||
**Command**: `SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId] --context [contextPath]")`
|
||||
|
||||
**Input**:
|
||||
- sessionId from Phase 1
|
||||
- contextPath from Phase 2
|
||||
- conflict_risk from context-package.json
|
||||
|
||||
**Parse Output**:
|
||||
- Verify ANALYSIS_RESULTS.md created
|
||||
- Extract: Execution status (success/skipped/failed)
|
||||
- Verify: CONFLICT_RESOLUTION.md file path (if executed)
|
||||
|
||||
**Validation**:
|
||||
- File `.workflow/[sessionId]/ANALYSIS_RESULTS.md` exists
|
||||
- Contains task recommendations section
|
||||
- File `.workflow/[sessionId]/.process/CONFLICT_RESOLUTION.md` exists (if executed)
|
||||
|
||||
**TodoWrite**: Mark phase 3 completed, phase 4 in_progress
|
||||
**Skip Behavior**:
|
||||
- If conflict_risk is "none" or "low", skip directly to Phase 3.5
|
||||
- Display: "No significant conflicts detected, proceeding to clarification"
|
||||
|
|
||||
**TodoWrite**: Mark phase 3 completed (if executed) or skipped, phase 3.5 in_progress
|
||||
|
||||
**After Phase 3**: Return to user showing conflict resolution results (if executed) and selected strategies, then auto-continue to Phase 3.5
|
||||
|
||||
**Memory State Check**:
|
||||
- Evaluate current context window usage and memory state
|
||||
- If memory usage is high (>110K tokens or approaching context limits):
|
||||
- **Command**: `SlashCommand(command="/compact")`
|
||||
- This optimizes memory before proceeding to Phase 4
|
||||
- This optimizes memory before proceeding to Phase 3.5
|
||||
- Memory compaction is particularly important after analysis phase which may generate extensive documentation
|
||||
- Ensures optimal performance and prevents context overflow
|
||||
|
||||
---
|
||||
|
||||
### Phase 3.5: Pre-Task Generation Validation (Optional Quality Gate)
|
||||
|
||||
**Purpose**: Optional quality gate before task generation - primarily handled by brainstorm synthesis phase
|
||||
|
||||
|
||||
**Current Behavior**: Auto-skip to Phase 4 (Task Generation)
|
||||
|
||||
**Future Enhancement**: Could add additional validation steps like:
|
||||
- Cross-reference checks between conflict resolution and brainstorm analyses
|
||||
- Final sanity checks before task generation
|
||||
- User confirmation prompt for proceeding
|
||||
|
||||
**TodoWrite**: Mark phase 3.5 completed (auto-skip), phase 4 in_progress
|
||||
|
||||
**After Phase 3.5**: Auto-continue to Phase 4 immediately
|
||||
|
||||
---
|
||||
|
||||
### Phase 4: Task Generation
|
||||
|
||||
**Relationship with Brainstorm Phase**:
|
||||
- If brainstorm synthesis exists (synthesis-specification.md), Phase 3 analysis incorporates it as input
|
||||
- **synthesis-specification.md defines "WHAT"**: Requirements, design specs, high-level features
|
||||
- If brainstorm role analyses exist ([role]/analysis.md files), Phase 3 analysis incorporates them as input
|
||||
- **⚠️ User's original intent is ALWAYS primary**: New or refined user goals override brainstorm recommendations
|
||||
- **Role analysis.md files define "WHAT"**: Requirements, design specs, role-specific insights
|
||||
- **IMPL_PLAN.md defines "HOW"**: Executable task breakdown, dependencies, implementation sequence
|
||||
- Task generation translates high-level specifications into concrete, actionable work items
|
||||
- Task generation translates high-level role analyses into concrete, actionable work items
|
||||
- **Intent priority**: Current user prompt > role analysis.md files > guidance-specification.md
|
||||
|
||||
**Command Selection**:
|
||||
- Manual: `SlashCommand(command="/workflow:tools:task-generate --session [sessionId]")`
|
||||
@@ -172,22 +204,36 @@ Plan: .workflow/[sessionId]/IMPL_PLAN.md
|
||||
|
||||
```javascript
|
||||
// Initialize (before Phase 1)
|
||||
// Note: Phase 3 todo only added dynamically after Phase 2 if conflict_risk ≥ medium
|
||||
TodoWrite({todos: [
|
||||
{"content": "Execute session discovery", "status": "in_progress", "activeForm": "Executing session discovery"},
|
||||
{"content": "Execute context gathering", "status": "pending", "activeForm": "Executing context gathering"},
|
||||
{"content": "Execute intelligent analysis", "status": "pending", "activeForm": "Executing intelligent analysis"},
|
||||
// Phase 3 todo added dynamically after Phase 2 if conflict_risk ≥ medium
|
||||
{"content": "Execute task generation", "status": "pending", "activeForm": "Executing task generation"}
|
||||
]})
|
||||
|
||||
// After Phase 1
|
||||
// After Phase 2 (if conflict_risk ≥ medium, insert Phase 3 todo)
|
||||
TodoWrite({todos: [
|
||||
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
||||
{"content": "Execute context gathering", "status": "in_progress", "activeForm": "Executing context gathering"},
|
||||
{"content": "Execute intelligent analysis", "status": "pending", "activeForm": "Executing intelligent analysis"},
|
||||
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
|
||||
{"content": "Resolve conflicts and apply fixes", "status": "in_progress", "activeForm": "Resolving conflicts"},
|
||||
{"content": "Execute task generation", "status": "pending", "activeForm": "Executing task generation"}
|
||||
]})
|
||||
|
||||
// Continue pattern for Phase 2, 3, 4...
|
||||
// After Phase 2 (if conflict_risk is none/low, skip Phase 3, go directly to Phase 4)
|
||||
TodoWrite({todos: [
|
||||
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
||||
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
|
||||
{"content": "Execute task generation", "status": "in_progress", "activeForm": "Executing task generation"}
|
||||
]})
|
||||
|
||||
// After Phase 3 (if executed), continue to Phase 4
|
||||
TodoWrite({todos: [
|
||||
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
||||
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
|
||||
{"content": "Resolve conflicts and apply fixes", "status": "completed", "activeForm": "Resolving conflicts"},
|
||||
{"content": "Execute task generation", "status": "in_progress", "activeForm": "Executing task generation"}
|
||||
]})
|
||||
```
|
||||
|
||||
## Input Processing
|
||||
@@ -236,14 +282,22 @@ Phase 1: session:start --auto "structured-description"
|
||||
↓
|
||||
Phase 2: context-gather --session sessionId "structured-description"
|
||||
↓ Input: sessionId + session memory + structured description
|
||||
↓ Output: contextPath (context-package.json)
|
||||
↓ Output: contextPath (context-package.json) + conflict_risk
|
||||
↓
|
||||
Phase 3: concept-enhanced --session sessionId --context contextPath
|
||||
↓ Input: sessionId + contextPath + session memory
|
||||
↓ Output: ANALYSIS_RESULTS.md
|
||||
Phase 3: conflict-resolution [AUTO-TRIGGERED if conflict_risk ≥ medium]
|
||||
↓ Input: sessionId + contextPath + conflict_risk
|
||||
↓ CLI-powered conflict detection (JSON output)
|
||||
↓ AskUserQuestion: Present conflicts + resolution strategies
|
||||
↓ User selects strategies (or skip)
|
||||
↓ Apply modifications via Edit tool:
|
||||
↓ - Update guidance-specification.md
|
||||
↓ - Update role analyses (*.md)
|
||||
↓ - Mark context-package.json as "resolved"
|
||||
↓ Output: Modified brainstorm artifacts (NO report file)
|
||||
↓ Skip if conflict_risk is none/low → proceed directly to Phase 4
|
||||
↓
|
||||
Phase 4: task-generate[--agent] --session sessionId
|
||||
↓ Input: sessionId + ANALYSIS_RESULTS.md + session memory
|
||||
↓ Input: sessionId + resolved brainstorm artifacts + session memory
|
||||
↓ Output: IMPL_PLAN.md, task JSONs, TODO_LIST.md
|
||||
↓
|
||||
Return summary to user
|
||||
@@ -252,7 +306,7 @@ Return summary to user
|
||||
**Session Memory Flow**: Each phase receives session ID, which provides access to:
|
||||
- Previous task summaries
|
||||
- Existing context and analysis
|
||||
- Brainstorming artifacts
|
||||
- Brainstorming artifacts (potentially modified by Phase 3)
|
||||
- Session-specific configuration
|
||||
|
||||
**Structured Description Benefits**:
|
||||
@@ -270,20 +324,22 @@ Return summary to user
|
||||
## Coordinator Checklist
|
||||
|
||||
✅ **Pre-Phase**: Convert user input to structured format (GOAL/SCOPE/CONTEXT)
|
||||
✅ Initialize TodoWrite before any command
|
||||
✅ Initialize TodoWrite before any command (Phase 3 added dynamically after Phase 2)
|
||||
✅ Execute Phase 1 immediately with structured description
|
||||
✅ Parse session ID from Phase 1 output, store in memory
|
||||
✅ Pass session ID and structured description to Phase 2 command
|
||||
✅ Parse context path from Phase 2 output, store in memory
|
||||
✅ Pass session ID and context path to Phase 3 command
|
||||
✅ Verify ANALYSIS_RESULTS.md after Phase 3
|
||||
✅ **Extract conflict_risk from context-package.json**: Determine Phase 3 execution
|
||||
✅ **If conflict_risk ≥ medium**: Launch Phase 3 conflict-resolution with sessionId and contextPath
|
||||
✅ Wait for Phase 3 completion (if executed), verify CONFLICT_RESOLUTION.md created
|
||||
✅ **If conflict_risk is none/low**: Skip Phase 3, proceed directly to Phase 4
|
||||
✅ **Build Phase 4 command** based on flags:
|
||||
- Base command: `/workflow:tools:task-generate` (or `-agent` if `--agent` flag)
|
||||
- Add `--session [sessionId]`
|
||||
- Add `--cli-execute` if flag present
|
||||
✅ Pass session ID to Phase 4 command
|
||||
✅ Verify all Phase 4 outputs
|
||||
✅ Update TodoWrite after each phase
|
||||
✅ Update TodoWrite after each phase (dynamically adjust for Phase 3 presence)
|
||||
✅ After each phase, automatically continue to next phase based on TodoList status
|
||||
|
||||
## Structure Template Reference
|
||||
|
||||
@@ -92,17 +92,17 @@ After bash validation, the model takes control to:
|
||||
2. **Perform Specialized Review**: Based on `review_type`
|
||||
|
||||
**Security Review** (`--type=security`):
|
||||
- Use MCP code search for security patterns:
|
||||
- Use ripgrep for security patterns:
|
||||
```bash
|
||||
mcp__code-index__search_code_advanced(pattern="password|token|secret|auth", file_pattern="*.{ts,js,py}")
|
||||
mcp__code-index__search_code_advanced(pattern="eval|exec|innerHTML|dangerouslySetInnerHTML", file_pattern="*.{ts,js,tsx}")
|
||||
rg "password|token|secret|auth" -g "*.{ts,js,py}"
|
||||
rg "eval|exec|innerHTML|dangerouslySetInnerHTML" -g "*.{ts,js,tsx}"
|
||||
```
|
||||
- Use Gemini for security analysis:
|
||||
```bash
|
||||
cd .workflow/${sessionId} && ~/.claude/scripts/gemini-wrapper -p "
|
||||
cd .workflow/${sessionId} && gemini -p "
|
||||
PURPOSE: Security audit of completed implementation
|
||||
TASK: Review code for security vulnerabilities, insecure patterns, auth/authz issues
|
||||
CONTEXT: @{.summaries/IMPL-*.md,../..,../../CLAUDE.md}
|
||||
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
|
||||
EXPECTED: Security findings report with severity levels
|
||||
RULES: Focus on OWASP Top 10, authentication, authorization, data validation, injection risks
|
||||
" --approval-mode yolo
|
||||
@@ -111,10 +111,10 @@ After bash validation, the model takes control to:
|
||||
**Architecture Review** (`--type=architecture`):
|
||||
- Use Qwen for architecture analysis:
|
||||
```bash
|
||||
cd .workflow/${sessionId} && ~/.claude/scripts/qwen-wrapper -p "
|
||||
cd .workflow/${sessionId} && qwen -p "
|
||||
PURPOSE: Architecture compliance review
|
||||
TASK: Evaluate adherence to architectural patterns, identify technical debt, review design decisions
|
||||
CONTEXT: @{.summaries/IMPL-*.md,../..,../../CLAUDE.md}
|
||||
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
|
||||
EXPECTED: Architecture assessment with recommendations
|
||||
RULES: Check for patterns, separation of concerns, modularity, scalability
|
||||
" --approval-mode yolo
|
||||
@@ -123,10 +123,10 @@ After bash validation, the model takes control to:
|
||||
**Quality Review** (`--type=quality`):
|
||||
- Use Gemini for code quality:
|
||||
```bash
|
||||
cd .workflow/${sessionId} && ~/.claude/scripts/gemini-wrapper -p "
|
||||
cd .workflow/${sessionId} && gemini -p "
|
||||
PURPOSE: Code quality and best practices review
|
||||
TASK: Assess code readability, maintainability, adherence to best practices
|
||||
CONTEXT: @{.summaries/IMPL-*.md,../..,../../CLAUDE.md}
|
||||
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
|
||||
EXPECTED: Quality assessment with improvement suggestions
|
||||
RULES: Check for code smells, duplication, complexity, naming conventions
|
||||
" --approval-mode yolo
|
||||
@@ -143,10 +143,10 @@ After bash validation, the model takes control to:
|
||||
' {} \;
|
||||
|
||||
# Check implementation summaries against requirements
|
||||
cd .workflow/${sessionId} && ~/.claude/scripts/gemini-wrapper -p "
|
||||
cd .workflow/${sessionId} && gemini -p "
|
||||
PURPOSE: Verify all requirements and acceptance criteria are met
|
||||
TASK: Cross-check implementation summaries against original requirements
|
||||
CONTEXT: @{.task/IMPL-*.json,.summaries/IMPL-*.md,../..,../../CLAUDE.md}
|
||||
CONTEXT: @.task/IMPL-*.json,.summaries/IMPL-*.md,../.. @../../CLAUDE.md
|
||||
EXPECTED:
|
||||
- Requirements coverage matrix
|
||||
- Acceptance criteria verification
|
||||
|
||||
@@ -20,12 +20,12 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
|
||||
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
|
||||
2. **No Preliminary Analysis**: Do not read files before Phase 1
|
||||
3. **Parse Every Output**: Extract required data for next phase
|
||||
4. **Sequential Execution**: Each phase depends on previous output
|
||||
5. **Complete All Phases**: Do not return until Phase 7 completes (with concept verification)
|
||||
4. **Auto-Continue via TodoList**: Check TodoList status to execute next pending phase automatically
|
||||
5. **Track Progress**: Update TodoWrite after every phase completion
|
||||
6. **TDD Context**: All descriptions include "TDD:" prefix
|
||||
7. **Quality Gate**: Phase 5 concept verification ensures clarity before task generation
|
||||
7. **Quality Gate**: Phase 4 conflict resolution (optional, auto-triggered) validates compatibility before task generation
|
||||
|
||||
## 7-Phase Execution (with Concept Verification)
|
||||
## 6-Phase Execution (with Conflict Resolution)
|
||||
|
||||
### Phase 1: Session Discovery
|
||||
**Command**: `/workflow:session:start --auto "TDD: [structured-description]"`
|
||||
@@ -41,10 +41,32 @@ TEST_FOCUS: [Test scenarios]
|
||||
|
||||
**Parse**: Extract sessionId
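
The Phase 1 output format is not pinned down here, so the extraction below is only a sketch (it assumes the output contains a `WFS-...` identifier):

```bash
# Sketch: pull the session ID out of captured Phase 1 output (format assumed)
sessionId=$(printf '%s' "$phase1_output" | grep -oE 'WFS-[A-Za-z0-9_-]+' | head -n 1)
[ -n "$sessionId" ] || echo "ERROR: could not extract session ID from Phase 1 output" >&2
```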
|
||||
|
||||
**TodoWrite**: Mark phase 1 completed, phase 2 in_progress
|
||||
|
||||
**After Phase 1**: Return to user showing Phase 1 results, then auto-continue to Phase 2
|
||||
|
||||
---
|
||||
|
||||
### Phase 2: Context Gathering
|
||||
**Command**: `/workflow:tools:context-gather --session [sessionId] "TDD: [structured-description]"`
|
||||
|
||||
**Parse**: Extract contextPath
|
||||
**Use Same Structured Description**: Pass the same structured format from Phase 1
|
||||
|
||||
**Input**: `sessionId` from Phase 1
|
||||
|
||||
**Parse Output**:
|
||||
- Extract: context-package.json path (store as `contextPath`)
|
||||
- Typical pattern: `.workflow/[sessionId]/.process/context-package.json`
|
||||
|
||||
**Validation**:
|
||||
- Context package path extracted
|
||||
- File exists and is valid JSON
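
A minimal validation sketch for this step, using the path pattern above (`jq` assumed available):

```bash
# Sketch: confirm the context package exists and parses as JSON
contextPath=".workflow/${sessionId}/.process/context-package.json"
if [ -f "$contextPath" ] && jq empty "$contextPath" 2>/dev/null; then
  echo "Context package OK: $contextPath"
else
  echo "ERROR: context package missing or not valid JSON: $contextPath" >&2
fi
```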
|
||||
|
||||
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
|
||||
|
||||
**After Phase 2**: Return to user showing Phase 2 results, then auto-continue to Phase 3
|
||||
|
||||
---
|
||||
|
||||
### Phase 3: Test Coverage Analysis
|
||||
**Command**: `/workflow:tools:test-context-gather --session [sessionId]`
|
||||
@@ -63,34 +85,49 @@ TEST_FOCUS: [Test scenarios]
|
||||
- Prevents duplicate test creation
|
||||
- Enables integration with existing tests
|
||||
|
||||
### Phase 4: TDD Analysis
|
||||
**Command**: `/workflow:tools:concept-enhanced --session [sessionId] --context [contextPath]`
|
||||
**TodoWrite**: Mark phase 3 completed, phase 4 in_progress
|
||||
|
||||
**Note**: Generates ANALYSIS_RESULTS.md with TDD-specific structure:
|
||||
- Feature list with testable requirements
|
||||
- Test cases for Red phase
|
||||
- Implementation requirements for Green phase
|
||||
- Refactoring opportunities
|
||||
- Task dependencies and execution order
|
||||
**After Phase 3**: Return to user showing test coverage results, then auto-continue to Phase 4
|
||||
|
||||
**Parse**: Verify ANALYSIS_RESULTS.md contains TDD breakdown sections
|
||||
---
|
||||
|
||||
### Phase 5: Concept Verification (NEW QUALITY GATE)
|
||||
**Command**: `/workflow:concept-verify --session [sessionId]`
|
||||
### Phase 4: Conflict Resolution (Optional - auto-triggered by conflict risk)
|
||||
|
||||
**Purpose**: Verify conceptual clarity before TDD task generation
|
||||
- Clarify test requirements and acceptance criteria
|
||||
- Resolve ambiguities in expected behavior
|
||||
- Validate TDD approach is appropriate
|
||||
**Trigger**: Only execute when context-package.json indicates conflict_risk is "medium" or "high"
|
||||
|
||||
**Behavior**:
|
||||
- If no ambiguities found → Auto-proceed to Phase 6
|
||||
- If ambiguities exist → Interactive clarification (up to 5 questions)
|
||||
- After clarifications → Auto-proceed to Phase 6
|
||||
**Command**: `SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId] --context [contextPath]")`
|
||||
|
||||
**Parse**: Verify concept verification completed (check for clarifications section in ANALYSIS_RESULTS.md or synthesis file if exists)
|
||||
**Input**:
|
||||
- sessionId from Phase 1
|
||||
- contextPath from Phase 2
|
||||
- conflict_risk from context-package.json
|
||||
|
||||
### Phase 6: TDD Task Generation
|
||||
**Parse Output**:
|
||||
- Extract: Execution status (success/skipped/failed)
|
||||
- Verify: CONFLICT_RESOLUTION.md file path (if executed)
|
||||
|
||||
**Validation**:
|
||||
- File `.workflow/[sessionId]/.process/CONFLICT_RESOLUTION.md` exists (if executed)
|
||||
|
||||
**Skip Behavior**:
|
||||
- If conflict_risk is "none" or "low", skip directly to Phase 5
|
||||
- Display: "No significant conflicts detected, proceeding to TDD task generation"
|
||||
|
||||
**TodoWrite**: Mark phase 4 completed (if executed) or skipped, phase 5 in_progress
|
||||
|
||||
**After Phase 4**: Return to user showing conflict resolution results (if executed) and selected strategies, then auto-continue to Phase 5
|
||||
|
||||
**Memory State Check**:
|
||||
- Evaluate current context window usage and memory state
|
||||
- If memory usage is high (>110K tokens or approaching context limits):
|
||||
- **Command**: `SlashCommand(command="/compact")`
|
||||
- This optimizes memory before proceeding to Phase 5
|
||||
- Memory compaction is particularly important after analysis phase which may generate extensive documentation
|
||||
- Ensures optimal performance and prevents context overflow
|
||||
|
||||
---
|
||||
|
||||
### Phase 5: TDD Task Generation
|
||||
**Command**:
|
||||
- Manual: `/workflow:tools:task-generate-tdd --session [sessionId]`
|
||||
- Agent: `/workflow:tools:task-generate-tdd --session [sessionId] --agent`
|
||||
@@ -108,7 +145,7 @@ TEST_FOCUS: [Test scenarios]
|
||||
- IMPL_PLAN.md contains workflow_type: "tdd" in frontmatter
|
||||
- Task count ≤10 (compliance with task limit)
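
These two checks can be sketched in shell; the exact frontmatter quoting and task-file naming are assumptions based on conventions used elsewhere in this document:

```bash
# Sketch: validate TDD plan outputs
planDir=".workflow/${sessionId}"

# Frontmatter should declare the TDD workflow type
grep -q 'workflow_type: "tdd"' "${planDir}/IMPL_PLAN.md" || echo "WARN: workflow_type is not tdd" >&2

# Task count must stay within the limit of 10
taskCount=$(find "${planDir}/.task" -name 'IMPL-*.json' | wc -l)
[ "$taskCount" -le 10 ] || echo "WARN: ${taskCount} tasks exceeds the 10-task limit" >&2
```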
|
||||
|
||||
### Phase 7: TDD Structure Validation & Action Plan Verification (RECOMMENDED)
|
||||
### Phase 6: TDD Structure Validation & Action Plan Verification (RECOMMENDED)
|
||||
**Internal validation first, then recommend external verification**
|
||||
|
||||
**Internal Validation**:
|
||||
@@ -166,18 +203,44 @@ TDD Configuration:
|
||||
## TodoWrite Pattern
|
||||
|
||||
```javascript
|
||||
// Initialize (7 phases now with concept verification)
|
||||
[
|
||||
{content: "Execute session discovery", status: "in_progress", activeForm: "Executing session discovery"},
|
||||
{content: "Execute context gathering", status: "pending", activeForm": "Executing context gathering"},
|
||||
{content: "Execute test coverage analysis", status: "pending", activeForm": "Executing test coverage analysis"},
|
||||
{content: "Execute TDD analysis", status: "pending", activeForm": "Executing TDD analysis"},
|
||||
{content: "Execute concept verification", status: "pending", activeForm": "Executing concept verification"},
|
||||
{content: "Execute TDD task generation", status: "pending", activeForm: "Executing TDD task generation"},
|
||||
{content: "Validate TDD structure", status: "pending", activeForm: "Validating TDD structure"}
|
||||
]
|
||||
// Initialize (Phase 4 added dynamically after Phase 3 if conflict_risk ≥ medium)
|
||||
TodoWrite({todos: [
|
||||
{"content": "Execute session discovery", "status": "in_progress", "activeForm": "Executing session discovery"},
|
||||
{"content": "Execute context gathering", "status": "pending", "activeForm": "Executing context gathering"},
|
||||
{"content": "Execute test coverage analysis", "status": "pending", "activeForm": "Executing test coverage analysis"},
|
||||
// Phase 4 todo added dynamically after Phase 3 if conflict_risk ≥ medium
|
||||
{"content": "Execute TDD task generation", "status": "pending", "activeForm": "Executing TDD task generation"},
|
||||
{"content": "Validate TDD structure", "status": "pending", "activeForm": "Validating TDD structure"}
|
||||
]})
|
||||
|
||||
// Update after each phase: mark current "completed", next "in_progress"
|
||||
// After Phase 3 (if conflict_risk ≥ medium, insert Phase 4 todo)
|
||||
TodoWrite({todos: [
|
||||
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
||||
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
|
||||
{"content": "Execute test coverage analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
|
||||
{"content": "Execute conflict resolution", "status": "in_progress", "activeForm": "Executing conflict resolution"},
|
||||
{"content": "Execute TDD task generation", "status": "pending", "activeForm": "Executing TDD task generation"},
|
||||
{"content": "Validate TDD structure", "status": "pending", "activeForm": "Validating TDD structure"}
|
||||
]})
|
||||
|
||||
// After Phase 3 (if conflict_risk is none/low, skip Phase 4, go directly to Phase 5)
|
||||
TodoWrite({todos: [
|
||||
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
||||
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
|
||||
{"content": "Execute test coverage analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
|
||||
{"content": "Execute TDD task generation", "status": "in_progress", "activeForm": "Executing TDD task generation"},
|
||||
{"content": "Validate TDD structure", "status": "pending", "activeForm": "Validating TDD structure"}
|
||||
]})
|
||||
|
||||
// After Phase 4 (if executed), continue to Phase 5
|
||||
TodoWrite({todos: [
|
||||
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
||||
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
|
||||
{"content": "Execute test coverage analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
|
||||
{"content": "Execute conflict resolution", "status": "completed", "activeForm": "Executing conflict resolution"},
|
||||
{"content": "Execute TDD task generation", "status": "in_progress", "activeForm": "Executing TDD task generation"},
|
||||
{"content": "Validate TDD structure", "status": "pending", "activeForm": "Validating TDD structure"}
|
||||
]})
|
||||
```
|
||||
|
||||
## Input Processing
|
||||
|
||||
@@ -3,7 +3,7 @@ name: tdd-verify
|
||||
description: Verify TDD workflow compliance and generate quality report
|
||||
|
||||
argument-hint: "[optional: WFS-session-id]"
|
||||
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(gemini-wrapper:*)
|
||||
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(gemini:*)
|
||||
---
|
||||
|
||||
# TDD Verification Command (/workflow:tdd-verify)
|
||||
@@ -94,7 +94,7 @@ find .workflow/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent' {} \;
|
||||
**Gemini analysis for comprehensive TDD compliance report**
|
||||
|
||||
```bash
|
||||
cd project-root && ~/.claude/scripts/gemini-wrapper -p "
|
||||
cd project-root && gemini -p "
|
||||
PURPOSE: Generate TDD compliance report
|
||||
TASK: Analyze TDD workflow execution and generate quality report
|
||||
CONTEXT: @{.workflow/{sessionId}/.task/*.json,.workflow/{sessionId}/.summaries/*,.workflow/{sessionId}/.process/tdd-cycle-report.md}
|
||||
@@ -237,7 +237,7 @@ Final Score: Max(0, Base Score - Deductions)
|
||||
|
||||
### Command Chain
|
||||
- **Called After**: `/workflow:execute` (when TDD tasks completed)
|
||||
- **Calls**: `/workflow:tools:tdd-coverage-analysis`, Gemini wrapper
|
||||
- **Calls**: `/workflow:tools:tdd-coverage-analysis`, Gemini CLI
|
||||
- **Related**: `/workflow:tdd-plan`, `/workflow:status`
|
||||
|
||||
### Basic Usage
|
||||
|
||||
@@ -10,6 +10,12 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*)
|
||||
## Overview
|
||||
Orchestrates dynamic test-fix workflow execution through iterative cycles of testing, analysis, and fixing. **Unlike standard execute, this command dynamically generates intermediate tasks** during execution based on test results and CLI analysis, enabling adaptive problem-solving.
|
||||
|
||||
**⚠️ CRITICAL - Orchestrator Boundary**:
|
||||
- This command is the **ONLY place** where test failures are handled
|
||||
- All CLI analysis (Gemini/Qwen), fix task generation (IMPL-fix-N.json), and iteration management happen HERE
|
||||
- Agents (@test-fix-agent) only execute single tasks and return results
|
||||
- **Do NOT handle test failures in main workflow or other commands** - always delegate to this orchestrator
|
||||
|
||||
**Resume Mode**: When called with `--resume-session` flag, skips discovery and continues from interruption point.
|
||||
|
||||
## Core Philosophy
|
||||
@@ -53,7 +59,7 @@ Orchestrates dynamic test-fix workflow execution through iterative cycles of tes
|
||||
|
||||
## Responsibility Matrix
|
||||
|
||||
**Clear division of labor between orchestrator and agents:**
|
||||
**⚠️ CRITICAL - Clear division of labor between orchestrator and agents:**
|
||||
|
||||
| Responsibility | test-cycle-execute (Orchestrator) | @test-fix-agent (Executor) |
|
||||
|----------------|----------------------------|---------------------------|
|
||||
@@ -62,12 +68,14 @@ Orchestrates dynamic test-fix workflow execution through iterative cycles of tes
|
||||
| Generate IMPL-fix-N.json | ✅ Creates task files | ❌ Not involved |
|
||||
| Run tests | ❌ Delegates to agent | ✅ Executes test command |
|
||||
| Apply fixes | ❌ Delegates to agent | ✅ Modifies code |
|
||||
| Detect test failures | ✅ Analyzes agent output | ✅ Reports results |
|
||||
| Detect test failures | ✅ Analyzes results and decides next action | ✅ Executes tests and reports outcomes |
|
||||
| Add tasks to queue | ✅ Manages queue | ❌ Not involved |
|
||||
| Update iteration state | ✅ Maintains state files | ✅ Updates task status |
|
||||
| Update iteration state | ✅ Maintains overall iteration state | ✅ Updates individual task status only |
|
||||
|
||||
**Key Principle**: Orchestrator manages the "what" and "when"; agents execute the "how".
|
||||
|
||||
**⚠️ ENFORCEMENT**: If test failures occur outside this orchestrator, do NOT handle them inline - always call `/workflow:test-cycle-execute` instead.
|
||||
|
||||
## Execution Lifecycle
|
||||
|
||||
### Phase 1: Discovery & Initialization
|
||||
@@ -217,20 +225,22 @@ Iteration N (managed by test-cycle-execute orchestrator):
|
||||
**Orchestrator executes CLI analysis between agent tasks:**
|
||||
|
||||
#### When Test Failures Occur
|
||||
1. **[Orchestrator]** Detects failures from agent output
|
||||
1. **[Orchestrator]** Detects failures from agent test execution output
|
||||
2. **[Orchestrator]** Collects failure context from `.process/test-results.json` and logs
|
||||
3. **[Orchestrator]** Runs Gemini/Qwen wrapper with failure context
|
||||
4. **[CLI Tool]** Analyzes failures and generates fix strategy
|
||||
3. **[Orchestrator]** Executes Gemini/Qwen CLI tool with failure context
|
||||
4. **[Orchestrator]** Interprets CLI tool output to extract fix strategy
|
||||
5. **[Orchestrator]** Saves analysis to `.process/iteration-N-analysis.md`
|
||||
6. **[Orchestrator]** Generates `IMPL-fix-N.json` with strategy content (not just path)
|
||||
|
||||
**Note**: The orchestrator executes CLI analysis tools and processes their output. CLI tools provide analysis, orchestrator manages the workflow.
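
Before invoking the CLI analysis below, the orchestrator needs a simple failure check. The schema of `test-results.json` is not specified in this document, so the `jq` filter is an assumption to adapt:

```bash
# Sketch: count failing tests from the agent-produced results file (schema assumed)
results=".process/test-results.json"
failed=$(jq -r '[.tests[]? | select(.status == "failed")] | length' "$results" 2>/dev/null || echo 0)

if [ "$failed" -gt 0 ]; then
  echo "${failed} failing test(s) detected → run Gemini/Qwen analysis (command below)"
fi
```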
|
||||
|
||||
#### CLI Analysis Command (executed by orchestrator)
|
||||
```bash
|
||||
cd {project_root} && ~/.claude/scripts/gemini-wrapper -p "
|
||||
cd {project_root} && gemini -p "
|
||||
PURPOSE: Analyze test failures and generate fix strategy
|
||||
TASK: Review test failures and identify root causes
|
||||
MODE: analysis
|
||||
CONTEXT: @{test files, implementation files}
|
||||
CONTEXT: @test files @implementation files
|
||||
|
||||
[Test failure context and requirements...]
|
||||
|
||||
@@ -516,15 +526,16 @@ Task(subagent_type="{meta.agent}",
|
||||
### For test-fix (IMPL-002):
|
||||
- Run test suite: {test_command}
|
||||
- Collect results to .process/test-results.json
|
||||
- If failures: Save context, return to orchestrator
|
||||
- Report results to orchestrator (do NOT analyze failures)
|
||||
- Orchestrator will handle failure detection and iteration decisions
|
||||
- If success: Mark complete
|
||||
|
||||
### For test-fix-iteration (IMPL-fix-N):
|
||||
- Load fix strategy from context.fix_strategy (CONTENT, not path)
|
||||
- Apply surgical fixes to identified files
|
||||
- Run tests to verify
|
||||
- If still failures: Save context with new failure data
|
||||
- Update iteration state
|
||||
- Return results to orchestrator
|
||||
- Do NOT run tests independently - orchestrator manages all test execution
|
||||
- Do NOT handle failures - orchestrator analyzes and decides next iteration
|
||||
|
||||
## STEP 4: Implementation Context (From JSON)
|
||||
**Requirements**: {context.requirements}
|
||||
|
||||
@@ -13,7 +13,11 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
|
||||
|
||||
This command creates an independent test-fix workflow session for existing code. It orchestrates a 5-phase process to analyze implementation, generate test requirements, and create executable test generation and fix tasks.
|
||||
|
||||
**⚠️ Command Scope**: Prepares test workflow artifacts only. Task execution requires separate commands (`/workflow:test-cycle-execute` or `/workflow:execute`).
|
||||
**⚠️ CRITICAL - Command Scope**:
|
||||
- **This command ONLY generates task JSON files** (IMPL-001.json, IMPL-002.json)
|
||||
- **Does NOT execute tests or apply fixes** - all execution happens in separate orchestrator
|
||||
- **Must call `/workflow:test-cycle-execute`** after this command to actually run tests and fixes
|
||||
- **Test failure handling happens in test-cycle-execute**, not here
|
||||
|
||||
### Dual-Mode Support
|
||||
|
||||
@@ -44,12 +48,15 @@ fi
|
||||
|
||||
### Coordinator Role
|
||||
|
||||
This command is a **pure orchestrator**:
|
||||
This command is a **pure planning coordinator**:
|
||||
- Does NOT analyze code directly
|
||||
- Does NOT generate tests or documentation
|
||||
- ONLY coordinates slash commands in sequence
|
||||
- Does NOT execute tests or apply fixes
|
||||
- Does NOT handle test failures or iterations
|
||||
- ONLY coordinates slash commands to generate task JSON files
|
||||
- Parses outputs to pass data between phases
|
||||
- Creates independent test workflow session
|
||||
- **All execution delegated to `/workflow:test-cycle-execute`**
|
||||
|
||||
---
|
||||
|
||||
@@ -267,14 +274,20 @@ Review artifacts:
|
||||
- Test plan: .workflow/[testSessionId]/IMPL_PLAN.md
|
||||
- Task list: .workflow/[testSessionId]/TODO_LIST.md
|
||||
|
||||
Next Steps:
|
||||
- Review IMPL_PLAN.md
|
||||
- Execute: /workflow:test-cycle-execute [testSessionId]
|
||||
⚠️ CRITICAL - Next Steps:
|
||||
1. Review IMPL_PLAN.md
|
||||
2. **MUST execute: /workflow:test-cycle-execute**
|
||||
- This command only generated task JSON files
|
||||
- Test execution and fix iterations happen in test-cycle-execute
|
||||
- Do NOT attempt to run tests or fixes in main workflow
|
||||
```
|
||||
|
||||
**TodoWrite**: Mark phase 5 completed
|
||||
|
||||
**Note**: Command completes here. Task execution requires separate workflow commands.
|
||||
**⚠️ BOUNDARY NOTE**:
|
||||
- Command completes here - only task JSON files generated
|
||||
- All test execution, failure detection, CLI analysis, fix generation happens in `/workflow:test-cycle-execute`
|
||||
- This command does NOT handle test failures or apply fixes
|
||||
|
||||
---
|
||||
|
||||
@@ -329,7 +342,9 @@ Generates minimum 2 tasks (expandable for complex projects):
|
||||
|
||||
**Agent**: `@test-fix-agent`
|
||||
|
||||
**Purpose**: Execute tests and apply iterative fixes (max 5 iterations)
|
||||
**Purpose**: Execute initial tests and trigger orchestrator-managed fix cycles
|
||||
|
||||
**Note**: This task executes tests and reports results. The test-cycle-execute orchestrator manages all fix iterations, CLI analysis, and fix task generation.
|
||||
|
||||
**Task Configuration**:
|
||||
- Task ID: `IMPL-002`
|
||||
@@ -340,11 +355,12 @@ Generates minimum 2 tasks (expandable for complex projects):
|
||||
- `context.requirements`: Execute and fix tests
|
||||
|
||||
**Test-Fix Cycle Specification**:
|
||||
- **Cycle Pattern**: test → gemini_diagnose → manual_fix (or codex) → retest
|
||||
- **Tools Configuration**:
|
||||
**Note**: This specification describes what test-cycle-execute orchestrator will do. The agent only executes single tasks.
|
||||
- **Cycle Pattern** (orchestrator-managed): test → gemini_diagnose → manual_fix (or codex) → retest
|
||||
- **Tools Configuration** (orchestrator-controlled):
|
||||
- Gemini for analysis with bug-fix template → surgical fix suggestions
|
||||
- Manual fix application (default) OR Codex if `--use-codex` flag (resume mechanism)
|
||||
- **Exit Conditions**:
|
||||
- **Exit Conditions** (orchestrator-enforced):
|
||||
- Success: All tests pass
|
||||
- Failure: Max iterations reached (5)
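
A schematic of these exit conditions is shown below. It is illustrative only — in the real workflow the orchestrator delegates test runs and fixes to agents rather than looping inline, and `TEST_COMMAND` is a placeholder:

```bash
# Sketch: bounded test-fix cycle with a hard cap of 5 iterations
TEST_COMMAND="${TEST_COMMAND:-npm test}"   # assumption: substitute the project's real test command
for i in 1 2 3 4 5; do
  if $TEST_COMMAND; then
    echo "All tests pass (iteration $i)"; exit 0
  fi
  echo "Iteration $i failed → orchestrator analyzes failures and generates IMPL-fix-$i"
done
echo "Max iterations (5) reached without success" >&2
exit 1
```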
|
||||
|
||||
|
||||
@@ -1,378 +0,0 @@
|
||||
---
|
||||
name: concept-enhanced
|
||||
description: Enhanced intelligent analysis with parallel CLI execution and design blueprint generation
|
||||
argument-hint: "--session WFS-session-id --context path/to/context-package.json"
|
||||
examples:
|
||||
- /workflow:tools:concept-enhanced --session WFS-auth --context .workflow/WFS-auth/.process/context-package.json
|
||||
- /workflow:tools:concept-enhanced --session WFS-payment --context .workflow/WFS-payment/.process/context-package.json
|
||||
---
|
||||
|
||||
# Enhanced Analysis Command (/workflow:tools:concept-enhanced)
|
||||
|
||||
## Overview
|
||||
Advanced solution design and feasibility analysis engine with parallel CLI execution. Processes standardized context packages to produce ANALYSIS_RESULTS.md focused on solution improvements, key design decisions, and critical insights.
|
||||
|
||||
**Scope**: Solution-focused technical analysis only. Does NOT generate task breakdowns or implementation plans.
|
||||
|
||||
**Usage**: Standalone command or integrated into `/workflow:plan`. Accepts context packages and orchestrates Gemini/Codex for comprehensive analysis.
|
||||
|
||||
## Core Philosophy & Responsibilities
|
||||
- **Solution-Focused Analysis**: Emphasize design decisions, architectural rationale, and critical insights (exclude task planning)
|
||||
- **Context-Driven**: Parse and validate context-package.json for precise analysis
|
||||
- **Intelligent Tool Selection**: Gemini for design (all tasks), Codex for validation (complex tasks only)
|
||||
- **Parallel Execution**: Execute multiple CLI tools simultaneously for efficiency
|
||||
- **Solution Design**: Evaluate architecture, identify key design decisions with rationale
|
||||
- **Feasibility Assessment**: Analyze technical complexity, risks, implementation readiness
|
||||
- **Optimization Recommendations**: Performance, security, and code quality improvements
|
||||
- **Perspective Synthesis**: Integrate multi-tool insights into unified assessment
|
||||
- **Single Output**: Generate only ANALYSIS_RESULTS.md with technical analysis
|
||||
|
||||
## Analysis Strategy Selection
|
||||
|
||||
### Tool Selection by Task Complexity
|
||||
|
||||
**Simple Tasks (≤3 modules)**:
|
||||
- **Primary**: Gemini (rapid understanding + pattern recognition)
|
||||
- **Support**: Code-index (structural analysis)
|
||||
- **Mode**: Single-round analysis
|
||||
|
||||
**Medium Tasks (4-6 modules)**:
|
||||
- **Primary**: Gemini (comprehensive analysis + architecture design)
|
||||
- **Support**: Code-index + Exa (best practices)
|
||||
- **Mode**: Single comprehensive round
|
||||
|
||||
**Complex Tasks (>6 modules)**:
|
||||
- **Primary**: Gemini (comprehensive analysis) + Codex (validation)
|
||||
- **Mode**: Parallel execution - Gemini design + Codex feasibility
|
||||
|
||||
### Tool Preferences by Tech Stack
|
||||
|
||||
```json
|
||||
{
|
||||
"frontend": {
|
||||
"primary": "gemini",
|
||||
"secondary": "codex",
|
||||
"focus": ["component_design", "state_management", "ui_patterns"]
|
||||
},
|
||||
"backend": {
|
||||
"primary": "codex",
|
||||
"secondary": "gemini",
|
||||
"focus": ["api_design", "data_flow", "security", "performance"]
|
||||
},
|
||||
"fullstack": {
|
||||
"primary": "gemini",
|
||||
"secondary": "codex",
|
||||
"focus": ["system_architecture", "integration", "data_consistency"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Execution Lifecycle
|
||||
|
||||
### Phase 1: Validation & Preparation
|
||||
1. **Session Validation**: Verify `.workflow/{session_id}/` exists, load `workflow-session.json`
|
||||
2. **Context Package Validation**: Verify path, validate JSON format and structure
|
||||
3. **Task Analysis**: Extract keywords, identify domain/complexity, determine scope
|
||||
4. **Tool Selection**: Gemini (all tasks), +Codex (complex only), load templates
|
||||
|
||||
### Phase 2: Analysis Preparation
|
||||
1. **Workspace Setup**: Create `.workflow/{session_id}/.process/`, initialize logs, set resource limits
|
||||
2. **Context Optimization**: Filter high-priority assets, organize structure, prepare templates
|
||||
3. **Execution Environment**: Configure CLI tools, set timeouts, prepare error handling
|
||||
|
||||
### Phase 3: Parallel Analysis Execution
|
||||
1. **Gemini Solution Design & Architecture Analysis**
|
||||
```bash
|
||||
~/.claude/scripts/gemini-wrapper -p "
|
||||
PURPOSE: Analyze and design optimal solution for {task_description}
|
||||
TASK: Evaluate current architecture, propose solution design, identify key design decisions
|
||||
CONTEXT: @{.workflow/{session_id}/.process/context-package.json,.workflow/{session_id}/workflow-session.json,CLAUDE.md}
|
||||
|
||||
**MANDATORY**: Read context-package.json to understand task requirements, source files, tech stack, project structure
|
||||
|
||||
**ANALYSIS PRIORITY**:
|
||||
1. PRIMARY: Individual role analysis.md files (system-architect, ui-designer, etc.) - technical details, ADRs, decision context
|
||||
2. SECONDARY: synthesis-specification.md - integrated requirements, cross-role alignment
|
||||
3. REFERENCE: topic-framework.md - discussion context
|
||||
|
||||
EXPECTED:
|
||||
1. CURRENT STATE: Existing patterns, code structure, integration points, technical debt
|
||||
2. SOLUTION DESIGN: Core principles, system design, key decisions with rationale
|
||||
3. CRITICAL INSIGHTS: Strengths, gaps, risks, tradeoffs
|
||||
4. OPTIMIZATION: Performance, security, code quality recommendations
|
||||
5. FEASIBILITY: Complexity analysis, compatibility, implementation readiness
|
||||
6. OUTPUT: Write to .workflow/{session_id}/.process/gemini-solution-design.md
|
||||
|
||||
RULES:
|
||||
- Focus on SOLUTION IMPROVEMENTS and KEY DESIGN DECISIONS (NO task planning)
|
||||
- Identify code targets: existing "file:function:lines", new files "file"
|
||||
- Do NOT create task lists, implementation steps, or code examples
|
||||
" --approval-mode yolo
|
||||
```
|
||||
Output: `.workflow/{session_id}/.process/gemini-solution-design.md`
|
||||
|
||||
2. **Codex Technical Feasibility Validation** (Complex Tasks Only)
|
||||
```bash
|
||||
codex --full-auto exec "
|
||||
PURPOSE: Validate technical feasibility and identify implementation risks for {task_description}
|
||||
TASK: Assess complexity, validate technology choices, evaluate performance/security implications
|
||||
CONTEXT: @{.workflow/{session_id}/.process/context-package.json,.workflow/{session_id}/.process/gemini-solution-design.md,.workflow/{session_id}/workflow-session.json,CLAUDE.md}
|
||||
|
||||
**MANDATORY**: Read context-package.json, gemini-solution-design.md, and relevant source files
|
||||
|
||||
EXPECTED:
|
||||
1. FEASIBILITY: Complexity rating, resource requirements, technology compatibility
|
||||
2. RISK ANALYSIS: Implementation risks, integration challenges, performance/security concerns
|
||||
3. VALIDATION: Development approach, quality standards, maintenance implications
|
||||
4. RECOMMENDATIONS: Must-have requirements, optimization opportunities, security controls
|
||||
5. OUTPUT: Write to .workflow/{session_id}/.process/codex-feasibility-validation.md
|
||||
|
||||
RULES:
|
||||
- Focus on TECHNICAL FEASIBILITY and RISK ASSESSMENT (NO implementation planning)
|
||||
- Verify code targets: existing "file:function:lines", new files "file"
|
||||
- Do NOT create task breakdowns, step-by-step guides, or code examples
|
||||
" --skip-git-repo-check -s danger-full-access
|
||||
```
|
||||
Output: `.workflow/{session_id}/.process/codex-feasibility-validation.md`
|
||||
|
||||
3. **Parallel Execution**: Launch tools simultaneously, monitor progress, handle completion/errors, maintain logs
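
One way to sketch the parallel launch is with background jobs plus `wait`; the wrapper invocations are the ones shown above (prompts elided), and the log file names are assumptions:

```bash
# Sketch: run Gemini design and Codex validation concurrently, then wait for both
proc_dir=".workflow/${session_id}/.process"

~/.claude/scripts/gemini-wrapper -p "..." --approval-mode yolo \
  > "${proc_dir}/gemini-solution-design.log" 2>&1 &
gemini_pid=$!

codex --full-auto exec "..." --skip-git-repo-check -s danger-full-access \
  > "${proc_dir}/codex-feasibility-validation.log" 2>&1 &
codex_pid=$!

wait "$gemini_pid" "$codex_pid"   # both outputs must exist before Phase 4 synthesis
```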
|
||||
|
||||
### Phase 4: Results Collection & Synthesis
|
||||
1. **Output Validation**: Validate gemini-solution-design.md (all), codex-feasibility-validation.md (complex), use logs if incomplete, classify status
|
||||
2. **Quality Assessment**: Verify design rationale, insight depth, feasibility rigor, optimization value
|
||||
3. **Synthesis Strategy**: Direct integration (simple/medium), multi-tool synthesis (complex), resolve conflicts, score confidence
|
||||
|
||||
### Phase 5: ANALYSIS_RESULTS.md Generation
|
||||
1. **Report Sections**: Executive Summary, Current State, Solution Design, Implementation Strategy, Optimization, Success Factors, Confidence Scores
|
||||
2. **Guidelines**: Focus on solution improvements and design decisions (exclude task planning), emphasize rationale/tradeoffs/risk assessment
|
||||
3. **Output**: Single file `ANALYSIS_RESULTS.md` at `.workflow/{session_id}/.process/` with technical insights and optimization strategies
|
||||
|
||||
## Analysis Results Format
|
||||
|
||||
Generated ANALYSIS_RESULTS.md focuses on **solution improvements, key design decisions, and critical insights** (NOT task planning):
|
||||
|
||||
```markdown
|
||||
# Technical Analysis & Solution Design
|
||||
|
||||
## Executive Summary
|
||||
- **Analysis Focus**: {core_problem_or_improvement_area}
|
||||
- **Analysis Timestamp**: {timestamp}
|
||||
- **Tools Used**: {analysis_tools}
|
||||
- **Overall Assessment**: {feasibility_score}/5 - {recommendation_status}
|
||||
|
||||
---
|
||||
|
||||
## 1. Current State Analysis
|
||||
|
||||
### Architecture Overview
|
||||
- **Existing Patterns**: {key_architectural_patterns}
|
||||
- **Code Structure**: {current_codebase_organization}
|
||||
- **Integration Points**: {system_integration_touchpoints}
|
||||
- **Technical Debt Areas**: {identified_debt_with_impact}
|
||||
|
||||
### Compatibility & Dependencies
|
||||
- **Framework Alignment**: {framework_compatibility_assessment}
|
||||
- **Dependency Analysis**: {critical_dependencies_and_risks}
|
||||
- **Migration Considerations**: {backward_compatibility_concerns}
|
||||
|
||||
### Critical Findings
|
||||
- **Strengths**: {what_works_well}
|
||||
- **Gaps**: {missing_capabilities_or_issues}
|
||||
- **Risks**: {identified_technical_and_business_risks}
|
||||
|
||||
---
|
||||
|
||||
## 2. Proposed Solution Design
|
||||
|
||||
### Core Architecture Principles
|
||||
- **Design Philosophy**: {key_design_principles}
|
||||
- **Architectural Approach**: {chosen_architectural_pattern_with_rationale}
|
||||
- **Scalability Strategy**: {how_solution_scales}
|
||||
|
||||
### System Design
|
||||
- **Component Architecture**: {high_level_component_design}
|
||||
- **Data Flow**: {data_flow_patterns_and_state_management}
|
||||
- **API Design**: {interface_contracts_and_specifications}
|
||||
- **Integration Strategy**: {how_components_integrate}
|
||||
|
||||
### Key Design Decisions
|
||||
1. **Decision**: {critical_design_choice}
|
||||
- **Rationale**: {why_this_approach}
|
||||
- **Alternatives Considered**: {other_options_and_tradeoffs}
|
||||
- **Impact**: {implications_on_architecture}
|
||||
|
||||
2. **Decision**: {another_critical_choice}
|
||||
- **Rationale**: {reasoning}
|
||||
- **Alternatives Considered**: {tradeoffs}
|
||||
- **Impact**: {consequences}
|
||||
|
||||
### Technical Specifications
|
||||
- **Technology Stack**: {chosen_technologies_with_justification}
|
||||
- **Code Organization**: {module_structure_and_patterns}
|
||||
- **Testing Strategy**: {testing_approach_and_coverage}
|
||||
- **Performance Targets**: {performance_requirements_and_benchmarks}
|
||||
|
||||
---
|
||||
|
||||
## 3. Implementation Strategy
|
||||
|
||||
### Development Approach
|
||||
- **Core Implementation Pattern**: {primary_implementation_strategy}
|
||||
- **Module Dependencies**: {dependency_graph_and_order}
|
||||
- **Quality Assurance**: {qa_approach_and_validation}
|
||||
|
||||
### Code Modification Targets
|
||||
**Purpose**: Specific code locations for modification AND new files to create
|
||||
|
||||
**Identified Targets**:
|
||||
1. **Target**: `src/auth/AuthService.ts:login:45-52`
|
||||
- **Type**: Modify existing
|
||||
- **Modification**: Enhance error handling
|
||||
- **Rationale**: Current logic lacks validation
|
||||
|
||||
2. **Target**: `src/auth/PasswordReset.ts`
|
||||
- **Type**: Create new file
|
||||
- **Purpose**: Password reset functionality
|
||||
- **Rationale**: New feature requirement
|
||||
|
||||
**Format Rules**:
|
||||
- Existing files: `file:function:lines` (with line numbers)
|
||||
- New files: `file` (no function or lines)
|
||||
- Unknown lines: `file:function:*`
|
||||
- Task generation will refine these targets during `analyze_task_patterns` step
|
||||
|
||||
### Feasibility Assessment
|
||||
- **Technical Complexity**: {complexity_rating_and_analysis}
|
||||
- **Performance Impact**: {expected_performance_characteristics}
|
||||
- **Resource Requirements**: {development_resources_needed}
|
||||
- **Maintenance Burden**: {ongoing_maintenance_considerations}
|
||||
|
||||
### Risk Mitigation
|
||||
- **Technical Risks**: {implementation_risks_and_mitigation}
|
||||
- **Integration Risks**: {compatibility_challenges_and_solutions}
|
||||
- **Performance Risks**: {performance_concerns_and_strategies}
|
||||
- **Security Risks**: {security_vulnerabilities_and_controls}
|
||||
|
||||
---
|
||||
|
||||
## 4. Solution Optimization
|
||||
|
||||
### Performance Optimization
|
||||
- **Optimization Strategies**: {key_performance_improvements}
|
||||
- **Caching Strategy**: {caching_approach_and_invalidation}
|
||||
- **Resource Management**: {resource_utilization_optimization}
|
||||
- **Bottleneck Mitigation**: {identified_bottlenecks_and_solutions}
|
||||
|
||||
### Security Enhancements
|
||||
- **Security Model**: {authentication_authorization_approach}
|
||||
- **Data Protection**: {data_security_and_encryption}
|
||||
- **Vulnerability Mitigation**: {known_vulnerabilities_and_controls}
|
||||
- **Compliance**: {regulatory_and_compliance_considerations}
|
||||
|
||||
### Code Quality
|
||||
- **Code Standards**: {coding_conventions_and_patterns}
|
||||
- **Testing Coverage**: {test_strategy_and_coverage_goals}
|
||||
- **Documentation**: {documentation_requirements}
|
||||
- **Maintainability**: {maintainability_practices}
|
||||
|
||||
---
|
||||
|
||||
## 5. Critical Success Factors
|
||||
|
||||
### Technical Requirements
|
||||
- **Must Have**: {essential_technical_capabilities}
|
||||
- **Should Have**: {important_but_not_critical_features}
|
||||
- **Nice to Have**: {optional_enhancements}
|
||||
|
||||
### Quality Metrics
|
||||
- **Performance Benchmarks**: {measurable_performance_targets}
|
||||
- **Code Quality Standards**: {quality_metrics_and_thresholds}
|
||||
- **Test Coverage Goals**: {testing_coverage_requirements}
|
||||
- **Security Standards**: {security_compliance_requirements}
|
||||
|
||||
### Success Validation
|
||||
- **Acceptance Criteria**: {how_to_validate_success}
|
||||
- **Testing Strategy**: {validation_testing_approach}
|
||||
- **Monitoring Plan**: {production_monitoring_strategy}
|
||||
- **Rollback Plan**: {failure_recovery_strategy}
|
||||
|
||||
---
|
||||
|
||||
## 6. Analysis Confidence & Recommendations
|
||||
|
||||
### Assessment Scores
|
||||
- **Conceptual Integrity**: {score}/5 - {brief_assessment}
|
||||
- **Architectural Soundness**: {score}/5 - {brief_assessment}
|
||||
- **Technical Feasibility**: {score}/5 - {brief_assessment}
|
||||
- **Implementation Readiness**: {score}/5 - {brief_assessment}
|
||||
- **Overall Confidence**: {overall_score}/5
|
||||
|
||||
### Final Recommendation
|
||||
**Status**: {PROCEED|PROCEED_WITH_MODIFICATIONS|RECONSIDER|REJECT}
|
||||
|
||||
**Rationale**: {clear_explanation_of_recommendation}
|
||||
|
||||
**Critical Prerequisites**: {what_must_be_resolved_before_proceeding}
|
||||
|
||||
---
|
||||
|
||||
## 7. Reference Information
|
||||
|
||||
### Tool Analysis Summary
|
||||
- **Gemini Insights**: {key_architectural_and_pattern_insights}
|
||||
- **Codex Validation**: {technical_feasibility_and_implementation_notes}
|
||||
- **Consensus Points**: {agreements_between_tools}
|
||||
- **Conflicting Views**: {disagreements_and_resolution}
|
||||
|
||||
### Context & Resources
|
||||
- **Analysis Context**: {context_package_reference}
|
||||
- **Documentation References**: {relevant_documentation}
|
||||
- **Related Patterns**: {similar_implementations_in_codebase}
|
||||
- **External Resources**: {external_references_and_best_practices}
|
||||
```
|
||||
|
||||
## Execution Management
|
||||
|
||||
### Error Handling & Recovery
|
||||
1. **Pre-execution**: Verify session/context package, confirm CLI tools, validate dependencies
|
||||
2. **Monitoring & Timeout**: Track progress, 30-min limit, manage parallel execution, maintain status
|
||||
3. **Partial Recovery**: Generate results with incomplete outputs, use logs, provide next steps
|
||||
4. **Error Recovery**: Auto error detection, structured workflows, graceful degradation
|
||||
|
||||
### Performance & Resource Optimization
|
||||
- **Parallel Analysis**: Execute multiple tools simultaneously to reduce time
|
||||
- **Context Sharding**: Analyze large projects by module shards
|
||||
- **Caching**: Reuse results for similar contexts
|
||||
- **Resource Management**: Monitor disk/CPU/memory, set limits, cleanup temporary files
|
||||
- **Timeout Control**: `timeout 600s` with partial result generation on failure
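
A hedged sketch of this timeout behaviour (600-second cap with fallback to partial results; prompt elided):

```bash
# Sketch: cap a single analysis run; degrade to partial results if it times out or fails
if ! timeout 600s ~/.claude/scripts/gemini-wrapper -p "..." --approval-mode yolo; then
  echo "Analysis timed out/failed → assemble partial ANALYSIS_RESULTS.md from available logs" >&2
fi
```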
|
||||
|
||||
## Integration & Success Criteria
|
||||
|
||||
### Input/Output Interface
|
||||
**Input**:
|
||||
- `--session` (required): Session ID (e.g., WFS-auth)
|
||||
- `--context` (required): Context package path
|
||||
- `--depth` (optional): Analysis depth (quick|full|deep)
|
||||
- `--focus` (optional): Analysis focus areas
|
||||
|
||||
**Output**:
|
||||
- Single file: `ANALYSIS_RESULTS.md` at `.workflow/{session_id}/.process/`
|
||||
- No supplementary files (JSON, roadmap, templates)
|
||||
|
||||
### Quality & Success Validation
|
||||
**Quality Checks**: Completeness, consistency, feasibility validation
|
||||
|
||||
**Success Criteria**:
|
||||
- ✅ Solution-focused analysis (design decisions, critical insights, NO task planning)
|
||||
- ✅ Single output file only
|
||||
- ✅ Design decision depth with rationale/alternatives/tradeoffs
|
||||
- ✅ Feasibility assessment (complexity, risks, readiness)
|
||||
- ✅ Optimization strategies (performance, security, quality)
|
||||
- ✅ Parallel execution efficiency (Gemini + Codex for complex tasks)
|
||||
- ✅ Robust error handling (validation, timeout, partial recovery)
|
||||
- ✅ Confidence scoring with clear recommendation status
|
||||
|
||||
## Related Commands
|
||||
- `/context:gather` - Generate context packages required by this command
|
||||
- `/workflow:plan` - Call this command for analysis
|
||||
- `/task:create` - Create specific tasks based on analysis results
|
||||
.claude/commands/workflow/tools/conflict-resolution.md (new file, 471 lines)
@@ -0,0 +1,471 @@
|
||||
---
|
||||
name: conflict-resolution
|
||||
description: Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis
|
||||
argument-hint: "--session WFS-session-id --context path/to/context-package.json"
|
||||
examples:
|
||||
- /workflow:tools:conflict-resolution --session WFS-auth --context .workflow/WFS-auth/.process/context-package.json
|
||||
- /workflow:tools:conflict-resolution --session WFS-payment --context .workflow/WFS-payment/.process/context-package.json
|
||||
---
|
||||
|
||||
# Conflict Resolution Command
|
||||
|
||||
## Purpose
|
||||
Analyzes conflicts between implementation plans and existing codebase, generating multiple resolution strategies.
|
||||
|
||||
**Scope**: Detection and strategy generation only - NO code modification or task creation.
|
||||
|
||||
**Trigger**: Auto-executes in `/workflow:plan` Phase 3 when `conflict_risk ≥ medium`.
|
||||
|
||||
## Core Responsibilities
|
||||
|
||||
| Responsibility | Description |
|
||||
|---------------|-------------|
|
||||
| **Detect Conflicts** | Analyze plan vs existing code inconsistencies |
|
||||
| **Generate Strategies** | Provide 2-4 resolution options per conflict |
|
||||
| **CLI Analysis** | Use Gemini/Qwen (Claude fallback) |
|
||||
| **User Decision** | Present options, never auto-apply |
|
||||
| **Single Output** | `CONFLICT_RESOLUTION.md` with findings |
|
||||
|
||||
## Conflict Categories
|
||||
|
||||
### 1. Architecture Conflicts
|
||||
- Incompatible design patterns
|
||||
- Module structure changes
|
||||
- Pattern migration requirements
|
||||
|
||||
### 2. API Conflicts
|
||||
- Breaking contract changes
|
||||
- Signature modifications
|
||||
- Public interface impacts
|
||||
|
||||
### 3. Data Model Conflicts
|
||||
- Schema modifications
|
||||
- Type breaking changes
|
||||
- Data migration needs
|
||||
|
||||
### 4. Dependency Conflicts
|
||||
- Version incompatibilities
|
||||
- Setup conflicts
|
||||
- Breaking updates
|
||||
|
||||
## Execution Flow
|
||||
|
||||
### Phase 1: Validation
|
||||
```
|
||||
1. Verify session directory exists
|
||||
2. Load context-package.json
|
||||
3. Check conflict_risk (skip if none/low)
|
||||
4. Prepare agent task prompt
|
||||
```
|
||||
|
||||
### Phase 2: CLI-Powered Analysis
|
||||
|
||||
**Agent Delegation**:
|
||||
```javascript
|
||||
Task(subagent_type="cli-execution-agent", prompt=`
|
||||
## Context
|
||||
- Session: {session_id}
|
||||
- Risk: {conflict_risk}
|
||||
- Files: {existing_files_list}
|
||||
|
||||
## Analysis Steps
|
||||
|
||||
### 1. Load Context
|
||||
- Read existing files from conflict_detection.existing_files
|
||||
- Load plan from .workflow/{session_id}/.process/context-package.json
|
||||
- Extract role analyses and requirements
|
||||
|
||||
### 2. Execute CLI Analysis
|
||||
|
||||
Primary (Gemini):
|
||||
cd {project_root} && gemini -p "
|
||||
PURPOSE: Detect conflicts between plan and codebase
|
||||
TASK:
|
||||
• Compare architectures
|
||||
• Identify breaking API changes
|
||||
• Detect data model incompatibilities
|
||||
• Assess dependency conflicts
|
||||
MODE: analysis
|
||||
CONTEXT: @{existing_files} @.workflow/{session_id}/**/*
|
||||
EXPECTED: Conflict list with severity ratings
|
||||
RULES: Focus on breaking changes and migration needs
|
||||
"
|
||||
|
||||
Fallback: Qwen (same prompt) → Claude (manual analysis)
|
||||
|
||||
### 3. Generate Strategies (2-4 per conflict)
|
||||
|
||||
Template per conflict:
|
||||
- Severity: Critical/High/Medium
|
||||
- Category: Architecture/API/Data/Dependency
|
||||
- Affected files + impact
|
||||
- Options with pros/cons, effort, risk
|
||||
- Recommended strategy + rationale
|
||||
|
||||
### 4. Return Structured Conflict Data
|
||||
|
||||
⚠️ DO NOT generate CONFLICT_RESOLUTION.md file
|
||||
|
||||
Return JSON format for programmatic processing:
|
||||
|
||||
\`\`\`json
|
||||
{
|
||||
"conflicts": [
|
||||
{
|
||||
"id": "CON-001",
|
||||
"brief": "一行中文冲突摘要",
|
||||
"severity": "Critical|High|Medium",
|
||||
"category": "Architecture|API|Data|Dependency",
|
||||
"affected_files": [
|
||||
".workflow/{session}/.brainstorm/guidance-specification.md",
|
||||
".workflow/{session}/.brainstorm/system-architect/analysis.md"
|
||||
],
|
||||
"description": "详细描述冲突 - 什么不兼容",
|
||||
"impact": {
|
||||
"scope": "影响的模块/组件",
|
||||
"compatibility": "Yes|No|Partial",
|
||||
"migration_required": true|false,
|
||||
"estimated_effort": "人天估计"
|
||||
},
|
||||
"strategies": [
|
||||
{
|
||||
"name": "策略名称(中文)",
|
||||
"approach": "实现方法简述",
|
||||
"complexity": "Low|Medium|High",
|
||||
"risk": "Low|Medium|High",
|
||||
"effort": "时间估计",
|
||||
"pros": ["优点1", "优点2"],
|
||||
"cons": ["缺点1", "缺点2"],
|
||||
"modifications": [
|
||||
{
|
||||
"file": ".workflow/{session}/.brainstorm/guidance-specification.md",
|
||||
"section": "## 2. System Architect Decisions",
|
||||
"change_type": "update",
|
||||
"old_content": "原始内容片段(用于定位)",
|
||||
"new_content": "修改后的内容",
|
||||
"rationale": "为什么这样改"
|
||||
},
|
||||
{
|
||||
"file": ".workflow/{session}/.brainstorm/system-architect/analysis.md",
|
||||
"section": "## Design Decisions",
|
||||
"change_type": "update",
|
||||
"old_content": "原始内容片段",
|
||||
"new_content": "修改后的内容",
|
||||
"rationale": "修改理由"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "策略2名称",
|
||||
"approach": "...",
|
||||
"complexity": "Medium",
|
||||
"risk": "Low",
|
||||
"effort": "1-2天",
|
||||
"pros": ["优点"],
|
||||
"cons": ["缺点"],
|
||||
"modifications": [...]
|
||||
}
|
||||
],
|
||||
"recommended": 0,
|
||||
"modification_suggestions": [
|
||||
"建议1:具体的修改方向或注意事项",
|
||||
"建议2:可能需要考虑的边界情况",
|
||||
"建议3:相关的最佳实践或模式"
|
||||
]
|
||||
}
|
||||
],
|
||||
"summary": {
|
||||
"total": 2,
|
||||
"critical": 1,
|
||||
"high": 1,
|
||||
"medium": 0
|
||||
}
|
||||
}
|
||||
\`\`\`
|
||||
|
||||
⚠️ CRITICAL Requirements for modifications field:
|
||||
- old_content: Must be exact text from target file (20-100 chars for unique match)
|
||||
- new_content: Complete replacement text (maintains formatting)
|
||||
- change_type: "update" (replace), "add" (insert), "remove" (delete)
|
||||
- file: Full path relative to project root
|
||||
- section: Markdown heading for context (helps locate position)
|
||||
- Minimum 2 strategies per conflict, max 4
|
||||
- All text in Chinese for user-facing fields (brief, name, pros, cons)
|
||||
- modification_suggestions: 2-5 actionable suggestions for custom handling (Chinese)
|
||||
|
||||
Quality Standards:
|
||||
- Each strategy must have actionable modifications
|
||||
- old_content must be precise enough for Edit tool matching
|
||||
- new_content preserves markdown formatting and structure
|
||||
- Recommended strategy (index) based on lowest complexity + risk
|
||||
- modification_suggestions must be specific, actionable, and context-aware
|
||||
- Each suggestion should address a specific aspect (compatibility, migration, testing, etc.)
|
||||
`)
|
||||
```
|
||||
|
||||
**Agent Internal Flow**:
|
||||
```
|
||||
1. Load context package
|
||||
2. Check conflict_risk (exit if none/low)
|
||||
3. Read existing files + plan artifacts
|
||||
4. Run CLI analysis (Gemini→Qwen→Claude)
|
||||
5. Parse conflict findings
|
||||
6. Generate 2-4 strategies per conflict with modifications
|
||||
7. Return JSON to stdout (NOT file write)
|
||||
8. Return execution log path
|
||||
```
|
||||
|
||||
### Phase 3: User Confirmation via Text Interaction
|
||||
|
||||
**Command parses agent JSON output and presents conflicts to user via text**:
|
||||
|
||||
```javascript
|
||||
// 1. Parse agent JSON output
|
||||
const conflictData = JSON.parse(agentOutput);
|
||||
const conflicts = conflictData.conflicts; // No 4-conflict limit
|
||||
|
||||
// 2. Format conflicts as text output (max 10 per round)
|
||||
const batchSize = 10;
|
||||
const batches = chunkArray(conflicts, batchSize);
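// NOTE: chunkArray, readUserInput, and parseUserAnswers are assumed local helpers
// that this document does not define: chunkArray splits the conflict list into
// groups of `batchSize`, readUserInput blocks for the "1a 2b ..." reply, and
// parseUserAnswers maps that reply back onto each conflict's strategies.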
|
||||
|
||||
for (const [batchIdx, batch] of batches.entries()) {
|
||||
const totalBatches = batches.length;
|
||||
|
||||
// Output batch header
|
||||
console.log(`===== 冲突解决 (第 ${batchIdx + 1}/${totalBatches} 轮) =====\n`);
|
||||
|
||||
// Output each conflict in batch
|
||||
batch.forEach((conflict, idx) => {
|
||||
const questionNum = batchIdx * batchSize + idx + 1;
|
||||
console.log(`【问题${questionNum} - ${conflict.category}】${conflict.id}: ${conflict.brief}`);
|
||||
|
||||
conflict.strategies.forEach((strategy, sIdx) => {
|
||||
const optionLetter = String.fromCharCode(97 + sIdx); // a, b, c, ...
|
||||
console.log(`${optionLetter}) ${strategy.name}`);
|
||||
console.log(` 说明:${strategy.approach}`);
|
||||
console.log(` 复杂度: ${strategy.complexity} | 风险: ${strategy.risk} | 工作量: ${strategy.effort}`);
|
||||
});
|
||||
|
||||
// Add custom option
|
||||
const customLetter = String.fromCharCode(97 + conflict.strategies.length);
|
||||
console.log(`${customLetter}) 自定义修改`);
|
||||
console.log(` 说明:根据修改建议自行处理,不应用预设策略`);
|
||||
|
||||
// Show modification suggestions
|
||||
if (conflict.modification_suggestions && conflict.modification_suggestions.length > 0) {
|
||||
console.log(` 修改建议:`);
|
||||
conflict.modification_suggestions.forEach(suggestion => {
|
||||
console.log(` - ${suggestion}`);
|
||||
});
|
||||
}
|
||||
console.log();
|
||||
});
|
||||
|
||||
console.log(`请回答 (格式: 1a 2b 3c...):`);
|
||||
|
||||
// Wait for user input
|
||||
const userInput = await readUserInput();
|
||||
|
||||
// Parse answers
|
||||
const answers = parseUserAnswers(userInput, batch);
|
||||
}
|
||||
|
||||
// 3. Build selected strategies (exclude custom selections)
|
||||
const selectedStrategies = answers.filter(a => !a.isCustom).map(a => a.strategy);
|
||||
const customConflicts = answers.filter(a => a.isCustom).map(a => ({
|
||||
id: a.conflict.id,
|
||||
brief: a.conflict.brief,
|
||||
suggestions: a.conflict.modification_suggestions
|
||||
}));
|
||||
```
|
||||
|
||||
**Text Output Example**:
|
||||
```markdown
|
||||
===== 冲突解决 (第 1/1 轮) =====
|
||||
|
||||
【问题1 - Architecture】CON-001: 现有认证系统与计划不兼容
|
||||
a) 渐进式迁移
|
||||
说明:保留现有系统,逐步迁移到新方案
|
||||
复杂度: Medium | 风险: Low | 工作量: 3-5天
|
||||
b) 完全重写
|
||||
说明:废弃旧系统,从零实现新认证
|
||||
复杂度: High | 风险: Medium | 工作量: 7-10天
|
||||
c) 自定义修改
|
||||
说明:根据修改建议自行处理,不应用预设策略
|
||||
修改建议:
|
||||
- 评估现有认证系统的兼容性,考虑是否可以通过适配器模式桥接
|
||||
- 检查JWT token格式和验证逻辑是否需要调整
|
||||
- 确保用户会话管理与新架构保持一致
|
||||
|
||||
【问题2 - Data】CON-002: 数据库 schema 冲突
|
||||
a) 添加迁移脚本
|
||||
说明:创建数据库迁移脚本处理 schema 变更
|
||||
复杂度: Low | 风险: Low | 工作量: 1-2天
|
||||
b) 自定义修改
|
||||
说明:根据修改建议自行处理,不应用预设策略
|
||||
修改建议:
|
||||
- 检查现有表结构是否支持新增字段,避免破坏性变更
|
||||
- 考虑使用数据库版本控制工具(如Flyway或Liquibase)
|
||||
- 准备数据迁移和回滚策略
|
||||
|
||||
请回答 (格式: 1a 2b):
|
||||
```
|
||||
|
||||
**User Input Examples**:
|
||||
- `1a 2a` → Conflict 1: 渐进式迁移, Conflict 2: 添加迁移脚本
|
||||
- `1b 2b` → Conflict 1: 完全重写, Conflict 2: 自定义修改
|
||||
- `1c 2b` → Both choose custom modification (user handles manually with suggestions)
|
||||
|
||||
### Phase 4: Apply Modifications
|
||||
|
||||
```javascript
|
||||
// 1. Extract modifications from selected strategies
|
||||
const modifications = [];
|
||||
selectedStrategies.forEach(strategy => {
|
||||
if (strategy !== "skip") {
|
||||
modifications.push(...strategy.modifications);
|
||||
}
|
||||
});
|
||||
|
||||
// 2. Apply each modification using Edit tool
|
||||
modifications.forEach(mod => {
|
||||
if (mod.change_type === "update") {
|
||||
Edit({
|
||||
file_path: mod.file,
|
||||
old_string: mod.old_content,
|
||||
new_string: mod.new_content
|
||||
});
|
||||
}
|
||||
// Handle "add" and "remove" similarly
|
||||
});
|
||||
|
||||
// 3. Update context-package.json
|
||||
const contextPackage = JSON.parse(Read(contextPath));
|
||||
contextPackage.conflict_detection.conflict_risk = "resolved";
|
||||
contextPackage.conflict_detection.resolved_conflicts = conflicts.map(c => c.id);
|
||||
contextPackage.conflict_detection.resolved_at = new Date().toISOString();
|
||||
Write(contextPath, JSON.stringify(contextPackage, null, 2));
|
||||
|
||||
// 4. Output custom conflict summary (if any)
|
||||
if (customConflicts.length > 0) {
|
||||
console.log("\n===== 需要自定义处理的冲突 =====\n");
|
||||
customConflicts.forEach(conflict => {
|
||||
console.log(`【${conflict.id}】${conflict.brief}`);
|
||||
console.log("修改建议:");
|
||||
conflict.suggestions.forEach(suggestion => {
|
||||
console.log(` - ${suggestion}`);
|
||||
});
|
||||
console.log();
|
||||
});
|
||||
}
|
||||
|
||||
// 5. Return summary
|
||||
return {
|
||||
resolved: modifications.length,
|
||||
custom: customConflicts.length,
|
||||
modified_files: [...new Set(modifications.map(m => m.file))],
|
||||
custom_conflicts: customConflicts
|
||||
};
|
||||
```
|
||||
|
||||
**Validation**:
|
||||
```
|
||||
✓ Agent returns valid JSON structure
|
||||
✓ Text output displays all conflicts (max 10 per round)
|
||||
✓ User selections captured correctly
|
||||
✓ Edit tool successfully applies modifications
|
||||
✓ guidance-specification.md updated
|
||||
✓ Role analyses (*.md) updated
|
||||
✓ context-package.json marked as resolved
|
||||
✓ Agent log saved to .workflow/{session_id}/.chat/
|
||||
```
|
||||
|
||||
## Output Format: Agent JSON Response
|
||||
|
||||
**Focus**: Structured conflict data with actionable modifications for programmatic processing.
|
||||
|
||||
**Format**: JSON to stdout (NO file generation)
|
||||
|
||||
**Structure**: Defined in Phase 2, Step 4 (agent prompt)
|
||||
|
||||
### Key Requirements
|
||||
| Requirement | Details |
|
||||
|------------|---------|
|
||||
| **Conflict batching** | Max 10 conflicts per round (no total limit) |
|
||||
| **Strategy count** | 2-4 strategies per conflict |
|
||||
| **Modifications** | Each strategy includes file paths, old_content, new_content |
|
||||
| **User-facing text** | Chinese (brief, strategy names, pros/cons) |
|
||||
| **Technical fields** | English (severity, category, complexity, risk) |
|
||||
| **old_content precision** | 20-100 chars for unique Edit tool matching |
|
||||
| **File targets** | guidance-specification.md, role analyses (*.md) |
|
||||
|
||||
## Error Handling
|
||||
|
||||
### Recovery Strategy
|
||||
```
|
||||
1. Pre-check: Verify conflict_risk ≥ medium
|
||||
2. Monitor: Track agent via Task tool
|
||||
3. Validate: Parse agent JSON output
|
||||
4. Recover:
|
||||
- Agent failure → check logs + report error
|
||||
- Invalid JSON → retry once with Claude fallback
|
||||
- CLI failure → fallback to Claude analysis
|
||||
- Edit tool failure → report affected files + rollback option
|
||||
- User cancels → mark as "unresolved", continue to task-generate
|
||||
5. Degrade: If all fail, generate minimal conflict report and skip modifications
|
||||
```
|
||||
|
||||
### Rollback Handling
|
||||
```
|
||||
If Edit tool fails mid-application:
|
||||
1. Log all successfully applied modifications
|
||||
2. Output rollback option via text interaction
|
||||
3. If rollback selected: restore files from git or backups
|
||||
4. If continue: mark partial resolution in context-package.json
|
||||
```
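
A rough sketch of the rollback branch, assuming the affected files were committed to git before this command ran and that each successful Edit was recorded in an `appliedMods` array (hypothetical name):

```javascript
// Roll back, or record partial resolution, after a mid-application Edit failure.
if (userChoseRollback) {
  const touchedFiles = [...new Set(appliedMods.map(m => m.file))];
  // Restore each file to its pre-modification (committed) version
  touchedFiles.forEach(file => bash(`git checkout -- "${file}"`));
} else {
  // Continue: mark partial resolution so task-generate knows the current state
  const pkg = JSON.parse(Read(contextPath));
  pkg.conflict_detection.conflict_risk = "partially_resolved";
  pkg.conflict_detection.applied_modifications = appliedMods.length;
  Write(contextPath, JSON.stringify(pkg, null, 2));
}
```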
|
||||
|
||||
## Integration
|
||||
|
||||
### Interface
|
||||
**Input**:
|
||||
- `--session` (required): WFS-{session-id}
|
||||
- `--context` (required): context-package.json path
|
||||
- Requires: `conflict_risk ≥ medium`
|
||||
|
||||
**Output**:
|
||||
- Modified files:
|
||||
- `.workflow/{session_id}/.brainstorming/guidance-specification.md`
- `.workflow/{session_id}/.brainstorming/{role}/analysis.md`
|
||||
- `.workflow/{session_id}/.process/context-package.json` (conflict_risk → resolved)
|
||||
- NO report file generation
|
||||
|
||||
**User Interaction**:
|
||||
- Text-based strategy selection (max 10 conflicts per round)
|
||||
- Each conflict: 2-4 strategy options plus a "自定义修改" (custom modification) option with suggestions
|
||||
|
||||
### Success Criteria
|
||||
```
|
||||
✓ CLI analysis returns valid JSON structure
|
||||
✓ Conflicts presented in batches (max 10 per round)
|
||||
✓ Min 2 strategies per conflict with modifications
|
||||
✓ Each conflict includes 2-5 modification_suggestions
|
||||
✓ Text output displays all conflicts correctly with suggestions
|
||||
✓ User selections captured and processed
|
||||
✓ Edit tool applies modifications successfully
|
||||
✓ Custom conflicts displayed with suggestions for manual handling
|
||||
✓ guidance-specification.md updated with resolved conflicts
|
||||
✓ Role analyses (*.md) updated with resolved conflicts
|
||||
✓ context-package.json marked as "resolved"
|
||||
✓ No CONFLICT_RESOLUTION.md file generated
|
||||
✓ Modification summary includes custom conflict count
|
||||
✓ Agent log saved to .workflow/{session_id}/.chat/
|
||||
✓ Error handling robust (validate/retry/degrade)
|
||||
```
|
||||
|
||||
## Related Commands
|
||||
| Command | Relationship |
|---------|--------------|
| `/workflow:tools:context-gather` | Generates input conflict_detection data |
| `/workflow:plan` | Auto-triggers this when risk ≥ medium |
| `/workflow:tools:task-generate` | Uses resolved conflicts from updated brainstorm files |
| `/workflow:brainstorm:artifacts` | Generates guidance-specification.md (modified by this command) |
|
||||
@@ -1,300 +1,175 @@
|
||||
---
|
||||
name: gather
|
||||
description: Intelligently collect project context based on task description and package into standardized JSON
|
||||
description: Intelligently collect project context using context-search-agent based on task description and package into standardized JSON
|
||||
argument-hint: "--session WFS-session-id \"task description\""
|
||||
examples:
|
||||
- /workflow:tools:context-gather --session WFS-user-auth "Implement user authentication system"
|
||||
- /workflow:tools:context-gather --session WFS-payment "Refactor payment module API"
|
||||
- /workflow:tools:context-gather --session WFS-bugfix "Fix login validation error"
|
||||
allowed-tools: Task(*), Read(*), Glob(*)
|
||||
---
|
||||
|
||||
# Context Gather Command (/workflow:tools:context-gather)
|
||||
|
||||
## Overview
|
||||
Intelligent context collector that gathers relevant information from project codebase, documentation, and dependencies based on task descriptions, generating standardized context packages.
|
||||
|
||||
Orchestrator command that invokes `context-search-agent` to gather comprehensive project context for implementation planning. Generates standardized `context-package.json` with codebase analysis, dependencies, and conflict detection.
|
||||
|
||||
**Agent**: `context-search-agent` (`.claude/agents/context-search-agent.md`)
|
||||
|
||||
## Core Philosophy
|
||||
- **Intelligent Collection**: Auto-identify relevant resources based on keyword analysis
|
||||
- **Comprehensive Coverage**: Collect code, documentation, configurations, and dependencies
|
||||
- **Standardized Output**: Generate unified format context-package.json
|
||||
- **Efficient Execution**: Optimize collection strategies to avoid irrelevant information
|
||||
|
||||
## Core Responsibilities
|
||||
- **Keyword Extraction**: Extract core keywords from task descriptions
|
||||
- **Smart Documentation Loading**: Load relevant project documentation based on keywords
|
||||
- **Code Structure Analysis**: Analyze project structure to locate relevant code files
|
||||
- **Dependency Discovery**: Identify tech stack and dependency relationships
|
||||
- **MCP Tools Integration**: Leverage code-index tools for enhanced collection
|
||||
- **Context Packaging**: Generate standardized JSON context packages
|
||||
- **Agent Delegation**: Delegate all discovery to `context-search-agent` for autonomous execution
|
||||
- **Detection-First**: Check for existing context-package before executing
|
||||
- **Plan Mode**: Full comprehensive analysis (vs lightweight brainstorm mode)
|
||||
- **Standardized Output**: Generate `.workflow/{session}/.process/context-package.json`
|
||||
|
||||
## Execution Process
|
||||
## Execution Flow
|
||||
|
||||
### Phase 1: Task Analysis
|
||||
1. **Keyword Extraction**
|
||||
- Parse task description to extract core keywords
|
||||
- Identify technical domain (auth, API, frontend, backend, etc.)
|
||||
- Determine complexity level (simple, medium, complex)
|
||||
### Step 1: Context-Package Detection
|
||||
|
||||
2. **Scope Determination**
|
||||
- Define collection scope based on keywords
|
||||
- Identify potentially involved modules and components
|
||||
- Set file type filters
|
||||
**Execute First** - Check if valid package already exists:
|
||||
|
||||
### Phase 2: Project Structure Exploration
|
||||
1. **Architecture Analysis**
|
||||
- Use `~/.claude/scripts/get_modules_by_depth.sh` for comprehensive project structure
|
||||
- Analyze project layout and module organization
|
||||
- Identify key directories and components
|
||||
```javascript
|
||||
const contextPackagePath = `.workflow/${session_id}/.process/context-package.json`;
|
||||
|
||||
2. **Code File Location**
|
||||
- Use MCP tools for precise search: `mcp__code-index__find_files()` and `mcp__code-index__search_code_advanced()`
|
||||
- Search for relevant source code files based on keywords
|
||||
- Locate implementation files, interfaces, and modules
|
||||
if (file_exists(contextPackagePath)) {
|
||||
const existing = Read(contextPackagePath);
|
||||
|
||||
3. **Documentation Collection**
|
||||
- Load CLAUDE.md and README.md
|
||||
- Load relevant documentation from .workflow/docs/ based on keywords
|
||||
- Collect configuration files (package.json, requirements.txt, etc.)
|
||||
|
||||
### Phase 3: Intelligent Filtering & Association
|
||||
1. **Relevance Scoring**
|
||||
- Score based on keyword match degree
|
||||
- Score based on file path relevance
|
||||
- Score based on code content relevance
|
||||
|
||||
2. **Dependency Analysis**
|
||||
- Analyze import/require statements
|
||||
- Identify inter-module dependencies
|
||||
- Determine core and optional dependencies
|
||||
|
||||
### Phase 4: Context Packaging
|
||||
1. **Standardized Output**
|
||||
- Generate context-package.json
|
||||
- Organize resources by type and importance
|
||||
- Add relevance descriptions and usage recommendations
|
||||
|
||||
## Context Package Format
|
||||
|
||||
Generated context package format:
|
||||
|
||||
```json
|
||||
{
|
||||
"metadata": {
|
||||
"task_description": "Implement user authentication system",
|
||||
"timestamp": "2025-09-29T10:30:00Z",
|
||||
"keywords": ["user", "authentication", "JWT", "login"],
|
||||
"complexity": "medium",
|
||||
"tech_stack": ["typescript", "node.js", "express"],
|
||||
"session_id": "WFS-user-auth"
|
||||
},
|
||||
"assets": [
|
||||
{
|
||||
"type": "documentation",
|
||||
"path": "CLAUDE.md",
|
||||
"relevance": "Project development standards and conventions",
|
||||
"priority": "high"
|
||||
},
|
||||
{
|
||||
"type": "documentation",
|
||||
"path": ".workflow/docs/architecture/security.md",
|
||||
"relevance": "Security architecture design guidance",
|
||||
"priority": "high"
|
||||
},
|
||||
{
|
||||
"type": "source_code",
|
||||
"path": "src/auth/AuthService.ts",
|
||||
"relevance": "Existing authentication service implementation",
|
||||
"priority": "high"
|
||||
},
|
||||
{
|
||||
"type": "source_code",
|
||||
"path": "src/models/User.ts",
|
||||
"relevance": "User data model definition",
|
||||
"priority": "medium"
|
||||
},
|
||||
{
|
||||
"type": "config",
|
||||
"path": "package.json",
|
||||
"relevance": "Project dependencies and tech stack",
|
||||
"priority": "medium"
|
||||
},
|
||||
{
|
||||
"type": "test",
|
||||
"path": "tests/auth/*.test.ts",
|
||||
"relevance": "Authentication related test cases",
|
||||
"priority": "medium"
|
||||
}
|
||||
],
|
||||
"tech_stack": {
|
||||
"frameworks": ["express", "typescript"],
|
||||
"libraries": ["jsonwebtoken", "bcrypt"],
|
||||
"testing": ["jest", "supertest"]
|
||||
},
|
||||
"statistics": {
|
||||
"total_files": 15,
|
||||
"source_files": 8,
|
||||
"docs_files": 4,
|
||||
"config_files": 2,
|
||||
"test_files": 1
|
||||
// Validate package belongs to current session
|
||||
if (existing?.metadata?.session_id === session_id) {
|
||||
console.log("✅ Valid context-package found for session:", session_id);
|
||||
console.log("📊 Stats:", existing.statistics);
|
||||
console.log("⚠️ Conflict Risk:", existing.conflict_detection.risk_level);
|
||||
return existing; // Skip execution, return existing
|
||||
} else {
|
||||
console.warn("⚠️ Invalid session_id in existing package, re-generating...");
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## MCP Tools Integration
|
||||
### Step 2: Invoke Context-Search Agent
|
||||
|
||||
### Code Index Integration
|
||||
```bash
|
||||
# Set project path
|
||||
mcp__code-index__set_project_path(path="{current_project_path}")
|
||||
**Only execute if Step 1 finds no valid package**
|
||||
|
||||
# Refresh index to ensure latest
|
||||
mcp__code-index__refresh_index()
|
||||
```javascript
|
||||
Task(
|
||||
subagent_type="context-search-agent",
|
||||
description="Gather comprehensive context for plan",
|
||||
prompt=`
|
||||
You are executing as context-search-agent (.claude/agents/context-search-agent.md).
|
||||
|
||||
# Search relevant files
|
||||
mcp__code-index__find_files(pattern="*{keyword}*")
|
||||
## Execution Mode
|
||||
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution
|
||||
|
||||
# Search code content
|
||||
mcp__code-index__search_code_advanced(
|
||||
pattern="{keyword_patterns}",
|
||||
file_pattern="*.{ts,js,py,go,md}",
|
||||
context_lines=3
|
||||
## Session Information
|
||||
- **Session ID**: ${session_id}
|
||||
- **Task Description**: ${task_description}
|
||||
- **Output Path**: .workflow/${session_id}/.process/context-package.json
|
||||
|
||||
## Mission
|
||||
Execute complete context-search-agent workflow for implementation planning:
|
||||
|
||||
### Phase 1: Initialization & Pre-Analysis
|
||||
1. **Detection**: Check for existing context-package (early exit if valid)
|
||||
2. **Foundation**: Initialize code-index, get project structure, load docs
|
||||
3. **Analysis**: Extract keywords, determine scope, classify complexity
|
||||
|
||||
### Phase 2: Multi-Source Context Discovery
|
||||
Execute all 3 discovery tracks:
|
||||
- **Track 1**: Reference documentation (CLAUDE.md, architecture docs)
|
||||
- **Track 2**: Web examples (use Exa MCP for unfamiliar tech/APIs)
|
||||
- **Track 3**: Codebase analysis (5-layer discovery: files, content, patterns, deps, config/tests)
|
||||
|
||||
### Phase 3: Synthesis, Assessment & Packaging
|
||||
1. Apply relevance scoring and build dependency graph
|
||||
2. Synthesize 3-source data (docs > code > web)
|
||||
3. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
|
||||
4. Perform conflict detection with risk assessment
|
||||
5. Generate and validate context-package.json
|
||||
|
||||
## Output Requirements
|
||||
Complete context-package.json with:
|
||||
- **metadata**: task_description, keywords, complexity, tech_stack, session_id
|
||||
- **project_context**: architecture_patterns, coding_conventions, tech_stack
|
||||
- **assets**: {documentation[], source_code[], config[], tests[]} with relevance scores
|
||||
- **dependencies**: {internal[], external[]} with dependency graph
|
||||
- **brainstorm_artifacts**: {guidance_specification, role_analyses[], synthesis_output} with content
|
||||
- **conflict_detection**: {risk_level, risk_factors, affected_modules[], mitigation_strategy}
|
||||
|
||||
## Quality Validation
|
||||
Before completion verify:
|
||||
- [ ] Valid JSON format with all required fields
|
||||
- [ ] File relevance accuracy >80%
|
||||
- [ ] Dependency graph complete (max 2 transitive levels)
|
||||
- [ ] Conflict risk level calculated correctly
|
||||
- [ ] No sensitive data exposed
|
||||
- [ ] Total files ≤50 (prioritize high-relevance)
|
||||
|
||||
Execute autonomously following agent documentation.
|
||||
Report completion with statistics.
|
||||
`
|
||||
)
|
||||
```
|
||||
|
||||
### Step 3: Output Verification
|
||||
|
||||
## Session ID Integration
|
||||
After agent completes, verify output:
|
||||
|
||||
### Session ID Usage
|
||||
- **Required Parameter**: `--session WFS-session-id`
|
||||
- **Session Context Loading**: Load existing session state and task summaries
|
||||
- **Session Continuity**: Maintain context across pipeline phases
|
||||
```javascript
|
||||
// Verify file was created
|
||||
const outputPath = `.workflow/${session_id}/.process/context-package.json`;
|
||||
if (!file_exists(outputPath)) {
|
||||
throw new Error("❌ Agent failed to generate context-package.json");
|
||||
}
|
||||
```
|
||||
|
||||
### Session State Management
|
||||
## Parameter Reference
|
||||
|
||||
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `--session` | string | ✅ | Workflow session ID (e.g., WFS-user-auth) |
| `task_description` | string | ✅ | Detailed task description for context extraction |
|
||||
|
||||
## Output Schema
|
||||
|
||||
Refer to `context-search-agent.md` Phase 3.7 for complete `context-package.json` schema.
|
||||
|
||||
**Key Sections**:
|
||||
- **metadata**: Session info, keywords, complexity, tech stack
|
||||
- **project_context**: Architecture patterns, conventions, tech stack
|
||||
- **assets**: Categorized files with relevance scores (documentation, source_code, config, tests)
|
||||
- **dependencies**: Internal and external dependency graphs
|
||||
- **brainstorm_artifacts**: Brainstorm documents with full content (if exists)
|
||||
- **conflict_detection**: Risk assessment with mitigation strategies
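
A skeletal sketch of that shape is shown below; the keys follow the bullets above, the values are placeholders, and the authoritative schema remains `context-search-agent.md` Phase 3.7:

```javascript
// Skeleton of context-package.json (placeholder values only).
const contextPackage = {
  metadata: { session_id: "WFS-auth-feature", keywords: ["jwt"], complexity: "medium", tech_stack: ["typescript"] },
  project_context: { architecture_patterns: [], coding_conventions: [], tech_stack: {} },
  assets: { documentation: [], source_code: [], config: [], tests: [] },
  dependencies: { internal: [], external: [] },
  brainstorm_artifacts: { guidance_specification: null, role_analyses: [], synthesis_output: null },
  conflict_detection: { risk_level: "low", risk_factors: [], affected_modules: [], mitigation_strategy: "" }
};
```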
|
||||
|
||||
## Usage Examples
|
||||
|
||||
### Basic Usage
|
||||
```bash
|
||||
# Validate session exists
|
||||
if [ ! -d ".workflow/${session_id}" ]; then
|
||||
echo "❌ Session ${session_id} not found"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Load session metadata
|
||||
session_metadata=".workflow/${session_id}/workflow-session.json"
|
||||
/workflow:tools:context-gather --session WFS-auth-feature "Implement JWT authentication with refresh tokens"
|
||||
```
|
||||
## Success Criteria
|
||||
|
||||
## Output Location
|
||||
|
||||
Context package output location:
|
||||
```
|
||||
.workflow/{session_id}/.process/context-package.json
|
||||
```
|
||||
- ✅ Valid context-package.json generated in `.workflow/{session}/.process/`
|
||||
- ✅ Contains >80% relevant files based on task keywords
|
||||
- ✅ Execution completes within 2 minutes
|
||||
- ✅ All required schema fields present and valid
|
||||
- ✅ Conflict risk accurately assessed
|
||||
- ✅ Agent reports completion with statistics
|
||||
|
||||
## Error Handling
|
||||
|
||||
### Common Error Handling
|
||||
1. **No Active Session**: Create temporary session directory
|
||||
2. **MCP Tools Unavailable**: Fallback to traditional bash commands
|
||||
3. **Permission Errors**: Prompt user to check file permissions
|
||||
4. **Large Project Optimization**: Limit file count, prioritize high-relevance files
|
||||
| Error | Cause | Resolution |
|-------|-------|------------|
| Package validation failed | Invalid session_id in existing package | Re-run agent to regenerate |
| Agent execution timeout | Large codebase or slow MCP | Increase timeout, check code-index status |
| Missing required fields | Agent incomplete execution | Check agent logs, verify schema compliance |
| File count exceeds limit | Too many relevant files | Agent should auto-prioritize top 50 by relevance |
|
||||
|
||||
### Graceful Degradation Strategy
|
||||
```bash
|
||||
# Fallback when MCP unavailable
|
||||
if ! command -v mcp__code-index__find_files; then
|
||||
# Use find command for file discovery
|
||||
find . -name "*{keyword}*" -type f -not -path "*/node_modules/*" -not -path "*/.git/*"
|
||||
## Notes
|
||||
|
||||
# Alternative pattern matching
|
||||
find . -type f \( -name "*.ts" -o -name "*.js" -o -name "*.py" -o -name "*.go" \) -exec grep -l "{keyword}" {} \;
|
||||
fi
|
||||
|
||||
# Use ripgrep instead of MCP search
|
||||
rg "{keywords}" --type-add 'source:*.{ts,js,py,go}' -t source --max-count 30
|
||||
|
||||
# Content-based search with context
|
||||
rg -A 3 -B 3 "{keywords}" --type-add 'source:*.{ts,js,py,go}' -t source
|
||||
|
||||
# Quick relevance check
|
||||
grep -r --include="*.{ts,js,py,go}" -l "{keywords}" . | head -15
|
||||
|
||||
# Test files discovery
|
||||
find . -name "*test*" -o -name "*spec*" | grep -E "\.(ts|js|py|go)$" | head -10
|
||||
|
||||
# Import/dependency analysis
|
||||
rg "^(import|from|require|#include)" --type-add 'source:*.{ts,js,py,go}' -t source | head -20
|
||||
```
|
||||
|
||||
## Performance Optimization
|
||||
|
||||
### Large Project Optimization Strategy
|
||||
- **File Count Limit**: Maximum 50 files per type
|
||||
- **Size Filtering**: Skip oversized files (>10MB)
|
||||
- **Depth Limit**: Maximum search depth of 3 levels
|
||||
- **Caching Strategy**: Cache project structure analysis results
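
A rough sketch of how these limits could be applied to a candidate list during discovery (thresholds mirror the bullets above; the candidate objects and their fields are hypothetical):

```javascript
// Apply large-project limits: size filter, depth filter, then keep the top 50 by relevance.
const MAX_FILES_PER_TYPE = 50;
const MAX_FILE_SIZE_BYTES = 10 * 1024 * 1024; // 10MB

function limitCandidates(files) {
  return files
    .filter(f => f.sizeBytes <= MAX_FILE_SIZE_BYTES)
    .filter(f => f.path.split("/").length - 1 <= 3) // search depth ≤ 3
    .sort((a, b) => b.relevance - a.relevance)
    .slice(0, MAX_FILES_PER_TYPE);
}
```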
|
||||
|
||||
### Parallel Processing
|
||||
- Documentation collection and code search in parallel
|
||||
- MCP tool calls and traditional commands in parallel
|
||||
- Reduce I/O wait time
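
Sketch of the parallel pattern, assuming the discovery helpers return promises (names are hypothetical):

```javascript
// Run documentation loading and code search concurrently to reduce I/O wait.
async function gatherInParallel(keywords) {
  const [docs, codeHits] = await Promise.all([
    collectDocumentation(keywords),
    searchCodebase(keywords)
  ]);
  return { docs, codeHits };
}
```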
|
||||
|
||||
## Essential Bash Commands (Max 10)
|
||||
|
||||
### 1. Project Structure Analysis
|
||||
```bash
|
||||
~/.claude/scripts/get_modules_by_depth.sh
|
||||
```
|
||||
|
||||
### 2. File Discovery by Keywords
|
||||
```bash
|
||||
find . -name "*{keyword}*" -type f -not -path "*/node_modules/*" -not -path "*/.git/*"
|
||||
```
|
||||
|
||||
### 3. Content Search in Code Files
|
||||
```bash
|
||||
rg "{keyword}" --type-add 'source:*.{ts,js,py,go}' -t source --max-count 20
|
||||
```
|
||||
|
||||
### 4. Configuration Files Discovery
|
||||
```bash
|
||||
find . -maxdepth 3 \( -name "*.json" -o -name "package.json" -o -name "requirements.txt" -o -name "Cargo.toml" \) -not -path "*/node_modules/*"
|
||||
```
|
||||
|
||||
### 5. Documentation Files Collection
|
||||
```bash
|
||||
find . -name "*.md" -o -name "README*" -o -name "CLAUDE.md" | grep -v node_modules | head -10
|
||||
```
|
||||
|
||||
### 6. Test Files Location
|
||||
```bash
|
||||
find . \( -name "*test*" -o -name "*spec*" \) -type f | grep -E "\.(js|ts|py|go)$" | head -10
|
||||
```
|
||||
|
||||
### 7. Function/Class Definitions Search
|
||||
```bash
|
||||
rg "^(function|def|func|class|interface)" --type-add 'source:*.{ts,js,py,go}' -t source -n --max-count 15
|
||||
```
|
||||
|
||||
### 8. Import/Dependency Analysis
|
||||
```bash
|
||||
rg "^(import|from|require|#include)" --type-add 'source:*.{ts,js,py,go}' -t source | head -15
|
||||
```
|
||||
|
||||
### 9. Workflow Session Information
|
||||
```bash
|
||||
find .workflow/ -name "*.json" -path "*/${session_id}/*" -o -name "workflow-session.json" | head -5
|
||||
```
|
||||
|
||||
### 10. Context-Aware Content Search
|
||||
```bash
|
||||
rg -A 2 -B 2 "{keywords}" --type-add 'source:*.{ts,js,py,go}' -t source --max-count 10
|
||||
```
|
||||
|
||||
## Success Criteria
|
||||
- Generate valid context-package.json file
|
||||
- Contains sufficient relevant information for subsequent analysis
|
||||
- Execution time controlled within 30 seconds
|
||||
- File relevance accuracy rate >80%
|
||||
|
||||
## Related Commands
|
||||
- `/workflow:tools:concept-enhanced` - Consumes output of this command for analysis
|
||||
- `/workflow:plan` - Calls this command to gather context
|
||||
- `/workflow:status` - Can display context collection status
|
||||
- **Detection-first**: Always check for existing package before invoking agent
|
||||
- **Agent autonomy**: Agent handles all discovery logic per `.claude/agents/context-search-agent.md`
|
||||
- **No redundancy**: This command is a thin orchestrator, all logic in agent
|
||||
- **Plan-specific**: Use this for implementation planning; brainstorm mode uses direct agent call
|
||||
|
||||
@@ -1,21 +1,24 @@
|
||||
---
|
||||
name: task-generate-agent
|
||||
description: Autonomous task generation using action-planning-agent with discovery and output phases
|
||||
argument-hint: "--session WFS-session-id"
|
||||
argument-hint: "--session WFS-session-id [--cli-execute]"
|
||||
examples:
|
||||
- /workflow:tools:task-generate-agent --session WFS-auth
|
||||
- /workflow:tools:task-generate-agent --session WFS-auth --cli-execute
|
||||
---
|
||||
|
||||
# Autonomous Task Generation Command
|
||||
|
||||
## Overview
|
||||
Autonomous task JSON and IMPL_PLAN.md generation using action-planning-agent with two-phase execution: discovery and document generation.
|
||||
Autonomous task JSON and IMPL_PLAN.md generation using action-planning-agent with two-phase execution: discovery and document generation. Supports both agent-driven execution (default) and CLI tool execution modes.
|
||||
|
||||
## Core Philosophy
|
||||
- **Agent-Driven**: Delegate execution to action-planning-agent for autonomous operation
|
||||
- **Two-Phase Flow**: Discovery (context gathering) → Output (document generation)
|
||||
- **Memory-First**: Reuse loaded documents from conversation memory
|
||||
- **MCP-Enhanced**: Use MCP tools for advanced code analysis and research
|
||||
- **Pre-Selected Templates**: Command selects correct template based on `--cli-execute` flag **before** invoking agent
|
||||
- **Agent Simplicity**: Agent receives pre-selected template and focuses only on content generation
|
||||
|
||||
## Execution Lifecycle
|
||||
|
||||
@@ -26,21 +29,27 @@ Autonomous task JSON and IMPL_PLAN.md generation using action-planning-agent wit
|
||||
```javascript
|
||||
{
|
||||
"session_id": "WFS-[session-id]",
|
||||
"execution_mode": "agent-mode" | "cli-execute-mode", // Determined by flag
|
||||
"task_json_template_path": "~/.claude/workflows/cli-templates/prompts/workflow/task-json-agent-mode.txt"
|
||||
| "~/.claude/workflows/cli-templates/prompts/workflow/task-json-cli-mode.txt",
|
||||
// Path selected by command based on --cli-execute flag, agent reads it
|
||||
"session_metadata": {
|
||||
// If in memory: use cached content
|
||||
// Else: Load from .workflow/{session-id}/workflow-session.json
|
||||
},
|
||||
"analysis_results": {
|
||||
// If in memory: use cached content
|
||||
// Else: Load from .workflow/{session-id}/.process/ANALYSIS_RESULTS.md
|
||||
},
|
||||
"artifacts_inventory": {
|
||||
// If in memory: use cached list
|
||||
// Else: Scan .workflow/{session-id}/.brainstorming/ directory
|
||||
"synthesis_specification": "path or null",
|
||||
"topic_framework": "path or null",
|
||||
"role_analyses": ["paths"]
|
||||
"brainstorm_artifacts": {
|
||||
// Loaded from context-package.json → brainstorm_artifacts section
|
||||
"role_analyses": [
|
||||
{
|
||||
"role": "system-architect",
|
||||
"files": [{"path": "...", "type": "primary|supplementary"}]
|
||||
}
|
||||
],
|
||||
"guidance_specification": {"path": "...", "exists": true},
|
||||
"synthesis_output": {"path": "...", "exists": true},
|
||||
"conflict_resolution": {"path": "...", "exists": true} // if conflict_risk >= medium
|
||||
},
|
||||
"context_package_path": ".workflow/{session-id}/.process/context-package.json",
|
||||
"context_package": {
|
||||
// If in memory: use cached content
|
||||
// Else: Load from .workflow/{session-id}/.process/context-package.json
|
||||
@@ -61,31 +70,38 @@ Autonomous task JSON and IMPL_PLAN.md generation using action-planning-agent wit
|
||||
}
|
||||
```
|
||||
|
||||
2. **Load Analysis Results** (if not in memory)
|
||||
2. **Load Context Package** (if not in memory)
|
||||
```javascript
|
||||
if (!memory.has("ANALYSIS_RESULTS.md")) {
|
||||
Read(.workflow/{session-id}/.process/ANALYSIS_RESULTS.md)
|
||||
if (!memory.has("context-package.json")) {
|
||||
Read(.workflow/{session-id}/.process/context-package.json)
|
||||
}
|
||||
```
|
||||
|
||||
3. **Discover Artifacts** (if not in memory)
|
||||
3. **Extract & Load Role Analyses** (from context-package.json)
|
||||
```javascript
|
||||
if (!memory.has("artifacts_inventory")) {
|
||||
bash(find .workflow/{session-id}/.brainstorming/ -name "*.md" -type f)
|
||||
// Extract role analysis paths from context package
|
||||
const roleAnalysisPaths = contextPackage.brainstorm_artifacts.role_analyses
|
||||
.flatMap(role => role.files.map(f => f.path));
|
||||
|
||||
// Load each role analysis file
|
||||
roleAnalysisPaths.forEach(path => Read(path));
|
||||
```
|
||||
|
||||
4. **Load Conflict Resolution** (from context-package.json, if exists)
|
||||
```javascript
|
||||
if (contextPackage.brainstorm_artifacts.conflict_resolution?.exists) {
|
||||
Read(contextPackage.brainstorm_artifacts.conflict_resolution.path)
|
||||
}
|
||||
```
|
||||
|
||||
4. **MCP Code Analysis** (optional - enhance understanding)
|
||||
```javascript
|
||||
// Find relevant files for task context
|
||||
mcp__code-index__find_files(pattern="*auth*")
|
||||
mcp__code-index__search_code_advanced(
|
||||
pattern="authentication|oauth",
|
||||
file_pattern="*.ts"
|
||||
)
|
||||
5. **Code Analysis with Native Tools** (optional - enhance understanding)
|
||||
```bash
|
||||
# Find relevant files for task context
|
||||
find . -name "*auth*" -type f
|
||||
rg "authentication|oauth" -g "*.ts"
|
||||
```
|
||||
|
||||
5. **MCP External Research** (optional - gather best practices)
|
||||
6. **MCP External Research** (optional - gather best practices)
|
||||
```javascript
|
||||
// Get external examples for implementation
|
||||
mcp__exa__get_code_context_exa(
|
||||
@@ -96,6 +112,14 @@ Autonomous task JSON and IMPL_PLAN.md generation using action-planning-agent wit
|
||||
|
||||
### Phase 2: Agent Execution (Document Generation)
|
||||
|
||||
**Pre-Agent Template Selection** (Command decides path before invoking agent):
|
||||
```javascript
|
||||
// Command checks flag and selects template PATH (not content)
|
||||
const templatePath = hasCliExecuteFlag
|
||||
? "~/.claude/workflows/cli-templates/prompts/workflow/task-json-cli-mode.txt"
|
||||
: "~/.claude/workflows/cli-templates/prompts/workflow/task-json-agent-mode.txt";
|
||||
```
|
||||
|
||||
**Agent Invocation**:
|
||||
```javascript
|
||||
Task(
|
||||
@@ -105,23 +129,32 @@ Task(
|
||||
## Execution Context
|
||||
|
||||
**Session ID**: WFS-{session-id}
|
||||
**Mode**: Two-Phase Autonomous Task Generation
|
||||
**Execution Mode**: {agent-mode | cli-execute-mode}
|
||||
**Task JSON Template Path**: {template_path}
|
||||
|
||||
## Phase 1: Discovery Results (Provided Context)
|
||||
|
||||
### Session Metadata
|
||||
{session_metadata_content}
|
||||
|
||||
### Analysis Results
|
||||
{analysis_results_content}
|
||||
### Role Analyses (Enhanced by Synthesis)
|
||||
{role_analyses_content}
|
||||
- Includes requirements, design specs, enhancements, and clarifications from synthesis phase
|
||||
|
||||
### Artifacts Inventory
|
||||
- **Synthesis Specification**: {synthesis_spec_path}
|
||||
- **Topic Framework**: {topic_framework_path}
|
||||
- **Guidance Specification**: {guidance_spec_path}
|
||||
- **Role Analyses**: {role_analyses_list}
|
||||
|
||||
### Context Package
|
||||
{context_package_summary}
|
||||
- Includes conflict_risk assessment
|
||||
|
||||
### Conflict Resolution (Conditional)
|
||||
If conflict_risk was medium/high, modifications have been applied to:
|
||||
- **guidance-specification.md**: Design decisions updated to resolve conflicts
|
||||
- **Role analyses (*.md)**: Recommendations adjusted for compatibility
|
||||
- **context-package.json**: Marked as "resolved" with conflict IDs
|
||||
- NO separate CONFLICT_RESOLUTION.md file (conflicts resolved in-place)
|
||||
|
||||
### MCP Analysis Results (Optional)
|
||||
**Code Structure**: {mcp_code_index_results}
|
||||
@@ -147,334 +180,35 @@ Task(
|
||||
|
||||
#### 1. Task JSON Files (.task/IMPL-*.json)
|
||||
**Location**: .workflow/{session-id}/.task/
|
||||
**Schema**: 5-field enhanced schema with artifacts
|
||||
**Template**: Read from the template path provided above
|
||||
|
||||
**Required Fields**:
|
||||
\`\`\`json
|
||||
{
|
||||
"id": "IMPL-N[.M]",
|
||||
"title": "Descriptive task name",
|
||||
"status": "pending",
|
||||
"meta": {
|
||||
"type": "feature|bugfix|refactor|test|docs",
|
||||
"agent": "@code-developer|@test-fix-agent|@general-purpose"
|
||||
},
|
||||
"context": {
|
||||
"requirements": ["extracted from analysis"],
|
||||
"focus_paths": ["src/paths"],
|
||||
"acceptance": ["measurable criteria"],
|
||||
"depends_on": ["IMPL-N"],
|
||||
"artifacts": [
|
||||
{
|
||||
"type": "synthesis_specification",
|
||||
"path": "{synthesis_spec_path}",
|
||||
"priority": "highest",
|
||||
"usage": "Primary requirement source - use for consolidated requirements and cross-role alignment"
|
||||
},
|
||||
{
|
||||
"type": "role_analysis",
|
||||
"path": "{role_analysis_path}",
|
||||
"priority": "high",
|
||||
"usage": "Technical/design/business details from specific roles. Common roles: system-architect (ADRs, APIs, caching), ui-designer (design tokens, layouts), product-manager (user stories, metrics)",
|
||||
"note": "Dynamically discovered - multiple role analysis files included based on brainstorming results"
|
||||
},
|
||||
{
|
||||
"type": "topic_framework",
|
||||
"path": "{topic_framework_path}",
|
||||
"priority": "low",
|
||||
"usage": "Discussion context and framework structure"
|
||||
}
|
||||
]
|
||||
},
|
||||
"flow_control": {
|
||||
"pre_analysis": [
|
||||
{
|
||||
"step": "load_synthesis_specification",
|
||||
"action": "Load consolidated synthesis specification",
|
||||
"commands": [
|
||||
"bash(ls {synthesis_spec_path} 2>/dev/null || echo 'not found')",
|
||||
"Read({synthesis_spec_path})"
|
||||
],
|
||||
"output_to": "synthesis_specification",
|
||||
"on_error": "skip_optional"
|
||||
},
|
||||
{
|
||||
"step": "mcp_codebase_exploration",
|
||||
"action": "Explore codebase using MCP",
|
||||
"command": "mcp__code-index__find_files(pattern=\\"[patterns]\\") && mcp__code-index__search_code_advanced(pattern=\\"[patterns]\\")",
|
||||
"output_to": "codebase_structure"
|
||||
},
|
||||
{
|
||||
"step": "analyze_task_patterns",
|
||||
"action": "Analyze existing code patterns",
|
||||
"commands": [
|
||||
"bash(cd \\"[focus_paths]\\")",
|
||||
"bash(~/.claude/scripts/gemini-wrapper -p \\"PURPOSE: Analyze patterns TASK: Review '[title]' CONTEXT: [synthesis_specification] EXPECTED: Pattern analysis RULES: Prioritize synthesis-specification.md\\")"
|
||||
],
|
||||
"output_to": "task_context",
|
||||
"on_error": "fail"
|
||||
}
|
||||
],
|
||||
"implementation_approach": [
|
||||
{
|
||||
"step": 1,
|
||||
"title": "Implement task following synthesis specification",
|
||||
"description": "Implement '[title]' following synthesis specification. PRIORITY: Use synthesis-specification.md as primary requirement source. When implementation needs technical details (e.g., API schemas, caching configs, design tokens), refer to artifacts[] for detailed specifications from original role analyses.",
|
||||
"modification_points": [
|
||||
"Apply consolidated requirements from synthesis-specification.md",
|
||||
"Follow technical guidelines from synthesis",
|
||||
"Consult artifacts for implementation details when needed",
|
||||
"Integrate with existing patterns"
|
||||
],
|
||||
"logic_flow": [
|
||||
"Load synthesis specification and relevant role artifacts",
|
||||
"Execute MCP code-index discovery for relevant files",
|
||||
"Analyze existing patterns and identify modification targets",
|
||||
"Implement following specification",
|
||||
"Consult artifacts for technical details when needed",
|
||||
"Validate against acceptance criteria"
|
||||
],
|
||||
"depends_on": [],
|
||||
"output": "implementation"
|
||||
}
|
||||
],
|
||||
"target_files": ["file:function:lines", "path/to/NewFile.ts"]
|
||||
}
|
||||
}
|
||||
**Task JSON Template Loading**:
|
||||
\`\`\`
|
||||
Read({template_path})
|
||||
\`\`\`
|
||||
|
||||
**Important**:
|
||||
- Read the template from the path provided in context
|
||||
- Use the template structure exactly as written
|
||||
- Replace placeholder variables ({synthesis_spec_path}, {role_analysis_path}, etc.) with actual session-specific paths
|
||||
- Include MCP tool integration in pre_analysis steps
|
||||
- Map artifacts based on task domain (UI → ui-designer, Backend → system-architect)
|
||||
|
||||
#### 2. IMPL_PLAN.md
|
||||
**Location**: .workflow/{session-id}/IMPL_PLAN.md
|
||||
**Structure**:
|
||||
\`\`\`markdown
|
||||
---
|
||||
identifier: WFS-{session-id}
|
||||
source: "User requirements" | "File: path" | "Issue: ISS-001"
|
||||
analysis: .workflow/{session-id}/.process/ANALYSIS_RESULTS.md
|
||||
artifacts: .workflow/{session-id}/.brainstorming/
|
||||
context_package: .workflow/{session-id}/.process/context-package.json # CCW smart context
|
||||
workflow_type: "standard | tdd | design" # Indicates execution model
|
||||
verification_history: # CCW quality gates
|
||||
concept_verify: "passed | skipped | pending"
|
||||
action_plan_verify: "pending"
|
||||
phase_progression: "brainstorm → context → analysis → concept_verify → planning" # CCW workflow phases
|
||||
---
|
||||
|
||||
# Implementation Plan: {Project Title}
|
||||
|
||||
## 1. Summary
|
||||
Core requirements, objectives, technical approach summary (2-3 paragraphs max).
|
||||
|
||||
**Core Objectives**:
|
||||
- [Key objective 1]
|
||||
- [Key objective 2]
|
||||
|
||||
**Technical Approach**:
|
||||
- [High-level approach]
|
||||
|
||||
## 2. Context Analysis
|
||||
|
||||
### CCW Workflow Context
|
||||
**Phase Progression**:
|
||||
- ✅ Phase 1: Brainstorming (synthesis-specification.md generated)
|
||||
- ✅ Phase 2: Context Gathering (context-package.json: {N} files, {M} modules analyzed)
|
||||
- ✅ Phase 3: Enhanced Analysis (ANALYSIS_RESULTS.md: Gemini/Qwen/Codex parallel insights)
|
||||
- ✅ Phase 4: Concept Verification ({X} clarifications answered, synthesis updated | skipped)
|
||||
- ⏳ Phase 5: Action Planning (current phase - generating IMPL_PLAN.md)
|
||||
|
||||
**Quality Gates**:
|
||||
- concept-verify: ✅ Passed (0 ambiguities remaining) | ⏭️ Skipped (user decision) | ⏳ Pending
|
||||
- action-plan-verify: ⏳ Pending (recommended before /workflow:execute)
|
||||
|
||||
**Context Package Summary**:
|
||||
- **Focus Paths**: {list key directories from context-package.json}
|
||||
- **Key Files**: {list primary files for modification}
|
||||
- **Module Depth Analysis**: {from get_modules_by_depth.sh output}
|
||||
- **Smart Context**: {total file count} files, {module count} modules, {dependency count} dependencies identified
|
||||
|
||||
### Project Profile
|
||||
- **Type**: Greenfield/Enhancement/Refactor
|
||||
- **Scale**: User count, data volume, complexity
|
||||
- **Tech Stack**: Primary technologies
|
||||
- **Timeline**: Duration and milestones
|
||||
|
||||
### Module Structure
|
||||
**IMPL_PLAN Template**:
|
||||
\`\`\`
|
||||
[Directory tree showing key modules]
|
||||
$(cat ~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)
|
||||
\`\`\`
|
||||
|
||||
### Dependencies
|
||||
**Primary**: [Core libraries and frameworks]
|
||||
**APIs**: [External services]
|
||||
**Development**: [Testing, linting, CI/CD tools]
|
||||
|
||||
### Patterns & Conventions
|
||||
- **Architecture**: [Key patterns like DI, Event-Driven]
|
||||
- **Component Design**: [Design patterns]
|
||||
- **State Management**: [State strategy]
|
||||
- **Code Style**: [Naming, TypeScript coverage]
|
||||
|
||||
## 3. Brainstorming Artifacts Reference
|
||||
|
||||
### Artifact Usage Strategy
|
||||
**Primary Reference (synthesis-specification.md)**:
|
||||
- **What**: Comprehensive implementation blueprint from multi-role synthesis
|
||||
- **When**: Every task references this first for requirements and design decisions
|
||||
- **How**: Extract architecture decisions, UI/UX patterns, functional requirements, non-functional requirements
|
||||
- **Priority**: Authoritative - overrides role-specific analyses when conflicts arise
|
||||
- **CCW Value**: Consolidates insights from all brainstorming roles into single source of truth
|
||||
|
||||
**Context Intelligence (context-package.json)**:
|
||||
- **What**: Smart context gathered by CCW's context-gather phase
|
||||
- **Content**: Focus paths, dependency graph, existing patterns, module structure
|
||||
- **Usage**: Tasks load this via \`flow_control.preparatory_steps\` for environment setup
|
||||
- **CCW Value**: Automated intelligent context discovery replacing manual file exploration
|
||||
|
||||
**Technical Analysis (ANALYSIS_RESULTS.md)**:
|
||||
- **What**: Gemini/Qwen/Codex parallel analysis results
|
||||
- **Content**: Optimization strategies, risk assessment, architecture review, implementation patterns
|
||||
- **Usage**: Referenced in task planning for technical guidance and risk mitigation
|
||||
- **CCW Value**: Multi-model parallel analysis providing comprehensive technical intelligence
|
||||
|
||||
### Integrated Specifications (Highest Priority)
|
||||
- **synthesis-specification.md**: Comprehensive implementation blueprint
|
||||
- Contains: Architecture design, UI/UX guidelines, functional/non-functional requirements, implementation roadmap, risk assessment
|
||||
|
||||
### Supporting Artifacts (Reference)
|
||||
- **topic-framework.md**: Role-specific discussion points and analysis framework
|
||||
- **system-architect/analysis.md**: Detailed architecture specifications
|
||||
- **ui-designer/analysis.md**: Layout and component specifications
|
||||
- **product-manager/analysis.md**: Product vision and user stories
|
||||
|
||||
**Artifact Priority in Development**:
|
||||
1. synthesis-specification.md (primary reference for all tasks)
|
||||
2. context-package.json (smart context for execution environment)
|
||||
3. ANALYSIS_RESULTS.md (technical analysis and optimization strategies)
|
||||
4. Role-specific analyses (fallback for detailed specifications)
|
||||
|
||||
## 4. Implementation Strategy
|
||||
|
||||
### Execution Strategy
|
||||
**Execution Model**: [Sequential | Parallel | Phased | TDD Cycles]
|
||||
|
||||
**Rationale**: [Why this execution model fits the project]
|
||||
|
||||
**Parallelization Opportunities**:
|
||||
- [List independent workstreams]
|
||||
|
||||
**Serialization Requirements**:
|
||||
- [List critical dependencies]
|
||||
|
||||
### Architectural Approach
|
||||
**Key Architecture Decisions**:
|
||||
- [ADR references from synthesis]
|
||||
- [Justification for architecture patterns]
|
||||
|
||||
**Integration Strategy**:
|
||||
- [How modules communicate]
|
||||
- [State management approach]
|
||||
|
||||
### Key Dependencies
|
||||
**Task Dependency Graph**:
|
||||
\`\`\`
|
||||
[High-level dependency visualization]
|
||||
\`\`\`
|
||||
|
||||
**Critical Path**: [Identify bottleneck tasks]
|
||||
|
||||
### Testing Strategy
|
||||
**Testing Approach**:
|
||||
- Unit testing: [Tools, scope]
|
||||
- Integration testing: [Key integration points]
|
||||
- E2E testing: [Critical user flows]
|
||||
|
||||
**Coverage Targets**:
|
||||
- Lines: ≥70%
|
||||
- Functions: ≥70%
|
||||
- Branches: ≥65%
|
||||
|
||||
**Quality Gates**:
|
||||
- [CI/CD gates]
|
||||
- [Performance budgets]
|
||||
|
||||
## 5. Task Breakdown Summary
|
||||
|
||||
### Task Count
|
||||
**{N} tasks** (flat hierarchy | two-level hierarchy, sequential | parallel execution)
|
||||
|
||||
### Task Structure
|
||||
- **IMPL-1**: [Main task title]
|
||||
- **IMPL-2**: [Main task title]
|
||||
...
|
||||
|
||||
### Complexity Assessment
|
||||
- **High**: [List with rationale]
|
||||
- **Medium**: [List]
|
||||
- **Low**: [List]
|
||||
|
||||
### Dependencies
|
||||
[Reference Section 4.3 for dependency graph]
|
||||
|
||||
**Parallelization Opportunities**:
|
||||
- [Specific task groups that can run in parallel]
|
||||
|
||||
## 6. Implementation Plan (Detailed Phased Breakdown)
|
||||
|
||||
### Execution Strategy
|
||||
|
||||
**Phase 1 (Weeks 1-2): [Phase Name]**
|
||||
- **Tasks**: IMPL-1, IMPL-2
|
||||
- **Deliverables**:
|
||||
- [Specific deliverable 1]
|
||||
- [Specific deliverable 2]
|
||||
- **Success Criteria**:
|
||||
- [Measurable criterion]
|
||||
|
||||
**Phase 2 (Weeks 3-N): [Phase Name]**
|
||||
...
|
||||
|
||||
### Resource Requirements
|
||||
|
||||
**Development Team**:
|
||||
- [Team composition and skills]
|
||||
|
||||
**External Dependencies**:
|
||||
- [Third-party services, APIs]
|
||||
|
||||
**Infrastructure**:
|
||||
- [Development, staging, production environments]
|
||||
|
||||
## 7. Risk Assessment & Mitigation
|
||||
|
||||
| Risk | Impact | Probability | Mitigation Strategy | Owner |
|
||||
|------|--------|-------------|---------------------|-------|
|
||||
| [Risk description] | High/Med/Low | High/Med/Low | [Strategy] | [Role] |
|
||||
|
||||
**Critical Risks** (High impact + High probability):
|
||||
- [Risk 1]: [Detailed mitigation plan]
|
||||
|
||||
**Monitoring Strategy**:
|
||||
- [How risks will be monitored]
|
||||
|
||||
## 8. Success Criteria
|
||||
|
||||
**Functional Completeness**:
|
||||
- [ ] All requirements from synthesis-specification.md implemented
|
||||
- [ ] All acceptance criteria from task.json files met
|
||||
|
||||
**Technical Quality**:
|
||||
- [ ] Test coverage ≥70%
|
||||
- [ ] Bundle size within budget
|
||||
- [ ] Performance targets met
|
||||
|
||||
**Operational Readiness**:
|
||||
- [ ] CI/CD pipeline operational
|
||||
- [ ] Monitoring and logging configured
|
||||
- [ ] Documentation complete
|
||||
|
||||
**Business Metrics**:
|
||||
- [ ] [Key business metrics from synthesis]
|
||||
\`\`\`
|
||||
**Important**:
|
||||
- Use the template above for IMPL_PLAN.md generation
|
||||
- Replace all {placeholder} variables with actual session-specific values
|
||||
- Populate CCW Workflow Context based on actual phase progression
|
||||
- Extract content from role analyses and context-package.json
|
||||
- List all detected brainstorming artifacts with correct paths (role analyses, guidance-specification.md)
|
||||
- Include conflict resolution status if CONFLICT_RESOLUTION.md exists
|
||||
|
||||
#### 3. TODO_LIST.md
|
||||
**Location**: .workflow/{session-id}/TODO_LIST.md
|
||||
@@ -495,52 +229,58 @@ Core requirements, objectives, technical approach summary (2-3 paragraphs max).
|
||||
- \`- [x]\` = Completed leaf task
|
||||
\`\`\`
|
||||
|
||||
### Execution Instructions
|
||||
### Execution Instructions for Agent
|
||||
|
||||
**Step 1: Extract Task Definitions**
|
||||
- Parse analysis results for task recommendations
|
||||
- Extract task ID, title, requirements, complexity
|
||||
- Map artifacts to relevant tasks based on type
|
||||
**Agent Task**: Generate task JSON files, IMPL_PLAN.md, and TODO_LIST.md based on analysis results
|
||||
|
||||
**Step 2: Generate Task JSON Files**
|
||||
- Create individual .task/IMPL-*.json files
|
||||
- Embed artifacts array with detected brainstorming outputs
|
||||
- Generate flow_control with artifact loading steps
|
||||
- Add MCP tool integration for codebase exploration
|
||||
**Note**: The correct task JSON template path has been pre-selected by the command based on the `--cli-execute` flag and is provided in the context as `{template_path}`.
|
||||
|
||||
**Step 3: Create IMPL_PLAN.md**
|
||||
- Summarize requirements and technical approach
|
||||
- List detected artifacts with priorities
|
||||
- Document task breakdown and dependencies
|
||||
- Define execution strategy and success criteria
|
||||
**Step 1: Load Task JSON Template**
|
||||
- Read template from the provided path: `Read({template_path})`
|
||||
- This template is already the correct one based on execution mode
|
||||
|
||||
**Step 4: Generate TODO_LIST.md**
|
||||
- List all tasks with container/leaf structure
|
||||
- Link to task JSON files
|
||||
**Step 2: Extract and Decompose Tasks**
|
||||
- Parse role analysis.md files for requirements, design specs, and task recommendations
|
||||
- Review synthesis enhancements and clarifications in role analyses
|
||||
- Apply conflict resolution strategies (if CONFLICT_RESOLUTION.md exists)
|
||||
- Apply task merging rules (merge when possible, decompose only when necessary)
|
||||
- Map artifacts to tasks based on domain (UI → ui-designer, Backend → system-architect, Data → data-architect)
|
||||
- Ensure task count ≤10
|
||||
|
||||
**Step 3: Generate Task JSON Files**
|
||||
- Use the template structure from Step 1
|
||||
- Create .task/IMPL-*.json files with proper structure
|
||||
- Replace all {placeholder} variables with actual session paths
|
||||
- Embed artifacts array with brainstorming outputs
|
||||
- Include MCP tool integration in pre_analysis steps
|
||||
|
||||
**Step 4: Create IMPL_PLAN.md**
|
||||
- Use IMPL_PLAN template
|
||||
- Populate all sections with session-specific content
|
||||
- List artifacts with priorities and usage guidelines
|
||||
- Document execution strategy and dependencies
|
||||
|
||||
**Step 5: Generate TODO_LIST.md**
|
||||
- Create task progress checklist matching generated JSONs
|
||||
- Use proper status indicators (▸, [ ], [x])
|
||||
- Link to task JSON files
|
||||
|
||||
**Step 5: Update Session State**
|
||||
- Update .workflow/{session-id}/workflow-session.json
|
||||
- Mark session as ready for execution
|
||||
- Record task count and artifact inventory
|
||||
**Step 6: Update Session State**
|
||||
- Update workflow-session.json with task count and artifact inventory
|
||||
- Mark session ready for execution
|
||||
|
||||
### MCP Enhancement Examples
|
||||
|
||||
**Code Index Usage**:
|
||||
\`\`\`javascript
|
||||
// Discover authentication-related files
|
||||
mcp__code-index__find_files(pattern="*auth*")
|
||||
bash(find . -name "*auth*" -type f)
|
||||
|
||||
// Search for OAuth patterns
|
||||
mcp__code-index__search_code_advanced(
|
||||
pattern="oauth|jwt|authentication",
|
||||
file_pattern="*.{ts,js}"
|
||||
)
|
||||
bash(rg "oauth|jwt|authentication" -g "*.{ts,js}")
|
||||
|
||||
// Get file summary for key components
|
||||
mcp__code-index__get_file_summary(
|
||||
file_path="src/auth/index.ts"
|
||||
)
|
||||
bash(rg "^(class|function|export|interface)" src/auth/index.ts)
|
||||
\`\`\`
|
||||
|
||||
**Exa Research Usage**:
|
||||
@@ -576,23 +316,13 @@ Before completion, verify:
|
||||
|
||||
Generate all three documents and report completion status:
|
||||
- Task JSON files created: N files
|
||||
- Artifacts integrated: synthesis-spec, topic-framework, N role analyses
|
||||
- Artifacts integrated: synthesis-spec, guidance-specification, N role analyses
|
||||
- MCP enhancements: code-index, exa-research
|
||||
- Session ready for execution: /workflow:execute
|
||||
`
|
||||
)
|
||||
```
|
||||
|
||||
## Command Integration
|
||||
|
||||
### Usage
|
||||
```bash
|
||||
# Basic usage
|
||||
/workflow:tools:task-generate-agent --session WFS-auth
|
||||
|
||||
# Called by /workflow:plan
|
||||
SlashCommand(command="/workflow:tools:task-generate-agent --session WFS-[id]")
|
||||
```
|
||||
|
||||
### Agent Context Passing
|
||||
|
||||
@@ -607,36 +337,26 @@ const agentContext = {
|
||||
? memory.get("workflow-session.json")
|
||||
: Read(.workflow/WFS-[id]/workflow-session.json),
|
||||
|
||||
analysis_results: memory.has("ANALYSIS_RESULTS.md")
|
||||
? memory.get("ANALYSIS_RESULTS.md")
|
||||
: Read(.workflow/WFS-[id]/.process/ANALYSIS_RESULTS.md),
|
||||
|
||||
artifacts_inventory: memory.has("artifacts_inventory")
|
||||
? memory.get("artifacts_inventory")
|
||||
: discoverArtifacts(),
|
||||
context_package_path: ".workflow/WFS-[id]/.process/context-package.json",
|
||||
|
||||
context_package: memory.has("context-package.json")
|
||||
? memory.get("context-package.json")
|
||||
: Read(.workflow/WFS-[id]/.process/context-package.json),
|
||||
: Read(".workflow/WFS-[id]/.process/context-package.json"),
|
||||
|
||||
// Extract brainstorm artifacts from context package
|
||||
brainstorm_artifacts: extractBrainstormArtifacts(context_package),
|
||||
|
||||
// Load role analyses using paths from context package
|
||||
role_analyses: brainstorm_artifacts.role_analyses
|
||||
.flatMap(role => role.files)
|
||||
.map(file => Read(file.path)),
|
||||
|
||||
// Load conflict resolution if exists (from context package)
|
||||
conflict_resolution: brainstorm_artifacts.conflict_resolution?.exists
|
||||
? Read(brainstorm_artifacts.conflict_resolution.path)
|
||||
: null,
|
||||
|
||||
// Optional MCP enhancements
|
||||
mcp_analysis: executeMcpDiscovery()
|
||||
}
|
||||
```
|
||||
|
||||
## Related Commands
|
||||
- `/workflow:plan` - Orchestrates planning and calls this command
|
||||
- `/workflow:tools:task-generate` - Manual version without agent
|
||||
- `/workflow:tools:context-gather` - Provides context package
|
||||
- `/workflow:tools:concept-enhanced` - Provides analysis results
|
||||
- `/workflow:execute` - Executes generated tasks
|
||||
|
||||
## Key Differences from task-generate
|
||||
|
||||
| Feature | task-generate | task-generate-agent |
|---------|--------------|-------------------|
| Execution | Manual/scripted | Agent-driven |
| Phases | 6 phases | 2 phases (discovery + output) |
| MCP Integration | Optional | Enhanced with examples |
| Decision Logic | Command-driven | Agent-autonomous |
| Complexity | Higher control | Simpler delegation |
|
||||
@@ -2,7 +2,7 @@
|
||||
name: task-generate-tdd
|
||||
description: Generate TDD task chains with Red-Green-Refactor dependencies
|
||||
argument-hint: "--session WFS-session-id [--agent]"
|
||||
allowed-tools: Read(*), Write(*), Bash(gemini-wrapper:*), TodoWrite(*)
|
||||
allowed-tools: Read(*), Write(*), Bash(gemini:*), TodoWrite(*)
|
||||
---
|
||||
|
||||
# TDD Task Generation Command
|
||||
@@ -72,25 +72,35 @@ Generate TDD-specific tasks from analysis results with complete Red-Green-Refact
|
||||
- If session metadata in memory → Skip loading
|
||||
- Else: Load `.workflow/{session_id}/workflow-session.json`
|
||||
|
||||
2. **Analysis Results Loading**
|
||||
- If ANALYSIS_RESULTS.md in memory → Skip loading
|
||||
- Else: Read `.workflow/{session_id}/.process/ANALYSIS_RESULTS.md`
|
||||
2. **Conflict Resolution Check** (NEW - Priority Input)
|
||||
- If CONFLICT_RESOLUTION.md exists → Load selected strategies
|
||||
- Else: Skip to brainstorming artifacts
|
||||
- Path: `.workflow/{session_id}/.process/CONFLICT_RESOLUTION.md`
|
||||
|
||||
3. **Artifact Discovery**
|
||||
- If artifact inventory in memory → Skip scanning
|
||||
- Else: Scan `.workflow/{session_id}/.brainstorming/` directory
|
||||
- Detect: synthesis-specification.md, topic-framework.md, role analyses
|
||||
- Detect: role analysis documents, guidance-specification.md
|
||||
|
||||
4. **Context Package Loading**
|
||||
- Load `.workflow/{session_id}/.process/context-package.json`
|
||||
- Load `.workflow/{session_id}/.process/test-context-package.json` (if exists)
|
||||
|
||||
### Phase 2: TDD Task JSON Generation
|
||||
|
||||
**Input**: Use `.process/ANALYSIS_RESULTS.md` directly (enhanced with TDD structure from concept-enhanced phase)
|
||||
**Input Sources** (priority order):
|
||||
1. **Conflict Resolution** (if exists): `.process/CONFLICT_RESOLUTION.md` - Selected resolution strategies
|
||||
2. **Brainstorming Artifacts**: Role analysis documents (system-architect, product-owner, etc.)
|
||||
3. **Context Package**: `.process/context-package.json` - Project structure and requirements
|
||||
4. **Test Context**: `.process/test-context-package.json` - Existing test patterns
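
A minimal loading sketch in this priority order; optional inputs are skipped when absent (paths as listed above, `file_exists`/`Read` as used elsewhere in this document):

```javascript
// Load TDD planning inputs in priority order; missing optional files stay null.
const proc = `.workflow/${session_id}/.process`;
const inputs = {
  conflict_resolution: file_exists(`${proc}/CONFLICT_RESOLUTION.md`) ? Read(`${proc}/CONFLICT_RESOLUTION.md`) : null,
  context_package: JSON.parse(Read(`${proc}/context-package.json`)), // includes brainstorm_artifacts
  test_context: file_exists(`${proc}/test-context-package.json`) ? Read(`${proc}/test-context-package.json`) : null
};
```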
|
||||
|
||||
**ANALYSIS_RESULTS.md includes**:
|
||||
**TDD Task Structure includes**:
|
||||
- Feature list with testable requirements
|
||||
- Test cases for Red phase
|
||||
- Implementation requirements for Green phase
|
||||
- Implementation requirements for Green phase (with test-fix cycle)
|
||||
- Refactoring opportunities
|
||||
- Task dependencies and execution order
|
||||
- Conflict resolution decisions (if applicable)
|
||||
|
||||
### Phase 3: Task JSON & IMPL_PLAN.md Generation
|
||||
|
||||
@@ -124,6 +134,7 @@ For each feature, generate task(s) with ID format:
|
||||
"id": "IMPL-N", // Task identifier
|
||||
"title": "Feature description with TDD", // Human-readable title
|
||||
"status": "pending", // pending | in_progress | completed | container
|
||||
"context_package_path": ".workflow/{session-id}/.process/context-package.json", // Path to smart context package
|
||||
"meta": {
|
||||
"type": "feature", // Task type
|
||||
"agent": "@code-developer", // Assigned agent
|
||||
@@ -247,13 +258,15 @@ Generate IMPL_PLAN.md with 8-section structure:
|
||||
---
|
||||
identifier: WFS-{session-id}
|
||||
source: "User requirements" | "File: path"
|
||||
analysis: .workflow/{session-id}/.process/ANALYSIS_RESULTS.md
|
||||
conflict_resolution: .workflow/{session-id}/.process/CONFLICT_RESOLUTION.md # if exists
|
||||
context_package: .workflow/{session-id}/.process/context-package.json
|
||||
context_package_path: .workflow/{session-id}/.process/context-package.json
|
||||
test_context: .workflow/{session-id}/.process/test-context-package.json # if exists
|
||||
workflow_type: "tdd"
|
||||
verification_history:
|
||||
concept_verify: "passed | skipped | pending"
|
||||
conflict_resolution: "executed | skipped" # based on conflict_risk
|
||||
action_plan_verify: "pending"
|
||||
phase_progression: "brainstorm → context → test_context → analysis → concept_verify → tdd_planning"
|
||||
phase_progression: "brainstorm → context → test_context → conflict_resolution → tdd_planning"
|
||||
feature_count: N
|
||||
task_count: N # ≤10 total
|
||||
task_breakdown:
|
||||
@@ -283,10 +296,10 @@ tdd_workflow: true
|
||||
|
||||
## 3. Brainstorming Artifacts Reference
|
||||
- Artifact Usage Strategy
|
||||
- synthesis-specification.md (primary reference)
|
||||
- CONFLICT_RESOLUTION.md (if exists - selected resolution strategies)
|
||||
- role analysis documents (primary reference)
|
||||
- test-context-package.json (test patterns)
|
||||
- context-package.json (smart context)
|
||||
- ANALYSIS_RESULTS.md (technical analysis)
|
||||
- Artifact Priority in Development
|
||||
|
||||
## 4. Implementation Strategy
|
||||
@@ -397,9 +410,10 @@ Update workflow-session.json with TDD metadata:
|
||||
│ ├── IMPL-3.2.json # Complex feature subtask (if needed)
|
||||
│ └── ...
|
||||
└── .process/
|
||||
├── ANALYSIS_RESULTS.md # Enhanced with TDD breakdown from concept-enhanced
|
||||
├── CONFLICT_RESOLUTION.md # Conflict resolution strategies (if conflict_risk ≥ medium)
|
||||
├── test-context-package.json # Test coverage analysis
|
||||
├── context-package.json # Input from context-gather
|
||||
├── context_package_path # Path to smart context package
|
||||
└── green-fix-iteration-*.md # Fix logs from Green phase test-fix cycles
|
||||
```
|
||||
|
||||
@@ -438,7 +452,7 @@ Update workflow-session.json with TDD metadata:
|
||||
| Error | Cause | Resolution |
|
||||
|-------|-------|------------|
|
||||
| Session not found | Invalid session ID | Verify session exists |
|
||||
| Analysis missing | Incomplete planning | Run concept-enhanced first |
|
||||
| Context missing | Incomplete planning | Run context-gather first |
|
||||
|
||||
### TDD Generation Errors
|
||||
| Error | Cause | Resolution |
|
||||
@@ -452,7 +466,7 @@ Update workflow-session.json with TDD metadata:
|
||||
|
||||
### Command Chain
|
||||
- **Called By**: `/workflow:tdd-plan` (Phase 4)
|
||||
- **Calls**: Gemini wrapper for TDD breakdown
|
||||
- **Calls**: Gemini CLI for TDD breakdown
|
||||
- **Followed By**: `/workflow:execute`, `/workflow:tdd-verify`
|
||||
|
||||
### Basic Usage
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
---
|
||||
---
|
||||
name: task-generate
|
||||
description: Generate task JSON files and IMPL_PLAN.md from analysis results with artifacts integration
|
||||
argument-hint: "--session WFS-session-id [--cli-execute]"
|
||||
@@ -9,84 +9,176 @@ examples:
|
||||
|
||||
# Task Generation Command
|
||||
|
||||
## Overview
|
||||
Generate task JSON files and IMPL_PLAN.md from analysis results with automatic artifact detection and integration.
|
||||
## 1. Overview
|
||||
This command generates task JSON files and an `IMPL_PLAN.md` from brainstorming role analyses. It automatically detects and integrates all brainstorming artifacts (role-specific `analysis.md` files and `guidance-specification.md`), creating a structured and context-rich plan for implementation. The command supports two primary execution modes: a default agent-based mode for seamless context handling and a `--cli-execute` mode that leverages the Codex CLI for complex, autonomous development tasks. Its core function is to translate requirements and design specifications from role analyses into actionable, executable tasks, ensuring all necessary context, dependencies, and implementation steps are defined upfront.
|
||||
|
||||
## Execution Modes
|
||||
## 2. Execution Modes
|
||||
|
||||
This command offers two distinct modes for task execution, providing flexibility for different implementation complexities.
|
||||
|
||||
### Agent Mode (Default)
|
||||
Tasks execute within agent context using agent's capabilities:
|
||||
- Agent reads synthesis specifications
|
||||
- Agent implements following requirements
|
||||
- Agent validates implementation
|
||||
- **Benefit**: Seamless context within single agent execution
|
||||
In the default mode, each step in `implementation_approach` **omits the `command` field**. The agent interprets the step's `modification_points` and `logic_flow` to execute the task autonomously.
|
||||
- **Step Structure**: Contains `step`, `title`, `description`, `modification_points`, `logic_flow`, `depends_on`, and `output` fields
|
||||
- **Execution**: Agent reads these fields and performs the implementation autonomously
|
||||
- **Context Loading**: Agent loads context via `pre_analysis` steps
|
||||
- **Validation**: Agent validates against acceptance criteria in `context.acceptance`
|
||||
- **Benefit**: Direct agent execution with full context awareness, no external tool overhead
|
||||
- **Use Case**: Standard implementation tasks where agent capability is sufficient
|
||||
|
||||
### CLI Execute Mode (`--cli-execute`)
|
||||
Tasks execute using Codex CLI with resume mechanism:
|
||||
- Each task uses `codex exec` command in `implementation_approach`
|
||||
- First task establishes Codex session
|
||||
- Subsequent tasks use `codex exec "..." resume --last` for context continuity
|
||||
- **Benefit**: Codex's autonomous development capabilities with persistent context
|
||||
- **Use Case**: Complex implementation requiring Codex's reasoning and iteration
|
||||
When the `--cli-execute` flag is used, each step in `implementation_approach` **includes a `command` field** that specifies the exact execution command. This mode is designed for complex implementations requiring specialized CLI tools.
|
||||
- **Step Structure**: Includes all default fields PLUS a `command` field
|
||||
- **Execution**: The specified command executes the step directly (e.g., `bash(codex ...)`)
|
||||
- **Context Packages**: Each command receives context via the CONTEXT field in the prompt
|
||||
- **Multi-Step Support**: Complex tasks can have multiple sequential codex steps with `resume --last`
|
||||
- **Benefit**: Leverages specialized CLI tools (codex/gemini/qwen) for complex reasoning and autonomous execution
|
||||
- **Use Case**: Large-scale features, complex refactoring, or when user explicitly requests CLI tool usage
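
The two modes differ only in the shape of each `implementation_approach` step. The sketch below contrasts the two shapes with placeholder values; the field names follow the step structures listed above, and everything else is illustrative.

```javascript
// Illustrative step shapes only; all field values are placeholders
const agentModeStep = {
  step: 1,
  title: "Implement feature",
  description: "Agent interprets modification_points and logic_flow directly",
  modification_points: ["..."],
  logic_flow: ["..."],
  depends_on: [],
  output: "implementation"
  // no "command" field in agent mode
};

const cliExecuteStep = {
  ...agentModeStep,
  // --cli-execute adds an explicit command for the step
  command: 'bash(codex -C <focus_path> --full-auto exec "..." --skip-git-repo-check -s danger-full-access)'
};
```
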
|
||||
|
||||
## Core Philosophy
|
||||
- **Analysis-Driven**: Generate from ANALYSIS_RESULTS.md
|
||||
- **Artifact-Aware**: Auto-detect brainstorming outputs
|
||||
- **Context-Rich**: Embed comprehensive context in task JSON
|
||||
- **Flow-Control Ready**: Pre-define implementation steps
|
||||
- **Memory-First**: Reuse loaded documents from memory
|
||||
- **CLI-Aware**: Support Codex resume mechanism for persistent context
|
||||
## 3. Core Principles
|
||||
This command is built on a set of core principles to ensure efficient and reliable task generation.
|
||||
|
||||
## Core Responsibilities
|
||||
- Parse analysis results and extract tasks
|
||||
- Detect and integrate brainstorming artifacts
|
||||
- Generate enhanced task JSON files (5-field schema)
|
||||
- Create IMPL_PLAN.md and TODO_LIST.md
|
||||
- Update session state for execution
|
||||
- **Role Analysis-Driven**: All generated tasks originate from role-specific `analysis.md` files (enhanced in the synthesis phase), ensuring a direct link between requirements/design and implementation
|
||||
- **Artifact-Aware**: Automatically detects and integrates all brainstorming outputs (role analyses, guidance-specification.md, enhancements) to enrich task context
|
||||
- **Context-Rich**: Embeds comprehensive context (requirements, focus paths, acceptance criteria, artifact references) directly into each task JSON
|
||||
- **Flow-Control Ready**: Pre-defines clear execution sequence (`pre_analysis`, `implementation_approach`) within each task
|
||||
- **Memory-First**: Prioritizes using documents already loaded in conversation memory to avoid redundant file operations
|
||||
- **Mode-Flexible**: Supports both agent-driven execution (default) and CLI tool execution (with `--cli-execute` flag)
|
||||
- **Multi-Step Support**: Complex tasks can use multiple sequential steps in `implementation_approach` with codex resume mechanism
|
||||
- **Responsibility**: Parses analysis, detects artifacts, generates enhanced task JSONs, creates `IMPL_PLAN.md` and `TODO_LIST.md`, updates session state
|
||||
|
||||
## Execution Lifecycle
|
||||
## 4. Execution Flow
|
||||
The command follows a streamlined, three-step process to convert analysis into executable tasks.
|
||||
|
||||
### Phase 1: Input Validation & Discovery
|
||||
**⚡ Memory-First Rule**: Skip file loading if documents already in conversation memory
|
||||
### Step 1: Input & Discovery
|
||||
The process begins by gathering all necessary inputs. It follows a **Memory-First Rule**, skipping file reads if documents are already in the conversation memory.
|
||||
1. **Session Validation**: Loads and validates the session from `.workflow/{session_id}/workflow-session.json`.
|
||||
2. **Context Package Loading** (primary source): Reads `.workflow/{session_id}/.process/context-package.json` for smart context and artifact catalog.
|
||||
3. **Brainstorm Artifacts Extraction**: Extracts role analysis paths from `context-package.json` → `brainstorm_artifacts.role_analyses[]` (supports `analysis*.md` automatically).
|
||||
4. **Document Loading**: Reads role analyses, guidance specification, synthesis output, and conflict resolution (if exists) using paths from context package.
|
||||
|
||||
1. **Session Validation**
|
||||
- If session metadata in memory → Skip loading
|
||||
- Else: Load `.workflow/{session_id}/workflow-session.json`
|
||||
### Step 2: Task Decomposition & Grouping
|
||||
Once all inputs are loaded, the command analyzes the tasks defined in the analysis results and groups them based on shared context.
|
||||
1. **Task Definition Parsing**: Extracts task definitions, requirements, and dependencies.
|
||||
2. **Context Signature Analysis**: Computes a unique hash (`context_signature`) for each task based on its `focus_paths` and referenced `artifacts`.
|
||||
3. **Task Grouping**:
|
||||
* Tasks with the **same signature** are candidates for merging, as they operate on the same context.
|
||||
* Tasks with **different signatures** and no dependencies are grouped for parallel execution.
|
||||
* Tasks with `depends_on` relationships are marked for sequential execution.
|
||||
4. **Modification Target Determination**: Extracts specific code locations (`file:function:lines`) from the analysis to populate the `target_files` field.
|
||||
|
||||
2. **Analysis Results Loading**
|
||||
- If ANALYSIS_RESULTS.md in memory → Skip loading
|
||||
- Else: Read `.workflow/{session_id}/.process/ANALYSIS_RESULTS.md`
|
||||
### Step 3: Output Generation
|
||||
Finally, the command generates all the necessary output files.
|
||||
1. **Task JSON Creation**: Creates individual `.task/IMPL-*.json` files, embedding all context, artifacts, and flow control steps. If `--cli-execute` is active, it generates the appropriate `codex exec` commands.
|
||||
2. **IMPL_PLAN.md Generation**: Creates the main implementation plan document, summarizing the strategy, tasks, and dependencies.
|
||||
3. **TODO_LIST.md Generation**: Creates a simple checklist for tracking task progress.
|
||||
4. **Session State Update**: Updates `workflow-session.json` with the final task count and artifact inventory, marking the session as ready for execution.
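
As a rough illustration of the simplest output, the sketch below writes a `TODO_LIST.md` checklist from a list of task objects; the exact checklist format and helper name are assumptions, not the command's actual generator.

```javascript
// Minimal sketch of TODO_LIST.md generation from task JSON objects
const fs = require('fs');
const path = require('path');

function writeTodoList(sessionId, tasks, topic) {
  const lines = [`# Tasks: ${topic}`, ''];
  for (const task of tasks) {
    const box = task.status === 'completed' ? 'x' : ' ';
    lines.push(`- [${box}] ${task.id}: ${task.title}`);
  }
  const outPath = path.join('.workflow', sessionId, 'TODO_LIST.md');
  fs.writeFileSync(outPath, lines.join('\n') + '\n');
  return outPath;
}
```
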
|
||||
|
||||
3. **Artifact Discovery**
|
||||
- If artifact inventory in memory → Skip scanning
|
||||
- Else: Scan `.workflow/{session_id}/.brainstorming/` directory
|
||||
- Detect: synthesis-specification.md, topic-framework.md, role analyses
|
||||
## 5. Task Decomposition Strategy
|
||||
The command employs a sophisticated strategy to group and decompose tasks, optimizing for context reuse and parallel execution.
|
||||
|
||||
### Phase 2: Task JSON Generation
|
||||
### Core Principles
|
||||
- **Primary Rule: Shared Context → Merge Tasks**: Tasks that operate on the same files, use the same artifacts, and share the same tech stack are merged. This avoids redundant context loading and recognizes inherent relationships between the tasks.
|
||||
- **Secondary Rule: Different Contexts + No Dependencies → Decompose for Parallel Execution**: Tasks that are fully independent (different files, different artifacts, no shared dependencies) are decomposed into separate parallel execution groups.
|
||||
|
||||
#### Task Decomposition Standards
|
||||
**Core Principle: Task Merging Over Decomposition**
|
||||
- **Merge Rule**: Execute together when possible
|
||||
- **Decompose Only When**:
|
||||
- Excessive workload (>2500 lines or >6 files)
|
||||
- Different tech stacks or domains
|
||||
- Sequential dependency blocking
|
||||
- Parallel execution needed
|
||||
### Context Analysis for Task Grouping
|
||||
The decision to merge or decompose is based on analyzing context indicators:
|
||||
|
||||
1. **Shared Context Indicators (→ Merge)**:
|
||||
* Identical `focus_paths` (working on the same modules/files).
|
||||
* Same tech stack and dependencies.
|
||||
* Identical `context.artifacts` references.
|
||||
* A sequential logic flow within the same feature.
|
||||
* Shared test fixtures or setup.
|
||||
|
||||
2. **Independent Context Indicators (→ Decompose)**:
|
||||
* Different `focus_paths` (separate modules).
|
||||
* Different tech stacks (e.g., frontend vs. backend).
|
||||
* Different `context.artifacts` (using different brainstorming outputs).
|
||||
* No shared dependencies.
|
||||
* Can be tested independently.
|
||||
|
||||
**Decomposition is only performed when**:
|
||||
- Tasks have different contexts and no shared dependencies (enabling parallel execution).
|
||||
- A single task represents an excessive workload (e.g., >2500 lines of code or >6 files to modify).
|
||||
- A sequential dependency creates a necessary block (e.g., IMPL-1 must complete before IMPL-2 can start).
|
||||
|
||||
### Context Signature Algorithm
|
||||
To automate grouping, a `context_signature` is computed for each task.
|
||||
|
||||
```javascript
const crypto = require('crypto');

// Compute a context signature for task grouping (SHA-256 over focus paths, artifacts, and tech stack)
function computeContextSignature(task) {
  const focusPathsStr = [...task.context.focus_paths].sort().join('|');
  const artifactsStr = task.context.artifacts.map(a => a.path).sort().join('|');
  const techStack = [...(task.context.shared_context?.tech_stack ?? [])].sort().join('|');

  return crypto.createHash('sha256')
    .update(`${focusPathsStr}:${artifactsStr}:${techStack}`)
    .digest('hex');
}
```
|
||||
|
||||
### Execution Group Assignment
|
||||
Tasks are assigned to execution groups based on their signatures and dependencies.
|
||||
|
||||
```javascript
// Group tasks by context signature
function groupTasksByContext(tasks) {
  const groups = {};

  tasks.forEach(task => {
    const signature = computeContextSignature(task);
    if (!groups[signature]) {
      groups[signature] = [];
    }
    groups[signature].push(task);
  });

  return groups;
}

// Assign execution groups for parallel tasks; merge tasks that share a context signature
function assignExecutionGroups(tasks) {
  const contextGroups = groupTasksByContext(tasks);
  const result = [];

  Object.entries(contextGroups).forEach(([signature, groupTasks]) => {
    if (groupTasks.length === 1) {
      const task = groupTasks[0];
      // Single task with unique context: parallel if independent, sequential otherwise
      if (!task.context.depends_on || task.context.depends_on.length === 0) {
        task.meta.execution_group = `parallel-${signature.slice(0, 8)}`;
      } else {
        task.meta.execution_group = null; // Sequential task
      }
      result.push(task);
    } else {
      // Multiple tasks with the same context → merge into a single task
      console.warn(`Tasks ${groupTasks.map(t => t.id).join(', ')} share context and will be merged`);
      // mergeTasks (defined elsewhere) combines the grouped tasks into one task definition
      result.push(mergeTasks(groupTasks));
    }
  });

  return result;
}
```
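
A short usage sketch with two illustrative tasks (field values are placeholders):

```javascript
// IMPL-1 is independent → gets a parallel-* execution_group; IMPL-2 depends on IMPL-1 → stays sequential (null)
const planned = assignExecutionGroups([
  { id: 'IMPL-1', meta: {}, context: { focus_paths: ['src/auth'], artifacts: [], shared_context: { tech_stack: ['node'] }, depends_on: [] } },
  { id: 'IMPL-2', meta: {}, context: { focus_paths: ['src/ui'], artifacts: [], shared_context: { tech_stack: ['node'] }, depends_on: ['IMPL-1'] } }
]);
```
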
|
||||
**Task Limits**:
|
||||
- **Maximum 10 tasks** (hard limit)
|
||||
- **Function-based**: Complete units (logic + UI + tests + config)
|
||||
- **Hierarchy**: Flat (≤5) | Two-level (6-10) | Re-scope (>10)
|
||||
- **Maximum 10 tasks** (hard limit).
|
||||
- **Hierarchy**: Flat (≤5 tasks) or two-level (6-10 tasks). If >10, the scope should be re-evaluated.
|
||||
- **Parallel Groups**: Tasks with the same `execution_group` ID are independent and can run concurrently.
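
These limits can be summarized as a small policy function (a sketch of the rules above, not actual command code):

```javascript
// Sketch of the task-count policy described above
function chooseHierarchy(taskCount) {
  if (taskCount > 10) return 're-scope';   // hard limit exceeded: shrink the scope
  if (taskCount >= 6) return 'two-level';  // 6-10 tasks: container tasks with subtasks
  return 'flat';                           // <=5 tasks: flat list
}
```
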
|
||||
|
||||
## 6. Generated Outputs
|
||||
The command produces three key documents and a directory of task files.
|
||||
|
||||
### 6.1. Task JSON Schema (`.task/IMPL-*.json`)
|
||||
This enhanced 5-field schema embeds all necessary context, artifacts, and execution steps.
|
||||
|
||||
#### Enhanced Task JSON Schema (5-Field + Artifacts)
|
||||
```json
|
||||
{
|
||||
"id": "IMPL-N[.M]",
|
||||
"title": "Descriptive task name",
|
||||
"status": "pending|active|completed|blocked|container",
|
||||
"context_package_path": ".workflow/WFS-[session]/.process/context-package.json",
|
||||
"meta": {
|
||||
"type": "feature|bugfix|refactor|test-gen|test-fix|docs",
|
||||
"agent": "@code-developer|@test-fix-agent|@general-purpose"
|
||||
"agent": "@code-developer|@test-fix-agent|@universal-executor",
|
||||
"execution_group": "group-id|null",
|
||||
"context_signature": "hash-of-focus_paths-and-artifacts"
|
||||
},
|
||||
"context": {
|
||||
"requirements": ["Clear requirement from analysis"],
|
||||
@@ -98,67 +190,60 @@ Tasks execute using Codex CLI with resume mechanism:
|
||||
"shared_context": {"tech_stack": [], "conventions": []},
|
||||
"artifacts": [
|
||||
{
|
||||
"type": "synthesis_specification",
|
||||
"source": "brainstorm_synthesis",
|
||||
"path": ".workflow/WFS-[session]/.brainstorming/synthesis-specification.md",
|
||||
"path": "{{from context-package.json → brainstorm_artifacts.role_analyses[].files[].path}}",
|
||||
"priority": "highest",
|
||||
"usage": "Primary requirement source - use for consolidated requirements and cross-role alignment"
|
||||
"usage": "Role-specific requirements, design specs, enhanced by synthesis. Paths loaded dynamically from context-package.json (supports multiple files per role: analysis.md, analysis-01.md, analysis-api.md, etc.). Common roles: product-manager, system-architect, ui-designer, data-architect, ux-expert."
|
||||
},
|
||||
{
|
||||
"type": "role_analysis",
|
||||
"source": "brainstorm_roles",
|
||||
"path": ".workflow/WFS-[session]/.brainstorming/[role-name]/analysis.md",
|
||||
"path": ".workflow/WFS-[session]/.brainstorming/guidance-specification.md",
|
||||
"priority": "high",
|
||||
"usage": "Technical/design/business details from specific roles. Common roles: system-architect (ADRs, APIs, caching), ui-designer (design tokens, layouts), product-manager (user stories, metrics)",
|
||||
"note": "Dynamically discovered - multiple role analysis files may be included based on brainstorming results"
|
||||
},
|
||||
{
|
||||
"type": "topic_framework",
|
||||
"source": "brainstorm_framework",
|
||||
"path": ".workflow/WFS-[session]/.brainstorming/topic-framework.md",
|
||||
"priority": "low",
|
||||
"usage": "Discussion context and framework structure"
|
||||
"usage": "Finalized design decisions (potentially modified by conflict resolution if conflict_risk was medium/high). Use for: understanding resolved requirements, design choices, conflict resolutions applied in-place"
|
||||
}
|
||||
]
|
||||
},
|
||||
"flow_control": {
|
||||
"pre_analysis": [
|
||||
{
|
||||
"step": "load_synthesis_specification",
|
||||
"action": "Load consolidated synthesis specification",
|
||||
"step": "load_context_package",
|
||||
"action": "Load context package for artifact paths",
|
||||
"note": "Context package path is now at top-level field: context_package_path",
|
||||
"commands": [
|
||||
"bash(ls .workflow/WFS-[session]/.brainstorming/synthesis-specification.md 2>/dev/null || echo 'not found')",
|
||||
"Read(.workflow/WFS-[session]/.brainstorming/synthesis-specification.md)"
|
||||
"Read({{context_package_path}})"
|
||||
],
|
||||
"output_to": "synthesis_specification",
|
||||
"on_error": "skip_optional"
|
||||
"output_to": "context_package",
|
||||
"on_error": "fail"
|
||||
},
|
||||
{
|
||||
"step": "load_role_analysis_artifacts",
|
||||
"action": "Load role-specific analysis documents for technical details",
|
||||
"note": "These artifacts contain implementation details not in synthesis. Consult when needing: API schemas, caching configs, design tokens, ADRs, performance metrics.",
|
||||
"action": "Load role analyses from context-package.json (supports multiple files per role)",
|
||||
"note": "Paths loaded from context-package.json → brainstorm_artifacts.role_analyses[]. Supports analysis*.md automatically.",
|
||||
"commands": [
|
||||
"bash(find .workflow/WFS-[session]/.brainstorming/ -name 'analysis.md' 2>/dev/null | head -8)",
|
||||
"Read(.workflow/WFS-[session]/.brainstorming/system-architect/analysis.md)",
|
||||
"Read(.workflow/WFS-[session]/.brainstorming/ui-designer/analysis.md)",
|
||||
"Read(.workflow/WFS-[session]/.brainstorming/product-manager/analysis.md)"
|
||||
"Read({{context_package_path}})",
|
||||
"Extract(brainstorm_artifacts.role_analyses[].files[].path)",
|
||||
"Read(each extracted path)"
|
||||
],
|
||||
"output_to": "role_analysis_artifacts",
|
||||
"on_error": "skip_optional"
|
||||
},
|
||||
{
|
||||
"step": "load_planning_context",
|
||||
"action": "Load plan-generated analysis",
|
||||
"action": "Load plan-generated context intelligence with resolved conflicts",
|
||||
"note": "CRITICAL: context-package.json (from context_package_path) provides smart context (focus paths, dependencies, patterns) and conflict resolution status. If conflict_risk was medium/high, conflicts have been resolved in guidance-specification.md and role analyses.",
|
||||
"commands": [
|
||||
"Read(.workflow/WFS-[session]/.process/ANALYSIS_RESULTS.md)",
|
||||
"Read(.workflow/WFS-[session]/.process/context-package.json)"
|
||||
"Read({{context_package_path}})",
|
||||
"Read(.workflow/WFS-[session]/.brainstorming/guidance-specification.md)"
|
||||
],
|
||||
"output_to": "planning_context"
|
||||
"output_to": "planning_context",
|
||||
"on_error": "fail",
|
||||
"usage_guidance": {
|
||||
"context-package.json": "Use for focus_paths validation, dependency resolution, existing pattern discovery, module structure understanding, conflict_risk status (resolved/none/low)",
|
||||
"guidance-specification.md": "Use for finalized design decisions (includes applied conflict resolutions if any)"
|
||||
}
|
||||
},
|
||||
{
|
||||
"step": "mcp_codebase_exploration",
|
||||
"action": "Explore codebase using MCP tools",
|
||||
"command": "mcp__code-index__find_files(pattern=\"[patterns]\") && mcp__code-index__search_code_advanced(pattern=\"[patterns]\")",
|
||||
"step": "codebase_exploration",
|
||||
"action": "Explore codebase using native tools",
|
||||
"command": "bash(find . -name \"[patterns]\" -type f && rg \"[patterns]\")",
|
||||
"output_to": "codebase_structure"
|
||||
},
|
||||
{
|
||||
@@ -166,7 +251,7 @@ Tasks execute using Codex CLI with resume mechanism:
|
||||
"action": "Analyze existing code patterns and identify modification targets",
|
||||
"commands": [
|
||||
"bash(cd \"[focus_paths]\")",
|
||||
"bash(~/.claude/scripts/gemini-wrapper -p \"PURPOSE: Identify modification targets TASK: Analyze '[title]' and locate specific files/functions/lines to modify CONTEXT: [synthesis_specification] [individual_artifacts] EXPECTED: Code locations in format 'file:function:lines' RULES: Prioritize synthesis-specification.md, identify exact modification points\")"
|
||||
"bash(gemini \"PURPOSE: Identify modification targets TASK: Analyze '[title]' and locate specific files/functions/lines to modify CONTEXT: [role_analyses] [individual_artifacts] EXPECTED: Code locations in format 'file:function:lines' RULES: Consult role analyses for requirements, identify exact modification points\")"
|
||||
],
|
||||
"output_to": "task_context_with_targets",
|
||||
"on_error": "fail"
|
||||
@@ -175,155 +260,54 @@ Tasks execute using Codex CLI with resume mechanism:
|
||||
"implementation_approach": [
|
||||
{
|
||||
"step": 1,
|
||||
"title": "Implement task following synthesis specification",
|
||||
"description": "Implement '[title]' following synthesis specification. PRIORITY: Use synthesis-specification.md as primary requirement source. When implementation needs technical details (e.g., API schemas, caching configs, design tokens), refer to artifacts[] for detailed specifications from original role analyses.",
|
||||
"title": "Implement task following role analyses and context",
|
||||
"description": "Implement '[title]' following this priority: 1) role analysis.md files (requirements, design specs, enhancements from synthesis), 2) guidance-specification.md (finalized decisions with resolved conflicts), 3) context-package.json (smart context, focus paths, patterns). Role analyses are enhanced by synthesis phase with concept improvements and clarifications. If conflict_risk was medium/high, conflict resolutions are already applied in-place.",
|
||||
"modification_points": [
|
||||
"Apply consolidated requirements from synthesis-specification.md",
|
||||
"Follow technical guidelines from synthesis",
|
||||
"Consult artifacts for implementation details when needed",
|
||||
"Apply requirements and design specs from role analysis documents",
|
||||
"Use enhancements and clarifications from synthesis phase",
|
||||
"Use finalized decisions from guidance-specification.md (includes resolved conflicts)",
|
||||
"Use context-package.json for focus paths and dependency resolution",
|
||||
"Consult specific role artifacts for implementation details when needed",
|
||||
"Integrate with existing patterns"
|
||||
],
|
||||
"logic_flow": [
|
||||
"Load synthesis specification",
|
||||
"Extract requirements and design",
|
||||
"Analyze existing patterns",
|
||||
"Implement following specification",
|
||||
"Consult artifacts for technical details when needed",
|
||||
"Load role analyses (requirements, design, enhancements from synthesis)",
|
||||
"Load guidance-specification.md (finalized decisions with resolved conflicts if any)",
|
||||
"Load context-package.json (smart context: focus paths, dependencies, patterns, conflict_risk status)",
|
||||
"Extract requirements and design decisions from role documents",
|
||||
"Review synthesis enhancements and clarifications",
|
||||
"Use finalized decisions (conflicts already resolved if applicable)",
|
||||
"Identify modification targets using context package",
|
||||
"Implement following role requirements and design specs",
|
||||
"Consult role artifacts for detailed specifications when needed",
|
||||
"Validate against acceptance criteria"
|
||||
],
|
||||
"depends_on": [],
|
||||
"output": "implementation"
|
||||
}
|
||||
],
|
||||
|
||||
// CLI Execute Mode: Use Codex command (when --cli-execute flag present)
|
||||
"implementation_approach": [
|
||||
{
|
||||
"step": 1,
|
||||
"title": "Execute implementation with Codex",
|
||||
"description": "Use Codex CLI to implement '[title]' following synthesis specification with autonomous development capabilities",
|
||||
"modification_points": [
|
||||
"Codex loads synthesis specification and artifacts",
|
||||
"Codex implements following requirements",
|
||||
"Codex validates and tests implementation"
|
||||
],
|
||||
"logic_flow": [
|
||||
"Establish or resume Codex session",
|
||||
"Pass synthesis specification to Codex",
|
||||
"Codex performs autonomous implementation",
|
||||
"Codex validates against acceptance criteria"
|
||||
],
|
||||
"command": "bash(codex -C [focus_paths] --full-auto exec \"PURPOSE: [title] TASK: [requirements] MODE: auto CONTEXT: @{[synthesis_path],[artifacts_paths]} EXPECTED: [acceptance] RULES: Follow synthesis-specification.md\" [resume_flag] --skip-git-repo-check -s danger-full-access)",
|
||||
"depends_on": [],
|
||||
"output": "implementation"
|
||||
}
|
||||
],
|
||||
"target_files": ["file:function:lines"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
#### Task Generation Process
|
||||
1. Parse analysis results and extract task definitions
|
||||
2. Detect brainstorming artifacts with priority scoring
|
||||
3. Generate task context (requirements, focus_paths, acceptance)
|
||||
4. **Determine modification targets**: Extract specific code locations from analysis
|
||||
5. Build flow_control with artifact loading steps and target_files
|
||||
6. **CLI Execute Mode**: If `--cli-execute` flag present, generate Codex commands
|
||||
7. Create individual task JSON files in `.task/`
|
||||
### 6.2. IMPL_PLAN.md Structure
|
||||
This document provides a high-level overview of the entire implementation plan.
|
||||
|
||||
#### Codex Resume Mechanism (CLI Execute Mode)
|
||||
|
||||
**Session Continuity Strategy**:
|
||||
- **First Task** (no depends_on or depends_on=[]): Establish new Codex session
|
||||
- Command: `codex -C [path] --full-auto exec "[prompt]" --skip-git-repo-check -s danger-full-access`
|
||||
- Creates new session context
|
||||
|
||||
- **Subsequent Tasks** (has depends_on): Resume previous Codex session
|
||||
- Command: `codex --full-auto exec "[prompt]" resume --last --skip-git-repo-check -s danger-full-access`
|
||||
- Maintains context from previous implementation
|
||||
- **Critical**: `resume --last` flag enables context continuity
|
||||
|
||||
**Resume Flag Logic**:
|
||||
```javascript
|
||||
// Determine resume flag based on task dependencies
|
||||
const resumeFlag = task.context.depends_on && task.context.depends_on.length > 0
|
||||
? "resume --last"
|
||||
: "";
|
||||
|
||||
// First task (IMPL-001): no resume flag
|
||||
// Later tasks (IMPL-002, IMPL-003): use "resume --last"
|
||||
```
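
Putting the resume flag together with the command templates above, a hedged sketch of how a full codex invocation could be assembled from a task (prompt content and working-directory choice are illustrative):

```javascript
// Illustrative assembly of the codex command from a task; prompt content is elided
function buildCodexCommand(task, prompt) {
  const hasDeps = task.context.depends_on && task.context.depends_on.length > 0;
  const resumeFlag = hasDeps ? ' resume --last' : '';
  // First task pins the working directory (-C); resumed tasks inherit the session context
  const cwdFlag = hasDeps ? '' : ` -C ${task.context.focus_paths[0]}`;
  return `codex${cwdFlag} --full-auto exec "${prompt}"${resumeFlag} --skip-git-repo-check -s danger-full-access`;
}
```
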
|
||||
|
||||
**Benefits**:
|
||||
- ✅ Shared context across related tasks
|
||||
- ✅ Codex learns from previous implementations
|
||||
- ✅ Consistent patterns and conventions
|
||||
- ✅ Reduced redundant analysis
|
||||
|
||||
#### Target Files Generation (Critical)
|
||||
**Purpose**: Identify specific code locations for modification AND new files to create
|
||||
|
||||
**Source Data Priority**:
|
||||
1. **ANALYSIS_RESULTS.md** - Should contain identified code locations
|
||||
2. **Gemini/MCP Analysis** - From `analyze_task_patterns` step
|
||||
3. **Context Package** - File references from `focus_paths`
|
||||
|
||||
**Format**: `["file:function:lines"]` or `["file"]` (for new files)
|
||||
- `file`: Relative path from project root (e.g., `src/auth/AuthService.ts`)
|
||||
- `function`: Function/method name to modify (e.g., `login`, `validateToken`) - **omit for new files**
|
||||
- `lines`: Approximate line range (e.g., `45-52`, `120-135`) - **omit for new files**
|
||||
|
||||
**Examples**:
|
||||
```json
|
||||
"target_files": [
|
||||
"src/auth/AuthService.ts:login:45-52",
|
||||
"src/middleware/auth.ts:validateToken:30-45",
|
||||
"src/auth/PasswordReset.ts",
|
||||
"tests/auth/PasswordReset.test.ts",
|
||||
"tests/auth.test.ts:testLogin:15-20"
|
||||
]
|
||||
```
|
||||
|
||||
**Generation Strategy**:
|
||||
- **New files to create** → Use `["path/to/NewFile.ts"]` (no function or lines)
|
||||
- **Existing files with specific locations** → Use `["file:function:lines"]`
|
||||
- **Existing files with function only** → Search lines using MCP/grep `["file:function:*"]`
|
||||
- **Existing files (explore entire)** → Mark as `["file.ts:*:*"]`
|
||||
- **No specific targets** → Leave empty `[]` (agent explores focus_paths)
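
A small sketch of how a `target_files` entry can be parsed back into its parts, based on the `file:function:lines` format above (helper name is illustrative):

```javascript
// Parse a target_files entry into structured parts
function parseTargetFile(entry) {
  const [file, fn, lines] = entry.split(':');
  return {
    file,                                          // e.g. "src/auth/AuthService.ts"
    func: fn && fn !== '*' ? fn : null,            // omitted or "*" → new file / explore whole file
    lines: lines && lines !== '*' ? lines : null   // e.g. "45-52"
  };
}

// parseTargetFile("src/auth/AuthService.ts:login:45-52")
//   → { file: "src/auth/AuthService.ts", func: "login", lines: "45-52" }
// parseTargetFile("src/auth/PasswordReset.ts")
//   → { file: "src/auth/PasswordReset.ts", func: null, lines: null }
```
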
|
||||
|
||||
### Phase 3: Artifact Detection & Integration
|
||||
|
||||
#### Artifact Priority
|
||||
1. **synthesis-specification.md** (highest) - Complete integrated spec
|
||||
2. **topic-framework.md** (medium) - Discussion framework
|
||||
3. **role/analysis.md** (low) - Individual perspectives
|
||||
|
||||
#### Artifact-Task Mapping
|
||||
- **synthesis-specification.md** → All tasks
|
||||
- **ui-designer/analysis.md** → UI/Frontend tasks
|
||||
- **ux-expert/analysis.md** → UX/Interaction tasks
|
||||
- **system-architect/analysis.md** → Architecture/Backend tasks
|
||||
- **subject-matter-expert/analysis.md** → Domain/Standards tasks
|
||||
- **data-architect/analysis.md** → Data/API tasks
|
||||
- **scrum-master/analysis.md** → Sprint/Process tasks
|
||||
- **product-owner/analysis.md** → Backlog/Story tasks
|
||||
|
||||
### Phase 4: IMPL_PLAN.md Generation
|
||||
|
||||
#### Document Structure
|
||||
```markdown
|
||||
---
|
||||
identifier: WFS-{session-id}
|
||||
source: "User requirements" | "File: path" | "Issue: ISS-001"
|
||||
analysis: .workflow/{session-id}/.process/ANALYSIS_RESULTS.md
|
||||
role_analyses: .workflow/{session-id}/.brainstorming/[role]/analysis*.md
|
||||
artifacts: .workflow/{session-id}/.brainstorming/
|
||||
context_package: .workflow/{session-id}/.process/context-package.json # CCW smart context
|
||||
guidance_specification: .workflow/{session-id}/.brainstorming/guidance-specification.md # Finalized decisions with resolved conflicts
|
||||
workflow_type: "standard | tdd | design" # Indicates execution model
|
||||
verification_history: # CCW quality gates
|
||||
concept_verify: "passed | skipped | pending"
|
||||
synthesis_clarify: "passed | skipped | pending" # Brainstorm phase clarification
|
||||
action_plan_verify: "pending"
|
||||
phase_progression: "brainstorm → context → analysis → concept_verify → planning" # CCW workflow phases
|
||||
conflict_resolution: "resolved | none | low" # Status from context-package.json
|
||||
phase_progression: "brainstorm → synthesis → context → conflict_resolution (if needed) → planning" # CCW workflow phases
|
||||
---
|
||||
|
||||
# Implementation Plan: {Project Title}
|
||||
@@ -342,14 +326,14 @@ Core requirements, objectives, technical approach summary (2-3 paragraphs max).
|
||||
|
||||
### CCW Workflow Context
|
||||
**Phase Progression**:
|
||||
- ✅ Phase 1: Brainstorming (synthesis-specification.md generated)
|
||||
- ✅ Phase 2: Context Gathering (context-package.json: {N} files, {M} modules analyzed)
|
||||
- ✅ Phase 3: Enhanced Analysis (ANALYSIS_RESULTS.md: Gemini/Qwen/Codex parallel insights)
|
||||
- ✅ Phase 4: Concept Verification ({X} clarifications answered, synthesis updated | skipped)
|
||||
- ⏳ Phase 5: Action Planning (current phase - generating IMPL_PLAN.md)
|
||||
- ✅ Phase 1: Brainstorming (role analyses generated by participating roles)
|
||||
- ✅ Phase 2: Synthesis (concept enhancement + clarification, {N} questions answered, role analyses refined)
|
||||
- ✅ Phase 3: Context Gathering (context-package.json: {N} files, {M} modules analyzed, conflict_risk: {level})
|
||||
- ✅ Phase 4: Conflict Resolution ({status}: {conflict_count} conflicts detected and resolved | skipped if no conflicts)
|
||||
- ⏳ Phase 5: Task Generation (current phase - generating IMPL_PLAN.md and task JSONs)
|
||||
|
||||
**Quality Gates**:
|
||||
- concept-verify: ✅ Passed (0 ambiguities remaining) | ⏭️ Skipped (user decision) | ⏳ Pending
|
||||
- synthesis-clarify: ✅ Passed ({N} ambiguities resolved, {M} enhancements applied)
|
||||
- action-plan-verify: ⏳ Pending (recommended before /workflow:execute)
|
||||
|
||||
**Context Package Summary**:
|
||||
@@ -365,9 +349,9 @@ Core requirements, objectives, technical approach summary (2-3 paragraphs max).
|
||||
- **Timeline**: Duration and milestones
|
||||
|
||||
### Module Structure
|
||||
```
|
||||
'''
|
||||
[Directory tree showing key modules]
|
||||
```
|
||||
'''
|
||||
|
||||
### Dependencies
|
||||
**Primary**: [Core libraries and frameworks]
|
||||
@@ -383,40 +367,42 @@ Core requirements, objectives, technical approach summary (2-3 paragraphs max).
|
||||
## 3. Brainstorming Artifacts Reference
|
||||
|
||||
### Artifact Usage Strategy
|
||||
**Primary Reference (synthesis-specification.md)**:
|
||||
- **What**: Comprehensive implementation blueprint from multi-role synthesis
|
||||
- **When**: Every task references this first for requirements and design decisions
|
||||
- **How**: Extract architecture decisions, UI/UX patterns, functional requirements, non-functional requirements
|
||||
- **Priority**: Authoritative - overrides role-specific analyses when conflicts arise
|
||||
- **CCW Value**: Consolidates insights from all brainstorming roles into single source of truth
|
||||
**Primary Reference (Role Analyses)**:
|
||||
- **What**: Role-specific analyses from brainstorming phase providing multi-perspective insights
|
||||
- **When**: Every task references relevant role analyses for requirements and design decisions
|
||||
- **How**: Extract requirements, architecture decisions, UI/UX patterns from applicable role documents
|
||||
- **Priority**: Collective authoritative source - multiple role perspectives provide comprehensive coverage
|
||||
- **CCW Value**: Maintains role-specific expertise while enabling cross-role integration during planning
|
||||
|
||||
**Context Intelligence (context-package.json)**:
|
||||
- **What**: Smart context gathered by CCW's context-gather phase
|
||||
- **Content**: Focus paths, dependency graph, existing patterns, module structure
|
||||
- **Usage**: Tasks load this via `flow_control.preparatory_steps` for environment setup
|
||||
- **Content**: Focus paths, dependency graph, existing patterns, module structure, tech stack, conflict_risk status
|
||||
- **Usage**: Tasks load this via `flow_control.preparatory_steps` for environment setup and conflict awareness
|
||||
- **CCW Value**: Automated intelligent context discovery replacing manual file exploration
|
||||
|
||||
**Technical Analysis (ANALYSIS_RESULTS.md)**:
|
||||
- **What**: Gemini/Qwen/Codex parallel analysis results
|
||||
- **Content**: Optimization strategies, risk assessment, architecture review, implementation patterns
|
||||
- **Usage**: Referenced in task planning for technical guidance and risk mitigation
|
||||
- **CCW Value**: Multi-model parallel analysis providing comprehensive technical intelligence
|
||||
**Conflict Resolution Status**:
|
||||
- **What**: Conflict resolution applied in-place to brainstorm artifacts (if conflict_risk was >= medium)
|
||||
- **Location**: guidance-specification.md and role analyses (*.md) contain resolved conflicts
|
||||
- **Status**: Check context-package.json → conflict_detection.conflict_risk ("resolved" | "none" | "low")
|
||||
- **Usage**: Read finalized decisions from guidance-specification.md (includes applied resolutions)
|
||||
- **CCW Value**: Interactive conflict resolution with user confirmation, modifications applied automatically
|
||||
|
||||
### Integrated Specifications (Highest Priority)
|
||||
- **synthesis-specification.md**: Comprehensive implementation blueprint
|
||||
- Contains: Architecture design, UI/UX guidelines, functional/non-functional requirements, implementation roadmap, risk assessment
|
||||
### Role Analysis Documents (Highest Priority)
|
||||
Role analyses provide specialized perspectives on the implementation:
|
||||
- **system-architect/analysis.md**: Architecture design, ADRs, API specifications, caching strategies
|
||||
- **ui-designer/analysis.md**: Design tokens, layout specifications, component patterns
|
||||
- **ux-expert/analysis.md**: User journeys, interaction flows, accessibility requirements
|
||||
- **product-manager/analysis.md**: Product vision, user stories, business requirements, success metrics
|
||||
- **data-architect/analysis.md**: Data models, schemas, database design, migration strategies
|
||||
- **api-designer/analysis.md**: API contracts, endpoint specifications, integration patterns
|
||||
|
||||
### Supporting Artifacts (Reference)
|
||||
- **topic-framework.md**: Role-specific discussion points and analysis framework
|
||||
- **system-architect/analysis.md**: Detailed architecture specifications
|
||||
- **ui-designer/analysis.md**: Layout and component specifications
|
||||
- **product-manager/analysis.md**: Product vision and user stories
|
||||
|
||||
**Artifact Priority in Development**:
|
||||
1. synthesis-specification.md (primary reference for all tasks)
|
||||
2. context-package.json (smart context for execution environment)
|
||||
3. ANALYSIS_RESULTS.md (technical analysis and optimization strategies)
|
||||
4. Role-specific analyses (fallback for detailed specifications)
|
||||
1. {context_package_path} (primary source: smart context AND brainstorm artifact catalog in `brainstorm_artifacts` + conflict_risk status)
|
||||
2. role/analysis*.md (paths from context-package.json: requirements, design specs, enhanced by synthesis, with resolved conflicts if any)
|
||||
3. guidance-specification.md (path from context-package.json: finalized decisions with resolved conflicts if any)
|
||||
|
||||
## 4. Implementation Strategy
|
||||
|
||||
@@ -433,7 +419,7 @@ Core requirements, objectives, technical approach summary (2-3 paragraphs max).
|
||||
|
||||
### Architectural Approach
|
||||
**Key Architecture Decisions**:
|
||||
- [ADR references from synthesis]
|
||||
- [ADR references from role analyses]
|
||||
- [Justification for architecture patterns]
|
||||
|
||||
**Integration Strategy**:
|
||||
@@ -442,9 +428,9 @@ Core requirements, objectives, technical approach summary (2-3 paragraphs max).
|
||||
|
||||
### Key Dependencies
|
||||
**Task Dependency Graph**:
|
||||
```
|
||||
'''
|
||||
[High-level dependency visualization]
|
||||
```
|
||||
'''
|
||||
|
||||
**Critical Path**: [Identify bottleneck tasks]
|
||||
|
||||
@@ -525,7 +511,7 @@ Core requirements, objectives, technical approach summary (2-3 paragraphs max).
|
||||
## 8. Success Criteria
|
||||
|
||||
**Functional Completeness**:
|
||||
- [ ] All requirements from synthesis-specification.md implemented
|
||||
- [ ] All requirements from role analysis documents implemented
|
||||
- [ ] All acceptance criteria from task.json files met
|
||||
|
||||
**Technical Quality**:
|
||||
@@ -539,12 +525,12 @@ Core requirements, objectives, technical approach summary (2-3 paragraphs max).
|
||||
- [ ] Documentation complete
|
||||
|
||||
**Business Metrics**:
|
||||
- [ ] [Key business metrics from synthesis]
|
||||
- [ ] [Key business metrics from role analyses]
|
||||
```
|
||||
|
||||
### Phase 5: TODO_LIST.md Generation
|
||||
### 6.3. TODO_LIST.md Structure
|
||||
A simple Markdown file for tracking the status of each task.
|
||||
|
||||
#### Document Structure
|
||||
```markdown
|
||||
# Tasks: [Session Topic]
|
||||
|
||||
@@ -562,12 +548,8 @@ Core requirements, objectives, technical approach summary (2-3 paragraphs max).
|
||||
- Maximum 2 levels: Main tasks and subtasks only
|
||||
```
|
||||
|
||||
### Phase 6: Session State Update
|
||||
1. Update workflow-session.json with task count and artifacts
|
||||
2. Validate all output files (task JSONs, IMPL_PLAN.md, TODO_LIST.md)
|
||||
3. Generate completion report
|
||||
|
||||
## Output Files Structure
|
||||
### 6.4. Output Files Diagram
|
||||
The command organizes outputs into a standard directory structure.
|
||||
```
|
||||
.workflow/{session-id}/
|
||||
├── IMPL_PLAN.md # Implementation plan
|
||||
@@ -576,22 +558,226 @@ Core requirements, objectives, technical approach summary (2-3 paragraphs max).
|
||||
│ ├── IMPL-1.json # Container task
|
||||
│ ├── IMPL-1.1.json # Leaf task with flow_control
|
||||
│ └── IMPL-1.2.json # Leaf task with flow_control
|
||||
├── .brainstorming/ # Input artifacts
|
||||
│ ├── synthesis-specification.md
|
||||
│ ├── topic-framework.md
|
||||
│ └── {role}/analysis.md
|
||||
├── .brainstorming # Input artifacts from brainstorm + synthesis
|
||||
│ ├── guidance-specification.md # Finalized decisions (with resolved conflicts if any)
|
||||
│ └── {role}/analysis*.md # Role analyses (enhanced by synthesis, with resolved conflicts if any)
|
||||
└── .process/
|
||||
├── ANALYSIS_RESULTS.md # Input from concept-enhanced
|
||||
└── context-package.json # Input from context-gather
|
||||
└── context-package.json # Input from context-gather (smart context + conflict_risk status)
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
## 7. Artifact Integration
|
||||
The command intelligently detects and integrates artifacts from the `.brainstorming/` directory.
|
||||
|
||||
#### Artifact Priority
|
||||
1. **context-package.json** (critical): Primary source - smart context AND all brainstorm artifact paths in `brainstorm_artifacts` section + conflict_risk status
|
||||
2. **role/analysis*.md** (highest): Paths from context-package.json → role-specific requirements, design specs, enhanced by synthesis, with resolved conflicts applied in-place
|
||||
3. **guidance-specification.md** (high): Path from context-package.json → finalized decisions with resolved conflicts (if conflict_risk was >= medium)
|
||||
|
||||
#### Artifact-Task Mapping
|
||||
Artifacts are mapped to tasks based on their relevance to the task's domain.
|
||||
- **Role analysis.md files**: Primary requirements source - all relevant role analyses included based on task type
|
||||
- **ui-designer/analysis.md**: Mapped to UI/Frontend tasks for design tokens, layouts, components
|
||||
- **system-architect/analysis.md**: Mapped to Architecture/Backend tasks for ADRs, APIs, patterns
|
||||
- **subject-matter-expert/analysis.md**: Mapped to tasks related to domain logic or standards
|
||||
- **data-architect/analysis.md**: Mapped to tasks involving data models, schemas, or APIs
|
||||
- **product-manager/analysis.md**: Mapped to all tasks for business requirements and user stories
|
||||
|
||||
This ensures that each task has access to the most relevant and detailed specifications from role-specific analyses.
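
A rough sketch of this mapping as data (the domain keys are illustrative labels, not schema fields; role paths follow the list above):

```javascript
// Illustrative artifact-task mapping; product-manager analysis applies to every task
const roleArtifactsByDomain = {
  ui:           ['ui-designer/analysis.md'],
  backend:      ['system-architect/analysis.md'],
  data:         ['data-architect/analysis.md'],
  domain_logic: ['subject-matter-expert/analysis.md']
};

function artifactsForDomain(domain) {
  const specific = roleArtifactsByDomain[domain] ?? [];
  return ['product-manager/analysis.md', ...specific];
}
```
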
|
||||
|
||||
## 8. CLI Execute Mode Details
|
||||
When using `--cli-execute`, each step in `implementation_approach` includes a `command` field with the execution command.
|
||||
|
||||
**Key Points**:
|
||||
- **Sequential Steps**: Steps execute in order defined in `implementation_approach` array
|
||||
- **Context Delivery**: Each codex command receives context via the CONTEXT field: `@{context_package_path}` (role analyses loaded dynamically from the context package)
- **Multi-Step Tasks**: First step provides full context, subsequent steps use `resume --last` to maintain session continuity
|
||||
- **Step Dependencies**: Later steps reference outputs from earlier steps via `depends_on` field
|
||||
|
||||
### Example 1: Agent Mode - Simple Task (Default, No Command)
|
||||
```json
|
||||
{
|
||||
"id": "IMPL-001",
|
||||
"title": "Implement user authentication module",
|
||||
"context_package_path": ".workflow/WFS-session/.process/context-package.json",
|
||||
"context": {
|
||||
"depends_on": [],
|
||||
"focus_paths": ["src/auth"],
|
||||
"requirements": ["JWT-based authentication", "Login and registration endpoints"],
|
||||
"acceptance": [
|
||||
"JWT token generation working",
|
||||
"Login and registration endpoints implemented",
|
||||
"Tests passing with >70% coverage"
|
||||
]
|
||||
},
|
||||
"flow_control": {
|
||||
"pre_analysis": [
|
||||
{
|
||||
"step": "load_role_analyses",
|
||||
"action": "Load role analyses from context-package.json",
|
||||
"commands": [
|
||||
"Read({{context_package_path}})",
|
||||
"Extract(brainstorm_artifacts.role_analyses[].files[].path)",
|
||||
"Read(each extracted path)"
|
||||
],
|
||||
"output_to": "role_analyses",
|
||||
"on_error": "fail"
|
||||
},
|
||||
{
|
||||
"step": "load_context",
|
||||
"action": "Load context package for project structure",
|
||||
"commands": ["Read({{context_package_path}})"],
|
||||
"output_to": "context_pkg",
|
||||
"on_error": "fail"
|
||||
}
|
||||
],
|
||||
"implementation_approach": [
|
||||
{
|
||||
"step": 1,
|
||||
"title": "Implement JWT-based authentication",
|
||||
"description": "Create authentication module using JWT following [role_analyses] requirements and [context_pkg] patterns",
|
||||
"modification_points": [
|
||||
"Create auth service with JWT generation",
|
||||
"Implement login endpoint with credential validation",
|
||||
"Implement registration endpoint with user creation",
|
||||
"Add JWT middleware for route protection"
|
||||
],
|
||||
"logic_flow": [
|
||||
"User registers → validate input → hash password → create user",
|
||||
"User logs in → validate credentials → generate JWT → return token",
|
||||
"Protected routes → validate JWT → extract user → allow access"
|
||||
],
|
||||
"depends_on": [],
|
||||
"output": "auth_implementation"
|
||||
}
|
||||
],
|
||||
"target_files": ["src/auth/service.ts", "src/auth/middleware.ts", "src/routes/auth.ts"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Example 2: CLI Execute Mode - Single Codex Step
|
||||
```json
|
||||
{
|
||||
"id": "IMPL-002",
|
||||
"title": "Implement user authentication module",
|
||||
"context_package_path": ".workflow/WFS-session/.process/context-package.json",
|
||||
"context": {
|
||||
"depends_on": [],
|
||||
"focus_paths": ["src/auth"],
|
||||
"requirements": ["JWT-based authentication", "Login and registration endpoints"],
|
||||
"acceptance": ["JWT generation working", "Endpoints implemented", "Tests passing"]
|
||||
},
|
||||
"flow_control": {
|
||||
"pre_analysis": [
|
||||
{
|
||||
"step": "load_role_analyses",
|
||||
"action": "Load role analyses from context-package.json",
|
||||
"commands": [
|
||||
"Read({{context_package_path}})",
|
||||
"Extract(brainstorm_artifacts.role_analyses[].files[].path)",
|
||||
"Read(each extracted path)"
|
||||
],
|
||||
"output_to": "role_analyses",
|
||||
"on_error": "fail"
|
||||
}
|
||||
],
|
||||
"implementation_approach": [
|
||||
{
|
||||
"step": 1,
|
||||
"title": "Implement authentication with Codex",
|
||||
"description": "Create JWT-based authentication module",
|
||||
"command": "bash(codex -C src/auth --full-auto exec \"PURPOSE: Implement user authentication TASK: JWT-based auth with login/registration MODE: auto CONTEXT: @{{context_package_path}} EXPECTED: Complete auth module with tests RULES: Load role analyses from context-package.json → brainstorm_artifacts\" --skip-git-repo-check -s danger-full-access)",
|
||||
"modification_points": ["Create auth service", "Implement endpoints", "Add JWT middleware"],
|
||||
"logic_flow": ["Validate credentials", "Generate JWT", "Return token"],
|
||||
"depends_on": [],
|
||||
"output": "auth_implementation"
|
||||
}
|
||||
],
|
||||
"target_files": ["src/auth/service.ts", "src/auth/middleware.ts"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Example 3: CLI Execute Mode - Multi-Step with Resume
|
||||
```json
|
||||
{
|
||||
"id": "IMPL-003",
|
||||
"title": "Implement role-based access control",
|
||||
"context_package_path": ".workflow/WFS-session/.process/context-package.json",
|
||||
"context": {
|
||||
"depends_on": ["IMPL-002"],
|
||||
"focus_paths": ["src/auth", "src/middleware"],
|
||||
"requirements": ["User roles and permissions", "Route protection middleware"],
|
||||
"acceptance": ["RBAC models created", "Middleware working", "Management API complete"]
|
||||
},
|
||||
"flow_control": {
|
||||
"pre_analysis": [
|
||||
{
|
||||
"step": "load_context",
|
||||
"action": "Load context and role analyses from context-package.json",
|
||||
"commands": [
|
||||
"Read({{context_package_path}})",
|
||||
"Extract(brainstorm_artifacts.role_analyses[].files[].path)",
|
||||
"Read(each extracted path)"
|
||||
],
|
||||
"output_to": "full_context",
|
||||
"on_error": "fail"
|
||||
}
|
||||
],
|
||||
"implementation_approach": [
|
||||
{
|
||||
"step": 1,
|
||||
"title": "Create RBAC models",
|
||||
"description": "Define role and permission data models",
|
||||
"command": "bash(codex -C src/auth --full-auto exec \"PURPOSE: Create RBAC models TASK: Role and permission models MODE: auto CONTEXT: @{{context_package_path}} EXPECTED: Models with migrations RULES: Load role analyses from context-package.json → brainstorm_artifacts\" --skip-git-repo-check -s danger-full-access)",
|
||||
"modification_points": ["Define role model", "Define permission model", "Create migrations"],
|
||||
"logic_flow": ["Design schema", "Implement models", "Generate migrations"],
|
||||
"depends_on": [],
|
||||
"output": "rbac_models"
|
||||
},
|
||||
{
|
||||
"step": 2,
|
||||
"title": "Implement RBAC middleware",
|
||||
"description": "Create route protection middleware using models from step 1",
|
||||
"command": "bash(codex --full-auto exec \"PURPOSE: Create RBAC middleware TASK: Route protection middleware MODE: auto CONTEXT: RBAC models from step 1 EXPECTED: Middleware for route protection RULES: Use session patterns\" resume --last --skip-git-repo-check -s danger-full-access)",
|
||||
"modification_points": ["Create permission checker", "Add route decorators", "Integrate with auth"],
|
||||
"logic_flow": ["Check user role", "Validate permissions", "Allow/deny access"],
|
||||
"depends_on": [1],
|
||||
"output": "rbac_middleware"
|
||||
},
|
||||
{
|
||||
"step": 3,
|
||||
"title": "Add role management API",
|
||||
"description": "Create CRUD endpoints for roles and permissions",
|
||||
"command": "bash(codex --full-auto exec \"PURPOSE: Role management API TASK: CRUD endpoints for roles/permissions MODE: auto CONTEXT: Models and middleware from previous steps EXPECTED: Complete API with validation RULES: Maintain consistency\" resume --last --skip-git-repo-check -s danger-full-access)",
|
||||
"modification_points": ["Create role endpoints", "Create permission endpoints", "Add validation"],
|
||||
"logic_flow": ["Define routes", "Implement controllers", "Add authorization"],
|
||||
"depends_on": [2],
|
||||
"output": "role_management_api"
|
||||
}
|
||||
],
|
||||
"target_files": [
|
||||
"src/models/Role.ts",
|
||||
"src/models/Permission.ts",
|
||||
"src/middleware/rbac.ts",
|
||||
"src/routes/roles.ts"
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Pattern Summary**:
|
||||
- **Agent Mode (Example 1)**: No `command` field - agent executes via `modification_points` and `logic_flow`
|
||||
- **CLI Mode Single-Step (Example 2)**: One `command` field with full context package
|
||||
- **CLI Mode Multi-Step (Example 3)**: First step uses full context, subsequent steps use `resume --last`
|
||||
- **Context Delivery**: Context package provided via `@{...}` references in CONTEXT field
|
||||
|
||||
## 9. Error Handling
|
||||
|
||||
### Input Validation Errors
|
||||
| Error | Cause | Resolution |
|
||||
|-------|-------|------------|
|
||||
| Session not found | Invalid session ID | Verify session exists |
|
||||
| Analysis missing | Incomplete planning | Run concept-enhanced first |
|
||||
| Context missing | Incomplete planning | Run context-gather first |
|
||||
| Invalid format | Corrupted results | Regenerate analysis |
|
||||
|
||||
### Task Generation Errors
|
||||
@@ -608,7 +794,7 @@ Core requirements, objectives, technical approach summary (2-3 paragraphs max).
|
||||
| Invalid format | Corrupted file | Skip artifact loading |
|
||||
| Path invalid | Moved/deleted | Update references |
|
||||
|
||||
## Integration & Usage
|
||||
## 10. Integration & Usage
|
||||
|
||||
### Command Chain
|
||||
- **Called By**: `/workflow:plan` (Phase 4)
|
||||
@@ -620,82 +806,9 @@ Core requirements, objectives, technical approach summary (2-3 paragraphs max).
|
||||
/workflow:tools:task-generate --session WFS-auth
|
||||
```
|
||||
|
||||
## CLI Execute Mode Examples
|
||||
|
||||
### Example 1: First Task (Establish Session)
|
||||
```json
|
||||
{
|
||||
"id": "IMPL-001",
|
||||
"title": "Implement user authentication module",
|
||||
"context": {
|
||||
"depends_on": [],
|
||||
"focus_paths": ["src/auth"],
|
||||
"requirements": ["JWT-based authentication", "Login and registration endpoints"]
|
||||
},
|
||||
"flow_control": {
|
||||
"implementation_approach": [{
|
||||
"step": 1,
|
||||
"title": "Execute implementation with Codex",
|
||||
"command": "bash(codex -C src/auth --full-auto exec \"PURPOSE: Implement user authentication module TASK: JWT-based authentication with login and registration MODE: auto CONTEXT: @{.workflow/WFS-session/.brainstorming/synthesis-specification.md} EXPECTED: Complete auth module with tests RULES: Follow synthesis specification\" --skip-git-repo-check -s danger-full-access)",
|
||||
"depends_on": [],
|
||||
"output": "implementation"
|
||||
}]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Example 2: Subsequent Task (Resume Session)
|
||||
```json
|
||||
{
|
||||
"id": "IMPL-002",
|
||||
"title": "Add password reset functionality",
|
||||
"context": {
|
||||
"depends_on": ["IMPL-001"],
|
||||
"focus_paths": ["src/auth"],
|
||||
"requirements": ["Password reset via email", "Token validation"]
|
||||
},
|
||||
"flow_control": {
|
||||
"implementation_approach": [{
|
||||
"step": 1,
|
||||
"title": "Execute implementation with Codex",
|
||||
"command": "bash(codex --full-auto exec \"PURPOSE: Add password reset functionality TASK: Password reset via email with token validation MODE: auto CONTEXT: Previous auth implementation from session EXPECTED: Password reset endpoints with email integration RULES: Maintain consistency with existing auth patterns\" resume --last --skip-git-repo-check -s danger-full-access)",
|
||||
"depends_on": [],
|
||||
"output": "implementation"
|
||||
}]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Example 3: Third Task (Continue Session)
|
||||
```json
|
||||
{
|
||||
"id": "IMPL-003",
|
||||
"title": "Implement role-based access control",
|
||||
"context": {
|
||||
"depends_on": ["IMPL-001", "IMPL-002"],
|
||||
"focus_paths": ["src/auth"],
|
||||
"requirements": ["User roles and permissions", "Middleware for route protection"]
|
||||
},
|
||||
"flow_control": {
|
||||
"implementation_approach": [{
|
||||
"step": 1,
|
||||
"title": "Execute implementation with Codex",
|
||||
"command": "bash(codex --full-auto exec \"PURPOSE: Implement role-based access control TASK: User roles, permissions, and route protection middleware MODE: auto CONTEXT: Existing auth system from session EXPECTED: RBAC system integrated with current auth RULES: Use established patterns from session context\" resume --last --skip-git-repo-check -s danger-full-access)",
|
||||
"depends_on": [],
|
||||
"output": "implementation"
|
||||
}]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Pattern Summary**:
|
||||
- IMPL-001: Fresh start with `-C src/auth` and full prompt
|
||||
- IMPL-002: Resume with `resume --last`, references "previous auth implementation"
|
||||
- IMPL-003: Resume with `resume --last`, references "existing auth system"
|
||||
|
||||
## Related Commands
|
||||
## 11. Related Commands
|
||||
- `/workflow:plan` - Orchestrates entire planning
|
||||
- `/workflow:plan --cli-execute` - Planning with CLI execution mode
|
||||
- `/workflow:tools:context-gather` - Provides context package
|
||||
- `/workflow:tools:concept-enhanced` - Provides analysis results
|
||||
- `/workflow:execute` - Executes generated tasks
|
||||
- `/workflow:tools:conflict-resolution` - Provides conflict resolution strategies (optional)
|
||||
- `/workflow:execute` - Executes generated tasks
|
||||
|
||||
@@ -48,7 +48,7 @@ Specialized analysis tool for test generation workflows that uses Gemini to anal
|
||||
|
||||
**Tool Configuration**:
|
||||
```bash
|
||||
cd .workflow/{test_session_id}/.process && ~/.claude/scripts/gemini-wrapper -p "
|
||||
cd .workflow/{test_session_id}/.process && gemini -p "
|
||||
PURPOSE: Analyze test coverage gaps and design comprehensive test generation strategy
|
||||
TASK: Study implementation context, existing tests, and generate test requirements for missing coverage
|
||||
MODE: analysis
|
||||
|
||||
@@ -17,11 +17,11 @@ Specialized context collector for test generation workflows that analyzes test c
|
||||
- **Gap Identification**: Locate implementation files without corresponding tests
|
||||
- **Source Context Loading**: Import implementation summaries from source session
|
||||
- **Framework Detection**: Auto-detect test framework and patterns
|
||||
- **MCP-Powered**: Leverage code-index tools for precise analysis
|
||||
- **Ripgrep-Powered**: Leverage ripgrep and native tools for precise analysis
|
||||
|
||||
## Core Responsibilities
|
||||
- Load source session implementation context
|
||||
- Analyze current test coverage using MCP tools
|
||||
- Analyze current test coverage using ripgrep
|
||||
- Identify files requiring test generation
|
||||
- Detect test framework and conventions
|
||||
- Package test context for analysis phase
|
||||
@@ -41,21 +41,17 @@ Specialized context collector for test generation workflows that analyzes test c
|
||||
- Extract changed files and implementation scope
|
||||
- Identify implementation patterns and tech stack
|
||||
|
||||
### Phase 2: Test Coverage Analysis (MCP Tools)
|
||||
### Phase 2: Test Coverage Analysis (Ripgrep)
|
||||
|
||||
1. **Existing Test Discovery**
|
||||
```bash
|
||||
# Find all test files
|
||||
mcp__code-index__find_files(pattern="*.test.*")
|
||||
mcp__code-index__find_files(pattern="*.spec.*")
|
||||
mcp__code-index__find_files(pattern="*test_*.py")
|
||||
find . -name "*.test.*" -type f
|
||||
find . -name "*.spec.*" -type f
|
||||
find . -name "*test_*.py" -type f
|
||||
|
||||
# Search for test patterns
|
||||
mcp__code-index__search_code_advanced(
|
||||
pattern="describe|it|test|@Test",
|
||||
file_pattern="*.test.*",
|
||||
context_lines=0
|
||||
)
|
||||
rg "describe|it|test|@Test" -g "*.test.*"
|
||||
```
|
||||
|
||||
2. **Coverage Gap Analysis**
|
||||
@@ -80,18 +76,10 @@ Specialized context collector for test generation workflows that analyzes test c
|
||||
1. **Framework Identification**
|
||||
```bash
|
||||
# Check package.json or requirements.txt
|
||||
mcp__code-index__search_code_advanced(
|
||||
pattern="jest|mocha|jasmine|pytest|unittest|rspec",
|
||||
file_pattern="package.json|requirements.txt|Gemfile",
|
||||
context_lines=2
|
||||
)
|
||||
rg "jest|mocha|jasmine|pytest|unittest|rspec" -g "package.json" -g "requirements.txt" -g "Gemfile" -C 2
|
||||
|
||||
# Analyze existing test patterns
|
||||
mcp__code-index__search_code_advanced(
|
||||
pattern="describe\\(|it\\(|test\\(|def test_",
|
||||
file_pattern="*.test.*",
|
||||
context_lines=3
|
||||
)
|
||||
rg "describe\(|it\(|test\(|def test_" -g "*.test.*" -C 3
|
||||
```
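
When a single framework name is needed downstream, the rg hits can be reduced to one value — a minimal sketch, assuming the dependency manifests sit at the repository root (the helper name is hypothetical):

```bash
# Echo the first test framework found in the dependency manifests
detect_test_framework() {
  local fw
  for fw in jest mocha jasmine pytest unittest rspec; do
    if rg -q "$fw" package.json requirements.txt Gemfile 2>/dev/null; then
      echo "$fw"
      return 0
    fi
  done
  echo "unknown"
  return 1
}
```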
|
||||
|
||||
2. **Convention Analysis**
|
||||
@@ -207,33 +195,26 @@ Generate `test-context-package.json`:
|
||||
.workflow/{test_session_id}/.process/test-context-package.json
|
||||
```
|
||||
|
||||
## MCP Tools Usage
|
||||
## Native Tools Usage
|
||||
|
||||
### File Discovery
|
||||
```bash
|
||||
# Test files
|
||||
mcp__code-index__find_files(pattern="*.test.*")
|
||||
mcp__code-index__find_files(pattern="*.spec.*")
|
||||
find . -name "*.test.*" -type f
|
||||
find . -name "*.spec.*" -type f
|
||||
|
||||
# Implementation files
|
||||
mcp__code-index__find_files(pattern="*.ts")
|
||||
mcp__code-index__find_files(pattern="*.js")
|
||||
find . -name "*.ts" -type f
|
||||
find . -name "*.js" -type f
|
||||
```
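
The two file sets can be joined to flag implementation files that lack a test — a rough sketch, assuming the sibling `<name>.test.<ext>` / `<name>.spec.<ext>` convention and a `src/` source root (both are assumptions, not fixed by this workflow):

```bash
# List implementation files with no matching test or spec file next to them
find src -type f \( -name "*.ts" -o -name "*.js" \) ! -name "*.test.*" ! -name "*.spec.*" |
while read -r impl; do
  base="${impl%.*}"   # e.g. src/auth/login.ts -> src/auth/login
  ext="${impl##*.}"
  if [ ! -f "${base}.test.${ext}" ] && [ ! -f "${base}.spec.${ext}" ]; then
    echo "Missing test: $impl"
  fi
done
```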
|
||||
|
||||
### Content Search
|
||||
```bash
|
||||
# Test framework detection
|
||||
mcp__code-index__search_code_advanced(
|
||||
pattern="jest|mocha|pytest",
|
||||
file_pattern="package.json|requirements.txt"
|
||||
)
|
||||
rg "jest|mocha|pytest" -g "package.json" -g "requirements.txt"
|
||||
|
||||
# Test pattern analysis
|
||||
mcp__code-index__search_code_advanced(
|
||||
pattern="describe|it|test",
|
||||
file_pattern="*.test.*",
|
||||
context_lines=2
|
||||
)
|
||||
rg "describe|it|test" -g "*.test.*" -C 2
|
||||
```
|
||||
|
||||
### Coverage Analysis
|
||||
@@ -249,7 +230,7 @@ test_file_patterns=(
|
||||
|
||||
# Search for test file
|
||||
for pattern in "${test_file_patterns[@]}"; do
|
||||
if mcp__code-index__find_files(pattern="$pattern") | grep -q .; then
|
||||
if [ -f "$pattern" ]; then
|
||||
echo "✅ Test exists: $pattern"
|
||||
break
|
||||
fi
|
||||
@@ -262,10 +243,9 @@ done
|
||||
|-------|-------|------------|
| Source session not found | Invalid source_session reference | Verify test session metadata |
| No implementation summaries | Source session incomplete | Complete source session first |
| MCP tools unavailable | MCP not configured | Fallback to bash find/grep |
| No test framework detected | Missing test dependencies | Request user to specify framework |
|
||||
|
||||
## Fallback Strategy (No MCP)
|
||||
## Native Tools Implementation
|
||||
|
||||
```bash
|
||||
# File discovery
|
||||
@@ -287,8 +267,8 @@ done
|
||||
- `/workflow:test-gen` (Phase 3: Context Gathering)
|
||||
|
||||
### Calls
|
||||
- MCP code-index tools for analysis
|
||||
- Bash file operations for fallback
|
||||
- Ripgrep and find for file analysis
|
||||
- Bash file operations for coverage analysis
|
||||
|
||||
### Followed By
|
||||
- `/workflow:tools:test-concept-enhanced` - Analyzes context and plans test generation
|
||||
@@ -296,7 +276,7 @@ done
|
||||
## Success Criteria
|
||||
|
||||
- ✅ Source session context loaded successfully
|
||||
- ✅ Test coverage gaps identified with MCP tools
|
||||
- ✅ Test coverage gaps identified with ripgrep
|
||||
- ✅ Test framework detected and documented
|
||||
- ✅ Valid test-context-package.json generated
|
||||
- ✅ All missing tests catalogued with priority
|
||||
|
||||
@@ -146,9 +146,9 @@ Generate **TWO task JSON files**:
|
||||
"step": "load_existing_test_patterns",
|
||||
"action": "Study existing tests for pattern reference",
|
||||
"commands": [
|
||||
"mcp__code-index__find_files(pattern=\"*.test.*\")",
|
||||
"bash(find . -name \"*.test.*\" -type f)",
|
||||
"bash(# Read first 2 existing test files as examples)",
|
||||
"bash(test_files=$(mcp__code-index__find_files(pattern=\"*.test.*\") | head -2))",
|
||||
"bash(test_files=$(find . -name \"*.test.*\" -type f | head -2))",
|
||||
"bash(for f in $test_files; do echo \"=== $f ===\"&& cat \"$f\"; done)"
|
||||
],
|
||||
"output_to": "existing_test_patterns",
|
||||
@@ -198,7 +198,7 @@ Generate **TWO task JSON files**:
|
||||
"Codex generates comprehensive test suite",
|
||||
"Codex validates test syntax and executability"
|
||||
],
|
||||
"command": "bash(codex -C [focus_paths] --full-auto exec \"PURPOSE: Generate comprehensive test suite TASK: Create test files based on TEST_ANALYSIS_RESULTS.md section 5 MODE: write CONTEXT: @{.workflow/WFS-test-[session]/.process/TEST_ANALYSIS_RESULTS.md,.workflow/WFS-test-[session]/.process/test-context-package.json} EXPECTED: All test files with happy path, error handling, edge cases, integration tests RULES: Follow test framework conventions, ensure tests are executable\" --skip-git-repo-check -s danger-full-access)",
|
||||
"command": "bash(codex -C [focus_paths] --full-auto exec \"PURPOSE: Generate comprehensive test suite TASK: Create test files based on TEST_ANALYSIS_RESULTS.md section 5 MODE: write CONTEXT: @.workflow/WFS-test-[session]/.process/TEST_ANALYSIS_RESULTS.md @.workflow/WFS-test-[session]/.process/test-context-package.json EXPECTED: All test files with happy path, error handling, edge cases, integration tests RULES: Follow test framework conventions, ensure tests are executable\" --skip-git-repo-check -s danger-full-access)",
|
||||
"depends_on": [],
|
||||
"output": "test_generation"
|
||||
}],
|
||||
@@ -282,11 +282,11 @@ Generate **TWO task JSON files**:
|
||||
"step": "analyze_test_coverage",
|
||||
"action": "Analyze test coverage and identify missing tests",
|
||||
"commands": [
|
||||
"mcp__code-index__find_files(pattern=\"*.test.*\")",
|
||||
"mcp__code-index__search_code_advanced(pattern=\"test|describe|it|def test_\", file_pattern=\"*.test.*\")",
|
||||
"bash(find . -name \"*.test.*\" -type f)",
|
||||
"bash(rg \"test|describe|it|def test_\" -g \"*.test.*\")",
|
||||
"bash(# Count implementation files vs test files)",
|
||||
"bash(impl_count=$(find [changed_files_dirs] -type f \\( -name '*.ts' -o -name '*.js' -o -name '*.py' \\) ! -name '*.test.*' 2>/dev/null | wc -l))",
|
||||
"bash(test_count=$(mcp__code-index__find_files(pattern=\"*.test.*\") | wc -l))",
|
||||
"bash(test_count=$(find . -name \"*.test.*\" -type f | wc -l))",
|
||||
"bash(echo \"Implementation files: $impl_count, Test files: $test_count\")"
|
||||
],
|
||||
"output_to": "test_coverage_analysis",
|
||||
@@ -323,7 +323,7 @@ Generate **TWO task JSON files**:
|
||||
"cycle_pattern": "test → gemini_diagnose → manual_fix (or codex if needed) → retest",
|
||||
"tools": {
|
||||
"test_execution": "bash(test_command)",
|
||||
"diagnosis": "gemini-wrapper (MODE: analysis, uses bug-fix template)",
|
||||
"diagnosis": "gemini (MODE: analysis, uses bug-fix template)",
|
||||
"fix_application": "manual (default) or codex exec resume --last (if explicitly needed)",
|
||||
"verification": "bash(test_command) + regression_check"
|
||||
},
|
||||
@@ -354,11 +354,11 @@ Generate **TWO task JSON files**:
|
||||
" * Source files from focus_paths",
|
||||
" * Implementation summaries from source session",
|
||||
" - Execute Gemini analysis with bug-fix template:",
|
||||
" bash(cd .workflow/WFS-test-[session]/.process && ~/.claude/scripts/gemini-wrapper --all-files -p \"",
|
||||
" bash(cd .workflow/WFS-test-[session]/.process && gemini \"",
|
||||
" PURPOSE: Diagnose test failure iteration [N] and propose minimal fix",
|
||||
" TASK: Systematic bug analysis and fix recommendations for test failure",
|
||||
" MODE: analysis",
|
||||
" CONTEXT: @{CLAUDE.md,**/*CLAUDE.md}",
|
||||
" CONTEXT: @CLAUDE.md,**/*CLAUDE.md",
|
||||
" Test output: [test_failures]",
|
||||
" Source files: [focus_paths]",
|
||||
" Implementation: [implementation_context]",
|
||||
|
||||
@@ -205,7 +205,7 @@ ELSE IF --prompt:
|
||||
target_source = "prompt_analysis"
|
||||
|
||||
# Step 4: Session synthesis
|
||||
ELSE IF --session AND exists(synthesis-specification.md):
|
||||
ELSE IF --session AND exists(role analysis documents):
|
||||
target_list = extract_targets_from_synthesis(); target_type = "page"; target_source = "synthesis"
|
||||
|
||||
# Step 5: Fallback
|
||||
|
||||
@@ -154,7 +154,7 @@ IF exists: SKIP to completion
|
||||
### Step 2: Load Project Context (Explore Mode)
|
||||
```bash
|
||||
# Load brainstorming context if available
|
||||
bash(test -f {base_path}/.brainstorming/synthesis-specification.md && cat {base_path}/.brainstorming/synthesis-specification.md)
bash(test -f {base_path}/.brainstorming/role analysis documents && cat {base_path}/.brainstorming/role analysis documents)
|
||||
```
|
||||
|
||||
### Step 3: Generate Design Direction Options (Agent Task 1)
|
||||
@@ -541,7 +541,7 @@ bash(cat {base_path}/style-extraction/style-1/design-tokens.json | grep -q "colo
|
||||
### File Operations
|
||||
```bash
|
||||
# Load brainstorming context
|
||||
bash(test -f .brainstorming/synthesis-specification.md && cat .brainstorming/synthesis-specification.md)
bash(test -f .brainstorming/role analysis documents && cat .brainstorming/role analysis documents)
|
||||
|
||||
# Create directories
|
||||
bash(mkdir -p {base_path}/style-extraction/style-{{1..3}})
|
||||
|
||||
@@ -15,7 +15,7 @@ Synchronize finalized design system references to brainstorming artifacts, prepa
|
||||
|
||||
- **Reference-Only Updates**: Use @ references, no content duplication
|
||||
- **Main Claude Execution**: Direct updates by main Claude (no Agent handoff)
|
||||
- **Synthesis Alignment**: Update synthesis-specification.md UI/UX Guidelines section
|
||||
- **Synthesis Alignment**: Update role analysis documents UI/UX Guidelines section
|
||||
- **Plan-Ready Output**: Ensure design artifacts discoverable by task-generate
|
||||
- **Minimal Reading**: Verify file existence, don't read design content
|
||||
|
||||
@@ -50,8 +50,8 @@ REPORT: "Found {count} design artifacts, {prototype_count} prototypes"
|
||||
### Phase 1.1: Memory Check (Skip if Already Updated)
|
||||
|
||||
```bash
|
||||
# Check if synthesis-specification.md contains current design run reference
|
||||
synthesis_spec_path = ".workflow/WFS-{session}/.brainstorming/synthesis-specification.md"
|
||||
# Check if role analysis documents contain current design run reference
|
||||
synthesis_spec_path = ".workflow/WFS-{session}/.brainstorming/role analysis documents"
|
||||
current_design_run = basename(latest_design) # e.g., "design-run-20250109-143022"
|
||||
|
||||
IF exists(synthesis_spec_path):
|
||||
@@ -68,7 +68,7 @@ IF exists(synthesis_spec_path):
|
||||
|
||||
```bash
|
||||
# Load target brainstorming artifacts (files to be updated)
|
||||
Read(.workflow/WFS-{session}/.brainstorming/synthesis-specification.md)
|
||||
Read(.workflow/WFS-{session}/.brainstorming/role analysis documents)
|
||||
IF exists(.workflow/WFS-{session}/.brainstorming/ui-designer/analysis.md): Read(analysis.md)
|
||||
|
||||
# Optional: Read prototype notes for descriptions (minimal context)
|
||||
@@ -80,7 +80,7 @@ FOR each selected_prototype IN selected_list:
|
||||
|
||||
### Phase 3: Update Synthesis Specification
|
||||
|
||||
Update `.brainstorming/synthesis-specification.md` with design system references.
|
||||
Update `.brainstorming/role analysis documents` with design system references.
|
||||
|
||||
**Target Section**: `## UI/UX Guidelines`
|
||||
|
||||
@@ -113,7 +113,7 @@ Update `.brainstorming/synthesis-specification.md` with design system references
|
||||
**Implementation**:
|
||||
```bash
|
||||
# Option 1: Edit existing section
|
||||
Edit(file_path=".workflow/WFS-{session}/.brainstorming/synthesis-specification.md",
|
||||
Edit(file_path=".workflow/WFS-{session}/.brainstorming/role analysis documents",
|
||||
old_string="## UI/UX Guidelines\n[existing content]",
|
||||
new_string="## UI/UX Guidelines\n\n[new design reference content]")
|
||||
|
||||
@@ -122,12 +122,77 @@ IF section not found:
|
||||
Edit(file_path="...", old_string="[end of document]", new_string="\n\n## UI/UX Guidelines\n\n[new design reference content]")
|
||||
```
|
||||
|
||||
### Phase 4: Update UI Designer Style Guide
|
||||
### Phase 4A: Update Relevant Role Analysis Documents
|
||||
|
||||
Create or update `.brainstorming/ui-designer/style-guide.md`:
|
||||
**Discovery**: Find role analysis.md files affected by design outputs
|
||||
|
||||
```bash
|
||||
# Always update ui-designer
|
||||
ui_designer_files = Glob(".workflow/WFS-{session}/.brainstorming/ui-designer/analysis*.md")
|
||||
|
||||
# Conditionally update other roles
|
||||
has_animations = exists({latest_design}/animation-extraction/animation-tokens.json)
|
||||
has_layouts = exists({latest_design}/layout-extraction/layout-templates.json)
|
||||
|
||||
IF has_animations: ux_expert_files = Glob(".workflow/WFS-{session}/.brainstorming/ux-expert/analysis*.md")
|
||||
IF has_layouts: architect_files = Glob(".workflow/WFS-{session}/.brainstorming/system-architect/analysis*.md")
|
||||
IF selected_list: pm_files = Glob(".workflow/WFS-{session}/.brainstorming/product-manager/analysis*.md")
|
||||
```
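
The same discovery can be written with plain shell checks — an illustrative sketch, assuming the session layout used throughout this document (variable names mirror the pseudocode above and are not fixed by the workflow):

```bash
# Shell equivalent of the Glob/exists pseudocode above
brainstorm_dir=".workflow/WFS-${session}/.brainstorming"

ui_designer_files=$(ls "${brainstorm_dir}"/ui-designer/analysis*.md 2>/dev/null)

if [ -f "${latest_design}/animation-extraction/animation-tokens.json" ]; then
  ux_expert_files=$(ls "${brainstorm_dir}"/ux-expert/analysis*.md 2>/dev/null)
fi

if [ -f "${latest_design}/layout-extraction/layout-templates.json" ]; then
  architect_files=$(ls "${brainstorm_dir}"/system-architect/analysis*.md 2>/dev/null)
fi

if [ -n "${selected_list}" ]; then
  pm_files=$(ls "${brainstorm_dir}"/product-manager/analysis*.md 2>/dev/null)
fi
```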
|
||||
|
||||
**Content Templates**:
|
||||
|
||||
**ui-designer/analysis.md** (append if not exists):
|
||||
```markdown
|
||||
## Design System Implementation Reference
|
||||
|
||||
**Design Tokens**: @../../design-{run_id}/{design_tokens_path}
|
||||
**Style Guide**: @../../design-{run_id}/{style_guide_path}
|
||||
**Prototypes**: {FOR each: @../../design-{run_id}/prototypes/{prototype}.html}
|
||||
|
||||
*Reference added by /workflow:ui-design:update*
|
||||
```
|
||||
|
||||
**ux-expert/analysis.md** (if animations):
|
||||
```markdown
|
||||
## Animation & Interaction Reference
|
||||
|
||||
**Animations**: @../../design-{run_id}/animation-extraction/animation-tokens.json
|
||||
**Prototypes**: {FOR each: @../../design-{run_id}/prototypes/{prototype}.html}
|
||||
|
||||
*Reference added by /workflow:ui-design:update*
|
||||
```
|
||||
|
||||
**system-architect/analysis.md** (if layouts):
|
||||
```markdown
|
||||
## Layout Structure Reference
|
||||
|
||||
**Layout Templates**: @../../design-{run_id}/layout-extraction/layout-templates.json
|
||||
|
||||
*Reference added by /workflow:ui-design:update*
|
||||
```
|
||||
|
||||
**product-manager/analysis.md** (if prototypes):
|
||||
```markdown
|
||||
## Prototype Validation Reference
|
||||
|
||||
**Prototypes**: {FOR each: @../../design-{run_id}/prototypes/{prototype}.html}
|
||||
|
||||
*Reference added by /workflow:ui-design:update*
|
||||
```
|
||||
|
||||
**Implementation**:
|
||||
```bash
|
||||
FOR file IN [ui_designer_files, ux_expert_files, architect_files, pm_files]:
|
||||
IF file exists AND section_not_exists(file):
|
||||
Edit(file, old_string="[end of document]", new_string="\n\n{role-specific section}")
|
||||
```
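
In shell terms, the append-if-missing logic amounts to a guarded redirect — a sketch under the assumption that each role section can be detected by its heading (the helper name is hypothetical):

```bash
# Append the role-specific section only when its heading is not already present
append_section_once() {
  local file="$1" heading="$2" section="$3"
  [ -f "$file" ] || return 0
  grep -qF "$heading" "$file" || printf '\n\n%s\n' "$section" >> "$file"
}

append_section_once "$ui_designer_file" "## Design System Implementation Reference" "$ui_designer_section"
```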
|
||||
|
||||
### Phase 4B: Create UI Designer Design System Reference
|
||||
|
||||
Create or update `.brainstorming/ui-designer/design-system-reference.md`:
|
||||
|
||||
```markdown
|
||||
# UI Designer Style Guide
|
||||
# UI Designer Design System Reference
|
||||
|
||||
## Design System Integration
|
||||
This style guide references the finalized design system from the design refinement phase.
|
||||
@@ -158,7 +223,7 @@ For complete token definitions and usage examples, see:
|
||||
|
||||
**Implementation**:
|
||||
```bash
|
||||
Write(file_path=".workflow/WFS-{session}/.brainstorming/ui-designer/style-guide.md",
|
||||
Write(file_path=".workflow/WFS-{session}/.brainstorming/ui-designer/design-system-reference.md",
|
||||
content="[generated content with @ references]")
|
||||
```
|
||||
|
||||
@@ -168,8 +233,9 @@ Write(file_path=".workflow/WFS-{session}/.brainstorming/ui-designer/style-guide.
|
||||
TodoWrite({todos: [
|
||||
{content: "Validate session and design system artifacts", status: "completed", activeForm: "Validating artifacts"},
|
||||
{content: "Load target brainstorming artifacts", status: "completed", activeForm: "Loading target files"},
|
||||
{content: "Update synthesis-specification.md with design references", status: "completed", activeForm: "Updating synthesis spec"},
|
||||
{content: "Create/update ui-designer/style-guide.md", status: "completed", activeForm: "Updating UI designer guide"}
|
||||
{content: "Update role analysis documents with design references", status: "completed", activeForm: "Updating synthesis spec"},
|
||||
{content: "Update relevant role analysis.md documents", status: "completed", activeForm: "Updating role analysis files"},
|
||||
{content: "Create/update ui-designer/design-system-reference.md", status: "completed", activeForm: "Creating design system reference"}
|
||||
]});
|
||||
```
|
||||
|
||||
@@ -178,8 +244,9 @@ TodoWrite({todos: [
|
||||
✅ Design system references updated for session: WFS-{session}
|
||||
|
||||
Updated artifacts:
|
||||
✓ synthesis-specification.md - UI/UX Guidelines section with @ references
|
||||
✓ ui-designer/style-guide.md - Design system reference guide
|
||||
✓ role analysis documents - UI/UX Guidelines section with @ references
|
||||
✓ {role_count} role analysis.md files - Design system references
|
||||
✓ ui-designer/design-system-reference.md - Design system reference guide
|
||||
|
||||
Design system assets ready for /workflow:plan:
|
||||
- design-tokens.json | style-guide.md | {prototype_count} reference prototypes
|
||||
@@ -193,31 +260,43 @@ Next: /workflow:plan [--agent] "<task description>"
|
||||
**Updated Files**:
|
||||
```
|
||||
.workflow/WFS-{session}/.brainstorming/
|
||||
├── synthesis-specification.md # Updated with UI/UX Guidelines section
|
||||
└── ui-designer/
|
||||
└── style-guide.md # New or updated design reference guide
|
||||
├── role analysis documents # Updated with UI/UX Guidelines section
|
||||
├── ui-designer/
|
||||
│ ├── analysis*.md # Updated with design system references
|
||||
│ └── design-system-reference.md # New or updated design reference guide
|
||||
├── ux-expert/analysis*.md # Updated if animations exist
|
||||
├── product-manager/analysis*.md # Updated if prototypes exist
|
||||
└── system-architect/analysis*.md # Updated if layouts exist
|
||||
```
|
||||
|
||||
**@ Reference Format** (synthesis-specification.md):
|
||||
**@ Reference Format** (role analysis documents):
|
||||
```
|
||||
@../design-{run_id}/style-extraction/style-1/design-tokens.json
|
||||
@../design-{run_id}/style-extraction/style-1/style-guide.md
|
||||
@../design-{run_id}/prototypes/{prototype}.html
|
||||
```
|
||||
|
||||
**@ Reference Format** (ui-designer/style-guide.md):
|
||||
**@ Reference Format** (ui-designer/design-system-reference.md):
|
||||
```
|
||||
@../../design-{run_id}/style-extraction/style-1/design-tokens.json
|
||||
@../../design-{run_id}/style-extraction/style-1/style-guide.md
|
||||
@../../design-{run_id}/prototypes/{prototype}.html
|
||||
```
|
||||
|
||||
**@ Reference Format** (role analysis.md files):
|
||||
```
|
||||
@../../design-{run_id}/style-extraction/style-1/design-tokens.json
|
||||
@../../design-{run_id}/animation-extraction/animation-tokens.json
|
||||
@../../design-{run_id}/layout-extraction/layout-templates.json
|
||||
@../../design-{run_id}/prototypes/{prototype}.html
|
||||
```
|
||||
|
||||
## Integration with /workflow:plan
|
||||
|
||||
After this update, `/workflow:plan` will discover design assets through:
|
||||
|
||||
**Phase 3: Intelligent Analysis** (`/workflow:tools:concept-enhanced`)
|
||||
- Reads synthesis-specification.md → Discovers @ references → Includes design system context in ANALYSIS_RESULTS.md
|
||||
- Reads role analysis documents → Discovers @ references → Includes design system context in ANALYSIS_RESULTS.md
|
||||
|
||||
**Phase 4: Task Generation** (`/workflow:tools:task-generate`)
|
||||
- Reads ANALYSIS_RESULTS.md → Discovers design assets → Includes design system paths in task JSON files
|
||||
@@ -239,7 +318,7 @@ After this update, `/workflow:plan` will discover design assets through:
|
||||
## Error Handling
|
||||
|
||||
- **Missing design artifacts**: Error with message "Run /workflow:ui-design:style-extract and /workflow:ui-design:generate first"
|
||||
- **synthesis-specification.md not found**: Warning, create minimal version with just UI/UX Guidelines
|
||||
- **role analysis documents not found**: Warning, create minimal version with just UI/UX Guidelines
|
||||
- **ui-designer/ directory missing**: Create directory and file
|
||||
- **Edit conflicts**: Preserve existing content, append or replace only UI/UX Guidelines section
|
||||
- **Invalid prototype names**: Skip invalid entries, continue with valid ones
|
||||
@@ -247,9 +326,11 @@ After this update, `/workflow:plan` will discover design assets through:
|
||||
## Validation Checks
|
||||
|
||||
After update, verify:
|
||||
- [ ] synthesis-specification.md contains UI/UX Guidelines section
|
||||
- [ ] role analysis documents contain UI/UX Guidelines section
|
||||
- [ ] UI/UX Guidelines include @ references (not content duplication)
|
||||
- [ ] ui-designer/style-guide.md created or updated
|
||||
- [ ] ui-designer/analysis*.md updated with design system references
|
||||
- [ ] ui-designer/design-system-reference.md created or updated
|
||||
- [ ] Relevant role analysis.md files updated (ux-expert, product-manager, system-architect)
|
||||
- [ ] All @ referenced files exist and are accessible
|
||||
- [ ] @ reference paths are relative and correct
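
The last two checks can be automated — a small sketch, assuming @ references appear as whitespace-delimited tokens and are relative to the file that contains them:

```bash
# Report any @ reference in a brainstorming file that does not resolve to an existing path
check_at_refs() {
  local file="$1" dir ref target
  dir=$(dirname "$file")
  grep -o '@[^[:space:])]*' "$file" | while read -r ref; do
    target="${dir}/${ref#@}"
    [ -e "$target" ] || echo "Missing reference in $file: $ref"
  done
}
```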
|
||||
|
||||
@@ -264,7 +345,7 @@ After update, verify:
|
||||
## Integration Points
|
||||
|
||||
- **Input**: Design system artifacts from `/workflow:ui-design:style-extract` and `/workflow:ui-design:generate`
|
||||
- **Output**: Updated synthesis-specification.md, ui-designer/style-guide.md with @ references
|
||||
- **Output**: Updated role analysis documents, role analysis.md files, ui-designer/design-system-reference.md with @ references
|
||||
- **Next Phase**: `/workflow:plan` discovers and utilizes design system through @ references
|
||||
- **Auto Integration**: Automatically triggered by `/workflow:ui-design:auto` workflow
|
||||
|
||||
|
||||
@@ -1,288 +0,0 @@
|
||||
#!/bin/bash
|
||||
# gemini-wrapper - Token-aware wrapper for gemini command
|
||||
# Location: ~/.claude/scripts/gemini-wrapper
|
||||
#
|
||||
# This wrapper automatically manages --all-files flag based on project token count
|
||||
# and provides intelligent approval mode defaults
|
||||
#
|
||||
# Usage: gemini-wrapper [all gemini options]
|
||||
#
|
||||
# Approval Mode Options:
|
||||
# --approval-mode default : Prompt for approval on each tool call (default)
|
||||
# --approval-mode auto_edit : Auto-approve edit tools, prompt for others
|
||||
# --approval-mode yolo : Auto-approve all tool calls
|
||||
#
|
||||
# Note: Executes in current working directory
|
||||
|
||||
set -e
|
||||
|
||||
# Function to show help
|
||||
show_help() {
|
||||
echo "gemini-wrapper - Token-aware wrapper for gemini command"
|
||||
echo ""
|
||||
echo "Usage: gemini-wrapper [options] [gemini options]"
|
||||
echo ""
|
||||
echo "Options:"
|
||||
echo " --approval-mode <mode> Sets the approval mode for tool calls"
|
||||
echo " Available modes:"
|
||||
echo " default : Prompt for approval on each tool call (default)"
|
||||
echo " auto_edit : Auto-approve edit tools, prompt for others"
|
||||
echo " yolo : Auto-approve all tool calls"
|
||||
echo " --help Show this help message"
|
||||
echo ""
|
||||
echo "Features:"
|
||||
echo " - Automatically manages --all-files flag based on project token count"
|
||||
echo " - Intelligent approval mode detection based on task type"
|
||||
echo " - Token limit: $DEFAULT_TOKEN_LIMIT (set GEMINI_TOKEN_LIMIT to override)"
|
||||
echo ""
|
||||
echo "Examples:"
|
||||
echo " gemini-wrapper -p \"Analyze the codebase structure\""
|
||||
echo " gemini-wrapper --approval-mode yolo -p \"Implement user authentication\""
|
||||
echo " gemini-wrapper --approval-mode auto_edit -p \"Fix all linting errors\""
|
||||
echo ""
|
||||
}
|
||||
|
||||
# Configuration
|
||||
DEFAULT_TOKEN_LIMIT=2000000
|
||||
TOKEN_LIMIT=${GEMINI_TOKEN_LIMIT:-$DEFAULT_TOKEN_LIMIT}
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Respect custom Gemini base URL
|
||||
if [[ -n "$GOOGLE_GEMINI_BASE_URL" ]]; then
|
||||
echo -e "${GREEN}🌐 Using custom Gemini base URL: $GOOGLE_GEMINI_BASE_URL${NC}" >&2
|
||||
export GOOGLE_GEMINI_BASE_URL
|
||||
fi
|
||||
|
||||
# Function to count tokens (approximate: chars/4) - optimized version
|
||||
count_tokens() {
|
||||
local total_chars=0
|
||||
local file_count=0
|
||||
|
||||
# Use single find with bulk wc for better performance
|
||||
# Common source file extensions
|
||||
local extensions="py js ts tsx jsx java cpp c h rs go md txt json yaml yml xml html css scss sass php rb sh bash"
|
||||
|
||||
# Build find command with extension patterns
|
||||
local find_cmd="find . -type f \("
|
||||
local first=true
|
||||
for ext in $extensions; do
|
||||
if [[ "$first" == true ]]; then
|
||||
find_cmd+=" -name \"*.$ext\""
|
||||
first=false
|
||||
else
|
||||
find_cmd+=" -o -name \"*.$ext\""
|
||||
fi
|
||||
done
|
||||
find_cmd+=" \)"
|
||||
|
||||
# Exclude common build/cache directories
|
||||
find_cmd+=" -not -path \"*/node_modules/*\""
|
||||
find_cmd+=" -not -path \"*/.git/*\""
|
||||
find_cmd+=" -not -path \"*/dist/*\""
|
||||
find_cmd+=" -not -path \"*/build/*\""
|
||||
find_cmd+=" -not -path \"*/.next/*\""
|
||||
find_cmd+=" -not -path \"*/.nuxt/*\""
|
||||
find_cmd+=" -not -path \"*/target/*\""
|
||||
find_cmd+=" -not -path \"*/vendor/*\""
|
||||
find_cmd+=" -not -path \"*/__pycache__/*\""
|
||||
find_cmd+=" -not -path \"*/.cache/*\""
|
||||
find_cmd+=" 2>/dev/null"
|
||||
|
||||
# Use efficient bulk processing with wc
|
||||
if command -v wc >/dev/null 2>&1; then
|
||||
# Try bulk wc first - much faster for many files
|
||||
local wc_output
|
||||
wc_output=$(eval "$find_cmd" | xargs wc -c 2>/dev/null | tail -n 1)
|
||||
|
||||
# Parse the total line (last line of wc output when processing multiple files)
|
||||
if [[ -n "$wc_output" && "$wc_output" =~ ^[[:space:]]*([0-9]+)[[:space:]]+total[[:space:]]*$ ]]; then
|
||||
total_chars="${BASH_REMATCH[1]}"
|
||||
file_count=$(eval "$find_cmd" | wc -l 2>/dev/null || echo 0)
|
||||
else
|
||||
# Fallback: single file processing
|
||||
while IFS= read -r file; do
|
||||
if [[ -f "$file" && -r "$file" ]]; then
|
||||
local chars=$(wc -c < "$file" 2>/dev/null || echo 0)
|
||||
total_chars=$((total_chars + chars))
|
||||
file_count=$((file_count + 1))
|
||||
fi
|
||||
done < <(eval "$find_cmd")
|
||||
fi
|
||||
else
|
||||
# No wc available - fallback method
|
||||
while IFS= read -r file; do
|
||||
if [[ -f "$file" && -r "$file" ]]; then
|
||||
local chars=$(stat -c%s "$file" 2>/dev/null || echo 0)
|
||||
total_chars=$((total_chars + chars))
|
||||
file_count=$((file_count + 1))
|
||||
fi
|
||||
done < <(eval "$find_cmd")
|
||||
fi
|
||||
|
||||
local estimated_tokens=$((total_chars / 4))
|
||||
echo "$estimated_tokens $file_count"
|
||||
}
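# Worked example (illustrative only): a project with ~6,000,000 characters of source
# is estimated at 6,000,000 / 4 = 1,500,000 tokens, below the default 2,000,000 limit,
# so --all-files is added; at ~10,000,000 characters (2,500,000 tokens) the flag is
# stripped instead and targeted @{patterns} are recommended.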
|
||||
|
||||
# Function to validate approval mode
|
||||
validate_approval_mode() {
|
||||
local mode="$1"
|
||||
case "$mode" in
|
||||
"default"|"auto_edit"|"yolo")
|
||||
return 0
|
||||
;;
|
||||
*)
|
||||
echo -e "${RED}❌ Invalid approval mode: $mode${NC}" >&2
|
||||
echo -e "${YELLOW}Valid modes: default, auto_edit, yolo${NC}" >&2
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
# Parse arguments to check for flags
|
||||
has_all_files=false
|
||||
has_approval_mode=false
|
||||
approval_mode_value=""
|
||||
args=()
|
||||
i=0
|
||||
|
||||
# Parse arguments with proper handling of --approval-mode value
|
||||
args=("$@") # Start with all arguments
|
||||
parsed_args=()
|
||||
skip_next=false
|
||||
|
||||
for ((i=0; i<${#args[@]}; i++)); do
|
||||
if [[ "$skip_next" == true ]]; then
|
||||
skip_next=false
|
||||
continue
|
||||
fi
|
||||
|
||||
arg="${args[i]}"
|
||||
case "$arg" in
|
||||
"--help"|"-h")
|
||||
show_help
|
||||
exit 0
|
||||
;;
|
||||
"--all-files")
|
||||
has_all_files=true
|
||||
parsed_args+=("$arg")
|
||||
;;
|
||||
"--approval-mode")
|
||||
has_approval_mode=true
|
||||
# Get the next argument as the mode value
|
||||
if [[ $((i+1)) -lt ${#args[@]} ]]; then
|
||||
approval_mode_value="${args[$((i+1))]}"
|
||||
if validate_approval_mode "$approval_mode_value"; then
|
||||
parsed_args+=("$arg" "$approval_mode_value")
|
||||
skip_next=true
|
||||
else
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
echo -e "${RED}❌ --approval-mode requires a value${NC}" >&2
|
||||
echo -e "${YELLOW}Valid modes: default, auto_edit, yolo${NC}" >&2
|
||||
exit 1
|
||||
fi
|
||||
;;
|
||||
--approval-mode=*)
|
||||
has_approval_mode=true
|
||||
approval_mode_value="${arg#*=}"
|
||||
if validate_approval_mode "$approval_mode_value"; then
|
||||
parsed_args+=("$arg")
|
||||
else
|
||||
exit 1
|
||||
fi
|
||||
;;
|
||||
*)
|
||||
parsed_args+=("$arg")
|
||||
;;
|
||||
esac
|
||||
done
|
||||
|
||||
# Replace args with parsed_args
|
||||
args=("${parsed_args[@]}")
|
||||
|
||||
# Analyze current working directory
|
||||
echo -e "${GREEN}📁 Analyzing current directory: $(pwd)${NC}" >&2
|
||||
|
||||
# Count tokens (in the target directory if -c was used)
|
||||
echo -e "${YELLOW}🔍 Analyzing project size...${NC}" >&2
|
||||
read -r token_count file_count <<< "$(count_tokens)"
|
||||
|
||||
echo -e "${YELLOW}📊 Project stats: ~${token_count} tokens across ${file_count} files${NC}" >&2
|
||||
|
||||
# Decision logic for --all-files flag
|
||||
if [[ $token_count -lt $TOKEN_LIMIT ]]; then
|
||||
if [[ "$has_all_files" == false ]]; then
|
||||
echo -e "${GREEN}✅ Small project (${token_count} < ${TOKEN_LIMIT} tokens): Adding --all-files${NC}" >&2
|
||||
args=("--all-files" "${args[@]}")
|
||||
else
|
||||
echo -e "${GREEN}✅ Small project (${token_count} < ${TOKEN_LIMIT} tokens): Keeping --all-files${NC}" >&2
|
||||
fi
|
||||
else
|
||||
if [[ "$has_all_files" == true ]]; then
|
||||
echo -e "${RED}⚠️ Large project (${token_count} >= ${TOKEN_LIMIT} tokens): Removing --all-files to avoid token limits${NC}" >&2
|
||||
echo -e "${YELLOW}💡 Consider using specific @{patterns} for targeted analysis${NC}" >&2
|
||||
# Remove --all-files from args
|
||||
new_args=()
|
||||
for arg in "${args[@]}"; do
|
||||
if [[ "$arg" != "--all-files" ]]; then
|
||||
new_args+=("$arg")
|
||||
fi
|
||||
done
|
||||
args=("${new_args[@]}")
|
||||
else
|
||||
echo -e "${RED}⚠️ Large project (${token_count} >= ${TOKEN_LIMIT} tokens): Avoiding --all-files${NC}" >&2
|
||||
echo -e "${YELLOW}💡 Consider using specific @{patterns} for targeted analysis${NC}" >&2
|
||||
fi
|
||||
fi
|
||||
|
||||
# Auto-add approval-mode if not specified
|
||||
if [[ "$has_approval_mode" == false ]]; then
|
||||
# Intelligent approval mode detection based on prompt content
|
||||
prompt_text="${args[*]}"
|
||||
|
||||
# Analysis/Research tasks - use default (prompt for each tool)
|
||||
if [[ "$prompt_text" =~ (analyze|analysis|review|understand|inspect|examine|research|study|explore|investigate) ]]; then
|
||||
echo -e "${GREEN}📋 Analysis task detected: Adding --approval-mode default${NC}" >&2
|
||||
args=("--approval-mode" "default" "${args[@]}")
|
||||
|
||||
# Development/Edit tasks - use auto_edit (auto-approve edits, prompt for others)
|
||||
elif [[ "$prompt_text" =~ (implement|create|build|develop|code|write|edit|modify|update|fix|refactor|generate) ]]; then
|
||||
echo -e "${GREEN}🔧 Development task detected: Adding --approval-mode auto_edit${NC}" >&2
|
||||
args=("--approval-mode" "auto_edit" "${args[@]}")
|
||||
|
||||
# Automation/Batch tasks - use yolo (auto-approve all)
|
||||
elif [[ "$prompt_text" =~ (automate|batch|mass|bulk|all|execute|run|deploy|install|setup) ]]; then
|
||||
echo -e "${YELLOW}⚡ Automation task detected: Adding --approval-mode yolo${NC}" >&2
|
||||
args=("--approval-mode" "yolo" "${args[@]}")
|
||||
|
||||
# Default fallback - use default mode for safety
|
||||
else
|
||||
echo -e "${YELLOW}🔍 General task detected: Adding --approval-mode default${NC}" >&2
|
||||
args=("--approval-mode" "default" "${args[@]}")
|
||||
fi
|
||||
|
||||
# Show approval mode explanation
|
||||
case "${args[1]}" in
|
||||
"default")
|
||||
echo -e "${YELLOW} → Will prompt for approval on each tool call${NC}" >&2
|
||||
;;
|
||||
"auto_edit")
|
||||
echo -e "${YELLOW} → Will auto-approve edit tools, prompt for others${NC}" >&2
|
||||
;;
|
||||
"yolo")
|
||||
echo -e "${YELLOW} → Will auto-approve all tool calls${NC}" >&2
|
||||
;;
|
||||
esac
|
||||
fi
|
||||
|
||||
# Show final command (for transparency)
|
||||
echo -e "${YELLOW}🚀 Executing: gemini ${args[*]}${NC}" >&2
|
||||
|
||||
# Execute gemini with adjusted arguments (we're already in the right directory)
|
||||
gemini "${args[@]}"
|
||||
@@ -1,228 +0,0 @@
|
||||
#!/bin/bash
|
||||
# qwen-wrapper - Token-aware wrapper for qwen command
|
||||
# Location: ~/.claude/scripts/qwen-wrapper
|
||||
#
|
||||
# This wrapper automatically manages --all-files flag based on project token count
|
||||
# and provides intelligent approval mode defaults
|
||||
#
|
||||
# Usage: qwen-wrapper [all qwen options]
|
||||
#
|
||||
# Approval Mode Options:
|
||||
# --approval-mode default : Prompt for approval on each tool call (default)
|
||||
# --approval-mode auto_edit : Auto-approve edit tools, prompt for others
|
||||
# --approval-mode yolo : Auto-approve all tool calls
|
||||
#
|
||||
# Note: Executes in current working directory
|
||||
|
||||
set -e
|
||||
|
||||
# Function to show help
|
||||
show_help() {
|
||||
echo "qwen-wrapper - Token-aware wrapper for qwen command"
|
||||
echo ""
|
||||
echo "Usage: qwen-wrapper [options] [qwen options]"
|
||||
echo ""
|
||||
echo "Options:"
|
||||
echo " --approval-mode <mode> Sets the approval mode for tool calls"
|
||||
echo " Available modes:"
|
||||
echo " default : Prompt for approval on each tool call (default)"
|
||||
echo " auto_edit : Auto-approve edit tools, prompt for others"
|
||||
echo " yolo : Auto-approve all tool calls"
|
||||
echo " --help Show this help message"
|
||||
echo ""
|
||||
echo "Features:"
|
||||
echo " - Automatically manages --all-files flag based on project token count"
|
||||
echo " - Intelligent approval mode detection based on task type"
|
||||
echo " - Token limit: $DEFAULT_TOKEN_LIMIT (set QWEN_TOKEN_LIMIT to override)"
|
||||
echo ""
|
||||
echo "Examples:"
|
||||
echo " qwen-wrapper -p \"Analyze the codebase structure\""
|
||||
echo " qwen-wrapper --approval-mode yolo -p \"Implement user authentication\""
|
||||
echo " qwen-wrapper --approval-mode auto_edit -p \"Fix all linting errors\""
|
||||
echo ""
|
||||
}
|
||||
|
||||
# Function to validate approval mode
|
||||
validate_approval_mode() {
|
||||
local mode="$1"
|
||||
case "$mode" in
|
||||
"default"|"auto_edit"|"yolo")
|
||||
return 0
|
||||
;;
|
||||
*)
|
||||
echo -e "${RED}❌ Invalid approval mode: $mode${NC}" >&2
|
||||
echo -e "${YELLOW}Valid modes: default, auto_edit, yolo${NC}" >&2
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
# Configuration
|
||||
DEFAULT_TOKEN_LIMIT=2000000
|
||||
TOKEN_LIMIT=${QWEN_TOKEN_LIMIT:-$DEFAULT_TOKEN_LIMIT}
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Function to count tokens (approximate: chars/4)
|
||||
count_tokens() {
|
||||
local total_chars=0
|
||||
local file_count=0
|
||||
|
||||
# Count characters in common source files
|
||||
while IFS= read -r -d '' file; do
|
||||
if [[ -f "$file" && -r "$file" ]]; then
|
||||
local chars=$(wc -c < "$file" 2>/dev/null || echo 0)
|
||||
total_chars=$((total_chars + chars))
|
||||
file_count=$((file_count + 1))
|
||||
fi
|
||||
done < <(find . -type f \( -name "*.py" -o -name "*.js" -o -name "*.ts" -o -name "*.tsx" -o -name "*.jsx" -o -name "*.java" -o -name "*.cpp" -o -name "*.c" -o -name "*.h" -o -name "*.rs" -o -name "*.go" -o -name "*.md" -o -name "*.txt" -o -name "*.json" -o -name "*.yaml" -o -name "*.yml" -o -name "*.xml" -o -name "*.html" -o -name "*.css" -o -name "*.scss" -o -name "*.sass" -o -name "*.php" -o -name "*.rb" -o -name "*.sh" -o -name "*.bash" \) -not -path "*/node_modules/*" -not -path "*/.git/*" -not -path "*/dist/*" -not -path "*/build/*" -not -path "*/.next/*" -not -path "*/.nuxt/*" -not -path "*/target/*" -not -path "*/vendor/*" -print0 2>/dev/null)
|
||||
|
||||
local estimated_tokens=$((total_chars / 4))
|
||||
echo "$estimated_tokens $file_count"
|
||||
}
|
||||
|
||||
# Parse arguments to check for flags
|
||||
has_all_files=false
|
||||
has_approval_mode=false
|
||||
approval_mode_value=""
|
||||
|
||||
# Parse arguments with proper handling of --approval-mode value
|
||||
args=("$@") # Start with all arguments
|
||||
parsed_args=()
|
||||
skip_next=false
|
||||
|
||||
for ((i=0; i<${#args[@]}; i++)); do
|
||||
if [[ "$skip_next" == true ]]; then
|
||||
skip_next=false
|
||||
continue
|
||||
fi
|
||||
|
||||
arg="${args[i]}"
|
||||
case "$arg" in
|
||||
"--help"|"-h")
|
||||
show_help
|
||||
exit 0
|
||||
;;
|
||||
"--all-files")
|
||||
has_all_files=true
|
||||
parsed_args+=("$arg")
|
||||
;;
|
||||
"--approval-mode")
|
||||
has_approval_mode=true
|
||||
# Get the next argument as the mode value
|
||||
if [[ $((i+1)) -lt ${#args[@]} ]]; then
|
||||
approval_mode_value="${args[$((i+1))]}"
|
||||
if validate_approval_mode "$approval_mode_value"; then
|
||||
parsed_args+=("$arg" "$approval_mode_value")
|
||||
skip_next=true
|
||||
else
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
echo -e "${RED}❌ --approval-mode requires a value${NC}" >&2
|
||||
echo -e "${YELLOW}Valid modes: default, auto_edit, yolo${NC}" >&2
|
||||
exit 1
|
||||
fi
|
||||
;;
|
||||
--approval-mode=*)
|
||||
has_approval_mode=true
|
||||
approval_mode_value="${arg#*=}"
|
||||
if validate_approval_mode "$approval_mode_value"; then
|
||||
parsed_args+=("$arg")
|
||||
else
|
||||
exit 1
|
||||
fi
|
||||
;;
|
||||
*)
|
||||
parsed_args+=("$arg")
|
||||
;;
|
||||
esac
|
||||
done
|
||||
|
||||
# Replace args with parsed_args
|
||||
args=("${parsed_args[@]}")
|
||||
|
||||
# Analyze current working directory
|
||||
echo -e "${GREEN}📁 Analyzing current directory: $(pwd)${NC}" >&2
|
||||
|
||||
# Count tokens (in the target directory if -c was used)
|
||||
echo -e "${YELLOW}🔍 Analyzing project size...${NC}" >&2
|
||||
read -r token_count file_count <<< "$(count_tokens)"
|
||||
|
||||
echo -e "${YELLOW}📊 Project stats: ~${token_count} tokens across ${file_count} files${NC}" >&2
|
||||
|
||||
# Decision logic for --all-files flag
|
||||
if [[ $token_count -lt $TOKEN_LIMIT ]]; then
|
||||
if [[ "$has_all_files" == false ]]; then
|
||||
echo -e "${GREEN}✅ Small project (${token_count} < ${TOKEN_LIMIT} tokens): Adding --all-files${NC}" >&2
|
||||
args=("--all-files" "${args[@]}")
|
||||
else
|
||||
echo -e "${GREEN}✅ Small project (${token_count} < ${TOKEN_LIMIT} tokens): Keeping --all-files${NC}" >&2
|
||||
fi
|
||||
else
|
||||
if [[ "$has_all_files" == true ]]; then
|
||||
echo -e "${RED}⚠️ Large project (${token_count} >= ${TOKEN_LIMIT} tokens): Removing --all-files to avoid token limits${NC}" >&2
|
||||
echo -e "${YELLOW}💡 Consider using specific @{patterns} for targeted analysis${NC}" >&2
|
||||
# Remove --all-files from args
|
||||
new_args=()
|
||||
for arg in "${args[@]}"; do
|
||||
if [[ "$arg" != "--all-files" ]]; then
|
||||
new_args+=("$arg")
|
||||
fi
|
||||
done
|
||||
args=("${new_args[@]}")
|
||||
else
|
||||
echo -e "${RED}⚠️ Large project (${token_count} >= ${TOKEN_LIMIT} tokens): Avoiding --all-files${NC}" >&2
|
||||
echo -e "${YELLOW}💡 Consider using specific @{patterns} for targeted analysis${NC}" >&2
|
||||
fi
|
||||
fi
|
||||
|
||||
# Auto-add approval-mode if not specified
|
||||
if [[ "$has_approval_mode" == false ]]; then
|
||||
# Intelligent approval mode detection based on prompt content
|
||||
prompt_text="${args[*]}"
|
||||
|
||||
# Analysis/Research tasks - use default (prompt for each tool)
|
||||
if [[ "$prompt_text" =~ (analyze|analysis|review|understand|inspect|examine|research|study|explore|investigate) ]]; then
|
||||
echo -e "${GREEN}📋 Analysis task detected: Adding --approval-mode default${NC}" >&2
|
||||
args=("--approval-mode" "default" "${args[@]}")
|
||||
|
||||
# Development/Edit tasks - use auto_edit (auto-approve edits, prompt for others)
|
||||
elif [[ "$prompt_text" =~ (implement|create|build|develop|code|write|edit|modify|update|fix|refactor|generate) ]]; then
|
||||
echo -e "${GREEN}🔧 Development task detected: Adding --approval-mode auto_edit${NC}" >&2
|
||||
args=("--approval-mode" "auto_edit" "${args[@]}")
|
||||
|
||||
# Automation/Batch tasks - use yolo (auto-approve all)
|
||||
elif [[ "$prompt_text" =~ (automate|batch|mass|bulk|all|execute|run|deploy|install|setup) ]]; then
|
||||
echo -e "${YELLOW}⚡ Automation task detected: Adding --approval-mode yolo${NC}" >&2
|
||||
args=("--approval-mode" "yolo" "${args[@]}")
|
||||
|
||||
# Default fallback - use default mode for safety
|
||||
else
|
||||
echo -e "${YELLOW}🔍 General task detected: Adding --approval-mode default${NC}" >&2
|
||||
args=("--approval-mode" "default" "${args[@]}")
|
||||
fi
|
||||
|
||||
# Show approval mode explanation
|
||||
case "${args[1]}" in
|
||||
"default")
|
||||
echo -e "${YELLOW} → Will prompt for approval on each tool call${NC}" >&2
|
||||
;;
|
||||
"auto_edit")
|
||||
echo -e "${YELLOW} → Will auto-approve edit tools, prompt for others${NC}" >&2
|
||||
;;
|
||||
"yolo")
|
||||
echo -e "${YELLOW} → Will auto-approve all tool calls${NC}" >&2
|
||||
;;
|
||||
esac
|
||||
fi
|
||||
|
||||
# Show final command (for transparency)
|
||||
echo -e "${YELLOW}🚀 Executing: qwen ${args[*]}${NC}" >&2
|
||||
|
||||
# Execute qwen with adjusted arguments (we're already in the right directory)
|
||||
qwen "${args[@]}"
|
||||
@@ -1,14 +1,32 @@
|
||||
#!/bin/bash
|
||||
# Update CLAUDE.md for a specific module with unified template
|
||||
# Usage: update_module_claude.sh <module_path> [update_type] [tool]
|
||||
# Update CLAUDE.md for modules with two strategies
|
||||
# Usage: update_module_claude.sh <strategy> <module_path> [tool] [model]
|
||||
# strategy: single-layer|multi-layer
|
||||
# module_path: Path to the module directory
|
||||
# update_type: full|related (default: full)
|
||||
# tool: gemini|qwen|codex (default: gemini)
|
||||
# model: Model name (optional, uses tool defaults)
|
||||
#
|
||||
# Default Models:
|
||||
# gemini: gemini-2.5-flash
|
||||
# qwen: coder-model
|
||||
# codex: gpt5-codex
|
||||
#
|
||||
# Strategies:
|
||||
# single-layer: Upward aggregation
|
||||
# - Read: Current directory code + child CLAUDE.md files
|
||||
# - Generate: Single ./CLAUDE.md in current directory
|
||||
# - Use: Large projects, incremental bottom-up updates
|
||||
#
|
||||
# multi-layer: Downward distribution
|
||||
# - Read: All files in current and subdirectories
|
||||
# - Generate: CLAUDE.md for each directory containing files
|
||||
# - Use: Small projects, full documentation generation
|
||||
#
|
||||
# Features:
|
||||
# - Respects .gitignore patterns (current directory or git root)
|
||||
# - Unified template for all modules (folders and files)
|
||||
# - Template-based documentation generation
|
||||
# - Minimal prompts based on unified template
|
||||
# - Respects .gitignore patterns
|
||||
# - Path-focused processing (script only cares about paths)
|
||||
# - Template-driven generation
|
||||
|
||||
# Build exclusion filters from .gitignore
|
||||
build_exclusion_filters() {
|
||||
@@ -59,15 +77,84 @@ build_exclusion_filters() {
|
||||
echo "$filters"
|
||||
}
|
||||
|
||||
# Scan directory structure and generate structured information
|
||||
scan_directory_structure() {
|
||||
local target_path="$1"
|
||||
local strategy="$2"
|
||||
|
||||
if [ ! -d "$target_path" ]; then
|
||||
echo "Directory not found: $target_path"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local exclusion_filters=$(build_exclusion_filters)
|
||||
local structure_info=""
|
||||
|
||||
# Get basic directory info
|
||||
local dir_name=$(basename "$target_path")
|
||||
local total_files=$(eval "find \"$target_path\" -type f $exclusion_filters 2>/dev/null" | wc -l)
|
||||
local total_dirs=$(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null" | wc -l)
|
||||
|
||||
structure_info+="Directory: $dir_name\n"
|
||||
structure_info+="Total files: $total_files\n"
|
||||
structure_info+="Total directories: $total_dirs\n\n"
|
||||
|
||||
if [ "$strategy" = "multi-layer" ]; then
|
||||
# For multi-layer: show all subdirectories with file counts
|
||||
structure_info+="Subdirectories with files:\n"
|
||||
while IFS= read -r dir; do
|
||||
if [ -n "$dir" ] && [ "$dir" != "$target_path" ]; then
|
||||
local rel_path=${dir#$target_path/}
|
||||
local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
|
||||
if [ $file_count -gt 0 ]; then
|
||||
structure_info+=" - $rel_path/ ($file_count files)\n"
|
||||
fi
|
||||
fi
|
||||
done < <(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null")
|
||||
else
|
||||
# For single-layer: show direct children only
|
||||
structure_info+="Direct subdirectories:\n"
|
||||
while IFS= read -r dir; do
|
||||
if [ -n "$dir" ]; then
|
||||
local dir_name=$(basename "$dir")
|
||||
local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
|
||||
local has_claude=$([ -f "$dir/CLAUDE.md" ] && echo " [has CLAUDE.md]" || echo "")
|
||||
structure_info+=" - $dir_name/ ($file_count files)$has_claude\n"
|
||||
fi
|
||||
done < <(eval "find \"$target_path\" -maxdepth 1 -type d $exclusion_filters 2>/dev/null" | grep -v "^$target_path$")
|
||||
fi
|
||||
|
||||
# Show main file types in current directory
|
||||
structure_info+="\nCurrent directory files:\n"
|
||||
local code_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' \\) $exclusion_filters 2>/dev/null" | wc -l)
|
||||
local config_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.json' -o -name '*.yaml' -o -name '*.yml' -o -name '*.toml' \\) $exclusion_filters 2>/dev/null" | wc -l)
|
||||
local doc_files=$(eval "find \"$target_path\" -maxdepth 1 -type f -name '*.md' $exclusion_filters 2>/dev/null" | wc -l)
|
||||
|
||||
structure_info+=" - Code files: $code_files\n"
|
||||
structure_info+=" - Config files: $config_files\n"
|
||||
structure_info+=" - Documentation: $doc_files\n"
|
||||
|
||||
printf "%b" "$structure_info"
|
||||
}
|
||||
|
||||
update_module_claude() {
|
||||
local module_path="$1"
|
||||
local update_type="${2:-full}"
|
||||
local strategy="$1"
|
||||
local module_path="$2"
|
||||
local tool="${3:-gemini}"
|
||||
local model="$4"
|
||||
|
||||
# Validate parameters
|
||||
if [ -z "$module_path" ]; then
|
||||
echo "❌ Error: Module path is required"
|
||||
echo "Usage: update_module_claude.sh <module_path> [update_type]"
|
||||
if [ -z "$strategy" ] || [ -z "$module_path" ]; then
|
||||
echo "❌ Error: Strategy and module path are required"
|
||||
echo "Usage: update_module_claude.sh <strategy> <module_path> [tool] [model]"
|
||||
echo "Strategies: single-layer|multi-layer"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Validate strategy
|
||||
if [ "$strategy" != "single-layer" ] && [ "$strategy" != "multi-layer" ]; then
|
||||
echo "❌ Error: Invalid strategy '$strategy'"
|
||||
echo "Valid strategies: single-layer, multi-layer"
|
||||
return 1
|
||||
fi
|
||||
|
||||
@@ -76,6 +163,24 @@ update_module_claude() {
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Set default models if not specified
|
||||
if [ -z "$model" ]; then
|
||||
case "$tool" in
|
||||
gemini)
|
||||
model="gemini-2.5-flash"
|
||||
;;
|
||||
qwen)
|
||||
model="coder-model"
|
||||
;;
|
||||
codex)
|
||||
model="gpt5-codex"
|
||||
;;
|
||||
*)
|
||||
model=""
|
||||
;;
|
||||
esac
|
||||
fi
|
||||
|
||||
# Build exclusion filters from .gitignore
|
||||
local exclusion_filters=$(build_exclusion_filters)
|
||||
|
||||
@@ -85,79 +190,105 @@ update_module_claude() {
|
||||
echo "⚠️ Skipping '$module_path' - no files found (after .gitignore filtering)"
|
||||
return 0
|
||||
fi
|
||||
|
||||
|
||||
# Use unified template for all modules
|
||||
local template_path="$HOME/.claude/workflows/cli-templates/prompts/memory/claude-module-unified.txt"
|
||||
local analysis_strategy="--all-files"
|
||||
|
||||
|
||||
# Read template content directly
|
||||
local template_content=""
|
||||
if [ -f "$template_path" ]; then
|
||||
template_content=$(cat "$template_path")
|
||||
echo " 📋 Loaded template: $(wc -l < "$template_path") lines"
|
||||
else
|
||||
echo " ⚠️ Template not found: $template_path"
|
||||
echo " Using fallback template..."
|
||||
template_content="Create comprehensive CLAUDE.md documentation following standard structure with Purpose, Structure, Components, Dependencies, Integration, and Implementation sections."
|
||||
fi
|
||||
|
||||
# Scan directory structure first
|
||||
echo " 🔍 Scanning directory structure..."
|
||||
local structure_info=$(scan_directory_structure "$module_path" "$strategy")
|
||||
|
||||
# Prepare logging info
|
||||
local module_name=$(basename "$module_path")
|
||||
|
||||
echo "⚡ Updating: $module_path"
|
||||
echo " Type: $update_type | Tool: $tool | Files: $file_count"
|
||||
echo " Template: $(basename "$template_path")"
|
||||
|
||||
# Generate prompt with template injection
|
||||
local template_content=""
|
||||
if [ -f "$template_path" ]; then
|
||||
template_content=$(cat "$template_path")
|
||||
else
|
||||
echo " ⚠️ Template not found: $template_path, using fallback"
|
||||
template_content="Update CLAUDE.md documentation for this module: document structure, key components, dependencies, and integration points."
|
||||
fi
|
||||
|
||||
local update_context=""
|
||||
if [ "$update_type" = "full" ]; then
|
||||
update_context="
|
||||
Update Mode: Complete refresh
|
||||
- Perform comprehensive analysis of all content
|
||||
- Document module structure, dependencies, and key components
|
||||
- Follow template guidelines strictly"
|
||||
else
|
||||
update_context="
|
||||
Update Mode: Context-aware update
|
||||
- Focus on recent changes and affected areas
|
||||
- Maintain consistency with existing documentation
|
||||
- Update only relevant sections
|
||||
- Follow template guidelines for updated content"
|
||||
fi
|
||||
|
||||
local base_prompt="
|
||||
⚠️ CRITICAL RULES - MUST FOLLOW:
|
||||
1. ONLY modify CLAUDE.md files
|
||||
2. NEVER modify source code files
|
||||
3. Focus exclusively on updating documentation
|
||||
4. Follow the template guidelines exactly
|
||||
echo " Strategy: $strategy | Tool: $tool | Model: $model | Files: $file_count"
|
||||
echo " Template: $(basename "$template_path") ($(echo "$template_content" | wc -l) lines)"
|
||||
echo " Structure: Scanned $(echo "$structure_info" | wc -l) lines of structure info"
|
||||
|
||||
$template_content
|
||||
# Build minimal strategy-specific prompt with explicit paths and structure info
|
||||
local final_prompt=""
|
||||
|
||||
if [ "$strategy" = "multi-layer" ]; then
|
||||
# multi-layer strategy: read all, generate for each directory
|
||||
final_prompt="Directory Structure Analysis:
|
||||
$structure_info
|
||||
|
||||
Read: @**/*
|
||||
|
||||
Generate CLAUDE.md files:
|
||||
- Primary: ./CLAUDE.md (current directory)
|
||||
- Additional: CLAUDE.md in each subdirectory containing files
|
||||
|
||||
Template Guidelines:
|
||||
$template_content
|
||||
|
||||
Instructions:
|
||||
- Work bottom-up: deepest directories first
|
||||
- Parent directories reference children
|
||||
- Each CLAUDE.md file must be in its respective directory
|
||||
- Follow the template guidelines above for consistent structure
|
||||
- Use the structure analysis to understand directory hierarchy"
|
||||
else
|
||||
# single-layer strategy: read current + child CLAUDE.md, generate current only
|
||||
final_prompt="Directory Structure Analysis:
|
||||
$structure_info
|
||||
|
||||
Read: @*/CLAUDE.md @*.ts @*.tsx @*.js @*.jsx @*.py @*.sh @*.md @*.json @*.yaml @*.yml
|
||||
|
||||
Generate single file: ./CLAUDE.md
|
||||
|
||||
Template Guidelines:
|
||||
$template_content
|
||||
|
||||
Instructions:
|
||||
- Create exactly one CLAUDE.md file in the current directory
|
||||
- Reference child CLAUDE.md files, do not duplicate their content
|
||||
- Follow the template guidelines above for consistent structure
|
||||
- Use the structure analysis to understand the current directory context"
|
||||
fi
|
||||
|
||||
$update_context"
|
||||
|
||||
# Execute update
|
||||
local start_time=$(date +%s)
|
||||
echo " 🔄 Starting update..."
|
||||
|
||||
if cd "$module_path" 2>/dev/null; then
|
||||
local tool_result=0
|
||||
local final_prompt="$base_prompt
|
||||
|
||||
Module Information:
|
||||
- Name: $module_name
|
||||
- Path: $module_path
|
||||
- Tool: $tool"
|
||||
|
||||
# Execute with selected tool (always use --all-files)
|
||||
# Execute with selected tool
|
||||
# NOTE: Model parameter (-m) is placed AFTER the prompt
|
||||
case "$tool" in
|
||||
qwen)
|
||||
qwen --all-files --yolo -p "$final_prompt" 2>&1
|
||||
if [ "$model" = "coder-model" ]; then
|
||||
# coder-model is default, -m is optional
|
||||
qwen -p "$final_prompt" --yolo 2>&1
|
||||
else
|
||||
qwen -p "$final_prompt" -m "$model" --yolo 2>&1
|
||||
fi
|
||||
tool_result=$?
|
||||
;;
|
||||
codex)
|
||||
codex --full-auto exec "$final_prompt" --skip-git-repo-check -s danger-full-access 2>&1
|
||||
codex --full-auto exec "$final_prompt" -m "$model" --skip-git-repo-check -s danger-full-access 2>&1
|
||||
tool_result=$?
|
||||
;;
|
||||
gemini|*)
|
||||
gemini --all-files --yolo -p "$final_prompt" 2>&1
|
||||
gemini)
|
||||
gemini -p "$final_prompt" -m "$model" --yolo 2>&1
|
||||
tool_result=$?
|
||||
;;
|
||||
*)
|
||||
echo " ⚠️ Unknown tool: $tool, defaulting to gemini"
|
||||
gemini -p "$final_prompt" -m "$model" --yolo 2>&1
|
||||
tool_result=$?
|
||||
;;
|
||||
esac
|
||||
@@ -181,5 +312,22 @@ update_module_claude() {
|
||||
|
||||
# Execute function if script is run directly
|
||||
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
|
||||
# Show help if no arguments or help requested
|
||||
if [ $# -eq 0 ] || [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
|
||||
echo "Usage: update_module_claude.sh <strategy> <module_path> [tool] [model]"
|
||||
echo ""
|
||||
echo "Strategies:"
|
||||
echo " single-layer - Read current dir code + child CLAUDE.md, generate ./CLAUDE.md"
|
||||
echo " multi-layer - Read all files, generate CLAUDE.md for each directory"
|
||||
echo ""
|
||||
echo "Tools: gemini (default), qwen, codex"
|
||||
echo "Models: Use tool defaults if not specified"
|
||||
echo ""
|
||||
echo "Examples:"
|
||||
echo " ./update_module_claude.sh single-layer ./src/auth"
|
||||
echo " ./update_module_claude.sh multi-layer ./components gemini gemini-2.5-flash"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
update_module_claude "$@"
|
||||
fi
|
||||
fi
|
||||
|
||||
@@ -1,100 +1,124 @@
|
||||
---
|
||||
name: Prompt Enhancer
|
||||
description: Transform vague prompts into actionable specs using intelligent analysis and session memory. Use when user input contains -e or --enhance flag.
allowed-tools: (none)
|
||||
---
|
||||
|
||||
# Prompt Enhancer
|
||||
|
||||
## Overview
|
||||
**Transform**: Vague intent → Structured specification (Memory-based, Direct Output)
|
||||
|
||||
Transforms ambiguous user requests into actionable technical specifications through semantic analysis and session memory integration.
|
||||
**Languages**: English + Chinese (中英文语义识别)
|
||||
|
||||
**Core Capability**: Vague intent → Structured specification
|
||||
## Process (Internal → Direct Output)
|
||||
|
||||
## Enhancement Process
|
||||
**Internal Analysis**: Intelligently extract session context, identify tech stack, and structure into actionable format.
|
||||
|
||||
### Step 1: Semantic Analysis
|
||||
**Output**: Direct structured prompt (no intermediate steps shown)
|
||||
|
||||
Analyze user input to identify:
|
||||
- **Intent keywords**: fix, improve, add, refactor, update, migrate
|
||||
- **Technical scope**: single file vs multi-module
|
||||
- **Domain context**: auth, payment, security, API, UI, database
|
||||
- **Implied requirements**: performance, security, testing, documentation
|
||||
## Output Format
|
||||
|
||||
### Step 2: Memory Analysis
|
||||
**Dynamic Structure**: Adapt fields based on task type and context needs. Not all fields are required.
|
||||
|
||||
Extract from conversation history:
|
||||
- **Technical context**: Previous discussions, decisions, implementations
|
||||
- **Known patterns**: Identified code patterns, architecture decisions
|
||||
- **Current state**: What's been built, what's in progress
|
||||
- **Dependencies**: Related modules, integration points
|
||||
- **Constraints**: Security requirements, backward compatibility
|
||||
**Core Fields** (always present):
|
||||
- **INTENT**: One-sentence technical goal
|
||||
- **ACTION**: Concrete steps with technical details
|
||||
|
||||
### Step 3: Context Integration
|
||||
|
||||
Combine semantic and memory analysis to determine:
|
||||
- **Precise intent**: Specific technical goal
|
||||
- **Required actions**: Implementation steps with file references
|
||||
- **Critical constraints**: Security, compatibility, testing requirements
|
||||
- **Missing information**: What needs clarification
|
||||
|
||||
## Output Structure
|
||||
|
||||
Every enhanced prompt must follow this format:
|
||||
**Optional Fields** (include when relevant):
|
||||
- **TECH STACK**: Relevant technologies (when tech-specific)
|
||||
- **CONTEXT**: Session memory findings (when context matters)
|
||||
- **ATTENTION**: Critical constraints (when risks/requirements exist)
|
||||
- **SCOPE**: Affected modules/files (for multi-module tasks)
|
||||
- **METRICS**: Success criteria (for optimization/performance tasks)
|
||||
- **DEPENDENCIES**: Related components (for integration tasks)
|
||||
|
||||
**Example (Simple Task)**:
|
||||
```
|
||||
INTENT: [Clear technical goal]
|
||||
CONTEXT: [Session memory + semantic analysis]
|
||||
ACTION: [Numbered implementation steps]
|
||||
ATTENTION: [Critical constraints]
|
||||
📋 ENHANCED PROMPT
|
||||
|
||||
INTENT: Fix authentication token validation in JWT middleware
|
||||
|
||||
ACTION:
|
||||
1. Review token expiration logic in auth middleware
|
||||
2. Add proper error handling for expired tokens
|
||||
3. Test with valid/expired/malformed tokens
|
||||
```
|
||||
|
||||
**Field Descriptions**:
|
||||
**Example (Complex Task)**:
|
||||
```
|
||||
📋 ENHANCED PROMPT
|
||||
|
||||
- **INTENT**: One-sentence technical goal derived from semantic analysis
|
||||
- **CONTEXT**: Session memory findings + semantic domain analysis
|
||||
- **ACTION**: Numbered steps with specific file/module references
|
||||
- **ATTENTION**: Critical constraints, security, compliance, tests
|
||||
INTENT: Optimize API performance with caching and database indexing
|
||||
|
||||
## Semantic Patterns
|
||||
TECH STACK:
|
||||
- Redis: Response caching
|
||||
- PostgreSQL: Query optimization
|
||||
|
||||
### Intent Translation
|
||||
CONTEXT:
|
||||
- API response times >2s mentioned in previous conversation
|
||||
- PostgreSQL slow query logs show N+1 problems
|
||||
|
||||
| User Input | Semantic Intent | Focus |
|
||||
|------------|----------------|-------|
|
||||
| "fix" + vague target | Debug and resolve | Root cause → preserve behavior |
|
||||
| "improve" + no metrics | Enhance/optimize | Performance/readability |
|
||||
| "add" + feature name | Implement feature | Integration + edge cases |
|
||||
| "refactor" + module | Restructure | Maintain behavior |
|
||||
| "update" + version | Modernize | Version compatibility |
|
||||
ACTION:
|
||||
1. Profile endpoints to identify slow queries
|
||||
2. Add PostgreSQL indexes on frequently queried columns
|
||||
3. Implement Redis caching for read-heavy endpoints
|
||||
4. Add cache invalidation on data updates
|
||||
|
||||
### Scope Detection
|
||||
METRICS:
|
||||
- Target: <500ms API response time
|
||||
- Cache hit ratio: >80%
|
||||
|
||||
**Single-file scope**:
|
||||
- "fix button", "add validation", "update component"
|
||||
- Use session memory only
|
||||
ATTENTION:
|
||||
- Maintain backward compatibility with existing API contracts
|
||||
- Handle cache invalidation correctly to avoid stale data
|
||||
```
|
||||
## Workflow
|
||||
|
||||
**Multi-module scope** (>3 modules):
|
||||
- "add authentication", "refactor payment", "migrate database"
|
||||
- Analyze dependencies and integration points
|
||||
```
|
||||
Trigger (-e/--enhance) → Internal Analysis → Dynamic Output
|
||||
↓ ↓ ↓
|
||||
User Input Assess Task Type Select Fields
|
||||
Extract Memory Context Structure Prompt
|
||||
```
|
||||
|
||||
**System-wide scope**:
|
||||
- "improve performance", "add logging", "update security"
|
||||
- Consider cross-cutting concerns
|
||||
1. **Detect**: User input contains `-e` or `--enhance`
|
||||
2. **Analyze**:
|
||||
- Determine task type (fix/optimize/implement/refactor)
|
||||
- Extract relevant session context
|
||||
- Identify tech stack and constraints
|
||||
3. **Structure**:
|
||||
- Always include: INTENT + ACTION
|
||||
- Conditionally add: TECH STACK, CONTEXT, ATTENTION, METRICS, etc.
|
||||
4. **Output**: Present dynamically structured prompt
|
||||
|
||||
## Key Principles
|
||||
## Enhancement Guidelines (Internal)
|
||||
|
||||
1. **Memory First**: Check session memory before assumptions
|
||||
2. **Semantic Precision**: Extract exact technical intent from vague language
|
||||
3. **Context Reuse**: Build on previous understanding
|
||||
4. **Clear Output**: Always structured format
|
||||
5. **Avoid Duplication**: Reference context, don't repeat
|
||||
**Always Include**:
|
||||
- Clear, actionable INTENT
|
||||
- Concrete ACTION steps with technical details
|
||||
|
||||
**Add When Relevant**:
|
||||
- TECH STACK: Task involves specific technologies
|
||||
- CONTEXT: Session memory provides useful background
|
||||
- ATTENTION: Security/compatibility/performance concerns exist
|
||||
- SCOPE: Multi-module or cross-component changes
|
||||
- METRICS: Performance/optimization goals need measurement
|
||||
- DEPENDENCIES: Integration points matter
|
||||
|
||||
**Quality Checks**:
|
||||
- Make vague intent explicit
|
||||
- Resolve ambiguous references
|
||||
- Add testing/validation steps
|
||||
- Include constraints from memory
|
||||
|
||||
## Best Practices
|
||||
|
||||
- **Semantic analysis**: Identify domain, scope, and intent keywords
|
||||
- **Memory integration**: Extract all relevant context from conversation
|
||||
- **Structured output**: Always use INTENT/CONTEXT/ACTION/ATTENTION format
|
||||
- **Actionable steps**: Specific files, clear execution order
|
||||
- **Critical constraints**: Security, compatibility, testing requirements
|
||||
- ✅ Trigger only on `-e`/`--enhance` flags
|
||||
- ✅ Use **dynamic field selection** based on task type
|
||||
- ✅ Extract **memory context ONLY** (no file reading)
|
||||
- ✅ Always include INTENT + ACTION as core fields
|
||||
- ✅ Add optional fields only when relevant to task
|
||||
- ✅ Direct output (no intermediate steps shown)
|
||||
- ❌ NO tool calls
|
||||
- ❌ NO file operations (Bash, Read, Glob, Grep)
|
||||
- ❌ NO fixed template - adapt to task needs
|
||||
|
||||
@@ -1,4 +1,25 @@
|
||||
# ⚠️ DEPRECATED: Synthesis Role Template
|
||||
|
||||
## DEPRECATION NOTICE
|
||||
|
||||
**This template is DEPRECATED and no longer used.**
|
||||
|
||||
### Why Deprecated
|
||||
The `/workflow:brainstorm:synthesis` command has been redesigned:
|
||||
- **Old behavior**: Generated synthesis-specification.md consolidating all role analyses
|
||||
- **New behavior**: Performs cross-role analysis, identifies ambiguities, interacts with user for clarification, and updates role analysis.md files directly
|
||||
|
||||
### Migration
|
||||
- **Role analyses are the source of truth**: Each role's analysis.md file is updated directly
|
||||
- **Planning reads role documents**: The planning phase dynamically reads all role analysis.md files
|
||||
- **No template needed**: The clarification workflow doesn't require a document template
|
||||
|
||||
### Historical Context
|
||||
This template was used to guide the generation of synthesis-specification.md from multiple role perspectives. It is preserved for historical reference but should not be used in the new architecture.
|
||||
|
||||
---
|
||||
|
||||
# Original Template (Historical Reference)
|
||||
|
||||
## Purpose
|
||||
Generate comprehensive synthesis-specification.md that consolidates all role perspectives from brainstorming into actionable implementation specification.
|
||||
@@ -18,7 +39,7 @@ Generate comprehensive synthesis-specification.md that consolidates all role per
|
||||
```markdown
|
||||
# [Topic] - Integrated Implementation Specification
|
||||
|
||||
**Framework Reference**: @guidance-specification.md | **Generated**: [timestamp] | **Session**: WFS-[topic-slug]
|
||||
**Source Integration**: All brainstorming role perspectives consolidated
|
||||
**Document Type**: Requirements & Design Specification (WHAT to build)
|
||||
|
||||
@@ -344,7 +365,7 @@ Document known constraints that affect planning:
|
||||
|
||||
### Cross-Role Synthesis Process
|
||||
|
||||
1. **Load All Role Analyses**: Read guidance-specification.md and all discovered */analysis.md files
|
||||
2. **Extract Key Insights**: Identify main recommendations, concerns, and innovations from each role
|
||||
3. **Identify Consensus Areas**: Find common themes across multiple roles
|
||||
4. **Document Disagreements**: Capture controversial points where roles differ
|
||||
@@ -371,7 +392,7 @@ Document known constraints that affect planning:
|
||||
Use @ references to link back to source role analyses:
|
||||
- `@role/analysis.md` - Reference entire role analysis
|
||||
- `@role/analysis.md#section` - Reference specific section
|
||||
- `@guidance-specification.md#point-3` - Reference framework discussion point
|
||||
|
||||
### Dynamic Role Handling
|
||||
|
||||
|
||||
@@ -1,6 +1,12 @@
|
||||
Create or update CLAUDE.md documentation using unified module/file template.
|
||||
|
||||
## ⚠️ FILE NAMING RULE (CRITICAL)
|
||||
- Target file: MUST be named exactly `CLAUDE.md` in the current directory
|
||||
- NEVER create files like `ToolSidebar.CLAUDE.md` or `[filename].CLAUDE.md`
|
||||
- ALWAYS use the fixed name: `CLAUDE.md`
|
||||
|
||||
## CORE CHECKLIST ⚡
|
||||
□ MUST create/update file named exactly 'CLAUDE.md' (not variants)
|
||||
□ MUST include all 6 sections: Purpose, Structure, Components, Dependencies, Integration, Implementation
|
||||
□ For code files: Document all public/exported APIs with complete parameter details
|
||||
□ For folders: Reference subdirectory CLAUDE.md files instead of duplicating
|
||||
@@ -64,6 +70,11 @@ Create or update CLAUDE.md documentation using unified module/file template.
|
||||
|
||||
## OUTPUT REQUIREMENTS
|
||||
|
||||
### File Naming (CRITICAL)
|
||||
- **Output file**: MUST be named exactly `CLAUDE.md` in the current directory
|
||||
- **Examples of WRONG naming**: `ToolSidebar.CLAUDE.md`, `index.CLAUDE.md`, `utils.CLAUDE.md`
|
||||
- **Correct naming**: `CLAUDE.md` (always, for all directories)
|
||||
|
||||
### Template Structure
|
||||
```markdown
|
||||
# [Module/File Name]
|
||||
@@ -143,6 +154,7 @@ Create or update CLAUDE.md documentation using unified module/file template.
|
||||
- Update existing CLAUDE.md files rather than creating duplicate sections
|
||||
|
||||
## VERIFICATION CHECKLIST ✓
|
||||
□ Output file is named exactly 'CLAUDE.md' (not [filename].CLAUDE.md)
|
||||
□ All 6 required sections included (Purpose, Structure, Components, Dependencies, Integration, Implementation)
|
||||
□ All public/exported APIs documented with complete signatures
|
||||
□ Parameters documented with types, descriptions, and defaults
|
||||
|
||||
@@ -0,0 +1,224 @@
|
||||
Generate ANALYSIS_RESULTS.md with comprehensive solution design and technical analysis.
|
||||
|
||||
## OUTPUT FILE STRUCTURE
|
||||
|
||||
### Required Sections
|
||||
|
||||
```markdown
|
||||
# Technical Analysis & Solution Design
|
||||
|
||||
## Executive Summary
|
||||
- **Analysis Focus**: {core_problem_or_improvement_area}
|
||||
- **Analysis Timestamp**: {timestamp}
|
||||
- **Tools Used**: {analysis_tools}
|
||||
- **Overall Assessment**: {feasibility_score}/5 - {recommendation_status}
|
||||
|
||||
---
|
||||
|
||||
## 1. Current State Analysis
|
||||
|
||||
### Architecture Overview
|
||||
- **Existing Patterns**: {key_architectural_patterns}
|
||||
- **Code Structure**: {current_codebase_organization}
|
||||
- **Integration Points**: {system_integration_touchpoints}
|
||||
- **Technical Debt Areas**: {identified_debt_with_impact}
|
||||
|
||||
### Compatibility & Dependencies
|
||||
- **Framework Alignment**: {framework_compatibility_assessment}
|
||||
- **Dependency Analysis**: {critical_dependencies_and_risks}
|
||||
- **Migration Considerations**: {backward_compatibility_concerns}
|
||||
|
||||
### Critical Findings
|
||||
- **Strengths**: {what_works_well}
|
||||
- **Gaps**: {missing_capabilities_or_issues}
|
||||
- **Risks**: {identified_technical_and_business_risks}
|
||||
|
||||
---
|
||||
|
||||
## 2. Proposed Solution Design
|
||||
|
||||
### Core Architecture Principles
|
||||
- **Design Philosophy**: {key_design_principles}
|
||||
- **Architectural Approach**: {chosen_architectural_pattern_with_rationale}
|
||||
- **Scalability Strategy**: {how_solution_scales}
|
||||
|
||||
### System Design
|
||||
- **Component Architecture**: {high_level_component_design}
|
||||
- **Data Flow**: {data_flow_patterns_and_state_management}
|
||||
- **API Design**: {interface_contracts_and_specifications}
|
||||
- **Integration Strategy**: {how_components_integrate}
|
||||
|
||||
### Key Design Decisions
|
||||
1. **Decision**: {critical_design_choice}
|
||||
- **Rationale**: {why_this_approach}
|
||||
- **Alternatives Considered**: {other_options_and_tradeoffs}
|
||||
- **Impact**: {implications_on_architecture}
|
||||
|
||||
2. **Decision**: {another_critical_choice}
|
||||
- **Rationale**: {reasoning}
|
||||
- **Alternatives Considered**: {tradeoffs}
|
||||
- **Impact**: {consequences}
|
||||
|
||||
### Technical Specifications
|
||||
- **Technology Stack**: {chosen_technologies_with_justification}
|
||||
- **Code Organization**: {module_structure_and_patterns}
|
||||
- **Testing Strategy**: {testing_approach_and_coverage}
|
||||
- **Performance Targets**: {performance_requirements_and_benchmarks}
|
||||
|
||||
---
|
||||
|
||||
## 3. Implementation Strategy
|
||||
|
||||
### Development Approach
|
||||
- **Core Implementation Pattern**: {primary_implementation_strategy}
|
||||
- **Module Dependencies**: {dependency_graph_and_order}
|
||||
- **Quality Assurance**: {qa_approach_and_validation}
|
||||
|
||||
### Code Modification Targets
|
||||
**Purpose**: Specific code locations for modification AND new files to create
|
||||
|
||||
**Identified Targets**:
|
||||
1. **Target**: `src/module/File.ts:function:45-52`
|
||||
- **Type**: Modify existing
|
||||
- **Modification**: {what_to_change}
|
||||
- **Rationale**: {why_change_needed}
|
||||
|
||||
2. **Target**: `src/module/NewFile.ts`
|
||||
- **Type**: Create new file
|
||||
- **Purpose**: {file_purpose}
|
||||
- **Rationale**: {why_new_file_needed}
|
||||
|
||||
**Format Rules**:
|
||||
- Existing files: `file:function:lines` (with line numbers)
|
||||
- New files: `file` (no function or lines)
|
||||
- Unknown lines: `file:function:*`
|
||||
|
||||
### Feasibility Assessment
|
||||
- **Technical Complexity**: {complexity_rating_and_analysis}
|
||||
- **Performance Impact**: {expected_performance_characteristics}
|
||||
- **Resource Requirements**: {development_resources_needed}
|
||||
- **Maintenance Burden**: {ongoing_maintenance_considerations}
|
||||
|
||||
### Risk Mitigation
|
||||
- **Technical Risks**: {implementation_risks_and_mitigation}
|
||||
- **Integration Risks**: {compatibility_challenges_and_solutions}
|
||||
- **Performance Risks**: {performance_concerns_and_strategies}
|
||||
- **Security Risks**: {security_vulnerabilities_and_controls}
|
||||
|
||||
---
|
||||
|
||||
## 4. Solution Optimization
|
||||
|
||||
### Performance Optimization
|
||||
- **Optimization Strategies**: {key_performance_improvements}
|
||||
- **Caching Strategy**: {caching_approach_and_invalidation}
|
||||
- **Resource Management**: {resource_utilization_optimization}
|
||||
- **Bottleneck Mitigation**: {identified_bottlenecks_and_solutions}
|
||||
|
||||
### Security Enhancements
|
||||
- **Security Model**: {authentication_authorization_approach}
|
||||
- **Data Protection**: {data_security_and_encryption}
|
||||
- **Vulnerability Mitigation**: {known_vulnerabilities_and_controls}
|
||||
- **Compliance**: {regulatory_and_compliance_considerations}
|
||||
|
||||
### Code Quality
|
||||
- **Code Standards**: {coding_conventions_and_patterns}
|
||||
- **Testing Coverage**: {test_strategy_and_coverage_goals}
|
||||
- **Documentation**: {documentation_requirements}
|
||||
- **Maintainability**: {maintainability_practices}
|
||||
|
||||
---
|
||||
|
||||
## 5. Critical Success Factors
|
||||
|
||||
### Technical Requirements
|
||||
- **Must Have**: {essential_technical_capabilities}
|
||||
- **Should Have**: {important_but_not_critical_features}
|
||||
- **Nice to Have**: {optional_enhancements}
|
||||
|
||||
### Quality Metrics
|
||||
- **Performance Benchmarks**: {measurable_performance_targets}
|
||||
- **Code Quality Standards**: {quality_metrics_and_thresholds}
|
||||
- **Test Coverage Goals**: {testing_coverage_requirements}
|
||||
- **Security Standards**: {security_compliance_requirements}
|
||||
|
||||
### Success Validation
|
||||
- **Acceptance Criteria**: {how_to_validate_success}
|
||||
- **Testing Strategy**: {validation_testing_approach}
|
||||
- **Monitoring Plan**: {production_monitoring_strategy}
|
||||
- **Rollback Plan**: {failure_recovery_strategy}
|
||||
|
||||
---
|
||||
|
||||
## 6. Analysis Confidence & Recommendations
|
||||
|
||||
### Assessment Scores
|
||||
- **Conceptual Integrity**: {score}/5 - {brief_assessment}
|
||||
- **Architectural Soundness**: {score}/5 - {brief_assessment}
|
||||
- **Technical Feasibility**: {score}/5 - {brief_assessment}
|
||||
- **Implementation Readiness**: {score}/5 - {brief_assessment}
|
||||
- **Overall Confidence**: {overall_score}/5
|
||||
|
||||
### Final Recommendation
|
||||
**Status**: {PROCEED|PROCEED_WITH_MODIFICATIONS|RECONSIDER|REJECT}
|
||||
|
||||
**Rationale**: {clear_explanation_of_recommendation}
|
||||
|
||||
**Critical Prerequisites**: {what_must_be_resolved_before_proceeding}
|
||||
|
||||
---
|
||||
|
||||
## 7. Reference Information
|
||||
|
||||
### Tool Analysis Summary
|
||||
- **Gemini Insights**: {key_architectural_and_pattern_insights}
|
||||
- **Codex Validation**: {technical_feasibility_and_implementation_notes}
|
||||
- **Consensus Points**: {agreements_between_tools}
|
||||
- **Conflicting Views**: {disagreements_and_resolution}
|
||||
|
||||
### Context & Resources
|
||||
- **Analysis Context**: {context_package_reference}
|
||||
- **Documentation References**: {relevant_documentation}
|
||||
- **Related Patterns**: {similar_implementations_in_codebase}
|
||||
- **External Resources**: {external_references_and_best_practices}
|
||||
```
|
||||
|
||||
## CONTENT REQUIREMENTS
|
||||
|
||||
### Analysis Priority Sources
|
||||
1. **PRIMARY**: Individual role analysis.md files (system-architect, ui-designer, etc.) - technical details, ADRs, decision context
|
||||
2. **SECONDARY**: role analysis documents - multi-perspective requirements and design specs
|
||||
3. **REFERENCE**: guidance-specification.md - discussion context
|
||||
|
||||
### Focus Areas
|
||||
- **SOLUTION IMPROVEMENTS**: How to enhance current design
|
||||
- **KEY DESIGN DECISIONS**: Critical choices with rationale, alternatives, tradeoffs
|
||||
- **CRITICAL INSIGHTS**: Non-obvious findings, risks, opportunities
|
||||
- **OPTIMIZATION**: Performance, security, code quality recommendations
|
||||
|
||||
### Exclusions
|
||||
- ❌ Task lists or implementation steps
|
||||
- ❌ Code examples or snippets
|
||||
- ❌ Project management timelines
|
||||
- ❌ Resource allocation details
|
||||
|
||||
## OUTPUT VALIDATION
|
||||
|
||||
### Completeness Checklist
|
||||
□ All 7 sections present with content
|
||||
□ Executive Summary with feasibility score
|
||||
□ Current State Analysis with findings
|
||||
□ Solution Design with 2+ key decisions
|
||||
□ Implementation Strategy with code targets
|
||||
□ Optimization recommendations in 3 areas
|
||||
□ Confidence scores with final recommendation
|
||||
□ Reference information included
|
||||
|
||||
### Quality Standards
|
||||
□ Design decisions include rationale and alternatives
|
||||
□ Code targets specify file:function:lines format
|
||||
□ Risk assessment with mitigation strategies
|
||||
□ Quantified scores (X/5) for all assessments
|
||||
□ Clear PROCEED/RECONSIDER/REJECT recommendation
|
||||
|
||||
Focus: Solution-focused technical analysis emphasizing design decisions and critical insights.
|
||||
@@ -0,0 +1,176 @@
|
||||
Validate technical feasibility and identify implementation risks for proposed solution design.
|
||||
|
||||
## CORE CHECKLIST ⚡
|
||||
□ Read context-package.json and gemini-solution-design.md
|
||||
□ Assess complexity, validate technology choices
|
||||
□ Evaluate performance and security implications
|
||||
□ Focus on TECHNICAL FEASIBILITY and RISK ASSESSMENT
|
||||
□ Write output to specified .workflow/{session_id}/.process/ path
|
||||
|
||||
## PREREQUISITE ANALYSIS
|
||||
|
||||
### Required Input Files
|
||||
1. **context-package.json**: Task requirements, source files, tech stack
|
||||
2. **gemini-solution-design.md**: Proposed solution design and architecture
|
||||
3. **workflow-session.json**: Session state and context
|
||||
4. **CLAUDE.md**: Project standards and conventions
|
||||
|
||||
### Analysis Dependencies
|
||||
- Review Gemini's proposed solution design
|
||||
- Validate against actual codebase capabilities
|
||||
- Assess implementation complexity realistically
|
||||
- Identify gaps between design and execution
|
||||
|
||||
## REQUIRED VALIDATION
|
||||
|
||||
### 1. Feasibility Assessment
|
||||
- **Complexity Rating**: Rate technical complexity (1-5 scale)
|
||||
- 1: Trivial - straightforward implementation
|
||||
- 2: Simple - well-known patterns
|
||||
- 3: Moderate - some challenges
|
||||
- 4: Complex - significant challenges
|
||||
- 5: Very Complex - high risk, major unknowns
|
||||
|
||||
- **Resource Requirements**: Estimate development effort
|
||||
- Development time (hours/days/weeks)
|
||||
- Required expertise level
|
||||
- Infrastructure needs
|
||||
|
||||
- **Technology Compatibility**: Validate proposed tech stack
|
||||
- Framework version compatibility
|
||||
- Library maturity and support
|
||||
- Integration with existing systems
|
||||
|
||||
### 2. Risk Analysis
|
||||
- **Implementation Risks**: Technical challenges and blockers
|
||||
- Unknown implementation patterns
|
||||
- Missing capabilities or APIs
|
||||
- Breaking changes to existing code
|
||||
|
||||
- **Integration Challenges**: System integration concerns
|
||||
- Data format compatibility
|
||||
- API contract changes
|
||||
- Dependency conflicts
|
||||
|
||||
- **Performance Concerns**: Performance and scalability risks
|
||||
- Resource consumption (CPU, memory, I/O)
|
||||
- Latency and throughput impact
|
||||
- Caching and optimization needs
|
||||
|
||||
- **Security Concerns**: Security vulnerabilities and threats
|
||||
- Authentication/authorization gaps
|
||||
- Data exposure risks
|
||||
- Compliance violations
|
||||
|
||||
### 3. Implementation Validation
|
||||
- **Development Approach**: Validate proposed implementation strategy
|
||||
- Verify module dependency order
|
||||
- Assess incremental development feasibility
|
||||
- Evaluate testing approach
|
||||
|
||||
- **Quality Standards**: Validate quality requirements
|
||||
- Test coverage achievability
|
||||
- Performance benchmark realism
|
||||
- Documentation completeness
|
||||
|
||||
- **Maintenance Implications**: Long-term sustainability
|
||||
- Code maintainability assessment
|
||||
- Technical debt evaluation
|
||||
- Evolution and extensibility
|
||||
|
||||
### 4. Code Target Verification
|
||||
Review Gemini's proposed code targets:
|
||||
- **Validate existing targets**: Confirm file:function:lines exist
|
||||
- **Assess new file targets**: Evaluate necessity and placement
|
||||
- **Identify missing targets**: Suggest additional modification points
|
||||
- **Refine target specifications**: Provide more precise line numbers if possible
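
As a rough illustration of this spot-check (the target value, path, and helper commands are hypothetical, not part of this spec), a shell one-off might look like:

```bash
# Illustrative only: confirm that a "file:function:lines" target from
# gemini-solution-design.md still resolves to real code before refining it.
target="src/module/File.ts:processData:45-52"   # hypothetical target
file="${target%%:*}"                            # -> src/module/File.ts
func="$(printf '%s' "$target" | cut -d: -f2)"   # -> processData
if [ -f "$file" ] && rg -n "$func" "$file" >/dev/null 2>&1; then
  echo "OK: $target"
else
  echo "STALE: $target (file or function not found)"
fi
```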
|
||||
|
||||
### 5. Recommendations
|
||||
- **Must-Have Requirements**: Critical requirements for success
|
||||
- **Optimization Opportunities**: Performance and quality improvements
|
||||
- **Security Controls**: Essential security measures
|
||||
- **Risk Mitigation**: Strategies to reduce identified risks
|
||||
|
||||
## OUTPUT REQUIREMENTS
|
||||
|
||||
### Output File
|
||||
**Path**: `.workflow/{session_id}/.process/codex-feasibility-validation.md`
|
||||
**Format**: Follow structure from `~/.claude/workflows/cli-templates/prompts/workflow/analysis-results-structure.txt`
|
||||
|
||||
### Required Sections
|
||||
Focus on these sections from the template:
|
||||
- Executive Summary (with Codex perspective)
|
||||
- Current State Analysis (validation findings)
|
||||
- Implementation Strategy (feasibility assessment)
|
||||
- Solution Optimization (risk mitigation)
|
||||
- Confidence Scores (technical feasibility focus)
|
||||
|
||||
### Content Guidelines
|
||||
- ✅ Focus on technical feasibility and risk assessment
|
||||
- ✅ Verify code targets from Gemini's design
|
||||
- ✅ Provide concrete risk mitigation strategies
|
||||
- ✅ Quantify complexity and effort estimates
|
||||
- ❌ Do NOT create task breakdowns
|
||||
- ❌ Do NOT provide step-by-step implementation guides
|
||||
- ❌ Do NOT include code examples
|
||||
|
||||
## VALIDATION METHODOLOGY
|
||||
|
||||
### Complexity Scoring
|
||||
Rate each aspect on 1-5 scale:
|
||||
- Technical Complexity
|
||||
- Integration Complexity
|
||||
- Performance Risk
|
||||
- Security Risk
|
||||
- Maintenance Burden
|
||||
|
||||
### Risk Classification
|
||||
- **LOW**: Minor issues, easily addressable
|
||||
- **MEDIUM**: Manageable challenges with clear mitigation
|
||||
- **HIGH**: Significant concerns requiring major mitigation
|
||||
- **CRITICAL**: Fundamental viability threats
|
||||
|
||||
### Feasibility Judgment
|
||||
- **PROCEED**: Technically feasible with acceptable risk
|
||||
- **PROCEED_WITH_MODIFICATIONS**: Feasible but needs adjustments
|
||||
- **RECONSIDER**: High risk, major changes needed
|
||||
- **REJECT**: Not feasible with current approach
|
||||
|
||||
## CONTEXT INTEGRATION
|
||||
|
||||
### Gemini Analysis Integration
|
||||
- Review proposed architecture and design decisions
|
||||
- Validate assumptions and technology choices
|
||||
- Cross-check code targets against actual codebase
|
||||
- Assess realism of performance targets
|
||||
|
||||
### Codebase Reality Check
|
||||
- Verify existing code capabilities
|
||||
- Identify actual technical constraints
|
||||
- Assess team skill compatibility
|
||||
- Evaluate infrastructure readiness
|
||||
|
||||
### Session Context
|
||||
- Consider session history and previous decisions
|
||||
- Align with project architecture standards
|
||||
- Respect existing patterns and conventions
|
||||
|
||||
## EXECUTION MODE
|
||||
|
||||
**Mode**: Analysis with write permission for output file
|
||||
**CLI Tool**: Codex with --skip-git-repo-check -s danger-full-access
|
||||
**Timeout**: 60-90 minutes for complex tasks
|
||||
**Output**: Single file codex-feasibility-validation.md
|
||||
**Trigger**: Only for complex tasks (>6 modules)
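
For illustration only, a dispatch of this validation pass might look roughly like the following (session path and prompt text are placeholders; the flags mirror the Codex command pattern used elsewhere in this repository):

```bash
# Sketch only: the real prompt text and paths are supplied by the workflow engine
session=".workflow/WFS-example/.process"
prompt="PURPOSE: Validate technical feasibility of the proposed design. CONTEXT: @${session}/context-package.json @${session}/gemini-solution-design.md EXPECTED: ${session}/codex-feasibility-validation.md RULES: Focus on risks and complexity, no task breakdowns"
codex --full-auto exec "$prompt" --skip-git-repo-check -s danger-full-access
```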
|
||||
|
||||
## VERIFICATION CHECKLIST ✓
|
||||
□ context-package.json and gemini-solution-design.md read
|
||||
□ Complexity rated on 1-5 scale with justification
|
||||
□ All risk categories assessed (technical, integration, performance, security)
|
||||
□ Code targets verified and refined
|
||||
□ Risk mitigation strategies provided
|
||||
□ Resource requirements estimated
|
||||
□ Final feasibility judgment (PROCEED/RECONSIDER/REJECT)
|
||||
□ Output written to .workflow/{session_id}/.process/codex-feasibility-validation.md
|
||||
|
||||
Focus: Technical feasibility validation with realistic risk assessment and mitigation strategies.
|
||||
@@ -0,0 +1,131 @@
|
||||
Analyze and design optimal solution with comprehensive architecture evaluation and design decisions.
|
||||
|
||||
## CORE CHECKLIST ⚡
|
||||
□ Read context-package.json to understand task requirements, source files, tech stack
|
||||
□ Analyze current architecture patterns and code structure
|
||||
□ Propose solution design with key decisions and rationale
|
||||
□ Focus on SOLUTION IMPROVEMENTS and KEY DESIGN DECISIONS
|
||||
□ Write output to specified .workflow/{session_id}/.process/ path
|
||||
|
||||
## ANALYSIS PRIORITY
|
||||
|
||||
### Source Hierarchy
|
||||
1. **PRIMARY**: Individual role analysis.md files (system-architect, ui-designer, data-architect, etc.)
|
||||
- Technical details and implementation considerations
|
||||
- Architecture Decision Records (ADRs)
|
||||
- Design decision context and rationale
|
||||
|
||||
2. **SECONDARY**: role analysis documents
|
||||
- Integrated requirements across roles
|
||||
- Cross-role alignment and dependencies
|
||||
- Unified feature specifications
|
||||
|
||||
3. **REFERENCE**: guidance-specification.md
|
||||
- Discussion context and background
|
||||
- Initial problem framing
|
||||
|
||||
## REQUIRED ANALYSIS
|
||||
|
||||
### 1. Current State Assessment
|
||||
- Identify existing architectural patterns and code structure
|
||||
- Map integration points and dependencies
|
||||
- Evaluate technical debt and pain points
|
||||
- Assess framework compatibility and constraints
|
||||
|
||||
### 2. Solution Design
|
||||
- Propose core architecture principles and approach
|
||||
- Design component architecture and data flow
|
||||
- Specify API contracts and integration strategy
|
||||
- Define technology stack with justification
|
||||
|
||||
### 3. Key Design Decisions
|
||||
For each critical decision:
|
||||
- **Decision**: What is being decided
|
||||
- **Rationale**: Why this approach
|
||||
- **Alternatives Considered**: Other options and their tradeoffs
|
||||
- **Impact**: Implications on architecture, performance, maintainability
|
||||
|
||||
Minimum 2 key decisions required.
|
||||
|
||||
### 4. Code Modification Targets
|
||||
Identify specific code locations for changes:
|
||||
- **Existing files**: `file:function:lines` format (e.g., `src/auth/login.ts:validateUser:45-52`)
|
||||
- **New files**: `file` only (e.g., `src/auth/PasswordReset.ts`)
|
||||
- **Unknown lines**: `file:function:*` (e.g., `src/auth/service.ts:refreshToken:*`)
|
||||
|
||||
For each target:
|
||||
- Type: Modify existing | Create new
|
||||
- Modification/Purpose: What changes needed
|
||||
- Rationale: Why this target
|
||||
|
||||
### 5. Critical Insights
|
||||
- Strengths: What works well in current/proposed design
|
||||
- Gaps: Missing capabilities or concerns
|
||||
- Risks: Technical, integration, performance, security
|
||||
- Optimization Opportunities: Performance, security, code quality
|
||||
|
||||
### 6. Feasibility Assessment
|
||||
- Technical Complexity: Rating and analysis
|
||||
- Performance Impact: Expected characteristics
|
||||
- Resource Requirements: Development effort
|
||||
- Maintenance Burden: Ongoing considerations
|
||||
|
||||
## OUTPUT REQUIREMENTS
|
||||
|
||||
### Output File
|
||||
**Path**: `.workflow/{session_id}/.process/gemini-solution-design.md`
|
||||
**Format**: Follow structure from `~/.claude/workflows/cli-templates/prompts/workflow/analysis-results-structure.txt`
|
||||
|
||||
### Required Sections
|
||||
- Executive Summary with feasibility score
|
||||
- Current State Analysis
|
||||
- Proposed Solution Design with 2+ key decisions
|
||||
- Implementation Strategy with code targets
|
||||
- Solution Optimization (performance, security, quality)
|
||||
- Critical Success Factors
|
||||
- Confidence Scores with recommendation
|
||||
|
||||
### Content Guidelines
|
||||
- ✅ Focus on solution improvements and key design decisions
|
||||
- ✅ Include rationale, alternatives, and tradeoffs for decisions
|
||||
- ✅ Provide specific code targets in correct format
|
||||
- ✅ Quantify assessments with scores (X/5)
|
||||
- ❌ Do NOT create task lists or implementation steps
|
||||
- ❌ Do NOT include code examples or snippets
|
||||
- ❌ Do NOT create project management timelines
|
||||
|
||||
## CONTEXT INTEGRATION
|
||||
|
||||
### Session Context
|
||||
- Load context-package.json for task requirements
|
||||
- Reference workflow-session.json for session state
|
||||
- Review CLAUDE.md for project standards
|
||||
|
||||
### Brainstorm Context
|
||||
If brainstorming artifacts exist:
|
||||
- Prioritize individual role analysis.md files
|
||||
- Use role analysis documents for integrated view
|
||||
- Reference guidance-specification.md for context
|
||||
|
||||
### Codebase Context
|
||||
- Identify similar patterns in existing code
|
||||
- Evaluate success/failure of current approaches
|
||||
- Ensure consistency with project architecture
|
||||
|
||||
## EXECUTION MODE
|
||||
|
||||
**Mode**: Analysis with write permission for output file
|
||||
**CLI Tool**: Gemini wrapper with --approval-mode yolo
|
||||
**Timeout**: 40-60 minutes based on complexity
|
||||
**Output**: Single file gemini-solution-design.md
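
For illustration only, the design pass might be dispatched roughly as follows (session path and prompt text are placeholders; the flags mirror the Gemini command pattern used elsewhere in this repository):

```bash
# Sketch only: the real prompt text and paths are supplied by the workflow engine
session=".workflow/WFS-example/.process"
prompt="PURPOSE: Propose solution design for the task. CONTEXT: @${session}/context-package.json EXPECTED: ${session}/gemini-solution-design.md RULES: Focus on design decisions and tradeoffs, no code examples"
gemini --approval-mode yolo "$prompt"
```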
|
||||
|
||||
## VERIFICATION CHECKLIST ✓
|
||||
□ context-package.json read and analyzed
|
||||
□ All 7 required sections present in output
|
||||
□ 2+ key design decisions with rationale and alternatives
|
||||
□ Code targets specified in correct format
|
||||
□ Feasibility scores provided (X/5)
|
||||
□ Final recommendation (PROCEED/RECONSIDER/REJECT)
|
||||
□ Output written to .workflow/{session_id}/.process/gemini-solution-design.md
|
||||
|
||||
Focus: Comprehensive solution design emphasizing architecture decisions and critical insights.
|
||||
@@ -0,0 +1,286 @@
|
||||
IMPL_PLAN.md Template - Implementation Plan Document Structure
|
||||
|
||||
## Document Frontmatter
|
||||
|
||||
```yaml
|
||||
---
|
||||
identifier: WFS-{session-id}
|
||||
source: "User requirements" | "File: path" | "Issue: ISS-001"
|
||||
analysis: .workflow/{session-id}/.process/ANALYSIS_RESULTS.md
|
||||
artifacts: .workflow/{session-id}/.brainstorming/
|
||||
context_package: .workflow/{session-id}/.process/context-package.json # CCW smart context
|
||||
workflow_type: "standard | tdd | design" # Indicates execution model
|
||||
verification_history: # CCW quality gates
|
||||
concept_verify: "passed | skipped | pending"
|
||||
action_plan_verify: "pending"
|
||||
phase_progression: "brainstorm → context → analysis → concept_verify → planning" # CCW workflow phases
|
||||
---
|
||||
```
|
||||
|
||||
## Document Structure
|
||||
|
||||
# Implementation Plan: {Project Title}
|
||||
|
||||
## 1. Summary
|
||||
Core requirements, objectives, technical approach summary (2-3 paragraphs max).
|
||||
|
||||
**Core Objectives**:
|
||||
- [Key objective 1]
|
||||
- [Key objective 2]
|
||||
|
||||
**Technical Approach**:
|
||||
- [High-level approach]
|
||||
|
||||
## 2. Context Analysis
|
||||
|
||||
### CCW Workflow Context
|
||||
**Phase Progression**:
|
||||
- ✅ Phase 1: Brainstorming (role analyses generated)
|
||||
- ✅ Phase 2: Context Gathering (context-package.json: {N} files, {M} modules analyzed)
|
||||
- ✅ Phase 3: Enhanced Analysis (ANALYSIS_RESULTS.md: Gemini/Qwen/Codex parallel insights)
|
||||
- ✅ Phase 4: Concept Verification ({X} clarifications answered, role analyses updated | skipped)
|
||||
- ⏳ Phase 5: Action Planning (current phase - generating IMPL_PLAN.md)
|
||||
|
||||
**Quality Gates**:
|
||||
- concept-verify: ✅ Passed (0 ambiguities remaining) | ⏭️ Skipped (user decision) | ⏳ Pending
|
||||
- action-plan-verify: ⏳ Pending (recommended before /workflow:execute)
|
||||
|
||||
**Context Package Summary**:
|
||||
- **Focus Paths**: {list key directories from context-package.json}
|
||||
- **Key Files**: {list primary files for modification}
|
||||
- **Module Depth Analysis**: {from get_modules_by_depth.sh output}
|
||||
- **Smart Context**: {total file count} files, {module count} modules, {dependency count} dependencies identified
|
||||
|
||||
### Project Profile
|
||||
- **Type**: Greenfield/Enhancement/Refactor
|
||||
- **Scale**: User count, data volume, complexity
|
||||
- **Tech Stack**: Primary technologies
|
||||
- **Timeline**: Duration and milestones
|
||||
|
||||
### Module Structure
|
||||
```
|
||||
[Directory tree showing key modules]
|
||||
```
|
||||
|
||||
### Dependencies
|
||||
**Primary**: [Core libraries and frameworks]
|
||||
**APIs**: [External services]
|
||||
**Development**: [Testing, linting, CI/CD tools]
|
||||
|
||||
### Patterns & Conventions
|
||||
- **Architecture**: [Key patterns like DI, Event-Driven]
|
||||
- **Component Design**: [Design patterns]
|
||||
- **State Management**: [State strategy]
|
||||
- **Code Style**: [Naming, TypeScript coverage]
|
||||
|
||||
## 3. Brainstorming Artifacts Reference
|
||||
|
||||
### Artifact Usage Strategy
|
||||
**Primary Reference (role analyses)**:
|
||||
- **What**: Role-specific analyses from brainstorming providing multi-perspective insights
|
||||
- **When**: Every task references relevant role analyses for requirements and design decisions
|
||||
- **How**: Extract requirements, architecture decisions, UI/UX patterns from applicable role documents
|
||||
- **Priority**: Collective authoritative source - multiple role perspectives provide comprehensive coverage
|
||||
- **CCW Value**: Maintains role-specific expertise while enabling cross-role integration during planning
|
||||
|
||||
**Context Intelligence (context-package.json)**:
|
||||
- **What**: Smart context gathered by CCW's context-gather phase
|
||||
- **Content**: Focus paths, dependency graph, existing patterns, module structure
|
||||
- **Usage**: Tasks load this via `flow_control.preparatory_steps` for environment setup
|
||||
- **CCW Value**: Automated intelligent context discovery replacing manual file exploration
|
||||
|
||||
**Technical Analysis (ANALYSIS_RESULTS.md)**:
|
||||
- **What**: Gemini/Qwen/Codex parallel analysis results
|
||||
- **Content**: Optimization strategies, risk assessment, architecture review, implementation patterns
|
||||
- **Usage**: Referenced in task planning for technical guidance and risk mitigation
|
||||
- **CCW Value**: Multi-model parallel analysis providing comprehensive technical intelligence
|
||||
|
||||
### Integrated Specifications (Highest Priority)
|
||||
- **role analyses**: Comprehensive implementation blueprint
|
||||
- Contains: Architecture design, UI/UX guidelines, functional/non-functional requirements, implementation roadmap, risk assessment
|
||||
|
||||
### Supporting Artifacts (Reference)
|
||||
- **guidance-specification.md**: Role-specific discussion points and analysis framework
|
||||
- **system-architect/analysis.md**: Detailed architecture specifications
|
||||
- **ui-designer/analysis.md**: Layout and component specifications
|
||||
- **product-manager/analysis.md**: Product vision and user stories
|
||||
|
||||
**Artifact Priority in Development**:
|
||||
1. role analyses (primary reference for all tasks)
|
||||
2. context-package.json (smart context for execution environment)
|
||||
3. ANALYSIS_RESULTS.md (technical analysis and optimization strategies)
|
||||
4. Role-specific analyses (fallback for detailed specifications)
|
||||
|
||||
## 4. Implementation Strategy
|
||||
|
||||
### Execution Strategy
|
||||
**Execution Model**: [Sequential | Parallel | Phased | TDD Cycles]
|
||||
|
||||
**Rationale**: [Why this execution model fits the project]
|
||||
|
||||
**Parallelization Opportunities**:
|
||||
- [List independent workstreams]
|
||||
|
||||
**Serialization Requirements**:
|
||||
- [List critical dependencies]
|
||||
|
||||
### Architectural Approach
|
||||
**Key Architecture Decisions**:
|
||||
- [ADR references from role analyses]
|
||||
- [Justification for architecture patterns]
|
||||
|
||||
**Integration Strategy**:
|
||||
- [How modules communicate]
|
||||
- [State management approach]
|
||||
|
||||
### Key Dependencies
|
||||
**Task Dependency Graph**:
|
||||
```
|
||||
[High-level dependency visualization]
|
||||
```
|
||||
|
||||
**Critical Path**: [Identify bottleneck tasks]
|
||||
|
||||
### Testing Strategy
|
||||
**Testing Approach**:
|
||||
- Unit testing: [Tools, scope]
|
||||
- Integration testing: [Key integration points]
|
||||
- E2E testing: [Critical user flows]
|
||||
|
||||
**Coverage Targets**:
|
||||
- Lines: ≥70%
|
||||
- Functions: ≥70%
|
||||
- Branches: ≥65%
|
||||
|
||||
**Quality Gates**:
|
||||
- [CI/CD gates]
|
||||
- [Performance budgets]
|
||||
|
||||
## 5. Task Breakdown Summary
|
||||
|
||||
### Task Count
|
||||
**{N} tasks** (flat hierarchy | two-level hierarchy, sequential | parallel execution)
|
||||
|
||||
### Task Structure
|
||||
- **IMPL-1**: [Main task title]
|
||||
- **IMPL-2**: [Main task title]
|
||||
...
|
||||
|
||||
### Complexity Assessment
|
||||
- **High**: [List with rationale]
|
||||
- **Medium**: [List]
|
||||
- **Low**: [List]
|
||||
|
||||
### Dependencies
|
||||
[Reference Section 4.3 for dependency graph]
|
||||
|
||||
**Parallelization Opportunities**:
|
||||
- [Specific task groups that can run in parallel]
|
||||
|
||||
## 6. Implementation Plan (Detailed Phased Breakdown)
|
||||
|
||||
### Execution Strategy
|
||||
|
||||
**Phase 1 (Weeks 1-2): [Phase Name]**
|
||||
- **Tasks**: IMPL-1, IMPL-2
|
||||
- **Deliverables**:
|
||||
- [Specific deliverable 1]
|
||||
- [Specific deliverable 2]
|
||||
- **Success Criteria**:
|
||||
- [Measurable criterion]
|
||||
|
||||
**Phase 2 (Weeks 3-N): [Phase Name]**
|
||||
...
|
||||
|
||||
### Resource Requirements
|
||||
|
||||
**Development Team**:
|
||||
- [Team composition and skills]
|
||||
|
||||
**External Dependencies**:
|
||||
- [Third-party services, APIs]
|
||||
|
||||
**Infrastructure**:
|
||||
- [Development, staging, production environments]
|
||||
|
||||
## 7. Risk Assessment & Mitigation
|
||||
|
||||
| Risk | Impact | Probability | Mitigation Strategy | Owner |
|
||||
|------|--------|-------------|---------------------|-------|
|
||||
| [Risk description] | High/Med/Low | High/Med/Low | [Strategy] | [Role] |
|
||||
|
||||
**Critical Risks** (High impact + High probability):
|
||||
- [Risk 1]: [Detailed mitigation plan]
|
||||
|
||||
**Monitoring Strategy**:
|
||||
- [How risks will be monitored]
|
||||
|
||||
## 8. Success Criteria
|
||||
|
||||
**Functional Completeness**:
|
||||
- [ ] All requirements from role analyses implemented
|
||||
- [ ] All acceptance criteria from task.json files met
|
||||
|
||||
**Technical Quality**:
|
||||
- [ ] Test coverage ≥70%
|
||||
- [ ] Bundle size within budget
|
||||
- [ ] Performance targets met
|
||||
|
||||
**Operational Readiness**:
|
||||
- [ ] CI/CD pipeline operational
|
||||
- [ ] Monitoring and logging configured
|
||||
- [ ] Documentation complete
|
||||
|
||||
**Business Metrics**:
|
||||
- [ ] [Key business metrics from role analyses]
|
||||
|
||||
## Template Usage Guidelines
|
||||
|
||||
### When Generating IMPL_PLAN.md
|
||||
|
||||
1. **Fill Frontmatter Variables**:
|
||||
- Replace {session-id} with actual session ID
|
||||
- Set workflow_type based on planning phase
|
||||
- Update verification_history based on concept-verify results
|
||||
|
||||
2. **Populate CCW Workflow Context**:
|
||||
- Extract file/module counts from context-package.json
|
||||
- Document phase progression based on completed workflow steps
|
||||
- Update quality gate status (passed/skipped/pending)
|
||||
|
||||
3. **Extract from Analysis Results**:
|
||||
- Core objectives from ANALYSIS_RESULTS.md
|
||||
- Technical approach and architecture decisions
|
||||
- Risk assessment and mitigation strategies
|
||||
|
||||
4. **Reference Brainstorming Artifacts**:
|
||||
- List detected artifacts with correct paths
|
||||
- Document artifact priority and usage strategy
|
||||
- Map artifacts to specific tasks based on domain
|
||||
|
||||
5. **Define Implementation Strategy**:
|
||||
- Choose execution model (sequential/parallel/phased)
|
||||
- Identify parallelization opportunities
|
||||
- Document critical path and dependencies
|
||||
|
||||
6. **Break Down Tasks**:
|
||||
- List all task IDs and titles
|
||||
- Assess complexity (high/medium/low)
|
||||
- Create dependency graph visualization
|
||||
|
||||
7. **Set Success Criteria**:
|
||||
- Extract from role analyses
|
||||
- Include measurable metrics
|
||||
- Define quality gates
|
||||
|
||||
### Validation Checklist
|
||||
|
||||
Before finalizing IMPL_PLAN.md:
|
||||
- [ ] All frontmatter fields populated correctly
|
||||
- [ ] CCW workflow context reflects actual phase progression
|
||||
- [ ] Brainstorming artifacts correctly referenced
|
||||
- [ ] Task breakdown matches generated task JSONs
|
||||
- [ ] Dependencies are acyclic and logical
|
||||
- [ ] Success criteria are measurable
|
||||
- [ ] Risk assessment includes mitigation strategies
|
||||
- [ ] All {placeholder} variables replaced with actual values
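
Parts of this checklist can be spot-checked mechanically. A minimal sketch, assuming a POSIX shell and the frontmatter fields defined in the template above (the plan path is illustrative):

```bash
# Hypothetical pre-finalization check for IMPL_PLAN.md
plan=".workflow/WFS-example/IMPL_PLAN.md"
for field in identifier source analysis context_package workflow_type; do
  grep -q "^${field}:" "$plan" || echo "Missing frontmatter field: $field"
done
# Any surviving {placeholder} tokens indicate unreplaced template variables
grep -n "{[A-Za-z0-9_-]*}" "$plan" | head -5
```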
|
||||
@@ -0,0 +1,123 @@
|
||||
Task JSON Schema - Agent Mode (No Command Field)
|
||||
|
||||
## Schema Structure
|
||||
|
||||
```json
|
||||
{
|
||||
"id": "IMPL-N[.M]",
|
||||
"title": "Descriptive task name",
|
||||
"status": "pending",
|
||||
"context_package_path": "{context_package_path}",
|
||||
"meta": {
|
||||
"type": "feature|bugfix|refactor|test|docs",
|
||||
"agent": "@code-developer|@test-fix-agent|@universal-executor"
|
||||
},
|
||||
"context": {
|
||||
"requirements": ["extracted from analysis"],
|
||||
"focus_paths": ["src/paths"],
|
||||
"acceptance": ["measurable criteria"],
|
||||
"depends_on": ["IMPL-N"],
|
||||
"artifacts": [
|
||||
{
|
||||
"type": "synthesis_specification",
|
||||
"path": "{synthesis_spec_path}",
|
||||
"priority": "highest",
|
||||
"usage": "Primary requirement source - use for consolidated requirements and cross-role alignment"
|
||||
},
|
||||
{
|
||||
"type": "role_analysis",
|
||||
"path": "{role_analysis_path}",
|
||||
"priority": "high",
|
||||
"usage": "Technical/design/business details from specific roles. Common roles: system-architect (ADRs, APIs, caching), ui-designer (design tokens, layouts), product-manager (user stories, metrics)"
|
||||
}
|
||||
]
|
||||
},
|
||||
"flow_control": {
|
||||
"pre_analysis": [
|
||||
{
|
||||
"step": "load_role_analyses_specification",
|
||||
"action": "Load consolidated role analyses",
|
||||
"commands": [
|
||||
"Read({synthesis_spec_path})"
|
||||
],
|
||||
"output_to": "synthesis_specification",
|
||||
"on_error": "fail"
|
||||
},
|
||||
{
|
||||
"step": "load_context_package",
|
||||
"action": "Load context package for project structure",
|
||||
"commands": [
|
||||
"Read({context_package_path})"
|
||||
],
|
||||
"output_to": "context_pkg",
|
||||
"on_error": "fail"
|
||||
},
|
||||
{
|
||||
"step": "local_codebase_exploration",
|
||||
"action": "Explore codebase using local search",
|
||||
"commands": [
|
||||
"bash(rg '^(function|class|interface).*{keyword}' --type ts -n --max-count 15)",
|
||||
"bash(find . -name '*{keyword}*' -type f | grep -v node_modules | head -10)"
|
||||
],
|
||||
"output_to": "codebase_structure",
|
||||
"on_error": "skip_optional"
|
||||
}
|
||||
],
|
||||
"implementation_approach": [
|
||||
{
|
||||
"step": 1,
|
||||
"title": "Implement task following role analyses",
|
||||
"description": "Implement '{title}' following [synthesis_specification] requirements and [context_pkg] patterns. Use role analyses as primary source, consult artifacts for technical details.",
|
||||
"modification_points": [
|
||||
"Apply consolidated requirements from role analyses",
|
||||
"Follow technical guidelines from synthesis",
|
||||
"Consult artifacts for implementation details when needed",
|
||||
"Integrate with existing patterns"
|
||||
],
|
||||
"logic_flow": [
|
||||
"Load role analyses and context package",
|
||||
"Analyze existing patterns from [codebase_structure]",
|
||||
"Implement following specification",
|
||||
"Consult artifacts for technical details when needed",
|
||||
"Validate against acceptance criteria"
|
||||
],
|
||||
"depends_on": [],
|
||||
"output": "implementation"
|
||||
}
|
||||
],
|
||||
"target_files": ["file:function:lines", "path/to/NewFile.ts"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Key Features - Agent Mode
|
||||
|
||||
**Execution Model**: Agent interprets `modification_points` and `logic_flow` to execute autonomously
|
||||
|
||||
**No Command Field**: Steps in `implementation_approach` do NOT include `command` field
|
||||
|
||||
**Context Loading**: Context loaded via `pre_analysis` steps, available as variables (e.g., [synthesis_specification], [context_pkg])
|
||||
|
||||
**Agent Execution**:
|
||||
- Agent reads modification_points and logic_flow
|
||||
- Agent performs implementation autonomously
|
||||
- Agent validates against acceptance criteria
|
||||
|
||||
## Field Descriptions
|
||||
|
||||
**implementation_approach**: Array of step objects (NO command field)
|
||||
- **step**: Sequential step number
|
||||
- **title**: Step description
|
||||
- **description**: Detailed instructions with variable references
|
||||
- **modification_points**: Specific code modifications to apply
|
||||
- **logic_flow**: Business logic execution sequence
|
||||
- **depends_on**: Step dependencies (empty array for independent steps)
|
||||
- **output**: Expected deliverable variable name
|
||||
|
||||
## Usage Guidelines
|
||||
|
||||
1. **Load Context**: Use pre_analysis to load synthesis, context package, and explore codebase
|
||||
2. **Reference Variables**: Use [variable_name] to reference outputs from pre_analysis steps
|
||||
3. **Clear Instructions**: Provide detailed modification_points and logic_flow for agent
|
||||
4. **No Commands**: Never add command field to implementation_approach steps
|
||||
5. **Agent Autonomy**: Let agent interpret and execute based on provided instructions
|
||||
@@ -0,0 +1,182 @@
|
||||
Task JSON Schema - CLI Execute Mode (With Command Field)
|
||||
|
||||
## Schema Structure
|
||||
|
||||
```json
|
||||
{
|
||||
"id": "IMPL-N[.M]",
|
||||
"title": "Descriptive task name",
|
||||
"status": "pending",
|
||||
"context_package_path": "{context_package_path}",
|
||||
"meta": {
|
||||
"type": "feature|bugfix|refactor|test|docs",
|
||||
"agent": "@code-developer|@test-fix-agent|@universal-executor"
|
||||
},
|
||||
"context": {
|
||||
"requirements": ["extracted from analysis"],
|
||||
"focus_paths": ["src/paths"],
|
||||
"acceptance": ["measurable criteria"],
|
||||
"depends_on": ["IMPL-N"],
|
||||
"artifacts": [
|
||||
{
|
||||
"type": "synthesis_specification",
|
||||
"path": "{synthesis_spec_path}",
|
||||
"priority": "highest",
|
||||
"usage": "Primary requirement source - use for consolidated requirements and cross-role alignment"
|
||||
},
|
||||
{
|
||||
"type": "role_analysis",
|
||||
"path": "{role_analysis_path}",
|
||||
"priority": "high",
|
||||
"usage": "Technical/design/business details from specific roles"
|
||||
}
|
||||
]
|
||||
},
|
||||
"flow_control": {
|
||||
"pre_analysis": [
|
||||
{
|
||||
"step": "load_synthesis_specification",
|
||||
"action": "Load consolidated synthesis specification",
|
||||
"commands": [
|
||||
"Read({synthesis_spec_path})"
|
||||
],
|
||||
"output_to": "synthesis_specification",
|
||||
"on_error": "fail"
|
||||
},
|
||||
{
|
||||
"step": "load_context_package",
|
||||
"action": "Load context package",
|
||||
"commands": [
|
||||
"Read({context_package_path})"
|
||||
],
|
||||
"output_to": "context_pkg",
|
||||
"on_error": "fail"
|
||||
},
|
||||
{
|
||||
"step": "local_codebase_exploration",
|
||||
"action": "Explore codebase using local search",
|
||||
"commands": [
|
||||
"bash(rg '^(function|class|interface).*{keyword}' --type ts -n --max-count 15)",
|
||||
"bash(find . -name '*{keyword}*' -type f | grep -v node_modules | head -10)"
|
||||
],
|
||||
"output_to": "codebase_structure",
|
||||
"on_error": "skip_optional"
|
||||
}
|
||||
],
|
||||
"implementation_approach": [
|
||||
{
|
||||
"step": 1,
|
||||
"title": "Implement task with Codex",
|
||||
"description": "Implement '{title}' using Codex CLI tool",
|
||||
"command": "bash(codex -C {focus_path} --full-auto exec \"PURPOSE: {purpose} TASK: {task_description} MODE: auto CONTEXT: @{{synthesis_spec_path}} @{{context_package_path}} EXPECTED: {expected_output} RULES: Follow synthesis specification\" --skip-git-repo-check -s danger-full-access)",
|
||||
"modification_points": [
|
||||
"Create/modify implementation files",
|
||||
"Follow synthesis specification requirements",
|
||||
"Integrate with existing patterns"
|
||||
],
|
||||
"logic_flow": [
|
||||
"Codex loads context package and synthesis",
|
||||
"Codex implements according to specification",
|
||||
"Codex validates against acceptance criteria"
|
||||
],
|
||||
"depends_on": [],
|
||||
"output": "implementation"
|
||||
}
|
||||
],
|
||||
"target_files": ["file:function:lines", "path/to/NewFile.ts"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Multi-Step Example (Complex Task with Resume)
|
||||
|
||||
```json
|
||||
{
|
||||
"id": "IMPL-002",
|
||||
"title": "Implement RBAC system",
|
||||
"flow_control": {
|
||||
"implementation_approach": [
|
||||
{
|
||||
"step": 1,
|
||||
"title": "Create RBAC models",
|
||||
"description": "Create role and permission data models",
|
||||
"command": "bash(codex -C src/models --full-auto exec \"PURPOSE: Create RBAC models TASK: Define role and permission models MODE: auto CONTEXT: @{{synthesis_spec_path}} @{{context_package_path}} EXPECTED: Models with migrations RULES: Follow synthesis spec\" --skip-git-repo-check -s danger-full-access)",
|
||||
"modification_points": ["Define role model", "Define permission model"],
|
||||
"logic_flow": ["Design schema", "Implement models", "Generate migrations"],
|
||||
"depends_on": [],
|
||||
"output": "rbac_models"
|
||||
},
|
||||
{
|
||||
"step": 2,
|
||||
"title": "Implement RBAC middleware",
|
||||
"description": "Create route protection middleware",
|
||||
"command": "bash(codex --full-auto exec \"PURPOSE: Create RBAC middleware TASK: Route protection middleware MODE: auto CONTEXT: RBAC models from step 1 EXPECTED: Middleware for route protection RULES: Use session patterns\" resume --last --skip-git-repo-check -s danger-full-access)",
|
||||
"modification_points": ["Create permission checker", "Add route decorators"],
|
||||
"logic_flow": ["Check user role", "Validate permissions", "Allow/deny access"],
|
||||
"depends_on": [1],
|
||||
"output": "rbac_middleware"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Key Features - CLI Execute Mode
|
||||
|
||||
**Execution Model**: Commands in `command` field execute steps directly
|
||||
|
||||
**Command Field Required**: Every step in `implementation_approach` MUST include `command` field
|
||||
|
||||
**Context Delivery**: Context provided via CONTEXT field in command prompt using `@{path}` syntax
|
||||
|
||||
**Multi-Step Support**:
|
||||
- First step: Full context with `-C directory` and complete CONTEXT field
|
||||
- Subsequent steps: Use `resume --last` to maintain session continuity
|
||||
- Step dependencies: Use `depends_on` array to specify step order
|
||||
|
||||
## Command Templates
|
||||
|
||||
### Single-Step Codex Command
|
||||
```bash
|
||||
bash(codex -C {focus_path} --full-auto exec "PURPOSE: {purpose} TASK: {task} MODE: auto CONTEXT: @{{synthesis_spec_path}} @{{context_package_path}} EXPECTED: {expected} RULES: {rules}" --skip-git-repo-check -s danger-full-access)
|
||||
```
|
||||
|
||||
### Multi-Step Codex with Resume
|
||||
```bash
|
||||
# First step
|
||||
bash(codex -C {path} --full-auto exec "..." --skip-git-repo-check -s danger-full-access)
|
||||
|
||||
# Subsequent steps
|
||||
bash(codex --full-auto exec "..." resume --last --skip-git-repo-check -s danger-full-access)
|
||||
```
|
||||
|
||||
### Gemini/Qwen Commands (Analysis/Documentation)
|
||||
```bash
|
||||
bash(gemini "PURPOSE: {purpose} TASK: {task} MODE: analysis CONTEXT: @{synthesis_spec_path} EXPECTED: {expected} RULES: {rules}")
|
||||
|
||||
# With write permission
|
||||
bash(gemini --approval-mode yolo "PURPOSE: {purpose} TASK: {task} MODE: write CONTEXT: @{context} EXPECTED: {expected} RULES: {rules}")
|
||||
```
|
||||
|
||||
## Field Descriptions
|
||||
|
||||
**implementation_approach**: Array of step objects (WITH command field)
|
||||
- **step**: Sequential step number
|
||||
- **title**: Step description
|
||||
- **description**: Brief step description
|
||||
- **command**: Complete CLI command to execute the step
|
||||
- **modification_points**: Specific code modifications (for reference)
|
||||
- **logic_flow**: Execution sequence (for reference)
|
||||
- **depends_on**: Step dependencies (array of step numbers, empty for independent)
|
||||
- **output**: Expected deliverable variable name
|
||||
|
||||
## Usage Guidelines
|
||||
|
||||
1. **Always Include Command**: Every step MUST have a `command` field
|
||||
2. **Context via CONTEXT Field**: Provide context using `@{path}` syntax in command prompt
|
||||
3. **First Step Full Context**: First step should include `-C directory` and full context package
|
||||
4. **Resume for Continuity**: Use `resume --last` for subsequent steps in same task
|
||||
5. **Step Dependencies**: Use `depends_on: [1, 2]` to specify execution order
|
||||
6. **Parameter Position**:
|
||||
- Codex: `--skip-git-repo-check -s danger-full-access` at END
|
||||
- Gemini/Qwen: `--approval-mode yolo` BEFORE the prompt
|
||||
@@ -8,89 +8,61 @@ type: search-guideline
|
||||
|
||||
## ⚡ Execution Environment
|
||||
|
||||
**CRITICAL**: All commands execute in **Bash environment** (Git Bash on Windows, Bash on Linux/macOS)
|
||||
**CRITICAL**: All commands execute in **Bash environment** (Git Bash on Windows)
|
||||
|
||||
**❌ Forbidden**: Windows-specific commands (`findstr`, `dir`, `where`, `type`, `copy`, `del`) - Use Bash equivalents (`grep`, `find`, `which`, `cat`, `cp`, `rm`)
|
||||
**❌ Forbidden**: Windows commands (`findstr`, `dir`, `where`) - Use Bash (`grep`, `find`, `cat`)
|
||||
|
||||
## ⚡ Core Search Tools
|
||||
|
||||
**codebase-retrieval**: Semantic file discovery via Gemini CLI with all files analysis
|
||||
**rg (ripgrep)**: Fast content search with regex support
|
||||
**find**: File/directory location by name patterns
|
||||
**grep**: Built-in pattern matching in files
|
||||
**get_modules_by_depth.sh**: Program architecture analysis and structural discovery
|
||||
**grep**: Built-in pattern matching (fallback when rg unavailable)
|
||||
**get_modules_by_depth.sh**: Program architecture analysis (MANDATORY before planning)
|
||||
|
||||
### Decision Principles
|
||||
- **Use codebase-retrieval for semantic discovery** - Intelligent file discovery based on task context
|
||||
- **Use rg for content** - Fastest for searching within files
|
||||
- **Use find for files** - Locate files/directories by name
|
||||
- **Use grep sparingly** - Only when rg unavailable
|
||||
- **Use get_modules_by_depth.sh first** - MANDATORY for program architecture analysis before planning
|
||||
- **Always use Bash commands** - NEVER use Windows cmd/PowerShell commands
|
||||
|
||||
### Tool Selection Matrix
|
||||
## 📋 Tool Selection Matrix
|
||||
|
||||
| Need | Tool | Use Case |
|
||||
|------|------|----------|
|
||||
| **Semantic file discovery** | codebase-retrieval | Find files relevant to task/feature context |
|
||||
| **Semantic discovery** | codebase-retrieval | Find files relevant to task/feature context |
|
||||
| **Pattern matching** | rg | Search code content with regex |
|
||||
| **File name lookup** | find | Locate files by name patterns |
|
||||
| **Architecture analysis** | get_modules_by_depth.sh | Understand program structure |
|
||||
| **Architecture** | get_modules_by_depth.sh | Understand program structure |
|
||||
|
||||
## 🔧 Quick Command Reference
|
||||
|
||||
### Quick Command Reference
|
||||
```bash
|
||||
# Semantic File Discovery (codebase-retrieval)
|
||||
~/.claude/scripts/gemini-wrapper --all-files -p "List all files relevant to: [task/feature description]"
|
||||
bash(~/.claude/scripts/gemini-wrapper --all-files -p "List all files relevant to: [task/feature description]")
|
||||
cd [directory] && gemini -p "
|
||||
PURPOSE: Discover files relevant to task/feature
|
||||
TASK: List all files related to [task/feature description]
|
||||
MODE: analysis
|
||||
CONTEXT: @**/*
|
||||
EXPECTED: Relevant file paths with relevance explanation
|
||||
RULES: Focus on direct relevance to task requirements
|
||||
"
|
||||
|
||||
# Program Architecture Analysis (MANDATORY FIRST)
|
||||
~/.claude/scripts/get_modules_by_depth.sh # Discover program architecture
|
||||
bash(~/.claude/scripts/get_modules_by_depth.sh) # Analyze structural hierarchy
|
||||
# Program Architecture (MANDATORY FIRST)
|
||||
~/.claude/scripts/get_modules_by_depth.sh
|
||||
|
||||
# Content Search (rg preferred)
|
||||
rg "pattern" --type js # Search in JS files
|
||||
rg -i "case-insensitive" # Ignore case
|
||||
rg -n "show-line-numbers" # Show line numbers
|
||||
rg -A 3 -B 3 "context-lines" # Show 3 lines before/after
|
||||
rg "pattern" --type js -n # Search JS files with line numbers
|
||||
rg -i "case-insensitive" # Ignore case
|
||||
rg -C 3 "context" # Show 3 lines before/after
|
||||
|
||||
# File Search (find)
|
||||
find . -name "*.ts" -type f # Find TypeScript files
|
||||
# File Search
|
||||
find . -name "*.ts" -type f # Find TypeScript files
|
||||
find . -path "*/node_modules" -prune -o -name "*.js" -print
|
||||
|
||||
# Built-in alternatives
|
||||
grep -r "pattern" . # Recursive search (slower)
|
||||
grep -n -i "pattern" file.txt # Line numbers, case-insensitive
|
||||
# Workflow Examples
|
||||
rg "IMPL-\d+" .workflow/ --type json # Find task IDs
|
||||
find .workflow/ -name "*.json" -path "*/.task/*" # Locate task files
|
||||
rg "status.*pending" .workflow/.task/ # Find pending tasks
|
||||
```
|
||||
|
||||
### Workflow Integration Examples
|
||||
```bash
|
||||
# Semantic Discovery → Content Search → Analysis (Recommended Pattern)
|
||||
~/.claude/scripts/gemini-wrapper --all-files -p "List all files relevant to: [task/feature]" # Get relevant files
|
||||
rg "[pattern]" --type [filetype] # Then search within discovered files
|
||||
## ⚡ Performance Tips
|
||||
|
||||
# Program Architecture Analysis (MANDATORY BEFORE PLANNING)
|
||||
~/.claude/scripts/get_modules_by_depth.sh # Discover program architecture
|
||||
bash(~/.claude/scripts/get_modules_by_depth.sh) # Analyze structural hierarchy
|
||||
|
||||
# Search for task definitions
|
||||
rg "IMPL-\d+" .workflow/ --type json # Find task IDs
|
||||
find .workflow/ -name "*.json" -path "*/.task/*" # Locate task files
|
||||
|
||||
# Analyze workflow structure
|
||||
rg "status.*pending" .workflow/.task/ # Find pending tasks
|
||||
rg "depends_on" .workflow/.task/ -A 2 # Show dependencies
|
||||
|
||||
# Find workflow sessions
|
||||
find .workflow/ -name ".active-*" # Active sessions
|
||||
rg "WFS-" .workflow/ --type json # Session references
|
||||
|
||||
# Content analysis for planning
|
||||
rg "flow_control" .workflow/ -B 2 -A 5 # Flow control patterns
|
||||
find . -name "IMPL_PLAN.md" -exec grep -l "requirements" {} \;
|
||||
```
|
||||
|
||||
### Performance Tips
|
||||
- **rg > grep** for content search
|
||||
- **Use --type filters** to limit file types
|
||||
- **Exclude common dirs**: `--glob '!node_modules'`
|
||||
- **Use -F for literal** strings (no regex)
|
||||
- **Exclude dirs**: `--glob '!node_modules'`
|
||||
- **Use -F** for literal strings (no regex); a combined sketch follows this list
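A quick sketch applying these tips together; the search string and file names are placeholders, not paths from this repository:

```bash
# Literal (non-regex) content search, limited to TypeScript files, skipping node_modules
rg -F "createSession(" --type ts --glob '!node_modules' -n

# File-name search with node_modules pruned
find . -path "*/node_modules" -prune -o -name "*.spec.ts" -print
```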
|
||||
|
||||
@@ -7,67 +7,96 @@ type: strategic-guideline
|
||||
# Intelligent Tools Selection Strategy
|
||||
|
||||
## 📋 Table of Contents
|
||||
1. [Core Framework](#-core-framework)
|
||||
1. [Quick Start](#-quick-start)
|
||||
2. [Tool Specifications](#-tool-specifications)
|
||||
3. [Command Templates](#-command-templates)
|
||||
4. [Tool Selection Guide](#-tool-selection-guide)
|
||||
5. [Usage Patterns](#-usage-patterns)
|
||||
6. [Best Practices](#-best-practices)
|
||||
4. [Execution Configuration](#-execution-configuration)
|
||||
5. [Best Practices](#-best-practices)
|
||||
|
||||
---
|
||||
|
||||
## ⚡ Core Framework
|
||||
## ⚡ Quick Start
|
||||
|
||||
### Tool Overview
|
||||
- **Gemini**: Analysis, understanding, exploration & documentation (primary)
|
||||
- **Qwen**: Analysis, understanding, exploration & documentation (fallback, same capabilities as Gemini)
|
||||
- **Codex**: Development, implementation & automation
|
||||
|
||||
### Decision Principles
|
||||
### Model Selection (-m parameter)
|
||||
|
||||
**Gemini Models**:
|
||||
- `gemini-2.5-pro` - Analysis tasks (default)
|
||||
- `gemini-2.5-flash` - Documentation updates
|
||||
|
||||
**Qwen Models**:
|
||||
- `coder-model` - Code analysis (default, -m optional)
|
||||
- `vision-model` - Image analysis (rare usage)
|
||||
|
||||
**Codex Models**:
|
||||
- `gpt-5` - Analysis & execution (default)
|
||||
- `gpt5-codex` - Large context tasks
|
||||
|
||||
**Usage**: `tool -p "prompt" -m model-name` (NOTE: -m placed AFTER prompt)
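A short illustration of that placement for each tool; the prompts are placeholders:

```bash
gemini -p "Summarize the auth module" -m gemini-2.5-pro    # analysis (default model)
qwen -p "Review error handling in src/api"                 # coder-model default, -m optional
codex --full-auto exec "..." -m gpt-5 --skip-git-repo-check -s danger-full-access   # -m after prompt, flags at the end
```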
|
||||
|
||||
### Quick Decision Matrix
|
||||
|
||||
| Scenario | Tool | Command Pattern |
|
||||
|----------|------|-----------------|
|
||||
| **Exploring/Understanding** | Gemini → Qwen | `cd [dir] && gemini -p "PURPOSE:... CONTEXT: @**/*"` |
|
||||
| **Architecture/Analysis** | Gemini → Qwen | `cd [dir] && gemini -p "PURPOSE:... CONTEXT: @**/*"` |
|
||||
| **Building/Fixing** | Codex | `codex -C [dir] --full-auto exec "PURPOSE:... MODE: auto"` |
|
||||
| **Not sure?** | Multiple | Use tools in parallel |
|
||||
| **Small task?** | Still use tools | Tools are faster than manual work |
|
||||
|
||||
### Core Principles
|
||||
- **Use tools early and often** - Tools are faster, more thorough, and reliable than manual approaches
|
||||
- **When in doubt, use both** - Parallel usage provides comprehensive coverage
|
||||
- **Default to tools** - Use specialized tools for most coding tasks, no matter how small
|
||||
- **Lower barriers** - Engage tools immediately when encountering any complexity
|
||||
- **Context optimization** - Based on user intent, determine whether to use `-C [directory]` parameter for focused analysis to reduce irrelevant context import
|
||||
- **⚠️ Write operation protection** - For local codebase write/modify operations, require EXPLICIT user confirmation unless user provides clear instructions containing MODE=write or MODE=auto
|
||||
|
||||
### Quick Decision Rules
|
||||
1. **Exploring/Understanding?** → Start with Gemini (fallback to Qwen if needed)
|
||||
2. **Architecture/Analysis?** → Start with Gemini (fallback to Qwen if needed)
|
||||
3. **Building/Fixing?** → Start with Codex
|
||||
4. **Not sure?** → Use multiple tools in parallel
|
||||
5. **Small task?** → Still use tools - they're faster than manual work
|
||||
|
||||
---
|
||||
|
||||
## 🎯 Tool Specifications
|
||||
|
||||
### Gemini
|
||||
- **Command**: `~/.claude/scripts/gemini-wrapper`
|
||||
### Gemini & Qwen
|
||||
|
||||
#### Overview
|
||||
- **Commands**: `gemini` (primary) | `qwen` (fallback)
|
||||
- **Strengths**: Large context window, pattern recognition
|
||||
- **Best For**: Analysis, documentation generation, code exploration
|
||||
- **Permissions**: Default read-only analysis, MODE=write requires explicit specification (auto-enables --approval-mode yolo)
|
||||
- **Best For**: Analysis, documentation generation, code exploration, architecture review
|
||||
- **Permissions**: Default read-only analysis, MODE=write requires explicit specification
|
||||
- **Default MODE**: `analysis` (read-only)
|
||||
- **⚠️ Write Trigger**: Only when user explicitly requests "generate documentation", "modify code", or specifies MODE=write
|
||||
- **Priority**: Prefer Gemini; use Qwen as fallback when Gemini unavailable
|
||||
|
||||
#### MODE Options
|
||||
- `analysis` (default) - Read-only analysis and documentation generation
|
||||
- `write` - ⚠️ Create/modify codebase files (requires explicit specification, auto-enables --approval-mode yolo)
|
||||
|
||||
### Qwen
|
||||
- **Command**: `~/.claude/scripts/qwen-wrapper`
|
||||
- **Strengths**: Large context window, pattern recognition (same as Gemini)
|
||||
- **Best For**: Analysis, documentation generation, code exploration (fallback option when Gemini unavailable)
|
||||
- **Permissions**: Default read-only analysis, MODE=write requires explicit specification (auto-enables --approval-mode yolo)
|
||||
- **Default MODE**: `analysis` (read-only)
|
||||
- **⚠️ Write Trigger**: Only when user explicitly requests "generate documentation", "modify code", or specifies MODE=write
|
||||
- **Priority**: Secondary to Gemini - use as fallback for same tasks
|
||||
**analysis** (default) - Read-only analysis and documentation generation
|
||||
- **⚠️ CRITICAL CONSTRAINT**: Absolutely NO file creation, modification, or deletion operations
|
||||
- Analysis output should be returned as text response only
|
||||
- Use for: code review, architecture analysis, pattern discovery, documentation reading
|
||||
|
||||
#### MODE Options
|
||||
- `analysis` (default) - Read-only analysis and documentation generation (same as Gemini)
|
||||
- `write` - ⚠️ Create/modify codebase files (requires explicit specification, auto-enables --approval-mode yolo)
|
||||
**write** - ⚠️ Create/modify codebase files (requires explicit specification, auto-enables --approval-mode yolo)
|
||||
- Use for: generating documentation files, creating code files, modifying existing files
|
||||
|
||||
#### Tool Selection
|
||||
```bash
|
||||
# Default: Use Gemini
|
||||
gemini -p "analysis prompt"
|
||||
|
||||
# Fallback: Use Qwen if Gemini unavailable
|
||||
qwen -p "analysis prompt"
|
||||
```
|
||||
|
||||
#### Error Handling
|
||||
**⚠️ Gemini 429 Behavior**: May show HTTP 429 error but still return results - ignore error messages, only check if results exist (results present = success, no results = retry/fallback to Qwen)
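One way to encode the "results present = success" rule is a small fallback wrapper; this is a sketch that assumes an empty stdout means no results, it is not part of either CLI:

```bash
# Try Gemini first; ignore 429 noise on stderr and only check whether output exists
run_analysis() {
  local prompt="$1" result
  result=$(gemini -p "$prompt" 2>/dev/null)
  if [ -z "$result" ]; then
    result=$(qwen -p "$prompt")   # fallback when Gemini returned nothing
  fi
  printf '%s\n' "$result"
}
```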
|
||||
|
||||
---
|
||||
|
||||
### Codex
|
||||
|
||||
#### Overview
|
||||
- **Command**: `codex --full-auto exec`
|
||||
- **Strengths**: Autonomous development, mathematical reasoning
|
||||
- **Best For**: Implementation, testing, automation
|
||||
@@ -76,27 +105,39 @@ type: strategic-guideline
|
||||
- **⚠️ Write Trigger**: Only when user explicitly requests "implement", "modify", "generate code" AND specifies MODE
|
||||
|
||||
#### MODE Options
|
||||
- `auto` - ⚠️ Autonomous development with full file operations (requires explicit specification, enables -s danger-full-access)
|
||||
- `write` - ⚠️ Test generation and file modification (requires explicit specification)
|
||||
- **Default**: No default mode, MODE must be explicitly specified
|
||||
|
||||
**auto** - ⚠️ Autonomous development with full file operations
|
||||
- Requires explicit specification
|
||||
- Enables `-s danger-full-access`
|
||||
- Use for: feature implementation, bug fixes, autonomous development
|
||||
|
||||
**write** - ⚠️ Test generation and file modification
|
||||
- Requires explicit specification
|
||||
- Use for: test generation, focused file modifications
|
||||
|
||||
#### Session Management
|
||||
- `codex resume` - Resume previous interactive session (picker by default)
|
||||
- `codex exec "task" resume --last` - Continue most recent session with new task (maintains context)
|
||||
- `codex -i <image_file>` - Attach image(s) to initial prompt (useful for UI/design references)
|
||||
- **Multi-task Pattern**: First task uses `exec`, subsequent tasks use `exec "..." resume --last` for context continuity
|
||||
- **Parameter Position**: `resume --last` must be placed AFTER the prompt string at command END
|
||||
- **Example**:
|
||||
```bash
|
||||
# First task - establish session
|
||||
codex -C project --full-auto exec "Implement auth module" --skip-git-repo-check -s danger-full-access
|
||||
|
||||
# Subsequent tasks - continue same session
|
||||
codex --full-auto exec "Add JWT validation" resume --last --skip-git-repo-check -s danger-full-access
|
||||
codex --full-auto exec "Write auth tests" resume --last --skip-git-repo-check -s danger-full-access
|
||||
```
|
||||
**Basic Commands**:
|
||||
- `codex resume` - Resume previous interactive session (picker by default)
|
||||
- `codex resume --last` - Resume most recent session directly
|
||||
- `codex -i <image_file>` - Attach image(s) to initial prompt (useful for UI/design references)
|
||||
|
||||
**Multi-task Pattern**: First task uses `exec`, subsequent tasks use `exec "..." resume --last` for context continuity
|
||||
|
||||
**Parameter Position**: `resume --last` must be placed AFTER the prompt string at command END
|
||||
|
||||
**Example**:
|
||||
```bash
|
||||
# First task - establish session
|
||||
codex -C project --full-auto exec "Implement auth module" --skip-git-repo-check -s danger-full-access
|
||||
|
||||
# Subsequent tasks - continue same session
|
||||
codex --full-auto exec "Add JWT validation" resume --last --skip-git-repo-check -s danger-full-access
|
||||
codex --full-auto exec "Write auth tests" resume --last --skip-git-repo-check -s danger-full-access
|
||||
```
|
||||
|
||||
#### Auto-Resume Decision Rules
|
||||
|
||||
**When to use `resume --last`**:
|
||||
- Current task is related to/extends previous Codex task in conversation memory
|
||||
- Current task requires context from previous implementation
|
||||
@@ -114,125 +155,170 @@ type: strategic-guideline
|
||||
## 🎯 Command Templates
|
||||
|
||||
### Universal Template Structure
|
||||
|
||||
Every command MUST follow this structure:
|
||||
- [ ] **PURPOSE** - Clear goal and intent
|
||||
- [ ] **TASK** - Specific execution task
|
||||
- [ ] **TASK** - Specific execution task (use list format: • Task item 1 • Task item 2 • Task item 3)
|
||||
- [ ] **MODE** - Execution mode and permission level
|
||||
- [ ] **CONTEXT** - File references and memory context from previous sessions
|
||||
- [ ] **EXPECTED** - Clear expected results
|
||||
- [ ] **RULES** - Template reference and constraints
|
||||
- [ ] **RULES** - Template reference and constraints (include mode constraints: analysis=READ-ONLY | write=CREATE/MODIFY/DELETE | auto=FULL operations)
|
||||
|
||||
---
|
||||
|
||||
### Standard Command Formats
|
||||
|
||||
#### Gemini Commands
|
||||
#### Gemini & Qwen Commands
|
||||
|
||||
```bash
|
||||
# Gemini Analysis (read-only, default)
|
||||
cd [directory] && ~/.claude/scripts/gemini-wrapper -p "
|
||||
# Analysis Mode (read-only, default)
|
||||
# Use 'gemini' (primary) or 'qwen' (fallback)
|
||||
cd [directory] && gemini -p "
|
||||
PURPOSE: [clear analysis goal]
|
||||
TASK: [specific analysis task]
|
||||
MODE: analysis
|
||||
CONTEXT: [file references and memory context]
|
||||
CONTEXT: @**/* [default: all files, or specify file patterns]
|
||||
EXPECTED: [expected output]
|
||||
RULES: [template reference and constraints]
|
||||
"
|
||||
|
||||
# Gemini Write Mode (requires explicit MODE=write)
|
||||
# NOTE: --approval-mode yolo must be placed AFTER wrapper command, BEFORE -p
|
||||
cd [directory] && ~/.claude/scripts/gemini-wrapper --approval-mode yolo -p "
|
||||
# Model Selection Examples (NOTE: -m placed AFTER prompt)
|
||||
cd [directory] && gemini -p "..." -m gemini-2.5-pro # Analysis (default)
|
||||
cd [directory] && gemini -p "..." -m gemini-2.5-flash # Documentation updates
|
||||
cd [directory] && qwen -p "..." # coder-model (default, -m optional)
|
||||
cd [directory] && qwen -p "..." -m vision-model # Image analysis (rare)
|
||||
|
||||
# Write Mode (requires explicit MODE=write)
|
||||
# NOTE: --approval-mode yolo must be placed AFTER the prompt
|
||||
cd [directory] && gemini -p "
|
||||
PURPOSE: [clear goal]
|
||||
TASK: [specific task]
|
||||
MODE: write
|
||||
CONTEXT: [file references and memory context]
|
||||
CONTEXT: @**/* [default: all files, or specify file patterns]
|
||||
EXPECTED: [expected output]
|
||||
RULES: [template reference and constraints]
|
||||
"
|
||||
```
|
||||
" -m gemini-2.5-flash --approval-mode yolo
|
||||
|
||||
#### Qwen Commands
|
||||
```bash
|
||||
# Qwen Analysis (read-only, default) - Same as Gemini, use as fallback
|
||||
cd [directory] && ~/.claude/scripts/qwen-wrapper -p "
|
||||
PURPOSE: [clear analysis goal]
|
||||
TASK: [specific analysis task]
|
||||
MODE: analysis
|
||||
CONTEXT: [file references and memory context]
|
||||
EXPECTED: [expected output]
|
||||
RULES: [template reference and constraints]
|
||||
"
|
||||
|
||||
# Qwen Write Mode (requires explicit MODE=write)
|
||||
# NOTE: --approval-mode yolo must be placed AFTER wrapper command, BEFORE -p
|
||||
cd [directory] && ~/.claude/scripts/qwen-wrapper --approval-mode yolo -p "
|
||||
PURPOSE: [clear goal]
|
||||
TASK: [specific task]
|
||||
MODE: write
|
||||
CONTEXT: [file references and memory context]
|
||||
EXPECTED: [expected output]
|
||||
RULES: [template reference and constraints]
|
||||
"
|
||||
# Fallback: Replace 'gemini' with 'qwen' if Gemini unavailable
|
||||
cd [directory] && qwen -p "..." # coder-model default (-m optional)
|
||||
```
|
||||
|
||||
#### Codex Commands
|
||||
|
||||
```bash
|
||||
# Codex Development (requires explicit MODE=auto)
|
||||
# NOTE: --skip-git-repo-check and -s danger-full-access must be placed at command END
|
||||
# NOTE: -m, --skip-git-repo-check and -s danger-full-access must be placed at command END
|
||||
codex -C [directory] --full-auto exec "
|
||||
PURPOSE: [clear development goal]
|
||||
TASK: [specific development task]
|
||||
MODE: auto
|
||||
CONTEXT: [file references and memory context]
|
||||
CONTEXT: @**/* [default: all files, or specify file patterns and memory context]
|
||||
EXPECTED: [expected deliverables]
|
||||
RULES: [template reference and constraints]
|
||||
" --skip-git-repo-check -s danger-full-access
|
||||
" -m gpt-5 --skip-git-repo-check -s danger-full-access
|
||||
|
||||
# Model Selection Examples (NOTE: -m placed AFTER prompt, BEFORE flags)
|
||||
codex -C [directory] --full-auto exec "..." -m gpt-5 --skip-git-repo-check -s danger-full-access # Analysis & execution (default)
|
||||
codex -C [directory] --full-auto exec "..." -m gpt5-codex --skip-git-repo-check -s danger-full-access # Large context tasks
|
||||
|
||||
# Codex Test/Write Mode (requires explicit MODE=write)
|
||||
# NOTE: --skip-git-repo-check and -s danger-full-access must be placed at command END
|
||||
# NOTE: -m, --skip-git-repo-check and -s danger-full-access must be placed at command END
|
||||
codex -C [directory] --full-auto exec "
|
||||
PURPOSE: [clear goal]
|
||||
TASK: [specific task]
|
||||
MODE: write
|
||||
CONTEXT: [file references and memory context]
|
||||
CONTEXT: @**/* [default: all files, or specify file patterns and memory context]
|
||||
EXPECTED: [expected deliverables]
|
||||
RULES: [template reference and constraints]
|
||||
" --skip-git-repo-check -s danger-full-access
|
||||
" -m gpt-5 --skip-git-repo-check -s danger-full-access
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Directory Context Configuration
|
||||
Tools execute in current working directory:
|
||||
- **Gemini**: `cd path/to/project && ~/.claude/scripts/gemini-wrapper -p "prompt"`
|
||||
- **Qwen**: `cd path/to/project && ~/.claude/scripts/qwen-wrapper -p "prompt"`
|
||||
|
||||
**Tool Directory Navigation**:
|
||||
- **Gemini & Qwen**: `cd path/to/project && gemini -p "prompt"` (or `qwen`)
|
||||
- **Codex**: `codex -C path/to/project --full-auto exec "task"` (Codex still supports -C)
|
||||
- **Path types**: Supports both relative (`../project`) and absolute (`/full/path`) paths
|
||||
- **Token analysis**: For gemini-wrapper and qwen-wrapper, token counting happens in current directory
|
||||
- **Token analysis**: For Gemini/Qwen, token counting happens in current directory
|
||||
|
||||
### RULES Field Format
|
||||
#### ⚠️ Critical Directory Scope Rules
|
||||
|
||||
**Once `cd` to a directory**:
|
||||
- **@ references ONLY apply to current directory and its subdirectories**
|
||||
- `@**/*` = All files within current directory tree
|
||||
- `@*.ts` = TypeScript files in current directory tree
|
||||
- `@src/**/*` = Files within src subdirectory (if exists under current directory)
|
||||
- **CANNOT reference parent or sibling directories via @ alone**
|
||||
|
||||
**To reference files outside current directory (TWO-STEP REQUIREMENT)**:
|
||||
- **Step 1**: Add `--include-directories` parameter to make external directories ACCESSIBLE
|
||||
- **Step 2**: Explicitly reference external files in CONTEXT field with @ patterns
|
||||
- **⚠️ BOTH steps are MANDATORY** - missing either step will fail
|
||||
- Example: `cd src/auth && gemini -p "CONTEXT: @**/* @../shared/**/*" --include-directories ../shared`
|
||||
- **Rule**: If CONTEXT contains `@../dir/**/*`, command MUST include `--include-directories ../dir`
|
||||
- Without `--include-directories`, @ patterns CANNOT access parent/sibling directories at all (see the sketch below)
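A minimal wrong-versus-correct sketch of the two-step requirement; the directory names are illustrative:

```bash
# WRONG: CONTEXT references ../shared but the directory was never made accessible
cd src/auth && gemini -p "CONTEXT: @**/* @../shared/**/*"

# CORRECT: external files listed in CONTEXT AND the directory included on the command line
cd src/auth && gemini -p "CONTEXT: @**/* @../shared/**/*" --include-directories ../shared
```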
|
||||
|
||||
#### Multi-Directory Support (Gemini & Qwen)
|
||||
|
||||
**Purpose**: For large projects requiring fine-grained access across multiple directories
|
||||
|
||||
**Use Case**: When `cd` limits scope but you need to reference files from parent/sibling folders
|
||||
|
||||
**Parameter**: `--include-directories <dir1,dir2,...>`
|
||||
- Includes additional directories in the workspace beyond current `cd` directory
|
||||
- Can be specified multiple times or as comma-separated values
|
||||
- Maximum 5 directories can be added
|
||||
- **REQUIRED** when working in a subdirectory but needing context from parent or sibling directories
|
||||
|
||||
**Syntax Options**:
|
||||
```bash
|
||||
RULES: $(cat "~/.claude/workflows/cli-templates/prompts/[category]/[template].txt") | [constraints]
|
||||
# Comma-separated format
|
||||
gemini -p "prompt" --include-directories /path/to/project1,/path/to/project2
|
||||
|
||||
# Multiple flags format
|
||||
gemini -p "prompt" --include-directories /path/to/project1 --include-directories /path/to/project2
|
||||
|
||||
# Combined with cd for focused analysis with extended context (RECOMMENDED)
|
||||
cd src/auth && gemini -p "
|
||||
PURPOSE: Analyze authentication with shared utilities context
|
||||
TASK: Review auth implementation and its dependencies
|
||||
MODE: analysis
|
||||
CONTEXT: @**/* @../shared/**/* @../types/**/*
|
||||
EXPECTED: Complete analysis with cross-directory dependencies
|
||||
RULES: Focus on integration patterns
|
||||
" --include-directories ../shared,../types
|
||||
```
|
||||
|
||||
**⚠️ CRITICAL: Command Substitution Rules**
|
||||
When using `$(cat ...)` for template loading in actual CLI commands:
|
||||
- **Template reference only, never read**: When user specifies template name, use `$(cat ...)` directly in RULES field, do NOT read template content first
|
||||
- **NEVER use escape characters**: `\$`, `\"`, `\'` will break command substitution
|
||||
- **In -p "..." context**: Path in `$(cat ...)` needs NO quotes (tilde expands correctly)
|
||||
- **Correct**: `RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)`
|
||||
- **WRONG**: `RULES: \$(cat ...)` or `RULES: $(cat \"...\")` or `RULES: $(cat '...')`
|
||||
- **Why**: Shell executes `$(...)` in subshell where path is safe without quotes
|
||||
**Best Practices**:
|
||||
- **Recommended Pattern**: Use `cd` to navigate to primary focus directory, then use `--include-directories` for additional context
|
||||
- Example: `cd src/auth && gemini -p "CONTEXT: @**/* @../shared/**/*" --include-directories ../shared,../types`
|
||||
- **⚠️ CRITICAL**: CONTEXT must explicitly list external files (e.g., `@../shared/**/*`), AND command must include `--include-directories ../shared`
|
||||
- Benefits: More precise file references (relative to current directory), clearer intent, better context control
|
||||
- **Enforcement Rule**: When CONTEXT references external directories, ALWAYS add corresponding `--include-directories`
|
||||
- Use when `cd` alone limits necessary context visibility
|
||||
- Keep directory count ≤ 5 for optimal performance
|
||||
- **Pattern matching rule**: `@../dir/**/*` in CONTEXT → `--include-directories ../dir` in command (MANDATORY)
|
||||
- Prefer `cd + --include-directories` over multiple `cd` commands for cross-directory analysis
|
||||
|
||||
**Examples**:
|
||||
- Single template: `$(cat "~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt") | Focus on security`
|
||||
- Multiple templates: `$(cat "template1.txt") $(cat "template2.txt") | Enterprise standards`
|
||||
- No template: `Focus on security patterns, include dependency analysis`
|
||||
- File patterns: `@{src/**/*.ts,CLAUDE.md} - Stay within scope`
|
||||
---
|
||||
|
||||
### File Pattern Reference
|
||||
- All files: `@{**/*}`
|
||||
- Source files: `@{src/**/*}`
|
||||
- TypeScript: `@{*.ts,*.tsx}`
|
||||
- With docs: `@{CLAUDE.md,**/*CLAUDE.md}`
|
||||
- Tests: `@{src/**/*.test.*}`
|
||||
### CONTEXT Field Configuration
|
||||
|
||||
#### File Pattern Reference
|
||||
|
||||
**Default Pattern**:
|
||||
- **All files (default)**: `@**/*` - Use this as default for comprehensive context
|
||||
|
||||
**Common Patterns**:
|
||||
- Source files: `@src/**/*`
|
||||
- TypeScript: `@*.ts @*.tsx` (multiple @ for multiple patterns)
|
||||
- With docs: `@CLAUDE.md @**/*CLAUDE.md` (multiple @ for multiple patterns)
|
||||
- Tests: `@src/**/*.test.*`
|
||||
|
||||
#### Complex Pattern Discovery
|
||||
|
||||
**Complex Pattern Discovery**:
|
||||
For complex file pattern requirements, use semantic discovery tools BEFORE CLI execution:
|
||||
- **rg (ripgrep)**: Content-based file discovery with regex patterns
|
||||
- **Code Index MCP**: Semantic file search based on task requirements
|
||||
@@ -245,14 +331,14 @@ rg "export.*Component" --files-with-matches --type ts # Find component files
|
||||
mcp__code-index__search_code_advanced(pattern="interface.*Props", file_pattern="*.tsx") # Find interface files
|
||||
|
||||
# Step 2: Build precise CONTEXT from discovery results
|
||||
CONTEXT: @{src/components/Auth.tsx,src/types/auth.d.ts,src/hooks/useAuth.ts}
|
||||
CONTEXT: @src/components/Auth.tsx @src/types/auth.d.ts @src/hooks/useAuth.ts
|
||||
|
||||
# Step 3: Execute CLI with precise file references
|
||||
cd src && ~/.claude/scripts/gemini-wrapper -p "
|
||||
cd src && gemini -p "
|
||||
PURPOSE: Analyze authentication components
|
||||
TASK: Review auth component patterns and props interfaces
|
||||
MODE: analysis
|
||||
CONTEXT: @{components/Auth.tsx,types/auth.d.ts,hooks/useAuth.ts}
|
||||
CONTEXT: @components/Auth.tsx @types/auth.d.ts @hooks/useAuth.ts
|
||||
EXPECTED: Pattern analysis and improvement suggestions
|
||||
RULES: Focus on type safety and component composition
|
||||
"
|
||||
@@ -260,26 +346,38 @@ RULES: Focus on type safety and component composition
|
||||
|
||||
---
|
||||
|
||||
## 📊 Tool Selection Guide
|
||||
### RULES Field Configuration
|
||||
|
||||
### Selection Matrix
|
||||
#### Basic Format
|
||||
```bash
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/[category]/[template].txt) | [constraints]
|
||||
```
|
||||
|
||||
| Task Type | Tool | Use Case | Template |
|
||||
|-----------|------|----------|-----------|
|
||||
| **Analysis** | Gemini (Qwen fallback) | Code exploration, architecture review, patterns | `analysis/pattern.txt` |
|
||||
| **Architecture** | Gemini (Qwen fallback) | System design, architectural analysis | `analysis/architecture.txt` |
|
||||
| **Documentation** | Gemini (Qwen fallback) | Code docs, API specs, guides | `analysis/quality.txt` |
|
||||
| **Development** | Codex | Feature implementation, bug fixes, testing | `development/feature.txt` |
|
||||
| **Planning** | Gemini/Qwen | Task breakdown, migration planning | `planning/task-breakdown.txt` |
|
||||
| **Security** | Codex | Vulnerability assessment, fixes | `analysis/security.txt` |
|
||||
| **Refactoring** | Multiple | Gemini/Qwen for analysis, Codex for execution | `development/refactor.txt` |
|
||||
| **Module Documentation** | Gemini (Qwen fallback) | Universal module/file documentation for all levels | `memory/claude-module-unified.txt` |
|
||||
#### ⚠️ CRITICAL: Command Substitution Rules
|
||||
|
||||
When using `$(cat ...)` for template loading in actual CLI commands:
|
||||
- **Template reference only, never read**: When user specifies template name, use `$(cat ...)` directly in RULES field, do NOT read template content first
|
||||
- **NEVER use escape characters**: `\$`, `\"`, `\'` will break command substitution
|
||||
- **In prompt context**: Path in `$(cat ...)` needs NO quotes (tilde expands correctly)
|
||||
- **Correct**: `RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)`
|
||||
- **WRONG**: `RULES: \$(cat ...)` or `RULES: $(cat \"...\")` or `RULES: $(cat '...')`
|
||||
- **Why**: Shell executes `$(...)` in subshell where path is safe without quotes
|
||||
|
||||
#### Examples
|
||||
- Single template: `$(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt) | Focus on security`
|
||||
- Multiple templates: `$(cat template1.txt) $(cat template2.txt) | Enterprise standards`
|
||||
- No template: `Focus on security patterns, include dependency analysis`
|
||||
- File patterns: `@src/**/*.ts @CLAUDE.md - Stay within scope`
|
||||
|
||||
---
|
||||
|
||||
### Template System
|
||||
|
||||
**Base Structure**: `~/.claude/workflows/cli-templates/`
|
||||
#### Base Structure
|
||||
`~/.claude/workflows/cli-templates/`
|
||||
|
||||
#### Available Templates
|
||||
|
||||
```
|
||||
prompts/
|
||||
├── analysis/
|
||||
@@ -307,11 +405,103 @@ tech-stacks/
|
||||
└── react-dev.md - React architecture
|
||||
```
|
||||
|
||||
#### Task-Template Selection Matrix
|
||||
|
||||
| Task Type | Tool | Use Case | Template |
|
||||
|-----------|------|----------|-----------|
|
||||
| **Analysis** | Gemini (Qwen fallback) | Code exploration, architecture review, patterns | `analysis/pattern.txt` |
|
||||
| **Architecture** | Gemini (Qwen fallback) | System design, architectural analysis | `analysis/architecture.txt` |
|
||||
| **Documentation** | Gemini (Qwen fallback) | Code docs, API specs, guides | `analysis/quality.txt` |
|
||||
| **Development** | Codex | Feature implementation, bug fixes, testing | `development/feature.txt` |
|
||||
| **Planning** | Gemini/Qwen | Task breakdown, migration planning | `planning/task-breakdown.txt` |
|
||||
| **Security** | Codex | Vulnerability assessment, fixes | `analysis/security.txt` |
|
||||
| **Refactoring** | Multiple | Gemini/Qwen for analysis, Codex for execution | `development/refactor.txt` |
|
||||
| **Module Documentation** | Gemini (Qwen fallback) | Universal module/file documentation for all levels | `memory/claude-module-unified.txt` |
|
||||
|
||||
---
|
||||
|
||||
## 🚀 Usage Patterns
|
||||
## ⚙️ Execution Configuration
|
||||
|
||||
### Dynamic Timeout Allocation
|
||||
|
||||
**Timeout Ranges**:
|
||||
- **Simple tasks** (analysis, search): 20-40min (1200000-2400000ms)
|
||||
- **Medium tasks** (refactoring, documentation): 40-60min (2400000-3600000ms)
|
||||
- **Complex tasks** (implementation, migration): 60-120min (3600000-7200000ms)
|
||||
|
||||
**Codex Multiplier**: Codex commands use 1.5x of allocated time
|
||||
|
||||
**Application**: All bash() wrapped commands including Gemini, Qwen and Codex executions
|
||||
|
||||
**Auto-detection**: Analyze PURPOSE and TASK fields to determine appropriate timeout
|
||||
|
||||
**Command Examples**:
|
||||
```bash
|
||||
bash(gemini -p "prompt") # Simple analysis: 20-40min
|
||||
bash(codex -C directory --full-auto exec "task") # Complex implementation: 90-180min (60-120min with the 1.5x Codex multiplier)
|
||||
```
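A hypothetical helper showing how these ranges might be turned into millisecond timeouts; the function name and the choice of upper bounds are assumptions, not part of the workflow tooling:

```bash
# Map task complexity to a timeout in ms, applying the 1.5x multiplier for Codex
pick_timeout_ms() {
  local complexity="$1" tool="$2" base
  case "$complexity" in
    simple)  base=2400000 ;;   # 40min upper bound
    medium)  base=3600000 ;;   # 60min upper bound
    complex) base=7200000 ;;   # 120min upper bound
    *)       base=2400000 ;;   # conservative default
  esac
  [ "$tool" = "codex" ] && base=$(( base * 3 / 2 ))
  echo "$base"
}

pick_timeout_ms complex codex   # prints 10800000 (180min)
```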
|
||||
|
||||
---
|
||||
|
||||
### Permission Framework
|
||||
|
||||
#### Write Operation Protection
|
||||
|
||||
**⚠️ WRITE PROTECTION**: Local codebase write/modify requires EXPLICIT user confirmation
|
||||
|
||||
**Mode Hierarchy**:
|
||||
- **Analysis Mode (default)**: Read-only, safe for auto-execution
|
||||
- **Write Mode**: Requires user explicitly states MODE=write or MODE=auto in prompt
|
||||
- **Exception**: User provides clear instructions like "modify", "create", "implement"
|
||||
|
||||
#### Tool-Specific Permissions
|
||||
|
||||
**Gemini/Qwen Write Access**:
|
||||
- Use `--approval-mode yolo` ONLY when MODE=write explicitly specified
|
||||
- **Parameter Position**: Place AFTER the prompt: `gemini -p "..." --approval-mode yolo`
|
||||
|
||||
**Codex Write Access**:
|
||||
- Use `-s danger-full-access` and `--skip-git-repo-check` ONLY when MODE=auto explicitly specified
|
||||
- **Parameter Position**: Place AFTER the prompt string at command END: `codex ... exec "..." --skip-git-repo-check -s danger-full-access`
|
||||
|
||||
**Default Behavior**: All tools default to analysis/read-only mode without explicit write permission
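A compact sketch contrasting the default read-only form with the explicitly write-enabled forms described above; the prompts are placeholders:

```bash
# Default: read-only analysis, no write flags
gemini -p "MODE: analysis ... review the auth module"

# Write access only after MODE=write / MODE=auto is explicitly stated
gemini -p "MODE: write ... generate API.md" --approval-mode yolo
codex -C src/auth --full-auto exec "MODE: auto ... implement refresh tokens" --skip-git-repo-check -s danger-full-access
```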
|
||||
|
||||
---
|
||||
|
||||
## 🔧 Best Practices
|
||||
|
||||
### General Guidelines
|
||||
|
||||
**Workflow Principles**:
|
||||
- **Start with templates** - Use predefined templates for consistency
|
||||
- **Be specific** - Clear PURPOSE, TASK, and EXPECTED fields
|
||||
- **Include constraints** - File patterns, scope, requirements in RULES
|
||||
- **Discover patterns first** - Use rg/MCP for complex file discovery before CLI execution
|
||||
- **Build precise CONTEXT** - Convert discovery results to explicit file references
|
||||
- **Document context** - Always reference CLAUDE.md for context
|
||||
- **Default to full context** - Use `@**/*` in CONTEXT for comprehensive analysis unless specific files needed
|
||||
- **⚠️ No escape characters in CLI commands** - NEVER use `\$`, `\"`, `\'` in actual CLI execution (breaks command substitution and path expansion)
|
||||
|
||||
---
|
||||
|
||||
### Context Optimization Strategy
|
||||
|
||||
**Directory Navigation**: Use `cd [directory] &&` pattern when analyzing specific areas to reduce irrelevant context
|
||||
|
||||
**When to change directory**:
|
||||
- Specific directory mentioned → Use `cd directory &&` pattern
|
||||
- Focused analysis needed → Target specific directory with cd
|
||||
- Multi-directory scope → Use `cd` + `--include-directories` for precise control
|
||||
|
||||
**When to use `--include-directories`**:
|
||||
- Working in subdirectory but need parent/sibling context
|
||||
- Cross-directory dependency analysis required
|
||||
- Multiple related modules need simultaneous access
|
||||
|
||||
---
|
||||
|
||||
### Workflow Integration (REQUIRED)
|
||||
|
||||
When planning any coding task, **ALWAYS** integrate CLI tools:
|
||||
|
||||
1. **Understanding Phase**: Use Gemini for analysis (Qwen as fallback)
|
||||
@@ -319,185 +509,16 @@ When planning any coding task, **ALWAYS** integrate CLI tools:
|
||||
3. **Implementation Phase**: Use Codex for development
|
||||
4. **Quality Phase**: Use Codex for testing and validation
|
||||
|
||||
### Common Scenarios
|
||||
|
||||
#### Code Analysis
|
||||
```bash
|
||||
~/.claude/scripts/gemini-wrapper -p "
|
||||
PURPOSE: Understand codebase architecture
|
||||
TASK: Analyze project structure and identify patterns
|
||||
MODE: analysis
|
||||
CONTEXT: @{src/**/*.ts,CLAUDE.md} Previous analysis of auth system
|
||||
EXPECTED: Architecture overview and integration points
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/architecture.txt) | Focus on integration points
|
||||
"
|
||||
```
|
||||
|
||||
#### Documentation Generation
|
||||
```bash
|
||||
~/.claude/scripts/gemini-wrapper -p "
|
||||
PURPOSE: Generate API documentation
|
||||
TASK: Create comprehensive API reference from code
|
||||
MODE: write
|
||||
CONTEXT: @{src/api/**/*}
|
||||
EXPECTED: API.md with all endpoints documented
|
||||
RULES: Follow project documentation standards
|
||||
"
|
||||
```
|
||||
|
||||
#### Architecture Analysis (Qwen as Gemini fallback)
|
||||
```bash
|
||||
# Prefer Gemini for architecture analysis
|
||||
cd src/auth && ~/.claude/scripts/gemini-wrapper -p "
|
||||
PURPOSE: Analyze authentication system architecture
|
||||
TASK: Review JWT-based auth system design
|
||||
MODE: analysis
|
||||
CONTEXT: @{src/auth/**/*} Existing patterns and requirements
|
||||
EXPECTED: Architecture analysis report with recommendations
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/architecture.txt) | Focus on security
|
||||
"
|
||||
|
||||
# Use Qwen only if Gemini unavailable
|
||||
cd src/auth && ~/.claude/scripts/qwen-wrapper -p "
|
||||
PURPOSE: Analyze authentication system architecture
|
||||
TASK: Review JWT-based auth system design
|
||||
MODE: analysis
|
||||
CONTEXT: @{src/auth/**/*} Existing patterns and requirements
|
||||
EXPECTED: Architecture analysis report with recommendations
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/architecture.txt) | Focus on security
|
||||
"
|
||||
```
|
||||
|
||||
#### Feature Development (Multi-task with Resume)
|
||||
```bash
|
||||
# First task - establish session
|
||||
codex -C path/to/project --full-auto exec "
|
||||
PURPOSE: Implement user authentication
|
||||
TASK: Create JWT-based authentication system
|
||||
MODE: auto
|
||||
CONTEXT: @{src/auth/**/*} Database schema from session memory
|
||||
EXPECTED: Complete auth module with tests
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/development/feature.txt) | Follow security best practices
|
||||
" --skip-git-repo-check -s danger-full-access
|
||||
|
||||
# Continue in same session - Add JWT validation
|
||||
codex --full-auto exec "
|
||||
PURPOSE: Enhance authentication security
|
||||
TASK: Add JWT token validation and refresh logic
|
||||
MODE: auto
|
||||
CONTEXT: Previous auth implementation from current session
|
||||
EXPECTED: JWT validation middleware and token refresh endpoints
|
||||
RULES: Follow JWT best practices, maintain session context
|
||||
" resume --last --skip-git-repo-check -s danger-full-access
|
||||
|
||||
# Continue in same session - Add tests
|
||||
codex --full-auto exec "
|
||||
PURPOSE: Increase test coverage
|
||||
TASK: Generate comprehensive tests for auth module
|
||||
MODE: write
|
||||
CONTEXT: Auth implementation from current session
|
||||
EXPECTED: Complete test suite with 80%+ coverage
|
||||
RULES: Use Jest, follow existing patterns
|
||||
" resume --last --skip-git-repo-check -s danger-full-access
|
||||
```
|
||||
|
||||
#### Interactive Session Resume
|
||||
```bash
|
||||
# Resume previous session with picker
|
||||
codex resume
|
||||
|
||||
# Or resume most recent session directly
|
||||
codex resume --last
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 🔧 Best Practices
|
||||
|
||||
### General Guidelines
|
||||
- **Start with templates** - Use predefined templates for consistency
|
||||
- **Be specific** - Clear PURPOSE, TASK, and EXPECTED fields
|
||||
- **Include constraints** - File patterns, scope, requirements in RULES
|
||||
- **Discover patterns first** - Use rg/MCP for complex file discovery before CLI execution
|
||||
- **Build precise CONTEXT** - Convert discovery results to explicit file references
|
||||
- **Document context** - Always reference CLAUDE.md for context
|
||||
- **⚠️ No escape characters in CLI commands** - NEVER use `\$`, `\"`, `\'` in actual CLI execution (breaks command substitution and path expansion)
|
||||
|
||||
### Context Optimization Strategy
|
||||
**Directory Navigation**: Use `cd [directory] &&` pattern when analyzing specific areas to reduce irrelevant context
|
||||
|
||||
**When to change directory**:
|
||||
- Specific directory mentioned → Use `cd directory &&` pattern
|
||||
- Focused analysis needed → Target specific directory with cd
|
||||
- Multi-directory scope → Stay in root, use explicit paths or multiple commands
|
||||
|
||||
**Example**:
|
||||
```bash
|
||||
# Gemini - Focused analysis
|
||||
cd src/auth && ~/.claude/scripts/gemini-wrapper -p "
|
||||
PURPOSE: Understand authentication patterns
|
||||
TASK: Analyze auth implementation
|
||||
MODE: analysis
|
||||
CONTEXT: @{**/*.ts}
|
||||
EXPECTED: Pattern documentation
|
||||
RULES: Focus on security best practices
|
||||
"
|
||||
|
||||
# Qwen - Analysis (fallback option, same as Gemini)
|
||||
cd src/auth && ~/.claude/scripts/qwen-wrapper -p "
|
||||
PURPOSE: Analyze auth architecture
|
||||
TASK: Review auth system design and patterns
|
||||
MODE: analysis
|
||||
CONTEXT: @{**/*}
|
||||
EXPECTED: Architecture analysis report
|
||||
RULES: Focus on modularity and security
|
||||
"
|
||||
|
||||
# Codex - Implementation
|
||||
codex -C src/auth --full-auto exec "
|
||||
PURPOSE: Improve auth implementation
|
||||
TASK: Review and enhance auth code
|
||||
MODE: auto
|
||||
CONTEXT: @{**/*.ts}
|
||||
EXPECTED: Code improvements and fixes
|
||||
RULES: Maintain backward compatibility
|
||||
" --skip-git-repo-check -s danger-full-access
|
||||
```
|
||||
|
||||
### Planning Checklist
|
||||
|
||||
For every development task:
|
||||
- [ ] **Purpose defined** - Clear goal and intent
|
||||
- [ ] **Mode selected** - Execution mode and permission level determined
|
||||
- [ ] **Context gathered** - File references and session memory documented
|
||||
- [ ] **Context gathered** - File references and session memory documented (default `@**/*`)
|
||||
- [ ] **Directory navigation** - Determine if `cd` or `cd + --include-directories` needed
|
||||
- [ ] **Gemini analysis** completed for understanding
|
||||
- [ ] **Template selected** - Appropriate template chosen
|
||||
- [ ] **Constraints specified** - File patterns, scope, requirements
|
||||
- [ ] **Implementation approach** - Tool selection and workflow
|
||||
- [ ] **Quality measures** - Testing and validation plan
|
||||
- [ ] **Tool configuration** - Review `.gemini/CLAUDE.md` or `.codex/Agent.md` if needed
|
||||
|
||||
---
|
||||
|
||||
## ⚙️ Execution Configuration
|
||||
|
||||
### Core Execution Rules
|
||||
- **Dynamic Timeout (20-120min)**: Allocate execution time based on task complexity
|
||||
- Simple tasks (analysis, search): 20-40min (1200000-2400000ms)
|
||||
- Medium tasks (refactoring, documentation): 40-60min (2400000-3600000ms)
|
||||
- Complex tasks (implementation, migration): 60-120min (3600000-7200000ms)
|
||||
- **Codex Multiplier**: Codex commands use 1.5x of allocated time
|
||||
- **Apply to All Tools**: All bash() wrapped commands including Gemini, Qwen wrapper and Codex executions
|
||||
- **Command Examples**: `bash(~/.claude/scripts/gemini-wrapper -p "prompt")`, `bash(codex -C directory --full-auto exec "task")`
|
||||
- **Auto-detect**: Analyze PURPOSE and TASK fields to determine appropriate timeout
|
||||
|
||||
### Permission Framework
|
||||
- **⚠️ WRITE PROTECTION**: Local codebase write/modify requires EXPLICIT user confirmation
|
||||
- **Analysis Mode (default)**: Read-only, safe for auto-execution
|
||||
- **Write Mode**: Requires user explicitly states MODE=write or MODE=auto in prompt
|
||||
- **Exception**: User provides clear instructions like "modify", "create", "implement"
|
||||
- **Gemini/Qwen Write Access**: Use `--approval-mode yolo` ONLY when MODE=write explicitly specified
|
||||
- **Parameter Position**: Place AFTER the wrapper command: `gemini-wrapper --approval-mode yolo -p "..."`
|
||||
- **Codex Write Access**: Use `-s danger-full-access` and `--skip-git-repo-check` ONLY when MODE=auto explicitly specified
|
||||
- **Parameter Position**: Place AFTER the prompt string at command END: `codex ... exec "..." --skip-git-repo-check -s danger-full-access`
|
||||
- **Default Behavior**: All tools default to analysis/read-only mode without explicit write permission
|
||||
|
||||
@@ -1,176 +1,11 @@
|
||||
# MCP Tool Strategy: Triggers & Workflows
|
||||
# MCP Tool Strategy: Exa Usage
|
||||
|
||||
## ⚡ Triggering Mechanisms
|
||||
## ⚡ Exa Triggering Mechanisms
|
||||
|
||||
**Auto-Trigger Scenarios**:
|
||||
**Auto-Trigger**:
|
||||
- User mentions "exa-code" or code-related queries → `mcp__exa__get_code_context_exa`
|
||||
- Need current web information → `mcp__exa__web_search_exa`
|
||||
- Finding code patterns/files → `mcp__code-index__search_code_advanced`
|
||||
- Locating specific files → `mcp__code-index__find_files`
|
||||
|
||||
**Manual Trigger Rules**:
|
||||
**Manual Trigger**:
|
||||
- Complex API research → Exa Code Context
|
||||
- Architecture pattern discovery → Exa Code Context + Gemini analysis
|
||||
- Real-time information needs → Exa Web Search
|
||||
- Codebase exploration → Code Index tools first, then Gemini analysis
|
||||
|
||||
## 🎯 Available MCP Tools
|
||||
|
||||
### Exa Code Context (mcp__exa__get_code_context_exa)
|
||||
**Purpose**: Search and get relevant context for programming tasks
|
||||
**Strengths**: Highest quality context for libraries, SDKs, and APIs
|
||||
**Best For**: Code examples, API patterns, learning frameworks
|
||||
|
||||
**Usage**:
|
||||
```bash
|
||||
mcp__exa__get_code_context_exa(
|
||||
query="React useState hook examples",
|
||||
tokensNum="dynamic" # or 1000-50000
|
||||
)
|
||||
```
|
||||
|
||||
**Examples**: "React useState", "Python pandas filtering", "Express.js middleware"
|
||||
|
||||
### Exa Web Search (mcp__exa__web_search_exa)
|
||||
**Purpose**: Real-time web searches with content scraping
|
||||
**Best For**: Current information, research, recent solutions
|
||||
|
||||
**Usage**:
|
||||
```bash
|
||||
mcp__exa__web_search_exa(
|
||||
query="latest React 18 features",
|
||||
numResults=5 # default: 5
|
||||
)
|
||||
```
|
||||
|
||||
|
||||
### Code Index Tools (mcp__code-index__)
|
||||
**Core methods**: `search_code_advanced`, `find_files`, `refresh_index`

**Core searches**:
```bash
mcp__code-index__search_code_advanced(pattern="function.*auth", file_pattern="*.ts")
mcp__code-index__find_files(pattern="*.test.js")
mcp__code-index__refresh_index() # refresh after git operations
```

**Practical scenarios**:
- **Find code**: `search_code_advanced(pattern="old.*API")`
- **Locate files**: `find_files(pattern="src/**/*.tsx")`
- **Refresh index**: `refresh_index()` (after git operations)

**File search test results**:
- ✅ `find_files(pattern="*.md")` - finds all Markdown files
- ✅ `find_files(pattern="*complete*")` - wildcard match on file names
- ❌ `find_files(pattern="complete.md")` - exact-name match may fail
- 📝 Prefer wildcard patterns for more reliable results
|
||||
|
||||
## 📊 Tool Selection Matrix
|
||||
|
||||
| Task | MCP Tool | Use Case | Integration |
|
||||
|------|----------|----------|-------------|
|
||||
| **Code Context** | Exa Code | API examples, patterns | → Gemini analysis |
|
||||
| **Research** | Exa Web | Current info, trends | → Planning phase |
|
||||
| **Code Search** | Code Index | Pattern discovery, file location | → Gemini analysis |
|
||||
| **Navigation** | Code Index | File exploration, structure | → Architecture phase |
|
||||
|
||||
## 🚀 Integration Patterns
|
||||
|
||||
### Standard Workflow
|
||||
```bash
|
||||
# 1. Explore codebase structure
|
||||
mcp__code-index__find_files(pattern="*async*")
|
||||
mcp__code-index__search_code_advanced(pattern="async.*function", file_pattern="*.ts")
|
||||
|
||||
# 2. Get external context
|
||||
mcp__exa__get_code_context_exa(query="TypeScript async patterns", tokensNum="dynamic")
|
||||
|
||||
# 3. Analyze with Gemini
|
||||
cd "src/async" && ~/.claude/scripts/gemini-wrapper -p "
|
||||
PURPOSE: Understand async patterns
|
||||
CONTEXT: Code index results + Exa context + @{src/async/**/*}
|
||||
EXPECTED: Pattern analysis
|
||||
RULES: Focus on TypeScript best practices
|
||||
"
|
||||
|
||||
# 4. Implement with Codex
|
||||
codex -C src/async --full-auto exec "Apply modern async patterns" -s danger-full-access
|
||||
```
|
||||
|
||||
### Enhanced Planning
|
||||
1. **Explore codebase** with Code Index tools
|
||||
2. **Research** with Exa Web Search
|
||||
3. **Get code context** with Exa Code Context
|
||||
4. **Analyze** with Gemini
|
||||
5. **Architect** with Qwen
|
||||
6. **Implement** with Codex
|
||||
|
||||
## 🔧 Best Practices
|
||||
|
||||
### Code Index
|
||||
- **Search first** - Use before external tools for codebase exploration
|
||||
- **Refresh after git ops** - Keep index synchronized
|
||||
- **Pattern specificity** - Use precise regex patterns for better results
|
||||
- **File patterns** - Combine with glob patterns for targeted search
|
||||
- **Glob pattern matching** - Use `*.md`, `*complete*` patterns for file discovery
|
||||
- **Exact vs wildcard** - Exact names may fail, use wildcards for better results
|
||||
|
||||
### Exa Code Context
|
||||
- **Use "dynamic" tokens** for efficiency
|
||||
- **Be specific** - include technology stack (see the example below)
|
||||
- **MANDATORY** when user mentions exa-code or code queries
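For example, a stack-qualified query in the spirit of the "be specific" tip; the query text is just an illustration:

```bash
mcp__exa__get_code_context_exa(
  query="Express.js JWT authentication middleware in TypeScript",
  tokensNum="dynamic"
)
```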
|
||||
|
||||
### Exa Web Search
|
||||
- **Default 5 results** usually sufficient
|
||||
- **Use for current info** - supplement knowledge cutoff
|
||||
|
||||
|
||||
|
||||
## 🎯 Common Scenarios

### Learning New Technology
```bash
# Explore existing patterns + get examples + research + analyze
mcp__code-index__search_code_advanced(pattern="router|routing", file_pattern="*.ts")
mcp__exa__get_code_context_exa(query="Next.js 14 app router", tokensNum="dynamic")
mcp__exa__web_search_exa(query="Next.js 14 best practices 2024", numResults=3)
cd "src/app" && ~/.claude/scripts/gemini-wrapper -p "Learn Next.js patterns"
```

### Debugging
```bash
# Find similar patterns + solutions + fix
mcp__code-index__search_code_advanced(pattern="similar.*error", file_pattern="*.ts")
mcp__exa__get_code_context_exa(query="TypeScript generic constraints", tokensNum="dynamic")
codex --full-auto exec "Fix TypeScript issues" -s danger-full-access
```

### Codebase Exploration
```bash
# Comprehensive codebase understanding workflow
mcp__code-index__set_project_path(path="/current/project")  # Set the project path
mcp__code-index__refresh_index()  # Refresh the index
mcp__code-index__find_files(pattern="*auth*")  # Find auth-related files
mcp__code-index__search_code_advanced(pattern="function.*auth", file_pattern="*.ts")  # Find auth functions
mcp__code-index__get_file_summary(file_path="src/auth/index.ts")  # Understand structure
cd "src/auth" && ~/.claude/scripts/gemini-wrapper -p "Analyze auth architecture"
```

### Project Setup Workflow
```bash
# New project initialization flow
mcp__code-index__set_project_path(path="/path/to/new/project")
mcp__code-index__get_settings_info()  # Confirm settings
mcp__code-index__refresh_index()  # Build the index
mcp__code-index__configure_file_watcher(enabled=true)  # Enable the file watcher
mcp__code-index__get_file_watcher_status()  # Confirm watcher status
```

## ⚡ Performance Tips

- **Code Index first** → explore codebase before external tools
- **Use "dynamic" tokens** for Exa Code Context
- **MCP first** → gather context before analysis
- **Focus queries** - avoid overly broad searches
- **Integrate selectively** - use relevant context only
- **Refresh index** after major git operations

@@ -4,7 +4,7 @@
|
||||
Task commands provide single-execution workflow capabilities with full context awareness, hierarchical organization, and agent orchestration.
|
||||
|
||||
## Task JSON Schema
|
||||
All task files use this simplified 5-field schema (aligned with workflow-architecture.md):
|
||||
All task files use this simplified 5-field schema:
|
||||
|
||||
```json
|
||||
{
|
||||
@@ -14,7 +14,7 @@ All task files use this simplified 5-field schema (aligned with workflow-archite
|
||||
|
||||
"meta": {
|
||||
"type": "feature|bugfix|refactor|test-gen|test-fix|docs",
|
||||
"agent": "@code-developer|@action-planning-agent|@test-fix-agent|@general-purpose"
|
||||
"agent": "@code-developer|@action-planning-agent|@test-fix-agent|@universal-executor"
|
||||
},
|
||||
|
||||
"context": {
|
||||
@@ -155,14 +155,14 @@ Tasks inherit from:
|
||||
- **@code-developer**: Implementation tasks, coding, test writing
|
||||
- **@action-planning-agent**: Design, architecture planning
|
||||
- **@test-fix-agent**: Test execution, failure diagnosis, code fixing
|
||||
- **@general-purpose**: Optional manual review (only when explicitly requested)
|
||||
- **@universal-executor**: Optional manual review (only when explicitly requested)
|
||||
|
||||
### Agent Context Filtering
|
||||
Each agent receives tailored context:
|
||||
- **@code-developer**: Complete implementation details, test requirements
|
||||
- **@action-planning-agent**: High-level requirements, risks, architecture
|
||||
- **@test-fix-agent**: Test execution, failure diagnosis, code fixing
|
||||
- **@general-purpose**: Quality standards, security considerations (when requested)
|
||||
- **@universal-executor**: Quality standards, security considerations (when requested)
|
||||
|
||||
## Deprecated Fields
|
||||
|
||||
|
||||
@@ -1,10 +0,0 @@
|
||||
# Tool Control Configuration
|
||||
# Controls whether CLI tools (Gemini, Qwen, Codex) are enabled in the workspace
|
||||
|
||||
tools:
|
||||
gemini:
|
||||
enabled: true
|
||||
qwen:
|
||||
enabled: true
|
||||
codex:
|
||||
enabled: true
|
||||
@@ -104,17 +104,18 @@ IMPL-2.1 # Subtask of IMPL-2 (dynamically created)
|
||||
- **Status inheritance**: Parent status derived from subtask completion
|
||||
|
||||
### Enhanced Task JSON Schema
|
||||
All task files use this unified 5-field schema with optional artifacts enhancement:
|
||||
All task files use this unified 6-field schema with optional artifacts enhancement:
|
||||
|
||||
```json
|
||||
{
|
||||
"id": "IMPL-1.2",
|
||||
"title": "Implement JWT authentication",
|
||||
"status": "pending|active|completed|blocked|container",
|
||||
"context_package_path": ".workflow/WFS-session/.process/context-package.json",
|
||||
|
||||
"meta": {
|
||||
"type": "feature|bugfix|refactor|test-gen|test-fix|docs",
|
||||
"agent": "@code-developer|@action-planning-agent|@test-fix-agent|@general-purpose"
|
||||
"agent": "@code-developer|@action-planning-agent|@test-fix-agent|@universal-executor"
|
||||
},
|
||||
|
||||
"context": {
|
||||
@@ -132,11 +133,11 @@ All task files use this unified 5-field schema with optional artifacts enhanceme
|
||||
},
|
||||
"artifacts": [
|
||||
{
|
||||
"type": "synthesis_specification",
|
||||
"source": "brainstorm_synthesis",
|
||||
"path": ".workflow/WFS-session/.brainstorming/synthesis-specification.md",
|
||||
"type": "role_analyses",
|
||||
"source": "brainstorm_clarification",
|
||||
"path": ".workflow/WFS-session/.brainstorming/*/analysis*.md",
|
||||
"priority": "highest",
|
||||
"contains": "complete_integrated_specification"
|
||||
"contains": "role_specific_requirements_and_design"
|
||||
}
|
||||
]
|
||||
},
|
||||
@@ -152,7 +153,7 @@ All task files use this unified 5-field schema with optional artifacts enhanceme
|
||||
{
|
||||
"step": "analyze_architecture",
|
||||
"action": "Review system architecture",
|
||||
"command": "~/.claude/scripts/gemini-wrapper -p \"analyze patterns: [patterns]\"",
|
||||
"command": "gemini \"analyze patterns: [patterns]\"",
|
||||
"output_to": "design"
|
||||
},
|
||||
{
|
||||
@@ -228,6 +229,13 @@ All task files use this unified 5-field schema with optional artifacts enhanceme
|
||||
|
||||
### Focus Paths & Context Management
|
||||
|
||||
#### Context Package Path (Top-Level Field)
|
||||
The **context_package_path** field provides the location of the smart context package:
|
||||
- **Location**: Top-level field (not in `artifacts` array)
|
||||
- **Path**: `.workflow/WFS-session/.process/context-package.json`
|
||||
- **Purpose**: References the comprehensive context package containing project structure, dependencies, and brainstorming artifacts catalog
|
||||
- **Usage**: Loaded in `pre_analysis` steps via `Read({{context_package_path}})` (a sketch of such a step follows below)
|
||||
|
||||
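A minimal sketch of such a `pre_analysis` step; the field names follow the flow_control examples later in this document, while the step name, session, and output variable are illustrative:

```json
{
  "step": "load_context_package",
  "action": "Load the smart context package for this task",
  "commands": [
    "Read(.workflow/WFS-auth/.process/context-package.json)"
  ],
  "output_to": "context_package",
  "on_error": "skip_optional"
}
```
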
#### Focus Paths Format
|
||||
The **focus_paths** field specifies concrete project paths for task implementation:
|
||||
- **Array of strings**: `["folder1", "folder2", "specific_file.ts"]`
|
||||
@@ -241,15 +249,15 @@ Optional field referencing brainstorming outputs for task execution:
|
||||
```json
|
||||
"artifacts": [
|
||||
{
|
||||
"type": "synthesis_specification|topic_framework|individual_role_analysis",
|
||||
"source": "brainstorm_synthesis|brainstorm_framework|brainstorm_roles",
|
||||
"type": "role_analyses|topic_framework|individual_role_analysis",
|
||||
"source": "brainstorm_clarification|brainstorm_framework|brainstorm_roles",
|
||||
"path": ".workflow/WFS-session/.brainstorming/document.md",
|
||||
"priority": "highest|high|medium|low"
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
**Types & Priority**: synthesis_specification (highest) → topic_framework (medium) → individual_role_analysis (low)
|
||||
**Types & Priority**: role_analyses (highest) → topic_framework (medium) → individual_role_analysis (low)
|
||||
|
||||
#### Flow Control Configuration
|
||||
The **flow_control** field manages task execution through structured sequential steps. For complete format specifications and usage guidelines, see [Flow Control Format Guide](#flow-control-format-guide) below.
|
||||
@@ -296,7 +304,7 @@ The `[FLOW_CONTROL]` marker indicates that a task or prompt contains flow contro
|
||||
|
||||
1. **load_topic_framework**
|
||||
- Action: Load structured topic discussion framework
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/topic-framework.md)
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Output: topic_framework
|
||||
|
||||
2. **load_role_template**
|
||||
@@ -332,19 +340,23 @@ The `[FLOW_CONTROL]` marker indicates that a task or prompt contains flow contro
|
||||
"flow_control": {
|
||||
"pre_analysis": [
|
||||
{
|
||||
"step": "load_synthesis_specification",
|
||||
"action": "Load consolidated synthesis specification",
|
||||
"step": "load_role_analyses",
|
||||
"action": "Load role analysis documents from brainstorming",
|
||||
"commands": [
|
||||
"bash(ls .workflow/WFS-{session}/.brainstorming/synthesis-specification.md 2>/dev/null || echo 'not found')",
|
||||
"Read(.workflow/WFS-{session}/.brainstorming/synthesis-specification.md)"
|
||||
"bash(ls .workflow/WFS-{session}/.brainstorming/*/analysis*.md 2>/dev/null || echo 'not found')",
|
||||
"Glob(.workflow/WFS-{session}/.brainstorming/*/analysis*.md)",
|
||||
"Read(each discovered role analysis file)"
|
||||
],
|
||||
"output_to": "synthesis_specification",
|
||||
"output_to": "role_analyses",
|
||||
"on_error": "skip_optional"
|
||||
},
|
||||
{
|
||||
"step": "mcp_codebase_exploration",
|
||||
"action": "Explore codebase using MCP",
|
||||
"command": "mcp__code-index__find_files(pattern=\"*.ts\") && mcp__code-index__search_code_advanced(pattern=\"auth\")",
|
||||
"step": "local_codebase_exploration",
|
||||
"action": "Explore codebase using local search",
|
||||
"commands": [
|
||||
"bash(rg '^(function|class|interface).*auth' --type ts -n --max-count 15)",
|
||||
"bash(find . -name '*auth*' -type f | grep -v node_modules | head -10)"
|
||||
],
|
||||
"output_to": "codebase_structure"
|
||||
}
|
||||
],
|
||||
@@ -352,14 +364,14 @@ The `[FLOW_CONTROL]` marker indicates that a task or prompt contains flow contro
|
||||
{
|
||||
"step": 1,
|
||||
"title": "Setup infrastructure",
|
||||
"description": "Install JWT library and create config following [synthesis_specification]",
|
||||
"description": "Install JWT library and create config following [role_analyses]",
|
||||
"modification_points": [
|
||||
"Add JWT library dependencies to package.json",
|
||||
"Create auth configuration file"
|
||||
],
|
||||
"logic_flow": [
|
||||
"Install jsonwebtoken library via npm",
|
||||
"Configure JWT secret from [synthesis_specification]",
|
||||
"Configure JWT secret from [role_analyses]",
|
||||
"Export auth config for use by [jwt_generator]"
|
||||
],
|
||||
"depends_on": [],
|
||||
@@ -406,7 +418,7 @@ The `[FLOW_CONTROL]` marker indicates that a task or prompt contains flow contro
|
||||
**Structure**: Array of step objects with sequential execution
|
||||
|
||||
**Step Fields**:
|
||||
- **step**: Step identifier (string, e.g., "load_synthesis_specification")
|
||||
- **step**: Step identifier (string, e.g., "load_role_analyses")
|
||||
- **action**: Human-readable description of the step
|
||||
- **command** or **commands**: Single command string or array of command strings
|
||||
- **output_to**: Variable name for storing step output
|
||||
@@ -415,8 +427,8 @@ The `[FLOW_CONTROL]` marker indicates that a task or prompt contains flow contro
|
||||
**Command Types Supported**:
|
||||
- **Bash commands**: `bash(command)` - Any shell command
|
||||
- **Tool calls**: `Read(file)`, `Glob(pattern)`, `Grep(pattern)`
|
||||
- **MCP tools**: `mcp__code-index__find_files()`, `mcp__exa__get_code_context_exa()`
|
||||
- **CLI wrappers**: `~/.claude/scripts/gemini-wrapper`, `codex --full-auto exec`
|
||||
- **MCP tools**: `mcp__exa__get_code_context_exa()`, `mcp__exa__web_search_exa()`
|
||||
- **CLI commands**: `gemini`, `qwen`, `codex --full-auto exec`
|
||||
|
||||
**Example**:
|
||||
```json
|
||||
@@ -477,10 +489,10 @@ The `[FLOW_CONTROL]` marker indicates that a task or prompt contains flow contro
|
||||
"command": "codex --full-auto exec \"task\" resume --last --skip-git-repo-check -s danger-full-access"
|
||||
|
||||
// Gemini (user requested)
|
||||
"command": "~/.claude/scripts/gemini-wrapper -p \"analyze [context]\""
|
||||
"command": "gemini \"analyze [context]\""
|
||||
|
||||
// Qwen (fallback for Gemini)
|
||||
"command": "~/.claude/scripts/qwen-wrapper -p \"analyze [context]\""
|
||||
"command": "qwen \"analyze [context]\""
|
||||
```
|
||||
|
||||
**Example Step**:
|
||||
@@ -517,14 +529,14 @@ The `[FLOW_CONTROL]` marker indicates that a task or prompt contains flow contro
|
||||
|
||||
**Gemini CLI**:
|
||||
```bash
|
||||
~/.claude/scripts/gemini-wrapper -p "prompt"
|
||||
~/.claude/scripts/gemini-wrapper --approval-mode yolo -p "prompt" # For write mode
|
||||
gemini "prompt"
|
||||
gemini --approval-mode yolo "prompt" # For write mode
|
||||
```
|
||||
|
||||
**Qwen CLI** (Gemini fallback):
|
||||
```bash
|
||||
~/.claude/scripts/qwen-wrapper -p "prompt"
|
||||
~/.claude/scripts/qwen-wrapper --approval-mode yolo -p "prompt" # For write mode
|
||||
qwen "prompt"
|
||||
qwen --approval-mode yolo "prompt" # For write mode
|
||||
```
|
||||
|
||||
**Codex CLI**:
|
||||
@@ -540,8 +552,6 @@ codex --full-auto exec "task" resume --last --skip-git-repo-check -s danger-full
|
||||
- `bash(command)` - Execute bash command
|
||||
|
||||
**MCP Tools**:
|
||||
- `mcp__code-index__find_files(pattern="*.ts")` - Find files using code index
|
||||
- `mcp__code-index__search_code_advanced(pattern="auth")` - Search code patterns
|
||||
- `mcp__exa__get_code_context_exa(query="...")` - Get code context from Exa
|
||||
- `mcp__exa__web_search_exa(query="...")` - Web search via Exa
|
||||
|
||||
@@ -567,7 +577,7 @@ Both formats use `[variable_name]` syntax for referencing outputs from previous
|
||||
**Examples**:
|
||||
```json
|
||||
// Reference pre_analysis output
|
||||
"description": "Install JWT library following [synthesis_specification]"
|
||||
"description": "Install JWT library following [role_analyses]"
|
||||
|
||||
// Reference previous step output
|
||||
"description": "Create middleware using [auth_config] and [jwt_generator]"
|
||||
@@ -636,7 +646,7 @@ Both formats use `[variable_name]` syntax for referencing outputs:
|
||||
```json
|
||||
{
|
||||
"step": 2,
|
||||
"description": "Implement following [synthesis_specification] and [codebase_structure]",
|
||||
"description": "Implement following [role_analyses] and [codebase_structure]",
|
||||
"depends_on": [1],
|
||||
"output": "implementation"
|
||||
}
|
||||
@@ -892,13 +902,13 @@ fi
|
||||
- **Examples**: New features, API endpoints with integration, database schema changes
|
||||
- **Task Decomposition**: Two-level hierarchy when decomposition is needed
|
||||
- **Agent Coordination**: Context coordination between related tasks
|
||||
- **Tool Strategy**: `gemini-wrapper` for pattern analysis, `codex --full-auto` for implementation
|
||||
- **Tool Strategy**: `gemini` for pattern analysis, `codex --full-auto` for implementation
|
||||
|
||||
#### Complex Workflows
|
||||
- **Examples**: Major features, architecture refactoring, security implementations, multi-service deployments
|
||||
- **Task Decomposition**: Frequent use of two-level hierarchy with dynamic subtask creation
|
||||
- **Agent Coordination**: Multi-agent orchestration with deep context analysis
|
||||
- **Tool Strategy**: `gemini-wrapper` for architecture analysis, `codex --full-auto` for complex problem solving, `bash()` commands for flexible analysis
|
||||
- **Tool Strategy**: `gemini` for architecture analysis, `codex --full-auto` for complex problem solving, `bash()` commands for flexible analysis
|
||||
|
||||
### Assessment & Upgrades
|
||||
- **During Creation**: System evaluates requirements and assigns complexity
|
||||
@@ -912,7 +922,7 @@ Based on task type and title keywords:
|
||||
- **Planning tasks** → @action-planning-agent
|
||||
- **Implementation** → @code-developer (code + tests)
|
||||
- **Test execution/fixing** → @test-fix-agent
|
||||
- **Review** → @general-purpose (optional, only when explicitly requested)
|
||||
- **Review** → @universal-executor (optional, only when explicitly requested)
|
||||
|
||||
### Execution Context
|
||||
Agents receive complete task JSON plus workflow context:
|
||||
|
||||
@@ -17,28 +17,43 @@ EXPECTED: [deliverables]
|
||||
RULES: [templates | additional constraints]
|
||||
```
|
||||
|
||||
## MODE Definitions
|
||||
## MODE Definitions - STRICT OPERATION BOUNDARIES
|
||||
|
||||
### MODE: analysis (default)
|
||||
### MODE: analysis (default) - READ-ONLY OPERATIONS
|
||||
|
||||
**Permissions**:
|
||||
- Read all CONTEXT files
|
||||
- Create/modify documentation files
|
||||
**ALLOWED OPERATIONS**:
|
||||
- **READ**: All CONTEXT files and analyze content
|
||||
- **ANALYZE**: Code patterns, architecture, dependencies
|
||||
- **GENERATE**: Text output, insights, recommendations
|
||||
- **DOCUMENT**: Analysis results in output response only
|
||||
|
||||
**FORBIDDEN OPERATIONS**:
|
||||
- **NO FILE CREATION**: Cannot create any files on disk
|
||||
- **NO FILE MODIFICATION**: Cannot modify existing files
|
||||
- **NO FILE DELETION**: Cannot delete any files
|
||||
- **NO DIRECTORY OPERATIONS**: Cannot create/modify directories
|
||||
|
||||
**Execute**:
|
||||
1. Read and analyze CONTEXT files
|
||||
2. Identify patterns and issues
|
||||
3. Generate insights and recommendations
|
||||
4. Create documentation if needed
|
||||
5. Output structured analysis
|
||||
4. Output structured analysis (text response only)
|
||||
|
||||
**Constraint**: Do NOT modify source code files
|
||||
**CRITICAL CONSTRAINT**: Absolutely NO file system operations - ANALYSIS OUTPUT ONLY
|
||||
|
||||
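A minimal sketch of a read-only invocation consistent with these boundaries, assuming the prompt template carries a MODE field alongside the PURPOSE/CONTEXT/EXPECTED/RULES fields used elsewhere in this document (the target path is illustrative):

```bash
# Analysis mode: read and report only - no files are created or modified
cd "src/auth" && gemini "
MODE: analysis
PURPOSE: Map the current authentication flow
CONTEXT: @{src/auth/**/*}
EXPECTED: Structured findings and recommendations (text output only)
RULES: Do not create, modify, or delete any files
"
```
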
### MODE: write
|
||||
### MODE: write - FILE CREATION/MODIFICATION OPERATIONS
|
||||
|
||||
**Permissions**:
|
||||
- Full file operations
|
||||
- Create/modify any files
|
||||
**ALLOWED OPERATIONS**:
|
||||
- **READ**: All CONTEXT files and analyze content
|
||||
- **CREATE**: New files (documentation, code, configuration)
|
||||
- **MODIFY**: Existing files (update content, refactor code)
|
||||
- **DELETE**: Files when explicitly required
|
||||
- **ORGANIZE**: Directory structure operations
|
||||
|
||||
**STILL RESTRICTED**:
|
||||
- Must follow project conventions and patterns
|
||||
- Cannot break existing functionality
|
||||
- Must validate changes before completion
|
||||
|
||||
**Execute**:
|
||||
1. Read CONTEXT files
|
||||
|
CHANGELOG.md
@@ -5,6 +5,352 @@ All notable changes to Claude Code Workflow (CCW) will be documented in this fil
|
||||
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
|
||||
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
|
||||
|
||||
## [5.0.0] - 2025-10-24
|
||||
|
||||
### 🎉 Less is More - Simplified Architecture Release
|
||||
|
||||
This major release embraces the "less is more" philosophy, removing external dependencies, streamlining workflows, and focusing on core functionality with standard, proven tools.
|
||||
|
||||
#### 🚀 Breaking Changes
|
||||
|
||||
**Removed Features**:
|
||||
- ❌ **`/workflow:concept-clarify`** - Concept enhancement feature removed for simplification
|
||||
- ❌ **MCP code-index dependency** - Replaced with standard `ripgrep` and `find` tools
|
||||
- ❌ **`synthesis-specification.md` workflow** - Replaced with direct role analysis approach
|
||||
|
||||
**Command Changes**:
|
||||
- ⚠️ Memory commands renamed for consistency:
|
||||
- `/update-memory-full` → `/memory:update-full`
|
||||
- `/update-memory-related` → `/memory:update-related`
|
||||
|
||||
#### ✅ Added
|
||||
|
||||
**Standard Tool Integration**:
|
||||
- ✅ **ripgrep (rg)** - Fast content search replacing MCP code-index
|
||||
- ✅ **find** - Native filesystem discovery for better cross-platform compatibility
|
||||
- ✅ **Multi-tier fallback** - Graceful degradation when advanced tools unavailable
|
||||
|
||||
**Enhanced TDD Workflow**:
|
||||
- ✅ **Conflict resolution mechanism** - Better handling of test-implementation conflicts
|
||||
- ✅ **Improved task generation** - Enhanced phase coordination and quality gates
|
||||
- ✅ **Updated workflow phases** - Clearer separation of concerns
|
||||
|
||||
**Role-Based Planning**:
|
||||
- ✅ **Direct role analysis** - Simplified brainstorming focused on role documents
|
||||
- ✅ **Removed synthesis layer** - Less abstraction, clearer intent
|
||||
- ✅ **Better documentation flow** - From role analysis directly to action planning
|
||||
|
||||
#### 📝 Changed
|
||||
|
||||
**Documentation Updates**:
|
||||
- ✅ **All docs updated to v5.0.0** - Consistent versioning across all files
|
||||
- ✅ **Removed MCP badge** - No longer advertising experimental MCP features
|
||||
- ✅ **Clarified test workflows** - Better explanation of generate → execute pattern
|
||||
- ✅ **Fixed command references** - Corrected all memory command names
|
||||
- ✅ **Updated UI design notes** - Clarified MCP Chrome DevTools retention for UI workflows
|
||||
|
||||
**File Discovery**:
|
||||
- ✅ **`/memory:load`** - Now uses ripgrep/find instead of MCP code-index
|
||||
- ✅ **Faster search** - Native tools provide better performance
|
||||
- ✅ **Better reliability** - No external service dependencies
|
||||
|
||||
**UI Design Workflows**:
|
||||
- ℹ️ **MCP Chrome DevTools retained** - Specialized tool for browser automation
|
||||
- ℹ️ **Multi-tier fallback** - MCP → Playwright → Chrome → Manual
|
||||
- ℹ️ **Purpose-built integration** - UI workflows require browser control
|
||||
|
||||
#### 🐛 Fixed
|
||||
|
||||
**Documentation Inconsistencies**:
|
||||
- 🔧 Removed references to deprecated `/workflow:concept-clarify` command
|
||||
- 🔧 Fixed incorrect memory command names in getting started guides
|
||||
- 🔧 Clarified test workflow execution patterns
|
||||
- 🔧 Updated MCP dependency references throughout specs
|
||||
- 🔧 Corrected UI design tool descriptions
|
||||
|
||||
#### 📦 Updated Files
|
||||
|
||||
- `README.md` / `README_CN.md` - v5.0 version badge and core improvements
|
||||
- `COMMAND_REFERENCE.md` - Updated command descriptions, removed deprecated commands
|
||||
- `COMMAND_SPEC.md` - v5.0 technical specifications, clarified implementations
|
||||
- `GETTING_STARTED.md` / `GETTING_STARTED_CN.md` - v5.0 features, fixed command names
|
||||
- `INSTALL_CN.md` - v5.0 simplified installation notes
|
||||
|
||||
#### 🔍 Technical Details
|
||||
|
||||
**Performance Improvements**:
|
||||
- Faster file discovery using native ripgrep
|
||||
- Reducing external dependencies improves installation reliability
|
||||
- Better cross-platform compatibility with standard Unix tools
|
||||
|
||||
**Architectural Benefits**:
|
||||
- Simpler dependency tree
|
||||
- Easier troubleshooting with standard tools
|
||||
- More predictable behavior without external services
|
||||
|
||||
**Migration Notes**:
|
||||
- Update memory command usage (see command changes above)
|
||||
- Remove any usage of `/workflow:concept-clarify`
|
||||
- No changes needed for core workflow commands (`/workflow:plan`, `/workflow:execute`)
|
||||
|
||||
---
|
||||
|
||||
## [4.6.2] - 2025-10-20
|
||||
|
||||
### 📝 Documentation Optimization
|
||||
|
||||
#### Improved
|
||||
|
||||
**`/memory:load` Command Documentation**: Optimized command specification from 273 to 240 lines (12% reduction)
|
||||
- Merged redundant sections for better information flow
|
||||
- Removed unnecessary internal implementation details
|
||||
- Simplified usage examples while preserving clarity
|
||||
- Maintained all critical information (parameters, workflow, JSON structure)
|
||||
- Improved user-centric documentation structure
|
||||
|
||||
#### Updated
|
||||
|
||||
**COMMAND_SPEC.md**: Updated `/memory:load` specification to match actual implementation
|
||||
- Corrected syntax: `[--tool gemini|qwen]` instead of outdated `[--agent] [--json]` flags
|
||||
- Added agent-driven execution details
|
||||
- Clarified core philosophy and token-efficiency benefits
|
||||
|
||||
**GETTING_STARTED.md**: Added "Quick Context Loading for Specific Tasks" section
|
||||
- Positioned between "Full Project Index Rebuild" and "Incremental Related Module Updates"
|
||||
- Includes practical examples and use case guidance
|
||||
- Explains how `/memory:load` works and when to use it
|
||||
|
||||
---
|
||||
|
||||
## [4.6.0] - 2025-10-18
|
||||
|
||||
### 🎯 Concept Clarification & Agent-Driven Analysis
|
||||
|
||||
This release introduces a concept clarification quality gate and agent-delegated intelligent analysis, significantly enhancing workflow planning accuracy and reducing execution errors.
|
||||
|
||||
#### Added
|
||||
|
||||
**Concept Clarification Quality Gate** (`/workflow:concept-clarify`):
|
||||
- **Dual-Mode Support**: Automatically detects and operates in brainstorm or plan workflows
|
||||
- **Brainstorm Mode**: Analyzes `synthesis-specification.md` after brainstorm synthesis
|
||||
- **Plan Mode**: Analyzes `ANALYSIS_RESULTS.md` between Phase 3 and Phase 4
|
||||
- **Interactive Q&A System**: Up to 5 targeted questions to resolve ambiguities
|
||||
- Multiple-choice or short-answer format
|
||||
- Covers requirements, architecture, UX, implementation, risks
|
||||
- Progressive disclosure - one question at a time
|
||||
- **Incremental Updates**: Saves clarifications after each answer to prevent context loss
|
||||
- **Coverage Summary**: Generates detailed report with recommendations
|
||||
- **Session Metadata**: Tracks verification status in workflow session
|
||||
- **Phase 3.5 Integration**: Inserted as quality gate in `/workflow:plan`
|
||||
- Pauses auto-continue workflow for user interaction
|
||||
- Auto-skips if no critical ambiguities detected
|
||||
- Updates ANALYSIS_RESULTS.md with user clarifications
|
||||
|
||||
**Agent-Delegated Intelligent Analysis** (Phase 3 Enhancement):
|
||||
- **CLI Execution Agent Integration**: Phase 3 now uses `cli-execution-agent`
|
||||
- Autonomous context discovery via MCP code-index
|
||||
- Enhanced prompt generation with discovered patterns
|
||||
- 5-phase agent workflow (understand → discover → enhance → execute → route)
|
||||
- **MCP-Powered Context Discovery**: Automatic file and pattern discovery
|
||||
- `mcp__code-index__find_files`: Pattern-based file discovery
|
||||
- `mcp__code-index__search_code_advanced`: Content-based code search
|
||||
- `mcp__code-index__get_file_summary`: Structural analysis
|
||||
- **Smart Tool Selection**: Agent automatically chooses Gemini for analysis tasks
|
||||
- **Execution Logging**: Complete agent execution log saved to session
|
||||
- **Session-Aware Routing**: Results automatically routed to correct session directory
|
||||
|
||||
**Enhanced Planning Workflow** (`/workflow:plan`):
|
||||
- **5-Phase Model**: Upgraded from 4-phase to 5-phase workflow
|
||||
- Phase 1: Session Discovery
|
||||
- Phase 2: Context Gathering
|
||||
- Phase 3: Intelligent Analysis (agent-delegated)
|
||||
- Phase 3.5: Concept Clarification (quality gate)
|
||||
- Phase 4: Task Generation
|
||||
- **Auto-Continue Enhancement**: Workflow pauses only at Phase 3.5 for user input
|
||||
- **Memory Management**: Added memory state check before Phase 3.5
|
||||
- Automatic `/compact` execution if context usage >110K tokens
|
||||
- Prevents context overflow during intensive analysis
|
||||
|
||||
#### Changed
|
||||
|
||||
**concept-clarify.md** - Enhanced with Dual-Mode Support:
|
||||
- **Mode Detection Logic**: Auto-detects workflow type based on artifact presence
|
||||
```bash
|
||||
IF EXISTS(ANALYSIS_RESULTS.md) → plan mode
|
||||
ELSE IF EXISTS(synthesis-specification.md) → brainstorm mode
|
||||
```
|
||||
- **Dynamic File Handling**: Loads and updates appropriate artifact based on mode
|
||||
- **Mode-Specific Validation**: Different validation rules for each mode
|
||||
- **Enhanced Metadata**: Tracks `clarify_mode` in session verification data
|
||||
- **Backward Compatible**: Preserves all existing brainstorm mode functionality
|
||||
|
||||
**plan.md** - Refactored for Agent Delegation:
|
||||
- **Phase 3 Delegation**: Changed from direct `concept-enhanced` call to `cli-execution-agent`
|
||||
- Agent receives: sessionId, contextPath, task description
|
||||
- Agent executes: autonomous context discovery + Gemini analysis
|
||||
- Agent outputs: ANALYSIS_RESULTS.md + execution log
|
||||
- **Phase 3.5 Integration**: New quality gate phase with interactive Q&A
|
||||
- Command: `SlashCommand(concept-clarify --session [sessionId])`
|
||||
- Validation: Checks for clarifications section and recommendation
|
||||
- Skip conditions: Auto-proceeds if no ambiguities detected
|
||||
- **TodoWrite Enhancement**: Updated to track 5 phases including Phase 3.5
|
||||
- **Data Flow Updates**: Enhanced context flow diagram showing agent execution
|
||||
- **Coordinator Checklist**: Added Phase 3.5 verification steps
|
||||
|
||||
**README.md & README_CN.md** - Documentation Updates:
|
||||
- **Version Badge**: Updated to v4.6.0
|
||||
- **What's New Section**: Highlighted key features of v4.6.0
|
||||
- Concept clarification quality gate
|
||||
- Agent-delegated analysis
|
||||
- Dual-mode support
|
||||
- Test-cycle-execute documentation
|
||||
- **Phase 5 Enhancement**: Added `/workflow:test-cycle-execute` documentation
|
||||
- Dynamic task generation explanation
|
||||
- Iterative testing workflow
|
||||
- CLI-driven analysis integration
|
||||
- Resume session support
|
||||
- **Command Reference**: Added test-cycle-execute to workflow commands table
|
||||
|
||||
#### Improved
|
||||
|
||||
**Workflow Quality Gates**:
|
||||
- 🎯 **Pre-Planning Verification**: concept-clarify catches ambiguities before task generation
|
||||
- 🤖 **Intelligent Analysis**: Agent-driven Phase 3 provides deeper context discovery
|
||||
- 🔄 **Interactive Control**: Users validate critical decisions at Phase 3.5
|
||||
- ✅ **Higher Accuracy**: Clarified requirements reduce execution errors
|
||||
|
||||
**Context Discovery**:
|
||||
- 🔍 **MCP Integration**: Leverages code-index for automatic pattern discovery
|
||||
- 📊 **Enhanced Prompts**: Agent enriches prompts with discovered context
|
||||
- 🎯 **Relevance Scoring**: Files ranked and filtered by relevance
|
||||
- 📁 **Execution Transparency**: Complete agent logs for debugging
|
||||
|
||||
**User Experience**:
|
||||
- ⏸️ **Single Interaction Point**: Only Phase 3.5 requires user input
|
||||
- ⚡ **Auto-Skip Intelligence**: No questions if analysis is already clear
|
||||
- 📝 **Incremental Saves**: Clarifications saved after each answer
|
||||
- 🔄 **Resume Support**: Can continue interrupted test workflows
|
||||
|
||||
#### Technical Details
|
||||
|
||||
**Concept Clarification Architecture**:
|
||||
```javascript
|
||||
Phase 1: Session Detection & Mode Detection
|
||||
↓
|
||||
IF EXISTS(process_dir/ANALYSIS_RESULTS.md):
|
||||
mode = "plan" → primary_artifact = ANALYSIS_RESULTS.md
|
||||
ELSE IF EXISTS(brainstorm_dir/synthesis-specification.md):
|
||||
mode = "brainstorm" → primary_artifact = synthesis-specification.md
|
||||
↓
|
||||
Phase 2: Load Artifacts (mode-specific)
|
||||
↓
|
||||
Phase 3: Ambiguity Scan (8 categories)
|
||||
↓
|
||||
Phase 4: Question Generation (max 5, prioritized)
|
||||
↓
|
||||
Phase 5: Interactive Q&A (one at a time)
|
||||
↓
|
||||
Phase 6: Incremental Updates (save after each answer)
|
||||
↓
|
||||
Phase 7: Completion Report with recommendations
|
||||
```
|
||||
|
||||
**Agent-Delegated Analysis Flow**:
|
||||
```javascript
|
||||
plan.md Phase 3:
|
||||
Task(cli-execution-agent) →
|
||||
Agent Phase 1: Understand analysis intent
|
||||
Agent Phase 2: MCP code-index discovery
|
||||
Agent Phase 3: Enhance prompt with patterns
|
||||
Agent Phase 4: Execute Gemini analysis
|
||||
Agent Phase 5: Route to .workflow/[session]/.process/ANALYSIS_RESULTS.md
|
||||
→ ANALYSIS_RESULTS.md + execution log
|
||||
```
|
||||
|
||||
**Workflow Data Flow**:
|
||||
```
|
||||
User Input
|
||||
↓
|
||||
Phase 1: session:start → sessionId
|
||||
↓
|
||||
Phase 2: context-gather → contextPath
|
||||
↓
|
||||
Phase 3: cli-execution-agent → ANALYSIS_RESULTS.md (enhanced)
|
||||
↓
|
||||
Phase 3.5: concept-clarify → ANALYSIS_RESULTS.md (clarified)
|
||||
↓ [User answers 0-5 questions]
|
||||
↓
|
||||
Phase 4: task-generate → IMPL_PLAN.md + task.json
|
||||
```
|
||||
|
||||
#### Files Changed
|
||||
|
||||
**Commands** (3 files):
|
||||
- `.claude/commands/workflow/concept-clarify.md` - Added dual-mode support (85 lines changed)
|
||||
- `.claude/commands/workflow/plan.md` - Agent delegation + Phase 3.5 (106 lines added)
|
||||
- `.claude/commands/workflow/tools/concept-enhanced.md` - Documentation updates
|
||||
|
||||
**Documentation** (3 files):
|
||||
- `README.md` - Version update + test-cycle-execute documentation (25 lines changed)
|
||||
- `README_CN.md` - Chinese version aligned with README.md (25 lines changed)
|
||||
- `CHANGELOG.md` - This changelog entry
|
||||
|
||||
**Total Impact**:
|
||||
- 6 files changed
|
||||
- 241 insertions, 50 deletions
|
||||
- Net: +191 lines
|
||||
|
||||
#### Backward Compatibility
|
||||
|
||||
**✅ Fully Backward Compatible**:
|
||||
- Existing workflows continue to work unchanged
|
||||
- concept-clarify preserves brainstorm mode functionality
|
||||
- Phase 3.5 auto-skips when no ambiguities detected
|
||||
- Agent delegation transparent to users
|
||||
- All existing commands and sessions unaffected
|
||||
|
||||
#### Benefits
|
||||
|
||||
**Planning Accuracy**:
|
||||
- 🎯 **Ambiguity Resolution**: Interactive Q&A eliminates underspecified requirements
|
||||
- 📊 **Better Context**: Agent discovers patterns missed by manual analysis
|
||||
- ✅ **Pre-Execution Validation**: Catches issues before task generation
|
||||
|
||||
**Workflow Efficiency**:
|
||||
- ⚡ **Autonomous Discovery**: MCP integration reduces manual context gathering
|
||||
- 🔄 **Smart Skipping**: No questions when analysis is already complete
|
||||
- 📝 **Incremental Progress**: Saves work after each clarification
|
||||
|
||||
**Development Quality**:
|
||||
- 🐛 **Fewer Errors**: Clarified requirements reduce implementation mistakes
|
||||
- 🎯 **Focused Tasks**: Better analysis produces more precise task breakdown
|
||||
- 📚 **Audit Trail**: Complete execution logs for debugging
|
||||
|
||||
#### Migration Notes
|
||||
|
||||
**No Action Required**:
|
||||
- All changes are additive and backward compatible
|
||||
- Existing workflows benefit from new features automatically
|
||||
- concept-clarify can be used manually in existing sessions
|
||||
|
||||
**Optional Enhancements**:
|
||||
- Use `/workflow:concept-clarify` manually before `/workflow:plan` for brainstorm workflows
|
||||
- Review Phase 3 execution logs in `.workflow/[session]/.chat/` for insights
|
||||
- Enable MCP tools for optimal agent context discovery
|
||||
|
||||
**New Workflow Pattern**:
|
||||
```bash
|
||||
# New recommended workflow with quality gates
|
||||
/workflow:brainstorm:auto-parallel "topic"
|
||||
/workflow:brainstorm:synthesis
|
||||
/workflow:concept-clarify # Optional but recommended
|
||||
/workflow:plan "description"
|
||||
# Phase 3.5 will pause for clarification Q&A if needed
|
||||
/workflow:execute
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## [4.4.1] - 2025-10-12
|
||||
|
||||
### 🔧 Implementation Approach Structure Refactoring
|
||||
|
||||
@@ -5,7 +5,6 @@
|
||||
This document defines project-specific coding standards and development principles.
|
||||
### CLI Tool Context Protocols
|
||||
For all CLI tool usage, command syntax, and integration guidelines:
|
||||
- **Tool Control Configuration**: @~/.claude/workflows/tool-control.yaml - Controls CLI tool availability for all commands and agent executions (if disabled, use other enabled CLI tools or Claude's own capabilities)
|
||||
- **MCP Tool Strategy**: @~/.claude/workflows/mcp-tool-strategy.md
|
||||
- **Intelligent Context Strategy**: @~/.claude/workflows/intelligent-tools-strategy.md
|
||||
- **Context Search Commands**: @~/.claude/workflows/context-search-strategy.md
|
||||
@@ -73,6 +72,7 @@ For all CLI tool usage, command syntax, and integration guidelines:
|
||||
## Platform-Specific Guidelines
|
||||
|
||||
### Windows Path Format Guidelines
|
||||
- always use complete absolute Windows paths with drive letters and backslashes for ALL file operations
|
||||
- **MCP Tools**: Use double backslash `D:\\path\\file.txt` (MCP doesn't support POSIX `/d/path`)
|
||||
- **Bash Commands**: Use forward slash `D:/path/file.txt` or POSIX `/d/path/file.txt`
|
||||
- **Relative Paths**: No conversion needed `./src`, `../config`
|
||||
|
||||
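A short illustration of the two conventions applied to the same file (drive letter and file names are placeholders):

```bash
# MCP tools: double-backslash Windows path
Read(D:\\projects\\app\\src\\index.ts)

# Bash commands: forward-slash or POSIX-style path for the same file
ls D:/projects/app/src/index.ts
ls /d/projects/app/src/index.ts
```
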
COMMAND_REFERENCE.md (new file)
@@ -0,0 +1,136 @@
|
||||
# Command Reference
|
||||
|
||||
This document provides a comprehensive reference for all commands available in the Claude Code Workflow (CCW) system.
|
||||
|
||||
> **Version 5.0 Update**: Streamlined command structure focusing on essential tools. Removed MCP code-index dependency for better stability and performance.
|
||||
|
||||
## Unified CLI Commands (`/cli:*`)
|
||||
|
||||
These commands provide direct access to AI tools for quick analysis and interaction without initiating a full workflow.
|
||||
|
||||
| Command | Description |
|
||||
|---|---|
|
||||
| `/cli:analyze` | Quick codebase analysis using CLI tools (codex/gemini/qwen). |
|
||||
| `/cli:chat` | Simple CLI interaction command for direct codebase analysis. |
|
||||
| `/cli:cli-init` | Initialize CLI tool configurations (Gemini and Qwen) based on workspace analysis. |
|
||||
| `/cli:codex-execute` | Automated task decomposition and execution with Codex using resume mechanism. |
|
||||
| `/cli:discuss-plan` | Orchestrates an iterative, multi-model discussion for planning and analysis without implementation. |
|
||||
| `/cli:execute` | Auto-execution of implementation tasks with YOLO permissions and intelligent context inference. |
|
||||
| `/cli:mode:bug-index` | Bug analysis and fix suggestions using CLI tools. |
|
||||
| `/cli:mode:code-analysis` | Deep code analysis and debugging using CLI tools with specialized template. |
|
||||
| `/cli:mode:plan` | Project planning and architecture analysis using CLI tools. |
|
||||
|
||||
## Workflow Commands (`/workflow:*`)
|
||||
|
||||
These commands orchestrate complex, multi-phase development processes, from planning to execution.
|
||||
|
||||
### Session Management
|
||||
|
||||
| Command | Description |
|
||||
|---|---|
|
||||
| `/workflow:session:start` | Discover existing sessions or start a new workflow session with intelligent session management. |
|
||||
| `/workflow:session:list` | List all workflow sessions with status. |
|
||||
| `/workflow:session:resume` | Resume the most recently paused workflow session. |
|
||||
| `/workflow:session:complete` | Mark the active workflow session as complete and remove active flag. |
|
||||
|
||||
### Core Workflow
|
||||
|
||||
| Command | Description |
|
||||
|---|---|
|
||||
| `/workflow:plan` | Orchestrate 5-phase planning workflow with quality gate, executing commands and passing context between phases. |
|
||||
| `/workflow:execute` | Coordinate agents for existing workflow tasks with automatic discovery. |
|
||||
| `/workflow:resume` | Intelligent workflow session resumption with automatic progress analysis. |
|
||||
| `/workflow:review` | Optional specialized review (security, architecture, docs) for completed implementation. |
|
||||
| `/workflow:status` | Generate on-demand views from JSON task data. |
|
||||
|
||||
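A typical end-to-end sequence with these commands (the feature description is illustrative; see COMMAND_SPEC.md for full parameter details):

```bash
# Plan, execute, then check progress in the active session
/workflow:plan "Add password reset endpoint to the Express API"
/workflow:execute
/workflow:status
```
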
### Brainstorming
|
||||
|
||||
| Command | Description |
|
||||
|---|---|
|
||||
| `/workflow:brainstorm:artifacts` | Generate role-specific guidance-specification.md dynamically based on selected roles. |
|
||||
| `/workflow:brainstorm:auto-parallel` | Parallel brainstorming automation with dynamic role selection and concurrent execution. |
|
||||
| `/workflow:brainstorm:synthesis` | Clarify and refine role analyses through intelligent Q&A and targeted updates. |
|
||||
| `/workflow:brainstorm:api-designer` | Generate or update api-designer/analysis.md addressing guidance-specification discussion points. |
|
||||
| `/workflow:brainstorm:data-architect` | Generate or update data-architect/analysis.md addressing guidance-specification discussion points. |
|
||||
| `/workflow:brainstorm:product-manager` | Generate or update product-manager/analysis.md addressing guidance-specification discussion points. |
|
||||
| `/workflow:brainstorm:product-owner` | Generate or update product-owner/analysis.md addressing guidance-specification discussion points. |
|
||||
| `/workflow:brainstorm:scrum-master` | Generate or update scrum-master/analysis.md addressing guidance-specification discussion points. |
|
||||
| `/workflow:brainstorm:subject-matter-expert` | Generate or update subject-matter-expert/analysis.md addressing guidance-specification discussion points. |
|
||||
| `/workflow:brainstorm:system-architect` | Generate or update system-architect/analysis.md addressing guidance-specification discussion points. |
|
||||
| `/workflow:brainstorm:ui-designer` | Generate or update ui-designer/analysis.md addressing guidance-specification discussion points. |
|
||||
| `/workflow:brainstorm:ux-expert` | Generate or update ux-expert/analysis.md addressing guidance-specification discussion points. |
|
||||
|
||||
### Quality & Verification
|
||||
|
||||
| Command | Description |
|
||||
|---|---|
|
||||
| `/workflow:action-plan-verify`| Perform non-destructive cross-artifact consistency and quality analysis of IMPL_PLAN.md and task.json before execution. |
|
||||
|
||||
### Test-Driven Development (TDD)
|
||||
|
||||
| Command | Description |
|
||||
|---|---|
|
||||
| `/workflow:tdd-plan` | Orchestrate TDD workflow planning with Red-Green-Refactor task chains. |
|
||||
| `/workflow:tdd-verify` | Verify TDD workflow compliance and generate quality report. |
|
||||
|
||||
### Test Generation & Execution
|
||||
|
||||
| Command | Description |
|
||||
|---|---|
|
||||
| `/workflow:test-gen` | Generate test plan and tasks by analyzing completed implementation. Use `/workflow:execute` to run generated tasks. |
|
||||
| `/workflow:test-fix-gen` | Generate test-fix plan and tasks from existing implementation or prompt. Use `/workflow:execute` to run generated tasks. |
|
||||
| `/workflow:test-cycle-execute` | Execute test-fix workflow with dynamic task generation and iterative fix cycles. Tasks are executed by `/workflow:execute`. |
|
||||
|
||||
### UI Design Workflow
|
||||
|
||||
| Command | Description |
|
||||
|---|---|
|
||||
| `/workflow:ui-design:explore-auto` | Exploratory UI design workflow with style-centric batch generation. |
|
||||
| `/workflow:ui-design:imitate-auto` | High-speed multi-page UI replication with batch screenshot capture. |
|
||||
| `/workflow:ui-design:batch-generate` | Prompt-driven batch UI generation using target-style-centric parallel execution. |
|
||||
| `/workflow:ui-design:capture` | Batch screenshot capture for UI design workflows using MCP or local fallback. |
|
||||
| `/workflow:ui-design:explore-layers` | Interactive deep UI capture with depth-controlled layer exploration. |
|
||||
| `/workflow:ui-design:style-extract` | Extract design style from reference images or text prompts using Claude's analysis. |
|
||||
| `/workflow:ui-design:layout-extract` | Extract structural layout information from reference images, URLs, or text prompts. |
|
||||
| `/workflow:ui-design:generate` | Assemble UI prototypes by combining layout templates with design tokens (pure assembler). |
|
||||
| `/workflow:ui-design:update` | Update brainstorming artifacts with finalized design system references. |
|
||||
| `/workflow:ui-design:animation-extract` | Extract animation and transition patterns from URLs, CSS, or interactive questioning. |
|
||||
|
||||
### Internal Tools
|
||||
|
||||
These commands are primarily used internally by other workflow commands but can be used manually.
|
||||
|
||||
| Command | Description |
|
||||
|---|---|
|
||||
| `/workflow:tools:concept-enhanced` | Enhanced intelligent analysis with parallel CLI execution and design blueprint generation. |
|
||||
| `/workflow:tools:conflict-resolution` | Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis. |
|
||||
| `/workflow:tools:context-gather` | Intelligently collect project context using universal-executor agent based on task description and package into standardized JSON. |
|
||||
| `/workflow:tools:task-generate` | Generate task JSON files and IMPL_PLAN.md from analysis results with artifacts integration. |
|
||||
| `/workflow:tools:task-generate-agent` | Autonomous task generation using action-planning-agent with discovery and output phases. |
|
||||
| `/workflow:tools:task-generate-tdd` | Generate TDD task chains with Red-Green-Refactor dependencies. |
|
||||
| `/workflow:tools:tdd-coverage-analysis` | Analyze test coverage and TDD cycle execution. |
|
||||
| `/workflow:tools:test-concept-enhanced` | Analyze test requirements and generate test generation strategy using Gemini. |
|
||||
| `/workflow:tools:test-context-gather` | Collect test coverage context and identify files requiring test generation. |
|
||||
| `/workflow:tools:test-task-generate` | Generate test-fix task JSON with iterative test-fix-retest cycle specification. |
|
||||
|
||||
## Task Commands (`/task:*`)
|
||||
|
||||
Commands for managing individual tasks within a workflow session.
|
||||
|
||||
| Command | Description |
|
||||
|---|---|
|
||||
| `/task:create` | Create implementation tasks with automatic context awareness. |
|
||||
| `/task:breakdown` | Intelligent task decomposition with context-aware subtask generation. |
|
||||
| `/task:execute` | Execute tasks with appropriate agents and context-aware orchestration. |
|
||||
| `/task:replan` | Replan individual tasks with detailed user input and change tracking. |
|
||||
|
||||
## Memory and Versioning Commands
|
||||
|
||||
| Command | Description |
|
||||
|---|---|
|
||||
| `/memory:update-full` | Complete project-wide CLAUDE.md documentation update. |
|
||||
| `/memory:load` | Quickly load key project context into memory based on a task description. |
|
||||
| `/memory:update-related` | Context-aware CLAUDE.md documentation updates based on recent changes. |
|
||||
| `/version` | Display version information and check for updates. |
|
||||
| `/enhance-prompt` | Context-aware prompt enhancement using session memory and codebase analysis. |
|
||||
|
||||
COMMAND_SPEC.md (new file)
@@ -0,0 +1,500 @@
|
||||
|
||||
# Claude Code Workflow (CCW) - Command Specification
|
||||
|
||||
**Version**: 5.0.0
|
||||
**Updated**: October 24, 2025
|
||||
|
||||
## 1. Introduction
|
||||
|
||||
This document provides a detailed technical specification for every command available in the Claude Code Workflow (CCW) system. It is intended for advanced users and developers who wish to understand the inner workings of CCW, customize commands, or build new workflows.
|
||||
|
||||
> **Version 5.0 Changes**: Removed MCP code-index dependency, streamlined TDD workflow with conflict resolution, and refocused brainstorming on role analysis instead of synthesis documents.
|
||||
|
||||
For a user-friendly overview, please see [COMMAND_REFERENCE.md](COMMAND_REFERENCE.md).
|
||||
|
||||
## 2. Command Categories
|
||||
|
||||
Commands are organized into the following categories:
|
||||
|
||||
- **Workflow Commands**: High-level orchestration for multi-phase development processes.
|
||||
- **CLI Commands**: Direct access to AI tools for analysis and interaction.
|
||||
- **Task Commands**: Management of individual work units within a workflow.
|
||||
- **Memory Commands**: Context and documentation management.
|
||||
- **UI Design Commands**: Specialized workflow for UI/UX design and prototyping.
|
||||
- **Testing Commands**: TDD and test generation workflows.
|
||||
|
||||
---
|
||||
|
||||
## 3. Workflow Commands
|
||||
|
||||
High-level orchestrators for complex, multi-phase development processes.
|
||||
|
||||
### **/workflow:plan**
|
||||
|
||||
- **Syntax**: `/workflow:plan [--agent] [--cli-execute] "text description"|file.md`
|
||||
- **Parameters**:
|
||||
- `--agent` (Optional, Flag): Use the `task-generate-agent` for autonomous task generation.
|
||||
- `--cli-execute` (Optional, Flag): Generate tasks with commands ready for CLI execution (e.g., using Codex).
|
||||
- `description|file.md` (Required, String): A description of the planning goal or a path to a markdown file containing the requirements.
|
||||
- **Responsibilities**: Orchestrates a 5-phase planning workflow that includes session start, context gathering, intelligent analysis, concept clarification (quality gate), and task generation.
|
||||
- **Agent Calls**: Delegates analysis to `@cli-execution-agent` and task generation to `@action-planning-agent`.
|
||||
- **Skill Invocation**: Does not directly invoke a skill, but the underlying agents may.
|
||||
- **Integration**: This is a primary entry point for starting a development workflow. It is followed by `/workflow:execute`.
|
||||
- **Example**:
|
||||
```bash
|
||||
/workflow:plan "Create a simple Express API that returns Hello World"
|
||||
```
|
||||
|
||||
### **/workflow:execute**
|
||||
|
||||
- **Syntax**: `/workflow:execute [--resume-session="session-id"]`
|
||||
- **Parameters**:
|
||||
- `--resume-session` (Optional, String): The ID of a paused session to resume.
|
||||
- **Responsibilities**: Discovers and executes all pending tasks in the active (or specified) workflow session. It handles dependency resolution and orchestrates agents to perform the work.
|
||||
- **Agent Calls**: Dynamically calls the agent specified in each task's `meta.agent` field (e.g., `@code-developer`, `@test-fix-agent`).
|
||||
- **Integration**: The primary command for implementing a plan generated by `/workflow:plan`.
|
||||
- **Example**:
|
||||
```bash
|
||||
# Execute tasks in the currently active session
|
||||
/workflow:execute
|
||||
```
|
||||
|
||||
### **/workflow:resume**
|
||||
|
||||
- **Syntax**: `/workflow:resume "session-id"`
|
||||
- **Parameters**:
|
||||
- `session-id` (Required, String): The ID of the workflow session to resume.
|
||||
- **Responsibilities**: A two-phase orchestrator that first analyzes the status of a paused session and then resumes it by calling `/workflow:execute --resume-session`.
|
||||
- **Agent Calls**: None directly. It orchestrates `/workflow:status` and `/workflow:execute`.
|
||||
- **Integration**: Used to continue a previously paused or interrupted workflow.
|
||||
- **Example**:
|
||||
```bash
|
||||
/workflow:resume "WFS-user-login-feature"
|
||||
```
|
||||
|
||||
### **/workflow:review**
|
||||
|
||||
- **Syntax**: `/workflow:review [--type=security|architecture|action-items|quality] [session-id]`
|
||||
- **Parameters**:
|
||||
- `--type` (Optional, String): The type of review to perform. Defaults to `quality`.
|
||||
- `session-id` (Optional, String): The session to review. Defaults to the active session.
|
||||
- **Responsibilities**: Performs a specialized, post-implementation review. This is optional, as the default quality gate is passing tests.
|
||||
- **Agent Calls**: Uses `gemini-wrapper` or `qwen-wrapper` for analysis based on the review type.
|
||||
- **Integration**: Used after `/workflow:execute` to perform audits before deployment.
|
||||
- **Example**:
|
||||
```bash
|
||||
/workflow:review --type=security
|
||||
```
|
||||
|
||||
### **/workflow:status**
|
||||
|
||||
- **Syntax**: `/workflow:status [task-id]`
|
||||
- **Parameters**:
|
||||
- `task-id` (Optional, String): If provided, shows details for a specific task.
|
||||
- **Responsibilities**: Generates and displays an on-demand view of the current workflow's status by reading task JSON data. Does not modify any state.
|
||||
- **Agent Calls**: None.
|
||||
- **Integration**: A read-only command used to check progress at any point.
|
||||
- **Example**:
|
||||
```bash
|
||||
/workflow:status
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 4. Session Management Commands
|
||||
|
||||
Commands for creating, listing, and managing workflow sessions.
|
||||
|
||||
### **/workflow:session:start**
|
||||
- **Syntax**: `/workflow:session:start [--auto|--new] [description]`
|
||||
- **Parameters**:
|
||||
- `--auto` (Flag): Intelligently reuses an active session if relevant, otherwise creates a new one.
|
||||
- `--new` (Flag): Forces the creation of a new session.
|
||||
- `description` (Optional, String): A description for the new session's goal.
|
||||
- **Responsibilities**: Manages session creation and activation. It can discover existing sessions, create new ones, and set the active session marker.
|
||||
- **Agent Calls**: None.
|
||||
- **Example**:
|
||||
```bash
|
||||
/workflow:session:start "My New Feature"
|
||||
```
|
||||
|
||||
### **/workflow:session:list**
|
||||
- **Syntax**: `/workflow:session:list`
|
||||
- **Parameters**: None.
|
||||
- **Responsibilities**: Lists all workflow sessions found in the `.workflow/` directory, showing their status (active, paused, completed).
|
||||
- **Agent Calls**: None.
|
||||
- **Example**:
|
||||
```bash
|
||||
/workflow:session:list
|
||||
```
|
||||
|
||||
### **/workflow:session:resume**
|
||||
- **Syntax**: `/workflow:session:resume`
|
||||
- **Parameters**: None.
|
||||
- **Responsibilities**: Finds the most recently paused session and marks it as active.
|
||||
- **Agent Calls**: None.
|
||||
- **Example**:
|
||||
```bash
|
||||
/workflow:session:resume
|
||||
```
|
||||
|
||||
### **/workflow:session:complete**
|
||||
- **Syntax**: `/workflow:session:complete [--detailed]`
|
||||
- **Parameters**:
|
||||
- `--detailed` (Flag): Shows a more detailed completion summary.
|
||||
- **Responsibilities**: Marks the currently active session as "completed", records timestamps, and removes the `.active-*` marker file.
|
||||
- **Agent Calls**: None.
|
||||
- **Example**:
|
||||
```bash
|
||||
/workflow:session:complete
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 5. CLI Commands
|
||||
|
||||
Direct access to AI tools for analysis and code interaction without a full workflow structure.
|
||||
|
||||
### **/cli:analyze**
|
||||
- **Syntax**: `/cli:analyze [--agent] [--tool codex|gemini|qwen] [--enhance] <analysis target>`
|
||||
- **Responsibilities**: Performs read-only codebase analysis. Can operate in standard mode (direct tool call) or agent mode (`@cli-execution-agent`) for automated context discovery.
|
||||
- **Agent Calls**: `@cli-execution-agent` (if `--agent` is used).
|
||||
- **Example**:
|
||||
```bash
|
||||
/cli:analyze "authentication patterns"
|
||||
```
|
||||
|
||||
### **/cli:chat**
|
||||
- **Syntax**: `/cli:chat [--agent] [--tool codex|gemini|qwen] [--enhance] <inquiry>`
|
||||
- **Responsibilities**: Provides a direct Q&A interface with AI tools for codebase questions. Read-only.
|
||||
- **Agent Calls**: `@cli-execution-agent` (if `--agent` is used).
|
||||
- **Example**:
|
||||
```bash
|
||||
/cli:chat "how does the caching layer work?"
|
||||
```
|
||||
|
||||
### **/cli:cli-init**
|
||||
- **Syntax**: `/cli:cli-init [--tool gemini|qwen|all] [--output path] [--preview]`
|
||||
- **Responsibilities**: Initializes configuration for CLI tools (`.gemini/`, `.qwen/`) by analyzing the workspace and creating optimized `.geminiignore` and `.qwenignore` files.
|
||||
- **Agent Calls**: None.
|
||||
- **Example**:
|
||||
```bash
|
||||
/cli:cli-init
|
||||
```
|
||||
|
||||
### **/cli:codex-execute**
|
||||
- **Syntax**: `/cli:codex-execute [--verify-git] <description|task-id>`
|
||||
- **Responsibilities**: Orchestrates automated task decomposition and sequential execution using Codex. It uses the `resume --last` mechanism for context continuity between subtasks.
|
||||
- **Agent Calls**: None directly, but orchestrates `codex` CLI tool.
|
||||
- **Example**:
|
||||
```bash
|
||||
/cli:codex-execute "implement user authentication system"
|
||||
```
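
When you want the orchestrator to confirm a clean working tree before it starts decomposing work, add the `--verify-git` flag from the syntax above (sketch):

```bash
# Check git state first, then decompose and execute sequentially
/cli:codex-execute --verify-git "implement user authentication system"
```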
|
||||
|
||||
### **/cli:discuss-plan**
|
||||
- **Syntax**: `/cli:discuss-plan [--topic '...'] [--task-id '...'] [--rounds N] <input>`
|
||||
- **Responsibilities**: Orchestrates an iterative, multi-model (Gemini, Codex, Claude) discussion to perform deep analysis and planning without modifying code.
|
||||
- **Agent Calls**: None directly, but orchestrates `gemini-wrapper` and `codex` CLI tools.
|
||||
- **Example**:
|
||||
```bash
|
||||
/cli:discuss-plan --topic "Design a new caching layer"
|
||||
```
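
The round count and task binding can also be set explicitly; the task ID below is illustrative:

```bash
# Limit the discussion to two rounds, focused on an existing task
/cli:discuss-plan --task-id "IMPL-2" --rounds 2 "Evaluate the proposed caching strategy"
```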
|
||||
|
||||
### **/cli:execute**
|
||||
- **Syntax**: `/cli:execute [--agent] [--tool codex|gemini|qwen] [--enhance] <description|task-id>`
|
||||
- **Responsibilities**: Executes implementation tasks with auto-approval (`YOLO` mode). **MODIFIES CODE**.
|
||||
- **Agent Calls**: `@cli-execution-agent` (if `--agent` is used).
|
||||
- **Example**:
|
||||
```bash
|
||||
/cli:execute "implement JWT authentication with middleware"
|
||||
```
|
||||
|
||||
### **/cli:mode:bug-index**
|
||||
- **Syntax**: `/cli:mode:bug-index [--agent] [--tool ...] [--enhance] [--cd path] <bug description>`
|
||||
- **Responsibilities**: Performs systematic bug analysis using the `bug-fix.md` template. Read-only.
|
||||
- **Agent Calls**: `@cli-execution-agent` (if `--agent` is used).
|
||||
- **Example**:
|
||||
```bash
|
||||
/cli:mode:bug-index "null pointer error in login flow"
|
||||
```
|
||||
|
||||
### **/cli:mode:code-analysis**
|
||||
- **Syntax**: `/cli:mode:code-analysis [--agent] [--tool ...] [--enhance] [--cd path] <analysis target>`
|
||||
- **Responsibilities**: Performs deep code analysis and execution path tracing using the `code-analysis.md` template. Read-only.
|
||||
- **Agent Calls**: `@cli-execution-agent` (if `--agent` is used).
|
||||
- **Example**:
|
||||
```bash
|
||||
/cli:mode:code-analysis "trace authentication execution flow"
|
||||
```
|
||||
|
||||
### **/cli:mode:plan**
|
||||
- **Syntax**: `/cli:mode:plan [--agent] [--tool ...] [--enhance] [--cd path] <topic>`
|
||||
- **Responsibilities**: Performs comprehensive planning and architecture analysis using the `plan.md` template. Read-only.
|
||||
- **Agent Calls**: `@cli-execution-agent` (if `--agent` is used).
|
||||
- **Example**:
|
||||
```bash
|
||||
/cli:mode:plan "design user dashboard architecture"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 6. Task Commands
|
||||
|
||||
Commands for managing individual tasks within a workflow session.
|
||||
|
||||
### **/task:create**
|
||||
- **Syntax**: `/task:create "task title"`
|
||||
- **Parameters**:
|
||||
- `title` (Required, String): The title of the task.
|
||||
- **Responsibilities**: Creates a new task JSON file within the active session, auto-generating an ID and inheriting context.
|
||||
- **Agent Calls**: Suggests an agent (e.g., `@code-developer`) based on task type but does not call it.
|
||||
- **Example**:
|
||||
```bash
|
||||
/task:create "Build authentication module"
|
||||
```
|
||||
|
||||
### **/task:breakdown**
|
||||
- **Syntax**: `/task:breakdown <task-id>`
|
||||
- **Parameters**:
|
||||
- `task-id` (Required, String): The ID of the parent task to break down.
|
||||
- **Responsibilities**: Manually decomposes a complex parent task into smaller, executable subtasks. Enforces a 10-task limit and file cohesion.
|
||||
- **Agent Calls**: None.
|
||||
- **Example**:
|
||||
```bash
|
||||
/task:breakdown IMPL-1
|
||||
```
|
||||
|
||||
### **/task:execute**
|
||||
- **Syntax**: `/task:execute <task-id>`
|
||||
- **Parameters**:
|
||||
- `task-id` (Required, String): The ID of the task to execute.
|
||||
- **Responsibilities**: Executes a single task or a parent task (by executing its subtasks) using the assigned agent.
|
||||
- **Agent Calls**: Calls the agent specified in the task's `meta.agent` field.
|
||||
- **Example**:
|
||||
```bash
|
||||
/task:execute IMPL-1.1
|
||||
```
|
||||
|
||||
### **/task:replan**
|
||||
- **Syntax**: `/task:replan <task-id> ["text"|file.md] | --batch [report.md]`
|
||||
- **Parameters**:
|
||||
- `task-id` (String): The ID of the task to replan.
|
||||
- `input` (String): Text or a file path with the new specifications.
|
||||
- `--batch` (Flag): Enables batch processing from a verification report.
|
||||
- **Responsibilities**: Updates a task's specification, creating a versioned backup of the previous state.
|
||||
- **Agent Calls**: None.
|
||||
- **Example**:
|
||||
```bash
|
||||
/task:replan IMPL-1 "Add OAuth2 authentication support"
|
||||
```
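
Batch mode consumes a verification report instead of a single task; the report path below is an assumed location, not a fixed convention:

```bash
# Apply replanning items from a verification report in one pass
/task:replan --batch .workflow/WFS-auth/VERIFICATION_REPORT.md
```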
|
||||
|
||||
---
|
||||
|
||||
## 7. Memory and Versioning Commands
|
||||
|
||||
### **/memory:update-full**
|
||||
- **Syntax**: `/memory:update-full [--tool gemini|qwen|codex] [--path <directory>]`
|
||||
- **Responsibilities**: Orchestrates a complete, project-wide update of all `CLAUDE.md` documentation files.
|
||||
- **Agent Calls**: None directly, but orchestrates CLI tools (`gemini-wrapper`, etc.).
|
||||
- **Example**:
|
||||
```bash
|
||||
/memory:update-full
|
||||
```
|
||||
|
||||
### **/memory:load**
|
||||
- **Syntax**: `/memory:load [--tool gemini|qwen] "task context description"`
|
||||
- **Parameters**:
|
||||
- `"task context description"` (Required, String): Task description to guide context extraction.
|
||||
- `--tool <gemini|qwen>` (Optional): Specify CLI tool for agent to use (default: gemini).
|
||||
- **Responsibilities**: Delegates to `@general-purpose` agent to analyze the project and return a structured "Core Content Pack". This pack is loaded into the main thread's memory, providing essential context for subsequent operations.
|
||||
- **Agent-Driven Execution**: Fully delegates to general-purpose agent which autonomously:
|
||||
1. Analyzes project structure and documentation
|
||||
2. Extracts keywords from task description
|
||||
3. Discovers relevant files using ripgrep/find search tools
|
||||
4. Executes Gemini/Qwen CLI for deep analysis
|
||||
5. Generates structured JSON content package
|
||||
- **Core Philosophy**: Read-only analysis, token-efficient (CLI analysis in agent), structured output
|
||||
- **Agent Calls**: `@general-purpose` agent.
|
||||
- **Integration**: Provides quick, task-relevant context for subsequent agent operations while minimizing token consumption.
|
||||
- **Example**:
|
||||
```bash
|
||||
/memory:load "在当前前端基础上开发用户认证功能"
|
||||
/memory:load --tool qwen "重构支付模块API"
|
||||
```
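
The exact shape of the returned "Core Content Pack" is determined by the agent at runtime; a minimal sketch of the kind of JSON package it might return (field names are assumptions, not a fixed schema):

```bash
# Illustrative only - the agent returns a structure along these lines
cat <<'EOF'
{
  "task_context": "develop user authentication on the existing frontend",
  "relevant_files": ["src/auth/login.ts", "src/api/client.ts"],
  "key_patterns": ["JWT middleware", "session storage"],
  "analysis_tool": "gemini"
}
EOF
```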
|
||||
|
||||
### **/memory:update-related**
|
||||
- **Syntax**: `/memory:update-related [--tool gemini|qwen|codex]`
|
||||
- **Responsibilities**: Performs a context-aware update of `CLAUDE.md` files for modules affected by recent git changes.
|
||||
- **Agent Calls**: None directly, but orchestrates CLI tools.
|
||||
- **Example**:
|
||||
```bash
|
||||
/memory:update-related
|
||||
```
|
||||
|
||||
### **/version**
|
||||
- **Syntax**: `/version`
|
||||
- **Parameters**: None.
|
||||
- **Responsibilities**: Displays local and global installation versions and checks for updates from GitHub.
|
||||
- **Agent Calls**: None.
|
||||
- **Example**:
|
||||
```bash
|
||||
/version
|
||||
```
|
||||
|
||||
### **/enhance-prompt**
|
||||
- **Syntax**: `/enhance-prompt <user input>`
|
||||
- **Responsibilities**: A system-level skill that enhances a user's prompt by adding context from session memory and codebase analysis. It is typically triggered automatically by other commands that include the `--enhance` flag.
|
||||
- **Skill Invocation**: This is a core skill, invoked when `--enhance` is used.
|
||||
- **Agent Calls**: None.
|
||||
- **Example (as part of another command)**:
|
||||
```bash
|
||||
/cli:execute --enhance "fix the login button"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 8. UI Design Commands
|
||||
|
||||
Specialized workflow for UI/UX design, from style extraction to prototype generation.
|
||||
|
||||
### **/workflow:ui-design:explore-auto**
|
||||
- **Syntax**: `/workflow:ui-design:explore-auto [--prompt "..."] [--images "..."] [--targets "..."] ...`
|
||||
- **Responsibilities**: Fully autonomous, multi-phase workflow that orchestrates style extraction, layout extraction, and prototype generation.
|
||||
- **Agent Calls**: `@ui-design-agent`.
|
||||
- **Example**:
|
||||
```bash
|
||||
/workflow:ui-design:explore-auto --prompt "Modern blog: home, article, author"
|
||||
```
|
||||
|
||||
### **/workflow:ui-design:imitate-auto**
|
||||
- **Syntax**: `/workflow:ui-design:imitate-auto --url-map "<map>" [--capture-mode <batch|deep>] ...`
|
||||
- **Responsibilities**: High-speed, multi-page UI replication workflow that captures screenshots and orchestrates the full design pipeline.
|
||||
- **Agent Calls**: `@ui-design-agent`.
|
||||
- **Example**:
|
||||
```bash
|
||||
/workflow:ui-design:imitate-auto --url-map "home:https://linear.app, features:https://linear.app/features"
|
||||
```
|
||||
|
||||
### **/workflow:ui-design:batch-generate**
|
||||
- **Syntax**: `/workflow:ui-design:batch-generate [--prompt "..."] [--targets "..."] ...`
|
||||
- **Responsibilities**: Prompt-driven batch UI generation with parallel execution for multiple targets and styles.
|
||||
- **Agent Calls**: `@ui-design-agent`.
|
||||
- **Example**:
|
||||
```bash
|
||||
/workflow:ui-design:batch-generate --prompt "Dashboard with metric cards and charts"
|
||||
```
|
||||
|
||||
### **/workflow:ui-design:capture**
|
||||
- **Syntax**: `/workflow:ui-design:capture --url-map "target:url,..." ...`
|
||||
- **Responsibilities**: Batch screenshot capture tool using MCP Chrome DevTools with multi-tier fallback strategy (MCP → Playwright → Chrome → Manual).
|
||||
- **Agent Calls**: None directly, uses MCP Chrome DevTools or browser automation as fallback.
|
||||
- **Example**:
|
||||
```bash
|
||||
/workflow:ui-design:capture --url-map "home:https://linear.app"
|
||||
```
|
||||
|
||||
### **/workflow:ui-design:explore-layers**
|
||||
- **Syntax**: `/workflow:ui-design:explore-layers --url <url> --depth <1-5> ...`
|
||||
- **Responsibilities**: Performs a deep, interactive UI capture of a single URL, exploring layers from the full page down to the Shadow DOM.
|
||||
- **Agent Calls**: None directly, uses MCP Chrome DevTools for layer exploration.
|
||||
- **Example**:
|
||||
```bash
|
||||
/workflow:ui-design:explore-layers --url https://linear.app --depth 3
|
||||
```
|
||||
|
||||
### **/workflow:ui-design:style-extract**
|
||||
- **Syntax**: `/workflow:ui-design:style-extract [--images "..."] [--prompt "..."] ...`
|
||||
- **Responsibilities**: Extracts design styles from images or text prompts and generates production-ready design systems (`design-tokens.json`, `style-guide.md`).
|
||||
- **Agent Calls**: `@ui-design-agent`.
|
||||
- **Example**:
|
||||
```bash
|
||||
/workflow:ui-design:style-extract --images "design-refs/*.png" --variants 3
|
||||
```
|
||||
|
||||
### **/workflow:ui-design:layout-extract**
|
||||
- **Syntax**: `/workflow:ui-design:layout-extract [--images "..."] [--urls "..."] ...`
|
||||
- **Responsibilities**: Extracts structural layout information (HTML structure, CSS layout rules) separately from visual style.
|
||||
- **Agent Calls**: `@ui-design-agent`.
|
||||
- **Example**:
|
||||
```bash
|
||||
/workflow:ui-design:layout-extract --urls "home:https://linear.app" --mode imitate
|
||||
```
|
||||
|
||||
### **/workflow:ui-design:generate**
|
||||
- **Syntax**: `/workflow:ui-design:generate [--base-path <path>] ...`
|
||||
- **Responsibilities**: A pure assembler that combines pre-extracted layout templates with design tokens to generate final UI prototypes.
|
||||
- **Agent Calls**: `@ui-design-agent`.
|
||||
- **Example**:
|
||||
```bash
|
||||
/workflow:ui-design:generate --session WFS-design-run
|
||||
```
|
||||
|
||||
### **/workflow:ui-design:update**
|
||||
- **Syntax**: `/workflow:ui-design:update --session <session_id> ...`
|
||||
- **Responsibilities**: Synchronizes the finalized design system references into the core brainstorming artifacts (`synthesis-specification.md`) to make them available for the planning phase.
|
||||
- **Agent Calls**: None.
|
||||
- **Example**:
|
||||
```bash
|
||||
/workflow:ui-design:update --session WFS-my-app
|
||||
```
|
||||
|
||||
### **/workflow:ui-design:animation-extract**
|
||||
- **Syntax**: `/workflow:ui-design:animation-extract [--urls "<list>"] [--mode <auto|interactive>] ...`
|
||||
- **Responsibilities**: Extracts animation and transition patterns from URLs (auto mode) or through interactive questioning to generate animation tokens.
|
||||
- **Agent Calls**: `@ui-design-agent` (for interactive mode).
|
||||
- **Example**:
|
||||
```bash
|
||||
/workflow:ui-design:animation-extract --urls "home:https://linear.app" --mode auto
|
||||
```
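
Interactive mode asks guided questions instead of scraping URLs, which is useful when no reference site exists:

```bash
/workflow:ui-design:animation-extract --mode interactive
```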
|
||||
|
||||
---
|
||||
|
||||
## 9. Testing Commands
|
||||
|
||||
Workflows for Test-Driven Development (TDD) and post-implementation test generation.
|
||||
|
||||
### **/workflow:tdd-plan**
|
||||
- **Syntax**: `/workflow:tdd-plan [--agent] "feature description"|file.md`
|
||||
- **Responsibilities**: Orchestrates a 7-phase TDD planning workflow, creating tasks with Red-Green-Refactor cycles.
|
||||
- **Agent Calls**: Orchestrates sub-commands which may call agents.
|
||||
- **Example**:
|
||||
```bash
|
||||
/workflow:tdd-plan "Implement a secure login endpoint"
|
||||
```
|
||||
|
||||
### **/workflow:tdd-verify**
|
||||
- **Syntax**: `/workflow:tdd-verify [session-id]`
|
||||
- **Responsibilities**: Verifies TDD workflow compliance by analyzing task chains, test coverage, and cycle execution.
|
||||
- **Agent Calls**: None directly, orchestrates `gemini-wrapper`.
|
||||
- **Example**:
|
||||
```bash
|
||||
/workflow:tdd-verify WFS-login-tdd
|
||||
```
|
||||
|
||||
### **/workflow:test-gen**
|
||||
- **Syntax**: `/workflow:test-gen [--use-codex] [--cli-execute] <source-session-id>`
|
||||
- **Responsibilities**: Creates an independent test-fix workflow by analyzing a completed implementation session.
|
||||
- **Agent Calls**: Orchestrates sub-commands that call `@code-developer` and `@test-fix-agent`.
|
||||
- **Example**:
|
||||
```bash
|
||||
/workflow:test-gen WFS-user-auth-v2
|
||||
```
|
||||
|
||||
### **/workflow:test-fix-gen**
|
||||
- **Syntax**: `/workflow:test-fix-gen [--use-codex] [--cli-execute] (<source-session-id> | "description" | /path/to/file.md)`
|
||||
- **Responsibilities**: Creates an independent test-fix workflow from either a completed session or a feature description.
|
||||
- **Agent Calls**: Orchestrates sub-commands that call `@code-developer` and `@test-fix-agent`.
|
||||
- **Example**:
|
||||
```bash
|
||||
/workflow:test-fix-gen "Test the user authentication API endpoints"
|
||||
```
|
||||
|
||||
### **/workflow:test-cycle-execute**
|
||||
- **Syntax**: `/workflow:test-cycle-execute [--resume-session="session-id"] [--max-iterations=N]`
|
||||
- **Responsibilities**: Executes a test-fix workflow by delegating to `/workflow:execute`. Generates test tasks dynamically and creates intermediate fix tasks based on test results.
|
||||
- **Agent Calls**: Delegates to `/workflow:execute` which invokes `@test-fix-agent` for task execution.
|
||||
- **Note**: This command generates tasks; actual execution is performed by `/workflow:execute`.
|
||||
- **Example**:
|
||||
```bash
|
||||
/workflow:test-cycle-execute --resume-session="WFS-test-user-auth"
|
||||
```
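
The iteration cap can be combined with session resumption; the values below are illustrative:

```bash
# Resume an existing test session but stop after three fix iterations
/workflow:test-cycle-execute --resume-session="WFS-test-user-auth" --max-iterations=3
```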
|
||||
|
||||
# 🚀 Claude Code Workflow (CCW) - Getting Started Guide
|
||||
|
||||
Welcome to Claude Code Workflow (CCW) v5.0! This guide will help you get up and running in 5 minutes and experience AI-driven automated software development with our streamlined, dependency-free workflow system.
|
||||
|
||||
**Project Repository**: [catlog22/Claude-Code-Workflow](https://github.com/catlog22/Claude-Code-Workflow)
|
||||
|
||||
> **🎉 What's New in v5.0**: Less is more! We've removed external MCP dependencies and simplified workflows. CCW now uses standard tools (ripgrep/find) for better stability and performance. The brainstorming workflow focuses on role analysis for clearer planning.
|
||||
|
||||
---
|
||||
|
||||
Let's build a "Hello World" web application from scratch with a simple example.
|
||||
|
||||
First, make sure you have installed CCW according to the [Installation Guide](INSTALL.md).
|
||||
|
||||
### Step 2: Create an Execution Plan (Automatically Starts a Session)
|
||||
|
||||
Now, tell CCW what you want to do. CCW will analyze your request and automatically generate a detailed, executable task plan.
|
||||
|
||||
```bash
/workflow:plan "Create a simple Express API that returns Hello World at the root path"
|
||||
```
|
||||
|
||||
> **💡 Note**: `/workflow:plan` automatically creates and starts a workflow session. No need to manually run `/workflow:session:start`. The session will be auto-named based on your task description, e.g., `WFS-create-a-simple-express-api`.
|
||||
|
||||
This command kicks off a fully automated planning process, which includes:
|
||||
1. **Context Gathering**: Analyzing your project environment.
|
||||
2. **Agent Analysis**: AI agents think about the best implementation path.
|
||||
3. **Task Generation**: Creating specific task files (in `.json` format).
|
||||
|
||||
### Step 3: Execute the Plan
|
||||
|
||||
Once the plan is created, you can command the AI agents to start working.
|
||||
|
||||
|
||||
You will see CCW's agents (like `@code-developer`) begin to execute tasks one by one. It will automatically create files, write code, and install dependencies.
|
||||
|
||||
### Step 4: Check the Status
|
||||
|
||||
Want to know the progress? You can check the status of the current workflow at any time.
|
||||
|
||||
|
||||
## 🛠️ Common Scenarios
|
||||
|
||||
### Scenario 1: Quick Feature Development
|
||||
|
||||
For simple, well-defined features, use the direct "plan → execute" pattern:
|
||||
|
||||
```bash
|
||||
# Create plan (auto-creates session)
|
||||
/workflow:plan "Implement JWT-based user login and registration"
|
||||
|
||||
# Execute
|
||||
/workflow:execute
|
||||
```
|
||||
|
||||
> **💡 Note**: `/workflow:plan` automatically creates a session. You can also manually start a session first with `/workflow:session:start "Feature Name"`.
|
||||
|
||||
### Scenario 2: UI Design Exploration
|
||||
|
||||
For UI-focused projects, start with design exploration before implementation: **ui-design → update → plan → execute**
|
||||
|
||||
```bash
|
||||
# Step 1: Generate UI design variations (auto-creates session)
|
||||
/workflow:ui-design:explore-auto --prompt "A modern, clean admin dashboard login page"
|
||||
|
||||
# Step 2: Review designs in compare.html, then update brainstorming artifacts
|
||||
/workflow:ui-design:update --session <session-id> --selected-prototypes "login-v1,login-v2"
|
||||
|
||||
# Step 3: Generate implementation plan with design references
|
||||
/workflow:plan
|
||||
|
||||
# Step 4: Execute the implementation
|
||||
/workflow:execute
|
||||
```
|
||||
|
||||
> **💡 Tip**: The `update` command integrates selected design prototypes into brainstorming artifacts, ensuring implementation follows the approved designs.
|
||||
|
||||
### Scenario 3: Complex Feature with Multi-Agent Brainstorming
|
||||
|
||||
For complex features requiring thorough analysis, use the complete workflow: **brainstorm → plan → execute**
|
||||
|
||||
```bash
|
||||
# Step 1: Multi-agent brainstorming (auto-creates session)
|
||||
/workflow:brainstorm:auto-parallel "Design a real-time collaborative document editing system with conflict resolution"
|
||||
|
||||
# Optional: Specify number of expert roles (default: 3, max: 9)
|
||||
/workflow:brainstorm:auto-parallel "Build scalable microservices platform" --count 5
|
||||
|
||||
# Step 2: Generate implementation plan from brainstorming results
|
||||
/workflow:plan
|
||||
|
||||
# Step 3: Execute the plan
|
||||
/workflow:execute
|
||||
```
|
||||
|
||||
**Brainstorming Benefits**:
|
||||
- **Auto role selection**: Analyzes your topic and selects 3-9 relevant expert roles (system-architect, ui-designer, product-manager, etc.)
|
||||
- **Parallel execution**: Multiple AI agents analyze simultaneously from different perspectives
|
||||
- **Comprehensive specification**: Generates integrated requirements and design document
|
||||
|
||||
**When to Use Brainstorming**:
|
||||
- Complex features requiring multiple perspectives
|
||||
- Architectural decisions with significant impact
|
||||
- When you need thorough requirements before implementation
|
||||
|
||||
### Scenario 4: Quality Assurance - Action Plan Verification
|
||||
|
||||
After planning, validate your implementation plan for consistency and completeness:
|
||||
|
||||
```bash
|
||||
# After /workflow:plan completes, verify task quality
|
||||
/workflow:action-plan-verify
|
||||
|
||||
# The command will:
|
||||
# 1. Check requirements coverage (all requirements have tasks)
|
||||
# 2. Validate task dependencies (no circular or broken dependencies)
|
||||
# 3. Ensure synthesis alignment (tasks match architectural decisions)
|
||||
# 4. Assess task specification quality
|
||||
# 5. Generate detailed verification report with remediation todos
|
||||
```
|
||||
|
||||
**The verification report includes**:
|
||||
- Requirements coverage analysis
|
||||
- Dependency graph validation
|
||||
- Synthesis alignment checks
|
||||
- Task specification quality assessment
|
||||
- Prioritized remediation recommendations
|
||||
|
||||
**When to Use**:
|
||||
- After `/workflow:plan` generates IMPL_PLAN.md and task files
|
||||
- Before starting `/workflow:execute`
|
||||
- When working on complex projects with many dependencies
|
||||
- When you want to ensure high-quality task specifications
|
||||
|
||||
**Benefits**:
|
||||
- Catches planning errors before execution
|
||||
- Ensures complete requirements coverage
|
||||
- Validates architectural consistency
|
||||
- Identifies resource conflicts and skill gaps
|
||||
- Provides actionable remediation plan with TodoWrite integration
|
||||
|
||||
### Scenario 6: Bug Fixing
|
||||
|
||||
Quick bug analysis and fix workflow:
|
||||
|
||||
```bash
|
||||
# Analyze the bug
|
||||
/cli:mode:bug-index "Incorrect success message with wrong password"
|
||||
|
||||
# Claude will analyze and then directly implement the fix based on the analysis
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 🔧 Workflow-Free Usage: Standalone Tools
|
||||
Suitable for large-scale refactoring, architectural changes, or first-time CCW use:
|
||||
|
||||
```bash
|
||||
# Rebuild entire project documentation index
|
||||
/update-memory-full
|
||||
/memory:update-full
|
||||
|
||||
# Use specific tool for indexing
|
||||
/update-memory-full --tool gemini # Comprehensive analysis (recommended)
|
||||
/update-memory-full --tool qwen # Architecture focus
|
||||
/update-memory-full --tool codex # Implementation details
|
||||
/memory:update-full --tool gemini # Comprehensive analysis (recommended)
|
||||
/memory:update-full --tool qwen # Architecture focus
|
||||
/memory:update-full --tool codex # Implementation details
|
||||
```
|
||||
|
||||
**When to Execute**:
|
||||
- Weekly routine maintenance
|
||||
- When AI output drift is detected
|
||||
|
||||
#### Quick Context Loading for Specific Tasks
|
||||
|
||||
When you need immediate, task-specific context without updating documentation:
|
||||
|
||||
```bash
|
||||
# Load context for a specific task into memory
|
||||
/memory:load "在当前前端基础上开发用户认证功能"
|
||||
|
||||
# Use alternative CLI tool for analysis
|
||||
/memory:load --tool qwen "重构支付模块API"
|
||||
```
|
||||
|
||||
**How It Works**:
|
||||
- Delegates to an AI agent for autonomous project analysis
|
||||
- Discovers relevant files and extracts task-specific keywords
|
||||
- Uses CLI tools (Gemini/Qwen) for deep analysis to save tokens
|
||||
- Returns a structured "Core Content Pack" loaded into memory
|
||||
- Provides context for subsequent agent operations
|
||||
|
||||
**When to Use**:
|
||||
- Before starting a new feature or task
|
||||
- When you need quick context without full documentation rebuild
|
||||
- For task-specific architectural or pattern discovery
|
||||
- As preparation for agent-based development workflows
|
||||
|
||||
#### Incremental Related Module Updates
|
||||
|
||||
Suitable for daily development, updating only modules affected by changes:
|
||||
|
||||
```bash
|
||||
# Update recently modified related documentation
|
||||
/update-memory-related
|
||||
/memory:update-related
|
||||
|
||||
# Specify tool for update
|
||||
/update-memory-related --tool gemini
|
||||
/memory:update-related --tool gemini
|
||||
```
|
||||
|
||||
**When to Execute**:
|
||||
|
||||
---
|
||||
|
||||
## Advanced Usage: Agent Skills
|
||||
|
||||
Agent Skills are modular, reusable capabilities that extend the AI's functionality. They are stored in the `.claude/skills/` directory and are invoked through specific trigger mechanisms.
|
||||
|
||||
### How Skills Work
|
||||
|
||||
- **Model-Invoked**: Unlike slash commands, you don't call Skills directly. The AI decides when to use a Skill based on its understanding of your goal.
|
||||
- **Contextual**: Skills provide specific instructions, scripts, and templates to the AI for specialized tasks.
|
||||
- **Trigger Mechanisms**:
|
||||
- **Conversational Trigger**: Use `-e` or `--enhance` flag in **natural conversation** to trigger the `prompt-enhancer` skill
|
||||
- **CLI Command Enhancement**: Use `--enhance` flag in **CLI commands** for prompt refinement (this is a CLI feature, not a skill trigger)
|
||||
|
||||
### Examples
|
||||
|
||||
**Conversational Trigger** (activates prompt-enhancer skill):
|
||||
```
|
||||
User: "Analyze authentication module -e"
|
||||
→ AI uses prompt-enhancer skill to expand the request
|
||||
```
|
||||
|
||||
**CLI Command Enhancement** (built-in CLI feature):
|
||||
```bash
|
||||
# The --enhance flag here is a CLI parameter, not a skill trigger
|
||||
/cli:analyze --enhance "check for security issues"
|
||||
```
|
||||
|
||||
**Important Note**: The `-e` flag works in natural conversation, but `--enhance` in CLI commands is a separate enhancement mechanism, not the skill system.
|
||||
|
||||
---
|
||||
|
||||
## Advanced Usage: UI Design Workflow
|
||||
|
||||
CCW includes a powerful, multi-phase workflow for UI design and prototyping, capable of generating complete design systems and interactive prototypes from simple descriptions or reference images.
|
||||
|
||||
### Key Commands
|
||||
|
||||
- `/workflow:ui-design:explore-auto`: An exploratory workflow that generates multiple, distinct design variations based on a prompt.
|
||||
- `/workflow:ui-design:imitate-auto`: A replication workflow that creates high-fidelity prototypes from reference URLs.
|
||||
|
||||
### Example: Generating a UI from a Prompt
|
||||
|
||||
You can generate multiple design options for a web page with a single command:
|
||||
|
||||
```bash
|
||||
# This command will generate 3 different style and layout variations for a login page.
|
||||
/workflow:ui-design:explore-auto --prompt "A modern, clean login page for a SaaS application" --targets "login" --style-variants 3 --layout-variants 3
|
||||
```
|
||||
|
||||
After the workflow completes, it provides a `compare.html` file, allowing you to visually review and select the best design combination.
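
The comparison page is a plain HTML file, so it can be opened directly in a browser; the path below is illustrative since the actual location depends on the session:

```bash
# macOS; use xdg-open on Linux or start on Windows
open .workflow/WFS-design-run/ui-design/compare.html
```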
|
||||
|
||||
---
|
||||
|
||||
## ❓ Troubleshooting
|
||||
|
||||
- **Problem: Prompt shows "No active session found"**
|
||||
|
||||
|
||||
# 🚀 Claude Code Workflow (CCW) - 快速上手指南
|
||||
|
||||
欢迎来到 Claude Code Workflow (CCW) v5.0!本指南将帮助您在 5 分钟内快速入门,体验由 AI 驱动的自动化软件开发流程,以及我们全新的精简化、零外部依赖的工作流系统。
|
||||
|
||||
**项目地址**:[catlog22/Claude-Code-Workflow](https://github.com/catlog22/Claude-Code-Workflow)
|
||||
|
||||
> **🎉 v5.0 新特性:少即是多**!我们移除了外部 MCP 依赖,简化了工作流程。CCW 现在使用标准工具(ripgrep/find)以获得更好的稳定性和性能。头脑风暴工作流专注于角色分析,使规划更加清晰。
|
||||
|
||||
---
|
||||
|
||||
|
||||
首先,请确保您已经根据 [安装指南](INSTALL_CN.md) 完成了 CCW 的安装。
|
||||
|
||||
### 第 2 步:启动一个工作流会话
|
||||
### 第 2 步:创建执行计划(会自动启动会话)
|
||||
|
||||
把“会话”想象成一个专门的项目文件夹。CCW 会在这里存放所有与您当前任务相关的文件。
|
||||
|
||||
```bash
|
||||
/workflow:session:start "我的第一个 Web 应用"
|
||||
```
|
||||
|
||||
您会看到系统创建了一个新的会话,例如 `WFS-我的第一个-web-应用`。
|
||||
|
||||
### 第 3 步:创建执行计划
|
||||
|
||||
现在,告诉 CCW 您想做什么。CCW 会分析您的需求,并自动生成一个详细的、可执行的任务计划。
|
||||
直接告诉 CCW 您想做什么。CCW 会分析您的需求,并自动生成一个详细的、可执行的任务计划。
|
||||
|
||||
```bash
|
||||
/workflow:plan "创建一个简单的 Express API,在根路径返回 Hello World"
|
||||
```
|
||||
|
||||
> **💡 提示**:`/workflow:plan` 会自动创建和启动工作流会话,无需手动执行 `/workflow:session:start`。会话会根据任务描述自动命名,例如 `WFS-创建一个简单的-express-api`。
|
||||
|
||||
这个命令会启动一个完全自动化的规划流程,包括:
|
||||
1. **上下文收集**:分析您的项目环境。
|
||||
2. **智能体分析**:AI 智能体思考最佳实现路径。
|
||||
3. **任务生成**:创建具体的任务文件(`.json` 格式)。
|
||||
|
||||
### 第 4 步:执行计划
|
||||
### 第 3 步:执行计划
|
||||
|
||||
当计划创建完毕后,您就可以命令 AI 智能体开始工作了。
|
||||
|
||||
|
||||
您会看到 CCW 的智能体(如 `@code-developer`)开始逐一执行任务。它会自动创建文件、编写代码、安装依赖。
|
||||
|
||||
### 第 5 步:查看状态
|
||||
### 第 4 步:查看状态
|
||||
|
||||
想知道进展如何?随时可以查看当前工作流的状态。
|
||||
|
||||
|
||||
## 🛠️ 常见场景示例
|
||||
|
||||
### 场景 1:开发一个新功能(如上所示)
|
||||
### 场景 1:快速功能开发
|
||||
|
||||
这是最常见的用法,遵循“启动会话 → 规划 → 执行”的模式。
|
||||
对于简单、明确的功能,使用直接的"规划 → 执行"模式:
|
||||
|
||||
```bash
|
||||
# 1. 启动会话
|
||||
/workflow:session:start "用户登录功能"
|
||||
|
||||
# 2. 创建计划
|
||||
# 创建计划(自动创建会话)
|
||||
/workflow:plan "实现基于 JWT 的用户登录和注册功能"
|
||||
|
||||
# 3. 执行
|
||||
# 执行
|
||||
/workflow:execute
|
||||
```
|
||||
|
||||
### 场景 2:进行 UI 设计
|
||||
> **💡 提示**:`/workflow:plan` 会自动创建会话。您也可以先手动启动会话:`/workflow:session:start "功能名称"`。
|
||||
|
||||
CCW 拥有强大的 UI 设计能力,可以从简单的文本描述生成复杂的 UI 原型。
|
||||
### 场景 2:UI 设计探索
|
||||
|
||||
对于以 UI 为重点的项目,在实现前先进行设计探索:**ui-design → update → 规划 → 执行**
|
||||
|
||||
```bash
|
||||
# 1. 启动一个 UI 设计工作流
|
||||
/workflow:ui-design:explore-auto --prompt "一个现代、简洁的管理后台登录页面,包含用户名、密码输入框和登录按钮"
|
||||
# 第 1 步:生成 UI 设计变体(自动创建会话)
|
||||
/workflow:ui-design:explore-auto --prompt "一个现代、简洁的管理后台登录页面"
|
||||
|
||||
# 2. 查看生成的原型
|
||||
# 命令执行完毕后,会提供一个 compare.html 文件的路径,在浏览器中打开即可预览。
|
||||
# 第 2 步:在 compare.html 中审查设计,然后更新头脑风暴工件
|
||||
/workflow:ui-design:update --session <session-id> --selected-prototypes "login-v1,login-v2"
|
||||
|
||||
# 第 3 步:使用设计引用生成实现计划
|
||||
/workflow:plan
|
||||
|
||||
# 第 4 步:执行实现
|
||||
/workflow:execute
|
||||
```
|
||||
|
||||
### 场景 3:修复一个 Bug
|
||||
> **💡 提示**:`update` 命令将选定的设计原型集成到头脑风暴工件中,确保实现遵循批准的设计。
|
||||
|
||||
CCW 可以帮助您分析并修复 Bug。
|
||||
### 场景 3:复杂功能的多智能体头脑风暴
|
||||
|
||||
对于需要深入分析的复杂功能,使用完整工作流:**头脑风暴 → 规划 → 执行**
|
||||
|
||||
```bash
|
||||
# 1. 使用 bug-index 命令分析问题
|
||||
/cli:mode:bug-index "用户登录时,即使密码错误也提示成功"
|
||||
# 第 1 步:多智能体头脑风暴(自动创建会话)
|
||||
/workflow:brainstorm:auto-parallel "设计一个支持冲突解决的实时协作文档编辑系统"
|
||||
|
||||
# 2. AI 会分析相关代码,并生成一个修复计划。然后您可以执行这个计划。
|
||||
# 可选:指定专家角色数量(默认:3,最大:9)
|
||||
/workflow:brainstorm:auto-parallel "构建可扩展的微服务平台" --count 5
|
||||
|
||||
# 第 2 步:从头脑风暴结果生成实现计划
|
||||
/workflow:plan
|
||||
|
||||
# 第 3 步:执行计划
|
||||
/workflow:execute
|
||||
```
|
||||
|
||||
**头脑风暴优势**:
|
||||
- **自动角色选择**:分析主题并选择 3-9 个相关专家角色(系统架构师、UI 设计师、产品经理等)
|
||||
- **并行执行**:多个 AI 智能体从不同视角同时分析
|
||||
- **综合规格说明**:生成整合的需求和设计文档
|
||||
|
||||
**何时使用头脑风暴**:
|
||||
- 需要多视角分析的复杂功能
|
||||
- 具有重大影响的架构决策
|
||||
- 实现前需要详尽需求分析
|
||||
|
||||
### 场景 4:质量保证 - 行动计划验证
|
||||
|
||||
规划后,验证您的实现计划的一致性和完整性:
|
||||
|
||||
```bash
|
||||
# /workflow:plan 完成后,验证任务质量
|
||||
/workflow:action-plan-verify
|
||||
|
||||
# 该命令将:
|
||||
# 1. 检查需求覆盖率(所有需求都有任务)
|
||||
# 2. 验证任务依赖关系(无循环或损坏的依赖)
|
||||
# 3. 确保综合对齐(任务符合架构决策)
|
||||
# 4. 评估任务规范质量
|
||||
# 5. 生成详细的验证报告和修复待办事项
|
||||
```
|
||||
|
||||
**验证报告包括**:
|
||||
- 需求覆盖率分析
|
||||
- 依赖关系图验证
|
||||
- 综合对齐检查
|
||||
- 任务规范质量评估
|
||||
- 优先级修复建议
|
||||
|
||||
**使用时机**:
|
||||
- 在 `/workflow:plan` 生成 IMPL_PLAN.md 和任务文件后
|
||||
- 在开始 `/workflow:execute` 之前
|
||||
- 处理具有许多依赖关系的复杂项目时
|
||||
- 当您想确保高质量的任务规范时
|
||||
|
||||
**优势**:
|
||||
- 在执行前捕获规划错误
|
||||
- 确保完整的需求覆盖
|
||||
- 验证架构一致性
|
||||
- 识别资源冲突和技能差距
|
||||
- 提供可执行的修复计划,集成 TodoWrite
|
||||
|
||||
### 场景 6:Bug 修复
|
||||
|
||||
快速 Bug 分析和修复工作流:
|
||||
|
||||
```bash
|
||||
# 分析 Bug
|
||||
/cli:mode:bug-index "密码错误时仍显示成功消息"
|
||||
|
||||
# Claude 会分析后直接根据分析结果实现修复
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 🔧 无工作流协作:独立工具使用
|
||||
|
||||
```bash
|
||||
# 重建整个项目的文档索引
|
||||
/update-memory-full
|
||||
/memory:update-full
|
||||
|
||||
# 使用特定工具进行索引
|
||||
/update-memory-full --tool gemini # 全面分析(推荐)
|
||||
/update-memory-full --tool qwen # 架构重点
|
||||
/update-memory-full --tool codex # 实现细节
|
||||
/memory:update-full --tool gemini # 全面分析(推荐)
|
||||
/memory:update-full --tool qwen # 架构重点
|
||||
/memory:update-full --tool codex # 实现细节
|
||||
```
|
||||
|
||||
**执行时机**:
|
||||
- 每周定期维护
|
||||
- 发现 AI 输出偏差时
|
||||
|
||||
#### 快速加载特定任务上下文
|
||||
|
||||
当您需要立即获取特定任务的上下文,而无需更新文档时:
|
||||
|
||||
```bash
|
||||
# 为特定任务加载上下文到内存
|
||||
/memory:load "在当前前端基础上开发用户认证功能"
|
||||
|
||||
# 使用其他 CLI 工具进行分析
|
||||
/memory:load --tool qwen "重构支付模块API"
|
||||
```
|
||||
|
||||
**工作原理**:
|
||||
- 委托 AI 智能体进行自主项目分析
|
||||
- 发现相关文件并提取任务特定关键词
|
||||
- 使用 CLI 工具(Gemini/Qwen)进行深度分析以节省令牌
|
||||
- 返回加载到内存中的结构化"核心内容包"
|
||||
- 为后续智能体操作提供上下文
|
||||
|
||||
**使用时机**:
|
||||
- 开始新功能或任务之前
|
||||
- 需要快速获取上下文而无需完整文档重建时
|
||||
- 针对特定任务的架构或模式发现
|
||||
- 作为基于智能体开发工作流的准备工作
|
||||
|
||||
#### 增量更新相关模块
|
||||
|
||||
适用于日常开发,只更新变更影响的模块:
|
||||
|
||||
```bash
|
||||
# 更新最近修改相关的文档
|
||||
/update-memory-related
|
||||
/memory:update-related
|
||||
|
||||
# 指定工具进行更新
|
||||
/update-memory-related --tool gemini
|
||||
/memory:update-related --tool gemini
|
||||
```
|
||||
|
||||
**执行时机**:
|
||||
|
||||
---
|
||||
|
||||
## 🎯 进阶用法:智能体技能 (Agent Skills)
|
||||
|
||||
智能体技能是可扩展 AI 功能的模块化、可复用能力。它们存储在 `.claude/skills/` 目录中,通过特定的触发机制调用。
|
||||
|
||||
### 技能工作原理
|
||||
|
||||
- **模型调用**:与斜杠命令不同,您不直接调用技能。AI 会根据对您目标的理解来决定何时使用技能。
|
||||
- **上下文化**:技能为 AI 提供特定的指令、脚本和模板,用于专门化任务。
|
||||
- **触发机制**:
|
||||
- **对话触发**:在**自然对话**中使用 `-e` 或 `--enhance` 标识符来触发 `prompt-enhancer` 技能
|
||||
- **CLI 命令增强**:在 **CLI 命令**中使用 `--enhance` 标识符进行提示词优化(这是 CLI 功能,不是技能触发)
|
||||
|
||||
### 使用示例
|
||||
|
||||
**对话触发** (激活 prompt-enhancer 技能):
|
||||
```
|
||||
用户: "分析认证模块 -e"
|
||||
→ AI 使用 prompt-enhancer 技能扩展请求
|
||||
```
|
||||
|
||||
**CLI 命令增强** (CLI 内置功能):
|
||||
```bash
|
||||
# 这里的 --enhance 标识符是 CLI 参数,不是技能触发器
|
||||
/cli:analyze --enhance "检查安全问题"
|
||||
```
|
||||
|
||||
**重要说明**:`-e` 标识符仅在自然对话中有效,而 CLI 命令中的 `--enhance` 是独立的增强机制,与技能系统无关。
|
||||
|
||||
---
|
||||
|
||||
## 🎨 进阶用法:UI 设计工作流
|
||||
|
||||
CCW 包含强大的多阶段 UI 设计和原型制作工作流,能够从简单的描述或参考图像生成完整的设计系统和交互式原型。
|
||||
|
||||
### 核心命令
|
||||
|
||||
- `/workflow:ui-design:explore-auto`: 探索性工作流,基于提示词生成多种不同的设计变体。
|
||||
- `/workflow:ui-design:imitate-auto`: 复制工作流,从参考 URL 创建高保真原型。
|
||||
|
||||
### 示例:从提示词生成 UI
|
||||
|
||||
您可以使用单个命令为网页生成多种设计选项:
|
||||
|
||||
```bash
|
||||
# 此命令将为登录页面生成 3 种不同的样式和布局变体
|
||||
/workflow:ui-design:explore-auto --prompt "一个现代简洁的 SaaS 应用登录页面" --targets "login" --style-variants 3 --layout-variants 3
|
||||
```
|
||||
|
||||
工作流完成后,会提供一个 `compare.html` 文件,让您可以可视化地查看和选择最佳设计组合。
|
||||
|
||||
---
|
||||
|
||||
## ❓ 常见问题排查 (Troubleshooting)
|
||||
|
||||
- **问题:提示 "No active session found" (未找到活动会话)**
|
||||
|
||||
|
||||
Interactive installation guide for Claude Code with Agent workflow coordination and distributed memory system.
|
||||
|
||||
## ⚡ Quick One-Line Installation
|
||||
|
||||
**Windows (PowerShell):**
|
||||
```powershell
|
||||
Invoke-Expression (Invoke-WebRequest -Uri "https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1" -UseBasicParsing).Content
|
||||
```
|
||||
|
||||
**What the remote installer does:**
|
||||
- ✅ Checks system requirements (PowerShell version, network connectivity)
|
||||
- ✅ Downloads latest version from GitHub (main branch)
|
||||
- ✅ Includes all new unified file output system features
|
||||
- ✅ Automatically extracts and runs local installer
|
||||
- ✅ Security confirmation and user prompts
|
||||
- ✅ Automatic cleanup of temporary files
|
||||
- ✅ Sets up .workflow/ directory structure for session management
|
||||
**Linux/macOS (Bash/Zsh):**
|
||||
```bash
|
||||
bash <(curl -fsSL https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.sh)
|
||||
```
|
||||
|
||||
**Note**: Interface is in English for cross-platform compatibility
|
||||
### Interactive Version Selection
|
||||
|
||||
After running the installation command, you'll see an interactive menu with real-time version information:
|
||||
|
||||
```
|
||||
Detecting latest release and commits...
|
||||
Latest stable: v4.6.0 (2025-10-19 04:27 UTC)
|
||||
Latest commit: cdea58f (2025-10-19 08:15 UTC)
|
||||
|
||||
====================================================
|
||||
Version Selection Menu
|
||||
====================================================
|
||||
|
||||
1) Latest Stable Release (Recommended)
|
||||
|-- Version: v4.6.0
|
||||
|-- Released: 2025-10-19 04:27 UTC
|
||||
\-- Production-ready
|
||||
|
||||
2) Latest Development Version
|
||||
|-- Branch: main
|
||||
|-- Commit: cdea58f
|
||||
|-- Updated: 2025-10-19 08:15 UTC
|
||||
|-- Cutting-edge features
|
||||
\-- May contain experimental changes
|
||||
|
||||
3) Specific Release Version
|
||||
|-- Install a specific tagged release
|
||||
\-- Recent: v4.6.0, v4.5.0, v4.4.0
|
||||
|
||||
====================================================
|
||||
|
||||
Select version to install (1-3, default: 1):
|
||||
```
|
||||
|
||||
**Version Options:**
|
||||
- **Option 1 (Recommended)**: Latest stable release with verified production quality
|
||||
- **Option 2**: Latest development version from main branch with newest features
|
||||
- **Option 3**: Specific version tag for controlled deployments
|
||||
|
||||
> 💡 **Pro Tip**: The installer automatically detects and displays the latest version numbers and release dates from GitHub. Just press Enter to select the recommended stable release.
|
||||
|
||||
## 📂 Local Installation (Install-Claude.ps1)
|
||||
|
||||
For local installation without network access, use the bundled PowerShell installer:
|
||||
|
||||
**Installation Modes:**
|
||||
```powershell
|
||||
# Clone the repository with latest features
|
||||
git clone -b main https://github.com/catlog22/Claude-Code-Workflow.git
cd Claude-Code-Workflow
|
||||
|
||||
# Windows PowerShell 5.1+ or PowerShell Core (Global installation only)
|
||||
# Interactive mode with prompts (recommended)
|
||||
.\Install-Claude.ps1
|
||||
|
||||
# Linux/macOS PowerShell Core (Global installation only)
|
||||
pwsh ./Install-Claude.ps1
|
||||
# Quick install with automatic backup
|
||||
.\Install-Claude.ps1 -Force -BackupAll
|
||||
|
||||
# Non-interactive install
|
||||
.\Install-Claude.ps1 -NonInteractive -Force
|
||||
```
|
||||
|
||||
**Note**: The feature branch contains all the latest unified file output system enhancements and should be used for new installations.
|
||||
**Installation Options:**
|
||||
|
||||
## Installation Options
|
||||
| Mode | Description | Installs To |
|
||||
|------|-------------|-------------|
|
||||
| **Global** | System-wide installation (default) | `~/.claude/`, `~/.codex/`, `~/.gemini/` |
|
||||
| **Path** | Custom directory + global hybrid | Local: `agents/`, `commands/`<br>Global: `workflows/`, `scripts/` |
|
||||
|
||||
**Backup Behavior:**
|
||||
- **Default**: Automatic backup enabled (`-BackupAll`)
|
||||
- **Disable**: Use `-NoBackup` flag (⚠️ overwrites without backup)
|
||||
- **Backup location**: `claude-backup-{timestamp}/` in installation directory
|
||||
|
||||
**⚠️ Important Warnings:**
|
||||
- `-Force -BackupAll`: Silent file overwrite (with backup)
|
||||
- `-NoBackup -Force`: Permanent file overwrite (no recovery)
|
||||
- Global mode modifies user profile directories
|
||||
|
||||
### ✅ Verify Installation
|
||||
After installation, open **Claude Code** and check if the workflow commands are available by running:
|
||||
```bash
|
||||
/workflow:session:list
|
||||
```
|
||||
|
||||
This command should be recognized in Claude Code's interface. If you see the workflow slash commands (e.g., `/workflow:*`, `/cli:*`), the installation was successful.
|
||||
|
||||
### Global Installation (Default and Only Mode)
|
||||
Install to user home directory (`~/.claude`):
|
||||
```powershell
|
||||
# All platforms - Global installation (default)
|
||||
.\Install-Claude.ps1
|
||||
|
||||
# With automatic backup (default since v1.1.0)
|
||||
.\Install-Claude.ps1 -BackupAll
|
||||
|
||||
# Disable automatic backup (not recommended)
|
||||
.\Install-Claude.ps1 -NoBackup
|
||||
|
||||
# Non-interactive mode for automation
|
||||
.\Install-Claude.ps1 -Force -NonInteractive
|
||||
```
|
||||
|
||||
**Global installation structure:**
|
||||
```
|
||||
~/.claude/
|
||||
├── agents/
|
||||
├── commands/
|
||||
├── output-styles/
|
||||
├── settings.local.json
|
||||
└── CLAUDE.md
|
||||
```
|
||||
|
||||
**Note**: Starting from v1.2.0, only global installation is supported. Local directory and custom path installations have been removed to simplify the installation process and ensure consistent behavior across all platforms.
|
||||
|
||||
## Advanced Options
|
||||
|
||||
### 🛡️ Enhanced Backup Features (v1.1.0+)
|
||||
|
||||
The installer now includes **automatic backup as the default behavior** to protect your existing files:
|
||||
|
||||
**Backup Modes:**
|
||||
- **Automatic Backup** (default since v1.1.0): Automatically backs up all existing files without prompts
|
||||
- **Explicit Backup** (`-BackupAll`): Same as default behavior, explicitly specified for compatibility
|
||||
- **No Backup** (`-NoBackup`): Disable backup functionality (not recommended)
|
||||
|
||||
**Backup Organization:**
|
||||
- Creates timestamped backup folders (e.g., `claude-backup-20240117-143022`)
|
||||
- Preserves directory structure within backup folders
|
||||
- Maintains file relationships and paths
|
||||
|
||||
### Force Installation
|
||||
Overwrite existing files:
|
||||
```powershell
|
||||
.\Install-Claude.ps1 -Force
|
||||
```
|
||||
|
||||
### One-Click Backup
|
||||
Automatically backup all existing files without confirmations:
|
||||
```powershell
|
||||
.\Install-Claude.ps1 -BackupAll
|
||||
```
|
||||
|
||||
### Skip Backups
|
||||
Don't create backup files:
|
||||
```powershell
|
||||
.\Install-Claude.ps1 -NoBackup
|
||||
```
|
||||
|
||||
### Uninstall
|
||||
Remove installation:
|
||||
```powershell
|
||||
.\Install-Claude.ps1 -Uninstall -Force
|
||||
```
|
||||
> **📝 Installation Notes:**
|
||||
> - The installer will automatically install/update `.codex/` and `.gemini/` directories
|
||||
> - **Global mode**: Installs to `~/.codex` and `~/.gemini`
|
||||
> - **Path mode**: Installs to your specified directory (e.g., `project/.codex`, `project/.gemini`)
|
||||
> - **Backup**: Existing files are backed up by default to `claude-backup-{timestamp}/`
|
||||
> - **Safety**: Use interactive mode for first-time installation to review changes
|
||||
|
||||
## Platform Requirements
|
||||
|
||||
### PowerShell (Recommended)
|
||||
- **Windows**: PowerShell 5.1+ or PowerShell Core 6+
|
||||
- **Linux**: PowerShell Core 6+
|
||||
- **macOS**: PowerShell Core 6+
|
||||
- **Linux/macOS**: Bash/Zsh (for installer) or PowerShell Core 6+ (for manual Install-Claude.ps1)
|
||||
|
||||
Install PowerShell Core:
|
||||
**Install PowerShell Core (if needed):**
|
||||
- **Ubuntu/Debian**: `sudo apt install powershell`
|
||||
- **CentOS/RHEL**: `sudo dnf install powershell`
|
||||
- **macOS**: `brew install powershell`
|
||||
- **Or download**: https://github.com/PowerShell/PowerShell
|
||||
- **Download**: https://github.com/PowerShell/PowerShell
|
||||
|
||||
## ⚙️ Configuration
|
||||
|
||||
## Complete Installation Examples
|
||||
### Tool Control System
|
||||
|
||||
### ⚡ Super Quick (One-Liner)
|
||||
```powershell
|
||||
# Complete installation in one command
|
||||
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-remote.ps1) -Global
|
||||
CCW uses a **configuration-based tool control system** that makes external CLI tools **optional** rather than required. This allows you to:
|
||||
|
||||
# Done! 🎉
|
||||
# Start using Claude Code with Agent workflows!
|
||||
- ✅ **Start with Claude-only mode** - Work immediately without installing additional tools
|
||||
- ✅ **Progressive enhancement** - Add external tools selectively as needed
|
||||
- ✅ **Graceful degradation** - Automatic fallback when tools are unavailable
|
||||
- ✅ **Flexible configuration** - Control tool availability per project
|
||||
|
||||
**Configuration File**: `~/.claude/workflows/tool-control.yaml`
|
||||
|
||||
```yaml
|
||||
tools:
|
||||
gemini:
|
||||
enabled: false # Optional: AI analysis & documentation
|
||||
qwen:
|
||||
enabled: true # Optional: AI architecture & code generation
|
||||
codex:
|
||||
enabled: true # Optional: AI development & implementation
|
||||
```
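
A quick way to see which external tools are currently enabled is to search the configuration file directly (assuming ripgrep is installed):

```bash
# Show the tools marked as enabled in the tool-control configuration
rg -B2 "enabled: true" ~/.claude/workflows/tool-control.yaml
```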
|
||||
|
||||
### 📂 Manual Installation Method
|
||||
```powershell
|
||||
# Manual installation steps:
|
||||
# 1. Install PowerShell Core (if needed)
|
||||
# Windows: Download from GitHub
|
||||
# Linux: sudo apt install powershell
|
||||
# macOS: brew install powershell
|
||||
**Behavior**:
|
||||
- **When disabled**: CCW automatically falls back to other enabled tools or Claude's native capabilities
|
||||
- **When enabled**: Uses specialized tools for their specific strengths
|
||||
- **Default**: All tools disabled - Claude-only mode works out of the box
|
||||
|
||||
# 2. Download Claude Code Workflow System
|
||||
git clone https://github.com/catlog22/Claude-Code-Workflow.git
cd Claude-Code-Workflow
|
||||
### Optional CLI Tools *(Enhanced Capabilities)*
|
||||
|
||||
# 3. Install globally (interactive)
|
||||
.\Install-Claude.ps1 -Global
|
||||
While CCW works with Claude alone, installing these tools provides enhanced analysis and extended context:
|
||||
|
||||
# 4. Start using Claude Code with Agent workflows!
|
||||
# Use /workflow commands and memory system for development
|
||||
```
|
||||
#### System Utilities
|
||||
|
||||
## Verification
|
||||
| Tool | Purpose | Installation |
|
||||
|------|---------|--------------|
|
||||
| **ripgrep (rg)** | Fast code search | `brew install ripgrep` (macOS), `apt install ripgrep` (Ubuntu), `winget install ripgrep` (Windows) |
|
||||
| **jq** | JSON processing | `brew install jq` (macOS), `apt install jq` (Ubuntu), `winget install jq` (Windows) |
|
||||
|
||||
After installation, verify:
|
||||
#### External AI Tools
|
||||
|
||||
1. **Check installation:**
|
||||
```bash
|
||||
# Global
|
||||
ls ~/.claude
|
||||
|
||||
# Local
|
||||
ls ./.claude
|
||||
```
|
||||
Configure these tools in `~/.claude/workflows/tool-control.yaml` after installation:
|
||||
|
||||
2. **Test Claude Code:**
|
||||
- Open Claude Code in your project
|
||||
- Check that global `.claude` directory is recognized
|
||||
- Verify workflow commands and DMS commands are available
|
||||
- Test `/workflow` commands for agent coordination
|
||||
- Test `/workflow version` to check version information
|
||||
| Tool | Purpose | Installation |
|
||||
|------|---------|--------------|
|
||||
| **Gemini CLI** | AI analysis & documentation | Follow [official docs](https://ai.google.dev) - Free quota, extended context |
|
||||
| **Codex CLI** | AI development & implementation | Follow [official docs](https://github.com/openai/codex) - Autonomous development |
|
||||
| **Qwen Code** | AI architecture & code generation | Follow [official docs](https://github.com/QwenLM/qwen-code) - Large context window |
|
||||
|
||||
### Recommended: MCP Tools *(Enhanced Analysis)*
|
||||
|
||||
MCP (Model Context Protocol) tools provide advanced codebase analysis. **Recommended installation** - While CCW has fallback mechanisms, not installing MCP tools may lead to unexpected behavior or degraded performance in some workflows.
|
||||
|
||||
| MCP Server | Purpose | Installation Guide |
|
||||
|------------|---------|-------------------|
|
||||
| **Exa MCP** | External API patterns & best practices | [Install Guide](https://smithery.ai/server/exa) |
|
||||
| **Code Index MCP** | Advanced internal code search | [Install Guide](https://github.com/johnhuang316/code-index-mcp) |
|
||||
| **Chrome DevTools MCP** | ⚠️ **Required for UI workflows** - URL mode design extraction | [Install Guide](https://github.com/ChromeDevTools/chrome-devtools-mcp) |
|
||||
|
||||
⚠️ **Note**: Some workflows expect MCP tools to be available. Without them, you may experience:
|
||||
- Slower code analysis and search operations
|
||||
- Reduced context quality in some scenarios
|
||||
- Fallback to less efficient traditional tools
|
||||
- Potential unexpected behavior in advanced workflows
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### PowerShell Execution Policy
|
||||
### PowerShell Execution Policy (Windows)
|
||||
If you get execution policy errors:
|
||||
```powershell
|
||||
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
|
||||
```
|
||||
|
||||
### Workflow Commands Not Working
|
||||
- Ensure `.claude` directory exists in your project
|
||||
- Verify workflow.md and agent files are properly installed
|
||||
- Check that Claude Code recognizes the configuration
|
||||
- Verify installation: `ls ~/.claude` (should show agents/, commands/, workflows/)
|
||||
- Restart Claude Code after installation
|
||||
- Check `/workflow:session:list` command is recognized
|
||||
|
||||
### Permission Errors
|
||||
- **Windows**: Run as Administrator
|
||||
- **Linux/macOS**: Use `sudo` if needed for global PowerShell installation
|
||||
- **Windows**: Run PowerShell as Administrator
|
||||
- **Linux/macOS**: May need `sudo` for global PowerShell installation
|
||||
|
||||
## Support
|
||||
|
||||
- **Issues**: [GitHub Issues](https://github.com/catlog22/Claude-Code-Workflow/issues)
|
||||
- **Getting Started**: [Quick Start Guide](GETTING_STARTED.md)
|
||||
- **Documentation**: [Main README](README.md)
|
||||
|
||||
Claude Code Agent 工作流协调和分布式内存系统的交互式安装指南。
|
||||
|
||||
> **版本 5.0:精简化安装** - 移除了外部 MCP 依赖,安装更简单、更稳定。使用标准工具(ripgrep/find)提供更好的性能和兼容性。
|
||||
|
||||
## ⚡ 一键远程安装(推荐)
|
||||
|
||||
### 所有平台 - 远程 PowerShell 安装
|
||||
```powershell
|
||||
# 从功能分支进行交互式远程安装(最新版本)
|
||||
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1)
|
||||
|
||||
# 包含统一文件输出系统的全局安装
|
||||
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1) -Global
|
||||
|
||||
# 强制覆盖(非交互式)- 包含所有新的工作流文件生成功能
|
||||
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-remote.ps1) -Force -NonInteractive
|
||||
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1) -Force -NonInteractive
|
||||
|
||||
# 一键备份所有现有文件(无需确认)
|
||||
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-remote.ps1) -BackupAll
|
||||
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1) -BackupAll
|
||||
```
|
||||
|
||||
**What the remote installer does:**
@@ -37,8 +39,7 @@ iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-r
### All Platforms (PowerShell)
```powershell
# Clone the repository with the latest features
git clone -b main https://github.com/catlog22/Claude-Code-Workflow.git
cd Claude-Code-Workflow

# Windows PowerShell 5.1+ or PowerShell Core (global installation only)
.\Install-Claude.ps1
@@ -57,19 +58,19 @@ pwsh ./Install-Claude.ps1

```powershell
# Global installation
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1) -Global

# Install to a specific directory
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1) -Directory "C:\MyProject"

# Force overwrite without prompting
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1) -Force -NonInteractive

# Install from a specific branch
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1) -Branch "dev"

# Skip backup (faster)
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1) -NoBackup
```
### Local Installation Options
@@ -140,7 +141,7 @@ iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-r
### ⚡ Ultra-Fast (One Command)
```powershell
# One command completes the installation
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1) -Global

# Done! 🎉
# Start using the Claude Code Agent workflow!
@@ -165,6 +166,55 @@ cd Dmsflow
# Develop with the /workflow commands and the memory system
```
## Prerequisites and Recommended Tools

To unlock the full potential of CCW, installing these additional tools is strongly recommended.

### System Tools (Recommended)

These tools enhance file search and data processing capabilities.

- **`ripgrep` (rg)**: A high-speed code search tool.
  - **Windows**: `winget install BurntSushi.ripgrep.MSVC` or `choco install ripgrep`
  - **macOS**: `brew install ripgrep`
  - **Linux**: `sudo apt-get install ripgrep` (Debian/Ubuntu) or `sudo dnf install ripgrep` (Fedora)
  - **Verify**: `rg --version`

- **`jq`**: A command-line JSON processor.
  - **Windows**: `winget install jqlang.jq` or `choco install jq`
  - **macOS**: `brew install jq`
  - **Linux**: `sudo apt-get install jq` (Debian/Ubuntu) or `sudo dnf install jq` (Fedora)
  - **Verify**: `jq --version`

### Model Context Protocol (MCP) Tools (Optional)

MCP tools provide advanced context retrieval from external sources, enhancing the AI's understanding. For installation, refer to each tool's official documentation.

| Tool | Purpose | Official Source |
|---|---|---|
| **Exa MCP** | Searches code and the web. | [exa-labs/exa-mcp-server](https://github.com/exa-labs/exa-mcp-server) |
| **Code Index MCP** | Indexes and searches the local codebase. | [johnhuang316/code-index-mcp](https://github.com/johnhuang316/code-index-mcp) |
| **Chrome DevTools MCP** | Interacts with web pages to extract layout and style information. | [ChromeDevTools/chrome-devtools-mcp](https://github.com/ChromeDevTools/chrome-devtools-mcp) |

- **Prerequisites**: Node.js and npm (or a compatible JavaScript runtime).
- **Verify**: After installation, check that the server can start (see each MCP server's documentation for details).

### Optional AI CLI Tools

CCW uses wrapper scripts to interact with the underlying AI models. For these wrappers to work, the corresponding CLI tools must be installed and configured on your system; a quick PATH check is shown below.

- **Gemini CLI**: For analysis, documentation, and exploration.
  - **Purpose**: Provides access to Google Gemini models.
  - **Installation**: Follow the official Google AI documentation to install and configure the Gemini CLI. Make sure the `gemini` command is available on your system PATH.

- **Codex CLI**: For autonomous development and implementation.
  - **Purpose**: Provides access to OpenAI Codex models for code generation and modification.
  - **Installation**: Follow the installation instructions for the specific Codex CLI tool used in your environment. Make sure the `codex` command is available on your system PATH.

- **Qwen Code**: For architecture and code generation.
  - **Purpose**: Provides access to Alibaba's Qwen (Tongyi Qianwen) models.
  - **Installation**: Follow the official Qwen documentation to install and configure its CLI tool. Make sure the `qwen` command is available on your system PATH.
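A quick way to confirm that all three wrappers can find their underlying CLIs is to check PATH resolution; the command names `gemini`, `codex`, and `qwen` are the ones assumed by the notes above:

```powershell
# Verify the AI CLI prerequisites are resolvable from PATH
foreach ($cli in 'gemini', 'codex', 'qwen') {
    if (Get-Command $cli -ErrorAction SilentlyContinue) {
        Write-Host "$cli found"
    } else {
        Write-Warning "$cli not found on PATH"
    }
}
```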
## Verification

After installation, verify:
@@ -26,6 +26,9 @@
.PARAMETER NoBackup
Disable automatic backup functionality

.PARAMETER Uninstall
Uninstall Claude Code Workflow System based on installation manifest

.EXAMPLE
.\Install-Claude.ps1
Interactive installation with mode selection
@@ -45,6 +48,14 @@
.EXAMPLE
.\Install-Claude.ps1 -NoBackup
Installation without any backup (overwrite existing files)

.EXAMPLE
.\Install-Claude.ps1 -Uninstall
Uninstall Claude Code Workflow System

.EXAMPLE
.\Install-Claude.ps1 -Uninstall -Force
Uninstall without confirmation prompts
#>

param(
@@ -61,6 +72,8 @@ param(

[switch]$NoBackup,

[switch]$Uninstall,

[string]$SourceVersion = "",

[string]$SourceBranch = "",
@@ -98,6 +111,9 @@ $ColorWarning = "Yellow"
$ColorError = "Red"
$ColorPrompt = "Magenta"

# Global manifest directory location
$script:ManifestDir = Join-Path ([Environment]::GetFolderPath("UserProfile")) ".claude-manifests"

function Write-ColorOutput {
param(
[string]$Message,
@@ -704,6 +720,427 @@ function Merge-DirectoryContents {
|
||||
return $true
|
||||
}
|
||||
|
||||
# ============================================================================
|
||||
# INSTALLATION MANIFEST MANAGEMENT
|
||||
# ============================================================================
|
||||
|
||||
function New-InstallManifest {
|
||||
<#
|
||||
.SYNOPSIS
|
||||
Create a new installation manifest to track installed files
|
||||
#>
|
||||
param(
|
||||
[string]$InstallationMode,
|
||||
[string]$InstallationPath
|
||||
)
|
||||
|
||||
# Create manifest directory if it doesn't exist
|
||||
if (-not (Test-Path $script:ManifestDir)) {
|
||||
New-Item -ItemType Directory -Path $script:ManifestDir -Force | Out-Null
|
||||
}
|
||||
|
||||
# Generate unique manifest ID based on timestamp and mode
|
||||
$timestamp = Get-Date -Format "yyyyMMdd-HHmmss"
|
||||
$manifestId = "install-$InstallationMode-$timestamp"
|
||||
|
||||
$manifest = @{
|
||||
manifest_id = $manifestId
|
||||
version = "1.0"
|
||||
installation_mode = $InstallationMode
|
||||
installation_path = $InstallationPath
|
||||
installation_date = (Get-Date).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ssZ")
|
||||
installer_version = $ScriptVersion
|
||||
files = @()
|
||||
directories = @()
|
||||
}
|
||||
|
||||
return $manifest
|
||||
}
|
||||
|
||||
function Add-ManifestEntry {
|
||||
<#
|
||||
.SYNOPSIS
|
||||
Add a file or directory entry to the manifest
|
||||
#>
|
||||
param(
|
||||
[Parameter(Mandatory=$true)]
|
||||
[hashtable]$Manifest,
|
||||
|
||||
[Parameter(Mandatory=$true)]
|
||||
[string]$Path,
|
||||
|
||||
[Parameter(Mandatory=$true)]
|
||||
[ValidateSet("File", "Directory")]
|
||||
[string]$Type
|
||||
)
|
||||
|
||||
$entry = @{
|
||||
path = $Path
|
||||
type = $Type
|
||||
timestamp = (Get-Date).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ssZ")
|
||||
}
|
||||
|
||||
if ($Type -eq "File") {
|
||||
$Manifest.files += $entry
|
||||
} else {
|
||||
$Manifest.directories += $entry
|
||||
}
|
||||
}
|
||||
|
||||
function Save-InstallManifest {
|
||||
<#
|
||||
.SYNOPSIS
|
||||
Save the installation manifest to disk
|
||||
#>
|
||||
param(
|
||||
[Parameter(Mandatory=$true)]
|
||||
[hashtable]$Manifest
|
||||
)
|
||||
|
||||
try {
|
||||
# Use manifest ID to create unique file name
|
||||
$manifestFileName = "$($Manifest.manifest_id).json"
|
||||
$manifestPath = Join-Path $script:ManifestDir $manifestFileName
|
||||
|
||||
$Manifest | ConvertTo-Json -Depth 10 | Out-File -FilePath $manifestPath -Encoding utf8 -Force
|
||||
Write-ColorOutput "Installation manifest saved: $manifestPath" $ColorSuccess
|
||||
return $true
|
||||
} catch {
|
||||
Write-ColorOutput "WARNING: Failed to save installation manifest: $($_.Exception.Message)" $ColorWarning
|
||||
return $false
|
||||
}
|
||||
}
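# --- Illustrative usage sketch (not part of the installer source) -----------
# The manifest helpers above are meant to be chained like this; the concrete
# call sites are in Install-Global and Install-Path below, and the paths here
# are placeholder examples only:
#   $manifest = New-InstallManifest -InstallationMode "Global" -InstallationPath $env:USERPROFILE
#   Add-ManifestEntry -Manifest $manifest -Path (Join-Path $env:USERPROFILE ".claude") -Type "Directory"
#   Add-ManifestEntry -Manifest $manifest -Path (Join-Path $env:USERPROFILE ".claude\CLAUDE.md") -Type "File"
#   Save-InstallManifest -Manifest $manifest   # writes ~/.claude-manifests/install-Global-<timestamp>.json
# -----------------------------------------------------------------------------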
|
||||
|
||||
function Migrate-LegacyManifest {
|
||||
<#
|
||||
.SYNOPSIS
|
||||
Migrate old single manifest file to new multi-manifest system
|
||||
#>
|
||||
|
||||
$legacyManifestPath = Join-Path ([Environment]::GetFolderPath("UserProfile")) ".claude-install-manifest.json"
|
||||
|
||||
if (-not (Test-Path $legacyManifestPath)) {
|
||||
return
|
||||
}
|
||||
|
||||
try {
|
||||
Write-ColorOutput "Found legacy manifest file, migrating to new system..." $ColorInfo
|
||||
|
||||
# Create manifest directory if it doesn't exist
|
||||
if (-not (Test-Path $script:ManifestDir)) {
|
||||
New-Item -ItemType Directory -Path $script:ManifestDir -Force | Out-Null
|
||||
}
|
||||
|
||||
# Read legacy manifest
|
||||
$legacyJson = Get-Content -Path $legacyManifestPath -Raw -Encoding utf8
|
||||
$legacy = $legacyJson | ConvertFrom-Json
|
||||
|
||||
# Generate new manifest ID
|
||||
$timestamp = Get-Date -Format "yyyyMMdd-HHmmss"
|
||||
$mode = if ($legacy.installation_mode) { $legacy.installation_mode } else { "Global" }
|
||||
$manifestId = "install-$mode-$timestamp-migrated"
|
||||
|
||||
# Create new manifest with all fields
|
||||
$newManifest = @{
|
||||
manifest_id = $manifestId
|
||||
version = if ($legacy.version) { $legacy.version } else { "1.0" }
|
||||
installation_mode = $mode
|
||||
installation_path = if ($legacy.installation_path) { $legacy.installation_path } else { [Environment]::GetFolderPath("UserProfile") }
|
||||
installation_date = if ($legacy.installation_date) { $legacy.installation_date } else { (Get-Date).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ssZ") }
|
||||
installer_version = if ($legacy.installer_version) { $legacy.installer_version } else { "unknown" }
|
||||
files = if ($legacy.files) { @($legacy.files) } else { @() }
|
||||
directories = if ($legacy.directories) { @($legacy.directories) } else { @() }
|
||||
}
|
||||
|
||||
# Save to new location
|
||||
$newManifestPath = Join-Path $script:ManifestDir "$manifestId.json"
|
||||
$newManifest | ConvertTo-Json -Depth 10 | Out-File -FilePath $newManifestPath -Encoding utf8 -Force
|
||||
|
||||
# Rename old manifest (don't delete, keep as backup)
|
||||
$backupPath = "$legacyManifestPath.migrated"
|
||||
Move-Item -Path $legacyManifestPath -Destination $backupPath -Force
|
||||
|
||||
Write-ColorOutput "Legacy manifest migrated successfully" $ColorSuccess
|
||||
Write-ColorOutput "Old manifest backed up to: $backupPath" $ColorInfo
|
||||
} catch {
|
||||
Write-ColorOutput "WARNING: Failed to migrate legacy manifest: $($_.Exception.Message)" $ColorWarning
|
||||
}
|
||||
}
|
||||
|
||||
function Get-AllInstallManifests {
|
||||
<#
|
||||
.SYNOPSIS
|
||||
Get all installation manifests
|
||||
#>
|
||||
|
||||
# Migrate legacy manifest if exists
|
||||
Migrate-LegacyManifest
|
||||
|
||||
if (-not (Test-Path $script:ManifestDir)) {
|
||||
return @()
|
||||
}
|
||||
|
||||
try {
|
||||
$manifestFiles = Get-ChildItem -Path $script:ManifestDir -Filter "install-*.json" -File | Sort-Object LastWriteTime -Descending
|
||||
$manifests = [System.Collections.ArrayList]::new()
|
||||
|
||||
foreach ($file in $manifestFiles) {
|
||||
try {
|
||||
$manifestJson = Get-Content -Path $file.FullName -Raw -Encoding utf8
|
||||
$manifest = $manifestJson | ConvertFrom-Json
|
||||
|
||||
# Convert to hashtable for easier manipulation
|
||||
# Handle both old and new manifest formats
|
||||
|
||||
# Safely get array counts
|
||||
$filesCount = 0
|
||||
$dirsCount = 0
|
||||
|
||||
if ($manifest.files) {
|
||||
if ($manifest.files -is [System.Array]) {
|
||||
$filesCount = $manifest.files.Count
|
||||
} else {
|
||||
$filesCount = 1
|
||||
}
|
||||
}
|
||||
|
||||
if ($manifest.directories) {
|
||||
if ($manifest.directories -is [System.Array]) {
|
||||
$dirsCount = $manifest.directories.Count
|
||||
} else {
|
||||
$dirsCount = 1
|
||||
}
|
||||
}
|
||||
|
||||
$manifestHash = @{
|
||||
manifest_id = if ($manifest.manifest_id) { $manifest.manifest_id } else { $file.BaseName }
|
||||
manifest_file = $file.FullName
|
||||
version = if ($manifest.version) { $manifest.version } else { "1.0" }
|
||||
installation_mode = if ($manifest.installation_mode) { $manifest.installation_mode } else { "Unknown" }
|
||||
installation_path = if ($manifest.installation_path) { $manifest.installation_path } else { "" }
|
||||
installation_date = if ($manifest.installation_date) { $manifest.installation_date } else { $file.LastWriteTime.ToString("yyyy-MM-ddTHH:mm:ssZ") }
|
||||
installer_version = if ($manifest.installer_version) { $manifest.installer_version } else { "unknown" }
|
||||
files = if ($manifest.files) { @($manifest.files) } else { @() }
|
||||
directories = if ($manifest.directories) { @($manifest.directories) } else { @() }
|
||||
files_count = $filesCount
|
||||
directories_count = $dirsCount
|
||||
}
|
||||
|
||||
$null = $manifests.Add($manifestHash)
|
||||
} catch {
|
||||
Write-ColorOutput "WARNING: Failed to load manifest $($file.Name): $($_.Exception.Message)" $ColorWarning
|
||||
}
|
||||
}
|
||||
|
||||
return ,$manifests.ToArray()
|
||||
} catch {
|
||||
Write-ColorOutput "ERROR: Failed to list installation manifests: $($_.Exception.Message)" $ColorError
|
||||
return @()
|
||||
}
|
||||
}
|
||||
|
||||
# ============================================================================
|
||||
# UNINSTALLATION FUNCTIONS
|
||||
# ============================================================================
|
||||
|
||||
function Uninstall-ClaudeWorkflow {
|
||||
<#
|
||||
.SYNOPSIS
|
||||
Uninstall Claude Code Workflow based on installation manifest
|
||||
#>
|
||||
|
||||
Write-ColorOutput "Claude Code Workflow System Uninstaller" $ColorInfo
|
||||
Write-ColorOutput "========================================" $ColorInfo
|
||||
Write-Host ""
|
||||
|
||||
# Load all manifests
|
||||
$manifests = Get-AllInstallManifests
|
||||
|
||||
if (-not $manifests -or $manifests.Count -eq 0) {
|
||||
Write-ColorOutput "ERROR: No installation manifests found in: $script:ManifestDir" $ColorError
|
||||
Write-ColorOutput "Cannot proceed with uninstallation without manifest." $ColorError
|
||||
Write-Host ""
|
||||
Write-ColorOutput "Manual uninstallation instructions:" $ColorInfo
|
||||
Write-Host "For Global installation, remove these directories:"
|
||||
Write-Host " - ~/.claude/agents"
|
||||
Write-Host " - ~/.claude/commands"
|
||||
Write-Host " - ~/.claude/output-styles"
|
||||
Write-Host " - ~/.claude/workflows"
|
||||
Write-Host " - ~/.claude/scripts"
|
||||
Write-Host " - ~/.claude/prompt-templates"
|
||||
Write-Host " - ~/.claude/python_script"
|
||||
Write-Host " - ~/.claude/skills"
|
||||
Write-Host " - ~/.claude/version.json"
|
||||
Write-Host " - ~/.claude/CLAUDE.md"
|
||||
Write-Host " - ~/.codex"
|
||||
Write-Host " - ~/.gemini"
|
||||
Write-Host " - ~/.qwen"
|
||||
return $false
|
||||
}
|
||||
|
||||
# Display available installations
|
||||
Write-ColorOutput "Found $($manifests.Count) installation(s):" $ColorInfo
|
||||
Write-Host ""
|
||||
|
||||
# If only one manifest, use it directly
|
||||
$selectedManifest = $null
|
||||
if ($manifests.Count -eq 1) {
|
||||
$selectedManifest = $manifests[0]
|
||||
Write-ColorOutput "Only one installation found, will uninstall:" $ColorInfo
|
||||
} else {
|
||||
# Multiple manifests - let user choose
|
||||
$options = @()
|
||||
for ($i = 0; $i -lt $manifests.Count; $i++) {
|
||||
$m = $manifests[$i]
|
||||
|
||||
# Safely extract date string
|
||||
$dateStr = "unknown date"
|
||||
if ($m.installation_date) {
|
||||
try {
|
||||
if ($m.installation_date.Length -ge 10) {
|
||||
$dateStr = $m.installation_date.Substring(0, 10)
|
||||
} else {
|
||||
$dateStr = $m.installation_date
|
||||
}
|
||||
} catch {
|
||||
$dateStr = "unknown date"
|
||||
}
|
||||
}
|
||||
|
||||
# Build option string with safe counts
|
||||
$filesCount = if ($m.files_count) { $m.files_count } else { 0 }
|
||||
$dirsCount = if ($m.directories_count) { $m.directories_count } else { 0 }
|
||||
$pathInfo = if ($m.installation_path) { " ($($m.installation_path))" } else { "" }
|
||||
$option = "$($i + 1). [$($m.installation_mode)] $dateStr - $filesCount files, $dirsCount dirs$pathInfo"
|
||||
$options += $option
|
||||
}
|
||||
$options += "Cancel - Don't uninstall anything"
|
||||
|
||||
Write-Host ""
|
||||
$selection = Get-UserChoiceWithArrows -Prompt "Select installation to uninstall:" -Options $options -DefaultIndex 0
|
||||
|
||||
if ($selection -like "Cancel*") {
|
||||
Write-ColorOutput "Uninstallation cancelled." $ColorWarning
|
||||
return $false
|
||||
}
|
||||
|
||||
# Parse selection to get index
|
||||
$selectedIndex = [int]($selection.Split('.')[0]) - 1
|
||||
$selectedManifest = $manifests[$selectedIndex]
|
||||
}
|
||||
|
||||
# Display selected installation info
|
||||
Write-Host ""
|
||||
Write-ColorOutput "Installation Information:" $ColorInfo
|
||||
Write-Host " Manifest ID: $($selectedManifest.manifest_id)"
|
||||
Write-Host " Mode: $($selectedManifest.installation_mode)"
|
||||
Write-Host " Path: $($selectedManifest.installation_path)"
|
||||
Write-Host " Date: $($selectedManifest.installation_date)"
|
||||
Write-Host " Installer Version: $($selectedManifest.installer_version)"
|
||||
|
||||
# Use pre-calculated counts
|
||||
$filesCount = if ($selectedManifest.files_count) { $selectedManifest.files_count } else { 0 }
|
||||
$dirsCount = if ($selectedManifest.directories_count) { $selectedManifest.directories_count } else { 0 }
|
||||
Write-Host " Files tracked: $filesCount"
|
||||
Write-Host " Directories tracked: $dirsCount"
|
||||
Write-Host ""
|
||||
|
||||
# Confirm uninstallation
|
||||
if (-not (Confirm-Action "Do you want to uninstall this installation?" -DefaultYes:$false)) {
|
||||
Write-ColorOutput "Uninstallation cancelled." $ColorWarning
|
||||
return $false
|
||||
}
|
||||
|
||||
# Use the selected manifest for uninstallation
|
||||
$manifest = $selectedManifest
|
||||
|
||||
$removedFiles = 0
|
||||
$removedDirs = 0
|
||||
$failedItems = @()
|
||||
|
||||
# Remove files first
|
||||
Write-ColorOutput "Removing installed files..." $ColorInfo
|
||||
foreach ($fileEntry in $manifest.files) {
|
||||
$filePath = $fileEntry.path
|
||||
|
||||
if (Test-Path $filePath) {
|
||||
try {
|
||||
Remove-Item -Path $filePath -Force -ErrorAction Stop
|
||||
Write-ColorOutput " Removed file: $filePath" $ColorSuccess
|
||||
$removedFiles++
|
||||
} catch {
|
||||
Write-ColorOutput " WARNING: Failed to remove file: $filePath" $ColorWarning
|
||||
$failedItems += $filePath
|
||||
}
|
||||
} else {
|
||||
Write-ColorOutput " File not found (already removed): $filePath" $ColorInfo
|
||||
}
|
||||
}
|
||||
|
||||
# Remove directories (in reverse order to handle nested directories)
|
||||
Write-ColorOutput "Removing installed directories..." $ColorInfo
|
||||
$sortedDirs = $manifest.directories | Sort-Object { $_.path.Length } -Descending
|
||||
|
||||
foreach ($dirEntry in $sortedDirs) {
|
||||
$dirPath = $dirEntry.path
|
||||
|
||||
if (Test-Path $dirPath) {
|
||||
try {
|
||||
# Check if directory is empty or only contains files we installed
|
||||
$dirContents = Get-ChildItem -Path $dirPath -Recurse -Force -ErrorAction SilentlyContinue
|
||||
|
||||
if (-not $dirContents -or ($dirContents | Measure-Object).Count -eq 0) {
|
||||
Remove-Item -Path $dirPath -Recurse -Force -ErrorAction Stop
|
||||
Write-ColorOutput " Removed directory: $dirPath" $ColorSuccess
|
||||
$removedDirs++
|
||||
} else {
|
||||
Write-ColorOutput " Directory not empty (preserved): $dirPath" $ColorWarning
|
||||
}
|
||||
} catch {
|
||||
Write-ColorOutput " WARNING: Failed to remove directory: $dirPath" $ColorWarning
|
||||
$failedItems += $dirPath
|
||||
}
|
||||
} else {
|
||||
Write-ColorOutput " Directory not found (already removed): $dirPath" $ColorInfo
|
||||
}
|
||||
}
|
||||
|
||||
# Remove manifest file
|
||||
if (Test-Path $manifest.manifest_file) {
|
||||
try {
|
||||
Remove-Item -Path $manifest.manifest_file -Force
|
||||
Write-ColorOutput "Removed installation manifest: $($manifest.manifest_id)" $ColorSuccess
|
||||
} catch {
|
||||
Write-ColorOutput "WARNING: Failed to remove manifest file" $ColorWarning
|
||||
}
|
||||
}
|
||||
|
||||
# Show summary
|
||||
Write-Host ""
|
||||
Write-ColorOutput "========================================" $ColorInfo
|
||||
Write-ColorOutput "Uninstallation Summary:" $ColorInfo
|
||||
Write-Host " Files removed: $removedFiles"
|
||||
Write-Host " Directories removed: $removedDirs"
|
||||
|
||||
if ($failedItems.Count -gt 0) {
|
||||
Write-Host ""
|
||||
Write-ColorOutput "Failed to remove the following items:" $ColorWarning
|
||||
foreach ($item in $failedItems) {
|
||||
Write-Host " - $item"
|
||||
}
|
||||
}
|
||||
|
||||
Write-Host ""
|
||||
if ($failedItems.Count -eq 0) {
|
||||
Write-ColorOutput "Claude Code Workflow has been successfully uninstalled!" $ColorSuccess
|
||||
} else {
|
||||
Write-ColorOutput "Uninstallation completed with warnings." $ColorWarning
|
||||
Write-ColorOutput "Please manually remove the failed items listed above." $ColorInfo
|
||||
}
|
||||
|
||||
return $true
|
||||
}
|
||||
|
||||
function Create-VersionJson {
|
||||
param(
|
||||
[string]$TargetClaudeDir,
|
||||
@@ -751,6 +1188,9 @@ function Install-Global {
|
||||
|
||||
Write-ColorOutput "Global installation path: $userProfile" $ColorInfo
|
||||
|
||||
# Initialize manifest
|
||||
$manifest = New-InstallManifest -InstallationMode "Global" -InstallationPath $userProfile
|
||||
|
||||
# Source paths
|
||||
$sourceDir = $PSScriptRoot
|
||||
$sourceClaudeDir = Join-Path $sourceDir ".claude"
|
||||
@@ -791,22 +1231,73 @@ function Install-Global {
|
||||
Write-ColorOutput "Installing .claude directory..." $ColorInfo
|
||||
$claudeInstalled = Backup-AndReplaceDirectory -Source $sourceClaudeDir -Destination $globalClaudeDir -Description ".claude directory" -BackupFolder $backupFolder
|
||||
|
||||
# Track .claude directory in manifest
|
||||
if ($claudeInstalled) {
|
||||
Add-ManifestEntry -Manifest $manifest -Path $globalClaudeDir -Type "Directory"
|
||||
|
||||
# Track files from SOURCE directory, not destination
|
||||
Get-ChildItem -Path $sourceClaudeDir -Recurse -File | ForEach-Object {
|
||||
# Calculate target path where this file will be installed
|
||||
$relativePath = $_.FullName.Substring($sourceClaudeDir.Length)
|
||||
$targetPath = $globalClaudeDir + $relativePath
|
||||
Add-ManifestEntry -Manifest $manifest -Path $targetPath -Type "File"
|
||||
}
|
||||
}
|
||||
|
||||
# Handle CLAUDE.md file in .claude directory
|
||||
Write-ColorOutput "Installing CLAUDE.md to global .claude directory..." $ColorInfo
|
||||
$claudeMdInstalled = Copy-FileToDestination -Source $sourceClaudeMd -Destination $globalClaudeMd -Description "CLAUDE.md" -BackupFolder $backupFolder
|
||||
|
||||
# Track CLAUDE.md in manifest
|
||||
if ($claudeMdInstalled) {
|
||||
Add-ManifestEntry -Manifest $manifest -Path $globalClaudeMd -Type "File"
|
||||
}
|
||||
|
||||
# Replace .codex directory (backup → clear → copy entire folder)
|
||||
Write-ColorOutput "Installing .codex directory..." $ColorInfo
|
||||
$codexInstalled = Backup-AndReplaceDirectory -Source $sourceCodexDir -Destination $globalCodexDir -Description ".codex directory" -BackupFolder $backupFolder
|
||||
|
||||
# Track .codex directory in manifest
|
||||
if ($codexInstalled) {
|
||||
Add-ManifestEntry -Manifest $manifest -Path $globalCodexDir -Type "Directory"
|
||||
# Track files from SOURCE directory
|
||||
Get-ChildItem -Path $sourceCodexDir -Recurse -File | ForEach-Object {
|
||||
$relativePath = $_.FullName.Substring($sourceCodexDir.Length)
|
||||
$targetPath = $globalCodexDir + $relativePath
|
||||
Add-ManifestEntry -Manifest $manifest -Path $targetPath -Type "File"
|
||||
}
|
||||
}
|
||||
|
||||
# Replace .gemini directory (backup → clear → copy entire folder)
|
||||
Write-ColorOutput "Installing .gemini directory..." $ColorInfo
|
||||
$geminiInstalled = Backup-AndReplaceDirectory -Source $sourceGeminiDir -Destination $globalGeminiDir -Description ".gemini directory" -BackupFolder $backupFolder
|
||||
|
||||
# Track .gemini directory in manifest
|
||||
if ($geminiInstalled) {
|
||||
Add-ManifestEntry -Manifest $manifest -Path $globalGeminiDir -Type "Directory"
|
||||
# Track files from SOURCE directory
|
||||
Get-ChildItem -Path $sourceGeminiDir -Recurse -File | ForEach-Object {
|
||||
$relativePath = $_.FullName.Substring($sourceGeminiDir.Length)
|
||||
$targetPath = $globalGeminiDir + $relativePath
|
||||
Add-ManifestEntry -Manifest $manifest -Path $targetPath -Type "File"
|
||||
}
|
||||
}
|
||||
|
||||
# Replace .qwen directory (backup → clear → copy entire folder)
|
||||
Write-ColorOutput "Installing .qwen directory..." $ColorInfo
|
||||
$qwenInstalled = Backup-AndReplaceDirectory -Source $sourceQwenDir -Destination $globalQwenDir -Description ".qwen directory" -BackupFolder $backupFolder
|
||||
|
||||
# Track .qwen directory in manifest
|
||||
if ($qwenInstalled) {
|
||||
Add-ManifestEntry -Manifest $manifest -Path $globalQwenDir -Type "Directory"
|
||||
# Track files from SOURCE directory
|
||||
Get-ChildItem -Path $sourceQwenDir -Recurse -File | ForEach-Object {
|
||||
$relativePath = $_.FullName.Substring($sourceQwenDir.Length)
|
||||
$targetPath = $globalQwenDir + $relativePath
|
||||
Add-ManifestEntry -Manifest $manifest -Path $targetPath -Type "File"
|
||||
}
|
||||
}
|
||||
|
||||
# Create version.json in global .claude directory
|
||||
Write-ColorOutput "Creating version.json..." $ColorInfo
|
||||
Create-VersionJson -TargetClaudeDir $globalClaudeDir -InstallationMode "Global"
|
||||
@@ -820,6 +1311,9 @@ function Install-Global {
|
||||
}
|
||||
}
|
||||
|
||||
# Save installation manifest
|
||||
Save-InstallManifest -Manifest $manifest
|
||||
|
||||
return $true
|
||||
}
|
||||
|
||||
@@ -837,6 +1331,9 @@ function Install-Path {
|
||||
|
||||
Write-ColorOutput "Global path: $userProfile" $ColorInfo
|
||||
|
||||
# Initialize manifest
|
||||
$manifest = New-InstallManifest -InstallationMode "Path" -InstallationPath $TargetDirectory
|
||||
|
||||
# Source paths
|
||||
$sourceDir = $PSScriptRoot
|
||||
$sourceClaudeDir = Join-Path $sourceDir ".claude"
|
||||
@@ -877,8 +1374,19 @@ function Install-Path {
|
||||
if (Test-Path $sourceFolderPath) {
|
||||
# Use new backup and replace logic for local folders
|
||||
Write-ColorOutput "Installing local folder: $folder..." $ColorInfo
|
||||
Backup-AndReplaceDirectory -Source $sourceFolderPath -Destination $destFolderPath -Description "$folder folder" -BackupFolder $backupFolder
|
||||
$folderInstalled = Backup-AndReplaceDirectory -Source $sourceFolderPath -Destination $destFolderPath -Description "$folder folder" -BackupFolder $backupFolder
|
||||
Write-ColorOutput "Installed local folder: $folder" $ColorSuccess
|
||||
|
||||
# Track local folder in manifest
|
||||
if ($folderInstalled) {
|
||||
Add-ManifestEntry -Manifest $manifest -Path $destFolderPath -Type "Directory"
|
||||
# Track files from SOURCE directory
|
||||
Get-ChildItem -Path $sourceFolderPath -Recurse -File | ForEach-Object {
|
||||
$relativePath = $_.FullName.Substring($sourceFolderPath.Length)
|
||||
$targetPath = $destFolderPath + $relativePath
|
||||
Add-ManifestEntry -Manifest $manifest -Path $targetPath -Type "File"
|
||||
}
|
||||
}
|
||||
} else {
|
||||
Write-ColorOutput "WARNING: Source folder not found: $folder" $ColorWarning
|
||||
}
|
||||
@@ -933,23 +1441,71 @@ function Install-Path {
|
||||
|
||||
Write-ColorOutput "Merged $mergedCount files to global location" $ColorSuccess
|
||||
|
||||
# Track global files in manifest
|
||||
$globalClaudeFiles = Get-ChildItem -Path $globalClaudeDir -Recurse -File | Where-Object {
|
||||
$relativePath = $_.FullName.Substring($globalClaudeDir.Length + 1)
|
||||
$topFolder = $relativePath.Split([System.IO.Path]::DirectorySeparatorChar)[0]
|
||||
$topFolder -notin $localFolders
|
||||
}
|
||||
foreach ($file in $globalClaudeFiles) {
|
||||
Add-ManifestEntry -Manifest $manifest -Path $file.FullName -Type "File"
|
||||
}
|
||||
|
||||
# Handle CLAUDE.md file in global .claude directory
|
||||
$globalClaudeMd = Join-Path $globalClaudeDir "CLAUDE.md"
|
||||
Write-ColorOutput "Installing CLAUDE.md to global .claude directory..." $ColorInfo
|
||||
Copy-FileToDestination -Source $sourceClaudeMd -Destination $globalClaudeMd -Description "CLAUDE.md" -BackupFolder $backupFolder
|
||||
$claudeMdInstalled = Copy-FileToDestination -Source $sourceClaudeMd -Destination $globalClaudeMd -Description "CLAUDE.md" -BackupFolder $backupFolder
|
||||
|
||||
# Track CLAUDE.md in manifest
|
||||
if ($claudeMdInstalled) {
|
||||
Add-ManifestEntry -Manifest $manifest -Path $globalClaudeMd -Type "File"
|
||||
}
|
||||
|
||||
# Replace .codex directory to local location (backup → clear → copy entire folder)
|
||||
Write-ColorOutput "Installing .codex directory to local location..." $ColorInfo
|
||||
$codexInstalled = Backup-AndReplaceDirectory -Source $sourceCodexDir -Destination $localCodexDir -Description ".codex directory" -BackupFolder $backupFolder
|
||||
|
||||
# Track .codex directory in manifest
|
||||
if ($codexInstalled) {
|
||||
Add-ManifestEntry -Manifest $manifest -Path $localCodexDir -Type "Directory"
|
||||
# Track files from SOURCE directory
|
||||
Get-ChildItem -Path $sourceCodexDir -Recurse -File | ForEach-Object {
|
||||
$relativePath = $_.FullName.Substring($sourceCodexDir.Length)
|
||||
$targetPath = $localCodexDir + $relativePath
|
||||
Add-ManifestEntry -Manifest $manifest -Path $targetPath -Type "File"
|
||||
}
|
||||
}
|
||||
|
||||
# Replace .gemini directory to local location (backup → clear → copy entire folder)
|
||||
Write-ColorOutput "Installing .gemini directory to local location..." $ColorInfo
|
||||
$geminiInstalled = Backup-AndReplaceDirectory -Source $sourceGeminiDir -Destination $localGeminiDir -Description ".gemini directory" -BackupFolder $backupFolder
|
||||
|
||||
# Track .gemini directory in manifest
|
||||
if ($geminiInstalled) {
|
||||
Add-ManifestEntry -Manifest $manifest -Path $localGeminiDir -Type "Directory"
|
||||
# Track files from SOURCE directory
|
||||
Get-ChildItem -Path $sourceGeminiDir -Recurse -File | ForEach-Object {
|
||||
$relativePath = $_.FullName.Substring($sourceGeminiDir.Length)
|
||||
$targetPath = $localGeminiDir + $relativePath
|
||||
Add-ManifestEntry -Manifest $manifest -Path $targetPath -Type "File"
|
||||
}
|
||||
}
|
||||
|
||||
# Replace .qwen directory to local location (backup → clear → copy entire folder)
|
||||
Write-ColorOutput "Installing .qwen directory to local location..." $ColorInfo
|
||||
$qwenInstalled = Backup-AndReplaceDirectory -Source $sourceQwenDir -Destination $localQwenDir -Description ".qwen directory" -BackupFolder $backupFolder
|
||||
|
||||
# Track .qwen directory in manifest
|
||||
if ($qwenInstalled) {
|
||||
Add-ManifestEntry -Manifest $manifest -Path $localQwenDir -Type "Directory"
|
||||
# Track files from SOURCE directory
|
||||
Get-ChildItem -Path $sourceQwenDir -Recurse -File | ForEach-Object {
|
||||
$relativePath = $_.FullName.Substring($sourceQwenDir.Length)
|
||||
$targetPath = $localQwenDir + $relativePath
|
||||
Add-ManifestEntry -Manifest $manifest -Path $targetPath -Type "File"
|
||||
}
|
||||
}
|
||||
|
||||
# Create version.json in local .claude directory
|
||||
Write-ColorOutput "Creating version.json in local directory..." $ColorInfo
|
||||
Create-VersionJson -TargetClaudeDir $localClaudeDir -InstallationMode "Path"
|
||||
@@ -966,6 +1522,9 @@ function Install-Path {
|
||||
}
|
||||
}
|
||||
|
||||
# Save installation manifest
|
||||
Save-InstallManifest -Manifest $manifest
|
||||
|
||||
return $true
|
||||
}
|
||||
|
||||
@@ -1098,6 +1657,42 @@ function Main {
|
||||
# Use SourceVersion parameter if provided, otherwise use default
|
||||
$installVersion = if ($SourceVersion) { $SourceVersion } else { $DefaultVersion }
|
||||
|
||||
# Show banner first
|
||||
Show-Banner
|
||||
|
||||
# Check for uninstall mode from parameter or ask user interactively
|
||||
$operationMode = "Install"
|
||||
|
||||
if ($Uninstall) {
|
||||
$operationMode = "Uninstall"
|
||||
} elseif (-not $NonInteractive -and -not $InstallMode) {
|
||||
# Interactive mode selection
|
||||
Write-Host ""
|
||||
$operations = @(
|
||||
"Install - Install Claude Code Workflow System",
|
||||
"Uninstall - Remove Claude Code Workflow System"
|
||||
)
|
||||
$selection = Get-UserChoiceWithArrows -Prompt "Choose operation:" -Options $operations -DefaultIndex 0
|
||||
|
||||
if ($selection -like "Uninstall*") {
|
||||
$operationMode = "Uninstall"
|
||||
}
|
||||
}
|
||||
|
||||
# Handle uninstall mode
|
||||
if ($operationMode -eq "Uninstall") {
|
||||
$result = Uninstall-ClaudeWorkflow
|
||||
|
||||
if (-not $NonInteractive) {
|
||||
Write-Host ""
|
||||
Write-ColorOutput "Press any key to exit..." $ColorPrompt
|
||||
$null = $Host.UI.RawUI.ReadKey("NoEcho,IncludeKeyDown")
|
||||
}
|
||||
|
||||
return $(if ($result) { 0 } else { 1 })
|
||||
}
|
||||
|
||||
# Continue with installation
|
||||
Show-Header -InstallVersion $installVersion
|
||||
|
||||
# Test prerequisites
|
||||
|
||||
@@ -24,10 +24,14 @@ FORCE=false
|
||||
NON_INTERACTIVE=false
|
||||
BACKUP_ALL=true # Enabled by default
|
||||
NO_BACKUP=false
|
||||
UNINSTALL=false # Uninstall mode
|
||||
SOURCE_VERSION="" # Version from remote installer
|
||||
SOURCE_BRANCH="" # Branch from remote installer
|
||||
SOURCE_COMMIT="" # Commit SHA from remote installer
|
||||
|
||||
# Global manifest directory location
|
||||
MANIFEST_DIR="${HOME}/.claude-manifests"
|
||||
|
||||
# Functions
|
||||
function write_color() {
|
||||
local message="$1"
|
||||
@@ -474,6 +478,9 @@ function install_global() {
|
||||
|
||||
write_color "Global installation path: $user_home" "$COLOR_INFO"
|
||||
|
||||
# Initialize manifest
|
||||
local manifest_file=$(new_install_manifest "Global" "$user_home")
|
||||
|
||||
# Source paths
|
||||
local script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
local source_claude_dir="${script_dir}/.claude"
|
||||
@@ -507,23 +514,66 @@ function install_global() {
|
||||
|
||||
# Replace .claude directory (backup → clear conflicting → copy)
|
||||
write_color "Installing .claude directory..." "$COLOR_INFO"
|
||||
backup_and_replace_directory "$source_claude_dir" "$global_claude_dir" ".claude directory" "$backup_folder"
|
||||
if backup_and_replace_directory "$source_claude_dir" "$global_claude_dir" ".claude directory" "$backup_folder"; then
|
||||
# Track .claude directory in manifest
|
||||
add_manifest_entry "$manifest_file" "$global_claude_dir" "Directory"
|
||||
|
||||
# Track files from SOURCE directory, not destination
|
||||
while IFS= read -r -d '' source_file; do
|
||||
local relative_path="${source_file#$source_claude_dir}"
|
||||
local target_path="${global_claude_dir}${relative_path}"
|
||||
add_manifest_entry "$manifest_file" "$target_path" "File"
|
||||
done < <(find "$source_claude_dir" -type f -print0)
|
||||
fi
|
||||
|
||||
# Handle CLAUDE.md file
|
||||
write_color "Installing CLAUDE.md to global .claude directory..." "$COLOR_INFO"
|
||||
copy_file_to_destination "$source_claude_md" "$global_claude_md" "CLAUDE.md" "$backup_folder"
|
||||
if copy_file_to_destination "$source_claude_md" "$global_claude_md" "CLAUDE.md" "$backup_folder"; then
|
||||
# Track CLAUDE.md in manifest
|
||||
add_manifest_entry "$manifest_file" "$global_claude_md" "File"
|
||||
fi
|
||||
|
||||
# Replace .codex directory (backup → clear conflicting → copy)
|
||||
write_color "Installing .codex directory..." "$COLOR_INFO"
|
||||
backup_and_replace_directory "$source_codex_dir" "$global_codex_dir" ".codex directory" "$backup_folder"
|
||||
if backup_and_replace_directory "$source_codex_dir" "$global_codex_dir" ".codex directory" "$backup_folder"; then
|
||||
# Track .codex directory in manifest
|
||||
add_manifest_entry "$manifest_file" "$global_codex_dir" "Directory"
|
||||
|
||||
# Track files from SOURCE directory
|
||||
while IFS= read -r -d '' source_file; do
|
||||
local relative_path="${source_file#$source_codex_dir}"
|
||||
local target_path="${global_codex_dir}${relative_path}"
|
||||
add_manifest_entry "$manifest_file" "$target_path" "File"
|
||||
done < <(find "$source_codex_dir" -type f -print0)
|
||||
fi
|
||||
|
||||
# Replace .gemini directory (backup → clear conflicting → copy)
|
||||
write_color "Installing .gemini directory..." "$COLOR_INFO"
|
||||
backup_and_replace_directory "$source_gemini_dir" "$global_gemini_dir" ".gemini directory" "$backup_folder"
|
||||
if backup_and_replace_directory "$source_gemini_dir" "$global_gemini_dir" ".gemini directory" "$backup_folder"; then
|
||||
# Track .gemini directory in manifest
|
||||
add_manifest_entry "$manifest_file" "$global_gemini_dir" "Directory"
|
||||
|
||||
# Track files from SOURCE directory
|
||||
while IFS= read -r -d '' source_file; do
|
||||
local relative_path="${source_file#$source_gemini_dir}"
|
||||
local target_path="${global_gemini_dir}${relative_path}"
|
||||
add_manifest_entry "$manifest_file" "$target_path" "File"
|
||||
done < <(find "$source_gemini_dir" -type f -print0)
|
||||
fi
|
||||
|
||||
# Replace .qwen directory (backup → clear conflicting → copy)
|
||||
write_color "Installing .qwen directory..." "$COLOR_INFO"
|
||||
backup_and_replace_directory "$source_qwen_dir" "$global_qwen_dir" ".qwen directory" "$backup_folder"
|
||||
if backup_and_replace_directory "$source_qwen_dir" "$global_qwen_dir" ".qwen directory" "$backup_folder"; then
|
||||
# Track .qwen directory in manifest
|
||||
add_manifest_entry "$manifest_file" "$global_qwen_dir" "Directory"
|
||||
|
||||
# Track files from SOURCE directory
|
||||
while IFS= read -r -d '' source_file; do
|
||||
local relative_path="${source_file#$source_qwen_dir}"
|
||||
local target_path="${global_qwen_dir}${relative_path}"
|
||||
add_manifest_entry "$manifest_file" "$target_path" "File"
|
||||
done < <(find "$source_qwen_dir" -type f -print0)
|
||||
fi
|
||||
|
||||
# Remove empty backup folder
|
||||
if [ -n "$backup_folder" ] && [ -d "$backup_folder" ]; then
|
||||
@@ -537,6 +587,9 @@ function install_global() {
|
||||
write_color "Creating version.json..." "$COLOR_INFO"
|
||||
create_version_json "$global_claude_dir" "Global"
|
||||
|
||||
# Save installation manifest
|
||||
save_install_manifest "$manifest_file"
|
||||
|
||||
return 0
|
||||
}
|
||||
|
||||
@@ -550,6 +603,9 @@ function install_path() {
|
||||
local global_claude_dir="${user_home}/.claude"
|
||||
write_color "Global path: $user_home" "$COLOR_INFO"
|
||||
|
||||
# Initialize manifest
|
||||
local manifest_file=$(new_install_manifest "Path" "$target_dir")
|
||||
|
||||
# Source paths
|
||||
local script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
local source_claude_dir="${script_dir}/.claude"
|
||||
@@ -588,7 +644,17 @@ function install_path() {
|
||||
if [ -d "$source_folder" ]; then
|
||||
# Use new backup and replace logic for local folders
|
||||
write_color "Installing local folder: $folder..." "$COLOR_INFO"
|
||||
backup_and_replace_directory "$source_folder" "$dest_folder" "$folder folder" "$backup_folder"
|
||||
if backup_and_replace_directory "$source_folder" "$dest_folder" "$folder folder" "$backup_folder"; then
|
||||
# Track local folder in manifest
|
||||
add_manifest_entry "$manifest_file" "$dest_folder" "Directory"
|
||||
|
||||
# Track files from SOURCE directory
|
||||
while IFS= read -r -d '' source_file; do
|
||||
local relative_path="${source_file#$source_folder}"
|
||||
local target_path="${dest_folder}${relative_path}"
|
||||
add_manifest_entry "$manifest_file" "$target_path" "File"
|
||||
done < <(find "$source_folder" -type f -print0)
|
||||
fi
|
||||
write_color "✓ Installed local folder: $folder" "$COLOR_SUCCESS"
|
||||
else
|
||||
write_color "WARNING: Source folder not found: $folder" "$COLOR_WARNING"
|
||||
@@ -644,19 +710,52 @@ function install_path() {
|
||||
# Handle CLAUDE.md file in global .claude directory
|
||||
local global_claude_md="${global_claude_dir}/CLAUDE.md"
|
||||
write_color "Installing CLAUDE.md to global .claude directory..." "$COLOR_INFO"
|
||||
copy_file_to_destination "$source_claude_md" "$global_claude_md" "CLAUDE.md" "$backup_folder"
|
||||
if copy_file_to_destination "$source_claude_md" "$global_claude_md" "CLAUDE.md" "$backup_folder"; then
|
||||
# Track CLAUDE.md in manifest
|
||||
add_manifest_entry "$manifest_file" "$global_claude_md" "File"
|
||||
fi
|
||||
|
||||
# Replace .codex directory to local location (backup → clear conflicting → copy)
|
||||
write_color "Installing .codex directory to local location..." "$COLOR_INFO"
|
||||
backup_and_replace_directory "$source_codex_dir" "$local_codex_dir" ".codex directory" "$backup_folder"
|
||||
if backup_and_replace_directory "$source_codex_dir" "$local_codex_dir" ".codex directory" "$backup_folder"; then
|
||||
# Track .codex directory in manifest
|
||||
add_manifest_entry "$manifest_file" "$local_codex_dir" "Directory"
|
||||
|
||||
# Track files from SOURCE directory
|
||||
while IFS= read -r -d '' source_file; do
|
||||
local relative_path="${source_file#$source_codex_dir}"
|
||||
local target_path="${local_codex_dir}${relative_path}"
|
||||
add_manifest_entry "$manifest_file" "$target_path" "File"
|
||||
done < <(find "$source_codex_dir" -type f -print0)
|
||||
fi
|
||||
|
||||
# Replace .gemini directory to local location (backup → clear conflicting → copy)
|
||||
write_color "Installing .gemini directory to local location..." "$COLOR_INFO"
|
||||
backup_and_replace_directory "$source_gemini_dir" "$local_gemini_dir" ".gemini directory" "$backup_folder"
|
||||
if backup_and_replace_directory "$source_gemini_dir" "$local_gemini_dir" ".gemini directory" "$backup_folder"; then
|
||||
# Track .gemini directory in manifest
|
||||
add_manifest_entry "$manifest_file" "$local_gemini_dir" "Directory"
|
||||
|
||||
# Track files from SOURCE directory
|
||||
while IFS= read -r -d '' source_file; do
|
||||
local relative_path="${source_file#$source_gemini_dir}"
|
||||
local target_path="${local_gemini_dir}${relative_path}"
|
||||
add_manifest_entry "$manifest_file" "$target_path" "File"
|
||||
done < <(find "$source_gemini_dir" -type f -print0)
|
||||
fi
|
||||
|
||||
# Replace .qwen directory to local location (backup → clear conflicting → copy)
|
||||
write_color "Installing .qwen directory to local location..." "$COLOR_INFO"
|
||||
backup_and_replace_directory "$source_qwen_dir" "$local_qwen_dir" ".qwen directory" "$backup_folder"
|
||||
if backup_and_replace_directory "$source_qwen_dir" "$local_qwen_dir" ".qwen directory" "$backup_folder"; then
|
||||
# Track .qwen directory in manifest
|
||||
add_manifest_entry "$manifest_file" "$local_qwen_dir" "Directory"
|
||||
|
||||
# Track files from SOURCE directory
|
||||
while IFS= read -r -d '' source_file; do
|
||||
local relative_path="${source_file#$source_qwen_dir}"
|
||||
local target_path="${local_qwen_dir}${relative_path}"
|
||||
add_manifest_entry "$manifest_file" "$target_path" "File"
|
||||
done < <(find "$source_qwen_dir" -type f -print0)
|
||||
fi
|
||||
|
||||
# Remove empty backup folder
|
||||
if [ -n "$backup_folder" ] && [ -d "$backup_folder" ]; then
|
||||
@@ -674,6 +773,9 @@ function install_path() {
|
||||
write_color "Creating version.json in global directory..." "$COLOR_INFO"
|
||||
create_version_json "$global_claude_dir" "Global"
|
||||
|
||||
# Save installation manifest
|
||||
save_install_manifest "$manifest_file"
|
||||
|
||||
return 0
|
||||
}
|
||||
|
||||
@@ -749,6 +851,357 @@ function get_installation_path() {
|
||||
done
|
||||
}
|
||||
|
||||
# ============================================================================
|
||||
# INSTALLATION MANIFEST MANAGEMENT
|
||||
# ============================================================================
|
||||
|
||||
function new_install_manifest() {
|
||||
local installation_mode="$1"
|
||||
local installation_path="$2"
|
||||
|
||||
# Create manifest directory if it doesn't exist
|
||||
mkdir -p "$MANIFEST_DIR"
|
||||
|
||||
# Generate unique manifest ID based on timestamp and mode
|
||||
local timestamp=$(date +"%Y%m%d-%H%M%S")
|
||||
local manifest_id="install-${installation_mode}-${timestamp}"
|
||||
|
||||
# Create manifest file path
|
||||
local manifest_file="${MANIFEST_DIR}/${manifest_id}.json"
|
||||
|
||||
# Get current UTC timestamp
|
||||
local installation_date_utc=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
|
||||
|
||||
# Create manifest JSON
|
||||
cat > "$manifest_file" << EOF
|
||||
{
|
||||
"manifest_id": "$manifest_id",
|
||||
"version": "1.0",
|
||||
"installation_mode": "$installation_mode",
|
||||
"installation_path": "$installation_path",
|
||||
"installation_date": "$installation_date_utc",
|
||||
"installer_version": "$VERSION",
|
||||
"files": [],
|
||||
"directories": []
|
||||
}
|
||||
EOF
|
||||
|
||||
echo "$manifest_file"
|
||||
}
|
||||
|
||||
function add_manifest_entry() {
|
||||
local manifest_file="$1"
|
||||
local entry_path="$2"
|
||||
local entry_type="$3"
|
||||
|
||||
if [ ! -f "$manifest_file" ]; then
|
||||
write_color "WARNING: Manifest file not found: $manifest_file" "$COLOR_WARNING"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local timestamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
|
||||
|
||||
# Escape path for JSON
|
||||
local escaped_path=$(echo "$entry_path" | sed 's/\\/\\\\/g' | sed 's/"/\\"/g')
|
||||
|
||||
# Create entry JSON
|
||||
local entry_json=$(cat << EOF
|
||||
{
|
||||
"path": "$escaped_path",
|
||||
"type": "$entry_type",
|
||||
"timestamp": "$timestamp"
|
||||
}
|
||||
EOF
|
||||
)
|
||||
|
||||
# Read manifest, add entry, write back
|
||||
local temp_file="${manifest_file}.tmp"
|
||||
|
||||
if [ "$entry_type" = "File" ]; then
|
||||
jq --argjson entry "$entry_json" '.files += [$entry]' "$manifest_file" > "$temp_file"
|
||||
else
|
||||
jq --argjson entry "$entry_json" '.directories += [$entry]' "$manifest_file" > "$temp_file"
|
||||
fi
|
||||
|
||||
mv "$temp_file" "$manifest_file"
|
||||
}
|
||||
|
||||
function save_install_manifest() {
|
||||
local manifest_file="$1"
|
||||
|
||||
if [ -f "$manifest_file" ]; then
|
||||
write_color "Installation manifest saved: $manifest_file" "$COLOR_SUCCESS"
|
||||
return 0
|
||||
else
|
||||
write_color "WARNING: Failed to save installation manifest" "$COLOR_WARNING"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
function migrate_legacy_manifest() {
|
||||
local legacy_manifest="${HOME}/.claude-install-manifest.json"
|
||||
|
||||
if [ ! -f "$legacy_manifest" ]; then
|
||||
return 0
|
||||
fi
|
||||
|
||||
write_color "Found legacy manifest file, migrating to new system..." "$COLOR_INFO"
|
||||
|
||||
# Create manifest directory if it doesn't exist
|
||||
mkdir -p "$MANIFEST_DIR"
|
||||
|
||||
# Read legacy manifest
|
||||
local mode=$(jq -r '.installation_mode // "Global"' "$legacy_manifest")
|
||||
local timestamp=$(date +"%Y%m%d-%H%M%S")
|
||||
local manifest_id="install-${mode}-${timestamp}-migrated"
|
||||
|
||||
# Create new manifest file
|
||||
local new_manifest="${MANIFEST_DIR}/${manifest_id}.json"
|
||||
|
||||
# Copy with new manifest_id field
|
||||
jq --arg id "$manifest_id" '. + {manifest_id: $id}' "$legacy_manifest" > "$new_manifest"
|
||||
|
||||
# Rename old manifest (don't delete, keep as backup)
|
||||
mv "$legacy_manifest" "${legacy_manifest}.migrated"
|
||||
|
||||
write_color "Legacy manifest migrated successfully" "$COLOR_SUCCESS"
|
||||
write_color "Old manifest backed up to: ${legacy_manifest}.migrated" "$COLOR_INFO"
|
||||
}
|
||||
|
||||
function get_all_install_manifests() {
|
||||
# Migrate legacy manifest if exists
|
||||
migrate_legacy_manifest
|
||||
|
||||
if [ ! -d "$MANIFEST_DIR" ]; then
|
||||
echo "[]"
|
||||
return
|
||||
fi
|
||||
|
||||
# Check if any manifest files exist
|
||||
local manifest_count=$(find "$MANIFEST_DIR" -name "install-*.json" -type f 2>/dev/null | wc -l)
|
||||
|
||||
if [ "$manifest_count" -eq 0 ]; then
|
||||
echo "[]"
|
||||
return
|
||||
fi
|
||||
|
||||
# Collect all manifests into JSON array
|
||||
local manifests="["
|
||||
local first=true
|
||||
|
||||
while IFS= read -r -d '' file; do
|
||||
if [ "$first" = true ]; then
|
||||
first=false
|
||||
else
|
||||
manifests+=","
|
||||
fi
|
||||
|
||||
# Add manifest_file field
|
||||
local manifest_content=$(jq --arg file "$file" '. + {manifest_file: $file}' "$file")
|
||||
|
||||
# Count files and directories safely
|
||||
local files_count=$(echo "$manifest_content" | jq '.files | length')
|
||||
local dirs_count=$(echo "$manifest_content" | jq '.directories | length')
|
||||
|
||||
# Add counts to manifest
|
||||
manifest_content=$(echo "$manifest_content" | jq --argjson fc "$files_count" --argjson dc "$dirs_count" '. + {files_count: $fc, directories_count: $dc}')
|
||||
|
||||
manifests+="$manifest_content"
|
||||
done < <(find "$MANIFEST_DIR" -name "install-*.json" -type f -print0 | sort -z)
|
||||
|
||||
manifests+="]"
|
||||
|
||||
echo "$manifests"
|
||||
}
|
||||
|
||||
# ============================================================================
|
||||
# UNINSTALLATION FUNCTIONS
|
||||
# ============================================================================
|
||||
|
||||
function uninstall_claude_workflow() {
|
||||
write_color "Claude Code Workflow System Uninstaller" "$COLOR_INFO"
|
||||
write_color "========================================" "$COLOR_INFO"
|
||||
echo ""
|
||||
|
||||
# Load all manifests
|
||||
local manifests_json=$(get_all_install_manifests)
|
||||
local manifests_count=$(echo "$manifests_json" | jq 'length')
|
||||
|
||||
if [ "$manifests_count" -eq 0 ]; then
|
||||
write_color "ERROR: No installation manifests found in: $MANIFEST_DIR" "$COLOR_ERROR"
|
||||
write_color "Cannot proceed with uninstallation without manifest." "$COLOR_ERROR"
|
||||
echo ""
|
||||
write_color "Manual uninstallation instructions:" "$COLOR_INFO"
|
||||
echo "For Global installation, remove these directories:"
|
||||
echo " - ~/.claude/agents"
|
||||
echo " - ~/.claude/commands"
|
||||
echo " - ~/.claude/output-styles"
|
||||
echo " - ~/.claude/workflows"
|
||||
echo " - ~/.claude/scripts"
|
||||
echo " - ~/.claude/prompt-templates"
|
||||
echo " - ~/.claude/python_script"
|
||||
echo " - ~/.claude/skills"
|
||||
echo " - ~/.claude/version.json"
|
||||
echo " - ~/.claude/CLAUDE.md"
|
||||
echo " - ~/.codex"
|
||||
echo " - ~/.gemini"
|
||||
echo " - ~/.qwen"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Display available installations
|
||||
write_color "Found $manifests_count installation(s):" "$COLOR_INFO"
|
||||
echo ""
|
||||
|
||||
# If only one manifest, use it directly
|
||||
local selected_index=0
|
||||
local selected_manifest=""
|
||||
|
||||
if [ "$manifests_count" -eq 1 ]; then
|
||||
selected_manifest=$(echo "$manifests_json" | jq '.[0]')
|
||||
write_color "Only one installation found, will uninstall:" "$COLOR_INFO"
|
||||
else
|
||||
# Multiple manifests - let user choose
|
||||
local options=()
|
||||
|
||||
for i in $(seq 0 $((manifests_count - 1))); do
|
||||
local m=$(echo "$manifests_json" | jq ".[$i]")
|
||||
|
||||
# Safely extract date string
|
||||
local date_str=$(echo "$m" | jq -r '.installation_date // "unknown date"' | cut -c1-10)
|
||||
local mode=$(echo "$m" | jq -r '.installation_mode // "Unknown"')
|
||||
local files_count=$(echo "$m" | jq -r '.files_count // 0')
|
||||
local dirs_count=$(echo "$m" | jq -r '.directories_count // 0')
|
||||
local path_info=$(echo "$m" | jq -r '.installation_path // ""')
|
||||
|
||||
if [ -n "$path_info" ]; then
|
||||
path_info=" ($path_info)"
|
||||
fi
|
||||
|
||||
options+=("$((i + 1)). [$mode] $date_str - $files_count files, $dirs_count dirs$path_info")
|
||||
done
|
||||
|
||||
options+=("Cancel - Don't uninstall anything")
|
||||
|
||||
echo ""
|
||||
local selection=$(get_user_choice "Select installation to uninstall:" "${options[@]}")
|
||||
|
||||
if [[ "$selection" == Cancel* ]]; then
|
||||
write_color "Uninstallation cancelled." "$COLOR_WARNING"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Parse selection to get index
|
||||
selected_index=$((${selection%%.*} - 1))
|
||||
selected_manifest=$(echo "$manifests_json" | jq ".[$selected_index]")
|
||||
fi
|
||||
|
||||
# Display selected installation info
|
||||
echo ""
|
||||
write_color "Installation Information:" "$COLOR_INFO"
|
||||
echo " Manifest ID: $(echo "$selected_manifest" | jq -r '.manifest_id')"
|
||||
echo " Mode: $(echo "$selected_manifest" | jq -r '.installation_mode')"
|
||||
echo " Path: $(echo "$selected_manifest" | jq -r '.installation_path')"
|
||||
echo " Date: $(echo "$selected_manifest" | jq -r '.installation_date')"
|
||||
echo " Installer Version: $(echo "$selected_manifest" | jq -r '.installer_version')"
|
||||
echo " Files tracked: $(echo "$selected_manifest" | jq -r '.files_count')"
|
||||
echo " Directories tracked: $(echo "$selected_manifest" | jq -r '.directories_count')"
|
||||
echo ""
|
||||
|
||||
# Confirm uninstallation
|
||||
if ! confirm_action "Do you want to uninstall this installation?" false; then
|
||||
write_color "Uninstallation cancelled." "$COLOR_WARNING"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local removed_files=0
|
||||
local removed_dirs=0
|
||||
local failed_items=()
|
||||
|
||||
# Remove files first
|
||||
write_color "Removing installed files..." "$COLOR_INFO"
|
||||
|
||||
local files_array=$(echo "$selected_manifest" | jq -c '.files[]')
|
||||
|
||||
while IFS= read -r file_entry; do
|
||||
local file_path=$(echo "$file_entry" | jq -r '.path')
|
||||
|
||||
if [ -f "$file_path" ]; then
|
||||
if rm -f "$file_path" 2>/dev/null; then
|
||||
write_color " Removed file: $file_path" "$COLOR_SUCCESS"
|
||||
((removed_files++))
|
||||
else
|
||||
write_color " WARNING: Failed to remove file: $file_path" "$COLOR_WARNING"
|
||||
failed_items+=("$file_path")
|
||||
fi
|
||||
else
|
||||
write_color " File not found (already removed): $file_path" "$COLOR_INFO"
|
||||
fi
|
||||
done <<< "$files_array"
|
||||
|
||||
# Remove directories (in reverse order by path length)
|
||||
write_color "Removing installed directories..." "$COLOR_INFO"
|
||||
|
||||
local dirs_array=$(echo "$selected_manifest" | jq -c '.directories[] | {path: .path, length: (.path | length)}' | sort -t: -k2 -rn | jq -c '.path')
|
||||
|
||||
while IFS= read -r dir_path_json; do
|
||||
local dir_path=$(echo "$dir_path_json" | jq -r '.')
|
||||
|
||||
if [ -d "$dir_path" ]; then
|
||||
# Check if directory is empty
|
||||
if [ -z "$(ls -A "$dir_path" 2>/dev/null)" ]; then
|
||||
if rmdir "$dir_path" 2>/dev/null; then
|
||||
write_color " Removed directory: $dir_path" "$COLOR_SUCCESS"
|
||||
((removed_dirs++))
|
||||
else
|
||||
write_color " WARNING: Failed to remove directory: $dir_path" "$COLOR_WARNING"
|
||||
failed_items+=("$dir_path")
|
||||
fi
|
||||
else
|
||||
write_color " Directory not empty (preserved): $dir_path" "$COLOR_WARNING"
|
||||
fi
|
||||
else
|
||||
write_color " Directory not found (already removed): $dir_path" "$COLOR_INFO"
|
||||
fi
|
||||
done <<< "$dirs_array"

# Remove manifest file
local manifest_file=$(echo "$selected_manifest" | jq -r '.manifest_file')

if [ -f "$manifest_file" ]; then
if rm -f "$manifest_file" 2>/dev/null; then
write_color "Removed installation manifest: $(basename "$manifest_file")" "$COLOR_SUCCESS"
else
write_color "WARNING: Failed to remove manifest file" "$COLOR_WARNING"
fi
fi

# Show summary
echo ""
write_color "========================================" "$COLOR_INFO"
write_color "Uninstallation Summary:" "$COLOR_INFO"
echo " Files removed: $removed_files"
echo " Directories removed: $removed_dirs"

if [ ${#failed_items[@]} -gt 0 ]; then
echo ""
write_color "Failed to remove the following items:" "$COLOR_WARNING"
for item in "${failed_items[@]}"; do
echo " - $item"
done
fi

echo ""
if [ ${#failed_items[@]} -eq 0 ]; then
write_color "✓ Claude Code Workflow has been successfully uninstalled!" "$COLOR_SUCCESS"
else
write_color "Uninstallation completed with warnings." "$COLOR_WARNING"
write_color "Please manually remove the failed items listed above." "$COLOR_INFO"
fi

return 0
}

function create_version_json() {
local target_claude_dir="$1"
local installation_mode="$2"
@@ -863,6 +1316,10 @@ function parse_arguments() {
BACKUP_ALL=false
shift
;;
-Uninstall)
UNINSTALL=true
shift
;;
-SourceVersion)
SOURCE_VERSION="$2"
shift 2
@@ -901,6 +1358,7 @@ Options:
-NonInteractive Run in non-interactive mode with default options
-BackupAll Automatically backup all existing files (default)
-NoBackup Disable automatic backup functionality
-Uninstall Uninstall Claude Code Workflow System based on installation manifest
-SourceVersion <ver> Source version (passed from install-remote.sh)
-SourceBranch <name> Source branch (passed from install-remote.sh)
-SourceCommit <sha> Source commit SHA (passed from install-remote.sh)
@@ -919,6 +1377,12 @@ Examples:
# Installation without backup
$0 -NoBackup

# Uninstall Claude Code Workflow System
$0 -Uninstall

# Uninstall without confirmation prompts
$0 -Uninstall -Force

# With version info (typically called by install-remote.sh)
$0 -InstallMode Global -Force -SourceVersion "3.4.2" -SourceBranch "main" -SourceCommit "abc1234"

@@ -926,6 +1390,46 @@ EOF
}

function main() {
# Show banner first
show_banner

# Check for uninstall mode from parameter or ask user interactively
local operation_mode="Install"

if [ "$UNINSTALL" = true ]; then
operation_mode="Uninstall"
elif [ "$NON_INTERACTIVE" != true ] && [ -z "$INSTALL_MODE" ]; then
# Interactive mode selection
echo ""
local operations=(
"Install - Install Claude Code Workflow System"
"Uninstall - Remove Claude Code Workflow System"
)
local selection=$(get_user_choice "Choose operation:" "${operations[@]}")

if [[ "$selection" == Uninstall* ]]; then
operation_mode="Uninstall"
fi
fi

# Handle uninstall mode
if [ "$operation_mode" = "Uninstall" ]; then
if uninstall_claude_workflow; then
local result=0
else
local result=1
fi

if [ "$NON_INTERACTIVE" != true ]; then
echo ""
write_color "Press Enter to exit..." "$COLOR_PROMPT"
read -r
fi

return $result
fi

# Continue with installation
show_header

# Test prerequisites
||||
README_CN.md: 987 changed lines (file diff suppressed because it is too large)
@@ -153,7 +153,22 @@ function Test-Prerequisites {
Write-ColorOutput "✓ Network connection OK" $ColorSuccess
} catch {
Write-ColorOutput "ERROR: Cannot connect to GitHub" $ColorError
Write-ColorOutput "Please check your network connection: $($_.Exception.Message)" $ColorError
Write-ColorOutput "Please check your network connection and try again." $ColorError
Write-Host ""
Write-ColorOutput "Common causes:" $ColorInfo
Write-Host " • Internet connection is down or unstable"
Write-Host " • Firewall or proxy is blocking GitHub access"
Write-Host " • DNS resolution issues"
Write-Host " • GitHub is temporarily unavailable"
Write-Host ""
Write-ColorOutput "Troubleshooting steps:" $ColorInfo
Write-Host " 1. Check your internet connection"
Write-Host " 2. Try accessing https://github.com in your browser"
Write-Host " 3. If using a proxy, configure it properly"
Write-Host " 4. Check firewall settings"
Write-Host " 5. Wait a few minutes and try again"
Write-Host ""
Write-ColorOutput "Error details: $($_.Exception.Message)" $ColorError
return $false
}

@@ -172,10 +187,12 @@ function Get-TempDirectory {
function Get-LatestRelease {
try {
$apiUrl = "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest"
$response = Invoke-RestMethod -Uri $apiUrl -UseBasicParsing
$response = Invoke-RestMethod -Uri $apiUrl -UseBasicParsing -TimeoutSec 10
return $response.tag_name
} catch {
Write-ColorOutput "WARNING: Failed to fetch latest release, using 'main' branch" $ColorWarning
Write-ColorOutput "WARNING: Failed to fetch latest release" $ColorWarning
Write-ColorOutput "Reason: $($_.Exception.Message)" $ColorWarning
Write-ColorOutput "Falling back to 'main' branch" $ColorInfo
return $null
}
}
@@ -229,19 +246,40 @@ function Download-Repository {
$progressPreference = $ProgressPreference
$ProgressPreference = 'SilentlyContinue'

Invoke-WebRequest -Uri $zipUrl -OutFile $zipPath -UseBasicParsing
Invoke-WebRequest -Uri $zipUrl -OutFile $zipPath -UseBasicParsing -TimeoutSec 300

$ProgressPreference = $progressPreference

if (Test-Path $zipPath) {
$fileSize = (Get-Item $zipPath).Length
if ($fileSize -eq 0) {
throw "Downloaded file is empty (0 bytes)"
}
Write-ColorOutput "Download complete ($([math]::Round($fileSize/1024/1024, 2)) MB)" $ColorSuccess
return $zipPath
} else {
throw "Downloaded file does not exist"
}
} catch {
Write-ColorOutput "Download failed: $($_.Exception.Message)" $ColorError
Write-Host ""
Write-ColorOutput "ERROR: Download failed" $ColorError
Write-Host ""
Write-ColorOutput "Common causes:" $ColorInfo
Write-Host " • Network connection interrupted during download"
Write-Host " • GitHub API rate limit exceeded"
Write-Host " • Invalid version tag or branch name"
Write-Host " • Temporary GitHub service issues"
Write-Host ""
Write-ColorOutput "Troubleshooting steps:" $ColorInfo
Write-Host " 1. Check your internet connection stability"
Write-Host " 2. Wait a few minutes and try again (rate limit resets)"
Write-Host " 3. Verify the version tag or branch name is correct"
Write-Host " 4. Try a different version (stable/latest)"
Write-Host " 5. Check GitHub status at https://www.githubstatus.com"
Write-Host ""
Write-ColorOutput "Download URL: $zipUrl" $ColorInfo
Write-ColorOutput "Error details: $($_.Exception.Message)" $ColorError
Write-Host ""
return $null
}
}
@@ -255,14 +293,24 @@ function Extract-Repository {
Write-ColorOutput "Extracting files..." $ColorInfo

try {
# Verify zip file exists and is not empty
if (-not (Test-Path $ZipPath)) {
throw "ZIP file not found: $ZipPath"
}

$zipSize = (Get-Item $ZipPath).Length
if ($zipSize -eq 0) {
throw "ZIP file is empty (0 bytes)"
}

# Use .NET to extract zip
Add-Type -AssemblyName System.IO.Compression.FileSystem
[System.IO.Compression.ZipFile]::ExtractToDirectory($ZipPath, $TempDir)

# Find the extracted directory (usually repo-name-branch)
$extractedDirs = Get-ChildItem -Path $TempDir -Directory
$repoDir = $extractedDirs | Where-Object { $_.Name -like "Claude-Code-Workflow-*" } | Select-Object -First 1

if ($repoDir) {
Write-ColorOutput "Extraction complete: $($repoDir.FullName)" $ColorSuccess
return $repoDir.FullName
@@ -270,7 +318,27 @@ function Extract-Repository {
throw "Could not find extracted repository directory"
}
} catch {
Write-ColorOutput "Extraction failed: $($_.Exception.Message)" $ColorError
Write-Host ""
Write-ColorOutput "ERROR: Extraction failed" $ColorError
Write-Host ""
Write-ColorOutput "Common causes:" $ColorInfo
Write-Host " • Downloaded file is corrupted or incomplete"
Write-Host " • ZIP file format is invalid"
Write-Host " • Insufficient disk space"
Write-Host " • Permission issues on temporary directory"
Write-Host ""
Write-ColorOutput "Troubleshooting steps:" $ColorInfo
Write-Host " 1. Try downloading again (network may have interrupted)"
Write-Host " 2. Check available disk space"
Write-Host " 3. Verify temporary directory permissions"
Write-Host " 4. Try running as administrator"
Write-Host ""
Write-ColorOutput "ZIP file: $ZipPath" $ColorInfo
if (Test-Path $ZipPath) {
Write-ColorOutput "ZIP size: $([math]::Round($zipSize/1024/1024, 2)) MB" $ColorInfo
}
Write-ColorOutput "Error details: $($_.Exception.Message)" $ColorError
Write-Host ""
return $null
}
}

@@ -74,7 +74,21 @@ function test_prerequisites() {
write_color "✓ Network connection OK" "$COLOR_SUCCESS"
else
write_color "ERROR: Cannot connect to GitHub" "$COLOR_ERROR"
write_color "Please check your network connection" "$COLOR_ERROR"
write_color "Please check your network connection and try again." "$COLOR_ERROR"
echo ""
write_color "Common causes:" "$COLOR_INFO"
echo " • Internet connection is down or unstable"
echo " • Firewall or proxy is blocking GitHub access"
echo " • DNS resolution issues"
echo " • GitHub is temporarily unavailable"
echo ""
write_color "Troubleshooting steps:" "$COLOR_INFO"
echo " 1. Check your internet connection"
echo " 2. Try accessing https://github.com in your browser"
echo " 3. If using a proxy, configure it properly"
echo " 4. Check firewall settings"
echo " 5. Wait a few minutes and try again"
echo ""
return 1
fi

@@ -108,7 +122,8 @@ function get_latest_release() {
fi
fi

write_color "WARNING: Failed to fetch latest release, using 'main' branch" "$COLOR_WARNING" >&2
write_color "WARNING: Failed to fetch latest release" "$COLOR_WARNING" >&2
write_color "Falling back to 'main' branch" "$COLOR_INFO" >&2
return 1
}

@@ -163,11 +178,34 @@ function download_repository() {
write_color "Type: $download_type" "$COLOR_INFO" >&2

# Download with curl
if curl -fsSL -o "$zip_path" "$zip_url" 2>&1 >&2; then
local download_error=""
if download_error=$(curl -fsSL -o "$zip_path" "$zip_url" 2>&1); then
# Verify the download
if [ -f "$zip_path" ]; then
local file_size
file_size=$(du -h "$zip_path" 2>/dev/null | cut -f1)

# Check if file is empty
if [ ! -s "$zip_path" ]; then
echo "" >&2
write_color "ERROR: Downloaded file is empty (0 bytes)" "$COLOR_ERROR" >&2
echo "" >&2
write_color "Common causes:" "$COLOR_INFO" >&2
echo " • Network connection was interrupted" >&2
echo " • Invalid version tag or branch name" >&2
echo " • GitHub API or server issues" >&2
echo "" >&2
write_color "Troubleshooting steps:" "$COLOR_INFO" >&2
echo " 1. Verify the version tag or branch name is correct" >&2
echo " 2. Wait a few minutes and try again" >&2
echo " 3. Try a different version (stable/latest)" >&2
echo " 4. Check GitHub status at https://www.githubstatus.com" >&2
echo "" >&2
write_color "Download URL: $zip_url" "$COLOR_INFO" >&2
echo "" >&2
return 1
fi

write_color "✓ Download complete ($file_size)" "$COLOR_SUCCESS" >&2

# Output path to stdout for capture
@@ -178,7 +216,29 @@ function download_repository() {
return 1
fi
else
echo "" >&2
write_color "ERROR: Download failed" "$COLOR_ERROR" >&2
echo "" >&2
write_color "Common causes:" "$COLOR_INFO" >&2
echo " • Network connection interrupted during download" >&2
echo " • GitHub API rate limit exceeded" >&2
echo " • Invalid version tag or branch name" >&2
echo " • Temporary GitHub service issues" >&2
echo " • Firewall or proxy blocking the download" >&2
echo "" >&2
write_color "Troubleshooting steps:" "$COLOR_INFO" >&2
echo " 1. Check your internet connection stability" >&2
echo " 2. Wait a few minutes and try again (rate limit resets)" >&2
echo " 3. Verify the version tag or branch name is correct" >&2
echo " 4. Try a different version (stable/latest)" >&2
echo " 5. Check GitHub status at https://www.githubstatus.com" >&2
echo " 6. If using a proxy, verify it's configured correctly" >&2
echo "" >&2
write_color "Download URL: $zip_url" "$COLOR_INFO" >&2
if [ -n "$download_error" ]; then
write_color "Error details: $download_error" "$COLOR_ERROR" >&2
fi
echo "" >&2
return 1
fi
}
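Since the error branch above already covers rate limits and flaky connections, transient failures could also be absorbed before that branch is reached by using curl's built-in retry and timeout flags. This is an editor-added sketch, not part of the committed script; the flag values are arbitrary examples:

```bash
# Editor's illustration: let curl retry transient failures itself before the
# error handling above has to run. Flag values are arbitrary examples.
download_error=$(curl -fsSL --connect-timeout 15 --retry 3 --retry-delay 2 \
    -o "$zip_path" "$zip_url" 2>&1)
```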
@@ -195,8 +255,27 @@ function extract_repository() {
return 1
fi

# Verify zip file is not empty
if [ ! -s "$zip_path" ]; then
echo "" >&2
write_color "ERROR: ZIP file is empty (0 bytes)" "$COLOR_ERROR" >&2
echo "" >&2
write_color "Common causes:" "$COLOR_INFO" >&2
echo " • Download was interrupted" >&2
echo " • Network connection issues during download" >&2
echo " • Server-side issues" >&2
echo "" >&2
write_color "Troubleshooting steps:" "$COLOR_INFO" >&2
echo " 1. Try downloading again" >&2
echo " 2. Check your network connection" >&2
echo " 3. Wait a few minutes and retry" >&2
echo "" >&2
return 1
fi

# Extract with unzip
if unzip -q "$zip_path" -d "$temp_dir" >&2 2>&1; then
local extract_error=""
if extract_error=$(unzip -q "$zip_path" -d "$temp_dir" 2>&1); then
# Find the extracted directory
local repo_dir
repo_dir=$(find "$temp_dir" -maxdepth 1 -type d -name "Claude-Code-Workflow-*" 2>/dev/null | head -n 1)
@@ -207,13 +286,39 @@ function extract_repository() {
echo "$repo_dir"
return 0
else
echo "" >&2
write_color "ERROR: Could not find extracted repository directory" "$COLOR_ERROR" >&2
write_color "Temp directory contents:" "$COLOR_INFO" >&2
ls -la "$temp_dir" >&2
echo "" >&2
return 1
fi
else
echo "" >&2
write_color "ERROR: Extraction failed" "$COLOR_ERROR" >&2
echo "" >&2
write_color "Common causes:" "$COLOR_INFO" >&2
echo " • Downloaded file is corrupted or incomplete" >&2
echo " • ZIP file format is invalid" >&2
echo " • Insufficient disk space" >&2
echo " • Permission issues on temporary directory" >&2
echo "" >&2
write_color "Troubleshooting steps:" "$COLOR_INFO" >&2
echo " 1. Try downloading again (network may have interrupted)" >&2
echo " 2. Check available disk space: df -h" >&2
echo " 3. Verify temporary directory permissions" >&2
echo " 4. Check if 'unzip' command is working: unzip -v" >&2
echo "" >&2
write_color "ZIP file: $zip_path" "$COLOR_INFO" >&2
if [ -f "$zip_path" ]; then
local zip_size
zip_size=$(du -h "$zip_path" 2>/dev/null | cut -f1)
write_color "ZIP size: $zip_size" "$COLOR_INFO" >&2
fi
if [ -n "$extract_error" ]; then
write_color "Error details: $extract_error" "$COLOR_ERROR" >&2
fi
echo "" >&2
return 1
fi
}
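One further check that would fit the troubleshooting flow above is testing the archive before extracting it, which separates a corrupted download from disk-space or permission problems. This is an editor-added sketch, not part of the committed script:

```bash
# Editor's illustration: "unzip -t" tests archive integrity without extracting.
if ! unzip -tqq "$zip_path" >/dev/null 2>&1; then
    write_color "ERROR: ZIP archive appears to be corrupted; re-download it" "$COLOR_ERROR" >&2
    return 1
fi
```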
skills.md: 598 changed lines
@@ -1,598 +0,0 @@
# Agent Skills

> Create, manage, and share Skills to extend Claude's capabilities in Claude Code.

This guide shows you how to create, use, and manage Agent Skills in Claude Code. Skills are modular capabilities that extend Claude's functionality through organized folders containing instructions, scripts, and resources.

## Prerequisites

* Claude Code version 1.0 or later
* Basic familiarity with [Claude Code](/en/docs/claude-code/quickstart)

## What are Agent Skills?

Agent Skills package expertise into discoverable capabilities. Each Skill consists of a `SKILL.md` file with instructions that Claude reads when relevant, plus optional supporting files like scripts and templates.

**How Skills are invoked**: Skills are **model-invoked**—Claude autonomously decides when to use them based on your request and the Skill's description. This is different from slash commands, which are **user-invoked** (you explicitly type `/command` to trigger them).

**Benefits**:

* Extend Claude's capabilities for your specific workflows
* Share expertise across your team via git
* Reduce repetitive prompting
* Compose multiple Skills for complex tasks

Learn more in the [Agent Skills overview](/en/docs/agents-and-tools/agent-skills/overview).

<Note>
For a deep dive into the architecture and real-world applications of Agent Skills, read our engineering blog: [Equipping agents for the real world with Agent Skills](https://www.anthropic.com/engineering/equipping-agents-for-the-real-world-with-agent-skills).
</Note>

## Create a Skill

Skills are stored as directories containing a `SKILL.md` file.

### Personal Skills

Personal Skills are available across all your projects. Store them in `~/.claude/skills/`:

```bash theme={null}
mkdir -p ~/.claude/skills/my-skill-name
```

**Use personal Skills for**:

* Your individual workflows and preferences
* Experimental Skills you're developing
* Personal productivity tools

### Project Skills

Project Skills are shared with your team. Store them in `.claude/skills/` within your project:

```bash theme={null}
mkdir -p .claude/skills/my-skill-name
```

**Use project Skills for**:

* Team workflows and conventions
* Project-specific expertise
* Shared utilities and scripts

Project Skills are checked into git and automatically available to team members.

### Plugin Skills

Skills can also come from [Claude Code plugins](/en/docs/claude-code/plugins). Plugins may bundle Skills that are automatically available when the plugin is installed. These Skills work the same way as personal and project Skills.

## Write SKILL.md

Create a `SKILL.md` file with YAML frontmatter and Markdown content:

```yaml theme={null}
---
name: Your Skill Name
description: Brief description of what this Skill does and when to use it
---

# Your Skill Name

## Instructions
Provide clear, step-by-step guidance for Claude.

## Examples
Show concrete examples of using this Skill.
```

The `description` field is critical for Claude to discover when to use your Skill. It should include both what the Skill does and when Claude should use it.

See the [best practices guide](/en/docs/agents-and-tools/agent-skills/best-practices) for complete authoring guidance.

## Add supporting files

Create additional files alongside SKILL.md:

```
my-skill/
├── SKILL.md (required)
├── reference.md (optional documentation)
├── examples.md (optional examples)
├── scripts/
│   └── helper.py (optional utility)
└── templates/
    └── template.txt (optional template)
```

Reference these files from SKILL.md:

````markdown theme={null}
For advanced usage, see [reference.md](reference.md).

Run the helper script:
```bash
python scripts/helper.py input.txt
```
````

Claude reads these files only when needed, using progressive disclosure to manage context efficiently.

## Restrict tool access with allowed-tools

Use the `allowed-tools` frontmatter field to limit which tools Claude can use when a Skill is active:

```yaml theme={null}
---
name: Safe File Reader
description: Read files without making changes. Use when you need read-only file access.
allowed-tools: Read, Grep, Glob
---

# Safe File Reader

This Skill provides read-only file access.

## Instructions
1. Use Read to view file contents
2. Use Grep to search within files
3. Use Glob to find files by pattern
```

When this Skill is active, Claude can only use the specified tools (Read, Grep, Glob) without needing to ask for permission. This is useful for:

* Read-only Skills that shouldn't modify files
* Skills with limited scope (e.g., only data analysis, no file writing)
* Security-sensitive workflows where you want to restrict capabilities

If `allowed-tools` is not specified, Claude will ask for permission to use tools as normal, following the standard permission model.

<Note>
`allowed-tools` is only supported for Skills in Claude Code.
</Note>

## View available Skills

Skills are automatically discovered by Claude from three sources:

* Personal Skills: `~/.claude/skills/`
* Project Skills: `.claude/skills/`
* Plugin Skills: bundled with installed plugins

**To view all available Skills**, ask Claude directly:

```
What Skills are available?
```

or

```
List all available Skills
```

This will show all Skills from all sources, including plugin Skills.

**To inspect a specific Skill**, you can also check the filesystem:

```bash theme={null}
# List personal Skills
ls ~/.claude/skills/

# List project Skills (if in a project directory)
ls .claude/skills/

# View a specific Skill's content
cat ~/.claude/skills/my-skill/SKILL.md
```

## Test a Skill

After creating a Skill, test it by asking questions that match your description.

**Example**: If your description mentions "PDF files":

```
Can you help me extract text from this PDF?
```

Claude autonomously decides to use your Skill if it matches the request—you don't need to explicitly invoke it. The Skill activates automatically based on the context of your question.

## Debug a Skill

If Claude doesn't use your Skill, check these common issues:

### Make description specific

**Too vague**:

```yaml theme={null}
description: Helps with documents
```

**Specific**:

```yaml theme={null}
description: Extract text and tables from PDF files, fill forms, merge documents. Use when working with PDF files or when the user mentions PDFs, forms, or document extraction.
```

Include both what the Skill does and when to use it in the description.

### Verify file path

**Personal Skills**: `~/.claude/skills/skill-name/SKILL.md`
**Project Skills**: `.claude/skills/skill-name/SKILL.md`

Check the file exists:

```bash theme={null}
# Personal
ls ~/.claude/skills/my-skill/SKILL.md

# Project
ls .claude/skills/my-skill/SKILL.md
```

### Check YAML syntax

Invalid YAML prevents the Skill from loading. Verify the frontmatter:

```bash theme={null}
cat SKILL.md | head -n 10
```

Ensure:

* Opening `---` on line 1
* Closing `---` before Markdown content
* Valid YAML syntax (no tabs, correct indentation)
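
As a quick supplement to the checks above (an editor-added sketch, not part of the original skills.md), the two delimiter rules can be verified from the shell; the path is a hypothetical example:

```bash
# Editor's illustration: check that the frontmatter opens on line 1 and is
# closed again before the Markdown body. The path is a hypothetical example.
skill_file=~/.claude/skills/my-skill/SKILL.md
[ "$(sed -n '1p' "$skill_file")" = "---" ] || echo "Missing opening --- on line 1"
tail -n +2 "$skill_file" | grep -q '^---$' || echo "Missing closing --- before the body"
```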

### View errors

Run Claude Code with debug mode to see Skill loading errors:

```bash theme={null}
claude --debug
```

## Share Skills with your team

**Recommended approach**: Distribute Skills through [plugins](/en/docs/claude-code/plugins).

To share Skills via plugin:

1. Create a plugin with Skills in the `skills/` directory
2. Add the plugin to a marketplace
3. Team members install the plugin

For complete instructions, see [Add Skills to your plugin](/en/docs/claude-code/plugins#add-skills-to-your-plugin).

You can also share Skills directly through project repositories:

### Step 1: Add Skill to your project

Create a project Skill:

```bash theme={null}
mkdir -p .claude/skills/team-skill
# Create SKILL.md
```

### Step 2: Commit to git

```bash theme={null}
git add .claude/skills/
git commit -m "Add team Skill for PDF processing"
git push
```

### Step 3: Team members get Skills automatically

When team members pull the latest changes, Skills are immediately available:

```bash theme={null}
git pull
claude # Skills are now available
```

## Update a Skill

Edit SKILL.md directly:

```bash theme={null}
# Personal Skill
code ~/.claude/skills/my-skill/SKILL.md

# Project Skill
code .claude/skills/my-skill/SKILL.md
```

Changes take effect the next time you start Claude Code. If Claude Code is already running, restart it to load the updates.

## Remove a Skill

Delete the Skill directory:

```bash theme={null}
# Personal
rm -rf ~/.claude/skills/my-skill

# Project
rm -rf .claude/skills/my-skill
git commit -m "Remove unused Skill"
```

## Best practices

### Keep Skills focused

One Skill should address one capability:

**Focused**:

* "PDF form filling"
* "Excel data analysis"
* "Git commit messages"

**Too broad**:

* "Document processing" (split into separate Skills)
* "Data tools" (split by data type or operation)

### Write clear descriptions

Help Claude discover when to use Skills by including specific triggers in your description:

**Clear**:

```yaml theme={null}
description: Analyze Excel spreadsheets, create pivot tables, and generate charts. Use when working with Excel files, spreadsheets, or analyzing tabular data in .xlsx format.
```

**Vague**:

```yaml theme={null}
description: For files
```

### Test with your team

Have teammates use Skills and provide feedback:

* Does the Skill activate when expected?
* Are the instructions clear?
* Are there missing examples or edge cases?

### Document Skill versions

You can document Skill versions in your SKILL.md content to track changes over time. Add a version history section:

```markdown theme={null}
# My Skill

## Version History
- v2.0.0 (2025-10-01): Breaking changes to API
- v1.1.0 (2025-09-15): Added new features
- v1.0.0 (2025-09-01): Initial release
```

This helps team members understand what changed between versions.

## Troubleshooting

### Claude doesn't use my Skill

**Symptom**: You ask a relevant question but Claude doesn't use your Skill.

**Check**: Is the description specific enough?

Vague descriptions make discovery difficult. Include both what the Skill does and when to use it, with key terms users would mention.

**Too generic**:

```yaml theme={null}
description: Helps with data
```

**Specific**:

```yaml theme={null}
description: Analyze Excel spreadsheets, generate pivot tables, create charts. Use when working with Excel files, spreadsheets, or .xlsx files.
```

**Check**: Is the YAML valid?

Run validation to check for syntax errors:

```bash theme={null}
# View frontmatter
cat .claude/skills/my-skill/SKILL.md | head -n 15

# Check for common issues
# - Missing opening or closing ---
# - Tabs instead of spaces
# - Unquoted strings with special characters
```

**Check**: Is the Skill in the correct location?

```bash theme={null}
# Personal Skills
ls ~/.claude/skills/*/SKILL.md

# Project Skills
ls .claude/skills/*/SKILL.md
```

### Skill has errors

**Symptom**: The Skill loads but doesn't work correctly.

**Check**: Are dependencies available?

Claude will automatically install required dependencies (or ask for permission to install them) when it needs them.

**Check**: Do scripts have execute permissions?

```bash theme={null}
chmod +x .claude/skills/my-skill/scripts/*.py
```

**Check**: Are file paths correct?

Use forward slashes (Unix style) in all paths:

**Correct**: `scripts/helper.py`
**Wrong**: `scripts\helper.py` (Windows style)

### Multiple Skills conflict

**Symptom**: Claude uses the wrong Skill or seems confused between similar Skills.

**Be specific in descriptions**: Help Claude choose the right Skill by using distinct trigger terms in your descriptions.

Instead of:

```yaml theme={null}
# Skill 1
description: For data analysis

# Skill 2
description: For analyzing data
```

Use:

```yaml theme={null}
# Skill 1
description: Analyze sales data in Excel files and CRM exports. Use for sales reports, pipeline analysis, and revenue tracking.

# Skill 2
description: Analyze log files and system metrics data. Use for performance monitoring, debugging, and system diagnostics.
```

## Examples

### Simple Skill (single file)

```
commit-helper/
└── SKILL.md
```

```yaml theme={null}
---
name: Generating Commit Messages
description: Generates clear commit messages from git diffs. Use when writing commit messages or reviewing staged changes.
---

# Generating Commit Messages

## Instructions

1. Run `git diff --staged` to see changes
2. I'll suggest a commit message with:
   - Summary under 50 characters
   - Detailed description
   - Affected components

## Best practices

- Use present tense
- Explain what and why, not how
```

### Skill with tool permissions

```
code-reviewer/
└── SKILL.md
```

```yaml theme={null}
---
name: Code Reviewer
description: Review code for best practices and potential issues. Use when reviewing code, checking PRs, or analyzing code quality.
allowed-tools: Read, Grep, Glob
---

# Code Reviewer

## Review checklist

1. Code organization and structure
2. Error handling
3. Performance considerations
4. Security concerns
5. Test coverage

## Instructions

1. Read the target files using Read tool
2. Search for patterns using Grep
3. Find related files using Glob
4. Provide detailed feedback on code quality
```

### Multi-file Skill

```
pdf-processing/
├── SKILL.md
├── FORMS.md
├── REFERENCE.md
└── scripts/
    ├── fill_form.py
    └── validate.py
```

**SKILL.md**:

````yaml theme={null}
---
name: PDF Processing
description: Extract text, fill forms, merge PDFs. Use when working with PDF files, forms, or document extraction. Requires pypdf and pdfplumber packages.
---

# PDF Processing

## Quick start

Extract text:
```python
import pdfplumber
with pdfplumber.open("doc.pdf") as pdf:
    text = pdf.pages[0].extract_text()
```

For form filling, see [FORMS.md](FORMS.md).
For detailed API reference, see [REFERENCE.md](REFERENCE.md).

## Requirements

Packages must be installed in your environment:
```bash
pip install pypdf pdfplumber
```
````

<Note>
List required packages in the description. Packages must be installed in your environment before Claude can use them.
</Note>

Claude loads additional files only when needed.

## Next steps

<CardGroup cols={2}>
  <Card title="Authoring best practices" icon="lightbulb" href="/en/docs/agents-and-tools/agent-skills/best-practices">
    Write Skills that Claude can use effectively
  </Card>

  <Card title="Agent Skills overview" icon="book" href="/en/docs/agents-and-tools/agent-skills/overview">
    Learn how Skills work across Claude products
  </Card>

  <Card title="Get started with Agent Skills" icon="rocket" href="/en/docs/agents-and-tools/agent-skills/quickstart">
    Create your first Skill
  </Card>
</CardGroup>