Refactor workflow session commands to use the ccw CLI for session management

- Updated session completion process to use `ccw session` commands for listing, reading, and updating session statuses.
- Enhanced session listing command to provide metadata and statistics using `ccw session list` and `ccw session stats`.
- Streamlined session resume functionality with `ccw session` commands for checking status and updating session state.
- Improved session creation process by replacing manual directory and metadata handling with `ccw session init`.
- Introduced `session_manager` tool for simplified session operations, including listing, updating, and archiving sessions.
- Updated conflict resolution tools to generate structured output in `conflict-resolution.json` instead of markdown files.
- Enhanced TDD and UI design tools to utilize `ccw cli exec` for executing analysis commands, improving integration with the workflow.
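The `ccw session` surface described in the bullets above can be sketched as follows. The subcommand names are taken from this commit's diffs, but the `ccw` function here is a local stub standing in for the real binary, so the snippet only illustrates the intended call shapes:

```shell
#!/bin/sh
# Stub standing in for the real `ccw` binary (subcommands inferred from this commit)
ccw() { echo "ccw $*"; }

ccw session init WFS-demo-001                  # replaces manual directory/metadata setup
ccw session list                               # listing with metadata
ccw session stats                              # aggregate statistics
ccw session read WFS-demo-001 --type session   # read workflow-session.json
```

Each call simply echoes back its arguments here; with the real CLI the same invocations would perform the session operations the commit message describes.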
Author: catlog22
Commit: 30e9ae0153 (parent 25ac862f46)
Date: 2025-12-13 10:45:03 +08:00
32 changed files with 758 additions and 625 deletions


@@ -112,7 +112,7 @@
 {
   "name": "tech-research",
   "command": "/memory:tech-research",
-  "description": "3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)",
+  "description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
   "arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
   "category": "memory",
   "subcategory": null,


@@ -133,7 +133,7 @@
 {
   "name": "tech-research",
   "command": "/memory:tech-research",
-  "description": "3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)",
+  "description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
   "arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
   "category": "memory",
   "subcategory": null,


@@ -36,7 +36,7 @@
 {
   "name": "tech-research",
   "command": "/memory:tech-research",
-  "description": "3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)",
+  "description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
   "arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
   "category": "memory",
   "subcategory": null,


@@ -409,14 +409,14 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
 // Pattern: Gemini CLI deep analysis
 {
   "step": "gemini_analyze_[aspect]",
-  "command": "bash(cd [path] && gemini -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY')",
+  "command": "ccw cli exec 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY' --tool gemini --cd [path]",
   "output_to": "analysis_result"
 },
 // Pattern: Qwen CLI analysis (fallback/alternative)
 {
   "step": "qwen_analyze_[aspect]",
-  "command": "bash(cd [path] && qwen -p '[similar to gemini pattern]')",
+  "command": "ccw cli exec '[similar to gemini pattern]' --tool qwen --cd [path]",
   "output_to": "analysis_result"
 },

@@ -457,7 +457,7 @@ The examples above demonstrate **patterns**, not fixed requirements. Agent MUST:
 4. **Command Composition Patterns**:
    - **Single command**: `bash([simple_search])`
    - **Multiple commands**: `["bash([cmd1])", "bash([cmd2])"]`
-   - **CLI analysis**: `bash(cd [path] && gemini -p '[prompt]')`
+   - **CLI analysis**: `ccw cli exec '[prompt]' --tool gemini --cd [path]`
    - **MCP integration**: `mcp__[tool]__[function]([params])`

 **Key Principle**: Examples show **structure patterns**, not specific implementations. Agent must create task-appropriate steps dynamically.

@@ -481,9 +481,9 @@ The `implementation_approach` supports **two execution modes** based on the pres
 - **Use for**: Large-scale features, complex refactoring, or when user explicitly requests CLI tool usage
 - **Required fields**: Same as default mode **PLUS** `command`
 - **Command patterns**:
-  - `bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)`
-  - `bash(codex --full-auto exec '[task]' resume --last --skip-git-repo-check -s danger-full-access)` (multi-step)
-  - `bash(cd [path] && gemini -p '[prompt]' --approval-mode yolo)` (write mode)
+  - `ccw cli exec '[prompt]' --tool codex --mode auto --cd [path]`
+  - `ccw cli exec '[task]' --tool codex --mode auto` (multi-step with context)
+  - `ccw cli exec '[prompt]' --tool gemini --mode write --cd [path]` (write mode)

 **Semantic CLI Tool Selection**:

@@ -500,12 +500,12 @@ Agent determines CLI tool usage per-step based on user semantics and task nature
 **Task-Based Selection** (when no explicit user preference):
 - **Implementation/coding**: Codex preferred for autonomous development
 - **Analysis/exploration**: Gemini preferred for large context analysis
-- **Documentation**: Gemini/Qwen with write mode (`--approval-mode yolo`)
+- **Documentation**: Gemini/Qwen with write mode (`--mode write`)
 - **Testing**: Depends on complexity - simple=agent, complex=Codex

 **Default Behavior**: Agent always executes the workflow. CLI commands are embedded in `implementation_approach` steps:
 - Agent orchestrates task execution
-- When step has `command` field, agent executes it via Bash
+- When step has `command` field, agent executes it via CCW CLI
 - When step has no `command` field, agent implements directly
 - This maintains agent control while leveraging CLI tool power

@@ -559,7 +559,7 @@ Agent determines CLI tool usage per-step based on user semantics and task nature
 "step": 3,
 "title": "Execute implementation using CLI tool",
 "description": "Use Codex/Gemini for complex autonomous execution",
-"command": "bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)",
+"command": "ccw cli exec '[prompt]' --tool codex --mode auto --cd [path]",
 "modification_points": ["[Same as default mode]"],
 "logic_flow": ["[Same as default mode]"],
 "depends_on": [1, 2],
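The dispatch rule in the diff above (a step with a `command` field is executed via the CLI, a step without one is implemented directly by the agent) can be sketched as a small shell function. The `ccw` function is a local stub, so the example is illustrative, not the real CLI:

```shell
#!/bin/sh
# Stub for the real `ccw` binary
ccw() { echo "[ccw] $*"; }

# Execute a step: run its command via the CLI when present, else fall back to the agent
run_step() {
  if [ -n "$1" ]; then
    eval "$1"
  else
    echo "[agent] implement directly"
  fi
}

run_step "ccw cli exec 'PURPOSE: demo' --tool codex --mode auto"
# prints: [ccw] cli exec PURPOSE: demo --tool codex --mode auto
run_step ""
# prints: [agent] implement directly
```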


@@ -100,7 +100,7 @@ CONTEXT: @**/*
 # Specific patterns
 CONTEXT: @CLAUDE.md @src/**/* @*.ts
-# Cross-directory (requires --include-directories)
+# Cross-directory (requires --includeDirs)
 CONTEXT: @**/* @../shared/**/* @../types/**/*
 ```

@@ -144,43 +144,40 @@ discuss → multi (gemini + codex parallel)
 - Codex: `gpt-5` (default), `gpt5-codex` (large context)
 - **Position**: `-m` after prompt, before flags

-### Command Templates
+### Command Templates (CCW Unified CLI)

 **Gemini/Qwen (Analysis)**:
 ```bash
-cd {dir} && gemini -p "
+ccw cli exec "
 PURPOSE: {goal}
 TASK: {task}
 MODE: analysis
 CONTEXT: @**/*
 EXPECTED: {output}
 RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)
-" -m gemini-2.5-pro
-# Qwen fallback: Replace 'gemini' with 'qwen'
+" --tool gemini --cd {dir}
+# Qwen fallback: Replace '--tool gemini' with '--tool qwen'
 ```

 **Gemini/Qwen (Write)**:
 ```bash
-cd {dir} && gemini -p "..." --approval-mode yolo
+ccw cli exec "..." --tool gemini --mode write --cd {dir}
 ```

 **Codex (Auto)**:
 ```bash
-codex -C {dir} --full-auto exec "..." --skip-git-repo-check -s danger-full-access
-# Resume: Add 'resume --last' after prompt
-codex --full-auto exec "..." resume --last --skip-git-repo-check -s danger-full-access
+ccw cli exec "..." --tool codex --mode auto --cd {dir}
 ```

 **Cross-Directory** (Gemini/Qwen):
 ```bash
-cd src/auth && gemini -p "CONTEXT: @**/* @../shared/**/*" --include-directories ../shared
+ccw cli exec "CONTEXT: @**/* @../shared/**/*" --tool gemini --cd src/auth --includeDirs ../shared
 ```

 **Directory Scope**:
 - `@` only references current directory + subdirectories
-- External dirs: MUST use `--include-directories` + explicit CONTEXT reference
+- External dirs: MUST use `--includeDirs` + explicit CONTEXT reference

 **Timeout**: Simple 20min | Medium 40min | Complex 60min (Codex ×1.5)
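The timeout rule above (Simple 20 min, Medium 40 min, Complex 60 min, with a ×1.5 multiplier for Codex) works out to the following; the helper function name is ours, only the numbers come from the source:

```shell
#!/bin/sh
# Timeout in minutes per complexity tier; Codex gets a 1.5x multiplier (integer math)
timeout_for() {
  case "$1" in
    simple)  base=20 ;;
    medium)  base=40 ;;
    complex) base=60 ;;
  esac
  [ "$2" = "codex" ] && base=$(( base * 3 / 2 ))
  echo "$base"
}

timeout_for medium gemini    # 40
timeout_for complex codex    # 90
```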


@@ -78,14 +78,14 @@ rg "^import .* from " -n | head -30

 ### Gemini Semantic Analysis (deep-scan, dependency-map)
 ```bash
-cd {dir} && gemini -p "
+ccw cli exec "
 PURPOSE: {from prompt}
 TASK: {from prompt}
 MODE: analysis
 CONTEXT: @**/*
 EXPECTED: {from prompt}
 RULES: {from prompt, if template specified} | analysis=READ-ONLY
-"
+" --tool gemini --cd {dir}
 ```

 **Fallback Chain**: Gemini → Qwen → Codex → Bash-only
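The fallback chain (Gemini → Qwen → Codex → Bash-only) amounts to picking the first available tool in order. A minimal sketch, with a hypothetical `select_tool` helper that takes the set of available tools as an argument so it stays deterministic:

```shell
#!/bin/sh
# Hypothetical helper: pick the first tool of the chain present in $1 (space-separated)
select_tool() {
  available=$1
  for t in gemini qwen codex; do
    case " $available " in
      *" $t "*) echo "$t"; return ;;
    esac
  done
  echo "bash-only"
}

select_tool "qwen codex"   # qwen (gemini unavailable, qwen is next)
select_tool ""             # bash-only (no CLI tool available)
```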


@@ -97,7 +97,7 @@ Phase 3: planObject Generation
 ## CLI Command Template
 ```bash
-cd {project_root} && {cli_tool} -p "
+ccw cli exec "
 PURPOSE: Generate implementation plan for {complexity} task
 TASK:
 • Analyze: {task_description}

@@ -134,7 +134,7 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/02-breakdown-tas
 - Acceptance must be quantified (counts, method names, metrics)
 - Dependencies use task IDs (T1, T2)
 - analysis=READ-ONLY
-"
+" --tool {cli_tool} --cd {project_root}
 ```

 ## Core Functions
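The PURPOSE/TASK/MODE prompt blocks used in these templates are multi-line strings passed as a single CLI argument. A sketch of that assembly with a here-document; `ccw` is a local stub (it just prints the first line of the prompt it received) since the real binary is not assumed here:

```shell
#!/bin/sh
# Stub: drop "cli exec", then report the first line of the prompt argument
ccw() { shift 2; printf '%s\n' "$1" | head -n 1; }

prompt=$(cat <<'EOF'
PURPOSE: Generate implementation plan for medium task
TASK: Analyze and break down the work
MODE: analysis
EOF
)

ccw cli exec "$prompt" --tool gemini --cd .
# prints: PURPOSE: Generate implementation plan for medium task
```

Quoting the here-document delimiter (`'EOF'`) keeps `$(...)` and `$vars` inside the prompt literal until the CLI expands them, which matches how the templates embed `$(cat ...)` RULES lines.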


@@ -107,7 +107,7 @@ Phase 3: Task JSON Generation
 **Template-Based Command Construction with Test Layer Awareness**:
 ```bash
-cd {project_root} && {cli_tool} -p "
+ccw cli exec "
 PURPOSE: Analyze {test_type} test failures and generate fix strategy for iteration {iteration}
 TASK:
 • Review {failed_tests.length} {test_type} test failures: [{test_names}]

@@ -134,7 +134,7 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/{template}) |
 - Consider previous iteration failures
 - Validate fix doesn't introduce new vulnerabilities
 - analysis=READ-ONLY
-" {timeout_flag}
+" --tool {cli_tool} --cd {project_root} --timeout {timeout_value}
 ```

 **Layer-Specific Guidance Injection**:

@@ -527,9 +527,9 @@ See: `.process/iteration-{iteration}-cli-output.txt`
 1. **Detect test_type**: "integration" → Apply integration-specific diagnosis
 2. **Execute CLI**:
    ```bash
-   gemini -p "PURPOSE: Analyze integration test failure...
+   ccw cli exec "PURPOSE: Analyze integration test failure...
    TASK: Examine component interactions, data flow, interface contracts...
-   RULES: Analyze full call stack and data flow across components"
+   RULES: Analyze full call stack and data flow across components" --tool gemini
    ```
 3. **Parse Output**: Extract the RCA, fix-suggestion, and validation-suggestion sections
 4. **Generate Task JSON** (IMPL-fix-1.json):


@@ -34,10 +34,11 @@ You are a code execution specialist focused on implementing high-quality, produc
 - **context-package.json** (when available in workflow tasks)

 **Context Package**:
-`context-package.json` provides artifact paths - extract dynamically using `jq`:
+`context-package.json` provides artifact paths - read using `ccw session`:
 ```bash
-# Get role analysis paths from context package
-jq -r '.brainstorm_artifacts.role_analyses[].files[].path' context-package.json
+# Get context package content from session
+ccw session read ${SESSION_ID} --type context
+# Returns parsed JSON with brainstorm_artifacts, focus_paths, etc.
 ```

 **Pre-Analysis: Smart Tech Stack Loading**:

@@ -121,9 +122,9 @@ When task JSON contains `flow_control.implementation_approach` array:
 - If `command` field present, execute it; otherwise use agent capabilities

 **CLI Command Execution (CLI Execute Mode)**:
-When step contains `command` field with Codex CLI, execute via Bash tool. For Codex resume:
-- First task (`depends_on: []`): `codex -C [path] --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
-- Subsequent tasks (has `depends_on`): Add `resume --last` flag to maintain session context
+When step contains `command` field with Codex CLI, execute via CCW CLI. For Codex resume:
+- First task (`depends_on: []`): `ccw cli exec "..." --tool codex --mode auto --cd [path]`
+- Subsequent tasks (has `depends_on`): Use CCW CLI with resume context to maintain session

 **Test-Driven Development**:
 - Write tests first (red → green → refactor)
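The `ccw session read --type context` call above returns the parsed context package as JSON. A stubbed sketch of the expected shape; the payload below is invented for illustration and only the top-level keys (`brainstorm_artifacts`, `focus_paths`) are taken from the source:

```shell
#!/bin/sh
# Stub: the real command would return the session's parsed context package
ccw() { printf '%s\n' '{"brainstorm_artifacts":{"role_analyses":[]},"focus_paths":["src/"]}'; }

ccw session read WFS-demo-001 --type context
# prints: {"brainstorm_artifacts":{"role_analyses":[]},"focus_paths":["src/"]}
```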


@@ -61,9 +61,9 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execut
 **Step 2** (CLI execution):
 - Agent substitutes [target_folders] into command
-- Agent executes CLI command via Bash tool:
+- Agent executes CLI command via CCW:
   ```bash
-  bash(cd src/modules && gemini --approval-mode yolo -p "
+  ccw cli exec "
   PURPOSE: Generate module documentation
   TASK: Create API.md and README.md for each module
   MODE: write

@@ -71,7 +71,7 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execut
   ./src/modules/api|code|code:3|dirs:0
   EXPECTED: Documentation files in .workflow/docs/my_project/src/modules/
   RULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt) | Mirror source structure
-  ")
+  " --tool gemini --mode write --cd src/modules
   ```

 4. **CLI Execution** (Gemini CLI):

@@ -216,7 +216,7 @@ Before completion, verify:
 {
   "step": "analyze_module_structure",
   "action": "Deep analysis of module structure and API",
-  "command": "bash(cd src/auth && gemini \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\")",
+  "command": "ccw cli exec \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\" --tool gemini --cd src/auth",
   "output_to": "module_analysis",
   "on_error": "fail"
 }


@@ -191,10 +191,10 @@ Large Projects (single dir >10 docs):
 ```bash
 # 1. Get top-level directories from doc-planning-data.json
-bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq -r '.top_level_dirs[]')
+ccw session read WFS-docs-{timestamp} --type process --filename doc-planning-data.json --raw | jq -r '.top_level_dirs[]'

 # 2. Get mode from workflow-session.json
-bash(cat .workflow/active/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')
+ccw session read WFS-docs-{timestamp} --type session --raw | jq -r '.mode // "full"'

 # 3. Check for HTTP API
 bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo "NO_API")

@@ -223,7 +223,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
 **Task ID Calculation**:
 ```bash
-group_count=$(jq '.groups.count' .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json)
+group_count=$(ccw session read WFS-docs-{timestamp} --type process --filename doc-planning-data.json --raw | jq '.groups.count')
 readme_id=$((group_count + 1))  # Next ID after groups
 arch_id=$((group_count + 2))
 api_id=$((group_count + 3))

@@ -236,12 +236,12 @@ api_id=$((group_count + 3))
 | Mode | cli_execute | Placement | CLI MODE | Approval Flag | Agent Role |
 |------|-------------|-----------|----------|---------------|------------|
 | **Agent** | false | pre_analysis | analysis | (none) | Generate docs in implementation_approach |
-| **CLI** | true | implementation_approach | write | --approval-mode yolo | Execute CLI commands, validate output |
+| **CLI** | true | implementation_approach | write | --mode write | Execute CLI commands, validate output |

 **Command Patterns**:
-- Gemini/Qwen: `cd dir && gemini -p "..."`
-- CLI Mode: `cd dir && gemini --approval-mode yolo -p "..."`
-- Codex: `codex -C dir --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
+- Gemini/Qwen: `ccw cli exec "..." --tool gemini --cd dir`
+- CLI Mode: `ccw cli exec "..." --tool gemini --mode write --cd dir`
+- Codex: `ccw cli exec "..." --tool codex --mode auto --cd dir`

 **Generation Process**:
 1. Read configuration values (tool, cli_execute, mode) from workflow-session.json

@@ -286,8 +286,8 @@ api_id=$((group_count + 3))
 "step": "load_precomputed_data",
 "action": "Load Phase 2 analysis and extract group directories",
 "commands": [
-  "bash(cat ${session_dir}/.process/doc-planning-data.json)",
-  "bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/doc-planning-data.json)"
+  "ccw session read ${session_id} --type process --filename doc-planning-data.json",
+  "ccw session read ${session_id} --type process --filename doc-planning-data.json --raw | jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories'"
 ],
 "output_to": "phase2_context",
 "note": "Single JSON file contains all Phase 2 analysis results"

@@ -332,7 +332,7 @@ api_id=$((group_count + 3))
 {
 "step": 2,
 "title": "Batch generate documentation via CLI",
-"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/doc-planning-data.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
+"command": "ccw cli exec 'PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure' --tool gemini --mode write --cd ${dirs_from_group}",
 "depends_on": [1],
 "output": "generated_docs"
 }

@@ -602,7 +602,7 @@ api_id=$((group_count + 3))
 | Mode | CLI Placement | CLI MODE | Approval Flag | Agent Role |
 |------|---------------|----------|---------------|------------|
 | **Agent (default)** | pre_analysis | analysis | (none) | Generates documentation content |
-| **CLI (--cli-execute)** | implementation_approach | write | --approval-mode yolo | Executes CLI commands, validates output |
+| **CLI (--cli-execute)** | implementation_approach | write | --mode write | Executes CLI commands, validates output |

 **Execution Flow**:
 - **Phase 2**: Unified analysis once, results in `.process/`
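The task-ID calculation in the hunks above assigns README, architecture, and API docs the IDs that follow the group tasks. Worked through with a fixed `group_count` (the real value would come from `.groups.count` in doc-planning-data.json):

```shell
#!/bin/sh
# group_count would be parsed from doc-planning-data.json; fixed here for illustration
group_count=4
readme_id=$((group_count + 1))   # next ID after the group tasks
arch_id=$((group_count + 2))
api_id=$((group_count + 3))

echo "$readme_id $arch_id $api_id"   # 5 6 7
```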


@@ -5,7 +5,7 @@ argument-hint: "[--tool gemini|qwen] \"task context description\""
 allowed-tools: Task(*), Bash(*)
 examples:
   - /memory:load "Develop user authentication on top of the current frontend"
-  - /memory:load --tool qwen -p "Refactor the payment module API"
+  - /memory:load --tool qwen "Refactor the payment module API"
 ---

 # Memory Load Command (/memory:load)

@@ -136,7 +136,7 @@ Task(
 Execute Gemini/Qwen CLI for deep analysis (saves main thread tokens):
 \`\`\`bash
-cd . && ${tool} -p "
+ccw cli exec "
 PURPOSE: Extract project core context for task: ${task_description}
 TASK: Analyze project architecture, tech stack, key patterns, relevant files
 MODE: analysis

@@ -147,7 +147,7 @@ RULES:
 - Identify key architecture patterns and technical constraints
 - Extract integration points and development standards
 - Output concise, structured format
-"
+" --tool ${tool}
 \`\`\`

 ### Step 4: Generate Core Content Package

@@ -212,7 +212,7 @@ Before returning:
 ### Example 2: Using Qwen Tool
 ```bash
-/memory:load --tool qwen -p "Refactor the payment module API"
+/memory:load --tool qwen "Refactor the payment module API"
 ```

 Agent uses Qwen CLI for analysis, returns same structured package.


@@ -1,477 +1,314 @@
--- ---
name: tech-research name: tech-research
description: 3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists) description: "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)"
argument-hint: "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]" argument-hint: "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*) allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
--- ---
# Tech Stack Research SKILL Generator # Tech Stack Rules Generator
## Overview ## Overview
**Pure Orchestrator with Agent Delegation**: Prepares context paths and delegates ALL work to agent. Agent produces files directly. **Purpose**: Generate multi-layered, path-conditional rules that Claude Code automatically loads based on file context.
**Auto-Continue Workflow**: Runs fully autonomously once triggered. Each phase completes and automatically triggers the next phase. **Key Difference from SKILL Memory**:
- **SKILL**: Manual loading via `Skill(command: "tech-name")`
- **Rules**: Automatic loading when working with matching file paths
**Execution Paths**: **Output Structure**:
- **Full Path**: All 3 phases (no existing SKILL OR `--regenerate` specified) ```
- **Skip Path**: Phase 1 → Phase 3 (existing SKILL found AND no `--regenerate` flag) .claude/rules/tech/{tech-stack}/
- **Phase 3 Always Executes**: SKILL index is always generated or updated ├── core.md # paths: **/*.{ext} - Core principles
├── patterns.md # paths: src/**/*.{ext} - Implementation patterns
├── testing.md # paths: **/*.{test,spec}.{ext} - Testing rules
├── config.md # paths: *.config.* - Configuration rules
├── api.md # paths: **/api/**/* - API rules (backend only)
├── components.md # paths: **/components/**/* - Component rules (frontend only)
└── metadata.json # Generation metadata
```
**Agent Responsibility**: **Templates Location**: `~/.claude/workflows/cli-templates/prompts/rules/`
- Agent does ALL the work: context reading, Exa research, content synthesis, file writing
- Orchestrator only provides context paths and waits for completion ---
## Core Rules ## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution 1. **Start Immediately**: First action is TodoWrite initialization
2. **Context Path Delegation**: Pass session directory or tech stack name to agent, let agent do discovery 2. **Path-Conditional Output**: Every rule file includes `paths` frontmatter
3. **Agent Produces Files**: Agent directly writes all module files, orchestrator does NOT parse agent output 3. **Template-Driven**: Agent reads templates before generating content
4. **Auto-Continue**: After completing each phase, update TodoWrite and immediately execute next phase 4. **Agent Produces Files**: Agent writes all rule files directly
5. **No User Prompts**: Never ask user questions or wait for input between phases 5. **No Manual Loading**: Rules auto-activate when Claude works with matching files
6. **Track Progress**: Update TodoWrite after EVERY phase completion before starting next phase
7. **Lightweight Index**: Phase 3 only generates SKILL.md index by reading existing files
--- ---
## 3-Phase Execution ## 3-Phase Execution
### Phase 1: Prepare Context Paths ### Phase 1: Prepare Context & Detect Tech Stack
**Goal**: Detect input mode, extract tech stack info, determine file extensions

**Input Mode Detection**:

```bash
input="$1"

if [[ "$input" == WFS-* ]]; then
  MODE="session"
  SESSION_ID="$input"
  # Read workflow-session.json to extract tech stack
else
  MODE="direct"
  TECH_STACK_NAME="$input"
fi
```

**Tech Stack Analysis**:

```javascript
// Decompose composite tech stacks:
// "typescript-react-nextjs" → ["typescript", "react", "nextjs"]

const TECH_EXTENSIONS = {
  "typescript": "{ts,tsx}",
  "javascript": "{js,jsx}",
  "python": "py",
  "rust": "rs",
  "go": "go",
  "java": "java",
  "csharp": "cs",
  "ruby": "rb",
  "php": "php"
};

const FRAMEWORK_TYPE = {
  "react": "frontend",
  "vue": "frontend",
  "angular": "frontend",
  "nextjs": "fullstack",
  "nuxt": "fullstack",
  "fastapi": "backend",
  "express": "backend",
  "django": "backend",
  "rails": "backend"
};
```

**Check Existing Rules**:

```bash
normalized_name=$(echo "$TECH_STACK_NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
rules_dir=".claude/rules/tech/${normalized_name}"
existing_count=$(find "${rules_dir}" -name "*.md" 2>/dev/null | wc -l || echo 0)
```

**Skip Decision**:
- If `existing_count > 0` and `--regenerate` is absent → `SKIP_GENERATION = true`
- If `--regenerate` is present → delete existing rules and regenerate

**Output Variables**:
- `TECH_STACK_NAME`: Normalized name
- `PRIMARY_LANG`: Primary language
- `FILE_EXT`: File extension pattern
- `FRAMEWORK_TYPE`: frontend | backend | fullstack | library
- `COMPONENTS`: Array of tech components
- `SKIP_GENERATION`: Boolean

**TodoWrite**: Mark phase 1 completed
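The tech stack analysis step can be sketched end-to-end. This is an illustrative sketch only: the helper name `analyzeTechStack` and the "first matching component wins" heuristics are assumptions, not part of the command spec.

```javascript
// Hypothetical helper: derive Phase 1 output variables from a composite name.
const TECH_EXTENSIONS = {
  typescript: "{ts,tsx}", javascript: "{js,jsx}", python: "py",
  rust: "rs", go: "go", java: "java", csharp: "cs", ruby: "rb", php: "php"
};
const FRAMEWORK_TYPE = {
  react: "frontend", vue: "frontend", angular: "frontend",
  nextjs: "fullstack", nuxt: "fullstack",
  fastapi: "backend", express: "backend", django: "backend", rails: "backend"
};

function analyzeTechStack(name) {
  // "typescript-react-nextjs" → ["typescript", "react", "nextjs"]
  const components = name.toLowerCase().split("-");
  const primaryLang = components.find(c => c in TECH_EXTENSIONS) || components[0];
  // Assumption: the first framework component decides the type; default "library"
  const fw = components.find(c => c in FRAMEWORK_TYPE);
  return {
    TECH_STACK_NAME: components.join("-"),
    PRIMARY_LANG: primaryLang,
    FILE_EXT: TECH_EXTENSIONS[primaryLang] || "*",
    FRAMEWORK_TYPE: fw ? FRAMEWORK_TYPE[fw] : "library",
    COMPONENTS: components
  };
}

console.log(analyzeTechStack("typescript-react-nextjs"));
```

A single-language input such as `"python"` falls through to `FRAMEWORK_TYPE: "library"` under these assumptions.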
---
### Phase 2: Agent Produces Path-Conditional Rules

**Skip Condition**: Skipped if `SKIP_GENERATION = true`

**Goal**: Delegate to agent for Exa research and rule file generation

**Template Files**:

```
~/.claude/workflows/cli-templates/prompts/rules/
├── tech-rules-agent-prompt.txt   # Agent instructions
├── rule-core.txt                 # Core principles template
├── rule-patterns.txt             # Implementation patterns template
├── rule-testing.txt              # Testing rules template
├── rule-config.txt               # Configuration rules template
├── rule-api.txt                  # API rules template (backend)
└── rule-components.txt           # Component rules template (frontend)
```

**Agent Task**:

```javascript
Task({
  subagent_type: "general-purpose",
  description: `Generate tech stack rules: ${TECH_STACK_NAME}`,
  prompt: `
You are generating path-conditional rules for Claude Code.

## Context
- Tech Stack: ${TECH_STACK_NAME}
- Primary Language: ${PRIMARY_LANG}
- File Extensions: ${FILE_EXT}
- Framework Type: ${FRAMEWORK_TYPE}
- Components: ${JSON.stringify(COMPONENTS)}
- Output Directory: .claude/rules/tech/${TECH_STACK_NAME}/

## Instructions

Read the agent prompt template for detailed instructions:
$(cat ~/.claude/workflows/cli-templates/prompts/rules/tech-rules-agent-prompt.txt)

## Execution Steps
1. Execute Exa research queries (see agent prompt)
2. Read each rule template
3. Generate rule files following template structure
4. Write files to output directory
5. Write metadata.json
6. Report completion

## Variable Substitutions

Replace in templates:
- {TECH_STACK_NAME} → ${TECH_STACK_NAME}
- {PRIMARY_LANG} → ${PRIMARY_LANG}
- {FILE_EXT} → ${FILE_EXT}
- {FRAMEWORK_TYPE} → ${FRAMEWORK_TYPE}
`
})
```

**Completion Criteria**:
- 4-6 rule files written with proper `paths` frontmatter
- metadata.json written
- Agent reports files created

**TodoWrite**: Mark phase 2 completed
---
### Phase 3: Verify & Report

**Goal**: Verify generated files and provide usage summary

**Steps**:

1. **Verify Files**:

```bash
find ".claude/rules/tech/${TECH_STACK_NAME}" -name "*.md" -type f
```

2. **Validate Frontmatter**:

```bash
head -5 ".claude/rules/tech/${TECH_STACK_NAME}/core.md"
```

3. **Read Metadata**:

```javascript
Read(`.claude/rules/tech/${TECH_STACK_NAME}/metadata.json`)
```

4. **Generate Summary Report**:

```
Tech Stack Rules Generated

Tech Stack: {TECH_STACK_NAME}
Location: .claude/rules/tech/{TECH_STACK_NAME}/

Files Created:
├── core.md       → paths: **/*.{ext}
├── patterns.md   → paths: src/**/*.{ext}
├── testing.md    → paths: **/*.{test,spec}.{ext}
├── config.md     → paths: *.config.*
├── api.md        → paths: **/api/**/* (if backend)
└── components.md → paths: **/components/**/* (if frontend)

Auto-Loading:
- Rules apply automatically when editing matching files
- No manual loading required

Example Activation:
- Edit src/components/Button.tsx → core.md + patterns.md + components.md
- Edit tests/api.test.ts → core.md + testing.md
- Edit package.json → config.md
```

**TodoWrite**: Mark phase 3 completed
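The frontmatter validation step can be made concrete with a small sketch. This assumes each generated rule file opens with a YAML block containing a `paths:` key; the exact frontmatter schema is defined by the rule templates, and `extractPaths` is a hypothetical helper.

```javascript
// Minimal frontmatter check for a generated rule file (assumed format).
function extractPaths(ruleText) {
  const m = ruleText.match(/^---\n([\s\S]*?)\n---/); // grab the YAML frontmatter block
  if (!m) return null;
  const line = m[1].split("\n").find(l => l.startsWith("paths:"));
  return line ? line.slice("paths:".length).trim() : null;
}

const sample = `---
paths: **/*.{test,spec}.{ts,tsx}
description: Testing rules
---
# Testing`;

console.log(extractPaths(sample)); // "**/*.{test,spec}.{ts,tsx}"
```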
---
## Path Pattern Reference

| Pattern | Matches |
|---------|---------|
| `**/*.ts` | All .ts files |
| `src/**/*` | All files under src/ |
| `*.config.*` | Config files in root |
| `**/*.{ts,tsx}` | .ts and .tsx files |

| Tech Stack | Core Pattern | Test Pattern |
|------------|--------------|--------------|
| TypeScript | `**/*.{ts,tsx}` | `**/*.{test,spec}.{ts,tsx}` |
| Python | `**/*.py` | `**/test_*.py, **/*_test.py` |
| Rust | `**/*.rs` | `**/tests/**/*.rs` |
| Go | `**/*.go` | `**/*_test.go` |
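Path-conditional rules rely on glob matching. The sketch below shows minimatch-style semantics (`**` spans directories, `*` stays within one segment, `{a,b}` is alternation); actual rule loading is handled by Claude Code itself, and this converter is only an illustration of those semantics.

```javascript
// Hypothetical glob-to-RegExp converter covering the patterns used by rules.
function globToRegExp(glob) {
  let re = "";
  for (let i = 0; i < glob.length; i++) {
    const c = glob[i];
    if (c === "*") {
      if (glob[i + 1] === "*") { re += ".*"; i++; } // ** → any path, including /
      else re += "[^/]*";                           // *  → any chars within one segment
    } else if (c === "{") {
      const end = glob.indexOf("}", i);             // {ts,tsx} → (ts|tsx)
      re += "(" + glob.slice(i + 1, end).split(",").join("|") + ")";
      i = end;
    } else if (".+^$()[]\\|".includes(c)) {
      re += "\\" + c;                               // escape regex metacharacters
    } else {
      re += c;
    }
  }
  return new RegExp("^" + re + "$");
}

console.log(globToRegExp("**/*.{ts,tsx}").test("src/components/Button.tsx")); // true
console.log(globToRegExp("*.config.*").test("vite.config.ts"));               // true
console.log(globToRegExp("*.config.*").test("src/vite.config.ts"));           // false
```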
---
## Parameters

```bash
/memory:tech-research [session-id | "tech-stack-name"] [--regenerate]
```

**Arguments**:
- **session-id**: `WFS-*` format - Extract from workflow session
- **tech-stack-name**: Direct input - `"typescript"`, `"typescript-react"`
- **--regenerate**: Force regenerate existing rules
---
## Examples

### Single Language

```bash
/memory:tech-research "typescript"
```

**Output**: `.claude/rules/tech/typescript/` with 4 rule files

### Frontend Stack

```bash
/memory:tech-research "typescript-react"
```

**Output**: `.claude/rules/tech/typescript-react/` with 5 rule files (includes components.md)

### Backend Stack

```bash
/memory:tech-research "python-fastapi"
```

**Output**: `.claude/rules/tech/python-fastapi/` with 5 rule files (includes api.md)

### From Session

```bash
/memory:tech-research WFS-user-auth-20251104
```

**Workflow**: Extract tech stack from session → Generate rules

---

## Comparison: Rules vs SKILL

| Aspect | SKILL Memory | Rules |
|--------|--------------|-------|
| Loading | Manual: `Skill("tech")` | Automatic by path |
| Scope | All files when loaded | Only matching files |
| Granularity | Monolithic packages | Per-file-type |
| Context | Full package | Only relevant rules |

**When to Use**:
- **Rules**: Tech stack conventions per file type
- **SKILL**: Reference docs, APIs, examples for manual lookup

View File

@@ -187,7 +187,7 @@ Objectives:
3. Use Gemini for aggregation (optional):

   Command pattern:
   ccw cli exec "
   PURPOSE: Extract lessons and conflicts from workflow session
   TASK:
   • Analyze IMPL_PLAN and lessons from manifest
@@ -198,7 +198,7 @@ Objectives:
   CONTEXT: @IMPL_PLAN.md @workflow-session.json
   EXPECTED: Structured lessons and conflicts in JSON format
   RULES: Template reference from skill-aggregation.txt
   " --tool gemini --cd .workflow/.archives/{session_id}
3.5. **Generate SKILL.md Description** (CRITICAL for auto-loading):
@@ -334,7 +334,7 @@ Objectives:
   - Sort sessions by date

2. Use Gemini for final aggregation:

   ccw cli exec "
   PURPOSE: Aggregate lessons and conflicts from all workflow sessions
   TASK:
   • Group successes by functional domain
@@ -345,7 +345,7 @@ Objectives:
   CONTEXT: [Provide aggregated JSON data]
   EXPECTED: Final aggregated structure for SKILL documents
   RULES: Template reference from skill-aggregation.txt
   " --tool gemini

3. Read templates for formatting (same 4 templates as single mode)

View File

@@ -67,7 +67,9 @@ Phase 4: Execution Strategy & Task Execution
├─ Get next in_progress task from TodoWrite
├─ Lazy load task JSON
├─ Launch agent with task context
├─ Mark task completed (update IMPL-*.json status)
│   # Update task status with ccw session (auto-tracks status_history):
│   # ccw session task ${sessionId} IMPL-X completed
└─ Advance to next task

Phase 5: Completion
@@ -90,37 +92,32 @@ Resume Mode (--resume-session):
**Process**:

#### Step 1.1: List Active Sessions

```bash
ccw session list --location active
# Returns: {"success":true,"result":{"active":[...],"total":N}}
```

#### Step 1.2: Handle Session Selection

**Case A: No Sessions** (total = 0)

```
ERROR: No active workflow sessions found
Run /workflow:plan "task description" to create a session
```

**Case B: Single Session** (total = 1)

Auto-select the single session from result.active[0].session_id and continue to Phase 2.

**Case C: Multiple Sessions** (total > 1)

List sessions with metadata using ccw session:

```bash
# Get session list with metadata
ccw session list --location active

# For each session, get stats
ccw session stats WFS-session-name
```

Use AskUserQuestion to present formatted options (max 4 options shown):
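The selection logic can be sketched from the `ccw session list` JSON output. This is an illustrative sketch: the `session_id` and `total` fields come from the documented output shape, while the helper name `selectSession` is an assumption.

```javascript
// Hypothetical helper: turn `ccw session list` output into the three cases.
function selectSession(listJson) {
  const { active, total } = JSON.parse(listJson).result;
  if (total === 0) throw new Error("No active workflow sessions found"); // Case A
  if (total === 1) return active[0].session_id;                          // Case B: auto-select
  return active.slice(0, 4).map(s => s.session_id);                      // Case C: prompt user (max 4 shown)
}

const out = '{"success":true,"result":{"active":[{"session_id":"WFS-user-auth-v2"}],"total":1}}';
console.log(selectSession(out)); // "WFS-user-auth-v2"
```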
@@ -158,6 +155,14 @@ ccw session read ${sessionId} --type session
**Output**: Store session metadata in memory

**DO NOT read task JSONs yet** - defer until execution phase (lazy loading)
#### Step 1.4: Update Session Status to Active
**Purpose**: Update workflow-session.json status from "planning" to "active" for dashboard monitoring.
```bash
# Update status atomically using ccw session
ccw session status ${sessionId} active
```
**Resume Mode**: This entire phase is skipped when `--resume-session="session-id"` flag is provided.
### Phase 2: Planning Document Validation

View File

@@ -473,7 +473,7 @@ Detailed plan: ${executionContext.session.artifacts.plan}`)
  return prompt
}

ccw cli exec "${buildCLIPrompt(batch)}" --tool codex --mode auto
```

**Execution with tracking**:
@@ -541,15 +541,15 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-q
# - Report findings directly

# Method 2: Gemini Review (recommended)
ccw cli exec "[Shared Prompt Template with artifacts]" --tool gemini
# CONTEXT includes: @**/* @${plan.json} [@${exploration.json}]

# Method 3: Qwen Review (alternative)
ccw cli exec "[Shared Prompt Template with artifacts]" --tool qwen
# Same prompt as Gemini, different execution engine

# Method 4: Codex Review (autonomous)
ccw cli exec "[Verify plan acceptance criteria at ${plan.json}]" --tool codex --mode auto
```

**Implementation Note**: Replace `[Shared Prompt Template with artifacts]` placeholder with actual template content, substituting:

View File

@@ -152,11 +152,16 @@ Launching ${selectedAngles.length} parallel explorations...
**Launch Parallel Explorations** - Orchestrator assigns angle to each agent:
**⚠️ CRITICAL - NO BACKGROUND EXECUTION**:
- **MUST NOT use `run_in_background: true`** - exploration results are REQUIRED before planning
```javascript
// Launch agents with pre-assigned angles
const explorationTasks = selectedAngles.map((angle, index) =>
  Task(
    subagent_type="cli-explore-agent",
    run_in_background=false, // ⚠️ MANDATORY: Must wait for results
    description=`Explore: ${angle}`,
    prompt=`
## Task Objective
@@ -356,7 +361,15 @@ if (dedupedClarifications.length > 0) {
// Step 1: Read schema
const schema = Bash(`cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json`)

// Step 2: ⚠️ MANDATORY - Read and review ALL exploration files
const manifest = JSON.parse(Read(`${sessionFolder}/explorations-manifest.json`))
manifest.explorations.forEach(exp => {
  const explorationData = Read(exp.path)
  console.log(`\n### Exploration: ${exp.angle}\n${explorationData}`)
})

// Step 3: Generate plan following schema (Claude directly, no agent)
// ⚠️ Plan MUST incorporate insights from exploration files read in Step 2
const plan = {
  summary: "...",
  approach: "...",
@@ -367,10 +380,10 @@ const plan = {
  _metadata: { timestamp: getUtc8ISOString(), source: "direct-planning", planning_mode: "direct" }
}

// Step 4: Write plan to session folder
Write(`${sessionFolder}/plan.json`, JSON.stringify(plan, null, 2))

// Step 5: MUST continue to Phase 4 (Confirmation) - DO NOT execute code here
```

**Medium/High Complexity** - Invoke cli-lite-planning-agent:

View File

@@ -61,7 +61,7 @@ Phase 2: Context Gathering
├─ Tasks attached: Analyze structure → Identify integration → Generate package
└─ Output: contextPath + conflict_risk

Phase 3: Conflict Resolution
└─ Decision (conflict_risk check):
    ├─ conflict_risk ≥ medium → Execute /workflow:tools:conflict-resolution
    │   ├─ Tasks attached: Detect conflicts → Present to user → Apply strategies
@@ -168,7 +168,7 @@ SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"[st
---
### Phase 3: Conflict Resolution
**Trigger**: Only execute when context-package.json indicates conflict_risk is "medium" or "high"
@@ -185,10 +185,10 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
**Parse Output**:
- Extract: Execution status (success/skipped/failed)
- Verify: conflict-resolution.json file path (if executed)

**Validation**:
- File `.workflow/active/[sessionId]/.process/conflict-resolution.json` exists (if executed)

**Skip Behavior**:
- If conflict_risk is "none" or "low", skip directly to Phase 3.5
@@ -497,7 +497,7 @@ Return summary to user
- Parse context path from Phase 2 output, store in memory
- **Extract conflict_risk from context-package.json**: Determine Phase 3 execution
- **If conflict_risk ≥ medium**: Launch Phase 3 conflict-resolution with sessionId and contextPath
- Wait for Phase 3 to finish executing (if executed), verify conflict-resolution.json created
- **If conflict_risk is none/low**: Skip Phase 3, proceed directly to Phase 4
- **Build Phase 4 command**: `/workflow:tools:task-generate-agent --session [sessionId]`
- Pass session ID to Phase 4 command

View File

@@ -133,37 +133,37 @@ After bash validation, the model takes control to:
```

- Use Gemini for security analysis:
```bash
ccw cli exec "
PURPOSE: Security audit of completed implementation
TASK: Review code for security vulnerabilities, insecure patterns, auth/authz issues
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
EXPECTED: Security findings report with severity levels
RULES: Focus on OWASP Top 10, authentication, authorization, data validation, injection risks
" --tool gemini --mode write --cd .workflow/active/${sessionId}
```

**Architecture Review** (`--type=architecture`):
- Use Qwen for architecture analysis:
```bash
ccw cli exec "
PURPOSE: Architecture compliance review
TASK: Evaluate adherence to architectural patterns, identify technical debt, review design decisions
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
EXPECTED: Architecture assessment with recommendations
RULES: Check for patterns, separation of concerns, modularity, scalability
" --tool qwen --mode write --cd .workflow/active/${sessionId}
```

**Quality Review** (`--type=quality`):
- Use Gemini for code quality:
```bash
ccw cli exec "
PURPOSE: Code quality and best practices review
TASK: Assess code readability, maintainability, adherence to best practices
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
EXPECTED: Quality assessment with improvement suggestions
RULES: Check for code smells, duplication, complexity, naming conventions
" --tool gemini --mode write --cd .workflow/active/${sessionId}
```

**Action Items Review** (`--type=action-items`):
@@ -177,7 +177,7 @@ After bash validation, the model takes control to:
' '
# Check implementation summaries against requirements # Check implementation summaries against requirements
cd .workflow/active/${sessionId} && gemini -p " ccw cli exec "
PURPOSE: Verify all requirements and acceptance criteria are met PURPOSE: Verify all requirements and acceptance criteria are met
TASK: Cross-check implementation summaries against original requirements TASK: Cross-check implementation summaries against original requirements
CONTEXT: @.task/IMPL-*.json,.summaries/IMPL-*.md,../.. @../../CLAUDE.md CONTEXT: @.task/IMPL-*.json,.summaries/IMPL-*.md,../.. @../../CLAUDE.md
@@ -191,7 +191,7 @@ After bash validation, the model takes control to:
- Verify all acceptance criteria are met - Verify all acceptance criteria are met
- Flag any incomplete or missing action items - Flag any incomplete or missing action items
- Assess deployment readiness - Assess deployment readiness
" --approval-mode yolo " --tool gemini --mode write --cd .workflow/active/${sessionId}
``` ```
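The four review invocations above differ only in tool and prompt. A minimal sketch of a dispatcher that builds the command string per review type (the `build_review_cmd` helper and the elided `"..."` prompt placeholder are hypothetical; the `ccw cli exec` flags are taken from the examples above):

```bash
# Build (but do not run) the ccw review command for a given review type.
build_review_cmd() {
  session_id="$1"
  review_type="$2"
  # Architecture reviews go to Qwen; all other types use Gemini, per the examples above.
  case "$review_type" in
    architecture) tool="qwen" ;;
    *) tool="gemini" ;;
  esac
  printf 'ccw cli exec "..." --tool %s --mode write --cd .workflow/active/%s\n' "$tool" "$session_id"
}

build_review_cmd WFS-demo security
# ccw cli exec "..." --tool gemini --mode write --cd .workflow/active/WFS-demo
```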

View File

@@ -25,11 +25,9 @@ Mark the currently active workflow session as complete, analyze it for lessons l
#### Step 1.1: Find Active Session and Get Name
```bash
# Find active session
ccw session list --location active
# Extract first session_id from result.active array
```
**Output**: Session name `WFS-session-name`
@@ -199,7 +197,7 @@ manifest.push(archiveEntry);
Write('.workflow/archives/manifest.json', JSON.stringify(manifest, null, 2));
```
#### Step 3.5: Remove Archiving Marker
```bash
# Remove archiving marker from archived session (use bash rm as ccw has no delete)
rm .workflow/archives/WFS-session-name/.process/.archiving 2>/dev/null || true
@@ -223,7 +221,8 @@ rm .workflow/archives/WFS-session-name/.process/.archiving 2>/dev/null || true
#### Step 4.1: Check Project State Exists
```bash
# Check project state using ccw
ccw session read project --type project 2>/dev/null && echo "EXISTS" || echo "SKIP"
```
**If SKIP**: Output warning and skip Phase 4
@@ -251,8 +250,8 @@ const featureId = title.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 5
#### Step 4.3: Update project.json
```bash
# Read current project state using ccw
ccw session read project --type project --raw
```
**JSON Update Logic**:
@@ -366,8 +365,8 @@ function getLatestCommitHash() {
**Recovery Steps**:
```bash
# Session still in .workflow/active/WFS-session-name
# Remove archiving marker using bash
rm .workflow/active/WFS-session-name/.process/.archiving 2>/dev/null || true
```
**User Notification**:
@@ -464,11 +463,12 @@ Session state: PARTIALLY COMPLETE (session archived, manifest needs update)
**Phase 3: Atomic Commit** (Transactional file operations)
- Create archive directory
- Update session status to "completed"
- Move session to archive location
- Update manifest.json with archive entry
- Remove `.archiving` marker
- **All-or-nothing**: Either all succeed or session remains in safe state
- **Total**: 5 bash commands + JSON manipulation
**Phase 4: Project Registry Update** (Optional feature tracking)
- Check project.json exists
@@ -498,3 +498,55 @@ Session state: PARTIALLY COMPLETE (session archived, manifest needs update)
- Idempotent operations (safe to retry)
## session_manager Tool Alternative
Use `ccw tool exec session_manager` for session completion operations:
### List Active Sessions
```bash
ccw tool exec session_manager '{"operation":"list","location":"active"}'
```
### Update Session Status to Completed
```bash
ccw tool exec session_manager '{
"operation": "update",
"session_id": "WFS-xxx",
"content_type": "session",
"content": {
"status": "completed",
"archived_at": "2025-12-10T08:00:00Z"
}
}'
```
### Archive Session
```bash
ccw tool exec session_manager '{"operation":"archive","session_id":"WFS-xxx"}'
# This operation:
# 1. Updates status to "completed" if update_status=true (default)
# 2. Moves session from .workflow/active/ to .workflow/archives/
```
### Read Session Data
```bash
# Read workflow-session.json
ccw tool exec session_manager '{"operation":"read","session_id":"WFS-xxx","content_type":"session"}'
# Read IMPL_PLAN.md
ccw tool exec session_manager '{"operation":"read","session_id":"WFS-xxx","content_type":"plan"}'
```
### Write Archiving Marker
```bash
ccw tool exec session_manager '{
"operation": "write",
"session_id": "WFS-xxx",
"content_type": "process",
"path_params": {"filename": ".archiving"},
"content": ""
}'
```
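The examples above hard-code the `archived_at` timestamp. A small sketch of assembling the same payload with the current UTC time instead (the payload shape follows the update example above; whether `ccw` accepts it exactly like this is an assumption):

```bash
# Build a session_manager update payload with a live UTC timestamp.
payload=$(printf '{"operation":"update","session_id":"%s","content_type":"session","content":{"status":"completed","archived_at":"%s"}}' \
  "WFS-xxx" "$(date -u +%Y-%m-%dT%H:%M:%SZ)")
echo "$payload"
# Then pass it on: ccw tool exec session_manager "$payload"
```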

View File

@@ -17,41 +17,30 @@ Display all workflow sessions with their current status, progress, and metadata.
## Implementation Flow
### Step 1: List All Sessions
```bash
ccw session list --location both
```
### Step 2: Get Session Statistics
```bash
ccw session stats WFS-session
# Returns: tasks count by status, summaries count, has_plan
```
### Step 3: Read Session Metadata
```bash
ccw session read WFS-session --type session
# Returns: session_id, status, project, created_at, etc.
```
## Simple Commands
### Basic Operations
- **List all sessions**: `ccw session list`
- **List active only**: `ccw session list --location active`
- **Read session data**: `ccw session read WFS-xxx --type session`
- **Get task stats**: `ccw session stats WFS-xxx`
## Simple Output Format
@@ -94,4 +83,32 @@ ccw session list --location active --no-metadata
# Show recent sessions
ccw session list --location active
```
## session_manager Tool Alternative
Use `ccw tool exec session_manager` for simplified session listing:
### List All Sessions (Active + Archived)
```bash
ccw tool exec session_manager '{"operation":"list","location":"both","include_metadata":true}'
# Response:
# {
# "success": true,
# "result": {
# "active": [{"session_id":"WFS-xxx","metadata":{...}}],
# "archived": [{"session_id":"WFS-yyy","metadata":{...}}],
# "total": 2
# }
# }
```
### List Active Sessions Only
```bash
ccw tool exec session_manager '{"operation":"list","location":"active","include_metadata":true}'
```
### Read Specific Session
```bash
ccw tool exec session_manager '{"operation":"read","session_id":"WFS-xxx","content_type":"session"}'
```
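When the list response above must be processed without `jq`, a grep-based count is one option. A sketch against a stubbed response (the response shape mirrors the example above; the assumption is one `"session_id"` key per session entry):

```bash
# Count sessions in a session_manager list response (stubbed here for illustration).
response='{"success":true,"result":{"active":[{"session_id":"WFS-a"},{"session_id":"WFS-b"}],"archived":[],"total":2}}'
count=$(printf '%s' "$response" | grep -o '"session_id"' | wc -l | tr -d ' ')
echo "sessions found: $count"
# sessions found: 2
```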

View File

@@ -17,39 +17,35 @@ Resume the most recently paused workflow session, restoring all context and stat
### Step 1: Find Paused Sessions
```bash
ccw session list --location active
# Filter for sessions with status="paused"
```
### Step 2: Check Session Status
```bash
ccw session read WFS-session --type session
# Check .status field in response
```
### Step 3: Find Most Recent Paused
```bash
ccw session list --location active
# Sort by created_at, filter for paused status
```
### Step 4: Update Session Status to Active
```bash
ccw session status WFS-session active
# Or with full update:
ccw session update WFS-session --type session --content '{"status":"active","resumed_at":"2025-12-10T08:00:00Z"}'
```
## Simple Commands
### Basic Operations
- **List sessions**: `ccw session list --location active`
- **Check status**: `ccw session read WFS-xxx --type session`
- **Update status**: `ccw session status WFS-xxx active`
### Resume Result
```
@@ -58,4 +54,26 @@ Session WFS-user-auth resumed
- Paused at: 2025-09-15T14:30:00Z
- Resumed at: 2025-09-15T15:45:00Z
- Ready for: /workflow:execute
```
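Only paused sessions should be resumed, so the status check (Step 2) gates the status update (Step 4). A minimal sketch of that guard; the status value is stubbed as a function argument here, whereas the real flow would read it from `ccw session read`:

```bash
# Emit the resume command only when the session is actually paused.
resume_if_paused() {
  status="$1"
  session_id="$2"
  if [ "$status" = "paused" ]; then
    echo "ccw session status $session_id active"
  else
    echo "skip: $session_id is $status, not paused"
  fi
}

resume_if_paused paused WFS-user-auth
# ccw session status WFS-user-auth active
```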
## session_manager Tool Alternative
Use `ccw tool exec session_manager` for session resume:
### Update Session Status
```bash
# Update status to active
ccw tool exec session_manager '{
"operation": "update",
"session_id": "WFS-xxx",
"content_type": "session",
"content": {
"status": "active",
"resumed_at": "2025-12-10T08:00:00Z"
}
}'
```
### Read Session Status
```bash
ccw tool exec session_manager '{"operation":"read","session_id":"WFS-xxx","content_type":"session"}'
```

View File

@@ -70,12 +70,12 @@ SlashCommand({command: "/workflow:init"});
### Step 1: List Active Sessions
```bash
ccw session list --location active
```
### Step 2: Display Session Metadata
```bash
ccw session read WFS-promptmaster-platform --type session
```
### Step 4: User Decision
@@ -92,34 +92,29 @@ Present session information and wait for user to select or create session.
### Step 1: Check Active Sessions Count
```bash
ccw session list --location active
# Check result.total in response
```
### Step 2a: No Active Sessions → Create New
```bash
# Generate session slug from description
# Pattern: WFS-{lowercase-slug-from-description}
# Create session with ccw (creates directories + metadata atomically)
ccw session init WFS-implement-oauth2-auth --type workflow --content '{"project":"implement OAuth2 auth","status":"planning"}'
```
**Output**: `SESSION_ID: WFS-implement-oauth2-auth`
### Step 2b: Single Active Session → Check Relevance
```bash
# Get session list with metadata
ccw session list --location active
# Read session metadata for relevance check
ccw session read WFS-promptmaster-platform --type session
# If task contains project keywords → Reuse session
# If task unrelated → Create new session (use Step 2a)
```
@@ -129,8 +124,9 @@ bash(cat .workflow/active/WFS-promptmaster-platform/workflow-session.json | grep
### Step 2c: Multiple Active Sessions → Use First
```bash
# Get first active session from list
ccw session list --location active
# Use first session_id from result.active array
# Output warning and session ID
# WARNING: Multiple active sessions detected
@@ -146,24 +142,15 @@ bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | head -1 | xargs
### Step 1: Generate Unique Session Slug
```bash
# Convert description to slug: lowercase, alphanumeric + hyphen, max 50 chars
# Check if exists via ccw session list, add counter if collision
ccw session list --location active
```
### Step 2: Create Session Structure
```bash
# Single command creates directories (.process, .task, .summaries) + metadata
ccw session init WFS-fix-login-bug --type workflow --content '{"project":"fix login bug","status":"planning"}'
```
**Output**: `SESSION_ID: WFS-fix-login-bug`
@@ -197,4 +184,36 @@ SESSION_ID: WFS-promptmaster-platform
- Pattern: `WFS-[lowercase-slug]`
- Characters: `a-z`, `0-9`, `-` only
- Max length: 50 characters
- Uniqueness: Add numeric suffix if collision (`WFS-auth-2`, `WFS-auth-3`)
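The naming rules above can be sketched as a small helper (the collision check against `ccw session list` is omitted, so only slug normalization is shown; `make_session_id` itself is hypothetical):

```bash
# Turn a free-form description into a WFS session ID:
# lowercase, non-alphanumerics collapsed to "-", truncated to 50 chars.
make_session_id() {
  slug=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | cut -c1-50)
  echo "WFS-$slug"
}

make_session_id "Fix Login Bug"
# WFS-fix-login-bug
```

A real implementation would then list active sessions and append `-2`, `-3`, … while the generated ID collides.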
## session_manager Tool Alternative
The above bash commands can be replaced with `ccw tool exec session_manager`:
### List Sessions
```bash
# List active sessions with metadata
ccw tool exec session_manager '{"operation":"list","location":"active","include_metadata":true}'
# Response: {"success":true,"result":{"active":[{"session_id":"WFS-xxx","metadata":{...}}],"total":1}}
```
### Create Session (replaces mkdir + echo)
```bash
# Single command creates directories + metadata
ccw tool exec session_manager '{
"operation": "init",
"session_id": "WFS-my-session",
"metadata": {
"project": "my project description",
"status": "planning",
"type": "workflow",
"created_at": "2025-12-10T08:00:00Z"
}
}'
```
### Read Session Metadata
```bash
ccw tool exec session_manager '{"operation":"read","session_id":"WFS-xxx","content_type":"session"}'
```

View File

@@ -164,10 +164,10 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
**Parse Output**:
- Extract: Execution status (success/skipped/failed)
- Verify: conflict-resolution.json file path (if executed)
**Validation**:
- File `.workflow/active/[sessionId]/.process/conflict-resolution.json` exists (if executed)
**Skip Behavior**:
- If conflict_risk is "none" or "low", skip directly to Phase 5
@@ -402,7 +402,7 @@ TDD Workflow Orchestrator
│ ├─ Phase 4.1: Detect conflicts with CLI
│ ├─ Phase 4.2: Present conflicts to user
│ └─ Phase 4.3: Apply resolution strategies
│ └─ Returns: conflict-resolution.json ← COLLAPSED
│ ELSE:
│ └─ Skip to Phase 5

View File

@@ -127,7 +127,7 @@ ccw session read {sessionId} --type task --raw | jq -r '.meta.agent'
**Gemini analysis for comprehensive TDD compliance report**
```bash
ccw cli exec "
PURPOSE: Generate TDD compliance report
TASK: Analyze TDD workflow execution and generate quality report
CONTEXT: @{.workflow/active/{sessionId}/.task/*.json,.workflow/active/{sessionId}/.summaries/*,.workflow/active/{sessionId}/.process/tdd-cycle-report.md}
@@ -139,7 +139,7 @@ EXPECTED:
- Red-Green-Refactor cycle validation
- Best practices adherence assessment
RULES: Focus on TDD best practices and workflow adherence. Be specific about violations and improvements.
" --tool gemini --cd project-root > .workflow/active/{sessionId}/TDD_COMPLIANCE_REPORT.md
```
**Output**: TDD_COMPLIANCE_REPORT.md

View File

@@ -133,7 +133,7 @@ Task(subagent_type="cli-execution-agent", prompt=`
### 2. Execute CLI Analysis (Enhanced with Exploration + Scenario Uniqueness)
Primary (Gemini):
ccw cli exec "
PURPOSE: Detect conflicts between plan and codebase, using exploration insights
TASK:
• **Review pre-identified conflict_indicators from exploration results**
@@ -152,7 +152,7 @@ Task(subagent_type="cli-execution-agent", prompt=`
- ModuleOverlap conflicts with overlap_analysis
- Targeted clarification questions
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on breaking changes, migration needs, and functional overlaps | Prioritize exploration-identified conflicts | analysis=READ-ONLY
" --tool gemini --cd {project_root}
Fallback: Qwen (same prompt) → Claude (manual analysis)
@@ -169,7 +169,7 @@ Task(subagent_type="cli-execution-agent", prompt=`
### 4. Return Structured Conflict Data
⚠️ Output to conflict-resolution.json (generated in Phase 4)
Return JSON format for programmatic processing:
@@ -467,14 +467,30 @@ selectedStrategies.forEach(item => {
console.log(`\nApplying ${modifications.length} modifications...`);
// 2. Apply each modification using Edit tool (with fallback to context-package.json)
const appliedModifications = [];
const failedModifications = [];
const fallbackConstraints = []; // For files that don't exist
modifications.forEach((mod, idx) => {
try {
console.log(`[${idx + 1}/${modifications.length}] Modifying ${mod.file}...`);
// Check if target file exists (brainstorm files may not exist in lite workflow)
if (!file_exists(mod.file)) {
console.log(`  ⚠️ File does not exist; recording constraint in context-package.json`);
fallbackConstraints.push({
source: "conflict-resolution",
conflict_id: mod.conflict_id,
target_file: mod.file,
section: mod.section,
change_type: mod.change_type,
content: mod.new_content,
rationale: mod.rationale
});
return; // Skip to next modification
}
if (mod.change_type === "update") {
Edit({
file_path: mod.file,
@@ -502,14 +518,45 @@ modifications.forEach((mod, idx) => {
}
});
// 2b. Generate conflict-resolution.json output file
const resolutionOutput = {
session_id: sessionId,
resolved_at: new Date().toISOString(),
summary: {
total_conflicts: conflicts.length,
resolved_with_strategy: selectedStrategies.length,
custom_handling: customConflicts.length,
fallback_constraints: fallbackConstraints.length
},
resolved_conflicts: selectedStrategies.map(s => ({
conflict_id: s.conflict_id,
strategy_name: s.strategy.name,
strategy_approach: s.strategy.approach,
clarifications: s.clarifications || [],
modifications_applied: s.strategy.modifications?.filter(m =>
appliedModifications.some(am => am.conflict_id === s.conflict_id)
) || []
})),
custom_conflicts: customConflicts.map(c => ({
id: c.id,
brief: c.brief,
category: c.category,
suggestions: c.suggestions,
overlap_analysis: c.overlap_analysis || null
})),
planning_constraints: fallbackConstraints, // Constraints for files that don't exist
failed_modifications: failedModifications
};
const resolutionPath = `.workflow/active/${sessionId}/.process/conflict-resolution.json`;
Write(resolutionPath, JSON.stringify(resolutionOutput, null, 2));
console.log(`\n📄 Conflict resolution saved to: ${resolutionPath}`);
// 3. Update context-package.json with resolution details (reference to JSON file)
const contextPackage = JSON.parse(Read(contextPath));
contextPackage.conflict_detection.conflict_risk = "resolved";
contextPackage.conflict_detection.resolution_file = resolutionPath; // Reference to detailed JSON
contextPackage.conflict_detection.resolved_conflicts = selectedStrategies.map(s => s.conflict_id);
contextPackage.conflict_detection.custom_conflicts = customConflicts.map(c => c.id);
contextPackage.conflict_detection.resolved_at = new Date().toISOString();
Write(contextPath, JSON.stringify(contextPackage, null, 2));
@@ -582,12 +629,50 @@ return {
✓ Agent log saved to .workflow/active/{session_id}/.chat/
```
## Output Format
### Primary Output: conflict-resolution.json
**Path**: `.workflow/active/{session_id}/.process/conflict-resolution.json`
**Schema**:
```json
{
"session_id": "WFS-xxx",
"resolved_at": "ISO timestamp",
"summary": {
"total_conflicts": 3,
"resolved_with_strategy": 2,
"custom_handling": 1,
"fallback_constraints": 0
},
"resolved_conflicts": [
{
"conflict_id": "CON-001",
"strategy_name": "strategy name",
"strategy_approach": "implementation approach",
"clarifications": [],
"modifications_applied": []
}
],
"custom_conflicts": [
{
"id": "CON-002",
"brief": "conflict summary",
"category": "ModuleOverlap",
"suggestions": ["suggestion 1", "suggestion 2"],
"overlap_analysis": null
}
],
"planning_constraints": [],
"failed_modifications": []
}
```
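Downstream steps only need the summary counts from this file. A sketch of extracting one field without `jq` (the JSON is stubbed inline here; a real caller would read `.process/conflict-resolution.json`, and the sed pattern assumes the compact key layout shown above):

```bash
# Pull total_conflicts out of a conflict-resolution.json document.
json='{"session_id":"WFS-xxx","summary":{"total_conflicts":3,"resolved_with_strategy":2,"custom_handling":1,"fallback_constraints":0}}'
total=$(printf '%s' "$json" | sed -n 's/.*"total_conflicts":\([0-9]*\).*/\1/p')
echo "total conflicts: $total"
# total conflicts: 3
```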
### Secondary: Agent JSON Response (stdout)
**Focus**: Structured conflict data with actionable modifications for programmatic processing.
**Structure**: Defined in Phase 2, Step 4 (agent prompt)
### Key Requirements
@@ -635,11 +720,12 @@ If Edit tool fails mid-application:
- Requires: `conflict_risk ≥ medium`
**Output**:
- Generated file:
- `.workflow/active/{session_id}/.process/conflict-resolution.json` (primary output)
- Modified files (if exist):
- `.workflow/active/{session_id}/.brainstorm/guidance-specification.md`
- `.workflow/active/{session_id}/.brainstorm/{role}/analysis.md`
- `.workflow/active/{session_id}/.process/context-package.json` (conflict_risk → resolved, resolution_file reference)
**User Interaction**:
- **Iterative conflict processing**: One conflict at a time, not in batches
@@ -667,7 +753,7 @@ If Edit tool fails mid-application:
✓ guidance-specification.md updated with resolved conflicts
✓ Role analyses (*.md) updated with resolved conflicts
✓ context-package.json marked as "resolved" with clarification records
✓ conflict-resolution.json generated with full resolution details
✓ Modification summary includes:
- Total conflicts
- Resolved with strategy (count)

View File

@@ -89,6 +89,14 @@ Phase 3: Integration (+1 Coordinator, Multi-Module Only)
3. **Auto Module Detection** (determines single vs parallel mode):
```javascript
function autoDetectModules(contextPackage, projectRoot) {
// === Complexity Gate: Only parallelize for High complexity ===
const complexity = contextPackage.metadata?.complexity || 'Medium';
if (complexity !== 'High') {
// Force single agent mode for Low/Medium complexity
// This maximizes agent context reuse for related tasks
return [{ name: 'main', prefix: '', paths: ['.'] }];
}
// Priority 1: Explicit frontend/backend separation
if (exists('src/frontend') && exists('src/backend')) {
return [
@@ -112,8 +120,9 @@ Phase 3: Integration (+1 Coordinator, Multi-Module Only)
```
**Decision Logic**:
- `complexity !== 'High'` → Force Phase 2A (Single Agent, maximize context reuse)
- `modules.length == 1` → Phase 2A (Single Agent, original flow)
- `modules.length >= 2 && complexity == 'High'` → Phase 2B + Phase 3 (N+1 Parallel)
**Note**: CLI tool usage is now determined semantically by action-planning-agent based on user's task description, not by flags.
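The decision logic above reduces to a two-input check. A shell sketch for illustration (`choose_phase` is hypothetical; in the real flow, complexity comes from context-package.json metadata and the module count from `autoDetectModules`):

```bash
# Select the planning phase from complexity and detected module count.
choose_phase() {
  complexity="$1"
  module_count="$2"
  # Anything below High complexity, or a single module, stays on the single-agent path.
  if [ "$complexity" != "High" ] || [ "$module_count" -eq 1 ]; then
    echo "Phase 2A (single agent)"
  else
    echo "Phase 2B + Phase 3 (N+1 parallel)"
  fi
}

choose_phase High 3
# Phase 2B + Phase 3 (N+1 parallel)
```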
@@ -163,6 +172,13 @@ Determine CLI tool usage per-step based on user's task description:
- Use aggregated_insights.all_integration_points for precise modification locations
- Use conflict_indicators for risk-aware task sequencing
## CONFLICT RESOLUTION CONTEXT (if exists)
- Check context-package.conflict_detection.resolution_file for conflict-resolution.json path
- If exists, load .process/conflict-resolution.json:
- Apply planning_constraints as task constraints (for brainstorm-less workflows)
- Reference resolved_conflicts for implementation approach alignment
- Handle custom_conflicts with explicit task notes
## EXPECTED DELIVERABLES ## EXPECTED DELIVERABLES
1. Task JSON Files (.task/IMPL-*.json) 1. Task JSON Files (.task/IMPL-*.json)
- 6-field schema (id, title, status, context_package_path, meta, context, flow_control) - 6-field schema (id, title, status, context_package_path, meta, context, flow_control)
@@ -203,7 +219,9 @@ Hard Constraints:
 **Condition**: `modules.length >= 2` (multi-module detected)
-**Purpose**: Launch N action-planning-agents simultaneously, one per module, for parallel task generation.
+**Purpose**: Launch N action-planning-agents simultaneously, one per module, for parallel task JSON generation.
+**Note**: Phase 2B agents generate Task JSONs ONLY. IMPL_PLAN.md and TODO_LIST.md are generated by Phase 3 Coordinator.
 **Parallel Agent Invocation**:
 ```javascript
@@ -211,27 +229,49 @@ Hard Constraints:
 const planningTasks = modules.map(module =>
   Task(
     subagent_type="action-planning-agent",
-    description=`Plan ${module.name} module`,
+    description=`Generate ${module.name} module task JSONs`,
     prompt=`
-## SCOPE
+## TASK OBJECTIVE
+Generate task JSON files for ${module.name} module within workflow session
+IMPORTANT: This is PLANNING ONLY - generate task JSONs, NOT implementing code.
+IMPORTANT: Generate Task JSONs ONLY. IMPL_PLAN.md and TODO_LIST.md by Phase 3 Coordinator.
+CRITICAL: Follow progressive loading strategy in agent specification
+## MODULE SCOPE
 - Module: ${module.name} (${module.type})
 - Focus Paths: ${module.paths.join(', ')}
 - Task ID Prefix: IMPL-${module.prefix}
 - Task Limit: ≤9 tasks
 - Other Modules: ${otherModules.join(', ')}
-- Cross-module deps format: "CROSS::{module}::{pattern}"
 ## SESSION PATHS
 Input:
+- Session Metadata: .workflow/active/{session-id}/workflow-session.json
 - Context Package: .workflow/active/{session-id}/.process/context-package.json
 Output:
 - Task Dir: .workflow/active/{session-id}/.task/
-## INSTRUCTIONS
-- Generate tasks ONLY for ${module.name} module
-- Use task ID format: IMPL-${module.prefix}1, IMPL-${module.prefix}2, ...
-- For cross-module dependencies, use: depends_on: ["CROSS::B::api-endpoint"]
-- Maximum 9 tasks per module
+## CONTEXT METADATA
+Session ID: {session-id}
+MCP Capabilities: {exa_code, exa_web, code_index}
+## CROSS-MODULE DEPENDENCIES
+- Use placeholder: depends_on: ["CROSS::{module}::{pattern}"]
+- Example: depends_on: ["CROSS::B::api-endpoint"]
+- Phase 3 Coordinator resolves to actual task IDs
+## EXPECTED DELIVERABLES
+Task JSON Files (.task/IMPL-${module.prefix}*.json):
+- 6-field schema per agent specification
+- Task ID format: IMPL-${module.prefix}1, IMPL-${module.prefix}2, ...
+- Focus ONLY on ${module.name} module scope
+## SUCCESS CRITERIA
+- Task JSONs saved to .task/ with IMPL-${module.prefix}* naming
+- Cross-module dependencies use CROSS:: placeholder format
+- Return task count and brief summary
     `
   )
 );
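The `CROSS::{module}::{pattern}` placeholder format that the prompt mandates can be recognized with a small parser (an illustrative sketch; the `parseCrossDependency` name is not part of the agent contract):

```javascript
// Sketch: split a "CROSS::{module}::{pattern}" placeholder into its parts.
// The regex mirrors the placeholder format described in the prompt above.
function parseCrossDependency(dep) {
  const m = dep.match(/^CROSS::(\w+)::(.+)$/);
  return m ? { module: m[1], pattern: m[2] } : null;
}

console.log(parseCrossDependency('CROSS::B::api-endpoint'));
// { module: 'B', pattern: 'api-endpoint' }
console.log(parseCrossDependency('IMPL-A1'));
// null (already a concrete task ID, nothing to resolve)
```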
@@ -255,37 +295,78 @@ await Promise.all(planningTasks);
 - Prefix: A, B, C... (assigned by detection order)
 - Sequence: 1, 2, 3... (per-module increment)
-### Phase 3: Integration (+1 Coordinator, Multi-Module Only)
+### Phase 3: Integration (+1 Coordinator Agent, Multi-Module Only)
 **Condition**: Only executed when `modules.length >= 2`
-**Purpose**: Collect all module tasks, resolve cross-module dependencies, generate unified documents.
+**Purpose**: Collect all module tasks, resolve cross-module dependencies, generate unified IMPL_PLAN.md and TODO_LIST.md documents.
-**Integration Logic**:
+**Coordinator Agent Invocation**:
 ```javascript
-// 1. Collect all module task JSONs
-const allTasks = glob('.task/IMPL-*.json').map(loadJson);
-// 2. Resolve cross-module dependencies
-for (const task of allTasks) {
-  if (task.depends_on) {
-    task.depends_on = task.depends_on.map(dep => {
-      if (dep.startsWith('CROSS::')) {
-        // CROSS::B::api-endpoint → find matching IMPL-B* task
-        const [, targetModule, pattern] = dep.match(/CROSS::(\w+)::(.+)/);
-        return findTaskByModuleAndPattern(allTasks, targetModule, pattern);
-      }
-      return dep;
-    });
-  }
-}
-// 3. Generate unified IMPL_PLAN.md (grouped by module)
-generateIMPL_PLAN(allTasks, modules);
-// 4. Generate TODO_LIST.md (hierarchical structure)
-generateTODO_LIST(allTasks, modules);
+// Wait for all Phase 2B agents to complete
+const moduleResults = await Promise.all(planningTasks);
+// Launch +1 Coordinator Agent
+Task(
+  subagent_type="action-planning-agent",
+  description="Integrate module tasks and generate unified documents",
+  prompt=`
+## TASK OBJECTIVE
+Integrate all module task JSONs, resolve cross-module dependencies, and generate unified IMPL_PLAN.md and TODO_LIST.md
+IMPORTANT: This is INTEGRATION ONLY - consolidate existing task JSONs, NOT creating new tasks.
+## SESSION PATHS
+Input:
+- Session Metadata: .workflow/active/{session-id}/workflow-session.json
+- Context Package: .workflow/active/{session-id}/.process/context-package.json
+- Task JSONs: .workflow/active/{session-id}/.task/IMPL-*.json (from Phase 2B)
+Output:
+- Updated Task JSONs: .workflow/active/{session-id}/.task/IMPL-*.json (resolved dependencies)
+- IMPL_PLAN: .workflow/active/{session-id}/IMPL_PLAN.md
+- TODO_LIST: .workflow/active/{session-id}/TODO_LIST.md
+## CONTEXT METADATA
+Session ID: {session-id}
+Modules: ${modules.map(m => m.name + '(' + m.prefix + ')').join(', ')}
+Module Count: ${modules.length}
+## INTEGRATION STEPS
+1. Collect all .task/IMPL-*.json, group by module prefix
+2. Resolve CROSS:: dependencies → actual task IDs, update task JSONs
+3. Generate IMPL_PLAN.md (multi-module format per agent specification)
+4. Generate TODO_LIST.md (hierarchical format per agent specification)
+## CROSS-MODULE DEPENDENCY RESOLUTION
+- Pattern: CROSS::{module}::{pattern} → IMPL-{module}* matching title/context
+- Example: CROSS::B::api-endpoint → IMPL-B1 (if B1 title contains "api-endpoint")
+- Log unresolved as warnings
+## EXPECTED DELIVERABLES
+1. Updated Task JSONs with resolved dependency IDs
+2. IMPL_PLAN.md - multi-module format with cross-dependency section
+3. TODO_LIST.md - hierarchical by module with cross-dependency section
+## SUCCESS CRITERIA
+- No CROSS:: placeholders remaining in task JSONs
+- IMPL_PLAN.md and TODO_LIST.md generated with multi-module structure
+- Return: task count, per-module breakdown, resolved dependency count
+`
+)
 ```
-**Note**: IMPL_PLAN.md and TODO_LIST.md structure definitions are in `action-planning-agent.md`.
+**Dependency Resolution Algorithm**:
+```javascript
+function resolveCrossModuleDependency(placeholder, allTasks) {
+  const [, targetModule, pattern] = placeholder.match(/CROSS::(\w+)::(.+)/);
+  const candidates = allTasks.filter(t =>
+    t.id.startsWith(`IMPL-${targetModule}`) &&
+    (t.title.toLowerCase().includes(pattern.toLowerCase()) ||
+     t.context?.description?.toLowerCase().includes(pattern.toLowerCase()))
+  );
+  return candidates.length > 0
+    ? candidates.sort((a, b) => a.id.localeCompare(b.id))[0].id
+    : placeholder; // Keep for manual resolution
+}
+```
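A quick usage sketch of the resolver: with two matching `B`-module tasks, the lowest-numbered ID wins; an unmatched placeholder is kept for manual resolution. Task objects here are assumed examples, and the function is repeated so the snippet runs standalone:

```javascript
// Repeated from the Dependency Resolution Algorithm above for self-containment.
function resolveCrossModuleDependency(placeholder, allTasks) {
  const [, targetModule, pattern] = placeholder.match(/CROSS::(\w+)::(.+)/);
  const candidates = allTasks.filter(t =>
    t.id.startsWith(`IMPL-${targetModule}`) &&
    (t.title.toLowerCase().includes(pattern.toLowerCase()) ||
     t.context?.description?.toLowerCase().includes(pattern.toLowerCase()))
  );
  return candidates.length > 0
    ? candidates.sort((a, b) => a.id.localeCompare(b.id))[0].id
    : placeholder; // keep placeholder for manual resolution
}

// Assumed example tasks, not real session data
const tasks = [
  { id: 'IMPL-B2', title: 'Add api-endpoint auth' },
  { id: 'IMPL-B1', title: 'Create api-endpoint for stats' },
  { id: 'IMPL-A1', title: 'Build dashboard view' },
];

console.log(resolveCrossModuleDependency('CROSS::B::api-endpoint', tasks)); // IMPL-B1
console.log(resolveCrossModuleDependency('CROSS::C::missing', tasks));      // CROSS::C::missing
```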

View File

@@ -152,9 +152,14 @@ Phase 2: Agent Execution (Document Generation)
 roleAnalysisPaths.forEach(path => Read(path));
 ```
-5. **Load Conflict Resolution** (from context-package.json, if exists)
+5. **Load Conflict Resolution** (from conflict-resolution.json, if exists)
 ```javascript
-if (contextPackage.brainstorm_artifacts.conflict_resolution?.exists) {
+// Check for new conflict-resolution.json format
+if (contextPackage.conflict_detection?.resolution_file) {
+  Read(contextPackage.conflict_detection.resolution_file)  // .process/conflict-resolution.json
+}
+// Fallback: legacy brainstorm_artifacts path
+else if (contextPackage.brainstorm_artifacts?.conflict_resolution?.exists) {
   Read(contextPackage.brainstorm_artifacts.conflict_resolution.path)
 }
 ```
@@ -223,7 +228,7 @@ If conflict_risk was medium/high, modifications have been applied to:
 - **guidance-specification.md**: Design decisions updated to resolve conflicts
 - **Role analyses (*.md)**: Recommendations adjusted for compatibility
 - **context-package.json**: Marked as "resolved" with conflict IDs
-- NO separate CONFLICT_RESOLUTION.md file (conflicts resolved in-place)
+- Conflict resolution results stored in conflict-resolution.json
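A plausible shape for `.process/conflict-resolution.json`, inferred only from the field names this document references (`planning_constraints`, `resolved_conflicts`, `custom_conflicts`); the real schema may differ:

```javascript
// Assumed example of the conflict-resolution.json payload.
// Field names come from this document; contents are hypothetical.
const conflictResolution = {
  planning_constraints: ['Keep session API backward compatible'],
  resolved_conflicts: [
    { id: 'CONF-1', resolution: 'Prefer ccw session commands over manual metadata edits' },
  ],
  custom_conflicts: [], // unresolved items needing explicit task notes
};

console.log(JSON.stringify(conflictResolution, null, 2));
```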
 ### MCP Analysis Results (Optional)
 **Code Structure**: {mcp_code_index_results}
@@ -373,10 +378,12 @@ const agentContext = {
     .flatMap(role => role.files)
     .map(file => Read(file.path)),
-  // Load conflict resolution if exists (from context package)
-  conflict_resolution: brainstorm_artifacts.conflict_resolution?.exists
-    ? Read(brainstorm_artifacts.conflict_resolution.path)
-    : null,
+  // Load conflict resolution if exists (prefer new JSON format)
+  conflict_resolution: context_package.conflict_detection?.resolution_file
+    ? Read(context_package.conflict_detection.resolution_file)  // .process/conflict-resolution.json
+    : (brainstorm_artifacts?.conflict_resolution?.exists
+        ? Read(brainstorm_artifacts.conflict_resolution.path)
+        : null),
   // Optional MCP enhancements
   mcp_analysis: executeMcpDiscovery()
@@ -408,7 +415,7 @@ This section provides quick reference for TDD task JSON structure. For complete
 │   ├── IMPL-3.2.json              # Complex feature subtask (if needed)
 │   └── ...
 └── .process/
-    ├── CONFLICT_RESOLUTION.md       # Conflict resolution strategies (if conflict_risk ≥ medium)
+    ├── conflict-resolution.json     # Conflict resolution results (if conflict_risk ≥ medium)
     ├── test-context-package.json    # Test coverage analysis
     ├── context-package.json         # Input from context-gather
     ├── context_package_path         # Path to smart context package

View File

@@ -89,7 +89,7 @@ Template: ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.t
 ## EXECUTION STEPS
 1. Execute Gemini analysis:
-   cd .workflow/active/{test_session_id}/.process && gemini -p "$(cat ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt)" --approval-mode yolo
+   ccw cli exec "$(cat ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt)" --tool gemini --mode write --cd .workflow/active/{test_session_id}/.process
 2. Generate TEST_ANALYSIS_RESULTS.md:
    Synthesize gemini-test-analysis.md into standardized format for task generation
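The migration pattern in this hunk — `cd <dir> && gemini -p "<prompt>"` becoming `ccw cli exec "<prompt>" --tool gemini --cd <dir>` — can be sketched as a command builder. The `buildCcwExec` helper is illustrative only; the `--tool`, `--mode`, and `--cd` flags are taken from the diff, not from a full `ccw` reference:

```javascript
// Sketch: assemble the new-style ccw invocation from its parts,
// mirroring the replacement shown in this hunk.
function buildCcwExec(prompt, { tool, mode, cd }) {
  const parts = ['ccw cli exec', JSON.stringify(prompt), '--tool', tool];
  if (mode) parts.push('--mode', mode);
  if (cd) parts.push('--cd', cd);
  return parts.join(' ');
}

console.log(buildCcwExec('analyze tests', { tool: 'gemini', mode: 'write', cd: '.process' }));
// ccw cli exec "analyze tests" --tool gemini --mode write --cd .process
```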

View File

@@ -180,14 +180,14 @@ Task(subagent_type="ui-design-agent",
 - Pattern: rg → Extract values → Compare → If different → Read full context with comments → Record conflict
 - Alternative (if many files): Execute CLI analysis for comprehensive report:
   \`\`\`bash
-  cd ${source} && gemini -p \"
+  ccw cli exec \"
   PURPOSE: Detect color token conflicts across all CSS/SCSS/JS files
   TASK: • Scan all files for color definitions • Identify conflicting values • Extract semantic comments
   MODE: analysis
   CONTEXT: @**/*.css @**/*.scss @**/*.js @**/*.ts
   EXPECTED: JSON report listing conflicts with file:line, values, semantic context
   RULES: Focus on core tokens | Report ALL variants | analysis=READ-ONLY
-  \"
+  \" --tool gemini --cd ${source}
   \`\`\`
 **Step 1: Load file list**
@@ -295,14 +295,14 @@ Task(subagent_type="ui-design-agent",
 - Pattern: rg → Identify animation types → Map framework usage → Prioritize extraction targets
 - Alternative (if complex framework mix): Execute CLI analysis for comprehensive report:
   \`\`\`bash
-  cd ${source} && gemini -p \"
+  ccw cli exec \"
   PURPOSE: Detect animation frameworks and patterns
   TASK: • Identify frameworks • Map animation patterns • Categorize by complexity
   MODE: analysis
   CONTEXT: @**/*.css @**/*.scss @**/*.js @**/*.ts
   EXPECTED: JSON report listing frameworks, animation types, file locations
   RULES: Focus on framework consistency | Map all animations | analysis=READ-ONLY
-  \"
+  \" --tool gemini --cd ${source}
   \`\`\`
 **Step 1: Load file list**
@@ -374,14 +374,14 @@ Task(subagent_type="ui-design-agent",
 - Pattern: rg → Count occurrences → Classify by frequency → Prioritize universal components
 - Alternative (if large codebase): Execute CLI analysis for comprehensive categorization:
   \`\`\`bash
-  cd ${source} && gemini -p \"
+  ccw cli exec \"
   PURPOSE: Classify components as universal vs specialized
   TASK: • Identify UI components • Classify reusability • Map layout systems
   MODE: analysis
   CONTEXT: @**/*.css @**/*.scss @**/*.js @**/*.ts @**/*.html
   EXPECTED: JSON report categorizing components, layout patterns, naming conventions
   RULES: Focus on component reusability | Identify layout systems | analysis=READ-ONLY
-  \"
+  \" --tool gemini --cd ${source}
   \`\`\`
 **Step 1: Load file list**

View File

@@ -6,12 +6,12 @@ import { fileURLToPath } from 'url';
 const __filename = fileURLToPath(import.meta.url);
 const __dirname = dirname(__filename);
-// Bundled template paths
-const UNIFIED_TEMPLATE = join(__dirname, '../templates/dashboard.html');
-const JS_FILE = join(__dirname, '../templates/dashboard.js');
-const MODULE_CSS_DIR = join(__dirname, '../templates/dashboard-css');
-const WORKFLOW_TEMPLATE = join(__dirname, '../templates/workflow-dashboard.html');
-const REVIEW_TEMPLATE = join(__dirname, '../templates/review-cycle-dashboard.html');
+// Bundled template paths (from dist/core/ -> src/templates/)
+const UNIFIED_TEMPLATE = join(__dirname, '../../src/templates/dashboard.html');
+const JS_FILE = join(__dirname, '../../src/templates/dashboard.js');
+const MODULE_CSS_DIR = join(__dirname, '../../src/templates/dashboard-css');
+const WORKFLOW_TEMPLATE = join(__dirname, '../../src/templates/workflow-dashboard.html');
+const REVIEW_TEMPLATE = join(__dirname, '../../src/templates/review-cycle-dashboard.html');
 // Modular CSS files in load order
 const MODULE_CSS_FILES = [

View File

@@ -36,11 +36,11 @@ function getEnterpriseMcpPath(): string {
 // WebSocket clients for real-time notifications
 const wsClients = new Set();
-const TEMPLATE_PATH = join(import.meta.dirname, '../templates/dashboard.html');
-const MODULE_CSS_DIR = join(import.meta.dirname, '../templates/dashboard-css');
-const JS_FILE = join(import.meta.dirname, '../templates/dashboard.js');
-const MODULE_JS_DIR = join(import.meta.dirname, '../templates/dashboard-js');
-const ASSETS_DIR = join(import.meta.dirname, '../templates/assets');
+const TEMPLATE_PATH = join(import.meta.dirname, '../../src/templates/dashboard.html');
+const MODULE_CSS_DIR = join(import.meta.dirname, '../../src/templates/dashboard-css');
+const JS_FILE = join(import.meta.dirname, '../../src/templates/dashboard.js');
+const MODULE_JS_DIR = join(import.meta.dirname, '../../src/templates/dashboard-js');
+const ASSETS_DIR = join(import.meta.dirname, '../../src/templates/assets');
 // Modular CSS files in load order
 const MODULE_CSS_FILES = [