feat: initialize monorepo with package.json for CCW workflow platform

This commit is contained in:
catlog22
2026-02-03 14:42:20 +08:00
parent 5483a72e9f
commit 39b80b3386
267 changed files with 99597 additions and 2658 deletions


@@ -0,0 +1,146 @@
---
title: /cli:cli-init
sidebar_label: /cli:cli-init
sidebar_position: 1
description: Initialize CLI configuration for workspace with automatic technology detection
---
# /cli:cli-init
Initialize CLI tool configurations for the workspace by analyzing project structure and generating optimized configuration files.
## Overview
The `/cli:cli-init` command analyzes your workspace to detect technology stacks and automatically generates CLI tool configurations for Gemini and Qwen with optimized ignore patterns.
**Supported Tools**: gemini, qwen, all (default)
## Features
- **Automatic Technology Detection** - Analyzes project structure to identify tech stacks
- **Smart Ignore Rules** - Generates filtering patterns optimized for detected technologies
- **Configuration Generation** - Creates tool-specific settings files
- **Multi-Tool Support** - Configure Gemini, Qwen, or both simultaneously
## Usage
```bash
# Initialize all CLI tools (Gemini + Qwen)
/cli:cli-init
# Initialize only Gemini
/cli:cli-init --tool gemini
# Initialize only Qwen
/cli:cli-init --tool qwen
# Preview what would be generated
/cli:cli-init --preview
# Generate in subdirectory
/cli:cli-init --output=.config/
```
## Generated Files
### Configuration Directories
**For Gemini** (`.gemini/`):
- `.gemini/settings.json` - Context file configuration
**For Qwen** (`.qwen/`):
- `.qwen/settings.json` - Context file configuration
### Ignore Files
- `.geminiignore` - File filtering patterns for Gemini CLI
- `.qwenignore` - File filtering patterns for Qwen CLI
Both ignore files use gitignore syntax and include technology-specific rules.
## Supported Technology Stacks
### Frontend Technologies
- **React/Next.js** - Ignores build artifacts, .next/, node_modules
- **Vue/Nuxt** - Ignores .nuxt/, dist/, .cache/
- **Angular** - Ignores dist/, .angular/, node_modules
- **Webpack/Vite** - Ignores build outputs, cache directories
### Backend Technologies
- **Node.js** - Ignores node_modules/, npm-debug.log*, package-lock.json
- **Python** - Ignores __pycache__/, *.py[cod], .venv/, venv/
- **Java** - Ignores target/, .gradle/, *.class, *.jar
- **Go** - Ignores vendor/, *.exe, *.test
- **Rust** - Ignores target/, Cargo.lock
### Configuration Files
Both `.gemini/settings.json` and `.qwen/settings.json` are configured with:
```json
{
  "contextFileName": ["CLAUDE.md"]
}
```
## Command Options
| Option | Description | Default |
|--------|-------------|---------|
| `--tool <name>` | Tool to configure (gemini, qwen, all) | all |
| `--output <path>` | Custom output directory | Current directory |
| `--preview` | Show what would be generated without creating files | false |
## Execution Flow
1. **Workspace Analysis** - Runs `get_modules_by_depth` to analyze project structure
2. **Technology Detection** - Identifies tech stacks based on files and directories
3. **Configuration Generation** - Creates tool-specific configuration directories
4. **Ignore Rules Generation** - Creates organized ignore files with detected patterns
5. **File Creation** - Writes all configuration files to disk
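The technology-detection step above can be illustrated as a lookup from indicator files to detected stacks and their ignore rules. The following is a minimal hypothetical sketch; the indicator map, function name, and rule lists are illustrative, not the actual CCW implementation:

```javascript
// Hypothetical sketch: map indicator files to detected stacks and ignore rules.
const INDICATORS = {
  'package.json':   { stack: 'Node.js',       ignores: ['node_modules/', 'npm-debug.log*'] },
  'next.config.js': { stack: 'React/Next.js', ignores: ['.next/'] },
  'pyproject.toml': { stack: 'Python',        ignores: ['__pycache__/', '.venv/'] },
  'Cargo.toml':     { stack: 'Rust',          ignores: ['target/'] },
  'go.mod':         { stack: 'Go',            ignores: ['vendor/', '*.test'] }
};

// Given the files found in the workspace root, return detected stacks
// and the union of their ignore patterns (deduplicated).
function detectTechnologies(files) {
  const stacks = [];
  const ignores = new Set();
  for (const file of files) {
    const hit = INDICATORS[file];
    if (hit) {
      stacks.push(hit.stack);
      hit.ignores.forEach((p) => ignores.add(p));
    }
  }
  return { stacks, ignores: [...ignores] };
}

const result = detectTechnologies(['package.json', 'next.config.js', 'src/index.ts']);
console.log(result.stacks.join(', '));
```

The union of the resulting patterns would then be written to `.geminiignore` and `.qwenignore` in gitignore syntax.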
## Examples
### Basic Project Setup
```bash
# Initialize all CLI tools
/cli:cli-init
# Output:
# Creating .gemini/ directory
# Creating .qwen/ directory
# Generating .geminiignore with 45 rules
# Generating .qwenignore with 45 rules
# Technology detected: Node.js, TypeScript, React
```
### Technology Migration
```bash
# After adding Docker to project
/cli:cli-init
# Regenerates all config and ignore files with new Docker rules
```
### Tool-Specific Setup
```bash
# Gemini-only workflow
/cli:cli-init --tool gemini
# Creates only .gemini/ and .geminiignore
```
## Related Commands
- **/cli:codex-review** - Code review using Codex CLI
- **/memory:update-full** - Full project memory update
- **/workflow:lite-execute** - Quick execution workflow
## Notes
- Backs up existing configuration files before overwriting
- Auto-detects technology stacks from common indicators
- Generated ignore patterns optimize CLI tool performance
- Safe to run multiple times; existing configurations are backed up before regeneration


@@ -0,0 +1,208 @@
---
title: /cli:codex-review
sidebar_label: /cli:codex-review
sidebar_position: 2
description: Interactive code review using Codex CLI with configurable review targets
---
# /cli:codex-review
Interactive code review command that invokes `codex review` via CCW CLI endpoint with guided parameter selection.
## Overview
The `/cli:codex-review` command provides an interface to Codex's powerful code review capabilities, supporting multiple review targets and customizable review parameters.
## Review Parameters
| Parameter | Description |
|-----------|-------------|
| `[PROMPT]` | Custom review instructions (positional) |
| `-c model=<model>` | Override model via config |
| `--uncommitted` | Review staged, unstaged, and untracked changes |
| `--base <BRANCH>` | Review changes against base branch |
| `--commit <SHA>` | Review changes introduced by a commit |
| `--title <TITLE>` | Optional commit title for review summary |
## Usage
### Direct Execution (No Interaction)
```bash
# Review uncommitted changes
/cli:codex-review --uncommitted
# Review against main branch
/cli:codex-review --base main
# Review specific commit
/cli:codex-review --commit abc123
# Review with custom model
/cli:codex-review --uncommitted --model o3
# Review with security focus
/cli:codex-review --uncommitted security
# Full options
/cli:codex-review --base main --model o3 --title "Auth Feature" security
```
### Interactive Mode
```bash
# Start interactive selection (guided flow)
/cli:codex-review
```
## Review Targets
| Target | Description | Use Case |
|--------|-------------|----------|
| **Uncommitted** | Reviews staged, unstaged, and untracked changes | Quick pre-commit review |
| **Base Branch** | Reviews changes against specified branch | PR review, branch comparison |
| **Commit** | Reviews changes introduced by specific commit | Historical commit review |
## Focus Areas
| Focus | Description | Key Checks |
|-------|-------------|------------|
| **General** | Comprehensive review | Correctness, style, bugs, documentation |
| **Security** | Security-first review | Injection vulnerabilities, auth issues, validation, data exposure |
| **Performance** | Optimization review | Complexity analysis, memory usage, query optimization, caching |
| **Code Quality** | Maintainability review | SOLID principles, code duplication, naming conventions, test coverage |
## Execution Flow
### Interactive Mode
1. **Select Review Target** - Choose uncommitted, base branch, or commit
2. **Select Focus Area** - Choose general, security, performance, or code quality
3. **Configure Options** - Set model, title, and custom instructions
4. **Execute Review** - Runs Codex review with selected parameters
5. **Display Results** - Shows structured review findings
### Command Construction
```bash
# Base structure
ccw cli -p "<PROMPT>" --tool codex --mode review [OPTIONS]
# Example with custom prompt
ccw cli -p "
PURPOSE: Comprehensive code review to identify issues and improve quality
TASK: • Review correctness and logic • Check standards compliance • Identify bugs
MODE: review
CONTEXT: Reviewing uncommitted changes
CONSTRAINTS: Actionable feedback
" --tool codex --mode review --rule analysis-review-code-quality
# Example with target flag only
ccw cli --tool codex --mode review --uncommitted
```
## Prompt Template
Follow the standard CCW CLI prompt template:
```bash
PURPOSE: [what] + [why] + [success criteria]
TASK: • [step 1] • [step 2] • [step 3]
MODE: review
CONTEXT: [target description] | Memory: [project context]
EXPECTED: [deliverable format] + [quality criteria]
CONSTRAINTS: [domain constraints]
```
## Validation Constraints
**IMPORTANT**: Target flags (`--uncommitted`, `--base`, `--commit`) and custom prompt are **mutually exclusive**.
### Valid Combinations
| Command | Result |
|---------|--------|
| `codex review "Focus on security"` | ✓ Custom prompt, reviews uncommitted (default) |
| `codex review --uncommitted` | ✓ No prompt, uses default review |
| `codex review --base main` | ✓ No prompt, uses default review |
| `codex review --commit abc123` | ✓ No prompt, uses default review |
### Invalid Combinations
| Command | Result |
|---------|--------|
| `codex review --uncommitted "prompt"` | ✗ Mutually exclusive |
| `codex review --base main "prompt"` | ✗ Mutually exclusive |
| `codex review --commit abc123 "prompt"` | ✗ Mutually exclusive |
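The exclusivity constraint can be enforced with a simple argument check before invoking codex. A hypothetical sketch (not the actual codex argument parser):

```javascript
// Hypothetical sketch: reject a custom prompt combined with any target flag.
function validateReviewArgs({ prompt, uncommitted, base, commit }) {
  const hasTargetFlag = Boolean(uncommitted || base || commit);
  if (prompt && hasTargetFlag) {
    return { ok: false, error: 'Custom prompt and target flags are mutually exclusive' };
  }
  return { ok: true };
}

console.log(validateReviewArgs({ prompt: 'Focus on security' }).ok); // prompt only: valid
console.log(validateReviewArgs({ uncommitted: true }).ok);           // flag only: valid
console.log(validateReviewArgs({ prompt: 'x', base: 'main' }));      // both: rejected
```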
## Error Handling
### No Changes to Review
```
No changes found for review target. Suggestions:
- For --uncommitted: Make some code changes first
- For --base: Ensure branch exists and has diverged
- For --commit: Verify commit SHA exists
```
### Invalid Branch
```bash
# Show available branches
git branch -a --list | head -20
```
### Invalid Commit
```bash
# Show recent commits
git log --oneline -10
```
## Examples
### Pre-Commit Review
```bash
# Quick review before committing
/cli:codex-review --uncommitted
# Output:
# Reviewing 3 files with 45 changes
# - src/auth/login.ts: 2 issues found
# - src/user/profile.ts: 1 issue found
# - tests/auth.test.ts: No issues
```
### Branch Comparison
```bash
# Review feature branch against main
/cli:codex-review --base feature-auth
# Shows all differences between branches
```
### Security-Focused Review
```bash
# Security review of uncommitted changes
/cli:codex-review --uncommitted "Focus on security vulnerabilities, injection risks, authentication issues"
# Prioritizes security-related findings
```
## Related Commands
- **/cli:cli-init** - Initialize CLI configuration
- **/workflow:review-session-cycle** - Session-based code review
- **/workflow:review-module-cycle** - Module-specific code review
## Integration Notes
- Uses `ccw cli --tool codex --mode review` endpoint
- Model passed via prompt (codex uses `-c model=` internally)
- Target flags passed through to codex
- Prompt follows standard CCW CLI template format
- Results include severity levels, file:line references, and remediation suggestions


@@ -0,0 +1,292 @@
---
title: /ccw-coordinator
sidebar_label: /ccw-coordinator
sidebar_position: 4
description: Generic command orchestration tool for CCW workflows
---
# /ccw-coordinator
Generic command orchestration tool - analyzes requirements, recommends command chains, and executes sequentially with state persistence.
## Overview
The `/ccw-coordinator` command is a generic orchestrator that can handle any CCW workflow by analyzing task requirements and recommending optimal command chains.
**Parameters**:
- `[task description]`: Task to orchestrate (required)
**Execution Model**: Pseudocode guidance - Claude intelligently executes each phase based on context.
## Core Concept: Minimum Execution Units
**Definition**: Commands that must execute together as an atomic group to achieve meaningful workflow milestones.
### Examples of Execution Units
| Unit Name | Commands | Purpose |
|-----------|----------|---------|
| **Quick Implementation** | `/workflow:lite-plan` → `/workflow:lite-execute` | Lightweight plan and execution |
| **Multi-CLI Planning** | `/workflow:multi-cli-plan` → `/workflow:lite-execute` | Multi-perspective analysis |
| **Bug Fix** | `/workflow:lite-fix` → `/workflow:lite-execute` | Bug diagnosis and fix |
| **Verified Planning** | `/workflow:plan` → `/workflow:plan-verify` → `/workflow:execute` | Planning with verification |
| **TDD Planning** | `/workflow:tdd-plan` → `/workflow:execute` | Test-driven development |
## Command Port Mapping
Each workflow command has defined input/output ports for intelligent routing:
```javascript
const commandPorts = {
  // Planning Commands
  'lite-plan': {
    name: 'lite-plan',
    input: ['task-description'],
    output: ['memory-plan'],
    tags: ['planning', 'lite']
  },
  'plan': {
    name: 'plan',
    input: ['task-description', 'brainstorm-artifacts'],
    output: ['impl-plan', 'tasks-json'],
    tags: ['planning', 'full']
  },
  'multi-cli-plan': {
    name: 'multi-cli-plan',
    input: ['decision-topic'],
    output: ['comparison-report', 'recommendation'],
    tags: ['planning', 'multi-cli', 'analysis']
  },
  // ... more commands
};
```
## Usage
```bash
# Let coordinator analyze and recommend
/ccw-coordinator "Implement user authentication"
# The coordinator will:
# 1. Analyze task requirements
# 2. Recommend optimal command chain
# 3. Execute commands sequentially
# 4. Track state throughout
```
## Execution Flow
### Phase 1: Analyze Requirements
```javascript
async function analyzeRequirements(taskDescription) {
  const analysis = {
    goal: extractGoal(taskDescription),
    scope: determineScope(taskDescription),
    complexity: calculateComplexity(taskDescription),
    task_type: classifyTask(taskDescription),
    constraints: identifyConstraints(taskDescription)
  };
  return analysis;
}
```
**Output**:
- Goal: What needs to be accomplished
- Scope: Affected modules/components
- Complexity: simple/medium/complex
- Task type: feature, bugfix, refactor, etc.
- Constraints: Time, resources, dependencies
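The helper functions referenced in Phase 1 can be illustrated with keyword-based heuristics. A hypothetical sketch of two of them; the keyword lists and thresholds are assumptions, not the actual implementation:

```javascript
// Hypothetical sketch of classifyTask: keyword-based task classification.
function classifyTask(text) {
  const t = text.toLowerCase();
  if (/\b(fix|bug|error|crash)\b/.test(t)) return 'bugfix';
  if (/\b(refactor|restructure)\b/.test(t)) return 'refactor';
  return 'feature';
}

// Hypothetical sketch of calculateComplexity: count coarse complexity signals
// and map the score to the simple/medium/complex scale described above.
function calculateComplexity(text) {
  const signals = ['system', 'architecture', 'multiple', 'migration', 'oauth'];
  const score = signals.filter((s) => text.toLowerCase().includes(s)).length;
  return score >= 2 ? 'complex' : score === 1 ? 'medium' : 'simple';
}

console.log(classifyTask('Fix timeout bug'), calculateComplexity('Add a button'));
```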
### Phase 2: Recommend Command Chain
```javascript
async function recommendCommandChain(analysis) {
  const { inputPort, outputPort } = determinePortFlow(analysis.task_type, analysis.constraints);
  const chain = selectChainByPorts(inputPort, outputPort, analysis);
  return chain;
}
```
**Output**:
- Selected workflow level
- Command chain with units
- Execution mode (mainprocess/async)
- Expected artifacts
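Port-based chain selection can be sketched as walking the port graph from an input to the desired output. A minimal hypothetical illustration using a reduced two-command port map (the real selector also weighs the analysis result):

```javascript
// Hypothetical sketch: pick a command chain by connecting output ports
// to input ports until the desired output port is produced.
const commandPorts = {
  'lite-plan':    { input: ['task-description'], output: ['memory-plan'] },
  'lite-execute': { input: ['memory-plan'],      output: ['implementation'] }
};

function selectChainByPorts(inputPort, outputPort) {
  const chain = [];
  const names = Object.keys(commandPorts);
  let current = inputPort;
  while (current !== outputPort) {
    const next = names.find((n) => commandPorts[n].input.includes(current));
    if (!next) return null;              // no command consumes this port
    chain.push(next);
    current = commandPorts[next].output[0];
    if (chain.length > names.length) return null; // guard against cycles
  }
  return chain;
}

console.log(selectChainByPorts('task-description', 'implementation'));
```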
### Phase 3: User Confirmation
```javascript
async function getUserConfirmation(chain) {
  const response = await AskUserQuestion({
    questions: [{
      question: 'Proceed with this command chain?',
      header: 'Confirm',
      options: [
        { label: 'Confirm and execute', description: 'Proceed with commands' },
        { label: 'Show details', description: 'View each command' },
        { label: 'Adjust chain', description: 'Remove or reorder' },
        { label: 'Cancel', description: 'Abort' }
      ]
    }]
  });
  return response;
}
```
### Phase 4: Execute Sequential Command Chain
```javascript
async function executeCommandChain(chain, analysis) {
  const sessionId = `ccw-coord-${Date.now()}`;
  const state = {
    session_id: sessionId,
    status: 'running',
    analysis: analysis,
    command_chain: chain.map((cmd, idx) => ({ ...cmd, index: idx, status: 'pending' })),
    execution_results: []
  };
  // Save initial state
  Write(`.workflow/.ccw-coordinator/${sessionId}/state.json`, JSON.stringify(state, null, 2));
  for (let i = 0; i < chain.length; i++) {
    const cmd = chain[i];
    console.log(`[${i+1}/${chain.length}] Executing: ${cmd.command}`);
    // Update status to running
    state.command_chain[i].status = 'running';
    Write(`.workflow/.ccw-coordinator/${sessionId}/state.json`, JSON.stringify(state, null, 2));
    // Build the prompt from the command and its arguments, then execute via CLI
    const prompt = `${cmd.command} ${cmd.args || ''}`.trim();
    const taskId = Bash(
      `ccw cli -p "${escapePrompt(prompt)}" --tool claude --mode write`,
      { run_in_background: true }
    ).task_id;
    // Save execution record
    state.execution_results.push({
      index: i,
      command: cmd.command,
      status: 'in-progress',
      task_id: taskId,
      timestamp: new Date().toISOString()
    });
  }
  return state;
}
```
## Command Chain Examples
### Quick Implementation Unit
```javascript
// Commands: 2 | Units: 1 (quick-impl)
const quickImplChain = [
  {
    command: '/workflow:lite-plan',
    args: '"{{goal}}"',
    unit: 'quick-impl',
    execution: { type: 'slash-command', mode: 'mainprocess' }
  },
  {
    command: '/workflow:lite-execute',
    args: '--in-memory',
    unit: 'quick-impl',
    execution: { type: 'slash-command', mode: 'async' }
  }
];
```
### Verified Planning Unit
```javascript
// Commands: 3 | Units: 1 (verified-planning-execution)
const verifiedChain = [
  {
    command: '/workflow:plan',
    args: '"{{goal}}"',
    unit: 'verified-planning-execution',
    execution: { type: 'slash-command', mode: 'mainprocess' }
  },
  {
    command: '/workflow:plan-verify',
    args: '',
    unit: 'verified-planning-execution',
    execution: { type: 'slash-command', mode: 'mainprocess' }
  },
  {
    command: '/workflow:execute',
    args: '--resume-session="{{session}}"',
    unit: 'verified-planning-execution',
    execution: { type: 'slash-command', mode: 'mainprocess' }
  }
];
```
## Parameter Patterns
| Command Type | Parameter Pattern | Example |
|--------------|------------------|---------|
| **Planning** | `"task description"` | `/workflow:plan -y "Implement OAuth2"` |
| **Execution (with plan)** | `--resume-session="WFS-xxx"` | `/workflow:execute -y --resume-session="WFS-plan-001"` |
| **Execution (standalone)** | `--in-memory` or `"task"` | `/workflow:lite-execute -y --in-memory` |
| **Session-based** | `--session="WFS-xxx"` | `/workflow:test-fix-gen -y --session="WFS-impl-001"` |
| **Fix/Debug** | `"problem description"` | `/workflow:lite-fix -y "Fix timeout bug"` |
## Examples
### Simple Feature
```bash
/ccw-coordinator "Add user profile page"
# Output:
# Analyzing: "Add user profile page"
# Complexity: Low (score: 1)
# Recommended: Level 2 - Rapid Workflow
# Command chain:
# Unit: quick-impl
# 1. /workflow:lite-plan "Add user profile page"
# 2. /workflow:lite-execute --in-memory
# Confirm? (y/n): y
#
# [1/2] Executing: /workflow:lite-plan
# [2/2] Executing: /workflow:lite-execute
# ✅ Complete! Session: ccw-coord-1738425600000
```
### Complex Feature
```bash
/ccw-coordinator "Refactor authentication system with OAuth2"
# Output:
# Analyzing: "Refactor authentication system..."
# Complexity: High (score: 6)
# Recommended: Level 3 - Standard Workflow
# Command chain:
# Unit: verified-planning-execution
# 1. /workflow:plan "Refactor authentication..."
# 2. /workflow:plan-verify
# 3. /workflow:execute --resume-session="{session}"
# Confirm? (y/n): y
```
## Related Commands
- **/ccw** - Main workflow coordinator
- **/ccw-plan** - Planning coordinator
- **/ccw-test** - Test coordinator
- **/ccw-debug** - Debug coordinator
## Notes
- **Atomic execution** - Never split minimum execution units
- **State persistence** - All state saved to `.workflow/.ccw-coordinator/`
- **User control** - Confirmation before execution
- **Context passing** - Parameters chain across commands
- **Resume support** - Can resume from state.json
- **Intelligent routing** - Port-based command matching


@@ -0,0 +1,270 @@
---
title: /ccw-debug
sidebar_label: /ccw-debug
sidebar_position: 5
description: Debug coordinator for intelligent debugging workflows
---
# /ccw-debug
Debug coordinator - analyzes issues, selects debug strategy, and executes debug workflow in the main process.
## Overview
The `/ccw-debug` command orchestrates debugging workflows by analyzing issue descriptions, selecting appropriate debug strategies, and executing targeted command chains.
**Parameters**:
- `--mode <mode>`: Debug mode (cli, debug, test, bidirectional)
- `--yes|-y`: Skip confirmation prompts
- `"bug description"`: Issue to debug (required)
**Core Concept**: Debug Units - commands grouped into logical units for different root cause strategies.
## Features
- **Issue Analysis** - Extracts symptoms, occurrence patterns, affected components
- **Strategy Selection** - Auto-selects based on keywords and complexity
- **Debug Units** - 4 debug modes for different scenarios
- **Parallel Execution** - Bidirectional mode for complex issues
- **State Tracking** - TODO and status file tracking
## Usage
```bash
# Auto-select mode (keyword-based detection)
/ccw-debug "Login timeout error"
# Explicit mode selection
/ccw-debug --mode cli "Quick API question"
/ccw-debug --mode debug "User authentication fails"
/ccw-debug --mode test "Unit tests failing"
/ccw-debug --mode bidirectional "Complex multi-module issue"
# Skip confirmation
/ccw-debug --yes "Fix typo in config"
```
## Debug Modes
### CLI Mode
**Use for**: Quick analysis, simple questions, early diagnosis
**Command Chain**:
```
ccw cli --mode analysis --rule analysis-diagnose-bug-root-cause
```
**Characteristics**:
- Analysis only
- No code changes
- Returns findings and recommendations
### Debug Mode
**Use for**: Standard bug diagnosis and fix
**Command Chain**:
```
/workflow:debug-with-file
/workflow:test-fix-gen
/workflow:test-cycle-execute
```
**Characteristics**:
- Hypothesis-driven debugging
- Test generation
- Iterative fixing
### Test Mode
**Use for**: Test failures, test validation
**Command Chain**:
```
/workflow:test-fix-gen
/workflow:test-cycle-execute
```
**Characteristics**:
- Test-focused
- Fixes applied during testing
- Iterative until pass
### Bidirectional Mode
**Use for**: Complex issues requiring multiple perspectives
**Command Chain** (Parallel):
```
/workflow:debug-with-file ∥ /workflow:test-fix-gen ∥ /workflow:test-cycle-execute
Merge findings
```
**Characteristics**:
- Parallel execution
- Multiple perspectives
- Merged findings
## Debug Units
| Unit Name | Commands | Purpose |
|-----------|----------|---------|
| **Quick Analysis** | ccw cli (analysis) | Quick diagnosis |
| **Standard Debug** | debug-with-file → test-fix-gen → test-cycle-execute | Full debug cycle |
| **Test Fix** | test-fix-gen → test-cycle-execute | Test-focused fix |
| **Comprehensive** | (debug ∥ test ∥ test-cycle) → merge | Multi-perspective |
## Execution Flow
```
User Input: "bug description"
Phase 1: Analyze Issue
├─ Extract: description, error_type, clarity, complexity, scope
└─ If clarity < 2 → Phase 1.5: Clarify Issue
Phase 2: Select Debug Strategy & Build Chain
├─ Detect mode: cli | debug | test | bidirectional
├─ Build command chain based on mode
└─ Parallel execution for bidirectional
Phase 3: User Confirmation (optional)
├─ Show debug strategy
└─ Allow mode change
Phase 4: Setup TODO Tracking & Status File
├─ Create todos with CCWD prefix
└─ Initialize .workflow/.ccw-debug/{session_id}/status.json
Phase 5: Execute Debug Chain
├─ Sequential: execute commands in order
├─ Bidirectional: execute debug + test in parallel
├─ CLI: present findings, ask for escalation
└─ Merge findings (bidirectional)
Update status and TODO
```
## Mode Detection
| Keywords | Detected Mode |
|----------|---------------|
| quick, simple, question, what, how | cli |
| bug, error, fail, crash, timeout | debug |
| test, unit test, coverage, assertion | test |
| complex, multiple, module, integration | bidirectional |
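The keyword table above can be sketched as a first-match lookup. A hypothetical illustration; the precedence order and substring matching are assumptions, not the actual detector:

```javascript
// Hypothetical sketch: keyword-based debug mode detection.
// Modes are checked in order, so more specific categories win over 'debug'.
const MODE_KEYWORDS = {
  cli: ['quick', 'simple', 'question', 'what', 'how'],
  test: ['test', 'unit test', 'coverage', 'assertion'],
  bidirectional: ['complex', 'multiple', 'module', 'integration'],
  debug: ['bug', 'error', 'fail', 'crash', 'timeout']
};

function detectMode(description) {
  const text = description.toLowerCase();
  for (const [mode, keywords] of Object.entries(MODE_KEYWORDS)) {
    if (keywords.some((k) => text.includes(k))) return mode;
  }
  return 'debug'; // fallback for unrecognized issues
}

console.log(detectMode('Login timeout error'));
```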
## Debug Pipeline Examples
| Issue | Mode | Pipeline |
|-------|------|----------|
| "Login timeout error (quick)" | cli | ccw cli → analysis → (escalate or done) |
| "User login fails intermittently" | debug | debug-with-file → test-gen → test-cycle |
| "Authentication tests failing" | test | test-fix-gen → test-cycle-execute |
| "Multi-module auth + db sync issue" | bidirectional | (debug ∥ test) → merge findings |
**Legend**: `∥` = parallel execution
## State Management
### Dual Tracking System
**1. TodoWrite-Based Tracking** (UI Display):
```bash
CCWD:debug: [1/3] /workflow:debug-with-file [in_progress]
CCWD:debug: [2/3] /workflow:test-fix-gen [pending]
CCWD:debug: [3/3] /workflow:test-cycle-execute [pending]
```
**2. Status File** (Internal State):
```json
{
  "session_id": "CCWD-...",
  "mode": "debug|cli|test|bidirectional",
  "status": "running",
  "parallel_execution": false|true,
  "issue": {
    "description": "...",
    "error_type": "...",
    "clarity": 1-5,
    "complexity": "low|medium|high"
  },
  "command_chain": [...],
  "findings": {
    "debug": {...},
    "test": {...},
    "merged": {...}
  }
}
```
## Examples
### CLI Mode
```bash
# Quick analysis
/ccw-debug --mode cli "Why is the API returning 500?"
# Output:
# Executing CLI analysis...
# Analysis complete:
# - Root cause: Database connection timeout
# - Recommendation: Increase connection pool size
# Escalate to debug mode? (y/n): y
```
### Debug Mode
```bash
# Standard debugging
/ccw-debug "User login fails intermittently"
# Output:
# Analyzing issue...
# Mode detected: debug
# Command chain:
# 1. /workflow:debug-with-file
# 2. /workflow:test-fix-gen
# 3. /workflow:test-cycle-execute
# Confirm? (y/n): y
#
# CCWD:debug: [1/3] /workflow:debug-with-file [in_progress]
# ...
```
### Bidirectional Mode
```bash
# Complex issue
/ccw-debug --mode bidirectional "Multi-module auth + database sync issue"
# Output:
# Analyzing issue...
# Mode: bidirectional (parallel execution)
# Command chain:
# Branch A: /workflow:debug-with-file
# Branch B: /workflow:test-fix-gen
# Branch C: /workflow:test-cycle-execute
# → Merge findings
# Confirm? (y/n): y
#
# Executing branches in parallel...
# Merging findings...
# Final recommendations: ...
```
## Related Commands
- **/workflow:debug-with-file** - Hypothesis-driven debugging
- **/workflow:test-fix-gen** - Test fix generation
- **/workflow:test-cycle-execute** - Test cycle execution
- **/ccw** - Main workflow coordinator
## Notes
- **Auto mode detection** based on keywords
- **Debug units** ensure complete debugging milestones
- **TODO tracking** with CCWD prefix
- **Status file** in `.workflow/.ccw-debug/{session}/`
- **Parallel execution** for bidirectional mode
- **Merge findings** combines multiple perspectives
- **Escalation support** from CLI to other modes


@@ -0,0 +1,205 @@
---
title: /ccw-plan
sidebar_label: /ccw-plan
sidebar_position: 2
description: Planning coordinator for intelligent workflow selection
---
# /ccw-plan
Planning coordinator - analyzes requirements, selects planning strategy, and executes planning workflow in the main process.
## Overview
The `/ccw-plan` command serves as the planning orchestrator, automatically analyzing task requirements and selecting the appropriate planning workflow based on complexity and constraints.
**Parameters**:
- `--mode <mode>`: Planning mode (lite, multi-cli, full, plan-verify, replan, cli, issue, rapid-to-issue, brainstorm-with-file, analyze-with-file)
- `--yes|-y`: Skip confirmation prompts
- `"task description"`: Task to plan (required)
## Features
- **Auto Mode Detection** - Keyword-based mode selection
- **Planning Units** - Commands grouped for complete planning milestones
- **Multi-Mode Support** - 10+ planning workflows available
- **Issue Integration** - Bridge to issue workflow
- **With-File Workflows** - Multi-CLI collaboration support
## Usage
```bash
# Auto-select mode (keyword-based detection)
/ccw-plan "Add user authentication"
# Standard planning modes
/ccw-plan --mode lite "Add logout endpoint"
/ccw-plan --mode multi-cli "Implement OAuth2"
/ccw-plan --mode full "Design notification system"
/ccw-plan --mode plan-verify "Payment processing"
/ccw-plan --mode replan --session WFS-auth-2025-01-28
# CLI-assisted planning (quick recommendations)
/ccw-plan --mode cli "Should we use OAuth2 or JWT?"
# With-File workflows
/ccw-plan --mode brainstorm-with-file "Redesign the user notification system"
/ccw-plan --mode analyze-with-file "Authentication architecture design decisions"
# Issue workflow integration
/ccw-plan --mode issue "Handle all pending security issues"
/ccw-plan --mode rapid-to-issue "Plan and create user profile issue"
# Auto mode (skip confirmations)
/ccw-plan --yes "Quick feature: user profile endpoint"
```
## Planning Modes
### Lite Modes (Level 2)
| Mode | Description | Use Case |
|------|-------------|----------|
| `lite` | In-memory planning | Clear requirements, single module |
| `multi-cli` | Multi-CLI collaborative | Technology selection, solution comparison |
### Full Modes (Level 3)
| Mode | Description | Use Case |
|------|-------------|----------|
| `full` | 5-phase standard planning | Complex features, multi-module |
| `plan-verify` | Planning with quality gate | Production features, high quality required |
### Special Modes
| Mode | Description | Use Case |
|------|-------------|----------|
| `cli` | Quick CLI recommendations | Quick questions, architectural decisions |
| `replan` | Modify existing plan | Plan adjustments, scope changes |
| `tdd` | Test-driven development planning | TDD workflow |
### With-File Modes (Level 3-4)
| Mode | Description | Multi-CLI |
|------|-------------|-----------|
| `brainstorm-with-file` | Multi-perspective ideation | Yes |
| `analyze-with-file` | Collaborative analysis | Yes |
### Issue Modes
| Mode | Description | Use Case |
|------|-------------|----------|
| `issue` | Batch issue planning | Handle multiple issues |
| `rapid-to-issue` | Plan + create issue | Bridge planning to issue tracking |
## Planning Units
| Unit Name | Commands | Purpose |
|-----------|----------|---------|
| **Quick Planning** | lite-plan → lite-execute | Lightweight plan and execution |
| **Multi-CLI Planning** | multi-cli-plan → lite-execute | Multi-perspective analysis |
| **Verified Planning** | plan → plan-verify → execute | Planning with verification |
| **TDD Planning** | tdd-plan → execute | Test-driven development |
| **Issue Planning** | issue:discover → issue:plan → issue:queue → issue:execute | Issue workflow |
| **Brainstorm to Issue** | brainstorm:auto-parallel → issue:from-brainstorm → issue:queue | Exploration to issues |
## Mode Selection Decision Tree
```
User calls: /ccw-plan "task description"
Explicit --mode specified?
├─ Yes → Use specified mode
└─ No → Detect keywords
├─ "brainstorm" → brainstorm-with-file
├─ "analyze" → analyze-with-file
├─ "test", "tdd" → tdd
├─ "issue" → issue or rapid-to-issue
├─ Complexity high → full or plan-verify
└─ Default → lite
```
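The decision tree above can be expressed as a short selector function. A hypothetical sketch; the function name, keyword checks, and complexity input are illustrative assumptions:

```javascript
// Hypothetical sketch of the mode-selection decision tree.
// An explicit --mode wins; otherwise keywords are checked in order,
// then complexity, then the lite default.
function selectPlanMode(task, explicitMode = null, complexity = 'low') {
  if (explicitMode) return explicitMode;
  const t = task.toLowerCase();
  if (t.includes('brainstorm')) return 'brainstorm-with-file';
  if (t.includes('analyze')) return 'analyze-with-file';
  if (t.includes('tdd') || t.includes('test')) return 'tdd';
  if (t.includes('issue')) return 'issue';
  if (complexity === 'high') return 'full';
  return 'lite';
}

console.log(selectPlanMode('Implement user authentication with TDD'));
```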
## Execution Flow
```
User Input
Phase 1: Analyze Requirements
├─ Extract: goal, scope, complexity, constraints
└─ Detect: task type, keywords
Phase 2: Select Mode & Build Chain
├─ Detect mode (explicit or auto)
├─ Build command chain based on mode
└─ Show planning strategy
Phase 3: User Confirmation (optional)
├─ Show command chain
└─ Allow mode change
Phase 4: Execute Planning Chain
├─ Setup TODO tracking (CCWP prefix)
├─ Initialize status file
└─ Execute commands sequentially
Output completion summary
```
## Examples
### Auto Mode Selection
```bash
# CCW detects keywords and selects mode
/ccw-plan "Implement user authentication with TDD"
# Output:
# Detecting mode...
# Keywords: "TDD" → Mode: tdd
# Commands: tdd-plan
# Confirm? (y/n): y
```
### Brainstorm Mode
```bash
# Multi-perspective exploration
/ccw-plan --mode brainstorm-with-file "Design notification system"
# Uses brainstorm workflow with multi-CLI collaboration
```
### CLI Quick Recommendations
```bash
# Quick architectural question
/ccw-plan --mode cli "Should we use Redux or Zustand for state?"
# Uses CLI for quick analysis and recommendations
```
### Replan Existing Session
```bash
# Modify existing plan
/ccw-plan --mode replan --session WFS-auth-2025-01-28
# Opens existing plan for modification
```
## Related Commands
- **/ccw** - Main workflow coordinator
- **/ccw-test** - Test workflow coordinator
- **/ccw-debug** - Debug workflow coordinator
- **/workflow:plan** - Standard planning workflow
- **/workflow:tdd-plan** - TDD planning workflow
## Notes
- **Keyword detection** for auto mode selection
- **Planning units** ensure complete planning milestones
- **TODO tracking** with CCWP prefix
- **Status file** in `.workflow/.ccw-plan/{session}/`
- **Multi-CLI collaboration** for with-file modes
- **Issue integration** for seamless workflow transition


@@ -0,0 +1,221 @@
---
title: /ccw-test
sidebar_label: /ccw-test
sidebar_position: 3
description: Test workflow coordinator for testing strategies
---
# /ccw-test
Test workflow coordinator - analyzes testing requirements, selects test strategy, and executes test workflow.
## Overview
The `/ccw-test` command serves as the testing orchestrator, automatically analyzing testing requirements and selecting the appropriate test workflow based on the testing context.
**Parameters**:
- `--mode <mode>`: Test mode (test-gen, test-fix-gen, test-cycle-execute, tdd-verify)
- `--session <id>`: Resume from existing session
- `"test description or session ID"`: Test target
## Features
- **Auto Test Mode Detection** - Analyzes context to select appropriate test workflow
- **Test Units** - Commands grouped for complete testing milestones
- **Progressive Layers** - L0-L3 test requirements
- **AI Code Validation** - Detects common AI-generated code issues
- **Quality Gates** - Multiple validation checkpoints
## Usage
```bash
# Session mode - test validation for completed implementation
/ccw-test WFS-user-auth-v2
# Prompt mode - text description
/ccw-test "Test the user authentication API endpoints"
# Prompt mode - file reference
/ccw-test ./docs/api-requirements.md
# Explicit mode selection
/ccw-test --mode test-gen "Generate comprehensive tests for auth module"
/ccw-test --mode test-fix-gen "Test failures in login flow"
/ccw-test --mode test-cycle-execute WFS-test-auth
```
## Test Modes
### Test Generation
| Mode | Description | Output | Follow-up |
|------|-------------|--------|-----------|
| `test-gen` | Extensive test generation | test-tasks | `/workflow:execute` |
**Purpose**: Generate comprehensive test examples and execute tests
### Test Fix Generation
| Mode | Description | Output | Follow-up |
|------|-------------|--------|-----------|
| `test-fix-gen` | Test and fix specific issues | test-tasks | `/workflow:test-cycle-execute` |
**Purpose**: Generate tests for specific problems and fix them during the test cycle
### Test Cycle Execution
| Mode | Description | Output | Follow-up |
|------|-------------|--------|-----------|
| `test-cycle-execute` | Iterative test and fix | test-passed | N/A |
**Purpose**: Execute the test-fix cycle until a 95% pass rate is reached
### TDD Verification
| Mode | Description | Output |
|------|-------------|--------|
| `tdd-verify` | Verify TDD compliance | Quality report |
**Purpose**: Verify Red-Green-Refactor cycle compliance
## Test Units
| Unit Name | Commands | Purpose |
|-----------|----------|---------|
| **Test Generation** | test-gen → execute | Generate and run comprehensive tests |
| **Test Fix** | test-fix-gen → test-cycle-execute | Generate tests and fix iteratively |
## Progressive Test Layers
| Layer | Description | Components |
|-------|-------------|------------|
| **L0** | AI code issues | Hallucinated imports, placeholder code, mock leakage |
| **L0.5** | Common issues | Edge cases, error handling |
| **L1** | Unit tests | Component-level testing |
| **L2** | Integration tests | API/database integration |
| **L3** | E2E tests | Full workflow testing |
## Execution Flow
### Session Mode
```
Session ID Input
Phase 1: Create Test Session
└─ /workflow:session:start --type test
└─ Output: testSessionId
Phase 2: Test Context Gathering
└─ /workflow:tools:test-context-gather
└─ Output: test-context-package.json
Phase 3: Test Concept Enhancement
└─ /workflow:tools:test-concept-enhanced
└─ Output: TEST_ANALYSIS_RESULTS.md (L0-L3 requirements)
Phase 4: Test Task Generation
└─ /workflow:tools:test-task-generate
└─ Output: IMPL_PLAN.md, IMPL-*.json (4+ tasks)
Return summary → Next: /workflow:test-cycle-execute
```
### Prompt Mode
```
Description/File Input
Phase 1: Create Test Session
└─ /workflow:session:start --type test "description"
Phase 2: Context Gathering
└─ /workflow:tools:context-gather
Phase 3-4: Same as Session Mode
```
## Minimum Task Structure (test-fix-gen)
| Task | Type | Agent | Purpose |
|------|------|-------|---------|
| **IMPL-001** | test-gen | @code-developer | Test understanding & generation (L1-L3) |
| **IMPL-001.3** | code-validation | @test-fix-agent | Code validation gate (L0 + AI issues) |
| **IMPL-001.5** | test-quality-review | @test-fix-agent | Test quality gate |
| **IMPL-002** | test-fix | @test-fix-agent | Test execution & fix cycle |
## Examples
### Session Mode
```bash
# Test validation for completed implementation
/ccw-test WFS-user-auth-v2
# Output:
# Creating test session...
# Gathering test context...
# Analyzing test coverage...
# Generating test tasks...
# Created 4 test tasks
# Next: /workflow:test-cycle-execute
```
### Prompt Mode
```bash
# Test specific functionality
/ccw-test "Test the user authentication API endpoints in src/auth/api.ts"
# Generates tests for specific module
```
### Test Fix for Failures
```bash
# Fix failing tests
/ccw-test --mode test-fix-gen "Login tests failing with timeout error"
# Generates tasks to diagnose and fix test failures
```
## AI Code Issue Detection (L0)
The test workflow automatically detects common AI-generated code problems:
| Issue Type | Description | Detection |
|------------|-------------|-----------|
| **Hallucinated Imports** | Imports that don't exist | Static analysis |
| **Placeholder Code** | TODO/FIXME in production | Pattern matching |
| **Mock Leakage** | Test mocks in production code | Code analysis |
| **Incomplete Implementation** | Empty functions/stubs | AST analysis |
## Quality Gates
### IMPL-001.3 - Code Validation
- Validates L0 requirements (AI code issues)
- Checks for hallucinated imports
- Detects placeholder code
- Ensures no mock leakage
### IMPL-001.5 - Test Quality
- Reviews test coverage
- Validates test patterns
- Checks assertion quality
- Ensures proper error handling
## Related Commands
- **/workflow:test-fix-gen** - Test fix generation workflow
- **/workflow:test-cycle-execute** - Test cycle execution
- **/workflow:tdd-plan** - TDD planning workflow
- **/workflow:tdd-verify** - TDD verification
## Notes
- **Progressive layers** ensure comprehensive testing (L0-L3)
- **AI code validation** detects common AI-generated issues
- **Quality gates** ensure high test standards
- **Iterative fixing** until 95% pass rate
- **Project type detection** applies appropriate test templates
- **CLI tool preference** supports semantic detection

---
title: /ccw
sidebar_label: /ccw
sidebar_position: 1
description: Main CCW workflow coordinator for intelligent command orchestration
---
# /ccw
Main CCW workflow coordinator - the unified entry point for intelligent command orchestration based on task complexity analysis.
## Overview
The `/ccw` command is the primary CCW workflow coordinator that automatically analyzes task requirements, evaluates complexity, and selects the appropriate workflow level and execution path.
**Core Concept**: Minimum Execution Units - commands grouped into logical units for complete workflow milestones.
## Features
- **Auto Complexity Analysis** - Evaluates task based on keywords and context
- **Workflow Selection** - Automatically selects optimal workflow level (1-5)
- **Unit-Based Orchestration** - Groups commands into Minimum Execution Units
- **State Persistence** - Tracks execution state in `.workflow/.ccw-coordinator/`
- **Intelligent Routing** - Direct execution vs CLI-based execution
## Usage
```bash
# Let CCW analyze and select workflow
/ccw "Implement user authentication"
# Explicit workflow selection
/ccw --workflow rapid "Add logout endpoint"
# Skip tests
/ccw --skip-tests "Quick config fix"
# Auto-confirm (skip confirmation prompts)
/ccw --yes "Simple bug fix"
```
## Command Options
| Option | Description | Default |
|--------|-------------|---------|
| `[task description]` | Task to execute (required) | - |
| `--workflow <name>` | Explicit workflow selection | Auto-detected |
| `--skip-tests` | Skip test validation unit | false |
| `--yes` | Auto-confirm execution | false |
## Workflow Levels
The coordinator automatically selects from 5 workflow levels:
### Level 1: Rapid Execution
**Complexity**: Low | **Artifacts**: None | **State**: Stateless
| Workflow | Description |
|----------|-------------|
| `lite-lite-lite` | Ultra-lightweight direct execution |
**Use for**: Quick fixes, simple features, config adjustments
### Level 2: Lightweight Planning
**Complexity**: Low-Medium | **Artifacts**: Memory/Lightweight files
| Workflow | Description |
|----------|-------------|
| `rapid` | lite-plan → lite-execute (+ optional test units) |
**Use for**: Single-module features, bug fixes
### Level 3: Standard Planning
**Complexity**: Medium-High | **Artifacts**: Persistent session files
| Workflow | Description |
|----------|-------------|
| `coupled` | plan → plan-verify → execute (+ optional test units) |
| `tdd` | tdd-plan → execute → tdd-verify |
**Use for**: Multi-module changes, refactoring, TDD
### Level 4: Brainstorming
**Complexity**: High | **Artifacts**: Multi-role analysis docs
| Workflow | Description |
|----------|-------------|
| `brainstorm` | brainstorm:auto-parallel → plan → execute |
**Use for**: New feature design, architecture refactoring
### Level 5: Intelligent Orchestration
**Complexity**: All levels | **Artifacts**: Full state persistence
| Workflow | Description |
|----------|-------------|
| `full` | ccw-coordinator (auto-analyze & recommend) |
**Use for**: Complex multi-step workflows, uncertain commands
## Execution Flow
```
User Input: "task description"
Phase 1: Complexity Analysis
├─ Extract keywords
├─ Calculate complexity score
└─ Detect constraints
Phase 2: Workflow Selection
├─ Map complexity to level
├─ Select workflow
└─ Build command chain
Phase 3: User Confirmation
├─ Display selected workflow
└─ Show command chain
Phase 4: Execute Command Chain
├─ Setup TODO tracking
├─ Execute commands sequentially
└─ Track state
Output completion summary
```
## Complexity Evaluation
Auto-evaluates complexity based on keywords:
| Weight | Keywords |
|--------|----------|
| +2 | refactor, migrate, architect, system |
| +2 | multiple, across, all, entire |
| +1 | integrate, api, database |
| +1 | security, performance, scale |
**Thresholds**:
- **High complexity** (>=4): Level 3-4
- **Medium complexity** (2-3): Level 2
- **Low complexity** (<2): Level 1
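The keyword table and thresholds above can be sketched as a small scoring function. This is an illustrative reading of the documented heuristics, not the coordinator's actual implementation:

```javascript
// Keyword-based complexity scoring, per the weight table above.
const WEIGHTS = [
  { re: /refactor|migrate|architect|system/i, w: 2 },
  { re: /multiple|across|all|entire/i, w: 2 },
  { re: /integrate|api|database/i, w: 1 },
  { re: /security|performance|scale/i, w: 1 },
];

function complexityScore(task) {
  return WEIGHTS.reduce((sum, { re, w }) => sum + (re.test(task) ? w : 0), 0);
}

function selectLevel(score) {
  if (score >= 4) return 'Level 3-4';
  if (score >= 2) return 'Level 2';
  return 'Level 1';
}

console.log(selectLevel(complexityScore('Refactor the entire auth system')));
// prints "Level 3-4"
```

Note that each keyword group contributes its weight at most once, so "refactor" and "system" together still add only +2.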
## Minimum Execution Units
| Unit Name | Commands | Purpose |
|-----------|----------|---------|
| **Quick Implementation** | lite-plan → lite-execute | Lightweight plan and execution |
| **Multi-CLI Planning** | multi-cli-plan → lite-execute | Multi-perspective analysis |
| **Bug Fix** | lite-fix → lite-execute | Bug diagnosis and fix |
| **Verified Planning** | plan → plan-verify → execute | Planning with verification |
| **TDD Planning** | tdd-plan → execute | Test-driven development |
| **Test Validation** | test-fix-gen → test-cycle-execute | Test fix cycle |
| **Code Review** | review-session-cycle → review-cycle-fix | Review and fix |
## Command Chain Examples
### Rapid Workflow (Level 2)
```bash
# Quick implementation
Unit: quick-impl
Commands:
1. /workflow:lite-plan "Add logout endpoint"
2. /workflow:lite-execute --in-memory
(Optional) Unit: test-validation
Commands:
3. /workflow:test-fix-gen
4. /workflow:test-cycle-execute
```
### Coupled Workflow (Level 3)
```bash
# Verified planning
Unit: verified-planning-execution
Commands:
1. /workflow:plan "Implement OAuth2"
2. /workflow:plan-verify
3. /workflow:execute
(Optional) Unit: test-validation
Commands:
4. /workflow:test-fix-gen
5. /workflow:test-cycle-execute
```
## Examples
### Auto Workflow Selection
```bash
# CCW analyzes and selects appropriate workflow
/ccw "Add user profile page"
# Output:
# Analyzing task...
# Complexity: Low (score: 1)
# Selected: Level 2 - Rapid Workflow
# Commands: lite-plan → lite-execute
# Confirm? (y/n): y
```
### Explicit Workflow
```bash
# Force specific workflow
/ccw --workflow tdd "Implement authentication with TDD"
# Uses TDD workflow regardless of complexity
```
### Skip Tests
```bash
# Quick fix without tests
/ccw --skip-tests "Fix typo in config"
# Omits test-validation unit
```
## Related Commands
- **/ccw-plan** - Planning coordinator
- **/ccw-test** - Test workflow coordinator
- **/ccw-coordinator** - Generic command orchestration
- **/ccw-debug** - Debug workflow coordinator
## Notes
- **Auto-analysis** evaluates task complexity based on keywords
- **Workflow levels** map complexity to appropriate execution paths
- **Minimum Execution Units** ensure complete workflow milestones
- **State persistence** in `.workflow/.ccw-coordinator/{session}/`
- **TODO tracking** with CCW prefix for visibility
- **CLI execution** runs in background with hook callbacks

---
title: /codex-coordinator
sidebar_label: /codex-coordinator
sidebar_position: 7
description: Command orchestration tool for Codex workflows
---
# /codex-coordinator
Command orchestration tool for Codex - analyzes requirements, recommends command chains, and executes them sequentially with state persistence.
## Overview
The `/codex-coordinator` command is the Codex equivalent of the CCW coordinator, providing intelligent orchestration for Codex-specific workflows.
**Parameters**:
- `TASK="<task description>"`: Task to orchestrate (required)
- `--depth=<standard|deep>`: Analysis depth (default: standard)
- `--auto-confirm`: Skip user confirmation
- `--verbose`: Enable verbose output
**Execution Model**: Intelligent agent-driven workflow - Claude analyzes each phase and orchestrates command execution.
## Core Concept: Minimum Execution Units
**Definition**: Commands that must execute together as an atomic group to achieve meaningful workflow milestones.
### Available Codex Commands
| Category | Commands | Purpose |
|----------|----------|---------|
| **Planning** | lite-plan-a, lite-plan-b, plan-a, plan-b | Various planning approaches |
| **Execution** | execute, execute-a, execute-b | Implementation workflows |
| **Analysis** | analyze-a, analyze-b | Code analysis |
| **Testing** | test-a, test-b | Test generation |
| **Brainstorming** | brainstorm-a, brainstorm-with-file, brainstorm-to-cycle | Idea exploration |
| **Cleanup** | clean, compact | Code cleanup and memory compaction |
## Usage
```bash
# Basic usage
/codex-coordinator TASK="Implement user authentication"
# With deep analysis
/codex-coordinator TASK="Design notification system" --depth=deep
# Auto-confirm
/codex-coordinator TASK="Fix typo in config" --auto-confirm
# Verbose output
/codex-coordinator TASK="Add API endpoint" --verbose
```
## Execution Flow
### Phase 1: Analyze Requirements
```javascript
async function analyzeRequirements(taskDescription) {
const analysis = {
goal: extractGoal(taskDescription),
scope: determineScope(taskDescription),
complexity: calculateComplexity(taskDescription),
task_type: classifyTask(taskDescription),
constraints: identifyConstraints(taskDescription)
};
return analysis;
}
```
**Output**:
- Goal: Primary objective
- Scope: Affected modules
- Complexity: simple/medium/complex
- Task type: feature/bugfix/refactor
- Constraints: Dependencies, requirements
### Phase 2: Recommend Command Chain
```javascript
async function recommendCommandChain(analysis) {
const { inputPort, outputPort } = determinePortFlow(analysis.task_type, analysis.complexity);
const chain = selectChainByTaskType(analysis);
return chain;
}
```
**Output**:
- Recommended command sequence
- Execution parameters
- Expected artifacts
- Estimated complexity
### Phase 3: User Confirmation
```javascript
async function getUserConfirmation(chain) {
const response = await AskUserQuestion({
questions: [{
question: 'Proceed with this command chain?',
header: 'Confirm Chain',
options: [
{ label: 'Confirm and execute', description: 'Proceed with commands' },
{ label: 'Show details', description: 'View each command' },
{ label: 'Adjust chain', description: 'Remove or reorder' },
{ label: 'Cancel', description: 'Abort' }
]
}]
});
return response;
}
```
### Phase 4: Execute Sequential Command Chain
```javascript
async function executeCommandChain(chain, analysis) {
const sessionId = `codex-coord-${Date.now()}`;
const state = {
session_id: sessionId,
status: 'running',
analysis: analysis,
command_chain: chain.map((cmd, idx) => ({ ...cmd, index: idx, status: 'pending' })),
execution_results: []
};
// Save initial state
Write(`.workflow/.codex-coordinator/${sessionId}/state.json`, JSON.stringify(state, null, 2));
for (let i = 0; i < chain.length; i++) {
const cmd = chain[i];
// Build command string
let commandStr = `@~/.codex/prompts/${cmd.name}.md`;
// Add parameters
if (i > 0 && state.execution_results.length > 0) {
const lastResult = state.execution_results[state.execution_results.length - 1];
commandStr += ` --resume="${lastResult.session_id || lastResult.artifact}"`;
}
// Add depth for complex tasks
if (analysis.complexity === 'complex' && (cmd.name.includes('analyze') || cmd.name.includes('plan'))) {
commandStr += ` --depth=deep`;
}
// Add task description for planning commands
if (cmd.type === 'planning' && i === 0) {
commandStr += ` TASK="${analysis.goal}"`;
}
// Execute command
console.log(`Executing: ${commandStr}`);
state.execution_results.push({
index: i,
command: cmd.name,
status: 'in-progress',
started_at: new Date().toISOString()
});
state.execution_results[i].status = 'completed';
state.command_chain[i].status = 'completed';
state.updated_at = new Date().toISOString();
Write(`.workflow/.codex-coordinator/${sessionId}/state.json`, JSON.stringify(state, null, 2));
}
return state;
}
```
## Command Invocation Format
**Format**: `@~/.codex/prompts/<command-name>.md <parameters>`
### Examples
```bash
# Planning command
@~/.codex/prompts/plan-a.md TASK="Implement OAuth2"
# Execution with resume
@~/.codex/prompts/execute-a.md --resume="session-id"
# Analysis with depth
@~/.codex/prompts/analyze-a.md --depth=deep
```
## Available Codex Workflows
### Planning Workflows
| Command | Purpose | Output |
|---------|---------|--------|
| **lite-plan-a** | Lightweight planning A | Quick plan |
| **lite-plan-b** | Lightweight planning B | Alternative quick plan |
| **plan-a** | Standard planning A | Detailed plan |
| **plan-b** | Standard planning B | Alternative detailed plan |
### Execution Workflows
| Command | Purpose | Output |
|---------|---------|--------|
| **execute** | Standard execution | Implemented code |
| **execute-a** | Execution variant A | Implementation A |
| **execute-b** | Execution variant B | Implementation B |
### Analysis Workflows
| Command | Purpose | Output |
|---------|---------|--------|
| **analyze-a** | Code analysis A | Analysis report |
| **analyze-b** | Code analysis B | Alternative analysis |
### Testing Workflows
| Command | Purpose | Output |
|---------|---------|--------|
| **test-a** | Test generation A | Test suite A |
| **test-b** | Test generation B | Test suite B |
### Brainstorming Workflows
| Command | Purpose | Output |
|---------|---------|--------|
| **brainstorm-a** | Brainstorming A | Ideas A |
| **brainstorm-with-file** | Documented brainstorm | brainstorm.md |
| **brainstorm-to-cycle** | Brainstorm to cycle | Actionable cycle |
### Cleanup Workflows
| Command | Purpose | Output |
|---------|---------|--------|
| **clean** | Code cleanup | Cleaned code |
| **compact** | Memory compaction | Compressed state |
## Examples
### Simple Feature
```bash
/codex-coordinator TASK="Add user profile page" --auto-confirm
# Output:
# Analyzing: "Add user profile page"
# Complexity: Low
# Recommended: lite-plan-a → execute
# Executing: @~/.codex/prompts/lite-plan-a.md TASK="Add user profile page"
# Executing: @~/.codex/prompts/execute.md
# ✅ Complete! Session: codex-coord-1738425600000
```
### Complex Feature
```bash
/codex-coordinator TASK="Refactor authentication system" --depth=deep
# Output:
# Analyzing: "Refactor authentication system"
# Complexity: High
# Recommended: plan-a → execute → test-a
# Executing: @~/.codex/prompts/plan-a.md --depth=deep
# Executing: @~/.codex/prompts/execute.md
# Executing: @~/.codex/prompts/test-a.md
# ✅ Complete! Session: codex-coord-1738425600000
```
## Session Management
### Resume Previous Session
```bash
# Find session
ls .workflow/.codex-coordinator/
# Load state
cat .workflow/.codex-coordinator/{session-id}/state.json
# Resume from last completed command
# (Manual continuation based on state)
```
## Related Commands
- **/ccw-coordinator** - CCW workflow coordinator
- **@~/.codex/prompts/** - Individual Codex commands
- **/ccw** - Main CCW workflow coordinator
## Notes
- **Agent-driven** workflow - Claude intelligently orchestrates
- **State persistence** in `.workflow/.codex-coordinator/`
- **Command chaining** with parameter passing
- **Resume support** from state files
- **Depth control** for analysis complexity
- **Auto-confirm** for non-interactive use

---
title: /flow-create
sidebar_label: /flow-create
sidebar_position: 6
description: Generate workflow templates for flow-coordinator
---
# /flow-create
Flow template generator - creates workflow templates for the meta-skill/flow-coordinator with interactive step definition.
## Overview
The `/flow-create` command generates workflow templates that can be used by the flow-coordinator skill for executing predefined command chains.
**Usage**: `/flow-create [template-name] [--output <path>]`
## Features
- **Interactive Design** - Guided template creation process
- **Complexity Levels** - 4 levels from rapid to full workflows
- **Step Definition** - Define workflow steps with commands and parameters
- **Command Selection** - Choose from available CCW commands
- **Template Validation** - Ensures template structure correctness
## Usage
```bash
# Create template with default name
/flow-create
# Create specific template
/flow-create bugfix-v2
# Create with custom output
/flow-create my-workflow --output ~/.claude/skills/my-skill/templates/
```
## Execution Flow
```
User Input → Phase 1: Template Design → Phase 2: Step Definition → Phase 3: Generate JSON
↓ ↓ ↓
Name + Description Define workflow steps Write template file
```
## Phase 1: Template Design
Gather basic template information:
- **Template Name** - Identifier for the template
- **Purpose** - What the template accomplishes
- **Complexity Level** - 1-4 (Rapid to Full)
### Complexity Levels
| Level | Name | Steps | Description |
|-------|------|-------|-------------|
| **1** | Rapid | 1-2 | Ultra-lightweight (lite-lite-lite) |
| **2** | Lightweight | 2-4 | Quick implementation |
| **3** | Standard | 4-6 | With verification and testing |
| **4** | Full | 6+ | Brainstorm + full workflow |
### Purpose Categories
- **Bug Fix** - Fix bugs and issues
- **Feature** - Implement new features
- **Generation** - Generate code or content
- **Analysis** - Analyze code or architecture
- **Transformation** - Transform or refactor code
- **Orchestration** - Complex multi-step workflows
## Phase 2: Step Definition
Define workflow steps with:
1. **Command Selection** - Choose from available CCW commands
2. **Execution Unit** - Group related steps
3. **Execution Mode** - mainprocess or async
4. **Context Hint** - Description of step purpose
### Command Categories
- **Planning** - plan, lite-plan, multi-cli-plan, tdd-plan
- **Execution** - execute, lite-execute, lite-lite-lite
- **Testing** - test-fix-gen, test-cycle-execute, tdd-verify
- **Review** - review-session-cycle, review-module-cycle
- **Debug** - debug-with-file, lite-fix
- **Brainstorm** - brainstorm:auto-parallel, brainstorm-with-file
- **Issue** - issue:discover, issue:plan, issue:queue, issue:execute
## Phase 3: Generate JSON
Creates template file with structure:
```json
{
"name": "template-name",
"description": "Template description",
"level": 2,
"steps": [
{
"cmd": "/workflow:command",
"args": "\"{{goal}}\"",
"unit": "unit-name",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Description of what this step does"
}
]
}
```
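The "Template Validation" feature mentioned above can be approximated with a few structural checks against this JSON shape. The field names follow the example; the real skill may validate more than this sketch does:

```javascript
// Hypothetical structural validator for flow-coordinator templates.
function validateTemplate(tpl) {
  const errors = [];
  if (typeof tpl.name !== 'string' || !tpl.name) errors.push('name is required');
  if (![1, 2, 3, 4].includes(tpl.level)) errors.push('level must be 1-4');
  if (!Array.isArray(tpl.steps) || tpl.steps.length === 0) {
    errors.push('steps must be a non-empty array');
  } else {
    tpl.steps.forEach((s, i) => {
      if (!s.cmd) errors.push(`steps[${i}].cmd is required`);
      if (!['mainprocess', 'async'].includes(s.execution?.mode)) {
        errors.push(`steps[${i}].execution.mode must be mainprocess or async`);
      }
    });
  }
  return errors; // empty array means the template passed
}
```

Running this against the templates below should return an empty error list for each.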
## Template Examples
### Bug Fix Template (Level 2)
```json
{
"name": "hotfix-simple",
"description": "Quick bug fix workflow",
"level": 2,
"steps": [
{
"cmd": "/workflow:lite-fix",
"args": "\"{{goal}}\"",
"unit": "bug-fix",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Diagnose and fix the bug"
},
{
"cmd": "/workflow:lite-execute",
"args": "--in-memory",
"unit": "bug-fix",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Apply the fix"
}
]
}
```
### Feature Template (Level 3)
```json
{
"name": "feature-standard",
"description": "Standard feature implementation with verification",
"level": 3,
"steps": [
{
"cmd": "/workflow:plan",
"args": "\"{{goal}}\"",
"unit": "verified-planning",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Create detailed implementation plan"
},
{
"cmd": "/workflow:plan-verify",
"args": "",
"unit": "verified-planning",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Verify plan completeness and quality"
},
{
"cmd": "/workflow:execute",
"args": "--resume-session=\"{{session}}\"",
"unit": "verified-planning",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Execute the implementation plan"
}
]
}
```
### Full Workflow Template (Level 4)
```json
{
"name": "full",
"description": "Comprehensive workflow with brainstorm and verification",
"level": 4,
"steps": [
{
"cmd": "/workflow:brainstorm:auto-parallel",
"args": "\"{{goal}}\"",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Multi-perspective exploration of requirements"
},
{
"cmd": "/workflow:plan",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Create detailed implementation plan"
},
{
"cmd": "/workflow:plan-verify",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Verify plan completeness"
},
{
"cmd": "/workflow:execute",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Execute implementation"
},
{
"cmd": "/workflow:test-fix-gen",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate tests"
},
{
"cmd": "/workflow:test-cycle-execute",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Execute test-fix cycle"
}
]
}
```
## Execution Mode Types
| Mode | Description | Use Case |
|------|-------------|----------|
| **mainprocess** | Execute in main process | Sequential execution, state sharing |
| **async** | Execute asynchronously | Parallel execution, independent steps |
## Template Output Location
Default: `~/.claude/skills/flow-coordinator/templates/{name}.json`
Custom: Use `--output` to specify alternative location
## Examples
### Quick Bug Fix Template
```bash
/flow-create hotfix-simple
# Interactive prompts:
# Purpose: Bug Fix
# Level: 2 (Lightweight)
# Steps: Use Suggested
# Output: ~/.claude/skills/flow-coordinator/templates/hotfix-simple.json
```
### Custom Multi-Stage Workflow
```bash
/flow-create complex-feature --output ~/.claude/skills/my-project/templates/
# Interactive prompts:
# Name: complex-feature
# Purpose: Feature
# Level: 4 (Full)
# Steps: Custom definition
# Output: ~/.claude/skills/my-project/templates/complex-feature.json
```
## Related Commands
- **/flow-coordinator** - Execute workflow templates
- **/ccw-coordinator** - Generic command orchestration
- **Skill:flow-coordinator** - Flow coordinator skill
## Notes
- **Interactive design** process with guided prompts
- **Complexity levels** determine workflow depth
- **Execution units** group related steps
- **Template validation** ensures correctness
- **Custom output** supported via `--output` parameter
- **Existing templates** can be used as examples

---
title: issue:convert-to-plan
sidebar_label: issue:convert-to-plan
sidebar_position: 7
description: Convert planning artifacts to issue solutions
---
# issue:convert-to-plan
Converts various planning artifact formats (lite-plan, workflow session, markdown, JSON) into issue workflow solutions with intelligent detection and automatic binding.
## Description
The `issue:convert-to-plan` command bridges external planning workflows with the issue system. It auto-detects source formats, extracts task structures, normalizes to solution schema, and either creates new issues or supplements existing solutions.
### Key Features
- **Multi-format support**: lite-plan, workflow sessions, markdown, JSON
- **Auto-detection**: Identifies source type automatically
- **AI-assisted extraction**: Gemini CLI for markdown parsing
- **Supplement mode**: Add tasks to existing solutions
- **Auto-binding**: Solutions automatically bound to issues
- **Issue auto-creation**: Creates issues from plans when needed
## Usage
```bash
# Convert lite-plan to new issue (auto-creates issue)
/issue:convert-to-plan ".workflow/.lite-plan/implement-auth-2026-01-25"
# Convert workflow session to existing issue
/issue:convert-to-plan WFS-auth-impl --issue GH-123
# Convert markdown file to issue
/issue:convert-to-plan "./docs/implementation-plan.md"
# Supplement existing solution with additional tasks
/issue:convert-to-plan "./docs/additional-tasks.md" --issue ISS-001 --supplement
# Auto mode - skip confirmations
/issue:convert-to-plan ".workflow/.lite-plan/my-plan" -y
```
### Arguments
| Argument | Required | Description |
|----------|----------|-------------|
| `SOURCE` | Yes | Planning artifact path or WFS-xxx ID |
| `--issue <id>` | No | Bind to existing issue instead of creating a new one |
| `--supplement` | No | Add tasks to existing solution (requires --issue) |
| `-y, --yes` | No | Skip all confirmations |
## Examples
### Convert Lite-Plan
```bash
/issue:convert-to-plan ".workflow/.lite-plan/implement-auth"
# Output:
# Detected source type: lite-plan
# Extracted: 5 tasks
# Created issue: ISS-20250129-001 (priority: 3)
# ✓ Created solution: SOL-ISS-20250129-001-a7b3
# ✓ Bound solution to issue
# → Status: planned
```
### Convert Workflow Session
```bash
/issue:convert-to-plan WFS-auth-impl --issue GH-123
# Output:
# Detected source type: workflow-session
# Loading session: .workflow/active/WFS-auth-impl/
# Extracted: 8 tasks from session
# ✓ Created solution: SOL-GH-123-c9d2
# ✓ Bound solution to issue GH-123
```
### Convert Markdown with AI
```bash
/issue:convert-to-plan "./docs/api-redesign.md"
# Output:
# Detected source type: markdown-file
# Using Gemini CLI for intelligent extraction...
# Extracted: 6 tasks
# Created issue: ISS-20250129-002 (priority: 2)
# ✓ Created solution: SOL-ISS-20250129-002-e4f1
```
### Supplement Existing Solution
```bash
/issue:convert-to-plan "./docs/tasks-phase2.md" --issue ISS-001 --supplement
# Output:
# Loaded existing solution: SOL-ISS-001-a7b3 (3 tasks)
# Extracted: 2 new tasks
# Supplementing: 3 existing + 2 new = 5 total tasks
# ✓ Updated solution: SOL-ISS-001-a7b3
```
## Issue Lifecycle Flow
```mermaid
graph TB
A[Source Artifact] --> B{Detect Type}
B -->|Lite-Plan| C1[plan.json]
B -->|Workflow Session| C2[workflow-session.json]
B -->|Markdown| C3[.md File + Gemini AI]
B -->|JSON| C4[plan.json]
C1 --> D[Extract Tasks]
C2 --> D
C3 --> D
C4 --> D
D --> E{--issue Provided?}
E -->|Yes| F{Issue Exists?}
E -->|No| G[Create New Issue]
F -->|Yes| H{--supplement?}
F -->|No| I[Error: Issue Not Found]
H -->|Yes| J[Load Existing Solution]
H -->|No| K[Create New Solution]
G --> K
J --> L[Merge Tasks]
K --> M[Normalize IDs]
L --> M
M --> N[Persist Solution]
N --> O[Bind to Issue]
O --> P[Status: Planned]
P --> Q[issue:queue]
```
## Supported Sources
### 1. Lite-Plan
**Location**: `.workflow/.lite-plan/{slug}/plan.json`
**Schema**:
```typescript
interface LitePlan {
summary: string;
approach: string;
complexity: 'low' | 'medium' | 'high';
estimated_time: string;
tasks: LiteTask[];
_metadata: {
timestamp: string;
exploration_angles: string[];
};
}
interface LiteTask {
id: string;
title: string;
scope: string;
action: string;
description: string;
modification_points: Array<{file, target, change}>;
implementation: string[];
acceptance: string[];
depends_on: string[];
}
```
### 2. Workflow Session
**Location**: `.workflow/active/{session}/` or `WFS-xxx` ID
**Files**:
- `workflow-session.json` - Session metadata
- `IMPL_PLAN.md` - Implementation plan (optional)
- `.task/IMPL-*.json` - Task definitions
**Extraction**:
```typescript
// From workflow-session.json
{
title: session.name,
description: session.description,
session_id: session.id
}
// From IMPL_PLAN.md
approach: Extract overview section
// From .task/IMPL-*.json
tasks: Map IMPL-001 → T1, IMPL-002 → T2...
```
### 3. Markdown File
**Any `.md` file with implementation/task content**
**AI Extraction** (via Gemini CLI):
```javascript
// Prompt Gemini to extract structure
{
title: "Extracted from document",
approach: "Technical strategy section",
tasks: [
{
id: "T1",
title: "Parsed from headers/lists",
scope: "inferred from content",
action: "detected from verbs",
implementation: ["step 1", "step 2"],
acceptance: ["criteria 1", "criteria 2"]
}
]
}
```
### 4. JSON File
**Direct JSON matching plan-json-schema**
```typescript
interface JSONPlan {
summary?: string;
title?: string;
description?: string;
approach?: string;
complexity?: 'low' | 'medium' | 'high';
tasks: JSONTask[];
_metadata?: object;
}
```
## Task Normalization
### ID Normalization
```javascript
// Various input formats → T1, T2, T3...
"IMPL-001" → "T1"
"IMPL-01"  → "T1"
"task-1"   → "T1"
"1"        → "T1"
"T-1"      → "T1"
```
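A minimal sketch of this normalization, assuming a regex-based helper (the actual CCW implementation may differ):

```javascript
// Hypothetical helper mirroring the mappings above (not the actual CCW source).
function normalizeTaskId(raw) {
  // Grab the first run of digits from "IMPL-001", "task-1", "T-1", "1", ...
  const match = String(raw).match(/(\d+)/);
  if (!match) return raw;               // leave unrecognizable IDs untouched
  return `T${parseInt(match[1], 10)}`;  // parseInt drops leading zeros: "001" → 1
}
```

The same helper can be applied to each `depends_on` entry, turning references like `["IMPL-001", "task-2"]` into `["T1", "T2"]`.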
### Depends-On Normalization
```javascript
// Normalize all references to T-format
depends_on: ["IMPL-001", "task-2"]
// → ["T1", "T2"]
```
### Action Validation
```javascript
// Valid actions (case-insensitive, auto-capitalized)
['Create', 'Update', 'Implement', 'Refactor',
'Add', 'Delete', 'Configure', 'Test', 'Fix']
// Invalid actions → Default to 'Implement'
'build' → 'Implement'
'write' → 'Create'
```
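A sketch of this validation, assuming a small synonym table for common verbs (the synonym list is an assumption, not the CCW source):

```javascript
// Hypothetical sketch of the action validation described above.
const VALID_ACTIONS = ['Create', 'Update', 'Implement', 'Refactor',
                       'Add', 'Delete', 'Configure', 'Test', 'Fix'];
const ACTION_SYNONYMS = { build: 'Implement', write: 'Create' };

function normalizeAction(raw) {
  const lower = String(raw).toLowerCase();
  const capitalized = lower.charAt(0).toUpperCase() + lower.slice(1);
  if (VALID_ACTIONS.includes(capitalized)) return capitalized; // case-insensitive match
  return ACTION_SYNONYMS[lower] ?? 'Implement';                // unknown verbs → default
}
```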
## Solution Schema
Target format for all extracted data:
```typescript
interface Solution {
id: string; // SOL-{issue-id}-{4-char-uid}
description?: string; // High-level summary
approach?: string; // Technical strategy
tasks: Task[]; // Required: at least 1 task
exploration_context?: object; // Optional: source context
analysis?: {
risk: 'low' | 'medium' | 'high';
impact: 'low' | 'medium' | 'high';
complexity: 'low' | 'medium' | 'high';
};
score?: number; // 0.0-1.0
is_bound: boolean;
created_at: string;
bound_at?: string;
_conversion_metadata?: {
source_type: string;
source_path: string;
converted_at: string;
};
}
interface Task {
id: string; // T1, T2, T3...
title: string; // Required: action verb + target
scope: string; // Required: module path or feature area
action: Action; // Required: Create|Update|Implement...
description?: string;
modification_points?: Array<{ file: string; target: string; change: string }>;
implementation: string[]; // Required: step-by-step guide
test?: {
unit?: string[];
integration?: string[];
commands?: string[];
coverage_target?: string;
};
acceptance: {
criteria: string[];
verification: string[];
};
commit?: {
type: string;
scope: string;
message_template: string;
breaking?: boolean;
};
depends_on?: string[];
priority?: number; // 1-5 (default: 3)
}
type Action = 'Create' | 'Update' | 'Implement' | 'Refactor' |
'Add' | 'Delete' | 'Configure' | 'Test' | 'Fix';
```
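A hypothetical generator for the `SOL-{issue-id}-{4-char-uid}` format noted in the schema comment; the uid source here is an assumption:

```javascript
// Sketch only: the real CCW uid generation may differ.
function makeSolutionId(issueId) {
  // 4-char base36 uid, padded in the unlikely case Math.random() yields few digits.
  const uid = (Math.random().toString(36).slice(2) + '0000').slice(0, 4);
  return `SOL-${issueId}-${uid}`;
}
```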
## Priority Auto-Detection
For new issues created from plans:
```javascript
const complexityMap = {
high: 2, // High complexity → priority 2 (high)
medium: 3, // Medium complexity → priority 3 (medium)
low: 4 // Low complexity → priority 4 (low)
};
// If no complexity specified, default to 3
const priority = complexityMap[plan.complexity?.toLowerCase()] || 3;
```
## Error Handling
| Error Code | Message | Resolution |
|------------|---------|------------|
| **E001** | Source not found: &#123;path&#125; | Check path exists and is accessible |
| **E002** | Unable to detect source format | Verify file contains valid plan structure |
| **E003** | Issue not found: &#123;id&#125; | Check issue ID or omit --issue to create new |
| **E004** | Issue already has bound solution | Use --supplement to add tasks |
| **E005** | Failed to extract tasks from markdown | Check markdown structure, try simpler format |
| **E006** | No tasks extracted from source | Source must contain at least 1 task |
## CLI Endpoints
```bash
# Create new issue
ccw issue create << 'EOF'
{"title":"...","context":"...","priority":3,"source":"converted"}
EOF
# Check issue exists
ccw issue status <id> --json
# Get existing solution
ccw issue solution <solution-id> --json
# Bind solution to issue
ccw issue bind <issue-id> <solution-id>
# Update issue status
ccw issue update <issue-id> --status planned
```
## Related Commands
- **[issue:plan](./issue-plan.md)** - Generate solutions from issue exploration
- **[issue:new](./issue-new.md)** - Create issues from GitHub or text
- **[issue:queue](./issue-queue.md)** - Form execution queue from converted plans
- **[issue:execute](./issue-execute.md)** - Execute converted solutions
- **[workflow:lite-lite-lite](#)** - Generate lite-plan artifacts
- **[workflow:execute](#)** - Generate workflow sessions
## Best Practices
1. **Verify source format**: Ensure plan has valid structure before conversion
2. **Check for existing solutions**: Use --supplement to add tasks, not replace
3. **Review extracted tasks**: Verify AI extraction accuracy for markdown
4. **Normalize manually**: Edit task IDs and dependencies if needed
5. **Test in supplement mode**: Add tasks to existing solution before creating new issue
6. **Keep source artifacts**: Don't delete original plan files after conversion

---
title: issue:discover
sidebar_label: issue:discover
sidebar_position: 2
description: Discover potential issues from multiple code analysis perspectives
---
# issue:discover
Multi-perspective issue discovery orchestrator that explores code from different angles to identify potential bugs, UX improvements, test gaps, and other actionable items.
## Description
The `issue:discover` command analyzes code from 8 specialized perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using parallel CLI agents. It aggregates findings and can export high-priority discoveries as issues.
### Key Features
- **8 analysis perspectives**: Specialized analysis for different concern areas
- **Parallel execution**: Multiple agents run simultaneously for speed
- **External research**: Exa integration for security and best-practices benchmarking
- **Dashboard integration**: View and filter findings via CCW dashboard
- **Smart prioritization**: Automated severity scoring and deduplication
- **Direct export**: Convert findings to issues with one click
## Usage
```bash
# Interactive perspective selection
/issue:discover src/auth/**
# Specific perspectives
/issue:discover src/payment/** --perspectives=bug,security,test
# With external research
/issue:discover src/api/** --external
# Auto mode - all perspectives
/issue:discover src/** --yes
# Multiple modules
/issue:discover src/auth/**,src/payment/**
```
### Arguments
| Argument | Required | Description |
|----------|----------|-------------|
| `path-pattern` | Yes | Glob pattern for files to analyze |
| `--perspectives` | No | Comma-separated list (default: interactive) |
| `--external` | No | Enable Exa external research |
| `-y, --yes` | No | Auto-select all perspectives |
## Perspectives
### Available Analysis Types
| Perspective | Focus Areas | Categories |
|-------------|-------------|------------|
| **bug** | Edge cases, null checks, resource leaks | edge-case, null-check, race-condition, boundary |
| **ux** | User experience issues | error-message, loading-state, accessibility, feedback |
| **test** | Test coverage gaps | missing-test, edge-case-test, integration-gap |
| **quality** | Code quality issues | complexity, duplication, naming, code-smell |
| **security** | Security vulnerabilities | injection, auth, encryption, input-validation |
| **performance** | Performance bottlenecks | n-plus-one, memory-usage, caching, algorithm |
| **maintainability** | Code maintainability | coupling, cohesion, tech-debt, module-boundary |
| **best-practices** | Industry best practices | convention, pattern, framework-usage |
## Examples
### Quick Scan (Recommended)
```bash
/issue:discover src/auth/**
# Interactive prompt:
# Select primary discovery focus:
# [1] Bug + Test + Quality (Recommended)
# [2] Security + Performance
# [3] Maintainability + Best-practices
# [4] Full analysis
```
### Security Audit with External Research
```bash
/issue:discover src/payment/** --perspectives=security --external
# Uses Exa to research OWASP payment security standards
# Compares implementation against industry benchmarks
```
### Full Analysis with Auto Mode
```bash
/issue:discover src/api/** --yes
# Runs all 8 perspectives in parallel
# No confirmations, processes all findings
```
## Discovery Flow
```mermaid
graph TB
A[Target Files] --> B[Select Perspectives]
B --> C[Launch N Agents Parallel]
C --> D1[Bug Agent]
C --> D2[UX Agent]
C --> D3[Test Agent]
C --> D4[Security Agent + Exa]
C --> D5[Performance Agent]
C --> D6[Quality Agent]
C --> D7[Maintainability Agent]
C --> D8[Best-Practices Agent + Exa]
D1 --> E[Aggregate Findings]
D2 --> E
D3 --> E
D4 --> E
D5 --> E
D6 --> E
D7 --> E
D8 --> E
E --> F{Priority Score}
F -->|Critical/High| G[Export to Issues]
F -->|Medium| H[Dashboard Review]
F -->|Low| I[Archive]
G --> J[issue:plan]
H --> J
```
## Discovery Output Structure
### Directory Layout
```
.workflow/issues/discoveries/
├── {discovery-id}/
│ ├── discovery-state.json # Session state machine
│ ├── perspectives/
│ │ ├── bug.json # Bug findings
│ │ ├── ux.json # UX findings
│ │ ├── security.json # Security findings
│ │ └── ...
│ ├── external-research.json # Exa results (if enabled)
│ ├── discovery-issues.jsonl # Exported candidate issues
│ └── summary.md # Consolidated report
```
### Finding Schema
```typescript
interface DiscoveryFinding {
id: string;
perspective: string;
title: string;
priority: 'critical' | 'high' | 'medium' | 'low';
category: string;
description: string;
file: string;
line: number;
snippet: string;
suggested_issue: string;
confidence: number;
priority_score: number;
}
```
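The "smart prioritization" step deduplicates overlapping findings; a minimal sketch keyed on file, line, and category (the key choice is an assumption, not the actual CCW logic):

```javascript
// Keep only the highest-scored finding per (file, line, category) key.
function dedupeFindings(findings) {
  const byKey = new Map();
  for (const f of findings) {
    const key = `${f.file}:${f.line}:${f.category}`;
    const prev = byKey.get(key);
    if (!prev || f.priority_score > prev.priority_score) byKey.set(key, f);
  }
  return [...byKey.values()];
}
```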
## Priority Categories
### Critical (Automatic Export)
- Data corruption risks
- Security vulnerabilities (auth bypass, injection)
- Memory leaks
- Race conditions
- Critical accessibility issues
### High (Recommended Export)
- Missing core functionality tests
- Significant UX confusion
- N+1 query problems
- Clear security gaps
- Major code smells
### Medium (Dashboard Review)
- Edge case gaps
- Inconsistent patterns
- Minor performance issues
- Documentation gaps
- Style violations
### Low (Informational)
- Cosmetic issues
- Minor naming inconsistencies
- Optimization opportunities
- Nice-to-have improvements
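The four buckets above map to a simple triage step; a sketch with assumed disposition names:

```javascript
// Hypothetical triage of findings into the priority buckets above.
function triage(finding) {
  switch (finding.priority) {
    case 'critical': return 'export';             // automatic export
    case 'high':     return 'export-recommended'; // recommended export
    case 'medium':   return 'dashboard-review';
    default:         return 'archive';            // low / informational
  }
}
```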
## Dashboard Integration
### Viewing Discoveries
```bash
# Open CCW dashboard
ccw view
# Navigate to: Issues > Discovery
```
**Features**:
- View all discovery sessions
- Filter by perspective and priority
- Preview finding details with code snippets
- Bulk select findings for export
- Compare findings across sessions
### Exporting to Issues
From the dashboard:
1. Select findings to export
2. Click "Export as Issues"
3. Findings are converted to standard issue format
4. Appended to `.workflow/issues/issues.jsonl`
5. Status set to `registered`
6. Continue with `/issue:plan` workflow
## Exa External Research
### Security Perspective
Researches:
- OWASP Top 10 for your technology stack
- Industry-standard security patterns
- Common vulnerabilities in your framework
- Best practices for your specific use case
### Best-Practices Perspective
Researches:
- Framework-specific conventions
- Language idioms and patterns
- Deprecated API warnings
- Community-recommended approaches
## Related Commands
- **[issue:new](./issue-new.md)** - Create issues from discoveries
- **[issue:plan](./issue-plan.md)** - Plan solutions for discovered issues
- **[issue:manage](#)** - Interactive issue management dashboard
- **[review-code](#)** - Code review for quality assessment
## Best Practices
1. **Start focused**: Begin with specific modules, not entire codebase
2. **Quick scan first**: Use bug+test+quality for fast results
3. **Review before export**: Not all findings warrant issues
4. **Enable Exa strategically**: For unfamiliar tech or security audits
5. **Combine perspectives**: Run related perspectives together (e.g., security+bug)
6. **Iterate**: Run discovery on changed modules after each sprint
## Comparison: Discovery vs Code Review
| Aspect | issue:discover | review-code |
|--------|----------------|-------------|
| **Purpose** | Find actionable issues | Assess code quality |
| **Output** | Exportable issues | Quality report |
| **Perspectives** | 8 specialized angles | 7 quality dimensions |
| **External Research** | Yes (Exa) | No |
| **Dashboard Integration** | Yes | No |
| **Use When** | Proactive issue hunting | Post-commit review |

---
title: issue:execute
sidebar_label: issue:execute
sidebar_position: 5
description: Execute issue queue with DAG-based parallel orchestration
---
# issue:execute
Minimal orchestrator that dispatches solution IDs to executors. Each executor receives a complete solution with all tasks and commits once per solution.
## Description
The `issue:execute` command executes queued solutions using DAG-based parallel orchestration. Each executor receives a complete solution with all tasks, executes tasks sequentially, and commits once per solution. Supports optional git worktree isolation for clean workspace management.
### Key Features
- **DAG-based parallelism**: Automatic parallel execution of independent solutions
- **Solution-level execution**: Each executor handles all tasks in a solution
- **Single commit per solution**: Clean git history with formatted summaries
- **Worktree isolation**: Optional isolated workspace for queue execution
- **Multiple executors**: Codex, Gemini, or Agent support
- **Resume capability**: Recover from interruptions with existing worktree
## Usage
```bash
# Execute specific queue (REQUIRED)
/issue:execute --queue QUE-xxx
# Execute in isolated worktree
/issue:execute --queue QUE-xxx --worktree
# Resume in existing worktree
/issue:execute --queue QUE-xxx --worktree /path/to/worktree
# Dry-run (show DAG without executing)
/issue:execute --queue QUE-xxx
# Select: Dry-run mode
```
### Arguments
| Argument | Required | Description |
|----------|----------|-------------|
| `--queue <id>` | Yes | Queue ID to execute (required) |
| `--worktree` | No | Create isolated worktree for execution |
| `--worktree <path>` | No | Resume in existing worktree |
### Executor Selection
Interactive prompt selects:
- **Codex** (Recommended): Autonomous coding with 2hr timeout
- **Gemini**: Large context analysis and implementation
- **Agent**: Claude Code sub-agent for complex tasks
## Examples
### Execute Queue (Interactive)
```bash
/issue:execute --queue QUE-20251227-143000
# Output:
# ## Executing Queue: QUE-20251227-143000
# ## Queue DAG (Solution-Level)
# - Total Solutions: 5
# - Ready: 2
# - Completed: 0
# - Parallel in batch 1: 2
#
# Select executor:
# [1] Codex (Recommended)
# [2] Gemini
# [3] Agent
# Select mode:
# [1] Execute (Recommended)
# [2] Dry-run
# Use git worktree?
# [1] Yes (Recommended)
# [2] No
```
### Queue ID Not Provided
```bash
/issue:execute
# Output:
# Available Queues:
# ID Status Progress Issues
# -----------------------------------------------------------
# → QUE-20251215-001 active 3/10 ISS-001, ISS-002
# QUE-20251210-002 active 0/5 ISS-003
# QUE-20251205-003 completed 8/8 ISS-004
#
# Which queue would you like to execute?
# [1] QUE-20251215-001 - 3/10 completed, Issues: ISS-001, ISS-002
# [2] QUE-20251210-002 - 0/5 completed, Issues: ISS-003
```
### Execute with Worktree
```bash
/issue:execute --queue QUE-xxx --worktree
# Output:
# Created queue worktree: /repo/.ccw/worktrees/queue-exec-QUE-xxx
# Branch: queue-exec-QUE-xxx
# ### Executing Solutions (DAG batch 1): S-1, S-2
# [S-1] Executor launched (codex, 2hr timeout)
# [S-2] Executor launched (codex, 2hr timeout)
# ✓ S-1 completed: 3 tasks, 1 commit
# ✓ S-2 completed: 2 tasks, 1 commit
```
### Resume Existing Worktree
```bash
# Find existing worktrees
git worktree list
# /repo/.ccw/worktrees/queue-exec-QUE-123
# Resume execution
/issue:execute --queue QUE-123 --worktree /repo/.ccw/worktrees/queue-exec-QUE-123
# Output:
# Resuming in existing worktree: /repo/.ccw/worktrees/queue-exec-QUE-123
# Branch: queue-exec-QUE-123
# ### Executing Solutions (DAG batch 2): S-3
```
## Execution Flow
```mermaid
graph TB
A[Start] --> B{Queue ID Provided?}
B -->|No| C[List Queues]
B -->|Yes| D[Validate Queue]
C --> E[User Selects]
E --> D
D --> F{Use Worktree?}
F -->|Yes| G[Create/Resume Worktree]
F -->|No| H[Main Workspace]
G --> I[Get DAG]
H --> I
I --> J[Parallel Batches]
J --> K[Dispatch Batch 1]
K --> L1[Executor 1: S-1 Detail]
K --> L2[Executor 2: S-2 Detail]
K --> L3[Executor N: S-N Detail]
L1 --> M1[Execute All Tasks]
L2 --> M2[Execute All Tasks]
L3 --> M3[Execute All Tasks]
M1 --> N1[Commit Once]
M2 --> N2[Commit Once]
M3 --> N3[Commit Once]
N1 --> O[Mark Done]
N2 --> O
N3 --> O
O --> P{More Batches?}
P -->|Yes| I
P -->|No| Q{Worktree Used?}
Q -->|Yes| R{All Complete?}
Q -->|No| S[Done]
R -->|Yes| T[Merge Strategy]
R -->|No| I
T --> U1[Create PR]
T --> U2[Merge Main]
T --> U3[Keep Branch]
```
## Execution Model
### DAG-Based Batching
```
Batch 1 (Parallel): S-1, S-2 → No file conflicts
Batch 2 (Parallel): S-3, S-4 → No conflicts, waits for Batch 1
Batch 3 (Sequential): S-5 → Depends on S-3
```
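The batching above can be sketched as a dependency-depth walk; the real scheduler also checks file conflicts between solutions, which this sketch omits:

```javascript
// Minimal sketch: group solutions into parallel batches where every
// dependency is already completed (assumed logic, not the CCW scheduler).
function planBatches(solutions) {
  const done = new Set();
  const batches = [];
  let remaining = [...solutions];
  while (remaining.length > 0) {
    const ready = remaining.filter(s =>
      (s.depends_on || []).every(dep => done.has(dep)));
    if (ready.length === 0) throw new Error('Dependency cycle detected');
    batches.push(ready.map(s => s.id));
    for (const s of ready) done.add(s.id);
    remaining = remaining.filter(s => !done.has(s.id));
  }
  return batches;
}
```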
### Solution Execution (Within Executor)
```
ccw issue detail S-1 → Get full solution with all tasks
For each task T1, T2, T3...:
- Follow implementation steps
- Run test commands
- Verify acceptance criteria
After ALL tasks pass:
git commit -m "feat(scope): summary
Solution: S-1
Tasks completed: T1, T2, T3
Changes:
- file1: what changed
- file2: what changed
Verified: all tests passed"
ccw issue done S-1 --result '{summary, files, commit}'
```
## Executor Dispatch
### Codex Executor
```bash
ccw cli -p "## Execute Solution: S-1
..." --tool codex --mode write --id exec-S-1
# Timeout: 2 hours (7200000ms)
# Background: true
```
### Gemini Executor
```bash
ccw cli -p "## Execute Solution: S-1
..." --tool gemini --mode write --id exec-S-1
# Timeout: 1 hour (3600000ms)
# Background: true
```
### Agent Executor
```javascript
Task({
subagent_type: 'code-developer',
run_in_background: false,
description: 'Execute solution S-1',
prompt: '...' // Full execution prompt
})
```
## Worktree Management
### Create New Worktree
```bash
# One worktree for entire queue execution
git worktree add .ccw/worktrees/queue-exec-QUE-xxx -b queue-exec-QUE-xxx
# All solutions execute in this isolated workspace
# Main workspace remains untouched
```
### Resume Existing Worktree
```bash
# Find interrupted executions
git worktree list
# Output:
# /repo/.ccw/worktrees/queue-exec-QUE-123 abc1234 [queue-exec-QUE-123]
# Resume with worktree path
/issue:execute --queue QUE-123 --worktree /repo/.ccw/worktrees/queue-exec-QUE-123
```
### Worktree Completion
After all batches complete:
```bash
# Prompt for merge strategy
Queue complete. What to do with worktree branch "queue-exec-QUE-xxx"?
[1] Create PR (Recommended)
[2] Merge to main
[3] Keep branch
# Create PR
git push -u origin queue-exec-QUE-xxx
gh pr create --title "Queue QUE-xxx" --body "Issue queue execution"
git worktree remove .ccw/worktrees/queue-exec-QUE-xxx
# OR Merge to main
git merge --no-ff queue-exec-QUE-xxx -m "Merge queue QUE-xxx"
git branch -d queue-exec-QUE-xxx
git worktree remove .ccw/worktrees/queue-exec-QUE-xxx
```
## CLI Endpoints
### Queue Operations
```bash
# List queues
ccw issue queue list --brief --json
# Get DAG
ccw issue queue dag --queue QUE-xxx
# Returns: {parallel_batches: [["S-1","S-2"], ["S-3"]]}
```
### Solution Operations
```bash
# Get solution details (READ-ONLY)
ccw issue detail S-1
# Returns: Full solution with all tasks
# Mark solution complete
ccw issue done S-1 --result '{"summary":"...","files_modified":[...],"commit":{...},"tasks_completed":3}'
# Mark solution failed
ccw issue done S-1 --fail --reason '{"task_id":"T2","error_type":"test_failure","message":"..."}'
```
## Commit Message Format
```bash
feat(auth): implement OAuth2 login flow
Solution: S-1
Tasks completed: T1, T2, T3
Changes:
- src/auth/oauth.ts: Implemented OAuth2 flow
- src/auth/login.ts: Integrated OAuth with existing login
- tests/auth/oauth.test.ts: Added comprehensive tests
Verified: all tests passed
```
**Commit Types**:
- `feat`: New feature
- `fix`: Bug fix
- `refactor`: Code refactoring
- `docs`: Documentation
- `test`: Test updates
- `chore`: Maintenance tasks
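A sketch assembling the format above from a solution result; the input field names are assumptions:

```javascript
// Hypothetical builder for the commit message format shown above.
function formatCommitMessage({ type, scope, summary, solutionId, tasks, changes }) {
  const changeLines = changes.map(c => `- ${c.file}: ${c.what}`).join('\n');
  return [
    `${type}(${scope}): ${summary}`,
    '',
    `Solution: ${solutionId}`,
    `Tasks completed: ${tasks.join(', ')}`,
    '',
    'Changes:',
    changeLines,
    '',
    'Verified: all tests passed',
  ].join('\n');
}
```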
## Related Commands
- **[issue:queue](./issue-queue.md)** - Form execution queue before executing
- **[issue:plan](./issue-plan.md)** - Plan solutions before queuing
- **ccw issue retry** - Reset failed solutions for retry
- **ccw issue queue dag** - View dependency graph
- **ccw issue detail &lt;id&gt;** - View solution details
## Best Practices
1. **Use Codex executor**: Best for long-running autonomous work
2. **Enable worktree**: Keeps main workspace clean during execution
3. **Check DAG first**: Use dry-run to see execution plan
4. **Monitor progress**: Executors run in background, check completion
5. **Resume on failure**: Use existing worktree path to continue
6. **Review commits**: Each solution produces one formatted commit
## Troubleshooting
| Error | Cause | Resolution |
|-------|-------|------------|
| No queue specified | --queue argument missing | List queues and select one |
| No ready solutions | Dependencies blocked | Check DAG for blocking issues |
| Executor timeout | Solution too complex | Break into smaller solutions |
| Worktree exists | Previous incomplete execution | Resume with --worktree &lt;path&gt; |
| Partial task failure | Task reports failure | Check ccw issue done --fail output |
| Git conflicts | Parallel executors touched same files | DAG should prevent this |

---
title: issue:from-brainstorm
sidebar_label: issue:from-brainstorm
sidebar_position: 6
description: Convert brainstorm session ideas into issues with solutions
---
# issue:from-brainstorm
Bridge command that converts brainstorm-with-file session output into executable issue + solution for parallel-dev-cycle consumption.
## Description
The `issue:from-brainstorm` command converts brainstorm session ideas into structured issues with executable solutions. It loads synthesis results, selects ideas, enriches context with multi-CLI perspectives, and generates task-based solutions ready for execution.
### Key Features
- **Idea selection**: Interactive or automatic (highest-scored) selection
- **Context enrichment**: Adds clarifications and multi-perspective insights
- **Auto task generation**: Creates structured tasks from idea next_steps
- **Priority calculation**: Derives priority from idea score (0-10 → 1-5)
- **Direct binding**: Solution automatically bound to issue
- **Session metadata**: Preserves brainstorm origin in issue
## Usage
```bash
# Interactive mode - select idea from table
/issue:from-brainstorm SESSION="BS-rate-limiting-2025-01-28"
# Auto mode - select highest-scored idea
/issue:from-brainstorm SESSION="BS-caching-2025-01-28" --auto
# Pre-select idea by index
/issue:from-brainstorm SESSION="BS-auth-system-2025-01-28" --idea=0
# Auto mode with skip confirmations
/issue:from-brainstorm SESSION="BS-caching-2025-01-28" --auto -y
```
### Arguments
| Argument | Required | Description |
|----------|----------|-------------|
| `SESSION` | Yes | Session ID or path to `.workflow/.brainstorm/BS-xxx` |
| `--idea <index>` | No | Pre-select idea by index (0-based) |
| `--auto` | No | Auto-select highest-scored idea |
| `-y, --yes` | No | Skip all confirmations |
## Examples
### Interactive Mode
```bash
/issue:from-brainstorm SESSION="BS-rate-limiting-2025-01-28"
# Output:
# | # | Title | Score | Feasibility |
# |---|-------|-------|-------------|
# | 0 | Token Bucket Algorithm | 8.5 | High |
# | 1 | Sliding Window Counter | 7.2 | Medium |
# | 2 | Fixed Window | 6.1 | High |
#
# Select idea: #0
#
# ✓ Created issue: ISS-20250128-001
# ✓ Created solution: SOL-ISS-20250128-001-ab3d
# ✓ Bound solution to issue
# → Status: planned
# → Next: /issue:queue
```
### Auto Mode
```bash
/issue:from-brainstorm SESSION="BS-caching-2025-01-28" --auto
# Output:
# Auto-selected: Redis Cache Layer (Score: 9.2/10)
# ✓ Created issue: ISS-20250128-002
# ✓ Solution with 4 tasks:
# - T1: Research & Validate Approach
# - T2: Design & Create Specification
# - T3: Implement Redis Cache Layer
# - T4: Write Integration Tests
# → Status: planned
```
### Pre-select Idea
```bash
/issue:from-brainstorm SESSION="BS-auth-system-2025-01-28" --idea=1
# Skips selection, uses idea at index 1
```
## Conversion Flow
```mermaid
graph TB
A[Brainstorm Session] --> B[Load Synthesis]
B --> C["Load Perspectives (Optional)"]
C --> D["Load Synthesis Artifacts (Optional)"]
D --> E{Idea Selection}
E -->|Auto| F[Select Highest Score]
E -->|Pre-selected| F
E -->|Interactive| G[Display Table]
G --> H[User Selects]
H --> F
F --> I[Enrich Context]
I --> J[Add Clarifications]
J --> K[Add Multi-CLI Insights]
K --> L[Create Issue]
L --> M[Generate Tasks]
M --> N[Create Solution]
N --> O[Bind Solution]
O --> P[Status: Planned]
P --> Q[issue:queue]
```
## Session Files
### Input Files
```
.workflow/.brainstorm/BS-{slug}-{date}/
├── synthesis.json # REQUIRED - Top ideas with scores
├── perspectives.json # OPTIONAL - Multi-CLI insights
├── brainstorm.md # Reference only
└── .brainstorming/ # OPTIONAL - Synthesis artifacts
├── system-architect/
│ └── analysis.md # Contains clarifications
├── api-designer/
│ └── analysis.md
└── ...
```
### Synthesis Schema
```typescript
interface Synthesis {
session_id: string;
topic: string;
completed_at: string;
top_ideas: Idea[];
}
interface Idea {
index: number;
title: string;
description: string;
score: number; // 0-10
novelty: number; // 0-10
feasibility: 'Low' | 'Medium' | 'High';
key_strengths: string[];
main_challenges: string[];
next_steps: string[];
}
```
## Context Enrichment
### Base Context (Always Included)
```markdown
## Idea Description
{idea.description}
## Why This Idea
{idea.key_strengths}
## Challenges to Address
{idea.main_challenges}
## Implementation Steps
{idea.next_steps}
```
### Enhanced Context (If Available)
#### From Synthesis Artifacts
```markdown
## Requirements Clarification (system-architect)
Q: How will this integrate with existing auth?
A: Will use adapter pattern to wrap current system
## Architecture Feasibility (api-designer)
Q: What API changes are needed?
A: New /oauth/* endpoints, backward compatible
```
#### From Multi-CLI Perspectives
```markdown
## Creative Perspective
Insight: Consider gamification to improve adoption
## Pragmatic Perspective
Blocker: Current rate limiter lacks configuration
## Systematic Perspective
Pattern: Token bucket provides better burst handling
```
## Task Generation Strategy
### Task 1: Research & Validation
**Trigger**: `idea.main_challenges.length > 0`
```typescript
{
id: "T1",
title: "Research & Validate Approach",
scope: "design",
action: "Research",
implementation: [
"Investigate identified blockers",
"Review similar implementations in industry",
"Validate approach with team/stakeholders"
],
acceptance: {
criteria: [
"Blockers documented with resolution strategies",
"Feasibility assessed with risk mitigation",
"Approach validated with key stakeholders"
],
verification: [
"Research document created",
"Stakeholder approval obtained",
"Risk assessment completed"
]
},
priority: 1
}
```
### Task 2: Design & Specification
**Trigger**: `idea.key_strengths.length > 0`
```typescript
{
id: "T2",
title: "Design & Create Specification",
scope: "design",
action: "Design",
implementation: [
"Create detailed design document",
"Define success metrics and KPIs",
"Plan implementation phases"
],
acceptance: {
criteria: [
"Design document complete with diagrams",
"Success metrics defined and measurable",
"Implementation plan with timeline"
],
verification: [
"Design reviewed and approved",
"Metrics tracked in dashboard",
"Phase milestones defined"
]
},
priority: 2
}
```
### Task 3+: Implementation Tasks
**Trigger**: `idea.next_steps[]`
Each next_step becomes a task:
```typescript
{
id: "T3",
title: "{next_steps[0]}", // max 60 chars
scope: inferScope(step), // backend, frontend, infra...
action: detectAction(step), // Implement, Create, Update...
implementation: [
"Execute: {next_steps[0]}",
"Follow design specification",
"Write unit and integration tests"
],
acceptance: {
criteria: [
"Step implemented per design",
"Tests passing with coverage >80%",
"Code reviewed and approved"
],
verification: [
"Functional tests pass",
"Code coverage meets threshold",
"Review approved"
]
},
priority: 3
}
```
### Fallback Task
**Trigger**: No tasks generated from above
```typescript
{
id: "T1",
title: idea.title,
scope: "implementation",
action: "Implement",
implementation: [
"Analyze requirements and context",
"Design solution approach",
"Implement core functionality",
"Write comprehensive tests",
"Document changes and usage"
],
acceptance: {
criteria: [
"Core functionality working",
"Tests passing",
"Documentation complete"
],
verification: [
"Manual testing successful",
"Automated tests pass",
"Docs reviewed"
]
},
priority: 3
}
```
## Priority Calculation
### Issue Priority
```javascript
// idea.score: 0-10 → priority: 1-5
priority = Math.max(1, Math.min(5, Math.ceil((11 - score) / 2)));
// Examples:
// score 9-10 → priority 1 (critical)
// score 7-8  → priority 2 (high)
// score 5-6  → priority 3 (medium)
// score 3-4  → priority 4 (low)
// score 0-2  → priority 5 (lowest)
```
### Task Priority
- Research task: 1 (highest - validates approach)
- Design task: 2 (high - foundation for implementation)
- Implementation tasks: 3 by default
- Testing/documentation: 4-5 (lower priority)
## Output Structure
### Issue Created
```typescript
interface Issue {
id: string; // ISS-YYYYMMDD-NNN
title: string; // From idea.title
status: 'planned'; // Auto-set after binding
priority: number; // Derived from score
context: string; // Enriched description
source: 'brainstorm';
labels: string[]; // ['brainstorm', perspective, feasibility]
expected_behavior: string; // From key_strengths
actual_behavior: string; // From main_challenges
affected_components: string[]; // Extracted from description
bound_solution_id: string; // Auto-bound
_brainstorm_metadata: {
session_id: string;
idea_score: number;
novelty: number;
feasibility: string;
clarifications_count: number;
};
}
```
### Solution Created
```typescript
interface Solution {
id: string; // SOL-{issue-id}-{4-char-uid}
description: string; // idea.title
approach: string; // idea.description
tasks: Task[];
analysis: {
risk: 'low' | 'medium' | 'high';
impact: 'low' | 'medium' | 'high';
complexity: 'low' | 'medium' | 'high';
};
is_bound: true;
created_at: string;
bound_at: string;
}
```
## Integration Flow
```
brainstorm-with-file
├─ synthesis.json (top_ideas)
├─ perspectives.json (multi-CLI insights)
└─ .brainstorming/** (synthesis artifacts)
issue:from-brainstorm ◄─── This command
├─ ISS-YYYYMMDD-NNN (enriched issue)
└─ SOL-{issue-id}-{uid} (structured solution)
issue:queue
issue:execute
Complete Solution
```
## Related Commands
- **[workflow:brainstorm-with-file](#)** - Generate brainstorm sessions
- **[workflow:brainstorm:synthesis](#)** - Add clarifications to brainstorm
- **[issue:new](./issue-new.md)** - Create issues from GitHub or text
- **[issue:plan](./issue-plan.md)** - Plan solutions for issues
- **[issue:queue](./issue-queue.md)** - Form execution queue
- **[issue:execute](./issue-execute.md)** - Execute with parallel-dev-cycle
## CLI Endpoints
```bash
# Create issue
ccw issue create << 'EOF'
{
"title": "...",
"context": "...",
"priority": 3,
"source": "brainstorm",
"labels": ["brainstorm", "creative", "feasibility-high"]
}
EOF
# Bind solution
ccw issue bind {issue-id} {solution-id}
# Update status
ccw issue update {issue-id} --status planned
```

---
title: issue:new
sidebar_label: issue:new
sidebar_position: 1
description: Create new issue with automatic categorization
---
# issue:new
Create structured issues from GitHub URLs, text descriptions, or brainstorm sessions with automatic categorization and priority detection.
## Description
The `issue:new` command creates structured issues from multiple input sources with intelligent clarity detection. It asks clarifying questions only when needed and supports both local and GitHub-synced issues.
### Key Features
- **Multi-source input**: GitHub URLs, text descriptions, structured input
- **Clarity detection**: Asks questions only for vague inputs
- **GitHub integration**: Optional publishing to GitHub with bidirectional sync
- **Smart categorization**: Automatic tag and priority detection
- **ACE integration**: Lightweight codebase context for affected components
## Usage
```bash
# Clear inputs - direct creation (no questions)
/issue:new https://github.com/owner/repo/issues/123
/issue:new "Login fails with special chars. Expected: success. Actual: 500"
# Vague input - will ask 1 clarifying question
/issue:new "something wrong with auth"
# With priority override
/issue:new "Database connection times out" --priority 2
# Auto mode - skip confirmations
/issue:new "Fix navigation bug" -y
```
### Arguments
| Argument | Required | Description |
|----------|----------|-------------|
| `input` | Yes | GitHub URL, issue description, or structured text |
| `-y, --yes` | No | Skip confirmation questions |
| `--priority <1-5>` | No | Override priority (1=critical, 5=low) |
## Examples
### Create from GitHub URL
```bash
/issue:new https://github.com/owner/repo/issues/42
# Output: Fetches issue details via gh CLI, creates immediately
```
### Create with Structured Text
```bash
/issue:new "Login fails with special chars. Expected: success. Actual: 500 error"
# Output: Parses structure, creates issue with extracted fields
```
### Create with Clarification
```bash
/issue:new "auth broken"
# System asks: "Please describe the issue in more detail:"
# User provides: "Users cannot log in when password contains quotes"
# Issue created with enriched context
```
### Create with GitHub Publishing
```bash
/issue:new "Memory leak in WebSocket handler"
# System asks: "Would you like to publish to GitHub?"
# User selects: "Yes, publish to GitHub"
# Output:
# Local issue: ISS-20251229-001
# GitHub issue: https://github.com/org/repo/issues/123
# Both linked bidirectionally
```
## Issue Lifecycle Flow
```mermaid
graph LR
A[Input Source] --> B{Clarity Score}
B -->|Score 3: GitHub| C[Fetch via gh CLI]
B -->|Score 2: Structured| D[Parse Text]
B -->|Score 0-1: Vague| E[Ask 1 Question]
C --> F[Create Issue]
D --> F
E --> F
F --> G{Publish to GitHub?}
G -->|Yes| H[Create + Link GitHub]
G -->|No| I[Local Only]
H --> J[issue:plan]
I --> J
```
## Issue Fields
### Core Fields
| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Issue ID (`GH-123` or `ISS-YYYYMMDD-NNN`) |
| `title` | string | Issue title (max 60 chars) |
| `status` | enum | `registered` \| `planned` \| `queued` \| `in_progress` \| `completed` \| `failed` |
| `priority` | number | 1 (critical) to 5 (low) |
| `context` | string | Problem description (single source of truth) |
| `source` | enum | `github` \| `text` \| `discovery` \| `brainstorm` \| `converted` |
### Optional Fields
| Field | Type | Description |
|-------|------|-------------|
| `source_url` | string | Original source URL |
| `labels` | string[] | Category tags |
| `expected_behavior` | string | Expected system behavior |
| `actual_behavior` | string | Actual problematic behavior |
| `affected_components` | string[] | Related files/modules (via ACE) |
| `github_url` | string | Linked GitHub issue URL |
| `github_number` | number | GitHub issue number |
| `feedback` | object[] | Failure history and clarifications |
### Feedback Schema
```typescript
interface Feedback {
type: 'failure' | 'clarification' | 'rejection';
stage: 'new' | 'plan' | 'execute';
content: string;
created_at: string;
}
```
## Clarity Detection
### Scoring Rules
| Score | Criteria | Behavior |
|-------|----------|----------|
| **3** | GitHub URL | Fetch directly, no questions |
| **2** | Structured text (has "expected:", "actual:", etc.) | Parse structure, may use ACE for components |
| **1** | Long text (>50 chars) | Quick ACE hint if components missing |
| **0** | Vague/short text | Ask 1 clarifying question |
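The scoring rules above can be sketched as a small function; this is a minimal illustration, and the command's real heuristic may weigh additional signals:

```javascript
// Minimal clarity scorer mirroring the table: GitHub URL → 3,
// structured keywords → 2, long free text → 1, otherwise vague → 0.
function clarityScore(input) {
  if (/^https:\/\/github\.com\/[^/]+\/[^/]+\/issues\/\d+/.test(input)) return 3;
  if (/\b(expected|actual|affects|steps):/i.test(input)) return 2;
  if (input.length > 50) return 1;
  return 0;
}
```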
### Structured Text Patterns
The command recognizes these keywords for automatic parsing:
- `expected:` / `Expected:`
- `actual:` / `Actual:`
- `affects:` / `Affects:`
- `steps:` / `Steps:`
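A minimal parser for these keyword patterns might look like the following; `parseStructuredText` and its return shape are illustrative, mirroring the issue fields above, and the command's own parser is more involved:

```javascript
// Hypothetical sketch: extract structured fields from free text using the
// keyword patterns listed above (case-insensitive, value runs to next period).
function parseStructuredText(input) {
  const grab = (key) => {
    const m = input.match(new RegExp(`${key}:\\s*([^.]+)`, 'i'));
    return m ? m[1].trim() : undefined;
  };
  return {
    expected_behavior: grab('expected'),
    actual_behavior: grab('actual'),
    affected_components: grab('affects')?.split(',').map(s => s.trim()),
  };
}
```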
## GitHub Publishing Workflow
```mermaid
sequenceDiagram
participant User
participant Command
participant CLI
participant GitHub
User->>Command: /issue:new "description"
Command->>CLI: ccw issue create (local)
CLI-->>Command: ISS-YYYYMMDD-NNN
Command->>User: Publish to GitHub?
User->>Command: Yes
Command->>GitHub: gh issue create
GitHub-->>Command: https://github.com/.../123
Command->>CLI: ccw issue update --github-url --github-number
CLI-->>Command: Issue updated
Command-->>User: Local + GitHub linked
```
## Related Commands
- **[issue:plan](./issue-plan.md)** - Generate solution for issue
- **[issue:queue](./issue-queue.md)** - Form execution queue
- **[issue:discover](./issue-discover.md)** - Discover issues from codebase
- **[issue:from-brainstorm](./issue-from-brainstorm.md)** - Convert brainstorm to issue
- **[issue:convert-to-plan](./issue-convert-to-plan.md)** - Convert plans to issues
- **[issue:execute](./issue-execute.md)** - Execute issue queue
## CLI Endpoints
The command uses these CLI endpoints:
```bash
# Create issue
echo '{"title":"...","context":"...","priority":3}' | ccw issue create
# Update with GitHub binding
ccw issue update <id> --github-url "<url>" --github-number <num>
# View issue
ccw issue status <id> --json
```

---
title: issue:plan
sidebar_label: issue:plan
sidebar_position: 3
description: Plan issue solutions with exploration and task breakdown
---
# issue:plan
Batch plan issue resolution using intelligent agent-driven exploration and planning with failure-aware retry logic.
## Description
The `issue:plan` command uses the issue-plan-agent to combine exploration and planning into a single closed-loop workflow. It generates executable solutions with task breakdowns, handles previous failure analysis, and supports batch processing of up to 3 issues per agent.
### Key Features
- **Explore + Plan**: Combined workflow for faster planning
- **Failure-aware**: Analyzes previous failures to avoid repeats
- **Batch processing**: Groups semantically similar issues
- **Auto-binding**: Single solutions automatically bound
- **Conflict detection**: Identifies cross-issue file conflicts
- **GitHub integration**: Adds GitHub comment tasks when applicable
## Usage
```bash
# Plan all pending issues (default)
/issue:plan
# Plan specific issues
/issue:plan GH-123,GH-124,GH-125
# Plan single issue
/issue:plan ISS-20251229-001
# Explicit all-pending
/issue:plan --all-pending
# Custom batch size
/issue:plan --batch-size 5
```
### Arguments
| Argument | Required | Description |
|----------|----------|-------------|
| `issue-ids` | No | Comma-separated issue IDs (default: all pending) |
| `--all-pending` | No | Explicit flag for all pending issues |
| `--batch-size <n>` | No | Max issues per batch (default: 3) |
| `-y, --yes` | No | Auto-bind without confirmation |
## Examples
### Plan All Pending Issues
```bash
/issue:plan
# Output:
# Found 5 pending issues
# Processing 5 issues in 2 batch(es)
# [Batch 1/2] Planning GH-123, GH-124, GH-125...
# ✓ GH-123: SOL-GH-123-a7x9 (3 tasks)
# ✓ GH-124: SOL-GH-124-b3m2 (4 tasks)
# ✓ GH-125: SOL-GH-125-c8k1 (2 tasks)
```
### Plan with Failure Retry
```bash
/issue:plan ISS-20251229-001
# Agent analyzes previous failure from issue.feedback
# Avoids same approach that failed before
# Creates alternative solution with verification steps
```
### Multiple Solutions Selection
```bash
/issue:plan GH-999
# Agent generates 2 alternative solutions
# Interactive prompt:
# Issue GH-999: which solution to bind?
# [1] SOL-GH-999-a1b2 (4 tasks) - Refactor approach
# [2] SOL-GH-999-c3d4 (6 tasks) - Rewrite approach
```
## Issue Lifecycle Flow
```mermaid
graph TB
A[Issues to Plan] --> B[Load Issue Metadata]
B --> C[Intelligent Grouping]
C --> D[Launch Agents Parallel]
D --> E1[Agent Batch 1]
D --> E2[Agent Batch 2]
E1 --> Q1{Analyze<br/>Failures?}
E2 --> Q2{Analyze<br/>Failures?}
Q1 -->|Yes| F1[Extract Failure Patterns]
Q1 -->|No| G1[Explore Codebase]
Q2 -->|Yes| F2[Extract Failure Patterns]
Q2 -->|No| G2[Explore Codebase]
F1 --> G1
F2 --> G2
G1 --> H1[Generate Solution]
G2 --> H2[Generate Solution]
H1 --> I{Single<br/>Solution?}
H2 --> I
I -->|Yes| J[Auto-Bind]
I -->|No| K[User Selection]
K --> L[Bind Selected]
J --> M[Status: Planned]
L --> M
```
## Planning Workflow
### Phase 1: Issue Loading
```bash
# Brief metadata only (to avoid context overflow)
ccw issue list --status pending,registered --json
```
**Returns**: Array of `{id, title, tags}`
### Phase 2: Agent Exploration (Parallel)
Each agent performs:
1. **Fetch full issue details**
```bash
ccw issue status <id> --json
```
2. **Analyze failure history** (if exists)
- Extract `issue.feedback` where `type='failure'`, `stage='execute'`
- Parse error_type, message, task_id, solution_id
- Identify repeated patterns and root causes
- Design alternative approach
3. **Load project context**
- `.workflow/project-tech.json` (technology stack)
- `.workflow/project-guidelines.json` (constraints)
4. **Explore codebase** (ACE semantic search)
5. **Generate solution** (following solution-schema.json)
- Tasks with quantified acceptance criteria
- Verification steps to prevent same failure
- Reference to previous failures in approach
6. **Write and bind**
- Write to `solutions/{issue-id}.jsonl`
- Execute `ccw issue bind <issue-id> <solution-id>` if single solution
### Phase 3: Solution Selection
Multiple solutions → User selects via AskUserQuestion
### Phase 4: Summary
```bash
## Done: 5 issues → 5 planned
Bound Solutions:
- GH-123: SOL-GH-123-a7x9 (3 tasks)
- GH-124: SOL-GH-124-b3m2 (4 tasks)
- ISS-20251229-001: SOL-ISS-20251229-001-c8k1 (2 tasks)
Next: /issue:queue
```
## Failure-Aware Planning
### Feedback Schema
```typescript
interface FailureFeedback {
type: 'failure';
stage: 'execute';
content: {
task_id: string;
solution_id: string;
error_type: 'test_failure' | 'compilation' | 'timeout' | 'runtime_error';
message: string;
timestamp: string;
};
created_at: string;
}
```
### Failure Analysis Rules
1. **Extract patterns**: Repeated errors indicate systemic issues
2. **Identify root cause**: Test failure vs. compilation vs. timeout
3. **Design alternative**: Change approach, not just implementation
4. **Add prevention**: Explicit verification steps for same error
5. **Document lessons**: Reference failures in solution.approach
## CLI Endpoints
### Issue Operations
```bash
# List pending issues (brief)
ccw issue list --status pending --brief
# Get full issue details (agent use)
ccw issue status <id> --json
# Bind solution to issue
ccw issue bind <issue-id> <solution-id>
# List with bound solutions
ccw issue solutions --status planned --brief
```
### Solution Schema
```typescript
interface Solution {
id: string; // SOL-{issue-id}-{4-char-uid}
description: string;
approach: string;
tasks: Task[];
exploration_context: {
relevant_files: string[];
dependencies: string[];
patterns: string[];
};
failure_analysis?: {
previous_failures: string[];
alternative_approach: string;
prevention_steps: string[];
};
is_bound: boolean;
created_at: string;
bound_at?: string;
}
interface Task {
id: string; // T1, T2, T3...
title: string;
scope: string;
action: string;
implementation: string[];
acceptance: {
criteria: string[];
verification: string[];
};
test?: {
unit?: string[];
integration?: string[];
commands?: string[];
};
}
```
## Related Commands
- **[issue:new](./issue-new.md)** - Create issues before planning
- **[issue:queue](./issue-queue.md)** - Form execution queue from planned issues
- **[issue:execute](./issue-execute.md)** - Execute planned solutions
- **[issue:from-brainstorm](./issue-from-brainstorm.md)** - Convert brainstorm to planned issue
- **[issue:convert-to-plan](./issue-convert-to-plan.md)** - Convert existing plans to issues
## Best Practices
1. **Plan in batches**: Default 3 issues per batch for optimal performance
2. **Review failures**: Check issue feedback before replanning
3. **Verify conflicts**: Agent reports file conflicts across issues
4. **GitHub issues**: Agent adds a final task to comment on the linked GitHub issue
5. **Acceptance criteria**: Ensure tasks have quantified success metrics
6. **Test coverage**: Each task should include verification steps
## Troubleshooting
| Error | Cause | Resolution |
|-------|-------|------------|
| No pending issues | All issues already planned | Create new issues or use --all-pending flag |
| Agent timeout | Large codebase exploration | Reduce batch size or limit scope |
| No solutions generated | Insufficient context | Provide more detailed issue description |
| Solution conflicts | Multiple issues touch same files | Agent reports conflicts, resolve manually |
| Bind failure | Solution file write error | Check permissions, retry |

---
title: issue:queue
sidebar_label: issue:queue
sidebar_position: 4
description: Form execution queue from bound solutions with conflict resolution
---
# issue:queue
Queue formation command using issue-queue-agent that analyzes bound solutions, resolves inter-solution conflicts, and creates an ordered execution queue.
## Description
The `issue:queue` command creates execution queues from planned issues with bound solutions. It performs solution-level conflict analysis, builds dependency DAGs, calculates semantic priority, and assigns execution groups (parallel/sequential).
### Key Features
- **Solution-level granularity**: Queue items are complete solutions, not individual tasks
- **Conflict resolution**: Automatic detection and user clarification for high-severity conflicts
- **Multi-queue support**: Create parallel queues for distributed execution
- **Semantic priority**: Intelligent ordering based on issue priority and task complexity
- **DAG-based grouping**: Parallel (P*) and Sequential (S*) execution groups
- **Queue history**: Track all queues with active queue management
## Usage
```bash
# Form new queue from all bound solutions
/issue:queue
# Form 3 parallel queues (solutions distributed)
/issue:queue --queues 3
# Form queue for specific issue only
/issue:queue --issue GH-123
# Append to active queue
/issue:queue --append GH-124
# List all queues
/issue:queue --list
# Switch active queue
/issue:queue --switch QUE-xxx
# Archive completed queue
/issue:queue --archive
```
### Arguments
| Argument | Required | Description |
|----------|----------|-------------|
| `--queues <n>` | No | Number of parallel queues (default: 1) |
| `--issue <id>` | No | Form queue for specific issue only |
| `--append <id>` | No | Append issue to active queue |
| `--force` | No | Skip active queue check, always create new |
| `-y, --yes` | No | Auto-confirm, use recommended resolutions |
### CLI Subcommands
```bash
ccw issue queue list # List all queues with status
ccw issue queue add <issue-id> # Add issue to queue
ccw issue queue add <issue-id> -f # Force add to new queue
ccw issue queue merge <src> --queue <target> # Merge queues
ccw issue queue switch <queue-id> # Switch active queue
ccw issue queue archive # Archive current queue
ccw issue queue delete <queue-id> # Delete queue from history
```
## Examples
### Create Single Queue
```bash
/issue:queue
# Output:
# Loading 5 bound solutions...
# Generating queue: QUE-20251227-143000
# Analyzing conflicts...
# ✓ Queue created: 5 solutions, 3 execution groups
# - P1: S-1, S-2 (parallel)
# - S1: S-3 (sequential)
# - P2: S-4, S-5 (parallel)
# Next: /issue:execute --queue QUE-20251227-143000
```
### Create Multiple Parallel Queues
```bash
/issue:queue --queues 3
# Distributes solutions to minimize cross-queue conflicts
# Creates: QUE-20251227-143000-1, QUE-20251227-143000-2, QUE-20251227-143000-3
# All linked via queue_group: QGR-20251227-143000
```
### Append to Existing Queue
```bash
/issue:queue --append GH-124
# Checks active queue exists
# Adds new solution to end of active queue
# Recalculates execution groups
```
## Issue Lifecycle Flow
```mermaid
graph TB
A[Planned Issues + Bound Solutions] --> B[Load Solutions]
B --> C{Multi-Queue?}
C -->|Yes| D[Partition Solutions]
C -->|No| E[Single Queue]
D --> F[Launch N Agents Parallel]
E --> G[Launch Agent]
F --> H[Conflict Analysis]
G --> H
H --> I{High-Severity<br/>Conflicts?}
I -->|Yes| J[User Clarification]
I -->|No| K[Build DAG]
J --> K
K --> L[Calculate Priority]
L --> M[Assign Groups]
M --> N[Write Queue + Index]
N --> O{Active Queue<br/>Exists?}
O -->|No| P[Activate New Queue]
O -->|Yes| Q[User Decision]
Q --> R1[Merge]
Q --> R2[Switch]
Q --> R3[Cancel]
R1 --> S[Queue Ready]
R2 --> S
P --> S
```
## Execution Groups
### Parallel Groups (P*)
Solutions with NO file conflicts can execute simultaneously:
```
P1: S-1, S-2, S-3 → 3 executors work in parallel
```
### Sequential Groups (S*)
Solutions with shared dependencies must execute in order:
```
S1: S-4 → S-5 → S-6 → Execute one after another
```
### Mixed Execution
```
P1: S-1, S-2 (parallel)
S1: S-3 (sequential, waits for P1)
P2: S-4, S-5 (parallel, waits for S1)
```
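The group semantics above can be sketched as a simple runner; `runSolution` is a placeholder for the real per-solution executor, and groups follow the `ExecutionGroup` shape from the queue schema:

```javascript
// Sketch: walk execution groups in order. Parallel groups run all items
// concurrently; sequential groups run items one at a time. Each group
// completes before the next group starts, matching the mixed example above.
async function runQueue(groups, runSolution) {
  for (const group of groups) {
    if (group.type === 'parallel') {
      await Promise.all(group.items.map(runSolution));
    } else {
      for (const item of group.items) {
        await runSolution(item);
      }
    }
  }
}
```

For instance, groups `[{id: 'P1', type: 'parallel', items: ['S-1', 'S-2']}, {id: 'S1', type: 'sequential', items: ['S-3']}]` run S-1 and S-2 concurrently, then S-3.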
## Conflict Types
### 1. File Conflicts
Solutions modify the same file:
```json
{
"conflict_id": "CFT-1",
"type": "file",
"severity": "high",
"solutions": ["S-1", "S-2"],
"files": ["src/auth/login.ts"],
"resolution": "sequential"
}
```
**Resolution**: S-1 before S-2 in sequential group
### 2. API Conflicts
Solutions change shared interfaces:
```json
{
"conflict_id": "CFT-2",
"type": "api",
"severity": "high",
"solutions": ["S-3", "S-4"],
"interfaces": ["AuthService.login()"],
"resolution": "sequential"
}
```
**Resolution**: User clarifies which approach to use
### 3. Data Conflicts
Solutions modify same database schema:
```json
{
"conflict_id": "CFT-3",
"type": "data",
"severity": "medium",
"solutions": ["S-5", "S-6"],
"tables": ["users"],
"resolution": "sequential"
}
```
**Resolution**: S-5 before S-6
### 4. Dependency Conflicts
Solutions require incompatible versions:
```json
{
"conflict_id": "CFT-4",
"type": "dependency",
"severity": "high",
"solutions": ["S-7", "S-8"],
"packages": ["redis@4.x vs 5.x"],
"resolution": "clarification"
}
```
**Resolution**: User selects version or defers one solution
### 5. Architecture Conflicts
Solutions have opposing architectural approaches:
```json
{
"conflict_id": "CFT-5",
"type": "architecture",
"severity": "medium",
"solutions": ["S-9", "S-10"],
"approaches": ["monolithic", "microservice"],
"resolution": "clarification"
}
```
**Resolution**: User selects approach or separates concerns
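For illustration, the simplest of these checks, the file conflict, could be detected as below; the agent's real analysis also covers the api/data/dependency/architecture cases:

```javascript
// Sketch: flag every pair of queued solutions that touch the same file,
// emitting conflict records in the shape shown above (type "file",
// resolution "sequential"). Inputs follow the QueueSolution schema.
function detectFileConflicts(solutions) {
  const conflicts = [];
  for (let i = 0; i < solutions.length; i++) {
    for (let j = i + 1; j < solutions.length; j++) {
      const shared = solutions[i].files_touched
        .filter(f => solutions[j].files_touched.includes(f));
      if (shared.length > 0) {
        conflicts.push({
          conflict_id: `CFT-${conflicts.length + 1}`,
          type: 'file',
          severity: 'high',
          solutions: [solutions[i].item_id, solutions[j].item_id],
          files: shared,
          resolution: 'sequential',
        });
      }
    }
  }
  return conflicts;
}
```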
## Queue Structure
### Directory Layout
```
.workflow/issues/queues/
├── index.json # Queue index (active + history)
├── QUE-20251227-143000.json # Individual queue file
├── QUE-20251227-143000-1.json # Multi-queue partition 1
├── QUE-20251227-143000-2.json # Multi-queue partition 2
└── QUE-20251227-143000-3.json # Multi-queue partition 3
```
### Index Schema
```typescript
interface QueueIndex {
active_queue_id: string | null;
active_queue_group: string | null;
queues: QueueEntry[];
}
interface QueueEntry {
id: string;
queue_group?: string; // Links multi-queue partitions
queue_index?: number; // Position in group (1-based)
total_queues?: number; // Total queues in group
status: 'active' | 'archived' | 'deleted';
issue_ids: string[];
total_solutions: number;
completed_solutions: number;
created_at: string;
}
```
### Queue File Schema
```typescript
interface Queue {
queue_id: string;
queue_group?: string;
solutions: QueueSolution[];
execution_groups: ExecutionGroup[];
conflicts: Conflict[];
priority_order: string[];
created_at: string;
}
interface QueueSolution {
item_id: string; // S-1, S-2, S-3...
issue_id: string;
solution_id: string;
status: 'pending' | 'in_progress' | 'completed' | 'failed';
task_count: number;
files_touched: string[];
priority_score: number;
}
interface ExecutionGroup {
id: string; // P1, S1, P2...
type: 'parallel' | 'sequential';
items: string[]; // S-1, S-2...
}
```
## Clarification Flow
When high-severity conflicts exist without clear resolution:
```bash
# Interactive prompt
[CFT-1] File conflict: src/auth/login.ts modified by both S-1 and S-2
Options:
[1] Sequential: Execute S-1 first, then S-2
[2] Sequential: Execute S-2 first, then S-1
[3] Merge: Combine changes into single solution
[4] Defer: Remove one solution from queue
User selects: [1]
# Agent resumes with resolution
# Updates queue with sequential ordering: S1: [S-1, S-2]
```
## Related Commands
- **[issue:plan](./issue-plan.md)** - Plan solutions before queuing
- **[issue:execute](./issue-execute.md)** - Execute queued solutions
- **[issue:new](./issue-new.md)** - Create issues to plan and queue
- **ccw issue queue dag** - View dependency graph
- **ccw issue next** - Get next item from queue
## Best Practices
1. **Plan before queue**: Ensure all issues have bound solutions
2. **Review conflicts**: Check conflict report before execution
3. **Use parallel queues**: For large projects, distribute work
4. **Archive completed**: Keep queue history for reference
5. **Check unplanned**: Review planned but unqueued issues
6. **Validate DAG**: Ensure no circular dependencies
## CLI Endpoints
```bash
# List planned issues with bound solutions
ccw issue solutions --status planned --brief
# Create/update queue
ccw issue queue form
# Sync issue statuses from queue
ccw issue update --from-queue [queue-id]
# View queue DAG
ccw issue queue dag --queue <queue-id>
# Get next item
ccw issue next --queue <queue-id>
```

---
title: /memory:compact
sidebar_label: /memory:compact
sidebar_position: 6
description: Compact session memory into structured text for recovery
---
# /memory:compact
Compress current session working memory into structured text optimized for session recovery and persistent storage.
## Overview
The `/memory:compact` command compresses the current session's working memory into structured text, extracting critical information and saving it to persistent storage via MCP `core_memory` tool.
**Parameters**:
- `--description="..."`: Custom session description
- `--tags=<tag1,tag2>`: Add custom tags
- `--force`: Override existing memory without confirmation
**Execution Flow**:
1. Session Analysis → 2. Structure Extraction → 3. Text Generation → 4. MCP Import
## Features
- **Session Compression** - Extracts key information from working memory
- **Structured Format** - Organizes content for easy recovery
- **Critical State Capture** - Preserves objectives, plans, and decisions
- **Tag Support** - Add custom tags for organization
- **Persistent Storage** - Saves via MCP core_memory tool
- **Session Recovery** - Enables resuming from compacted state
## Usage
```bash
# Compact current session
/memory:compact
# With custom description
/memory:compact --description="User authentication implementation"
# With tags
/memory:compact --tags=auth,security,api
# Force overwrite
/memory:compact --force
```
## Execution Flow
### Phase 1: Session Analysis
Analyze current session state to extract:
- Session objectives and goals
- Implementation plan
- Files modified
- Key decisions made
- Constraints and requirements
- Current state
### Phase 2: Structure Extraction
```javascript
const sessionAnalysis = {
objective: extract_objective(session),
plan: extract_plan(session),
files: extract_files(session),
decisions: extract_decisions(session),
constraints: extract_constraints(session),
state: extract_state(session),
notes: extract_notes(session)
};
```
### Phase 3: Structured Text Generation
```text
# Session: {session_id}
## Objective
{objective}
## Implementation Plan
{plan}
## Files Modified
{files}
## Key Decisions
{decisions}
## Constraints
{constraints}
## Current State
{state}
## Notes
{notes}
```
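A minimal renderer joining the Phase 2 analysis into this layout might look like the sketch below; field names follow the Phase 2 example, and the command assembles richer sections (tables, tags, timestamps) than this:

```javascript
// Hypothetical sketch: render the session analysis object into the
// structured-text layout shown above.
function renderSessionText(sessionId, analysis) {
  return [
    `# Session: ${sessionId}`,
    `## Objective`, analysis.objective,
    `## Implementation Plan`, analysis.plan,
    `## Files Modified`, analysis.files.join('\n'),
    `## Key Decisions`, analysis.decisions.map((d, i) => `${i + 1}. ${d}`).join('\n'),
    `## Current State`, analysis.state,
  ].join('\n');
}
```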
### Phase 4: MCP Import
```javascript
mcp__ccw-tools__core_memory({
operation: "import",
text: structuredText
})
```
## Output Format
```json
{
"operation": "import",
"id": "CMEM-YYYYMMDD-HHMMSS",
"message": "Created memory: CMEM-YYYYMMDD-HHMMSS"
}
```
## Structured Text Template
```markdown
# Session: {session_id}
**Date**: {timestamp}
**Description**: {custom_description or auto-generated}
**Tags**: {tags}
---
## Objective
{session_objective}
---
## Implementation Plan
{implementation_plan}
---
## Files Modified
| File | Changes |
|------|---------|
| {file1} | {changes1} |
| {file2} | {changes2} |
---
## Key Decisions
1. {decision1}
2. {decision2}
3. {decision3}
---
## Constraints
- {constraint1}
- {constraint2}
---
## Current State
{current_state}
---
## Notes
{additional_notes}
```
## Examples
### Basic Usage
```bash
# Compact current session
/memory:compact
# Output:
# Analyzing session...
# Extracting key information...
# Generating structured text...
# Importing to core memory...
# ✅ Memory compacted: CMEM-20250203-143022
```
### With Custom Description
```bash
# Compact with description
/memory:compact --description="OAuth2 implementation"
# Description is saved with the memory
```
### With Tags
```bash
# Compact with tags
/memory:compact --tags=oauth,authentication,security
# Tags help with organization and retrieval
```
## Recovery
To recover a compacted session:
```bash
# List available memories
mcp__ccw-tools__core_memory({ operation: "list" })
# Export specific memory
mcp__ccw-tools__core_memory({ operation: "export", id: "CMEM-YYYYMMDD-HHMMSS" })
# Search for memories
mcp__ccw-tools__core_memory({ operation: "search", query: "oauth" })
```
## Use Cases
1. **Session Handoff** - Preserve context for later continuation
2. **Knowledge Base** - Store insights and decisions for reference
3. **Team Sharing** - Share session state with team members
4. **Documentation** - Generate structured records of work sessions
5. **Recovery** - Restore session state after interruption
## Related Commands
- **/memory:load** - Load project context into memory
- **/memory:update-full** - Update all CLAUDE.md files
- **/memory:update-related** - Update changed CLAUDE.md files
## Notes
- **Persistent storage** - Saved via MCP core_memory tool
- **Structured format** - Optimized for readability and parsing
- **Automatic ID generation** - Format: CMEM-YYYYMMDD-HHMMSS
- **Force option** - Override existing memory without confirmation
- **Tag support** - Add custom tags for organization and search
- **Custom description** - Add meaningful description for easy identification

---
title: /memory:docs-full-cli
sidebar_label: /memory:docs-full-cli
sidebar_position: 4
description: Generate full CLI documentation for all project modules
---
# /memory:docs-full-cli
Generate comprehensive CLI documentation for all project modules using batched agent execution with automatic tool fallback.
## Overview
The `/memory:docs-full-cli` command generates complete documentation for all modules in the project using CLI tools with intelligent batching and automatic fallback.
**Parameters**:
- `--tool <gemini|qwen|codex>`: Primary tool (default: gemini)
- `--path <directory>`: Target directory (default: project root)
**Execution Flow**:
1. Module Detection → 2. Plan Presentation → 3. Batched Generation → 4. Verification
## Features
- **Full Coverage** - Documents all modules in the project
- **Intelligent Batching** - Groups modules by depth (4 modules/batch)
- **Automatic Fallback** - gemini→qwen→codex on failure
- **Depth Sequential** - Process depths N→0, parallel batches within depth
- **Smart Filtering** - Auto-detects and skips tests/build/config/docs
## Usage
```bash
# Generate full documentation
/memory:docs-full-cli
# Target specific directory
/memory:docs-full-cli --path src/auth
# Use specific tool
/memory:docs-full-cli --tool qwen
```
## Tool Fallback Hierarchy
```javascript
--tool gemini → [gemini, qwen, codex] // default
--tool qwen → [qwen, gemini, codex]
--tool codex → [codex, gemini, qwen]
```
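The fallback table above amounts to a one-line helper; `constructToolOrder` is an illustrative name for the `construct_tool_order` step used later in the execution flow:

```javascript
// Sketch: start with the primary tool, then append the remaining tools
// in the default order (gemini, qwen, codex).
function constructToolOrder(primary) {
  const defaults = ['gemini', 'qwen', 'codex'];
  return [primary, ...defaults.filter(t => t !== primary)];
}
```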
## Execution Flow
### Phase 1: Module Detection & Analysis
```javascript
// Get module structure with classification
Bash({command: "ccw tool exec get_modules_by_depth '{\"format\":\"list\"}' | ccw tool exec classify_folders '{}'", run_in_background: false});
// OR with path parameter
Bash({command: "cd &lt;target-path&gt; && ccw tool exec get_modules_by_depth '{\"format\":\"list\"}' | ccw tool exec classify_folders '{}'", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|type:<code|navigation>|...` to extract module paths, types, and count.
**Smart filter**: Auto-detect and skip tests/build/config/vendor based on project tech stack.
### Phase 2: Plan Presentation
- Parse `--tool` (default: gemini)
- Get module structure with classification
- **Smart filter modules** (auto-detect tech stack, skip tests/build/config)
- Construct tool fallback order
- **Present filtered plan** with module types and counts
- **Wait for y/n confirmation**
### Phase 3: Batched Documentation Generation
```javascript
let modules_by_depth = group_by_depth(all_modules);
let tool_order = construct_tool_order(primary_tool);
for (let depth of sorted_depths.reverse()) { // N → 0
let batches = batch_modules(modules_by_depth[depth], 4);
for (let batch of batches) {
let parallel_tasks = batch.map(module => {
return async () => {
let strategy = module.depth >= 3 ? "full" : "single";
for (let tool of tool_order) {
let bash_result = Bash({
command: `cd ${module.path} && ccw tool exec generate_module_docs '{"strategy":"${strategy}","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
report(`✅ ${module.path} (depth ${module.depth}) docs generated with ${tool}`);
return true;
}
}
report(`❌ FAILED: ${module.path} (depth ${module.depth}) failed all tools`);
return false;
};
});
await Promise.all(parallel_tasks.map(task => task()));
}
}
```
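The `group_by_depth` and `batch_modules` helpers used above could be sketched as follows, assuming the stated rule of 4 modules per batch; the actual implementations are internal to the command:

```javascript
// Sketch: bucket modules by their depth field.
function group_by_depth(modules) {
  const byDepth = {};
  for (const m of modules) {
    (byDepth[m.depth] = byDepth[m.depth] || []).push(m);
  }
  return byDepth;
}

// Sketch: split a depth's modules into fixed-size batches (default 4).
function batch_modules(modules, size = 4) {
  const batches = [];
  for (let i = 0; i < modules.length; i += size) {
    batches.push(modules.slice(i, i + size));
  }
  return batches;
}
```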
### Phase 4: Verification
- Verify documentation files were created
- Display statistics
- Show summary of generated docs
## Strategy Selection
| Module Depth | Strategy | Description |
|--------------|----------|-------------|
| Depth &lt; 3 | single | Single document for module |
| Depth &gt;= 3 | full | Comprehensive documentation with subsections |
## Module Types
| Type | Description | Documentation Focus |
|------|-------------|---------------------|
| **code** | Source code modules | API, classes, functions |
| **navigation** | Directory structures | Organization, file purposes |
## Examples
### Basic Usage
```bash
# Generate full project documentation
/memory:docs-full-cli
# Output:
# Analyzing workspace...
# Found 45 modules (38 code, 7 navigation)
# Filtered: 12 test/build/config modules skipped
# Plan: Generate docs for 33 modules
# Confirm? (y/n): y
#
# Depth 7: [4/4] ✅
# Depth 6: [8/8] ✅
# ...
# Summary: 33/33 modules documented
```
### Directory-Specific
```bash
# Document specific feature
/memory:docs-full-cli --path src/features/auth
# Only documents auth feature
```
### Tool Selection
```bash
# Use Qwen for generation
/memory:docs-full-cli --tool qwen
```
## Related Commands
- **/memory:docs-related-cli** - Generate docs for changed modules only
- **/memory:update-full** - Update CLAUDE.md files
- **/memory:compact** - Compact session memory
## Notes
- **Smart filtering** automatically skips test/build/config directories
- **Classification** distinguishes between code and navigation modules
- **Depth-based strategy** optimizes documentation detail level
- **Tool fallback** ensures completion even if primary tool fails
- **Verification** confirms all documentation files created successfully

---
title: /memory:docs-related-cli
sidebar_label: /memory:docs-related-cli
sidebar_position: 5
description: Generate CLI documentation for git-changed modules
---
# /memory:docs-related-cli
Generate CLI documentation for modules affected by git changes using batched agent execution with automatic tool fallback.
## Overview
The `/memory:docs-related-cli` command generates documentation only for modules affected by recent git changes, providing faster documentation updates for daily development.
**Parameters**:
- `--tool <gemini|qwen|codex>`: Primary tool (default: gemini)
**Execution Flow**:
1. Change Detection → 2. Plan Presentation → 3. Batched Generation → 4. Verification
## Features
- **Changed Module Detection** - Uses git diff to identify affected modules
- **Intelligent Batching** - Groups modules by depth (4 modules/agent)
- **Automatic Fallback** - gemini→qwen→codex on failure
- **Depth Sequential** - Process depths N→0, parallel batches within depth
- **Smart Filtering** - Auto-detects and skips tests/build/config/docs
- **Single Strategy** - Uses single-layer documentation for speed
## Usage
```bash
# Generate docs for changed modules
/memory:docs-related-cli
# Use specific tool
/memory:docs-related-cli --tool qwen
```
## Tool Fallback Hierarchy
```javascript
--tool gemini → [gemini, qwen, codex] // default
--tool qwen → [qwen, gemini, codex]
--tool codex → [codex, gemini, qwen]
```
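The fallback order above reduces to placing the primary tool first, then the remaining tools in default order. A minimal sketch (the function name `constructToolOrder` is illustrative; the real command resolves this internally):

```javascript
// Sketch: build the tool fallback order from a primary tool.
const ALL_TOOLS = ["gemini", "qwen", "codex"];

function constructToolOrder(primaryTool) {
  if (!ALL_TOOLS.includes(primaryTool)) {
    throw new Error(`Unknown tool: ${primaryTool}`);
  }
  // Primary first, then the remaining tools in default order.
  return [primaryTool, ...ALL_TOOLS.filter(t => t !== primaryTool)];
}

console.log(constructToolOrder("qwen")); // → ["qwen", "gemini", "codex"]
```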
## Execution Flow
### Phase 1: Change Detection & Analysis
```javascript
// Detect changed modules
Bash({command: "ccw tool exec detect_changed_modules '{\"format\":\"list\"}'", run_in_background: false});
// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|change:<TYPE>` to extract affected modules.
**Smart filter**: Auto-detect and skip tests/build/config/docs based on project tech stack.
**Fallback**: If no changes detected, use recent modules (first 10 by depth).
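Parsing the line format above can be sketched as follows (a non-authoritative sketch; the sample `change` value is hypothetical, since the docs do not enumerate change types):

```javascript
// Sketch: parse `depth:N|path:<PATH>|change:<TYPE>` lines into module records.
function parseChangedModules(output) {
  return output
    .split("\n")
    .filter(line => line.trim().length > 0)
    .map(line => {
      // Split on the first ":" of each field so paths containing ":" survive.
      const fields = Object.fromEntries(
        line.split("|").map(pair => {
          const idx = pair.indexOf(":");
          return [pair.slice(0, idx), pair.slice(idx + 1)];
        })
      );
      return { depth: Number(fields.depth), path: fields.path, change: fields.change };
    });
}

const sample = "depth:3|path:src/features/auth|change:modified"; // change type is illustrative
console.log(parseChangedModules(sample));
```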
### Phase 2: Plan Presentation
- Parse `--tool` (default: gemini)
- Refresh code index for accurate change detection
- Detect changed modules
- **Smart filter modules** (auto-detect tech stack, skip tests/build/config)
- Cache git changes
- Apply fallback if no changes
- Construct tool fallback order
- **Present filtered plan** with change types
- **Wait for y/n confirmation**
### Phase 3: Batched Documentation Generation
```javascript
let modules_by_depth = group_by_depth(changed_modules);
let tool_order = construct_tool_order(primary_tool);
let sorted_depths = Object.keys(modules_by_depth).map(Number).sort((a, b) => a - b);
for (let depth of sorted_depths.reverse()) { // N → 0
  let batches = batch_modules(modules_by_depth[depth], 4);
  for (let batch of batches) {
    let parallel_tasks = batch.map(module => {
      return async () => {
        for (let tool of tool_order) {
          let bash_result = Bash({
            command: `cd ${module.path} && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
            run_in_background: false
          });
          if (bash_result.exit_code === 0) {
            report(`✅ ${module.path} docs generated with ${tool}`);
            return true;
          }
        }
        report(`❌ FAILED: ${module.path} failed all tools`);
        return false;
      };
    });
    await Promise.all(parallel_tasks.map(task => task()));
  }
}
```
### Phase 4: Verification
- Verify documentation files were created
- Display statistics
- Show summary of generated docs
## Strategy Selection
**Related Mode** uses `single` strategy:
- Generates single document per module
- Faster than full mode's multi-layer approach
- Suitable for iterative development
## Comparison with Full CLI Documentation
| Aspect | Related Docs | Full Docs |
|--------|--------------|-----------|
| **Scope** | Changed modules only | All project modules |
| **Speed** | Fast (minutes) | Slower (10-30 min) |
| **Use case** | Daily development | Major refactoring |
| **Strategy** | single only | full for depth>=3 |
| **Trigger** | After commits | After major changes |
| **Batching** | 4 modules/agent | 4 modules/agent |
| **Fallback** | gemini→qwen→codex | gemini→qwen→codex |
## Examples
### Basic Usage
```bash
# Document changed modules after commits
/memory:docs-related-cli
# Output:
# Detecting git changes...
# Found 8 changed modules
# Filtered: 3 test modules skipped
# Plan: Generate docs for 5 modules
# Confirm? (y/n): y
#
# Depth 3: [4/4] ✅
# Depth 2: [1/1] ✅
# Summary: 5/5 modules documented
```
### Tool Selection
```bash
# Use Qwen for generation
/memory:docs-related-cli --tool qwen
```
## Related Commands
- **/memory:docs-full-cli** - Generate docs for all modules
- **/memory:update-related** - Update CLAUDE.md for changed modules
- **/memory:compact** - Compact session memory
## Notes
- **Smart filtering** automatically skips test/build/config directories
- **Change detection** uses git diff to find affected modules
- **Single strategy** optimizes for speed in iterative development
- **Tool fallback** ensures completion even if primary tool fails
- **Verification** confirms all documentation files created successfully

---
title: /memory:load
sidebar_label: /memory:load
sidebar_position: 3
description: Load project context and core content into memory
---
# /memory:load
Delegate to a universal-executor agent to analyze the project and return a structured "Core Content Pack" for memory optimization.
## Overview
The `/memory:load` command analyzes the project and returns a structured content package loaded directly into main thread memory, providing essential context for subsequent agent operations while minimizing token consumption.
**Core Philosophy**:
- **Agent-Driven**: Fully delegates execution to universal-executor agent
- **Read-Only Analysis**: Does not modify code, only extracts context
- **Structured Output**: Returns standardized JSON content package
- **Memory Optimization**: Package loaded directly into main thread memory
- **Token Efficiency**: CLI analysis executed within agent to save tokens
## Features
- **Project Analysis** - Comprehensive codebase understanding
- **Core Content Extraction** - Identifies key components and patterns
- **Structured Output** - JSON format for easy integration
- **Memory Optimization** - Reduces token usage for subsequent operations
- **Tech Stack Detection** - Automatic identification of technologies used
## Usage
```bash
# Load project context into memory
/memory:load
# Load with custom scope
/memory:load --path src/features
# Load with specific depth
/memory:load --depth 3
```
## Execution Flow
```
User Input
  ↓
Delegate to universal-executor agent
  ↓
Phase 1: Project Analysis
  ├─ Analyze project structure
  ├─ Identify tech stack
  ├─ Detect key components
  └─ Extract architectural patterns
  ↓
Phase 2: Core Content Extraction
  ├─ Extract main components
  ├─ Identify data structures
  ├─ Find key interfaces
  └─ Document important patterns
  ↓
Phase 3: Structured Output Generation
  ├─ Format as JSON
  ├─ Organize by categories
  └─ Add metadata
  ↓
Phase 4: Memory Loading
  ├─ Load into main thread memory
  ├─ Make available for subsequent operations
  └─ Display summary
```
## Output Format
The command returns a structured "Core Content Pack":
```json
{
"project_name": "string",
"tech_stack": ["list", "of", "technologies"],
"architecture": {
"overview": "string",
"key_components": ["list"],
"patterns": ["list"]
},
"core_content": {
"components": [
{
"name": "string",
"purpose": "string",
"location": "path",
"dependencies": ["list"]
}
],
"data_structures": [
{
"name": "string",
"purpose": "string",
"location": "path"
}
],
"interfaces": [
{
"name": "string",
"methods": ["list"],
"location": "path"
}
]
},
"metadata": {
"total_modules": number,
"depth_analyzed": number,
"timestamp": "ISO8601"
}
}
```
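As an illustration of consuming a pack that follows the schema above, a sketch of producing the summary lines shown in the example output (the helper name `summarizePack` is an assumption, not part of the command):

```javascript
// Sketch: summarize a Core Content Pack roughly as the command displays it.
function summarizePack(pack) {
  const { components = [], data_structures = [], interfaces = [] } = pack.core_content;
  return [
    `Detected: ${pack.tech_stack.join(", ")}`,
    `- ${components.length} components identified`,
    `- ${data_structures.length} data structures`,
    `- ${interfaces.length} interfaces`,
  ].join("\n");
}

const pack = {
  project_name: "demo",
  tech_stack: ["TypeScript", "React", "Node.js"],
  architecture: { overview: "", key_components: [], patterns: [] },
  core_content: {
    components: [{ name: "App", purpose: "root", location: "src/App.tsx", dependencies: [] }],
    data_structures: [],
    interfaces: []
  },
  metadata: { total_modules: 1, depth_analyzed: 1, timestamp: "2026-02-03T00:00:00Z" }
};
console.log(summarizePack(pack));
```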
## Examples
### Basic Usage
```bash
# Load full project context
/memory:load
# Output:
# Delegating to universal-executor agent...
# Analyzing project structure...
# Detected: TypeScript, React, Node.js
# Extracting core content...
# Core Content Pack loaded:
# - 45 components identified
# - 12 data structures
# - 28 interfaces
# Memory load complete
```
### Scoped Analysis
```bash
# Load specific directory
/memory:load --path src/features/auth
# Only analyzes auth feature
```
## Core Content Pack Categories
### Components
- Main application components
- UI elements
- Service classes
- Utility functions
### Data Structures
- Type definitions
- Interfaces
- Enums
- Configuration schemas
### Patterns
- Architectural patterns
- Design patterns
- Coding conventions
- Best practices
## Benefits
1. **Token Efficiency** - Reduces token usage for subsequent operations
2. **Faster Context Loading** - Pre-loaded content available immediately
3. **Improved Accuracy** - Better context for agent decisions
4. **Structured Knowledge** - Organized project information
5. **Quick Navigation** - Easy access to key components
## Related Commands
- **/memory:update-full** - Full project documentation update
- **/memory:update-related** - Changed module documentation update
- **/memory:compact** - Compact session memory
## Notes
- **Read-only operation** - Does not modify any files
- **Agent-driven** - Fully delegates to universal-executor agent
- **Memory optimization** - Reduces token consumption for subsequent operations
- **Project agnostic** - Works with any project type and structure
- **Automatic tech stack detection** - Identifies technologies in use
- **Structured output** - JSON format for easy integration

---
title: /memory:update-full
sidebar_label: /memory:update-full
sidebar_position: 1
description: Update CLAUDE.md for all project modules using batched agent execution
---
# /memory:update-full
Orchestrates comprehensive CLAUDE.md updates for all project modules using batched agent execution with automatic tool fallback.
## Overview
The `/memory:update-full` command updates CLAUDE.md documentation for all project modules with intelligent batching and automatic tool fallback (gemini→qwen→codex).
**Parameters**:
- `--tool <gemini|qwen|codex>`: Primary tool (default: gemini)
- `--path <directory>`: Target directory (default: project root)
**Execution Flow**:
1. Module Detection → 2. Plan Presentation → 3. Batched Execution → 4. Safety Verification
## Features
- **Full Project Coverage** - Updates all modules in the project
- **Intelligent Batching** - Groups modules by depth (4 modules/batch)
- **Automatic Fallback** - gemini→qwen→codex on failure
- **Depth Sequential** - Process depths N→0, parallel batches within depth
- **Smart Filtering** - Auto-detects and skips tests/build/config/docs
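The depth grouping and 4-modules-per-batch behavior referenced in these features can be sketched with two small helpers (names mirror the pseudocode used in this page; a sketch, not the actual implementation):

```javascript
// Sketch: group modules by depth, then split each depth group into batches of 4.
function groupByDepth(modules) {
  const byDepth = {};
  for (const m of modules) {
    (byDepth[m.depth] ||= []).push(m);
  }
  return byDepth;
}

function batchModules(modules, size = 4) {
  const batches = [];
  for (let i = 0; i < modules.length; i += size) {
    batches.push(modules.slice(i, i + size));
  }
  return batches;
}

const grouped = groupByDepth([
  { depth: 2, path: "a" },
  { depth: 2, path: "b" },
  { depth: 1, path: "c" }
]);
console.log(batchModules(grouped[2]).length); // → 1 (both depth-2 modules fit one batch)
```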
## Usage
```bash
# Full project update (auto-strategy selection)
/memory:update-full
# Target specific directory
/memory:update-full --path .claude
/memory:update-full --path src/features/auth
# Use specific tool
/memory:update-full --tool qwen
/memory:update-full --path .claude --tool qwen
```
## Tool Fallback Hierarchy
```javascript
--tool gemini → [gemini, qwen, codex] // default
--tool qwen → [qwen, gemini, codex]
--tool codex → [codex, gemini, qwen]
```
**Trigger**: Non-zero exit code from update script
## Execution Modes
### Small Projects (<15 modules)
- **Direct parallel execution**
- Max 4 concurrent per depth
- No agent overhead
- Faster execution
### Large Projects (>=15 modules)
- **Agent batch processing**
- 4 modules/agent
- 73% overhead reduction
- Better resource utilization
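The mode choice above reduces to a threshold check (a sketch; `15` is the documented boundary, the names are illustrative):

```javascript
// Sketch: choose execution mode from the filtered module count.
const AGENT_THRESHOLD = 15;

function selectExecutionMode(moduleCount) {
  // Below the threshold: direct parallel execution, no agent overhead.
  // At or above it: agent batch processing for better resource utilization.
  return moduleCount < AGENT_THRESHOLD ? "direct-parallel" : "agent-batch";
}

console.log(selectExecutionMode(8));  // → "direct-parallel"
console.log(selectExecutionMode(40)); // → "agent-batch"
```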
## Execution Flow
### Phase 1: Module Detection & Analysis
```javascript
// Get module structure
Bash({command: "ccw tool exec get_modules_by_depth '{\"format\":\"list\"}'", run_in_background: false});
// OR with --path
Bash({command: "cd &lt;target-path&gt; && ccw tool exec get_modules_by_depth '{\"format\":\"list\"}'", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|...` to extract module paths and count.
**Smart filter**: Auto-detect and skip tests/build/config/docs based on project tech stack.
### Phase 2: Plan Presentation
- Parse `--tool` (default: gemini)
- Get module structure from workspace
- **Smart filter modules** (auto-detect tech stack, skip tests/build/config/docs)
- Construct tool fallback order
- **Present filtered plan** with skip reasons
- **Wait for y/n confirmation**
### Phase 3A: Direct Execution (<15 modules)
```javascript
let modules_by_depth = group_by_depth(all_modules);
let tool_order = construct_tool_order(primary_tool);
let sorted_depths = Object.keys(modules_by_depth).map(Number).sort((a, b) => a - b);
for (let depth of sorted_depths.reverse()) { // N → 0
  let modules = modules_by_depth[depth];
  let batches = batch_modules(modules, 4);
  for (let batch of batches) {
    let parallel_tasks = batch.map(module => {
      return async () => {
        let strategy = module.depth >= 3 ? "multi-layer" : "single-layer";
        for (let tool of tool_order) {
          let bash_result = Bash({
            command: `cd ${module.path} && ccw tool exec update_module_claude '{"strategy":"${strategy}","path":".","tool":"${tool}"}'`,
            run_in_background: false
          });
          if (bash_result.exit_code === 0) {
            report(`✅ ${module.path} updated with ${tool}`);
            return true;
          }
        }
        report(`❌ FAILED: ${module.path} failed all tools`);
        return false;
      };
    });
    await Promise.all(parallel_tasks.map(task => task()));
  }
}
```
### Phase 3B: Agent Execution (>=15 modules)
```javascript
let modules_by_depth = group_by_depth(all_modules);
let tool_order = construct_tool_order(primary_tool);
let sorted_depths = Object.keys(modules_by_depth).map(Number).sort((a, b) => a - b);
for (let depth of sorted_depths.reverse()) { // N → 0
  let batches = batch_modules(modules_by_depth[depth], 4);
  let worker_tasks = [];
  for (let batch of batches) {
    worker_tasks.push(
      Task({
        subagent_type: "memory-bridge",
        description: `Update ${batch.length} modules at depth ${depth}`,
        prompt: generate_batch_worker_prompt(batch, tool_order, "full")
      })
    );
  }
  await parallel_execute(worker_tasks); // Batches run in parallel
}
```
### Phase 4: Safety Verification
- Verify only CLAUDE.md files were modified
- Display git diff statistics
- Show summary of updates
## Strategy Selection
| Module Depth | Strategy | Description |
|--------------|----------|-------------|
| Depth < 3 | single-layer | Updates only current module's CLAUDE.md |
| Depth >= 3 | multi-layer | Updates current module + all parent CLAUDE.md files |
## Comparison with Related Update
| Aspect | Full Update | Related Update |
|--------|-------------|----------------|
| **Scope** | All project modules | Changed modules only |
| **Speed** | Slower (10-30 min) | Fast (minutes) |
| **Use case** | Major refactoring | Daily development |
| **Mode** | `"full"` | `"related"` |
| **Trigger** | After major changes | After commits |
| **Batching** | 4 modules/agent | 4 modules/agent |
| **Fallback** | gemini→qwen→codex | gemini→qwen→codex |
| **Complexity threshold** | <=20 modules | <=15 modules |
## Examples
### Basic Usage
```bash
# Full project update
/memory:update-full
# Output:
# Analyzing workspace...
# Found 45 modules across 8 depth levels
# Filtered: 12 test/build/config modules skipped
# Plan: Update 33 modules with gemini→qwen→codex fallback
# Confirm? (y/n): y
#
# Depth 7: [4/4] ✅
# Depth 6: [8/8] ✅
# ...
# Summary: 33/33 modules updated
# Safety check: Only CLAUDE.md modified ✅
```
### Directory-Specific Update
```bash
# Update specific feature directory
/memory:update-full --path src/features/auth
# Only updates modules within src/features/auth
```
### Tool Selection
```bash
# Use Qwen for faster updates
/memory:update-full --tool qwen
# Tries qwen → gemini → codex
```
## Related Commands
- **/memory:update-related** - Update only changed modules
- **/memory:load** - Load project context into memory
- **/memory:compact** - Compact session memory
## Notes
- **Direct execution** for <15 modules (faster, no agent overhead)
- **Agent execution** for >=15 modules (better resource utilization)
- **Smart filtering** automatically skips test/build/config directories
- **Safety check** ensures only CLAUDE.md files are modified
- **Git diff statistics** provide summary of changes
- **Automatic backup** of existing files before update

---
title: /memory:update-related
sidebar_label: /memory:update-related
sidebar_position: 2
description: Update CLAUDE.md for git-changed modules using batched execution
---
# /memory:update-related
Orchestrates context-aware CLAUDE.md updates for changed modules using batched agent execution with automatic tool fallback.
## Overview
The `/memory:update-related` command updates CLAUDE.md documentation only for modules affected by git changes, providing faster updates for daily development.
**Parameters**:
- `--tool <gemini|qwen|codex>`: Primary tool (default: gemini)
**Execution Flow**:
1. Change Detection → 2. Plan Presentation → 3. Batched Execution → 4. Safety Verification
## Features
- **Changed Module Detection** - Uses git diff to identify affected modules
- **Intelligent Batching** - Groups modules by depth (4 modules/agent)
- **Automatic Fallback** - gemini→qwen→codex on failure
- **Depth Sequential** - Process depths N→0, parallel batches within depth
- **Related Mode** - Update only changed modules and their parent contexts
- **Smart Filtering** - Auto-detects and skips tests/build/config/docs
## Usage
```bash
# Update git-changed modules
/memory:update-related
# Use specific tool
/memory:update-related --tool qwen
# Fallback to recent modules if no changes detected
/memory:update-related
```
## Tool Fallback Hierarchy
```javascript
--tool gemini → [gemini, qwen, codex] // default
--tool qwen → [qwen, gemini, codex]
--tool codex → [codex, gemini, qwen]
```
**Trigger**: Non-zero exit code from update script
## Execution Modes
### Small Changes (<15 modules)
- **Direct parallel execution**
- Max 4 concurrent per depth
- No agent overhead
### Large Changes (>=15 modules)
- **Agent batch processing**
- 4 modules/agent
- Better resource utilization
## Execution Flow
### Phase 1: Change Detection & Analysis
```javascript
// Detect changed modules
Bash({command: "ccw tool exec detect_changed_modules '{\"format\":\"list\"}'", run_in_background: false});
// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|change:<TYPE>` to extract affected modules.
**Smart filter**: Auto-detect and skip tests/build/config/docs based on project tech stack.
**Fallback**: If no changes detected, use recent modules (first 10 by depth).
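That fallback can be sketched as a simple selection step (a sketch under the stated behavior; `selectTargets` is an illustrative name, not part of the command):

```javascript
// Sketch: when git reports no changed modules, fall back to the
// first 10 modules ordered by depth ("recent modules" per the docs).
function selectTargets(changedModules, modulesByDepthOrder) {
  if (changedModules.length > 0) {
    return changedModules;
  }
  return modulesByDepthOrder.slice(0, 10);
}

console.log(selectTargets([], [{ path: "a" }, { path: "b" }]).length); // → 2
```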
### Phase 2: Plan Presentation
- Parse `--tool` (default: gemini)
- Refresh code index for accurate change detection
- Detect changed modules via detect_changed_modules
- **Smart filter modules** (auto-detect tech stack, skip tests/build/config/docs)
- Cache git changes
- Apply fallback if no changes (recent 10 modules)
- Construct tool fallback order
- **Present filtered plan** with skip reasons and change types
- **Wait for y/n confirmation**
### Phase 3A: Direct Execution (<15 modules)
```javascript
let modules_by_depth = group_by_depth(changed_modules);
let tool_order = construct_tool_order(primary_tool);
let sorted_depths = Object.keys(modules_by_depth).map(Number).sort((a, b) => a - b);
for (let depth of sorted_depths.reverse()) { // N → 0
  let batches = batch_modules(modules_by_depth[depth], 4);
  for (let batch of batches) {
    let parallel_tasks = batch.map(module => {
      return async () => {
        for (let tool of tool_order) {
          let bash_result = Bash({
            command: `cd ${module.path} && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"${tool}"}'`,
            run_in_background: false
          });
          if (bash_result.exit_code === 0) {
            report(`✅ ${module.path} updated with ${tool}`);
            return true;
          }
        }
        report(`❌ FAILED: ${module.path} failed all tools`);
        return false;
      };
    });
    await Promise.all(parallel_tasks.map(task => task()));
  }
}
```
### Phase 3B: Agent Execution (>=15 modules)
```javascript
let modules_by_depth = group_by_depth(changed_modules);
let tool_order = construct_tool_order(primary_tool);
let sorted_depths = Object.keys(modules_by_depth).map(Number).sort((a, b) => a - b);
for (let depth of sorted_depths.reverse()) { // N → 0
  let batches = batch_modules(modules_by_depth[depth], 4);
  let worker_tasks = [];
  for (let batch of batches) {
    worker_tasks.push(
      Task({
        subagent_type: "memory-bridge",
        description: `Update ${batch.length} modules at depth ${depth}`,
        prompt: generate_batch_worker_prompt(batch, tool_order, "related")
      })
    );
  }
  await parallel_execute(worker_tasks); // Batches run in parallel
}
```
### Phase 4: Safety Verification
- Verify only CLAUDE.md files were modified
- Display git diff statistics
- Show summary of updates
## Strategy Selection
**Related Mode** uses `single-layer` strategy:
- Updates only the changed module's CLAUDE.md
- Faster than full update's multi-layer approach
- Suitable for iterative development
## Comparison with Full Update
| Aspect | Related Update | Full Update |
|--------|----------------|-------------|
| **Scope** | Changed modules only | All project modules |
| **Speed** | Fast (minutes) | Slower (10-30 min) |
| **Use case** | Daily development | Major refactoring |
| **Mode** | `"related"` | `"full"` |
| **Trigger** | After commits | After major changes |
| **Batching** | 4 modules/agent | 4 modules/agent |
| **Fallback** | gemini→qwen→codex | gemini→qwen→codex |
| **Complexity threshold** | <=15 modules | <=20 modules |
| **Strategy** | single-layer only | multi-layer for depth>=3 |
## Examples
### Basic Usage
```bash
# Update changed modules after commits
/memory:update-related
# Output:
# Detecting git changes...
# Found 8 changed modules
# Filtered: 3 test modules skipped
# Plan: Update 5 modules with gemini→qwen→codex fallback
# Confirm? (y/n): y
#
# Depth 3: [4/4] ✅
# Depth 2: [1/1] ✅
# Summary: 5/5 modules updated
# Safety check: Only CLAUDE.md modified ✅
```
### Tool Selection
```bash
# Use Qwen for faster updates
/memory:update-related --tool qwen
# Tries qwen → gemini → codex
```
### No Changes Detected
```bash
# When no git changes found
/memory:update-related
# Output:
# No git changes detected, using recent 10 modules
# Plan: Update recent modules
```
## Related Commands
- **/memory:update-full** - Update all project modules
- **/memory:load** - Load project context into memory
- **/memory:compact** - Compact session memory
## Notes
- **Direct execution** for <15 modules (faster, no agent overhead)
- **Agent execution** for >=15 modules (better resource utilization)
- **Smart filtering** automatically skips test/build/config directories
- **Change detection** uses git diff to find affected modules
- **Fallback** to recent modules when no changes detected
- **Safety check** ensures only CLAUDE.md files are modified
- **Git diff statistics** provide summary of changes