feat: Add unified-execute-parallel, unified-execute-with-file, and worktree-merge workflows

- Implemented unified-execute-parallel for executing tasks in isolated Git worktrees with parallel execution capabilities.
- Developed unified-execute-with-file for universal task execution from various planning outputs with progress tracking.
- Created worktree-merge to handle merging of completed worktrees back to the main branch, including dependency management and conflict detection.
- Enhanced documentation for each workflow, detailing core functionalities, implementation phases, and configuration options.
Author: catlog22
Date: 2026-02-05 18:44:00 +08:00
Parent: 1480873008
Commit: eac1bb81c8
17 changed files


@@ -0,0 +1,457 @@
---
name: brainstorm-to-cycle
description: Convert brainstorm session output to parallel-dev-cycle input with idea selection and context enrichment. Unified parameter format.
argument-hint: "--session=<id> [--idea=<index>] [--auto] [--launch]"
---
# Brainstorm to Cycle Adapter
## Overview
Bridge workflow that converts **brainstorm-with-file** output to **parallel-dev-cycle** input. Reads synthesis.json, allows user to select an idea, and formats it as an enriched TASK description.
**Core workflow**: Load Session → Select Idea → Format Task → Launch Cycle
## Inputs
| Argument | Required | Description |
|----------|----------|-------------|
| --session | Yes | Brainstorm session ID (e.g., `BS-rate-limiting-2025-01-28`) |
| --idea | No | Pre-select idea by index (0-based, from top_ideas) |
| --auto | No | Auto-select top-scored idea without confirmation |
| --launch | No | Auto-launch parallel-dev-cycle without preview |
## Output
Launches `/parallel-dev-cycle` with enriched TASK containing:
- Primary recommendation or selected idea
- Key strengths and challenges
- Suggested implementation steps
- Alternative approaches for reference
## Execution Process
```
Phase 1: Session Loading
├─ Validate session folder exists
├─ Read synthesis.json
├─ Parse top_ideas and recommendations
└─ Validate data structure
Phase 2: Idea Selection
├─ --auto mode → Select highest scored idea
├─ --idea=N → Select specified index
└─ Interactive → Present options, await selection
Phase 3: Task Formatting
├─ Build enriched task description
├─ Include context from brainstorm
└─ Generate parallel-dev-cycle command
Phase 4: Cycle Launch
├─ Confirm with user (unless --auto)
└─ Execute parallel-dev-cycle
```
## Implementation
### Phase 1: Session Loading
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString().replace('Z', '+08:00') // shift to UTC+8 and label the offset explicitly
// Parse arguments
const args = "$ARGUMENTS"
const sessionId = "$SESSION"
const ideaIndexMatch = args.match(/--idea=(\d+)/)
const preSelectedIdea = ideaIndexMatch ? parseInt(ideaIndexMatch[1]) : null
const isAutoMode = args.includes('--auto')
// Validate session
const sessionFolder = `.workflow/.brainstorm/${sessionId}`
const synthesisPath = `${sessionFolder}/synthesis.json`
const brainstormPath = `${sessionFolder}/brainstorm.md`
function fileExists(p) {
try { return bash(`test -f "${p}" && echo "yes"`).includes('yes') } catch { return false }
}
if (!fileExists(synthesisPath)) {
console.error(`
## Error: Session Not Found
Session ID: ${sessionId}
Expected path: ${synthesisPath}
**Available sessions**:
`)
bash(`ls -1 .workflow/.brainstorm/ 2>/dev/null | head -10`)
return { status: 'error', message: 'Session not found' }
}
// Load synthesis
const synthesis = JSON.parse(Read(synthesisPath))
// Validate structure
if (!synthesis.top_ideas || synthesis.top_ideas.length === 0) {
console.error(`
## Error: No Ideas Found
The brainstorm session has no top_ideas.
Please complete the brainstorm workflow first.
`)
return { status: 'error', message: 'No ideas in synthesis' }
}
console.log(`
## Brainstorm Session Loaded
**Session**: ${sessionId}
**Topic**: ${synthesis.topic}
**Completed**: ${synthesis.completed}
**Ideas Found**: ${synthesis.top_ideas.length}
`)
```
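For reference, here is a minimal sketch of the synthesis.json shape this phase assumes. The field names are taken from how the loading and selection code reads them in later phases; the values are purely illustrative.

```javascript
// Illustrative synthesis.json content (values invented; fields mirror the reads above and in Phases 2-3)
const exampleSynthesis = {
  session_id: "BS-rate-limiting-2025-01-28",
  topic: "API rate limiting strategy",
  completed: "2025-01-28T16:20:00+08:00",
  top_ideas: [{
    title: "Sliding-window limiter",
    description: "Track request timestamps per client and reject over-limit calls",
    score: 8,
    feasibility: "High",
    key_strengths: ["Accurate limits", "Scales horizontally"],
    main_challenges: ["Shared state becomes a hot spot"],
    next_steps: ["Define limit tiers", "Implement middleware", "Add metrics"]
  }],
  recommendations: { primary: "Start with the sliding-window limiter", alternatives: ["Token bucket per node"] },
  key_insights: ["Most abuse comes from a small set of clients"],
  follow_up: [{ type: "implementation", summary: "Prototype middleware behind a feature flag" }]
}
```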
---
### Phase 2: Idea Selection
```javascript
let selectedIdea = null
let selectionSource = ''
// Auto mode: select highest scored
if (isAutoMode) {
selectedIdea = synthesis.top_ideas.reduce((best, idea) =>
idea.score > best.score ? idea : best
)
selectionSource = 'auto (highest score)'
console.log(`
**Auto-selected**: ${selectedIdea.title} (Score: ${selectedIdea.score}/10)
`)
}
// Pre-selected by index
else if (preSelectedIdea !== null) {
if (preSelectedIdea >= synthesis.top_ideas.length) {
console.error(`
## Error: Invalid Idea Index
Requested: --idea=${preSelectedIdea}
Available: 0 to ${synthesis.top_ideas.length - 1}
`)
return { status: 'error', message: 'Invalid idea index' }
}
selectedIdea = synthesis.top_ideas[preSelectedIdea]
selectionSource = `index ${preSelectedIdea}`
console.log(`
**Pre-selected**: ${selectedIdea.title} (Index: ${preSelectedIdea})
`)
}
// Interactive selection
else {
// Display options
console.log(`
## Select Idea for Development
| # | Title | Score | Feasibility |
|---|-------|-------|-------------|
${synthesis.top_ideas.map((idea, i) =>
`| ${i} | ${idea.title.substring(0, 40)} | ${idea.score}/10 | ${idea.feasibility || 'N/A'} |`
).join('\n')}
**Primary Recommendation**: ${synthesis.recommendations?.primary?.substring(0, 60) || 'N/A'}
`)
// Build options for AskUser
const ideaOptions = synthesis.top_ideas.slice(0, 4).map((idea, i) => ({
label: `#${i}: ${idea.title.substring(0, 30)}`,
description: `Score: ${idea.score}/10 - ${idea.description?.substring(0, 50) || ''}`
}))
// Add primary recommendation option if different
if (synthesis.recommendations?.primary) {
ideaOptions.unshift({
label: "Primary Recommendation",
description: synthesis.recommendations.primary.substring(0, 60)
})
}
const selection = AskUser({
questions: [{
question: "Which idea should be developed?",
header: "Idea",
multiSelect: false,
options: ideaOptions
}]
})
// Parse selection
if (selection.idea === "Primary Recommendation") {
// Use primary recommendation as task
selectedIdea = {
title: "Primary Recommendation",
description: synthesis.recommendations.primary,
key_strengths: synthesis.key_insights || [],
main_challenges: [],
next_steps: synthesis.follow_up?.filter(f => f.type === 'implementation').map(f => f.summary) || []
}
selectionSource = 'primary recommendation'
} else {
const match = selection.idea.match(/^#(\d+):/)
const idx = match ? parseInt(match[1]) : 0
selectedIdea = synthesis.top_ideas[idx]
selectionSource = `user selected #${idx}`
}
}
console.log(`
### Selected Idea
**Title**: ${selectedIdea.title}
**Source**: ${selectionSource}
**Description**: ${selectedIdea.description?.substring(0, 200) || 'N/A'}
`)
```
---
### Phase 3: Task Formatting
```javascript
// Build enriched task description
function formatTask(idea, synthesis) {
const sections = []
// Main objective
sections.push(`# Main Objective\n\n${idea.title}`)
// Description
if (idea.description) {
sections.push(`# Description\n\n${idea.description}`)
}
// Key strengths
if (idea.key_strengths?.length > 0) {
sections.push(`# Key Strengths\n\n${idea.key_strengths.map(s => `- ${s}`).join('\n')}`)
}
// Main challenges (important for RA agent)
if (idea.main_challenges?.length > 0) {
sections.push(`# Main Challenges to Address\n\n${idea.main_challenges.map(c => `- ${c}`).join('\n')}`)
}
// Recommended steps
if (idea.next_steps?.length > 0) {
sections.push(`# Recommended Implementation Steps\n\n${idea.next_steps.map((s, i) => `${i + 1}. ${s}`).join('\n')}`)
}
// Alternative approaches (for RA consideration)
if (synthesis.recommendations?.alternatives?.length > 0) {
sections.push(`# Alternative Approaches (for reference)\n\n${synthesis.recommendations.alternatives.map(a => `- ${a}`).join('\n')}`)
}
// Key insights from brainstorm
if (synthesis.key_insights?.length > 0) {
const relevantInsights = synthesis.key_insights.slice(0, 3)
sections.push(`# Key Insights from Brainstorm\n\n${relevantInsights.map(i => `- ${i}`).join('\n')}`)
}
// Source reference
sections.push(`# Source\n\nBrainstorm Session: ${synthesis.session_id}\nTopic: ${synthesis.topic}`)
return sections.join('\n\n')
}
const enrichedTask = formatTask(selectedIdea, synthesis)
// Display formatted task
console.log(`
## Formatted Task for parallel-dev-cycle
\`\`\`markdown
${enrichedTask}
\`\`\`
`)
// Save task to session folder for reference
Write(`${sessionFolder}/cycle-task.md`, `# Generated Task\n\n**Generated**: ${getUtc8ISOString()}\n**Idea**: ${selectedIdea.title}\n**Selection**: ${selectionSource}\n\n---\n\n${enrichedTask}`)
```
---
### Phase 4: Cycle Launch
```javascript
// Confirm launch (unless auto mode)
let shouldLaunch = isAutoMode
if (!isAutoMode) {
const confirmation = AskUser({
questions: [{
question: "Launch parallel-dev-cycle with this task?",
header: "Launch",
multiSelect: false,
options: [
{ label: "Yes, launch cycle (Recommended)", description: "Start parallel-dev-cycle with enriched task" },
{ label: "No, just save task", description: "Save formatted task for manual use" }
]
}]
})
shouldLaunch = confirmation.launch.includes("Yes")
}
if (shouldLaunch) {
console.log(`
## Launching parallel-dev-cycle
**Task**: ${selectedIdea.title}
**Source Session**: ${sessionId}
`)
// Escape task for command line
const escapedTask = enrichedTask
.replace(/\\/g, '\\\\')
.replace(/"/g, '\\"')
.replace(/\$/g, '\\$')
.replace(/`/g, '\\`')
// Launch parallel-dev-cycle
// Note: In actual execution, this would invoke the skill
console.log(`
### Cycle Command
\`\`\`bash
/parallel-dev-cycle TASK="${escapedTask.substring(0, 100)}..."
\`\`\`
**Full task saved to**: ${sessionFolder}/cycle-task.md
`)
// Return success with cycle trigger
return {
status: 'success',
action: 'launch_cycle',
session_id: sessionId,
idea: selectedIdea.title,
task_file: `${sessionFolder}/cycle-task.md`,
cycle_command: `/parallel-dev-cycle TASK="${enrichedTask}"`
}
} else {
console.log(`
## Task Saved (Not Launched)
**Task file**: ${sessionFolder}/cycle-task.md
To launch manually:
\`\`\`bash
/parallel-dev-cycle TASK="$(cat ${sessionFolder}/cycle-task.md)"
\`\`\`
`)
return {
status: 'success',
action: 'saved_only',
session_id: sessionId,
task_file: `${sessionFolder}/cycle-task.md`
}
}
```
---
## Session Files
After execution:
```
.workflow/.brainstorm/{session-id}/
├── brainstorm.md # Original brainstorm
├── synthesis.json # Synthesis data (input)
├── perspectives.json # Perspectives data
├── ideas/ # Idea deep-dives
└── cycle-task.md # ⭐ Generated task (output)
```
## Task Format
The generated task includes:
| Section | Purpose | Used By |
|---------|---------|---------|
| Main Objective | Clear goal statement | RA: Primary requirement |
| Description | Detailed explanation | RA: Requirement context |
| Key Strengths | Why this approach | RA: Design decisions |
| Main Challenges | Known issues to address | RA: Edge cases, risks |
| Implementation Steps | Suggested approach | EP: Planning guidance |
| Alternatives | Other valid approaches | RA: Fallback options |
| Key Insights | Learnings from brainstorm | RA: Domain context |
## Error Handling
| Situation | Action |
|-----------|--------|
| Session not found | List available sessions, abort |
| synthesis.json missing | Suggest completing brainstorm first |
| No top_ideas | Report error, abort |
| Invalid --idea index | Show valid range, abort |
| Task too long | Truncate with reference to file |
## Examples
### Auto Mode (Quick Launch)
```bash
/brainstorm-to-cycle SESSION="BS-rate-limiting-2025-01-28" --auto
# → Selects highest-scored idea
# → Launches parallel-dev-cycle immediately
```
### Pre-Selected Idea
```bash
/brainstorm-to-cycle SESSION="BS-auth-system-2025-01-28" --idea=2
# → Selects top_ideas[2]
# → Confirms before launch
```
### Interactive Selection
```bash
/brainstorm-to-cycle SESSION="BS-caching-2025-01-28"
# → Displays all ideas with scores
# → User selects from options
# → Confirms and launches
```
## Integration Flow
```
brainstorm-with-file
    ↓ synthesis.json
brainstorm-to-cycle ◄─── This command
    ↓ enriched TASK
parallel-dev-cycle
    RA → EP → CD → VAS
```
---
**Now execute brainstorm-to-cycle** with session: $SESSION


@@ -0,0 +1,413 @@
---
name: clean
description: Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution. Supports targeted cleanup and confirmation.
argument-hint: "[--dry-run] [--focus=<area>]"
---
# Workflow Clean Command
## Overview
Evidence-based intelligent cleanup command. Systematically identifies stale artifacts through mainline analysis, discovers drift, and safely removes unused sessions, documents, and dead code.
**Core workflow**: Detect Mainline → Discover Drift → Confirm → Stage → Execute
## Target Cleanup
**Focus area**: $FOCUS (or entire project if not specified)
**Mode**: $ARGUMENTS
- `--dry-run`: Preview cleanup without executing
- `--focus`: Focus area (module or path)
## Execution Process
```
Phase 0: Initialization
├─ Parse arguments (--dry-run, FOCUS)
├─ Setup session folder
└─ Initialize utility functions
Phase 1: Mainline Detection
├─ Analyze git history (30 days)
├─ Identify core modules (high commit frequency)
├─ Map active vs stale branches
└─ Build mainline profile
Phase 2: Drift Discovery (Subagent)
├─ spawn_agent with cli-explore-agent role
├─ Scan workflow sessions for orphaned artifacts
├─ Identify documents drifted from mainline
├─ Detect dead code and unused exports
└─ Generate cleanup manifest
Phase 3: Confirmation
├─ Validate manifest schema
├─ Display cleanup summary by category
├─ AskUser: Select categories and risk level
└─ Dry-run exit if --dry-run
Phase 4: Execution
├─ Validate paths (security check)
├─ Stage deletion (move to .trash)
├─ Update manifests
├─ Permanent deletion
└─ Report results
```
## Implementation
### Phase 0: Initialization
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString().replace('Z', '+08:00') // shift to UTC+8 and label the offset explicitly
// Parse arguments
const args = "$ARGUMENTS"
const isDryRun = args.includes('--dry-run')
const focusMatch = args.match(/FOCUS="([^"]+)"/)
const focusArea = focusMatch ? focusMatch[1] : ("$FOCUS" !== "$" + "FOCUS" ? "$FOCUS" : null) // fall back to $FOCUS only if the placeholder was actually substituted
// Session setup
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `clean-${dateStr}`
const sessionFolder = `.workflow/.clean/${sessionId}`
const trashFolder = `${sessionFolder}/.trash`
const projectRoot = process.cwd()
bash(`mkdir -p ${sessionFolder}`)
bash(`mkdir -p ${trashFolder}`)
// Utility functions
function fileExists(p) {
try { return bash(`test -f "${p}" && echo "yes"`).includes('yes') } catch { return false }
}
function dirExists(p) {
try { return bash(`test -d "${p}" && echo "yes"`).includes('yes') } catch { return false }
}
function validatePath(targetPath) {
if (targetPath.includes('..')) return { valid: false, reason: 'Path traversal' }
const allowed = ['.workflow/', '.claude/rules/tech/', 'src/']
const dangerous = [/^\//, /^C:\\Windows/i, /node_modules/, /\.git$/]
if (!allowed.some(p => targetPath.startsWith(p))) {
return { valid: false, reason: 'Outside allowed directories' }
}
if (dangerous.some(p => p.test(targetPath))) {
return { valid: false, reason: 'Dangerous pattern' }
}
return { valid: true }
}
```
---
### Phase 1: Mainline Detection
```javascript
// Check git repository
const isGitRepo = bash('git rev-parse --git-dir >/dev/null 2>&1 && echo "yes" || echo "no"').includes('yes')
let mainlineProfile = {
coreModules: [],
activeFiles: [],
activeBranches: [],
staleThreshold: { sessions: 7, branches: 30, documents: 14 },
isGitRepo,
timestamp: getUtc8ISOString()
}
if (isGitRepo) {
// Commit frequency by directory (last 30 days)
const freq = bash('git log --since="30 days ago" --name-only --pretty=format: | grep -v "^$" | cut -d/ -f1-2 | sort | uniq -c | sort -rn | head -20')
// Parse core modules (>=5 commits in the window)
mainlineProfile.coreModules = freq.trim().split('\n')
.map(l => l.trim().match(/^(\d+)\s+(.+)$/))
.filter(m => m && parseInt(m[1]) >= 5)
.map(m => m[2])
// Recent branches
const branches = bash('git for-each-ref --sort=-committerdate refs/heads/ --format="%(refname:short)" | head -10')
mainlineProfile.activeBranches = branches.trim().split('\n').filter(Boolean)
}
Write(`${sessionFolder}/mainline-profile.json`, JSON.stringify(mainlineProfile, null, 2))
```
---
### Phase 2: Drift Discovery
```javascript
let exploreAgent = null
try {
// Launch cli-explore-agent
exploreAgent = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS
1. Read: ~/.codex/agents/cli-explore-agent.md
2. Read: .workflow/project-tech.json (if exists)
## Task Objective
Discover stale artifacts for cleanup.
## Context
- Session: ${sessionFolder}
- Focus: ${focusArea || 'entire project'}
## Discovery Categories
### 1. Stale Workflow Sessions
Scan: .workflow/active/WFS-*, .workflow/archives/WFS-*, .workflow/.lite-plan/*, .workflow/.debug/DBG-*
Criteria: No modification >7 days + no related git commits
### 2. Drifted Documents
Scan: .claude/rules/tech/*, .workflow/.scratchpad/*
Criteria: >30% broken references to non-existent files
### 3. Dead Code
Scan: Unused exports, orphan files (not imported anywhere)
Criteria: No importers in import graph
## Output
Write to: ${sessionFolder}/cleanup-manifest.json
Format:
{
"generated_at": "ISO",
"discoveries": {
"stale_sessions": [{ "path": "...", "age_days": N, "reason": "...", "risk": "low|medium|high" }],
"drifted_documents": [{ "path": "...", "drift_percentage": N, "reason": "...", "risk": "..." }],
"dead_code": [{ "path": "...", "type": "orphan_file", "reason": "...", "risk": "..." }]
},
"summary": { "total_items": N, "by_category": {...}, "by_risk": {...} }
}
`
})
// Wait with timeout handling
let result = wait({ ids: [exploreAgent], timeout_ms: 600000 })
if (result.timed_out) {
send_input({ id: exploreAgent, message: 'Complete now and write cleanup-manifest.json.' })
result = wait({ ids: [exploreAgent], timeout_ms: 300000 })
if (result.timed_out) throw new Error('Agent timeout')
}
if (!fileExists(`${sessionFolder}/cleanup-manifest.json`)) {
throw new Error('Manifest not generated')
}
} finally {
if (exploreAgent) close_agent({ id: exploreAgent })
}
```
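The staleness criteria handed to the agent above could be checked roughly as follows. This is a sketch only, assuming GNU `find` is available and that `bash()` returns command output as a string (as in the helpers above).

```javascript
// Sketch: a session is stale when nothing under it changed for maxAgeDays
// AND no commit in that window touched its path.
function isStaleSession(path, maxAgeDays = 7) {
  const mtimeRaw = bash(`find "${path}" -type f -printf '%T@\n' 2>/dev/null | sort -rn | head -1 | cut -d. -f1`).trim()
  const ageDays = mtimeRaw ? (Date.now() / 1000 - parseInt(mtimeRaw)) / 86400 : Infinity
  const recentCommits = bash(`git log --since="${maxAgeDays} days ago" --oneline -- "${path}" | head -1`).trim()
  return ageDays > maxAgeDays && recentCommits === ''
}
```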
---
### Phase 3: Confirmation
```javascript
// Load and validate manifest
const manifest = JSON.parse(Read(`${sessionFolder}/cleanup-manifest.json`))
// Risk roll-up per category (maps the short labels used below to manifest keys)
const riskKeyMap = { sessions: 'stale_sessions', documents: 'drifted_documents', code: 'dead_code' }
function getRiskSummary(category) {
  const items = manifest.discoveries[riskKeyMap[category]] || []
  const counts = {}
  for (const item of items) counts[item.risk] = (counts[item.risk] || 0) + 1
  return Object.entries(counts).map(([risk, n]) => `${n} ${risk}`).join(', ') || 'none'
}
// Display summary
console.log(`
## Cleanup Discovery Report
| Category | Count | Risk |
|----------|-------|------|
| Sessions | ${manifest.summary.by_category.stale_sessions} | ${getRiskSummary('sessions')} |
| Documents | ${manifest.summary.by_category.drifted_documents} | ${getRiskSummary('documents')} |
| Dead Code | ${manifest.summary.by_category.dead_code} | ${getRiskSummary('code')} |
**Total**: ${manifest.summary.total_items} items
`)
// Dry-run exit
if (isDryRun) {
console.log(`
**Dry-run mode**: No changes made.
Manifest: ${sessionFolder}/cleanup-manifest.json
`)
return
}
// User confirmation
const selection = AskUser({
questions: [
{
question: "Which categories to clean?",
header: "Categories",
multiSelect: true,
options: [
{ label: "Sessions", description: `${manifest.summary.by_category.stale_sessions} stale sessions` },
{ label: "Documents", description: `${manifest.summary.by_category.drifted_documents} drifted docs` },
{ label: "Dead Code", description: `${manifest.summary.by_category.dead_code} unused files` }
]
},
{
question: "Risk level?",
header: "Risk",
options: [
{ label: "Low only", description: "Safest (Recommended)" },
{ label: "Low + Medium", description: "Includes likely unused" },
{ label: "All", description: "Aggressive" }
]
}
]
})
```
---
### Phase 4: Execution
```javascript
const riskFilter = {
'Low only': ['low'],
'Low + Medium': ['low', 'medium'],
'All': ['low', 'medium', 'high']
}[selection.risk]
// Collect items to clean
const items = []
if (selection.categories.includes('Sessions')) {
items.push(...manifest.discoveries.stale_sessions.filter(s => riskFilter.includes(s.risk)))
}
if (selection.categories.includes('Documents')) {
items.push(...manifest.discoveries.drifted_documents.filter(d => riskFilter.includes(d.risk)))
}
if (selection.categories.includes('Dead Code')) {
items.push(...manifest.discoveries.dead_code.filter(c => riskFilter.includes(c.risk)))
}
const results = { staged: [], deleted: [], failed: [], skipped: [] }
// Validate and stage
for (const item of items) {
const validation = validatePath(item.path)
if (!validation.valid) {
results.skipped.push({ path: item.path, reason: validation.reason })
continue
}
if (!fileExists(item.path) && !dirExists(item.path)) {
results.skipped.push({ path: item.path, reason: 'Not found' })
continue
}
try {
const trashTarget = `${trashFolder}/${item.path.replace(/\//g, '_')}`
bash(`mv "${item.path}" "${trashTarget}"`)
results.staged.push({ path: item.path, trashPath: trashTarget })
} catch (e) {
results.failed.push({ path: item.path, error: e.message })
}
}
// Permanent deletion
for (const staged of results.staged) {
try {
bash(`rm -rf "${staged.trashPath}"`)
results.deleted.push(staged.path)
} catch (e) {
console.error(`Failed: ${staged.path}`)
}
}
// Cleanup empty trash
bash(`rmdir "${trashFolder}" 2>/dev/null || true`)
// Report
console.log(`
## Cleanup Complete
**Deleted**: ${results.deleted.length}
**Failed**: ${results.failed.length}
**Skipped**: ${results.skipped.length}
### Deleted
${results.deleted.map(p => `- ${p}`).join('\n') || '(none)'}
${results.failed.length > 0 ? `### Failed\n${results.failed.map(f => `- ${f.path}: ${f.error}`).join('\n')}` : ''}
Report: ${sessionFolder}/cleanup-report.json
`)
Write(`${sessionFolder}/cleanup-report.json`, JSON.stringify({
timestamp: getUtc8ISOString(),
results,
summary: {
deleted: results.deleted.length,
failed: results.failed.length,
skipped: results.skipped.length
}
}, null, 2))
```
---
## Session Folder
```
.workflow/.clean/clean-{YYYY-MM-DD}/
├── mainline-profile.json # Git history analysis
├── cleanup-manifest.json # Discovery results
├── cleanup-report.json # Execution results
└── .trash/ # Staging area (temporary)
```
## Risk Levels
| Risk | Description | Examples |
|------|-------------|----------|
| **Low** | Safe to delete | Empty sessions, scratchpad files |
| **Medium** | Likely unused | Orphan files, old archives |
| **High** | May have dependencies | Files with some imports |
## Security Features
| Feature | Protection |
|---------|------------|
| Path Validation | Whitelist directories, reject traversal |
| Staged Deletion | Move to .trash before permanent delete |
| Dangerous Patterns | Block system dirs, node_modules, .git |
## Iteration Flow
```
First Call (/prompts:clean):
├─ Detect mainline from git history
├─ Discover stale artifacts via subagent
├─ Display summary, await user selection
└─ Execute cleanup with staging
Dry-Run (/prompts:clean --dry-run):
├─ All phases except execution
└─ Manifest saved for review
Focused (/prompts:clean FOCUS="auth"):
└─ Discovery limited to specified area
```
## Error Handling
| Situation | Action |
|-----------|--------|
| No git repo | Use file timestamps only |
| Agent timeout | Retry once with prompt, then abort |
| Path validation fail | Skip item, report reason |
| Manifest parse error | Abort with error |
| Empty discovery | Report "codebase is clean" |
---
**Now execute cleanup workflow** with focus: $FOCUS


@@ -0,0 +1,719 @@
---
name: collaborative-plan-parallel
description: Parallel collaborative planning with Execution Groups - Multi-codex parallel task generation, execution group assignment, multi-branch strategy. Codex-optimized.
argument-hint: "TASK=\"<description>\" [--max-groups=3] [--group-strategy=automatic|balanced|manual]"
---
# Codex Collaborative-Plan-Parallel Workflow
## Quick Start
Parallel collaborative planning workflow using **Execution Groups** architecture. Splits task into sub-domains, assigns them to execution groups, and prepares for multi-branch parallel development.
**Core workflow**: Understand → Group Assignment → Sequential Planning → Conflict Detection → Execution Strategy
**Key features**:
- **Execution Groups**: Sub-domains grouped for parallel execution by different codex instances
- **Multi-branch strategy**: Each execution group works on independent Git branch
- **Codex instance assignment**: Each group assigned to specific codex worker
- **Dependency-aware grouping**: Automatic or manual group assignment based on dependencies
- **plan-note.md**: Shared document with execution group sections
**Note**: Planning is still serial (Codex limitation), but output is structured for parallel execution.
## Overview
This workflow enables structured planning for parallel execution:
1. **Understanding & Group Assignment** - Analyze requirements, identify sub-domains, assign to execution groups
2. **Sequential Planning** - Process each sub-domain serially via CLI analysis (planning phase only)
3. **Conflict Detection** - Scan for conflicts across execution groups
4. **Execution Strategy** - Generate branch strategy and codex assignment for parallel execution
The key innovation is **Execution Groups** - sub-domains are grouped by dependencies and complexity, enabling true parallel development with multiple codex instances.
## Output Structure
```
.workflow/.planning/CPLAN-{slug}-{date}/
├── plan-note.md # ⭐ Core: Requirements + Groups + Tasks
├── requirement-analysis.json # Phase 1: Sub-domain + group assignments
├── execution-groups.json # ⭐ Phase 1: Group metadata + codex assignment
├── agents/ # Phase 2: Per-domain plans (serial planning)
│ ├── {domain-1}/
│ │ └── plan.json
│ ├── {domain-2}/
│ │ └── plan.json
│ └── ...
├── conflicts.json # Phase 3: Conflict report
├── execution-strategy.md # ⭐ Phase 4: Branch strategy + codex commands
└── plan.md # Phase 4: Human-readable summary
```
## Output Artifacts
### Phase 1: Understanding & Group Assignment
| Artifact | Purpose |
|----------|---------|
| `plan-note.md` | Collaborative template with execution group sections |
| `requirement-analysis.json` | Sub-domain assignments with group IDs |
| `execution-groups.json` | ⭐ Group metadata, codex assignment, branch names, dependencies |
### Phase 2: Sequential Planning (extends original Phase 2)
| Artifact | Purpose |
|----------|---------|
| `agents/{domain}/plan.json` | Detailed implementation plan per domain |
| Updated `plan-note.md` | Task pool and evidence sections filled per domain |
### Phase 3: Conflict Detection (same as original)
| Artifact | Purpose |
|----------|---------|
| `conflicts.json` | Detected conflicts with types, severity, resolutions |
| Updated `plan-note.md` | Conflict markers section populated |
### Phase 4: Execution Strategy Generation
| Artifact | Purpose |
|----------|---------|
| `execution-strategy.md` | ⭐ Branch creation commands, codex execution commands per group, merge strategy |
| `plan.md` | Human-readable summary with execution groups |
---
## Implementation Details
### Session Initialization
The workflow automatically generates a unique session identifier and directory structure.
**Session ID Format**: `CPLAN-{slug}-{date}`
- `slug`: Lowercase alphanumeric, max 30 chars
- `date`: YYYY-MM-DD format (UTC+8)
**Session Directory**: `.workflow/.planning/{sessionId}/`
**Auto-Detection**: If session folder exists with plan-note.md, automatically enters continue mode.
**Session Variables**:
- `sessionId`: Unique session identifier
- `sessionFolder`: Base directory for all artifacts
- `maxGroups`: Maximum execution groups (default: 3)
- `groupStrategy`: automatic | balanced | manual (default: automatic)
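A minimal sketch of this initialization, reusing the `getUtc8ISOString` and `bash()` helpers shown in the other workflows of this commit (the slug rule follows the format described above):

```javascript
// Build session ID and folder; enter continue mode if plan-note.md already exists
const slug = "$TASK".toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-+|-+$/g, '').substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)   // YYYY-MM-DD (UTC+8)
const sessionId = `CPLAN-${slug}-${dateStr}`
const sessionFolder = `.workflow/.planning/${sessionId}`
const continueMode = bash(`test -f "${sessionFolder}/plan-note.md" && echo "yes" || echo "no"`).includes('yes')
bash(`mkdir -p ${sessionFolder}/agents`)
```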
---
## Phase 1: Understanding & Group Assignment
**Objective**: Analyze task requirements, identify sub-domains, assign to execution groups, and create the plan-note.md template.
### Step 1.1: Analyze Task Description
Use built-in tools to understand the task scope and identify sub-domains.
**Analysis Activities**:
1. **Extract task keywords** - Identify key terms and concepts
2. **Identify sub-domains** - Split into 2-8 parallelizable focus areas
3. **Analyze dependencies** - Map cross-domain dependencies
4. **Assess complexity** - Evaluate task complexity per domain (Low/Medium/High)
5. **Search for references** - Find related documentation, README, architecture guides
**Sub-Domain Identification Patterns**:
| Pattern | Keywords | Typical Group Assignment |
|---------|----------|--------------------------|
| Backend API | 服务, 后端, API, 接口 | Group with database if dependent |
| Frontend | 界面, 前端, UI, 视图 | Separate group (UI-focused) |
| Database | 数据, 存储, 数据库, 持久化 | Group with backend if tightly coupled |
| Testing | 测试, 验证, QA | Can be separate or split across groups |
| Infrastructure | 部署, 基础, 运维, 配置 | Usually separate group |
### Step 1.2: Assign Execution Groups
Assign sub-domains to execution groups based on strategy.
**Group Assignment Strategies**:
#### 1. Automatic Strategy (default)
- **Logic**: Group domains by dependency relationships (see the sketch at the end of this step)
- **Rule**: Domains with direct dependencies → same group
- **Rule**: Independent domains → separate groups (up to maxGroups)
- **Example**:
- Group 1: backend-api + database (dependent)
- Group 2: frontend + ui-components (dependent)
- Group 3: testing + documentation (independent)
#### 2. Balanced Strategy
- **Logic**: Distribute domains evenly across groups by estimated effort
- **Rule**: Balance total complexity across groups
- **Example**:
- Group 1: frontend (high) + testing (low)
- Group 2: backend (high) + documentation (low)
- Group 3: database (medium) + infrastructure (medium)
#### 3. Manual Strategy
- **Logic**: Prompt user to manually assign domains to groups
- **UI**: Present domains with dependencies, ask for group assignments
- **Validation**: Check that dependencies are within same group or properly ordered
**Codex Instance Assignment**:
- Each group assigned to `codex-{N}` (e.g., codex-1, codex-2, codex-3)
- Instance names are logical identifiers for parallel execution
- Actual parallel execution happens in unified-execute-parallel workflow
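A rough sketch of the automatic strategy, assuming each sub-domain entry carries `focus_area` and `dependencies` (other domains' `focus_area` names) as in requirement-analysis.json:

```javascript
// Domains joined by a dependency share a group; independent domains open new
// groups until maxGroups is reached, after which they fold into the last group.
function assignGroups(subDomains, maxGroups = 3) {
  const groupOf = {}   // focus_area -> group index
  let next = 0
  for (const sub of subDomains) {
    const depGroup = (sub.dependencies || []).map(d => groupOf[d]).find(g => g !== undefined)
    groupOf[sub.focus_area] = depGroup !== undefined ? depGroup : Math.min(next++, maxGroups - 1)
  }
  const groups = []
  for (const [domain, g] of Object.entries(groupOf)) {
    groups[g] = groups[g] || { group_id: `EG-${String(g + 1).padStart(3, '0')}`, codex_instance: `codex-${g + 1}`, domains: [] }
    groups[g].domains.push(domain)
  }
  return groups
}
```

The balanced and manual strategies would replace only this grouping decision; the downstream artifacts stay the same.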
### Step 1.3: Generate execution-groups.json
Create the execution group metadata document.
**execution-groups.json Structure**:
```json
{
"session_id": "CPLAN-auth-2025-02-03",
"total_groups": 3,
"group_strategy": "automatic",
"groups": [
{
"group_id": "EG-001",
"codex_instance": "codex-1",
"domains": ["frontend", "ui-components"],
"branch_name": "feature/cplan-auth-eg-001-frontend",
"estimated_effort": "high",
"task_id_range": "TASK-001~200",
"dependencies_on_groups": [],
"cross_group_files": []
},
{
"group_id": "EG-002",
"codex_instance": "codex-2",
"domains": ["backend-api", "database"],
"branch_name": "feature/cplan-auth-eg-002-backend",
"estimated_effort": "medium",
"task_id_range": "TASK-201~400",
"dependencies_on_groups": [],
"cross_group_files": []
},
{
"group_id": "EG-003",
"codex_instance": "codex-3",
"domains": ["testing"],
"branch_name": "feature/cplan-auth-eg-003-testing",
"estimated_effort": "low",
"task_id_range": "TASK-401~500",
"dependencies_on_groups": ["EG-001", "EG-002"],
"cross_group_files": []
}
],
"inter_group_dependencies": [
{
"from_group": "EG-003",
"to_group": "EG-001",
"dependency_type": "requires_completion",
"description": "Testing requires frontend implementation"
},
{
"from_group": "EG-003",
"to_group": "EG-002",
"dependency_type": "requires_completion",
"description": "Testing requires backend API"
}
]
}
```
**Field Descriptions**:
| Field | Purpose |
|-------|---------|
| `group_id` | Unique execution group identifier (EG-001, EG-002, ...) |
| `codex_instance` | Logical codex worker name for parallel execution |
| `domains[]` | Sub-domains assigned to this group |
| `branch_name` | Git branch name for this group's work |
| `estimated_effort` | Complexity: low/medium/high |
| `task_id_range` | Non-overlapping TASK ID range (200 IDs per group) |
| `dependencies_on_groups[]` | Groups that must complete before this group starts |
| `cross_group_files[]` | Files modified by multiple groups (conflict risk) |
| `inter_group_dependencies[]` | Explicit cross-group dependency relationships |
### Step 1.4: Create plan-note.md Template with Groups
Generate structured template with execution group sections.
**plan-note.md Structure**:
- **YAML Frontmatter**: session_id, original_requirement, total_groups, group_strategy, status
- **Section: 需求理解** (Requirement Understanding): Core objectives, key points, constraints, group strategy
- **Section: 执行组划分** (Execution Group Breakdown): Table of groups with domains, branches, codex assignments
- **Section: 任务池 - {Group ID} - {Domains}** (Task Pool): Pre-allocated task section per execution group
- **Section: 依赖关系** (Dependencies): Cross-group dependencies
- **Section: 冲突标记** (Conflict Markers): Populated in Phase 3
- **Section: 上下文证据 - {Group ID}** (Context Evidence): Evidence section per execution group
**TASK ID Range Allocation**: Each group receives 200 non-overlapping IDs (e.g., Group 1: TASK-001~200, Group 2: TASK-201~400).
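The range allocation itself is simple arithmetic; a sketch:

```javascript
// 200 non-overlapping TASK IDs per execution group, formatted as in the examples above
const pad = n => String(n).padStart(3, '0')
const taskIdRange = (groupIndex, size = 200) => {
  const start = groupIndex * size + 1
  return `TASK-${pad(start)}~${pad(start + size - 1)}`
}
// taskIdRange(0) -> "TASK-001~200", taskIdRange(1) -> "TASK-201~400"
```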
### Step 1.5: Update requirement-analysis.json with Groups
Extend requirement-analysis.json to include execution group assignments.
**requirement-analysis.json Structure** (extended):
```json
{
"session_id": "CPLAN-auth-2025-02-03",
"original_requirement": "...",
"complexity": "high",
"total_groups": 3,
"group_strategy": "automatic",
"sub_domains": [
{
"focus_area": "frontend",
"description": "...",
"execution_group": "EG-001",
"task_id_range": "TASK-001~100",
"estimated_effort": "high",
"dependencies": []
},
{
"focus_area": "ui-components",
"description": "...",
"execution_group": "EG-001",
"task_id_range": "TASK-101~200",
"estimated_effort": "medium",
"dependencies": ["frontend"]
}
],
"execution_groups_summary": [
{
"group_id": "EG-001",
"domains": ["frontend", "ui-components"],
"total_estimated_effort": "high"
}
]
}
```
**Success Criteria**:
- 2-3 execution groups identified (up to maxGroups)
- Each group has 1-4 sub-domains
- Dependencies mapped (intra-group and inter-group)
- execution-groups.json created with complete metadata
- plan-note.md template includes group sections
- requirement-analysis.json extended with group assignments
- Branch names generated for each group
- Codex instance assigned to each group
---
## Phase 2: Sequential Sub-Domain Planning
**Objective**: Process each sub-domain serially via CLI analysis (same as original workflow, but with group awareness).
**Note**: This phase is identical to original collaborative-plan-with-file Phase 2, with the following additions:
- CLI prompt includes execution group context
- Task IDs respect group's assigned range
- Cross-group dependencies explicitly documented
### Step 2.1: Domain Planning Loop (Serial)
For each sub-domain in sequence:
1. Execute Gemini/Codex CLI analysis for the current domain
2. Include execution group metadata in CLI context
3. Parse CLI output into structured plan
4. Save detailed plan as `agents/{domain}/plan.json`
5. Update plan-note.md group section with task summaries and evidence
**Planning Guideline**: Wait for each domain's CLI analysis to complete before proceeding.
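As a sketch, the serial loop might look like this; `runCliPlanning` and `updatePlanNoteSection` are hypothetical helpers standing in for the CLI call and the plan-note.md update described in Steps 2.2 and 2.3:

```javascript
// Planning stays sequential; only the later execution phase runs in parallel.
for (const sub of analysis.sub_domains) {
  const group = executionGroups.groups.find(g => g.domains.includes(sub.focus_area))
  const planJson = runCliPlanning({ domain: sub, group, sessionFolder })         // hypothetical helper
  Write(`${sessionFolder}/agents/${sub.focus_area}/plan.json`, JSON.stringify(planJson, null, 2))
  updatePlanNoteSection(sessionFolder, group.group_id, sub.focus_area, planJson) // hypothetical helper
}
```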
### Step 2.2: CLI Planning with Group Context
Execute synchronous CLI analysis with execution group awareness.
**CLI Analysis Scope** (extended):
- **PURPOSE**: Generate detailed implementation plan for domain within execution group
- **CONTEXT**:
- Domain description
- Execution group ID and metadata
- Related codebase files
- Prior domain results within same group
- Cross-group dependencies (if any)
- **TASK**: Analyze domain, identify tasks within group's ID range, define dependencies
- **EXPECTED**: JSON output with tasks, summaries, group-aware dependencies, effort estimates
- **CONSTRAINTS**:
- Use only TASK IDs from assigned range
- Document any cross-group dependencies
- Flag files that might be modified by other groups
**Cross-Group Dependency Handling**:
- If a task depends on another group's completion, document as `depends_on_group: "EG-XXX"`
- Mark files that are likely modified by multiple groups as `cross_group_risk: true`
### Step 2.3: Update plan-note.md Group Sections
Parse CLI output and update the plan-note.md sections for the current domain's group.
**Task Summary Format** (extended with group info):
- Task header: `### TASK-{ID}: {Title} [{domain}] [Group: {group_id}]`
- Fields: 状态 (status), 复杂度 (complexity), 依赖 (dependencies), 范围 (scope), **执行组** (execution_group)
- Cross-group dependencies: `依赖执行组: EG-XXX`
- Modification points with conflict risk flag
- Conflict risk assessment
**Evidence Format** (same as original)
**Success Criteria**:
- All domains processed sequentially
- `agents/{domain}/plan.json` created for each domain
- `plan-note.md` updated with group-aware task pools
- Cross-group dependencies explicitly documented
- Task IDs respect group ranges
---
## Phase 3: Conflict Detection
**Objective**: Analyze plan-note.md for conflicts within and across execution groups.
**Note**: This phase extends original conflict detection with group-aware analysis.
### Step 3.1: Parse plan-note.md (same as original)
Extract all tasks from all group sections.
### Step 3.2: Detect Conflicts (Extended)
Scan all tasks for four categories of conflicts (added cross-group conflicts).
**Conflict Types** (extended):
| Type | Severity | Detection Logic | Resolution |
|------|----------|-----------------|------------|
| file_conflict | high | Same file:location modified by multiple domains within same group | Coordinate modification order |
| cross_group_file_conflict | critical | Same file modified by multiple execution groups | Requires merge coordination or branch rebase strategy |
| dependency_cycle | critical | Circular dependencies in task graph (within or across groups) | Remove or reorganize dependencies |
| strategy_conflict | medium | Multiple high-risk tasks in same file from different domains/groups | Review approaches and align on strategy |
**Detection Activities**:
1. **File Conflicts (Intra-Group)**: Group modification points by file:location within each group
2. **Cross-Group File Conflicts**: Identify files modified by multiple execution groups
3. **Dependency Cycles**: Build dependency graph including cross-group dependencies, detect cycles
4. **Strategy Conflicts**: Identify files with high-risk tasks from multiple groups
**Cross-Group Conflict Detection**:
- Parse `cross_group_files[]` from execution-groups.json
- Scan all tasks for files modified by multiple groups
- Flag as critical conflict requiring merge strategy
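The cycle check can be a standard depth-first search over the combined task graph; a sketch, assuming each extracted task carries an `id` and a `dependencies` list (task or group IDs):

```javascript
// Returns every cycle found, e.g. ["TASK-031", "TASK-214", "TASK-031"]
function findDependencyCycles(tasks) {
  const graph = Object.fromEntries(tasks.map(t => [t.id, t.dependencies || []]))
  const visiting = new Set(), visited = new Set(), cycles = []
  function dfs(node, path) {
    if (visiting.has(node)) { cycles.push([...path.slice(path.indexOf(node)), node]); return }
    if (visited.has(node)) return
    visiting.add(node)
    for (const dep of graph[node] || []) dfs(dep, [...path, node])
    visiting.delete(node)
    visited.add(node)
  }
  for (const id of Object.keys(graph)) dfs(id, [])
  return cycles
}
```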
### Step 3.3: Update execution-groups.json with Conflicts
Append detected cross-group conflicts to execution-groups.json.
**Update Structure**:
```json
{
"groups": [
{
"group_id": "EG-001",
"cross_group_files": [
{
"file": "src/shared/config.ts",
"conflicting_groups": ["EG-002"],
"conflict_type": "both modify shared configuration",
"resolution": "Coordinate changes or use merge strategy"
}
]
}
]
}
```
### Step 3.4: Generate Conflict Artifacts (Extended)
Write conflict results with group context.
**conflicts.json Structure** (extended):
- `detected_at`: Detection timestamp
- `total_conflicts`: Number of conflicts
- `intra_group_conflicts[]`: Conflicts within single group
- `cross_group_conflicts[]`: ⭐ Conflicts across execution groups
- `conflicts[]`: All conflict objects with group IDs
**plan-note.md Update**: Populate "冲突标记" section with:
- Intra-group conflicts (can be resolved during group execution)
- Cross-group conflicts (require coordination or merge strategy)
**Success Criteria**:
- All tasks analyzed for intra-group and cross-group conflicts
- `conflicts.json` written with group-aware detection results
- `execution-groups.json` updated with cross_group_files
- `plan-note.md` updated with conflict markers
- Cross-group conflicts flagged as critical
---
## Phase 4: Execution Strategy Generation
**Objective**: Generate branch strategy and codex execution commands for parallel development.
### Step 4.1: Generate Branch Strategy
Create Git branch strategy for multi-branch parallel development.
**Branch Strategy Decisions**:
1. **Independent Groups** (no cross-group conflicts):
- Each group works on independent branch from main
- Branches can be merged independently
- Parallel development fully supported
2. **Dependent Groups** (cross-group dependencies but no file conflicts):
- Groups with dependencies must coordinate completion order
- Independent branches, but merge order matters
- Group A completes → merge to main → Group B starts/continues
3. **Conflicting Groups** (cross-group file conflicts):
- Strategy 1: Sequential - Complete one group, merge, then start next
- Strategy 2: Feature branch + rebase - Each group rebases on main periodically
- Strategy 3: Shared integration branch - Both groups branch from shared base, coordinate merges
**Default Strategy**: Independent branches with merge order based on dependencies
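A sketch of that decision, driven by the fields documented for conflicts.json and execution-groups.json:

```javascript
// Cross-group file conflicts force the 'conflicting' strategies; otherwise the
// presence of inter-group dependencies decides between 'dependent' and 'independent'.
function chooseBranchStrategy(executionGroups, conflicts) {
  if ((conflicts.cross_group_conflicts || []).length > 0) return 'conflicting'
  const hasDeps = executionGroups.groups.some(g => (g.dependencies_on_groups || []).length > 0)
  return hasDeps ? 'dependent' : 'independent'
}
```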
### Step 4.2: Generate execution-strategy.md
Create execution strategy document with concrete commands.
**execution-strategy.md Structure**:
````markdown
# Execution Strategy: {session_id}
## Overview
- **Total Execution Groups**: {N}
- **Group Strategy**: {automatic|balanced|manual}
- **Branch Strategy**: {independent|dependent|conflicting}
- **Estimated Total Effort**: {sum of all groups}
## Execution Groups
### EG-001: Frontend Development
- **Codex Instance**: codex-1
- **Domains**: frontend, ui-components
- **Branch**: feature/cplan-auth-eg-001-frontend
- **Dependencies**: None (can start immediately)
- **Estimated Effort**: High
### EG-002: Backend Development
- **Codex Instance**: codex-2
- **Domains**: backend-api, database
- **Branch**: feature/cplan-auth-eg-002-backend
- **Dependencies**: None (can start immediately)
- **Estimated Effort**: Medium
### EG-003: Testing
- **Codex Instance**: codex-3
- **Domains**: testing
- **Branch**: feature/cplan-auth-eg-003-testing
- **Dependencies**: EG-001, EG-002 (must complete first)
- **Estimated Effort**: Low
## Branch Creation Commands
```bash
# Create branches for all execution groups
git checkout main
git pull
# Group 1: Frontend
git checkout -b feature/cplan-auth-eg-001-frontend
git push -u origin feature/cplan-auth-eg-001-frontend
# Group 2: Backend
git checkout main
git checkout -b feature/cplan-auth-eg-002-backend
git push -u origin feature/cplan-auth-eg-002-backend
# Group 3: Testing
git checkout main
git checkout -b feature/cplan-auth-eg-003-testing
git push -u origin feature/cplan-auth-eg-003-testing
```
## Parallel Execution Commands
Execute these commands in parallel (separate terminal sessions or background):
```bash
# Terminal 1: Execute Group 1 (Frontend)
PLAN=".workflow/.planning/CPLAN-auth-2025-02-03/plan-note.md" \
GROUP="EG-001" \
/workflow:unified-execute-parallel
# Terminal 2: Execute Group 2 (Backend)
PLAN=".workflow/.planning/CPLAN-auth-2025-02-03/plan-note.md" \
GROUP="EG-002" \
/workflow:unified-execute-parallel
# Terminal 3: Execute Group 3 (Testing) - starts after EG-001 and EG-002 complete
PLAN=".workflow/.planning/CPLAN-auth-2025-02-03/plan-note.md" \
GROUP="EG-003" \
WAIT_FOR="EG-001,EG-002" \
/workflow:unified-execute-parallel
```
## Cross-Group Conflicts
### Critical Conflicts Detected
1. **File: src/shared/config.ts**
- Modified by: EG-001 (frontend), EG-002 (backend)
- Resolution: Coordinate changes or use merge strategy
- Recommendation: EG-001 completes first, EG-002 rebases before continuing
### Resolution Strategy
- **Option 1**: Sequential execution (EG-001 → merge → EG-002 rebases)
- **Option 2**: Manual coordination (both groups align on config changes before execution)
- **Option 3**: Split file (refactor into separate configs if feasible)
## Merge Strategy
### Independent Groups (EG-001, EG-002)
```bash
# After EG-001 completes
git checkout main
git merge feature/cplan-auth-eg-001-frontend
git push
# After EG-002 completes
git checkout main
git merge feature/cplan-auth-eg-002-backend
git push
```
### Dependent Group (EG-003)
```bash
# After EG-001 and EG-002 merged to main
git checkout feature/cplan-auth-eg-003-testing
git rebase main # Update with latest changes
# Continue execution...
# After EG-003 completes
git checkout main
git merge feature/cplan-auth-eg-003-testing
git push
```
## Monitoring Progress
Track execution progress:
```bash
# Check execution logs for each group
cat .workflow/.execution/EXEC-eg-001-*/execution-events.md
cat .workflow/.execution/EXEC-eg-002-*/execution-events.md
cat .workflow/.execution/EXEC-eg-003-*/execution-events.md
```
````
### Step 4.3: Generate plan.md Summary (Extended)
Create human-readable summary with execution group information.
**plan.md Structure** (extended):
| Section | Content |
|---------|---------|
| Header | Session ID, task description, creation time |
| 需求 (Requirements) | From plan-note.md "需求理解" |
| 执行组划分 (Execution Groups) | ⭐ Table of groups with domains, branches, codex assignments, dependencies |
| 任务概览 (Task Overview) | All tasks grouped by execution group |
| 冲突报告 (Conflict Report) | Intra-group and cross-group conflicts |
| 执行策略 (Execution Strategy) | Branch strategy, parallel execution commands, merge order |
### Step 4.4: Display Completion Summary
Present session statistics with execution group information.
**Summary Content**:
- Session ID and directory path
- Total execution groups created
- Total domains planned
- Total tasks generated (per group and total)
- Conflict status (intra-group and cross-group)
- Execution strategy summary
- Next step: Use `workflow:unified-execute-parallel` with GROUP parameter
**Success Criteria**:
- `execution-strategy.md` generated with complete branch and execution strategy
- `plan.md` includes execution group information
- All artifacts present in session directory
- User informed of parallel execution approach and commands
- Cross-group conflicts clearly documented with resolution strategies
---
## Configuration
| Parameter | Default | Description |
|-----------|---------|-------------|
| `--max-groups` | 3 | Maximum execution groups to create |
| `--group-strategy` | automatic | Group assignment: automatic / balanced / manual |
**Group Strategy Details**:
- **automatic**: Group by dependency relationships (dependent domains in same group)
- **balanced**: Distribute evenly by estimated effort
- **manual**: Prompt user to assign domains to groups interactively
---
## Error Handling & Recovery
| Situation | Action | Recovery |
|-----------|--------|----------|
| Too many groups requested | Limit to maxGroups | Merge low-effort domains |
| Circular group dependencies | Stop execution, report error | Reorganize domain assignments |
| All domains in one group | Warning: No parallelization | Continue or prompt user to split |
| Cross-group file conflicts | Flag as critical | Suggest resolution strategies |
| Manual grouping timeout | Fall back to automatic | Continue with automatic strategy |
---
## Best Practices
### Before Starting Planning
1. **Clear Task Description**: Detailed requirements for better grouping
2. **Understand Dependencies**: Know which modules depend on each other
3. **Choose Group Strategy**:
- Use `automatic` for dependency-heavy tasks
- Use `balanced` for independent features
- Use `manual` for complex architectures you understand well
### During Planning
1. **Review Group Assignments**: Check execution-groups.json makes sense
2. **Verify Dependencies**: Cross-group dependencies should be minimal
3. **Check Branch Names**: Ensure branch names follow project conventions
4. **Monitor Conflicts**: Review conflicts.json for cross-group file conflicts
### After Planning
1. **Review Execution Strategy**: Read execution-strategy.md carefully
2. **Resolve Critical Conflicts**: Address cross-group file conflicts before execution
3. **Prepare Environments**: Ensure multiple codex instances can run in parallel
4. **Plan Merge Order**: Understand which groups must merge first
---
## Migration from Original Workflow
Existing `collaborative-plan-with-file` sessions can be converted to parallel execution:
1. Read existing `plan-note.md` and `requirement-analysis.json`
2. Assign sub-domains to execution groups (run Step 1.2 manually)
3. Generate `execution-groups.json` and `execution-strategy.md`
4. Use `workflow:unified-execute-parallel` for execution
---
**Now execute collaborative-plan-parallel for**: $TASK


@@ -0,0 +1,560 @@
---
name: collaborative-plan-with-file
description: Parallel collaborative planning with Plan Note - Multi-agent parallel task generation, unified plan-note.md, conflict detection. Codex subagent-optimized.
argument-hint: "TASK=\"<description>\" [--max-agents=5]"
---
# Codex Collaborative-Plan-With-File Workflow
## Quick Start
Parallel collaborative planning workflow using **Plan Note** architecture. Spawns parallel subagents for each sub-domain, generates task plans concurrently, and detects conflicts across domains.
**Core workflow**: Understand → Template → Parallel Subagent Planning → Conflict Detection → Completion
**Key features**:
- **plan-note.md**: Shared collaborative document with pre-allocated sections
- **Parallel subagent planning**: Each sub-domain planned by its own subagent concurrently
- **Conflict detection**: Automatic file, dependency, and strategy conflict scanning
- **No merge needed**: Pre-allocated sections eliminate merge conflicts
**Codex-Specific Features**:
- Parallel subagent execution via `spawn_agent` + batch `wait({ ids: [...] })`
- Role loading via path (agent reads `~/.codex/agents/*.md` itself)
- Pre-allocated sections per agent = no write conflicts
- Explicit lifecycle management with `close_agent`
## Overview
This workflow enables structured planning through parallel-capable phases:
1. **Understanding & Template** - Analyze requirements, identify sub-domains, create plan-note.md template
2. **Parallel Planning** - Spawn subagent per sub-domain, batch wait for all results
3. **Conflict Detection** - Scan plan-note.md for conflicts across all domains
4. **Completion** - Generate human-readable plan.md summary
The key innovation is the **Plan Note** architecture - a shared collaborative document with pre-allocated sections per sub-domain, eliminating merge conflicts. Combined with Codex's true parallel subagent execution, all domains are planned simultaneously.
## Output Structure
```
.workflow/.planning/CPLAN-{slug}-{date}/
├── plan-note.md # ⭐ Core: Requirements + Tasks + Conflicts
├── requirement-analysis.json # Phase 1: Sub-domain assignments
├── agents/ # Phase 2: Per-domain plans (serial)
│ ├── {domain-1}/
│ │ └── plan.json # Detailed plan
│ ├── {domain-2}/
│ │ └── plan.json
│ └── ...
├── conflicts.json # Phase 3: Conflict report
└── plan.md # Phase 4: Human-readable summary
```
## Output Artifacts
### Phase 1: Understanding & Template
| Artifact | Purpose |
|----------|---------|
| `plan-note.md` | Collaborative template with pre-allocated task pool and evidence sections per domain |
| `requirement-analysis.json` | Sub-domain assignments, TASK ID ranges, complexity assessment |
### Phase 2: Parallel Planning
| Artifact | Purpose |
|----------|---------|
| `agents/{domain}/plan.json` | Detailed implementation plan per domain (from parallel subagent) |
| Updated `plan-note.md` | Task pool and evidence sections filled by each subagent |
### Phase 3: Conflict Detection
| Artifact | Purpose |
|----------|---------|
| `conflicts.json` | Detected conflicts with types, severity, and resolutions |
| Updated `plan-note.md` | Conflict markers section populated |
### Phase 4: Completion
| Artifact | Purpose |
|----------|---------|
| `plan.md` | Human-readable summary with requirements, tasks, and conflicts |
---
## Implementation Details
### Session Initialization
The workflow automatically generates a unique session identifier and directory structure.
**Session ID Format**: `CPLAN-{slug}-{date}`
- `slug`: Lowercase alphanumeric, max 30 chars
- `date`: YYYY-MM-DD format (UTC+8)
**Session Directory**: `.workflow/.planning/{sessionId}/`
**Auto-Detection**: If session folder exists with plan-note.md, automatically enters continue mode.
**Session Variables**:
- `sessionId`: Unique session identifier
- `sessionFolder`: Base directory for all artifacts
- `maxDomains`: Maximum number of sub-domains (default: 5)
---
## Phase 1: Understanding & Template Creation
**Objective**: Analyze task requirements, identify parallelizable sub-domains, and create the plan-note.md template with pre-allocated sections.
### Step 1.1: Analyze Task Description
Use built-in tools to understand the task scope and identify sub-domains.
**Analysis Activities**:
1. **Extract task keywords** - Identify key terms and concepts from the task description
2. **Identify sub-domains** - Split into 2-5 parallelizable focus areas based on task complexity
3. **Assess complexity** - Evaluate overall task complexity (Low/Medium/High)
4. **Search for references** - Find related documentation, README files, and architecture guides
**Sub-Domain Identification Patterns**:
| Pattern | Keywords |
|---------|----------|
| Backend API | 服务, 后端, API, 接口 |
| Frontend | 界面, 前端, UI, 视图 |
| Database | 数据, 存储, 数据库, 持久化 |
| Testing | 测试, 验证, QA |
| Infrastructure | 部署, 基础, 运维, 配置 |
**Ambiguity Handling**: When the task description is unclear or has multiple interpretations, gather user clarification before proceeding.
### Step 1.2: Create plan-note.md Template
Generate a structured template with pre-allocated sections for each sub-domain.
**plan-note.md Structure**:
- **YAML Frontmatter**: session_id, original_requirement, created_at, complexity, sub_domains, status
- **Section: 需求理解** (Requirement Understanding): Core objectives, key points, constraints, split strategy
- **Section: 任务池 - {Domain N}** (Task Pool): Pre-allocated task section per domain (TASK-{range})
- **Section: 依赖关系** (Dependencies): Auto-generated after all domains complete
- **Section: 冲突标记** (Conflict Markers): Populated in Phase 3
- **Section: 上下文证据 - {Domain N}** (Context Evidence): Evidence section per domain
**TASK ID Range Allocation**: Each domain receives a non-overlapping range of 100 IDs (e.g., Domain 1: TASK-001~100, Domain 2: TASK-101~200).
### Step 1.3: Generate requirement-analysis.json
Create the sub-domain configuration document.
**requirement-analysis.json Structure**:
| Field | Purpose |
|-------|---------|
| `session_id` | Session identifier |
| `original_requirement` | Task description |
| `complexity` | Low / Medium / High |
| `sub_domains[]` | Array of focus areas with descriptions |
| `sub_domains[].focus_area` | Domain name |
| `sub_domains[].description` | Domain scope description |
| `sub_domains[].task_id_range` | Non-overlapping TASK ID range |
| `sub_domains[].estimated_effort` | Effort estimate |
| `sub_domains[].dependencies` | Cross-domain dependencies |
| `total_domains` | Number of domains identified |
**Success Criteria**:
- 2-5 clear sub-domains identified
- Each sub-domain can be planned independently
- Plan Note template includes all pre-allocated sections
- TASK ID ranges have no overlap (100 IDs per domain)
- Requirements understanding is comprehensive
---
## Phase 2: Parallel Sub-Domain Planning
**Objective**: Spawn parallel subagents for each sub-domain, generating detailed plans and updating plan-note.md concurrently.
**Execution Model**: Parallel subagent execution - all domains planned simultaneously via `spawn_agent` + batch `wait`.
**Key API Pattern**:
```
spawn_agent × N → wait({ ids: [...] }) → verify outputs → close_agent × N
```
### Step 2.1: User Confirmation (unless autoMode)
Display identified sub-domains and confirm before spawning agents.
```javascript
// User confirmation
if (!autoMode) {
// Display sub-domains for user approval
  // Options: "开始规划" (start planning) / "调整拆分" (adjust split) / "取消" (cancel)
}
```
### Step 2.2: Parallel Subagent Planning
**⚠️ IMPORTANT**: Role files are NOT read by main process. Pass path in message, agent reads itself.
**Spawn All Domain Agents in Parallel**:
```javascript
// Create agent directories first
subDomains.forEach(sub => {
// mkdir: ${sessionFolder}/agents/${sub.focus_area}/
})
// Parallel spawn - all agents start immediately
const agentIds = subDomains.map(sub => {
return spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-lite-planning-agent.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json
4. Read: ${sessionFolder}/plan-note.md (understand template structure)
5. Read: ${sessionFolder}/requirement-analysis.json (understand full context)
---
## Sub-Domain Context
**Focus Area**: ${sub.focus_area}
**Description**: ${sub.description}
**TASK ID Range**: ${sub.task_id_range[0]}-${sub.task_id_range[1]}
**Session**: ${sessionId}
## Dual Output Tasks
### Task 1: Generate Complete plan.json
Output: ${sessionFolder}/agents/${sub.focus_area}/plan.json
Include:
- Task breakdown with IDs from assigned range (${sub.task_id_range[0]}-${sub.task_id_range[1]})
- Dependencies within and across domains
- Files to modify with specific locations
- Effort and complexity estimates per task
- Conflict risk assessment for each task
### Task 2: Sync Summary to plan-note.md
**Locate Your Sections** (pre-allocated, ONLY modify these):
- Task Pool: "## 任务池 - ${toTitleCase(sub.focus_area)}"
- Evidence: "## 上下文证据 - ${toTitleCase(sub.focus_area)}"
**Task Summary Format**:
### TASK-{ID}: {Title} [${sub.focus_area}]
- **状态**: pending
- **复杂度**: Low/Medium/High
- **依赖**: TASK-xxx (if any)
- **范围**: Brief scope description
- **修改点**: file:line - change summary
- **冲突风险**: Low/Medium/High
**Evidence Format**:
- 相关文件: File list with relevance
- 现有模式: Patterns identified
- 约束: Constraints discovered
## Execution Steps
1. Explore codebase for domain-relevant files
2. Generate complete plan.json
3. Extract task summaries from plan.json
4. Read ${sessionFolder}/plan-note.md
5. Locate and fill your pre-allocated task pool section
6. Locate and fill your pre-allocated evidence section
7. Write back plan-note.md
## Important Rules
- ONLY modify your pre-allocated sections (do NOT touch other domains)
- Use assigned TASK ID range exclusively: ${sub.task_id_range[0]}-${sub.task_id_range[1]}
- Include conflict_risk assessment for each task
## Success Criteria
- [ ] Role definition read
- [ ] plan.json generated with detailed tasks
- [ ] plan-note.md updated with task pool and evidence
- [ ] All tasks within assigned ID range
`
})
})
// Batch wait - TRUE PARALLELISM (key Codex advantage)
const results = wait({
ids: agentIds,
timeout_ms: 900000 // 15 minutes for all planning agents
})
// Handle timeout
if (results.timed_out) {
const completed = agentIds.filter(id => results.status[id].completed)
const pending = agentIds.filter(id => !results.status[id].completed)
// Option: Continue waiting or use partial results
// If most agents completed, proceed with partial results
}
// Verify outputs exist
subDomains.forEach((sub, index) => {
const agentId = agentIds[index]
if (results.status[agentId].completed) {
// Verify: agents/${sub.focus_area}/plan.json exists
// Verify: plan-note.md sections populated
}
})
// Batch cleanup
agentIds.forEach(id => close_agent({ id }))
```
### Step 2.3: Verify plan-note.md Consistency
After all agents complete, verify the shared document; a minimal ID-uniqueness check is sketched after the activity list below.
**Verification Activities**:
1. Read final plan-note.md
2. Verify all task pool sections are populated
3. Verify all evidence sections are populated
4. Check for any accidental cross-section modifications
5. Validate TASK ID uniqueness across all domains
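A minimal sketch of the ID-uniqueness check (activity 5), assuming task headings follow the `### TASK-{ID}: {Title} [{domain}]` format defined in Phase 2:
```javascript
// Sketch: verify TASK IDs are unique across all domain sections of plan-note.md
const planNote = Read(`${sessionFolder}/plan-note.md`)
const taskIds = [...planNote.matchAll(/^### (TASK-\d+):/gm)].map(m => m[1])
const duplicates = taskIds.filter((id, i) => taskIds.indexOf(id) !== i)
if (duplicates.length > 0) {
  console.warn(`Duplicate TASK IDs across domains: ${[...new Set(duplicates)].join(', ')}`)
}
```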
**Success Criteria**:
- All subagents spawned and completed (or timeout handled)
- `agents/{domain}/plan.json` created for each domain
- `plan-note.md` updated with all task pools and evidence sections
- Task summaries follow consistent format
- No TASK ID overlaps across domains
- All agents closed properly
---
## Phase 3: Conflict Detection
**Objective**: Analyze plan-note.md for conflicts across all domain contributions.
### Step 3.1: Parse plan-note.md
Extract all tasks from all "任务池" (task pool) sections; a parsing sketch follows the extraction activities below.
**Extraction Activities**:
1. Read plan-note.md content
2. Parse YAML frontmatter for session metadata
3. Identify all "任务池" sections by heading pattern
4. Extract tasks matching pattern: `### TASK-{ID}: {Title} [{domain}]`
5. Parse task details: status, complexity, dependencies, modification points, conflict risk
6. Consolidate into unified task list
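A minimal parsing sketch, assuming the heading format `### TASK-{ID}: {Title} [{domain}]` used by the planning agents:
```javascript
// Sketch: collect every task entry from the 任务池 (task pool) sections
const noteContent = Read(`${sessionFolder}/plan-note.md`)
const tasks = [...noteContent.matchAll(/^### (TASK-\d+): (.+?) \[([\w-]+)\]$/gm)]
  .map(m => ({ id: m[1], title: m[2], domain: m[3] }))
// Detail lines (状态, 复杂度, 依赖, 修改点, 冲突风险) can be read from the lines
// that follow each heading, up to the next "### TASK-" heading
```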
### Step 3.2: Detect Conflicts
Scan all tasks for three categories of conflicts; a file-conflict grouping sketch follows the detection activities below.
**Conflict Types**:
| Type | Severity | Detection Logic | Resolution |
|------|----------|-----------------|------------|
| file_conflict | high | Same file:location modified by multiple domains | Coordinate modification order or merge changes |
| dependency_cycle | critical | Circular dependencies in task graph (DFS detection) | Remove or reorganize dependencies |
| strategy_conflict | medium | Multiple high-risk tasks in same file from different domains | Review approaches and align on single strategy |
**Detection Activities**:
1. **File Conflicts**: Group modification points by file:location, identify locations modified by multiple domains
2. **Dependency Cycles**: Build dependency graph from task dependencies, detect cycles using depth-first search
3. **Strategy Conflicts**: Group tasks by files they modify, identify files with high-risk tasks from multiple domains
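A minimal sketch of file-conflict detection (activity 1), assuming each extracted task carries its 修改点 (modification points) as `file:location` strings:
```javascript
// Sketch: group modification points by location and flag multi-domain overlaps
const byLocation = {}
for (const task of tasks) {
  for (const point of task.modification_points || []) {  // e.g. "src/api/auth.ts:42"
    byLocation[point] = byLocation[point] || []
    byLocation[point].push(task)
  }
}
const fileConflicts = Object.entries(byLocation)
  .filter(([, touching]) => new Set(touching.map(t => t.domain)).size > 1)
  .map(([location, touching]) => ({
    type: 'file_conflict',
    severity: 'high',
    location,
    tasks: touching.map(t => t.id),
    description: `Multiple domains modify ${location}`
  }))
```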
### Step 3.3: Generate Conflict Artifacts
Write conflict results and update plan-note.md.
**conflicts.json Structure**:
- `detected_at`: Detection timestamp
- `total_conflicts`: Number of conflicts found
- `conflicts[]`: Array of conflict objects with type, severity, tasks involved, description, suggested resolution
**plan-note.md Update**: Locate the "冲突标记" (conflict markers) section and populate it with the conflict summary in markdown. If no conflicts are found, mark the section as "✅ 无冲突检测到" (no conflicts detected).
**Success Criteria**:
- All tasks extracted and analyzed
- `conflicts.json` written with detection results
- `plan-note.md` updated with conflict markers
- All conflict types checked (file, dependency, strategy)
---
## Phase 4: Completion
**Objective**: Generate human-readable plan summary and finalize workflow.
### Step 4.1: Generate plan.md
Create a human-readable summary from plan-note.md content.
**plan.md Structure**:
| Section | Content |
|---------|---------|
| Header | Session ID, task description, creation time |
| 需求 (Requirements) | Copied from plan-note.md "需求理解" section |
| 子领域拆分 (Sub-Domains) | Each domain with description, task range, estimated effort |
| 任务概览 (Task Overview) | All tasks with complexity, dependencies, and target files |
| 冲突报告 (Conflict Report) | Summary of detected conflicts or "无冲突" |
| 执行指令 (Execution) | Command to execute the plan |
### Step 4.2: Display Completion Summary
Present session statistics and next steps.
**Summary Content**:
- Session ID and directory path
- Total domains planned
- Total tasks generated
- Conflict status
- Execution command for next step
**Success Criteria**:
- `plan.md` generated with complete summary
- All artifacts present in session directory
- User informed of completion and next steps
---
## Configuration
| Parameter | Default | Description |
|-----------|---------|-------------|
| `--max-domains` | 5 | Maximum sub-domains to identify |
---
## Error Handling & Recovery
| Situation | Action | Recovery |
|-----------|--------|----------|
| **Subagent timeout** | Check `results.timed_out`, continue `wait()` or use partial results | Reduce scope, plan remaining domains with new agent |
| **Agent closed prematurely** | Cannot recover closed agent | Spawn new agent with domain context |
| **Parallel agent partial failure** | Some domains complete, some fail | Use completed results, re-spawn for failed domains |
| **plan-note.md write conflict** | Multiple agents write simultaneously | Pre-allocated sections prevent this; if detected, re-read and verify |
| **Section not found in plan-note** | Agent creates section defensively | Continue with new section |
| **No tasks generated** | Review domain description | Retry with refined description via new agent |
| **Conflict detection fails** | Continue with empty conflicts | Note in completion summary |
| **Session folder conflict** | Append timestamp suffix | Create unique folder |
### Codex-Specific Error Patterns
```javascript
// Safe parallel planning with error handling
// Declare agentIds outside try so the finally block can always clean up
let agentIds = []
try {
  agentIds = subDomains.map(sub => spawn_agent({ message: buildPlanPrompt(sub) }))
  const results = wait({ ids: agentIds, timeout_ms: 900000 })
  if (results.timed_out) {
    const pending = agentIds.filter(id => !results.status[id].completed)
    // Re-spawn for timed-out domains
    const retryIds = pending.map(id => {
      const sub = subDomains[agentIds.indexOf(id)]
      return spawn_agent({ message: buildPlanPrompt(sub) })
    })
    const retryResults = wait({ ids: retryIds, timeout_ms: 600000 })
    retryIds.forEach(id => { try { close_agent({ id }) } catch (e) { /* ignore */ } })
  }
} finally {
  // ALWAYS cleanup original agents, even on errors
  agentIds.forEach(id => {
    try { close_agent({ id }) } catch (e) { /* ignore */ }
  })
}
```
---
## Iteration Patterns
### New Planning Session (Parallel Mode)
```
User initiates: TASK="task description"
├─ No session exists → New session mode
├─ Analyze task and identify sub-domains
├─ Create plan-note.md template
├─ Generate requirement-analysis.json
├─ Execute parallel planning:
│ ├─ spawn_agent × N (one per sub-domain)
│ ├─ wait({ ids: [...] }) ← TRUE PARALLELISM
│ └─ close_agent × N
├─ Verify plan-note.md consistency
├─ Detect conflicts
├─ Generate plan.md summary
└─ Report completion
```
### Continue Existing Session
```
User resumes: TASK="same task"
├─ Session exists → Continue mode
├─ Load plan-note.md and requirement-analysis.json
├─ Identify incomplete domains (empty task pool sections)
├─ Spawn agents for incomplete domains only
└─ Continue with conflict detection
```
### Agent Lifecycle Management
```
Subagent lifecycle:
├─ spawn_agent({ message }) → Create with role path + task
├─ wait({ ids, timeout_ms }) → Get results (ONLY way to get output)
└─ close_agent({ id }) → Cleanup (MUST do, cannot recover)
Key rules:
├─ Pre-allocated sections = no write conflicts
├─ ALWAYS use wait() to get results, NOT close_agent()
├─ Batch wait for all domain agents: wait({ ids: [a, b, c, ...] })
└─ Verify plan-note.md after batch completion
```
---
## Best Practices
### Before Starting Planning
1. **Clear Task Description**: Detailed requirements lead to better sub-domain splitting
2. **Reference Documentation**: Ensure latest README and design docs are identified
3. **Clarify Ambiguities**: Resolve unclear requirements before committing to sub-domains
### During Planning
1. **Review Plan Note**: Check plan-note.md between phases to verify progress
2. **Verify Domains**: Ensure sub-domains are truly independent and parallelizable
3. **Check Dependencies**: Cross-domain dependencies should be documented explicitly
4. **Inspect Details**: Review `agents/{domain}/plan.json` for specifics when needed
### Codex Subagent Best Practices
1. **Role Path, Not Content**: Pass `~/.codex/agents/*.md` path in message, let agent read itself
2. **Pre-allocated Sections**: Each agent only writes to its own sections - no write conflicts
3. **Batch wait**: Use `wait({ ids: [a, b, c] })` for all domain agents, not sequential waits
4. **Handle Timeouts**: Check `results.timed_out`, re-spawn for failed domains
5. **Explicit Cleanup**: Always `close_agent` when done, even on errors (use try/finally)
6. **Verify After Batch**: Read plan-note.md after all agents complete to verify consistency
7. **TASK ID Isolation**: Pre-assigned non-overlapping ranges prevent ID conflicts
### After Planning
1. **Resolve Conflicts**: Address high/critical conflicts before execution
2. **Review Summary**: Check plan.md for completeness and accuracy
3. **Validate Tasks**: Ensure all tasks have clear scope and modification targets
---
**Now execute collaborative-plan-with-file for**: $TASK
View File
@@ -0,0 +1,381 @@
---
name: compact
description: Compact current session memory into structured text for session recovery. Supports custom descriptions and tagging.
argument-hint: "[--description=\"...\"] [--tags=<tag1,tag2>] [--force]"
---
# Memory Compact Command (/memory:compact)
## 1. Overview
The `memory:compact` command **compresses current session working memory** into structured text optimized for **session recovery**, extracts critical information, and saves it to persistent storage via MCP `core_memory` tool.
**Core Philosophy**:
- **Session Recovery First**: Capture everything needed to resume work seamlessly
- **Minimize Re-exploration**: Include file paths, decisions, and state to avoid redundant analysis
- **Preserve Train of Thought**: Keep notes and hypotheses for complex debugging
- **Actionable State**: Record last action result and known issues
## 2. Parameters
- `--description`: Custom session description (optional)
- Example: "completed core-memory module"
- Example: "debugging JWT refresh - suspected memory leak"
- `--tags`: Comma-separated tags for categorization (optional)
- `--force`: Skip confirmation, save directly
## 3. Structured Output Format
```markdown
## Session ID
[WFS-ID if workflow session active, otherwise (none)]
## Project Root
[Absolute path to project root, e.g., D:\Claude_dms3]
## Objective
[High-level goal - the "North Star" of this session]
## Execution Plan
[CRITICAL: Embed the LATEST plan in its COMPLETE and DETAILED form]
### Source: [workflow | todo | user-stated | inferred]
<details>
<summary>Full Execution Plan (Click to expand)</summary>
[PRESERVE COMPLETE PLAN VERBATIM - DO NOT SUMMARIZE]
- ALL phases, tasks, subtasks
- ALL file paths (absolute)
- ALL dependencies and prerequisites
- ALL acceptance criteria
- ALL status markers ([x] done, [ ] pending)
- ALL notes and context
Example:
## Phase 1: Setup
- [x] Initialize project structure
- Created D:\Claude_dms3\src\core\index.ts
- Added dependencies: lodash, zod
- [ ] Configure TypeScript
- Update tsconfig.json for strict mode
## Phase 2: Implementation
- [ ] Implement core API
- Target: D:\Claude_dms3\src\api\handler.ts
- Dependencies: Phase 1 complete
- Acceptance: All tests pass
</details>
## Working Files (Modified)
[Absolute paths to actively modified files]
- D:\Claude_dms3\src\file1.ts (role: main implementation)
- D:\Claude_dms3\tests\file1.test.ts (role: unit tests)
## Reference Files (Read-Only)
[Absolute paths to context files - NOT modified but essential for understanding]
- D:\Claude_dms3\.claude\CLAUDE.md (role: project instructions)
- D:\Claude_dms3\src\types\index.ts (role: type definitions)
- D:\Claude_dms3\package.json (role: dependencies)
## Last Action
[Last significant action and its result/status]
## Decisions
- [Decision]: [Reasoning]
- [Decision]: [Reasoning]
## Constraints
- [User-specified limitation or preference]
## Dependencies
- [Added/changed packages or environment requirements]
## Known Issues
- [Deferred bug or edge case]
## Changes Made
- [Completed modification]
## Pending
- [Next step] or (none)
## Notes
[Unstructured thoughts, hypotheses, debugging trails]
```
## 4. Field Definitions
| Field | Purpose | Recovery Value |
|-------|---------|----------------|
| **Session ID** | Workflow session identifier (WFS-*) | Links memory to specific stateful task execution |
| **Project Root** | Absolute path to project directory | Enables correct path resolution in new sessions |
| **Objective** | Ultimate goal of the session | Prevents losing track of broader feature |
| **Execution Plan** | Complete plan from any source (verbatim) | Preserves full planning context, avoids re-planning |
| **Working Files** | Actively modified files (absolute paths) | Immediately identifies where work was happening |
| **Reference Files** | Read-only context files (absolute paths) | Eliminates re-exploration for critical context |
| **Last Action** | Final tool output/status | Immediate state awareness (success/failure) |
| **Decisions** | Architectural choices + reasoning | Prevents re-litigating settled decisions |
| **Constraints** | User-imposed limitations | Maintains personalized coding style |
| **Dependencies** | Package/environment changes | Prevents missing dependency errors |
| **Known Issues** | Deferred bugs/edge cases | Ensures issues aren't forgotten |
| **Changes Made** | Completed modifications | Clear record of what was done |
| **Pending** | Next steps | Immediate action items |
| **Notes** | Hypotheses, debugging trails | Preserves "train of thought" |
## 5. Execution Flow
### Step 1: Analyze Current Session
Extract the following from conversation history:
```javascript
const sessionAnalysis = {
sessionId: "", // WFS-* if workflow session active, null otherwise
projectRoot: "", // Absolute path: D:\Claude_dms3
objective: "", // High-level goal (1-2 sentences)
executionPlan: {
source: "workflow" | "todo" | "user-stated" | "inferred",
content: "" // Full plan content - ALWAYS preserve COMPLETE and DETAILED form
},
workingFiles: [], // {absolutePath, role} - modified files
referenceFiles: [], // {absolutePath, role} - read-only context files
lastAction: "", // Last significant action + result
decisions: [], // {decision, reasoning}
constraints: [], // User-specified limitations
dependencies: [], // Added/changed packages
knownIssues: [], // Deferred bugs
changesMade: [], // Completed modifications
pending: [], // Next steps
notes: "" // Unstructured thoughts
}
```
### Step 2: Generate Structured Text
```javascript
// Helper: Generate execution plan section
const generateExecutionPlan = (plan) => {
const sourceLabels = {
'workflow': 'workflow (IMPL_PLAN.md)',
'todo': 'todo (TodoWrite)',
'user-stated': 'user-stated',
'inferred': 'inferred'
};
// CRITICAL: Preserve complete plan content verbatim - DO NOT summarize
return `### Source: ${sourceLabels[plan.source] || plan.source}
<details>
<summary>Full Execution Plan (Click to expand)</summary>
${plan.content}
</details>`;
};
const structuredText = `## Session ID
${sessionAnalysis.sessionId || '(none)'}
## Project Root
${sessionAnalysis.projectRoot}
## Objective
${sessionAnalysis.objective}
## Execution Plan
${generateExecutionPlan(sessionAnalysis.executionPlan)}
## Working Files (Modified)
${sessionAnalysis.workingFiles.map(f => `- ${f.absolutePath} (role: ${f.role})`).join('\n') || '(none)'}
## Reference Files (Read-Only)
${sessionAnalysis.referenceFiles.map(f => `- ${f.absolutePath} (role: ${f.role})`).join('\n') || '(none)'}
## Last Action
${sessionAnalysis.lastAction}
## Decisions
${sessionAnalysis.decisions.map(d => `- ${d.decision}: ${d.reasoning}`).join('\n') || '(none)'}
## Constraints
${sessionAnalysis.constraints.map(c => `- ${c}`).join('\n') || '(none)'}
## Dependencies
${sessionAnalysis.dependencies.map(d => `- ${d}`).join('\n') || '(none)'}
## Known Issues
${sessionAnalysis.knownIssues.map(i => `- ${i}`).join('\n') || '(none)'}
## Changes Made
${sessionAnalysis.changesMade.map(c => `- ${c}`).join('\n') || '(none)'}
## Pending
${sessionAnalysis.pending.length > 0
? sessionAnalysis.pending.map(p => `- ${p}`).join('\n')
: '(none)'}
## Notes
${sessionAnalysis.notes || '(none)'}`
```
### Step 3: Import to Core Memory via MCP
Use the MCP `core_memory` tool to save the structured text:
```javascript
mcp__ccw-tools__core_memory({
operation: "import",
text: structuredText
})
```
Or via CLI (pipe structured text to import):
```bash
# Pipe the structured text to the import command
echo "$structuredText" | ccw core-memory import
# Or from a file
ccw core-memory import --file /path/to/session-memory.md
```
**Response Format**:
```json
{
"operation": "import",
"id": "CMEM-YYYYMMDD-HHMMSS",
"message": "Created memory: CMEM-YYYYMMDD-HHMMSS"
}
```
### Step 4: Report Recovery ID
After successful import, **clearly display the Recovery ID** to the user:
```
╔════════════════════════════════════════════════════════════════════════════╗
║ ✓ Session Memory Saved ║
║ ║
║ Recovery ID: CMEM-YYYYMMDD-HHMMSS ║
║ ║
║ To restore: "Please import memory <ID>" ║
║ (MCP: core_memory export | CLI: ccw core-memory export --id <ID>) ║
╚════════════════════════════════════════════════════════════════════════════╝
```
## 6. Quality Checklist
Before generating:
- [ ] Session ID captured if workflow session active (WFS-*)
- [ ] Project Root is absolute path (e.g., D:\Claude_dms3)
- [ ] Objective clearly states the "North Star" goal
- [ ] Execution Plan: COMPLETE plan preserved VERBATIM (no summarization)
- [ ] Plan Source: Clearly identified (workflow | todo | user-stated | inferred)
- [ ] Plan Details: ALL phases, tasks, file paths, dependencies, status markers included
- [ ] All file paths are ABSOLUTE (not relative)
- [ ] Working Files: 3-8 modified files with roles
- [ ] Reference Files: Key context files (CLAUDE.md, types, configs)
- [ ] Last Action captures final state (success/failure)
- [ ] Decisions include reasoning, not just choices
- [ ] Known Issues records deferred bugs and edge cases so they aren't forgotten
- [ ] Notes preserve debugging hypotheses if any
## 7. Path Resolution Rules
### Project Root Detection
1. Check current working directory from environment
2. Look for project markers: `.git/`, `package.json`, `.claude/`
3. Use the topmost directory containing these markers
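A minimal sketch of this detection (Node-style; the marker list mirrors rule 2 above):
```javascript
const fs = require('fs');
const path = require('path');

// Walk upward collecting every directory that contains a project marker,
// then return the topmost one (per rule 3 above)
const detectProjectRoot = (startDir) => {
  const markers = ['.git', 'package.json', '.claude'];
  const matches = [];
  let dir = path.resolve(startDir);
  while (true) {
    if (markers.some(m => fs.existsSync(path.join(dir, m)))) matches.push(dir);
    const parent = path.dirname(dir);
    if (parent === dir) break; // reached filesystem root
    dir = parent;
  }
  return matches.length > 0 ? matches[matches.length - 1] : path.resolve(startDir);
};
```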
### Absolute Path Conversion
```javascript
// Convert relative to absolute
const toAbsolutePath = (relativePath, projectRoot) => {
if (path.isAbsolute(relativePath)) return relativePath;
return path.join(projectRoot, relativePath);
};
// Example: "src/api/auth.ts" → "D:\Claude_dms3\src\api\auth.ts"
```
### Reference File Categories
| Category | Examples | Priority |
|----------|----------|----------|
| Project Config | `.claude/CLAUDE.md`, `package.json`, `tsconfig.json` | High |
| Type Definitions | `src/types/*.ts`, `*.d.ts` | High |
| Related Modules | Parent/sibling modules with shared interfaces | Medium |
| Test Files | Corresponding test files for modified code | Medium |
| Documentation | `README.md`, `ARCHITECTURE.md` | Low |
## 8. Plan Detection (Priority Order)
### Priority 1: Workflow Session (IMPL_PLAN.md)
```javascript
// Check for active workflow session
const manifest = await mcp__ccw-tools__session_manager({
operation: "list",
location: "active"
});
if (manifest.sessions?.length > 0) {
const session = manifest.sessions[0];
const plan = await mcp__ccw-tools__session_manager({
operation: "read",
session_id: session.id,
content_type: "plan"
});
sessionAnalysis.sessionId = session.id;
sessionAnalysis.executionPlan.source = "workflow";
sessionAnalysis.executionPlan.content = plan.content;
}
```
### Priority 2: TodoWrite (Current Session Todos)
```javascript
// Extract from conversation - look for TodoWrite tool calls
// Preserve COMPLETE todo list with all details
const todos = extractTodosFromConversation();
if (todos.length > 0) {
sessionAnalysis.executionPlan.source = "todo";
// Format todos with full context - preserve status markers
sessionAnalysis.executionPlan.content = todos.map(t =>
`- [${t.status === 'completed' ? 'x' : t.status === 'in_progress' ? '>' : ' '}] ${t.content}`
).join('\n');
}
```
### Priority 3: User-Stated Plan
```javascript
// Look for explicit plan statements in user messages:
// - "Here's my plan: 1. ... 2. ... 3. ..."
// - "I want to: first..., then..., finally..."
// - Numbered or bulleted lists describing steps
const userPlan = extractUserStatedPlan();
if (userPlan) {
sessionAnalysis.executionPlan.source = "user-stated";
sessionAnalysis.executionPlan.content = userPlan;
}
```
### Priority 4: Inferred Plan
```javascript
// If no explicit plan, infer from:
// - Task description and breakdown discussion
// - Sequence of actions taken
// - Outstanding work mentioned
const inferredPlan = inferPlanFromDiscussion();
if (inferredPlan) {
sessionAnalysis.executionPlan.source = "inferred";
sessionAnalysis.executionPlan.content = inferredPlan;
}
```
## 9. Notes
- **Timing**: Execute at task completion or before context switch
- **Frequency**: Once per independent task or milestone
- **Recovery**: New session can immediately continue with full context
- **Knowledge Graph**: Entity relationships auto-extracted for visualization
- **Absolute Paths**: Critical for cross-session recovery on different machines
View File
@@ -0,0 +1,605 @@
---
name: debug-with-file
description: Interactive hypothesis-driven debugging with documented exploration, understanding evolution, and analysis-assisted correction.
argument-hint: "BUG=\"<bug description or error message>\""
---
# Codex Debug-With-File Prompt
## Overview
Enhanced evidence-based debugging with **documented exploration process**. Records understanding evolution, consolidates insights, and uses analysis to correct misunderstandings.
**Core workflow**: Explore → Document → Log → Analyze → Correct Understanding → Fix → Verify
**Key enhancements over /prompts:debug**:
- **understanding.md**: Timeline of exploration and learning
- **Analysis-assisted correction**: Validates and corrects hypotheses
- **Consolidation**: Simplifies proven-wrong understanding to avoid clutter
- **Learning retention**: Preserves what was learned, even from failed attempts
## Target Bug
**$BUG**
## Execution Process
```
Session Detection:
├─ Check if debug session exists for this bug
├─ EXISTS + understanding.md exists → Continue mode
└─ NOT_FOUND → Explore mode
Explore Mode:
├─ Locate error source in codebase
├─ Document initial understanding in understanding.md
├─ Generate testable hypotheses with analysis validation
├─ Add NDJSON logging instrumentation
└─ Output: Hypothesis list + await user reproduction
Analyze Mode:
├─ Parse debug.log, validate each hypothesis
├─ Use analysis to evaluate hypotheses and correct understanding
├─ Update understanding.md with:
│ ├─ New evidence
│ ├─ Corrected misunderstandings (strikethrough + correction)
│ └─ Consolidated current understanding
└─ Decision:
├─ Confirmed → Fix root cause
├─ Inconclusive → Add more logging, iterate
└─ All rejected → Assisted new hypotheses
Fix & Cleanup:
├─ Apply fix based on confirmed hypothesis
├─ User verifies
├─ Document final understanding + lessons learned
├─ Remove debug instrumentation
└─ If not fixed → Return to Analyze mode
```
## Implementation Details
### Session Setup & Mode Detection
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const bugSlug = "$BUG".toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `DBG-${bugSlug}-${dateStr}`
const sessionFolder = `.workflow/.debug/${sessionId}`
const debugLogPath = `${sessionFolder}/debug.log`
const understandingPath = `${sessionFolder}/understanding.md`
const hypothesesPath = `${sessionFolder}/hypotheses.json`
// Auto-detect mode
const sessionExists = fs.existsSync(sessionFolder)
const hasUnderstanding = sessionExists && fs.existsSync(understandingPath)
const logHasContent = sessionExists && fs.existsSync(debugLogPath) && fs.statSync(debugLogPath).size > 0
const mode = logHasContent ? 'analyze' : (hasUnderstanding ? 'continue' : 'explore')
if (!sessionExists) {
bash(`mkdir -p ${sessionFolder}`)
}
```
### Explore Mode
#### Step 1.1: Locate Error Source
```javascript
// Extract keywords from bug description
const keywords = extractErrorKeywords("$BUG")
// Search codebase for error locations
const searchResults = []
for (const keyword of keywords) {
const results = Grep({ pattern: keyword, path: ".", output_mode: "content", "-C": 3 })
searchResults.push({ keyword, results })
}
// Identify affected files and functions
const affectedLocations = analyzeSearchResults(searchResults)
```
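One plausible shape for the `extractErrorKeywords` helper referenced above (hypothetical; the stop-word list and length cutoff are illustrative):
```javascript
// Hypothetical helper: pull distinctive tokens from the bug description for code search
function extractErrorKeywords(bugDescription) {
  const stopWords = new Set(['the', 'a', 'an', 'is', 'not', 'when', 'error', 'bug', 'in', 'on'])
  return [...new Set(
    bugDescription
      .split(/[^A-Za-z0-9_./-]+/)           // keep identifier-ish tokens, paths, dotted names
      .filter(w => w.length >= 4 && !stopWords.has(w.toLowerCase()))
  )].slice(0, 5)                            // cap to a few high-signal keywords
}
```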
#### Step 1.2: Document Initial Understanding
Create `understanding.md`:
```markdown
# Understanding Document
**Session ID**: ${sessionId}
**Bug Description**: $BUG
**Started**: ${getUtc8ISOString()}
---
## Exploration Timeline
### Iteration 1 - Initial Exploration (${timestamp})
#### Current Understanding
Based on bug description and initial code search:
- Error pattern: ${errorPattern}
- Affected areas: ${affectedLocations.map(l => l.file).join(', ')}
- Initial hypothesis: ${initialThoughts}
#### Evidence from Code Search
${searchResults.map(r => `
**Keyword: "${r.keyword}"**
- Found in: ${r.results.files.join(', ')}
- Key findings: ${r.insights}
`).join('\n')}
#### Next Steps
- Generate testable hypotheses
- Add instrumentation
- Await reproduction
---
## Current Consolidated Understanding
${initialConsolidatedUnderstanding}
```
#### Step 1.3: Generate Hypotheses
Analyze the bug and generate 3-5 testable hypotheses:
```javascript
// Hypothesis generation based on error pattern
const HYPOTHESIS_PATTERNS = {
"not found|missing|undefined|未找到": "data_mismatch",
"0|empty|zero|registered": "logic_error",
"timeout|connection|sync": "integration_issue",
"type|format|parse": "type_mismatch"
}
function generateHypotheses(bugDescription, affectedLocations) {
// Generate targeted hypotheses based on error analysis
// Each hypothesis includes:
// - id: H1, H2, ...
// - description: What might be wrong
// - testable_condition: What to log
// - logging_point: Where to add instrumentation
// - evidence_criteria: What confirms/rejects it
return hypotheses
}
```
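A minimal sketch of how the pattern table could seed hypotheses (illustrative; real generation should tailor descriptions and logging points to the affected code):
```javascript
// Sketch: seed hypotheses from matched error patterns and affected locations
function generateHypotheses(bugDescription, affectedLocations) {
  const matched = Object.entries(HYPOTHESIS_PATTERNS)
    .filter(([pattern]) => new RegExp(pattern, 'i').test(bugDescription))
    .map(([, category]) => category)
  const categories = matched.length > 0 ? matched : ['logic_error']
  return categories.slice(0, 5).map((category, i) => {
    const loc = affectedLocations[i % affectedLocations.length]
    return {
      id: `H${i + 1}`,
      description: `Possible ${category.replace(/_/g, ' ')} around ${loc ? loc.file : 'the reported error site'}`,
      testable_condition: `Capture the values involved in the suspected ${category}`,
      logging_point: loc ? loc.file : 'TBD',
      evidence_criteria: {
        confirm: 'captured data matches the suspected failure mode',
        reject: 'captured data looks valid'
      },
      likelihood: 1,
      status: 'pending'
    }
  })
}
```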
Save to `hypotheses.json`:
```json
{
"iteration": 1,
"timestamp": "2025-01-21T10:00:00+08:00",
"hypotheses": [
{
"id": "H1",
"description": "Data structure mismatch - expected key not present",
"testable_condition": "Check if target key exists in dict",
"logging_point": "file.py:func:42",
"evidence_criteria": {
"confirm": "data shows missing key",
"reject": "key exists with valid value"
},
"likelihood": 1,
"status": "pending"
}
]
}
```
#### Step 1.4: Add NDJSON Instrumentation
For each hypothesis, add logging at the specified location:
**Python template**:
```python
# region debug [H{n}]
try:
import json, time
_dbg = {
"sid": "{sessionId}",
"hid": "H{n}",
"loc": "{file}:{line}",
"msg": "{testable_condition}",
"data": {
# Capture relevant values here
},
"ts": int(time.time() * 1000)
}
with open(r"{debugLogPath}", "a", encoding="utf-8") as _f:
_f.write(json.dumps(_dbg, ensure_ascii=False) + "\n")
except: pass
# endregion
```
**JavaScript/TypeScript template**:
```javascript
// region debug [H{n}]
try {
require('fs').appendFileSync("{debugLogPath}", JSON.stringify({
sid: "{sessionId}",
hid: "H{n}",
loc: "{file}:{line}",
msg: "{testable_condition}",
data: { /* Capture relevant values */ },
ts: Date.now()
}) + "\n");
} catch(_) {}
// endregion
```
#### Step 1.5: Output to User
```
## Hypotheses Generated
Based on error "$BUG", generated {n} hypotheses:
{hypotheses.map(h => `
### ${h.id}: ${h.description}
- Logging at: ${h.logging_point}
- Testing: ${h.testable_condition}
- Evidence to confirm: ${h.evidence_criteria.confirm}
- Evidence to reject: ${h.evidence_criteria.reject}
`).join('')}
**Debug log**: ${debugLogPath}
**Next**: Run reproduction steps, then come back for analysis.
```
### Analyze Mode
#### Step 2.1: Parse Debug Log
```javascript
// Parse NDJSON log
const entries = Read(debugLogPath).split('\n')
.filter(l => l.trim())
.map(l => JSON.parse(l))
// Group by hypothesis
const byHypothesis = groupBy(entries, 'hid')
// Validate each hypothesis
for (const [hid, logs] of Object.entries(byHypothesis)) {
const hypothesis = hypotheses.find(h => h.id === hid)
const latestLog = logs[logs.length - 1]
// Check if evidence confirms or rejects hypothesis
const verdict = evaluateEvidence(hypothesis, latestLog.data)
// Returns: 'confirmed' | 'rejected' | 'inconclusive'
}
```
#### Step 2.2: Analyze Evidence and Correct Understanding
Review the debug log and evaluate each hypothesis:
1. Parse all log entries
2. Group by hypothesis ID
3. Compare evidence against expected criteria
4. Determine verdict: confirmed | rejected | inconclusive
5. Identify incorrect assumptions from previous understanding
6. Generate corrections
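A minimal sketch of the `evaluateEvidence` helper used in Step 2.1 (hypothetical; most hypotheses need criteria-specific checks rather than this generic fallback):
```javascript
// Hypothetical helper: map captured log data to a verdict for one hypothesis
function evaluateEvidence(hypothesis, data) {
  if (!data || Object.keys(data).length === 0) return 'inconclusive'
  // Example for the H1 shape above: { target, found } — a missing key confirms it
  if (hypothesis.id === 'H1' && 'found' in data) {
    return data.found ? 'rejected' : 'confirmed'
  }
  // Other hypotheses need criteria-specific checks; defer to manual review otherwise
  return 'inconclusive'
}
```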
#### Step 2.3: Update Understanding with Corrections
Append new iteration to `understanding.md`:
```markdown
### Iteration ${n} - Evidence Analysis (${timestamp})
#### Log Analysis Results
${results.map(r => `
**${r.id}**: ${r.verdict.toUpperCase()}
- Evidence: ${JSON.stringify(r.evidence)}
- Reasoning: ${r.reason}
`).join('\n')}
#### Corrected Understanding
Previous misunderstandings identified and corrected:
${corrections.map(c => `
- ~~${c.wrong}~~ → ${c.corrected}
- Why wrong: ${c.reason}
- Evidence: ${c.evidence}
`).join('\n')}
#### New Insights
${newInsights.join('\n- ')}
${confirmedHypothesis ? `
#### Root Cause Identified
**${confirmedHypothesis.id}**: ${confirmedHypothesis.description}
Evidence supporting this conclusion:
${confirmedHypothesis.supportingEvidence}
` : `
#### Next Steps
${nextSteps}
`}
---
## Current Consolidated Understanding (Updated)
${consolidatedUnderstanding}
```
#### Step 2.4: Update hypotheses.json
```json
{
"iteration": 2,
"timestamp": "2025-01-21T10:15:00+08:00",
"hypotheses": [
{
"id": "H1",
"status": "rejected",
"verdict_reason": "Evidence shows key exists with valid value",
"evidence": {...}
},
{
"id": "H2",
"status": "confirmed",
"verdict_reason": "Log data confirms timing issue",
"evidence": {...}
}
],
"corrections": [
{
"wrong_assumption": "...",
"corrected_to": "...",
"reason": "..."
}
]
}
```
### Fix & Verification
#### Step 3.1: Apply Fix
Based on confirmed hypothesis, implement the fix in the affected files.
#### Step 3.2: Document Resolution
Append to `understanding.md`:
```markdown
### Iteration ${n} - Resolution (${timestamp})
#### Fix Applied
- Modified files: ${modifiedFiles.join(', ')}
- Fix description: ${fixDescription}
- Root cause addressed: ${rootCause}
#### Verification Results
${verificationResults}
#### Lessons Learned
What we learned from this debugging session:
1. ${lesson1}
2. ${lesson2}
3. ${lesson3}
#### Key Insights for Future
- ${insight1}
- ${insight2}
```
#### Step 3.3: Cleanup
Remove debug instrumentation by searching for region markers:
```javascript
const instrumentedFiles = Grep({
pattern: "# region debug|// region debug",
output_mode: "files_with_matches"
})
for (const file of instrumentedFiles) {
// Remove content between region markers
removeDebugRegions(file)
}
```
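A minimal sketch of `removeDebugRegions` (hypothetical), dropping everything between the markers added in Step 1.4:
```javascript
// Hypothetical helper: drop lines between "region debug" and "endregion" markers
function removeDebugRegions(file) {
  const lines = Read(file).split('\n')
  const kept = []
  let inDebugRegion = false
  for (const line of lines) {
    if (/(#|\/\/)\s*region debug/.test(line)) { inDebugRegion = true; continue }
    if (inDebugRegion && /(#|\/\/)\s*endregion/.test(line)) { inDebugRegion = false; continue }
    if (!inDebugRegion) kept.push(line)
  }
  // Write kept.join('\n') back to the file with the project's file-write tool
}
```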
## Session Folder Structure
```
.workflow/.debug/DBG-{slug}-{date}/
├── debug.log # NDJSON log (execution evidence)
├── understanding.md # Exploration timeline + consolidated understanding
└── hypotheses.json # Hypothesis history with verdicts
```
## Understanding Document Template
```markdown
# Understanding Document
**Session ID**: DBG-xxx-2025-01-21
**Bug Description**: [original description]
**Started**: 2025-01-21T10:00:00+08:00
---
## Exploration Timeline
### Iteration 1 - Initial Exploration (2025-01-21 10:00)
#### Current Understanding
...
#### Evidence from Code Search
...
#### Hypotheses Generated
...
### Iteration 2 - Evidence Analysis (2025-01-21 10:15)
#### Log Analysis Results
...
#### Corrected Understanding
- ~~[wrong]~~ → [corrected]
#### Analysis Results
...
---
## Current Consolidated Understanding
### What We Know
- [valid understanding points]
### What Was Disproven
- ~~[disproven assumptions]~~
### Current Investigation Focus
[current focus]
### Remaining Questions
- [open questions]
```
## Debug Log Format (NDJSON)
Each line is a JSON object:
```json
{"sid":"DBG-xxx-2025-01-21","hid":"H1","loc":"file.py:func:42","msg":"Check dict keys","data":{"keys":["a","b"],"target":"c","found":false},"ts":1734567890123}
```
| Field | Description |
|-------|-------------|
| `sid` | Session ID |
| `hid` | Hypothesis ID (H1, H2, ...) |
| `loc` | Code location |
| `msg` | What's being tested |
| `data` | Captured values |
| `ts` | Timestamp (ms) |
## Iteration Flow
```
First Call (BUG="error"):
├─ No session exists → Explore mode
├─ Extract error keywords, search codebase
├─ Document initial understanding in understanding.md
├─ Generate hypotheses
├─ Add logging instrumentation
└─ Await user reproduction
After Reproduction (BUG="error"):
├─ Session exists + debug.log has content → Analyze mode
├─ Parse log, evaluate hypotheses
├─ Update understanding.md with:
│ ├─ Evidence analysis results
│ ├─ Corrected misunderstandings (strikethrough)
│ ├─ New insights
│ └─ Updated consolidated understanding
├─ Update hypotheses.json with verdicts
└─ Decision:
├─ Confirmed → Fix → Document resolution
├─ Inconclusive → Add logging, document next steps
└─ All rejected → Assisted new hypotheses
Output:
├─ .workflow/.debug/DBG-{slug}-{date}/debug.log
├─ .workflow/.debug/DBG-{slug}-{date}/understanding.md (evolving document)
└─ .workflow/.debug/DBG-{slug}-{date}/hypotheses.json (history)
```
## Error Handling
| Situation | Action |
|-----------|--------|
| Empty debug.log | Verify reproduction triggered the code path |
| All hypotheses rejected | Generate new hypotheses based on disproven assumptions |
| Fix doesn't work | Document failed fix attempt, iterate with refined understanding |
| >5 iterations | Review consolidated understanding, escalate with full context |
| Understanding too long | Consolidate aggressively, archive old iterations to separate file |
## Consolidation Rules
When updating "Current Consolidated Understanding":
1. **Simplify disproven items**: Move to "What Was Disproven" with single-line summary
2. **Keep valid insights**: Promote confirmed findings to "What We Know"
3. **Avoid duplication**: Don't repeat timeline details in consolidated section
4. **Focus on current state**: What do we know NOW, not the journey
5. **Preserve key corrections**: Keep important wrong→right transformations for learning
**Bad (cluttered)**:
```markdown
## Current Consolidated Understanding
In iteration 1 we thought X, but in iteration 2 we found Y, then in iteration 3...
Also we checked A and found B, and then we checked C...
```
**Good (consolidated)**:
```markdown
## Current Consolidated Understanding
### What We Know
- Error occurs during runtime update, not initialization
- Config value is None (not missing key)
### What Was Disproven
- ~~Initialization error~~ (Timing evidence)
- ~~Missing key hypothesis~~ (Key exists)
### Current Investigation Focus
Why is config value None during update?
```
## Key Features
| Feature | Description |
|---------|-------------|
| NDJSON logging | Structured debug log with hypothesis tracking |
| Hypothesis generation | Analysis-assisted hypothesis creation |
| Exploration documentation | understanding.md with timeline |
| Understanding evolution | Timeline + corrections tracking |
| Error correction | Strikethrough + reasoning for wrong assumptions |
| Consolidated learning | Current understanding section |
| Hypothesis history | hypotheses.json with verdicts |
| Analysis validation | At key decision points |
## When to Use
Best suited for:
- Complex bugs requiring multiple investigation rounds
- Learning from debugging process is valuable
- Team needs to understand debugging rationale
- Bug might recur, documentation helps prevention
---
**Now execute the debug-with-file workflow for bug**: $BUG
View File
@@ -0,0 +1,365 @@
---
name: issue-discover-by-prompt
description: Discover issues from user prompt with iterative multi-agent exploration and cross-module comparison
argument-hint: "<prompt> [--scope=src/**] [--depth=standard|deep] [--max-iterations=5]"
---
# Issue Discovery by Prompt (Codex Version)
## Goal
Prompt-driven issue discovery with intelligent planning. Instead of fixed perspectives, this command:
1. **Analyzes user intent** to understand what to find
2. **Plans exploration strategy** dynamically based on codebase structure
3. **Executes iterative exploration** with feedback loops
4. **Performs cross-module comparison** when detecting comparison intent
**Core Difference from `issue-discover.md`**:
- `issue-discover`: Pre-defined perspectives (bug, security, etc.), parallel execution
- `issue-discover-by-prompt`: User-driven prompt, planned strategy, iterative exploration
## Inputs
- **Prompt**: Natural language description of what to find
- **Scope**: `--scope=src/**` - File pattern to explore (default: `**/*`)
- **Depth**: `--depth=standard|deep` - standard (3 iterations) or deep (5+ iterations)
- **Max Iterations**: `--max-iterations=N` (default: 5)
## Output Requirements
**Generate Files:**
1. `.workflow/issues/discoveries/{discovery-id}/discovery-state.json` - Session state with iteration tracking
2. `.workflow/issues/discoveries/{discovery-id}/iterations/{N}/{dimension}.json` - Per-iteration findings
3. `.workflow/issues/discoveries/{discovery-id}/comparison-analysis.json` - Cross-dimension comparison (if applicable)
4. `.workflow/issues/discoveries/{discovery-id}/discovery-issues.jsonl` - Generated issue candidates
**Return Summary:**
```json
{
"discovery_id": "DBP-YYYYMMDD-HHmmss",
"prompt": "Check if frontend API calls match backend implementations",
"intent_type": "comparison",
"dimensions": ["frontend-calls", "backend-handlers"],
"total_iterations": 3,
"total_findings": 24,
"issues_generated": 12,
"comparison_match_rate": 0.75
}
```
## Workflow
### Step 1: Initialize Discovery Session
```bash
# Generate discovery ID
DISCOVERY_ID="DBP-$(date -u +%Y%m%d-%H%M%S)"
OUTPUT_DIR=".workflow/issues/discoveries/${DISCOVERY_ID}"
# Create directory structure
mkdir -p "${OUTPUT_DIR}/iterations"
```
Detect intent type from prompt:
- `comparison`: Contains "match", "compare", "versus", "vs", "between"
- `search`: Contains "find", "locate", "where"
- `verification`: Contains "verify", "check", "ensure"
- `audit`: Contains "audit", "review", "analyze"
### Step 2: Gather Context
Use `rg` and file exploration to understand codebase structure:
```bash
# Find relevant modules based on prompt keywords
rg -l "<keyword1>" --type ts | head -10
rg -l "<keyword2>" --type ts | head -10
# Understand project structure
ls -la src/
cat .workflow/project-tech.json 2>/dev/null || echo "No project-tech.json"
```
Build context package:
```json
{
"prompt_keywords": ["frontend", "API", "backend"],
"codebase_structure": { "modules": [...], "patterns": [...] },
"relevant_modules": ["src/api/", "src/services/"]
}
```
### Step 3: Plan Exploration Strategy
Analyze the prompt and context to design exploration strategy.
**Output exploration plan:**
```json
{
"intent_analysis": {
"type": "comparison",
"primary_question": "Do frontend API calls match backend implementations?",
"sub_questions": ["Are endpoints aligned?", "Are payloads compatible?"]
},
"dimensions": [
{
"name": "frontend-calls",
"description": "Client-side API calls and error handling",
"search_targets": ["src/api/**", "src/hooks/**"],
"focus_areas": ["fetch calls", "error boundaries", "response parsing"]
},
{
"name": "backend-handlers",
"description": "Server-side API implementations",
"search_targets": ["src/server/**", "src/routes/**"],
"focus_areas": ["endpoint handlers", "response schemas", "error responses"]
}
],
"comparison_matrix": {
"dimension_a": "frontend-calls",
"dimension_b": "backend-handlers",
"comparison_points": [
{"aspect": "endpoints", "frontend_check": "fetch URLs", "backend_check": "route paths"},
{"aspect": "methods", "frontend_check": "HTTP methods used", "backend_check": "methods accepted"},
{"aspect": "payloads", "frontend_check": "request body structure", "backend_check": "expected schema"},
{"aspect": "responses", "frontend_check": "response parsing", "backend_check": "response format"}
]
},
"estimated_iterations": 3,
"termination_conditions": ["All comparison points verified", "No new findings in last iteration"]
}
```
### Step 4: Iterative Exploration
Execute iterations until termination conditions are met:
```
WHILE iteration < max_iterations AND shouldContinue:
1. Plan iteration focus based on previous findings
2. Explore each dimension
3. Collect and analyze findings
4. Cross-reference between dimensions
5. Check convergence
```
**For each iteration:**
1. **Search for relevant code** using `rg`:
```bash
# Based on dimension focus areas
rg "fetch\s*\(" --type ts -C 3 | head -50
rg "app\.(get|post|put|delete)" --type ts -C 3 | head -50
```
2. **Analyze and record findings**:
```json
{
"dimension": "frontend-calls",
"iteration": 1,
"findings": [
{
"id": "F-001",
"title": "Undefined endpoint in UserService",
"category": "endpoint-mismatch",
"file": "src/api/userService.ts",
"line": 42,
"snippet": "fetch('/api/users/profile')",
"related_dimension": "backend-handlers",
"confidence": 0.85
}
],
"coverage": {
"files_explored": 15,
"areas_covered": ["fetch calls", "axios instances"],
"areas_remaining": ["graphql queries"]
},
"leads": [
{"description": "Check GraphQL mutations", "suggested_search": "mutation.*User"}
]
}
```
3. **Cross-reference findings** between dimensions:
```javascript
// For each finding in dimension A, look for related code in dimension B
if (finding.related_dimension) {
searchForRelatedCode(finding, otherDimension);
}
```
4. **Check convergence**:
```javascript
const convergence = {
newDiscoveries: newFindings.length,
confidence: calculateConfidence(cumulativeFindings),
converged: newFindings.length === 0 || confidence > 0.9
};
```
### Step 5: Cross-Analysis (for comparison intent)
If intent is comparison, analyze findings across dimensions:
```javascript
for (const point of comparisonMatrix.comparison_points) {
// Findings are assumed to be tagged with their source dimension when aggregated
const aFindings = findings.filter(f =>
f.dimension === dimension_a && f.category.includes(point.aspect)
);
const bFindings = findings.filter(f =>
f.dimension === dimension_b && f.category.includes(point.aspect)
);
// Find discrepancies
const discrepancies = compareFindings(aFindings, bFindings, point);
// Calculate match rate
const matchRate = calculateMatchRate(aFindings, bFindings);
}
```
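One possible `calculateMatchRate` (a sketch that assumes findings can be compared by snippet or title text; real matching would normalize endpoints or schemas):
```javascript
// Hypothetical helper: fraction of dimension-A findings with a plausible counterpart in B
function calculateMatchRate(aFindings, bFindings) {
  if (aFindings.length === 0) return 1.0
  const keyOf = f => (f.snippet || f.title || '').toLowerCase()
  const bKeys = bFindings.map(keyOf)
  const matched = aFindings.filter(a => {
    const aKey = keyOf(a)
    return bKeys.some(bKey => bKey.includes(aKey) || aKey.includes(bKey))
  }).length
  return matched / aFindings.length
}
```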
Write to `comparison-analysis.json`:
```json
{
"matrix": { "dimension_a": "...", "dimension_b": "...", "comparison_points": [...] },
"results": [
{
"aspect": "endpoints",
"dimension_a_count": 15,
"dimension_b_count": 12,
"discrepancies": [
{"frontend": "/api/users/profile", "backend": "NOT_FOUND", "type": "missing_endpoint"}
],
"match_rate": 0.80
}
],
"summary": {
"total_discrepancies": 5,
"overall_match_rate": 0.75,
"critical_mismatches": ["endpoints", "payloads"]
}
}
```
### Step 6: Generate Issues
Convert high-confidence findings to issues:
```bash
# For each finding with confidence >= 0.7 or priority critical/high
echo '{"id":"ISS-DBP-001","title":"Missing backend endpoint for /api/users/profile",...}' >> ${OUTPUT_DIR}/discovery-issues.jsonl
```
### Step 7: Update Final State
```json
{
"discovery_id": "DBP-...",
"type": "prompt-driven",
"prompt": "...",
"intent_type": "comparison",
"phase": "complete",
"created_at": "...",
"updated_at": "...",
"iterations": [
{"number": 1, "findings_count": 10, "new_discoveries": 10, "confidence": 0.6},
{"number": 2, "findings_count": 18, "new_discoveries": 8, "confidence": 0.75},
{"number": 3, "findings_count": 24, "new_discoveries": 6, "confidence": 0.85}
],
"results": {
"total_iterations": 3,
"total_findings": 24,
"issues_generated": 12,
"comparison_match_rate": 0.75
}
}
```
### Step 8: Output Summary
```markdown
## Discovery Complete: DBP-...
**Prompt**: Check if frontend API calls match backend implementations
**Intent**: comparison
**Dimensions**: frontend-calls, backend-handlers
### Iteration Summary
| # | Findings | New | Confidence |
|---|----------|-----|------------|
| 1 | 10 | 10 | 60% |
| 2 | 18 | 8 | 75% |
| 3 | 24 | 6 | 85% |
### Comparison Results
- **Overall Match Rate**: 75%
- **Total Discrepancies**: 5
- **Critical Mismatches**: endpoints, payloads
### Issues Generated: 12
- 2 Critical
- 4 High
- 6 Medium
### Next Steps
- `/issue:plan DBP-001,DBP-002,...` to plan solutions
- `ccw view` to review findings in dashboard
```
## Quality Checklist
Before completing, verify:
- [ ] Intent type correctly detected from prompt
- [ ] Dimensions dynamically generated based on prompt
- [ ] Iterations executed until convergence or max limit
- [ ] Cross-reference analysis performed (for comparison intent)
- [ ] High-confidence findings converted to issues
- [ ] Discovery state shows `phase: complete`
## Error Handling
| Situation | Action |
|-----------|--------|
| No relevant code found | Report empty result, suggest broader scope |
| Max iterations without convergence | Complete with current findings, note in summary |
| Comparison dimension mismatch | Report which dimension has fewer findings |
| No comparison points matched | Report as "No direct matches found" |
## Use Cases
| Scenario | Example Prompt |
|----------|----------------|
| API Contract | "Check if frontend calls match backend endpoints" |
| Error Handling | "Find inconsistent error handling patterns" |
| Migration Gap | "Compare old auth with new auth implementation" |
| Feature Parity | "Verify mobile has all web features" |
| Schema Drift | "Check if TypeScript types match API responses" |
| Integration | "Find mismatches between service A and service B" |
## Start Discovery
Parse prompt and detect intent:
```bash
PROMPT="${1}"
SCOPE="${2:-**/*}"
DEPTH="${3:-standard}"
# Detect intent keywords
if echo "${PROMPT}" | grep -qiE '(match|compare|versus|vs|between)'; then
INTENT="comparison"
elif echo "${PROMPT}" | grep -qiE '(find|locate|where)'; then
INTENT="search"
elif echo "${PROMPT}" | grep -qiE '(verify|check|ensure)'; then
INTENT="verification"
else
INTENT="audit"
fi
echo "Intent detected: ${INTENT}"
echo "Starting discovery with scope: ${SCOPE}"
```
Then follow the workflow to explore and discover issues.
View File
@@ -0,0 +1,262 @@
---
name: issue-discover
description: Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices)
argument-hint: "<path-pattern> [--perspectives=bug,ux,...] [--external]"
---
# Issue Discovery (Codex Version)
## Goal
Multi-perspective issue discovery that explores code from different angles to identify potential bugs, UX improvements, test gaps, and other actionable items. Unlike code review (which assesses existing code quality), discovery focuses on **finding opportunities for improvement and potential problems**.
**Discovery Scope**: Specified modules/files only
**Output Directory**: `.workflow/issues/discoveries/{discovery-id}/`
**Available Perspectives**: bug, ux, test, quality, security, performance, maintainability, best-practices
## Inputs
- **Target Pattern**: File glob pattern (e.g., `src/auth/**`)
- **Perspectives**: Comma-separated list via `--perspectives` (or interactive selection)
- **External Research**: `--external` flag enables Exa research for security and best-practices
## Output Requirements
**Generate Files:**
1. `.workflow/issues/discoveries/{discovery-id}/discovery-state.json` - Session state
2. `.workflow/issues/discoveries/{discovery-id}/perspectives/{perspective}.json` - Per-perspective findings
3. `.workflow/issues/discoveries/{discovery-id}/discovery-issues.jsonl` - Generated issue candidates
4. `.workflow/issues/discoveries/{discovery-id}/summary.md` - Summary report
**Return Summary:**
```json
{
"discovery_id": "DSC-YYYYMMDD-HHmmss",
"target_pattern": "src/auth/**",
"perspectives_analyzed": ["bug", "security", "test"],
"total_findings": 15,
"issues_generated": 8,
"priority_distribution": { "critical": 1, "high": 3, "medium": 4 }
}
```
## Workflow
### Step 1: Initialize Discovery Session
```bash
# Generate discovery ID
DISCOVERY_ID="DSC-$(date -u +%Y%m%d-%H%M%S)"
OUTPUT_DIR=".workflow/issues/discoveries/${DISCOVERY_ID}"
# Create directory structure
mkdir -p "${OUTPUT_DIR}/perspectives"
```
Resolve target files:
```bash
# List files matching pattern
find <target-pattern> -type f \( -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" \)
```
If no files found, abort with error message.
### Step 2: Select Perspectives
**If `--perspectives` provided:**
- Parse comma-separated list
- Validate against available perspectives
**If not provided (interactive):**
- Present perspective groups:
- Quick scan: bug, test, quality
- Security audit: security, bug, quality
- Full analysis: all perspectives
- Use first group as default or wait for user input
### Step 3: Analyze Each Perspective
For each selected perspective, explore target files and identify issues.
**Perspective-Specific Focus:**
| Perspective | Focus Areas | Priority Guide |
|-------------|-------------|----------------|
| **bug** | Null checks, edge cases, resource leaks, race conditions, boundary conditions, exception handling | Critical=data corruption/crash, High=malfunction, Medium=edge case |
| **ux** | Error messages, loading states, feedback, accessibility, interaction patterns | Critical=inaccessible, High=confusing, Medium=inconsistent |
| **test** | Missing unit tests, edge case coverage, integration gaps, assertion quality | Critical=no security tests, High=no core logic tests |
| **quality** | Complexity, duplication, naming, documentation, code smells | Critical=unmaintainable, High=significant issues |
| **security** | Input validation, auth/authz, injection, XSS/CSRF, data exposure | Critical=auth bypass/injection, High=missing authz |
| **performance** | N+1 queries, memory leaks, caching, algorithm efficiency | Critical=memory leaks, High=N+1 queries |
| **maintainability** | Coupling, interface design, tech debt, extensibility | Critical=forced changes, High=unclear boundaries |
| **best-practices** | Framework conventions, language patterns, anti-patterns | Critical=bug-causing anti-patterns, High=convention violations |
**For each perspective:**
1. Read target files and analyze for perspective-specific concerns
2. Use `rg` to search for patterns indicating issues
3. Record findings with:
- `id`: Finding ID (e.g., `F-001`)
- `title`: Brief description
- `priority`: critical/high/medium/low
- `category`: Specific category within perspective
- `description`: Detailed explanation
- `file`: File path
- `line`: Line number
- `snippet`: Code snippet
- `suggested_issue`: Proposed issue text
- `confidence`: 0.0-1.0
4. Write to `{OUTPUT_DIR}/perspectives/{perspective}.json`:
```json
{
"perspective": "security",
"analyzed_at": "2025-01-22T...",
"files_analyzed": 15,
"findings": [
{
"id": "F-001",
"title": "Missing input validation",
"priority": "high",
"category": "input-validation",
"description": "User input is passed directly to database query",
"file": "src/auth/login.ts",
"line": 42,
"snippet": "db.query(`SELECT * FROM users WHERE name = '${input}'`)",
"suggested_issue": "Add input sanitization to prevent SQL injection",
"confidence": 0.95
}
]
}
```
### Step 4: External Research (if --external)
For security and best-practices perspectives, use Exa to search for:
- Industry best practices for the tech stack
- Known vulnerability patterns
- Framework-specific security guidelines
Write results to `{OUTPUT_DIR}/external-research.json`.
### Step 5: Aggregate and Prioritize
1. Load all perspective JSON files
2. Deduplicate findings by file+line
3. Calculate priority scores:
- critical: 1.0
- high: 0.8
- medium: 0.5
- low: 0.2
- Adjust by confidence
4. Sort by priority score descending
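A minimal sketch of steps 1-4 (assumes `perspectives` holds the selected perspective names, `OUTPUT_DIR` the session directory from Step 1, and `Read` the file-read tool used elsewhere in these prompts):
```javascript
// Sketch: merge per-perspective findings, dedupe by file+line, rank by weighted priority
const PRIORITY_WEIGHTS = { critical: 1.0, high: 0.8, medium: 0.5, low: 0.2 }

const allFindings = perspectives.flatMap(p =>
  JSON.parse(Read(`${OUTPUT_DIR}/perspectives/${p}.json`)).findings
)
// Keep the highest-priority finding for each file:line pair
const deduped = Object.values(
  allFindings.reduce((acc, f) => {
    const key = `${f.file}:${f.line}`
    if (!acc[key] || PRIORITY_WEIGHTS[f.priority] > PRIORITY_WEIGHTS[acc[key].priority]) acc[key] = f
    return acc
  }, {})
)
const ranked = deduped
  .map(f => ({ ...f, priority_score: PRIORITY_WEIGHTS[f.priority] * f.confidence }))
  .sort((a, b) => b.priority_score - a.priority_score)
```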
### Step 6: Generate Issues
Convert high-priority findings to issue format:
```bash
# Append to discovery-issues.jsonl
echo '{"id":"ISS-DSC-001","title":"...","priority":"high",...}' >> ${OUTPUT_DIR}/discovery-issues.jsonl
```
Issue criteria:
- `priority` is critical or high
- OR `priority_score >= 0.7`
- OR `confidence >= 0.9` with medium priority
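The criteria, expressed as a filter over findings ranked by `priority_score` (as computed in Step 5):
```javascript
// Sketch: select findings that qualify as issue candidates
const issueCandidates = ranked.filter(f =>
  f.priority === 'critical' ||
  f.priority === 'high' ||
  f.priority_score >= 0.7 ||
  (f.priority === 'medium' && f.confidence >= 0.9)
)
```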
### Step 7: Update Discovery State
Write final state to `{OUTPUT_DIR}/discovery-state.json`:
```json
{
"discovery_id": "DSC-...",
"target_pattern": "src/auth/**",
"phase": "complete",
"created_at": "...",
"updated_at": "...",
"perspectives": ["bug", "security", "test"],
"results": {
"total_findings": 15,
"issues_generated": 8,
"priority_distribution": {
"critical": 1,
"high": 3,
"medium": 4
}
}
}
```
### Step 8: Generate Summary
Write summary to `{OUTPUT_DIR}/summary.md`:
```markdown
# Discovery Summary: DSC-...
**Target**: src/auth/**
**Perspectives**: bug, security, test
**Total Findings**: 15
**Issues Generated**: 8
## Priority Breakdown
- Critical: 1
- High: 3
- Medium: 4
## Top Findings
1. **[Critical] SQL Injection in login.ts:42**
Category: security/input-validation
...
2. **[High] Missing null check in auth.ts:128**
Category: bug/null-check
...
## Next Steps
- Run `/issue:plan` to plan solutions for generated issues
- Use `ccw view` to review findings in dashboard
```
## Quality Checklist
Before completing, verify:
- [ ] All target files analyzed for selected perspectives
- [ ] Findings include file:line references
- [ ] Priority assigned to all findings
- [ ] Issues generated from high-priority findings
- [ ] Discovery state shows `phase: complete`
- [ ] Summary includes actionable next steps
## Error Handling
| Situation | Action |
|-----------|--------|
| No files match pattern | Abort with clear error message |
| Perspective analysis fails | Log error, continue with other perspectives |
| No findings | Report "No issues found" (not an error) |
| External research fails | Continue without external context |
## Schema References
| Schema | Path | Purpose |
|--------|------|---------|
| Discovery State | `~/.claude/workflows/cli-templates/schemas/discovery-state-schema.json` | Session state |
| Discovery Finding | `~/.claude/workflows/cli-templates/schemas/discovery-finding-schema.json` | Finding format |
## Start Discovery
Begin by resolving target files:
```bash
# Parse target pattern from arguments
TARGET_PATTERN="${1:-src/**}"
# Count matching files
find ${TARGET_PATTERN} -type f \( -name "*.ts" -o -name "*.tsx" -o -name "*.js" \) | wc -l
```
Then proceed with perspective selection and analysis.
View File
@@ -0,0 +1,820 @@
---
name: issue-execute
description: Execute all solutions from issue queue with git commit after each solution. Supports batch processing and execution control.
argument-hint: "--queue=<id> [--worktree=<path|new>] [--skip-tests] [--skip-build] [--dry-run]"
---
# Issue Execute (Codex Version)
## Core Principle
**Serial Execution**: Execute solutions ONE BY ONE from the issue queue via `ccw issue next`. For each solution, complete all tasks sequentially (implement → test → verify), then commit once per solution with formatted summary. Continue autonomously until queue is empty.
## Project Context (MANDATORY FIRST STEPS)
Before starting execution, load project context:
1. **Read project tech stack**: `.workflow/project-tech.json`
2. **Read project guidelines**: `.workflow/project-guidelines.json`
3. **Read solution schema**: `~/.claude/workflows/cli-templates/schemas/solution-schema.json`
This ensures execution follows project conventions and patterns.
## Parameters
- `--queue=<id>`: Queue ID to execute (REQUIRED)
- `--worktree=<path|new>`: Worktree path or 'new' for creating new worktree
- `--skip-tests`: Skip test execution during solution implementation
- `--skip-build`: Skip build step
- `--dry-run`: Preview execution without making changes
## Queue ID Requirement (MANDATORY)
**`--queue <queue-id>` parameter is REQUIRED**
### When Queue ID Not Provided
```
List queues → Output options → Stop and wait for user
```
**Actions**:
1. `ccw issue queue list --brief --json` - Fetch queue list
2. Filter active/pending status, output formatted list
3. **Stop execution**, prompt user to rerun with `codex -p "@.codex/prompts/issue-execute.md --queue QUE-xxx"`
**No auto-selection** - User MUST explicitly specify queue-id
## Worktree Mode (Recommended for Parallel Execution)
When `--worktree` is specified, create or use a git worktree to isolate work.
**Usage**:
- `--worktree` - Create a new worktree with timestamp-based name
- `--worktree <existing-path>` - Resume in an existing worktree (for recovery/continuation)
**Note**: `ccw issue` commands auto-detect worktree and redirect to main repo automatically.
```bash
# Step 0: Setup worktree before starting (run from MAIN REPO)
# Use absolute paths to avoid issues when running from subdirectories
REPO_ROOT=$(git rev-parse --show-toplevel)
WORKTREE_BASE="${REPO_ROOT}/.ccw/worktrees"
# Check if existing worktree path was provided
EXISTING_WORKTREE="${1:-}" # Pass as argument or empty
if [[ -n "${EXISTING_WORKTREE}" && -d "${EXISTING_WORKTREE}" ]]; then
# Resume mode: Use existing worktree
WORKTREE_PATH="${EXISTING_WORKTREE}"
WORKTREE_NAME=$(basename "${WORKTREE_PATH}")
# Verify it's a valid git worktree
if ! git -C "${WORKTREE_PATH}" rev-parse --is-inside-work-tree &>/dev/null; then
echo "Error: ${EXISTING_WORKTREE} is not a valid git worktree"
exit 1
fi
echo "Resuming in existing worktree: ${WORKTREE_PATH}"
else
# Create mode: New worktree with timestamp
WORKTREE_NAME="issue-exec-$(date +%Y%m%d-%H%M%S)"
WORKTREE_PATH="${WORKTREE_BASE}/${WORKTREE_NAME}"
# Ensure worktree base directory exists (gitignored)
mkdir -p "${WORKTREE_BASE}"
# Prune stale worktrees from previous interrupted executions
git worktree prune
# Create worktree from current branch
git worktree add "${WORKTREE_PATH}" -b "${WORKTREE_NAME}"
echo "Created new worktree: ${WORKTREE_PATH}"
fi
# Setup cleanup trap for graceful failure handling
cleanup_worktree() {
echo "Cleaning up worktree due to interruption..."
cd "${REPO_ROOT}" 2>/dev/null || true
git worktree remove "${WORKTREE_PATH}" --force 2>/dev/null || true
# Keep branch for debugging failed executions
echo "Worktree removed. Branch '${WORKTREE_NAME}' kept for inspection."
}
trap cleanup_worktree EXIT INT TERM
# Change to worktree directory
cd "${WORKTREE_PATH}"
# ccw issue commands auto-detect worktree and use main repo's .workflow/
# So you can run ccw issue next/done directly from worktree
```
**Worktree Execution Pattern**:
```
0. [MAIN REPO] Validate queue ID (--queue required, or prompt user to select)
1. [WORKTREE] ccw issue next --queue <queue-id> → auto-redirects to main repo's .workflow/
2. [WORKTREE] Implement all tasks, run tests, git commit
3. [WORKTREE] ccw issue done <item_id> → auto-redirects to main repo
4. Repeat from step 1
```
**Note**: Add `.ccw/worktrees/` to `.gitignore` to prevent tracking worktree contents.
**Benefits:**
- Parallel executors don't conflict with each other
- Main working directory stays clean
- Easy cleanup after execution
- **Resume support**: Pass existing worktree path to continue interrupted executions
**Resume Examples:**
```bash
# List existing worktrees to find interrupted execution
git worktree list
# Resume in existing worktree (pass path as argument)
# The worktree path will be used instead of creating a new one
codex -p "@.codex/prompts/issue-execute.md --worktree /path/to/existing/worktree"
```
**Completion - User Choice:**
When all solutions are complete, output options and wait for user to specify:
```
All solutions completed in worktree. Choose next action:
1. Merge to main - Merge worktree branch into main and cleanup
2. Create PR - Push branch and create pull request (Recommended for parallel execution)
3. Keep branch - Keep branch for manual handling, cleanup worktree only
Please respond with: 1, 2, or 3
```
**Based on user response:**
```bash
# Disable cleanup trap before intentional cleanup
trap - EXIT INT TERM
# Return to main repo first (use REPO_ROOT from setup)
cd "${REPO_ROOT}"
# Validate main repo state before merge (prevents conflicts)
validate_main_clean() {
if [[ -n $(git status --porcelain) ]]; then
echo "⚠️ Warning: Main repo has uncommitted changes."
echo "Cannot auto-merge. Falling back to 'Create PR' option."
return 1
fi
return 0
}
# Option 1: Merge to main (only if main is clean)
if validate_main_clean; then
git merge "${WORKTREE_NAME}" --no-ff -m "Merge issue queue execution: ${WORKTREE_NAME}"
git worktree remove "${WORKTREE_PATH}"
git branch -d "${WORKTREE_NAME}"
else
# Fallback to PR if main is dirty
git push -u origin "${WORKTREE_NAME}"
gh pr create --title "Issue Queue: ${WORKTREE_NAME}" --body "Automated issue queue execution (main had uncommitted changes)"
git worktree remove "${WORKTREE_PATH}"
fi
# Option 2: Create PR (Recommended for parallel execution)
git push -u origin "${WORKTREE_NAME}"
gh pr create --title "Issue Queue: ${WORKTREE_NAME}" --body "Automated issue queue execution"
git worktree remove "${WORKTREE_PATH}"
# Branch kept on remote
# Option 3: Keep branch
git worktree remove "${WORKTREE_PATH}"
# Branch kept locally for manual handling
echo "Branch '${WORKTREE_NAME}' kept. Merge manually when ready."
```
**Parallel Execution Safety**: For parallel executors, "Create PR" is the safest option as it avoids race conditions during merge. Multiple PRs can be reviewed and merged sequentially.
## Execution Flow
```
STEP 0: Validate queue ID (--queue required, or prompt user to select)
INIT: Fetch first solution via ccw issue next --queue <queue-id>
WHILE solution exists:
1. Receive solution JSON from ccw issue next --queue <queue-id>
2. Execute all tasks in solution.tasks sequentially:
FOR each task:
- IMPLEMENT: Follow task.implementation steps
- TEST: Run task.test commands
- VERIFY: Check task.acceptance criteria
3. COMMIT: Stage all files, commit once with formatted summary
4. Report completion via ccw issue done <item_id>
5. Fetch next solution via ccw issue next --queue <queue-id>
WHEN queue empty:
Output final summary
```
## Step 1: Fetch First Solution
**Prerequisite**: Queue ID must be determined (either from `--queue` argument or user selection in Step 0).
Run this command to get your first solution:
```javascript
// ccw auto-detects worktree and uses main repo's .workflow/
// QUEUE_ID is required - obtained from --queue argument or user selection
const result = shell_command({ command: `ccw issue next --queue ${QUEUE_ID}` })
```
This returns JSON with the full solution definition:
- `item_id`: Solution identifier in queue (e.g., "S-1")
- `issue_id`: Parent issue ID (e.g., "ISS-20251227-001")
- `solution_id`: Solution ID (e.g., "SOL-ISS-20251227-001-1")
- `solution`: Full solution with all tasks
- `execution_hints`: Timing and executor hints
If response contains `{ "status": "empty" }`, all solutions are complete - skip to final summary.
## Step 2: Parse Solution Response
Expected solution structure:
```json
{
"item_id": "S-1",
"issue_id": "ISS-20251227-001",
"solution_id": "SOL-ISS-20251227-001-1",
"status": "pending",
"solution": {
"id": "SOL-ISS-20251227-001-1",
"description": "Description of solution approach",
"tasks": [
{
"id": "T1",
"title": "Task title",
"scope": "src/module/",
"action": "Create|Modify|Fix|Refactor|Add",
"description": "What to do",
"modification_points": [
{ "file": "path/to/file.ts", "target": "function name", "change": "description" }
],
"implementation": [
"Step 1: Do this",
"Step 2: Do that"
],
"test": {
"commands": ["npm test -- --filter=xxx"],
"unit": ["Unit test requirement 1", "Unit test requirement 2"]
},
"regression": ["Verify existing tests still pass"],
"acceptance": {
"criteria": ["Criterion 1: Must pass", "Criterion 2: Must verify"],
"verification": ["Run test command", "Manual verification step"]
},
"commit": {
"type": "feat|fix|test|refactor",
"scope": "module",
"message_template": "feat(scope): description"
},
"depends_on": [],
"estimated_minutes": 30,
"priority": 1
}
],
"exploration_context": {
"relevant_files": ["path/to/reference.ts"],
"patterns": "Follow existing pattern in xxx",
"integration_points": "Used by other modules"
},
"analysis": {
"risk": "low|medium|high",
"impact": "low|medium|high",
"complexity": "low|medium|high"
},
"score": 0.95,
"is_bound": true
},
"execution_hints": {
"executor": "codex",
"estimated_minutes": 180
}
}
```
## Step 2.1: Determine Execution Strategy
After parsing the solution, analyze the issue type and task actions to determine the appropriate execution strategy. The strategy defines additional verification steps and quality gates beyond the basic implement-test-verify cycle.
### Strategy Auto-Matching
**Matching Priority**:
1. Explicit `solution.strategy_type` if provided
2. Infer from `task.action` keywords (Debug, Fix, Feature, Refactor, Test, etc.)
3. Infer from `solution.description` and `task.title` content
4. Default to "standard" if no clear match
**Strategy Types and Matching Keywords**:
| Strategy Type | Match Keywords | Description |
|---------------|----------------|-------------|
| `debug` | Debug, Diagnose, Trace, Investigate | Bug diagnosis with logging and debugging |
| `bugfix` | Fix, Patch, Resolve, Correct | Bug fixing with root cause analysis |
| `feature` | Feature, Add, Implement, Create, Build | New feature development with full testing |
| `refactor` | Refactor, Restructure, Optimize, Cleanup | Code restructuring with behavior preservation |
| `test` | Test, Coverage, E2E, Integration | Test implementation with coverage checks |
| `performance` | Performance, Optimize, Speed, Memory | Performance optimization with benchmarking |
| `security` | Security, Vulnerability, CVE, Audit | Security fixes with vulnerability checks |
| `hotfix` | Hotfix, Urgent, Critical, Emergency | Urgent fixes with minimal changes |
| `documentation` | Documentation, Docs, Comment, README | Documentation updates with example validation |
| `chore` | Chore, Dependency, Config, Maintenance | Maintenance tasks with compatibility checks |
| `standard` | (default) | Standard implementation without extra steps |
### Strategy-Specific Execution Phases
Each strategy extends the basic cycle with additional quality gates:
#### 1. Debug → Reproduce → Instrument → Diagnose → Implement → Test → Verify → Cleanup
```
REPRODUCE → INSTRUMENT → DIAGNOSE → IMPLEMENT → TEST → VERIFY → CLEANUP
```
#### 2. Bugfix → Root Cause → Implement → Test → Edge Cases → Regression → Verify
```
ROOT_CAUSE → IMPLEMENT → TEST → EDGE_CASES → REGRESSION → VERIFY
```
#### 3. Feature → Design Review → Unit Tests → Implement → Integration Tests → Code Review → Docs → Verify
```
DESIGN_REVIEW → UNIT_TESTS → IMPLEMENT → INTEGRATION_TESTS → TEST → CODE_REVIEW → DOCS → VERIFY
```
#### 4. Refactor → Baseline Tests → Implement → Test → Behavior Check → Performance Compare → Verify
```
BASELINE_TESTS → IMPLEMENT → TEST → BEHAVIOR_PRESERVATION → PERFORMANCE_CMP → VERIFY
```
#### 5. Test → Coverage Baseline → Test Design → Implement → Coverage Check → Verify
```
COVERAGE_BASELINE → TEST_DESIGN → IMPLEMENT → COVERAGE_CHECK → VERIFY
```
#### 6. Performance → Profiling → Bottleneck → Implement → Benchmark → Test → Verify
```
PROFILING → BOTTLENECK → IMPLEMENT → BENCHMARK → TEST → VERIFY
```
#### 7. Security → Vulnerability Scan → Implement → Security Test → Penetration Test → Verify
```
VULNERABILITY_SCAN → IMPLEMENT → SECURITY_TEST → PENETRATION_TEST → VERIFY
```
#### 8. Hotfix → Impact Assessment → Implement → Test → Quick Verify → Verify
```
IMPACT_ASSESSMENT → IMPLEMENT → TEST → QUICK_VERIFY → VERIFY
```
#### 9. Documentation → Implement → Example Validation → Format Check → Link Validation → Verify
```
IMPLEMENT → EXAMPLE_VALIDATION → FORMAT_CHECK → LINK_VALIDATION → VERIFY
```
#### 10. Chore → Implement → Compatibility Check → Test → Changelog → Verify
```
IMPLEMENT → COMPATIBILITY_CHECK → TEST → CHANGELOG → VERIFY
```
#### 11. Standard → Implement → Test → Verify
```
IMPLEMENT → TEST → VERIFY
```
### Strategy Selection Implementation
**Pseudo-code for strategy matching**:
```javascript
function determineStrategy(solution) {
// Priority 1: Explicit strategy type
if (solution.strategy_type) {
return solution.strategy_type
}
// Priority 2: Infer from task actions
const actions = solution.tasks.map(t => t.action.toLowerCase())
const titles = solution.tasks.map(t => t.title.toLowerCase())
const description = solution.description.toLowerCase()
const allText = [...actions, ...titles, description].join(' ')
// Match keywords (order matters - more specific first)
if (/hotfix|urgent|critical|emergency/.test(allText)) return 'hotfix'
if (/debug|diagnose|trace|investigate/.test(allText)) return 'debug'
if (/security|vulnerability|cve|audit/.test(allText)) return 'security'
if (/performance|optimize|speed|memory|benchmark/.test(allText)) return 'performance'
if (/refactor|restructure|cleanup/.test(allText)) return 'refactor'
if (/test|coverage|e2e|integration/.test(allText)) return 'test'
if (/documentation|docs|comment|readme/.test(allText)) return 'documentation'
if (/chore|dependency|config|maintenance/.test(allText)) return 'chore'
if (/fix|patch|resolve|correct/.test(allText)) return 'bugfix'
if (/feature|add|implement|create|build/.test(allText)) return 'feature'
// Default
return 'standard'
}
```
**Usage in execution flow**:
```javascript
// After parsing solution (Step 2)
const strategy = determineStrategy(solution)
console.log(`Strategy selected: ${strategy}`)
// During task execution (Step 3), follow strategy-specific phases
for (const task of solution.tasks) {
executeTaskWithStrategy(task, strategy)
}
```
## Step 2.5: Initialize Task Tracking
After parsing solution and determining strategy, use `update_plan` to track each task:
```javascript
// Initialize plan with all tasks from solution
update_plan({
explanation: `Starting solution ${item_id}`,
plan: solution.tasks.map(task => ({
step: `${task.id}: ${task.title}`,
status: "pending"
}))
})
```
**Note**: Codex uses `update_plan` tool for task tracking (not TodoWrite).
## Step 3: Execute Tasks Sequentially
Iterate through `solution.tasks` array and execute each task.
**Before starting each task**, mark it as in_progress:
```javascript
// Update current task status
update_plan({
explanation: `Working on ${task.id}: ${task.title}`,
plan: tasks.map(t => ({
step: `${t.id}: ${t.title}`,
status: t.id === task.id ? "in_progress" : (t.completed ? "completed" : "pending")
}))
})
```
**After completing each task** (verification passed), mark it as completed:
```javascript
// Mark task as completed (commit happens at solution level)
update_plan({
explanation: `Completed ${task.id}: ${task.title}`,
plan: tasks.map(t => ({
step: `${t.id}: ${t.title}`,
status: t.id === task.id ? "completed" : t.status
}))
})
```
### Phase A: IMPLEMENT
1. **Read context files in parallel** using `multi_tool_use.parallel`:
```javascript
// Read all relevant files in parallel for context
multi_tool_use.parallel({
tool_uses: solution.exploration_context.relevant_files.map(file => ({
recipient_name: "functions.read_file",
parameters: { path: file }
}))
})
```
2. Follow `task.implementation` steps in order
3. Apply changes to `task.modification_points` files
4. Follow `solution.exploration_context.patterns` for code style consistency
5. Run `task.regression` checks if specified to ensure no breakage
**Output format:**
```
## Implementing: [task.title] (Task [N]/[Total])
**Scope**: [task.scope]
**Action**: [task.action]
**Steps**:
1. ✓ [implementation step 1]
2. ✓ [implementation step 2]
...
**Files Modified**:
- path/to/file1.ts
- path/to/file2.ts
```
### Phase B: TEST
1. Run all commands in `task.test.commands`
2. Verify unit tests pass (`task.test.unit`)
3. Run integration tests if specified (`task.test.integration`)
**If tests fail**: Fix the code and re-run. Do NOT proceed until tests pass.
**Output format:**
```
## Testing: [task.title]
**Test Results**:
- [x] Unit tests: PASSED
- [x] Integration tests: PASSED (or N/A)
```
### Phase C: VERIFY
Check all `task.acceptance.criteria` are met using `task.acceptance.verification` steps:
```
## Verifying: [task.title]
**Acceptance Criteria**:
- [x] Criterion 1: Verified
- [x] Criterion 2: Verified
...
**Verification Steps**:
- [x] Run test command
- [x] Manual verification step
All criteria met: YES
```
**If any criterion fails**: Go back to IMPLEMENT phase and fix.
### Repeat for Next Task
Continue to next task in `solution.tasks` array until all tasks are complete.
**Note**: Do NOT commit after each task. Commits happen at solution level after all tasks pass.
## Step 3.5: Commit Solution
After ALL tasks in the solution pass implementation, testing, and verification, commit once for the entire solution:
```bash
# Stage all modified files from all tasks
git add path/to/file1.ts path/to/file2.ts ...
# Commit with clean, standard format (NO solution metadata)
git commit -m "[commit_type](scope): [brief description of changes]"
# Example commits:
# feat(auth): add token refresh mechanism
# fix(payment): resolve timeout handling in checkout flow
# refactor(api): simplify error handling logic
```
**Commit Type Selection**:
- `feat`: New feature or capability
- `fix`: Bug fix
- `refactor`: Code restructuring without behavior change
- `test`: Adding or updating tests
- `docs`: Documentation changes
- `chore`: Maintenance tasks
**Commit Language**:
- Use **Chinese** commit summary if project's `CLAUDE.md` specifies Chinese response guidelines or user explicitly requests Chinese
- Use **English** commit summary by default or when project targets international collaboration
- Check project's existing commit history for language convention consistency
**Output format:**
```
## Solution Committed: [solution_id]
**Commit**: [commit hash]
**Type**: [commit_type]([scope])
**Changes**:
- [Feature/Fix/Improvement]: [What functionality was added/fixed/improved]
- [Specific change 1]
- [Specific change 2]
**Files Modified**:
- path/to/file1.ts - [Brief description of changes]
- path/to/file2.ts - [Brief description of changes]
- path/to/file3.ts - [Brief description of changes]
**Solution**: [solution_id] ([N] tasks completed)
```
## Step 4: Report Completion
After ALL tasks in the solution are complete and committed, report to queue system with full solution metadata:
```javascript
// ccw auto-detects worktree and uses main repo's .workflow/
// Record ALL solution context here (NOT in git commit)
shell_command({
command: `ccw issue done ${item_id} --result '${JSON.stringify({
solution_id: solution.id,
issue_id: issue_id,
commit: {
hash: commit_hash,
type: commit_type,
scope: commit_scope,
message: commit_message
},
analysis: {
risk: solution.analysis.risk,
impact: solution.analysis.impact,
complexity: solution.analysis.complexity
},
tasks_completed: solution.tasks.map(t => ({
id: t.id,
title: t.title,
action: t.action,
scope: t.scope
})),
files_modified: ["path1", "path2", ...],
tests_passed: true,
verification: {
all_tests_passed: true,
acceptance_criteria_met: true,
regression_checked: true
},
summary: "[What was accomplished - brief description]"
})}'`
})
```
**Complete Example**:
```javascript
shell_command({
command: `ccw issue done S-1 --result '${JSON.stringify({
solution_id: "SOL-ISS-20251227-001-1",
issue_id: "ISS-20251227-001",
commit: {
hash: "a1b2c3d4",
type: "feat",
scope: "auth",
message: "feat(auth): add token refresh mechanism"
},
analysis: {
risk: "low",
impact: "medium",
complexity: "medium"
},
tasks_completed: [
{ id: "T1", title: "Implement refresh token endpoint", action: "Add", scope: "src/auth/" },
{ id: "T2", title: "Add token rotation logic", action: "Create", scope: "src/auth/services/" }
],
files_modified: [
"src/auth/routes/token.ts",
"src/auth/services/refresh.ts",
"src/auth/middleware/validate.ts"
],
tests_passed: true,
verification: {
all_tests_passed: true,
acceptance_criteria_met: true,
regression_checked: true
},
summary: "Implemented token refresh mechanism with automatic rotation"
})}'`
})
```
**If solution failed:**
```javascript
shell_command({
command: `ccw issue done ${item_id} --fail --reason '${JSON.stringify({
task_id: "TX",
error_type: "test_failure",
message: "Integration tests failed: timeout in token validation",
files_attempted: ["path1", "path2"],
commit: null
})}'`
})
```
## Step 5: Continue to Next Solution
Fetch next solution (using same QUEUE_ID from Step 0/1):
```javascript
// ccw auto-detects worktree
// Continue using the same QUEUE_ID throughout execution
const result = shell_command({ command: `ccw issue next --queue ${QUEUE_ID}` })
```
**Output progress:**
```
✓ [N/M] Completed: [item_id] - [solution.description]
Commit: [commit_hash] ([commit_type])
Tasks: [task_count] completed
→ Fetching next solution...
```
**DO NOT STOP.** Return to Step 2 and continue until queue is empty.
## Final Summary
When `ccw issue next` returns `{ "status": "empty" }`:
**If running in worktree mode**: Prompt user for merge/PR/keep choice (see "Completion - User Choice" above) before outputting summary.
```markdown
## Issue Queue Execution Complete
**Total Solutions Executed**: N
**Total Tasks Executed**: M
**Total Commits**: N (one per solution)
**Solution Commits**:
| # | Solution | Tasks | Commit | Type |
|---|----------|-------|--------|------|
| 1 | SOL-xxx-1 | T1, T2 | abc123 | feat |
| 2 | SOL-xxx-2 | T1 | def456 | fix |
| 3 | SOL-yyy-1 | T1, T2, T3 | ghi789 | refactor |
**Files Modified**:
- path/to/file1.ts
- path/to/file2.ts
**Summary**:
[Overall what was accomplished]
```
## Execution Rules
1. **Never stop mid-queue** - Continue until queue is empty
2. **One solution at a time** - Fully complete (all tasks + commit + report) before moving on
3. **Sequential within solution** - Complete each task's implement/test/verify before next task
4. **Tests MUST pass** - Do not proceed if any task's tests fail
5. **One commit per solution** - All tasks share a single commit with formatted summary
6. **Self-verify** - All acceptance criteria must pass before solution commit
7. **Report accurately** - Use `ccw issue done` after each solution
8. **Handle failures gracefully** - If a solution fails, report via `ccw issue done --fail` and continue to next
9. **Track with update_plan** - Use update_plan tool for task progress tracking
10. **Worktree auto-detect** - `ccw issue` commands auto-redirect to main repo from worktree
## Error Handling
| Situation | Action |
|-----------|--------|
| `ccw issue next` returns empty | All done - output final summary |
| Tests fail | Fix code, re-run tests |
| Verification fails | Go back to implement phase |
| Solution commit fails | Check staging, retry commit |
| `ccw issue done` fails | Log error, continue to next solution |
| Any task unrecoverable | Call `ccw issue done --fail`, continue to next solution |
## CLI Command Reference
| Command | Purpose |
|---------|---------|
| `ccw issue queue list --brief --json` | List all queues (for queue selection) |
| `ccw issue next --queue QUE-xxx` | Fetch next solution from specified queue (**--queue required**) |
| `ccw issue done <id>` | Mark solution complete with result (auto-detects queue) |
| `ccw issue done <id> --fail --reason "..."` | Mark solution failed with structured reason |
| `ccw issue retry --queue QUE-xxx` | Reset failed items in specific queue |
## Start Execution
**Step 0: Validate Queue ID**
If `--queue` was NOT provided in the command arguments:
1. Run `ccw issue queue list --brief --json`
2. Filter and display active/pending queues to user
3. **Stop execution**, prompt user to rerun with `--queue QUE-xxx`
**Step 1: Fetch First Solution**
Once queue ID is confirmed, begin by running:
```bash
ccw issue next --queue <queue-id>
```
Then follow the solution lifecycle for each solution until queue is empty.

@@ -0,0 +1,391 @@
---
name: issue-new
description: Create structured issue from GitHub URL or text description. Auto mode with --yes flag.
argument-hint: "[--yes|-y] <GITHUB_URL | TEXT_DESCRIPTION> [--priority PRIORITY] [--labels LABELS]"
---
# Issue New Command
## Core Principles
**Requirement Clarity Detection** → Ask only when needed
**Flexible Parameter Input** → Support multiple formats and flags
**Auto Mode Support** → `--yes`/`-y` skips confirmation questions
```
Clear Input (GitHub URL, structured text) → Direct creation (no questions)
Unclear Input (vague description) → Clarifying questions (unless --yes)
Auto Mode (--yes or -y flag) → Skip all questions, use inference
```
## Parameter Formats
```bash
# GitHub URL (auto-detected)
/prompts:issue-new https://github.com/owner/repo/issues/123
/prompts:issue-new GH-123
# Text description with priority
/prompts:issue-new "Login fails with special chars" --priority 1
# Auto mode - skip all questions
/prompts:issue-new --yes "something broken"
/prompts:issue-new -y https://github.com/owner/repo/issues/456
# With labels
/prompts:issue-new "Database migration needed" --priority 2 --labels "enhancement,database"
```
## Issue Structure
```typescript
interface Issue {
id: string; // GH-123 or ISS-YYYYMMDD-HHMMSS
title: string;
status: 'registered' | 'planned' | 'queued' | 'in_progress' | 'completed' | 'failed';
priority: number; // 1 (critical) to 5 (low)
context: string; // Problem description
source: 'github' | 'text' | 'discovery';
source_url?: string;
labels?: string[];
// GitHub binding (for non-GitHub sources that publish to GitHub)
github_url?: string;
github_number?: number;
// Optional structured fields
expected_behavior?: string;
actual_behavior?: string;
affected_components?: string[];
// Solution binding
bound_solution_id: string | null;
// Timestamps
created_at: string;
updated_at: string;
}
```
## Inputs
- **GitHub URL**: `https://github.com/owner/repo/issues/123` or `#123`
- **Text description**: Natural language description
- **Priority flag**: `--priority 1-5` (optional, default: 3)
## Output Requirements
**Create Issue via CLI** (preferred method):
```bash
# Pipe input (recommended for complex JSON)
echo '{"title":"...", "context":"...", "priority":3}' | ccw issue create
# Returns created issue JSON
{"id":"ISS-20251229-001","title":"...","status":"registered",...}
```
**Return Summary:**
```json
{
"created": true,
"id": "ISS-20251229-001",
"title": "Login fails with special chars",
"source": "text",
"github_published": false,
"next_step": "/issue:plan ISS-20251229-001"
}
```
## Workflow
### Phase 0: Parse Arguments & Flags
Extract parameters from user input:
```bash
# Input examples (Codex placeholders)
INPUT="$1" # GitHub URL or text description
AUTO_MODE="$2" # Check for --yes or -y flag
# Parse flags (comma-separated in single argument)
PRIORITY=$(echo "$INPUT" | grep -oP '(?<=--priority\s)\d+' || echo "3")
LABELS=$(echo "$INPUT" | grep -oP '(?<=--labels\s)[^-]*' | xargs)
AUTO_YES=$(echo "$INPUT" | grep -qE '(^| )(--yes|-y)( |$)' && echo "true" || echo "false")
# Extract main input (URL or text) - remove all flags
MAIN_INPUT=$(echo "$INPUT" | sed 's/\s*--priority\s*[0-9]*//; s/\s*--labels\s*[^-]*//; s/\s*--yes\s*//; s/\s*-y\b\s*//' | xargs)
```
### Phase 1: Analyze Input & Clarity Detection
```javascript
const mainInput = userInput.trim();
// Detect input type and clarity
const isGitHubUrl = mainInput.match(/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/);
const isGitHubShort = mainInput.match(/^GH-?\d+$/);
const hasStructure = mainInput.match(/(expected|actual|affects|steps):/i);
// Clarity score: 0-3
let clarityScore = 0;
if (isGitHubUrl || isGitHubShort) clarityScore = 3; // GitHub = fully clear
else if (hasStructure) clarityScore = 2; // Structured text = clear
else if (mainInput.length > 50) clarityScore = 1; // Long text = somewhat clear
else clarityScore = 0; // Vague
// Auto mode override: if --yes/-y flag, skip all questions
const skipQuestions = process.env.AUTO_YES === 'true';
```
### Phase 2: Extract Issue Data & Priority
**For GitHub URL/Short:**
```bash
# Fetch issue details via gh CLI
gh issue view <issue-ref> --json number,title,body,labels,url
# Parse response with priority override
{
"id": "GH-123",
"title": "...",
"priority": $PRIORITY || 3, # Use --priority flag if provided
"source": "github",
"source_url": "https://github.com/...",
"labels": $LABELS || [...existing labels],
"context": "..."
}
```
**For Text Description:**
```javascript
// Generate issue ID in ISS-YYYYMMDD-HHMMSS format (matches the Issue interface above)
const ts = new Date().toISOString().replace(/[-:]/g, '').replace('T', '-').slice(0, 15);
const id = `ISS-${ts}`;
// Parse structured fields if present
const expected = text.match(/expected:?\s*([^.]+)/i);
const actual = text.match(/actual:?\s*([^.]+)/i);
const affects = text.match(/affects?:?\s*([^.]+)/i);
// Build issue data with flags
{
"id": id,
"title": text.split(/[.\n]/)[0].substring(0, 60),
"priority": $PRIORITY || 3, # From --priority flag
"labels": $LABELS?.split(',') || [], # From --labels flag
"source": "text",
"context": text.substring(0, 500),
"expected_behavior": expected?.[1]?.trim(),
"actual_behavior": actual?.[1]?.trim()
}
```
### Phase 3: Context Hint (Conditional)
For medium clarity (score 1-2) without affected components:
```bash
# Use rg to find potentially related files
rg -l "<keyword>" --type ts | head -5
```
Add discovered files to `affected_components` (max 3 files).
**Note**: Skip this for GitHub issues (already have context) and vague inputs (needs clarification first).
### Phase 4: Conditional Clarification (Skip if Auto Mode)
**Only ask if**: clarity < 2 AND NOT in auto mode (skipQuestions = false)
If auto mode (`--yes`/`-y`), proceed directly to creation with inferred details.
Otherwise, present minimal clarification:
```
Input unclear. Please describe:
- What is the issue about?
- Where does it occur?
- What is the expected behavior?
```
Wait for user response, then update issue data.
### Phase 5: GitHub Publishing Decision (Skip if Already GitHub)
For non-GitHub sources AND NOT auto mode, ask whether the issue should be published to GitHub:
```
Would you like to publish this issue to GitHub?
1. Yes, publish to GitHub (create issue and link it)
2. No, keep local only (store without GitHub sync)
```
In auto mode: Default to NO (keep local only, unless explicitly requested with --publish flag).
### Phase 6: Create Issue
**Create via CLI:**
```bash
# Build issue JSON
ISSUE_JSON='{"title":"...","context":"...","priority":3,"source":"text"}'
# Create issue (auto-generates ID)
echo "${ISSUE_JSON}" | ccw issue create
```
**If publishing to GitHub:**
```bash
# Create on GitHub first
GH_URL=$(gh issue create --title "..." --body "..." | grep -oE 'https://github.com/[^ ]+')
GH_NUMBER=$(echo $GH_URL | grep -oE '/issues/([0-9]+)$' | grep -oE '[0-9]+')
# Update local issue with binding
ccw issue update ${ISSUE_ID} --github-url "${GH_URL}" --github-number ${GH_NUMBER}
```
### Phase 7: Output Result
```markdown
## Issue Created
**ID**: ISS-20251229-001
**Title**: Login fails with special chars
**Source**: text
**Priority**: 2 (High)
**Context**:
500 error when password contains quotes
**Affected Components**:
- src/auth/login.ts
- src/utils/validation.ts
**GitHub**: Not published (local only)
**Next Step**: `/issue:plan ISS-20251229-001`
```
## Quality Checklist
Before completing, verify:
- [ ] Issue ID generated correctly (GH-xxx or ISS-YYYYMMDD-HHMMSS)
- [ ] Title extracted (max 60 chars)
- [ ] Context captured (problem description)
- [ ] Priority assigned (1-5)
- [ ] Status set to `registered`
- [ ] Created via `ccw issue create` CLI command
## Error Handling
| Situation | Action |
|-----------|--------|
| GitHub URL not accessible | Report error, suggest text input |
| gh CLI not available | Fall back to text-based creation |
| Empty input | Prompt for description |
| Very vague input | Ask clarifying questions |
| Issue already exists | Report duplicate, show existing |
## Start Execution
### Parameter Parsing (Phase 0)
```bash
# Codex passes full input as $1
INPUT="$1"
# Extract flags
AUTO_YES=false
PRIORITY=3
LABELS=""
# Parse flags wherever they appear (the examples above place them after the main text)
MAIN_INPUT=""
set -- $INPUT  # word-split the raw input; sufficient for this sketch-level parsing
while [[ $# -gt 0 ]]; do
  case "$1" in
    -y|--yes)
      AUTO_YES=true
      shift
      ;;
    --priority)
      PRIORITY="$2"
      shift 2
      ;;
    --labels)
      LABELS="$2"
      shift 2
      ;;
    *)
      # Accumulate non-flag words as the main input (GitHub URL or description)
      MAIN_INPUT="${MAIN_INPUT:+$MAIN_INPUT }$1"
      shift
      ;;
  esac
done
```
### Execution Flow (All Phases)
```
1. Parse Arguments (Phase 0)
└─ Extract: AUTO_YES, PRIORITY, LABELS, MAIN_INPUT
2. Detect Input Type & Clarity (Phase 1)
├─ GitHub URL/Short? → Score 3 (clear)
├─ Structured text? → Score 2 (somewhat clear)
├─ Long text? → Score 1 (vague)
└─ Short text? → Score 0 (very vague)
3. Extract Issue Data (Phase 2)
├─ If GitHub: gh CLI fetch + parse
└─ If text: Parse structure + apply PRIORITY/LABELS flags
4. Context Hint (Phase 3, conditional)
└─ Only for clarity 1-2 AND no components → ACE search (max 3 files)
5. Clarification (Phase 4, conditional)
└─ If clarity < 2 AND NOT auto mode → Ask for details
└─ If auto mode (AUTO_YES=true) → Skip, use inferred data
6. GitHub Publishing (Phase 5, conditional)
├─ If source = github → Skip (already from GitHub)
└─ If source != github:
├─ If auto mode → Default NO (keep local)
└─ If manual → Ask user preference
7. Create Issue (Phase 6)
├─ Create local issue via ccw CLI
└─ If publishToGitHub → gh issue create → link
8. Output Result (Phase 7)
└─ Display: ID, title, source, GitHub status, next step
```
## Quick Examples
```bash
# Auto mode - GitHub issue (no questions)
/prompts:issue-new -y https://github.com/org/repo/issues/42
# Standard mode - text with priority
/prompts:issue-new "Database connection timeout" --priority 1
# Auto mode - text with priority and labels
/prompts:issue-new --yes "Add caching layer" --priority 2 --labels "enhancement,performance"
# GitHub short format
/prompts:issue-new GH-123
# Complex text description
/prompts:issue-new "User login fails. Expected: redirect to dashboard. Actual: 500 error"
```

@@ -0,0 +1,247 @@
---
name: issue-plan
description: Plan issue(s) into bound solutions using subagent pattern (explore + plan closed-loop)
argument-hint: "<issue-id>[,<issue-id>,...] [--all-pending] [--batch-size 4]"
---
# Issue Plan (Codex Version)
## Goal
Create executable solution(s) for issue(s) and bind the selected solution to each issue using `ccw issue bind`.
This workflow uses **subagent pattern** for parallel batch processing: spawn planning agents per batch, wait for results, handle multi-solution selection.
## Core Guidelines
**⚠️ Data Access Principle**: Issues and solutions files can grow very large. To avoid context overflow:
| Operation | Correct | Incorrect |
|-----------|---------|-----------|
| List issues (brief) | `ccw issue list --status pending --brief` | Read issues.jsonl |
| Read issue details | `ccw issue status <id> --json` | Read issues.jsonl |
| Update status | `ccw issue update <id> --status ...` | Direct file edit |
| Bind solution | `ccw issue bind <id> <sol-id>` | Direct file edit |
**ALWAYS** use CLI commands for CRUD operations. **NEVER** read entire `issues.jsonl` or `solutions/*.jsonl` directly.
## Inputs
- **Explicit issues**: comma-separated IDs, e.g. `ISS-123,ISS-124`
- **All pending**: `--all-pending` → plan all issues in `registered` status
- **Batch size**: `--batch-size N` (default `4`) → max issues per subagent batch
## Output Requirements
For each issue:
- Register at least one solution and bind one solution to the issue
- Ensure tasks conform to `~/.claude/workflows/cli-templates/schemas/solution-schema.json`
- Each task includes quantified `acceptance.criteria` and concrete `acceptance.verification`
Return a final summary JSON:
```json
{
"bound": [{ "issue_id": "...", "solution_id": "...", "task_count": 0 }],
"pending_selection": [{ "issue_id": "...", "solutions": [{ "id": "...", "task_count": 0, "description": "..." }] }],
"conflicts": [{ "file": "...", "issues": ["..."] }]
}
```
## Workflow
### Step 1: Resolve Issue List
**If `--all-pending`:**
```bash
ccw issue list --status registered --json
```
**Else (explicit IDs):**
```bash
# For each ID, ensure exists
ccw issue init <issue-id> --title "Issue <issue-id>" 2>/dev/null || true
ccw issue status <issue-id> --json
```
### Step 2: Group Issues by Similarity
Group issues for batch processing (max 4 per batch):
```bash
# Extract issue metadata for grouping
ccw issue list --status registered --brief --json
```
Group by:
- Shared tags
- Similar keywords in title
- Related components
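A greedy grouping sketch over the brief issue entries, assuming they expose `labels` and `title` (the similarity heuristic here is illustrative, not prescribed):

```javascript
// Bucket issues that share a label or a title keyword, capping each batch at batchSize
function groupIssues(issues, batchSize = 4) {
  const keywords = issue => new Set([
    ...(issue.labels || []),
    ...issue.title.toLowerCase().split(/\W+/).filter(w => w.length > 3)
  ]);
  const batches = [];
  for (const issue of issues) {
    const kw = keywords(issue);
    const batch = batches.find(b =>
      b.length < batchSize &&
      b.some(other => [...keywords(other)].some(k => kw.has(k)))
    );
    if (batch) batch.push(issue); else batches.push([issue]);
  }
  return batches;
}
```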
### Step 3: Spawn Planning Subagents (Parallel)
For each batch, spawn a planning subagent:
```javascript
// Subagent message structure
spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/issue-plan-agent.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json
4. Read schema: ~/.claude/workflows/cli-templates/schemas/solution-schema.json
---
Goal: Plan solutions for ${batch.length} issues with executable task breakdown
Scope:
- CAN DO: Explore codebase, design solutions, create tasks
- CANNOT DO: Execute solutions, modify production code
- Directory: ${process.cwd()}
Context:
- Issues: ${batch.map(i => `${i.id}: ${i.title}`).join('\n')}
- Fetch full details: ccw issue status <id> --json
Deliverables:
- For each issue: Write solution to .workflow/issues/solutions/{issue-id}.jsonl
- Single solution → auto-bind via ccw issue bind
- Multiple solutions → return in pending_selection
Quality bar:
- Tasks have quantified acceptance.criteria
- Each task includes test.commands
- Solution follows schema exactly
`
})
```
**Batch execution (parallel):**
```javascript
// Launch all batches in parallel
const agentIds = batches.map(batch => spawn_agent({ message: buildPrompt(batch) }))
// Wait for all agents to complete
const results = wait({ ids: agentIds, timeout_ms: 900000 }) // 15 min
// Collect results
const allBound = []
const allPendingSelection = []
const allConflicts = []
for (const id of agentIds) {
if (results.status[id].completed) {
const result = JSON.parse(results.status[id].completed)
allBound.push(...(result.bound || []))
allPendingSelection.push(...(result.pending_selection || []))
allConflicts.push(...(result.conflicts || []))
}
}
// Close all agents
agentIds.forEach(id => close_agent({ id }))
```
### Step 4: Handle Multi-Solution Selection
If `pending_selection` is non-empty, present options:
```
Issue ISS-001 has multiple solutions:
1. SOL-ISS-001-1: Refactor with adapter pattern (3 tasks)
2. SOL-ISS-001-2: Direct implementation (2 tasks)
Select solution (1-2):
```
Bind selected solution:
```bash
ccw issue bind ISS-001 SOL-ISS-001-1
```
### Step 5: Handle Conflicts
If conflicts detected:
- Low/Medium severity: Auto-resolve with recommended order
- High severity: Present to user for decision
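A small triage sketch, assuming each conflict entry carries `severity` and a `recommended_order` field (both field names are assumptions here):

```javascript
// Split conflicts: low/medium are auto-resolved, high are returned for user decision
function triageConflicts(conflicts) {
  const auto = conflicts.filter(c => c.severity === 'low' || c.severity === 'medium');
  const manual = conflicts.filter(c => c.severity === 'high');
  for (const c of auto) {
    console.log(`Auto-resolved ${c.file}: order ${(c.recommended_order || []).join(' -> ')}`);
  }
  return { auto, manual };
}
```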
### Step 6: Update Issue Status
After binding, update status:
```bash
ccw issue update <issue-id> --status planned
```
### Step 7: Output Summary
```markdown
## Planning Complete
**Planned**: 5 issues
**Bound Solutions**: 4
**Pending Selection**: 1
### Bound Solutions
| Issue | Solution | Tasks |
|-------|----------|-------|
| ISS-001 | SOL-ISS-001-1 | 3 |
| ISS-002 | SOL-ISS-002-1 | 2 |
### Pending Selection
- ISS-003: 2 solutions available (user selection required)
### Conflicts Detected
- src/auth.ts touched by ISS-001, ISS-002 (resolved: sequential)
**Next Step**: `/issue:queue`
```
## Subagent Role Reference
Planning subagent uses role file at: `~/.codex/agents/issue-plan-agent.md`
Role capabilities:
- Codebase exploration (rg, file reading)
- Solution design with task breakdown
- Schema validation
- Solution registration via CLI
## Quality Checklist
Before completing, verify:
- [ ] All input issues have solutions in `solutions/{issue-id}.jsonl`
- [ ] Single solution issues are auto-bound (`bound_solution_id` set)
- [ ] Multi-solution issues returned in `pending_selection` for user choice
- [ ] Each solution has executable tasks with `modification_points`
- [ ] Task acceptance criteria are quantified (not vague)
- [ ] Conflicts detected and reported (if multiple issues touch same files)
- [ ] Issue status updated to `planned` after binding
- [ ] All subagents closed after completion
## Error Handling
| Error | Resolution |
|-------|------------|
| Issue not found | Auto-create via `ccw issue init` |
| Subagent timeout | Retry with increased timeout or smaller batch |
| No solutions generated | Display error, suggest manual planning |
| User cancels selection | Skip issue, continue with others |
| File conflicts | Detect and suggest resolution order |
## Start Execution
Begin by resolving issue list:
```bash
# Default to all pending
ccw issue list --status registered --brief --json
# Or with explicit IDs
ccw issue status ISS-001 --json
```
Then group issues and spawn planning subagents.

@@ -0,0 +1,299 @@
---
name: issue-queue
description: Form execution queue from bound solutions using subagent for conflict analysis and ordering
argument-hint: "[--queues <n>] [--issue <id>] [--append <id>]"
---
# Issue Queue (Codex Version)
## Goal
Create an ordered execution queue from all bound solutions. Uses **subagent pattern** to analyze inter-solution file conflicts, calculate semantic priorities, and assign parallel/sequential execution groups.
**Design Principle**: Queue items are **solutions**, not individual tasks. Each executor receives a complete solution with all its tasks.
## Core Guidelines
**⚠️ Data Access Principle**: Issues and queue files can grow very large. To avoid context overflow:
| Operation | Correct | Incorrect |
|-----------|---------|-----------|
| List issues (brief) | `ccw issue list --status planned --brief` | Read issues.jsonl |
| List queue (brief) | `ccw issue queue --brief` | Read queues/*.json |
| Read issue details | `ccw issue status <id> --json` | Read issues.jsonl |
| Get next item | `ccw issue next --json` | Read queues/*.json |
| Sync from queue | `ccw issue update --from-queue` | Direct file edit |
**ALWAYS** use CLI commands for CRUD operations. **NEVER** read entire `issues.jsonl` or `queues/*.json` directly.
## Inputs
- **All planned**: Default behavior → queue all issues with `planned` status and bound solutions
- **Multiple queues**: `--queues <n>` → create N parallel queues
- **Specific issue**: `--issue <id>` → queue only that issue's solution
- **Append mode**: `--append <id>` → append issue to active queue (don't create new)
## Output Requirements
**Generate Files (EXACTLY 2):**
1. `.workflow/issues/queues/{queue-id}.json` - Full queue with solutions, conflicts, groups
2. `.workflow/issues/queues/index.json` - Update with new queue entry
**Return Summary:**
```json
{
"queue_id": "QUE-YYYYMMDD-HHMMSS",
"total_solutions": 3,
"total_tasks": 12,
"execution_groups": [{ "id": "P1", "type": "parallel", "count": 2 }],
"conflicts_resolved": 1,
"issues_queued": ["ISS-xxx", "ISS-yyy"]
}
```
## Workflow
### Step 1: Generate Queue ID and Load Solutions
```bash
# Generate queue ID
QUEUE_ID="QUE-$(date -u +%Y%m%d-%H%M%S)"
# Load planned issues with bound solutions
ccw issue list --status planned --json
```
For each issue, extract:
- `id`, `bound_solution_id`, `priority`
- Read solution from `.workflow/issues/solutions/{issue-id}.jsonl`
- Collect `files_touched` from all tasks' `modification_points.file`
Build solution list:
```json
[
{
"issue_id": "ISS-xxx",
"solution_id": "SOL-xxx",
"task_count": 3,
"files_touched": ["src/auth.ts", "src/utils.ts"],
"priority": "medium"
}
]
```
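A sketch of how one entry in this list can be built, assuming the bound solution is the line in `solutions/{issue-id}.jsonl` whose `id` matches the issue's `bound_solution_id`:

```javascript
const fs = require('fs');

// Build one queue-input entry for a planned issue and its bound solution
function buildSolutionEntry(issue) {
  const lines = fs.readFileSync(`.workflow/issues/solutions/${issue.id}.jsonl`, 'utf8')
    .split('\n').filter(Boolean).map(JSON.parse);
  const solution = lines.find(s => s.id === issue.bound_solution_id);
  const filesTouched = [...new Set(
    solution.tasks.flatMap(t => (t.modification_points || []).map(mp => mp.file))
  )];
  return {
    issue_id: issue.id,
    solution_id: solution.id,
    task_count: solution.tasks.length,
    files_touched: filesTouched,
    priority: issue.priority
  };
}
```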
### Step 2: Spawn Queue Agent for Conflict Analysis
Spawn subagent to analyze conflicts and order solutions:
```javascript
const agentId = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/issue-queue-agent.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json
---
Goal: Order ${solutions.length} solutions into execution queue with conflict resolution
Scope:
- CAN DO: Analyze file conflicts, calculate priorities, assign groups
- CANNOT DO: Execute solutions, modify code
- Queue ID: ${QUEUE_ID}
Context:
- Solutions: ${JSON.stringify(solutions, null, 2)}
- Project Root: ${process.cwd()}
Deliverables:
1. Write queue JSON to: .workflow/issues/queues/${QUEUE_ID}.json
2. Update index: .workflow/issues/queues/index.json
3. Return summary JSON
Quality bar:
- No circular dependencies in DAG
- Parallel groups have NO file overlaps
- Semantic priority calculated (0.0-1.0)
- All conflicts resolved with rationale
`
})
// Wait for agent completion
const result = wait({ ids: [agentId], timeout_ms: 600000 })
// Parse result
const summary = JSON.parse(result.status[agentId].completed)
// Check for clarifications
if (summary.clarifications?.length > 0) {
// Handle high-severity conflicts requiring user input
for (const clarification of summary.clarifications) {
console.log(`Conflict: ${clarification.question}`)
console.log(`Options: ${clarification.options.join(', ')}`)
// Get user input and send back
send_input({
id: agentId,
message: `Conflict ${clarification.conflict_id} resolved: ${userChoice}`
})
wait({ ids: [agentId], timeout_ms: 300000 })
}
}
// Close agent
close_agent({ id: agentId })
```
### Step 3: Multi-Queue Support (if --queues > 1)
When creating multiple parallel queues:
1. **Partition solutions** to minimize cross-queue file conflicts
2. **Spawn N agents in parallel** (one per queue)
3. **Wait for all agents** with batch wait
```javascript
// Partition solutions by file overlap
const partitions = partitionSolutions(solutions, numQueues)
// Spawn agents in parallel
const agentIds = partitions.map((partition, i) =>
spawn_agent({
message: buildQueuePrompt(partition, `${QUEUE_ID}-${i+1}`, i+1, numQueues)
})
)
// Batch wait for all agents
const results = wait({ ids: agentIds, timeout_ms: 600000 })
// Collect clarifications from all agents
const allClarifications = agentIds.flatMap((id, i) =>
(results.status[id].clarifications || []).map(c => ({ ...c, queue_id: `${QUEUE_ID}-${i+1}`, agent_id: id }))
)
// Handle clarifications, then close all agents
agentIds.forEach(id => close_agent({ id }))
```
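`partitionSolutions` is referenced above but not defined; a minimal sketch that keeps file-overlapping solutions in the same queue while roughly balancing queue sizes could look like this:

```javascript
// Assign each solution to the queue sharing the most files with it; break ties by queue size
function partitionSolutions(solutions, numQueues) {
  const queues = Array.from({ length: numQueues }, () => ({ items: [], files: new Set() }));
  for (const sol of solutions) {
    const overlap = q => sol.files_touched.filter(f => q.files.has(f)).length;
    const best = queues.reduce((a, b) =>
      overlap(b) > overlap(a) || (overlap(b) === overlap(a) && b.items.length < a.items.length) ? b : a
    );
    best.items.push(sol);
    sol.files_touched.forEach(f => best.files.add(f));
  }
  return queues.map(q => q.items);
}
```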
### Step 4: Update Issue Statuses
**MUST use CLI command:**
```bash
# Batch update from queue (recommended)
ccw issue update --from-queue ${QUEUE_ID}
# Or individual update
ccw issue update <issue-id> --status queued
```
### Step 5: Active Queue Check
```bash
ccw issue queue list --brief
```
**Decision:**
- If no active queue: `ccw issue queue switch ${QUEUE_ID}`
- If active queue exists: Present options to user
```
Active queue exists. Choose action:
1. Merge into existing queue
2. Use new queue (keep existing in history)
3. Cancel (delete new queue)
Select (1-3):
```
### Step 6: Output Summary
```markdown
## Queue Formed: ${QUEUE_ID}
**Solutions**: 5
**Tasks**: 18
**Execution Groups**: 3
### Execution Order
| # | Item | Issue | Tasks | Group | Files |
|---|------|-------|-------|-------|-------|
| 1 | S-1 | ISS-001 | 3 | P1 | src/auth.ts |
| 2 | S-2 | ISS-002 | 2 | P1 | src/api.ts |
| 3 | S-3 | ISS-003 | 4 | S2 | src/auth.ts |
### Conflicts Resolved
- src/auth.ts: S-1 → S-3 (sequential, S-1 creates module)
**Next Step**: `/issue:execute --queue ${QUEUE_ID}`
```
## Subagent Role Reference
Queue agent uses role file at: `~/.codex/agents/issue-queue-agent.md`
Role capabilities:
- File conflict detection (5 types)
- Dependency DAG construction
- Semantic priority calculation
- Execution group assignment
## Queue File Schema
```json
{
"id": "QUE-20251228-120000",
"status": "active",
"issue_ids": ["ISS-001", "ISS-002"],
"solutions": [
{
"item_id": "S-1",
"issue_id": "ISS-001",
"solution_id": "SOL-ISS-001-1",
"status": "pending",
"execution_order": 1,
"execution_group": "P1",
"depends_on": [],
"semantic_priority": 0.8,
"files_touched": ["src/auth.ts"],
"task_count": 3
}
],
"conflicts": [...],
"execution_groups": [...]
}
```
## Quality Checklist
Before completing, verify:
- [ ] Exactly 2 files generated: queue JSON + index update
- [ ] Queue has valid DAG (no circular dependencies)
- [ ] All file conflicts resolved with rationale
- [ ] Semantic priority calculated for each solution (0.0-1.0)
- [ ] Execution groups assigned (P* for parallel, S* for sequential)
- [ ] Issue statuses updated to `queued`
- [ ] All subagents closed after completion
## Error Handling
| Situation | Action |
|-----------|--------|
| No planned issues | Return empty queue summary |
| Circular dependency detected | Abort, report cycle details |
| Missing solution file | Skip issue, log warning |
| Agent timeout | Retry with increased timeout |
| Clarification rejected | Abort queue formation |
## Start Execution
Begin by listing planned issues:
```bash
ccw issue list --status planned --json
```
Then extract solution data and spawn queue agent.

@@ -0,0 +1,272 @@
---
name: unified-execute-parallel
description: Worktree-based parallel execution engine. Execute group tasks in isolated Git worktree. Codex-optimized.
argument-hint: "PLAN=\"<path>\" GROUP=\"<group-id>\" [--auto-commit] [--dry-run]"
---
# Codex Unified-Execute-Parallel Workflow
## Quick Start
Execute tasks for a specific execution group in isolated Git worktree.
**Core workflow**: Load Plan → Select Group → Create Worktree → Execute Tasks → Mark Complete
**Key features**:
- **Worktree isolation**: Each group executes in `.ccw/worktree/{group-id}/`
- **Parallel execution**: Multiple codex instances can run different groups simultaneously
- **Simple focus**: Execute only, merge handled by separate command
## Overview
1. **Plan & Group Selection** - Load plan, select execution group
2. **Worktree Setup** - Create Git worktree for group's branch
3. **Task Execution** - Execute group tasks serially in worktree
4. **Mark Complete** - Record completion status
**Note**: Merging is handled by `/workflow:worktree-merge` command.
## Output Structure
```
.ccw/worktree/
├── {group-id}/ # Git worktree for group
│ ├── (full project checkout)
│ └── .execution/ # Execution artifacts
│ ├── execution.md # Task overview
│ └── execution-events.md # Execution log
.workflow/.execution/
└── worktree-status.json # ⭐ All groups completion status
```
---
## Implementation Details
### Session Variables
- `planPath`: Path to plan file
- `groupId`: Selected execution group ID (required)
- `worktreePath`: `.ccw/worktree/{groupId}`
- `branchName`: From execution-groups.json
- `autoCommit`: Boolean for auto-commit mode
- `dryRun`: Boolean for dry-run mode
---
## Phase 1: Plan & Group Selection
**Objective**: Load plan, extract group metadata, filter tasks.
### Step 1.1: Load Plan File
Read plan file and execution-groups.json from same directory.
**Required Files**:
- `plan-note.md` or `plan.json` - Task definitions
- `execution-groups.json` - Group metadata with branch names
### Step 1.2: Select Group
Validate GROUP parameter.
**Validation**:
- Group exists in execution-groups.json
- Group has assigned tasks
- Branch name is defined
### Step 1.3: Filter Group Tasks
Extract only tasks belonging to selected group.
**Task Filtering**:
- Match by `execution_group` field, OR
- Match by `domain` if domain in group.domains
Build execution order from filtered tasks.
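A sketch of the membership filter (ordering the filtered tasks then follows each task's dependency fields):

```javascript
// Keep only tasks that belong to the selected group
function filterGroupTasks(allTasks, group) {
  return allTasks.filter(t =>
    t.execution_group === group.id ||
    (t.domain && (group.domains || []).includes(t.domain))
  );
}
```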
---
## Phase 2: Worktree Setup
**Objective**: Create Git worktree for isolated execution.
### Step 2.1: Create Worktree Directory
Ensure worktree base directory exists.
```bash
mkdir -p .ccw/worktree
```
### Step 2.2: Create Git Worktree
Create worktree with group's branch.
**If worktree doesn't exist**:
```bash
# Create branch and worktree
git worktree add -b {branch-name} .ccw/worktree/{group-id} main
```
**If worktree exists**:
```bash
# Verify worktree is on correct branch
cd .ccw/worktree/{group-id}
git status
```
**Branch Naming**: From execution-groups.json `branch_name` field.
### Step 2.3: Initialize Execution Folder
Create execution tracking folder inside worktree.
```bash
mkdir -p .ccw/worktree/{group-id}/.execution
```
Create initial `execution.md`:
- Group ID and branch name
- Task list for this group
- Start timestamp
**Success Criteria**:
- Worktree created at `.ccw/worktree/{group-id}/`
- Working on correct branch
- Execution folder initialized
---
## Phase 3: Task Execution
**Objective**: Execute group tasks serially in worktree.
### Step 3.1: Change to Worktree Directory
All execution happens inside worktree.
```bash
cd .ccw/worktree/{group-id}
```
### Step 3.2: Execute Tasks Sequentially
For each task in group:
1. Execute via CLI in worktree directory
2. Record result in `.execution/execution-events.md`
3. Auto-commit if enabled
4. Continue to next task
**CLI Execution**:
- `--cd .ccw/worktree/{group-id}` - Execute in worktree
- `--mode write` - Allow file modifications
### Step 3.3: Record Progress
Update `.execution/execution-events.md` after each task:
- Task ID and title
- Timestamp
- Status (completed/failed)
- Files modified
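A sketch of the event append, assuming tasks expose `id` and `title` and the modified-file list is collected during execution (the log entry format is illustrative):

```javascript
const fs = require('fs');

// Append one task result to the group's execution-events.md
function recordEvent(worktreePath, task, status, filesModified) {
  const entry = [
    `## ${task.id}: ${task.title}`,
    `- Timestamp: ${new Date().toISOString()}`,
    `- Status: ${status}`,
    `- Files modified: ${filesModified.join(', ') || 'none'}`,
    ''
  ].join('\n');
  fs.appendFileSync(`${worktreePath}/.execution/execution-events.md`, entry + '\n');
}
```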
### Step 3.4: Auto-Commit (if enabled)
Commit task changes in worktree.
```bash
cd .ccw/worktree/{group-id}
git add .
git commit -m "feat: {task-title} [TASK-{id}]"
```
---
## Phase 4: Mark Complete
**Objective**: Record group completion status.
### Step 4.1: Update worktree-status.json
Write/update status file in main project.
**worktree-status.json** (in `.workflow/.execution/`):
```json
{
"plan_session": "CPLAN-auth-2025-02-03",
"groups": {
"EG-001": {
"worktree_path": ".ccw/worktree/EG-001",
"branch": "feature/cplan-auth-eg-001-frontend",
"status": "completed",
"tasks_total": 15,
"tasks_completed": 15,
"tasks_failed": 0,
"completed_at": "2025-02-03T14:30:00Z"
},
"EG-002": {
"status": "in_progress",
"tasks_completed": 8,
"tasks_total": 12
}
}
}
```
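A sketch of the read-merge-write update for this group's entry (assumes the status file may not exist yet on the first run; only the group record is merged here):

```javascript
const fs = require('fs');

// Merge this group's completion record into the shared status file
function markGroupComplete(groupId, record) {
  const statusPath = '.workflow/.execution/worktree-status.json';
  const status = fs.existsSync(statusPath)
    ? JSON.parse(fs.readFileSync(statusPath, 'utf8'))
    : { groups: {} };
  if (record.status === 'completed') record.completed_at = new Date().toISOString();
  status.groups[groupId] = { ...status.groups[groupId], ...record };
  fs.mkdirSync('.workflow/.execution', { recursive: true });
  fs.writeFileSync(statusPath, JSON.stringify(status, null, 2));
}
```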
### Step 4.2: Display Summary
Report group execution results:
- Group ID and worktree path
- Tasks completed/failed
- Next step: Use `/workflow:worktree-merge` to merge
---
## Configuration
| Parameter | Default | Description |
|-----------|---------|-------------|
| `PLAN` | Auto-detect | Path to plan file |
| `GROUP` | Required | Execution group ID (e.g., EG-001) |
| `--auto-commit` | false | Commit after each task |
| `--dry-run` | false | Simulate without changes |
---
## Error Handling
| Situation | Action |
|-----------|--------|
| GROUP missing | Error: Require GROUP parameter |
| Group not found | Error: Check execution-groups.json |
| Worktree exists with wrong branch | Warning: Clean or remove existing worktree |
| Task fails | Record failure, continue to next |
---
## Parallel Execution Example
**3 terminals, 3 groups**:
```bash
# Terminal 1
PLAN=".workflow/.planning/CPLAN-auth/plan-note.md" GROUP="EG-001" --auto-commit
# Terminal 2
PLAN=".workflow/.planning/CPLAN-auth/plan-note.md" GROUP="EG-002" --auto-commit
# Terminal 3
PLAN=".workflow/.planning/CPLAN-auth/plan-note.md" GROUP="EG-003" --auto-commit
```
Each executes in isolated worktree:
- `.ccw/worktree/EG-001/`
- `.ccw/worktree/EG-002/`
- `.ccw/worktree/EG-003/`
After all complete, use `/workflow:worktree-merge` to merge.
---
**Now execute unified-execute-parallel for**: $PLAN with GROUP=$GROUP

@@ -0,0 +1,474 @@
---
name: unified-execute-with-file
description: Universal execution engine for consuming planning/brainstorm/analysis output. Serial task execution with progress tracking. Codex-optimized.
argument-hint: "PLAN=\"<path>\" [--auto-commit] [--dry-run]"
---
# Codex Unified-Execute-With-File Workflow
## Quick Start
Universal execution engine consuming **any** planning output and executing tasks serially with progress tracking.
**Core workflow**: Load Plan → Parse Tasks → Validate → Execute Sequentially → Track Progress → Verify
**Key features**:
- **Format-agnostic**: Supports plan.json, plan-note.md, synthesis.json, conclusions.json
- **Serial execution**: Process tasks sequentially with dependency ordering
- **Progress tracking**: execution.md overview + execution-events.md detailed log
- **Auto-commit**: Optional conventional commits after each task
- **Dry-run mode**: Simulate execution without making changes
## Overview
This workflow enables reliable task execution through sequential phases:
1. **Plan Detection & Parsing** - Load and parse planning output in any format
2. **Pre-Execution Analysis** - Validate feasibility and identify potential issues
3. **Serial Task Execution** - Execute tasks one by one with dependency ordering
4. **Progress Tracking** - Update execution logs with results and discoveries
5. **Completion** - Generate summary and offer follow-up actions
The key innovation is the **unified event log** that serves as both human-readable progress tracker and machine-parseable state store.
## Output Structure
```
.workflow/.execution/EXEC-{slug}-{date}-{random}/
├── execution.md # Plan overview + task table + timeline
└── execution-events.md # ⭐ Unified log (all executions) - SINGLE SOURCE OF TRUTH
```
## Output Artifacts
### Phase 1: Session Initialization
| Artifact | Purpose |
|----------|---------|
| `execution.md` | Overview of plan source, task table, execution timeline |
| Session folder | `.workflow/.execution/{sessionId}/` |
### Phase 2: Pre-Execution Analysis
| Artifact | Purpose |
|----------|---------|
| `execution.md` (updated) | Feasibility assessment and validation results |
### Phase 3-4: Serial Execution & Progress
| Artifact | Purpose |
|----------|---------|
| `execution-events.md` | Unified log: all task executions with results |
| `execution.md` (updated) | Real-time progress updates and task status |
### Phase 5: Completion
| Artifact | Purpose |
|----------|---------|
| Final `execution.md` | Complete execution summary and statistics |
| Final `execution-events.md` | Complete execution history |
---
## Implementation Details
### Session Initialization
The workflow creates a unique session for tracking execution.
**Session ID Format**: `EXEC-{slug}-{date}-{random}`
- `slug`: Plan filename without extension, lowercased, max 30 chars
- `date`: YYYY-MM-DD format (UTC+8)
- `random`: 7-char random suffix for uniqueness
**Session Directory**: `.workflow/.execution/{sessionId}/`
**Plan Path Resolution**:
1. If `$PLAN` is provided explicitly, use it
2. Otherwise, auto-detect from common locations:
- `.workflow/IMPL_PLAN.md`
- `.workflow/.planning/*/plan-note.md`
- `.workflow/.brainstorm/*/synthesis.json`
- `.workflow/.analysis/*/conclusions.json`
**Session Variables**:
- `sessionId`: Unique session identifier
- `sessionFolder`: Base directory for artifacts
- `planPath`: Resolved path to plan file
- `autoCommit`: Boolean flag for auto-commit mode
- `dryRun`: Boolean flag for dry-run mode
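A small sketch of how a session ID could be assembled from these pieces (Node.js; the slug truncation rules are illustrative):
```javascript
const path = require('path')

// Build EXEC-{slug}-{date}-{random} from the resolved plan path (date is UTC+8, per the session format)
function buildSessionId(planPath) {
  const slug = path.basename(planPath, path.extname(planPath))
    .toLowerCase()
    .slice(0, 30)
  const date = new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString().slice(0, 10) // YYYY-MM-DD
  const random = Math.random().toString(36).slice(2, 9) // 7-char suffix
  return `EXEC-${slug}-${date}-${random}`
}

// Example: buildSessionId('.workflow/.planning/CPLAN-auth/plan-note.md')
//   → "EXEC-plan-note-2025-02-03-k3f9x2a"
```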
---
## Phase 1: Plan Detection & Parsing
**Objective**: Load plan file, parse tasks, build execution order, and validate for cycles.
### Step 1.1: Load Plan File
Detect plan format and parse based on file extension.
**Supported Formats**:
| Format | Source | Parser |
|--------|--------|--------|
| plan.json | lite-plan, collaborative-plan | parsePlanJson() |
| plan-note.md | collaborative-plan | parsePlanMarkdown() |
| synthesis.json | brainstorm session | convertSynthesisToTasks() |
| conclusions.json | analysis session | convertConclusionsToTasks() |
**Parsing Activities**:
1. Read plan file content
2. Detect format from filename or content structure
3. Route to appropriate parser
4. Extract tasks with required fields: id, title, description, files_to_modify, depends_on
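A sketch of the format-detection step (Node.js; the parser names in the comments come from the table above, and only detection is shown here):
```javascript
const path = require('path')

// Map a plan file to one of the supported formats so content can be routed to the right parser
function detectPlanFormat(planPath) {
  const name = path.basename(planPath)
  if (name === 'synthesis.json') return 'synthesis'      // → convertSynthesisToTasks()
  if (name === 'conclusions.json') return 'conclusions'  // → convertConclusionsToTasks()
  if (name.endsWith('.json')) return 'plan-json'         // → parsePlanJson()
  return 'plan-markdown'                                  // → parsePlanMarkdown() for plan-note.md / IMPL_PLAN.md
}

// Usage (illustrative path):
//   detectPlanFormat('.workflow/.planning/CPLAN-auth/plan-note.md')  → 'plan-markdown'
```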
### Step 1.2: Build Execution Order
Analyze task dependencies and calculate execution sequence.
**Execution Order Calculation**:
1. Build dependency graph from task dependencies
2. Validate for circular dependencies (no cycles allowed)
3. Calculate topological sort for sequential execution order
4. In Codex, serial mode means tasks are executed one at a time
**Dependency Validation**:
- Check that all referenced dependencies exist
- Detect cycles and report as critical error
- Order tasks based on dependencies
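A minimal topological-sort sketch with cycle detection over `depends_on` (Node.js; field names follow Step 1.1):
```javascript
// Returns tasks in an order where every dependency precedes its dependents;
// throws on cycles or references to unknown task IDs.
function buildExecutionOrder(tasks) {
  const byId = new Map(tasks.map(t => [t.id, t]))
  const order = []
  const state = new Map() // taskId → 'visiting' | 'done'

  function visit(id, chain = []) {
    if (state.get(id) === 'done') return
    if (state.get(id) === 'visiting') {
      throw new Error(`Circular dependency: ${[...chain, id].join(' -> ')}`)
    }
    const task = byId.get(id)
    if (!task) throw new Error(`Unknown dependency: ${id}`)
    state.set(id, 'visiting')
    for (const dep of task.depends_on || []) visit(dep, [...chain, id])
    state.set(id, 'done')
    order.push(task)
  }

  for (const t of tasks) visit(t.id)
  return order
}
```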
### Step 1.3: Generate execution.md
Create the main execution tracking document.
**execution.md Structure**:
- **Header**: Session ID, plan source, execution timestamp
- **Plan Overview**: Summary from plan metadata
- **Task List**: Table with ID, title, complexity, dependencies, status
- **Execution Timeline**: To be updated as tasks complete
**Success Criteria**:
- execution.md created with complete plan overview
- Task list includes all tasks from plan
- Execution order calculated with no cycles
- Ready for feasibility analysis
---
## Phase 2: Pre-Execution Analysis
**Objective**: Validate feasibility and identify potential issues before starting execution.
### Step 2.1: Analyze Plan Structure
Examine task dependencies, file modifications, and potential conflicts.
**Analysis Activities**:
1. **Check file conflicts**: Identify files modified by multiple tasks
2. **Check missing dependencies**: Verify all referenced dependencies exist
3. **Check file existence**: Identify files that will be created vs modified
4. **Estimate complexity**: Assess overall execution complexity
**Issue Detection**:
- Sequential modifications to same file (document for ordered execution)
- Missing dependency targets
- High complexity patterns that may need special handling
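A small sketch of the file-overlap check (Node.js; assumes `files_to_modify` is a plain string array):
```javascript
// Group tasks by the files they touch; any file claimed by more than one task
// is a sequential-modification case to document in the feasibility report.
function findFileOverlaps(tasks) {
  const byFile = new Map()
  for (const task of tasks) {
    for (const file of task.files_to_modify || []) {
      if (!byFile.has(file)) byFile.set(file, [])
      byFile.get(file).push(task.id)
    }
  }
  return [...byFile.entries()]
    .filter(([, taskIds]) => taskIds.length > 1)
    .map(([file, taskIds]) => ({ file, taskIds }))
}
```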
### Step 2.2: Generate Feasibility Report
Document analysis results and recommendations.
**Feasibility Report Content**:
- Issues found (if any)
- File conflict warnings
- Dependency validation results
- Complexity assessment
- Recommended execution strategy
### Step 2.3: Update execution.md
Append feasibility analysis results.
**Success Criteria**:
- All validation checks completed
- Issues documented in execution.md
- No blocking issues found (or user confirmed to proceed)
- Ready for task execution
---
## Phase 3: Serial Task Execution
**Objective**: Execute tasks one by one in dependency order, tracking progress and recording results.
**Execution Model**: Serial execution - process tasks sequentially, one at a time. Each task must complete before the next begins.
### Step 3.1: Execute Tasks Sequentially
For each task in execution order:
1. Load context from previous task results
2. Route to Codex CLI for execution
3. Wait for completion
4. Record results in execution-events.md
5. Auto-commit if enabled
6. Move to next task
**Execution Loop**:
```
For each task in executionOrder:
├─ Extract task context
├─ Load previous task outputs
├─ Execute task via CLI (synchronous)
├─ Record result with timestamp
├─ Auto-commit if enabled
└─ Continue to next task
```
### Step 3.2: Execute Task via CLI
Execute individual task using Codex CLI in synchronous mode.
**CLI Execution Scope**:
- **PURPOSE**: Execute task from plan
- **TASK DETAILS**: ID, title, description, required changes
- **PRIOR CONTEXT**: Results from previous tasks
- **REQUIRED CHANGES**: Files to modify with specific locations
- **MODE**: write (modification mode)
- **EXPECTED**: Files modified as specified, no test failures
**CLI Parameters**:
- `--tool codex`: Use Codex for execution
- `--mode write`: Allow file modifications
- Synchronous execution: Wait for completion
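A sketch of how the per-task prompt could be assembled from the scope above before handing it to the CLI (Node.js; field names and the `runCli` wrapper are assumptions rather than a fixed CLI contract):
```javascript
// Build the execution prompt for one task from the plan plus prior results.
function buildTaskPrompt(task, priorResults) {
  return [
    `PURPOSE: Execute task from plan`,
    `TASK: ${task.id} - ${task.title}`,
    `DESCRIPTION: ${task.description}`,
    `PRIOR CONTEXT: ${priorResults.map(r => `${r.taskId}: ${r.summary}`).join('; ') || 'none'}`,
    `REQUIRED CHANGES: ${(task.files_to_modify || []).join(', ')}`,
    `EXPECTED: files modified as specified, no test failures`,
  ].join('\n')
}

// The prompt would then be handed to the CLI synchronously in write mode, e.g.
//   runCli({ tool: 'codex', mode: 'write', prompt: buildTaskPrompt(task, priorResults) })
```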
### Step 3.3: Track Progress
Record task execution results in the unified event log.
**execution-events.md Structure**:
- **Header**: Session metadata
- **Event Timeline**: One entry per task with results
- **Event Format**:
- Task ID and title
- Timestamp and duration
- Status (completed/failed)
- Summary of changes
- Any notes or issues discovered
**Event Recording Activities**:
1. Capture execution timestamp
2. Record task status and duration
3. Document any modifications made
4. Note any issues or discoveries
5. Append event to execution-events.md
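A minimal event-appender sketch matching the format above (Node.js; the `result` fields are illustrative):
```javascript
const fs = require('fs')

// Append one event entry to execution-events.md after a task finishes.
function recordEvent(eventsPath, task, result) {
  const entry = [
    `\n## ${task.id}: ${task.title}`,
    `- Timestamp: ${new Date().toISOString()}`,
    `- Duration: ${result.durationMs} ms`,
    `- Status: ${result.ok ? 'completed' : 'failed'}`,
    `- Changes: ${result.summary || 'n/a'}`,
    result.notes ? `- Notes: ${result.notes}` : null,
  ].filter(Boolean).join('\n')
  fs.appendFileSync(eventsPath, entry + '\n')
}
```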
### Step 3.4: Auto-Commit (if enabled)
Commit task changes with conventional commit format.
**Auto-Commit Process**:
1. Get changed files from git status
2. Filter to task.files_to_modify
3. Stage files: `git add`
4. Generate commit message based on task type
5. Commit: `git commit -m`
**Commit Message Format**:
- Type: feat, fix, refactor, test, docs (inferred from task)
- Scope: file/module affected (inferred from files modified)
- Subject: Task title or description
- Footer: Task ID and plan reference
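A sketch of the type/scope inference described above (Node.js; the heuristics are illustrative assumptions, not a prescribed mapping):
```javascript
// Infer a conventional commit message from the task title and first modified file.
function buildCommitMessage(task, planPath) {
  const title = task.title.toLowerCase()
  const type = /test/.test(title) ? 'test'
    : /fix|bug/.test(title) ? 'fix'
    : /refactor/.test(title) ? 'refactor'
    : /doc/.test(title) ? 'docs'
    : 'feat'
  const firstFile = (task.files_to_modify || [])[0] || ''
  const scope = firstFile.split('/').slice(-2, -1)[0] || 'core' // parent directory as a rough scope
  return `${type}(${scope}): ${task.title}\n\nTask: ${task.id}\nPlan: ${planPath}`
}
```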
**Success Criteria**:
- All tasks executed sequentially
- Results recorded in execution-events.md
- Auto-commits created (if enabled)
- Failed tasks logged for review
---
## Phase 4: Completion
**Objective**: Summarize execution results and offer follow-up actions.
### Step 4.1: Collect Statistics
Gather execution metrics.
**Metrics Collection**:
- Total tasks executed
- Successfully completed count
- Failed count
- Success rate percentage
- Total duration
- Artifacts generated
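A small sketch of the metrics reduction (Node.js; `results` is assumed to be the per-task result list from Phase 3):
```javascript
// Reduce recorded task results into the summary metrics listed above.
function collectStats(results, startedAt) {
  const completed = results.filter(r => r.ok).length
  const failed = results.length - completed
  return {
    total: results.length,
    completed,
    failed,
    successRate: results.length ? Math.round((completed / results.length) * 100) : 0,
    durationMs: Date.now() - startedAt,
  }
}
```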
### Step 4.2: Generate Summary
Update execution.md with final results.
**Summary Content**:
- Execution completion timestamp
- Statistics table
- Task status table (completed/failed)
- Commit log (if auto-commit enabled)
- Any failed tasks requiring attention
### Step 4.3: Display Completion Summary
Present results to user.
**Summary Output**:
- Session ID and folder path
- Statistics (completed/failed/total)
- Failed tasks (if any)
- Execution log location
- Next step recommendations
**Success Criteria**:
- execution.md finalized with complete summary
- execution-events.md contains all task records
- User informed of completion status
- All artifacts successfully created
---
## Configuration
### Plan Format Detection
Workflow automatically detects plan format:
| File Extension | Format |
|---|---|
| `.json` | JSON plan (lite-plan, collaborative-plan) |
| `.md` | Markdown plan (IMPL_PLAN.md, plan-note.md) |
| `synthesis.json` | Brainstorm synthesis |
| `conclusions.json` | Analysis conclusions |
### Execution Modes
| Mode | Behavior | Use Case |
|------|----------|----------|
| Normal | Execute tasks, track progress | Standard execution |
| `--auto-commit` | Execute + commit each task | Tracked progress with git history |
| `--dry-run` | Simulate execution, no changes | Validate plan before executing |
### Task Dependencies
Tasks can declare dependencies on other tasks:
- `depends_on: ["TASK-001", "TASK-002"]` - Wait for these tasks
- Tasks are executed in topological order
- Circular dependencies are detected and reported as an error
---
## Error Handling & Recovery
| Situation | Action | Recovery |
|-----------|--------|----------|
| Plan not found | Check file path and common locations | Verify plan path is correct |
| Unsupported format | Detect format from extension/content | Use supported plan format |
| Circular dependency | Stop execution, report error | Remove or reorganize dependencies |
| Task execution fails | Record failure in log | Review error details in execution-events.md |
| File conflict | Document in execution-events.md | Resolve conflict manually or adjust plan order |
| Missing file | Log as warning, continue | Verify files will be created by prior tasks |
---
## Execution Flow Diagram
```
Load Plan File
├─ Detect format (JSON/Markdown)
├─ Parse tasks
└─ Build dependency graph
Validate
├─ Check for cycles
├─ Analyze file conflicts
└─ Calculate execution order
Execute Sequentially
├─ Task 1: CLI execution → record result
├─ Task 2: CLI execution → record result
├─ Task 3: CLI execution → record result
└─ (repeat for all tasks)
Track Progress
├─ Update execution.md after each task
└─ Append event to execution-events.md
Complete
├─ Generate final summary
├─ Report statistics
└─ Offer follow-up actions
```
---
## Best Practices
### Before Execution
1. **Review Plan**: Check plan.md or plan-note.md for completeness
2. **Validate Format**: Ensure plan is in supported format
3. **Check Dependencies**: Verify dependency order is logical
4. **Test First**: Use `--dry-run` mode to validate before actual execution
5. **Backup**: Commit any pending changes before starting
### During Execution
1. **Monitor Progress**: Check execution-events.md for real-time updates
2. **Handle Failures**: Review error details and decide whether to continue
3. **Check Commits**: Verify auto-commits are correct if enabled
4. **Track Context**: Prior task results are available to subsequent tasks
### After Execution
1. **Review Results**: Check execution.md summary and statistics
2. **Verify Changes**: Inspect modified files match expected changes
3. **Handle Failures**: Address any failed tasks
4. **Update History**: Check git log for conventional commits if enabled
5. **Plan Next Steps**: Use completion artifacts for future work
---
## Command Examples
### Standard Execution
```bash
PLAN=".workflow/.planning/CPLAN-auth-2025-01-27/plan-note.md"
```
Execute the plan with standard options.
### With Auto-Commit
```bash
PLAN=".workflow/.planning/CPLAN-auth-2025-01-27/plan-note.md" \
--auto-commit
```
Execute and automatically commit changes after each task.
### Dry-Run Mode
```bash
PLAN=".workflow/.planning/CPLAN-auth-2025-01-27/plan-note.md" \
--dry-run
```
Simulate execution without making changes.
### Auto-Detect Plan
```bash
# No PLAN specified - auto-detects from .workflow/ directories
```
---
**Now execute unified-execute-with-file for**: $PLAN

@@ -0,0 +1,557 @@
---
name: worktree-merge
description: Merge completed worktrees back to main branch. Handle cross-group conflicts and dependency order.
argument-hint: "[--plan=<plan-session>] [--group=<group-id>] [--all] [--cleanup]"
---
# Codex Worktree-Merge Workflow
## Quick Start
Merge completed execution group worktrees back to main branch.
**Core workflow**: Load Status → Check Dependencies → Merge Groups → Cleanup Worktrees
**Key features**:
- **Dependency-aware merge**: Merge groups in correct order
- **Conflict detection**: Check for cross-group file conflicts
- **Selective or bulk merge**: Merge single group or all completed groups
- **Cleanup option**: Remove worktrees after successful merge
## Overview
1. **Load Status** - Read worktree-status.json and execution-groups.json
2. **Validate Dependencies** - Check group dependencies are merged first
3. **Merge Worktree** - Merge group's branch to main
4. **Update Status** - Mark group as merged
5. **Cleanup** (optional) - Remove worktree after merge
**Note**: This command only merges; execution is handled by `/workflow:unified-execute-parallel`.
## Input Files
```
.workflow/.execution/
└── worktree-status.json # Group completion status
.workflow/.planning/{session}/
├── execution-groups.json # Group metadata and dependencies
└── conflicts.json # Cross-group conflicts (if any)
.ccw/worktree/
├── {group-id}/ # Worktree to merge
│ ├── .execution/ # Execution logs
│ └── (modified files)
```
## Output
```
.workflow/.execution/
├── worktree-status.json # Updated with merge status
└── merge-log.md # Merge history and details
```
---
## Implementation Details
### Command Parameters
- `--plan=<session>`: Plan session ID (auto-detect if not provided)
- `--group=<id>`: Merge specific group (e.g., EG-001)
- `--all`: Merge all completed groups in dependency order
- `--cleanup`: Remove worktree after successful merge
**Examples**:
```bash
# Merge single group
--group=EG-001
# Merge all completed groups
--all
# Merge and cleanup
--group=EG-001 --cleanup
```
---
## Phase 1: Load Status
**Objective**: Read completion status and group metadata.
### Step 1.1: Load worktree-status.json
Read group completion status.
**Status File Location**: `.workflow/.execution/worktree-status.json`
**Required Fields**:
- `plan_session`: Planning session ID
- `groups`: Map of group status objects keyed by group ID, each with:
  - `status`: "completed" / "in_progress" / "failed"
  - `worktree_path`: Path to worktree
  - `branch`: Branch name
  - `merge_status`: "not_merged" / "merged"
### Step 1.2: Load execution-groups.json
Read group dependencies.
**Metadata File**: `.workflow/.planning/{session}/execution-groups.json`
**Required Fields**:
- `groups[]`: Group metadata entries, each with:
  - `group_id`: Group identifier
  - `dependencies_on_groups[]`: Groups that must merge first
  - `cross_group_files[]`: Files modified by multiple groups
### Step 1.3: Determine Merge Targets
Select groups to merge based on parameters.
**Selection Logic**:
| Parameter | Behavior |
|-----------|----------|
| `--group=EG-001` | Merge only specified group |
| `--all` | Merge all groups with status="completed" |
| Neither | Prompt user to select from completed groups |
**Validation**:
- Group must have status="completed"
- Group's worktree must exist
- Group must not already be merged
---
## Phase 2: Validate Dependencies
**Objective**: Ensure dependencies are merged before target group.
### Step 2.1: Build Dependency Graph
Create merge order based on inter-group dependencies.
**Dependency Analysis**:
1. For target group, check `dependencies_on_groups[]`
2. For each dependency, verify merge status
3. Build topological order for merge sequence
**Example**:
```
EG-003 depends on [EG-001, EG-002]
Merge order: EG-001, EG-002, then EG-003
```
### Step 2.2: Check Dependency Status
Validate all dependencies are merged.
**Check Logic**:
```
For each dependency in target.dependencies_on_groups:
├─ Check dependency.merge_status == "merged"
├─ If not merged: Error or prompt to merge dependency first
└─ If merged: Continue
```
**Options on Dependency Not Met**:
1. **Error**: Refuse to merge until dependencies merged
2. **Cascade**: Automatically merge dependencies first (if --all)
3. **Force**: Allow merge anyway (dangerous, use --force)
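A sketch of this dependency gate (Node.js; `status` is the parsed worktree-status.json, and the cascade/force behaviours follow the options above):
```javascript
// Verify that every group the target depends on is already merged.
function checkDependencies(targetGroup, status, { cascade = false, force = false } = {}) {
  const unmerged = (targetGroup.dependencies_on_groups || [])
    .filter(dep => status.groups[dep]?.merge_status !== 'merged')

  if (unmerged.length === 0 || force) return { ok: true, mergeFirst: [] }
  if (cascade) return { ok: true, mergeFirst: unmerged } // merge these before the target
  throw new Error(`Merge blocked: dependencies not merged yet: ${unmerged.join(', ')}`)
}
```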
---
## Phase 3: Conflict Detection
**Objective**: Check for cross-group file conflicts before merge.
### Step 3.1: Load Cross-Group Files
Read files modified by multiple groups.
**Source**: `execution-groups.json``groups[].cross_group_files[]`
**Example**:
```json
{
"group_id": "EG-001",
"cross_group_files": [
{
"file": "src/shared/config.ts",
"conflicting_groups": ["EG-002"]
}
]
}
```
### Step 3.2: Check File Modifications
Compare file state across groups and main.
**Conflict Check**:
1. For each cross-group file:
- Get version on main branch
- Get version in target worktree
- Get version in conflicting group worktrees
2. If all different → conflict likely
3. If same → safe to merge
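A sketch of the per-file comparison using `git show` (Node.js; branch names are assumed to come from worktree-status.json):
```javascript
const { execSync } = require('child_process')

// Compare one cross-group file across main and two branches; identical blobs mean
// the merge is safe for that file, distinct versions mean a conflict is likely.
function fileDiffers(file, targetBranch, otherBranch) {
  const show = ref => {
    try { return execSync(`git show ${ref}:${file}`, { encoding: 'utf8' }) }
    catch { return null } // the file may not exist on that ref
  }
  const versions = [show('main'), show(targetBranch), show(otherBranch)]
  const distinct = new Set(versions.filter(v => v !== null))
  return distinct.size > 1
}
```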
### Step 3.3: Report Conflicts
Display potential conflicts to user.
**Conflict Report**:
```markdown
## Potential Merge Conflicts
### File: src/shared/config.ts
- Modified by: EG-001 (target), EG-002
- Status: EG-002 already merged to main
- Action: Manual review recommended
### File: package.json
- Modified by: EG-001 (target), EG-003
- Status: EG-003 not yet merged
- Action: Safe to merge (EG-003 will handle conflict)
```
**User Decision**:
- Proceed with merge (handle conflicts manually if occur)
- Abort and review files first
- Coordinate with other group maintainers
---
## Phase 4: Merge Worktree
**Objective**: Merge group's branch from worktree to main.
### Step 4.1: Prepare Main Branch
Ensure main branch is up to date.
```bash
git checkout main
git pull origin main
```
### Step 4.2: Merge Group Branch
Merge from worktree's branch.
**Merge Command**:
```bash
# Strategy 1: Regular merge (creates merge commit)
git merge --no-ff {branch-name} -m "Merge {group-id}: {description}"
# Strategy 2: Squash merge (single commit)
git merge --squash {branch-name}
git commit -m "feat: {group-id} - {description}"
```
**Default**: Use regular merge to preserve history.
### Step 4.3: Handle Merge Conflicts
If conflicts occur, provide resolution guidance.
**Conflict Resolution**:
```bash
# List conflicting files
git status
# For each conflict:
# 1. Open file and resolve markers
# 2. Stage resolved file
git add {file}
# Complete merge
git commit
```
**Conflict Types**:
- **Cross-group file**: Expected, requires manual merge
- **Unexpected conflict**: Investigate cause
### Step 4.4: Push to Remote
Push merged changes.
```bash
git push origin main
```
**Validation**:
- Check CI/tests pass after merge
- Verify no regressions
---
## Phase 5: Update Status & Cleanup
**Objective**: Mark group as merged, optionally remove worktree.
### Step 5.1: Update worktree-status.json
Mark group as merged.
**Status Update**:
```json
{
"groups": {
"EG-001": {
"merge_status": "merged",
"merged_at": "2025-02-03T15:00:00Z",
"merged_to": "main",
"merge_commit": "abc123def456"
}
}
}
```
### Step 5.2: Append to merge-log.md
Record merge details.
**Merge Log Entry**:
```markdown
## EG-001: Frontend Development
- **Merged At**: 2025-02-03 15:00:00
- **Branch**: feature/cplan-auth-eg-001-frontend
- **Commit**: abc123def456
- **Tasks Completed**: 15/15
- **Conflicts**: 1 file (src/shared/config.ts) - resolved
- **Status**: Successfully merged to main
```
### Step 5.3: Cleanup Worktree (optional)
Remove worktree if --cleanup flag provided.
**Cleanup Commands**:
```bash
# Remove worktree
git worktree remove .ccw/worktree/{group-id}
# Delete branch (optional)
git branch -d {branch-name}
git push origin --delete {branch-name}
```
**When to Cleanup**:
- Group successfully merged
- No need to revisit worktree
- Disk space needed
**When to Keep**:
- May need to reference execution logs
- Other groups may need to coordinate
- Debugging merge issues
### Step 5.4: Display Summary
Report merge results.
**Summary Output**:
```
✓ Merged EG-001 to main
- Branch: feature/cplan-auth-eg-001-frontend
- Commit: abc123def456
- Tasks: 15/15 completed
- Conflicts: 1 resolved
- Worktree: Cleaned up
Remaining groups:
- EG-002: completed, ready to merge
- EG-003: in progress, waiting for dependencies
```
---
## Configuration
| Parameter | Default | Description |
|-----------|---------|-------------|
| `--plan` | Auto-detect | Plan session ID |
| `--group` | Interactive | Group to merge |
| `--all` | false | Merge all completed groups |
| `--cleanup` | false | Remove worktree after merge |
| `--force` | false | Ignore dependency checks |
| `--squash` | false | Use squash merge instead of regular |
---
## Error Handling
| Situation | Action |
|-----------|--------|
| Group not completed | Error: Complete execution first |
| Group already merged | Skip with warning |
| Dependencies not merged | Error or cascade merge (--all) |
| Merge conflict | Pause for manual resolution |
| Worktree not found | Error: Check worktree path |
| Push fails | Rollback merge, report error |
---
## Merge Strategies
### Strategy 1: Sequential Merge
Merge groups one by one in dependency order.
```bash
# Merge EG-001
--group=EG-001 --cleanup
# Merge EG-002
--group=EG-002 --cleanup
# Merge EG-003 (depends on EG-001, EG-002)
--group=EG-003 --cleanup
```
**Use When**:
- Want to review each merge carefully
- High risk of conflicts
- Testing between merges
### Strategy 2: Bulk Merge
Merge all completed groups at once.
```bash
--all --cleanup
```
**Use When**:
- Groups are independent
- Low conflict risk
- Want fast integration
### Strategy 3: Dependency-First
Merge dependencies before dependent groups.
```bash
# Automatically merges EG-001, EG-002 before EG-003
--group=EG-003 --cascade
```
**Use When**:
- Complex dependency graph
- Want automatic ordering
---
## Best Practices
### Before Merge
1. **Verify Completion**: Check all tasks in group completed
2. **Review Conflicts**: Read conflicts.json for cross-group files
3. **Test Worktree**: Run tests in worktree before merge
4. **Update Main**: Ensure main branch is current
### During Merge
1. **Follow Order**: Respect dependency order
2. **Review Conflicts**: Carefully resolve cross-group conflicts
3. **Test After Merge**: Run CI/tests after each merge
4. **Commit Often**: Keep merge history clean
### After Merge
1. **Update Status**: Ensure worktree-status.json reflects merge
2. **Keep Logs**: Archive merge-log.md for reference
3. **Cleanup Gradually**: Don't rush to delete worktrees
4. **Notify Team**: Inform others of merged groups
---
## Rollback Strategy
If merge causes issues:
```bash
# Find merge commit
git log --oneline
# Revert merge
git revert -m 1 {merge-commit}
git push origin main
# Or reset (dangerous, loses history)
git reset --hard HEAD~1
git push origin main --force
# Update status
# Mark group as not_merged in worktree-status.json
```
---
## Example Workflow
### Scenario: 3 Groups Complete
**Status**:
- EG-001: Completed (no dependencies)
- EG-002: Completed (no dependencies)
- EG-003: Completed (depends on EG-001, EG-002)
### Step 1: Merge Independent Groups
```bash
# Merge EG-001
--group=EG-001
# Test after merge
npm test
# Merge EG-002
--group=EG-002
# Test after merge
npm test
```
### Step 2: Merge Dependent Group
```bash
# EG-003 depends on EG-001, EG-002 (already merged)
--group=EG-003
# Final test
npm test
```
### Step 3: Cleanup All Worktrees
```bash
# Remove all merged worktrees
git worktree remove .ccw/worktree/EG-001
git worktree remove .ccw/worktree/EG-002
git worktree remove .ccw/worktree/EG-003
```
---
## When to Use This Workflow
### Use worktree-merge when:
- Execution groups completed via unified-execute-parallel
- Ready to integrate changes to main branch
- Need dependency-aware merge order
- Want to handle cross-group conflicts systematically
### Manual merge when:
- Single group with no dependencies
- Comfortable with Git merge commands
- No cross-group conflicts to handle
---
**Now execute worktree-merge for completed execution groups**