feat: initialize monorepo with package.json for CCW workflow platform

catlog22
2026-02-03 14:42:20 +08:00
parent 5483a72e9f
commit 39b80b3386
267 changed files with 99597 additions and 2658 deletions


---
description: Serial collaborative planning with Plan Note - Single-agent sequential
argument-hint: "TASK=\"<description>\" [--max-domains=5] [--focus=<domain>]"
---
# Codex Collaborative-Plan-With-File Workflow
## Quick Start
Serial collaborative planning workflow using **Plan Note** architecture. Processes sub-domains sequentially, generates task plans, and detects conflicts across domains.
**Core workflow**: Understand → Template → Sequential Planning → Conflict Detection → Completion
**Key features**:
- **plan-note.md**: Shared collaborative document with pre-allocated sections
- **Serial domain processing**: Each sub-domain planned sequentially via CLI
- **Conflict detection**: Automatic file, dependency, and strategy conflict scanning
- **No merge needed**: Pre-allocated sections eliminate merge conflicts
**Note**: Codex does not support parallel agent execution. All domains are processed serially.
## Overview
This workflow enables structured planning through sequential phases:
1. **Understanding & Template** - Analyze requirements, identify sub-domains, create plan-note.md template
2. **Sequential Planning** - Process each sub-domain serially via CLI analysis
3. **Conflict Detection** - Scan plan-note.md for conflicts across all domains
4. **Completion** - Generate human-readable plan.md summary
**Note**: Codex does not support parallel agent execution. All domains are processed serially.
## Target Task
**$TASK**
**Parameters**:
- `--max-domains`: Maximum sub-domains to identify (default: 5)
- `--focus`: Focus specific domain (optional)
## Execution Process
```
Session Detection:
├─ Check if planning session exists for task
├─ EXISTS + plan-note.md exists → Continue mode
└─ NOT_FOUND → New session mode
Phase 1: Understanding & Template Creation
├─ Analyze task description (Glob/Grep/Bash)
├─ Identify 2-5 sub-domains
├─ Create plan-note.md template
└─ Generate requirement-analysis.json
Phase 2: Sequential Sub-Domain Planning (Serial)
├─ For each sub-domain (LOOP):
│ ├─ Gemini CLI: Generate detailed plan
│ ├─ Extract task summary
│ └─ Update plan-note.md section
└─ Complete all domains sequentially
Phase 3: Conflict Detection
├─ Parse plan-note.md
├─ Extract all tasks from all sections
├─ Detect file/dependency/strategy conflicts
└─ Update conflict markers in plan-note.md
Phase 4: Completion
├─ Generate conflicts.json
├─ Generate plan.md summary
└─ Ready for execution
Output:
├─ .workflow/.planning/{slug}-{date}/plan-note.md (executable)
├─ .workflow/.planning/{slug}-{date}/requirement-analysis.json (metadata)
├─ .workflow/.planning/{slug}-{date}/conflicts.json (conflict report)
├─ .workflow/.planning/{slug}-{date}/plan.md (human-readable)
└─ .workflow/.planning/{slug}-{date}/agents/{domain}/plan.json (detailed)
```
The key innovation is the **Plan Note** architecture - a shared collaborative document with pre-allocated sections per sub-domain, eliminating merge conflicts.
## Output Structure
```
.workflow/.planning/CPLAN-{slug}-{date}/
├── plan-note.md              # Collaborative planning document
├── requirement-analysis.json # Phase 1: Sub-domain assignments
├── agents/{domain}/plan.json # Phase 2: Detailed plan per domain
├── conflicts.json            # Phase 3: Conflict report
└── plan.md                   # Phase 4: Human-readable summary
```
## Output Artifacts
### Phase 1: Understanding & Template
| Artifact | Purpose |
|----------|---------|
| `plan-note.md` | Collaborative template with pre-allocated task pool and evidence sections per domain |
| `requirement-analysis.json` | Sub-domain assignments, TASK ID ranges, complexity assessment |
### Phase 2: Sequential Planning
| Artifact | Purpose |
|----------|---------|
| `agents/{domain}/plan.json` | Detailed implementation plan per domain |
| Updated `plan-note.md` | Task pool and evidence sections filled for each domain |
### Phase 3: Conflict Detection
| Artifact | Purpose |
|----------|---------|
| `conflicts.json` | Detected conflicts with types, severity, and resolutions |
| Updated `plan-note.md` | Conflict markers section populated |
### Phase 4: Completion
| Artifact | Purpose |
|----------|---------|
| `plan.md` | Human-readable summary with requirements, tasks, and conflicts |
---
## Implementation Details
### Session Initialization
The workflow automatically generates a unique session identifier and directory structure.
**Session ID Format**: `CPLAN-{slug}-{date}`
- `slug`: Lowercase alphanumeric (CJK characters preserved), max 30 chars
- `date`: YYYY-MM-DD format (UTC+8)
**Session Directory**: `.workflow/.planning/{sessionId}/`
**Auto-Detection**: If the session folder exists with plan-note.md, the workflow automatically enters continue mode.
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

const taskSlug = "$TASK".toLowerCase().replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-').substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `CPLAN-${taskSlug}-${dateStr}`
const sessionFolder = `.workflow/.planning/${sessionId}`
const planNotePath = `${sessionFolder}/plan-note.md`
const requirementsPath = `${sessionFolder}/requirement-analysis.json`
const conflictsPath = `${sessionFolder}/conflicts.json`
const planPath = `${sessionFolder}/plan.md`

// Auto-detect mode
const sessionExists = fs.existsSync(sessionFolder)
const hasPlanNote = sessionExists && fs.existsSync(planNotePath)
const mode = hasPlanNote ? 'continue' : 'new'

if (!sessionExists) {
  bash(`mkdir -p ${sessionFolder}/agents`)
}
```
**Session Variables**:
- `sessionId`: Unique session identifier
- `sessionFolder`: Base directory for all artifacts
- `maxDomains`: Maximum number of sub-domains (default: 5)
---
## Phase 1: Understanding & Template Creation
**Objective**: Analyze task requirements, identify parallelizable sub-domains, and create the plan-note.md template with pre-allocated sections.
### Step 1.1: Analyze Task Description
Use built-in tools to understand the task scope and identify sub-domains.
**Analysis Activities**:
1. **Extract task keywords** - Identify key terms and concepts from the task description
2. **Identify sub-domains** - Split into 2-5 parallelizable focus areas based on task complexity
3. **Assess complexity** - Evaluate overall task complexity (Low/Medium/High)
4. **Search for references** - Find related documentation, README files, and architecture guides
```javascript
// 1. Extract task keywords
const taskKeywords = extractKeywords("$TASK")

// 2. Identify sub-domains via analysis
// Example: "Implement real-time notification system"
// → Domains: [Backend API, Frontend UI, Notification Service, Data Storage, Testing]
const subDomains = identifySubDomains("$TASK", {
  maxDomains: 5, // --max-domains parameter
  keywords: taskKeywords
})

// 3. Estimate scope
const complexity = assessComplexity("$TASK")
```
**Sub-Domain Identification Patterns**:
| Pattern | Keywords |
|---------|----------|
| Backend API | 服务, 后端, API, 接口 |
| Frontend | 界面, 前端, UI, 视图 |
| Database | 数据, 存储, 数据库, 持久化 |
| Testing | 测试, 验证, QA |
| Infrastructure | 部署, 基础, 运维, 配置 |
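As an illustration, keyword matching over these patterns could be sketched as follows. This is a hypothetical implementation of `identifySubDomains` for the table above; the workflow's actual analysis is richer than a pure keyword scan.

```javascript
// Hypothetical sketch: map pattern keywords (from the table above) to sub-domains.
const DOMAIN_PATTERNS = [
  { name: 'Backend API', keywords: ['服务', '后端', 'API', '接口'] },
  { name: 'Frontend', keywords: ['界面', '前端', 'UI', '视图'] },
  { name: 'Database', keywords: ['数据', '存储', '数据库', '持久化'] },
  { name: 'Testing', keywords: ['测试', '验证', 'QA'] },
  { name: 'Infrastructure', keywords: ['部署', '基础', '运维', '配置'] }
]

function identifySubDomains(task, { maxDomains = 5 } = {}) {
  // A domain matches when any of its keywords appears in the task description
  const matched = DOMAIN_PATTERNS.filter(p =>
    p.keywords.some(k => task.includes(k))
  )
  return matched.slice(0, maxDomains).map(p => ({ name: p.name }))
}

console.log(identifySubDomains('实现前端界面与后端 API').map(d => d.name))
// → [ 'Backend API', 'Frontend' ]
```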
**Ambiguity Handling**: When the task description is unclear or has multiple interpretations, gather user clarification before proceeding.
### Step 1.2: Create plan-note.md Template
Generate a structured template with pre-allocated sections for each sub-domain.
**plan-note.md Structure**:
- **YAML Frontmatter**: session_id, original_requirement, created_at, complexity, sub_domains, status
- **Section: 需求理解**: Core objectives, key points, constraints, split strategy
- **Section: 任务池 - {Domain N}**: Pre-allocated task section per domain (TASK-{range})
- **Section: 依赖关系**: Auto-generated after all domains complete
- **Section: 冲突标记**: Populated in Phase 3
- **Section: 上下文证据 - {Domain N}**: Evidence section per domain
**TASK ID Range Allocation**: Each domain receives a non-overlapping range of 100 IDs (e.g., Domain 1: TASK-001~100, Domain 2: TASK-101~200).
Template (two-domain example shown; sections repeat per domain):
```markdown
---
session_id: ${sessionId}
original_requirement: |
  $TASK
created_at: ${getUtc8ISOString()}
complexity: ${complexity}
sub_domains: ${subDomains.map(d => d.name).join(', ')}
status: in_progress
---
# 协作规划
**Session ID**: ${sessionId}
**任务**: $TASK
**复杂度**: ${complexity}
**创建时间**: ${getUtc8ISOString()}
---
## 需求理解
### 核心目标
${extractObjectives("$TASK")}
### 关键要点
${extractKeyPoints("$TASK")}
### 约束条件
${extractConstraints("$TASK")}
### 拆分策略
${subDomains.length} 个子领域:
${subDomains.map((d, i) => `${i+1}. **${d.name}**: ${d.description}`).join('\n')}
---
## 任务池 - ${subDomains[0].name}
*(TASK-001 ~ TASK-100)*
*待由规划流程填充*
---
## 任务池 - ${subDomains[1].name}
*(TASK-101 ~ TASK-200)*
*待由规划流程填充*
---
## 依赖关系
*所有子域规划完成后自动生成*
---
## 冲突标记
*冲突检测阶段生成*
---
## 上下文证据 - ${subDomains[0].name}
*相关文件、现有模式、约束等*
---
## 上下文证据 - ${subDomains[1].name}
*相关文件、现有模式、约束等*
---
```
### Step 1.3: Generate requirement-analysis.json
Create the sub-domain configuration document.
**requirement-analysis.json Structure**:
| Field | Purpose |
|-------|---------|
| `session_id` | Session identifier |
| `original_requirement` | Task description |
| `complexity` | Low / Medium / High |
| `sub_domains[]` | Array of focus areas with descriptions |
| `sub_domains[].focus_area` | Domain name |
| `sub_domains[].description` | Domain scope description |
| `sub_domains[].task_id_range` | Non-overlapping TASK ID range |
| `sub_domains[].estimated_effort` | Effort estimate |
| `sub_domains[].dependencies` | Cross-domain dependencies |
| `total_domains` | Number of domains identified |
**Success Criteria**:
- 2-5 clear sub-domains identified
- Each sub-domain can be planned independently
- Plan Note template includes all pre-allocated sections
- TASK ID ranges have no overlap (100 IDs per domain)
- Requirements understanding is comprehensive
---
## Phase 2: Sequential Sub-Domain Planning
**Objective**: Process each sub-domain serially via CLI analysis, generating detailed plans and updating plan-note.md.
**Execution Model**: Serial processing - plan each domain completely before moving to the next. Later domains can reference earlier planning results.
### Step 2.1: Domain Planning Loop
For each sub-domain in sequence:
1. Execute Gemini CLI analysis for the current domain
2. Parse CLI output into structured plan
3. Save detailed plan as `agents/{domain}/plan.json`
4. Update plan-note.md with task summaries and evidence
**Planning Guideline**: Wait for each domain's CLI analysis to complete before proceeding to the next.
### Step 2.2: CLI Planning for Each Domain
Execute synchronous CLI analysis to generate a detailed implementation plan.
**CLI Analysis Scope**:
- **PURPOSE**: Generate detailed implementation plan for the specific domain
- **CONTEXT**: Domain description, related codebase files, prior domain results
- **TASK**: Analyze domain, identify all necessary tasks, define dependencies, estimate effort
- **EXPECTED**: JSON output with tasks, summaries, interdependencies, total effort
**Analysis Output Should Include**:
- Task breakdown with IDs from the assigned range
- Dependencies within and across domains
- Files to modify with specific locations
- Effort and complexity estimates per task
- Conflict risk assessment for each task
### Step 2.3: Update plan-note.md After Each Domain
Parse CLI output and update the plan-note.md sections for the current domain.
**Task Summary Format** (for "任务池" section):
- Task header: `### TASK-{ID}: {Title} [{domain}]`
- Fields: 状态 (status), 复杂度 (complexity), 依赖 (dependencies), 范围 (scope)
- Modification points: File paths with line ranges and change summaries
- Conflict risk assessment: Low/Medium/High
**Evidence Format** (for "上下文证据" section):
- Related files with relevance descriptions
- Existing patterns identified in codebase
- Constraints discovered during analysis
**Success Criteria**:
- All domains processed sequentially
- `agents/{domain}/plan.json` created for each domain
- `plan-note.md` updated with all task pools and evidence sections
- Task summaries follow consistent format
---
## Phase 3: Conflict Detection
**Objective**: Analyze plan-note.md for conflicts across all domain contributions.
### Step 3.1: Parse plan-note.md
Extract all tasks from all "任务池" sections.
**Extraction Activities**:
1. Read plan-note.md content
2. Parse YAML frontmatter for session metadata
3. Identify all "任务池" sections by heading pattern
4. Extract tasks matching pattern: `### TASK-{ID}: {Title} [{domain}]`
5. Parse task details: status, complexity, dependencies, modification points, conflict risk
6. Consolidate into unified task list
### Step 3.2: Detect Conflicts
Scan all tasks for three categories of conflicts.
**Conflict Types**:
| Type | Severity | Detection Logic | Resolution |
|------|----------|-----------------|------------|
| file_conflict | high | Same file:location modified by multiple domains | Coordinate modification order or merge changes |
| dependency_cycle | critical | Circular dependencies in task graph (DFS detection) | Remove or reorganize dependencies |
| strategy_conflict | medium | Multiple high-risk tasks in same file from different domains | Review approaches and align on single strategy |
**Detection Activities**:
1. **File Conflicts**: Group modification points by file:location, identify locations modified by multiple domains
2. **Dependency Cycles**: Build dependency graph from task dependencies, detect cycles using depth-first search
3. **Strategy Conflicts**: Group tasks by files they modify, identify files with high-risk tasks from multiple domains
### Step 3.3: Generate Conflict Artifacts
Write conflict results and update plan-note.md.
**conflicts.json Structure**:
- `detected_at`: Detection timestamp
- `total_conflicts`: Number of conflicts found
- `conflicts[]`: Array of conflict objects with type, severity, tasks involved, description, suggested resolution
**plan-note.md Update**: Locate "冲突标记" section and populate with conflict summary markdown. If no conflicts found, mark as "✅ 无冲突检测到" (no conflicts detected).
**Success Criteria**:
- All tasks extracted and analyzed
- `conflicts.json` written with detection results
- `plan-note.md` updated with conflict markers
- All conflict types checked (file, dependency, strategy)
---
## Phase 4: Completion
**Objective**: Generate human-readable plan summary and finalize workflow.
### Step 4.1: Generate plan.md
Create a human-readable summary from plan-note.md content.
**plan.md Structure**:
| Section | Content |
|---------|---------|
| Header | Session ID, task description, creation time |
| 需求 (Requirements) | Copied from plan-note.md "需求理解" section |
| 子领域拆分 (Sub-Domains) | Each domain with description, task range, estimated effort |
| 任务概览 (Task Overview) | All tasks with complexity, dependencies, and target files |
| 冲突报告 (Conflict Report) | Summary of detected conflicts or "无冲突" (no conflicts) |
| 执行指令 (Execution) | Command to execute the plan |
### Step 4.2: Display Completion Summary
Present session statistics and next steps.
**Summary Content**:
- Session ID and directory path
- Total domains planned
- Total tasks generated
- Conflict status
- Execution command for next step
---
## Implementation Code
The following sections show the reference implementation for each phase.
### Step 1.3: Generate requirement-analysis.json
```javascript
const requirements = {
session_id: sessionId,
original_requirement: "$TASK",
complexity: complexity,
sub_domains: subDomains.map((domain, index) => ({
focus_area: domain.name,
description: domain.description,
task_id_range: [index * 100 + 1, (index + 1) * 100],
estimated_effort: domain.effort,
dependencies: domain.dependencies || []
})),
total_domains: subDomains.length
}
Write(requirementsPath, JSON.stringify(requirements, null, 2))
```
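For illustration, a hypothetical two-domain result (all values invented) might look like:

```json
{
  "session_id": "CPLAN-realtime-notification-2026-02-03",
  "original_requirement": "Implement real-time notification system",
  "complexity": "Medium",
  "sub_domains": [
    {
      "focus_area": "Backend API",
      "description": "Notification delivery endpoints and push channel",
      "task_id_range": [1, 100],
      "estimated_effort": "Medium",
      "dependencies": []
    },
    {
      "focus_area": "Frontend UI",
      "description": "Notification center and toast components",
      "task_id_range": [101, 200],
      "estimated_effort": "Low",
      "dependencies": ["Backend API"]
    }
  ],
  "total_domains": 2
}
```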
---
### Phase 2: Sequential Sub-Domain Planning
#### Step 2.1: Plan Each Domain Sequentially
```javascript
for (let i = 0; i < subDomains.length; i++) {
const domain = subDomains[i]
const domainFolder = `${sessionFolder}/agents/${domain.slug}`
const domainPlanPath = `${domainFolder}/plan.json`
console.log(`Planning Domain ${i+1}/${subDomains.length}: ${domain.name}`)
// Execute Gemini CLI for this domain
// ⏳ Wait for completion before proceeding to next domain
}
```
#### Step 2.2: CLI Planning for Current Domain
**CLI Call** (synchronous):
```bash
ccw cli -p "
PURPOSE: Generate detailed implementation plan for domain '${domain.name}' in task: $TASK
Success: Comprehensive task breakdown with clear dependencies and effort estimates
DOMAIN CONTEXT:
- Focus Area: ${domain.name}
- Description: ${domain.description}
- Task ID Range: ${domain.task_id_range[0]}-${domain.task_id_range[1]}
- Related Domains: ${relatedDomains.join(', ')}
PRIOR DOMAINS (if any):
${completedDomains.map(d => `- ${d.name}: ${completedTaskCount} tasks`).join('\n')}
TASK:
• Analyze ${domain.name} in detail
• Identify all necessary tasks (use TASK-ID range: ${domain.task_id_range[0]}-${domain.task_id_range[1]})
• Define task dependencies and order
• Estimate effort and complexity for each task
• Identify file modifications needed
• Assess conflict risks with other domains
MODE: analysis
CONTEXT: @**/*
EXPECTED:
JSON output with:
- tasks[]: {id, title, description, complexity, depends_on[], files_to_modify[], conflict_risk}
- summary: Overview of domain plan
- interdependencies: Links to other domains
- total_effort: Estimated effort points
OUTPUT FORMAT: Structured JSON
" --tool gemini --mode analysis
```
#### Step 2.3: Parse and Update plan-note.md
After CLI completes for each domain:
```javascript
// Parse CLI output
const planJson = parseCLIOutput(cliResult)
// Save detailed plan
Write(domainPlanPath, JSON.stringify(planJson, null, 2))
// Extract task summary
const taskSummary = planJson.tasks.map((t, idx) => `
### TASK-${t.id}: ${t.title} [${domain.slug}]
**状态**: 规划中
**复杂度**: ${t.complexity}
**依赖**: ${t.depends_on.length > 0 ? t.depends_on.map(d => `TASK-${d}`).join(', ') : 'None'}
**范围**: ${t.description}
**修改点**:
${t.files_to_modify.map(f => `- \`${f.path}:${f.line_range}\`: ${f.summary}`).join('\n')}
**冲突风险**: ${t.conflict_risk}
`).join('\n')
// Update plan-note.md
updatePlanNoteSection(
planNotePath,
`## 任务池 - ${domain.name}`,
taskSummary
)
// Extract evidence
const evidence = `
**相关文件**:
${planJson.related_files.map(f => `- ${f.path}: ${f.relevance}`).join('\n')}
**现有模式**:
${planJson.existing_patterns.map(p => `- ${p}`).join('\n')}
**约束**:
${planJson.constraints.map(c => `- ${c}`).join('\n')}
`
updatePlanNoteSection(
planNotePath,
`## 上下文证据 - ${domain.name}`,
evidence
)
```
#### Step 2.4: Process All Domains
```javascript
const completedDomains = []
for (const domain of subDomains) {
// Step 2.2: CLI call (synchronous)
const cliResult = executeCLI(domain)
// Step 2.3: Parse and update
updatePlanNoteFromCLI(domain, cliResult)
completedDomains.push(domain)
console.log(`✅ Completed: ${domain.name}`)
}
```
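`executeCLI()` and `parseCLIOutput()` are assumed helpers. A hedged sketch of the JSON-extraction step, assuming the CLI prints a JSON object either bare or inside a fenced block:

```javascript
// Hypothetical sketch of parseCLIOutput(): pull the JSON payload out of CLI stdout.
function parseCLIOutput(stdout) {
  // Prefer a fenced ```json block if present, otherwise use the raw output
  const fenced = stdout.match(/```(?:json)?\s*([\s\S]*?)```/)
  const raw = fenced ? fenced[1] : stdout
  // Take the outermost object to tolerate surrounding log noise
  const start = raw.indexOf('{')
  const end = raw.lastIndexOf('}')
  if (start === -1 || end === -1) throw new Error('No JSON object found in CLI output')
  return JSON.parse(raw.slice(start, end + 1))
}
```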
---
### Phase 3: Conflict Detection
#### Step 3.1: Parse plan-note.md
```javascript
const planContent = Read(planNotePath)
const sections = parsePlanNoteSections(planContent)
const allTasks = []
// Extract tasks from all domains
for (const section of sections) {
if (section.heading.includes('任务池')) {
const tasks = extractTasks(section.content)
allTasks.push(...tasks)
}
}
```
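`extractTasks()` is assumed above; a minimal sketch matching the documented task header pattern `### TASK-{ID}: {Title} [{domain}]` (detail fields beyond id/title/domain omitted):

```javascript
// Hypothetical sketch of extractTasks(): parse task headers from a 任务池 section.
function extractTasks(sectionContent) {
  const tasks = []
  const headerRe = /^### TASK-(\d+): (.+) \[([^\]]+)\]$/gm
  let m
  while ((m = headerRe.exec(sectionContent)) !== null) {
    tasks.push({ id: `TASK-${m[1]}`, title: m[2], domain: m[3] })
  }
  return tasks
}
```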
#### Step 3.2: Detect Conflicts
```javascript
const conflicts = []
// 1. File conflicts
const fileMap = new Map()
for (const task of allTasks) {
for (const file of task.files_to_modify) {
const key = `${file.path}:${file.line_range}`
if (!fileMap.has(key)) fileMap.set(key, [])
fileMap.get(key).push(task)
}
}
for (const [location, tasks] of fileMap.entries()) {
if (tasks.length > 1) {
const agents = new Set(tasks.map(t => t.domain))
if (agents.size > 1) {
conflicts.push({
type: 'file_conflict',
severity: 'high',
location: location,
tasks_involved: tasks.map(t => t.id),
agents_involved: Array.from(agents),
description: `Multiple domains modifying: ${location}`,
suggested_resolution: 'Coordinate modification order'
})
}
}
}
// 2. Dependency cycles
const depGraph = buildDependencyGraph(allTasks)
const cycles = detectCycles(depGraph)
for (const cycle of cycles) {
conflicts.push({
type: 'dependency_cycle',
severity: 'critical',
tasks_involved: cycle,
description: `Circular dependency: ${cycle.join(' → ')}`,
suggested_resolution: 'Remove or reorganize dependencies'
})
}
// Write conflicts.json
Write(conflictsPath, JSON.stringify({
detected_at: getUtc8ISOString(),
total_conflicts: conflicts.length,
conflicts: conflicts
}, null, 2))
```
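Two helpers referenced in this phase are left undefined. Hedged sketches follow, with assumed data shapes (graph as `Map<taskId, depIds[]>`; tasks carrying `domain`, `conflict_risk`, `files_to_modify`):

```javascript
// Hypothetical sketch of detectCycles(): DFS with gray/black marking.
function detectCycles(graph) {
  const color = new Map() // absent = unvisited, 1 = in current DFS path, 2 = done
  const cycles = []
  const visit = (node, path) => {
    color.set(node, 1)
    for (const dep of graph.get(node) || []) {
      if (color.get(dep) === 1) {
        // dep is on the current path → close the loop from its first occurrence
        cycles.push([...path.slice(path.indexOf(dep)), dep])
      } else if (!color.has(dep)) {
        visit(dep, [...path, dep])
      }
    }
    color.set(node, 2)
  }
  for (const node of graph.keys()) {
    if (!color.has(node)) visit(node, [node])
  }
  return cycles
}

// Hypothetical sketch of strategy-conflict detection:
// high-risk tasks touching the same file from different domains.
function detectStrategyConflicts(allTasks) {
  const byFile = new Map()
  for (const task of allTasks) {
    if (task.conflict_risk !== 'High') continue
    for (const file of task.files_to_modify) {
      if (!byFile.has(file.path)) byFile.set(file.path, [])
      byFile.get(file.path).push(task)
    }
  }
  const conflicts = []
  for (const [filePath, tasks] of byFile.entries()) {
    const domains = new Set(tasks.map(t => t.domain))
    if (domains.size > 1) {
      conflicts.push({
        type: 'strategy_conflict',
        severity: 'medium',
        location: filePath,
        tasks_involved: tasks.map(t => t.id),
        description: `High-risk tasks from multiple domains modify: ${filePath}`,
        suggested_resolution: 'Review approaches and align on single strategy'
      })
    }
  }
  return conflicts
}
```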
#### Step 3.3: Update plan-note.md
```javascript
const conflictMarkdown = generateConflictMarkdown(conflicts)
updatePlanNoteSection(
planNotePath,
'## 冲突标记',
conflictMarkdown
)
```
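`updatePlanNoteSection()` is assumed throughout. A minimal sketch of its string-manipulation core, under the hypothetical name `updatePlanNoteSectionText` (operating on content rather than a file path, and treating the next `## ` heading or `---` separator as the section boundary):

```javascript
// Hypothetical sketch: replace the body under a heading, recreating it if missing.
function updatePlanNoteSectionText(content, heading, newBody) {
  const lines = content.split('\n')
  const start = lines.findIndex(l => l.trim() === heading)
  if (start === -1) {
    // Section not found → recreate defensively (see Error Handling table)
    return content + `\n${heading}\n${newBody}\n`
  }
  let end = lines.length
  for (let i = start + 1; i < lines.length; i++) {
    if (lines[i].startsWith('## ') || lines[i].trim() === '---') { end = i; break }
  }
  return [...lines.slice(0, start + 1), newBody, ...lines.slice(end)].join('\n')
}
```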
---
### Phase 4: Completion
#### Step 4.1: Generate plan.md
```markdown
# 实现计划
**Session**: ${sessionId}
**任务**: $TASK
**创建**: ${getUtc8ISOString()}
---
## 需求
${copySection(planNotePath, '## 需求理解')}
---
## 子领域拆分
${subDomains.map((domain, i) => `
### ${i+1}. ${domain.name}
- **描述**: ${domain.description}
- **任务范围**: TASK-${domain.task_id_range[0]} ~ TASK-${domain.task_id_range[1]}
- **预估工作量**: ${domain.effort}
`).join('\n')}
---
## 任务概览
${allTasks.map(t => `
### ${t.id}: ${t.title}
- **复杂度**: ${t.complexity}
- **依赖**: ${t.depends_on.length > 0 ? t.depends_on.join(', ') : 'None'}
- **文件**: ${t.files_to_modify.map(f => f.path).join(', ')}
`).join('\n')}
---
## 冲突报告
${conflicts.length > 0
? `检测到 ${conflicts.length} 个冲突:\n${copySection(planNotePath, '## 冲突标记')}`
: '✅ 无冲突检测到'}
---
## 执行指令
\`\`\`bash
/workflow:unified-execute-with-file ${planPath}
\`\`\`
```
#### Step 4.2: Write Summary
```javascript
Write(planPath, planMarkdown)
```
**Success Criteria**:
- `plan.md` generated with complete summary
- All artifacts present in session directory
- User informed of completion and next steps
---
## Configuration
| Parameter | Default | Description |
|-----------|---------|-------------|
| `--max-domains` | 5 | Maximum sub-domains to identify |
| `--focus` | None | Focus specific domain (optional) |
---
## Error Handling & Recovery
| Situation | Action | Recovery |
|-----------|--------|----------|
| CLI timeout | Retry with shorter, focused prompt | Skip domain or reduce scope |
| No tasks generated | Review domain description | Retry with refined description |
| Section not found in plan-note | Recreate section defensively | Continue with new section |
| Conflict detection fails | Continue with empty conflicts | Note in completion summary |
| Session folder conflict | Append timestamp suffix | Create unique folder |
---
## Iteration Patterns
### New Planning Session
```
User initiates: TASK="task description"
├─ No session exists → New session mode
├─ Analyze task and identify sub-domains
├─ Create plan-note.md template
├─ Generate requirement-analysis.json
├─ Process each domain serially:
│ ├─ CLI analysis → plan.json
│ └─ Update plan-note.md sections
├─ Detect conflicts
├─ Generate plan.md summary
└─ Report completion
```
### Continue Existing Session
```
User resumes: TASK="same task"
├─ Session exists → Continue mode
├─ Load plan-note.md and requirement-analysis.json
├─ Resume from first incomplete domain
└─ Continue sequential processing
```
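Resuming "from first incomplete domain" can be sketched as scanning for the first domain missing its `plan.json` (hypothetical helper; the `fs` handle is injected here only to keep the sketch self-contained and testable):

```javascript
// Hypothetical sketch: a domain is complete once agents/{slug}/plan.json exists.
function firstIncompleteDomain(subDomains, sessionFolder, fs) {
  return subDomains.find(
    d => !fs.existsSync(`${sessionFolder}/agents/${d.slug}/plan.json`)
  )
}
```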
---
## Best Practices
### Before Starting Planning
1. **Clear Task Description**: Detailed requirements lead to better sub-domain splitting
2. **Reference Documentation**: Ensure latest README and design docs are identified
3. **Clarify Ambiguities**: Resolve unclear requirements before committing to sub-domains
### During Planning
1. **Review Plan Note**: Check plan-note.md between phases to verify progress
2. **Verify Domains**: Ensure sub-domains are truly independent and parallelizable
3. **Check Dependencies**: Cross-domain dependencies should be documented explicitly
4. **Inspect Details**: Review `agents/{domain}/plan.json` for specifics when needed
### After Planning
1. **Resolve Conflicts**: Address high/critical conflicts before execution
2. **Review Summary**: Check plan.md for completeness and accuracy
3. **Validate Tasks**: Ensure all tasks have clear scope and modification targets
---
## When to Use This Workflow
### Use collaborative-plan-with-file when:
- Complex tasks requiring multi-domain decomposition
- Need structured planning with conflict detection
- Tasks spanning multiple modules or systems
- Want documented planning process for team review
- Preparing for multi-step execution workflows
### Use direct execution when:
- Simple, single-domain tasks
- Clear implementation path without ambiguity
- Quick follow-up to existing planning session
### Consider alternatives when:
- Exploring ideas without clear direction → use `workflow:brainstorm-with-file`
- Analyzing existing code/system → use `workflow:analyze-with-file`
- Lightweight planning for simple features → use `workflow:lite-plan`
- Ready to execute existing plan → use `workflow:unified-execute-with-file`
---


---
description: Universal execution engine for consuming planning/brainstorm/analysis outputs
argument-hint: "PLAN=\"<path>\" [--auto-commit] [--dry-run]"
---
# Codex Unified-Execute-With-File Workflow
## Quick Start
Universal execution engine consuming **any** planning output and executing tasks serially with progress tracking.
**Core workflow**: Load Plan → Parse Tasks → Validate → Execute Sequentially → Track Progress → Verify
**Key features**:
- **Format-agnostic**: Supports plan.json, plan-note.md, synthesis.json, conclusions.json
- **Serial execution**: Process tasks sequentially with dependency ordering
- **Progress tracking**: execution.md overview + execution-events.md detailed log
- **Auto-commit**: Optional conventional commits after each task
- **Dry-run mode**: Simulate execution without making changes
## Target Plan
**$PLAN**
**Parameters**:
- `--auto-commit`: Auto-commit after each task (conventional commits)
- `--dry-run`: Simulate execution without making changes
## Overview
This workflow enables reliable task execution through sequential phases:
1. **Plan Detection & Parsing** - Load and parse planning output in any format
2. **Pre-Execution Analysis** - Validate feasibility and identify potential issues
3. **Serial Task Execution** - Execute tasks one by one with dependency ordering
4. **Progress Tracking** - Update execution logs with results and discoveries
5. **Completion** - Generate summary and offer follow-up actions
## Execution Process
```
Session Initialization:
├─ Detect or load plan file
├─ Parse tasks from plan (JSON, Markdown, or other formats)
├─ Build task dependency graph
└─ Validate for cycles and feasibility
Pre-Execution:
├─ Analyze plan structure
├─ Identify modification targets (files)
├─ Check file conflicts and feasibility
└─ Generate execution strategy
Serial Execution (Task by Task):
├─ For each task:
│ ├─ Extract task context
│ ├─ Load previous task outputs
│ ├─ Route to Codex CLI for execution
│ ├─ Track progress in execution.md
│ ├─ Auto-commit if enabled
│ └─ Next task
└─ Complete all tasks
Post-Execution:
├─ Generate execution summary
├─ Record completion status
├─ Identify any failures
└─ Suggest next steps
Output:
├─ .workflow/.execution/{session-id}/execution.md (overview + timeline)
├─ .workflow/.execution/{session-id}/execution-events.md (detailed log)
└─ Git commits (if --auto-commit enabled)
```
The key innovation is the **unified event log** that serves as both human-readable progress tracker and machine-parseable state store.
## Output Structure
```
.workflow/.execution/EXEC-{slug}-{date}-{random}/
├── execution.md # Plan overview + task table + timeline
└── execution-events.md # ⭐ Unified log (all executions) - SINGLE SOURCE OF TRUTH
```
## Output Artifacts
### Phase 1: Session Initialization
| Artifact | Purpose |
|----------|---------|
| `execution.md` | Overview of plan source, task table, execution timeline |
| Session folder | `.workflow/.execution/{sessionId}/` |
### Phase 2: Pre-Execution Analysis
| Artifact | Purpose |
|----------|---------|
| `execution.md` (updated) | Feasibility assessment and validation results |
### Phase 3-4: Serial Execution & Progress
| Artifact | Purpose |
|----------|---------|
| `execution-events.md` | Unified log: all task executions with results |
| `execution.md` (updated) | Real-time progress updates and task status |
### Phase 5: Completion
| Artifact | Purpose |
|----------|---------|
| Final `execution.md` | Complete execution summary and statistics |
| Final `execution-events.md` | Complete execution history |
---
## Implementation Details
### Session Initialization
The workflow creates a unique session for tracking execution.
**Session ID Format**: `EXEC-{slug}-{date}-{random}`
- `slug`: Plan filename without extension, lowercased, max 30 chars
- `date`: YYYY-MM-DD format (UTC+8)
- `random`: 7-char random suffix for uniqueness
**Session Directory**: `.workflow/.execution/{sessionId}/`
**Plan Path Resolution**:
1. If `$PLAN` provided explicitly, use it
2. Otherwise, auto-detect from common locations:
   - `.workflow/IMPL_PLAN.md`
   - `.workflow/.planning/*/plan-note.md`
   - `.workflow/.brainstorm/*/synthesis.json`
   - `.workflow/.analysis/*/conclusions.json`
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

// Resolve plan path
let planPath = "$PLAN"
if (!fs.existsSync(planPath)) {
  // Auto-detect from common locations
  const candidates = [
    '.workflow/IMPL_PLAN.md',
    '.workflow/.planning/*/plan-note.md',
    '.workflow/.brainstorm/*/synthesis.json',
    '.workflow/.analysis/*/conclusions.json'
  ]
  planPath = autoDetectPlan(candidates)
}

// Create session (slug: basename without extension, lowercased, max 30 chars)
const planSlug = path.basename(planPath, path.extname(planPath))
  .toLowerCase().replace(/[^a-z0-9-]/g, '').substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)
const randomId = Math.random().toString(36).substring(2, 9) // 7-char random suffix
const sessionId = `EXEC-${planSlug}-${dateStr}-${randomId}`
const sessionFolder = `.workflow/.execution/${sessionId}`
const executionPath = `${sessionFolder}/execution.md`
const eventsPath = `${sessionFolder}/execution-events.md`

bash(`mkdir -p ${sessionFolder}`)
```
**Session Variables**:
- `sessionId`: Unique session identifier
- `sessionFolder`: Base directory for artifacts
- `planPath`: Resolved path to plan file
- `autoCommit`: Boolean flag for auto-commit mode
- `dryRun`: Boolean flag for dry-run mode
---

## Phase 1: Plan Detection & Parsing

### Step 1.1: Load Plan File

**Objective**: Load the plan file, parse tasks, build execution order, and validate for cycles.

Detect the plan format and route to the appropriate parser. The specific filenames (`synthesis.json`, `conclusions.json`) must be checked before the generic `.json` extension, otherwise their branches are unreachable:

```javascript
// Detect plan format and parse
let tasks = []

if (planPath.endsWith('synthesis.json')) {
  // Brainstorm synthesis
  const synthesis = JSON.parse(Read(planPath))
  tasks = convertSynthesisToTasks(synthesis)
} else if (planPath.endsWith('conclusions.json')) {
  // Analysis conclusions
  const conclusions = JSON.parse(Read(planPath))
  tasks = convertConclusionsToTasks(conclusions)
} else if (planPath.endsWith('.json')) {
  // JSON plan (from lite-plan, collaborative-plan, etc.)
  const planJson = JSON.parse(Read(planPath))
  tasks = parsePlanJson(planJson)
} else if (planPath.endsWith('.md')) {
  // Markdown plan (IMPL_PLAN.md, plan-note.md, etc.)
  const planMd = Read(planPath)
  tasks = parsePlanMarkdown(planMd)
} else {
  throw new Error(`Unsupported plan format: ${planPath}`)
}
```
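The parsers themselves are not shown in this document. A minimal sketch of what `parsePlanJson()` might do — normalize raw task entries and fill the required fields with safe defaults (the assumed input shape `{ tasks: [...] }` and the default values are illustrative):

```javascript
// Hypothetical parsePlanJson() sketch -- normalizes tasks to the fields
// the executor expects: id, title, description, files_to_modify, depends_on.
function parsePlanJson(planJson) {
  const rawTasks = planJson.tasks || []
  return rawTasks.map((t, i) => ({
    id: t.id || `TASK-${String(i + 1).padStart(3, '0')}`,
    title: t.title || t.description || `Task ${i + 1}`,
    description: t.description || '',
    complexity: t.complexity || 'medium',
    files_to_modify: t.files_to_modify || [],
    depends_on: t.depends_on || []
  }))
}
```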
**Supported Formats**:

| Format | Source | Parser |
|--------|--------|--------|
| plan.json | lite-plan, collaborative-plan | parsePlanJson() |
| plan-note.md | collaborative-plan | parsePlanMarkdown() |
| synthesis.json | brainstorm session | convertSynthesisToTasks() |
| conclusions.json | analysis session | convertConclusionsToTasks() |

**Parsing Activities**:
1. Read plan file content
2. Detect format from filename or content structure
3. Route to appropriate parser
4. Extract tasks with required fields: id, title, description, files_to_modify, depends_on

### Step 1.2: Build Execution Order

Analyze task dependencies and calculate the execution sequence:

```javascript
// Handle task dependencies
const depGraph = buildDependencyGraph(tasks)

// Validate: no cycles
validateNoCycles(depGraph)

// Calculate execution order (simple topological sort)
const executionOrder = topologicalSort(depGraph, tasks)

// In Codex: serial execution, no parallel waves
console.log(`Total tasks: ${tasks.length}`)
console.log(`Execution order: ${executionOrder.map(t => t.id).join(' → ')}`)
```
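The ordering helpers (`buildDependencyGraph`, `topologicalSort`) are referenced but not defined here. A minimal sketch using Kahn's algorithm, where cycle detection falls out of the sort (the graph representation — task id mapped to its `depends_on` list — is an assumption):

```javascript
// Illustrative dependency-ordering sketch, not the actual implementation.
// depGraph: Map of task id -> array of prerequisite task ids.
function buildDependencyGraph(tasks) {
  const graph = new Map()
  for (const t of tasks) graph.set(t.id, t.depends_on || [])
  return graph
}

function topologicalSort(depGraph, tasks) {
  const byId = new Map(tasks.map(t => [t.id, t]))
  const remaining = new Map()   // id -> count of unmet dependencies
  const dependents = new Map()  // id -> ids that depend on it
  for (const [id, deps] of depGraph) {
    remaining.set(id, deps.length)
    for (const d of deps) {
      if (!dependents.has(d)) dependents.set(d, [])
      dependents.get(d).push(id)
    }
  }
  // Start with tasks that have no dependencies
  const queue = [...remaining].filter(([, n]) => n === 0).map(([id]) => id)
  const order = []
  while (queue.length > 0) {
    const id = queue.shift()
    order.push(byId.get(id))
    for (const dep of dependents.get(id) || []) {
      remaining.set(dep, remaining.get(dep) - 1)
      if (remaining.get(dep) === 0) queue.push(dep)
    }
  }
  // Any task left with unmet dependencies implies a cycle
  if (order.length !== tasks.length) throw new Error('Circular dependency detected')
  return order
}
```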
**Execution Order Calculation**:
1. Build dependency graph from task dependencies
2. Validate for circular dependencies (no cycles allowed)
3. Calculate a topological sort for the sequential execution order
4. In Codex, serial mode means executing tasks one by one

**Dependency Validation**:
- Check that all referenced dependencies exist
- Detect cycles and report them as a critical error
- Order tasks based on dependencies

### Step 1.3: Generate execution.md

Create the main execution tracking document.

**execution.md Structure**:
- **Header**: Session ID, plan source, execution timestamp
- **Plan Overview**: Summary from plan metadata
- **Task List**: Table with ID, title, complexity, dependencies, status
- **Execution Timeline**: Updated as tasks complete

```markdown
# Execution Plan

**Session**: ${sessionId}
**Plan Source**: ${planPath}
**Started**: ${getUtc8ISOString()}

---

## Plan Overview

| Field | Value |
|-------|-------|
| Total tasks | ${tasks.length} |
| Plan source | ${planPath} |
| Execution mode | ${dryRun ? 'dry run' : 'live'} |
| Auto-commit | ${autoCommit ? 'enabled' : 'disabled'} |

---

## Task List

| ID | Title | Complexity | Dependencies | Status |
|----|-------|------------|--------------|--------|
${tasks.map(t => `| ${t.id} | ${t.title} | ${t.complexity || 'medium'} | ${t.depends_on?.join(',') || '-'} | ⏳ |`).join('\n')}

---

## Execution Timeline

*(updated in execution-events.md)*
```

**Success Criteria**:
- execution.md created with complete plan overview
- Task list includes all tasks from the plan
- Execution order calculated with no cycles
- Ready for feasibility analysis

---

## Phase 2: Pre-Execution Analysis

**Objective**: Validate feasibility and identify potential issues before starting execution.

### Step 2.1: Analyze Plan Structure

Examine task dependencies, file modifications, and potential conflicts.

**Analysis Activities**:
1. **Check file conflicts**: Identify files modified by multiple tasks
2. **Check missing dependencies**: Verify all referenced dependencies exist
3. **Check file existence**: Identify files that will be created vs modified
4. **Estimate complexity**: Assess overall execution complexity

```javascript
const issues = []

// Check file conflicts: map each file to the tasks that modify it
const fileMap = new Map()
for (const task of tasks) {
  for (const file of task.files_to_modify || []) {
    if (!fileMap.has(file)) fileMap.set(file, [])
    fileMap.get(file).push(task.id)
  }
}

for (const [file, taskIds] of fileMap.entries()) {
  if (taskIds.length > 1) {
    // Sequential modification of the same file
    console.log(`⚠️ Sequential modification: ${file} (${taskIds.join(' → ')})`)
  }
}

// Check missing dependencies
for (const task of tasks) {
  for (const depId of task.depends_on || []) {
    if (!tasks.find(t => t.id === depId)) {
      issues.push(`Task ${task.id} depends on missing ${depId}`)
    }
  }
}

if (issues.length > 0) {
  console.log(`⚠️ Issues found:\n${issues.map(i => `- ${i}`).join('\n')}`)
}
```

**Issue Detection**:
- Sequential modifications to the same file (document for ordered execution)
- Missing dependency targets
- High-complexity patterns that may need special handling

### Step 2.2: Generate Feasibility Report

Document analysis results and recommendations.

**Feasibility Report Content**:
- Issues found (if any)
- File conflict warnings
- Dependency validation results
- Complexity assessment
- Recommended execution strategy

### Step 2.3: Update execution.md

Append feasibility analysis results.

**Success Criteria**:
- All validation checks completed
- Issues documented in execution.md
- No blocking issues found (or user confirmed to proceed)
- Ready for task execution

---

## Phase 3: Serial Task Execution

**Objective**: Execute tasks one by one in dependency order, tracking progress and recording results.

**Execution Model**: Serial execution - process tasks sequentially, one at a time. Each task must complete before the next begins.

### Step 3.1: Execute Tasks Sequentially

For each task in execution order:
1. Load context from previous task results
2. Route to Codex CLI for execution
3. Wait for completion
4. Record results in execution-events.md
5. Auto-commit if enabled
6. Move to next task

**Execution Loop**:
```
For each task in executionOrder:
├─ Extract task context
├─ Load previous task outputs
├─ Execute task via CLI (synchronous)
├─ Record result with timestamp
├─ Auto-commit if enabled
└─ Continue to next task
```

```javascript
const executionLog = []
const taskResults = new Map()

for (const task of executionOrder) {
  console.log(`\n📋 Executing: ${task.id} - ${task.title}`)

  const eventRecord = {
    timestamp: getUtc8ISOString(),
    task_id: task.id,
    task_title: task.title,
    status: 'in_progress',
    notes: []
  }

  try {
    // Load context from previous tasks
    const priorOutputs = executionOrder
      .slice(0, executionOrder.indexOf(task))
      .map(t => taskResults.get(t.id))
      .filter(Boolean)

    const context = {
      task: task,
      prior_outputs: priorOutputs,
      plan_source: planPath
    }

    // Execute task via Codex CLI
    if (dryRun) {
      console.log(`[DRY RUN] ${task.id}`)
      eventRecord.status = 'completed'
      eventRecord.notes.push('Dry run - no changes made')
    } else {
      await executeTaskViaCLI(task, context)
      eventRecord.status = 'completed'

      // Auto-commit if enabled
      if (autoCommit) {
        commitTask(task)
        eventRecord.notes.push(`✅ Committed: ${task.id}`)
      }
    }
  } catch (error) {
    eventRecord.status = 'failed'
    eventRecord.error = error.message
    eventRecord.notes.push(`❌ Error: ${error.message}`)
    console.log(`❌ Failed: ${task.id}`)
  }

  executionLog.push(eventRecord)
  taskResults.set(task.id, eventRecord)  // make results available to later tasks
  updateExecutionEvents(eventsPath, executionLog)
  updateExecutionMd(executionPath, task, eventRecord)
}
```

### Step 3.2: Execute Task via CLI

Execute each task using the Codex CLI in synchronous mode.

**CLI Execution Scope**:
- **PURPOSE**: Execute task from plan
- **TASK DETAILS**: ID, title, description, required changes
- **PRIOR CONTEXT**: Results from previous tasks
- **REQUIRED CHANGES**: Files to modify with specific locations
- **MODE**: write (modification mode)
- **EXPECTED**: Files modified as specified, no test failures

**CLI Call** (synchronous):
```bash
ccw cli -p "
PURPOSE: Execute task '${task.id}: ${task.title}' from plan
Success: Task completed as specified in plan

TASK DETAILS:
- ID: ${task.id}
- Title: ${task.title}
- Description: ${task.description}
- Complexity: ${task.complexity}
- Estimated Effort: ${task.effort}

REQUIRED CHANGES:
${task.files_to_modify?.map(f => `- \`${f.path}\`: ${f.summary}`).join('\n')}

PRIOR CONTEXT:
${priorOutputs.map(p => `- ${p.task_id}: ${p.notes.join('; ')}`).join('\n')}

TASK ACTIONS:
${task.actions?.map((a, i) => `${i+1}. ${a}`).join('\n')}

MODE: write
CONTEXT: @**/* | Plan Source: ${planPath} | Task: ${task.id}
EXPECTED:
- Modifications implemented as specified
- Code follows project conventions
- No test failures introduced
- All required files updated
CONSTRAINTS: Exactly as specified in plan | No additional scope
" --tool codex --mode write
```

**CLI Parameters**:
- `--tool codex`: Use Codex for execution
- `--mode write`: Allow file modifications
- Synchronous execution: wait for completion

### Step 3.3: Track Progress

Record task execution results in the unified event log.

**execution-events.md Structure**:
- **Header**: Session metadata
- **Event Timeline**: One entry per task with results
- **Event Format**:
  - Task ID and title
  - Timestamp and duration
  - Status (completed/failed)
  - Summary of changes
  - Any notes or issues discovered

**Event Recording Activities**:
1. Capture execution timestamp
2. Record task status and duration
3. Document any modifications made
4. Note any issues or discoveries
5. Append event to execution-events.md

```javascript
function updateExecutionEvents(eventsPath, log) {
  const eventsMd = `# Execution Log

**Session**: ${sessionId}
**Updated**: ${getUtc8ISOString()}

---

## Event Timeline
${log.map((e, i) => `
### Event ${i+1}: ${e.task_id}
**Time**: ${e.timestamp}
**Task**: ${e.task_title}
**Status**: ${e.status === 'completed' ? '✅' : e.status === 'failed' ? '❌' : '⏳'}
**Notes**:
${e.notes.map(n => `- ${n}`).join('\n')}
${e.error ? `**Error**: ${e.error}` : ''}
`).join('\n')}

---

## Statistics
- **Total**: ${log.length}
- **Completed**: ${log.filter(e => e.status === 'completed').length}
- **Failed**: ${log.filter(e => e.status === 'failed').length}
- **In progress**: ${log.filter(e => e.status === 'in_progress').length}
`
  Write(eventsPath, eventsMd)
}

function updateExecutionMd(mdPath, task, record) {
  const content = Read(mdPath)
  // Update this task's status cell, preserving the other table columns
  const updated = content.replace(
    new RegExp(`(\\| ${task.id} \\|.*\\| )⏳( \\|)`),
    `$1${record.status === 'completed' ? '✅' : '❌'}$2`
  )
  Write(mdPath, updated)
}
```

### Step 3.4: Auto-Commit (if enabled)

Commit task changes with conventional commit format.

**Auto-Commit Process**:
1. Get changed files from git status
2. Filter to task.files_to_modify
3. Stage files: `git add`
4. Generate commit message based on task type
5. Commit: `git commit -m`

**Commit Message Format**:
- Type: feat, fix, refactor, test, docs (inferred from task)
- Scope: file/module affected (inferred from files modified)
- Subject: Task title or description
- Footer: Task ID and plan reference

**Success Criteria**:
- All tasks executed sequentially
- Results recorded in execution-events.md
- Auto-commits created (if enabled)
- Failed tasks logged for review

---

## Phase 4: Completion

**Objective**: Summarize execution results and offer follow-up actions.

### Step 4.1: Collect Statistics

Gather execution metrics.

**Metrics Collection**:
- Total tasks executed
- Successfully completed count
- Failed count
- Success rate percentage
- Total duration
- Artifacts generated

```javascript
const completed = executionLog.filter(e => e.status === 'completed').length
const failed = executionLog.filter(e => e.status === 'failed').length
```

### Step 4.2: Generate Summary

Update execution.md with final results.

**Summary Content**:
- Execution completion timestamp
- Statistics table
- Task status table (completed/failed)
- Commit log (if auto-commit enabled)
- Any failed tasks requiring attention

```javascript
const summary = `
# Execution Complete

**Session**: ${sessionId}
**Finished**: ${getUtc8ISOString()}

## Results

| Metric | Value |
|--------|-------|
| Total tasks | ${executionLog.length} |
| Succeeded | ${completed} ✅ |
| Failed | ${failed} ❌ |
| Success rate | ${Math.round(completed / executionLog.length * 100)}% |

## Next Steps
${failed > 0 ? `
### ❌ Fix Failed Tasks
\`\`\`bash
# Inspect failure details
cat ${eventsPath}

# Re-run failed tasks
${executionLog.filter(e => e.status === 'failed').map(e => `# ${e.task_id}`).join('\n')}
\`\`\`
` : `
### ✅ Execution Complete
All tasks completed successfully!
`}

## Commit Log
${executionLog.filter(e => e.notes.some(n => n.includes('Committed'))).map(e => `- ${e.task_id}: ✅`).join('\n')}
`
Write(executionPath, summary)
```

### Step 4.3: Display Completion Summary

Present results to the user.

**Summary Output**:
- Session ID and folder path
- Statistics (completed/failed/total)
- Failed tasks (if any)
- Execution log location
- Next step recommendations

```javascript
console.log(`
✅ Execution complete: ${sessionId}
Succeeded: ${completed}/${executionLog.length}
${failed > 0 ? `Failed: ${failed}` : 'No failures'}

📁 Details: ${eventsPath}
`)
```

**Success Criteria**:
- execution.md finalized with complete summary
- execution-events.md contains all task records
- User informed of completion status
- All artifacts successfully created
---
## Configuration

### Plan Format Detection

The workflow automatically detects the plan format from the filename:

| File Pattern | Source | Parser |
|--------------|--------|--------|
| `synthesis.json` | Brainstorm session | convertSynthesisToTasks() |
| `conclusions.json` | Analysis session | convertConclusionsToTasks() |
| `*.json` | lite-plan, collaborative-plan | parsePlanJson() |
| `*.md` | IMPL_PLAN.md, plan-note.md | parsePlanMarkdown() |

### Execution Modes

| Mode | Behavior | Use Case |
|------|----------|----------|
| Normal | Execute tasks, track progress | Standard execution |
| `--auto-commit` | Execute + commit each task | Tracked progress with git history |
| `--dry-run` | Simulate execution, no changes | Validate plan before executing |

### Task Dependencies

Tasks can declare dependencies on other tasks:
- `depends_on: ["TASK-001", "TASK-002"]` - wait for these tasks
- Tasks are executed in topological order
- Circular dependencies are detected and reported as an error

### Auto-Commit Format

Commits follow the Conventional Commits format:

```
{type}({scope}): {description}

{task_id}: {task_title}
Files: {list of modified files}
```

---

## Error Handling & Recovery

| Situation | Action | Recovery |
|-----------|--------|----------|
| Plan not found | Check file path and common locations | Verify plan path or pass an explicit `--plan` flag |
| Unsupported format | Detect format from extension/content | Use a supported plan format |
| Circular dependency | Stop execution, report error | Remove or reorganize dependencies |
| Missing dependency | Report during validation | Verify plan completeness |
| Task execution fails | Record failure in log | Review error details in execution-events.md |
| File conflict | Document in execution-events.md | Resolve conflict manually or adjust plan order |
| Missing file | Log as warning, continue | Verify files will be created by prior tasks |

---

## Execution Flow Diagram

```
Load Plan File
├─ Detect format (JSON/Markdown)
├─ Parse tasks
└─ Build dependency graph

Validate
├─ Check for cycles
├─ Analyze file conflicts
└─ Calculate execution order

Execute Sequentially
├─ Task 1: CLI execution → record result
├─ Task 2: CLI execution → record result
├─ Task 3: CLI execution → record result
└─ (repeat for all tasks)

Track Progress
├─ Update execution.md after each task
└─ Append event to execution-events.md

Complete
├─ Generate final summary
├─ Report statistics
└─ Offer follow-up actions
```

---

## Best Practices
### Before Execution
1. **Review Plan**: Check plan.md or plan-note.md for completeness
2. **Validate Format**: Ensure plan is in supported format
3. **Check Dependencies**: Verify dependency order is logical
4. **Test First**: Use `--dry-run` mode to validate before actual execution
5. **Backup**: Commit any pending changes before starting
### During Execution
1. **Monitor Progress**: Check execution-events.md for real-time updates
2. **Handle Failures**: Review error details and decide whether to continue
3. **Check Commits**: Verify auto-commits are correct if enabled
4. **Track Context**: Prior task results are available to subsequent tasks
### After Execution
1. **Review Results**: Check execution.md summary and statistics
2. **Verify Changes**: Inspect modified files match expected changes
3. **Handle Failures**: Address any failed tasks
4. **Update History**: Check git log for conventional commits if enabled
5. **Plan Next Steps**: Use completion artifacts for future work
---
## When to Use This Workflow
### Use unified-execute-with-file when:
- Ready to execute a complete plan from planning workflow
- Need reliable sequential task execution with tracking
- Want automatic git commits for audit trail
- Executing plans from brainstorm, analysis, or collaborative-plan workflows
- Need to validate plan before full execution (--dry-run)
### Use direct CLI execution when:
- Single task that doesn't need full plan structure
- Quick implementation without tracking overhead
- Small changes that don't need git history
### Consider alternatives when:
- Still planning/exploring → use `workflow:brainstorm-with-file` or `workflow:analyze-with-file`
- Need complex task planning → use `workflow:collaborative-plan-with-file`
- Debugging or troubleshooting → use `workflow:debug-with-file`
---
## Command Examples
### Standard Execution

```bash
PLAN=".workflow/.planning/CPLAN-auth-2025-01-27/plan-note.md"
```

Execute the plan with standard options.

### With Auto-Commit

```bash
PLAN=".workflow/.planning/CPLAN-auth-2025-01-27/plan-note.md" \
  --auto-commit
```

Execute and automatically commit changes after each task.

### Dry-Run Mode

```bash
PLAN=".workflow/.planning/CPLAN-auth-2025-01-27/plan-note.md" \
  --dry-run
```

Simulate execution without making changes.

### Auto-Detect Plan

```bash
# No PLAN specified - auto-detects from .workflow/ directories
# (searches .workflow/ for recent plans)
```
---