mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-03-26 19:56:37 +08:00
delete: remove unified-execute-with-file skill documentation and implementation details
---
name: collaborative-plan-with-file
description: Serial collaborative planning with Plan Note - Multi-domain serial task generation, unified plan-note.md, conflict detection. No agent delegation.
argument-hint: "[-y|--yes] <task description> [--max-domains=5]"
---

# Collaborative-Plan-With-File Workflow

## Quick Start

Serial collaborative planning workflow using **Plan Note** architecture. Analyzes requirements, identifies sub-domains, generates detailed plans per domain serially, and detects conflicts across domains.

```bash
# Basic usage
/codex:collaborative-plan-with-file "Implement real-time notification system"

# With options
/codex:collaborative-plan-with-file "Refactor authentication module" --max-domains=4
/codex:collaborative-plan-with-file "Add payment gateway support" -y
```

**Core workflow**: Understand → Template → Serial Domain Planning → Conflict Detection → Completion

**Key features**:
- **plan-note.md**: Shared collaborative document with pre-allocated sections per domain
- **Serial domain planning**: Each sub-domain planned sequentially with full codebase context
- **Conflict detection**: Automatic file, dependency, and strategy conflict scanning
- **No merge needed**: Pre-allocated sections eliminate merge conflicts

## Auto Mode

When `--yes` or `-y` is passed: auto-approve domain splits and skip confirmations.

## Overview

This workflow enables structured planning through sequential phases:

1. **Understanding & Template** — Analyze requirements, identify sub-domains, create plan-note.md template
2. **Serial Domain Planning** — Plan each sub-domain sequentially using direct search and analysis
3. **Conflict Detection** — Scan plan-note.md for conflicts across all domains
4. **Completion** — Generate human-readable plan.md summary

The key innovation is the **Plan Note** architecture — a shared collaborative document with pre-allocated sections per sub-domain, eliminating merge conflicts even in serial execution.

```
┌────────────────────────────────────────────────────────────────────┐
│                  PLAN NOTE COLLABORATIVE PLANNING                  │
├────────────────────────────────────────────────────────────────────┤
│                                                                    │
│ Phase 1: Understanding & Template Creation                         │
│  ├─ Analyze requirements (inline search & analysis)                │
│  ├─ Identify 2-5 sub-domains (focus areas)                         │
│  ├─ Create plan-note.md with pre-allocated sections                │
│  └─ Assign TASK ID ranges (no conflicts)                           │
│                                                                    │
│ Phase 2: Serial Domain Planning                                    │
│  ┌──────────────┐                                                  │
│  │  Domain 1    │→ Explore codebase → Generate .task/TASK-*.json   │
│  │  Section 1   │→ Fill task pool + evidence in plan-note.md       │
│  └──────┬───────┘                                                  │
│  ┌──────▼───────┐                                                  │
│  │  Domain 2    │→ Explore codebase → Generate .task/TASK-*.json   │
│  │  Section 2   │→ Fill task pool + evidence in plan-note.md       │
│  └──────┬───────┘                                                  │
│  ┌──────▼───────┐                                                  │
│  │  Domain N    │→ ...                                             │
│  └──────────────┘                                                  │
│                                                                    │
│ Phase 3: Conflict Detection (Single Source)                        │
│  ├─ Parse plan-note.md (all sections)                              │
│  ├─ Detect file/dependency/strategy conflicts                      │
│  └─ Update plan-note.md conflict section                           │
│                                                                    │
│ Phase 4: Completion (No Merge)                                     │
│  ├─ Collect domain .task/*.json → session .task/*.json             │
│  ├─ Generate plan.md (human-readable)                              │
│  └─ Ready for execution                                            │
│                                                                    │
└────────────────────────────────────────────────────────────────────┘
```

## Output Structure

> **Schema**: `cat ~/.ccw/workflows/cli-templates/schemas/task-schema.json`

```
{projectRoot}/.workflow/.planning/CPLAN-{slug}-{date}/
├── plan-note.md                  # ⭐ Core: Requirements + Tasks + Conflicts
├── requirement-analysis.json     # Phase 1: Sub-domain assignments
├── domains/                      # Phase 2: Per-domain plans
│   ├── {domain-1}/
│   │   └── .task/                # Per-domain task JSON files
│   │       ├── TASK-001.json
│   │       └── ...
│   ├── {domain-2}/
│   │   └── .task/
│   │       ├── TASK-101.json
│   │       └── ...
│   └── ...
├── plan.json                     # Plan overview (plan-overview-base-schema.json)
├── .task/                        # ⭐ Merged task JSON files (all domains)
│   ├── TASK-001.json
│   ├── TASK-101.json
│   └── ...
├── conflicts.json                # Phase 3: Conflict report
└── plan.md                       # Phase 4: Human-readable summary
```

## Output Artifacts

### Phase 1: Understanding & Template

| Artifact | Purpose |
|----------|---------|
| `plan-note.md` | Collaborative template with pre-allocated task pool and evidence sections per domain |
| `requirement-analysis.json` | Sub-domain assignments, TASK ID ranges, complexity assessment |

### Phase 2: Serial Domain Planning

| Artifact | Purpose |
|----------|---------|
| `domains/{domain}/.task/TASK-*.json` | Task JSON files per domain (one file per task with convergence) |
| Updated `plan-note.md` | Task pool and evidence sections filled for each domain |

### Phase 3: Conflict Detection

| Artifact | Purpose |
|----------|---------|
| `conflicts.json` | Detected conflicts with types, severity, and resolutions |
| Updated `plan-note.md` | Conflict markers section populated |

### Phase 4: Completion

| Artifact | Purpose |
|----------|---------|
| `.task/TASK-*.json` | Merged task JSON files from all domains (consumable by unified-execute) |
| `plan.json` | Plan overview following plan-overview-base-schema.json |
| `plan.md` | Human-readable summary with requirements, tasks, and conflicts |

---

## Implementation Details

### Session Initialization

#### Step 0: Initialize Session

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

// Detect project root
const projectRoot = Bash(`git rev-parse --show-toplevel 2>/dev/null || pwd`).trim()

// Parse arguments
const autoMode = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const maxDomainsMatch = $ARGUMENTS.match(/--max-domains=(\d+)/)
const maxDomains = maxDomainsMatch ? parseInt(maxDomainsMatch[1]) : 5

// Clean task description
const taskDescription = $ARGUMENTS
  .replace(/--yes|-y|--max-domains=\d+/g, '')
  .trim()

const slug = taskDescription.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `CPLAN-${slug}-${dateStr}`
const sessionFolder = `${projectRoot}/.workflow/.planning/${sessionId}`

// Auto-detect continue: session folder + plan-note.md exists → continue mode
// If continue → load existing state and resume from incomplete phase
Bash(`mkdir -p ${sessionFolder}/domains`)
```
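For reference, the argument parsing and session-ID derivation above can be restated as a pure function. This is a sketch: `Bash` and `$ARGUMENTS` belong to the workflow runtime, so the function below takes the raw argument string directly, and the function name is illustrative.

```javascript
// Sketch: derive session parameters from a raw argument string.
// Mirrors Step 0 above; `deriveSession` is an illustrative name.
function deriveSession(args, now = new Date()) {
  const autoMode = args.includes('--yes') || args.includes('-y')
  const maxMatch = args.match(/--max-domains=(\d+)/)
  const maxDomains = maxMatch ? parseInt(maxMatch[1], 10) : 5
  const taskDescription = args.replace(/--yes|-y|--max-domains=\d+/g, '').trim()
  const slug = taskDescription.toLowerCase()
    .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')   // keep CJK chars in the slug
    .substring(0, 30)
  // UTC+8 date, matching getUtc8ISOString above
  const dateStr = new Date(now.getTime() + 8 * 60 * 60 * 1000)
    .toISOString().substring(0, 10)
  return { autoMode, maxDomains, taskDescription, sessionId: `CPLAN-${slug}-${dateStr}` }
}
```

Testing the derivation in isolation this way is useful because the slug and date rules determine whether a rerun lands in continue mode (same sessionId) or starts a new session.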

**Session Variables**:
- `sessionId`: Unique session identifier
- `sessionFolder`: Base directory for all artifacts
- `maxDomains`: Maximum number of sub-domains (default: 5)
- `autoMode`: Boolean for auto-confirmation

**Auto-Detection**: If the session folder exists with plan-note.md, the workflow automatically enters continue mode.

---

## Phase 1: Understanding & Template Creation

**Objective**: Analyze task requirements, identify parallelizable sub-domains, and create the plan-note.md template with pre-allocated sections.

### Step 1.1: Analyze Task Description

Use built-in tools directly to understand the task scope and identify sub-domains.

**Analysis Activities**:
1. **Search for references** — Find related documentation, README files, and architecture guides
   - Use: `mcp__ace-tool__search_context`, Grep, Glob, Read
   - Run: `ccw spec load --category planning` (if spec system available)
2. **Extract task keywords** — Identify key terms and concepts from the task description
3. **Identify ambiguities** — List any unclear points or multiple possible interpretations
4. **Clarify with user** — If ambiguities are found, use request_user_input for clarification
5. **Identify sub-domains** — Split into 2-{maxDomains} parallelizable focus areas based on task complexity
6. **Assess complexity** — Evaluate overall task complexity (Low/Medium/High)

**Sub-Domain Identification Patterns**:

| Pattern | Keywords |
|---------|----------|
| Backend API | 服务, 后端, API, 接口 |
| Frontend | 界面, 前端, UI, 视图 |
| Database | 数据, 存储, 数据库, 持久化 |
| Testing | 测试, 验证, QA |
| Infrastructure | 部署, 基础, 运维, 配置 |

**Guideline**: Prioritize the latest documentation (README, design docs, architecture guides). When ambiguities exist, ask the user for clarification instead of assuming an interpretation.
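The keyword matching implied by the table can be sketched as a small matcher. The `DOMAIN_PATTERNS` list mirrors the table above (the keywords match Chinese task descriptions, plus literal terms like `API`); the function name and `focus_area` slugs are illustrative, not part of the workflow contract.

```javascript
// Illustrative keyword table, mirroring the pattern table above.
const DOMAIN_PATTERNS = [
  { focus_area: 'backend-api', keywords: ['服务', '后端', 'API', '接口'] },
  { focus_area: 'frontend', keywords: ['界面', '前端', 'UI', '视图'] },
  { focus_area: 'database', keywords: ['数据', '存储', '数据库', '持久化'] },
  { focus_area: 'testing', keywords: ['测试', '验证', 'QA'] },
  { focus_area: 'infrastructure', keywords: ['部署', '基础', '运维', '配置'] }
]

// Return the focus areas whose keywords appear in the description,
// capped at maxDomains (sketch only; real identification is LLM-driven).
function matchDomains(taskDescription, maxDomains = 5) {
  return DOMAIN_PATTERNS
    .filter(p => p.keywords.some(k => taskDescription.includes(k)))
    .map(p => p.focus_area)
    .slice(0, maxDomains)
}
```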

### Step 1.2: Create plan-note.md Template

Generate a structured template with pre-allocated sections for each sub-domain.

**plan-note.md Structure**:

```yaml
---
session_id: CPLAN-{slug}-{date}
original_requirement: "{task description}"
created_at: "{ISO timestamp}"
complexity: Low | Medium | High
sub_domains: ["{domain-1}", "{domain-2}", ...]
domain_task_id_ranges:
  "{domain-1}": [1, 100]
  "{domain-2}": [101, 200]
status: planning
---
```

**Sections**:
- `## 需求理解` — Core objectives, key points, constraints, split strategy
- `## 任务池 - {Domain N}` — Pre-allocated task section per domain (TASK-{range})
- `## 依赖关系` — Auto-generated after all domains complete
- `## 冲突标记` — Populated in Phase 3
- `## 上下文证据 - {Domain N}` — Evidence section per domain

**TASK ID Range Allocation**: Each domain receives a non-overlapping range of 100 IDs (e.g., Domain 1: TASK-001~100, Domain 2: TASK-101~200).
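The allocation rule above is simple enough to state as code. A minimal sketch (the helper name is illustrative):

```javascript
// Assign each domain a non-overlapping block of `rangeSize` task IDs,
// in domain order: domain 0 → [1, 100], domain 1 → [101, 200], ...
function allocateTaskIdRanges(domains, rangeSize = 100) {
  const ranges = {}
  domains.forEach((d, i) => {
    ranges[d] = [i * rangeSize + 1, (i + 1) * rangeSize]
  })
  return ranges
}
```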

### Step 1.3: Generate requirement-analysis.json

```javascript
Write(`${sessionFolder}/requirement-analysis.json`, JSON.stringify({
  session_id: sessionId,
  original_requirement: taskDescription,
  complexity: complexity, // Low | Medium | High
  sub_domains: subDomains.map(sub => ({
    focus_area: sub.focus_area,
    description: sub.description,
    task_id_range: sub.task_id_range,
    estimated_effort: sub.estimated_effort,
    dependencies: sub.dependencies // cross-domain dependencies
  })),
  total_domains: subDomains.length
}, null, 2))
```

**Success Criteria**:
- Latest documentation identified and referenced (if available)
- Ambiguities resolved via user clarification (if any found)
- 2-{maxDomains} clear sub-domains identified
- Each sub-domain can be planned independently
- Plan Note template includes all pre-allocated sections
- TASK ID ranges have no overlap (100 IDs per domain)
- Requirements understanding is comprehensive

---

## Phase 2: Serial Sub-Domain Planning

**Objective**: Plan each sub-domain sequentially, generating detailed plans and updating plan-note.md.

**Execution Model**: Serial inline execution — each domain explored and planned directly using search tools, one at a time.

### Step 2.1: User Confirmation (unless autoMode)

Display identified sub-domains and confirm before starting.

```javascript
if (!autoMode) {
  request_user_input({
    questions: [{
      header: "确认规划",
      id: "confirm",
      question: `已识别 ${subDomains.length} 个子领域:\n${subDomains.map((s, i) =>
        `${i+1}. ${s.focus_area}: ${s.description}`).join('\n')}\n\n确认开始规划?`,
      options: [
        { label: "开始规划(Recommended)", description: "逐域进行规划" },
        { label: "调整拆分", description: "修改子领域划分" },
        { label: "取消", description: "退出规划" }
      ]
    }]
  })
}
```

### Step 2.2: Serial Domain Planning

For each sub-domain, execute the full planning cycle inline:

```javascript
for (const sub of subDomains) {
  // 1. Create domain directory with .task/ subfolder
  Bash(`mkdir -p ${sessionFolder}/domains/${sub.focus_area}/.task`)

  // 2. Explore codebase for domain-relevant context
  // Use: mcp__ace-tool__search_context, Grep, Glob, Read
  // Focus on:
  // - Modules/components related to this domain
  // - Existing patterns to follow
  // - Integration points with other domains
  // - Architecture constraints

  // 3. Generate task JSON records (following task-schema.json)
  const domainTasks = [
    // For each task within the assigned ID range:
    {
      id: `TASK-${String(sub.task_id_range[0]).padStart(3, '0')}`,
      title: "...",
      description: "...", // scope/goal of this task
      type: "feature", // infrastructure|feature|enhancement|fix|refactor|testing
      priority: "medium", // high|medium|low
      effort: "medium", // small|medium|large
      scope: "...", // Brief scope description
      depends_on: [], // TASK-xxx references
      convergence: {
        criteria: ["... (testable)"], // Testable conditions
        verification: "... (executable)", // Command or steps
        definition_of_done: "... (business language)"
      },
      files: [ // Files to modify
        {
          path: "...",
          action: "modify", // modify|create|delete
          changes: ["..."], // Change descriptions
          conflict_risk: "low" // low|medium|high
        }
      ],
      source: {
        tool: "collaborative-plan-with-file",
        session_id: sessionId,
        original_id: `TASK-${String(sub.task_id_range[0]).padStart(3, '0')}`
      }
    }
    // ... more tasks
  ]

  // 4. Write individual task JSON files (one per task)
  domainTasks.forEach(task => {
    Write(`${sessionFolder}/domains/${sub.focus_area}/.task/${task.id}.json`,
      JSON.stringify(task, null, 2))
  })

  // 5. Sync summary to plan-note.md
  // Read current plan-note.md
  // Locate pre-allocated sections:
  // - Task Pool: "## 任务池 - ${toTitleCase(sub.focus_area)}"
  // - Evidence: "## 上下文证据 - ${toTitleCase(sub.focus_area)}"
  // Fill with task summaries and evidence
  // Write back plan-note.md
}
```

**Task Summary Format** (for plan-note.md task pool sections):

```markdown
### TASK-{ID}: {Title} [{focus-area}]
- **状态**: pending
- **类型**: feature/fix/refactor/enhancement/testing/infrastructure
- **优先级**: high/medium/low
- **工作量**: small/medium/large
- **依赖**: TASK-xxx (if any)
- **范围**: Brief scope description
- **修改文件**: `file-path` (action): change summary
- **收敛标准**:
  - criteria 1
  - criteria 2
- **验证方式**: executable command or steps
- **完成定义**: business language definition
```
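A renderer from the Step 2.2 task JSON shape into this summary format might look like the following sketch. The helper name `taskToMarkdown` is illustrative, and it assumes the task record shape shown in Step 2.2; the `依赖` line is omitted when `depends_on` is empty, matching the "(if any)" note above.

```javascript
// Sketch: render one task JSON record into the summary format above.
function taskToMarkdown(task, focusArea) {
  const lines = [
    `### ${task.id}: ${task.title} [${focusArea}]`,
    `- **状态**: pending`,
    `- **类型**: ${task.type}`,
    `- **优先级**: ${task.priority}`,
    `- **工作量**: ${task.effort}`
  ]
  if ((task.depends_on || []).length) lines.push(`- **依赖**: ${task.depends_on.join(', ')}`)
  lines.push(`- **范围**: ${task.scope}`)
  ;(task.files || []).forEach(f =>
    lines.push(`- **修改文件**: \`${f.path}\` (${f.action}): ${f.changes.join('; ')}`))
  lines.push(`- **收敛标准**:`)
  task.convergence.criteria.forEach(c => lines.push(`  - ${c}`))
  lines.push(`- **验证方式**: ${task.convergence.verification}`)
  lines.push(`- **完成定义**: ${task.convergence.definition_of_done}`)
  return lines.join('\n')
}
```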

**Evidence Format** (for plan-note.md evidence sections):

```markdown
- **相关文件**: file list with relevance
- **现有模式**: patterns identified
- **约束**: constraints discovered
```

**Domain Planning Rules**:
- Each domain modifies ONLY its pre-allocated sections in plan-note.md
- Use assigned TASK ID range exclusively
- Include convergence criteria for each task (criteria + verification + definition_of_done)
- Include `files[]` with conflict_risk assessment per file
- Reference cross-domain dependencies explicitly
- Each task record must be self-contained (can be independently consumed by unified-execute)

### Step 2.3: Verify plan-note.md Consistency

After all domains are planned, verify the shared document.

**Verification Activities**:
1. Read final plan-note.md
2. Verify all task pool sections are populated
3. Verify all evidence sections are populated
4. Validate TASK ID uniqueness across all domains
5. Check for any section format inconsistencies
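Verification activity 4 can be sketched as a simple duplicate scan over the collected task records (helper name illustrative):

```javascript
// Return any TASK IDs that appear more than once across all domains.
function findDuplicateTaskIds(tasks) {
  const seen = new Set(), dups = new Set()
  for (const t of tasks) (seen.has(t.id) ? dups : seen).add(t.id)
  return [...dups]
}
```

An empty result means the pre-allocated ID ranges were respected; any entry indicates a domain wrote outside its range.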

**Success Criteria**:
- `domains/{domain}/.task/TASK-*.json` created for each domain (one file per task)
- Each task has convergence (criteria + verification + definition_of_done)
- `plan-note.md` updated with all task pools and evidence sections
- Task summaries follow consistent format
- No TASK ID overlaps across domains

---

## Phase 3: Conflict Detection

**Objective**: Analyze plan-note.md for conflicts across all domain contributions.

### Step 3.1: Parse plan-note.md

Extract all tasks from all "任务池" sections and domain .task/*.json files.

```javascript
// parsePlanNote(markdown)
// - Extract YAML frontmatter between `---` markers
// - Scan for heading patterns: /^(#{2,})\s+(.+)$/
// - Build sections array: { level, heading, start, content }
// - Return: { frontmatter, sections }

// Also load all domain .task/*.json for detailed data
// loadDomainTasks(sessionFolder, subDomains):
//   const allTasks = []
//   for (const sub of subDomains) {
//     const taskDir = `${sessionFolder}/domains/${sub.focus_area}/.task`
//     const taskFiles = Glob(`${taskDir}/TASK-*.json`)
//     taskFiles.forEach(file => {
//       allTasks.push(JSON.parse(Read(file)))
//     })
//   }
//   return allTasks

// extractTasksFromSection(content, sectionHeading)
// - Match: /### (TASK-\d+):\s+(.+?)\s+\[(.+?)\]/
// - For each: extract taskId, title, author
// - Parse details: status, type, priority, effort, depends_on, files, convergence
// - Return: array of task objects

// parseTaskDetails(content)
// - Extract via regex:
//   - /\*\*状态\*\*:\s*(.+)/ → status
//   - /\*\*类型\*\*:\s*(.+)/ → type
//   - /\*\*优先级\*\*:\s*(.+)/ → priority
//   - /\*\*工作量\*\*:\s*(.+)/ → effort
//   - /\*\*依赖\*\*:\s*(.+)/ → depends_on (extract TASK-\d+ references)
// - Extract files: /- `([^`]+)` \((\w+)\):\s*(.+)/ → path, action, change
// - Return: { status, type, priority, effort, depends_on[], files[], convergence }
```
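The heading scan at the core of parsePlanNote can be sketched as follows (frontmatter extraction and per-section content slicing omitted; the function name is illustrative):

```javascript
// Scan markdown for level-2+ headings, recording level, text, and line index.
function scanSections(markdown) {
  const sections = []
  markdown.split('\n').forEach((line, i) => {
    const m = line.match(/^(#{2,})\s+(.+)$/)
    if (m) sections.push({ level: m[1].length, heading: m[2], start: i })
  })
  return sections
}
```

Section content can then be recovered as the lines between one `start` and the next heading of the same or higher level.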

### Step 3.2: Detect Conflicts

Scan all tasks for three categories of conflicts.

**Conflict Types**:

| Type | Severity | Detection Logic | Resolution |
|------|----------|-----------------|------------|
| file_conflict | high | Same file:location modified by multiple domains | Coordinate modification order or merge changes |
| dependency_cycle | critical | Circular dependencies in task graph (DFS detection) | Remove or reorganize dependencies |
| strategy_conflict | medium | Multiple high-risk tasks in same file from different domains | Review approaches and align on single strategy |

**Detection Functions**:

```javascript
// detectFileConflicts(tasks)
// Build fileMap: { "file-path": [{ task_id, task_title, source_domain, changes }] }
// For each file with modifications from multiple domains:
//   → conflict: type='file_conflict', severity='high'
//   → include: file, tasks_involved, domains_involved, changes
//   → resolution: 'Coordinate modification order or merge changes'

// detectDependencyCycles(tasks)
// Build dependency graph: { taskId: [dependsOn_taskIds] }
// DFS with recursion stack to detect cycles:
function detectCycles(tasks) {
  const graph = new Map(tasks.map(t => [t.id, t.depends_on || []]))
  const visited = new Set(), inStack = new Set(), cycles = []
  function dfs(node, path) {
    if (inStack.has(node)) { cycles.push([...path, node].join(' → ')); return }
    if (visited.has(node)) return
    visited.add(node); inStack.add(node)
    ;(graph.get(node) || []).forEach(dep => dfs(dep, [...path, node]))
    inStack.delete(node)
  }
  tasks.forEach(t => { if (!visited.has(t.id)) dfs(t.id, []) })
  return cycles
}

// detectStrategyConflicts(tasks)
// Group tasks by files they modify (from task.files[].path)
// For each file with tasks from multiple domains:
//   Filter for tasks with files[].conflict_risk === 'high' or 'medium'
//   If >1 high-risk from different domains:
//     → conflict: type='strategy_conflict', severity='medium'
//     → resolution: 'Review approaches and align on single strategy'
```
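A runnable sketch of detectFileConflicts following the comments above. It assumes a `source_domain` field is attached to each task when the per-domain files are loaded (the task JSON itself does not carry that field); fields and names are illustrative.

```javascript
// Group file modifications by path; flag paths touched by more than one domain.
function detectFileConflicts(tasks) {
  const fileMap = new Map()
  for (const t of tasks) {
    for (const f of t.files || []) {
      if (!fileMap.has(f.path)) fileMap.set(f.path, [])
      fileMap.get(f.path).push({ task_id: t.id, domain: t.source_domain })
    }
  }
  const conflicts = []
  for (const [file, entries] of fileMap) {
    const domains = [...new Set(entries.map(e => e.domain))]
    if (domains.length > 1) {
      conflicts.push({
        type: 'file_conflict', severity: 'high', file,
        tasks_involved: entries.map(e => e.task_id),
        domains_involved: domains,
        resolution: 'Coordinate modification order or merge changes'
      })
    }
  }
  return conflicts
}
```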

### Step 3.3: Generate Conflict Artifacts

Write conflict results and update plan-note.md.

```javascript
// 1. Write conflicts.json
Write(`${sessionFolder}/conflicts.json`, JSON.stringify({
  detected_at: getUtc8ISOString(),
  total_tasks: allTasks.length,
  total_domains: subDomains.length,
  total_conflicts: allConflicts.length,
  conflicts: allConflicts // { type, severity, tasks_involved, description, suggested_resolution }
}, null, 2))

// 2. Update plan-note.md "## 冲突标记" section
// generateConflictMarkdown(conflicts):
//   If empty: return '✅ 无冲突检测到'
//   For each conflict:
//     ### CONFLICT-{padded_index}: {description}
//     - **严重程度**: critical | high | medium
//     - **涉及任务**: TASK-xxx, TASK-yyy
//     - **涉及领域**: domain-a, domain-b
//     - **问题详情**: (based on conflict type)
//     - **建议解决方案**: ...
//     - **决策状态**: [ ] 待解决

// replaceSectionContent(markdown, sectionHeading, newContent):
//   Find section heading position via regex
//   Find next heading of same or higher level
//   Replace content between heading and next section
//   If section not found: append at end
```
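replaceSectionContent as described in the comments above can be sketched as a runnable function (a minimal line-based implementation, not the workflow's actual one):

```javascript
// Replace the body of a markdown section, keeping its heading.
// Falls back to appending the section if the heading is not found.
function replaceSectionContent(markdown, sectionHeading, newContent) {
  const lines = markdown.split('\n')
  const start = lines.findIndex(l => l.trim() === sectionHeading)
  if (start === -1) return markdown + '\n\n' + sectionHeading + '\n\n' + newContent
  const level = (sectionHeading.match(/^#+/) || ['##'])[0].length
  let end = lines.length
  for (let i = start + 1; i < lines.length; i++) {
    const m = lines[i].match(/^(#+)\s/)
    if (m && m[1].length <= level) { end = i; break }   // next same/higher heading
  }
  return [...lines.slice(0, start + 1), '', newContent, '', ...lines.slice(end)].join('\n')
}
```

This line-based approach is what lets each domain (and the conflict phase) rewrite only its own pre-allocated section without disturbing the rest of plan-note.md.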

**Success Criteria**:
- All tasks extracted and analyzed
- `conflicts.json` written with detection results
- `plan-note.md` updated with conflict markers
- All conflict types checked (file, dependency, strategy)

---

## Phase 4: Completion

**Objective**: Generate human-readable plan summary and finalize workflow.

### Step 4.1: Collect Domain .task/*.json to Session .task/

Copy all per-domain task JSON files into a single session-level `.task/` directory.

```javascript
// Create session-level .task/ directory
Bash(`mkdir -p ${sessionFolder}/.task`)

// Collect all domain task files
for (const sub of subDomains) {
  const taskDir = `${sessionFolder}/domains/${sub.focus_area}/.task`
  const taskFiles = Glob(`${taskDir}/TASK-*.json`)
  taskFiles.forEach(file => {
    const filename = path.basename(file)
    // Copy domain task file to session .task/ directory
    Bash(`cp ${file} ${sessionFolder}/.task/${filename}`)
  })
}
```

### Step 4.2: Generate plan.json

Generate a plan overview following the plan-overview-base-schema.

```javascript
// Generate plan.json (plan-overview-base-schema)
const allTaskFiles = Glob(`${sessionFolder}/.task/TASK-*.json`)
const taskIds = allTaskFiles.map(f => JSON.parse(Read(f)).id).sort()

// Guard: skip plan.json if no tasks generated
if (taskIds.length === 0) {
  console.warn('No tasks generated; skipping plan.json')
} else {
  const planOverview = {
    summary: `Collaborative plan for: ${taskDescription}`,
    approach: `Multi-domain planning across ${subDomains.length} sub-domains: ${subDomains.map(s => s.focus_area).join(', ')}`,
    task_ids: taskIds,
    task_count: taskIds.length,
    complexity: complexity,
    recommended_execution: "Agent",
    _metadata: {
      timestamp: getUtc8ISOString(),
      source: "direct-planning",
      planning_mode: "direct",
      plan_type: "collaborative",
      schema_version: "2.0"
    }
  }
  Write(`${sessionFolder}/plan.json`, JSON.stringify(planOverview, null, 2))
} // end guard
```

### Step 4.3: Generate plan.md

Create a human-readable summary from plan-note.md content.

**plan.md Structure**:

| Section | Content |
|---------|---------|
| Header | Session ID, task description, creation time |
| 需求 (Requirements) | Copied from plan-note.md "需求理解" section |
| 子领域拆分 (Sub-Domains) | Each domain with description, task range, estimated effort |
| 任务概览 (Task Overview) | All tasks with complexity, dependencies, and target files |
| 冲突报告 (Conflict Report) | Summary of detected conflicts or "无冲突" |
| 执行指令 (Execution) | Command to execute the plan |

```javascript
const planMd = `# Collaborative Plan

**Session**: ${sessionId}
**Requirement**: ${taskDescription}
**Created**: ${getUtc8ISOString()}
**Complexity**: ${complexity}
**Domains**: ${subDomains.length}

## 需求理解

${requirementSection}

## 子领域拆分

| # | Focus Area | Description | TASK Range | Effort |
|---|-----------|-------------|------------|--------|
${subDomains.map((s, i) => `| ${i+1} | ${s.focus_area} | ${s.description} | ${s.task_id_range[0]}-${s.task_id_range[1]} | ${s.estimated_effort} |`).join('\n')}

## 任务概览

${subDomains.map(sub => {
  // Select this domain's tasks by its assigned ID range
  const [lo, hi] = sub.task_id_range
  const domainTasks = allTasks.filter(t => {
    const n = parseInt(t.id.replace('TASK-', ''), 10)
    return n >= lo && n <= hi
  })
  return `### ${sub.focus_area}\n\n` +
    domainTasks.map(t => `- **${t.id}**: ${t.title} (${t.type}, ${t.effort}) ${t.depends_on?.length ? '← ' + t.depends_on.join(', ') : ''}`).join('\n')
}).join('\n\n')}

## 冲突报告

${allConflicts.length === 0
  ? '✅ 无冲突检测到'
  : allConflicts.map(c => `- **${c.type}** (${c.severity}): ${c.description}`).join('\n')}

## 执行

\`\`\`bash
/workflow:unified-execute-with-file PLAN="${sessionFolder}/.task/"
\`\`\`

**Session artifacts**: \`${sessionFolder}/\`
`
Write(`${sessionFolder}/plan.md`, planMd)
```

### Step 4.4: Display Completion Summary

Present session statistics and next steps.

```javascript
// Display:
// - Session ID and directory path
// - Total domains planned
// - Total tasks generated
// - Conflict status (count and severity)
// - Execution command for next step

if (!autoMode) {
  request_user_input({
    questions: [{
      header: "下一步",
      id: "next_step",
      question: `规划完成:\n- ${subDomains.length} 个子领域\n- ${allTasks.length} 个任务\n- ${allConflicts.length} 个冲突\n\n下一步:`,
      options: [
        { label: "Execute Plan(Recommended)", description: "使用 unified-execute 执行计划" },
        { label: "Review Conflicts", description: "查看并解决冲突" },
        { label: "Done", description: "保存产物,稍后执行" }
      ]
    }]
  })
}
```

| Selection | Action |
|-----------|--------|
| Execute Plan | `Skill(skill="workflow:unified-execute-with-file", args="PLAN=\"${sessionFolder}/.task/\"")` |
| Review Conflicts | Display conflicts.json content for manual resolution |
| Export | Copy plan.md + plan-note.md to user-specified location |
| Done | Display artifact paths, end workflow |

### Step 4.5: Sync Session State

```bash
$session-sync -y "Plan complete: {domains} domains, {tasks} tasks"
```
|
|
||||||
|
|
||||||
Updates specs/*.md with planning insights and project-tech.json with planning session entry.
|
|
||||||
|
|
||||||
**Success Criteria**:
|
|
||||||
- `plan.md` generated with complete summary
|
|
||||||
- `.task/TASK-*.json` collected at session root (consumable by unified-execute)
|
|
||||||
- All artifacts present in session directory
|
|
||||||
- Session state synced via `$session-sync`
|
|
||||||
- User informed of completion and next steps
|
|
||||||
|
|
||||||
---

## Configuration

| Flag | Default | Description |
|------|---------|-------------|
| `--max-domains` | 5 | Maximum sub-domains to identify |
| `-y, --yes` | false | Auto-confirm all decisions |
## Iteration Patterns

### New Planning Session

```
User initiates: TASK="task description"
├─ No session exists → New session mode
├─ Analyze task with inline search tools
├─ Identify sub-domains
├─ Create plan-note.md template
├─ Generate requirement-analysis.json
│
├─ Serial domain planning:
│   ├─ Domain 1: explore → .task/TASK-*.json → fill plan-note.md
│   ├─ Domain 2: explore → .task/TASK-*.json → fill plan-note.md
│   └─ Domain N: ...
│
├─ Collect domain .task/*.json → session .task/
│
├─ Verify plan-note.md consistency
├─ Detect conflicts
├─ Generate plan.md summary
└─ Report completion
```

### Continue Existing Session

```
User resumes: TASK="same task"
├─ Session exists → Continue mode
├─ Load plan-note.md and requirement-analysis.json
├─ Identify incomplete domains (empty task pool sections)
├─ Plan remaining domains serially
└─ Continue with conflict detection
```
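The incomplete-domain check in continue mode can be sketched as follows. This is a hypothetical helper: the section heading format (`## Domain: <name>`) and the `TASK-*` entry convention are assumptions, not the canonical plan-note.md layout.

```javascript
// Sketch: find domains whose task pool section in plan-note.md is still
// empty, so continue mode knows which domains to re-plan.
function findIncompleteDomains(planNote, domains) {
  return domains.filter(domain => {
    const heading = `## Domain: ${domain}`
    const start = planNote.indexOf(heading)
    if (start === -1) return true // section missing entirely
    const rest = planNote.slice(start + heading.length)
    const end = rest.indexOf('\n## ')
    const section = end === -1 ? rest : rest.slice(0, end)
    // A planned domain lists at least one TASK-* entry
    return !/TASK-\d+/.test(section)
  })
}

const note = '## Domain: backend\n- TASK-001: scaffold\n## Domain: frontend\n(pending)\n'
console.log(findIncompleteDomains(note, ['backend', 'frontend', 'db']))
// → [ 'frontend', 'db' ]
```

A substring-based lookup like this would misfire on overlapping domain names; a real implementation would match whole headings.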
---

## Error Handling & Recovery

| Situation | Action | Recovery |
|-----------|--------|----------|
| No codebase detected | Normal flow, pure requirement planning | Proceed without codebase context |
| Codebase search fails | Continue with available context | Note limitation in plan-note.md |
| Domain planning fails | Record error, continue with next domain | Retry failed domain or plan manually |
| Section not found in plan-note | Create section defensively | Continue with new section |
| No tasks generated for a domain | Review domain description | Refine scope and retry |
| Conflict detection fails | Continue with empty conflicts | Note in completion summary |
| Session folder conflict | Append timestamp suffix | Create unique folder |
| plan-note.md format inconsistency | Validate and fix format after each domain | Re-read and normalize |
---

## Best Practices

### Before Starting Planning

1. **Clear Task Description**: Detailed requirements lead to better sub-domain splitting
2. **Reference Documentation**: Ensure the latest README and design docs are identified during Phase 1
3. **Clarify Ambiguities**: Resolve unclear requirements before committing to sub-domains

### During Planning

1. **Review Plan Note**: Check plan-note.md between domains to verify progress
2. **Verify Independence**: Ensure sub-domains are truly independent and have minimal overlap
3. **Check Dependencies**: Cross-domain dependencies should be documented explicitly
4. **Inspect Details**: Review `domains/{domain}/.task/TASK-*.json` for specifics when needed
5. **Consistent Format**: Follow the task summary format strictly across all domains
6. **TASK ID Isolation**: Use pre-assigned non-overlapping ranges to prevent ID conflicts

### After Planning

1. **Resolve Conflicts**: Address high/critical conflicts before execution
2. **Review Summary**: Check plan.md for completeness and accuracy
3. **Validate Tasks**: Ensure all tasks have clear scope and modification targets
## When to Use

**Use collaborative-plan-with-file when:**
- A complex task spans multiple sub-domains (backend + frontend + database, etc.)
- You need structured multi-domain task breakdown with conflict detection
- You are planning a feature that touches many parts of the codebase
- You want pre-allocated section organization for clear domain separation

**Use lite-plan when:**
- The task is single-domain and clear, with no sub-domain splitting needed
- You want quick planning without conflict detection

**Use req-plan-with-file when:**
- A requirement-level progressive roadmap is needed (MVP → iterations)
- Higher-level decomposition should precede detailed planning

**Use analyze-with-file when:**
- In-depth analysis is needed before planning
- The goal is understanding and discussion, not task generation

---

**Now execute collaborative-plan-with-file for**: $ARGUMENTS

---

- `-c, --concurrency N`: Max concurrent agents within each wave (default: 4)
- `--continue`: Resume existing session

**Output Directory**: `.workflow/.csv-wave/{session-id}/`

**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)

---

## Overview

**Core workflow**: Decompose → Compute Waves → Execute Wave-by-Wave → Aggregate

```
Phase 1: Requirement → CSV
├─ Parse requirement into subtasks (3-10 tasks)
├─ Identify dependencies (deps column)
├─ Compute dependency waves (topological sort → depth grouping)
├─ Generate tasks.csv with wave column
└─ User validates task breakdown (skip if -y)

Phase 2: Wave Execution Engine
├─ For each wave (1..N):
│   ├─ Build wave CSV (filter rows for this wave)
│   ├─ Inject previous wave findings into prev_context column
│   ├─ spawn_agents_on_csv(wave CSV)
│   ├─ Collect results, merge into master tasks.csv
│   └─ Check: any failed? → skip dependents or retry
└─ discoveries.ndjson shared across all waves (append-only)

Phase 3: Results Aggregation
├─ Export final results.csv
├─ Generate context.md with all findings
├─ Display summary: completed/failed/skipped per wave
└─ Offer: view results | retry failed | done
```

### Context Propagation

Two context channels flow across waves:

1. **CSV findings** (structured): `context_from` column → `prev_context` injection — task-specific directed context
2. **NDJSON discoveries** (broadcast): `discoveries.ndjson` — general exploration findings available to all

```
Wave 1 agents:
├─ Execute tasks (no prev_context)
├─ Write findings to report_agent_job_result
└─ Append discoveries to discoveries.ndjson
    ↓ merge results into master CSV
Wave 2 agents:
├─ Read discoveries.ndjson (exploration sharing)
├─ Read prev_context column (wave 1 findings from context_from)
├─ Execute tasks with full upstream context
├─ Write findings to report_agent_job_result
└─ Append new discoveries to discoveries.ndjson
    ↓ merge results into master CSV
Wave 3+ agents: same pattern, accumulated context from all prior waves
```

---

## Session & Output Structure

```
.workflow/.csv-wave/{session-id}/
├── tasks.csv               # Master state (updated per wave)
├── results.csv             # Final results export (Phase 3)
├── discoveries.ndjson      # Shared discovery board (all agents, append-only)
├── context.md              # Human-readable report (Phase 3)
├── wave-{N}.csv            # Temporary per-wave input (cleaned up after merge)
└── wave-{N}-results.csv    # Temporary per-wave output (cleaned up after merge)
```

| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state — all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input with prev_context column | Created before wave, deleted after |
| `wave-{N}-results.csv` | Per-wave output from spawn_agents_on_csv | Created during wave, deleted after merge |
| `results.csv` | Final export of all task results | Created in Phase 3 |
| `discoveries.ndjson` | Shared exploration board across all agents | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 3 |

---

## CSV Schema

### Per-Wave CSV (Temporary)

Each wave generates a temporary `wave-{N}.csv` with an extra `prev_context` column built from `context_from` by looking up completed tasks' `findings` in the master CSV:

```csv
id,title,description,test,acceptance_criteria,scope,hints,execution_directives,deps,context_from,wave,prev_context
"3","Add JWT tokens","Implement JWT","Unit test: sign/verify round-trip; Edge test: expired token returns 401","generateToken() returns valid JWT; verifyToken() rejects expired/tampered tokens","src/auth/jwt/**","Use jsonwebtoken library; Set default expiry 1h || src/config/auth.ts","Ensure tsc --noEmit passes","1","1","2","[Task 1] Created auth/ with index.ts and types.ts"
```

---

## Shared Discovery Board Protocol

All agents across all waves share `discoveries.ndjson`. This eliminates redundant codebase exploration.

**Lifecycle**: Created by the first agent to write a discovery. Carries over across waves — never cleared. Agents append via `echo '...' >> discoveries.ndjson`.

**Format**: NDJSON, each line is a self-contained JSON object:

```jsonl
{"ts":"2026-02-28T10:00:00+08:00","worker":"1","type":"code_pattern","data":{"name":"repository-pattern","file":"src/repos/Base.ts","description":"Abstract CRUD repository"}}
{"ts":"2026-02-28T10:01:00+08:00","worker":"2","type":"integration_point","data":{"file":"src/auth/index.ts","description":"Auth module entry","exports":["authenticate","authorize"]}}
```

**Discovery Types**:

| type | Dedup Key | Description |
|------|-----------|-------------|
| `code_pattern` | `data.name` | Reusable code pattern found |
| `integration_point` | `data.file` | Module connection point |
| `convention` | singleton | Code style conventions |
| `blocker` | `data.issue` | Blocking issue encountered |
| `tech_stack` | singleton | Project technology stack |
| `test_command` | singleton | Test commands discovered |

**Protocol Rules**:

1. Read board before own exploration → skip covered areas
2. Write discoveries immediately via `echo >>` → don't batch
3. Deduplicate — check existing entries; skip if same type + dedup key exists
4. Append-only — never modify or delete existing lines

---
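Rule 3 of the protocol can be sketched as a pre-append check. This is a hypothetical helper, not part of the workflow's defined API; the dedup keys mirror the Discovery Types table above.

```javascript
// Sketch: should a new discovery be appended, given the current board?
// ndjson: contents of discoveries.ndjson; entry: the parsed candidate.
const DEDUP_KEYS = {
  code_pattern: e => e.data.name,
  integration_point: e => e.data.file,
  blocker: e => e.data.issue,
  convention: () => 'singleton',
  tech_stack: () => 'singleton',
  test_command: () => 'singleton'
}

function shouldAppendDiscovery(ndjson, entry) {
  const keyOf = DEDUP_KEYS[entry.type]
  if (!keyOf) return true // unknown types always pass through
  const key = keyOf(entry)
  return !ndjson
    .split('\n')
    .filter(Boolean)
    .map(line => JSON.parse(line))
    .some(e => e.type === entry.type && keyOf(e) === key)
}

const board = '{"ts":"t","worker":"1","type":"convention","data":{}}\n'
console.log(shouldAppendDiscovery(board, { type: 'convention', data: {} })) // → false
```

Because singleton types reduce to a constant key, a second `convention` or `tech_stack` entry is always rejected, while keyed types only collide on the same name/file/issue.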

const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1]) : 4

// Clean requirement text (remove flags — word-boundary safe)
const requirement = $ARGUMENTS
  .replace(/--yes|(?:^|\s)-y(?=\s|$)|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
  .trim()

let sessionId, sessionFolder

const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
sessionId = `cwp-${slug}-${dateStr}`
sessionFolder = `.workflow/.csv-wave/${sessionId}`

// Continue mode: find existing session
if (continueMode) {
  // ... existing-session lookup (elided in this excerpt) ...
}

Bash(`mkdir -p ${sessionFolder}`)
```
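The word-boundary-safe `-y` alternative matters because a bare `/-y/` would also strip the `-y` inside hyphenated words. A standalone check of the exact pattern used above:

```javascript
// The (?:^|\s)-y(?=\s|$) alternative only removes -y as a standalone
// token; -y embedded in a word (e.g. "notify-yaml") is left untouched.
const strip = args => args
  .replace(/--yes|(?:^|\s)-y(?=\s|$)|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
  .trim()

console.log(strip('build notify-yaml -y --concurrency 8')) // → "build notify-yaml"
console.log(strip('-y deploy auth'))                        // → "deploy auth"
```

Ordering the alternation with `--concurrency\s+\d+` before `-c\s+\d+` also keeps the short flag from partially matching the long one.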

### CSV Utility Functions

```javascript
// Double internal quotes so a value can be embedded in a quoted CSV cell
// (the caller wraps the result in quotes)
function csvEscape(value) {
  const str = String(value ?? '')
  return str.replace(/"/g, '""')
}

// Parse CSV string into array of objects (header row → keys)
function parseCsv(csvString) {
  const lines = csvString.trim().split('\n')
  if (lines.length < 2) return []
  const headers = parseCsvLine(lines[0]).map(h => h.replace(/^"|"$/g, ''))
  return lines.slice(1).map(line => {
    const cells = parseCsvLine(line).map(c => c.replace(/^"|"$/g, '').replace(/""/g, '"'))
    const obj = {}
    headers.forEach((h, i) => { obj[h] = cells[i] ?? '' })
    return obj
  })
}

// Parse a single CSV line, respecting quoted fields that contain commas
function parseCsvLine(line) {
  const cells = []
  let current = ''
  let inQuotes = false
  for (let i = 0; i < line.length; i++) {
    const ch = line[i]
    if (inQuotes) {
      if (ch === '"' && line[i + 1] === '"') {
        current += '"'
        i++ // skip escaped quote
      } else if (ch === '"') {
        inQuotes = false
      } else {
        current += ch
      }
    } else {
      if (ch === '"') {
        inQuotes = true
      } else if (ch === ',') {
        cells.push(current)
        current = ''
      } else {
        current += ch
      }
    }
  }
  cells.push(current)
  return cells
}
```
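A quick round-trip check of these helpers, with `csvEscape` and `parseCsvLine` restated so the snippet runs standalone: a value containing quotes and commas survives escape → embed → parse.

```javascript
function csvEscape(value) {
  return String(value ?? '').replace(/"/g, '""')
}

function parseCsvLine(line) {
  const cells = []
  let current = ''
  let inQuotes = false
  for (let i = 0; i < line.length; i++) {
    const ch = line[i]
    if (inQuotes) {
      if (ch === '"' && line[i + 1] === '"') { current += '"'; i++ } // escaped quote
      else if (ch === '"') inQuotes = false
      else current += ch
    } else {
      if (ch === '"') inQuotes = true
      else if (ch === ',') { cells.push(current); current = '' }
      else current += ch
    }
  }
  cells.push(current)
  return cells
}

const raw = 'He said "done", then left'
const row = `"1","${csvEscape(raw)}"`
console.log(parseCsvLine(row))
// → [ '1', 'He said "done", then left' ]
```

Note that `parseCsv` splits on `\n` before parsing each line, so quoted fields containing literal newlines are not supported; task descriptions should stay single-line.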

---

### Phase 1: Requirement → CSV

// Parse JSON from CLI output → decomposedTasks[]
```

2. **Compute Waves** (Kahn's BFS topological sort with depth tracking)

```javascript
// Algorithm:
// 1. Build in-degree map and adjacency list from deps
// 2. Enqueue all tasks with in-degree 0 at wave 1
// 3. BFS: for each dequeued task at wave W, for each dependent D:
//    - Decrement D's in-degree
//    - D.wave = max(D.wave, W + 1)
//    - If D's in-degree reaches 0, enqueue D
// 4. Any task without wave assignment → circular dependency error
//
// Wave properties:
//   Wave 1: no dependencies — fully independent
//   Wave N: all deps in waves 1..(N-1) — guaranteed completed before start
//   Within a wave: tasks are independent → safe for concurrent execution
//
// Example:
//   A(no deps)→W1, B(no deps)→W1, C(deps:A)→W2, D(deps:A,B)→W2, E(deps:C,D)→W3
//   Wave 1: [A,B] concurrent → Wave 2: [C,D] concurrent → Wave 3: [E]

function computeWaves(tasks) {
  const taskMap = new Map(tasks.map(t => [t.id, t]))
  const inDegree = new Map(tasks.map(t => [t.id, 0]))
  const adjList = new Map(tasks.map(t => [t.id, []]))

    }
  }

  // Detect cycles
  for (const task of tasks) {
    if (!waveAssignment.has(task.id)) {
      throw new Error(`Circular dependency detected involving task ${task.id}`)

}
```

**Success Criteria**: tasks.csv created with valid schema and wave assignments, no circular dependencies, user approved (or AUTO_YES).

---
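Since only fragments of `computeWaves` survive in this excerpt, here is a self-contained sketch of the same Kahn-style algorithm. Treat it as illustrative rather than the canonical implementation; the `deps` field follows the semicolon-separated CSV convention.

```javascript
// Kahn's BFS topological sort with depth (wave) tracking.
// tasks: [{ id, deps: 'id1;id2;...' }] → Map(id → wave number)
function computeWaves(tasks) {
  const inDegree = new Map(tasks.map(t => [t.id, 0]))
  const adjList = new Map(tasks.map(t => [t.id, []]))
  for (const t of tasks) {
    for (const dep of t.deps.split(';').filter(Boolean)) {
      adjList.get(dep).push(t.id)
      inDegree.set(t.id, inDegree.get(t.id) + 1)
    }
  }
  const wave = new Map()           // tentative depth per task
  const waveAssignment = new Map() // final assignment, set on dequeue
  const queue = []
  for (const t of tasks) {
    if (inDegree.get(t.id) === 0) { wave.set(t.id, 1); queue.push(t.id) }
  }
  while (queue.length > 0) {
    const id = queue.shift()
    waveAssignment.set(id, wave.get(id))
    for (const next of adjList.get(id)) {
      wave.set(next, Math.max(wave.get(next) ?? 1, wave.get(id) + 1))
      inDegree.set(next, inDegree.get(next) - 1)
      if (inDegree.get(next) === 0) queue.push(next)
    }
  }
  for (const t of tasks) {
    if (!waveAssignment.has(t.id)) {
      throw new Error(`Circular dependency detected involving task ${t.id}`)
    }
  }
  return waveAssignment
}

const waves = computeWaves([
  { id: 'A', deps: '' }, { id: 'B', deps: '' },
  { id: 'C', deps: 'A' }, { id: 'D', deps: 'A;B' }, { id: 'E', deps: 'C;D' }
])
console.log([...waves])
// → [ [ 'A', 1 ], [ 'B', 1 ], [ 'C', 2 ], [ 'D', 2 ], [ 'E', 3 ] ]
```

Keeping tentative depths separate from final assignments means a task that is merely reachable from an acyclic region never counts as "assigned", so cycles are still reported even when some members received a tentative wave.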

const deps = task.deps.split(';').filter(Boolean)
if (deps.some(d => failedIds.has(d) || skippedIds.has(d))) {
  skippedIds.add(task.id)
  updateMasterCsvRow(sessionFolder, task.id, {
    status: 'skipped',
    error: 'Dependency failed or skipped'

  continue
}

// 4. Build prev_context for each task (from context_from → master CSV findings)
for (const task of executableTasks) {
  const contextIds = task.context_from.split(';').filter(Boolean)
  const prevFindings = contextIds

  }
}

// 8. Cleanup temporary wave CSVs
Bash(`rm -f "${sessionFolder}/wave-${wave}.csv" "${sessionFolder}/wave-${wave}-results.csv"`)

console.log(`  Wave ${wave} done: ${waveResults.filter(r => r.status === 'completed').length} completed, ${waveResults.filter(r => r.status === 'failed').length} failed`)
}
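`updateMasterCsvRow` is referenced above but not shown in this excerpt. Its core is a read-modify-write over the master rows, which can be sketched in memory as follows (`updateRow` is a hypothetical helper; persisting would go through `parseCsv` and `csvEscape`):

```javascript
// In-memory core of a hypothetical updateMasterCsvRow: patch one row by id.
function updateRow(rows, id, patch) {
  return rows.map(row => (row.id === id ? { ...row, ...patch } : row))
}

const rows = [
  { id: '1', status: 'pending', error: '' },
  { id: '2', status: 'pending', error: '' }
]
const updated = updateRow(rows, '2', { status: 'skipped', error: 'Dependency failed or skipped' })
console.log(updated[1].status) // → "skipped"
```

Returning a new array rather than mutating in place keeps each wave's merge step idempotent if it has to be retried.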

- \`integration_point\`: {file, description, exports[]} — module connection points
- \`convention\`: {naming, imports, formatting} — code style conventions
- \`blocker\`: {issue, severity, impact} — blocking issues encountered
- \`tech_stack\`: {runtime, framework, language} — project technology stack
- \`test_command\`: {command, scope, description} — test commands discovered

---

}
```

**Success Criteria**: All waves executed in order, each wave's results merged into master CSV before next wave starts, dependent tasks skipped when predecessor failed, discoveries.ndjson accumulated across all waves.

---

}
```

**Success Criteria**: results.csv exported, context.md generated, summary displayed to user.

---

## Rules & Best Practices

### Core Rules

1. **Start Immediately**: First action is session initialization, then Phase 1
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged

4. **Context Propagation**: prev_context built from master CSV, not from memory
5. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson
6. **Skip on Failure**: If a dependency failed, skip the dependent task (don't attempt)
7. **Cleanup Temp Files**: Remove wave-{N}.csv and wave-{N}-results.csv after results are merged
8. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped

### Task Design

- **Granularity**: 3-10 tasks optimal; too many = overhead, too few = no parallelism benefit
- **Minimize Cross-Wave Deps**: More tasks in wave 1 = more parallelism
- **Specific Descriptions**: Agent sees only its CSV row + prev_context — make description self-contained
- **Context From ≠ Deps**: `deps` = execution order constraint; `context_from` = information flow. A task can have `context_from` without `deps` (it just reads previous findings but doesn't require them to be done first in its wave)
- **Concurrency Tuning**: `-c 1` for serial execution (maximum context sharing); `-c 8` for I/O-bound tasks

### Scenario Recommendations

| Scenario | Recommended Approach |
|
||||||
|----------|---------------------|
|
|----------|---------------------|
|
||||||
|
|||||||
@@ -1,797 +0,0 @@
---
name: unified-execute-with-file
description: Universal execution engine consuming .task/*.json directory format. Serial task execution with convergence verification, progress tracking via execution.md + execution-events.md.
argument-hint: "PLAN=\"<path/to/.task/>\" [--auto-commit] [--dry-run]"
---

# Unified-Execute-With-File Workflow

## Quick Start

Universal execution engine that consumes a **`.task/*.json`** directory and executes tasks serially with convergence verification and progress tracking.

```bash
# Execute from lite-plan output
/codex:unified-execute-with-file PLAN=".workflow/.lite-plan/LPLAN-auth-2025-01-21/.task/"

# Execute from workflow session output
/codex:unified-execute-with-file PLAN=".workflow/active/WFS-xxx/.task/" --auto-commit

# Execute a single task JSON file
/codex:unified-execute-with-file PLAN=".workflow/active/WFS-xxx/.task/IMPL-001.json" --dry-run

# Auto-detect from .workflow/ directories
/codex:unified-execute-with-file
```

**Core workflow**: Scan .task/*.json → Validate → Pre-Execution Analysis → Execute → Verify Convergence → Track Progress

**Key features**:
- **Directory-based**: Consumes `.task/` directory containing individual task JSON files
- **Convergence-driven**: Verifies each task's convergence criteria after execution
- **Serial execution**: Process tasks in topological order with dependency tracking
- **Dual progress tracking**: `execution.md` (overview) + `execution-events.md` (event stream)
- **Auto-commit**: Optional conventional commits per task
- **Dry-run mode**: Simulate execution without changes
- **Flexible input**: Accepts `.task/` directory path or a single `.json` file path

**Input format**: Each task is a standalone JSON file in `.task/` directory (e.g., `IMPL-001.json`). Use `plan-converter` to convert other formats to `.task/*.json` first.
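For illustration, a minimal `.task/IMPL-001.json` might look like the following (hypothetical values; the required fields mirror the schema checks in Step 1.2: `id`, `title`, `description`, `depends_on`, and `convergence`):

```json
{
  "id": "IMPL-001",
  "title": "Add JWT validation middleware",
  "description": "Create Express middleware that validates JWT tokens on protected routes",
  "depends_on": [],
  "files": [
    { "path": "src/middleware/auth.ts", "action": "create" }
  ],
  "convergence": {
    "criteria": ["Middleware rejects requests with invalid tokens"],
    "verification": "npm test",
    "definition_of_done": "All auth middleware tests pass"
  }
}
```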

## Overview

```
┌───────────────────────────────────────────────────────────┐
│                  UNIFIED EXECUTE WORKFLOW                 │
├───────────────────────────────────────────────────────────┤
│                                                           │
│  Phase 1: Load & Validate                                 │
│  ├─ Scan .task/*.json (one task per file)                 │
│  ├─ Validate schema (id, title, depends_on, convergence)  │
│  ├─ Detect cycles, build topological order                │
│  └─ Initialize execution.md + execution-events.md         │
│                                                           │
│  Phase 2: Pre-Execution Analysis                          │
│  ├─ Check file conflicts (multiple tasks → same file)     │
│  ├─ Verify file existence                                 │
│  ├─ Generate feasibility report                           │
│  └─ User confirmation (unless dry-run)                    │
│                                                           │
│  Phase 3: Serial Execution + Convergence Verification     │
│  For each task in topological order:                      │
│  ├─ Check dependencies satisfied                          │
│  ├─ Record START event                                    │
│  ├─ Execute directly (Read/Edit/Write/Grep/Glob/Bash)     │
│  ├─ Verify convergence.criteria[]                         │
│  ├─ Run convergence.verification command                  │
│  ├─ Record COMPLETE/FAIL event with verification results  │
│  ├─ Update _execution state in task JSON file             │
│  └─ Auto-commit if enabled                                │
│                                                           │
│  Phase 4: Completion                                      │
│  ├─ Finalize execution.md with summary statistics         │
│  ├─ Finalize execution-events.md with session footer      │
│  ├─ Write back .task/*.json with _execution states        │
│  └─ Offer follow-up actions                               │
│                                                           │
└───────────────────────────────────────────────────────────┘
```

## Output Structure

```
${projectRoot}/.workflow/.execution/EXEC-{slug}-{date}-{random}/
├── execution.md          # Plan overview + task table + summary
└── execution-events.md   # ⭐ Unified event log (single source of truth)
```

Additionally, each source `.task/*.json` file is updated in-place with `_execution` states.
---

## Implementation Details

### Session Initialization

##### Step 0: Initialize Session

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const projectRoot = Bash(`git rev-parse --show-toplevel 2>/dev/null || pwd`).trim()

// Parse arguments
const autoCommit = $ARGUMENTS.includes('--auto-commit')
const dryRun = $ARGUMENTS.includes('--dry-run')
const planMatch = $ARGUMENTS.match(/PLAN="([^"]+)"/) || $ARGUMENTS.match(/PLAN=(\S+)/)
let planPath = planMatch ? planMatch[1] : null

// Auto-detect if no PLAN specified
if (!planPath) {
  // Search in order (most recent first):
  //   .workflow/active/*/.task/
  //   .workflow/.lite-plan/*/.task/
  //   .workflow/.req-plan/*/.task/
  //   .workflow/.planning/*/.task/
  // Use most recently modified directory containing *.json files
}

// Resolve path
planPath = path.isAbsolute(planPath) ? planPath : `${projectRoot}/${planPath}`

// Generate session ID
const slug = path.basename(path.dirname(planPath)).toLowerCase().substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)
const random = Math.random().toString(36).substring(2, 9)
const sessionId = `EXEC-${slug}-${dateStr}-${random}`
const sessionFolder = `${projectRoot}/.workflow/.execution/${sessionId}`

Bash(`mkdir -p ${sessionFolder}`)
```
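One subtlety worth noting: `getUtc8ISOString` shifts the clock by eight hours rather than formatting a timezone, so the result still ends in `Z` even though it represents UTC+8 wall-clock time. A standalone check of that behavior (Node.js sketch):

```javascript
// Same helper as above; the "Z" suffix is a display convention here,
// not a true UTC instant (the value is UTC+8 wall-clock time).
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

const ts = getUtc8ISOString()
console.log(ts)                   // full ISO-shaped timestamp
console.log(ts.substring(0, 10))  // date prefix used in session IDs
```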

---

## Phase 1: Load & Validate

**Objective**: Scan `.task/` directory, parse individual task JSON files, validate schema and dependencies, build execution order.

### Step 1.1: Scan .task/ Directory and Parse Task Files

```javascript
// Determine if planPath is a directory or single file
const isDirectory = planPath.endsWith('/') || Bash(`test -d "${planPath}" && echo dir || echo file`).trim() === 'dir'

let taskFiles, tasks

if (isDirectory) {
  // Directory mode: scan for all *.json files
  taskFiles = Glob('*.json', planPath)
  if (taskFiles.length === 0) throw new Error(`No .json files found in ${planPath}`)

  tasks = taskFiles.map(filePath => {
    try {
      const content = Read(filePath)
      const task = JSON.parse(content)
      task._source_file = filePath  // Track source file for write-back
      return task
    } catch (e) {
      throw new Error(`${path.basename(filePath)}: Invalid JSON - ${e.message}`)
    }
  })
} else {
  // Single file mode: parse one task JSON
  try {
    const content = Read(planPath)
    const task = JSON.parse(content)
    task._source_file = planPath
    tasks = [task]
  } catch (e) {
    throw new Error(`${path.basename(planPath)}: Invalid JSON - ${e.message}`)
  }
}

if (tasks.length === 0) throw new Error('No tasks found')
```

### Step 1.2: Validate Schema

Validate against unified task schema: `~/.ccw/workflows/cli-templates/schemas/task-schema.json`

```javascript
const errors = []
tasks.forEach((task, i) => {
  const src = task._source_file ? path.basename(task._source_file) : `Task ${i + 1}`

  // Required fields (per task-schema.json)
  if (!task.id) errors.push(`${src}: missing 'id'`)
  if (!task.title) errors.push(`${src}: missing 'title'`)
  if (!task.description) errors.push(`${src}: missing 'description'`)
  if (!Array.isArray(task.depends_on)) errors.push(`${task.id || src}: missing 'depends_on' array`)

  // Context block (optional but validated if present)
  if (task.context) {
    if (task.context.requirements && !Array.isArray(task.context.requirements))
      errors.push(`${task.id}: context.requirements must be array`)
    if (task.context.acceptance && !Array.isArray(task.context.acceptance))
      errors.push(`${task.id}: context.acceptance must be array`)
    if (task.context.focus_paths && !Array.isArray(task.context.focus_paths))
      errors.push(`${task.id}: context.focus_paths must be array`)
  }

  // Convergence (required for execution verification)
  if (!task.convergence) {
    errors.push(`${task.id || src}: missing 'convergence'`)
  } else {
    if (!task.convergence.criteria?.length) errors.push(`${task.id}: empty convergence.criteria`)
    if (!task.convergence.verification) errors.push(`${task.id}: missing convergence.verification`)
    if (!task.convergence.definition_of_done) errors.push(`${task.id}: missing convergence.definition_of_done`)
  }

  // Flow control (optional but validated if present)
  if (task.flow_control) {
    if (task.flow_control.target_files && !Array.isArray(task.flow_control.target_files))
      errors.push(`${task.id}: flow_control.target_files must be array`)
  }

  // New unified schema fields (backward compatible addition)
  if (task.focus_paths && !Array.isArray(task.focus_paths))
    errors.push(`${task.id}: focus_paths must be array`)
  if (task.implementation && !Array.isArray(task.implementation))
    errors.push(`${task.id}: implementation must be array`)
  if (task.files && !Array.isArray(task.files))
    errors.push(`${task.id}: files must be array`)
})

if (errors.length) {
  // Report errors, stop execution
}
```

### Step 1.3: Build Execution Order

```javascript
// 1. Validate dependency references
const taskIds = new Set(tasks.map(t => t.id))
tasks.forEach(task => {
  task.depends_on.forEach(dep => {
    if (!taskIds.has(dep)) errors.push(`${task.id}: depends on unknown task '${dep}'`)
  })
})

// 2. Detect cycles (DFS)
function detectCycles(tasks) {
  const graph = new Map(tasks.map(t => [t.id, t.depends_on || []]))
  const visited = new Set(), inStack = new Set(), cycles = []
  function dfs(node, path) {
    if (inStack.has(node)) { cycles.push([...path, node].join(' → ')); return }
    if (visited.has(node)) return
    visited.add(node); inStack.add(node)
    ;(graph.get(node) || []).forEach(dep => dfs(dep, [...path, node]))
    inStack.delete(node)
  }
  tasks.forEach(t => { if (!visited.has(t.id)) dfs(t.id, []) })
  return cycles
}
const cycles = detectCycles(tasks)
if (cycles.length) errors.push(`Circular dependencies: ${cycles.join('; ')}`)

// 3. Topological sort
function topoSort(tasks) {
  const inDegree = new Map(tasks.map(t => [t.id, 0]))
  tasks.forEach(t => t.depends_on.forEach(dep => {
    inDegree.set(t.id, (inDegree.get(t.id) || 0) + 1)
  }))
  const queue = tasks.filter(t => inDegree.get(t.id) === 0).map(t => t.id)
  const order = []
  while (queue.length) {
    const id = queue.shift()
    order.push(id)
    tasks.forEach(t => {
      if (t.depends_on.includes(id)) {
        inDegree.set(t.id, inDegree.get(t.id) - 1)
        if (inDegree.get(t.id) === 0) queue.push(t.id)
      }
    })
  }
  return order
}
const executionOrder = topoSort(tasks)
```
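As a worked example of the sort above with three hypothetical tasks (runnable standalone in Node.js):

```javascript
// Same topoSort as above, inlined so this snippet is self-contained.
function topoSort(tasks) {
  const inDegree = new Map(tasks.map(t => [t.id, 0]))
  tasks.forEach(t => t.depends_on.forEach(() => {
    inDegree.set(t.id, inDegree.get(t.id) + 1)
  }))
  const queue = tasks.filter(t => inDegree.get(t.id) === 0).map(t => t.id)
  const order = []
  while (queue.length) {
    const id = queue.shift()
    order.push(id)
    tasks.forEach(t => {
      if (t.depends_on.includes(id)) {
        inDegree.set(t.id, inDegree.get(t.id) - 1)
        if (inDegree.get(t.id) === 0) queue.push(t.id)
      }
    })
  }
  return order
}

// Hypothetical task IDs: IMPL-003 depends on both others, IMPL-002 on IMPL-001.
const demo = [
  { id: 'IMPL-003', depends_on: ['IMPL-001', 'IMPL-002'] },
  { id: 'IMPL-001', depends_on: [] },
  { id: 'IMPL-002', depends_on: ['IMPL-001'] },
]
console.log(topoSort(demo))  // [ 'IMPL-001', 'IMPL-002', 'IMPL-003' ]
```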

### Step 1.4: Initialize Execution Artifacts

```javascript
// execution.md
const executionMd = `# Execution Overview

## Session Info
- **Session ID**: ${sessionId}
- **Plan Source**: ${planPath}
- **Started**: ${getUtc8ISOString()}
- **Total Tasks**: ${tasks.length}
- **Mode**: ${dryRun ? 'Dry-run (no changes)' : 'Direct inline execution'}
- **Auto-Commit**: ${autoCommit ? 'Enabled' : 'Disabled'}

## Task Overview

| # | ID | Title | Type | Priority | Effort | Dependencies | Status |
|---|-----|-------|------|----------|--------|--------------|--------|
${tasks.map((t, i) => `| ${i+1} | ${t.id} | ${t.title} | ${t.type || '-'} | ${t.priority || '-'} | ${t.effort || '-'} | ${t.depends_on.join(', ') || '-'} | pending |`).join('\n')}

## Pre-Execution Analysis
> Populated in Phase 2

## Execution Timeline
> Updated as tasks complete

## Execution Summary
> Updated after all tasks complete
`
Write(`${sessionFolder}/execution.md`, executionMd)

// execution-events.md
Write(`${sessionFolder}/execution-events.md`, `# Execution Events

**Session**: ${sessionId}
**Started**: ${getUtc8ISOString()}
**Source**: ${planPath}

---

`)
```

---

## Phase 2: Pre-Execution Analysis

**Objective**: Validate feasibility and identify issues before execution.

### Step 2.1: Analyze File Conflicts

```javascript
const fileTaskMap = new Map()  // file → [taskIds]
tasks.forEach(task => {
  (task.files || []).forEach(f => {
    const key = f.path
    if (!fileTaskMap.has(key)) fileTaskMap.set(key, [])
    fileTaskMap.get(key).push(task.id)
  })
})

const conflicts = []
fileTaskMap.forEach((taskIds, file) => {
  if (taskIds.length > 1) {
    conflicts.push({ file, tasks: taskIds, resolution: 'Execute in dependency order' })
  }
})

// Check file existence
const missingFiles = []
tasks.forEach(task => {
  (task.files || []).forEach(f => {
    if (f.action !== 'create' && !file_exists(f.path)) {
      missingFiles.push({ file: f.path, task: task.id })
    }
  })
})
```

### Step 2.2: Append to execution.md

```javascript
// Replace "Pre-Execution Analysis" section with:
// - File Conflicts (list or "No conflicts")
// - Missing Files (list or "All files exist")
// - Dependency Validation (errors or "No issues")
// - Execution Order (numbered list)
```

### Step 2.3: User Confirmation

```javascript
if (!dryRun) {
  request_user_input({
    questions: [{
      header: "Confirm",
      id: "confirm_execute",
      question: `Execute ${tasks.length} tasks?`,
      options: [
        { label: "Execute (Recommended)", description: "Start serial execution" },
        { label: "Dry Run", description: "Simulate without changes" },
        { label: "Cancel", description: "Abort execution" }
      ]
    }]
  })
  // answer.answers.confirm_execute.answers[0] → selected label
}
```

---

## Phase 3: Serial Execution + Convergence Verification

**Objective**: Execute tasks sequentially, verify convergence after each task, track all state.

**Execution Model**: Direct inline execution — main process reads, edits, writes files directly. No CLI delegation.

### Step 3.1: Execution Loop

```javascript
const completedTasks = new Set()
const failedTasks = new Set()
const skippedTasks = new Set()

for (const taskId of executionOrder) {
  const task = tasks.find(t => t.id === taskId)
  const startTime = getUtc8ISOString()

  // 1. Check dependencies
  const unmetDeps = task.depends_on.filter(dep => !completedTasks.has(dep))
  if (unmetDeps.length) {
    appendToEvents(task, 'BLOCKED', `Unmet dependencies: ${unmetDeps.join(', ')}`)
    skippedTasks.add(task.id)
    task._execution = { status: 'skipped', executed_at: startTime,
      result: { success: false, error: `Blocked by: ${unmetDeps.join(', ')}` } }
    continue
  }

  // 2. Record START event
  appendToEvents(`## ${getUtc8ISOString()} — ${task.id}: ${task.title}

**Type**: ${task.type || '-'} | **Priority**: ${task.priority || '-'} | **Effort**: ${task.effort || '-'}
**Status**: ⏳ IN PROGRESS
**Files**: ${(task.files || []).map(f => f.path).join(', ') || 'To be determined'}
**Description**: ${task.description}
**Convergence Criteria**:
${task.convergence.criteria.map(c => `- [ ] ${c}`).join('\n')}

### Execution Log
`)

  if (dryRun) {
    // Simulate: mark as completed without changes
    appendToEvents(`\n**Status**: ⏭ DRY RUN (no changes)\n\n---\n`)
    task._execution = { status: 'completed', executed_at: startTime,
      result: { success: true, summary: 'Dry run — no changes made' } }
    completedTasks.add(task.id)
    continue
  }

  // 3. Execute task directly
  //    - Read each file in task.files (if specified)
  //    - Analyze what changes satisfy task.description + task.convergence.criteria
  //    - If task.files has detailed changes, use them as guidance
  //    - Apply changes using Edit (preferred) or Write (for new files)
  //    - Use Grep/Glob/mcp__ace-tool for discovery if needed
  //    - Use Bash for build/test commands
  //    - Produce a short changeSummary string for the event record below

  // Dual-path field access (supports both unified and legacy 6-field schema)
  // const targetFiles = task.files?.map(f => f.path) || task.flow_control?.target_files || []
  // const acceptanceCriteria = task.convergence?.criteria || task.context?.acceptance || []
  // const requirements = task.implementation || task.context?.requirements || []
  // const focusPaths = task.focus_paths || task.context?.focus_paths || []

  // 4. Verify convergence
  const convergenceResults = verifyConvergence(task)

  const endTime = getUtc8ISOString()
  const filesModified = getModifiedFiles()

  if (convergenceResults.allPassed) {
    // 5a. Record SUCCESS
    appendToEvents(`
**Status**: ✅ COMPLETED
**Duration**: ${calculateDuration(startTime, endTime)}
**Files Modified**: ${filesModified.join(', ')}

#### Changes Summary
${changeSummary}

#### Convergence Verification
${task.convergence.criteria.map((c, i) => `- [${convergenceResults.verified[i] ? 'x' : ' '}] ${c}`).join('\n')}
- **Verification**: ${convergenceResults.verificationOutput}
- **Definition of Done**: ${task.convergence.definition_of_done}

---
`)
    task._execution = {
      status: 'completed', executed_at: endTime,
      result: {
        success: true,
        files_modified: filesModified,
        summary: changeSummary,
        convergence_verified: convergenceResults.verified
      }
    }
    completedTasks.add(task.id)
  } else {
    // 5b. Record FAILURE
    handleTaskFailure(task, convergenceResults, startTime, endTime)
  }

  // 6. Auto-commit if enabled
  if (autoCommit && task._execution.status === 'completed') {
    autoCommitTask(task, filesModified)
  }
}
```

### Step 3.2: Convergence Verification

```javascript
function verifyConvergence(task) {
  const results = {
    verified: [],            // boolean[] per criterion
    verificationOutput: '',  // output of verification command
    allPassed: true
  }

  // 1. Check each criterion
  //    For each criterion in task.convergence.criteria:
  //    - If it references a testable condition, check it
  //    - If it's manual, mark as verified based on changes made
  //    - Record true/false per criterion
  task.convergence.criteria.forEach(criterion => {
    const passed = evaluateCriterion(criterion, task)
    results.verified.push(passed)
    if (!passed) results.allPassed = false
  })

  // 2. Run verification command (if executable)
  const verification = task.convergence.verification
  if (isExecutableCommand(verification)) {
    try {
      const output = Bash(verification, { timeout: 120000 })
      results.verificationOutput = `${verification} → PASS`
    } catch (e) {
      results.verificationOutput = `${verification} → FAIL: ${e.message}`
      results.allPassed = false
    }
  } else {
    results.verificationOutput = `Manual: ${verification}`
  }

  return results
}

function isExecutableCommand(verification) {
  // Detect executable patterns: npm, npx, jest, tsc, curl, pytest, go test, etc.
  return /^(npm|npx|jest|tsc|eslint|pytest|go\s+test|cargo\s+test|curl|make)/.test(verification.trim())
}
```
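For instance, the pattern above classifies these verification strings as follows (runnable check):

```javascript
// Same regex as isExecutableCommand above, extracted for a quick demo.
const isExecutableCommand = v =>
  /^(npm|npx|jest|tsc|eslint|pytest|go\s+test|cargo\s+test|curl|make)/.test(v.trim())

console.log(isExecutableCommand('npm test'))                       // true
console.log(isExecutableCommand('go test ./...'))                  // true
console.log(isExecutableCommand('Manually review the UI layout'))  // false
```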

### Step 3.3: Failure Handling

```javascript
function handleTaskFailure(task, convergenceResults, startTime, endTime) {
  appendToEvents(`
**Status**: ❌ FAILED
**Duration**: ${calculateDuration(startTime, endTime)}
**Error**: Convergence verification failed

#### Failed Criteria
${task.convergence.criteria.map((c, i) => `- [${convergenceResults.verified[i] ? 'x' : ' '}] ${c}`).join('\n')}
- **Verification**: ${convergenceResults.verificationOutput}

---
`)

  task._execution = {
    status: 'failed', executed_at: endTime,
    result: {
      success: false,
      error: 'Convergence verification failed',
      convergence_verified: convergenceResults.verified
    }
  }
  failedTasks.add(task.id)

  // Ask user
  request_user_input({
    questions: [{
      header: "Failure",
      id: "handle_failure",
      question: `Task ${task.id} failed convergence verification. How to proceed?`,
      options: [
        { label: "Skip & Continue (Recommended)", description: "Skip this task, continue with next" },
        { label: "Retry", description: "Retry this task" },
        { label: "Abort", description: "Stop execution, keep progress" }
      ]
    }]
  })
  // answer.answers.handle_failure.answers[0] → selected label
}
```

### Step 3.4: Auto-Commit

```javascript
function autoCommitTask(task, filesModified) {
  Bash(`git add ${filesModified.join(' ')}`)

  const commitType = {
    fix: 'fix', refactor: 'refactor', feature: 'feat',
    enhancement: 'feat', testing: 'test', infrastructure: 'chore'
  }[task.type] || 'chore'

  const scope = inferScope(filesModified)

  Bash(`git commit -m "$(cat <<'EOF'
${commitType}(${scope}): ${task.title}

Task: ${task.id}
Source: ${path.basename(planPath)}
EOF
)"`)

  appendToEvents(`**Commit**: \`${commitType}(${scope}): ${task.title}\`\n`)
}
```
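`inferScope` is left undefined above; a minimal sketch (hypothetical helper, not the actual implementation) could derive the conventional-commit scope from the top-level directory shared by the modified files, with a fallback when they diverge:

```javascript
// Hypothetical inferScope: use the common first path segment as the scope,
// falling back to 'core' when the modified files span several directories.
function inferScope(filesModified) {
  const tops = new Set(filesModified.map(f => f.split('/')[0]))
  return tops.size === 1 ? [...tops][0] : 'core'
}

console.log(inferScope(['src/auth.ts', 'src/session.ts']))  // src
console.log(inferScope(['src/auth.ts', 'docs/auth.md']))    // core
```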

---

## Phase 4: Completion

**Objective**: Finalize all artifacts, write back execution state, offer follow-up actions.

### Step 4.1: Finalize execution.md

Append summary statistics to execution.md:

```javascript
const summary = `
## Execution Summary

- **Completed**: ${getUtc8ISOString()}
- **Total Tasks**: ${tasks.length}
- **Succeeded**: ${completedTasks.size}
- **Failed**: ${failedTasks.size}
- **Skipped**: ${skippedTasks.size}
- **Success Rate**: ${Math.round(completedTasks.size / tasks.length * 100)}%

### Task Results

| ID | Title | Status | Convergence | Files Modified |
|----|-------|--------|-------------|----------------|
${tasks.map(t => {
  const ex = t._execution || {}
  const convergenceStatus = ex.result?.convergence_verified
    ? `${ex.result.convergence_verified.filter(v => v).length}/${ex.result.convergence_verified.length}`
    : '-'
  return `| ${t.id} | ${t.title} | ${ex.status || 'pending'} | ${convergenceStatus} | ${(ex.result?.files_modified || []).join(', ') || '-'} |`
}).join('\n')}

${failedTasks.size > 0 ? `### Failed Tasks

${[...failedTasks].map(id => {
  const t = tasks.find(t => t.id === id)
  return `- **${t.id}**: ${t.title} — ${t._execution?.result?.error || 'Unknown'}`
}).join('\n')}
` : ''}
### Artifacts
- **Plan Source**: ${planPath}
- **Execution Overview**: ${sessionFolder}/execution.md
- **Execution Events**: ${sessionFolder}/execution-events.md
`
// Append to execution.md
```

### Step 4.2: Finalize execution-events.md

```javascript
appendToEvents(`
---

# Session Summary

- **Session**: ${sessionId}
- **Completed**: ${getUtc8ISOString()}
- **Tasks**: ${completedTasks.size} completed, ${failedTasks.size} failed, ${skippedTasks.size} skipped
- **Total Events**: ${completedTasks.size + failedTasks.size + skippedTasks.size}
`)
```

### Step 4.3: Write Back .task/*.json with _execution

Update each source task JSON file with execution states:

```javascript
tasks.forEach(task => {
  const filePath = task._source_file
  if (!filePath) return

  // Read current file to preserve formatting and non-execution fields
  const current = JSON.parse(Read(filePath))

  // Update _execution status and result
  current._execution = {
    status: task._execution?.status || 'pending',
    executed_at: task._execution?.executed_at || null,
    result: task._execution?.result || null
  }

  // Write back individual task file
  Write(filePath, JSON.stringify(current, null, 2))
})
// Each task JSON file now has _execution: { status, executed_at, result }
```
|
|
||||||
### Step 4.4: Post-Completion Options
|
|
||||||
|
|
||||||
```javascript
|
|
||||||
request_user_input({
|
|
||||||
questions: [{
|
|
||||||
header: "Post Execute",
|
|
||||||
id: "post_execute",
|
|
||||||
question: `Execution complete: ${completedTasks.size}/${tasks.length} succeeded. Next step?`,
|
|
||||||
options: [
|
|
||||||
{ label: "Done (Recommended)", description: "End workflow" },
|
|
||||||
{ label: "Retry Failed", description: `Re-execute ${failedTasks.size} failed tasks` },
|
|
||||||
{ label: "Create Issue", description: "Create issue from failed tasks" }
|
|
||||||
]
|
|
||||||
}]
|
|
||||||
})
|
|
||||||
// answer.answers.post_execute.answers[0] → selected label
|
|
||||||
```

| Selection | Action |
|-----------|--------|
| Retry Failed | Filter tasks with `_execution.status === 'failed'`, re-execute, append `[RETRY]` events |
| View Events | Display execution-events.md content |
| Create Issue | `Skill(skill="issue:new", args="...")` from failed task details |
| Done | Display artifact paths, sync session state, end workflow |

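The "Retry Failed" path reduces to a filter over the in-memory task list — a minimal sketch, assuming tasks carry the `_execution` shape written back in Step 4.3:

```javascript
// Select only tasks whose last run failed; these are re-queued for execution.
function selectRetryable(tasks) {
  return tasks.filter(t => t._execution?.status === 'failed')
}

const tasks = [
  { id: 'T1', _execution: { status: 'completed' } },
  { id: 'T2', _execution: { status: 'failed' } },
  { id: 'T3', _execution: null }  // never executed → not a retry candidate
]
const retry = selectRetryable(tasks)
```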
### Step 4.5: Sync Session State

After completion (regardless of user selection), unless `--dry-run`:

```bash
$session-sync -y "Execution complete: {completed}/{total} tasks succeeded"
```

Updates specs/*.md with execution learnings and project-tech.json with a development index entry.

---

## Configuration

| Flag | Default | Description |
|------|---------|-------------|
| `PLAN="..."` | auto-detect | Path to `.task/` directory or single task `.json` file |
| `--auto-commit` | false | Commit changes after each successful task |
| `--dry-run` | false | Simulate execution without making changes |

### Plan Auto-Detection Order

When no `PLAN` is specified, search for `.task/` directories in this order (most recent first):

1. `.workflow/active/*/.task/`
2. `.workflow/.lite-plan/*/.task/`
3. `.workflow/.req-plan/*/.task/`
4. `.workflow/.planning/*/.task/`

**If the source is not `.task/*.json`**: Run `plan-converter` first to generate a `.task/` directory.

---

## Error Handling & Recovery

| Situation | Action | Recovery |
|-----------|--------|----------|
| `.task/` directory not found | Report error with path | Check path, run `plan-converter` |
| Invalid JSON in task file | Report filename and error | Fix the task JSON file manually |
| Missing convergence | Report validation error | Run `plan-converter` to add convergence |
| Circular dependency | Stop, report cycle path | Fix dependencies in task JSON |
| Task execution fails | Record in events, ask user | Retry, skip, accept, or abort |
| Convergence verification fails | Mark task failed, ask user | Fix code and retry, or accept |
| Verification command timeout | Mark as unverified | Manual verification needed |
| File conflict during execution | Document in events | Resolve in dependency order |
| All tasks fail | Report, suggest plan review | Re-analyze or manual intervention |

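The circular-dependency check above can be implemented as a depth-first search over task dependency edges — a minimal sketch (the `depends_on` field name is an assumption about the task schema):

```javascript
// Detect a cycle in task dependencies via DFS with three-state marking.
// Returns the cycle path when found (e.g. ['A', 'B', 'A']), else null.
function findCycle(tasks) {
  const deps = new Map(tasks.map(t => [t.id, t.depends_on || []]))
  const state = new Map()  // undefined = unvisited, 1 = in progress, 2 = done
  const stack = []

  function visit(id) {
    if (state.get(id) === 2) return null
    if (state.get(id) === 1) return [...stack.slice(stack.indexOf(id)), id]
    state.set(id, 1)
    stack.push(id)
    for (const dep of deps.get(id) || []) {
      const cycle = visit(dep)
      if (cycle) return cycle
    }
    stack.pop()
    state.set(id, 2)
    return null
  }

  for (const t of tasks) {
    const cycle = visit(t.id)
    if (cycle) return cycle
  }
  return null
}
```

The returned path can be reported directly as the "cycle path" the error table calls for.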
---

## Best Practices

### Before Execution

1. **Validate Plan**: Use `--dry-run` first to check plan feasibility
2. **Check Convergence**: Ensure all tasks have meaningful convergence criteria
3. **Review Dependencies**: Verify the execution order makes sense
4. **Backup**: Commit pending changes before starting
5. **Convert First**: Use `plan-converter` for non-`.task/` sources

### During Execution

1. **Monitor Events**: Check execution-events.md for real-time progress
2. **Handle Failures**: Review convergence failures carefully before deciding
3. **Check Commits**: Verify auto-commits are correct if enabled

### After Execution

1. **Review Summary**: Check execution.md statistics and failed tasks
2. **Verify Changes**: Inspect that modified files match expectations
3. **Check Task Files**: Review `_execution` states in `.task/*.json` files
4. **Next Steps**: Use completion options for follow-up

---

**Now execute unified-execute-with-file for**: $PLAN