diff --git a/.codex/skills/analyze-with-file/EXECUTE.md b/.codex/skills/analyze-with-file/EXECUTE.md index 4cbd5407..4eb5ae5e 100644 --- a/.codex/skills/analyze-with-file/EXECUTE.md +++ b/.codex/skills/analyze-with-file/EXECUTE.md @@ -7,14 +7,14 @@ ## Execution Flow ``` -conclusions.json → execution-plan.jsonl → User Confirmation → Direct Inline Execution → execution.md + execution-events.md +conclusions.json → tasks.jsonl → User Confirmation → Direct Inline Execution → execution.md + execution-events.md ``` --- -## Step 1: Generate execution-plan.jsonl +## Step 1: Generate tasks.jsonl -Convert `conclusions.json` recommendations directly into JSONL execution list. Each line is a self-contained task with convergence criteria. +Convert `conclusions.json` recommendations directly into unified JSONL task format. Each line is a self-contained task with convergence criteria, compatible with `unified-execute-with-file`. **Conversion Logic**: @@ -32,22 +32,28 @@ const tasks = conclusions.recommendations.map((rec, index) => ({ description: rec.rationale, type: inferTaskType(rec), // fix | refactor | feature | enhancement | testing priority: rec.priority, // high | medium | low - files_to_modify: extractFilesFromEvidence(rec, explorations), + effort: inferEffort(rec), // small | medium | large + files: extractFilesFromEvidence(rec, explorations).map(f => ({ + path: f, + action: 'modify' // modify | create | delete + })), depends_on: [], // Serial by default; add dependencies if task ordering matters convergence: { criteria: generateCriteria(rec), // Testable conditions verification: generateVerification(rec), // Executable command or steps definition_of_done: generateDoD(rec) // Business language }, - context: { - source_conclusions: conclusions.key_conclusions, - evidence: rec.evidence || [] + evidence: rec.evidence || [], + source: { + tool: 'analyze-with-file', + session_id: sessionId, + original_id: `TASK-${String(index + 1).padStart(3, '0')}` } })) // Write one task per 
line const jsonlContent = tasks.map(t => JSON.stringify(t)).join('\n') -Write(`${sessionFolder}/execution-plan.jsonl`, jsonlContent) +Write(`${sessionFolder}/tasks.jsonl`, jsonlContent) ``` **Task Type Inference**: @@ -64,7 +70,15 @@ Write(`${sessionFolder}/execution-plan.jsonl`, jsonlContent) - Parse evidence from `explorations.json` or `perspectives.json` - Match recommendation action keywords to `relevant_files` - If no specific files found, use pattern matching from findings -- Include both files to modify and files as read-only context +- Return file paths as strings (converted to `{path, action}` objects in the task) + +**Effort Inference**: + +| Signal | Effort | +|--------|--------| +| Priority high + multiple files | `large` | +| Priority medium or 1-2 files | `medium` | +| Priority low or single file | `small` | **Convergence Generation**: @@ -92,13 +106,13 @@ tasks.forEach(task => { }) ``` -**Output**: `${sessionFolder}/execution-plan.jsonl` +**Output**: `${sessionFolder}/tasks.jsonl` **JSONL Schema** (one task per line): ```jsonl -{"id":"TASK-001","title":"Fix authentication token refresh","description":"Token refresh fails silently when...","type":"fix","priority":"high","files_to_modify":["src/auth/token.ts","src/middleware/auth.ts"],"depends_on":[],"convergence":{"criteria":["Token refresh returns new valid token","Expired token triggers refresh automatically","Failed refresh redirects to login"],"verification":"jest --testPathPattern=token.test.ts","definition_of_done":"Users remain logged in across token expiration without manual re-login"},"context":{"source_conclusions":[...],"evidence":[...]}} -{"id":"TASK-002","title":"Add input validation to user endpoints","description":"Missing validation allows...","type":"enhancement","priority":"medium","files_to_modify":["src/routes/user.ts","src/validators/user.ts"],"depends_on":["TASK-001"],"convergence":{"criteria":["All user inputs validated against schema","Invalid inputs return 400 with specific 
error message","SQL injection patterns rejected"],"verification":"jest --testPathPattern=user.validation.test.ts","definition_of_done":"All user-facing inputs are validated with clear error feedback"},"context":{"source_conclusions":[...],"evidence":[...]}} +{"id":"TASK-001","title":"Fix authentication token refresh","description":"Token refresh fails silently when...","type":"fix","priority":"high","effort":"large","files":[{"path":"src/auth/token.ts","action":"modify"},{"path":"src/middleware/auth.ts","action":"modify"}],"depends_on":[],"convergence":{"criteria":["Token refresh returns new valid token","Expired token triggers refresh automatically","Failed refresh redirects to login"],"verification":"jest --testPathPattern=token.test.ts","definition_of_done":"Users remain logged in across token expiration without manual re-login"},"evidence":[...],"source":{"tool":"analyze-with-file","session_id":"ANL-xxx","original_id":"TASK-001"}} +{"id":"TASK-002","title":"Add input validation to user endpoints","description":"Missing validation allows...","type":"enhancement","priority":"medium","effort":"medium","files":[{"path":"src/routes/user.ts","action":"modify"},{"path":"src/validators/user.ts","action":"create"}],"depends_on":["TASK-001"],"convergence":{"criteria":["All user inputs validated against schema","Invalid inputs return 400 with specific error message","SQL injection patterns rejected"],"verification":"jest --testPathPattern=user.validation.test.ts","definition_of_done":"All user-facing inputs are validated with clear error feedback"},"evidence":[...],"source":{"tool":"analyze-with-file","session_id":"ANL-xxx","original_id":"TASK-002"}} ``` --- @@ -110,7 +124,7 @@ Validate feasibility before starting execution. 
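Step 2.1 builds the order via a `topoSort(tasks)` helper that is referenced but never defined in this skill. A minimal Kahn's-algorithm sketch over the `depends_on` arrays (illustrative only; the actual helper may be implemented differently):

```javascript
// Kahn's algorithm over depends_on edges — a sketch of the topoSort(tasks)
// helper referenced below (not the skill's mandated implementation).
function topoSort(tasks) {
  // indegree = number of unmet dependencies per task
  const indegree = new Map(tasks.map(t => [t.id, (t.depends_on || []).length]))
  const queue = tasks.filter(t => indegree.get(t.id) === 0).map(t => t.id)
  const order = []
  while (queue.length > 0) {
    const id = queue.shift()
    order.push(id)
    // Release every task that depended on the task just scheduled
    for (const t of tasks) {
      if ((t.depends_on || []).includes(id)) {
        indegree.set(t.id, indegree.get(t.id) - 1)
        if (indegree.get(t.id) === 0) queue.push(t.id)
      }
    }
  }
  // Leftover tasks are part of a dependency cycle
  if (order.length !== tasks.length) throw new Error('Circular dependencies detected')
  return order
}
```

The final length check doubles as cycle detection, matching the error-handling rule that circular dependencies are a stop condition.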
Reference: unified-execute-with- ##### Step 2.1: Build Execution Order ```javascript -const tasks = Read(`${sessionFolder}/execution-plan.jsonl`) +const tasks = Read(`${sessionFolder}/tasks.jsonl`) .split('\n').filter(l => l.trim()).map(l => JSON.parse(l)) // 1. Dependency validation @@ -169,9 +183,9 @@ const executionOrder = topoSort(tasks) // Check files modified by multiple tasks const fileTaskMap = new Map() // file → [taskIds] tasks.forEach(task => { - task.files_to_modify.forEach(file => { - if (!fileTaskMap.has(file)) fileTaskMap.set(file, []) - fileTaskMap.get(file).push(task.id) + (task.files || []).forEach(f => { + if (!fileTaskMap.has(f.path)) fileTaskMap.set(f.path, []) + fileTaskMap.get(f.path).push(task.id) }) }) @@ -185,8 +199,10 @@ fileTaskMap.forEach((taskIds, file) => { // Check file existence const missingFiles = [] tasks.forEach(task => { - task.files_to_modify.forEach(file => { - if (!file_exists(file)) missingFiles.push({ file, task: task.id, action: "Will be created" }) + (task.files || []).forEach(f => { + if (f.action !== 'create' && !file_exists(f.path)) { + missingFiles.push({ file: f.path, task: task.id, action: "Will be created" }) + } }) }) ``` @@ -204,7 +220,7 @@ const executionMd = `# Execution Overview ## Session Info - **Session ID**: ${sessionId} -- **Plan Source**: execution-plan.jsonl (from analysis conclusions) +- **Plan Source**: tasks.jsonl (from analysis conclusions) - **Started**: ${getUtc8ISOString()} - **Total Tasks**: ${tasks.length} - **Execution Mode**: Direct inline (serial) @@ -254,7 +270,7 @@ const eventsHeader = `# Execution Events **Session**: ${sessionId} **Started**: ${getUtc8ISOString()} -**Source**: execution-plan.jsonl +**Source**: tasks.jsonl --- @@ -341,15 +357,16 @@ for (const taskId of executionOrder) { **Type**: ${task.type} | **Priority**: ${task.priority} **Status**: ⏳ IN PROGRESS -**Files**: ${task.files_to_modify.join(', ')} +**Files**: ${(task.files || []).map(f => f.path).join(', ')} 
**Description**: ${task.description} ### Execution Log `) // 3. Execute task directly - // - Read each file in task.files_to_modify + // - Read each file in task.files (if specified) // - Analyze what changes satisfy task.description + task.convergence.criteria + // - If task.files has detailed changes, use them as guidance // - Apply changes using Edit (preferred) or Write (for new files) // - Use Grep/Glob for discovery if needed // - Use Bash for build/test verification commands @@ -409,6 +426,12 @@ ${attemptedChanges} updateTaskStatus(task.id, 'failed', [], errorMessage) failedTasks.add(task.id) +// Set _execution state +task._execution = { + status: 'failed', executed_at: getUtc8ISOString(), + result: { success: false, error: errorMessage, files_modified: [] } +} + // Ask user how to proceed if (!autoYes) { const decision = AskUserQuestion({ @@ -431,7 +454,7 @@ if (!autoYes) { After each successful task, optionally commit changes: ```javascript -if (autoCommit && task.status === 'completed') { +if (autoCommit && task._execution?.status === 'completed') { // 1. Stage modified files Bash(`git add ${filesModified.join(' ')}`) @@ -473,17 +496,17 @@ const summary = ` | ID | Title | Status | Files Modified | |----|-------|--------|----------------| -${tasks.map(t => `| ${t.id} | ${t.title} | ${t.status} | ${(t.result?.files_modified || []).join(', ') || '-'} |`).join('\n')} +${tasks.map(t => `| ${t.id} | ${t.title} | ${t._execution?.status || 'pending'} | ${(t._execution?.result?.files_modified || []).join(', ') || '-'} |`).join('\n')} ${failedTasks.size > 0 ? 
`### Failed Tasks Requiring Attention
${[...failedTasks].map(id => {
  const t = tasks.find(t => t.id === id)
-  return `- **${t.id}**: ${t.title} — ${t.result?.error || 'Unknown error'}`
+  return `- **${t.id}**: ${t.title} — ${t._execution?.result?.error || 'Unknown error'}`
}).join('\n')}
` : ''}

### Artifacts
-- **Execution Plan**: ${sessionFolder}/execution-plan.jsonl
+- **Execution Plan**: ${sessionFolder}/tasks.jsonl
- **Execution Overview**: ${sessionFolder}/execution.md
- **Execution Events**: ${sessionFolder}/execution-events.md
`
@@ -507,23 +530,26 @@ appendToEvents(`
`)
```

-##### Step 6.3: Update execution-plan.jsonl
+##### Step 6.3: Update tasks.jsonl

-Rewrite JSONL with execution results per task:
+Rewrite JSONL with `_execution` state per task:

```javascript
const updatedJsonl = tasks.map(task => JSON.stringify({
  ...task,
-  status: task.status, // "completed" | "failed" | "skipped" | "pending"
-  executed_at: task.executed_at, // ISO timestamp
-  result: {
-    success: task.status === 'completed',
-    files_modified: task.result?.files_modified || [],
-    summary: task.result?.summary || '',
-    error: task.result?.error || null
+  _execution: {
+    status: task._execution?.status || 'pending', // "completed" | "failed" | "skipped" | "pending"
+    executed_at: task._execution?.executed_at || null, // ISO timestamp
+    result: {
+      success: task._execution?.status === 'completed',
+      files_modified: task._execution?.result?.files_modified || [],
+      summary: task._execution?.result?.summary || '',
+      error: task._execution?.result?.error || null,
+      convergence_verified: task._execution?.result?.convergence_verified || []
+    }
+  }
})).join('\n')

-Write(`${sessionFolder}/execution-plan.jsonl`, updatedJsonl)
+Write(`${sessionFolder}/tasks.jsonl`, updatedJsonl)
```

---

@@ -560,10 +586,10 @@ if (!autoYes) {
| Done | Display artifact paths, end workflow |

**Retry Logic**:
-- Filter tasks with `status: "failed"`
+- Filter tasks with `_execution.status === 'failed'`
- Re-execute in original dependency order
- Append retry events to execution-events.md with `[RETRY]` prefix
-- Update 
execution.md and execution-plan.jsonl +- Update execution.md and tasks.jsonl --- @@ -574,14 +600,14 @@ When Quick Execute is activated, session folder expands with: ``` {projectRoot}/.workflow/.analysis/ANL-{slug}-{date}/ ├── ... # Phase 1-4 artifacts -├── execution-plan.jsonl # ⭐ JSONL execution list (one task per line, with convergence) +├── tasks.jsonl # ⭐ Unified JSONL (one task per line, with convergence + source) ├── execution.md # Plan overview + task table + execution summary └── execution-events.md # ⭐ Unified event log (all task executions with details) ``` | File | Purpose | |------|---------| -| `execution-plan.jsonl` | Self-contained task list from conclusions, each line has convergence criteria | +| `tasks.jsonl` | Unified task list from conclusions, each line has convergence criteria and source provenance | | `execution.md` | Overview: plan source, task table, pre-execution analysis, execution timeline, final summary | | `execution-events.md` | Chronological event stream: task start/complete/fail with details, changes, verification results | @@ -594,7 +620,7 @@ When Quick Execute is activated, session folder expands with: **Session**: ANL-xxx-2025-01-21 **Started**: 2025-01-21T10:00:00+08:00 -**Source**: execution-plan.jsonl +**Source**: tasks.jsonl --- @@ -655,9 +681,9 @@ When Quick Execute is activated, session folder expands with: |-----------|--------|----------| | Task execution fails | Record failure in execution-events.md, ask user | Retry, skip, or abort | | Verification command fails | Mark criterion as unverified, continue | Note in events, manual check needed | -| No recommendations in conclusions | Cannot generate execution-plan.jsonl | Inform user, suggest lite-plan | +| No recommendations in conclusions | Cannot generate tasks.jsonl | Inform user, suggest lite-plan | | File conflict during execution | Document in execution-events.md | Resolve in dependency order | -| Circular dependencies detected | Stop, report error | Fix dependencies 
in execution-plan.jsonl | +| Circular dependencies detected | Stop, report error | Fix dependencies in tasks.jsonl | | All tasks fail | Record all failures, suggest analysis review | Re-run analysis or manual intervention | | Missing target file | Attempt to create if task.type is "feature" | Log as warning for other types | @@ -665,9 +691,10 @@ When Quick Execute is activated, session folder expands with: ## Success Criteria -- `execution-plan.jsonl` generated with convergence criteria per task +- `tasks.jsonl` generated with convergence criteria and source provenance per task - `execution.md` contains plan overview, task table, pre-execution analysis, final summary - `execution-events.md` contains chronological event stream with convergence verification - All tasks executed (or explicitly skipped) via direct inline execution - Each task's convergence criteria checked and recorded +- `_execution` state written back to tasks.jsonl after completion - User informed of results and next steps diff --git a/.codex/skills/collaborative-plan-with-file/SKILL.md b/.codex/skills/collaborative-plan-with-file/SKILL.md index 822aa10a..3f86a671 100644 --- a/.codex/skills/collaborative-plan-with-file/SKILL.md +++ b/.codex/skills/collaborative-plan-with-file/SKILL.md @@ -55,11 +55,11 @@ The key innovation is the **Plan Note** architecture — a shared collaborative │ │ │ Phase 2: Serial Domain Planning │ │ ┌──────────────┐ │ -│ │ Domain 1 │→ Explore codebase → Generate plan.json │ +│ │ Domain 1 │→ Explore codebase → Generate tasks.jsonl │ │ │ Section 1 │→ Fill task pool + evidence in plan-note.md │ │ └──────┬───────┘ │ │ ┌──────▼───────┐ │ -│ │ Domain 2 │→ Explore codebase → Generate plan.json │ +│ │ Domain 2 │→ Explore codebase → Generate tasks.jsonl │ │ │ Section 2 │→ Fill task pool + evidence in plan-note.md │ │ └──────┬───────┘ │ │ ┌──────▼───────┐ │ @@ -72,6 +72,7 @@ The key innovation is the **Plan Note** architecture — a shared collaborative │ └─ Update plan-note.md conflict 
section │
│                                                              │
-│ Phase 4: Completion (No Merge)                               │
+│ Phase 4: Completion (Merge + Summary)                        │
+│ ├─ Merge domain tasks.jsonl → session tasks.jsonl            │
│ ├─ Generate plan.md (human-readable)                         │
│ └─ Ready for execution                                       │
│                                                              │
@@ -86,10 +87,11 @@ The key innovation is the **Plan Note** architecture — a shared collaborative
├── requirement-analysis.json      # Phase 1: Sub-domain assignments
├── domains/                       # Phase 2: Per-domain plans
│   ├── {domain-1}/
-│   │   └── plan.json              # Detailed plan
+│   │   └── tasks.jsonl            # Unified JSONL (one task per line)
│   ├── {domain-2}/
-│   │   └── plan.json
+│   │   └── tasks.jsonl
│   └── ...
+├── tasks.jsonl                    # ⭐ Merged unified JSONL (all domains)
├── conflicts.json                 # Phase 3: Conflict report
└── plan.md                        # Phase 4: Human-readable summary
```
@@ -107,7 +109,7 @@ The key innovation is the **Plan Note** architecture — a shared collaborative
| Artifact | Purpose |
|----------|---------|
-| `domains/{domain}/plan.json` | Detailed implementation plan per domain |
+| `domains/{domain}/tasks.jsonl` | Unified JSONL per domain (one task per line with convergence) |
| Updated `plan-note.md` | Task pool and evidence sections filled for each domain |

### Phase 3: Conflict Detection
@@ -121,6 +123,7 @@ The key innovation is the **Plan Note** architecture — a shared collaborative
| Artifact | Purpose |
|----------|---------|
+| `tasks.jsonl` | Merged unified JSONL from all domains (consumable by unified-execute) |
| `plan.md` | Human-readable summary with requirements, tasks, and conflicts |

---

@@ -302,38 +305,45 @@ for (const sub of subDomains) {
  //    - Integration points with other domains
  //    - Architecture constraints

-  // 3. 
Generate detailed plan.json - const plan = { - session_id: sessionId, - focus_area: sub.focus_area, - description: sub.description, - task_id_range: sub.task_id_range, - generated_at: getUtc8ISOString(), - tasks: [ - // For each task within the assigned ID range: - { - id: `TASK-${String(sub.task_id_range[0]).padStart(3, '0')}`, - title: "...", - complexity: "Low | Medium | High", - depends_on: [], // TASK-xxx references - scope: "...", // Brief scope description - modification_points: [ // file:line → change summary - { file: "...", location: "...", change: "..." } - ], - conflict_risk: "Low | Medium | High", - estimated_effort: "..." + // 3. Generate unified JSONL tasks (one task per line) + const domainTasks = [ + // For each task within the assigned ID range: + { + id: `TASK-${String(sub.task_id_range[0]).padStart(3, '0')}`, + title: "...", + description: "...", // scope/goal of this task + type: "feature", // infrastructure|feature|enhancement|fix|refactor|testing + priority: "medium", // high|medium|low + effort: "medium", // small|medium|large + scope: "...", // Brief scope description + depends_on: [], // TASK-xxx references + convergence: { + criteria: ["... (testable)"], // Testable conditions + verification: "... (executable)", // Command or steps + definition_of_done: "... (business language)" + }, + files: [ // Files to modify + { + path: "...", + action: "modify", // modify|create|delete + changes: ["..."], // Change descriptions + conflict_risk: "low" // low|medium|high + } + ], + source: { + tool: "collaborative-plan-with-file", + session_id: sessionId, + original_id: `TASK-${String(sub.task_id_range[0]).padStart(3, '0')}` } - // ... more tasks - ], - evidence: { - relevant_files: [...], - existing_patterns: [...], - constraints: [...] } - } - Write(`${sessionFolder}/domains/${sub.focus_area}/plan.json`, JSON.stringify(plan, null, 2)) + // ... more tasks + ] - // 4. Sync summary to plan-note.md + // 4. 
Write domain tasks.jsonl (one task per line) + const jsonlContent = domainTasks.map(t => JSON.stringify(t)).join('\n') + Write(`${sessionFolder}/domains/${sub.focus_area}/tasks.jsonl`, jsonlContent) + + // 5. Sync summary to plan-note.md // Read current plan-note.md // Locate pre-allocated sections: // - Task Pool: "## 任务池 - ${toTitleCase(sub.focus_area)}" @@ -348,11 +358,17 @@ for (const sub of subDomains) { ```markdown ### TASK-{ID}: {Title} [{focus-area}] - **状态**: pending -- **复杂度**: Low/Medium/High +- **类型**: feature/fix/refactor/enhancement/testing/infrastructure +- **优先级**: high/medium/low +- **工作量**: small/medium/large - **依赖**: TASK-xxx (if any) - **范围**: Brief scope description -- **修改点**: `file:location`: change summary -- **冲突风险**: Low/Medium/High +- **修改文件**: `file-path` (action): change summary +- **收敛标准**: + - criteria 1 + - criteria 2 +- **验证方式**: executable command or steps +- **完成定义**: business language definition ``` **Evidence Format** (for plan-note.md evidence sections): @@ -366,8 +382,10 @@ for (const sub of subDomains) { **Domain Planning Rules**: - Each domain modifies ONLY its pre-allocated sections in plan-note.md - Use assigned TASK ID range exclusively -- Include `conflict_risk` assessment for each task +- Include convergence criteria for each task (criteria + verification + definition_of_done) +- Include `files[]` with conflict_risk assessment per file - Reference cross-domain dependencies explicitly +- Each task record must be self-contained (can be independently consumed by unified-execute) ### Step 2.3: Verify plan-note.md Consistency @@ -381,7 +399,8 @@ After all domains are planned, verify the shared document. 5. 
Check for any section format inconsistencies **Success Criteria**: -- `domains/{domain}/plan.json` created for each domain +- `domains/{domain}/tasks.jsonl` created for each domain (unified JSONL format) +- Each task has convergence (criteria + verification + definition_of_done) - `plan-note.md` updated with all task pools and evidence sections - Task summaries follow consistent format - No TASK ID overlaps across domains @@ -394,7 +413,7 @@ After all domains are planned, verify the shared document. ### Step 3.1: Parse plan-note.md -Extract all tasks from all "任务池" sections. +Extract all tasks from all "任务池" sections and domain tasks.jsonl files. ```javascript // parsePlanNote(markdown) @@ -403,20 +422,32 @@ Extract all tasks from all "任务池" sections. // - Build sections array: { level, heading, start, content } // - Return: { frontmatter, sections } +// Also load all domain tasks.jsonl for detailed data +// loadDomainTasks(sessionFolder, subDomains): +// const allTasks = [] +// for (const sub of subDomains) { +// const jsonl = Read(`${sessionFolder}/domains/${sub.focus_area}/tasks.jsonl`) +// jsonl.split('\n').filter(l => l.trim()).forEach(line => { +// allTasks.push(JSON.parse(line)) +// }) +// } +// return allTasks + // extractTasksFromSection(content, sectionHeading) // - Match: /### (TASK-\d+):\s+(.+?)\s+\[(.+?)\]/ // - For each: extract taskId, title, author -// - Parse details: status, complexity, depends_on, modification_points, conflict_risk +// - Parse details: status, type, priority, effort, depends_on, files, convergence // - Return: array of task objects // parseTaskDetails(content) // - Extract via regex: // - /\*\*状态\*\*:\s*(.+)/ → status -// - /\*\*复杂度\*\*:\s*(.+)/ → complexity +// - /\*\*类型\*\*:\s*(.+)/ → type +// - /\*\*优先级\*\*:\s*(.+)/ → priority +// - /\*\*工作量\*\*:\s*(.+)/ → effort // - /\*\*依赖\*\*:\s*(.+)/ → depends_on (extract TASK-\d+ references) -// - /\*\*冲突风险\*\*:\s*(.+)/ → conflict_risk -// - Extract modification points: /- 
`([^`]+):\s*([^`]+)`:\s*(.+)/ → file, location, summary -// - Return: { status, complexity, depends_on[], modification_points[], conflict_risk } +// - Extract files: /- `([^`]+)` \((\w+)\):\s*(.+)/ → path, action, change +// - Return: { status, type, priority, effort, depends_on[], files[], convergence } ``` ### Step 3.2: Detect Conflicts @@ -435,10 +466,10 @@ Scan all tasks for three categories of conflicts. ```javascript // detectFileConflicts(tasks) -// Build fileMap: { "file:location": [{ task_id, task_title, source_domain, change }] } -// For each location with modifications from multiple domains: +// Build fileMap: { "file-path": [{ task_id, task_title, source_domain, changes }] } +// For each file with modifications from multiple domains: // → conflict: type='file_conflict', severity='high' -// → include: location, tasks_involved, domains_involved, modifications +// → include: file, tasks_involved, domains_involved, changes // → resolution: 'Coordinate modification order or merge changes' // detectDependencyCycles(tasks) @@ -459,9 +490,9 @@ function detectCycles(tasks) { } // detectStrategyConflicts(tasks) -// Group tasks by files they modify +// Group tasks by files they modify (from task.files[].path) // For each file with tasks from multiple domains: -// Filter for high/medium conflict_risk tasks +// Filter for tasks with files[].conflict_risk === 'high' or 'medium' // If >1 high-risk from different domains: // → conflict: type='strategy_conflict', severity='medium' // → resolution: 'Review approaches and align on single strategy' @@ -512,7 +543,26 @@ Write(`${sessionFolder}/conflicts.json`, JSON.stringify({ **Objective**: Generate human-readable plan summary and finalize workflow. -### Step 4.1: Generate plan.md +### Step 4.1: Merge Domain tasks.jsonl + +Merge all per-domain JSONL files into a single session-level `tasks.jsonl`. 
+
+```javascript
+// Collect all domain tasks
+const allDomainTasks = []
+for (const sub of subDomains) {
+  const jsonl = Read(`${sessionFolder}/domains/${sub.focus_area}/tasks.jsonl`)
+  jsonl.split('\n').filter(l => l.trim()).forEach(line => {
+    allDomainTasks.push(JSON.parse(line))
+  })
+}
+
+// Write merged tasks.jsonl at session root
+const mergedJsonl = allDomainTasks.map(t => JSON.stringify(t)).join('\n')
+Write(`${sessionFolder}/tasks.jsonl`, mergedJsonl)
+```
+
+### Step 4.2: Generate plan.md

 Create a human-readable summary from plan-note.md content.
@@ -549,9 +599,9 @@ ${subDomains.map((s, i) => `| ${i+1} | ${s.focus_area} | ${s.description} | ${s.
 ## 任务概览

 ${subDomains.map(sub => {
-  const domainTasks = allTasks.filter(t => t.source_domain === sub.focus_area)
+  // Merged tasks carry no domain field — group by each domain's pre-assigned TASK ID range
+  const domainTasks = allTasks.filter(t => {
+    const n = Number(t.id.replace('TASK-', ''))
+    return n >= sub.task_id_range[0] && n <= sub.task_id_range[1]
+  })
   return `### ${sub.focus_area}\n\n` +
-    domainTasks.map(t => `- **${t.id}**: ${t.title} (${t.complexity}) ${t.depends_on.length ? '← ' + t.depends_on.join(', ') : ''}`).join('\n')
+    domainTasks.map(t => `- **${t.id}**: ${t.title} (${t.type}, ${t.effort}) ${t.depends_on.length ? '← ' + t.depends_on.join(', ') : ''}`).join('\n')
 }).join('\n\n')}

 ## 冲突报告

 ${allConflicts.length === 0

 ## 执行

 \`\`\`bash
-/workflow:unified-execute-with-file "${sessionFolder}/plan-note.md"
+/workflow:unified-execute-with-file PLAN="${sessionFolder}/tasks.jsonl"
 \`\`\`

 **Session artifacts**: \`${sessionFolder}/\`
 `

 Write(`${sessionFolder}/plan.md`, planMd)
 ```

-### Step 4.2: Display Completion Summary
+### Step 4.3: Display Completion Summary

 Present session statistics and next steps. 
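Before the merged `tasks.jsonl` is handed to `unified-execute-with-file`, a cross-domain sanity check can catch duplicate IDs or dangling `depends_on` references early. A sketch (`checkMergedTasks` is an illustrative helper, not part of the skill's required steps):

```javascript
// Illustrative post-merge check: every task ID unique, every depends_on resolvable.
function checkMergedTasks(tasks) {
  const issues = []
  const ids = new Set()
  for (const t of tasks) {
    if (ids.has(t.id)) issues.push(`duplicate id: ${t.id}`)
    ids.add(t.id)
  }
  for (const t of tasks) {
    for (const dep of t.depends_on || []) {
      if (!ids.has(dep)) issues.push(`${t.id} depends on unknown task ${dep}`)
    }
  }
  return issues
}
```

An empty result means the merged file is structurally safe to execute; any issue maps to the TASK ID isolation and dependency rules above.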
@@ -602,13 +652,14 @@ if (!autoMode) { | Selection | Action | |-----------|--------| -| Execute Plan | `Skill(skill="workflow:unified-execute-with-file", args="${sessionFolder}/plan-note.md")` | +| Execute Plan | `Skill(skill="workflow:unified-execute-with-file", args="PLAN=\"${sessionFolder}/tasks.jsonl\"")` | | Review Conflicts | Display conflicts.json content for manual resolution | | Export | Copy plan.md + plan-note.md to user-specified location | | Done | Display artifact paths, end workflow | **Success Criteria**: - `plan.md` generated with complete summary +- `tasks.jsonl` merged at session root (consumable by unified-execute) - All artifacts present in session directory - User informed of completion and next steps @@ -634,10 +685,12 @@ User initiates: TASK="task description" ├─ Generate requirement-analysis.json │ ├─ Serial domain planning: - │ ├─ Domain 1: explore → plan.json → fill plan-note.md - │ ├─ Domain 2: explore → plan.json → fill plan-note.md + │ ├─ Domain 1: explore → tasks.jsonl → fill plan-note.md + │ ├─ Domain 2: explore → tasks.jsonl → fill plan-note.md │ └─ Domain N: ... │ + ├─ Merge domain tasks.jsonl → session tasks.jsonl + │ ├─ Verify plan-note.md consistency ├─ Detect conflicts ├─ Generate plan.md summary @@ -685,7 +738,7 @@ User resumes: TASK="same task" 1. **Review Plan Note**: Check plan-note.md between domains to verify progress 2. **Verify Independence**: Ensure sub-domains are truly independent and have minimal overlap 3. **Check Dependencies**: Cross-domain dependencies should be documented explicitly -4. **Inspect Details**: Review `domains/{domain}/plan.json` for specifics when needed +4. **Inspect Details**: Review `domains/{domain}/tasks.jsonl` for specifics when needed 5. **Consistent Format**: Follow task summary format strictly across all domains 6. 
**TASK ID Isolation**: Use pre-assigned non-overlapping ranges to prevent ID conflicts
diff --git a/.codex/skills/plan-converter/SKILL.md b/.codex/skills/plan-converter/SKILL.md
new file mode 100644
index 00000000..90648135
--- /dev/null
+++ b/.codex/skills/plan-converter/SKILL.md
@@ -0,0 +1,457 @@
+---
+name: plan-converter
+description: Convert any planning/analysis/brainstorm output to unified JSONL task format. Supports roadmap.jsonl, plan.json, plan-note.md, conclusions.json, synthesis.json.
+argument-hint: " [-o ]"
+---
+
+# Plan Converter
+
+## Overview
+
+Converts any planning artifact to **unified JSONL task format** — the single standard consumed by `unified-execute-with-file`.
+
+```bash
+# Auto-detect format, output to same directory
+/codex:plan-converter ".workflow/.req-plan/RPLAN-auth-2025-01-21/roadmap.jsonl"
+
+# Specify output path
+/codex:plan-converter ".workflow/.planning/CPLAN-xxx/plan-note.md" -o tasks.jsonl
+
+# Convert brainstorm synthesis
+/codex:plan-converter ".workflow/.brainstorm/BS-xxx/synthesis.json"
+```
+
+**Supported inputs**: roadmap.jsonl, tasks.jsonl (per-domain), plan-note.md, conclusions.json, synthesis.json
+
+**Output**: Unified JSONL (`tasks.jsonl` in same directory, or specified `-o` path)
+
+## Unified JSONL Schema
+
+One self-contained task record per line:
+
+```
+┌─ IDENTITY (required) ──────────────────────────────────────┐
+│ id                 string    Task ID (L0/T1/TASK-001)      │
+│ title              string    Task title                    │
+│ description        string    Goal + rationale              │
+├─ CLASSIFICATION (optional) ────────────────────────────────┤
+│ type               enum      infrastructure|feature|       │
+│                              enhancement|fix|refactor|     │
+│                              testing                       │
+│ priority           enum      high|medium|low               │
+│ effort             enum      small|medium|large            │
+├─ SCOPE (optional) ─────────────────────────────────────────┤
+│ scope              string|[] What the task covers          │
+│ excludes           string[]  Explicitly out of scope       │
+├─ DEPENDENCIES (depends_on required) ───────────────────────┤
+│ depends_on         string[]  Dependency IDs ([] if none)   │
+│ parallel_group     number    Parallel group (same group    │
+│                              may run concurrently)         │
+│ inputs             string[]  Artifacts consumed            │
+│ outputs            string[]  Artifacts produced            │
+├─ CONVERGENCE (required) ───────────────────────────────────┤
+│ convergence.criteria           string[] Testable           │
+│                                         completion conditions │
+│ convergence.verification       string   Executable         │
+│                                         verification steps │
+│ convergence.definition_of_done string   Done, in business  │
+│                                         language           │
+├─ FILES (optional, progressive detail) ─────────────────────┤
+│ files[].path          string   File path (required*)       │
+│ files[].action        enum     modify|create|delete        │
+│ files[].changes       string[] Change descriptions         │
+│ files[].conflict_risk enum     low|medium|high             │
+├─ CONTEXT (optional) ───────────────────────────────────────┤
+│ source.tool           string   Producing tool name         │
+│ source.session_id     string   Source session              │
+│ source.original_id    string   Original ID before          │
+│                                conversion                  │
+│ evidence              any[]    Supporting evidence         │
+│ risk_items            string[] Risk items                  │
+├─ EXECUTION (populated at execution time; absent during ────┤
+│  planning)                                                 │
+│ _execution.status      enum    pending|in_progress|        │
+│                                completed|failed|skipped    │
+│ _execution.executed_at string  ISO timestamp               │
+│ _execution.result      object  { success, files_modified,  │
+│                                  summary, error,           │
+│                                  convergence_verified[] }  │
+└────────────────────────────────────────────────────────────┘
+```
+
+## Target Input
+
+**$ARGUMENTS**
+
+## Execution Process
+
+```
+Step 0: Parse arguments, resolve input path
+Step 1: Detect input format
+Step 2: Parse input → extract raw records
+Step 3: Transform → unified JSONL records
+Step 4: Validate convergence quality
+Step 5: Write output + display summary
+```
+
+## Implementation
+
+### Step 0: Parse Arguments
+
+```javascript
+const args = $ARGUMENTS
+const outputMatch = args.match(/-o\s+(\S+)/)
+const outputPath = outputMatch ? outputMatch[1] : null
+const inputPath = args.replace(/-o\s+\S+/, '').trim()
+
+// Resolve absolute path
+const projectRoot = Bash(`git rev-parse --show-toplevel 2>/dev/null || pwd`).trim()
+const resolvedInput = path.isAbsolute(inputPath) ? 
inputPath : `${projectRoot}/${inputPath}` +``` + +### Step 1: Detect Format + +```javascript +const filename = path.basename(resolvedInput) +const content = Read(resolvedInput) + +function detectFormat(filename, content) { + if (filename === 'roadmap.jsonl') return 'roadmap-jsonl' + if (filename === 'tasks.jsonl') return 'tasks-jsonl' // already unified or per-domain + if (filename === 'plan-note.md') return 'plan-note-md' + if (filename === 'conclusions.json') return 'conclusions-json' + if (filename === 'synthesis.json') return 'synthesis-json' + if (filename.endsWith('.jsonl')) return 'generic-jsonl' + if (filename.endsWith('.json')) { + const parsed = JSON.parse(content) + if (parsed.top_ideas) return 'synthesis-json' + if (parsed.recommendations && parsed.key_conclusions) return 'conclusions-json' + if (parsed.tasks && parsed.focus_area) return 'domain-plan-json' + return 'unknown-json' + } + if (filename.endsWith('.md')) return 'plan-note-md' + return 'unknown' +} +``` + +**Format Detection Table**: + +| Filename | Format ID | Source Tool | +|----------|-----------|------------| +| `roadmap.jsonl` | roadmap-jsonl | req-plan-with-file | +| `tasks.jsonl` (per-domain) | tasks-jsonl | collaborative-plan-with-file | +| `plan-note.md` | plan-note-md | collaborative-plan-with-file | +| `conclusions.json` | conclusions-json | analyze-with-file | +| `synthesis.json` | synthesis-json | brainstorm-with-file | + +### Step 2: Parse Input + +#### roadmap-jsonl (req-plan-with-file) + +```javascript +function parseRoadmapJsonl(content) { + return content.split('\n') + .filter(line => line.trim()) + .map(line => JSON.parse(line)) +} +// Records have: id (L0/T1), name/title, goal/scope, convergence, depends_on, etc. +``` + +#### plan-note-md (collaborative-plan-with-file) + +```javascript +function parsePlanNoteMd(content) { + const tasks = [] + // 1. Extract YAML frontmatter for session metadata + const frontmatter = extractYamlFrontmatter(content) + + // 2. 
Find all "## 任务池 - {Domain}" sections + const taskPoolSections = content.match(/## 任务池 - .+/g) || [] + + // 3. For each section, extract tasks matching: + // ### TASK-{ID}: {Title} [{domain}] + // - **状态**: pending + // - **复杂度**: Medium + // - **依赖**: TASK-xxx + // - **范围**: ... + // - **修改点**: `file:location`: change summary + // - **冲突风险**: Low + taskPoolSections.forEach(section => { + const sectionContent = extractSectionContent(content, section) + const taskPattern = /### (TASK-\d+):\s+(.+?)\s+\[(.+?)\]/g + let match + while ((match = taskPattern.exec(sectionContent)) !== null) { + const [_, id, title, domain] = match + const taskBlock = extractTaskBlock(sectionContent, match.index) + tasks.push({ + id, title, domain, + ...parseTaskDetails(taskBlock) + }) + } + }) + return { tasks, frontmatter } +} +``` + +#### conclusions-json (analyze-with-file) + +```javascript +function parseConclusionsJson(content) { + const conclusions = JSON.parse(content) + // Extract from: conclusions.recommendations[] + // { action, rationale, priority } + // Also available: conclusions.key_conclusions[] + return conclusions +} +``` + +#### synthesis-json (brainstorm-with-file) + +```javascript +function parseSynthesisJson(content) { + const synthesis = JSON.parse(content) + // Extract from: synthesis.top_ideas[] + // { title, description, score, feasibility, next_steps, key_strengths, main_challenges } + // Also available: synthesis.recommendations + return synthesis +} +``` + +### Step 3: Transform to Unified Records + +#### roadmap-jsonl → unified + +```javascript +function transformRoadmap(records, sessionId) { + return records.map(rec => { + // roadmap.jsonl now uses unified field names (title, description, source) + // Passthrough is mostly direct + return { + id: rec.id, + title: rec.title, + description: rec.description, + type: rec.type || 'feature', + effort: rec.effort, + scope: rec.scope, + excludes: rec.excludes, + depends_on: rec.depends_on || [], + parallel_group: 
rec.parallel_group, + inputs: rec.inputs, + outputs: rec.outputs, + convergence: rec.convergence, // already unified format + risk_items: rec.risk_items, + source: rec.source || { + tool: 'req-plan-with-file', + session_id: sessionId, + original_id: rec.id + } + } + }) +} +``` + +#### plan-note-md → unified + +```javascript +function transformPlanNote(parsed) { + const { tasks, frontmatter } = parsed + return tasks.map(task => ({ + id: task.id, + title: task.title, + description: task.scope || task.title, + type: task.type || inferTypeFromTitle(task.title), + priority: task.priority || inferPriorityFromEffort(task.effort), + effort: task.effort || 'medium', + scope: task.scope, + depends_on: task.depends_on || [], + convergence: task.convergence || generateConvergence(task), // plan-note now has convergence + files: task.files?.map(f => ({ + path: f.path || f.file, + action: f.action || 'modify', + changes: f.changes || [f.change], + conflict_risk: f.conflict_risk + })), + source: { + tool: 'collaborative-plan-with-file', + session_id: frontmatter.session_id, + original_id: task.id + } + })) +} + +// Generate convergence from task details when source lacks it (legacy fallback) +function generateConvergence(task) { + return { + criteria: [ + // Derive testable conditions from scope and files + // e.g., "Modified files compile without errors" + // e.g., scope-derived: "API endpoint returns expected response" + ], + verification: '// Derive from files — e.g., test commands', + definition_of_done: '// Derive from scope — business language summary' + } +} +``` + +#### conclusions-json → unified + +```javascript +function transformConclusions(conclusions) { + return conclusions.recommendations.map((rec, index) => ({ + id: `TASK-${String(index + 1).padStart(3, '0')}`, + title: rec.action, + description: rec.rationale, + type: inferTypeFromAction(rec.action), + priority: rec.priority, + depends_on: [], + convergence: { + criteria: generateCriteriaFromAction(rec), + 
verification: generateVerificationFromAction(rec),
+      definition_of_done: generateDoDFromRationale(rec)
+    },
+    evidence: conclusions.key_conclusions.map(c => c.point),
+    source: {
+      tool: 'analyze-with-file',
+      session_id: conclusions.session_id
+    }
+  }))
+}
+
+function inferTypeFromAction(action) {
+  const lower = action.toLowerCase()
+  if (/fix|resolve|repair|修复/.test(lower)) return 'fix'
+  if (/refactor|restructure|extract|重构/.test(lower)) return 'refactor'
+  if (/add|implement|create|新增|实现/.test(lower)) return 'feature'
+  if (/improve|optimize|enhance|优化/.test(lower)) return 'enhancement'
+  if (/test|coverage|validate|测试/.test(lower)) return 'testing'
+  return 'feature'
+}
+```
+
+#### synthesis-json → unified
+
+```javascript
+function transformSynthesis(synthesis) {
+  return synthesis.top_ideas
+    .filter(idea => idea.score >= 6) // Only viable ideas (score ≥ 6)
+    .map((idea, index) => ({
+      id: `IDEA-${String(index + 1).padStart(3, '0')}`,
+      title: idea.title,
+      description: idea.description,
+      type: 'feature',
+      priority: idea.score >= 8 ? 'high' : 'medium', // ideas below 6 were filtered out above
+      effort: idea.feasibility >= 4 ? 'small' : idea.feasibility >= 2 ? 'medium' : 'large',
+      depends_on: [],
+      convergence: {
+        criteria: idea.next_steps || [`${idea.title} implemented and functional`],
+        verification: 'Manual validation of feature functionality',
+        definition_of_done: idea.description
+      },
+      risk_items: idea.main_challenges || [],
+      source: {
+        tool: 'brainstorm-with-file',
+        session_id: synthesis.session_id,
+        original_id: `idea-${index + 1}`
+      }
+    }))
+}
+```
+
+### Step 4: Validate Convergence Quality
+
+All records must pass convergence quality checks before output.
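
For orientation, a `convergence` object that passes these checks might look like the following sketch. The concrete values are illustrative only, not drawn from a real session:

```javascript
// Illustrative convergence object: criteria are specific and testable,
// verification is an executable command, and definition_of_done stays in
// business language (no tool names like jest/tsc, which the validator flags).
const exampleConvergence = {
  criteria: [
    'GET /users returns 200 with a JSON array of user records',
    'TypeScript compilation reports zero errors'
  ],
  verification: 'jest --testPathPattern=user.test.ts && npx tsc --noEmit',
  definition_of_done: 'Existing users can be listed through the API without errors'
}
```

A record missing this object entirely, or carrying one-word criteria such as "works", is reported by the validator and routed to the auto-fix strategy.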
+
+```javascript
+function validateConvergence(records) {
+  const vaguePatterns = /正常|正确|好|可以|没问题|works|fine|good|correct/i
+  const technicalPatterns = /compile|build|lint|npm|npx|jest|tsc|eslint/i
+  const issues = []
+
+  records.forEach(record => {
+    const c = record.convergence
+    if (!c) {
+      issues.push({ id: record.id, field: 'convergence', issue: 'Missing entirely' })
+      return
+    }
+    if (!c.criteria?.length) {
+      issues.push({ id: record.id, field: 'criteria', issue: 'Empty criteria array' })
+    }
+    c.criteria?.forEach((criterion, i) => {
+      if (vaguePatterns.test(criterion) && criterion.length < 15) {
+        issues.push({ id: record.id, field: `criteria[${i}]`, issue: `Too vague: "${criterion}"` })
+      }
+    })
+    if (!c.verification || c.verification.length < 10) {
+      issues.push({ id: record.id, field: 'verification', issue: 'Too short or missing' })
+    }
+    if (!c.definition_of_done) {
+      issues.push({ id: record.id, field: 'definition_of_done', issue: 'Missing' })
+    } else if (technicalPatterns.test(c.definition_of_done)) {
+      issues.push({ id: record.id, field: 'definition_of_done', issue: 'Should be business language' })
+    }
+  })
+
+  return issues
+}
+
+// Auto-fix strategy:
+// | Issue                | Fix                                          |
+// |----------------------|----------------------------------------------|
+// | Missing convergence  | Generate from title + description + files    |
+// | Vague criteria       | Replace with specific condition from context |
+// | Short verification   | Expand with file-based test suggestion       |
+// | Technical DoD        | Rewrite in business language                 |
+```
+
+### Step 5: Write Output & Summary
+
+```javascript
+// Determine output path
+const outputFile = outputPath
+  || `${path.dirname(resolvedInput)}/tasks.jsonl`
+
+// Clean records: remove undefined/null optional fields
+const cleanedRecords = records.map(rec => {
+  const clean = { ...rec }
+  Object.keys(clean).forEach(key => {
+    if (clean[key] === undefined || clean[key] === null) delete clean[key]
+    if (Array.isArray(clean[key]) && clean[key].length === 0 && key !== 'depends_on') delete clean[key]
+  })
+  return clean
+})
+
+// Write JSONL
+const jsonlContent
= cleanedRecords.map(r => JSON.stringify(r)).join('\n') +Write(outputFile, jsonlContent) + +// Display summary +// | Source | Format | Records | Issues | +// |-----------------|-------------------|---------|--------| +// | roadmap.jsonl | roadmap-jsonl | 4 | 0 | +// +// Output: .workflow/.req-plan/RPLAN-xxx/tasks.jsonl +// Records: 4 tasks with convergence criteria +// Quality: All convergence checks passed +``` + +--- + +## Conversion Matrix + +| Source | Source Tool | ID Pattern | Has Convergence | Has Files | Has Priority | Has Source | +|--------|-----------|------------|-----------------|-----------|--------------|-----------| +| roadmap.jsonl (progressive) | req-plan | L0-L3 | **Yes** | No | No | **Yes** | +| roadmap.jsonl (direct) | req-plan | T1-TN | **Yes** | No | No | **Yes** | +| tasks.jsonl (per-domain) | collaborative-plan | TASK-NNN | **Yes** | **Yes** (detailed) | Optional | **Yes** | +| plan-note.md | collaborative-plan | TASK-NNN | **Generate** | **Yes** (from 修改文件) | From effort | No | +| conclusions.json | analyze | TASK-NNN | **Generate** | No | **Yes** | No | +| synthesis.json | brainstorm | IDEA-NNN | **Generate** | No | From score | No | + +**Legend**: Yes = source already has it, Generate = converter produces it, No = not available + +## Error Handling + +| Situation | Action | +|-----------|--------| +| Input file not found | Report error, suggest checking path | +| Unknown format | Report error, list supported formats | +| Empty input | Report error, no output file created | +| Convergence validation fails | Auto-fix where possible, report remaining issues | +| Partial parse failure | Convert valid records, report skipped items | +| Output file exists | Overwrite with warning message | +| plan-note.md has empty sections | Skip empty domains, report in summary | + +--- + +**Now execute plan-converter for**: $ARGUMENTS diff --git a/.codex/skills/req-plan-with-file/SKILL.md b/.codex/skills/req-plan-with-file/SKILL.md index 95f4a177..2d596791 
100644 --- a/.codex/skills/req-plan-with-file/SKILL.md +++ b/.codex/skills/req-plan-with-file/SKILL.md @@ -304,12 +304,13 @@ if (file_exists(`${sessionFolder}/exploration-codebase.json`)) { ```javascript // Each layer must have: // - id: L0, L1, L2, L3 -// - name: MVP / Usable / Refined / Optimized -// - goal: what this layer achieves +// - title: "MVP" / "Usable" / "Refined" / "Optimized" +// - description: what this layer achieves (goal) // - scope[]: features included // - excludes[]: features explicitly deferred // - convergence: { criteria[], verification, definition_of_done } // - risk_items[], effort (small|medium|large), depends_on[] +// - source: { tool, session_id, original_id } // // Rules: // - L0 (MVP) = self-contained closed loop, no dependencies @@ -319,15 +320,16 @@ if (file_exists(`${sessionFolder}/exploration-codebase.json`)) { const layers = [ { - id: "L0", name: "MVP", - goal: "...", + id: "L0", title: "MVP", + description: "...", scope: ["..."], excludes: ["..."], convergence: { criteria: ["... (testable)"], verification: "... (executable command or steps)", definition_of_done: "... (business language)" }, - risk_items: [], effort: "medium", depends_on: [] + risk_items: [], effort: "medium", depends_on: [], + source: { tool: "req-plan-with-file", session_id: sessionId, original_id: "L0" } }, // L1, L2, ... ] @@ -338,10 +340,11 @@ const layers = [ ```javascript // Each task must have: // - id: T1, T2, ... 
-// - title, type (infrastructure|feature|enhancement|testing) +// - title, description, type (infrastructure|feature|enhancement|testing) // - scope, inputs[], outputs[] // - convergence: { criteria[], verification, definition_of_done } // - depends_on[], parallel_group +// - source: { tool, session_id, original_id } // // Rules: // - Inputs must come from preceding task outputs or existing resources @@ -351,14 +354,15 @@ const layers = [ const tasks = [ { - id: "T1", title: "...", type: "infrastructure", + id: "T1", title: "...", description: "...", type: "infrastructure", scope: "...", inputs: [], outputs: ["..."], convergence: { criteria: ["... (testable)"], verification: "... (executable)", definition_of_done: "... (business language)" }, - depends_on: [], parallel_group: 1 + depends_on: [], parallel_group: 1, + source: { tool: "req-plan-with-file", session_id: sessionId, original_id: "T1" } }, // T2, T3, ... ] @@ -524,10 +528,10 @@ Display the decomposition as a table with convergence criteria, then run feedbac ```markdown ## Roadmap Overview -| Layer | Name | Goal | Scope | Effort | Dependencies | -|-------|------|------|-------|--------|--------------| -| L0 | MVP | ... | ... | medium | - | -| L1 | Usable | ... | ... | medium | L0 | +| Layer | Title | Description | Effort | Dependencies | +|-------|-------|-------------|--------|--------------| +| L0 | MVP | ... | medium | - | +| L1 | Usable | ... | medium | L0 | ### Convergence Criteria **L0 - MVP**: @@ -541,10 +545,10 @@ Display the decomposition as a table with convergence criteria, then run feedbac ```markdown ## Task Sequence -| Group | ID | Title | Type | Dependencies | -|-------|----|-------|------|--------------| -| 1 | T1 | ... | infrastructure | - | -| 2 | T2 | ... | feature | T1 | +| Group | ID | Title | Type | Description | Dependencies | +|-------|----|-------|------|-------------|--------------| +| 1 | T1 | ... | infrastructure | ... | - | +| 2 | T2 | ... | feature | ... 
| T1 | ### Convergence Criteria **T1 - Establish Data Model**: @@ -609,15 +613,15 @@ const roadmapMd = `# Requirement Roadmap ## Roadmap Overview -| Layer | Name | Goal | Effort | Dependencies | -|-------|------|------|--------|--------------| -${items.map(l => `| ${l.id} | ${l.name} | ${l.goal} | ${l.effort} | ${l.depends_on.length ? l.depends_on.join(', ') : '-'} |`).join('\n')} +| Layer | Title | Description | Effort | Dependencies | +|-------|-------|-------------|--------|--------------| +${items.map(l => `| ${l.id} | ${l.title} | ${l.description} | ${l.effort} | ${l.depends_on.length ? l.depends_on.join(', ') : '-'} |`).join('\n')} ## Layer Details -${items.map(l => `### ${l.id}: ${l.name} +${items.map(l => `### ${l.id}: ${l.title} -**Goal**: ${l.goal} +**Description**: ${l.description} **Scope**: ${l.scope.join(', ')} @@ -641,7 +645,7 @@ ${items.flatMap(l => l.risk_items.map(r => \`- **${l.id}**: ${r}\`)).join('\n') Each layer can be executed independently: \\\`\\\`\\\`bash -/workflow:lite-plan "${items[0]?.name}: ${items[0]?.scope.join(', ')}" +/workflow:lite-plan "${items[0]?.title}: ${items[0]?.scope.join(', ')}" \\\`\\\`\\\` Roadmap JSONL file: \\\`${sessionFolder}/roadmap.jsonl\\\` @@ -665,9 +669,9 @@ const roadmapMd = `# Requirement Roadmap ## Task Sequence -| Group | ID | Title | Type | Dependencies | -|-------|----|-------|------|--------------| -${items.map(t => `| ${t.parallel_group} | ${t.id} | ${t.title} | ${t.type} | ${t.depends_on.length ? t.depends_on.join(', ') : '-'} |`).join('\n')} +| Group | ID | Title | Type | Description | Dependencies | +|-------|----|-------|------|-------------|--------------| +${items.map(t => `| ${t.parallel_group} | ${t.id} | ${t.title} | ${t.type} | ${t.description} | ${t.depends_on.length ? 
t.depends_on.join(', ') : '-'} |`).join('\n')} ## Task Details @@ -675,6 +679,8 @@ ${items.map(t => `### ${t.id}: ${t.title} **Type**: ${t.type} | **Parallel Group**: ${t.parallel_group} +**Description**: ${t.description} + **Scope**: ${t.scope} **Inputs**: ${t.inputs.length ? t.inputs.join(', ') : 'None (starting task)'} @@ -748,7 +754,7 @@ if (!autoYes) { | `strategy-assessment.json` | 1 | Uncertainty analysis + mode recommendation + extracted goal/constraints/stakeholders/domain_keywords | | `roadmap.md` (skeleton) | 1 | Initial skeleton with placeholders, finalized in Phase 4 | | `exploration-codebase.json` | 2 | Codebase context: relevant modules, patterns, integration points (only when codebase exists) | -| `roadmap.jsonl` | 3 | One self-contained JSON record per line with convergence criteria | +| `roadmap.jsonl` | 3 | One self-contained JSON record per line with convergence criteria and source provenance | | `roadmap.md` (final) | 4 | Human-readable roadmap with tabular display + convergence details, revised per user feedback | ## JSONL Schema @@ -765,18 +771,18 @@ Each record's `convergence` object: ### Progressive Mode (one layer per line) -| Layer | Name | Typical Goal | -|-------|------|--------------| +| Layer | Title | Typical Description | +|-------|-------|---------------------| | L0 | MVP | Minimum viable closed loop, core path end-to-end | | L1 | Usable | Key user paths refined, basic error handling | | L2 | Refined | Edge cases, performance, security hardening | | L3 | Optimized | Advanced features, observability, operations | -**Schema**: `id, name, goal, scope[], excludes[], convergence{}, risk_items[], effort, depends_on[]` +**Schema**: `id, title, description, scope[], excludes[], convergence{}, risk_items[], effort, depends_on[], source{}` ```jsonl -{"id":"L0","name":"MVP","goal":"Minimum viable closed loop","scope":["User registration and login","Basic CRUD"],"excludes":["OAuth","2FA"],"convergence":{"criteria":["End-to-end 
register→login→operate flow works","Core API returns correct responses"],"verification":"curl/Postman manual testing or smoke test script","definition_of_done":"New user can complete register→login→perform one core operation"},"risk_items":["JWT library selection needs validation"],"effort":"medium","depends_on":[]} -{"id":"L1","name":"Usable","goal":"Complete key user paths","scope":["Password reset","Input validation","Error messages"],"excludes":["Audit logs","Rate limiting"],"convergence":{"criteria":["All form fields have frontend+backend validation","Password reset email can be sent and reset completed","Error scenarios show user-friendly messages"],"verification":"Unit tests cover validation logic + manual test of reset flow","definition_of_done":"Users have a clear recovery path when encountering input errors or forgotten passwords"},"risk_items":[],"effort":"medium","depends_on":["L0"]} +{"id":"L0","title":"MVP","description":"Minimum viable closed loop","scope":["User registration and login","Basic CRUD"],"excludes":["OAuth","2FA"],"convergence":{"criteria":["End-to-end register→login→operate flow works","Core API returns correct responses"],"verification":"curl/Postman manual testing or smoke test script","definition_of_done":"New user can complete register→login→perform one core operation"},"risk_items":["JWT library selection needs validation"],"effort":"medium","depends_on":[],"source":{"tool":"req-plan-with-file","session_id":"RPLAN-xxx","original_id":"L0"}} +{"id":"L1","title":"Usable","description":"Complete key user paths","scope":["Password reset","Input validation","Error messages"],"excludes":["Audit logs","Rate limiting"],"convergence":{"criteria":["All form fields have frontend+backend validation","Password reset email can be sent and reset completed","Error scenarios show user-friendly messages"],"verification":"Unit tests cover validation logic + manual test of reset flow","definition_of_done":"Users have a clear recovery path when 
encountering input errors or forgotten passwords"},"risk_items":[],"effort":"medium","depends_on":["L0"],"source":{"tool":"req-plan-with-file","session_id":"RPLAN-xxx","original_id":"L1"}} ``` **Constraints**: 2-4 layers, L0 must be a self-contained closed loop with no dependencies, each feature belongs to exactly ONE layer (no scope overlap). @@ -790,11 +796,11 @@ Each record's `convergence` object: | enhancement | Validation, error handling, edge cases | | testing | Unit tests, integration tests, E2E | -**Schema**: `id, title, type, scope, inputs[], outputs[], convergence{}, depends_on[], parallel_group` +**Schema**: `id, title, description, type, scope, inputs[], outputs[], convergence{}, depends_on[], parallel_group, source{}` ```jsonl -{"id":"T1","title":"Establish data model","type":"infrastructure","scope":"DB schema + TypeScript types","inputs":[],"outputs":["schema.prisma","types/user.ts"],"convergence":{"criteria":["Migration executes without errors","TypeScript types compile successfully","Fields cover all business entities"],"verification":"npx prisma migrate dev && npx tsc --noEmit","definition_of_done":"Database schema migrates correctly, type definitions can be referenced by other modules"},"depends_on":[],"parallel_group":1} -{"id":"T2","title":"Implement core API","type":"feature","scope":"CRUD endpoints for User","inputs":["schema.prisma","types/user.ts"],"outputs":["routes/user.ts","controllers/user.ts"],"convergence":{"criteria":["GET/POST/PUT/DELETE return correct status codes","Request/response conforms to schema","No N+1 queries"],"verification":"jest --testPathPattern=user.test.ts","definition_of_done":"All User CRUD endpoints pass integration tests"},"depends_on":["T1"],"parallel_group":2} +{"id":"T1","title":"Establish data model","description":"Create database schema and TypeScript type definitions for all business entities","type":"infrastructure","scope":"DB schema + TypeScript 
types","inputs":[],"outputs":["schema.prisma","types/user.ts"],"convergence":{"criteria":["Migration executes without errors","TypeScript types compile successfully","Fields cover all business entities"],"verification":"npx prisma migrate dev && npx tsc --noEmit","definition_of_done":"Database schema migrates correctly, type definitions can be referenced by other modules"},"depends_on":[],"parallel_group":1,"source":{"tool":"req-plan-with-file","session_id":"RPLAN-xxx","original_id":"T1"}} +{"id":"T2","title":"Implement core API","description":"Build CRUD endpoints for User entity with proper validation and error handling","type":"feature","scope":"CRUD endpoints for User","inputs":["schema.prisma","types/user.ts"],"outputs":["routes/user.ts","controllers/user.ts"],"convergence":{"criteria":["GET/POST/PUT/DELETE return correct status codes","Request/response conforms to schema","No N+1 queries"],"verification":"jest --testPathPattern=user.test.ts","definition_of_done":"All User CRUD endpoints pass integration tests"},"depends_on":["T1"],"parallel_group":2,"source":{"tool":"req-plan-with-file","session_id":"RPLAN-xxx","original_id":"T2"}} ``` **Constraints**: Inputs must come from preceding task outputs or existing resources, tasks in same parallel_group must be truly independent, no circular dependencies. 
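
The "no circular dependencies" constraint can be checked with a small topological sort over `depends_on`. A minimal sketch — not part of the skill files, assuming only the `id` and `depends_on` fields from the schema above:

```javascript
// Kahn's algorithm over tasks.jsonl records: returns a valid execution
// order, or throws on unknown references and dependency cycles.
function topoOrder(tasks) {
  const indegree = new Map(tasks.map(t => [t.id, 0]))
  const dependents = new Map(tasks.map(t => [t.id, []]))
  for (const t of tasks) {
    for (const dep of t.depends_on) {
      if (!indegree.has(dep)) throw new Error(`${t.id}: unknown dependency '${dep}'`)
      indegree.set(t.id, indegree.get(t.id) + 1)
      dependents.get(dep).push(t.id)
    }
  }
  // Start from tasks with no unmet dependencies, release dependents as we go
  const queue = tasks.filter(t => indegree.get(t.id) === 0).map(t => t.id)
  const order = []
  while (queue.length) {
    const id = queue.shift()
    order.push(id)
    for (const next of dependents.get(id)) {
      indegree.set(next, indegree.get(next) - 1)
      if (indegree.get(next) === 0) queue.push(next)
    }
  }
  if (order.length !== tasks.length) throw new Error('Circular dependency detected')
  return order
}
```

Applied to the T1/T2 example above, this yields the order `['T1', 'T2']`.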
@@ -822,24 +828,26 @@ When normal decomposition fails or produces empty results, use fallback template ```javascript [ { - id: "L0", name: "MVP", goal: "Minimum viable closed loop", + id: "L0", title: "MVP", description: "Minimum viable closed loop", scope: ["Core functionality"], excludes: ["Advanced features", "Optimization"], convergence: { criteria: ["Core path works end-to-end"], verification: "Manual test of core flow", definition_of_done: "User can complete one full core operation" }, - risk_items: ["Tech selection needs validation"], effort: "medium", depends_on: [] + risk_items: ["Tech selection needs validation"], effort: "medium", depends_on: [], + source: { tool: "req-plan-with-file", session_id: sessionId, original_id: "L0" } }, { - id: "L1", name: "Usable", goal: "Refine key user paths", + id: "L1", title: "Usable", description: "Refine key user paths", scope: ["Error handling", "Input validation"], excludes: ["Performance optimization", "Monitoring"], convergence: { criteria: ["All user inputs validated", "Error scenarios show messages"], verification: "Unit tests + manual error scenario testing", definition_of_done: "Users have clear guidance and recovery paths when encountering problems" }, - risk_items: [], effort: "medium", depends_on: ["L0"] + risk_items: [], effort: "medium", depends_on: ["L0"], + source: { tool: "req-plan-with-file", session_id: sessionId, original_id: "L1" } } ] ``` @@ -848,7 +856,8 @@ When normal decomposition fails or produces empty results, use fallback template ```javascript [ { - id: "T1", title: "Infrastructure setup", type: "infrastructure", + id: "T1", title: "Infrastructure setup", description: "Project scaffolding and base configuration", + type: "infrastructure", scope: "Project scaffolding and base configuration", inputs: [], outputs: ["project-structure"], convergence: { @@ -856,10 +865,12 @@ When normal decomposition fails or produces empty results, use fallback template verification: "npm run build (or 
equivalent build command)", definition_of_done: "Project foundation ready for feature development" }, - depends_on: [], parallel_group: 1 + depends_on: [], parallel_group: 1, + source: { tool: "req-plan-with-file", session_id: sessionId, original_id: "T1" } }, { - id: "T2", title: "Core feature implementation", type: "feature", + id: "T2", title: "Core feature implementation", description: "Implement core business logic", + type: "feature", scope: "Core business logic", inputs: ["project-structure"], outputs: ["core-module"], convergence: { @@ -867,7 +878,8 @@ When normal decomposition fails or produces empty results, use fallback template verification: "Run core feature tests", definition_of_done: "Core business functionality works as expected" }, - depends_on: ["T1"], parallel_group: 2 + depends_on: ["T1"], parallel_group: 2, + source: { tool: "req-plan-with-file", session_id: sessionId, original_id: "T2" } } ] ``` diff --git a/.codex/skills/unified-execute-with-file/SKILL.md b/.codex/skills/unified-execute-with-file/SKILL.md index 69c6dfa3..9a136ded 100644 --- a/.codex/skills/unified-execute-with-file/SKILL.md +++ b/.codex/skills/unified-execute-with-file/SKILL.md @@ -1,72 +1,89 @@ --- name: unified-execute-with-file -description: Universal execution engine for consuming planning/brainstorm/analysis output. Serial task execution with progress tracking. Codex-optimized. -argument-hint: "PLAN=\"\" [--auto-commit] [--dry-run]" +description: Universal execution engine consuming unified JSONL task format. Serial task execution with convergence verification, progress tracking via execution.md + execution-events.md. +argument-hint: "PLAN=\"\" [--auto-commit] [--dry-run]" --- -# Codex Unified-Execute-With-File Workflow +# Unified-Execute-With-File Workflow ## Quick Start -Universal execution engine consuming **any** planning output and executing tasks serially with progress tracking. 
+Universal execution engine consuming **unified JSONL** (`tasks.jsonl`) and executing tasks serially with convergence verification and progress tracking. -**Core workflow**: Load Plan → Parse Tasks → Validate → Execute Sequentially → Track Progress → Verify +```bash +# Execute from req-plan output +/codex:unified-execute-with-file PLAN=".workflow/.req-plan/RPLAN-auth-2025-01-21/tasks.jsonl" + +# Execute from collaborative-plan output +/codex:unified-execute-with-file PLAN=".workflow/.planning/CPLAN-xxx/tasks.jsonl" --auto-commit + +# Dry-run mode +/codex:unified-execute-with-file PLAN="tasks.jsonl" --dry-run + +# Auto-detect from .workflow/ directories +/codex:unified-execute-with-file +``` + +**Core workflow**: Load JSONL → Validate → Pre-Execution Analysis → Execute → Verify Convergence → Track Progress **Key features**: -- **Format-agnostic**: Supports plan.json, plan-note.md, synthesis.json, conclusions.json -- **Serial execution**: Process tasks sequentially with dependency ordering -- **Progress tracking**: execution.md overview + execution-events.md detailed log -- **Auto-commit**: Optional conventional commits after each task -- **Dry-run mode**: Simulate execution without making changes +- **Single format**: Only consumes unified JSONL (`tasks.jsonl`) +- **Convergence-driven**: Verifies each task's convergence criteria after execution +- **Serial execution**: Process tasks in topological order with dependency tracking +- **Dual progress tracking**: `execution.md` (overview) + `execution-events.md` (event stream) +- **Auto-commit**: Optional conventional commits per task +- **Dry-run mode**: Simulate execution without changes + +**Input format**: Use `plan-converter` to convert other formats (roadmap.jsonl, plan-note.md, conclusions.json, synthesis.json) to unified JSONL first. ## Overview -This workflow enables reliable task execution through sequential phases: - -1. **Plan Detection & Parsing** - Load and parse planning output in any format -2. 
**Pre-Execution Analysis** - Validate feasibility and identify potential issues -3. **Serial Task Execution** - Execute tasks one by one with dependency ordering -4. **Progress Tracking** - Update execution logs with results and discoveries -5. **Completion** - Generate summary and offer follow-up actions - -The key innovation is the **unified event log** that serves as both human-readable progress tracker and machine-parseable state store. +``` +┌─────────────────────────────────────────────────────────────┐ +│ UNIFIED EXECUTE WORKFLOW │ +├─────────────────────────────────────────────────────────────┤ +│ │ +│ Phase 1: Load & Validate │ +│ ├─ Parse tasks.jsonl (one task per line) │ +│ ├─ Validate schema (id, title, depends_on, convergence) │ +│ ├─ Detect cycles, build topological order │ +│ └─ Initialize execution.md + execution-events.md │ +│ │ +│ Phase 2: Pre-Execution Analysis │ +│ ├─ Check file conflicts (multiple tasks → same file) │ +│ ├─ Verify file existence │ +│ ├─ Generate feasibility report │ +│ └─ User confirmation (unless dry-run) │ +│ │ +│ Phase 3: Serial Execution + Convergence Verification │ +│ For each task in topological order: │ +│ ├─ Check dependencies satisfied │ +│ ├─ Record START event │ +│ ├─ Execute directly (Read/Edit/Write/Grep/Glob/Bash) │ +│ ├─ Verify convergence.criteria[] │ +│ ├─ Run convergence.verification command │ +│ ├─ Record COMPLETE/FAIL event with verification results │ +│ ├─ Update _execution state in JSONL │ +│ └─ Auto-commit if enabled │ +│ │ +│ Phase 4: Completion │ +│ ├─ Finalize execution.md with summary statistics │ +│ ├─ Finalize execution-events.md with session footer │ +│ ├─ Write back tasks.jsonl with _execution states │ +│ └─ Offer follow-up actions │ +│ │ +└─────────────────────────────────────────────────────────────┘ +``` ## Output Structure ``` ${projectRoot}/.workflow/.execution/EXEC-{slug}-{date}-{random}/ -├── execution.md # Plan overview + task table + timeline -└── execution-events.md # ⭐ Unified log (all 
executions) - SINGLE SOURCE OF TRUTH +├── execution.md # Plan overview + task table + summary +└── execution-events.md # ⭐ Unified event log (single source of truth) ``` -## Output Artifacts - -### Phase 1: Session Initialization - -| Artifact | Purpose | -|----------|---------| -| `execution.md` | Overview of plan source, task table, execution timeline | -| Session folder | `{projectRoot}/.workflow/.execution/{sessionId}/` | - -### Phase 2: Pre-Execution Analysis - -| Artifact | Purpose | -|----------|---------| -| `execution.md` (updated) | Feasibility assessment and validation results | - -### Phase 3-4: Serial Execution & Progress - -| Artifact | Purpose | -|----------|---------| -| `execution-events.md` | Unified log: all task executions with results | -| `execution.md` (updated) | Real-time progress updates and task status | - -### Phase 5: Completion - -| Artifact | Purpose | -|----------|---------| -| Final `execution.md` | Complete execution summary and statistics | -| Final `execution-events.md` | Complete execution history | +Additionally, the source `tasks.jsonl` is updated in-place with `_execution` states. --- @@ -74,293 +91,580 @@ ${projectRoot}/.workflow/.execution/EXEC-{slug}-{date}-{random}/ ### Session Initialization -The workflow creates a unique session for tracking execution. 
+##### Step 0: Initialize Session -**Session ID Format**: `EXEC-{slug}-{date}-{random}` -- `slug`: Plan filename without extension, lowercased, max 30 chars -- `date`: YYYY-MM-DD format (UTC+8) -- `random`: 7-char random suffix for uniqueness +```javascript +const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString() +const projectRoot = Bash(`git rev-parse --show-toplevel 2>/dev/null || pwd`).trim() -**Session Directory**: `{projectRoot}/.workflow/.execution/{sessionId}/` +// Parse arguments +const autoCommit = $ARGUMENTS.includes('--auto-commit') +const dryRun = $ARGUMENTS.includes('--dry-run') +const planMatch = $ARGUMENTS.match(/PLAN="([^"]+)"/) || $ARGUMENTS.match(/PLAN=(\S+)/) +let planPath = planMatch ? planMatch[1] : null -**Plan Path Resolution**: -1. If `$PLAN` provided explicitly, use it -2. Otherwise, auto-detect from common locations: - - `{projectRoot}/.workflow/IMPL_PLAN.md` - - `{projectRoot}/.workflow/.planning/*/plan-note.md` - - `{projectRoot}/.workflow/.brainstorm/*/synthesis.json` - - `{projectRoot}/.workflow/.analysis/*/conclusions.json` +// Auto-detect if no PLAN specified +if (!planPath) { + // Search in order: + // .workflow/.req-plan/*/tasks.jsonl + // .workflow/.planning/*/tasks.jsonl + // .workflow/.analysis/*/tasks.jsonl + // .workflow/.brainstorm/*/tasks.jsonl + // Use most recently modified +} -**Session Variables**: -- `sessionId`: Unique session identifier -- `sessionFolder`: Base directory for artifacts -- `planPath`: Resolved path to plan file -- `autoCommit`: Boolean flag for auto-commit mode -- `dryRun`: Boolean flag for dry-run mode +// Resolve path +planPath = path.isAbsolute(planPath) ? 
planPath : `${projectRoot}/${planPath}` + +// Generate session ID +const slug = path.basename(path.dirname(planPath)).toLowerCase().substring(0, 30) +const dateStr = getUtc8ISOString().substring(0, 10) +const random = Math.random().toString(36).substring(2, 9) +const sessionId = `EXEC-${slug}-${dateStr}-${random}` +const sessionFolder = `${projectRoot}/.workflow/.execution/${sessionId}` + +Bash(`mkdir -p ${sessionFolder}`) +``` --- -## Phase 1: Plan Detection & Parsing +## Phase 1: Load & Validate -**Objective**: Load plan file, parse tasks, build execution order, and validate for cycles. +**Objective**: Parse unified JSONL, validate schema and dependencies, build execution order. -### Step 1.1: Load Plan File +### Step 1.1: Parse Unified JSONL -Detect plan format and parse based on file extension. +```javascript +const content = Read(planPath) +const tasks = content.split('\n') + .filter(line => line.trim()) + .map((line, i) => { + try { return JSON.parse(line) } + catch (e) { throw new Error(`Line ${i + 1}: Invalid JSON — ${e.message}`) } + }) -**Supported Formats**: +if (tasks.length === 0) throw new Error('No tasks found in JSONL file') +``` -| Format | Source | Parser | -|--------|--------|--------| -| plan.json | lite-plan, collaborative-plan | parsePlanJson() | -| plan-note.md | collaborative-plan | parsePlanMarkdown() | -| synthesis.json | brainstorm session | convertSynthesisToTasks() | -| conclusions.json | analysis session | convertConclusionsToTasks() | +### Step 1.2: Validate Schema -**Parsing Activities**: -1. Read plan file content -2. Detect format from filename or content structure -3. Route to appropriate parser -4. 
Extract tasks with required fields: id, title, description, files_to_modify, depends_on +```javascript +const errors = [] +tasks.forEach((task, i) => { + // Required fields + if (!task.id) errors.push(`Task ${i + 1}: missing 'id'`) + if (!task.title) errors.push(`Task ${i + 1}: missing 'title'`) + if (!task.description) errors.push(`Task ${i + 1}: missing 'description'`) + if (!Array.isArray(task.depends_on)) errors.push(`${task.id}: missing 'depends_on' array`) -### Step 1.2: Build Execution Order + // Convergence required + if (!task.convergence) { + errors.push(`${task.id}: missing 'convergence'`) + } else { + if (!task.convergence.criteria?.length) errors.push(`${task.id}: empty convergence.criteria`) + if (!task.convergence.verification) errors.push(`${task.id}: missing convergence.verification`) + if (!task.convergence.definition_of_done) errors.push(`${task.id}: missing convergence.definition_of_done`) + } +}) -Analyze task dependencies and calculate execution sequence. +if (errors.length) { + // Report errors, stop execution +} +``` -**Execution Order Calculation**: -1. Build dependency graph from task dependencies -2. Validate for circular dependencies (no cycles allowed) -3. Calculate topological sort for sequential execution order -4. In Codex: serial mode means executing tasks one by one +### Step 1.3: Build Execution Order -**Dependency Validation**: -- Check that all referenced dependencies exist -- Detect cycles and report as critical error -- Order tasks based on dependencies +```javascript +// 1. Validate dependency references +const taskIds = new Set(tasks.map(t => t.id)) +tasks.forEach(task => { + task.depends_on.forEach(dep => { + if (!taskIds.has(dep)) errors.push(`${task.id}: depends on unknown task '${dep}'`) + }) +}) -### Step 1.3: Generate execution.md +// 2. 
Detect cycles (DFS) +function detectCycles(tasks) { + const graph = new Map(tasks.map(t => [t.id, t.depends_on || []])) + const visited = new Set(), inStack = new Set(), cycles = [] + function dfs(node, path) { + if (inStack.has(node)) { cycles.push([...path, node].join(' → ')); return } + if (visited.has(node)) return + visited.add(node); inStack.add(node) + ;(graph.get(node) || []).forEach(dep => dfs(dep, [...path, node])) + inStack.delete(node) + } + tasks.forEach(t => { if (!visited.has(t.id)) dfs(t.id, []) }) + return cycles +} +const cycles = detectCycles(tasks) +if (cycles.length) errors.push(`Circular dependencies: ${cycles.join('; ')}`) -Create the main execution tracking document. +// 3. Topological sort +function topoSort(tasks) { + const inDegree = new Map(tasks.map(t => [t.id, 0])) + tasks.forEach(t => t.depends_on.forEach(dep => { + inDegree.set(t.id, (inDegree.get(t.id) || 0) + 1) + })) + const queue = tasks.filter(t => inDegree.get(t.id) === 0).map(t => t.id) + const order = [] + while (queue.length) { + const id = queue.shift() + order.push(id) + tasks.forEach(t => { + if (t.depends_on.includes(id)) { + inDegree.set(t.id, inDegree.get(t.id) - 1) + if (inDegree.get(t.id) === 0) queue.push(t.id) + } + }) + } + return order +} +const executionOrder = topoSort(tasks) +``` -**execution.md Structure**: -- **Header**: Session ID, plan source, execution timestamp -- **Plan Overview**: Summary from plan metadata -- **Task List**: Table with ID, title, complexity, dependencies, status -- **Execution Timeline**: To be updated as tasks complete +### Step 1.4: Initialize Execution Artifacts -**Success Criteria**: -- execution.md created with complete plan overview -- Task list includes all tasks from plan -- Execution order calculated with no cycles -- Ready for feasibility analysis +```javascript +// execution.md +const executionMd = `# Execution Overview + +## Session Info +- **Session ID**: ${sessionId} +- **Plan Source**: ${planPath} +- **Started**: 
${getUtc8ISOString()} +- **Total Tasks**: ${tasks.length} +- **Mode**: ${dryRun ? 'Dry-run (no changes)' : 'Direct inline execution'} +- **Auto-Commit**: ${autoCommit ? 'Enabled' : 'Disabled'} + +## Task Overview + +| # | ID | Title | Type | Priority | Effort | Dependencies | Status | +|---|-----|-------|------|----------|--------|--------------|--------| +${tasks.map((t, i) => `| ${i+1} | ${t.id} | ${t.title} | ${t.type || '-'} | ${t.priority || '-'} | ${t.effort || '-'} | ${t.depends_on.join(', ') || '-'} | pending |`).join('\n')} + +## Pre-Execution Analysis +> Populated in Phase 2 + +## Execution Timeline +> Updated as tasks complete + +## Execution Summary +> Updated after all tasks complete +` +Write(`${sessionFolder}/execution.md`, executionMd) + +// execution-events.md +Write(`${sessionFolder}/execution-events.md`, `# Execution Events + +**Session**: ${sessionId} +**Started**: ${getUtc8ISOString()} +**Source**: ${planPath} + +--- + +`) +``` --- ## Phase 2: Pre-Execution Analysis -**Objective**: Validate feasibility and identify potential issues before starting execution. +**Objective**: Validate feasibility and identify issues before execution. -### Step 2.1: Analyze Plan Structure +### Step 2.1: Analyze File Conflicts -Examine task dependencies, file modifications, and potential conflicts. +```javascript +const fileTaskMap = new Map() // file → [taskIds] +tasks.forEach(task => { + (task.files || []).forEach(f => { + const key = f.path + if (!fileTaskMap.has(key)) fileTaskMap.set(key, []) + fileTaskMap.get(key).push(task.id) + }) +}) -**Analysis Activities**: -1. **Check file conflicts**: Identify files modified by multiple tasks -2. **Check missing dependencies**: Verify all referenced dependencies exist -3. **Check file existence**: Identify files that will be created vs modified -4. 
**Estimate complexity**: Assess overall execution complexity +const conflicts = [] +fileTaskMap.forEach((taskIds, file) => { + if (taskIds.length > 1) { + conflicts.push({ file, tasks: taskIds, resolution: 'Execute in dependency order' }) + } +}) -**Issue Detection**: -- Sequential modifications to same file (document for ordered execution) -- Missing dependency targets -- High complexity patterns that may need special handling +// Check file existence +const missingFiles = [] +tasks.forEach(task => { + (task.files || []).forEach(f => { + if (f.action !== 'create' && !file_exists(f.path)) { + missingFiles.push({ file: f.path, task: task.id }) + } + }) +}) +``` -### Step 2.2: Generate Feasibility Report +### Step 2.2: Append to execution.md -Document analysis results and recommendations. +```javascript +// Replace "Pre-Execution Analysis" section with: +// - File Conflicts (list or "No conflicts") +// - Missing Files (list or "All files exist") +// - Dependency Validation (errors or "No issues") +// - Execution Order (numbered list) +``` -**Feasibility Report Content**: -- Issues found (if any) -- File conflict warnings -- Dependency validation results -- Complexity assessment -- Recommended execution strategy +### Step 2.3: User Confirmation -### Step 2.3: Update execution.md - -Append feasibility analysis results. - -**Success Criteria**: -- All validation checks completed -- Issues documented in execution.md -- No blocking issues found (or user confirmed to proceed) -- Ready for task execution +```javascript +if (!dryRun) { + AskUserQuestion({ + questions: [{ + question: `Execute ${tasks.length} tasks?\n\n${conflicts.length ? `⚠ ${conflicts.length} file conflicts\n` : ''}Execution order:\n${executionOrder.map((id, i) => ` ${i+1}. 
${id}: ${tasks.find(t => t.id === id).title}`).join('\n')}`, + header: "Confirm", + multiSelect: false, + options: [ + { label: "Execute", description: "Start serial execution" }, + { label: "Dry Run", description: "Simulate without changes" }, + { label: "Cancel", description: "Abort execution" } + ] + }] + }) +} +``` --- -## Phase 3: Serial Task Execution +## Phase 3: Serial Execution + Convergence Verification -**Objective**: Execute tasks one by one in dependency order, tracking progress and recording results. +**Objective**: Execute tasks sequentially, verify convergence after each task, track all state. -**Execution Model**: Serial execution - process tasks sequentially, one at a time. Each task must complete before the next begins. +**Execution Model**: Direct inline execution — main process reads, edits, writes files directly. No CLI delegation. -### Step 3.1: Execute Tasks Sequentially +### Step 3.1: Execution Loop -For each task in execution order: -1. Load context from previous task results -2. Route to Codex CLI for execution -3. Wait for completion -4. Record results in execution-events.md -5. Auto-commit if enabled -6. Move to next task +```javascript +const completedTasks = new Set() +const failedTasks = new Set() +const skippedTasks = new Set() -**Execution Loop**: -``` -For each task in executionOrder: - ├─ Extract task context - ├─ Load previous task outputs - ├─ Execute task via CLI (synchronous) - ├─ Record result with timestamp - ├─ Auto-commit if enabled - └─ Continue to next task +for (const taskId of executionOrder) { + const task = tasks.find(t => t.id === taskId) + const startTime = getUtc8ISOString() + + // 1. 
Check dependencies
+  const unmetDeps = task.depends_on.filter(dep => !completedTasks.has(dep))
+  if (unmetDeps.length) {
+    appendToEvents(`## ${getUtc8ISOString()} — ${task.id}: BLOCKED (unmet dependencies: ${unmetDeps.join(', ')})\n\n---\n`)
+    skippedTasks.add(task.id)
+    task._execution = { status: 'skipped', executed_at: startTime,
+      result: { success: false, error: `Blocked by: ${unmetDeps.join(', ')}` } }
+    continue
+  }
+
+  // 2. Record START event
+  appendToEvents(`## ${getUtc8ISOString()} — ${task.id}: ${task.title}
+
+**Type**: ${task.type || '-'} | **Priority**: ${task.priority || '-'} | **Effort**: ${task.effort || '-'}
+**Status**: ⏳ IN PROGRESS
+**Files**: ${(task.files || []).map(f => f.path).join(', ') || 'To be determined'}
+**Description**: ${task.description}
+**Convergence Criteria**:
+${task.convergence.criteria.map(c => `- [ ] ${c}`).join('\n')}
+
+### Execution Log
+`)
+
+  if (dryRun) {
+    // Simulate: mark as completed without changes
+    appendToEvents(`\n**Status**: ⏭ DRY RUN (no changes)\n\n---\n`)
+    task._execution = { status: 'completed', executed_at: startTime,
+      result: { success: true, summary: 'Dry run — no changes made' } }
+    completedTasks.add(task.id)
+    continue
+  }
+
+  // 3. Execute task directly
+  // - Read each file in task.files (if specified)
+  // - Analyze what changes satisfy task.description + task.convergence.criteria
+  // - If task.files has detailed changes, use them as guidance
+  // - Apply changes using Edit (preferred) or Write (for new files)
+  // - Use Grep/Glob/mcp__ace-tool for discovery if needed
+  // - Use Bash for build/test commands
+
+  // 4. Verify convergence
+  const convergenceResults = verifyConvergence(task)
+
+  const endTime = getUtc8ISOString()
+  const filesModified = getModifiedFiles()
+
+  if (convergenceResults.allPassed) {
+    // 5a. 
Record SUCCESS + appendToEvents(` +**Status**: ✅ COMPLETED +**Duration**: ${calculateDuration(startTime, endTime)} +**Files Modified**: ${filesModified.join(', ')} + +#### Changes Summary +${changeSummary} + +#### Convergence Verification +${task.convergence.criteria.map((c, i) => `- [${convergenceResults.verified[i] ? 'x' : ' '}] ${c}`).join('\n')} +- **Verification**: ${convergenceResults.verificationOutput} +- **Definition of Done**: ${task.convergence.definition_of_done} + +--- +`) + task._execution = { + status: 'completed', executed_at: endTime, + result: { + success: true, + files_modified: filesModified, + summary: changeSummary, + convergence_verified: convergenceResults.verified + } + } + completedTasks.add(task.id) + } else { + // 5b. Record FAILURE + handleTaskFailure(task, convergenceResults, startTime, endTime) + } + + // 6. Auto-commit if enabled + if (autoCommit && task._execution.status === 'completed') { + autoCommitTask(task, filesModified) + } +} ``` -### Step 3.2: Execute Task via CLI +### Step 3.2: Convergence Verification -Execute individual task using Codex CLI in synchronous mode. +```javascript +function verifyConvergence(task) { + const results = { + verified: [], // boolean[] per criterion + verificationOutput: '', // output of verification command + allPassed: true + } -**CLI Execution Scope**: -- **PURPOSE**: Execute task from plan -- **TASK DETAILS**: ID, title, description, required changes -- **PRIOR CONTEXT**: Results from previous tasks -- **REQUIRED CHANGES**: Files to modify with specific locations -- **MODE**: write (modification mode) -- **EXPECTED**: Files modified as specified, no test failures + // 1. 
Check each criterion + // For each criterion in task.convergence.criteria: + // - If it references a testable condition, check it + // - If it's manual, mark as verified based on changes made + // - Record true/false per criterion + task.convergence.criteria.forEach(criterion => { + const passed = evaluateCriterion(criterion, task) + results.verified.push(passed) + if (!passed) results.allPassed = false + }) -**CLI Parameters**: -- `--tool codex`: Use Codex for execution -- `--mode write`: Allow file modifications -- Synchronous execution: Wait for completion + // 2. Run verification command (if executable) + const verification = task.convergence.verification + if (isExecutableCommand(verification)) { + try { + const output = Bash(verification, { timeout: 120000 }) + results.verificationOutput = `${verification} → PASS` + } catch (e) { + results.verificationOutput = `${verification} → FAIL: ${e.message}` + results.allPassed = false + } + } else { + results.verificationOutput = `Manual: ${verification}` + } -### Step 3.3: Track Progress + return results +} -Record task execution results in the unified event log. +function isExecutableCommand(verification) { + // Detect executable patterns: npm, npx, jest, tsc, curl, pytest, go test, etc. + return /^(npm|npx|jest|tsc|eslint|pytest|go\s+test|cargo\s+test|curl|make)/.test(verification.trim()) +} +``` -**execution-events.md Structure**: -- **Header**: Session metadata -- **Event Timeline**: One entry per task with results -- **Event Format**: - - Task ID and title - - Timestamp and duration - - Status (completed/failed) - - Summary of changes - - Any notes or issues discovered +### Step 3.3: Failure Handling -**Event Recording Activities**: -1. Capture execution timestamp -2. Record task status and duration -3. Document any modifications made -4. Note any issues or discoveries -5. 
Append event to execution-events.md +```javascript +function handleTaskFailure(task, convergenceResults, startTime, endTime) { + appendToEvents(` +**Status**: ❌ FAILED +**Duration**: ${calculateDuration(startTime, endTime)} +**Error**: Convergence verification failed -### Step 3.4: Auto-Commit (if enabled) +#### Failed Criteria +${task.convergence.criteria.map((c, i) => `- [${convergenceResults.verified[i] ? 'x' : ' '}] ${c}`).join('\n')} +- **Verification**: ${convergenceResults.verificationOutput} -Commit task changes with conventional commit format. +--- +`) -**Auto-Commit Process**: -1. Get changed files from git status -2. Filter to task.files_to_modify -3. Stage files: `git add` -4. Generate commit message based on task type -5. Commit: `git commit -m` + task._execution = { + status: 'failed', executed_at: endTime, + result: { + success: false, + error: 'Convergence verification failed', + convergence_verified: convergenceResults.verified + } + } + failedTasks.add(task.id) -**Commit Message Format**: -- Type: feat, fix, refactor, test, docs (inferred from task) -- Scope: file/module affected (inferred from files modified) -- Subject: Task title or description -- Footer: Task ID and plan reference + // Ask user + AskUserQuestion({ + questions: [{ + question: `Task ${task.id} failed convergence verification. 
How to proceed?`, + header: "Failure", + multiSelect: false, + options: [ + { label: "Skip & Continue", description: "Skip this task, continue with next" }, + { label: "Retry", description: "Retry this task" }, + { label: "Accept", description: "Mark as completed despite failure" }, + { label: "Abort", description: "Stop execution, keep progress" } + ] + }] + }) +} +``` -**Success Criteria**: -- All tasks executed sequentially -- Results recorded in execution-events.md -- Auto-commits created (if enabled) -- Failed tasks logged for review +### Step 3.4: Auto-Commit + +```javascript +function autoCommitTask(task, filesModified) { + Bash(`git add ${filesModified.join(' ')}`) + + const commitType = { + fix: 'fix', refactor: 'refactor', feature: 'feat', + enhancement: 'feat', testing: 'test', infrastructure: 'chore' + }[task.type] || 'chore' + + const scope = inferScope(filesModified) + + Bash(`git commit -m "$(cat <<'EOF' +${commitType}(${scope}): ${task.title} + +Task: ${task.id} +Source: ${path.basename(planPath)} +EOF +)"`) + + appendToEvents(`**Commit**: \`${commitType}(${scope}): ${task.title}\`\n`) +} +``` --- ## Phase 4: Completion -**Objective**: Summarize execution results and offer follow-up actions. +**Objective**: Finalize all artifacts, write back execution state, offer follow-up actions. -### Step 4.1: Collect Statistics +### Step 4.1: Finalize execution.md -Gather execution metrics. 
+Append summary statistics to execution.md: -**Metrics Collection**: -- Total tasks executed -- Successfully completed count -- Failed count -- Success rate percentage -- Total duration -- Artifacts generated +```javascript +const summary = ` +## Execution Summary -### Step 4.2: Generate Summary +- **Completed**: ${getUtc8ISOString()} +- **Total Tasks**: ${tasks.length} +- **Succeeded**: ${completedTasks.size} +- **Failed**: ${failedTasks.size} +- **Skipped**: ${skippedTasks.size} +- **Success Rate**: ${Math.round(completedTasks.size / tasks.length * 100)}% -Update execution.md with final results. +### Task Results -**Summary Content**: -- Execution completion timestamp -- Statistics table -- Task status table (completed/failed) -- Commit log (if auto-commit enabled) -- Any failed tasks requiring attention +| ID | Title | Status | Convergence | Files Modified | +|----|-------|--------|-------------|----------------| +${tasks.map(t => { + const ex = t._execution || {} + const convergenceStatus = ex.result?.convergence_verified + ? `${ex.result.convergence_verified.filter(v => v).length}/${ex.result.convergence_verified.length}` + : '-' + return `| ${t.id} | ${t.title} | ${ex.status || 'pending'} | ${convergenceStatus} | ${(ex.result?.files_modified || []).join(', ') || '-'} |` +}).join('\n')} -### Step 4.3: Display Completion Summary +${failedTasks.size > 0 ? `### Failed Tasks -Present results to user. 
+${[...failedTasks].map(id => { + const t = tasks.find(t => t.id === id) + return `- **${t.id}**: ${t.title} — ${t._execution?.result?.error || 'Unknown'}` +}).join('\n')} +` : ''} +### Artifacts +- **Plan Source**: ${planPath} +- **Execution Overview**: ${sessionFolder}/execution.md +- **Execution Events**: ${sessionFolder}/execution-events.md +` +// Append to execution.md +``` -**Summary Output**: -- Session ID and folder path -- Statistics (completed/failed/total) -- Failed tasks (if any) -- Execution log location -- Next step recommendations +### Step 4.2: Finalize execution-events.md -**Success Criteria**: -- execution.md finalized with complete summary -- execution-events.md contains all task records -- User informed of completion status -- All artifacts successfully created +```javascript +appendToEvents(` +--- + +# Session Summary + +- **Session**: ${sessionId} +- **Completed**: ${getUtc8ISOString()} +- **Tasks**: ${completedTasks.size} completed, ${failedTasks.size} failed, ${skippedTasks.size} skipped +- **Total Events**: ${completedTasks.size + failedTasks.size + skippedTasks.size} +`) +``` + +### Step 4.3: Write Back tasks.jsonl with _execution + +Update the source JSONL file with execution states: + +```javascript +const updatedJsonl = tasks.map(task => JSON.stringify(task)).join('\n') +Write(planPath, updatedJsonl) +// Each task now has _execution: { status, executed_at, result } +``` + +### Step 4.4: Post-Completion Options + +```javascript +AskUserQuestion({ + questions: [{ + question: `Execution complete: ${completedTasks.size}/${tasks.length} succeeded (${Math.round(completedTasks.size / tasks.length * 100)}%).\nNext step:`, + header: "Post-Execute", + multiSelect: false, + options: [ + { label: "Retry Failed", description: `Re-execute ${failedTasks.size} failed tasks` }, + { label: "View Events", description: "Display execution-events.md" }, + { label: "Create Issue", description: "Create issue from failed tasks" }, + { label: "Done", 
description: "End workflow" }
+    ]
+  }]
+})
+```
+
+| Selection | Action |
+|-----------|--------|
+| Retry Failed | Filter tasks with `_execution.status === 'failed'`, re-execute, append `[RETRY]` events |
+| View Events | Display execution-events.md content |
+| Create Issue | `Skill(skill="issue:new", args="...")` from failed task details |
+| Done | Display artifact paths, end workflow |

---

## Configuration

-### Plan Format Detection
+| Flag | Default | Description |
+|------|---------|-------------|
+| `PLAN="..."` | auto-detect | Path to unified JSONL file (`tasks.jsonl`) |
+| `--auto-commit` | false | Commit changes after each successful task |
+| `--dry-run` | false | Simulate execution without making changes |

-Workflow automatically detects plan format:
+### Plan Auto-Detection Order

-| File Extension | Format |
-|---|---|
-| `.json` | JSON plan (lite-plan, collaborative-plan) |
-| `.md` | Markdown plan (IMPL_PLAN.md, plan-note.md) |
-| `synthesis.json` | Brainstorm synthesis |
-| `conclusions.json` | Analysis conclusions |
+When no `PLAN` is specified, search these locations in order and use the most recently modified match:

-### Execution Modes
+1. `.workflow/.req-plan/*/tasks.jsonl`
+2. `.workflow/.planning/*/tasks.jsonl`
+3. `.workflow/.analysis/*/tasks.jsonl`
+4. `.workflow/.brainstorm/*/tasks.jsonl`

-| Mode | Behavior | Use Case |
-|------|----------|----------|
-| Normal | Execute tasks, track progress | Standard execution |
-| `--auto-commit` | Execute + commit each task | Tracked progress with git history |
-| `--dry-run` | Simulate execution, no changes | Validate plan before executing |
-
-### Task Dependencies
-
-Tasks can declare dependencies on other tasks:
-- `depends_on: ["TASK-001", "TASK-002"]` - Wait for these tasks
-- Tasks are executed in topological order
-- Circular dependencies are detected and reported as error
+**If source is not unified JSONL**: Run `plan-converter` first. 
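The auto-detection order above can be sketched as a small pure function. This is a minimal sketch, not part of the skill: it assumes candidates have already been globbed into `{ path, mtime }` pairs (the helper name and candidate shape are illustrative), and it resolves ties by location priority first, then most recent modification time.

```javascript
// Location priority mirrors the search order above (index 0 = highest).
const SEARCH_ORDER = [
  '.workflow/.req-plan/',
  '.workflow/.planning/',
  '.workflow/.analysis/',
  '.workflow/.brainstorm/'
]

// Hypothetical helper: `candidates` is assumed to be pre-globbed
// { path, mtime } pairs; returns the best tasks.jsonl path, or null.
function autoDetectPlan(candidates) {
  const ranked = candidates
    .filter(c => c.path.endsWith('tasks.jsonl'))
    .map(c => ({ ...c, rank: SEARCH_ORDER.findIndex(dir => c.path.includes(dir)) }))
    .filter(c => c.rank !== -1)
  // Higher-priority location wins; within a location, newest file wins
  ranked.sort((a, b) => a.rank - b.rank || b.mtime - a.mtime)
  return ranked.length ? ranked[0].path : null
}
```

In the actual skill the candidate list would come from globbing the four patterns under `projectRoot`; the function above only captures the selection rule.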
--- @@ -368,43 +672,15 @@ Tasks can declare dependencies on other tasks: | Situation | Action | Recovery | |-----------|--------|----------| -| Plan not found | Check file path and common locations | Verify plan path is correct | -| Unsupported format | Detect format from extension/content | Use supported plan format | -| Circular dependency | Stop execution, report error | Remove or reorganize dependencies | -| Task execution fails | Record failure in log | Review error details in execution-events.md | -| File conflict | Document in execution-events.md | Resolve conflict manually or adjust plan order | -| Missing file | Log as warning, continue | Verify files will be created by prior tasks | - ---- - -## Execution Flow Diagram - -``` -Load Plan File - ├─ Detect format (JSON/Markdown) - ├─ Parse tasks - └─ Build dependency graph - -Validate - ├─ Check for cycles - ├─ Analyze file conflicts - └─ Calculate execution order - -Execute Sequentially - ├─ Task 1: CLI execution → record result - ├─ Task 2: CLI execution → record result - ├─ Task 3: CLI execution → record result - └─ (repeat for all tasks) - -Track Progress - ├─ Update execution.md after each task - └─ Append event to execution-events.md - -Complete - ├─ Generate final summary - ├─ Report statistics - └─ Offer follow-up actions -``` +| JSONL file not found | Report error with path | Check path, run plan-converter | +| Invalid JSON line | Report line number and error | Fix JSONL file manually | +| Missing convergence | Report validation error | Run plan-converter to add convergence | +| Circular dependency | Stop, report cycle path | Fix dependencies in JSONL | +| Task execution fails | Record in events, ask user | Retry, skip, accept, or abort | +| Convergence verification fails | Mark task failed, ask user | Fix code and retry, or accept | +| Verification command timeout | Mark as unverified | Manual verification needed | +| File conflict during execution | Document in events | Resolve in dependency order 
| +| All tasks fail | Report, suggest plan review | Re-analyze or manual intervention | --- @@ -412,62 +688,24 @@ Complete ### Before Execution -1. **Review Plan**: Check plan.md or plan-note.md for completeness -2. **Validate Format**: Ensure plan is in supported format -3. **Check Dependencies**: Verify dependency order is logical -4. **Test First**: Use `--dry-run` mode to validate before actual execution -5. **Backup**: Commit any pending changes before starting +1. **Validate Plan**: Use `--dry-run` first to check plan feasibility +2. **Check Convergence**: Ensure all tasks have meaningful convergence criteria +3. **Review Dependencies**: Verify execution order makes sense +4. **Backup**: Commit pending changes before starting +5. **Convert First**: Use `plan-converter` for non-JSONL sources ### During Execution -1. **Monitor Progress**: Check execution-events.md for real-time updates -2. **Handle Failures**: Review error details and decide whether to continue +1. **Monitor Events**: Check execution-events.md for real-time progress +2. **Handle Failures**: Review convergence failures carefully before deciding 3. **Check Commits**: Verify auto-commits are correct if enabled -4. **Track Context**: Prior task results are available to subsequent tasks ### After Execution -1. **Review Results**: Check execution.md summary and statistics -2. **Verify Changes**: Inspect modified files match expected changes -3. **Handle Failures**: Address any failed tasks -4. **Update History**: Check git log for conventional commits if enabled -5. **Plan Next Steps**: Use completion artifacts for future work - ---- - -## Command Examples - -### Standard Execution - -```bash -PLAN="${projectRoot}/.workflow/.planning/CPLAN-auth-2025-01-27/plan-note.md" -``` - -Execute the plan with standard options. 
- -### With Auto-Commit - -```bash -PLAN="${projectRoot}/.workflow/.planning/CPLAN-auth-2025-01-27/plan-note.md" \ - --auto-commit -``` - -Execute and automatically commit changes after each task. - -### Dry-Run Mode - -```bash -PLAN="${projectRoot}/.workflow/.planning/CPLAN-auth-2025-01-27/plan-note.md" \ - --dry-run -``` - -Simulate execution without making changes. - -### Auto-Detect Plan - -```bash -# No PLAN specified - auto-detects from .workflow/ directories -``` +1. **Review Summary**: Check execution.md statistics and failed tasks +2. **Verify Changes**: Inspect modified files match expectations +3. **Check JSONL**: Review `_execution` states in tasks.jsonl +4. **Next Steps**: Use completion options for follow-up ---