feat: add plan-converter skill for converting planning artifacts to unified JSONL format

- Implemented a new skill to convert various planning outputs (roadmap.jsonl, plan-note.md, conclusions.json, synthesis.json) into a standardized JSONL task format.
- Included detailed documentation on supported input formats, unified JSONL schema, execution process, and error handling.
- Added functions for parsing, transforming, and validating input data to ensure quality and consistency in the output.
Author: catlog22
Date: 2026-02-09 13:32:46 +08:00
Parent: c3fd0624de
Commit: afd9729873
5 changed files with 1274 additions and 487 deletions

---

@@ -7,14 +7,14 @@
## Execution Flow
```
conclusions.json → execution-plan.jsonl → User Confirmation → Direct Inline Execution → execution.md + execution-events.md
conclusions.json → tasks.jsonl → User Confirmation → Direct Inline Execution → execution.md + execution-events.md
```
---
## Step 1: Generate execution-plan.jsonl
## Step 1: Generate tasks.jsonl
Convert `conclusions.json` recommendations directly into JSONL execution list. Each line is a self-contained task with convergence criteria.
Convert `conclusions.json` recommendations directly into unified JSONL task format. Each line is a self-contained task with convergence criteria, compatible with `unified-execute-with-file`.
**Conversion Logic**:
@@ -32,22 +32,28 @@ const tasks = conclusions.recommendations.map((rec, index) => ({
description: rec.rationale,
type: inferTaskType(rec), // fix | refactor | feature | enhancement | testing
priority: rec.priority, // high | medium | low
files_to_modify: extractFilesFromEvidence(rec, explorations),
effort: inferEffort(rec), // small | medium | large
files: extractFilesFromEvidence(rec, explorations).map(f => ({
path: f,
action: 'modify' // modify | create | delete
})),
depends_on: [], // Serial by default; add dependencies if task ordering matters
convergence: {
criteria: generateCriteria(rec), // Testable conditions
verification: generateVerification(rec), // Executable command or steps
definition_of_done: generateDoD(rec) // Business language
},
context: {
source_conclusions: conclusions.key_conclusions,
evidence: rec.evidence || []
evidence: rec.evidence || [],
source: {
tool: 'analyze-with-file',
session_id: sessionId,
original_id: `TASK-${String(index + 1).padStart(3, '0')}`
}
}))
// Write one task per line
const jsonlContent = tasks.map(t => JSON.stringify(t)).join('\n')
Write(`${sessionFolder}/execution-plan.jsonl`, jsonlContent)
Write(`${sessionFolder}/tasks.jsonl`, jsonlContent)
```
**Task Type Inference**:
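A hypothetical keyword sketch of `inferTaskType` (the same heuristic the plan-converter skill applies in `inferTypeFromAction` further below):

```javascript
// Illustrative only: map recommendation wording to a task type.
function inferTaskType(rec) {
  const text = `${rec.action} ${rec.rationale || ''}`.toLowerCase()
  if (/fix|resolve|repair/.test(text)) return 'fix'
  if (/refactor|restructure|extract/.test(text)) return 'refactor'
  if (/test|coverage/.test(text)) return 'testing'
  if (/improve|optimize|enhance/.test(text)) return 'enhancement'
  return 'feature' // default
}
```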
@@ -64,7 +70,15 @@ Write(`${sessionFolder}/execution-plan.jsonl`, jsonlContent)
- Parse evidence from `explorations.json` or `perspectives.json`
- Match recommendation action keywords to `relevant_files`
- If no specific files found, use pattern matching from findings
- Include both files to modify and files as read-only context
- Return file paths as strings (converted to `{path, action}` objects in the task)
**Effort Inference** (signals evaluated top-down, first match wins):
| Signal | Effort |
|--------|--------|
| Priority high + multiple files | `large` |
| Priority medium or 1-2 files | `medium` |
| Priority low or single file | `small` |
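A minimal sketch of that heuristic (the `files` parameter, the extracted file list, is an assumed signature):

```javascript
// Illustrative sketch of inferEffort; thresholds mirror the table above.
function inferEffort(rec, files = []) {
  if (rec.priority === 'high' && files.length > 1) return 'large'
  if (rec.priority === 'medium' || files.length <= 2) return 'medium'
  return 'small' // low priority (1-2 files already matched the row above)
}
```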
**Convergence Generation**:
@@ -92,13 +106,13 @@ tasks.forEach(task => {
})
```
**Output**: `${sessionFolder}/execution-plan.jsonl`
**Output**: `${sessionFolder}/tasks.jsonl`
**JSONL Schema** (one task per line):
```jsonl
{"id":"TASK-001","title":"Fix authentication token refresh","description":"Token refresh fails silently when...","type":"fix","priority":"high","files_to_modify":["src/auth/token.ts","src/middleware/auth.ts"],"depends_on":[],"convergence":{"criteria":["Token refresh returns new valid token","Expired token triggers refresh automatically","Failed refresh redirects to login"],"verification":"jest --testPathPattern=token.test.ts","definition_of_done":"Users remain logged in across token expiration without manual re-login"},"context":{"source_conclusions":[...],"evidence":[...]}}
{"id":"TASK-002","title":"Add input validation to user endpoints","description":"Missing validation allows...","type":"enhancement","priority":"medium","files_to_modify":["src/routes/user.ts","src/validators/user.ts"],"depends_on":["TASK-001"],"convergence":{"criteria":["All user inputs validated against schema","Invalid inputs return 400 with specific error message","SQL injection patterns rejected"],"verification":"jest --testPathPattern=user.validation.test.ts","definition_of_done":"All user-facing inputs are validated with clear error feedback"},"context":{"source_conclusions":[...],"evidence":[...]}}
{"id":"TASK-001","title":"Fix authentication token refresh","description":"Token refresh fails silently when...","type":"fix","priority":"high","effort":"large","files":[{"path":"src/auth/token.ts","action":"modify"},{"path":"src/middleware/auth.ts","action":"modify"}],"depends_on":[],"convergence":{"criteria":["Token refresh returns new valid token","Expired token triggers refresh automatically","Failed refresh redirects to login"],"verification":"jest --testPathPattern=token.test.ts","definition_of_done":"Users remain logged in across token expiration without manual re-login"},"evidence":[...],"source":{"tool":"analyze-with-file","session_id":"ANL-xxx","original_id":"TASK-001"}}
{"id":"TASK-002","title":"Add input validation to user endpoints","description":"Missing validation allows...","type":"enhancement","priority":"medium","effort":"medium","files":[{"path":"src/routes/user.ts","action":"modify"},{"path":"src/validators/user.ts","action":"create"}],"depends_on":["TASK-001"],"convergence":{"criteria":["All user inputs validated against schema","Invalid inputs return 400 with specific error message","SQL injection patterns rejected"],"verification":"jest --testPathPattern=user.validation.test.ts","definition_of_done":"All user-facing inputs are validated with clear error feedback"},"evidence":[...],"source":{"tool":"analyze-with-file","session_id":"ANL-xxx","original_id":"TASK-002"}}
```
---
@@ -110,7 +124,7 @@ Validate feasibility before starting execution. Reference: unified-execute-with-
##### Step 2.1: Build Execution Order
```javascript
const tasks = Read(`${sessionFolder}/execution-plan.jsonl`)
const tasks = Read(`${sessionFolder}/tasks.jsonl`)
.split('\n').filter(l => l.trim()).map(l => JSON.parse(l))
// 1. Dependency validation
@@ -169,9 +183,9 @@ const executionOrder = topoSort(tasks)
// Check files modified by multiple tasks
const fileTaskMap = new Map() // file → [taskIds]
tasks.forEach(task => {
task.files_to_modify.forEach(file => {
if (!fileTaskMap.has(file)) fileTaskMap.set(file, [])
fileTaskMap.get(file).push(task.id)
(task.files || []).forEach(f => {
if (!fileTaskMap.has(f.path)) fileTaskMap.set(f.path, [])
fileTaskMap.get(f.path).push(task.id)
})
})
@@ -185,8 +199,10 @@ fileTaskMap.forEach((taskIds, file) => {
// Check file existence
const missingFiles = []
tasks.forEach(task => {
task.files_to_modify.forEach(file => {
if (!file_exists(file)) missingFiles.push({ file, task: task.id, action: "Will be created" })
(task.files || []).forEach(f => {
if (f.action !== 'create' && !file_exists(f.path)) {
missingFiles.push({ file: f.path, task: task.id, action: "Will be created" })
}
})
})
```
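`topoSort` is referenced above but not defined in this hunk; one workable implementation is Kahn's algorithm, which also surfaces cycles (a sketch, not necessarily the skill's exact code):

```javascript
// Kahn's algorithm over task.depends_on; throws on circular dependencies.
function topoSort(tasks) {
  const indegree = new Map(tasks.map(t => [t.id, 0]))
  const dependents = new Map(tasks.map(t => [t.id, []]))
  for (const t of tasks) {
    for (const dep of t.depends_on || []) {
      if (!dependents.has(dep)) continue // unknown deps are reported by validation above
      indegree.set(t.id, indegree.get(t.id) + 1)
      dependents.get(dep).push(t.id)
    }
  }
  const queue = tasks.map(t => t.id).filter(id => indegree.get(id) === 0)
  const order = []
  while (queue.length) {
    const id = queue.shift()
    order.push(id)
    for (const next of dependents.get(id)) {
      indegree.set(next, indegree.get(next) - 1)
      if (indegree.get(next) === 0) queue.push(next)
    }
  }
  if (order.length !== tasks.length) throw new Error('Circular dependencies detected')
  return order
}
```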
@@ -204,7 +220,7 @@ const executionMd = `# Execution Overview
## Session Info
- **Session ID**: ${sessionId}
- **Plan Source**: execution-plan.jsonl (from analysis conclusions)
- **Plan Source**: tasks.jsonl (from analysis conclusions)
- **Started**: ${getUtc8ISOString()}
- **Total Tasks**: ${tasks.length}
- **Execution Mode**: Direct inline (serial)
@@ -254,7 +270,7 @@ const eventsHeader = `# Execution Events
**Session**: ${sessionId}
**Started**: ${getUtc8ISOString()}
**Source**: execution-plan.jsonl
**Source**: tasks.jsonl
---
@@ -341,15 +357,16 @@ for (const taskId of executionOrder) {
**Type**: ${task.type} | **Priority**: ${task.priority}
**Status**: ⏳ IN PROGRESS
**Files**: ${task.files_to_modify.join(', ')}
**Files**: ${(task.files || []).map(f => f.path).join(', ')}
**Description**: ${task.description}
### Execution Log
`)
// 3. Execute task directly
// - Read each file in task.files_to_modify
// - Read each file in task.files (if specified)
// - Analyze what changes satisfy task.description + task.convergence.criteria
// - If task.files has detailed changes, use them as guidance
// - Apply changes using Edit (preferred) or Write (for new files)
// - Use Grep/Glob for discovery if needed
// - Use Bash for build/test verification commands
@@ -409,6 +426,12 @@ ${attemptedChanges}
updateTaskStatus(task.id, 'failed', [], errorMessage)
failedTasks.add(task.id)
// Set _execution state
task._execution = {
status: 'failed', executed_at: getUtc8ISOString(),
result: { success: false, error: errorMessage, files_modified: [] }
}
// Ask user how to proceed
if (!autoYes) {
const decision = AskUserQuestion({
@@ -431,7 +454,7 @@ if (!autoYes) {
After each successful task, optionally commit changes:
```javascript
if (autoCommit && task.status === 'completed') {
if (autoCommit && task._execution?.status === 'completed') {
// 1. Stage modified files
Bash(`git add ${filesModified.join(' ')}`)
@@ -473,17 +496,17 @@ const summary = `
| ID | Title | Status | Files Modified |
|----|-------|--------|----------------|
${tasks.map(t => `| ${t.id} | ${t.title} | ${t.status} | ${(t.result?.files_modified || []).join(', ') || '-'} |`).join('\n')}
${tasks.map(t => `| ${t.id} | ${t.title} | ${t._execution?.status || 'pending'} | ${(t._execution?.result?.files_modified || []).join(', ') || '-'} |`).join('\n')}
${failedTasks.size > 0 ? `### Failed Tasks Requiring Attention
${[...failedTasks].map(id => {
const t = tasks.find(t => t.id === id)
return `- **${t.id}**: ${t.title}${t.result?.error || 'Unknown error'}`
return `- **${t.id}**: ${t.title}: ${t._execution?.result?.error || 'Unknown error'}`
}).join('\n')}
` : ''}
### Artifacts
- **Execution Plan**: ${sessionFolder}/execution-plan.jsonl
- **Execution Plan**: ${sessionFolder}/tasks.jsonl
- **Execution Overview**: ${sessionFolder}/execution.md
- **Execution Events**: ${sessionFolder}/execution-events.md
`
@@ -507,23 +530,26 @@ appendToEvents(`
`)
```
##### Step 6.3: Update execution-plan.jsonl
##### Step 6.3: Update tasks.jsonl
Rewrite JSONL with execution results per task:
Rewrite JSONL with `_execution` state per task:
```javascript
const updatedJsonl = tasks.map(task => JSON.stringify({
...task,
status: task.status, // "completed" | "failed" | "skipped" | "pending"
executed_at: task.executed_at, // ISO timestamp
result: {
success: task.status === 'completed',
files_modified: task.result?.files_modified || [],
summary: task.result?.summary || '',
error: task.result?.error || null
_execution: {
status: task._execution?.status || 'pending', // "completed" | "failed" | "skipped" | "pending"
executed_at: task._execution?.executed_at, // ISO timestamp
result: {
success: task._execution?.status === 'completed',
files_modified: task._execution?.result?.files_modified || [],
summary: task._execution?.result?.summary || '',
error: task._execution?.result?.error || null,
convergence_verified: task._execution?.result?.convergence_verified || []
}
}
})).join('\n')
Write(`${sessionFolder}/execution-plan.jsonl`, updatedJsonl)
Write(`${sessionFolder}/tasks.jsonl`, updatedJsonl)
```
---
@@ -560,10 +586,10 @@ if (!autoYes) {
| Done | Display artifact paths, end workflow |
**Retry Logic**:
- Filter tasks with `status: "failed"`
- Filter tasks with `_execution.status === 'failed'`
- Re-execute in original dependency order
- Append retry events to execution-events.md with `[RETRY]` prefix
- Update execution.md and execution-plan.jsonl
- Update execution.md and tasks.jsonl (see the sketch below)
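A minimal sketch of that retry pass, reusing `executionOrder` from Step 2.1 (event wording illustrative):

```javascript
// Re-run failed tasks only, preserving the original topological order.
const failedIds = new Set(
  tasks.filter(t => t._execution?.status === 'failed').map(t => t.id)
)
for (const taskId of executionOrder.filter(id => failedIds.has(id))) {
  appendToEvents(`\n## [RETRY] ${taskId} at ${getUtc8ISOString()}\n`)
  // ...same inline execution as the first pass, then update execution.md + tasks.jsonl
}
```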
---
@@ -574,14 +600,14 @@ When Quick Execute is activated, session folder expands with:
```
{projectRoot}/.workflow/.analysis/ANL-{slug}-{date}/
├── ... # Phase 1-4 artifacts
├── execution-plan.jsonl # ⭐ JSONL execution list (one task per line, with convergence)
├── tasks.jsonl # ⭐ Unified JSONL (one task per line, with convergence + source)
├── execution.md # Plan overview + task table + execution summary
└── execution-events.md # ⭐ Unified event log (all task executions with details)
```
| File | Purpose |
|------|---------|
| `execution-plan.jsonl` | Self-contained task list from conclusions, each line has convergence criteria |
| `tasks.jsonl` | Unified task list from conclusions, each line has convergence criteria and source provenance |
| `execution.md` | Overview: plan source, task table, pre-execution analysis, execution timeline, final summary |
| `execution-events.md` | Chronological event stream: task start/complete/fail with details, changes, verification results |
@@ -594,7 +620,7 @@ When Quick Execute is activated, session folder expands with:
**Session**: ANL-xxx-2025-01-21
**Started**: 2025-01-21T10:00:00+08:00
**Source**: execution-plan.jsonl
**Source**: tasks.jsonl
---
@@ -655,9 +681,9 @@ When Quick Execute is activated, session folder expands with:
|-----------|--------|----------|
| Task execution fails | Record failure in execution-events.md, ask user | Retry, skip, or abort |
| Verification command fails | Mark criterion as unverified, continue | Note in events, manual check needed |
| No recommendations in conclusions | Cannot generate execution-plan.jsonl | Inform user, suggest lite-plan |
| No recommendations in conclusions | Cannot generate tasks.jsonl | Inform user, suggest lite-plan |
| File conflict during execution | Document in execution-events.md | Resolve in dependency order |
| Circular dependencies detected | Stop, report error | Fix dependencies in execution-plan.jsonl |
| Circular dependencies detected | Stop, report error | Fix dependencies in tasks.jsonl |
| All tasks fail | Record all failures, suggest analysis review | Re-run analysis or manual intervention |
| Missing target file | Attempt to create if task.type is "feature" | Log as warning for other types |
@@ -665,9 +691,10 @@ When Quick Execute is activated, session folder expands with:
## Success Criteria
- `execution-plan.jsonl` generated with convergence criteria per task
- `tasks.jsonl` generated with convergence criteria and source provenance per task
- `execution.md` contains plan overview, task table, pre-execution analysis, final summary
- `execution-events.md` contains chronological event stream with convergence verification
- All tasks executed (or explicitly skipped) via direct inline execution
- Each task's convergence criteria checked and recorded
- `_execution` state written back to tasks.jsonl after completion
- User informed of results and next steps

---

@@ -55,11 +55,11 @@ The key innovation is the **Plan Note** architecture — a shared collaborative
│ │
│ Phase 2: Serial Domain Planning │
│ ┌──────────────┐ │
│ │ Domain 1 │→ Explore codebase → Generate plan.json
│ │ Domain 1 │→ Explore codebase → Generate tasks.jsonl
│ │ Section 1 │→ Fill task pool + evidence in plan-note.md │
│ └──────┬───────┘ │
│ ┌──────▼───────┐ │
│ │ Domain 2 │→ Explore codebase → Generate plan.json
│ │ Domain 2 │→ Explore codebase → Generate tasks.jsonl
│ │ Section 2 │→ Fill task pool + evidence in plan-note.md │
│ └──────┬───────┘ │
│ ┌──────▼───────┐ │
@@ -72,6 +72,7 @@ The key innovation is the **Plan Note** architecture — a shared collaborative
│ └─ Update plan-note.md conflict section │
│ │
│ Phase 4: Completion (No Merge) │
│ ├─ Merge domain tasks.jsonl → session tasks.jsonl │
│ ├─ Generate plan.md (human-readable) │
│ └─ Ready for execution │
│ │
@@ -86,10 +87,11 @@ The key innovation is the **Plan Note** architecture — a shared collaborative
├── requirement-analysis.json # Phase 1: Sub-domain assignments
├── domains/ # Phase 2: Per-domain plans
│ ├── {domain-1}/
│ │ └── plan.json # Detailed plan
│ │ └── tasks.jsonl # Unified JSONL (one task per line)
│ ├── {domain-2}/
│ │ └── plan.json
│ │ └── tasks.jsonl
│ └── ...
├── tasks.jsonl # ⭐ Merged unified JSONL (all domains)
├── conflicts.json # Phase 3: Conflict report
└── plan.md # Phase 4: Human-readable summary
```
@@ -107,7 +109,7 @@ The key innovation is the **Plan Note** architecture — a shared collaborative
| Artifact | Purpose |
|----------|---------|
| `domains/{domain}/plan.json` | Detailed implementation plan per domain |
| `domains/{domain}/tasks.jsonl` | Unified JSONL per domain (one task per line with convergence) |
| Updated `plan-note.md` | Task pool and evidence sections filled for each domain |
### Phase 3: Conflict Detection
@@ -121,6 +123,7 @@ The key innovation is the **Plan Note** architecture — a shared collaborative
| Artifact | Purpose |
|----------|---------|
| `tasks.jsonl` | Merged unified JSONL from all domains (consumable by unified-execute) |
| `plan.md` | Human-readable summary with requirements, tasks, and conflicts |
---
@@ -302,38 +305,45 @@ for (const sub of subDomains) {
// - Integration points with other domains
// - Architecture constraints
// 3. Generate detailed plan.json
const plan = {
session_id: sessionId,
focus_area: sub.focus_area,
description: sub.description,
task_id_range: sub.task_id_range,
generated_at: getUtc8ISOString(),
tasks: [
// For each task within the assigned ID range:
{
id: `TASK-${String(sub.task_id_range[0]).padStart(3, '0')}`,
title: "...",
complexity: "Low | Medium | High",
depends_on: [], // TASK-xxx references
scope: "...", // Brief scope description
modification_points: [ // file:line → change summary
{ file: "...", location: "...", change: "..." }
],
conflict_risk: "Low | Medium | High",
estimated_effort: "..."
// 3. Generate unified JSONL tasks (one task per line)
const domainTasks = [
// For each task within the assigned ID range:
{
id: `TASK-${String(sub.task_id_range[0]).padStart(3, '0')}`,
title: "...",
description: "...", // scope/goal of this task
type: "feature", // infrastructure|feature|enhancement|fix|refactor|testing
priority: "medium", // high|medium|low
effort: "medium", // small|medium|large
scope: "...", // Brief scope description
depends_on: [], // TASK-xxx references
convergence: {
criteria: ["... (testable)"], // Testable conditions
verification: "... (executable)", // Command or steps
definition_of_done: "... (business language)"
},
files: [ // Files to modify
{
path: "...",
action: "modify", // modify|create|delete
changes: ["..."], // Change descriptions
conflict_risk: "low" // low|medium|high
}
],
source: {
tool: "collaborative-plan-with-file",
session_id: sessionId,
original_id: `TASK-${String(sub.task_id_range[0]).padStart(3, '0')}`
}
// ... more tasks
],
evidence: {
relevant_files: [...],
existing_patterns: [...],
constraints: [...]
}
}
Write(`${sessionFolder}/domains/${sub.focus_area}/plan.json`, JSON.stringify(plan, null, 2))
// ... more tasks
]
// 4. Sync summary to plan-note.md
// 4. Write domain tasks.jsonl (one task per line)
const jsonlContent = domainTasks.map(t => JSON.stringify(t)).join('\n')
Write(`${sessionFolder}/domains/${sub.focus_area}/tasks.jsonl`, jsonlContent)
// 5. Sync summary to plan-note.md
// Read current plan-note.md
// Locate pre-allocated sections:
// - Task Pool: "## 任务池 - ${toTitleCase(sub.focus_area)}"
@@ -348,11 +358,17 @@ for (const sub of subDomains) {
```markdown
### TASK-{ID}: {Title} [{focus-area}]
- **状态**: pending
- **复杂度**: Low/Medium/High
- **类型**: feature/fix/refactor/enhancement/testing/infrastructure
- **优先级**: high/medium/low
- **工作量**: small/medium/large
- **依赖**: TASK-xxx (if any)
- **范围**: Brief scope description
- **修改**: `file:location`: change summary
- **冲突风险**: Low/Medium/High
- **修改文件**: `file-path` (action): change summary
- **收敛标准**:
- criteria 1
- criteria 2
- **验证方式**: executable command or steps
- **完成定义**: business language definition
```
**Evidence Format** (for plan-note.md evidence sections):
@@ -366,8 +382,10 @@ for (const sub of subDomains) {
**Domain Planning Rules**:
- Each domain modifies ONLY its pre-allocated sections in plan-note.md
- Use assigned TASK ID range exclusively
- Include `conflict_risk` assessment for each task
- Include convergence criteria for each task (criteria + verification + definition_of_done)
- Include `files[]` with conflict_risk assessment per file
- Reference cross-domain dependencies explicitly
- Each task record must be self-contained (can be independently consumed by unified-execute)
### Step 2.3: Verify plan-note.md Consistency
@@ -381,7 +399,8 @@ After all domains are planned, verify the shared document.
5. Check for any section format inconsistencies
**Success Criteria**:
- `domains/{domain}/plan.json` created for each domain
- `domains/{domain}/tasks.jsonl` created for each domain (unified JSONL format)
- Each task has convergence (criteria + verification + definition_of_done)
- `plan-note.md` updated with all task pools and evidence sections
- Task summaries follow consistent format
- No TASK ID overlaps across domains
@@ -394,7 +413,7 @@ After all domains are planned, verify the shared document.
### Step 3.1: Parse plan-note.md
Extract all tasks from all "任务池" sections.
Extract all tasks from all "任务池" sections and domain tasks.jsonl files.
```javascript
// parsePlanNote(markdown)
@@ -403,20 +422,32 @@ Extract all tasks from all "任务池" sections.
// - Build sections array: { level, heading, start, content }
// - Return: { frontmatter, sections }
// Also load all domain tasks.jsonl for detailed data
// loadDomainTasks(sessionFolder, subDomains):
// const allTasks = []
// for (const sub of subDomains) {
// const jsonl = Read(`${sessionFolder}/domains/${sub.focus_area}/tasks.jsonl`)
// jsonl.split('\n').filter(l => l.trim()).forEach(line => {
// allTasks.push(JSON.parse(line))
// })
// }
// return allTasks
// extractTasksFromSection(content, sectionHeading)
// - Match: /### (TASK-\d+):\s+(.+?)\s+\[(.+?)\]/
// - For each: extract taskId, title, author
// - Parse details: status, complexity, depends_on, modification_points, conflict_risk
// - Parse details: status, type, priority, effort, depends_on, files, convergence
// - Return: array of task objects
// parseTaskDetails(content)
// - Extract via regex:
// - /\*\*状态\*\*:\s*(.+)/ → status
// - /\*\*复杂度\*\*:\s*(.+)/ → complexity
// - /\*\*类型\*\*:\s*(.+)/ → type
// - /\*\*优先级\*\*:\s*(.+)/ → priority
// - /\*\*工作量\*\*:\s*(.+)/ → effort
// - /\*\*依赖\*\*:\s*(.+)/ → depends_on (extract TASK-\d+ references)
// - /\*\*冲突风险\*\*:\s*(.+)/ → conflict_risk
// - Extract modification points: /- `([^`]+):\s*([^`]+)`:\s*(.+)/ → file, location, summary
// - Return: { status, complexity, depends_on[], modification_points[], conflict_risk }
// - Extract files: /- `([^`]+)` \((\w+)\):\s*(.+)/ → path, action, change
// - Return: { status, type, priority, effort, depends_on[], files[], convergence }
```
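`parseTaskDetails` can be sketched directly from those regexes; parsing of the multi-line 收敛标准 block is omitted for brevity:

```javascript
// Sketch: extract labeled fields and file entries from one task block.
function parseTaskDetails(content) {
  const field = (label) =>
    (content.match(new RegExp(`\\*\\*${label}\\*\\*:\\s*(.+)`)) || [])[1]?.trim()
  const files = []
  const fileRe = /- `([^`]+)` \((\w+)\):\s*(.+)/g
  let m
  while ((m = fileRe.exec(content)) !== null) {
    files.push({ path: m[1], action: m[2], changes: [m[3].trim()] })
  }
  return {
    status: field('状态'),
    type: field('类型'),
    priority: field('优先级'),
    effort: field('工作量'),
    depends_on: ((field('依赖') || '').match(/TASK-\d+/g)) || [],
    files
  }
}
```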
### Step 3.2: Detect Conflicts
@@ -435,10 +466,10 @@ Scan all tasks for three categories of conflicts.
```javascript
// detectFileConflicts(tasks)
// Build fileMap: { "file:location": [{ task_id, task_title, source_domain, change }] }
// For each location with modifications from multiple domains:
// Build fileMap: { "file-path": [{ task_id, task_title, source_domain, changes }] }
// For each file with modifications from multiple domains:
// → conflict: type='file_conflict', severity='high'
// → include: location, tasks_involved, domains_involved, modifications
// → include: file, tasks_involved, domains_involved, changes
// → resolution: 'Coordinate modification order or merge changes'
// detectDependencyCycles(tasks)
@@ -459,9 +490,9 @@ function detectCycles(tasks) {
}
// detectStrategyConflicts(tasks)
// Group tasks by files they modify
// Group tasks by files they modify (from task.files[].path)
// For each file with tasks from multiple domains:
// Filter for high/medium conflict_risk tasks
// Filter for tasks with files[].conflict_risk === 'high' or 'medium'
// If >1 high-risk from different domains:
// → conflict: type='strategy_conflict', severity='medium'
// → resolution: 'Review approaches and align on single strategy'
@@ -512,7 +543,26 @@ Write(`${sessionFolder}/conflicts.json`, JSON.stringify({
**Objective**: Generate human-readable plan summary and finalize workflow.
### Step 4.1: Generate plan.md
### Step 4.1: Merge Domain tasks.jsonl
Merge all per-domain JSONL files into a single session-level `tasks.jsonl`.
```javascript
// Collect all domain tasks
const allDomainTasks = []
for (const sub of subDomains) {
const jsonl = Read(`${sessionFolder}/domains/${sub.focus_area}/tasks.jsonl`)
jsonl.split('\n').filter(l => l.trim()).forEach(line => {
allDomainTasks.push(JSON.parse(line))
})
}
// Write merged tasks.jsonl at session root
const mergedJsonl = allDomainTasks.map(t => JSON.stringify(t)).join('\n')
Write(`${sessionFolder}/tasks.jsonl`, mergedJsonl)
```
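Because TASK ID ranges are pre-allocated per domain, a duplicate ID after merging signals a planning error; a minimal guard (error wording illustrative):

```javascript
// Fail fast if two domains emitted the same task ID.
const seen = new Set()
for (const t of allDomainTasks) {
  if (seen.has(t.id)) throw new Error(`Duplicate task ID after merge: ${t.id}`)
  seen.add(t.id)
}
```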
### Step 4.2: Generate plan.md
Create a human-readable summary from plan-note.md content.
@@ -549,9 +599,9 @@ ${subDomains.map((s, i) => `| ${i+1} | ${s.focus_area} | ${s.description} | ${s.
## 任务概览
${subDomains.map(sub => {
const domainTasks = allTasks.filter(t => t.source_domain === sub.focus_area)
const domainTasks = allTasks.filter(t => { const n = parseInt(t.id.replace(/\D/g, ''), 10); return n >= sub.task_id_range[0] && n <= sub.task_id_range[1] })
return `### ${sub.focus_area}\n\n` +
domainTasks.map(t => `- **${t.id}**: ${t.title} (${t.complexity}) ${t.depends_on.length ? '← ' + t.depends_on.join(', ') : ''}`).join('\n')
domainTasks.map(t => `- **${t.id}**: ${t.title} (${t.type}, ${t.effort}) ${t.depends_on.length ? '← ' + t.depends_on.join(', ') : ''}`).join('\n')
}).join('\n\n')}
## 冲突报告
@@ -563,7 +613,7 @@ ${allConflicts.length === 0
## 执行
\`\`\`bash
/workflow:unified-execute-with-file "${sessionFolder}/plan-note.md"
/workflow:unified-execute-with-file PLAN="${sessionFolder}/tasks.jsonl"
\`\`\`
**Session artifacts**: \`${sessionFolder}/\`
@@ -571,7 +621,7 @@ ${allConflicts.length === 0
Write(`${sessionFolder}/plan.md`, planMd)
```
### Step 4.2: Display Completion Summary
### Step 4.3: Display Completion Summary
Present session statistics and next steps.
@@ -602,13 +652,14 @@ if (!autoMode) {
| Selection | Action |
|-----------|--------|
| Execute Plan | `Skill(skill="workflow:unified-execute-with-file", args="${sessionFolder}/plan-note.md")` |
| Execute Plan | `Skill(skill="workflow:unified-execute-with-file", args="PLAN=\"${sessionFolder}/tasks.jsonl\"")` |
| Review Conflicts | Display conflicts.json content for manual resolution |
| Export | Copy plan.md + plan-note.md to user-specified location |
| Done | Display artifact paths, end workflow |
**Success Criteria**:
- `plan.md` generated with complete summary
- `tasks.jsonl` merged at session root (consumable by unified-execute)
- All artifacts present in session directory
- User informed of completion and next steps
@@ -634,10 +685,12 @@ User initiates: TASK="task description"
├─ Generate requirement-analysis.json
├─ Serial domain planning:
│ ├─ Domain 1: explore → plan.json → fill plan-note.md
│ ├─ Domain 2: explore → plan.json → fill plan-note.md
│ ├─ Domain 1: explore → tasks.jsonl → fill plan-note.md
│ ├─ Domain 2: explore → tasks.jsonl → fill plan-note.md
│ └─ Domain N: ...
├─ Merge domain tasks.jsonl → session tasks.jsonl
├─ Verify plan-note.md consistency
├─ Detect conflicts
├─ Generate plan.md summary
@@ -685,7 +738,7 @@ User resumes: TASK="same task"
1. **Review Plan Note**: Check plan-note.md between domains to verify progress
2. **Verify Independence**: Ensure sub-domains are truly independent and have minimal overlap
3. **Check Dependencies**: Cross-domain dependencies should be documented explicitly
4. **Inspect Details**: Review `domains/{domain}/plan.json` for specifics when needed
4. **Inspect Details**: Review `domains/{domain}/tasks.jsonl` for specifics when needed
5. **Consistent Format**: Follow task summary format strictly across all domains
6. **TASK ID Isolation**: Use pre-assigned non-overlapping ranges to prevent ID conflicts

---

@@ -0,0 +1,457 @@
---
name: plan-converter
description: Convert any planning/analysis/brainstorm output to unified JSONL task format. Supports roadmap.jsonl, plan.json, plan-note.md, conclusions.json, synthesis.json.
argument-hint: "<input-file> [-o <output-file>]"
---
# Plan Converter
## Overview
Converts any planning artifact to **unified JSONL task format** — the single standard consumed by `unified-execute-with-file`.
```bash
# Auto-detect format, output to same directory
/codex:plan-converter ".workflow/.req-plan/RPLAN-auth-2025-01-21/roadmap.jsonl"
# Specify output path
/codex:plan-converter ".workflow/.planning/CPLAN-xxx/plan-note.md" -o tasks.jsonl
# Convert brainstorm synthesis
/codex:plan-converter ".workflow/.brainstorm/BS-xxx/synthesis.json"
```
**Supported inputs**: roadmap.jsonl, tasks.jsonl (per-domain), plan-note.md, conclusions.json, synthesis.json
**Output**: Unified JSONL (`tasks.jsonl` in same directory, or specified `-o` path)
## Unified JSONL Schema
One self-contained task record per line:
```
┌─ IDENTITY (required) ──────────────────────────────────────┐
│ id            string    Task ID (L0/T1/TASK-001)           │
│ title         string    Task title                         │
│ description   string    Goal + rationale                   │
├─ CLASSIFICATION (optional) ────────────────────────────────┤
│ type          enum      infrastructure|feature|enhancement │
│                         |fix|refactor|testing              │
│ priority      enum      high|medium|low                    │
│ effort        enum      small|medium|large                 │
├─ SCOPE (optional) ─────────────────────────────────────────┤
│ scope         string|[] Coverage                           │
│ excludes      string[]  Explicit exclusions                │
├─ DEPENDENCIES (depends_on required) ───────────────────────┤
│ depends_on    string[]  Dependency task IDs; [] if none    │
│ parallel_group number   Parallel group (same group may     │
│                         run in parallel)                   │
│ inputs        string[]  Artifacts consumed                 │
│ outputs       string[]  Artifacts produced                 │
├─ CONVERGENCE (required) ───────────────────────────────────┤
│ convergence.criteria     string[] Testable completion      │
│                                   conditions               │
│ convergence.verification string   Executable verification  │
│                                   steps                    │
│ convergence.definition_of_done string Business-language    │
│                                   completion statement     │
├─ FILES (optional, progressively detailed) ─────────────────┤
│ files[].path          string   File path (required*)       │
│ files[].action        enum     modify|create|delete        │
│ files[].changes       string[] Change descriptions         │
│ files[].conflict_risk enum     low|medium|high             │
├─ CONTEXT (optional) ───────────────────────────────────────┤
│ source.tool        string   Producing tool name            │
│ source.session_id  string   Source session                 │
│ source.original_id string   Original ID before conversion  │
│ evidence           any[]    Supporting evidence            │
│ risk_items         string[] Risk items                     │
├─ EXECUTION (filled at run time; absent at planning) ───────┤
│ _execution.status      enum   pending|in_progress|         │
│                               completed|failed|skipped     │
│ _execution.executed_at string ISO timestamp                │
│ _execution.result      object { success, files_modified,   │
│                                 summary, error,            │
│                                 convergence_verified[] }   │
└────────────────────────────────────────────────────────────┘
```
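A minimal record satisfying the required groups (values are illustrative only):

```javascript
// IDENTITY + depends_on + CONVERGENCE are the required groups.
const minimalTask = {
  id: 'TASK-001',
  title: 'Add request logging middleware',
  description: 'Improve observability of API failures',
  depends_on: [],
  convergence: {
    criteria: ['Every request logs method, path, and status code'],
    verification: 'jest --testPathPattern=logging.test.ts',
    definition_of_done: 'Operators can trace any failed request from the logs'
  }
}
console.log(JSON.stringify(minimalTask)) // one JSONL line
```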
## Target Input
**$ARGUMENTS**
## Execution Process
```
Step 0: Parse arguments, resolve input path
Step 1: Detect input format
Step 2: Parse input → extract raw records
Step 3: Transform → unified JSONL records
Step 4: Validate convergence quality
Step 5: Write output + display summary
```
## Implementation
### Step 0: Parse Arguments
```javascript
const args = $ARGUMENTS
const outputMatch = args.match(/-o\s+(\S+)/)
const outputPath = outputMatch ? outputMatch[1] : null
const inputPath = args.replace(/-o\s+\S+/, '').trim()
// Resolve absolute path
const projectRoot = Bash(`git rev-parse --show-toplevel 2>/dev/null || pwd`).trim()
const resolvedInput = path.isAbsolute(inputPath) ? inputPath : `${projectRoot}/${inputPath}`
```
### Step 1: Detect Format
```javascript
const filename = path.basename(resolvedInput)
const content = Read(resolvedInput)
function detectFormat(filename, content) {
if (filename === 'roadmap.jsonl') return 'roadmap-jsonl'
if (filename === 'tasks.jsonl') return 'tasks-jsonl' // already unified or per-domain
if (filename === 'plan-note.md') return 'plan-note-md'
if (filename === 'conclusions.json') return 'conclusions-json'
if (filename === 'synthesis.json') return 'synthesis-json'
if (filename.endsWith('.jsonl')) return 'generic-jsonl'
if (filename.endsWith('.json')) {
const parsed = JSON.parse(content)
if (parsed.top_ideas) return 'synthesis-json'
if (parsed.recommendations && parsed.key_conclusions) return 'conclusions-json'
if (parsed.tasks && parsed.focus_area) return 'domain-plan-json'
return 'unknown-json'
}
if (filename.endsWith('.md')) return 'plan-note-md'
return 'unknown'
}
```
**Format Detection Table**:
| Filename | Format ID | Source Tool |
|----------|-----------|------------|
| `roadmap.jsonl` | roadmap-jsonl | req-plan-with-file |
| `tasks.jsonl` (per-domain) | tasks-jsonl | collaborative-plan-with-file |
| `plan-note.md` | plan-note-md | collaborative-plan-with-file |
| `conclusions.json` | conclusions-json | analyze-with-file |
| `synthesis.json` | synthesis-json | brainstorm-with-file |
### Step 2: Parse Input
#### roadmap-jsonl (req-plan-with-file)
```javascript
function parseRoadmapJsonl(content) {
return content.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line))
}
// Records have: id (L0/T1), name/title, goal/scope, convergence, depends_on, etc.
```
#### plan-note-md (collaborative-plan-with-file)
```javascript
function parsePlanNoteMd(content) {
const tasks = []
// 1. Extract YAML frontmatter for session metadata
const frontmatter = extractYamlFrontmatter(content)
// 2. Find all "## 任务池 - {Domain}" sections
const taskPoolSections = content.match(/## 任务池 - .+/g) || []
// 3. For each section, extract tasks matching:
// ### TASK-{ID}: {Title} [{domain}]
// - **状态**: pending
// - **复杂度**: Medium
// - **依赖**: TASK-xxx
// - **范围**: ...
// - **修改点**: `file:location`: change summary
// - **冲突风险**: Low
taskPoolSections.forEach(section => {
const sectionContent = extractSectionContent(content, section)
const taskPattern = /### (TASK-\d+):\s+(.+?)\s+\[(.+?)\]/g
let match
while ((match = taskPattern.exec(sectionContent)) !== null) {
const [_, id, title, domain] = match
const taskBlock = extractTaskBlock(sectionContent, match.index)
tasks.push({
id, title, domain,
...parseTaskDetails(taskBlock)
})
}
})
return { tasks, frontmatter }
}
```
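The helpers `extractYamlFrontmatter`, `extractSectionContent`, and `extractTaskBlock` are assumed here; a possible sketch of `extractSectionContent`, slicing from the section heading to the next `##` heading:

```javascript
// Assumed helper: return a section's body given its heading line.
function extractSectionContent(content, heading) {
  const start = content.indexOf(heading)
  if (start === -1) return ''
  const body = content.slice(start + heading.length)
  const next = body.search(/\n## /) // next H2 ends the section
  return next === -1 ? body : body.slice(0, next)
}
```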
#### conclusions-json (analyze-with-file)
```javascript
function parseConclusionsJson(content) {
const conclusions = JSON.parse(content)
// Extract from: conclusions.recommendations[]
// { action, rationale, priority }
// Also available: conclusions.key_conclusions[]
return conclusions
}
```
#### synthesis-json (brainstorm-with-file)
```javascript
function parseSynthesisJson(content) {
const synthesis = JSON.parse(content)
// Extract from: synthesis.top_ideas[]
// { title, description, score, feasibility, next_steps, key_strengths, main_challenges }
// Also available: synthesis.recommendations
return synthesis
}
```
### Step 3: Transform to Unified Records
#### roadmap-jsonl → unified
```javascript
function transformRoadmap(records, sessionId) {
return records.map(rec => {
// roadmap.jsonl now uses unified field names (title, description, source)
// Passthrough is mostly direct
return {
id: rec.id,
title: rec.title,
description: rec.description,
type: rec.type || 'feature',
effort: rec.effort,
scope: rec.scope,
excludes: rec.excludes,
depends_on: rec.depends_on || [],
parallel_group: rec.parallel_group,
inputs: rec.inputs,
outputs: rec.outputs,
convergence: rec.convergence, // already unified format
risk_items: rec.risk_items,
source: rec.source || {
tool: 'req-plan-with-file',
session_id: sessionId,
original_id: rec.id
}
}
})
}
```
#### plan-note-md → unified
```javascript
function transformPlanNote(parsed) {
const { tasks, frontmatter } = parsed
return tasks.map(task => ({
id: task.id,
title: task.title,
description: task.scope || task.title,
type: task.type || inferTypeFromTitle(task.title),
priority: task.priority || inferPriorityFromEffort(task.effort),
effort: task.effort || 'medium',
scope: task.scope,
depends_on: task.depends_on || [],
convergence: task.convergence || generateConvergence(task), // plan-note now has convergence
files: task.files?.map(f => ({
path: f.path || f.file,
action: f.action || 'modify',
changes: f.changes || [f.change],
conflict_risk: f.conflict_risk
})),
source: {
tool: 'collaborative-plan-with-file',
session_id: frontmatter.session_id,
original_id: task.id
}
}))
}
// Generate convergence from task details when source lacks it (legacy fallback)
function generateConvergence(task) {
return {
criteria: [
// Derive testable conditions from scope and files
// e.g., "Modified files compile without errors"
// e.g., scope-derived: "API endpoint returns expected response"
],
verification: '// Derive from files — e.g., test commands',
definition_of_done: '// Derive from scope — business language summary'
}
}
```
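One way to make that legacy fallback concrete; the commands and wording below are placeholders, not rules fixed by the skill:

```javascript
// Illustrative fallback: derive checks from files[] and scope.
function generateConvergenceFallback(task) {
  const files = task.files || []
  return {
    criteria: files.length
      ? files.map(f => `${f.path} is ${f.action === 'create' ? 'created' : 'changed'} per the task scope`)
      : [`${task.title} is implemented as scoped`],
    verification: 'npm run build && npm test', // replace with project commands
    definition_of_done: task.scope || `${task.title} works for its intended users`
  }
}
```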
#### conclusions-json → unified
```javascript
function transformConclusions(conclusions) {
return conclusions.recommendations.map((rec, index) => ({
id: `TASK-${String(index + 1).padStart(3, '0')}`,
title: rec.action,
description: rec.rationale,
type: inferTypeFromAction(rec.action),
priority: rec.priority,
depends_on: [],
convergence: {
criteria: generateCriteriaFromAction(rec),
verification: generateVerificationFromAction(rec),
definition_of_done: generateDoDFromRationale(rec)
},
evidence: conclusions.key_conclusions.map(c => c.point),
source: {
tool: 'analyze-with-file',
session_id: conclusions.session_id
}
}))
}
function inferTypeFromAction(action) {
const lower = action.toLowerCase()
if (/fix|resolve|repair|修复/.test(lower)) return 'fix'
if (/refactor|restructure|extract|重构/.test(lower)) return 'refactor'
if (/add|implement|create|新增|实现/.test(lower)) return 'feature'
if (/improve|optimize|enhance|优化/.test(lower)) return 'enhancement'
if (/test|coverage|validate|测试/.test(lower)) return 'testing'
return 'feature'
}
```
#### synthesis-json → unified
```javascript
function transformSynthesis(synthesis) {
return synthesis.top_ideas
.filter(idea => idea.score >= 6) // Only viable ideas (score ≥ 6)
.map((idea, index) => ({
id: `IDEA-${String(index + 1).padStart(3, '0')}`,
title: idea.title,
description: idea.description,
type: 'feature',
priority: idea.score >= 8 ? 'high' : idea.score >= 6 ? 'medium' : 'low',
effort: idea.feasibility >= 4 ? 'small' : idea.feasibility >= 2 ? 'medium' : 'large',
depends_on: [],
convergence: {
criteria: idea.next_steps || [`${idea.title} implemented and functional`],
verification: 'Manual validation of feature functionality',
definition_of_done: idea.description
},
risk_items: idea.main_challenges || [],
source: {
tool: 'brainstorm-with-file',
session_id: synthesis.session_id,
original_id: `idea-${index + 1}`
}
}))
}
```
### Step 4: Validate Convergence Quality
All records must pass convergence quality checks before output.
```javascript
function validateConvergence(records) {
const vaguePatterns = /正常|正确|好|可以|没问题|works|fine|good|correct/i
const technicalPatterns = /compile|build|lint|npm|npx|jest|tsc|eslint/i
const issues = []
records.forEach(record => {
const c = record.convergence
if (!c) {
issues.push({ id: record.id, field: 'convergence', issue: 'Missing entirely' })
return
}
if (!c.criteria?.length) {
issues.push({ id: record.id, field: 'criteria', issue: 'Empty criteria array' })
}
c.criteria?.forEach((criterion, i) => {
if (vaguePatterns.test(criterion) && criterion.length < 15) {
issues.push({ id: record.id, field: `criteria[${i}]`, issue: `Too vague: "${criterion}"` })
}
})
if (!c.verification || c.verification.length < 10) {
issues.push({ id: record.id, field: 'verification', issue: 'Too short or missing' })
}
if (technicalPatterns.test(c.definition_of_done)) {
issues.push({ id: record.id, field: 'definition_of_done', issue: 'Should be business language' })
}
})
return issues
}
// Auto-fix strategy:
// | Issue | Fix |
// |----------------------|----------------------------------------------|
// | Missing convergence | Generate from title + description + files |
// | Vague criteria | Replace with specific condition from context |
// | Short verification | Expand with file-based test suggestion |
// | Technical DoD | Rewrite in business language |
```
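A sketch of the first auto-fix row (missing convergence); the remaining rows need context-specific rewrites and are left to the converter:

```javascript
// Illustrative: synthesize a convergence block when the source has none.
function autoFixConvergence(record) {
  if (!record.convergence || !record.convergence.criteria?.length) {
    record.convergence = {
      criteria: [`${record.title} is implemented and its description holds`],
      verification: 'Run the tests covering the files this task touches',
      definition_of_done: record.description || record.title
    }
  }
  return record
}
```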
### Step 5: Write Output & Summary
```javascript
// Determine output path
const outputFile = outputPath
|| `${path.dirname(resolvedInput)}/tasks.jsonl`
// Clean records: remove undefined/null optional fields
const cleanedRecords = records.map(rec => {
const clean = { ...rec }
Object.keys(clean).forEach(key => {
if (clean[key] === undefined || clean[key] === null) delete clean[key]
if (Array.isArray(clean[key]) && clean[key].length === 0 && key !== 'depends_on') delete clean[key]
})
return clean
})
// Write JSONL
const jsonlContent = cleanedRecords.map(r => JSON.stringify(r)).join('\n')
Write(outputFile, jsonlContent)
// Display summary
// | Source | Format | Records | Issues |
// |-----------------|-------------------|---------|--------|
// | roadmap.jsonl | roadmap-jsonl | 4 | 0 |
//
// Output: .workflow/.req-plan/RPLAN-xxx/tasks.jsonl
// Records: 4 tasks with convergence criteria
// Quality: All convergence checks passed
```
---
## Conversion Matrix
| Source | Source Tool | ID Pattern | Has Convergence | Has Files | Has Priority | Has Source |
|--------|-----------|------------|-----------------|-----------|--------------|-----------|
| roadmap.jsonl (progressive) | req-plan | L0-L3 | **Yes** | No | No | **Yes** |
| roadmap.jsonl (direct) | req-plan | T1-TN | **Yes** | No | No | **Yes** |
| tasks.jsonl (per-domain) | collaborative-plan | TASK-NNN | **Yes** | **Yes** (detailed) | Optional | **Yes** |
| plan-note.md | collaborative-plan | TASK-NNN | **Yes** (Generate if missing) | **Yes** (from 修改文件) | From effort | No |
| conclusions.json | analyze | TASK-NNN | **Generate** | No | **Yes** | No |
| synthesis.json | brainstorm | IDEA-NNN | **Generate** | No | From score | No |
**Legend**: Yes = source already has it, Generate = converter produces it, No = not available
## Error Handling
| Situation | Action |
|-----------|--------|
| Input file not found | Report error, suggest checking path |
| Unknown format | Report error, list supported formats |
| Empty input | Report error, no output file created |
| Convergence validation fails | Auto-fix where possible, report remaining issues |
| Partial parse failure | Convert valid records, report skipped items |
| Output file exists | Overwrite with warning message |
| plan-note.md has empty sections | Skip empty domains, report in summary |
---
**Now execute plan-converter for**: $ARGUMENTS

---

@@ -304,12 +304,13 @@ if (file_exists(`${sessionFolder}/exploration-codebase.json`)) {
```javascript
// Each layer must have:
// - id: L0, L1, L2, L3
// - name: MVP / Usable / Refined / Optimized
// - goal: what this layer achieves
// - title: "MVP" / "Usable" / "Refined" / "Optimized"
// - description: what this layer achieves (goal)
// - scope[]: features included
// - excludes[]: features explicitly deferred
// - convergence: { criteria[], verification, definition_of_done }
// - risk_items[], effort (small|medium|large), depends_on[]
// - source: { tool, session_id, original_id }
//
// Rules:
// - L0 (MVP) = self-contained closed loop, no dependencies
@@ -319,15 +320,16 @@ if (file_exists(`${sessionFolder}/exploration-codebase.json`)) {
const layers = [
{
id: "L0", name: "MVP",
goal: "...",
id: "L0", title: "MVP",
description: "...",
scope: ["..."], excludes: ["..."],
convergence: {
criteria: ["... (testable)"],
verification: "... (executable command or steps)",
definition_of_done: "... (business language)"
},
risk_items: [], effort: "medium", depends_on: []
risk_items: [], effort: "medium", depends_on: [],
source: { tool: "req-plan-with-file", session_id: sessionId, original_id: "L0" }
},
// L1, L2, ...
]
@@ -338,10 +340,11 @@ const layers = [
```javascript
// Each task must have:
// - id: T1, T2, ...
// - title, type (infrastructure|feature|enhancement|testing)
// - title, description, type (infrastructure|feature|enhancement|testing)
// - scope, inputs[], outputs[]
// - convergence: { criteria[], verification, definition_of_done }
// - depends_on[], parallel_group
// - source: { tool, session_id, original_id }
//
// Rules:
// - Inputs must come from preceding task outputs or existing resources
@@ -351,14 +354,15 @@ const layers = [
const tasks = [
{
id: "T1", title: "...", type: "infrastructure",
id: "T1", title: "...", description: "...", type: "infrastructure",
scope: "...", inputs: [], outputs: ["..."],
convergence: {
criteria: ["... (testable)"],
verification: "... (executable)",
definition_of_done: "... (business language)"
},
depends_on: [], parallel_group: 1
depends_on: [], parallel_group: 1,
source: { tool: "req-plan-with-file", session_id: sessionId, original_id: "T1" }
},
// T2, T3, ...
]
@@ -524,10 +528,10 @@ Display the decomposition as a table with convergence criteria, then run feedbac
```markdown
## Roadmap Overview
| Layer | Name | Goal | Scope | Effort | Dependencies |
|-------|------|------|-------|--------|--------------|
| L0 | MVP | ... | ... | medium | - |
| L1 | Usable | ... | ... | medium | L0 |
| Layer | Title | Description | Effort | Dependencies |
|-------|-------|-------------|--------|--------------|
| L0 | MVP | ... | medium | - |
| L1 | Usable | ... | medium | L0 |
### Convergence Criteria
**L0 - MVP**:
@@ -541,10 +545,10 @@ Display the decomposition as a table with convergence criteria, then run feedbac
```markdown
## Task Sequence
| Group | ID | Title | Type | Dependencies |
|-------|----|-------|------|--------------|
| 1 | T1 | ... | infrastructure | - |
| 2 | T2 | ... | feature | T1 |
| Group | ID | Title | Type | Description | Dependencies |
|-------|----|-------|------|-------------|--------------|
| 1 | T1 | ... | infrastructure | ... | - |
| 2 | T2 | ... | feature | ... | T1 |
### Convergence Criteria
**T1 - Establish Data Model**:
@@ -609,15 +613,15 @@ const roadmapMd = `# Requirement Roadmap
## Roadmap Overview
| Layer | Name | Goal | Effort | Dependencies |
|-------|------|------|--------|--------------|
${items.map(l => `| ${l.id} | ${l.name} | ${l.goal} | ${l.effort} | ${l.depends_on.length ? l.depends_on.join(', ') : '-'} |`).join('\n')}
| Layer | Title | Description | Effort | Dependencies |
|-------|-------|-------------|--------|--------------|
${items.map(l => `| ${l.id} | ${l.title} | ${l.description} | ${l.effort} | ${l.depends_on.length ? l.depends_on.join(', ') : '-'} |`).join('\n')}
## Layer Details
${items.map(l => `### ${l.id}: ${l.name}
${items.map(l => `### ${l.id}: ${l.title}
**Goal**: ${l.goal}
**Description**: ${l.description}
**Scope**: ${l.scope.join(', ')}
@@ -641,7 +645,7 @@ ${items.flatMap(l => l.risk_items.map(r => \`- **${l.id}**: ${r}\`)).join('\n')
Each layer can be executed independently:
\\\`\\\`\\\`bash
/workflow:lite-plan "${items[0]?.name}: ${items[0]?.scope.join(', ')}"
/workflow:lite-plan "${items[0]?.title}: ${items[0]?.scope.join(', ')}"
\\\`\\\`\\\`
Roadmap JSONL file: \\\`${sessionFolder}/roadmap.jsonl\\\`
@@ -665,9 +669,9 @@ const roadmapMd = `# Requirement Roadmap
## Task Sequence
| Group | ID | Title | Type | Dependencies |
|-------|----|-------|------|--------------|
${items.map(t => `| ${t.parallel_group} | ${t.id} | ${t.title} | ${t.type} | ${t.depends_on.length ? t.depends_on.join(', ') : '-'} |`).join('\n')}
| Group | ID | Title | Type | Description | Dependencies |
|-------|----|-------|------|-------------|--------------|
${items.map(t => `| ${t.parallel_group} | ${t.id} | ${t.title} | ${t.type} | ${t.description} | ${t.depends_on.length ? t.depends_on.join(', ') : '-'} |`).join('\n')}
## Task Details
@@ -675,6 +679,8 @@ ${items.map(t => `### ${t.id}: ${t.title}
**Type**: ${t.type} | **Parallel Group**: ${t.parallel_group}
**Description**: ${t.description}
**Scope**: ${t.scope}
**Inputs**: ${t.inputs.length ? t.inputs.join(', ') : 'None (starting task)'}
@@ -748,7 +754,7 @@ if (!autoYes) {
| `strategy-assessment.json` | 1 | Uncertainty analysis + mode recommendation + extracted goal/constraints/stakeholders/domain_keywords |
| `roadmap.md` (skeleton) | 1 | Initial skeleton with placeholders, finalized in Phase 4 |
| `exploration-codebase.json` | 2 | Codebase context: relevant modules, patterns, integration points (only when codebase exists) |
| `roadmap.jsonl` | 3 | One self-contained JSON record per line with convergence criteria |
| `roadmap.jsonl` | 3 | One self-contained JSON record per line with convergence criteria and source provenance |
| `roadmap.md` (final) | 4 | Human-readable roadmap with tabular display + convergence details, revised per user feedback |
## JSONL Schema
@@ -765,18 +771,18 @@ Each record's `convergence` object:
### Progressive Mode (one layer per line)
| Layer | Name | Typical Goal |
|-------|------|--------------|
| Layer | Title | Typical Description |
|-------|-------|---------------------|
| L0 | MVP | Minimum viable closed loop, core path end-to-end |
| L1 | Usable | Key user paths refined, basic error handling |
| L2 | Refined | Edge cases, performance, security hardening |
| L3 | Optimized | Advanced features, observability, operations |
**Schema**: `id, name, goal, scope[], excludes[], convergence{}, risk_items[], effort, depends_on[]`
**Schema**: `id, title, description, scope[], excludes[], convergence{}, risk_items[], effort, depends_on[], source{}`
```jsonl
{"id":"L0","name":"MVP","goal":"Minimum viable closed loop","scope":["User registration and login","Basic CRUD"],"excludes":["OAuth","2FA"],"convergence":{"criteria":["End-to-end register→login→operate flow works","Core API returns correct responses"],"verification":"curl/Postman manual testing or smoke test script","definition_of_done":"New user can complete register→login→perform one core operation"},"risk_items":["JWT library selection needs validation"],"effort":"medium","depends_on":[]}
{"id":"L1","name":"Usable","goal":"Complete key user paths","scope":["Password reset","Input validation","Error messages"],"excludes":["Audit logs","Rate limiting"],"convergence":{"criteria":["All form fields have frontend+backend validation","Password reset email can be sent and reset completed","Error scenarios show user-friendly messages"],"verification":"Unit tests cover validation logic + manual test of reset flow","definition_of_done":"Users have a clear recovery path when encountering input errors or forgotten passwords"},"risk_items":[],"effort":"medium","depends_on":["L0"]}
{"id":"L0","title":"MVP","description":"Minimum viable closed loop","scope":["User registration and login","Basic CRUD"],"excludes":["OAuth","2FA"],"convergence":{"criteria":["End-to-end register→login→operate flow works","Core API returns correct responses"],"verification":"curl/Postman manual testing or smoke test script","definition_of_done":"New user can complete register→login→perform one core operation"},"risk_items":["JWT library selection needs validation"],"effort":"medium","depends_on":[],"source":{"tool":"req-plan-with-file","session_id":"RPLAN-xxx","original_id":"L0"}}
{"id":"L1","title":"Usable","description":"Complete key user paths","scope":["Password reset","Input validation","Error messages"],"excludes":["Audit logs","Rate limiting"],"convergence":{"criteria":["All form fields have frontend+backend validation","Password reset email can be sent and reset completed","Error scenarios show user-friendly messages"],"verification":"Unit tests cover validation logic + manual test of reset flow","definition_of_done":"Users have a clear recovery path when encountering input errors or forgotten passwords"},"risk_items":[],"effort":"medium","depends_on":["L0"],"source":{"tool":"req-plan-with-file","session_id":"RPLAN-xxx","original_id":"L1"}}
```
**Constraints**: 2-4 layers, L0 must be a self-contained closed loop with no dependencies, each feature belongs to exactly ONE layer (no scope overlap).
@@ -790,11 +796,11 @@ Each record's `convergence` object:
| enhancement | Validation, error handling, edge cases |
| testing | Unit tests, integration tests, E2E |
**Schema**: `id, title, type, scope, inputs[], outputs[], convergence{}, depends_on[], parallel_group`
**Schema**: `id, title, description, type, scope, inputs[], outputs[], convergence{}, depends_on[], parallel_group, source{}`
```jsonl
{"id":"T1","title":"Establish data model","type":"infrastructure","scope":"DB schema + TypeScript types","inputs":[],"outputs":["schema.prisma","types/user.ts"],"convergence":{"criteria":["Migration executes without errors","TypeScript types compile successfully","Fields cover all business entities"],"verification":"npx prisma migrate dev && npx tsc --noEmit","definition_of_done":"Database schema migrates correctly, type definitions can be referenced by other modules"},"depends_on":[],"parallel_group":1}
{"id":"T2","title":"Implement core API","type":"feature","scope":"CRUD endpoints for User","inputs":["schema.prisma","types/user.ts"],"outputs":["routes/user.ts","controllers/user.ts"],"convergence":{"criteria":["GET/POST/PUT/DELETE return correct status codes","Request/response conforms to schema","No N+1 queries"],"verification":"jest --testPathPattern=user.test.ts","definition_of_done":"All User CRUD endpoints pass integration tests"},"depends_on":["T1"],"parallel_group":2}
{"id":"T1","title":"Establish data model","description":"Create database schema and TypeScript type definitions for all business entities","type":"infrastructure","scope":"DB schema + TypeScript types","inputs":[],"outputs":["schema.prisma","types/user.ts"],"convergence":{"criteria":["Migration executes without errors","TypeScript types compile successfully","Fields cover all business entities"],"verification":"npx prisma migrate dev && npx tsc --noEmit","definition_of_done":"Database schema migrates correctly, type definitions can be referenced by other modules"},"depends_on":[],"parallel_group":1,"source":{"tool":"req-plan-with-file","session_id":"RPLAN-xxx","original_id":"T1"}}
{"id":"T2","title":"Implement core API","description":"Build CRUD endpoints for User entity with proper validation and error handling","type":"feature","scope":"CRUD endpoints for User","inputs":["schema.prisma","types/user.ts"],"outputs":["routes/user.ts","controllers/user.ts"],"convergence":{"criteria":["GET/POST/PUT/DELETE return correct status codes","Request/response conforms to schema","No N+1 queries"],"verification":"jest --testPathPattern=user.test.ts","definition_of_done":"All User CRUD endpoints pass integration tests"},"depends_on":["T1"],"parallel_group":2,"source":{"tool":"req-plan-with-file","session_id":"RPLAN-xxx","original_id":"T2"}}
```
**Constraints**: Inputs must come from preceding task outputs or existing resources, tasks in same parallel_group must be truly independent, no circular dependencies.
@@ -822,24 +828,26 @@ When normal decomposition fails or produces empty results, use fallback template
```javascript
[
{
id: "L0", name: "MVP", goal: "Minimum viable closed loop",
id: "L0", title: "MVP", description: "Minimum viable closed loop",
scope: ["Core functionality"], excludes: ["Advanced features", "Optimization"],
convergence: {
criteria: ["Core path works end-to-end"],
verification: "Manual test of core flow",
definition_of_done: "User can complete one full core operation"
},
risk_items: ["Tech selection needs validation"], effort: "medium", depends_on: []
risk_items: ["Tech selection needs validation"], effort: "medium", depends_on: [],
source: { tool: "req-plan-with-file", session_id: sessionId, original_id: "L0" }
},
{
id: "L1", name: "Usable", goal: "Refine key user paths",
id: "L1", title: "Usable", description: "Refine key user paths",
scope: ["Error handling", "Input validation"], excludes: ["Performance optimization", "Monitoring"],
convergence: {
criteria: ["All user inputs validated", "Error scenarios show messages"],
verification: "Unit tests + manual error scenario testing",
definition_of_done: "Users have clear guidance and recovery paths when encountering problems"
},
risk_items: [], effort: "medium", depends_on: ["L0"]
risk_items: [], effort: "medium", depends_on: ["L0"],
source: { tool: "req-plan-with-file", session_id: sessionId, original_id: "L1" }
}
]
```
@@ -848,7 +856,8 @@ When normal decomposition fails or produces empty results, use fallback template
```javascript
[
{
id: "T1", title: "Infrastructure setup", type: "infrastructure",
id: "T1", title: "Infrastructure setup", description: "Project scaffolding and base configuration",
type: "infrastructure",
scope: "Project scaffolding and base configuration",
inputs: [], outputs: ["project-structure"],
convergence: {
@@ -856,10 +865,12 @@ When normal decomposition fails or produces empty results, use fallback template
verification: "npm run build (or equivalent build command)",
definition_of_done: "Project foundation ready for feature development"
},
depends_on: [], parallel_group: 1
depends_on: [], parallel_group: 1,
source: { tool: "req-plan-with-file", session_id: sessionId, original_id: "T1" }
},
{
id: "T2", title: "Core feature implementation", type: "feature",
id: "T2", title: "Core feature implementation", description: "Implement core business logic",
type: "feature",
scope: "Core business logic",
inputs: ["project-structure"], outputs: ["core-module"],
convergence: {
@@ -867,7 +878,8 @@ When normal decomposition fails or produces empty results, use fallback template
verification: "Run core feature tests",
definition_of_done: "Core business functionality works as expected"
},
depends_on: ["T1"], parallel_group: 2
depends_on: ["T1"], parallel_group: 2,
source: { tool: "req-plan-with-file", session_id: sessionId, original_id: "T2" }
}
]
```

One file's diff is suppressed because it is too large.