feat: Enhance parallel-dev-cycle with prep-package integration

- Added argument parsing and prep package loading in session initialization.
- Implemented validation checks for prep-package.json integrity.
- Integrated prep package data into cycle state, including task refinement and auto-iteration settings.
- Updated agent execution to utilize source references and focus directives from prep package.
- Modified context gathering and test context generation to reference active workflow paths.
- Introduced a new interactive prompt for pre-flight checklist and task quality assessment.
- Created a detailed schema and integration specification for prep-package.json.
- Ensured all relevant phases validate and utilize the prep package effectively.
Author: catlog22
Date: 2026-02-09 14:07:52 +08:00
Parent: afd9729873
Commit: 113bee5ef9
13 changed files with 801 additions and 42 deletions

View File

@@ -302,11 +302,11 @@ if (!autoYes) {
options: [
{ label: "Start Execution", description: "Execute all tasks serially" },
{ label: "Adjust Tasks", description: "Modify, reorder, or remove tasks" },
{ label: "Cancel", description: "Cancel execution, keep execution-plan.jsonl" }
{ label: "Cancel", description: "Cancel execution, keep tasks.jsonl" }
]
}]
})
// "Adjust Tasks": display task list, user deselects/reorders, regenerate execution-plan.jsonl
// "Adjust Tasks": display task list, user deselects/reorders, regenerate tasks.jsonl
// "Cancel": end workflow, keep artifacts
}
```
@@ -321,7 +321,7 @@ Execute tasks one by one directly using tools (Read, Edit, Write, Grep, Glob, Ba
```
For each taskId in executionOrder:
├─ Load task from execution-plan.jsonl
├─ Load task from tasks.jsonl
├─ Check dependencies satisfied (all deps completed)
├─ Record START event to execution-events.md
├─ Execute task directly:

View File

@@ -82,7 +82,7 @@ Step 4: Synthesis & Conclusion
└─ Offer options: quick execute / create issue / generate task / export / done
Step 5: Quick Execute (Optional - user selects)
├─ Convert conclusions.recommendations → execution-plan.jsonl (with convergence)
├─ Convert conclusions.recommendations → tasks.jsonl (unified JSONL with convergence)
├─ Pre-execution analysis (dependencies, file conflicts, execution order)
├─ User confirmation
├─ Direct inline execution (Read/Edit/Write/Grep/Glob/Bash)
@@ -581,13 +581,13 @@ if (!autoYes) {
**Key Principle**: No additional exploration — analysis phase has already collected all necessary context. No CLI delegation — execute directly using tools.
**Flow**: `conclusions.json → execution-plan.jsonl → User Confirmation → Direct Inline Execution → execution.md + execution-events.md`
**Flow**: `conclusions.json → tasks.jsonl → User Confirmation → Direct Inline Execution → execution.md + execution-events.md`
**Full specification**: See `EXECUTE.md` for detailed step-by-step implementation.
##### Step 5.1: Generate execution-plan.jsonl
##### Step 5.1: Generate tasks.jsonl
Convert `conclusions.recommendations` into JSONL execution list. Each line is a self-contained task with convergence criteria:
Convert `conclusions.recommendations` into unified JSONL task format. Each line is a self-contained task with convergence criteria:
```javascript
const conclusions = JSON.parse(Read(`${sessionFolder}/conclusions.json`))
@@ -603,22 +603,28 @@ const tasks = conclusions.recommendations.map((rec, index) => ({
description: rec.rationale,
type: inferTaskType(rec), // fix | refactor | feature | enhancement | testing
priority: rec.priority,
files_to_modify: extractFilesFromEvidence(rec, explorations),
effort: inferEffort(rec), // small | medium | large
files: extractFilesFromEvidence(rec, explorations).map(f => ({
path: f,
action: 'modify'
})),
depends_on: [],
convergence: {
criteria: generateCriteria(rec), // Testable conditions
verification: generateVerification(rec), // Executable command or steps
definition_of_done: generateDoD(rec) // Business language
},
context: {
source_conclusions: conclusions.key_conclusions,
evidence: rec.evidence || []
evidence: rec.evidence || [],
source: {
tool: 'analyze-with-file',
session_id: sessionId,
original_id: `TASK-${String(index + 1).padStart(3, '0')}`
}
}))
// Validate convergence quality (same as req-plan-with-file)
// Write one task per line
Write(`${sessionFolder}/execution-plan.jsonl`, tasks.map(t => JSON.stringify(t)).join('\n'))
Write(`${sessionFolder}/tasks.jsonl`, tasks.map(t => JSON.stringify(t)).join('\n'))
```
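Because each line of `tasks.jsonl` is a self-contained JSON object, the file can be read back line by line without a streaming parser. A minimal sketch (the helper name and sample task fields are illustrative, following the schema above):

```javascript
// Parse a tasks.jsonl payload back into task objects, skipping blank lines.
// Sample content is hypothetical; field names follow the schema above.
function parseTasksJsonl(content) {
  return content
    .split('\n')
    .filter(line => line.trim().length > 0)
    .map(line => JSON.parse(line))
}

const sample = [
  JSON.stringify({ id: 'TASK-001', type: 'fix', priority: 'high', depends_on: [] }),
  JSON.stringify({ id: 'TASK-002', type: 'feature', priority: 'medium', depends_on: ['TASK-001'] })
].join('\n')

const tasks = parseTasksJsonl(sample)
console.log(tasks.length)        // 2
console.log(tasks[1].depends_on) // [ 'TASK-001' ]
```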
##### Step 5.2: Pre-Execution Analysis
@@ -641,7 +647,7 @@ if (!autoYes) {
options: [
{ label: "Start Execution", description: "Execute all tasks serially" },
{ label: "Adjust Tasks", description: "Modify, reorder, or remove tasks" },
{ label: "Cancel", description: "Cancel execution, keep execution-plan.jsonl" }
{ label: "Cancel", description: "Cancel execution, keep tasks.jsonl" }
]
}]
})
@@ -664,7 +670,7 @@ For each task in execution order:
- Update `execution.md` with final summary (statistics, task results table)
- Finalize `execution-events.md` with session footer
- Update `execution-plan.jsonl` with execution results per task
- Update `tasks.jsonl` with `_execution` state per task
```javascript
if (!autoYes) {
@@ -685,7 +691,7 @@ if (!autoYes) {
```
**Success Criteria**:
- `execution-plan.jsonl` generated with convergence criteria per task
- `tasks.jsonl` generated with convergence criteria and source provenance per task
- `execution.md` contains plan overview, task table, pre-execution analysis, final summary
- `execution-events.md` contains chronological event stream with convergence verification
- All tasks executed (or explicitly skipped) via direct inline execution
@@ -704,7 +710,7 @@ if (!autoYes) {
├── explorations.json # Phase 2: Single perspective aggregated findings
├── perspectives.json # Phase 2: Multi-perspective findings with synthesis
├── conclusions.json # Phase 4: Final synthesis with recommendations
├── execution-plan.jsonl # Phase 5: JSONL execution list with convergence (if quick execute)
├── tasks.jsonl # Phase 5: Unified JSONL with convergence + source (if quick execute)
├── execution.md # Phase 5: Execution overview + task table + summary (if quick execute)
└── execution-events.md # Phase 5: Chronological event log (if quick execute)
```
@@ -717,7 +723,7 @@ if (!autoYes) {
| `explorations.json` | 2 | Single perspective aggregated findings |
| `perspectives.json` | 2 | Multi-perspective findings with cross-perspective synthesis |
| `conclusions.json` | 4 | Final synthesis: conclusions, recommendations, open questions |
| `execution-plan.jsonl` | 5 | JSONL execution list from recommendations, each line with convergence criteria |
| `tasks.jsonl` | 5 | Unified JSONL from recommendations, each line with convergence criteria and source provenance |
| `execution.md` | 5 | Execution overview: plan source, task table, pre-execution analysis, final summary |
| `execution-events.md` | 5 | Chronological event stream with task details and convergence verification |
@@ -861,7 +867,7 @@ Remaining questions or areas for investigation
| Session folder conflict | Append timestamp suffix | Create unique folder and continue |
| Quick execute: task fails | Record failure in execution-events.md | User can retry, skip, or abort |
| Quick execute: verification fails | Mark criterion as unverified, continue | Note in events, manual check |
| Quick execute: no recommendations | Cannot generate execution-plan.jsonl | Suggest using lite-plan instead |
| Quick execute: no recommendations | Cannot generate tasks.jsonl | Suggest using lite-plan instead |
## Best Practices

View File

@@ -74,6 +74,15 @@ Each agent **maintains one main document** (e.g., requirements.md, plan.json, im
When `--auto`: Run all phases sequentially without user confirmation between iterations. Use recommended defaults for all decisions. Automatically continue iteration loop until tests pass or max iterations reached.
## Prep Package Integration
When `prep-package.json` exists at `{projectRoot}/.workflow/.cycle/prep-package.json`, Phase 1 consumes it to:
- Use refined task description instead of raw TASK
- Apply auto-iteration config (convergence criteria, phase gates)
- Inject per-iteration agent focus directives (0→1 vs 1→100)
Prep packages are generated by the interactive prompt `/prompts:prep-cycle`. See [phases/00-prep-checklist.md](phases/00-prep-checklist.md) for schema.
## Execution Flow
```

View File

@@ -0,0 +1,191 @@
# Prep Package Schema & Integration Spec
Schema definition for `prep-package.json` and integration points with the parallel-dev-cycle skill.
## File Location
```
{projectRoot}/.workflow/.cycle/prep-package.json
```
Generated by: `/prompts:prep-cycle` (interactive prompt)
Consumed by: Phase 1 (Session Initialization)
## JSON Schema
```json
{
"version": "1.0.0",
"generated_at": "ISO8601",
"prep_status": "ready | needs_refinement | blocked",
"environment": {
"project_root": "/path/to/project",
"prerequisites": {
"required_passed": true,
"recommended_passed": true,
"warnings": ["string"]
},
"tech_stack": "string (e.g. Express.js + TypeORM + PostgreSQL)",
"test_framework": "string (e.g. jest, vitest, pytest)",
"has_project_tech": true,
"has_project_guidelines": true
},
"task": {
"original": "raw user input",
"refined": "enhanced task description with all 5 dimensions",
"quality_score": 8,
"dimensions": {
"objective": { "score": 2, "value": "..." },
"success_criteria": { "score": 2, "value": "..." },
"scope": { "score": 2, "value": "..." },
"constraints": { "score": 1, "value": "..." },
"context": { "score": 1, "value": "..." }
},
"source_refs": [
{
"path": "docs/prd.md",
"type": "local_file | url | auto_detected",
"status": "verified | linked | not_found",
"preview": "first ~20 lines (local_file only)"
}
]
},
"auto_iteration": {
"enabled": true,
"no_confirmation": true,
"max_iterations": 5,
"timeout_per_iteration_ms": 1800000,
"convergence": {
"test_pass_rate": 90,
"coverage": 80,
"max_critical_bugs": 0,
"max_open_issues": 3
},
"phase_gates": {
"zero_to_one": {
"iterations": [1, 2],
"exit_criteria": {
"code_compiles": true,
"core_test_passes": true,
"min_requirements_implemented": 1
}
},
"one_to_hundred": {
"iterations": [3, 4, 5],
"exit_criteria": {
"test_pass_rate": 90,
"coverage": 80,
"critical_bugs": 0
}
}
},
"agent_focus": {
"zero_to_one": {
"ra": "core_requirements_only",
"ep": "minimal_viable_architecture",
"cd": "happy_path_first",
"vas": "smoke_tests_only"
},
"one_to_hundred": {
"ra": "full_requirements_with_nfr",
"ep": "refined_architecture_with_risks",
"cd": "complete_implementation_with_error_handling",
"vas": "full_test_suite_with_coverage"
}
}
}
}
```
## Phase 1 Integration (Consume & Check)
Phase 1 runs **6 validation checks** against prep-package.json. The package is loaded only if all of them pass; any single failure falls back to default behavior:
| # | Check | Condition | On Failure |
|---|-------|-----------|------------|
| 1 | prep_status | `=== "ready"` | Skip prep |
| 2 | project_root | Matches current projectRoot | Skip prep (guards against wrong project) |
| 3 | quality_score | `>= 6` | Skip prep (task quality below threshold) |
| 4 | Freshness | generated_at within 24 hours | Skip prep (may be stale) |
| 5 | Required fields | task.refined, convergence, phase_gates, agent_focus all present | Skip prep |
| 6 | Convergence values | test_pass_rate/coverage are numbers in 0-100 | Skip prep |
```javascript
// In 01-session-init.md, Step 1.1:
const prepPath = `${projectRoot}/.workflow/.cycle/prep-package.json`
if (fs.existsSync(prepPath)) {
const raw = JSON.parse(Read(prepPath))
const checks = validatePrepPackage(raw, projectRoot)
if (checks.valid) {
prepPackage = raw
task = prepPackage.task.refined
// Inject into state:
state.convergence = prepPackage.auto_iteration.convergence
state.phase_gates = prepPackage.auto_iteration.phase_gates
state.agent_focus = prepPackage.auto_iteration.agent_focus
state.max_iterations = prepPackage.auto_iteration.max_iterations
} else {
console.warn('Prep package validation failed, using defaults')
// prepPackage remains null → no convergence/phase_gates/agent_focus
}
}
```
## Phase 2 Integration (Agent Focus Directives)
```javascript
// Before spawning each agent, append focus directive:
function getAgentFocusDirective(agentName, state) {
if (!state.phase_gates) return ""
const iteration = state.current_iteration
const isZeroToOne = state.phase_gates.zero_to_one.iterations.includes(iteration)
const focus = isZeroToOne
? state.agent_focus.zero_to_one[agentName]
: state.agent_focus.one_to_hundred[agentName]
const directives = {
core_requirements_only: "Focus ONLY on core functional requirements. Skip NFRs and edge cases.",
minimal_viable_architecture: "Design the simplest working architecture. Skip optimization.",
happy_path_first: "Implement ONLY the happy path. Skip error handling and edge cases.",
smoke_tests_only: "Run smoke tests only. Skip coverage analysis and exhaustive validation.",
full_requirements_with_nfr: "Complete requirements including NFRs, edge cases, security.",
refined_architecture_with_risks: "Refine architecture with risk mitigation and scalability.",
complete_implementation_with_error_handling: "Complete all tasks with error handling and validation.",
full_test_suite_with_coverage: "Full test suite with coverage report and quality audit."
}
return `\n## FOCUS DIRECTIVE (${isZeroToOne ? '0→1' : '1→100'})\n${directives[focus] || ''}\n`
}
```
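Given the state injected in Phase 1, the directive lookup above can be exercised on its own. A self-contained sketch (the phase-gate selection is restated compactly for illustration; the state values are hypothetical):

```javascript
// Pick the focus key for an agent based on which phase gate covers
// the current iteration (compact restatement for illustration).
function resolveFocus(state, agentName) {
  const zeroToOne = state.phase_gates.zero_to_one.iterations.includes(state.current_iteration)
  const phase = zeroToOne ? 'zero_to_one' : 'one_to_hundred'
  return { phase, focus: state.agent_focus[phase][agentName] }
}

const state = {
  current_iteration: 1,
  phase_gates: { zero_to_one: { iterations: [1, 2] }, one_to_hundred: { iterations: [3, 4, 5] } },
  agent_focus: {
    zero_to_one: { cd: 'happy_path_first' },
    one_to_hundred: { cd: 'complete_implementation_with_error_handling' }
  }
}
console.log(resolveFocus(state, 'cd')) // { phase: 'zero_to_one', focus: 'happy_path_first' }
```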
## Phase 3 Integration (Convergence Evaluation)
```javascript
// In 03-result-aggregation.md, Step 3.4:
function evaluateConvergence(parsedResults, state) {
if (!state.phase_gates) {
// No prep package: use default issue detection
return { converged: !parsedResults.vas.issues?.length, phase: "default" }
}
const iteration = state.current_iteration
const isZeroToOne = state.phase_gates.zero_to_one.iterations.includes(iteration)
if (isZeroToOne) {
return {
converged: parsedResults.cd.status !== 'failed'
&& (parsedResults.vas.test_pass_rate > 0 || parsedResults.cd.tests_passing),
phase: "0→1"
}
}
const conv = state.convergence
return {
converged: (parsedResults.vas.test_pass_rate || 0) >= conv.test_pass_rate
&& (parsedResults.vas.coverage || 0) >= conv.coverage
&& (parsedResults.vas.critical_issues || 0) <= conv.max_critical_bugs,
phase: "1→100"
}
}
```
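The 1→100 branch reduces to three threshold comparisons, each defaulting a missing metric to 0 so absent data never passes. A self-contained restatement for illustration (sample numbers are hypothetical):

```javascript
// Minimal restatement of the 1→100 convergence check: all three
// thresholds must hold, with missing metrics treated as 0.
function meetsConvergence(vas, conv) {
  return (vas.test_pass_rate || 0) >= conv.test_pass_rate
    && (vas.coverage || 0) >= conv.coverage
    && (vas.critical_issues || 0) <= conv.max_critical_bugs
}

const conv = { test_pass_rate: 90, coverage: 80, max_critical_bugs: 0 }
console.log(meetsConvergence({ test_pass_rate: 95, coverage: 82, critical_issues: 0 }, conv)) // true
console.log(meetsConvergence({ test_pass_rate: 95, coverage: 75, critical_issues: 0 }, conv)) // false
```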

View File

@@ -12,7 +12,7 @@ Create or resume a development cycle, initialize state file and directory struct
## Execution
### Step 1.1: Parse Arguments
### Step 1.1: Parse Arguments & Load Prep Package
```javascript
let { cycleId: existingCycleId, task, mode = 'interactive', extension } = options // let, not const: task is reassigned when a prep package is loaded
@@ -22,6 +22,102 @@ if (!existingCycleId && !task) {
console.error('Either --cycle-id or task description is required')
return { status: 'error', message: 'Missing cycleId or task' }
}
// ── Prep Package: Detect → Validate → Consume ──
let prepPackage = null
const prepPath = `${projectRoot}/.workflow/.cycle/prep-package.json`
if (fs.existsSync(prepPath)) {
const raw = JSON.parse(Read(prepPath))
const checks = validatePrepPackage(raw, projectRoot)
if (checks.valid) {
prepPackage = raw
task = prepPackage.task.refined
console.log(`✓ Prep package loaded: score=${prepPackage.task.quality_score}/10, auto=${prepPackage.auto_iteration.enabled}`)
console.log(` Checks passed: ${checks.passed.join(', ')}`)
} else {
console.warn(`⚠ Prep package found but failed validation:`)
checks.failures.forEach(f => console.warn(`  - ${f}`))
console.warn(` → Falling back to default behavior (prep-package ignored)`)
prepPackage = null
}
}
/**
* Validate prep-package.json integrity before consumption.
* Returns { valid: bool, passed: string[], failures: string[] }
*/
function validatePrepPackage(prep, projectRoot) {
const passed = []
const failures = []
// Check 1: prep_status must be "ready"
if (prep.prep_status === 'ready') {
passed.push('status=ready')
} else {
failures.push(`prep_status is "${prep.prep_status}", expected "ready"`)
}
// Check 2: project_root must match current project
if (prep.environment?.project_root === projectRoot) {
passed.push('project_root match')
} else {
failures.push(`project_root mismatch: prep="${prep.environment?.project_root}", current="${projectRoot}"`)
}
// Check 3: quality_score must be >= 6
if ((prep.task?.quality_score || 0) >= 6) {
passed.push(`quality=${prep.task.quality_score}/10`)
} else {
failures.push(`quality_score ${prep.task?.quality_score || 0} < 6 minimum`)
}
// Check 4: generated_at must be within 24 hours
const generatedAt = new Date(prep.generated_at)
const hoursSince = (Date.now() - generatedAt.getTime()) / (1000 * 60 * 60)
if (hoursSince <= 24) {
passed.push(`age=${Math.round(hoursSince)}h`)
} else {
failures.push(`prep-package is ${Math.round(hoursSince)}h old (max 24h), may be stale`)
}
// Check 5: required fields exist
const requiredFields = [
'task.refined',
'auto_iteration.convergence.test_pass_rate',
'auto_iteration.convergence.coverage',
'auto_iteration.phase_gates.zero_to_one',
'auto_iteration.phase_gates.one_to_hundred',
'auto_iteration.agent_focus.zero_to_one',
'auto_iteration.agent_focus.one_to_hundred'
]
const missing = requiredFields.filter(path => {
const val = path.split('.').reduce((obj, key) => obj?.[key], prep)
return val === undefined || val === null
})
if (missing.length === 0) {
passed.push('fields complete')
} else {
failures.push(`missing fields: ${missing.join(', ')}`)
}
// Check 6: convergence values are valid numbers
const conv = prep.auto_iteration?.convergence
if (conv && typeof conv.test_pass_rate === 'number' && typeof conv.coverage === 'number'
&& conv.test_pass_rate > 0 && conv.test_pass_rate <= 100
&& conv.coverage > 0 && conv.coverage <= 100) {
passed.push(`convergence valid (test≥${conv.test_pass_rate}%, cov≥${conv.coverage}%)`)
} else {
failures.push(`convergence values invalid: test_pass_rate=${conv?.test_pass_rate}, coverage=${conv?.coverage}`)
}
return {
valid: failures.length === 0,
passed,
failures
}
}
```
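The dotted-path lookup used by Check 5 is the subtlest part of the validator and can be exercised in isolation. A minimal sketch with a hypothetical partial package (the helper name is illustrative):

```javascript
// Resolve a dotted path like "auto_iteration.convergence.coverage"
// against a nested object; returns undefined if any segment is missing.
function getPath(obj, path) {
  return path.split('.').reduce((o, key) => o?.[key], obj)
}

const prep = {
  task: { refined: 'Build login flow' },
  auto_iteration: { convergence: { coverage: 80 } }
}
console.log(getPath(prep, 'auto_iteration.convergence.coverage'))  // 80
console.log(getPath(prep, 'auto_iteration.phase_gates.zero_to_one')) // undefined
```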
### Step 1.2: Utility Functions
@@ -73,7 +169,7 @@ function createCycleState(cycleId, taskDescription) {
cycle_id: cycleId,
title: taskDescription.substring(0, 100),
description: taskDescription,
max_iterations: 5,
max_iterations: prepPackage?.auto_iteration?.max_iterations || 5,
status: 'running',
created_at: now,
updated_at: now,
@@ -96,7 +192,13 @@ function createCycleState(cycleId, taskDescription) {
exploration: null,
plan: null,
changes: [],
test_results: null
test_results: null,
// Prep package integration (from /prompts:prep-cycle)
convergence: prepPackage?.auto_iteration?.convergence || null,
phase_gates: prepPackage?.auto_iteration?.phase_gates || null,
agent_focus: prepPackage?.auto_iteration?.agent_focus || null,
source_refs: prepPackage?.task?.source_refs || null
}
Write(stateFile, JSON.stringify(state, null, 2))

View File

@@ -27,6 +27,31 @@ Each agent reads its detailed role definition at execution time:
```javascript
function spawnRAAgent(cycleId, state, progressDir) {
// Build source references section from prep-package
const sourceRefsSection = (state.source_refs && state.source_refs.length > 0)
? `## REQUIREMENT SOURCE DOCUMENTS
Read these original requirement documents BEFORE analyzing the task:
${state.source_refs
.filter(r => r.status === 'verified' || r.status === 'linked')
.map((r, i) => {
if (r.type === 'local_file' || r.type === 'auto_detected') {
return `${i + 1}. **Read**: ${r.path} (${r.type})`
} else if (r.type === 'url') {
return `${i + 1}. **Reference URL**: ${r.path} (fetch if accessible)`
}
return ''
}).join('\n')}
Use these documents as the primary source of truth for requirements analysis.
Cross-reference the task description against these documents for completeness.
`
: ''
// Build focus directive from prep-package
const focusDirective = getAgentFocusDirective('ra', state)
return spawn_agent({
message: `
## TASK ASSIGNMENT
@@ -39,6 +64,7 @@ function spawnRAAgent(cycleId, state, progressDir) {
---
${sourceRefsSection}
## CYCLE CONTEXT
- **Cycle ID**: ${cycleId}
@@ -61,7 +87,7 @@ Requirements Analyst - Analyze and refine requirements throughout the cycle.
3. Identify edge cases and implicit requirements
4. Track requirement changes across iterations
5. Maintain requirements.md and changes.log
${focusDirective}
## DELIVERABLES
Write files to ${progressDir}/ra/:

View File

@@ -306,6 +306,8 @@ if (file_exists(`${sessionFolder}/exploration-codebase.json`)) {
// - id: L0, L1, L2, L3
// - title: "MVP" / "Usable" / "Refined" / "Optimized"
// - description: what this layer achieves (goal)
// - type: feature (default for layers)
// - priority: high (L0) | medium (L1) | low (L2-L3)
// - scope[]: features included
// - excludes[]: features explicitly deferred
// - convergence: { criteria[], verification, definition_of_done }
@@ -322,6 +324,7 @@ const layers = [
{
id: "L0", title: "MVP",
description: "...",
type: "feature", priority: "high",
scope: ["..."], excludes: ["..."],
convergence: {
criteria: ["... (testable)"],
@@ -341,6 +344,7 @@ const layers = [
// Each task must have:
// - id: T1, T2, ...
// - title, description, type (infrastructure|feature|enhancement|testing)
// - priority (high|medium|low)
// - scope, inputs[], outputs[]
// - convergence: { criteria[], verification, definition_of_done }
// - depends_on[], parallel_group
@@ -355,6 +359,7 @@ const layers = [
const tasks = [
{
id: "T1", title: "...", description: "...", type: "infrastructure",
priority: "high",
scope: "...", inputs: [], outputs: ["..."],
convergence: {
criteria: ["... (testable)"],
@@ -778,11 +783,11 @@ Each record's `convergence` object:
| L2 | Refined | Edge cases, performance, security hardening |
| L3 | Optimized | Advanced features, observability, operations |
**Schema**: `id, title, description, scope[], excludes[], convergence{}, risk_items[], effort, depends_on[], source{}`
**Schema**: `id, title, description, type, priority, scope[], excludes[], convergence{}, risk_items[], effort, depends_on[], source{}`
```jsonl
{"id":"L0","title":"MVP","description":"Minimum viable closed loop","scope":["User registration and login","Basic CRUD"],"excludes":["OAuth","2FA"],"convergence":{"criteria":["End-to-end register→login→operate flow works","Core API returns correct responses"],"verification":"curl/Postman manual testing or smoke test script","definition_of_done":"New user can complete register→login→perform one core operation"},"risk_items":["JWT library selection needs validation"],"effort":"medium","depends_on":[],"source":{"tool":"req-plan-with-file","session_id":"RPLAN-xxx","original_id":"L0"}}
{"id":"L1","title":"Usable","description":"Complete key user paths","scope":["Password reset","Input validation","Error messages"],"excludes":["Audit logs","Rate limiting"],"convergence":{"criteria":["All form fields have frontend+backend validation","Password reset email can be sent and reset completed","Error scenarios show user-friendly messages"],"verification":"Unit tests cover validation logic + manual test of reset flow","definition_of_done":"Users have a clear recovery path when encountering input errors or forgotten passwords"},"risk_items":[],"effort":"medium","depends_on":["L0"],"source":{"tool":"req-plan-with-file","session_id":"RPLAN-xxx","original_id":"L1"}}
{"id":"L0","title":"MVP","description":"Minimum viable closed loop","type":"feature","priority":"high","scope":["User registration and login","Basic CRUD"],"excludes":["OAuth","2FA"],"convergence":{"criteria":["End-to-end register→login→operate flow works","Core API returns correct responses"],"verification":"curl/Postman manual testing or smoke test script","definition_of_done":"New user can complete register→login→perform one core operation"},"risk_items":["JWT library selection needs validation"],"effort":"medium","depends_on":[],"source":{"tool":"req-plan-with-file","session_id":"RPLAN-xxx","original_id":"L0"}}
{"id":"L1","title":"Usable","description":"Complete key user paths","type":"feature","priority":"medium","scope":["Password reset","Input validation","Error messages"],"excludes":["Audit logs","Rate limiting"],"convergence":{"criteria":["All form fields have frontend+backend validation","Password reset email can be sent and reset completed","Error scenarios show user-friendly messages"],"verification":"Unit tests cover validation logic + manual test of reset flow","definition_of_done":"Users have a clear recovery path when encountering input errors or forgotten passwords"},"risk_items":[],"effort":"medium","depends_on":["L0"],"source":{"tool":"req-plan-with-file","session_id":"RPLAN-xxx","original_id":"L1"}}
```
**Constraints**: 2-4 layers, L0 must be a self-contained closed loop with no dependencies, each feature belongs to exactly ONE layer (no scope overlap).
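These constraints can be checked mechanically before writing the layer file. A minimal sketch, assuming the layer objects carry the `id`, `scope[]`, and `depends_on[]` fields shown above (the function name and error strings are illustrative, not part of the spec):

```javascript
// Validate layer decomposition constraints: 2-4 layers, L0 exists with
// no dependencies, and no feature appears in more than one layer's scope.
function validateLayers(layers) {
  const errors = []
  if (layers.length < 2 || layers.length > 4) errors.push('expected 2-4 layers')
  const l0 = layers.find(l => l.id === 'L0')
  if (!l0 || l0.depends_on.length > 0) errors.push('L0 must exist with no dependencies')
  const seen = new Set()
  for (const layer of layers) {
    for (const feature of layer.scope) {
      if (seen.has(feature)) errors.push(`scope overlap: "${feature}"`)
      seen.add(feature)
    }
  }
  return errors
}

const layers = [
  { id: 'L0', scope: ['login'], depends_on: [] },
  { id: 'L1', scope: ['password reset'], depends_on: ['L0'] }
]
console.log(validateLayers(layers)) // []
```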
@@ -829,6 +834,7 @@ When normal decomposition fails or produces empty results, use fallback template
[
{
id: "L0", title: "MVP", description: "Minimum viable closed loop",
type: "feature", priority: "high",
scope: ["Core functionality"], excludes: ["Advanced features", "Optimization"],
convergence: {
criteria: ["Core path works end-to-end"],
@@ -840,6 +846,7 @@ When normal decomposition fails or produces empty results, use fallback template
},
{
id: "L1", title: "Usable", description: "Refine key user paths",
type: "feature", priority: "medium",
scope: ["Error handling", "Input validation"], excludes: ["Performance optimization", "Monitoring"],
convergence: {
criteria: ["All user inputs validated", "Error scenarios show messages"],
@@ -857,7 +864,7 @@ When normal decomposition fails or produces empty results, use fallback template
[
{
id: "T1", title: "Infrastructure setup", description: "Project scaffolding and base configuration",
type: "infrastructure",
type: "infrastructure", priority: "high",
scope: "Project scaffolding and base configuration",
inputs: [], outputs: ["project-structure"],
convergence: {
@@ -870,7 +877,7 @@ When normal decomposition fails or produces empty results, use fallback template
},
{
id: "T2", title: "Core feature implementation", description: "Implement core business logic",
type: "feature",
type: "feature", priority: "high",
scope: "Core business logic",
inputs: ["project-structure"], outputs: ["core-module"],
convergence: {

View File

@@ -108,7 +108,7 @@ Execute context-search-agent in BRAINSTORM MODE (Phase 1-2 only).
## Assigned Context
- **Session**: ${session_id}
- **Task**: ${task_description}
- **Output**: ${projectRoot}/.workflow/${session_id}/.process/context-package.json
- **Output**: ${projectRoot}/.workflow/active/${session_id}/.process/context-package.json
## Required Output Fields
metadata, project_context, assets, dependencies, conflict_detection

View File

@@ -69,7 +69,7 @@ Step 5: Output Verification (enhanced)
**Execute First** - Check if valid package already exists:
```javascript
const contextPackagePath = `${projectRoot}/.workflow/${session_id}/.process/context-package.json`;
const contextPackagePath = `${projectRoot}/.workflow/active/${session_id}/.process/context-package.json`;
if (file_exists(contextPackagePath)) {
const existing = Read(contextPackagePath);
@@ -559,7 +559,7 @@ modifications.forEach((mod, idx) => {
// Generate conflict-resolution.json
const resolutionOutput = {
session_id: sessionId,
session_id: session_id,
resolved_at: new Date().toISOString(),
summary: {
total_conflicts: conflicts.length,
@@ -584,7 +584,7 @@ const resolutionOutput = {
failed_modifications: failedModifications
};
const resolutionPath = `${projectRoot}/.workflow/active/${sessionId}/.process/conflict-resolution.json`;
const resolutionPath = `${projectRoot}/.workflow/active/${session_id}/.process/conflict-resolution.json`;
Write(resolutionPath, JSON.stringify(resolutionOutput, null, 2));
// Output custom conflict summary (if any)
@@ -648,7 +648,7 @@ const contextAgentId = spawn_agent({
## Session Information
- **Session ID**: ${session_id}
- **Task Description**: ${task_description}
- **Output Path**: ${projectRoot}/.workflow/${session_id}/.process/context-package.json
- **Output Path**: ${projectRoot}/.workflow/active/${session_id}/.process/context-package.json
## User Intent (from Phase 1 - Planning Notes)
**GOAL**: ${userIntent.goal}
@@ -790,7 +790,7 @@ After agent completes, verify output:
```javascript
// Verify file was created
const outputPath = `${projectRoot}/.workflow/${session_id}/.process/context-package.json`;
const outputPath = `${projectRoot}/.workflow/active/${session_id}/.process/context-package.json`;
if (!file_exists(outputPath)) {
throw new Error("Agent failed to generate context-package.json");
}

View File

@@ -50,7 +50,7 @@ Step 3: Output Verification
**Execute First** - Check if valid package already exists:
```javascript
const testContextPath = `${projectRoot}/.workflow/${test_session_id}/.process/test-context-package.json`;
const testContextPath = `${projectRoot}/.workflow/active/${test_session_id}/.process/test-context-package.json`;
if (file_exists(testContextPath)) {
const existing = Read(testContextPath);
@@ -90,7 +90,7 @@ const agentId = spawn_agent({
## Session Information
- **Test Session ID**: ${test_session_id}
- **Output Path**: ${projectRoot}/.workflow/${test_session_id}/.process/test-context-package.json
- **Output Path**: ${projectRoot}/.workflow/active/${test_session_id}/.process/test-context-package.json
## Mission
Execute complete test-context-search-agent workflow for test generation planning:
@@ -161,7 +161,7 @@ After agent completes, verify output:
```javascript
// Verify file was created
const outputPath = `${projectRoot}/.workflow/${test_session_id}/.process/test-context-package.json`;
const outputPath = `${projectRoot}/.workflow/active/${test_session_id}/.process/test-context-package.json`;
if (!file_exists(outputPath)) {
throw new Error("Agent failed to generate test-context-package.json");
}

View File

@@ -292,7 +292,7 @@ echo "Next: Review full report for detailed findings"
### Chain Validation Algorithm
```
1. Load all task JSONs from ${projectRoot}/.workflow/active/{sessionId}/.task/
1. Load all task JSONs from ${projectRoot}/.workflow/active/{session_id}/.task/
2. Extract task IDs and group by feature number
3. For each feature:
- Check TEST-N.M exists
@@ -373,7 +373,7 @@ ${projectRoot}/.workflow/active/WFS-{session-id}/
# TDD Compliance Report - {Session ID}
**Generated**: {timestamp}
**Session**: WFS-{sessionId}
**Session**: WFS-{session_id}
**Workflow Type**: TDD
---

View File

@@ -218,7 +218,7 @@ close_agent({ id: analysisAgentId });
- Scan for AI code issues
- Generate `TEST_ANALYSIS_RESULTS.md`
**Output**: `${projectRoot}/.workflow/[testSessionId]/.process/TEST_ANALYSIS_RESULTS.md`
**Output**: `${projectRoot}/.workflow/active/[testSessionId]/.process/TEST_ANALYSIS_RESULTS.md`
**Validation** - TEST_ANALYSIS_RESULTS.md must include:
- Project Type Detection (with confidence)
@@ -335,9 +335,9 @@ Quality Thresholds:
- Max Fix Iterations: 5
Artifacts:
- Test plan: ${projectRoot}/.workflow/[testSessionId]/IMPL_PLAN.md
- Task list: ${projectRoot}/.workflow/[testSessionId]/TODO_LIST.md
- Analysis: ${projectRoot}/.workflow/[testSessionId]/.process/TEST_ANALYSIS_RESULTS.md
- Test plan: ${projectRoot}/.workflow/active/[testSessionId]/IMPL_PLAN.md
- Task list: ${projectRoot}/.workflow/active/[testSessionId]/TODO_LIST.md
- Analysis: ${projectRoot}/.workflow/active/[testSessionId]/.process/TEST_ANALYSIS_RESULTS.md
→ Transitioning to Phase 2: Test-Cycle Execution
```