feat: Enhance parallel-dev-cycle with prep-package integration

- Added argument parsing and prep package loading in session initialization.
- Implemented validation checks for prep-package.json integrity.
- Integrated prep package data into cycle state, including task refinement and auto-iteration settings.
- Updated agent execution to utilize source references and focus directives from prep package.
- Modified context gathering and test context generation to reference active workflow paths.
- Introduced a new interactive prompt for pre-flight checklist and task quality assessment.
- Created a detailed schema and integration specification for prep-package.json.
- Ensured all relevant phases validate and utilize the prep package effectively.
catlog22
2026-02-09 14:07:52 +08:00
parent afd9729873
commit 113bee5ef9
13 changed files with 801 additions and 42 deletions


@@ -0,0 +1,418 @@
---
description: "Interactive pre-flight checklist for parallel-dev-cycle. Validates environment, refines task via Q&A, configures auto-iteration (0→1→100), writes prep-package.json, then launches the cycle."
argument-hint: TASK="<task description>" [MAX_ITER=5] [TEST_RATE=90] [COVERAGE=80]
---
# Pre-Flight Checklist for Parallel Dev Cycle
You are an interactive preparation assistant. Your job is to ensure everything is ready for an **unattended** `parallel-dev-cycle` run. Follow each step sequentially. **Ask the user questions when information is missing.** At the end, write `prep-package.json` and invoke the cycle.
---
## Step 1: Environment Prerequisites
Check these items. Report results as a checklist.
### 1.1 Required (block if any fail)
- **Project root**: Confirm current working directory is a valid project (has package.json, Cargo.toml, pyproject.toml, go.mod, or similar)
- **Writable workspace**: Ensure `.workflow/.cycle/` directory exists or can be created
- **Git status**: Run `git status --short`. If working tree is dirty, WARN but don't block
### 1.2 Strongly Recommended (warn if missing)
- **project-tech.json**: Check `{projectRoot}/.workflow/project-tech.json`
  - If missing: Read `package.json` / `tsconfig.json` / `pyproject.toml` and generate a minimal version. Ask user: "The project appears to use [tech stack]. Is this correct? Is anything missing?"
- **project-guidelines.json**: Check `{projectRoot}/.workflow/project-guidelines.json`
  - If missing: Scan for `.eslintrc`, `.prettierrc`, `ruff.toml` etc. Ask user: "project-guidelines.json not found. Are there specific coding conventions to follow?"
- **Test framework**: Detect from config files (jest.config, vitest.config, pytest.ini, etc.)
  - If missing: Ask user: "No test framework configuration detected. Please specify the test command (e.g. `npm test`, `pytest`), or enter 'skip' to skip test verification"
### 1.3 Output
Print a formatted checklist:
```
Environment Check
═════════════════
✓ Project root: D:\myproject
✓ Workspace: .workflow/.cycle/ ready
⚠ Git: 3 uncommitted changes
✓ project-tech.json: detected (Express + TypeORM + PostgreSQL)
⚠ project-guidelines.json: not found (skipped)
✓ Test framework: jest (npm test)
```
---
## Step 2: Task Quality Assessment
### 2.0 Requirement Source Tracking
**Before assessing task quality, trace the requirements back to their original sources.** These references are written into prep-package.json so the RA agent can read the original documents directly during the analysis phase.
Ask the user:
> "What is the source of the task requirements? You can provide one or more of the following:
> 1. A local document path (e.g. docs/prd.md, requirements/feature-spec.md)
> 2. A GitHub Issue URL (e.g. https://github.com/org/repo/issues/123)
> 3. A design document / prototype link
> 4. Described directly in this session (no external document)
>
> Enter the source paths/URLs (comma-separated if multiple), or enter 'none' if there is no external source"
**Processing logic**:
```javascript
const sourceRefs = []
for (const input of userInputs) {
  if (input === 'none') break
  const ref = { path: input, type: 'unknown', status: 'unverified' }
  // Classify reference type
  if (input.startsWith('http')) {
    ref.type = 'url'
    ref.status = 'linked'
  } else if (fs.existsSync(input) || fs.existsSync(`${projectRoot}/${input}`)) {
    ref.type = 'local_file'
    ref.path = fs.existsSync(input) ? input : `${projectRoot}/${input}`
    ref.status = 'verified'
    // Extract summary from first 20 lines
    ref.preview = Read(ref.path, { limit: 20 })
  } else {
    ref.type = 'local_file'
    ref.status = 'not_found'
    console.warn(`⚠ File not found: ${input}`)
  }
  sourceRefs.push(ref)
}

// Auto-detect: scan for common requirement docs in project
const autoDetectPaths = [
  'docs/prd.md', 'docs/PRD.md', 'docs/requirements.md',
  'docs/design.md', 'docs/spec.md', 'docs/feature-spec.md',
  'requirements/*.md', 'specs/*.md',
  '.github/ISSUE_TEMPLATE/*.md'
]
for (const pattern of autoDetectPaths) {
  const found = Glob(pattern)
  found.forEach(f => {
    if (!sourceRefs.some(r => r.path === f)) {
      sourceRefs.push({ path: f, type: 'auto_detected', status: 'verified' })
    }
  })
}
```
Display detected sources:
```
Requirement Sources
═══════════════════
✓ docs/prd.md (local document, verified)
✓ docs/api-design.md (local document, verified)
✓ https://github.com/.../issues/42 (URL, linked)
⚠ specs/auth-flow.md (not found, skipped)
~ .github/ISSUE_TEMPLATE/feature.md (auto-detected)
```
### 2.1 Scoring
Read the user's `$TASK` and score each dimension:
| # | Dimension | Scoring Criteria |
|---|-----------|------------------|
| 1 | **Objective** | 0 = nothing concrete / 1 = a direction but no detail / 2 = specific and actionable |
| 2 | **Success Criteria** | 0 = none / 1 = not measurable / 2 = testable and verifiable |
| 3 | **Scope** | 0 = none / 1 = a vague area / 2 = specific files/modules |
| 4 | **Constraints** | 0 = none / 1 = a generic "don't break things" / 2 = concrete restrictions |
| 5 | **Context** | 0 = none / 1 = minimal / 2 = rich (stack, patterns, integration points) |
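Since each dimension scores 0-2, the aggregation into a /10 total and the acceptance threshold used later by the skill-side validation can be sketched as follows (the per-dimension scores themselves come from reading `$TASK`, which this sketch takes as given):

```javascript
// The five scored dimensions, each rated 0-2.
const DIMENSIONS = ['objective', 'success_criteria', 'scope', 'constraints', 'context']

// Sums the dimension scores into a quality score out of 10.
function totalQualityScore(scores) {
  return DIMENSIONS.reduce((sum, d) => sum + (scores[d] ?? 0), 0)
}

// quality_score >= 6 is the minimum accepted by the skill-side checks.
function isAcceptable(scores) {
  return totalQualityScore(scores) >= 6
}
```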
### 2.2 Display Score
```
Task Quality Assessment
═══════════════════════
Objective:        ██████████ 2/2 "Add Google OAuth login with JWT session"
Success criteria: █████░░░░░ 1/2 "Should work" → needs refinement
Scope:            ██████████ 2/2 "src/auth/*, src/strategies/*, src/models/User.ts"
Constraints:      ░░░░░░░░░░ 0/2 unspecified → must be supplied
Context:          █████░░░░░ 1/2 "TypeScript" → can be auto-enhanced
Total: 6/10 (acceptable; interactive refinement needed)
```
### 2.3 Interactive Refinement
**For each dimension scoring < 2**, ask a targeted question:
**Unclear objective (score 0-1)**:
> "Please describe more specifically what should be built. For example: 'Add Google OAuth login to the existing Express API, issue JWT tokens, and support the /api/auth/google and /api/auth/callback endpoints'"
**Missing success criteria (score 0-1)**:
> "How will completion be verified? Please describe at least 2 testable acceptance conditions. For example: '1. A user can log in with a Google account 2. Login returns a valid JWT 3. Protected routes verify the token correctly'"
**Unclear scope (score 0-1)**:
> "Which files or modules does this task involve? I detected the following possibly related directories: [list of directories found by scanning]. Please confirm or add to them"
**Missing constraints (score 0-1)**:
> "What are the constraints? Common ones include:
> - Must not break existing API compatibility
> - Must use the existing database table structure
> - No new dependency libraries
> - Keep the same patterns as the existing auth middleware
> Select the ones that apply or add custom constraints"
**Insufficient context (score 0-1)**:
> "From the project I detected: [tech stack from project-tech.json]. Are there other technical details I should know? For example the existing authentication mechanism, relevant utility libraries, data models, etc."
### 2.4 Auto-Enhancement
For dimensions still at score 1 after Q&A, auto-enhance from codebase:
- **Scope**: Use `Glob` and `Grep` to find related files, list them
- **Context**: Read `project-tech.json` and key config files
- **Constraints**: Infer from `project-guidelines.json` and existing patterns
### 2.5 Assemble Refined Task
Combine all dimensions into a structured task string:
```
OBJECTIVE: {objective}
SUCCESS_CRITERIA: {criteria}
SCOPE: {scope}
CONSTRAINTS: {constraints}
CONTEXT: {context}
```
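A minimal sketch of the assembly (field names match the template above; the on-screen template is newline-joined, while `task.refined` in prep-package.json uses `" | "`, so the separator is a parameter here):

```javascript
// Joins the five refined dimensions into the structured task string.
function assembleRefinedTask(d, sep = '\n') {
  return [
    `OBJECTIVE: ${d.objective}`,
    `SUCCESS_CRITERIA: ${d.success_criteria}`,
    `SCOPE: ${d.scope}`,
    `CONSTRAINTS: ${d.constraints}`,
    `CONTEXT: ${d.context}`
  ].join(sep)
}
```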
---
## Step 3: Auto-Iteration Configuration
### 3.1 Present Defaults & Ask for Overrides
Display the 0→1→100 model and ask if user wants to customize:
```
Auto-Iteration Configuration (0→1→100)
═══════════════════════════════════════
Mode: fully automatic (no confirmations)
Max iterations: $MAX_ITER (default 5)
Phase "0→1" (iterations 1-2): build a runnable prototype
  RA: core requirements only
  EP: minimal architecture
  CD: happy path only
  VAS: smoke tests
  Pass criteria: code compiles + core tests pass
Phase "1→100" (iterations 3-5): reach production quality
  RA: full requirements + NFRs + edge cases
  EP: refined architecture + risk mitigation
  CD: complete implementation + error handling
  VAS: full test suite + coverage audit
  Pass criteria: test pass rate >= $TEST_RATE% + coverage >= $COVERAGE% + 0 critical bugs
Need to adjust any parameter? (press Enter to use the defaults)
```
If user provides `$MAX_ITER`, `$TEST_RATE`, or `$COVERAGE`, use those values. Otherwise use defaults (5, 90, 80).
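The override logic can be sketched as follows (argument names follow the prompt's argument-hint; parsing of the raw command line is assumed to have happened upstream):

```javascript
// Documented defaults for the auto-iteration parameters.
const DEFAULTS = { MAX_ITER: 5, TEST_RATE: 90, COVERAGE: 80 }

// Returns the effective config: user-supplied values win over defaults.
function resolveIterationConfig(args = {}) {
  return {
    max_iterations: Number(args.MAX_ITER ?? DEFAULTS.MAX_ITER),
    test_pass_rate: Number(args.TEST_RATE ?? DEFAULTS.TEST_RATE),
    coverage: Number(args.COVERAGE ?? DEFAULTS.COVERAGE)
  }
}
```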
### 3.2 Customization Options
If user wants to customize, ask:
> "Select the item(s) to adjust:
> 1. Maximum iterations (current: 5)
> 2. Test pass rate threshold (current: 90%)
> 3. Code coverage threshold (current: 80%)
> 4. Number of 0→1 iterations (current: 2)
> 5. Use all defaults"
---
## Step 4: Final Confirmation Summary
Display the complete pre-flight summary:
```
══════════════════════════════════════════════
Pre-Flight Check Complete
══════════════════════════════════════════════
Environment: ✓ ready (3/3 required, 2/3 recommended)
Task quality: 9/10 (excellent)
Auto mode: ON (no confirmations, max 5 iterations)
Convergence criteria:
  0→1: compiles + core tests pass (iterations 1-2)
  1→100: 90% tests + 80% coverage + 0 critical bugs (iterations 3-5)
Refined task:
  Objective: Add Google OAuth login with JWT session management
  Criteria: User can login via Google, receive JWT, access protected routes
  Scope: src/auth/*, src/strategies/*, src/models/User.ts
  Constraints: No breaking changes to /api/login, use existing User table
  Context: Express.js + TypeORM + PostgreSQL, JWT middleware in src/middleware/auth.ts
══════════════════════════════════════════════
```
Ask: "Confirm launch? (Y/n)"
- If **Y** or Enter → proceed to Step 5
- If **n** → ask which part to revise, loop back to relevant step
---
## Step 5: Write prep-package.json
Write the following to `{projectRoot}/.workflow/.cycle/prep-package.json`:
```json
{
  "version": "1.0.0",
  "generated_at": "{ISO8601_UTC+8}",
  "prep_status": "ready",
  "environment": {
    "project_root": "{projectRoot}",
    "prerequisites": {
      "required_passed": true,
      "recommended_passed": true,
      "warnings": ["{list of warnings}"]
    },
    "tech_stack": "{detected tech stack}",
    "test_framework": "{detected test framework}",
    "has_project_tech": true,
    "has_project_guidelines": false
  },
  "task": {
    "original": "{$TASK raw input}",
    "refined": "OBJECTIVE: ... | SUCCESS_CRITERIA: ... | SCOPE: ... | CONSTRAINTS: ... | CONTEXT: ...",
    "quality_score": 9,
    "dimensions": {
      "objective": { "score": 2, "value": "..." },
      "success_criteria": { "score": 2, "value": "..." },
      "scope": { "score": 2, "value": "..." },
      "constraints": { "score": 2, "value": "..." },
      "context": { "score": 1, "value": "..." }
    },
    "source_refs": [
      {
        "path": "docs/prd.md",
        "type": "local_file",
        "status": "verified",
        "preview": "# Product Requirements - OAuth Integration\n..."
      },
      {
        "path": "https://github.com/org/repo/issues/42",
        "type": "url",
        "status": "linked"
      },
      {
        "path": ".github/ISSUE_TEMPLATE/feature.md",
        "type": "auto_detected",
        "status": "verified"
      }
    ]
  },
  "auto_iteration": {
    "enabled": true,
    "no_confirmation": true,
    "max_iterations": 5,
    "timeout_per_iteration_ms": 1800000,
    "convergence": {
      "test_pass_rate": 90,
      "coverage": 80,
      "max_critical_bugs": 0,
      "max_open_issues": 3
    },
    "phase_gates": {
      "zero_to_one": {
        "iterations": [1, 2],
        "exit_criteria": {
          "code_compiles": true,
          "core_test_passes": true,
          "min_requirements_implemented": 1
        }
      },
      "one_to_hundred": {
        "iterations": [3, 4, 5],
        "exit_criteria": {
          "test_pass_rate": 90,
          "coverage": 80,
          "critical_bugs": 0
        }
      }
    },
    "agent_focus": {
      "zero_to_one": {
        "ra": "core_requirements_only",
        "ep": "minimal_viable_architecture",
        "cd": "happy_path_first",
        "vas": "smoke_tests_only"
      },
      "one_to_hundred": {
        "ra": "full_requirements_with_nfr",
        "ep": "refined_architecture_with_risks",
        "cd": "complete_implementation_with_error_handling",
        "vas": "full_test_suite_with_coverage"
      }
    }
  }
}
```
Confirm file written:
```
✓ prep-package.json written to .workflow/.cycle/prep-package.json
```
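The `generated_at` placeholder above calls for an ISO 8601 timestamp in UTC+8, which `Date#toISOString` cannot emit directly (it is UTC-only); one way to produce it, as a sketch assuming second precision is enough:

```javascript
// Shifts the clock by +8 hours, then rewrites the "Z" suffix as "+08:00".
function isoUtcPlus8(date = new Date()) {
  const shifted = new Date(date.getTime() + 8 * 60 * 60 * 1000)
  return shifted.toISOString().replace(/\.\d{3}Z$/, '+08:00')
}
```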
---
## Step 6: Launch Cycle
Invoke the skill using `$ARGUMENTS` pass-through. The prompt is responsible for assembling the arguments; the skill consumes prep-package.json and performs its own validity checks.
**Invocation**:
```
$parallel-dev-cycle --auto TASK="$TASK_REFINED"
```
Where:
- `$parallel-dev-cycle`: expands to the skill invocation
- `$TASK_REFINED`: the refined task description assembled in Step 2
- `--auto`: enables fully automatic mode
**Skill-side checks** (see Phase 1, Step 1.1):
1. Detect whether `prep-package.json` exists
2. Verify `prep_status === "ready"`
3. Verify `project_root` matches the current project
4. Verify `quality_score >= 6`
5. Verify freshness (generated within the last 24h)
6. Verify all required fields are present
7. If all pass → load the configuration; if any fails → fall back to the default behavior and print a warning
Print:
```
Launching parallel-dev-cycle (auto mode)...
prep-package.json → auto-loaded and validated in Phase 1
Iteration plan: 0→1 (iterations 1-2) → 1→100 (iterations 3-5)
```
---
## Error Handling
| Situation | Handling |
|-----------|----------|
| A required check fails | Report the missing items with fix suggestions, **do not start the cycle** |
| Task quality < 6/10 and the user declines to refine | Report per-dimension scores, suggest rewriting the task description, **do not start the cycle** |
| User cancels at confirmation | Save the current prep-package.json (prep_status="needs_refinement") and note it can be edited and re-run |
| Environment check warns but does not block | Record the warnings in prep-package.json and continue |
| Skill-side prep-package validation fails | The skill prints a warning and falls back to the no-prep default behavior (execution is not blocked) |


@@ -302,11 +302,11 @@ if (!autoYes) {
options: [
{ label: "Start Execution", description: "Execute all tasks serially" },
{ label: "Adjust Tasks", description: "Modify, reorder, or remove tasks" },
{ label: "Cancel", description: "Cancel execution, keep execution-plan.jsonl" }
{ label: "Cancel", description: "Cancel execution, keep tasks.jsonl" }
]
}]
})
// "Adjust Tasks": display task list, user deselects/reorders, regenerate execution-plan.jsonl
// "Adjust Tasks": display task list, user deselects/reorders, regenerate tasks.jsonl
// "Cancel": end workflow, keep artifacts
}
```
@@ -321,7 +321,7 @@ Execute tasks one by one directly using tools (Read, Edit, Write, Grep, Glob, Ba
```
For each taskId in executionOrder:
├─ Load task from execution-plan.jsonl
├─ Load task from tasks.jsonl
├─ Check dependencies satisfied (all deps completed)
├─ Record START event to execution-events.md
├─ Execute task directly:


@@ -82,7 +82,7 @@ Step 4: Synthesis & Conclusion
└─ Offer options: quick execute / create issue / generate task / export / done
Step 5: Quick Execute (Optional - user selects)
├─ Convert conclusions.recommendations → execution-plan.jsonl (with convergence)
├─ Convert conclusions.recommendations → tasks.jsonl (unified JSONL with convergence)
├─ Pre-execution analysis (dependencies, file conflicts, execution order)
├─ User confirmation
├─ Direct inline execution (Read/Edit/Write/Grep/Glob/Bash)
@@ -581,13 +581,13 @@ if (!autoYes) {
**Key Principle**: No additional exploration — analysis phase has already collected all necessary context. No CLI delegation — execute directly using tools.
**Flow**: `conclusions.json → execution-plan.jsonl → User Confirmation → Direct Inline Execution → execution.md + execution-events.md`
**Flow**: `conclusions.json → tasks.jsonl → User Confirmation → Direct Inline Execution → execution.md + execution-events.md`
**Full specification**: See `EXECUTE.md` for detailed step-by-step implementation.
##### Step 5.1: Generate execution-plan.jsonl
##### Step 5.1: Generate tasks.jsonl
Convert `conclusions.recommendations` into JSONL execution list. Each line is a self-contained task with convergence criteria:
Convert `conclusions.recommendations` into unified JSONL task format. Each line is a self-contained task with convergence criteria:
```javascript
const conclusions = JSON.parse(Read(`${sessionFolder}/conclusions.json`))
@@ -603,22 +603,28 @@ const tasks = conclusions.recommendations.map((rec, index) => ({
description: rec.rationale,
type: inferTaskType(rec), // fix | refactor | feature | enhancement | testing
priority: rec.priority,
files_to_modify: extractFilesFromEvidence(rec, explorations),
effort: inferEffort(rec), // small | medium | large
files: extractFilesFromEvidence(rec, explorations).map(f => ({
path: f,
action: 'modify'
})),
depends_on: [],
convergence: {
criteria: generateCriteria(rec), // Testable conditions
verification: generateVerification(rec), // Executable command or steps
definition_of_done: generateDoD(rec) // Business language
},
context: {
source_conclusions: conclusions.key_conclusions,
evidence: rec.evidence || []
evidence: rec.evidence || [],
source: {
tool: 'analyze-with-file',
session_id: sessionId,
original_id: `TASK-${String(index + 1).padStart(3, '0')}`
}
}))
// Validate convergence quality (same as req-plan-with-file)
// Write one task per line
Write(`${sessionFolder}/execution-plan.jsonl`, tasks.map(t => JSON.stringify(t)).join('\n'))
Write(`${sessionFolder}/tasks.jsonl`, tasks.map(t => JSON.stringify(t)).join('\n'))
```
##### Step 5.2: Pre-Execution Analysis
@@ -641,7 +647,7 @@ if (!autoYes) {
options: [
{ label: "Start Execution", description: "Execute all tasks serially" },
{ label: "Adjust Tasks", description: "Modify, reorder, or remove tasks" },
{ label: "Cancel", description: "Cancel execution, keep execution-plan.jsonl" }
{ label: "Cancel", description: "Cancel execution, keep tasks.jsonl" }
]
}]
})
@@ -664,7 +670,7 @@ For each task in execution order:
- Update `execution.md` with final summary (statistics, task results table)
- Finalize `execution-events.md` with session footer
- Update `execution-plan.jsonl` with execution results per task
- Update `tasks.jsonl` with `_execution` state per task
```javascript
if (!autoYes) {
@@ -685,7 +691,7 @@ if (!autoYes) {
```
**Success Criteria**:
- `execution-plan.jsonl` generated with convergence criteria per task
- `tasks.jsonl` generated with convergence criteria and source provenance per task
- `execution.md` contains plan overview, task table, pre-execution analysis, final summary
- `execution-events.md` contains chronological event stream with convergence verification
- All tasks executed (or explicitly skipped) via direct inline execution
@@ -704,7 +710,7 @@ if (!autoYes) {
├── explorations.json # Phase 2: Single perspective aggregated findings
├── perspectives.json # Phase 2: Multi-perspective findings with synthesis
├── conclusions.json # Phase 4: Final synthesis with recommendations
├── execution-plan.jsonl # Phase 5: JSONL execution list with convergence (if quick execute)
├── tasks.jsonl # Phase 5: Unified JSONL with convergence + source (if quick execute)
├── execution.md # Phase 5: Execution overview + task table + summary (if quick execute)
└── execution-events.md # Phase 5: Chronological event log (if quick execute)
```
@@ -717,7 +723,7 @@ if (!autoYes) {
| `explorations.json` | 2 | Single perspective aggregated findings |
| `perspectives.json` | 2 | Multi-perspective findings with cross-perspective synthesis |
| `conclusions.json` | 4 | Final synthesis: conclusions, recommendations, open questions |
| `execution-plan.jsonl` | 5 | JSONL execution list from recommendations, each line with convergence criteria |
| `tasks.jsonl` | 5 | Unified JSONL from recommendations, each line with convergence criteria and source provenance |
| `execution.md` | 5 | Execution overview: plan source, task table, pre-execution analysis, final summary |
| `execution-events.md` | 5 | Chronological event stream with task details and convergence verification |
@@ -861,7 +867,7 @@ Remaining questions or areas for investigation
| Session folder conflict | Append timestamp suffix | Create unique folder and continue |
| Quick execute: task fails | Record failure in execution-events.md | User can retry, skip, or abort |
| Quick execute: verification fails | Mark criterion as unverified, continue | Note in events, manual check |
| Quick execute: no recommendations | Cannot generate execution-plan.jsonl | Suggest using lite-plan instead |
| Quick execute: no recommendations | Cannot generate tasks.jsonl | Suggest using lite-plan instead |
## Best Practices


@@ -74,6 +74,15 @@ Each agent **maintains one main document** (e.g., requirements.md, plan.json, im
When `--auto`: Run all phases sequentially without user confirmation between iterations. Use recommended defaults for all decisions. Automatically continue iteration loop until tests pass or max iterations reached.
## Prep Package Integration
When `prep-package.json` exists at `{projectRoot}/.workflow/.cycle/prep-package.json`, Phase 1 consumes it to:
- Use refined task description instead of raw TASK
- Apply auto-iteration config (convergence criteria, phase gates)
- Inject per-iteration agent focus directives (0→1 vs 1→100)
Prep packages are generated by the interactive prompt `/prompts:prep-cycle`. See [phases/00-prep-checklist.md](phases/00-prep-checklist.md) for schema.
## Execution Flow
```


@@ -0,0 +1,191 @@
# Prep Package Schema & Integration Spec
Schema definition for `prep-package.json` and integration points with the parallel-dev-cycle skill.
## File Location
```
{projectRoot}/.workflow/.cycle/prep-package.json
```
Generated by: `/prompts:prep-cycle` (interactive prompt)
Consumed by: Phase 1 (Session Initialization)
## JSON Schema
```json
{
  "version": "1.0.0",
  "generated_at": "ISO8601",
  "prep_status": "ready | needs_refinement | blocked",
  "environment": {
    "project_root": "/path/to/project",
    "prerequisites": {
      "required_passed": true,
      "recommended_passed": true,
      "warnings": ["string"]
    },
    "tech_stack": "string (e.g. Express.js + TypeORM + PostgreSQL)",
    "test_framework": "string (e.g. jest, vitest, pytest)",
    "has_project_tech": true,
    "has_project_guidelines": true
  },
  "task": {
    "original": "raw user input",
    "refined": "enhanced task description with all 5 dimensions",
    "quality_score": 8,
    "dimensions": {
      "objective": { "score": 2, "value": "..." },
      "success_criteria": { "score": 2, "value": "..." },
      "scope": { "score": 2, "value": "..." },
      "constraints": { "score": 1, "value": "..." },
      "context": { "score": 1, "value": "..." }
    },
    "source_refs": [
      {
        "path": "docs/prd.md",
        "type": "local_file | url | auto_detected",
        "status": "verified | linked | not_found",
        "preview": "first ~20 lines (local_file only)"
      }
    ]
  },
  "auto_iteration": {
    "enabled": true,
    "no_confirmation": true,
    "max_iterations": 5,
    "timeout_per_iteration_ms": 1800000,
    "convergence": {
      "test_pass_rate": 90,
      "coverage": 80,
      "max_critical_bugs": 0,
      "max_open_issues": 3
    },
    "phase_gates": {
      "zero_to_one": {
        "iterations": [1, 2],
        "exit_criteria": {
          "code_compiles": true,
          "core_test_passes": true,
          "min_requirements_implemented": 1
        }
      },
      "one_to_hundred": {
        "iterations": [3, 4, 5],
        "exit_criteria": {
          "test_pass_rate": 90,
          "coverage": 80,
          "critical_bugs": 0
        }
      }
    },
    "agent_focus": {
      "zero_to_one": {
        "ra": "core_requirements_only",
        "ep": "minimal_viable_architecture",
        "cd": "happy_path_first",
        "vas": "smoke_tests_only"
      },
      "one_to_hundred": {
        "ra": "full_requirements_with_nfr",
        "ep": "refined_architecture_with_risks",
        "cd": "complete_implementation_with_error_handling",
        "vas": "full_test_suite_with_coverage"
      }
    }
  }
}
```
## Phase 1 Integration (Consume & Check)
Phase 1 runs **6 validation checks** against prep-package.json. The package is loaded only if all checks pass; any failure falls back to the default behavior:
| # | Check | Condition | On Failure |
|---|-------|-----------|------------|
| 1 | prep_status | `=== "ready"` | skip prep |
| 2 | project_root | matches the current projectRoot | skip prep (guards against the wrong project) |
| 3 | quality_score | `>= 6` | skip prep (task quality below the bar) |
| 4 | Freshness | generated_at within the last 24h | skip prep (possibly stale) |
| 5 | Required fields | task.refined, convergence, phase_gates, agent_focus all present | skip prep |
| 6 | Convergence values | test_pass_rate/coverage are numbers in 0-100 | skip prep |
```javascript
// In 01-session-init.md, Step 1.1:
const prepPath = `${projectRoot}/.workflow/.cycle/prep-package.json`
if (fs.existsSync(prepPath)) {
  const raw = JSON.parse(Read(prepPath))
  const checks = validatePrepPackage(raw, projectRoot)
  if (checks.valid) {
    prepPackage = raw
    task = prepPackage.task.refined
    // Inject into state:
    state.convergence = prepPackage.auto_iteration.convergence
    state.phase_gates = prepPackage.auto_iteration.phase_gates
    state.agent_focus = prepPackage.auto_iteration.agent_focus
    state.max_iterations = prepPackage.auto_iteration.max_iterations
  } else {
    console.warn('Prep package validation failed, using defaults')
    // prepPackage remains null → no convergence/phase_gates/agent_focus
  }
}
```
## Phase 2 Integration (Agent Focus Directives)
```javascript
// Before spawning each agent, append focus directive:
function getAgentFocusDirective(agentName, state) {
  if (!state.phase_gates) return ""
  const iteration = state.current_iteration
  const isZeroToOne = state.phase_gates.zero_to_one.iterations.includes(iteration)
  const focus = isZeroToOne
    ? state.agent_focus.zero_to_one[agentName]
    : state.agent_focus.one_to_hundred[agentName]
  const directives = {
    core_requirements_only: "Focus ONLY on core functional requirements. Skip NFRs and edge cases.",
    minimal_viable_architecture: "Design the simplest working architecture. Skip optimization.",
    happy_path_first: "Implement ONLY the happy path. Skip error handling and edge cases.",
    smoke_tests_only: "Run smoke tests only. Skip coverage analysis and exhaustive validation.",
    full_requirements_with_nfr: "Complete requirements including NFRs, edge cases, security.",
    refined_architecture_with_risks: "Refine architecture with risk mitigation and scalability.",
    complete_implementation_with_error_handling: "Complete all tasks with error handling and validation.",
    full_test_suite_with_coverage: "Full test suite with coverage report and quality audit."
  }
  return `\n## FOCUS DIRECTIVE (${isZeroToOne ? '0→1' : '1→100'})\n${directives[focus] || ''}\n`
}
```
## Phase 3 Integration (Convergence Evaluation)
```javascript
// In 03-result-aggregation.md, Step 3.4:
function evaluateConvergence(parsedResults, state) {
  if (!state.phase_gates) {
    // No prep package: use default issue detection
    return { converged: !parsedResults.vas.issues?.length, phase: "default" }
  }
  const iteration = state.current_iteration
  const isZeroToOne = state.phase_gates.zero_to_one.iterations.includes(iteration)
  if (isZeroToOne) {
    return {
      converged: parsedResults.cd.status !== 'failed'
        && (parsedResults.vas.test_pass_rate > 0 || parsedResults.cd.tests_passing),
      phase: "0→1"
    }
  }
  const conv = state.convergence
  return {
    converged: (parsedResults.vas.test_pass_rate || 0) >= conv.test_pass_rate
      && (parsedResults.vas.coverage || 0) >= conv.coverage
      && (parsedResults.vas.critical_issues || 0) <= conv.max_critical_bugs,
    phase: "1→100"
  }
}
```


@@ -12,7 +12,7 @@ Create or resume a development cycle, initialize state file and directory struct
## Execution
### Step 1.1: Parse Arguments
### Step 1.1: Parse Arguments & Load Prep Package
```javascript
let { cycleId: existingCycleId, task, mode = 'interactive', extension } = options
@@ -22,6 +22,102 @@ if (!existingCycleId && !task) {
console.error('Either --cycle-id or task description is required')
return { status: 'error', message: 'Missing cycleId or task' }
}
// ── Prep Package: Detect → Validate → Consume ──
let prepPackage = null
const prepPath = `${projectRoot}/.workflow/.cycle/prep-package.json`
if (fs.existsSync(prepPath)) {
  const raw = JSON.parse(Read(prepPath))
  const checks = validatePrepPackage(raw, projectRoot)
  if (checks.valid) {
    prepPackage = raw
    task = prepPackage.task.refined
    console.log(`✓ Prep package loaded: score=${prepPackage.task.quality_score}/10, auto=${prepPackage.auto_iteration.enabled}`)
    console.log(`  Checks passed: ${checks.passed.join(', ')}`)
  } else {
    console.warn(`⚠ Prep package found but failed validation:`)
    checks.failures.forEach(f => console.warn(`  - ${f}`))
    console.warn(`  → Falling back to default behavior (prep-package ignored)`)
    prepPackage = null
  }
}

/**
 * Validate prep-package.json integrity before consumption.
 * Returns { valid: bool, passed: string[], failures: string[] }
 */
function validatePrepPackage(prep, projectRoot) {
  const passed = []
  const failures = []

  // Check 1: prep_status must be "ready"
  if (prep.prep_status === 'ready') {
    passed.push('status=ready')
  } else {
    failures.push(`prep_status is "${prep.prep_status}", expected "ready"`)
  }

  // Check 2: project_root must match current project
  if (prep.environment?.project_root === projectRoot) {
    passed.push('project_root match')
  } else {
    failures.push(`project_root mismatch: prep="${prep.environment?.project_root}", current="${projectRoot}"`)
  }

  // Check 3: quality_score must be >= 6
  if ((prep.task?.quality_score || 0) >= 6) {
    passed.push(`quality=${prep.task.quality_score}/10`)
  } else {
    failures.push(`quality_score ${prep.task?.quality_score || 0} < 6 minimum`)
  }

  // Check 4: generated_at must be within 24 hours
  const generatedAt = new Date(prep.generated_at)
  const hoursSince = (Date.now() - generatedAt.getTime()) / (1000 * 60 * 60)
  if (hoursSince <= 24) {
    passed.push(`age=${Math.round(hoursSince)}h`)
  } else {
    failures.push(`prep-package is ${Math.round(hoursSince)}h old (max 24h), may be stale`)
  }

  // Check 5: required fields exist
  const requiredFields = [
    'task.refined',
    'auto_iteration.convergence.test_pass_rate',
    'auto_iteration.convergence.coverage',
    'auto_iteration.phase_gates.zero_to_one',
    'auto_iteration.phase_gates.one_to_hundred',
    'auto_iteration.agent_focus.zero_to_one',
    'auto_iteration.agent_focus.one_to_hundred'
  ]
  const missing = requiredFields.filter(path => {
    const val = path.split('.').reduce((obj, key) => obj?.[key], prep)
    return val === undefined || val === null
  })
  if (missing.length === 0) {
    passed.push('fields complete')
  } else {
    failures.push(`missing fields: ${missing.join(', ')}`)
  }

  // Check 6: convergence values are valid numbers
  const conv = prep.auto_iteration?.convergence
  if (conv && typeof conv.test_pass_rate === 'number' && typeof conv.coverage === 'number'
      && conv.test_pass_rate > 0 && conv.test_pass_rate <= 100
      && conv.coverage > 0 && conv.coverage <= 100) {
    passed.push(`convergence valid (test≥${conv.test_pass_rate}%, cov≥${conv.coverage}%)`)
  } else {
    failures.push(`convergence values invalid: test_pass_rate=${conv?.test_pass_rate}, coverage=${conv?.coverage}`)
  }

  return {
    valid: failures.length === 0,
    passed,
    failures
  }
}
```
### Step 1.2: Utility Functions
@@ -73,7 +169,7 @@ function createCycleState(cycleId, taskDescription) {
cycle_id: cycleId,
title: taskDescription.substring(0, 100),
description: taskDescription,
max_iterations: 5,
max_iterations: prepPackage?.auto_iteration?.max_iterations || 5,
status: 'running',
created_at: now,
updated_at: now,
@@ -96,7 +192,13 @@ function createCycleState(cycleId, taskDescription) {
exploration: null,
plan: null,
changes: [],
test_results: null
test_results: null,
// Prep package integration (from /prompts:prep-cycle)
convergence: prepPackage?.auto_iteration?.convergence || null,
phase_gates: prepPackage?.auto_iteration?.phase_gates || null,
agent_focus: prepPackage?.auto_iteration?.agent_focus || null,
source_refs: prepPackage?.task?.source_refs || null
}
Write(stateFile, JSON.stringify(state, null, 2))


@@ -27,6 +27,31 @@ Each agent reads its detailed role definition at execution time:
```javascript
function spawnRAAgent(cycleId, state, progressDir) {
// Build source references section from prep-package
const sourceRefsSection = (state.source_refs && state.source_refs.length > 0)
? `## REQUIREMENT SOURCE DOCUMENTS
Read these original requirement documents BEFORE analyzing the task:
${state.source_refs
.filter(r => r.status === 'verified' || r.status === 'linked')
.map((r, i) => {
if (r.type === 'local_file' || r.type === 'auto_detected') {
return `${i + 1}. **Read**: ${r.path} (${r.type})`
} else if (r.type === 'url') {
return `${i + 1}. **Reference URL**: ${r.path} (fetch if accessible)`
}
return ''
}).join('\n')}
Use these documents as the primary source of truth for requirements analysis.
Cross-reference the task description against these documents for completeness.
`
: ''
// Build focus directive from prep-package
const focusDirective = getAgentFocusDirective('ra', state)
return spawn_agent({
message: `
## TASK ASSIGNMENT
@@ -39,6 +64,7 @@ function spawnRAAgent(cycleId, state, progressDir) {
---
${sourceRefsSection}
## CYCLE CONTEXT
- **Cycle ID**: ${cycleId}
@@ -61,7 +87,7 @@ Requirements Analyst - Analyze and refine requirements throughout the cycle.
3. Identify edge cases and implicit requirements
4. Track requirement changes across iterations
5. Maintain requirements.md and changes.log
${focusDirective}
## DELIVERABLES
Write files to ${progressDir}/ra/:


@@ -306,6 +306,8 @@ if (file_exists(`${sessionFolder}/exploration-codebase.json`)) {
// - id: L0, L1, L2, L3
// - title: "MVP" / "Usable" / "Refined" / "Optimized"
// - description: what this layer achieves (goal)
// - type: feature (default for layers)
// - priority: high (L0) | medium (L1) | low (L2-L3)
// - scope[]: features included
// - excludes[]: features explicitly deferred
// - convergence: { criteria[], verification, definition_of_done }
@@ -322,6 +324,7 @@ const layers = [
{
id: "L0", title: "MVP",
description: "...",
type: "feature", priority: "high",
scope: ["..."], excludes: ["..."],
convergence: {
criteria: ["... (testable)"],
@@ -341,6 +344,7 @@ const layers = [
// Each task must have:
// - id: T1, T2, ...
// - title, description, type (infrastructure|feature|enhancement|testing)
// - priority (high|medium|low)
// - scope, inputs[], outputs[]
// - convergence: { criteria[], verification, definition_of_done }
// - depends_on[], parallel_group
@@ -355,6 +359,7 @@ const layers = [
const tasks = [
{
id: "T1", title: "...", description: "...", type: "infrastructure",
priority: "high",
scope: "...", inputs: [], outputs: ["..."],
convergence: {
criteria: ["... (testable)"],
@@ -778,11 +783,11 @@ Each record's `convergence` object:
| L2 | Refined | Edge cases, performance, security hardening |
| L3 | Optimized | Advanced features, observability, operations |
-**Schema**: `id, title, description, scope[], excludes[], convergence{}, risk_items[], effort, depends_on[], source{}`
+**Schema**: `id, title, description, type, priority, scope[], excludes[], convergence{}, risk_items[], effort, depends_on[], source{}`
```jsonl
-{"id":"L0","title":"MVP","description":"Minimum viable closed loop","scope":["User registration and login","Basic CRUD"],"excludes":["OAuth","2FA"],"convergence":{"criteria":["End-to-end register→login→operate flow works","Core API returns correct responses"],"verification":"curl/Postman manual testing or smoke test script","definition_of_done":"New user can complete register→login→perform one core operation"},"risk_items":["JWT library selection needs validation"],"effort":"medium","depends_on":[],"source":{"tool":"req-plan-with-file","session_id":"RPLAN-xxx","original_id":"L0"}}
-{"id":"L1","title":"Usable","description":"Complete key user paths","scope":["Password reset","Input validation","Error messages"],"excludes":["Audit logs","Rate limiting"],"convergence":{"criteria":["All form fields have frontend+backend validation","Password reset email can be sent and reset completed","Error scenarios show user-friendly messages"],"verification":"Unit tests cover validation logic + manual test of reset flow","definition_of_done":"Users have a clear recovery path when encountering input errors or forgotten passwords"},"risk_items":[],"effort":"medium","depends_on":["L0"],"source":{"tool":"req-plan-with-file","session_id":"RPLAN-xxx","original_id":"L1"}}
+{"id":"L0","title":"MVP","description":"Minimum viable closed loop","type":"feature","priority":"high","scope":["User registration and login","Basic CRUD"],"excludes":["OAuth","2FA"],"convergence":{"criteria":["End-to-end register→login→operate flow works","Core API returns correct responses"],"verification":"curl/Postman manual testing or smoke test script","definition_of_done":"New user can complete register→login→perform one core operation"},"risk_items":["JWT library selection needs validation"],"effort":"medium","depends_on":[],"source":{"tool":"req-plan-with-file","session_id":"RPLAN-xxx","original_id":"L0"}}
+{"id":"L1","title":"Usable","description":"Complete key user paths","type":"feature","priority":"medium","scope":["Password reset","Input validation","Error messages"],"excludes":["Audit logs","Rate limiting"],"convergence":{"criteria":["All form fields have frontend+backend validation","Password reset email can be sent and reset completed","Error scenarios show user-friendly messages"],"verification":"Unit tests cover validation logic + manual test of reset flow","definition_of_done":"Users have a clear recovery path when encountering input errors or forgotten passwords"},"risk_items":[],"effort":"medium","depends_on":["L0"],"source":{"tool":"req-plan-with-file","session_id":"RPLAN-xxx","original_id":"L1"}}
```
**Constraints**: 2-4 layers, L0 must be a self-contained closed loop with no dependencies, each feature belongs to exactly ONE layer (no scope overlap).
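The extended layer schema (now carrying `type` and `priority`) and the constraints above lend themselves to a small structural check. This is a sketch only: the required-field list comes from the schema line, the priority enum and L0 rule from the comments and constraints, and the function name is illustrative.

```javascript
// Sketch: validate one decomposition layer record against the extended
// schema (type + priority added in this diff). Field names follow the
// schema line above; the L0 rule follows the stated constraints.
function validateLayer(layer) {
  const errors = [];
  const required = ["id", "title", "description", "type", "priority",
                    "scope", "excludes", "convergence"];
  for (const field of required) {
    if (layer[field] === undefined) errors.push(`missing field: ${field}`);
  }
  if (layer.priority !== undefined && !["high", "medium", "low"].includes(layer.priority)) {
    errors.push(`invalid priority: ${layer.priority}`);
  }
  // Constraint: L0 must be a self-contained closed loop with no dependencies.
  if (layer.id === "L0" && Array.isArray(layer.depends_on) && layer.depends_on.length > 0) {
    errors.push("L0 must have no dependencies");
  }
  const c = layer.convergence || {};
  if (!Array.isArray(c.criteria) || c.criteria.length === 0) {
    errors.push("convergence.criteria must be a non-empty array");
  }
  return errors;
}
```

Running such a check before writing the JSONL records would catch fallback templates that omit the new fields.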
@@ -829,6 +834,7 @@ When normal decomposition fails or produces empty results, use fallback template
[
{
id: "L0", title: "MVP", description: "Minimum viable closed loop",
+type: "feature", priority: "high",
scope: ["Core functionality"], excludes: ["Advanced features", "Optimization"],
convergence: {
criteria: ["Core path works end-to-end"],
@@ -840,6 +846,7 @@ When normal decomposition fails or produces empty results, use fallback template
},
{
id: "L1", title: "Usable", description: "Refine key user paths",
+type: "feature", priority: "medium",
scope: ["Error handling", "Input validation"], excludes: ["Performance optimization", "Monitoring"],
convergence: {
criteria: ["All user inputs validated", "Error scenarios show messages"],
@@ -857,7 +864,7 @@ When normal decomposition fails or produces empty results, use fallback template
[
{
id: "T1", title: "Infrastructure setup", description: "Project scaffolding and base configuration",
-type: "infrastructure",
+type: "infrastructure", priority: "high",
scope: "Project scaffolding and base configuration",
inputs: [], outputs: ["project-structure"],
convergence: {
@@ -870,7 +877,7 @@ When normal decomposition fails or produces empty results, use fallback template
},
{
id: "T2", title: "Core feature implementation", description: "Implement core business logic",
-type: "feature",
+type: "feature", priority: "high",
scope: "Core business logic",
inputs: ["project-structure"], outputs: ["core-module"],
convergence: {

View File

@@ -108,7 +108,7 @@ Execute context-search-agent in BRAINSTORM MODE (Phase 1-2 only).
## Assigned Context
- **Session**: ${session_id}
- **Task**: ${task_description}
-- **Output**: ${projectRoot}/.workflow/${session_id}/.process/context-package.json
+- **Output**: ${projectRoot}/.workflow/active/${session_id}/.process/context-package.json
## Required Output Fields
metadata, project_context, assets, dependencies, conflict_detection
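This file and the ones that follow repeat the same path migration, moving session artifacts from `.workflow/<session_id>/` to `.workflow/active/<session_id>/`. Centralizing the convention in one helper would keep future paths from drifting; a sketch, with an illustrative function name:

```javascript
// Sketch: build session-scoped artifact paths under .workflow/active/,
// matching the updated output paths in this diff.
function sessionPath(projectRoot, sessionId, ...segments) {
  return [projectRoot, ".workflow", "active", sessionId, ...segments].join("/");
}

// Example matching the updated Output path above (values are illustrative):
const contextPackagePath = sessionPath("/repo", "WFS-001", ".process", "context-package.json");
// → "/repo/.workflow/active/WFS-001/.process/context-package.json"
```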

View File

@@ -69,7 +69,7 @@ Step 5: Output Verification (enhanced)
**Execute First** - Check if valid package already exists:
```javascript
-const contextPackagePath = `${projectRoot}/.workflow/${session_id}/.process/context-package.json`;
+const contextPackagePath = `${projectRoot}/.workflow/active/${session_id}/.process/context-package.json`;
if (file_exists(contextPackagePath)) {
const existing = Read(contextPackagePath);
@@ -559,7 +559,7 @@ modifications.forEach((mod, idx) => {
// Generate conflict-resolution.json
const resolutionOutput = {
-session_id: sessionId,
+session_id: session_id,
resolved_at: new Date().toISOString(),
summary: {
total_conflicts: conflicts.length,
@@ -584,7 +584,7 @@ const resolutionOutput = {
failed_modifications: failedModifications
};
-const resolutionPath = `${projectRoot}/.workflow/active/${sessionId}/.process/conflict-resolution.json`;
+const resolutionPath = `${projectRoot}/.workflow/active/${session_id}/.process/conflict-resolution.json`;
Write(resolutionPath, JSON.stringify(resolutionOutput, null, 2));
// Output custom conflict summary (if any)
@@ -648,7 +648,7 @@ const contextAgentId = spawn_agent({
## Session Information
- **Session ID**: ${session_id}
- **Task Description**: ${task_description}
-- **Output Path**: ${projectRoot}/.workflow/${session_id}/.process/context-package.json
+- **Output Path**: ${projectRoot}/.workflow/active/${session_id}/.process/context-package.json
## User Intent (from Phase 1 - Planning Notes)
**GOAL**: ${userIntent.goal}
@@ -790,7 +790,7 @@ After agent completes, verify output:
```javascript
// Verify file was created
-const outputPath = `${projectRoot}/.workflow/${session_id}/.process/context-package.json`;
+const outputPath = `${projectRoot}/.workflow/active/${session_id}/.process/context-package.json`;
if (!file_exists(outputPath)) {
throw new Error("Agent failed to generate context-package.json");
}

View File

@@ -50,7 +50,7 @@ Step 3: Output Verification
**Execute First** - Check if valid package already exists:
```javascript
-const testContextPath = `${projectRoot}/.workflow/${test_session_id}/.process/test-context-package.json`;
+const testContextPath = `${projectRoot}/.workflow/active/${test_session_id}/.process/test-context-package.json`;
if (file_exists(testContextPath)) {
const existing = Read(testContextPath);
@@ -90,7 +90,7 @@ const agentId = spawn_agent({
## Session Information
- **Test Session ID**: ${test_session_id}
-- **Output Path**: ${projectRoot}/.workflow/${test_session_id}/.process/test-context-package.json
+- **Output Path**: ${projectRoot}/.workflow/active/${test_session_id}/.process/test-context-package.json
## Mission
Execute complete test-context-search-agent workflow for test generation planning:
@@ -161,7 +161,7 @@ After agent completes, verify output:
```javascript
// Verify file was created
-const outputPath = `${projectRoot}/.workflow/${test_session_id}/.process/test-context-package.json`;
+const outputPath = `${projectRoot}/.workflow/active/${test_session_id}/.process/test-context-package.json`;
if (!file_exists(outputPath)) {
throw new Error("Agent failed to generate test-context-package.json");
}

View File

@@ -292,7 +292,7 @@ echo "Next: Review full report for detailed findings"
### Chain Validation Algorithm
```
-1. Load all task JSONs from ${projectRoot}/.workflow/active/{sessionId}/.task/
+1. Load all task JSONs from ${projectRoot}/.workflow/active/{session_id}/.task/
2. Extract task IDs and group by feature number
3. For each feature:
- Check TEST-N.M exists
@@ -373,7 +373,7 @@ ${projectRoot}/.workflow/active/WFS-{session-id}/
# TDD Compliance Report - {Session ID}
**Generated**: {timestamp}
-**Session**: WFS-{sessionId}
+**Session**: WFS-{session_id}
**Workflow Type**: TDD
---
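The chain-validation algorithm above reduces to a pure check over task IDs once the task JSONs are loaded. A sketch: the `TEST-N.M` naming comes from the algorithm text, while the `IMPL-N.M` counterpart is an assumed convention for illustration.

```javascript
// Sketch: for each implementation task IMPL-N.M, require a paired TEST-N.M,
// per step 3 of the chain-validation algorithm. IMPL- naming is assumed.
function findMissingTestTasks(taskIds) {
  const ids = new Set(taskIds);
  const missing = [];
  for (const id of taskIds) {
    const m = id.match(/^IMPL-(\d+)\.(\d+)$/);
    if (m && !ids.has(`TEST-${m[1]}.${m[2]}`)) {
      missing.push(`TEST-${m[1]}.${m[2]}`);
    }
  }
  return missing;
}
```

Any IDs returned would feed directly into the non-compliance section of the TDD report template above.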

View File

@@ -218,7 +218,7 @@ close_agent({ id: analysisAgentId });
- Scan for AI code issues
- Generate `TEST_ANALYSIS_RESULTS.md`
-**Output**: `${projectRoot}/.workflow/[testSessionId]/.process/TEST_ANALYSIS_RESULTS.md`
+**Output**: `${projectRoot}/.workflow/active/[testSessionId]/.process/TEST_ANALYSIS_RESULTS.md`
**Validation** - TEST_ANALYSIS_RESULTS.md must include:
- Project Type Detection (with confidence)
@@ -335,9 +335,9 @@ Quality Thresholds:
- Max Fix Iterations: 5
Artifacts:
-- Test plan: ${projectRoot}/.workflow/[testSessionId]/IMPL_PLAN.md
-- Task list: ${projectRoot}/.workflow/[testSessionId]/TODO_LIST.md
-- Analysis: ${projectRoot}/.workflow/[testSessionId]/.process/TEST_ANALYSIS_RESULTS.md
+- Test plan: ${projectRoot}/.workflow/active/[testSessionId]/IMPL_PLAN.md
+- Task list: ${projectRoot}/.workflow/active/[testSessionId]/TODO_LIST.md
+- Analysis: ${projectRoot}/.workflow/active/[testSessionId]/.process/TEST_ANALYSIS_RESULTS.md
→ Transitioning to Phase 2: Test-Cycle Execution
```