feat: Add interactive pre-flight checklists for ccw-loop and workflow-plan, including validation and task transformation steps

- Implemented `prep-loop.md` for ccw-loop, detailing source discovery, validation, task transformation, and auto-loop configuration.
- Created `prep-plan.md` for workflow planning, covering environment checks, task quality assessment, execution preferences, and final confirmation.
- Defined schemas and integration points for `prep-package.json` in both ccw-loop and workflow-plan skills, ensuring proper validation and task handling.
- Added error handling mechanisms for various scenarios during the preparation phases.
catlog22
2026-02-09 15:02:38 +08:00
parent ef7382ecf5
commit c62d26183b
25 changed files with 1596 additions and 2896 deletions

.codex/prompts/prep-loop.md Normal file

@@ -0,0 +1,456 @@
---
description: "Interactive pre-flight checklist for ccw-loop. Discovers JSONL from collaborative-plan-with-file, analyze-with-file, brainstorm-to-cycle sessions; validates, transforms to ccw-loop task format, writes prep-package.json + tasks.jsonl, then launches the loop."
argument-hint: '[SOURCE="<path-to-tasks.jsonl-or-session-folder>"] [MAX_ITER=10]'
---
# Pre-Flight Checklist for CCW Loop
You are an interactive preparation assistant. Your job is to discover and consume task artifacts from upstream planning/analysis/brainstorm skills, validate them, transform into ccw-loop's task format, and launch an **unattended** development loop. Follow each step sequentially. **Ask the user questions when information is missing.**
---
## Step 1: Source Discovery
### 1.1 Auto-Detect Available Sessions
Scan for upstream artifacts from the three supported source skills:
```javascript
const projectRoot = Bash('git rev-parse --show-toplevel 2>/dev/null || pwd').trim()
// Source 1: collaborative-plan-with-file
const cplanSessions = Glob(`${projectRoot}/.workflow/.planning/CPLAN-*/tasks.jsonl`)
.map(p => ({
path: p,
source: 'collaborative-plan-with-file',
type: 'jsonl',
session: p.match(/CPLAN-[^/]+/)?.[0],
mtime: fs.statSync(p).mtime
}))
// Source 2: analyze-with-file
const anlSessions = Glob(`${projectRoot}/.workflow/.analysis/ANL-*/tasks.jsonl`)
.map(p => ({
path: p,
source: 'analyze-with-file',
type: 'jsonl',
session: p.match(/ANL-[^/]+/)?.[0],
mtime: fs.statSync(p).mtime
}))
// Source 3: brainstorm-to-cycle
const bsSessions = Glob(`${projectRoot}/.workflow/.brainstorm/*/cycle-task.md`)
.map(p => ({
path: p,
source: 'brainstorm-to-cycle',
type: 'markdown',
session: p.match(/\.brainstorm\/([^/]+)/)?.[1],
mtime: fs.statSync(p).mtime
}))
const allSources = [...cplanSessions, ...anlSessions, ...bsSessions]
.sort((a, b) => b.mtime - a.mtime) // Most recent first
```
### 1.2 Display Discovered Sources
```
可用的上游任务源
════════════════
collaborative-plan-with-file:
1. CPLAN-auth-redesign-20260208 tasks.jsonl (5 tasks, 2h ago)
2. CPLAN-api-cleanup-20260205 tasks.jsonl (3 days ago)
analyze-with-file:
3. ANL-perf-audit-20260207 tasks.jsonl (8 tasks, 1d ago)
brainstorm-to-cycle:
4. BS-notification-system cycle-task.md (1d ago)
手动输入:
5. 自定义路径 (输入 JSONL 文件路径或任务描述)
```
### 1.3 User Selection
Ask the user to select a source:
> "请选择任务来源(输入编号),或输入 JSONL 文件的完整路径:
> 也可以输入 'manual' 手动输入任务描述(不使用上游 JSONL)"
**If the `$SOURCE` argument is provided**, skip discovery and use it directly:
```javascript
if (options.SOURCE) {
// Validate path exists
if (!fs.existsSync(options.SOURCE)) {
console.error(`文件不存在: ${options.SOURCE}`)
return
}
selectedSource = {
path: options.SOURCE,
source: inferSource(options.SOURCE),
type: options.SOURCE.endsWith('.jsonl') ? 'jsonl' : 'markdown'
}
}
```
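The `inferSource` helper used above is not defined in this prompt; a plausible sketch based on the directory conventions from Step 1.1:
```javascript
// Heuristic: map an artifact path back to the upstream skill that produced it.
function inferSource(p) {
  if (p.includes('.workflow/.planning/CPLAN-')) return 'collaborative-plan-with-file'
  if (p.includes('.workflow/.analysis/ANL-')) return 'analyze-with-file'
  if (p.includes('.workflow/.brainstorm/')) return 'brainstorm-to-cycle'
  return 'manual'
}
```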
---
## Step 2: Source Validation & Task Loading
### 2.1 For JSONL Sources (collaborative-plan / analyze-with-file)
```javascript
function validateAndLoadJsonl(jsonlPath) {
const content = Read(jsonlPath)
const lines = content.trim().split('\n').filter(l => l.trim())
const tasks = []
const errors = []
for (let i = 0; i < lines.length; i++) {
try {
const task = JSON.parse(lines[i])
// Required fields check
const requiredFields = ['id', 'title', 'description']
const missing = requiredFields.filter(f => !task[f])
if (missing.length > 0) {
errors.push(`Line ${i + 1}: missing fields: ${missing.join(', ')}`)
continue
}
// Validate task structure
if (task.id && task.title && task.description) {
tasks.push(task)
}
} catch (e) {
errors.push(`Line ${i + 1}: invalid JSON: ${e.message}`)
}
}
return { tasks, errors, total_lines: lines.length }
}
```
Display validation results:
```
JSONL 验证
══════════
文件: .workflow/.planning/CPLAN-auth-redesign-20260208/tasks.jsonl
来源: collaborative-plan-with-file
✓ 5/5 行解析成功
✓ 必需字段完整 (id, title, description)
✓ 3 个任务含收敛标准 (convergence)
⚠ 2 个任务缺少收敛标准 (将使用默认)
任务列表:
TASK-001 [high] Implement JWT token service (feature, 3 files)
TASK-002 [high] Add OAuth2 Google strategy (feature, 2 files)
TASK-003 [medium] Create user session middleware (feature, 4 files)
TASK-004 [low] Add rate limiting to auth endpoints (enhancement, 2 files)
TASK-005 [low] Write integration tests (testing, 5 files)
```
### 2.2 For Markdown Sources (brainstorm-to-cycle)
```javascript
function loadBrainstormTask(mdPath) {
const content = Read(mdPath)
// Extract enriched task description from cycle-task.md
// Format: # Generated Task \n\n **Idea**: ... \n\n --- \n\n {enrichedTask}
const taskMatch = content.match(/---\s*\n([\s\S]+)$/)
const enrichedTask = taskMatch ? taskMatch[1].trim() : content
// Parse into a single composite task
return {
tasks: [{
id: 'TASK-001',
title: extractTitle(content),
description: enrichedTask,
type: 'feature',
priority: 'high',
effort: 'large',
source: { tool: 'brainstorm-to-cycle', path: mdPath }
}],
errors: [],
is_composite: true // Single large task from brainstorm
}
}
```
Display:
```
Brainstorm 任务加载
══════════════════
文件: .workflow/.brainstorm/notification-system/cycle-task.md
来源: brainstorm-to-cycle
脑暴输出为复合任务描述(非结构化 JSONL)
标题: Build real-time notification system
类型: feature (composite)
是否需要将其拆分为多个子任务?(Y/n)
```
If user selects **Y** (split), analyze the task description and generate sub-tasks:
```javascript
// Analyze and decompose the composite task into 3-7 sub-tasks
// Use mcp__ace-tool__search_context to find relevant patterns
// Generate structured tasks with convergence criteria
```
If user selects **n** (keep as single), use as-is.
### 2.3 Validation Gate
If validation has errors:
```
⚠ 验证发现 {N} 个问题:
Line 3: missing fields: description
Line 7: invalid JSON
选项:
1. 跳过有问题的行,继续 ({valid_count} 个有效任务)
2. 取消,手动修复后重试
```
**Block if 0 valid tasks.** Warn and continue if some tasks invalid.
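A minimal sketch of this gate (function name and return shape are illustrative, not part of the spec):
```javascript
// Abort when no task survived validation; otherwise continue with the valid subset.
function applyValidationGate(tasks, errors) {
  if (tasks.length === 0) {
    console.error(`Validation failed: 0 valid tasks (${errors.length} errors)`)
    return { proceed: false, tasks: [] }
  }
  if (errors.length > 0) {
    console.warn(`Skipping ${errors.length} invalid line(s); continuing with ${tasks.length} task(s)`)
  }
  return { proceed: true, tasks }
}
```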
---
## Step 3: Task Transformation
Transform unified JSONL tasks → ccw-loop `develop.tasks[]` format.
```javascript
function transformToCcwLoopTasks(sourceTasks) {
const now = getUtc8ISOString()
return sourceTasks.map((task, index) => ({
// Core fields (ccw-loop native)
id: task.id || `task-${String(index + 1).padStart(3, '0')}`,
description: task.title
? `${task.title}: ${task.description}`
: task.description,
tool: inferTool(task), // 'gemini' | 'qwen' | 'codex'
mode: 'write',
status: 'pending',
priority: mapPriority(task.priority), // 1 (high) | 2 (medium) | 3 (low)
files_changed: (task.files || []).map(f => f.path || f),
created_at: now,
completed_at: null,
// Extended fields (preserved from source for agent reference)
_source: task.source || { tool: 'manual' },
_convergence: task.convergence || null,
_type: task.type || 'feature',
_effort: task.effort || 'medium',
_depends_on: task.depends_on || []
}))
}
function inferTool(task) {
// Default to gemini for write tasks
return 'gemini'
}
function mapPriority(priority) {
switch (priority) {
case 'high': case 'critical': return 1
case 'medium': return 2
case 'low': return 3
default: return 2
}
}
```
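`getUtc8ISOString()` is used above but not defined in this prompt; the ccw-loop-b Phase 1 document elsewhere in this changeset defines it as below, reproduced here as the assumed convention:
```javascript
// Shift the clock by +8 hours and serialize, matching the UTC+8 timestamp convention used elsewhere.
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
```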
Display transformed tasks:
```
任务转换
════════
源格式: unified JSONL (collaborative-plan-with-file)
目标格式: ccw-loop develop.tasks
task-001 [P1] Implement JWT token service: Create JWT service... gemini/write pending
task-002 [P1] Add OAuth2 Google strategy: Implement passport... gemini/write pending
task-003 [P2] Create user session middleware: Add Express... gemini/write pending
task-004 [P3] Add rate limiting to auth endpoints: Implement... gemini/write pending
task-005 [P3] Write integration tests: Create test suite... gemini/write pending
共 5 个任务 (2 high, 1 medium, 2 low)
```
### 3.1 Task Reordering (Optional)
Ask: "是否需要调整任务顺序或移除某些任务?(输入编号排列如 '1,3,2,5' 或回车保持当前顺序)"
---
## Step 4: Auto-Loop Configuration
### 4.1 Present Defaults
```
自动循环配置
════════════
模式: 全自动 (develop → debug → validate → complete)
最大迭代: $MAX_ITER (默认 10)
超时: 10 分钟/action
收敛标准 (从源任务汇总):
${tasksWithConvergence} 个任务含收敛标准 → 自动验证
${tasksWithoutConvergence} 个任务无收敛标准 → 使用默认 (测试通过)
需要调整参数吗?(直接回车使用默认值)
```
### 4.2 Customization (if requested)
> "请选择要调整的项目:
> 1. 最大迭代次数 (当前: 10)
> 2. 每个 action 超时 (当前: 10 分钟)
> 3. 全部使用默认值"
---
## Step 5: Final Confirmation
```
══════════════════════════════════════════════
Pre-Flight 检查完成
══════════════════════════════════════════════
来源: collaborative-plan-with-file (CPLAN-auth-redesign-20260208)
任务数: 5 个 (2 high, 1 medium, 2 low)
验证: ✓ 5/5 任务格式正确
收敛: 3/5 任务含收敛标准
自动模式: ON (最多 10 次迭代)
任务摘要:
1. [P1] Implement JWT token service
2. [P1] Add OAuth2 Google strategy
3. [P2] Create user session middleware
4. [P3] Add rate limiting to auth endpoints
5. [P3] Write integration tests
══════════════════════════════════════════════
```
Ask: "确认启动?(Y/n)"
- If **Y** → proceed to Step 6
- If **n** → ask which part to revise
---
## Step 6: Write Artifacts
### 6.1 Write prep-package.json
Write to `{projectRoot}/.workflow/.loop/prep-package.json`:
```json
{
"version": "1.0.0",
"generated_at": "{ISO8601_UTC+8}",
"prep_status": "ready",
"target_skill": "ccw-loop",
"environment": {
"project_root": "{projectRoot}",
"tech_stack": "{detected tech stack}",
"test_framework": "{detected test framework}"
},
"source": {
"tool": "collaborative-plan-with-file",
"session_id": "CPLAN-auth-redesign-20260208",
"jsonl_path": "{projectRoot}/.workflow/.planning/CPLAN-auth-redesign-20260208/tasks.jsonl",
"task_count": 5,
"tasks_with_convergence": 3
},
"tasks": {
"total": 5,
"by_priority": { "high": 2, "medium": 1, "low": 2 },
"by_type": { "feature": 3, "enhancement": 1, "testing": 1 }
},
"auto_loop": {
"enabled": true,
"no_confirmation": true,
"max_iterations": 10,
"timeout_per_action_ms": 600000
}
}
```
### 6.2 Write tasks.jsonl
Write transformed tasks to `{projectRoot}/.workflow/.loop/prep-tasks.jsonl` (ccw-loop format):
```javascript
const jsonlContent = transformedTasks.map(t => JSON.stringify(t)).join('\n')
Write(`${projectRoot}/.workflow/.loop/prep-tasks.jsonl`, jsonlContent)
```
Confirm:
```
✓ prep-package.json → .workflow/.loop/prep-package.json
✓ prep-tasks.jsonl → .workflow/.loop/prep-tasks.jsonl
```
---
## Step 7: Launch Loop
Invoke the skill:
```
$ccw-loop --auto TASK="Execute tasks from {source.tool} session {source.session_id}"
```
Where:
- `$ccw-loop` — expands to the skill invocation
- `--auto` — enables fully automatic mode
- The skill side detects `prep-package.json` and loads `prep-tasks.jsonl`
**The skill performs these checks on its side** (see Phase 1 Step 1.1; a sketch follows this list):
1. Detect whether `prep-package.json` exists
2. Verify `prep_status === "ready"`
3. Verify `target_skill === "ccw-loop"`
4. Check that `project_root` matches the current project
5. Check file freshness (generated within the last 24h)
6. Verify that `prep-tasks.jsonl` exists and is readable
7. All pass → load the pre-built task list; any failure → fall back to the default INIT behavior
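A minimal sketch of that consumer-side gate, assuming Node-style `fs` access and the field names from Step 6.1 (illustrative, not the skill's actual code):
```javascript
const fs = require('fs')
const path = require('path')

// Return the prep package when every check passes; null means "fall back to default INIT behavior".
function loadPrepPackage(projectRoot) {
  const pkgPath = path.join(projectRoot, '.workflow', '.loop', 'prep-package.json')
  const tasksPath = path.join(projectRoot, '.workflow', '.loop', 'prep-tasks.jsonl')
  if (!fs.existsSync(pkgPath) || !fs.existsSync(tasksPath)) return null
  const pkg = JSON.parse(fs.readFileSync(pkgPath, 'utf8'))
  const ageMs = Date.now() - new Date(pkg.generated_at).getTime()
  const ok = pkg.prep_status === 'ready' &&
             pkg.target_skill === 'ccw-loop' &&
             pkg.environment?.project_root === projectRoot &&
             ageMs < 24 * 60 * 60 * 1000
  return ok ? pkg : null
}
```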
Print:
```
启动 ccw-loop (自动模式)...
prep-package.json → Phase 1 自动加载并校验
prep-tasks.jsonl → 5 个预构建任务加载到 develop.tasks
循环: develop → validate → complete (最多 10 次迭代)
```
---
## Error Handling
| Situation | Handling |
|------|------|
| No upstream sessions available | Prompt the user to run collaborative-plan / analyze-with-file / brainstorm first, or choose manual input |
| All JSONL lines invalid | Report the errors and **do not start the loop** |
| JSONL partially invalid | Warn about the invalid lines and continue with the valid tasks |
| brainstorm cycle-task.md is empty | Report the error and suggest completing the brainstorm flow first |
| User cancels at confirmation | Save prep-package.json (prep_status="cancelled") and note that it can be edited and rerun |
| Skill-side prep-package validation fails | The skill prints a warning and falls back to the default INIT behavior without prep (execution is not blocked) |

.codex/prompts/prep-plan.md Normal file

@@ -0,0 +1,373 @@
---
description: "Interactive pre-flight checklist for workflow:plan. Validates environment, refines task to GOAL/SCOPE/CONTEXT, collects source docs, configures execution preferences, writes prep-package.json, then launches the workflow."
argument-hint: TASK="<task description>" [EXEC_METHOD=agent|cli|hybrid] [CLI_TOOL=codex|gemini|qwen]
---
# Pre-Flight Checklist for Workflow Plan
You are an interactive preparation assistant. Your job is to ensure everything is ready for an **unattended** `workflow:plan` run with `--yes` mode. Follow each step sequentially. **Ask the user questions when information is missing.** At the end, write `prep-package.json` and invoke the skill.
---
## Step 1: Environment Prerequisites
Check these items. Report results as a checklist.
### 1.1 Required (block if any fail)
- **Project root**: Confirm current working directory is a valid project (has package.json, Cargo.toml, pyproject.toml, go.mod, or similar)
- **Writable workspace**: Ensure `.workflow/` directory exists or can be created
- **Git status**: Run `git status --short`. If working tree is dirty, WARN but don't block
### 1.2 Strongly Recommended (warn if missing)
- **project-tech.json**: Check `{projectRoot}/.workflow/project-tech.json`
- If missing: WARN — Phase 1 will call `workflow:init` to generate it. Ask user: "检测到项目使用 [tech stack from package.json], 是否正确?需要补充什么?"
- **project-guidelines.json**: Check `{projectRoot}/.workflow/project-guidelines.json`
- If missing: WARN — will be generated as empty scaffold. Ask: "有特定的编码规范需要遵循吗?"
- **Test framework**: Detect from config files (jest.config, vitest.config, pytest.ini, etc.)
- If missing: Ask: "未检测到测试框架,请指定测试命令(如 `npm test`),或输入 'skip' 跳过"
### 1.3 Output
Print formatted checklist:
```
环境检查
════════
✓ 项目根目录: D:\myproject
✓ .workflow/ 目录就绪
⚠ Git: 3 个未提交变更
✓ project-tech.json: 已检测 (Express + TypeORM + PostgreSQL)
⚠ project-guidelines.json: 未找到 (Phase 1 将生成空模板)
✓ 测试框架: jest (npm test)
```
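A compact sketch of the checks above, using Node-style `fs` calls as elsewhere in this prompt (the marker list is abbreviated):
```javascript
const fs = require('fs')

// Required + recommended prerequisite probes; git cleanliness is checked separately and only warns.
function checkEnvironment(projectRoot) {
  const markers = ['package.json', 'Cargo.toml', 'pyproject.toml', 'go.mod']
  const isProject = markers.some(m => fs.existsSync(`${projectRoot}/${m}`))
  fs.mkdirSync(`${projectRoot}/.workflow`, { recursive: true })  // writable workspace
  return {
    isProject,
    hasTech: fs.existsSync(`${projectRoot}/.workflow/project-tech.json`),
    hasGuidelines: fs.existsSync(`${projectRoot}/.workflow/project-guidelines.json`)
  }
}
```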
---
## Step 2: Task Quality Assessment
### 2.0 Requirement Source Tracking
**Before assessing task quality, first trace the original source of the requirement.** These references are written into prep-package.json and used by Phase 2 context-gather and Phase 3 task-generation.
Ask the user:
> "任务需求的来源是什么?可以提供以下一种或多种:
> 1. 本地文档路径 (如 docs/prd.md, requirements/feature-spec.md)
> 2. GitHub Issue URL (如 https://github.com/org/repo/issues/123)
> 3. 设计文档 / 原型链接
> 4. 会话中直接描述 (无外部文档)
>
> 请输入来源路径/URL(多个用逗号分隔),或输入 'none' 表示无外部来源"
**Processing logic**:
```javascript
const sourceRefs = []
for (const input of userInputs) {
if (input === 'none') break
const ref = { path: input, type: 'unknown', status: 'unverified' }
if (input.startsWith('http')) {
ref.type = 'url'
ref.status = 'linked'
} else if (fs.existsSync(input) || fs.existsSync(`${projectRoot}/${input}`)) {
ref.type = 'local_file'
ref.path = fs.existsSync(input) ? input : `${projectRoot}/${input}`
ref.status = 'verified'
ref.preview = Read(ref.path, { limit: 20 })
} else {
ref.type = 'local_file'
ref.status = 'not_found'
console.warn(`⚠ 文件未找到: ${input}`)
}
sourceRefs.push(ref)
}
// Auto-detect common requirement docs
const autoDetectPaths = [
'docs/prd.md', 'docs/PRD.md', 'docs/requirements.md',
'docs/design.md', 'docs/spec.md', 'requirements/*.md', 'specs/*.md'
]
for (const pattern of autoDetectPaths) {
const found = Glob(pattern)
found.forEach(f => {
if (!sourceRefs.some(r => r.path === f)) {
sourceRefs.push({ path: f, type: 'auto_detected', status: 'verified' })
}
})
}
```
Display detected sources:
```
需求来源
════════
✓ docs/prd.md (本地文档, 已验证)
✓ https://github.com/.../issues/42 (URL, 已链接)
~ requirements/api-spec.md (自动检测)
```
### 2.1 Scoring
Score the user's TASK against 5 dimensions, mapped to workflow:plan's GOAL/SCOPE/CONTEXT format.
Each dimension scores 0-2 (0=missing, 1=vague, 2=clear). **Total minimum: 6/10 to proceed.**
| # | Dimension | Maps to | Scoring criteria |
|---|------|------|----------|
| 1 | **Objective** | → GOAL | 0 = nothing concrete / 1 = a direction but no detail / 2 = specific and actionable |
| 2 | **Success Criteria** | → GOAL (supplement) | 0 = none / 1 = not measurable / 2 = testable and verifiable |
| 3 | **Scope** | → SCOPE | 0 = none / 1 = broad area only / 2 = specific files/modules |
| 4 | **Constraints** | → CONTEXT | 0 = none / 1 = a vague "don't break things" / 2 = concrete restrictions |
| 5 | **Tech Context** | → CONTEXT | 0 = none / 1 = minimal / 2 = rich |
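The scoring itself is the assistant's judgment; only the aggregation is mechanical. A sketch of the 6/10 gate, using the dimension keys from the prep-package schema in Step 5:
```javascript
// Sum the five 0-2 dimension scores and apply the minimum threshold from Step 2.1.
function totalQualityScore(dimensions) {
  const keys = ['objective', 'success_criteria', 'scope', 'constraints', 'context']
  const total = keys.reduce((sum, k) => sum + (dimensions[k]?.score ?? 0), 0)
  return { total, passes: total >= 6 }
}
```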
### 2.2 Display Score
```
任务质量评估
════════════
目标(GOAL): ██████████ 2/2 "Add Google OAuth login with JWT session"
成功标准: █████░░░░░ 1/2 "Should work" → 需要细化
范围(SCOPE): ██████████ 2/2 "src/auth/*, src/strategies/*"
约束(CTX): ░░░░░░░░░░ 0/2 未指定 → 必须补充
技术上下文: █████░░░░░ 1/2 "TypeScript" → 可自动增强
总分: 6/10 (可接受,需交互补充)
```
### 2.3 Interactive Refinement
**For each dimension scoring < 2**, ask a targeted question:
**Unclear objective (score 0-1)**:
> "请更具体地描述要实现什么功能?例如:'为现有 Express API 添加 Google OAuth 登录,生成 JWT token,支持 /api/auth/google 和 /api/auth/callback 两个端点'"
**Missing success criteria (score 0-1)**:
> "完成后如何验证?请描述至少 2 个可测试的验收条件。例如:'1. 用户能通过 Google 账号登录 2. 登录后返回有效 JWT 3. 受保护路由能正确验证 token'"
**Unclear scope (score 0-1)**:
> "这个任务涉及哪些文件或模块?我检测到以下可能相关的目录: [列出扫描到的相关目录],请确认或补充"
**Missing constraints (score 0-1)**:
> "有哪些限制条件?常见约束:不破坏现有 API / 使用现有数据库 / 不引入新依赖 / 保持现有模式。请选择或自定义"
**Insufficient context (score 0-1)**:
> "我从项目中检测到: [tech stack from project-tech.json]。还有需要知道的技术细节吗?"
### 2.4 Auto-Enhancement
For dimensions still at score 1 after Q&A, auto-enhance from codebase:
- **Scope**: Use `Glob` and `Grep` to find related files
- **Context**: Read `project-tech.json` and key config files
- **Constraints**: Infer from `project-guidelines.json`
### 2.5 Assemble Structured Description
Map to workflow:plan's GOAL/SCOPE/CONTEXT format:
```
GOAL: {objective + success criteria}
SCOPE: {scope boundaries}
CONTEXT: {constraints + technical context}
```
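A sketch of how the refined dimensions could be folded into that string (the exact layout is an assumption; only the GOAL/SCOPE/CONTEXT labels are fixed by the format above):
```javascript
// Assemble the structured description later passed to workflow:plan as $TASK_STRUCTURED.
function assembleStructuredTask(d) {
  return [
    `GOAL: ${d.objective}; acceptance: ${d.success_criteria}`,
    `SCOPE: ${d.scope}`,
    `CONTEXT: ${d.context}; constraints: ${d.constraints}`
  ].join('\n')
}
```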
---
## Step 3: Execution Preferences
### 3.1 Present Configuration & Ask for Overrides
```
执行配置
════════
自动模式: --yes (跳过所有确认)
自动提交: --with-commit (每个任务完成后自动 git commit)
执行方式: $EXEC_METHOD (默认 agent)
agent — Claude agent 直接实现
hybrid — Agent 编排 + CLI 处理复杂步骤 (推荐)
cli — 全部通过 CLI 工具执行
CLI 工具: $CLI_TOOL (默认 codex)
codex / gemini / qwen / auto
补充材料: 无 (可后续在 Phase 3 Phase 0 中添加)
需要调整任何参数吗?(直接回车使用默认值)
```
If user wants to customize, ask:
> "请选择要调整的项目:
> 1. 执行方式 (当前: agent)
> 2. CLI 工具 (当前: codex)
> 3. 是否自动提交 (当前: 是)
> 4. 补充材料路径
> 5. 全部使用默认值"
### 3.2 Build Execution Config
```javascript
const executionConfig = {
auto_yes: true,
with_commit: true,
execution_method: userChoice.executionMethod || 'agent',
preferred_cli_tool: userChoice.preferredCliTool || 'codex',
supplementary_materials: {
type: 'none',
content: []
}
}
```
---
## Step 4: Final Confirmation Summary
```
══════════════════════════════════════════════
Pre-Flight 检查完成
══════════════════════════════════════════════
环境: ✓ 就绪 (3/3 必需, 2/3 推荐)
任务质量: 9/10 (优秀)
自动模式: ON (--yes --with-commit)
执行方式: hybrid (codex)
需求来源: 2 个文档 (docs/prd.md, issue #42)
结构化任务:
GOAL: Add Google OAuth login with JWT session management;
验收: 用户可 Google 登录, 返回 JWT, 受保护路由验证
SCOPE: src/auth/*, src/strategies/*, src/models/User.ts
CONTEXT: Express.js + TypeORM + PostgreSQL;
约束: 不破坏 /api/login, 使用现有 User 表
══════════════════════════════════════════════
```
Ask: "确认启动?(Y/n)"
- If **Y** or Enter → proceed to Step 5
- If **n** → ask which part to revise, loop back
---
## Step 5: Write prep-package.json
Write to `{projectRoot}/.workflow/.prep/plan-prep-package.json`:
```json
{
"version": "1.0.0",
"generated_at": "{ISO8601_UTC+8}",
"prep_status": "ready",
"target_skill": "workflow-plan-execute",
"environment": {
"project_root": "{projectRoot}",
"prerequisites": {
"required_passed": true,
"recommended_passed": true,
"warnings": ["{list of warnings}"]
},
"tech_stack": "{detected tech stack}",
"test_framework": "{detected test framework}",
"has_project_tech": true,
"has_project_guidelines": false
},
"task": {
"original": "{$TASK raw input}",
"structured": {
"goal": "{GOAL string}",
"scope": "{SCOPE string}",
"context": "{CONTEXT string}"
},
"quality_score": 9,
"dimensions": {
"objective": { "score": 2, "value": "..." },
"success_criteria": { "score": 2, "value": "..." },
"scope": { "score": 2, "value": "..." },
"constraints": { "score": 2, "value": "..." },
"context": { "score": 1, "value": "..." }
},
"source_refs": [
{
"path": "docs/prd.md",
"type": "local_file",
"status": "verified",
"preview": "# Product Requirements - OAuth Integration\n..."
},
{
"path": "https://github.com/org/repo/issues/42",
"type": "url",
"status": "linked"
}
]
},
"execution": {
"auto_yes": true,
"with_commit": true,
"execution_method": "agent",
"preferred_cli_tool": "codex",
"supplementary_materials": {
"type": "none",
"content": []
}
}
}
```
Confirm:
```
✓ prep-package.json 已写入 .workflow/.prep/plan-prep-package.json
```
---
## Step 6: Launch Workflow
Invoke the skill using `$ARGUMENTS` pass-through:
```
$workflow-plan-execute --yes --with-commit TASK="$TASK_STRUCTURED"
```
Where:
- `$workflow-plan-execute` — expands to the skill invocation
- `$TASK_STRUCTURED` — the GOAL/SCOPE/CONTEXT task assembled in Step 2
- `--yes` — fully automatic mode
- `--with-commit` — auto-commit after each task (per the Step 3 configuration)
**The skill performs these checks on its side** (see the Phase 1 consumption logic; a sketch of the gates follows this list):
1. Detect whether `.workflow/.prep/plan-prep-package.json` exists
2. Verify `prep_status === "ready"` and `target_skill === "workflow-plan-execute"`
3. Check that `project_root` matches the current project
4. Check `quality_score >= 6`
5. Check file freshness (generated within the last 24h)
6. Check that all required fields are present
7. All pass → load the configuration; any failure → fall back to default behavior and print a warning
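A small sketch of the quality and freshness gates, reusing the thresholds above (field names as in the Step 5 JSON; illustrative only):
```javascript
// Gate on quality score and generation time; both thresholds come from the checklist above.
function prepIsUsable(pkg) {
  const ageMs = Date.now() - new Date(pkg.generated_at).getTime()
  return pkg.prep_status === 'ready' &&
         pkg.target_skill === 'workflow-plan-execute' &&
         (pkg.task?.quality_score ?? 0) >= 6 &&
         ageMs < 24 * 60 * 60 * 1000
}
```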
Print:
```
启动 workflow:plan (自动模式)...
prep-package.json → Phase 1 自动加载并校验
执行方式: hybrid (codex) + auto-commit
```
---
## Error Handling
| Situation | Handling |
|------|------|
| A required check fails | Report the missing items with fix suggestions and **do not start the workflow** |
| Task quality < 6/10 and the user declines to refine | Report the per-dimension scores, suggest rewriting the task description, and **do not start the workflow** |
| User cancels at confirmation | Save prep-package.json (prep_status="needs_refinement") and note that it can be edited and rerun |
| Environment checks produce non-blocking warnings | Record the warnings in prep-package.json and continue |
| Skill-side prep-package validation fails | The skill prints a warning and falls back to the default behavior without prep (execution is not blocked) |


@@ -1,301 +0,0 @@
# CCW Loop-B: Hybrid Orchestrator Pattern
Iterative development workflow using coordinator + specialized workers architecture.
## Overview
CCW Loop-B implements a flexible orchestration pattern:
- **Coordinator**: Main agent managing state, user interaction, worker scheduling
- **Workers**: Specialized agents (init, develop, debug, validate, complete)
- **Modes**: Interactive / Auto / Parallel execution
## Architecture
```
Coordinator (Main Agent)
|
+-- Spawns Workers
| - ccw-loop-b-init.md
| - ccw-loop-b-develop.md
| - ccw-loop-b-debug.md
| - ccw-loop-b-validate.md
| - ccw-loop-b-complete.md
|
+-- Batch Wait (parallel mode)
+-- Sequential Wait (auto/interactive)
+-- State Management
+-- User Interaction
```
## Subagent API
Core APIs for worker orchestration:
| API | Purpose |
|-----|------|
| `spawn_agent({ message })` | Create a worker; returns `agent_id` |
| `wait({ ids, timeout_ms })` | Wait for results (the only way to collect results) |
| `send_input({ id, message })` | Continue the interaction |
| `close_agent({ id })` | Close and reclaim the worker |
**Available patterns**: single-agent deep interaction / multi-agent parallel / hybrid
## Execution Modes
### Interactive Mode (default)
Coordinator displays menu, user selects action, spawns corresponding worker.
```bash
/ccw-loop-b TASK="Implement feature X"
```
**Flow**:
1. Init: Parse task, create breakdown
2. Menu: Show options to user
3. User selects action (develop/debug/validate)
4. Spawn worker for selected action
5. Wait for result
6. Display result, back to menu
7. Repeat until complete
### Auto Mode
Automated sequential execution following predefined workflow.
```bash
/ccw-loop-b --mode=auto TASK="Fix bug Y"
```
**Flow**:
1. Init → 2. Develop → 3. Validate → 4. Complete
If issues found: loop back to Debug → Develop → Validate
### Parallel Mode
Spawn multiple workers simultaneously, batch wait for results.
```bash
/ccw-loop-b --mode=parallel TASK="Analyze module Z"
```
**Flow**:
1. Init: Create analysis plan
2. Spawn workers in parallel: [develop, debug, validate]
3. Batch wait: `wait({ ids: [w1, w2, w3] })`
4. Merge results
5. Coordinator decides next action
6. Complete
## Session Structure
```
{projectRoot}/.workflow/.loop/
+-- {loopId}.json # Master state
+-- {loopId}.workers/ # Worker outputs
| +-- init.output.json
| +-- develop.output.json
| +-- debug.output.json
| +-- validate.output.json
| +-- complete.output.json
+-- {loopId}.progress/ # Human-readable logs
+-- develop.md
+-- debug.md
+-- validate.md
+-- summary.md
```
## Worker Responsibilities
| Worker | Role | Specialization |
|--------|------|----------------|
| **init** | Session initialization | Task parsing, breakdown, planning |
| **develop** | Code implementation | File operations, pattern matching, incremental development |
| **debug** | Problem diagnosis | Root cause analysis, hypothesis testing, fix recommendations |
| **validate** | Testing & verification | Test execution, coverage analysis, quality gates |
| **complete** | Session finalization | Summary generation, commit preparation, cleanup |
## Usage Examples
### Example 1: Simple Feature Implementation
```bash
/ccw-loop-b TASK="Add user logout function"
```
**Auto flow**:
- Init: Parse requirements
- Develop: Implement logout in `src/auth.ts`
- Validate: Run tests
- Complete: Generate commit message
### Example 2: Bug Investigation
```bash
/ccw-loop-b TASK="Fix memory leak in WebSocket handler"
```
**Interactive flow**:
1. Init: Parse issue
2. User selects "debug" → Spawn debug worker
3. Debug: Root cause analysis → recommends fix
4. User selects "develop" → Apply fix
5. User selects "validate" → Verify fix works
6. User selects "complete" → Generate summary
### Example 3: Comprehensive Analysis
```bash
/ccw-loop-b --mode=parallel TASK="Analyze payment module for improvements"
```
**Parallel flow**:
- Spawn [develop, debug, validate] workers simultaneously
- Develop: Analyze code quality and patterns
- Debug: Identify potential issues
- Validate: Check test coverage
- Wait for all three to complete
- Merge findings into comprehensive report
### Example 4: Resume Existing Loop
```bash
/ccw-loop-b --loop-id=loop-b-20260122-abc123
```
Continues from previous state, respects status (running/paused).
## Key Features
### 1. Worker Specialization
Each worker focuses on one domain:
- **No overlap**: Clear boundaries between workers
- **Reusable**: Same worker for different tasks
- **Composable**: Combine workers for complex workflows
### 2. Flexible Coordination
Coordinator adapts to mode:
- **Interactive**: Menu-driven, user controls flow
- **Auto**: Predetermined sequence
- **Parallel**: Concurrent execution with batch wait
### 3. State Management
Unified state at `{projectRoot}/.workflow/.loop/{loopId}.json`:
- **API compatible**: Works with CCW API
- **Extension fields**: Skill-specific data in `skill_state`
- **Worker outputs**: Structured JSON for each action
### 4. Progress Tracking
Human-readable logs:
- **Per-worker progress**: `{action}.md` files
- **Summary**: Consolidated achievements
- **Commit-ready**: Formatted commit messages
## Best Practices
1. **Start with Init**: Always initialize before execution
2. **Use appropriate mode**:
- Interactive: Complex tasks needing user decisions
- Auto: Well-defined workflows
- Parallel: Independent analysis tasks
3. **Clean up workers**: `close_agent()` after each worker completes
4. **Batch wait wisely**: Use in parallel mode for efficiency
5. **Track progress**: Document in progress files
6. **Validate often**: After each develop phase
## Implementation Patterns
### Pattern 1: Single Worker Deep Interaction
```javascript
const workerId = spawn_agent({ message: workerPrompt })
const result1 = wait({ ids: [workerId] })
// Continue with same worker
send_input({ id: workerId, message: "Continue with next task" })
const result2 = wait({ ids: [workerId] })
close_agent({ id: workerId })
```
### Pattern 2: Multi-Worker Parallel
```javascript
const workers = {
develop: spawn_agent({ message: developPrompt }),
debug: spawn_agent({ message: debugPrompt }),
validate: spawn_agent({ message: validatePrompt })
}
// Batch wait
const results = wait({ ids: Object.values(workers), timeout_ms: 900000 })
// Process all results
Object.values(workers).forEach(id => close_agent({ id }))
```
### Pattern 3: Sequential Worker Chain
```javascript
const actions = ['init', 'develop', 'validate', 'complete']
for (const action of actions) {
const workerId = spawn_agent({ message: buildPrompt(action) })
const result = wait({ ids: [workerId] })
updateState(action, result)
close_agent({ id: workerId })
}
```
## Error Handling
| Error | Recovery |
|-------|----------|
| Worker timeout | Use `send_input` to request convergence |
| Worker fails | Log error, coordinator decides retry strategy |
| Partial results | Use completed workers, mark incomplete |
| State corruption | Rebuild from progress files |
## File Structure
```
.codex/skills/ccw-loop-b/
+-- SKILL.md # Entry point
+-- README.md # This file
+-- phases/
| +-- state-schema.md # State structure definition
+-- specs/
+-- action-catalog.md # Action reference
.codex/agents/
+-- ccw-loop-b-init.md # Worker: Init
+-- ccw-loop-b-develop.md # Worker: Develop
+-- ccw-loop-b-debug.md # Worker: Debug
+-- ccw-loop-b-validate.md # Worker: Validate
+-- ccw-loop-b-complete.md # Worker: Complete
```
## Comparison: ccw-loop vs ccw-loop-b
| Aspect | ccw-loop | ccw-loop-b |
|--------|----------|------------|
| Pattern | Single agent, multi-phase | Coordinator + workers |
| Worker model | Single agent handles all | Specialized workers per action |
| Parallelization | Sequential only | Supports parallel mode |
| Flexibility | Fixed sequence | Mode-based (interactive/auto/parallel) |
| Best for | Simple linear workflows | Complex tasks needing specialization |
## Contributing
To add new workers:
1. Create worker role file in `.codex/agents/`
2. Define clear responsibilities
3. Update `action-catalog.md`
4. Add worker to coordinator spawn logic
5. Test integration with existing workers


@@ -1,429 +0,0 @@
---
name: ccw-loop-b
description: Hybrid orchestrator pattern for iterative development. Coordinator + specialized workers with batch wait, parallel split, and two-phase clarification. Triggers on "ccw-loop-b".
allowed-tools: spawn_agent, wait, send_input, close_agent, AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep
---
# CCW Loop-B - Hybrid Orchestrator Pattern
An iterative development workflow built as a coordinator plus dedicated workers. It supports three execution modes (Interactive / Auto / Parallel): each action is executed by an independent worker agent, while the coordinator handles scheduling, state management, and result aggregation.
## Architecture Overview
```
+------------------------------------------------------------+
| Main Coordinator |
| 职责: 状态管理 + worker 调度 + 结果汇聚 + 用户交互 |
+------------------------------------------------------------+
| | |
v v v
+----------------+ +----------------+ +----------------+
| Worker-Develop | | Worker-Debug | | Worker-Validate|
| 专注: 代码实现 | | 专注: 问题诊断 | | 专注: 测试验证 |
+----------------+ +----------------+ +----------------+
| | |
v v v
.workers/ .workers/ .workers/
develop.output.json debug.output.json validate.output.json
```
### Subagent API
| API | Purpose | Notes |
|-----|------|----------|
| `spawn_agent({ message })` | Create a worker; returns `agent_id` | The first message loads the role |
| `wait({ ids, timeout_ms })` | Wait for results | **The only way to collect results**; not close |
| `send_input({ id, message })` | Continue the interaction / follow up | Use `interrupt=true` sparingly |
| `close_agent({ id })` | Close and reclaim | Irreversible; close only once no further interaction is needed |
## Key Design Principles
1. **Keep the coordinator lightweight**: it only schedules and manages state; concrete work goes to workers
2. **Single responsibility per worker**: each worker focuses on one domain (develop/debug/validate)
3. **Pass role paths, not content**: workers read their own role files; the main flow never inlines role content
4. **Delay close_agent**: close a worker only after confirming no further interaction is needed
5. **Two-phase workflow**: for complex tasks, clarify first and execute second to reduce rework
6. **Batch-wait optimization**: parallel mode waits on all workers with `wait({ ids: [...] })`
7. **Standardized results**: worker output follows the unified WORKER_RESULT format
8. **Flexible mode switching**: choose interactive/auto/parallel based on task complexity
## Arguments
| Arg | Required | Description |
|-----|----------|-------------|
| TASK | One of TASK or --loop-id | Task description (for new loop) |
| --loop-id | One of TASK or --loop-id | Existing loop ID to continue |
| --mode | No | `interactive` (default) / `auto` / `parallel` |
## Execution Modes
### Mode: Interactive (default)
The coordinator shows a menu, the user selects an action, and the corresponding worker is spawned to execute it.
```
Coordinator -> Show menu -> User selects -> spawn worker -> wait -> Display result -> Loop
```
### Mode: Auto
Executes automatically in a preset order; after each worker finishes, the coordinator decides the next step.
```
Init -> Develop -> [if issues] Debug -> Validate -> [if fail] Loop back -> Complete
```
### Mode: Parallel
Spawns multiple workers in parallel, batch-waits to aggregate results, and the coordinator makes the combined decision.
```
Coordinator -> spawn [develop, debug, validate] in parallel -> wait({ ids: all }) -> Merge -> Decide
```
## Execution Flow
```
Input Parsing:
└─ Parse arguments (TASK | --loop-id + --mode)
└─ Convert to structured context (loopId, state, mode)
Phase 1: Session Initialization
└─ Ref: phases/01-session-init.md
├─ Create new loop OR resume existing loop
├─ Initialize state file and directory structure
└─ Output: loopId, state, progressDir, mode
Phase 2: Orchestration Loop
└─ Ref: phases/02-orchestration-loop.md
├─ Mode dispatch: interactive / auto / parallel
├─ Worker spawn with structured prompt (Goal/Scope/Context/Deliverables)
├─ Wait + timeout handling + result parsing
├─ State update per iteration
└─ close_agent on loop exit
```
**Phase Reference Documents** (read on-demand when phase executes):
| Phase | Document | Purpose |
|-------|----------|---------|
| 1 | [phases/01-session-init.md](phases/01-session-init.md) | Argument parsing, state creation/resume, directory init |
| 2 | [phases/02-orchestration-loop.md](phases/02-orchestration-loop.md) | 3-mode orchestration, worker spawn, batch wait, result merge |
## Data Flow
```
User Input (TASK | --loop-id + --mode)
[Parse Arguments]
↓ loopId, state, mode
Phase 1: Session Initialization
↓ loopId, state (initialized/resumed), progressDir
Phase 2: Orchestration Loop
┌─── Interactive Mode ──────────────────────────────────┐
│ showMenu → user selects → spawn worker → wait → │
│ parseResult → updateState → close worker → loop │
└───────────────────────────────────────────────────────┘
┌─── Auto Mode ─────────────────────────────────────────┐
│ selectNext → spawn worker → wait → parseResult → │
│ updateState → close worker → [loop_back?] → next │
└───────────────────────────────────────────────────────┘
┌─── Parallel Mode ─────────────────────────────────────┐
│ spawn [develop, debug, validate] → batch wait → │
│ mergeOutputs → coordinator decides → close all │
└───────────────────────────────────────────────────────┘
return finalState
```
## Session Structure
```
{projectRoot}/.workflow/.loop/
├── {loopId}.json # Master state (API + Skill shared)
├── {loopId}.workers/ # Worker structured outputs
│ ├── init.output.json
│ ├── develop.output.json
│ ├── debug.output.json
│ ├── validate.output.json
│ └── complete.output.json
└── {loopId}.progress/ # Human-readable progress
├── develop.md
├── debug.md
├── validate.md
└── summary.md
```
## State Management
Master state file: `{projectRoot}/.workflow/.loop/{loopId}.json`
```json
{
"loop_id": "loop-b-20260122-abc123",
"title": "Task title",
"description": "Full task description",
"mode": "interactive | auto | parallel",
"status": "running | paused | completed | failed",
"current_iteration": 0,
"max_iterations": 10,
"created_at": "ISO8601",
"updated_at": "ISO8601",
"skill_state": {
"phase": "init | develop | debug | validate | complete",
"action_index": 0,
"workers_completed": [],
"parallel_results": null,
"pending_tasks": [],
"completed_tasks": [],
"findings": []
}
}
```
**Control Signal Checking**: before each worker spawn, the coordinator checks `state.status`:
- `running` → continue
- `paused` → exit gracefully, wait for resume
- `failed` → terminate
**Recovery**: If state corrupted, rebuild from `.progress/` markdown files and `.workers/*.output.json`.
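A rough sketch of that recovery path (the file layout follows Session Structure above; the rebuilt fields are a best-effort approximation):
```javascript
// Rebuild a minimal master state from persisted worker outputs when {loopId}.json is unreadable.
function rebuildState(projectRoot, loopId) {
  const workersDir = `${projectRoot}/.workflow/.loop/${loopId}.workers`
  const completed = Glob(`${workersDir}/*.output.json`).map(p => JSON.parse(Read(p)))
  return {
    loop_id: loopId,
    status: 'running',
    current_iteration: completed.length,
    skill_state: {
      workers_completed: completed.map(o => o.action),
      findings: completed.map(o => o.summary).filter(Boolean)
    }
  }
}
```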
## Worker Catalog
| Worker | Role File | Purpose | Output Files |
|--------|-----------|---------|--------------|
| [init](workers/worker-init.md) | ccw-loop-b-init.md | Session initialization, task parsing | init.output.json |
| [develop](workers/worker-develop.md) | ccw-loop-b-develop.md | Code implementation, refactoring | develop.output.json, develop.md |
| [debug](workers/worker-debug.md) | ccw-loop-b-debug.md | Problem diagnosis, hypothesis testing | debug.output.json, debug.md |
| [validate](workers/worker-validate.md) | ccw-loop-b-validate.md | Test execution, coverage | validate.output.json, validate.md |
| [complete](workers/worker-complete.md) | ccw-loop-b-complete.md | Wrap-up and summary | complete.output.json, summary.md |
### Worker Dependencies
| Worker | Depends On | Leads To |
|--------|------------|----------|
| init | - | develop (auto) / menu (interactive) |
| develop | init | validate / debug |
| debug | init | develop / validate |
| validate | develop or debug | complete / develop (if fail) |
| complete | - | Terminal |
### Worker Sequences
```
Simple Task (Auto): init → develop → validate → complete
Complex Task (Auto): init → develop → validate (fail) → debug → develop → validate → complete
Bug Fix (Auto): init → debug → develop → validate → complete
Analysis (Parallel): init → [develop ‖ debug ‖ validate] → complete
Interactive: init → menu → user selects → worker → menu → ...
```
## Worker Prompt Protocol
### Spawn Message Structure (§7.1)
```javascript
function buildWorkerPrompt(action, loopId, state) {
return `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/ccw-loop-b-${action}.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
3. Read: ${projectRoot}/.workflow/project-guidelines.json
---
Goal: ${goalForAction(action, state)}
Scope:
- 可做: ${allowedScope(action)}
- 不可做: ${forbiddenScope(action)}
- 目录限制: ${directoryScope(action, state)}
Context:
- Loop ID: ${loopId}
- State: ${projectRoot}/.workflow/.loop/${loopId}.json
- Output: ${projectRoot}/.workflow/.loop/${loopId}.workers/${action}.output.json
- Progress: ${projectRoot}/.workflow/.loop/${loopId}.progress/${action}.md
Deliverables:
- 按 WORKER_RESULT 格式输出
- 写入 output.json 和 progress.md
Quality bar:
- ${qualityCriteria(action)}
`
}
```
**Key point**: the worker reads its own role file; the main flow passes only the path and never embeds role content.
### Worker Output Format (WORKER_RESULT)
```
WORKER_RESULT:
- action: {action_name}
- status: success | failed | needs_input
- summary: <brief summary>
- files_changed: [list]
- next_suggestion: <suggested next action>
- loop_back_to: <action name if needs loop back, or null>
DETAILED_OUTPUT:
<action-specific structured output>
```
### Two-Phase Clarification (§5.2)
When a worker hits an ambiguous requirement, it follows a two-phase pattern:
```
Phase 1: the worker outputs CLARIFICATION_NEEDED + open questions
Phase 2: the coordinator collects user answers → send_input → the worker continues execution
```
```javascript
// 解析 worker 是否需要澄清
if (output.includes('CLARIFICATION_NEEDED')) {
const userAnswers = await collectUserAnswers(output)
send_input({
id: workerId,
message: `## CLARIFICATION ANSWERS\n${userAnswers}\n\n## CONTINUE EXECUTION`
})
const finalResult = wait({ ids: [workerId], timeout_ms: 600000 })
}
```
## Parallel Split Strategy (§6)
### Strategy 1: Split by responsibility (recommended)
| Worker | Responsibility | Deliverable | Off-limits |
|--------|------|--------|----------|
| develop | Locate entry points and call chains, draft the implementation plan | Change-point list | No testing |
| debug | Problem diagnosis, risk assessment | Issue list + fix recommendations | No code changes |
| validate | Test strategy, coverage analysis | Test results + quality report | No implementation changes |
### Strategy 2: Split by module domain
```
Worker 1: src/auth/** → auth module changes
Worker 2: src/api/** → API layer changes
Worker 3: src/database/** → data layer changes
```
### Splitting principles
1. **File isolation**: avoid multiple workers modifying the same file at the same time
2. **Single responsibility**: each worker does exactly one thing
3. **Clear boundaries**: anything outside the assigned scope goes through `CLARIFICATION_NEEDED`
4. **Minimal context**: pass only the information a worker needs to complete its task
## Result Merge (Parallel Mode)
```javascript
function mergeWorkerOutputs(outputs) {
return {
develop: parseWorkerResult(outputs.develop),
debug: parseWorkerResult(outputs.debug),
validate: parseWorkerResult(outputs.validate),
conflicts: detectConflicts(outputs), // 检查 worker 间建议冲突
merged_at: getUtc8ISOString()
}
}
```
**Conflict detection**: when multiple workers suggest modifying the same file, the coordinator flags the conflict and the user decides.
## TodoWrite Pattern
### Phase-Level Tracking (Attached)
```json
[
{"content": "Phase 1: Session Initialization", "status": "completed"},
{"content": "Phase 2: Orchestration Loop (auto mode)", "status": "in_progress"},
{"content": " → Worker: init", "status": "completed"},
{"content": " → Worker: develop (task 2/5)", "status": "in_progress"},
{"content": " → Worker: validate", "status": "pending"},
{"content": " → Worker: complete", "status": "pending"}
]
```
### Parallel Mode Tracking
```json
[
{"content": "Phase 1: Session Initialization", "status": "completed"},
{"content": "Phase 2: Parallel Analysis", "status": "in_progress"},
{"content": " → Worker: develop (parallel)", "status": "in_progress"},
{"content": " → Worker: debug (parallel)", "status": "in_progress"},
{"content": " → Worker: validate (parallel)", "status": "in_progress"},
{"content": " → Merge results", "status": "pending"}
]
```
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization, then Phase 1 execution
2. **Progressive Phase Loading**: Read phase docs ONLY when that phase is about to execute
3. **Parse Every Output**: Extract WORKER_RESULT from worker output for next decision
4. **Worker lifecycle**: spawn → wait → [send_input if needed] → close; do not keep workers alive long term
5. **Persist results**: worker output is written to `{projectRoot}/.workflow/.loop/{loopId}.workers/`
6. **Sync state**: update the master state after every worker completes
7. **Timeout handling**: request convergence via send_input; on a second timeout, continue with whatever results exist
8. **DO NOT STOP**: Continuous execution until completed, paused, or max iterations
## Error Handling
| Error Type | Recovery |
|------------|----------|
| Worker timeout | Request convergence via send_input → skip on a second timeout |
| Worker failed | Log the error; the coordinator decides whether to retry |
| Batch wait partial timeout | Continue with the results that did complete |
| State corrupted | Rebuild from the progress files and worker outputs |
| Conflicting worker results | Flag the conflict and let the user decide |
| Max iterations reached | Generate a summary and record what remains unfinished |
## Coordinator Checklist
### Before Each Phase
- [ ] Read phase reference document
- [ ] Check current state and control signals
- [ ] Update TodoWrite with phase tasks
### After Each Worker
- [ ] Parse WORKER_RESULT from output
- [ ] Persist output to `.workers/{action}.output.json`
- [ ] Update master state file
- [ ] close_agent (confirm no further interaction is needed)
- [ ] Determine next action (continue / loop back / complete)
## Reference Documents
| Document | Purpose |
|----------|---------|
| [workers/](workers/) | Worker definitions (init, develop, debug, validate, complete) |
## Usage
```bash
# Interactive mode (default)
/ccw-loop-b TASK="Implement user authentication"
# Auto mode
/ccw-loop-b --mode=auto TASK="Fix login bug"
# Parallel analysis mode
/ccw-loop-b --mode=parallel TASK="Analyze and improve payment module"
# Resume existing loop
/ccw-loop-b --loop-id=loop-b-20260122-abc123
```


@@ -1,163 +0,0 @@
# Phase 1: Session Initialization
Create or resume a development loop, initialize state file and directory structure, detect execution mode.
## Objective
- Parse user arguments (TASK, --loop-id, --mode)
- Create new loop with unique ID OR resume existing loop
- Initialize directory structure (progress + workers)
- Create master state file
- Output: loopId, state, progressDir, mode
## Execution
### Step 0: Determine Project Root
```javascript
// Step 0: Determine Project Root
const projectRoot = Bash('git rev-parse --show-toplevel 2>/dev/null || pwd').trim()
```
### Step 1.1: Parse Arguments
```javascript
const { loopId: existingLoopId, task, mode = 'interactive' } = options
// Validate mutual exclusivity
if (!existingLoopId && !task) {
console.error('Either --loop-id or task description is required')
return { status: 'error', message: 'Missing loopId or task' }
}
// Validate mode
const validModes = ['interactive', 'auto', 'parallel']
if (!validModes.includes(mode)) {
console.error(`Invalid mode: ${mode}. Use: ${validModes.join(', ')}`)
return { status: 'error', message: 'Invalid mode' }
}
```
### Step 1.2: Utility Functions
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
function readState(loopId) {
const stateFile = `${projectRoot}/.workflow/.loop/${loopId}.json`
if (!fs.existsSync(stateFile)) return null
return JSON.parse(Read(stateFile))
}
function saveState(loopId, state) {
state.updated_at = getUtc8ISOString()
Write(`${projectRoot}/.workflow/.loop/${loopId}.json`, JSON.stringify(state, null, 2))
}
```
### Step 1.3: New Loop Creation
When `TASK` is provided (no `--loop-id`):
```javascript
const timestamp = getUtc8ISOString().replace(/[-:]/g, '').split('.')[0]
const random = Math.random().toString(36).substring(2, 10)
const loopId = `loop-b-${timestamp}-${random}`
console.log(`Creating new loop: ${loopId}`)
```
#### Create Directory Structure
```bash
mkdir -p ${projectRoot}/.workflow/.loop/${loopId}.workers
mkdir -p ${projectRoot}/.workflow/.loop/${loopId}.progress
```
#### Initialize State File
```javascript
function createState(loopId, taskDescription, mode) {
const now = getUtc8ISOString()
const state = {
loop_id: loopId,
title: taskDescription.substring(0, 100),
description: taskDescription,
mode: mode,
status: 'running',
current_iteration: 0,
max_iterations: 10,
created_at: now,
updated_at: now,
skill_state: {
phase: 'init',
action_index: 0,
workers_completed: [],
parallel_results: null,
pending_tasks: [],
completed_tasks: [],
findings: []
}
}
Write(`${projectRoot}/.workflow/.loop/${loopId}.json`, JSON.stringify(state, null, 2))
return state
}
```
### Step 1.4: Resume Existing Loop
When `--loop-id` is provided:
```javascript
const loopId = existingLoopId
const state = readState(loopId)
if (!state) {
console.error(`Loop not found: ${loopId}`)
return { status: 'error', message: 'Loop not found' }
}
console.log(`Resuming loop: ${loopId}`)
console.log(`Mode: ${state.mode}, Status: ${state.status}`)
// Override mode if provided
if (options['--mode']) {
state.mode = options['--mode']
saveState(loopId, state)
}
```
### Step 1.5: Control Signal Check
```javascript
function checkControlSignals(loopId) {
const state = readState(loopId)
switch (state?.status) {
case 'paused':
return { continue: false, action: 'pause_exit' }
case 'failed':
return { continue: false, action: 'stop_exit' }
case 'running':
return { continue: true, action: 'continue' }
default:
return { continue: false, action: 'stop_exit' }
}
}
```
## Output
- **Variable**: `loopId` - Unique loop identifier
- **Variable**: `state` - Initialized or resumed loop state object
- **Variable**: `progressDir` - `${projectRoot}/.workflow/.loop/${loopId}.progress`
- **Variable**: `workersDir` - `${projectRoot}/.workflow/.loop/${loopId}.workers`
- **Variable**: `mode` - `'interactive'` / `'auto'` / `'parallel'`
- **TodoWrite**: Mark Phase 1 completed, Phase 2 in_progress
## Next Phase
Return to orchestrator, then auto-continue to [Phase 2: Orchestration Loop](02-orchestration-loop.md).


@@ -1,450 +0,0 @@
# Phase 2: Orchestration Loop
Run main orchestration loop with 3-mode dispatch: Interactive, Auto, Parallel.
## Objective
- Dispatch to appropriate mode handler based on `state.mode`
- Spawn workers with structured prompts (Goal/Scope/Context/Deliverables)
- Handle batch wait, timeout, two-phase clarification
- Parse WORKER_RESULT, update state per iteration
- close_agent after confirming no more interaction needed
- Exit on completion, pause, stop, or max iterations
## Execution
### Step 2.1: Mode Dispatch
```javascript
const mode = state.mode || 'interactive'
console.log(`=== CCW Loop-B Orchestrator (${mode} mode) ===`)
switch (mode) {
case 'interactive':
return await runInteractiveMode(loopId, state)
case 'auto':
return await runAutoMode(loopId, state)
case 'parallel':
return await runParallelMode(loopId, state)
}
```
### Step 2.2: Interactive Mode
```javascript
async function runInteractiveMode(loopId, state) {
while (state.status === 'running') {
// 1. Check control signals
const signal = checkControlSignals(loopId)
if (!signal.continue) break
// 2. Show menu, get user choice
const action = await showMenuAndGetChoice(state)
if (action === 'exit') {
state.status = 'user_exit'
saveState(loopId, state)
break
}
// 3. Spawn worker
const workerId = spawn_agent({
message: buildWorkerPrompt(action, loopId, state)
})
// 4. Wait for result (with two-phase clarification support)
let output = await waitWithClarification(workerId, action)
// 5. Process and persist output
const workerResult = parseWorkerResult(output)
persistWorkerOutput(loopId, action, workerResult)
state = processWorkerOutput(loopId, action, workerResult, state)
// 6. Cleanup worker
close_agent({ id: workerId })
// 7. Display result
displayResult(workerResult)
// 8. Update iteration
state.current_iteration++
saveState(loopId, state)
}
return { status: state.status, loop_id: loopId, iterations: state.current_iteration }
}
```
### Step 2.3: Auto Mode
```javascript
async function runAutoMode(loopId, state) {
const sequence = ['init', 'develop', 'debug', 'validate', 'complete']
let idx = state.skill_state?.action_index || 0
while (idx < sequence.length && state.status === 'running') {
// Check control signals
const signal = checkControlSignals(loopId)
if (!signal.continue) break
// Check iteration limit
if (state.current_iteration >= state.max_iterations) {
console.log(`Max iterations (${state.max_iterations}) reached`)
break
}
const action = sequence[idx]
// Spawn worker
const workerId = spawn_agent({
message: buildWorkerPrompt(action, loopId, state)
})
// Wait with two-phase clarification
let output = await waitWithClarification(workerId, action)
// Parse and persist
const workerResult = parseWorkerResult(output)
persistWorkerOutput(loopId, action, workerResult)
state = processWorkerOutput(loopId, action, workerResult, state)
close_agent({ id: workerId })
// Determine next step
if (workerResult.loop_back_to && workerResult.loop_back_to !== 'null') {
idx = sequence.indexOf(workerResult.loop_back_to)
if (idx === -1) idx = sequence.indexOf('develop') // fallback
} else if (workerResult.status === 'failed') {
console.log(`Worker ${action} failed: ${workerResult.summary}`)
break
} else {
idx++
}
// Update state
state.skill_state.action_index = idx
state.current_iteration++
saveState(loopId, state)
}
return { status: state.status, loop_id: loopId, iterations: state.current_iteration }
}
```
### Step 2.4: Parallel Mode
```javascript
async function runParallelMode(loopId, state) {
// 1. Run init worker first (sequential)
const initWorker = spawn_agent({
message: buildWorkerPrompt('init', loopId, state)
})
const initResult = wait({ ids: [initWorker], timeout_ms: 300000 })
const initOutput = parseWorkerResult(initResult.status[initWorker].completed)
persistWorkerOutput(loopId, 'init', initOutput)
state = processWorkerOutput(loopId, 'init', initOutput, state)
close_agent({ id: initWorker })
// 2. Spawn analysis workers in parallel
const workers = {
develop: spawn_agent({ message: buildWorkerPrompt('develop', loopId, state) }),
debug: spawn_agent({ message: buildWorkerPrompt('debug', loopId, state) }),
validate: spawn_agent({ message: buildWorkerPrompt('validate', loopId, state) })
}
// 3. Batch wait for all workers
const results = wait({
ids: Object.values(workers),
timeout_ms: 900000 // 15 minutes for all
})
// 4. Handle partial timeout
if (results.timed_out) {
console.log('Partial timeout - using completed results')
// Send convergence request to timed-out workers
for (const [role, workerId] of Object.entries(workers)) {
if (!results.status[workerId]?.completed) {
send_input({
id: workerId,
message: '## TIMEOUT\nPlease output WORKER_RESULT with current progress immediately.'
})
}
}
// Brief second wait for convergence
const retryResults = wait({ ids: Object.values(workers), timeout_ms: 60000 })
Object.assign(results.status, retryResults.status)
}
// 5. Collect and merge outputs
const outputs = {}
for (const [role, workerId] of Object.entries(workers)) {
const completed = results.status[workerId]?.completed
if (completed) {
outputs[role] = parseWorkerResult(completed)
persistWorkerOutput(loopId, role, outputs[role])
}
close_agent({ id: workerId })
}
// 6. Merge analysis
const mergedResults = mergeWorkerOutputs(outputs)
state.skill_state.parallel_results = mergedResults
state.current_iteration++
saveState(loopId, state)
// 7. Run complete worker
const completeWorker = spawn_agent({
message: buildWorkerPrompt('complete', loopId, state)
})
const completeResult = wait({ ids: [completeWorker], timeout_ms: 300000 })
const completeOutput = parseWorkerResult(completeResult.status[completeWorker].completed)
persistWorkerOutput(loopId, 'complete', completeOutput)
state = processWorkerOutput(loopId, 'complete', completeOutput, state)
close_agent({ id: completeWorker })
return { status: state.status, loop_id: loopId, iterations: state.current_iteration }
}
```
## Helper Functions
### buildWorkerPrompt
```javascript
function buildWorkerPrompt(action, loopId, state) {
const roleFiles = {
init: '~/.codex/agents/ccw-loop-b-init.md',
develop: '~/.codex/agents/ccw-loop-b-develop.md',
debug: '~/.codex/agents/ccw-loop-b-debug.md',
validate: '~/.codex/agents/ccw-loop-b-validate.md',
complete: '~/.codex/agents/ccw-loop-b-complete.md'
}
return `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ${roleFiles[action]} (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
3. Read: ${projectRoot}/.workflow/project-guidelines.json
---
Goal: Execute ${action} action for loop ${loopId}
Scope:
- 可做: ${action} 相关的所有操作
- 不可做: 其他 action 的操作
- 目录限制: 项目根目录
Context:
- Loop ID: ${loopId}
- Action: ${action}
- State File: ${projectRoot}/.workflow/.loop/${loopId}.json
- Output File: ${projectRoot}/.workflow/.loop/${loopId}.workers/${action}.output.json
- Progress File: ${projectRoot}/.workflow/.loop/${loopId}.progress/${action}.md
Deliverables:
- WORKER_RESULT 格式输出
- 写入 output.json 和 progress.md
## CURRENT STATE
${JSON.stringify(state, null, 2)}
## TASK DESCRIPTION
${state.description}
## EXPECTED OUTPUT
\`\`\`
WORKER_RESULT:
- action: ${action}
- status: success | failed | needs_input
- summary: <brief summary>
- files_changed: [list]
- next_suggestion: <suggested next action>
- loop_back_to: <action name if needs loop back, or null>
DETAILED_OUTPUT:
<action-specific structured output>
\`\`\`
Execute the ${action} action now.
`
}
```
### waitWithClarification (Two-Phase Workflow)
```javascript
async function waitWithClarification(workerId, action) {
const result = wait({ ids: [workerId], timeout_ms: 600000 })
// Handle timeout
if (result.timed_out) {
send_input({
id: workerId,
message: '## TIMEOUT\nPlease converge and output WORKER_RESULT with current progress.'
})
const retry = wait({ ids: [workerId], timeout_ms: 300000 })
if (retry.timed_out) {
return `WORKER_RESULT:\n- action: ${action}\n- status: failed\n- summary: Worker timeout\n\nNEXT_ACTION_NEEDED: NONE`
}
return retry.status[workerId].completed
}
const output = result.status[workerId].completed
// Check if worker needs clarification (two-phase)
if (output.includes('CLARIFICATION_NEEDED')) {
// Collect user answers
const questions = parseClarificationQuestions(output)
const userAnswers = await collectUserAnswers(questions)
// Send answers back to worker
send_input({
id: workerId,
message: `
## CLARIFICATION ANSWERS
${userAnswers.map(a => `Q: ${a.question}\nA: ${a.answer}`).join('\n\n')}
## CONTINUE EXECUTION
Based on clarification answers, continue with the ${action} action.
Output WORKER_RESULT when complete.
`
})
// Wait for final result
const finalResult = wait({ ids: [workerId], timeout_ms: 600000 })
return finalResult.status[workerId]?.completed || output
}
return output
}
```
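`parseClarificationQuestions` and `collectUserAnswers` are referenced above but not shown; a plausible sketch that reuses the `ASK_USER` helper from `showMenuAndGetChoice` (the free-text question type is an assumption):
```javascript
// Pull the open questions listed after the CLARIFICATION_NEEDED marker (one "- question" per line).
function parseClarificationQuestions(output) {
  const block = output.split('CLARIFICATION_NEEDED')[1] || ''
  return block.split('\n').map(l => l.match(/^-\s*(.+)$/)?.[1]).filter(Boolean)
}

// Ask the user each question, then pair answers with questions for send_input.
async function collectUserAnswers(questions) {
  const response = await ASK_USER(questions.map((q, i) => ({
    id: `Q${i + 1}`, type: 'text', prompt: q
  })))
  return questions.map((q, i) => ({ question: q, answer: response[`Q${i + 1}`] }))
}
```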
### parseWorkerResult
```javascript
function parseWorkerResult(output) {
const result = {
action: 'unknown',
status: 'unknown',
summary: '',
files_changed: [],
next_suggestion: null,
loop_back_to: null,
detailed_output: ''
}
// Parse WORKER_RESULT block
const match = output.match(/WORKER_RESULT:\s*([\s\S]*?)(?:DETAILED_OUTPUT:|$)/)
if (match) {
const lines = match[1].split('\n')
for (const line of lines) {
const m = line.match(/^-\s*(\w[\w_]*):\s*(.+)$/)
if (m) {
const [, key, value] = m
if (key === 'files_changed') {
try { result.files_changed = JSON.parse(value) } catch {}
} else {
result[key] = value.trim()
}
}
}
}
// Parse DETAILED_OUTPUT
const detailMatch = output.match(/DETAILED_OUTPUT:\s*([\s\S]*)$/)
if (detailMatch) {
result.detailed_output = detailMatch[1].trim()
}
return result
}
```
### mergeWorkerOutputs (Parallel Mode)
```javascript
function mergeWorkerOutputs(outputs) {
const merged = {
develop: outputs.develop || null,
debug: outputs.debug || null,
validate: outputs.validate || null,
conflicts: [],
merged_at: getUtc8ISOString()
}
// Detect file conflicts: multiple workers suggest modifying same file
const allFiles = {}
for (const [role, output] of Object.entries(outputs)) {
if (output?.files_changed) {
for (const file of output.files_changed) {
if (allFiles[file]) {
merged.conflicts.push({
file,
workers: [allFiles[file], role],
resolution: 'manual'
})
} else {
allFiles[file] = role
}
}
}
}
return merged
}
```
### showMenuAndGetChoice
```javascript
async function showMenuAndGetChoice(state) {
const ss = state.skill_state
const pendingCount = ss?.pending_tasks?.length || 0
const completedCount = ss?.completed_tasks?.length || 0
const response = await ASK_USER([{
id: "Action", type: "select",
prompt: `Select next action (completed: ${completedCount}, pending: ${pendingCount}):`,
options: [
{ label: "develop", description: `Continue development (${pendingCount} pending)` },
{ label: "debug", description: "Start debugging / diagnosis" },
{ label: "validate", description: "Run tests and validation" },
{ label: "complete", description: "Complete loop and generate summary" },
{ label: "exit", description: "Exit and save progress" }
]
}]) // BLOCKS (wait for user response)
return response["Action"]
}
```
### persistWorkerOutput
```javascript
function persistWorkerOutput(loopId, action, workerResult) {
const outputPath = `${projectRoot}/.workflow/.loop/${loopId}.workers/${action}.output.json`
Write(outputPath, JSON.stringify({
...workerResult,
timestamp: getUtc8ISOString()
}, null, 2))
}
```
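### getUtc8ISOString (sketch)
`getUtc8ISOString()` is used for every timestamp in this skill but lives in the Phase 1 utilities; a minimal sketch, assuming the convention is an ISO-8601 string rendered with a UTC+8 offset:
```javascript
// Sketch — assumes "UTC+8 ISO string" means the current instant rendered with a +08:00 offset.
function getUtc8ISOString() {
  const shifted = new Date(Date.now() + 8 * 60 * 60 * 1000)
  return shifted.toISOString().replace('Z', '+08:00')
}
```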
## Output
- **Return**: `{ status, loop_id, iterations }`
- **TodoWrite**: Mark Phase 2 completed
## Next Phase
None. Phase 2 is the terminal phase of the orchestrator.

View File

@@ -1,257 +0,0 @@
# Orchestrator (Hybrid Pattern)
The orchestrator is responsible for state management, worker scheduling, and result aggregation.
## Role
```
Read state -> Select mode -> Spawn workers -> Wait results -> Merge -> Update state -> Loop/Exit
```
## State Management
### Read State
```javascript
function readState(loopId) {
const stateFile = `${projectRoot}/.workflow/.loop/${loopId}.json`
return fs.existsSync(stateFile)
? JSON.parse(Read(stateFile))
: null
}
```
### Create State
```javascript
function createState(loopId, taskDescription, mode) {
const now = new Date().toISOString()
return {
loop_id: loopId,
title: taskDescription.substring(0, 100),
description: taskDescription,
mode: mode,
status: 'running',
current_iteration: 0,
max_iterations: 10,
created_at: now,
updated_at: now,
skill_state: {
phase: 'init',
action_index: 0,
workers_completed: [],
parallel_results: null
}
}
}
```
## Mode Handlers
### Interactive Mode
```javascript
async function runInteractiveMode(loopId, state) {
while (state.status === 'running') {
// 1. Show menu
const action = await showMenu(state)
if (action === 'exit') break
// 2. Spawn worker
const worker = spawn_agent({
message: buildWorkerPrompt(action, loopId, state)
})
// 3. Wait for result
let result = wait({ ids: [worker], timeout_ms: 600000 })
// 4. Handle timeout
if (result.timed_out) {
send_input({ id: worker, message: 'Please converge and output WORKER_RESULT' })
const retryResult = wait({ ids: [worker], timeout_ms: 300000 })
if (retryResult.timed_out) {
console.log('Worker timeout, skipping')
close_agent({ id: worker })
continue
}
result = retryResult // use the output from the converged retry, not the timed-out wait
}
// 5. Process output
const output = result.status[worker].completed
state = processWorkerOutput(loopId, action, output, state)
// 6. Cleanup
close_agent({ id: worker })
// 7. Display result
displayResult(output)
}
}
```
### Auto Mode
```javascript
async function runAutoMode(loopId, state) {
const sequence = ['init', 'develop', 'debug', 'validate', 'complete']
let idx = state.skill_state?.action_index || 0
while (idx < sequence.length && state.status === 'running') {
const action = sequence[idx]
// Spawn and wait
const worker = spawn_agent({ message: buildWorkerPrompt(action, loopId, state) })
const result = wait({ ids: [worker], timeout_ms: 600000 })
const output = result.status[worker].completed
close_agent({ id: worker })
// Parse result
const workerResult = parseWorkerResult(output)
state = processWorkerOutput(loopId, action, output, state)
// Determine next
if (workerResult.loop_back_to) {
idx = sequence.indexOf(workerResult.loop_back_to)
} else if (workerResult.status === 'failed') {
break
} else {
idx++
}
// Update action index
state.skill_state.action_index = idx
saveState(loopId, state)
}
}
```
### Parallel Mode
```javascript
async function runParallelMode(loopId, state) {
// Spawn all workers
const workers = {
develop: spawn_agent({ message: buildWorkerPrompt('develop', loopId, state) }),
debug: spawn_agent({ message: buildWorkerPrompt('debug', loopId, state) }),
validate: spawn_agent({ message: buildWorkerPrompt('validate', loopId, state) })
}
// Batch wait
const results = wait({
ids: Object.values(workers),
timeout_ms: 900000
})
// Collect outputs
const outputs = {}
for (const [role, id] of Object.entries(workers)) {
if (results.status[id].completed) {
outputs[role] = results.status[id].completed
}
close_agent({ id })
}
// Merge analysis
state.skill_state.parallel_results = outputs
saveState(loopId, state)
// Coordinator analyzes merged results
return analyzeAndDecide(outputs)
}
```
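`analyzeAndDecide` is not defined here; a minimal sketch of one possible coordinator decision, assuming the collected outputs follow the WORKER_RESULT format parsed by `parseWorkerResult`:
```javascript
// Sketch only — the decision rules are an illustration, not part of the documented contract.
function analyzeAndDecide(outputs) {
  const parsed = Object.fromEntries(
    Object.entries(outputs).map(([role, raw]) => [role, parseWorkerResult(raw)])
  )
  if (parsed.validate?.status === 'failed') return { next_action: 'develop', reason: 'tests failing' }
  if (parsed.debug?.status === 'failed') return { next_action: 'debug', reason: 'diagnosis incomplete' }
  return { next_action: 'complete', reason: 'all parallel workers succeeded' }
}
```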
## Worker Prompt Template
```javascript
function buildWorkerPrompt(action, loopId, state) {
const roleFiles = {
init: '~/.codex/agents/ccw-loop-b-init.md',
develop: '~/.codex/agents/ccw-loop-b-develop.md',
debug: '~/.codex/agents/ccw-loop-b-debug.md',
validate: '~/.codex/agents/ccw-loop-b-validate.md',
complete: '~/.codex/agents/ccw-loop-b-complete.md'
}
return `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS
1. **Read role definition**: ${roleFiles[action]}
2. Read: ${projectRoot}/.workflow/project-tech.json
3. Read: ${projectRoot}/.workflow/project-guidelines.json
---
## CONTEXT
- Loop ID: ${loopId}
- Action: ${action}
- State: ${JSON.stringify(state, null, 2)}
## TASK
${state.description}
## OUTPUT FORMAT
\`\`\`
WORKER_RESULT:
- action: ${action}
- status: success | failed | needs_input
- summary: <brief>
- files_changed: []
- next_suggestion: <action>
- loop_back_to: <action or null>
DETAILED_OUTPUT:
<action-specific output>
\`\`\`
`
}
```
## Result Processing
```javascript
function parseWorkerResult(output) {
const result = {
action: 'unknown',
status: 'unknown',
summary: '',
files_changed: [],
next_suggestion: null,
loop_back_to: null
}
const match = output.match(/WORKER_RESULT:\s*([\s\S]*?)(?:DETAILED_OUTPUT:|$)/)
if (match) {
const lines = match[1].split('\n')
for (const line of lines) {
const m = line.match(/^-\s*(\w+):\s*(.+)$/)
if (m) {
const [, key, value] = m
if (key === 'files_changed') {
try { result.files_changed = JSON.parse(value) } catch {}
} else {
result[key] = value.trim()
}
}
}
}
return result
}
```
## Termination Conditions
1. User exits (interactive)
2. Sequence complete (auto)
3. Worker failed with no recovery
4. Max iterations reached
5. API paused/stopped
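A minimal sketch of how these conditions might fold into a single per-iteration check (return values are illustrative; user exit and sequence completion stay with the mode handlers):
```javascript
// Sketch — consolidates termination conditions 3–5 above; conditions 1 and 2 are handled by the mode handlers.
function shouldTerminate(state, lastResult) {
  if (state.status !== 'running') return 'status_changed'                                    // API paused/stopped
  if (state.current_iteration >= state.max_iterations) return 'max_iterations_reached'
  if (lastResult?.status === 'failed' && !lastResult.loop_back_to) return 'unrecoverable_failure'
  return null
}
```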
## Best Practices
1. **Worker lifecycle**: spawn → wait → close; do not keep workers around
2. **Result persistence**: Write worker outputs to `{projectRoot}/.workflow/.loop/{loopId}.workers/`
3. **State synchronization**: Update state after every worker completes
4. **Timeout handling**: Use send_input to request convergence; skip the worker if it times out again

View File

@@ -1,181 +0,0 @@
# State Schema (CCW Loop-B)
## Master State Structure
```json
{
"loop_id": "loop-b-20260122-abc123",
"title": "Implement user authentication",
"description": "Full task description here",
"mode": "interactive | auto | parallel",
"status": "running | paused | completed | failed",
"current_iteration": 3,
"max_iterations": 10,
"created_at": "2026-01-22T10:00:00.000Z",
"updated_at": "2026-01-22T10:30:00.000Z",
"skill_state": {
"phase": "develop | debug | validate | complete",
"action_index": 2,
"workers_completed": ["init", "develop"],
"parallel_results": null,
"pending_tasks": [],
"completed_tasks": [],
"findings": []
}
}
```
## Field Descriptions
### Core Fields (API Compatible)
| Field | Type | Description |
|-------|------|-------------|
| `loop_id` | string | Unique identifier |
| `title` | string | Short title (max 100 chars) |
| `description` | string | Full task description |
| `mode` | enum | Execution mode |
| `status` | enum | Current status |
| `current_iteration` | number | Iteration counter |
| `max_iterations` | number | Safety limit |
| `created_at` | ISO string | Creation timestamp |
| `updated_at` | ISO string | Last update timestamp |
### Skill State Fields
| Field | Type | Description |
|-------|------|-------------|
| `phase` | enum | Current execution phase |
| `action_index` | number | Position in action sequence (auto mode) |
| `workers_completed` | array | List of completed worker actions |
| `parallel_results` | object | Merged results from parallel mode |
| `pending_tasks` | array | Tasks waiting to be executed |
| `completed_tasks` | array | Tasks already done |
| `findings` | array | Discoveries during execution |
## Worker Output Structure
Each worker writes to `{projectRoot}/.workflow/.loop/{loopId}.workers/{action}.output.json`:
```json
{
"action": "develop",
"status": "success",
"summary": "Implemented 3 functions",
"files_changed": ["src/auth.ts", "src/utils.ts"],
"next_suggestion": "validate",
"loop_back_to": null,
"timestamp": "2026-01-22T10:15:00.000Z",
"detailed_output": {
"tasks_completed": [
{ "id": "T1", "description": "Create auth module" }
],
"metrics": {
"lines_added": 150,
"lines_removed": 20
}
}
}
```
## Progress File Structure
Human-readable progress in `{projectRoot}/.workflow/.loop/{loopId}.progress/{action}.md`:
```markdown
# Develop Progress
## Session: loop-b-20260122-abc123
### Iteration 1 (2026-01-22 10:15)
**Task**: Implement auth module
**Changes**:
- Created `src/auth.ts` with login/logout functions
- Added JWT token handling in `src/utils.ts`
**Status**: Success
---
### Iteration 2 (2026-01-22 10:30)
...
```
## Status Transitions
```
             +--------+
             |  init  |
             +--------+
                  |
                  v
     +----> +---------+
     |      | develop |
     |      +---------+
     |           |
     |      +----+-----+
     |      |          |
     |      v          v
     |  +-------+  +----------+
     |  | debug |<-| validate |
     |  +-------+  +----------+
     |      |          |
     |      +----+-----+
     |           |
     |           v
     |     [needs fix?]
     |     yes |   | no
     +---------+   v
              +----------+
              | complete |
              +----------+
```
## Parallel Results Schema
When `mode === 'parallel'`:
```json
{
"parallel_results": {
"develop": {
"status": "success",
"summary": "...",
"suggestions": []
},
"debug": {
"status": "success",
"issues_found": [],
"suggestions": []
},
"validate": {
"status": "success",
"test_results": {},
"coverage": {}
},
"merged_at": "2026-01-22T10:45:00.000Z"
}
}
```
## Directory Structure
```
{projectRoot}/.workflow/.loop/
+-- loop-b-20260122-abc123.json # Master state
+-- loop-b-20260122-abc123.workers/
| +-- init.output.json
| +-- develop.output.json
| +-- debug.output.json
| +-- validate.output.json
| +-- complete.output.json
+-- loop-b-20260122-abc123.progress/
+-- develop.md
+-- debug.md
+-- validate.md
+-- summary.md
```

View File

@@ -1,383 +0,0 @@
# Action Catalog (CCW Loop-B)
Complete reference of worker actions and their capabilities.
## Action Matrix
| Action | Worker Agent | Purpose | Input Requirements | Output |
|--------|--------------|---------|-------------------|--------|
| init | ccw-loop-b-init.md | Session initialization | Task description | Task breakdown + execution plan |
| develop | ccw-loop-b-develop.md | Code implementation | Task list | Code changes + progress update |
| debug | ccw-loop-b-debug.md | Problem diagnosis | Issue description | Root cause analysis + fix suggestions |
| validate | ccw-loop-b-validate.md | Testing and verification | Files to test | Test results + coverage report |
| complete | ccw-loop-b-complete.md | Session finalization | All worker outputs | Summary + commit message |
## Detailed Action Specifications
### INIT
**Purpose**: Parse requirements, create execution plan
**Preconditions**:
- `status === 'running'`
- `skill_state === null` (first time)
**Input**:
```
- Task description (text)
- Project context files
```
**Execution**:
1. Read `{projectRoot}/.workflow/project-tech.json`
2. Read `{projectRoot}/.workflow/project-guidelines.json`
3. Parse task into phases
4. Create task breakdown
5. Generate execution plan
**Output**:
```
WORKER_RESULT:
- action: init
- status: success
- summary: "Initialized with 5 tasks"
- next_suggestion: develop
TASK_BREAKDOWN:
- T1: Create auth module
- T2: Implement JWT utils
- T3: Write tests
- T4: Validate implementation
- T5: Documentation
EXECUTION_PLAN:
1. Develop (T1-T2)
2. Validate (T3-T4)
3. Complete (T5)
```
**Effects**:
- `skill_state.pending_tasks` populated
- Progress structure created
- Ready for develop phase
---
### DEVELOP
**Purpose**: Implement code, create/modify files
**Preconditions**:
- `skill_state.pending_tasks.length > 0`
- `status === 'running'`
**Input**:
```
- Task list from state
- Project conventions
- Existing code patterns
```
**Execution**:
1. Load pending tasks
2. Find existing patterns
3. Implement tasks one by one
4. Update progress file
5. Mark tasks completed
**Output**:
```
WORKER_RESULT:
- action: develop
- status: success
- summary: "Implemented 3 tasks"
- files_changed: ["src/auth.ts", "src/utils.ts"]
- next_suggestion: validate
DETAILED_OUTPUT:
tasks_completed: [T1, T2]
metrics:
lines_added: 180
lines_removed: 15
```
**Effects**:
- Files created/modified
- `skill_state.completed_tasks` updated
- Progress documented
**Failure Modes**:
- Pattern unclear → suggest debug
- Task blocked → mark blocked, continue
- Partial completion → set `loop_back_to: "develop"`
---
### DEBUG
**Purpose**: Diagnose issues, root cause analysis
**Preconditions**:
- Issue exists (test failure, bug report, etc.)
- `status === 'running'`
**Input**:
```
- Issue description
- Error messages
- Stack traces
- Reproduction steps
```
**Execution**:
1. Understand problem symptoms
2. Gather evidence from code
3. Form hypothesis
4. Test hypothesis
5. Document root cause
6. Suggest fixes
**Output**:
```
WORKER_RESULT:
- action: debug
- status: success
- summary: "Root cause: memory leak in event listeners"
- next_suggestion: develop (apply fixes)
ROOT_CAUSE_ANALYSIS:
hypothesis: "Listener accumulation"
confidence: high
evidence: [...]
mechanism: "Detailed explanation"
FIX_RECOMMENDATIONS:
1. Add removeAllListeners() on disconnect
2. Verification: Monitor memory usage
```
**Effects**:
- `skill_state.findings` updated
- Fix recommendations documented
- Ready for develop to apply fixes
**Failure Modes**:
- Insufficient info → request more data
- Multiple hypotheses → rank by likelihood
- Inconclusive → suggest investigation areas
---
### VALIDATE
**Purpose**: Run tests, check coverage, quality gates
**Preconditions**:
- Code exists to validate
- `status === 'running'`
**Input**:
```
- Files to test
- Test configuration
- Coverage requirements
```
**Execution**:
1. Identify test framework
2. Run unit tests
3. Run integration tests
4. Measure coverage
5. Check quality (lint, types, security)
6. Generate report
**Output**:
```
WORKER_RESULT:
- action: validate
- status: success
- summary: "113 tests pass, coverage 95%"
- next_suggestion: complete (all pass) | develop (fix failures)
TEST_RESULTS:
unit_tests: { passed: 98, failed: 0 }
integration_tests: { passed: 15, failed: 0 }
coverage: "95%"
QUALITY_CHECKS:
lint: ✓ Pass
types: ✓ Pass
security: ✓ Pass
```
**Effects**:
- Test results documented
- Coverage measured
- Quality gates verified
**Failure Modes**:
- Tests fail → document failures, suggest fixes
- Coverage low → identify gaps
- Quality issues → flag problems
---
### COMPLETE
**Purpose**: Finalize session, generate summary, commit
**Preconditions**:
- All tasks completed
- Tests passing
- `status === 'running'`
**Input**:
```
- All worker outputs
- Progress files
- Current state
```
**Execution**:
1. Read all worker outputs
2. Consolidate achievements
3. Verify completeness
4. Generate summary
5. Prepare commit message
6. Cleanup and archive
**Output**:
```
WORKER_RESULT:
- action: complete
- status: success
- summary: "Session completed successfully"
- next_suggestion: null
SESSION_SUMMARY:
achievements: [...]
files_changed: [...]
test_results: { ... }
quality_checks: { ... }
COMMIT_SUGGESTION:
message: "feat: ..."
files: [...]
ready_for_pr: true
```
**Effects**:
- `status` → 'completed'
- Summary file created
- Progress archived
- Commit message ready
**Failure Modes**:
- Pending tasks remain → mark partial
- Quality gates fail → list failures
---
## Action Flow Diagrams
### Interactive Mode Flow
```
+------+
| INIT |
+------+
    |
    v
+------+    user selects    +--------------+
| MENU | -----------------> | spawn worker |
+------+                    +--------------+
    ^                              |
    |                              v
    |                       +--------------+
    |                       | wait result  |
    |                       +--------------+
    |                              |
    |                              v
    |                       +--------------+
    +----- [not completed] -| update state |
                            +--------------+
                                   |
                             [completed?]
                                   | yes
                                   v
                             +----------+
                             | COMPLETE |
                             +----------+
```
### Auto Mode Flow
```
+------+      +---------+      +-------+      +----------+      +----------+
| INIT | ---> | DEVELOP | ---> | DEBUG | ---> | VALIDATE | ---> | COMPLETE |
+------+      +---------+      +-------+      +----------+      +----------+
                   ^                |               |
                   |    [issues]    |               |
                   +----------------+               |
                   +--------------------------------+
                               [tests fail]
```
### Parallel Mode Flow
```
+------+
| INIT |
+------+
|
v
+---------------------+
| spawn all workers |
| [develop, debug, |
| validate] |
+---------------------+
|
v
+---------------------+
| wait({ ids: all }) |
+---------------------+
|
v
+---------------------+
| merge results |
+---------------------+
|
v
+---------------------+
| coordinator decides |
+---------------------+
|
v
+----------+
| COMPLETE |
+----------+
```
## Worker Coordination
| Scenario | Worker Sequence | Mode |
|----------|-----------------|------|
| Simple task | init → develop → validate → complete | Auto |
| Complex task | init → develop → debug → develop → validate → complete | Auto |
| Bug fix | init → debug → develop → validate → complete | Auto |
| Analysis | init → [develop \|\| debug \|\| validate] → complete | Parallel |
| Interactive | init → menu → user selects → worker → menu → ... | Interactive |
## Best Practices
1. **Init always first**: Parse requirements before execution
2. **Validate often**: After each develop phase
3. **Debug when needed**: Don't skip diagnosis
4. **Complete always last**: Ensure proper cleanup
5. **Use parallel wisely**: For independent analysis tasks
6. **Follow sequence**: In auto mode, respect dependencies

View File

@@ -1,168 +0,0 @@
# Worker: COMPLETE
Session finalization worker. Aggregate results, generate summary, cleanup.
## Purpose
- Aggregate all worker results into comprehensive summary
- Verify completeness of tasks
- Generate commit message suggestion
- Offer expansion options
- Mark loop as completed
## Preconditions
- `state.status === 'running'`
## Execution
### Step 1: Read All Worker Outputs
```javascript
const workerOutputs = {}
for (const action of ['init', 'develop', 'debug', 'validate']) {
const outputPath = `${workersDir}/${action}.output.json`
if (fs.existsSync(outputPath)) {
workerOutputs[action] = JSON.parse(Read(outputPath))
}
}
```
### Step 2: Aggregate Statistics
```javascript
const stats = {
duration: Date.now() - new Date(state.created_at).getTime(),
iterations: state.current_iteration,
tasks_completed: state.skill_state.completed_tasks.length,
tasks_total: state.skill_state.completed_tasks.length + state.skill_state.pending_tasks.length,
files_changed: collectAllFilesChanged(workerOutputs),
test_passed: workerOutputs.validate?.summary?.passed || 0,
test_total: workerOutputs.validate?.summary?.total || 0,
coverage: workerOutputs.validate?.coverage || 'N/A'
}
```
### Step 3: Generate Summary
```javascript
Write(`${progressDir}/summary.md`, `# CCW Loop-B Session Summary
**Loop ID**: ${loopId}
**Task**: ${state.description}
**Mode**: ${state.mode}
**Started**: ${state.created_at}
**Completed**: ${getUtc8ISOString()}
**Duration**: ${formatDuration(stats.duration)}
---
## Results
| Metric | Value |
|--------|-------|
| Iterations | ${stats.iterations} |
| Tasks Completed | ${stats.tasks_completed}/${stats.tasks_total} |
| Tests | ${stats.test_passed}/${stats.test_total} |
| Coverage | ${stats.coverage} |
| Files Changed | ${stats.files_changed.length} |
## Files Changed
${stats.files_changed.map(f => `- \`${f}\``).join('\n') || '- None'}
## Worker Summary
${Object.entries(workerOutputs).map(([action, output]) => `
### ${action}
- Status: ${output.status}
- Summary: ${output.summary}
`).join('\n')}
## Recommendations
${generateRecommendations(stats, state)}
---
*Generated by CCW Loop-B at ${getUtc8ISOString()}*
`)
```
### Step 4: Generate Commit Suggestion
```javascript
const commitSuggestion = {
message: generateCommitMessage(state.description, stats),
files: stats.files_changed,
ready_for_pr: stats.test_passed > 0 && stats.tasks_completed === stats.tasks_total
}
```
### Step 5: Update State
```javascript
state.status = 'completed'
state.completed_at = getUtc8ISOString()
state.skill_state.phase = 'complete'
state.skill_state.workers_completed.push('complete')
saveState(loopId, state)
```
## Output Format
```
WORKER_RESULT:
- action: complete
- status: success
- summary: Loop completed. {tasks_completed} tasks, {test_passed} tests pass
- files_changed: []
- next_suggestion: null
- loop_back_to: null
DETAILED_OUTPUT:
SESSION_SUMMARY:
achievements: [...]
files_changed: [...]
test_results: { passed: N, total: N }
COMMIT_SUGGESTION:
message: "feat: ..."
files: [...]
ready_for_pr: true
EXPANSION_OPTIONS:
1. [test] Add more test cases
2. [enhance] Feature enhancements
3. [refactor] Code refactoring
4. [doc] Documentation updates
```
## Helper Functions
```javascript
function formatDuration(ms) {
const seconds = Math.floor(ms / 1000)
const minutes = Math.floor(seconds / 60)
const hours = Math.floor(minutes / 60)
if (hours > 0) return `${hours}h ${minutes % 60}m`
if (minutes > 0) return `${minutes}m ${seconds % 60}s`
return `${seconds}s`
}
function generateRecommendations(stats, state) {
const recs = []
if (stats.tasks_completed < stats.tasks_total) recs.push('- Complete remaining tasks')
if (stats.test_passed < stats.test_total) recs.push('- Fix failing tests')
if (stats.coverage !== 'N/A' && parseFloat(stats.coverage) < 80) recs.push(`- Improve coverage (${stats.coverage}%)`)
if (recs.length === 0) recs.push('- Consider code review', '- Update documentation')
return recs.join('\n')
}
```
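`collectAllFilesChanged` and `generateCommitMessage` are referenced in Steps 2 and 4 but not defined in this document; minimal sketches under the same assumptions:
```javascript
// Sketch — dedupes files_changed across every persisted worker output.
function collectAllFilesChanged(workerOutputs) {
  const files = new Set()
  for (const output of Object.values(workerOutputs)) {
    (output.files_changed || []).forEach(f => files.add(f))
  }
  return [...files]
}

// Sketch — the "feat:" prefix is an assumption; a real implementation would derive the type from the task.
function generateCommitMessage(description, stats) {
  const title = description.split('\n')[0].substring(0, 60)
  return `feat: ${title}\n\n- ${stats.tasks_completed}/${stats.tasks_total} tasks completed\n- ${stats.test_passed}/${stats.test_total} tests passing`
}
```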
## Error Handling
| Error | Recovery |
|-------|----------|
| Missing worker outputs | Generate partial summary |
| State write failed | Retry, then report |

View File

@@ -1,148 +0,0 @@
# Worker: DEBUG
Problem diagnosis worker. Hypothesis-driven debugging with evidence tracking.
## Purpose
- Locate error source and understand failure mechanism
- Generate testable hypotheses ranked by likelihood
- Collect evidence and evaluate against criteria
- Document root cause and fix recommendations
## Preconditions
- Issue exists (test failure, bug report, blocked task)
- `state.status === 'running'`
## Mode Detection
```javascript
const debugPath = `${progressDir}/debug.md`
const debugExists = fs.existsSync(debugPath)
const debugMode = debugExists ? 'continue' : 'explore'
```
## Execution
### Mode: Explore (First Debug)
#### Step E1: Understand Problem
```javascript
// From test failures, blocked tasks, or user description
const bugDescription = state.skill_state.findings?.[0]
|| state.description
```
#### Step E2: Search Codebase
```javascript
const searchResults = mcp__ace_tool__search_context({
project_root_path: '.',
query: `code related to: ${bugDescription}`
})
```
#### Step E3: Generate Hypotheses
```javascript
const hypotheses = [
{
id: 'H1',
description: 'Most likely cause',
testable_condition: 'What to check',
confidence: 'high | medium | low',
evidence: [],
mechanism: 'Detailed explanation of how this causes the bug'
},
// H2, H3...
]
```
#### Step E4: Create Understanding Document
```javascript
Write(`${progressDir}/debug.md`, `# Debug Understanding
**Loop ID**: ${loopId}
**Bug**: ${bugDescription}
**Started**: ${getUtc8ISOString()}
---
## Hypotheses
${hypotheses.map(h => `
### ${h.id}: ${h.description}
- Confidence: ${h.confidence}
- Testable: ${h.testable_condition}
- Mechanism: ${h.mechanism}
`).join('\n')}
## Evidence
[To be collected]
## Root Cause
[Pending investigation]
`)
```
### Mode: Continue (Previous Debug Exists)
#### Step C1: Review Previous Findings
```javascript
const previousDebug = Read(`${progressDir}/debug.md`)
// Continue investigation based on previous findings
```
#### Step C2: Apply Fix and Verify
```javascript
// If root cause identified, apply fix
// Record fix in progress document
```
## Output Format
```
WORKER_RESULT:
- action: debug
- status: success
- summary: Root cause: {description}
- files_changed: []
- next_suggestion: develop
- loop_back_to: develop
DETAILED_OUTPUT:
ROOT_CAUSE_ANALYSIS:
hypothesis: "H1: {description}"
confidence: high
evidence: [...]
mechanism: "Detailed explanation"
FIX_RECOMMENDATIONS:
1. {specific fix action}
2. {verification step}
```
## Clarification Mode
If insufficient information:
```
CLARIFICATION_NEEDED:
Q1: Can you reproduce the issue? | Options: [Yes, No, Sometimes] | Recommended: [Yes]
Q2: When did this start? | Options: [Recent change, Always, Unknown] | Recommended: [Recent change]
```
## Error Handling
| Error | Recovery |
|-------|----------|
| Insufficient info | Output CLARIFICATION_NEEDED |
| All hypotheses rejected | Generate new hypotheses |
| >5 iterations | Suggest escalation |

View File

@@ -1,123 +0,0 @@
# Worker: DEVELOP
Code implementation worker. Execute pending tasks, record changes.
## Purpose
- Execute next pending development task
- Implement code changes following project conventions
- Record progress to markdown and NDJSON log
- Update task status in state
## Preconditions
- `state.skill_state.pending_tasks.length > 0`
- `state.status === 'running'`
## Execution
### Step 1: Find Pending Task
```javascript
const tasks = state.skill_state.pending_tasks
const currentTask = tasks.find(t => t.status === 'pending')
if (!currentTask) {
// All tasks done — return a WORKER_RESULT with next_suggestion: 'validate' and stop here
return
}
currentTask.status = 'in_progress'
```
### Step 2: Find Existing Patterns
```javascript
// Use ACE search_context to find similar implementations
const patterns = mcp__ace_tool__search_context({
project_root_path: '.',
query: `implementation patterns for: ${currentTask.description}`
})
// Study 3+ similar features/components
// Follow existing conventions
```
### Step 3: Implement Task
```javascript
// Use appropriate tools:
// - ACE search_context for finding patterns
// - Read for loading files
// - Edit/Write for making changes
const filesChanged = []
// ... implementation logic ...
```
### Step 4: Record Changes
```javascript
// Append to progress document
const progressEntry = `
### Task ${currentTask.id} - ${currentTask.description} (${getUtc8ISOString()})
**Files Changed**:
${filesChanged.map(f => `- \`${f}\``).join('\n')}
**Summary**: [implementation description]
**Status**: COMPLETED
---
`
const existingProgress = Read(`${progressDir}/develop.md`)
Write(`${progressDir}/develop.md`, existingProgress + progressEntry)
```
### Step 5: Update State
```javascript
currentTask.status = 'completed'
state.skill_state.completed_tasks.push(currentTask)
state.skill_state.pending_tasks = tasks.filter(t => t.status === 'pending')
saveState(loopId, state)
```
## Output Format
```
WORKER_RESULT:
- action: develop
- status: success
- summary: Implemented: {task_description}
- files_changed: ["file1.ts", "file2.ts"]
- next_suggestion: develop | validate
- loop_back_to: null
DETAILED_OUTPUT:
tasks_completed: [T1]
tasks_remaining: [T2, T3]
metrics:
lines_added: 180
lines_removed: 15
```
## Clarification Mode
If task is ambiguous, output:
```
CLARIFICATION_NEEDED:
Q1: [question about implementation approach] | Options: [A, B] | Recommended: [A]
Q2: [question about scope] | Options: [A, B, C] | Recommended: [B]
```
## Error Handling
| Error | Recovery |
|-------|----------|
| Pattern unclear | Output CLARIFICATION_NEEDED |
| Task blocked | Mark blocked, suggest debug |
| Partial completion | Set loop_back_to: "develop" |

View File

@@ -1,115 +0,0 @@
# Worker: INIT
Session initialization worker. Parse requirements, create execution plan.
## Purpose
- Parse task description and project context
- Break task into development phases
- Generate initial task list
- Create progress document structure
## Preconditions
- `state.status === 'running'`
- `state.skill_state.phase === 'init'` or first run
## Execution
### Step 1: Read Project Context
```javascript
// MANDATORY FIRST STEPS (already in prompt)
// 1. Read role definition
// 2. Read ${projectRoot}/.workflow/project-tech.json
// 3. Read ${projectRoot}/.workflow/project-guidelines.json
```
### Step 2: Analyze Task
```javascript
// Use ACE search_context to find relevant patterns
const searchResults = mcp__ace_tool__search_context({
project_root_path: '.',
query: `code related to: ${state.description}`
})
// Parse task into 3-7 development tasks
const tasks = analyzeAndDecompose(state.description, searchResults)
```
### Step 3: Create Task Breakdown
```javascript
const breakdown = tasks.map((t, i) => ({
id: `T${i + 1}`,
description: t.description,
priority: t.priority || i + 1,
status: 'pending',
files: t.relatedFiles || []
}))
```
### Step 4: Initialize Progress Document
```javascript
const progressPath = `${progressDir}/develop.md`
Write(progressPath, `# Development Progress
**Loop ID**: ${loopId}
**Task**: ${state.description}
**Started**: ${getUtc8ISOString()}
---
## Task List
${breakdown.map((t, i) => `${i + 1}. [ ] ${t.description}`).join('\n')}
---
## Progress Timeline
`)
```
### Step 5: Update State
```javascript
state.skill_state.pending_tasks = breakdown
state.skill_state.phase = 'init'
state.skill_state.workers_completed.push('init')
saveState(loopId, state)
```
## Output Format
```
WORKER_RESULT:
- action: init
- status: success
- summary: Initialized with {N} development tasks
- files_changed: []
- next_suggestion: develop
- loop_back_to: null
DETAILED_OUTPUT:
TASK_BREAKDOWN:
- T1: {description}
- T2: {description}
...
EXECUTION_PLAN:
1. Develop (T1-T2)
2. Validate
3. Complete
```
## Error Handling
| Error | Recovery |
|-------|----------|
| Task analysis failed | Create single generic task |
| Project context missing | Proceed without context |
| State write failed | Retry once, then report |

View File

@@ -1,132 +0,0 @@
# Worker: VALIDATE
Testing and verification worker. Run tests, check coverage, quality gates.
## Purpose
- Detect test framework and run tests
- Measure code coverage
- Check quality gates (lint, types, security)
- Generate validation report
- Determine pass/fail status
## Preconditions
- Code exists to validate
- `state.status === 'running'`
## Execution
### Step 1: Detect Test Framework
```javascript
const packageJson = JSON.parse(Read('package.json') || '{}')
const testScript = packageJson.scripts?.test || 'npm test'
const coverageScript = packageJson.scripts?.['test:coverage']
```
### Step 2: Run Tests
```javascript
const testResult = await Bash({
command: testScript,
timeout: 300000 // 5 minutes
})
const testResults = parseTestOutput(testResult.stdout, testResult.stderr)
```
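`parseTestOutput` is framework-specific and not defined here; a minimal sketch assuming a Jest-style summary line (`Tests: 1 failed, 97 passed, 98 total`):
```javascript
// Sketch — only understands a Jest-style "Tests:" summary; other frameworks need their own parser.
function parseTestOutput(stdout, stderr) {
  const text = `${stdout}\n${stderr}`
  const summary = text.match(/Tests:\s*(?:(\d+)\s*failed,\s*)?(\d+)\s*passed,\s*(\d+)\s*total/)
  const failed = summary ? parseInt(summary[1] || '0', 10) : 0
  const passed = summary ? parseInt(summary[2] || '0', 10) : 0
  const total = summary ? parseInt(summary[3] || '0', 10) : passed + failed
  const failures = [...text.matchAll(/✕\s+(.+)/g)].map(m => ({ name: m[1].trim(), error: 'see test log' }))
  return { total, passed, failed, failures }
}
```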
### Step 3: Run Coverage (if available)
```javascript
let coverageData = null
if (coverageScript) {
const coverageResult = await Bash({ command: coverageScript, timeout: 300000 })
coverageData = parseCoverageReport(coverageResult.stdout)
}
```
### Step 4: Quality Checks
```javascript
// Lint check (no `|| true` — masking the exit code would make the PASS/FAIL checks below meaningless)
const lintResult = await Bash({ command: 'npm run lint 2>&1' })
// Type check
const typeResult = await Bash({ command: 'npx tsc --noEmit 2>&1' })
```
### Step 5: Generate Validation Report
```javascript
Write(`${progressDir}/validate.md`, `# Validation Report
**Loop ID**: ${loopId}
**Validated**: ${getUtc8ISOString()}
## Test Results
| Metric | Value |
|--------|-------|
| Total | ${testResults.total} |
| Passed | ${testResults.passed} |
| Failed | ${testResults.failed} |
| Pass Rate | ${((testResults.passed / testResults.total) * 100).toFixed(1)}% |
## Coverage
${coverageData ? `Overall: ${coverageData.overall}%` : 'N/A'}
## Quality Checks
- Lint: ${lintResult.exitCode === 0 ? 'PASS' : 'FAIL'}
- Types: ${typeResult.exitCode === 0 ? 'PASS' : 'FAIL'}
## Failed Tests
${testResults.failures?.map(f => `- ${f.name}: ${f.error}`).join('\n') || 'None'}
`)
```
### Step 6: Save Structured Results
```javascript
Write(`${workersDir}/validate.output.json`, JSON.stringify({
action: 'validate',
timestamp: getUtc8ISOString(),
summary: { total: testResults.total, passed: testResults.passed, failed: testResults.failed },
coverage: coverageData?.overall || null,
quality: { lint: lintResult.exitCode === 0, types: typeResult.exitCode === 0 }
}, null, 2))
```
## Output Format
```
WORKER_RESULT:
- action: validate
- status: success
- summary: {passed}/{total} tests pass, coverage {N}%
- files_changed: []
- next_suggestion: complete | develop
- loop_back_to: develop (if tests fail)
DETAILED_OUTPUT:
TEST_RESULTS:
unit_tests: { passed: 98, failed: 0 }
integration_tests: { passed: 15, failed: 0 }
coverage: "95%"
QUALITY_CHECKS:
lint: PASS
types: PASS
```
## Error Handling
| Error | Recovery |
|-------|----------|
| Tests don't run | Check config, report error |
| All tests fail | Suggest debug action |
| Coverage tool missing | Skip coverage, tests only |
| Timeout | Increase timeout or split tests |

View File

@@ -57,6 +57,20 @@ Stateless iterative development loop using Codex single-agent deep interaction p
| --loop-id | One of TASK or --loop-id | Existing loop ID to continue |
| --auto | No | Auto-cycle mode (develop → debug → validate → complete) |
## Prep Package Integration
When `prep-package.json` exists at `{projectRoot}/.workflow/.loop/prep-package.json`, Phase 1 consumes it to:
- Load pre-built task list from `prep-tasks.jsonl` instead of generating tasks from scratch
- Apply auto-loop config (max_iterations, timeout)
- Preserve source provenance and convergence criteria from upstream planning/analysis skills
Prep packages are generated by the interactive prompt `/prompts:prep-loop`, which accepts JSONL from:
- `collaborative-plan-with-file` (tasks.jsonl)
- `analyze-with-file` (tasks.jsonl)
- `brainstorm-to-cycle` (cycle-task.md → converted to task format)
See [phases/00-prep-checklist.md](phases/00-prep-checklist.md) for schema and validation rules.
## Execution Modes
### Mode 1: Interactive
@@ -101,6 +115,7 @@ Phase 2: Orchestration Loop
| Phase | Document | Purpose |
|-------|----------|---------|
| 0 | [phases/00-prep-checklist.md](phases/00-prep-checklist.md) | Prep package schema and validation rules |
| 1 | [phases/01-session-init.md](phases/01-session-init.md) | Argument parsing, state creation/resume, directory init |
| 2 | [phases/02-orchestration-loop.md](phases/02-orchestration-loop.md) | Agent spawn, main loop, result parsing, send_input dispatch |

View File

@@ -43,26 +43,36 @@ const progressDir = `${projectRoot}/.workflow/.loop/${loopId}.progress`
### Step 3: Analyze Task and Generate Tasks
```javascript
// Check if prep tasks already loaded by orchestrator (from prep-package)
// If skill_state already has tasks (pre-populated by Phase 1), skip generation
const existingTasks = state.skill_state?.develop?.tasks
if (existingTasks && existingTasks.length > 0) {
console.log(`✓ Using ${existingTasks.length} pre-built tasks from prep-package`)
console.log(` Source: ${state.prep_source?.tool || 'unknown'}`)
// Skip to Step 4 — tasks already available
tasks = existingTasks
} else {
// No prep tasks — analyze task description and generate 3-7 development tasks
const taskDescription = state.description
// Generate 3-7 development tasks based on analysis
// Use ACE search or smart_search to find relevant patterns
tasks = [
{
id: 'task-001',
description: 'Task description based on analysis',
tool: 'gemini',
mode: 'write',
status: 'pending',
priority: 1,
files: [],
created_at: getUtc8ISOString(),
completed_at: null
}
// ... more tasks
]
}
```
### Step 4: Initialize Progress Document

View File

@@ -0,0 +1,116 @@
# Phase 0: Prep Package Schema & Integration
Schema reference for `prep-package.json` consumed by ccw-loop Phase 1. Generated by interactive prompt `/prompts:prep-loop`.
## prep-package.json Schema
```json
{
"version": "1.0.0",
"generated_at": "ISO8601 (UTC+8)",
"prep_status": "ready | cancelled | needs_refinement",
"target_skill": "ccw-loop",
"environment": {
"project_root": "absolute path",
"tech_stack": "string",
"test_framework": "string"
},
"source": {
"tool": "collaborative-plan-with-file | analyze-with-file | brainstorm-to-cycle | manual",
"session_id": "string",
"jsonl_path": "absolute path to original JSONL",
"task_count": "number",
"tasks_with_convergence": "number"
},
"tasks": {
"total": "number",
"by_priority": { "high": 0, "medium": 0, "low": 0 },
"by_type": { "feature": 0, "fix": 0, "refactor": 0, "enhancement": 0, "testing": 0 }
},
"auto_loop": {
"enabled": true,
"no_confirmation": true,
"max_iterations": 10,
"timeout_per_action_ms": 600000
}
}
```
## prep-tasks.jsonl Schema
One task per line, each in ccw-loop `develop.tasks[]` format with extended fields:
```json
{
"id": "task-001",
"description": "Title: detailed description",
"tool": "gemini",
"mode": "write",
"status": "pending",
"priority": 1,
"files_changed": ["path/to/file.ts"],
"created_at": "ISO8601",
"completed_at": null,
"_source": { "tool": "collaborative-plan-with-file", "session_id": "...", "original_id": "TASK-001" },
"_convergence": { "criteria": ["..."], "verification": "...", "definition_of_done": "..." },
"_type": "feature",
"_effort": "medium",
"_depends_on": []
}
```
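A minimal sketch of how an upstream JSONL task (e.g. from collaborative-plan-with-file) could be mapped into this format; the upstream field names (`title`, `priority`, `convergence`, …) are assumptions about typical output, not a fixed contract:
```javascript
// Sketch only — upstream field names are assumed; adjust to the actual source schema.
function toPrepTask(upstream, index, source) {
  return {
    id: `task-${String(index + 1).padStart(3, '0')}`,
    description: `${upstream.title}: ${upstream.description || ''}`.trim(),
    tool: 'gemini',
    mode: 'write',
    status: 'pending',
    priority: upstream.priority === 'high' ? 1 : upstream.priority === 'low' ? 3 : 2,
    files_changed: upstream.files || [],
    created_at: getUtc8ISOString(),
    completed_at: null,
    _source: { tool: source.tool, session_id: source.session_id, original_id: upstream.id },
    _convergence: upstream.convergence || null,
    _type: upstream.type || 'feature',
    _effort: upstream.effort || 'medium',
    _depends_on: upstream.depends_on || []
  }
}
// prep-tasks.jsonl is then one JSON.stringify(task) per line.
```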
## Validation Rules
| # | Check | Condition | On Failure |
|---|-------|-----------|------------|
| 1 | prep_status | `=== "ready"` | Skip prep, use default INIT |
| 2 | target_skill | `=== "ccw-loop"` | Skip prep, use default INIT |
| 3 | project_root | Matches current `projectRoot` | Skip prep, warn mismatch |
| 4 | freshness | `generated_at` within 24h | Skip prep, warn stale |
| 5 | tasks file | `prep-tasks.jsonl` exists and readable | Skip prep, use default INIT |
| 6 | tasks content | At least 1 valid task line in JSONL | Skip prep, use default INIT |
## Integration Points
### Phase 1: Session Initialization
```javascript
// Load prep-package.json (generated by /prompts:prep-loop)
let prepPackage = null
const prepPath = `${projectRoot}/.workflow/.loop/prep-package.json`
if (fs.existsSync(prepPath)) {
const raw = JSON.parse(Read(prepPath))
const checks = validateLoopPrepPackage(raw, projectRoot)
if (checks.valid) {
prepPackage = raw
// Load pre-built tasks from prep-tasks.jsonl
const tasksPath = `${projectRoot}/.workflow/.loop/prep-tasks.jsonl`
const prepTasks = loadPrepTasks(tasksPath)
// → Inject into state.skill_state.develop.tasks
// → Set max_iterations from auto_loop config
} else {
console.warn(`⚠ Prep package failed validation, using default INIT`)
prepPackage = null
}
}
```
### INIT Action (action-init.md)
When prep tasks are loaded:
- **Skip** Step 3 (Analyze Task and Generate Tasks) — tasks already provided
- **Use** prep tasks directly in Step 5 (Update State)
- **Preserve** `_convergence` fields for VALIDATE action reference
### VALIDATE Action
When `_convergence` exists on a task:
- Use `convergence.verification` as validation command/steps
- Use `convergence.criteria` as pass/fail conditions
- Fall back to default test validation if `_convergence` is null
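A minimal sketch of that fallback (treating `verification` as a runnable command is an assumption; it may equally be prose steps for the agent to follow):
```javascript
// Sketch — assumes _convergence.verification can be executed directly; otherwise run the default tests.
async function validateTask(task, defaultTestScript) {
  const conv = task._convergence
  if (conv?.verification) {
    const result = await Bash({ command: conv.verification, timeout: 300000 })
    return { passed: result.exitCode === 0, criteria: conv.criteria || [], evidence: result.stdout }
  }
  const result = await Bash({ command: defaultTestScript, timeout: 300000 })
  return { passed: result.exitCode === 0, criteria: ['tests pass'], evidence: result.stdout }
}
```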

View File

@@ -19,7 +19,7 @@ Create or resume a development loop, initialize state file and directory structu
const projectRoot = Bash('git rev-parse --show-toplevel 2>/dev/null || pwd').trim()
```
### Step 1.1: Parse Arguments & Load Prep Package
```javascript
const { loopId: existingLoopId, task, mode = 'interactive' } = options
@@ -32,6 +32,123 @@ if (!existingLoopId && !task) {
// Determine mode
const executionMode = options['--auto'] ? 'auto' : 'interactive'
// ── Prep Package: Detect → Validate → Consume ──
let prepPackage = null
let prepTasks = null
const prepPath = `${projectRoot}/.workflow/.loop/prep-package.json`
if (fs.existsSync(prepPath)) {
const raw = JSON.parse(Read(prepPath))
const checks = validateLoopPrepPackage(raw, projectRoot)
if (checks.valid) {
prepPackage = raw
// Load pre-built tasks
const tasksPath = `${projectRoot}/.workflow/.loop/prep-tasks.jsonl`
prepTasks = loadPrepTasks(tasksPath)
if (prepTasks && prepTasks.length > 0) {
console.log(`✓ Prep package loaded: ${prepTasks.length} tasks from ${prepPackage.source.tool}`)
console.log(` Checks passed: ${checks.passed.join(', ')}`)
} else {
console.warn(`⚠ Prep tasks file empty or invalid, falling back to default INIT`)
prepPackage = null
prepTasks = null
}
} else {
console.warn(`⚠ Prep package found but failed validation:`)
checks.failures.forEach(f => console.warn(`${f}`))
console.warn(` → Falling back to default behavior (prep-package ignored)`)
prepPackage = null
}
}
/**
* Validate prep-package.json integrity before consumption.
* Returns { valid: bool, passed: string[], failures: string[] }
*/
function validateLoopPrepPackage(prep, projectRoot) {
const passed = []
const failures = []
// Check 1: prep_status must be "ready"
if (prep.prep_status === 'ready') {
passed.push('status=ready')
} else {
failures.push(`prep_status is "${prep.prep_status}", expected "ready"`)
}
// Check 2: target_skill must match
if (prep.target_skill === 'ccw-loop') {
passed.push('target_skill match')
} else {
failures.push(`target_skill is "${prep.target_skill}", expected "ccw-loop"`)
}
// Check 3: project_root must match current project
if (prep.environment?.project_root === projectRoot) {
passed.push('project_root match')
} else {
failures.push(`project_root mismatch: prep="${prep.environment?.project_root}", current="${projectRoot}"`)
}
// Check 4: generated_at must be within 24 hours
const generatedAt = new Date(prep.generated_at)
const hoursSince = (Date.now() - generatedAt.getTime()) / (1000 * 60 * 60)
if (hoursSince <= 24) {
passed.push(`age=${Math.round(hoursSince)}h`)
} else {
failures.push(`prep-package is ${Math.round(hoursSince)}h old (max 24h), may be stale`)
}
// Check 5: prep-tasks.jsonl must exist
const tasksPath = `${projectRoot}/.workflow/.loop/prep-tasks.jsonl`
if (fs.existsSync(tasksPath)) {
passed.push('prep-tasks.jsonl exists')
} else {
failures.push('prep-tasks.jsonl not found')
}
// Check 6: task count > 0
if ((prep.tasks?.total || 0) > 0) {
passed.push(`tasks=${prep.tasks.total}`)
} else {
failures.push('task count is 0')
}
return {
valid: failures.length === 0,
passed,
failures
}
}
/**
* Load pre-built tasks from prep-tasks.jsonl.
* Returns array of task objects or null on failure.
*/
function loadPrepTasks(tasksPath) {
if (!fs.existsSync(tasksPath)) return null
const content = Read(tasksPath)
const lines = content.trim().split('\n').filter(l => l.trim())
const tasks = []
for (const line of lines) {
try {
const task = JSON.parse(line)
if (task.id && task.description) {
tasks.push(task)
}
} catch (e) {
console.warn(`⚠ Skipping invalid task line: ${e.message}`)
}
}
return tasks.length > 0 ? tasks : null
}
```
### Step 1.2: Utility Functions
@@ -79,14 +196,51 @@ function createLoopState(loopId, taskDescription) {
loop_id: loopId,
title: taskDescription.substring(0, 100),
description: taskDescription,
max_iterations: prepPackage?.auto_loop?.max_iterations || 10,
status: 'running',
current_iteration: 0,
created_at: now,
updated_at: now,
// Skill extension fields
// When prep tasks available, pre-populate skill_state instead of null
skill_state: prepTasks ? {
current_action: 'init',
last_action: null,
completed_actions: [],
mode: executionMode,
develop: {
total: prepTasks.length,
completed: 0,
current_task: null,
tasks: prepTasks,
last_progress_at: null
},
debug: {
active_bug: null,
hypotheses_count: 0,
hypotheses: [],
confirmed_hypothesis: null,
iteration: 0,
last_analysis_at: null
},
validate: {
pass_rate: 0,
coverage: 0,
test_results: [],
passed: false,
failed_tests: [],
last_run_at: null
},
errors: []
} : null,
// Prep package metadata (for traceability)
prep_source: prepPackage?.source || null
}
Write(stateFile, JSON.stringify(state, null, 2))

View File

@@ -44,6 +44,17 @@ When `--yes` or `-y`: Auto-continue all phases (skip confirmations), use recomme
When `--with-commit`: Auto-commit after each task completion in Phase 4.
## Prep Package Integration
When `plan-prep-package.json` exists at `{projectRoot}/.workflow/.prep/plan-prep-package.json`, the skill runs 6-point validation on it and, if it passes, consumes it as follows:
1. **Phase 1**: Use `task.structured` (GOAL/SCOPE/CONTEXT) for session creation, enrich planning-notes.md with source_refs and quality dimensions
2. **Phase 2**: Feed verified source_refs as supplementary docs for exploration agents
3. **Phase 3**: Auto-populate Phase 0 User Configuration (execution_method, preferred_cli_tool, supplementary_materials) — skip interactive questions
4. **Phase 4**: Apply `execution.with_commit` flag
Prep packages are generated by the interactive prompt `/prompts:prep-plan`. See [phases/00-prep-checklist.md](phases/00-prep-checklist.md) for schema and validation rules.
## Execution Flow
```

View File

@@ -0,0 +1,181 @@
# Prep Package Schema & Integration Spec
Schema definition for `plan-prep-package.json` and integration points with the workflow-plan-execute skill.
## File Location
```
{projectRoot}/.workflow/.prep/plan-prep-package.json
```
Generated by: `/prompts:prep-plan` (interactive prompt)
Consumed by: Phase 1 (Session Discovery) → feeds into Phase 2, 3, 4
## JSON Schema
```json
{
"version": "1.0.0",
"generated_at": "ISO8601",
"prep_status": "ready | needs_refinement | blocked",
"target_skill": "workflow-plan-execute",
"environment": {
"project_root": "/path/to/project",
"prerequisites": {
"required_passed": true,
"recommended_passed": true,
"warnings": ["string"]
},
"tech_stack": "string",
"test_framework": "string",
"has_project_tech": true,
"has_project_guidelines": true
},
"task": {
"original": "raw user input",
"structured": {
"goal": "GOAL string (objective + success criteria)",
"scope": "SCOPE string (boundaries)",
"context": "CONTEXT string (constraints + tech context)"
},
"quality_score": 8,
"dimensions": {
"objective": { "score": 2, "value": "..." },
"success_criteria": { "score": 2, "value": "..." },
"scope": { "score": 2, "value": "..." },
"constraints": { "score": 1, "value": "..." },
"context": { "score": 1, "value": "..." }
},
"source_refs": [
{
"path": "docs/prd.md",
"type": "local_file | url | auto_detected",
"status": "verified | linked | not_found",
"preview": "first ~20 lines (local_file only)"
}
]
},
"execution": {
"auto_yes": true,
"with_commit": true,
"execution_method": "agent | cli | hybrid",
"preferred_cli_tool": "codex | gemini | qwen | auto",
"supplementary_materials": {
"type": "none | paths | inline",
"content": []
}
}
}
```
## Validation Rules (6 checks)
Phase 1 runs **6 validation checks** on plan-prep-package.json; the package is loaded only when all of them pass:
| # | Check | Condition | On Failure |
|---|-------|-----------|------------|
| 1 | prep_status | `=== "ready"` | Skip prep |
| 2 | target_skill | `=== "workflow-plan-execute"` | Skip prep (guards against the wrong skill) |
| 3 | project_root | Matches current projectRoot | Skip prep (guards against the wrong project) |
| 4 | quality_score | `>= 6` | Skip prep (task quality below threshold) |
| 5 | freshness | `generated_at` within 24h | Skip prep (may be stale) |
| 6 | required fields | `task.structured.goal` and `execution` both present | Skip prep |
## Phase 1 Integration (Session Discovery)
After session creation, enrich planning-notes.md with prep data:
```javascript
const prepPath = `${projectRoot}/.workflow/.prep/plan-prep-package.json`
let prepPackage = null
if (fs.existsSync(prepPath)) {
const raw = JSON.parse(Read(prepPath))
const checks = validatePlanPrepPackage(raw, projectRoot)
if (checks.valid) {
prepPackage = raw
// Use structured task for session creation
structuredDescription = {
goal: prepPackage.task.structured.goal,
scope: prepPackage.task.structured.scope,
context: prepPackage.task.structured.context
}
console.log(`✓ Prep package loaded: score=${prepPackage.task.quality_score}/10`)
} else {
console.warn(`⚠ Prep package validation failed, using defaults`)
}
}
// After session created, enrich planning-notes.md:
if (prepPackage) {
// 1. Add source refs section
const sourceRefsSection = prepPackage.task.source_refs
?.filter(r => r.status === 'verified' || r.status === 'linked')
.map(r => `- **${r.type}**: ${r.path}`)
.join('\n') || 'None'
// 2. Add quality dimensions
const dimensionsSection = Object.entries(prepPackage.task.dimensions)
.map(([k, v]) => `- **${k}**: ${v.value} (score: ${v.score}/2)`)
.join('\n')
// Append to planning-notes.md under User Intent
Edit(planningNotesPath, {
old: `- **KEY_CONSTRAINTS**: ${userConstraints}`,
new: `- **KEY_CONSTRAINTS**: ${userConstraints}
### Requirement Sources (from prep)
${sourceRefsSection}
### Quality Dimensions (from prep)
${dimensionsSection}`
})
}
```
## Phase 3 Integration (Task Generation - Phase 0 User Config)
Prep package auto-populates Phase 0 user configuration:
```javascript
// In Phase 3, Phase 0 (User Configuration):
if (prepPackage) {
// Auto-answer all Phase 0 questions from prep
userConfig = {
supplementaryMaterials: prepPackage.execution.supplementary_materials,
executionMethod: prepPackage.execution.execution_method,
preferredCliTool: prepPackage.execution.preferred_cli_tool,
enableResume: true
}
console.log(`✓ Phase 0 auto-configured from prep: ${userConfig.executionMethod} (${userConfig.preferredCliTool})`)
// Skip interactive questions, proceed to Phase 1 (Context Prep)
}
```
## Phase 2 Integration (Context Gathering)
Source refs from prep feed into exploration context:
```javascript
// In Phase 2, Step 2 (spawn explore agents):
// Add source_refs as supplementary context for exploration
if (prepPackage?.task?.source_refs?.length > 0) {
const verifiedRefs = prepPackage.task.source_refs.filter(r => r.status === 'verified')
// Include verified local docs in exploration agent prompt
explorationAgentPrompt += `\n## SUPPLEMENTARY REQUIREMENT DOCUMENTS\n`
explorationAgentPrompt += verifiedRefs.map(r => `Read and analyze: ${r.path}`).join('\n')
}
```
## Phase 4 Integration (Execution)
Commit flag from prep:
```javascript
// In Phase 4:
const withCommit = prepPackage?.execution?.with_commit || $ARGUMENTS.includes('--with-commit')
```

View File

@@ -9,6 +9,79 @@ Discover existing sessions or start new workflow session with intelligent sessio
- Generate unique session ID (WFS-xxx format)
- Initialize session directory structure
## Step 0.0: Load Prep Package (if exists)
```javascript
// Load plan-prep-package.json (generated by /prompts:prep-plan)
let prepPackage = null
const prepPath = `${projectRoot}/.workflow/.prep/plan-prep-package.json`
if (fs.existsSync(prepPath)) {
const raw = JSON.parse(Read(prepPath))
const checks = validatePlanPrepPackage(raw, projectRoot)
if (checks.valid) {
prepPackage = raw
console.log(`✓ Prep package loaded: score=${prepPackage.task.quality_score}/10, exec=${prepPackage.execution.execution_method}`)
console.log(` Checks passed: ${checks.passed.join(', ')}`)
} else {
console.warn(`⚠ Prep package found but failed validation:`)
checks.failures.forEach(f => console.warn(`${f}`))
console.warn(` → Falling back to default behavior (prep-package ignored)`)
prepPackage = null
}
}
/**
* Validate plan-prep-package.json integrity before consumption.
*/
function validatePlanPrepPackage(prep, projectRoot) {
const passed = []
const failures = []
// Check 1: prep_status
if (prep.prep_status === 'ready') passed.push('status=ready')
else failures.push(`prep_status is "${prep.prep_status}", expected "ready"`)
// Check 2: target_skill
if (prep.target_skill === 'workflow-plan-execute') passed.push('target_skill match')
else failures.push(`target_skill is "${prep.target_skill}", expected "workflow-plan-execute"`)
// Check 3: project_root
if (prep.environment?.project_root === projectRoot) passed.push('project_root match')
else failures.push(`project_root mismatch: "${prep.environment?.project_root}" vs "${projectRoot}"`)
// Check 4: quality_score >= 6
if ((prep.task?.quality_score || 0) >= 6) passed.push(`quality=${prep.task.quality_score}/10`)
else failures.push(`quality_score ${prep.task?.quality_score || 0} < 6`)
// Check 5: generated_at within 24h
const hoursSince = (Date.now() - new Date(prep.generated_at).getTime()) / 3600000
if (hoursSince <= 24) passed.push(`age=${Math.round(hoursSince)}h`)
else failures.push(`prep-package is ${Math.round(hoursSince)}h old (max 24h)`)
// Check 6: required fields
const required = ['task.structured.goal', 'task.structured.scope', 'execution.execution_method']
const missing = required.filter(p => !p.split('.').reduce((o, k) => o?.[k], prep))
if (missing.length === 0) passed.push('fields complete')
else failures.push(`missing: ${missing.join(', ')}`)
return { valid: failures.length === 0, passed, failures }
}
// Build structured description from prep or raw input
let structuredDescription
if (prepPackage) {
structuredDescription = {
goal: prepPackage.task.structured.goal,
scope: prepPackage.task.structured.scope,
context: prepPackage.task.structured.context
}
} else {
structuredDescription = null // Will be parsed from user input later
}
```
## Step 0: Initialize Project State (First-time Only)
**Executed before all modes** - Ensures project-level state files exist by calling `workflow:init`.
@@ -73,23 +146,47 @@ CONTEXT: Existing user database schema, REST API endpoints
### Step 1.4: Initialize Planning Notes
Create `planning-notes.md` with N+1 context support, enriched with prep data:
```javascript
const planningNotesPath = `${projectRoot}/.workflow/active/${sessionId}/planning-notes.md`
const userGoal = structuredDescription?.goal || taskDescription
const userScope = structuredDescription?.scope || "Not specified"
const userConstraints = structuredDescription?.context || "None specified"
// Build source refs section from prep
const sourceRefsSection = (prepPackage?.task?.source_refs?.length > 0)
? prepPackage.task.source_refs
.filter(r => r.status === 'verified' || r.status === 'linked')
.map(r => `- **${r.type}**: ${r.path}`)
.join('\n')
: 'None'
// Build quality dimensions section from prep
const dimensionsSection = prepPackage?.task?.dimensions
? Object.entries(prepPackage.task.dimensions)
.map(([k, v]) => `- **${k}**: ${v.value} (${v.score}/2)`)
.join('\n')
: ''
Write(planningNotesPath, `# Planning Notes
**Session**: ${sessionId}
**Created**: ${new Date().toISOString()}
${prepPackage ? `**Prep Package**: plan-prep-package.json (score: ${prepPackage.task.quality_score}/10)` : ''}
## User Intent (Phase 1)
- **GOAL**: ${userGoal}
- **SCOPE**: ${userScope}
- **KEY_CONSTRAINTS**: ${userConstraints}
${sourceRefsSection !== 'None' ? `
### Requirement Sources (from prep)
${sourceRefsSection}
` : ''}${dimensionsSection ? `
### Quality Dimensions (from prep)
${dimensionsSection}
` : ''}
---
## Context Findings (Phase 2)

View File

@@ -127,6 +127,15 @@ const sessionFolder = `${projectRoot}/.workflow/active/${session_id}/.process`;
// 2.2 Launch Parallel Explore Agents (with conflict detection)
const explorationAgents = [];
// Load source_refs from prep-package for supplementary context
const prepPath = `${projectRoot}/.workflow/.prep/plan-prep-package.json`
const prepSourceRefs = fs.existsSync(prepPath)
? (JSON.parse(Read(prepPath))?.task?.source_refs || []).filter(r => r.status === 'verified')
: []
const sourceRefsDirective = prepSourceRefs.length > 0
? `\n## SUPPLEMENTARY REQUIREMENT DOCUMENTS (from prep)\nRead these before exploration:\n${prepSourceRefs.map((r, i) => `${i + 1}. Read: ${r.path} (${r.type})`).join('\n')}\nCross-reference findings against these source documents.\n`
: ''
// Spawn all agents in parallel
selectedAngles.forEach((angle, index) => {
const agentId = spawn_agent({
@@ -144,6 +153,7 @@ selectedAngles.forEach((angle, index) => {
Execute **${angle}** exploration for task planning context. Analyze codebase from this specific angle to discover relevant structure, patterns, and constraints.
**CONFLICT DETECTION**: Additionally detect conflict indicators including module overlaps, breaking changes, incompatible patterns, and scenario boundary ambiguities.
${sourceRefsDirective}
## Assigned Context
- **Exploration Angle**: ${angle}

View File

@@ -78,12 +78,17 @@ Phase 3: Integration (+1 Coordinator, Multi-Module Only)
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
// Check for prep-package auto-configuration (from /prompts:prep-plan)
const prepPath = `${projectRoot}/.workflow/.prep/plan-prep-package.json`
const prepExec = fs.existsSync(prepPath) ? JSON.parse(Read(prepPath))?.execution : null
if (autoYes || prepExec) {
const source = prepExec ? 'prep-package' : '--yes flag'
console.log(`[${source}] Using defaults: ${prepExec?.execution_method || 'agent'} executor, ${prepExec?.preferred_cli_tool || 'codex'} CLI`)
userConfig = {
supplementaryMaterials: prepExec?.supplementary_materials || { type: "none", content: [] },
executionMethod: prepExec?.execution_method || "agent",
preferredCliTool: prepExec?.preferred_cli_tool || "codex",
enableResume: true
}
// Skip to Phase 1

View File

@@ -16,11 +16,25 @@ interface StopOptions {
*/
async function findProcessOnPort(port: number): Promise<string | null> {
try {
const { stdout } = await execAsync(`netstat -ano | findstr :${port} | findstr LISTENING`);
const lines = stdout.trim().split('\n');
if (lines.length > 0) {
const parts = lines[0].trim().split(/\s+/);
return parts[parts.length - 1]; // PID is the last column
// Avoid filtering on the localized state column (e.g. not always "LISTENING").
const { stdout } = await execAsync(`netstat -ano | findstr :${port}`);
const lines = stdout.trim().split(/\r?\n/).map((l) => l.trim()).filter(Boolean);
for (const line of lines) {
// Typical format:
// TCP 0.0.0.0:3457 0.0.0.0:0 LISTENING 31736
// TCP [::]:3457 [::]:0 LISTENING 31736
const parts = line.split(/\s+/);
if (parts.length < 4) continue;
const proto = parts[0]?.toUpperCase();
const localAddress = parts[1] || '';
const pidCandidate = parts[parts.length - 1] || '';
if (proto !== 'TCP') continue;
if (!localAddress.endsWith(`:${port}`)) continue;
if (!/^\d+$/.test(pidCandidate)) continue;
return pidCandidate; // PID is the last column
}
} catch {
// No process found
@@ -28,20 +42,62 @@ async function findProcessOnPort(port: number): Promise<string | null> {
return null;
}
async function getProcessCommandLine(pid: string): Promise<string | null> {
if (!/^\d+$/.test(pid)) return null;
try {
const probeCommand =
process.platform === 'win32'
? `powershell -NoProfile -Command "(Get-CimInstance Win32_Process -Filter 'ProcessId=${pid}').CommandLine"`
: `ps -p ${pid} -o command=`;
const { stdout } = await execAsync(probeCommand);
const commandLine = stdout.trim();
return commandLine.length > 0 ? commandLine : null;
} catch {
return null;
}
}
function isLikelyViteCommandLine(commandLine: string, port: number): boolean {
const lower = commandLine.toLowerCase();
if (!lower.includes('vite')) return false;
const portStr = String(port);
return (
lower.includes(`--port ${portStr}`) ||
lower.includes(`--port=${portStr}`) ||
// Some npm wrappers pass the port through in a slightly different shape (e.g. without the leading "--").
lower.includes(`port ${portStr}`)
);
}
/**
* Kill process by PID (Windows)
* @param {string} pid - Process ID
* @returns {Promise<boolean>} Success status
*/
async function killProcess(pid: string): Promise<boolean> {
if (!/^\d+$/.test(pid)) return false;
try {
// Use PowerShell to avoid Git Bash path expansion issues with /PID
await execAsync(`powershell -Command "Stop-Process -Id ${pid} -Force -ErrorAction Stop"`);
// Prefer taskkill to terminate the entire process tree on Windows (npm/cmd wrappers can orphan children).
if (process.platform === 'win32') {
await execAsync(`cmd /c "taskkill /PID ${pid} /T /F"`);
return true;
}
// Best-effort on non-Windows platforms (mockable via child_process.exec in tests).
await execAsync(`kill -TERM ${pid}`);
return true;
} catch {
// Fallback to taskkill via cmd
try {
await execAsync(`cmd /c "taskkill /PID ${pid} /F"`);
if (process.platform === 'win32') {
await execAsync(`powershell -NoProfile -Command "Stop-Process -Id ${pid} -Force -ErrorAction Stop"`);
return true;
}
await execAsync(`kill -KILL ${pid}`);
return true;
} catch {
return false;
@@ -105,6 +161,7 @@ export async function stopCommand(options: StopOptions): Promise<void> {
await cleanupReactFrontend(reactPort);
console.log(chalk.green.bold('\n Server stopped successfully!\n'));
process.exit(0);
return;
}
// Best-effort verify shutdown (may still succeed even if shutdown endpoint didn't return ok)
@@ -116,6 +173,7 @@ export async function stopCommand(options: StopOptions): Promise<void> {
await cleanupReactFrontend(reactPort);
console.log(chalk.green.bold('\n Server stopped successfully!\n'));
process.exit(0);
return;
}
const statusHint = shutdownResponse ? `HTTP ${shutdownResponse.status}` : 'no response';
@@ -132,7 +190,11 @@ export async function stopCommand(options: StopOptions): Promise<void> {
const reactPid = await findProcessOnPort(reactPort);
if (reactPid) {
console.log(chalk.yellow(` React frontend still running on port ${reactPort} (PID: ${reactPid})`));
if (force) {
const commandLine = await getProcessCommandLine(reactPid);
const isLikelyVite = commandLine ? isLikelyViteCommandLine(commandLine, reactPort) : false;
if (force || isLikelyVite) {
console.log(chalk.cyan(' Cleaning up React frontend...'));
const killed = await killProcess(reactPid);
if (killed) {
@@ -141,10 +203,12 @@ export async function stopCommand(options: StopOptions): Promise<void> {
console.log(chalk.red(' Failed to stop React frontend.\n'));
}
} else {
console.log(chalk.gray(`\n Use --force to clean it up:\n ccw stop --force\n`));
console.log(chalk.gray(`\n React process does not look like Vite on port ${reactPort}.`));
console.log(chalk.gray(` Use --force to clean it up:\n ccw stop --force\n`));
}
}
process.exit(0);
return;
}
// Port is in use by another process
@@ -174,9 +238,11 @@ export async function stopCommand(options: StopOptions): Promise<void> {
console.log(chalk.green.bold('\n All processes stopped successfully!\n'));
process.exit(0);
return;
} else {
console.log(chalk.red('\n Failed to kill process. Try running as administrator.\n'));
process.exit(1);
return;
}
} else {
// Also check React frontend port
@@ -188,11 +254,13 @@ export async function stopCommand(options: StopOptions): Promise<void> {
console.log(chalk.gray(`\n This is not a CCW server. Use --force to kill it:`));
console.log(chalk.white(` ccw stop --force\n`));
process.exit(0);
return;
}
} catch (err) {
const error = err as Error;
console.error(chalk.red(`\n Error: ${error.message}\n`));
process.exit(1);
return;
}
}
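A self-contained sketch of how the two new helpers behave on representative inputs (command lines and netstat rows invented for illustration):

```ts
// Local copy of the helper introduced above, exercised on invented command lines.
// The helper itself is module-internal to stop.ts.
function isLikelyViteCommandLine(commandLine: string, port: number): boolean {
  const lower = commandLine.toLowerCase();
  if (!lower.includes('vite')) return false;
  const portStr = String(port);
  return (
    lower.includes(`--port ${portStr}`) ||
    lower.includes(`--port=${portStr}`) ||
    lower.includes(`port ${portStr}`)
  );
}

console.log(isLikelyViteCommandLine('cmd.exe /d /s /c vite --port 3457 --strictPort', 3457)); // true
console.log(isLikelyViteCommandLine('node node_modules/vite/bin/vite.js --port=3457', 3457)); // true
console.log(isLikelyViteCommandLine('node dist/server.js --port 3457', 3457));                // false ("vite" absent)

// findProcessOnPort no longer greps for the English "LISTENING" literal, so a localized
// netstat row such as
//   TCP    [::]:3457    [::]:0    ABHÖREN    31736
// still resolves to PID "31736": the protocol is TCP, the local address ends with ":3457",
// and the last column is numeric.
```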
View File
@@ -21,17 +21,43 @@ describe('stop command module', async () => {
const childProcess = require('child_process');
const originalExec = childProcess.exec;
const execCalls: string[] = [];
const netstatByPort = new Map<number, string>();
const commandLineByPid = new Map<string, string>();
before(async () => {
// Patch child_process.exec BEFORE importing stop module (it captures exec at module init).
childProcess.exec = (command: string, cb: any) => {
execCalls.push(command);
if (/^netstat -ano/i.test(command)) {
const stdout = 'TCP 0.0.0.0:56792 0.0.0.0:0 LISTENING 4242\r\n';
const portMatch = command.match(/findstr\s+:([0-9]+)/i);
const port = portMatch ? Number(portMatch[1]) : NaN;
const stdout = Number.isFinite(port) ? (netstatByPort.get(port) ?? '') : '';
cb(null, stdout, '');
return {} as any;
}
if (/^taskkill /i.test(command)) {
if (/taskkill\b/i.test(command)) {
cb(null, '', '');
return {} as any;
}
if (/^powershell\b/i.test(command) && /Get-CimInstance\s+Win32_Process/i.test(command)) {
const pidMatch = command.match(/ProcessId=([0-9]+)/i);
const pid = pidMatch ? pidMatch[1] : '';
const stdout = commandLineByPid.get(pid) ?? '';
cb(null, stdout, '');
return {} as any;
}
if (/^ps\s+-p\s+/i.test(command)) {
const pidMatch = command.match(/^ps\s+-p\s+([0-9]+)/i);
const pid = pidMatch ? pidMatch[1] : '';
const stdout = commandLineByPid.get(pid) ?? '';
cb(null, stdout, '');
return {} as any;
}
if (/^powershell\b/i.test(command) && /Stop-Process\s+-Id/i.test(command)) {
cb(null, '', '');
return {} as any;
}
if (/^kill\s+-/i.test(command)) {
cb(null, '', '');
return {} as any;
}
@@ -44,6 +70,8 @@ describe('stop command module', async () => {
afterEach(() => {
execCalls.length = 0;
netstatByPort.clear();
commandLineByPid.clear();
mock.restoreAll();
});
@@ -84,9 +112,10 @@ describe('stop command module', async () => {
// No server responding, fall back to netstat/taskkill
mock.method(globalThis as any, 'fetch', async () => null);
netstatByPort.set(56792, 'TCP 0.0.0.0:56792 0.0.0.0:0 LISTENING 4242\r\n');
await stopModule.stopCommand({ port: 56792, force: true });
assert.ok(execCalls.some((c) => /^taskkill /i.test(c)));
assert.ok(execCalls.some((c) => /taskkill\b/i.test(c) || /Stop-Process\b/i.test(c) || /^kill\s+-/i.test(c)));
assert.ok(exitCodes.includes(0));
assert.ok(!exitCodes.includes(1));
});
@@ -100,10 +129,35 @@ describe('stop command module', async () => {
});
mock.method(globalThis as any, 'fetch', async () => null);
netstatByPort.set(56792, 'TCP 0.0.0.0:56792 0.0.0.0:0 LISTENING 4242\r\n');
await stopModule.stopCommand({ port: 56792, force: false });
assert.ok(execCalls.some((c) => /^netstat -ano/i.test(c)));
assert.ok(!execCalls.some((c) => /^taskkill /i.test(c)));
assert.ok(!execCalls.some((c) => /taskkill\b/i.test(c) || /Stop-Process\b/i.test(c) || /^kill\s+-/i.test(c)));
assert.ok(exitCodes.includes(0));
assert.ok(!exitCodes.includes(1));
});
it('auto-cleans Vite on react port when main server is not running (no --force)', async () => {
mock.method(console, 'log', () => {});
mock.method(console, 'error', () => {});
const exitCodes: Array<number | undefined> = [];
mock.method(process as any, 'exit', (code?: number) => {
exitCodes.push(code);
});
// No server responding, main port free, react port occupied by Vite.
mock.method(globalThis as any, 'fetch', async () => null);
netstatByPort.set(56792, '');
netstatByPort.set(56793, 'TCP 0.0.0.0:56793 0.0.0.0:0 LISTENING 4242\r\n');
commandLineByPid.set('4242', 'cmd.exe /d /s /c vite --port 56793 --strictPort\r\n');
await stopModule.stopCommand({ port: 56792, force: false });
assert.ok(execCalls.some((c) =>
(/^powershell\b/i.test(c) && /Get-CimInstance\s+Win32_Process/i.test(c)) ||
/^ps\s+-p\s+/i.test(c)
));
assert.ok(execCalls.some((c) => /taskkill\b/i.test(c) || /Stop-Process\b/i.test(c) || /^kill\s+-/i.test(c)));
assert.ok(exitCodes.includes(0));
assert.ok(!exitCodes.includes(1));
});
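The before() comment above hints at why the patch order matters; presumably the stop module promisifies exec once at import time, roughly like this sketch (not the literal stop.ts source):

```ts
// Sketch of the import-time capture the test works around.
import { exec } from 'node:child_process';
import { promisify } from 'node:util';

const execAsync = promisify(exec); // the exec reference is frozen here, at module load

// Re-assigning require('child_process').exec after this module has been imported would
// leave execAsync pointing at the original function; hence the test installs its stub
// inside before() and only then dynamically imports the stop module.
```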