feat: add CLI settings export/import functionality

- Implemented exportSettings and importSettings APIs for CLI settings.
- Added hooks useExportSettings and useImportSettings for managing export/import operations in the frontend.
- Updated SettingsPage to include buttons for exporting and importing CLI settings.
- Enhanced backend to handle export and import requests, including validation and conflict resolution.
- Introduced new data structures for exported settings and import options.
- Updated localization files to support new export/import features.
- Refactored CLI tool configurations to remove hardcoded model defaults, allowing dynamic model retrieval.
This commit is contained in:
Author: catlog22
Date: 2026-02-25 21:40:24 +08:00
Parent: 4c2bf31525
Commit: b2b8688d26
24 changed files with 1287 additions and 651 deletions


@@ -1,16 +1,16 @@
---
name: issue-devpipeline
description: |
Plan-and-Execute pipeline with per-issue beat pattern.
Orchestrator coordinates planner (Deep Interaction) and executors (Parallel Fan-out).
Planner outputs per-issue solutions, executors implement solutions concurrently.
agents: 3
phases: 4
---
# Issue DevPipeline
Plan-while-executing pipeline. The orchestrator coordinates the planner and executor(s) through a per-issue beat pipeline: as soon as the planner finishes planning one issue it outputs the result, the orchestrator dispatches an executor agent for that issue, and the planner continues planning the next issue.
## Architecture Overview
@@ -24,24 +24,23 @@ phases: 4
│ Planner │ │ Executors (N) │
│ (Deep │ │ (Parallel Fan-out) │
│ Interaction│ │ │
per-issue) │ │ exec-1 exec-2 ... │
└──────┬──────┘ └──────────┬──────────┘
│ │
┌──────┴──────┐ ┌──────────┴──────────┐
│ issue-plan │ │ code-developer │
│ (existing) │ │ (role reference) │
└─────────────┘ └─────────────────────┘
```
**Per-Issue Beat Pipeline Flow**:
```
Planner → Issue 1 solution → ISSUE_READY
↓ (spawn executor for issue 1)
↓ send_input → Planner → Issue 2 solution → ISSUE_READY
↓ (spawn executor for issue 2)
...
↓ Planner outputs "all_planned"
↓ wait for all executor agents
↓ Aggregate results → Done
```
@@ -50,10 +49,9 @@ Planner Round 1 → Wave 1 queue
| Agent | Role File | Responsibility | New/Existing |
|-------|-----------|----------------|--------------|
| `planex-planner` | `~/.codex/agents/planex-planner.md` | Requirement breakdown → issue creation → solution design → conflict check → per-issue output | New |
| `planex-executor` | `~/.codex/agents/planex-executor.md` | Load solution → implement code → test → commit | New |
| `issue-plan-agent` | `~/.codex/agents/issue-plan-agent.md` | Closed-loop: ACE exploration + solution generation | Existing |
## Input Types
@@ -88,9 +86,16 @@ const inputPayload = {
text: textMatch ? textMatch[1] : args,
planFile: planMatch ? planMatch[1] : null
}
// Initialize session directory for artifacts
const slug = (issueIds[0] || 'batch').replace(/[^a-zA-Z0-9-]/g, '')
const dateStr = new Date().toISOString().slice(0,10).replace(/-/g,'')
const sessionId = `PEX-${slug}-${dateStr}`
const sessionDir = `.workflow/.team/${sessionId}`
shell(`mkdir -p "${sessionDir}/artifacts/solutions"`)
```
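As a concrete illustration of the naming scheme above, the following standalone sketch (using a hypothetical issue ID `ISS-142`) shows the session ID and directory it produces:

```javascript
// Same slug + date scheme as Phase 1, exercised on a sample issue list.
const issueIds = ['ISS-142']  // hypothetical input
const slug = (issueIds[0] || 'batch').replace(/[^a-zA-Z0-9-]/g, '')
const dateStr = new Date().toISOString().slice(0, 10).replace(/-/g, '')
const sessionId = `PEX-${slug}-${dateStr}`
const sessionDir = `.workflow/.team/${sessionId}`
// sessionId looks like "PEX-ISS-142-20260225"; solution artifacts then
// live under `${sessionDir}/artifacts/solutions/`.
```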
### Phase 2: Planning (Deep Interaction with Planner — Per-Issue Beat)
```javascript
// Track all agents for cleanup
@@ -108,45 +113,42 @@ const plannerId = spawn_agent({
---
Goal: Analyze requirements and output planning results one issue at a time, emitting each result immediately on completion
Input:
${JSON.stringify(inputPayload, null, 2)}
Session Dir: ${sessionDir}
Scope:
- Include: requirement analysis, issue creation, solution design, inline conflict check, writing intermediate artifacts
- Exclude: code implementation, test execution, git operations
Deliverables:
Each issue's output must strictly follow this JSON format:
\`\`\`json
{
"status": "issue_ready" | "all_planned",
"issue_id": "ISS-xxx",
"solution_id": "SOL-xxx",
"title": "Description",
"priority": "normal",
"depends_on": [],
"solution_file": "${sessionDir}/artifacts/solutions/ISS-xxx.json",
"remaining_issues": ["ISS-yyy", ...],
"summary": "Planning summary for this issue"
}
\`\`\`
Quality bar:
- Every issue must have a bound solution
- Write each solution to an intermediate artifact file
- Mark depends_on via the inline conflict check
`
})
allAgentIds.push(plannerId)
// Wait for planner first issue output
let plannerResult = wait({ ids: [plannerId], timeout_ms: 900000 })
if (plannerResult.timed_out) {
@@ -155,21 +157,21 @@ if (plannerResult.timed_out) {
}
// Parse planner output
let issueData = parseIssueOutput(plannerResult.status[plannerId].completed)
```
### Phase 3: Per-Issue Execution Loop
```javascript
const executorResults = []
let issueCount = 0
while (true) {
issueCount++
// ─── Dispatch executor for current issue (if valid) ───
if (issueData && issueData.issue_id) {
const executorId = spawn_agent({
message: `
## TASK ASSIGNMENT
@@ -180,23 +182,25 @@ while (true) {
---
Goal: Implement the solution for ${issueData.issue_id}
Issue: ${issueData.issue_id}
Solution: ${issueData.solution_id}
Title: ${issueData.title}
Priority: ${issueData.priority}
Dependencies: ${issueData.depends_on?.join(', ') || 'none'}
Solution File: ${issueData.solution_file}
Session Dir: ${sessionDir}
Scope:
- Include: load solution plan, implement code, run tests, git commit
- Exclude: issue creation, solution modification
Deliverables:
Output must strictly follow this format:
\`\`\`json
{
"issue_id": "${issueData.issue_id}",
"status": "success" | "failed",
"files_changed": ["path/to/file", ...],
"tests_passed": true | false,
@@ -214,65 +218,57 @@ Quality bar:
- Every change must be committed
`
})
allAgentIds.push(executorId)
executorResults.push({
id: executorId,
issueId: issueData.issue_id,
index: issueCount
})
}
// ─── Check if all planned ───
if (issueData?.status === 'all_planned') {
break
}
// ─── Request next issue from planner ───
send_input({
id: plannerId,
message: `Issue ${issueData?.issue_id || 'unknown'} dispatched. Continue to next issue.`
})
// ─── Wait for planner next issue ───
const nextResult = wait({ ids: [plannerId], timeout_ms: 900000 })
if (nextResult.timed_out) {
send_input({ id: plannerId, message: "Please output the planning results completed so far as soon as possible." })
const retryResult = wait({ ids: [plannerId], timeout_ms: 120000 })
if (retryResult.timed_out) break
issueData = parseIssueOutput(retryResult.status[plannerId].completed)
} else {
issueData = parseIssueOutput(nextResult.status[plannerId].completed)
}
}
// ─── Wait for all executor agents ───
const executorIds = executorResults.map(e => e.id)
if (executorIds.length > 0) {
const execResults = wait({ ids: executorIds, timeout_ms: 1200000 })
// Handle timeouts
if (execResults.timed_out) {
const pending = executorIds.filter(id => !execResults.status[id]?.completed)
pending.forEach(id => {
send_input({ id, message: "Please finalize current task and output results." })
})
wait({ ids: pending, timeout_ms: 120000 })
}
// Collect results
executorResults.forEach(entry => {
entry.result = execResults.status[entry.id]?.completed || 'timeout'
})
}
```
### Phase 4: Aggregation & Cleanup
@@ -297,19 +293,18 @@ const failed = executorResults.filter(r => {
const report = `
## PlanEx Pipeline Complete
**Total Issues**: ${executorResults.length}
**Succeeded**: ${succeeded.length}
**Failed**: ${failed.length}
### Results
${executorResults.map(r => `- ${r.issueId} | ${(() => {
try { return JSON.parse(r.result).status } catch { return 'error' }
})()}`).join('\n')}
${failed.length > 0 ? `### Failed Issues
${failed.map(r => `- ${r.issueId}: ${(() => {
try { return JSON.parse(r.result).error } catch { return r.result?.slice(0, 200) || 'unknown' }
})()}`).join('\n')}` : ''}
`
@@ -324,7 +319,7 @@ allAgentIds.forEach(id => {
## Helper Functions
```javascript
function parseIssueOutput(output) {
// Extract JSON block from agent output
const jsonMatch = output.match(/```json\s*([\s\S]*?)```/)
if (jsonMatch) {
@@ -332,8 +327,8 @@ function parseWaveOutput(output) {
}
// Fallback: try parsing entire output as JSON
try { return JSON.parse(output) } catch {}
// Last resort: return empty with all_planned
return { status: 'all_planned', issue_id: null, remaining_issues: [], summary: 'Parse failed' }
}
```
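As a usage sketch of the helper, the following standalone example (the function is copied from above so the snippet runs on its own; the planner reply is hypothetical) shows the three-stage parse: fenced JSON block first, whole-output JSON second, `all_planned` sentinel last:

```javascript
// parseIssueOutput as defined in Helper Functions, copied so this runs standalone.
function parseIssueOutput(output) {
  const jsonMatch = output.match(/```json\s*([\s\S]*?)```/)
  if (jsonMatch) {
    try { return JSON.parse(jsonMatch[1]) } catch {}
  }
  try { return JSON.parse(output) } catch {}
  return { status: 'all_planned', issue_id: null, remaining_issues: [], summary: 'Parse failed' }
}

// Hypothetical planner reply: prose followed by a fenced JSON block.
const fence = '`'.repeat(3)
const reply = [
  'Planned ISS-142.',
  fence + 'json',
  '{"status": "issue_ready", "issue_id": "ISS-142", "remaining_issues": []}',
  fence
].join('\n')

const parsed = parseIssueOutput(reply)
// parsed.status === 'issue_ready', parsed.issue_id === 'ISS-142';
// unparseable output instead falls through to the all_planned sentinel.
```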
@@ -342,11 +337,11 @@ function parseWaveOutput(output) {
```javascript
const CONFIG = {
sessionDir: ".workflow/.team/PEX-{slug}-{date}/",
artifactsDir: ".workflow/.team/PEX-{slug}-{date}/artifacts/",
issueDataDir: ".workflow/issues/",
plannerTimeout: 900000, // 15 min
executorTimeout: 1200000, // 20 min
maxIssues: 50
}
```
@@ -356,10 +351,10 @@ const CONFIG = {
| Scenario | Action |
|----------|--------|
| Planner issue timeout | send_input to urge convergence, retry wait 120s |
| Executor timeout | Mark as failed, continue other executors |
| Batch wait partial timeout | Collect completed results, continue pipeline |
| Pipeline stall (> 3 issues timeout) | Abort pipeline, output partial results |
### Cleanup Protocol
@@ -379,5 +374,5 @@ allAgentIds.forEach(id => {
| No issues created | Report error, abort pipeline |
| Solution planning failure | Skip issue, report in final results |
| Executor implementation failure | Mark as failed, continue with other executors |
| Inline conflict check failure | Use empty depends_on, continue |
| Planner exits early | Treat as all_planned, finish current executors |


@@ -1,7 +1,7 @@
---
name: planex-executor
description: |
PlanEx execution role. Loads the solution plan from an intermediate artifact file (with CLI fallback) → implements code → verifies with tests → git commit.
Each executor instance handles the solution for a single issue.
color: green
skill: issue-devpipeline
@@ -9,11 +9,11 @@ skill: issue-devpipeline
# PlanEx Executor
Code implementation role. Receives issue + solution info dispatched by the orchestrator, loads the solution plan from an intermediate artifact file (with CLI fallback), implements code changes, runs tests to verify, and commits the changes. Each executor instance handles one issue independently.
## Core Capabilities
1. **Solution loading**: Load the solution plan from an intermediate artifact file, with `ccw issue solutions <id> --json` as fallback
2. **Code implementation**: Implement code changes in the order given by the solution plan's task list
3. **Test verification**: Run relevant tests to ensure changes are correct and do not break existing functionality
4. **Change commit**: Commit the implemented code to git
@@ -179,10 +179,24 @@ function getUtc8ISOString() {
### Step 2: Solution Loading & Implementation
```javascript
// ── Load solution plan (dual-mode: artifact file first, CLI fallback) ──
const issueId = taskAssignment.issue_id
const solutionFile = taskAssignment.solution_file
let solution
if (solutionFile) {
try {
const solutionData = JSON.parse(read_file(solutionFile))
solution = solutionData.bound ? solutionData : { bound: solutionData }
} catch {
// Fallback to CLI
const solJson = shell(`ccw issue solutions ${issueId} --json`)
solution = JSON.parse(solJson)
}
} else {
const solJson = shell(`ccw issue solutions ${issueId} --json`)
solution = JSON.parse(solJson)
}
if (!solution.bound) {
outputError(`No bound solution for ${issueId}`)


@@ -1,23 +1,24 @@
---
name: planex-planner
description: |
PlanEx planning role. Requirement breakdown → issue creation → solution design → inline conflict check.
Outputs execution info one issue at a time; supports Deep Interaction multi-round exchange.
color: blue
skill: issue-devpipeline
---
# PlanEx Planner
Requirement analysis and planning role. Receives requirement input (issue IDs / text / plan file), performs requirement breakdown, issue creation, solution design (via issue-plan-agent), and inline conflict checking, then outputs execution info one issue at a time so the orchestrator can dispatch an executor immediately.
## Core Capabilities
1. **Requirement analysis**: Parse the input type and extract requirement elements
2. **Issue creation**: Break text/plan down into structured issues (via `ccw issue new`)
3. **Solution design**: Invoke issue-plan-agent to generate a solution for each issue
4. **Inline conflict check**: Detect files_touched overlap + order by explicit dependencies
5. **Intermediate artifacts**: Write each solution to a file for the executor to load directly
6. **Per-issue output**: Emit JSON as soon as each issue is planned, so the orchestrator can dispatch immediately
## Execution Process
@@ -32,6 +33,7 @@ skill: issue-devpipeline
- **Goal**: What to achieve
- **Scope**: What's allowed and forbidden
- **Input**: Input payload with type, issueIds, text, planFile
- **Session Dir**: Path for writing solution artifacts
- **Deliverables**: Expected JSON output format
### Step 2: Input Processing & Issue Creation
@@ -40,6 +42,7 @@ skill: issue-devpipeline
```javascript
const input = taskAssignment.input
const sessionDir = taskAssignment.session_dir
if (input.type === 'issue_ids') {
// Issue IDs already provided — use directly
@@ -68,28 +71,24 @@ if (input.type === 'plan_file') {
}
```
### Step 3: Per-Issue Solution Planning & Artifact Writing
Process issues one at a time: plan-agent → write intermediate artifact → conflict check → output JSON
```javascript
const projectRoot = shell('pwd').trim()
const dispatchedSolutions = []
const remainingIssues = [...issueIds]
shell(`mkdir -p "${sessionDir}/artifacts/solutions"`)
for (let i = 0; i < issueIds.length; i++) {
const issueId = issueIds[i]
remainingIssues.shift()
// --- Step 3a: Spawn issue-plan-agent for single issue ---
const planAgent = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
@@ -97,89 +96,112 @@ const planAgent = spawn_agent({
---
issue_ids: ["${issueId}"]
project_root: "${projectRoot}"
## Requirements
- Generate solution for this issue
- Auto-bind single solution
- For multiple solutions, select the most pragmatic one
`
})
const planResult = wait({ ids: [planAgent], timeout_ms: 600000 })
if (planResult.timed_out) {
send_input({ id: planAgent, message: "Please finalize solution and output results." })
wait({ ids: [planAgent], timeout_ms: 120000 })
}
close_agent({ id: planAgent })
// --- Step 3b: Load solution + write artifact file ---
const solJson = shell(`ccw issue solutions ${issueId} --json`)
const solution = JSON.parse(solJson)
const solutionFile = `${sessionDir}/artifacts/solutions/${issueId}.json`
write_file(solutionFile, JSON.stringify({
issue_id: issueId,
...solution,
timestamp: new Date().toISOString()
}, null, 2))
// --- Step 3c: Inline conflict check ---
const dependsOn = inlineConflictCheck(issueId, solution, dispatchedSolutions)
// --- Step 3d: Track + output per-issue JSON ---
dispatchedSolutions.push({ issueId, solution, solutionFile })
const isLast = remainingIssues.length === 0
// Output per-issue JSON for orchestrator
console.log(JSON.stringify({
status: isLast ? "all_planned" : "issue_ready",
issue_id: issueId,
solution_id: solution.bound?.id || 'N/A',
title: solution.bound?.title || issueId,
priority: "normal",
depends_on: dependsOn,
solution_file: solutionFile,
remaining_issues: remainingIssues,
summary: `${issueId} solution ready` + (isLast ? ` (all ${issueIds.length} issues planned)` : '')
}, null, 2))
// Wait for orchestrator send_input before continuing
// (orchestrator will send: "Issue dispatched. Continue.")
}
```
### Step 4: Output Delivery
Output format (each issue emitted independently):
```json
{
"status": "issue_ready",
"issue_id": "ISS-xxx",
"solution_id": "SOL-xxx",
"title": "Implement feature A",
"priority": "normal",
"depends_on": [],
"solution_file": ".workflow/.team/PEX-xxx/artifacts/solutions/ISS-xxx.json",
"remaining_issues": ["ISS-yyy", "ISS-zzz"],
"summary": "ISS-xxx solution ready"
}
```
**status values**:
- `"issue_ready"` — this issue is done; more issues remain
- `"all_planned"` — all issues have been planned (output of the last issue)
## Inline Conflict Check
```javascript
function inlineConflictCheck(issueId, solution, dispatchedSolutions) {
const currentFiles = solution.bound?.files_touched
|| solution.bound?.affected_files || []
const blockedBy = []
// 1. File conflict detection
for (const prev of dispatchedSolutions) {
const prevFiles = prev.solution.bound?.files_touched
|| prev.solution.bound?.affected_files || []
const overlap = currentFiles.filter(f => prevFiles.includes(f))
if (overlap.length > 0) {
blockedBy.push(prev.issueId)
}
}
// 2. Explicit dependencies
const explicitDeps = solution.bound?.dependencies?.on_issues || []
for (const depId of explicitDeps) {
if (!blockedBy.includes(depId)) {
blockedBy.push(depId)
}
}
return blockedBy
}
```
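A standalone sketch of how the check behaves (the function is copied from above so the snippet runs on its own; the issue IDs and file paths are hypothetical): an issue that touches a file an earlier issue touched, and that also declares an explicit dependency, ends up blocked by both.

```javascript
// inlineConflictCheck copied from above so this sketch runs standalone.
function inlineConflictCheck(issueId, solution, dispatchedSolutions) {
  const currentFiles = solution.bound?.files_touched
    || solution.bound?.affected_files || []
  const blockedBy = []

  // 1. File conflict detection: overlap with any already-dispatched solution
  for (const prev of dispatchedSolutions) {
    const prevFiles = prev.solution.bound?.files_touched
      || prev.solution.bound?.affected_files || []
    const overlap = currentFiles.filter(f => prevFiles.includes(f))
    if (overlap.length > 0) {
      blockedBy.push(prev.issueId)
    }
  }

  // 2. Explicit dependencies declared on the solution
  const explicitDeps = solution.bound?.dependencies?.on_issues || []
  for (const depId of explicitDeps) {
    if (!blockedBy.includes(depId)) {
      blockedBy.push(depId)
    }
  }
  return blockedBy
}

// Hypothetical dispatched solutions; the current issue overlaps ISS-2's files
// and explicitly depends on ISS-1.
const dispatched = [
  { issueId: 'ISS-1', solution: { bound: { files_touched: ['src/auth.ts'] } } },
  { issueId: 'ISS-2', solution: { bound: { files_touched: ['src/api.ts'] } } }
]
const current = {
  bound: { files_touched: ['src/api.ts'], dependencies: { on_issues: ['ISS-1'] } }
}
const deps = inlineConflictCheck('ISS-3', current, dispatched)
// deps === ['ISS-2', 'ISS-1'] → ISS-3's executor is serialized behind both
```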
## Plan File Parsing
@@ -219,11 +241,11 @@ function parsePlanPhases(planContent) {
### MUST
- Perform planning work only (requirement analysis, issue creation, solution design, conflict check)
- Output must strictly follow the JSON format
- Mark depends_on according to dependency relationships
- Write each solution to an intermediate artifact file
- Output JSON immediately after each issue is completed
### MUST NOT
@@ -237,17 +259,19 @@ function parsePlanPhases(planContent) {
**ALWAYS**:
- Read role definition file as FIRST action
- Output strictly formatted JSON for each issue
- Include `remaining_issues` for orchestrator to track progress
- Set correct `status` (`issue_ready` vs `all_planned`)
- Write solution artifact file before outputting JSON
- Include `solution_file` path in output
- Use `ccw issue new --json` for issue creation
- Clean up spawned sub-agents (issue-plan-agent)
**NEVER**:
- Implement code (executor's job)
- Output free-form text instead of structured JSON
- Skip solution planning (every issue needs a bound solution)
- Skip writing solution artifact file
## Error Handling
@@ -255,7 +279,8 @@ function parsePlanPhases(planContent) {
|----------|--------|
| Issue creation fails | Retry once with simplified text, skip if still fails |
| issue-plan-agent timeout | Retry once, output partial results |
| Inline conflict check failure | Use empty depends_on, continue |
| Solution artifact write failure | Report error in JSON output, continue |
| Plan file not found | Report in output JSON: `"error": "plan file not found"` |
| Empty input | Output: `"status": "all_planned", "error": "no input"` |
| Sub-agent parse failure | Use raw output, include in summary |