mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-02-28 09:23:08 +08:00
feat: add CLI settings export/import functionality
- Implemented exportSettings and importSettings APIs for CLI settings.
- Added hooks useExportSettings and useImportSettings for managing export/import operations in the frontend.
- Updated SettingsPage to include buttons for exporting and importing CLI settings.
- Enhanced backend to handle export and import requests, including validation and conflict resolution.
- Introduced new data structures for exported settings and import options.
- Updated localization files to support new export/import features.
- Refactored CLI tool configurations to remove hardcoded model defaults, allowing dynamic model retrieval.
This commit is contained in:
@@ -1,16 +1,16 @@
 ---
 name: issue-devpipeline
 description: |
-  Plan-and-Execute pipeline with Wave Pipeline pattern.
+  Plan-and-Execute pipeline with per-issue beat pattern.
   Orchestrator coordinates planner (Deep Interaction) and executors (Parallel Fan-out).
-  Planner produces wave queues, executors implement solutions concurrently.
-agents: 4
+  Planner outputs per-issue solutions, executors implement solutions concurrently.
+agents: 3
 phases: 4
 ---

 # Issue DevPipeline

-Plan-while-executing pipeline. The orchestrator coordinates planner and executor(s) through the Wave Pipeline: once the planner finishes planning a wave it outputs that wave's execution queue, the orchestrator immediately dispatches executor agents for the wave, and the planner continues planning the next wave.
+Plan-while-executing pipeline. The orchestrator coordinates planner and executor(s) through the per-issue beat pipeline: as soon as the planner finishes planning an issue it outputs the result, the orchestrator immediately dispatches an executor agent for that issue, and the planner continues planning the next issue.

 ## Architecture Overview

@@ -24,24 +24,23 @@ phases: 4
 │   Planner   │      │    Executors (N)    │
 │   (Deep     │      │ (Parallel Fan-out)  │
-│ Interaction │      │                     │
-│ multi-round │      │  exec-1 exec-2 ...  │
+│ per-issue)  │      │  exec-1 exec-2 ...  │
 └──────┬──────┘      └──────────┬──────────┘
        │                        │
 ┌──────┴──────┐      ┌──────────┴──────────┐
 │ issue-plan  │      │   code-developer    │
-│ issue-queue │      │  (role reference)   │
-│ (existing)  │      │                     │
+│ (existing)  │      │  (role reference)   │
 └─────────────┘      └─────────────────────┘
 ```

-**Wave Pipeline Flow**:
+**Per-Issue Beat Pipeline Flow**:
 ```
-Planner Round 1 → Wave 1 queue
-  ↓ (spawn executors for wave 1)
-  ↓ send_input → Planner Round 2 → Wave 2 queue
-  ↓ (spawn executors for wave 2)
+Planner → Issue 1 solution → ISSUE_READY
+  ↓ (spawn executor for issue 1)
+  ↓ send_input → Planner → Issue 2 solution → ISSUE_READY
+  ↓ (spawn executor for issue 2)
   ...
-  ↓ Planner outputs "ALL_PLANNED"
+  ↓ Planner outputs "all_planned"
   ↓ wait for all executor agents
   ↓ Aggregate results → Done
 ```
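The beat in the flow above can be sketched as a plain async loop. This is a minimal simulation, not the real spawn_agent/wait API: `planIssue` and `executeIssue` are hypothetical stand-ins for the planner round-trip and a spawned executor.

```javascript
// Minimal simulation of the per-issue beat: planning of issue N+1 overlaps
// with execution of issue N. planIssue/executeIssue are hypothetical stand-ins.
const planIssue = async (id) => ({ issue_id: id, status: 'issue_ready' })
const executeIssue = async (plan) => ({ issue_id: plan.issue_id, status: 'success' })

async function beatPipeline(issueIds) {
  const running = []                      // in-flight executor promises
  for (const id of issueIds) {
    const plan = await planIssue(id)      // planner emits one issue's solution
    running.push(executeIssue(plan))      // dispatch executor immediately, no await
  }
  return Promise.all(running)             // wait for all executors at the end
}

beatPipeline(['ISS-1', 'ISS-2']).then(results => {
  console.log(results.map(r => `${r.issue_id}:${r.status}`).join(','))
  // → ISS-1:success,ISS-2:success
})
```

The point of the sketch is the `push` without `await`: execution of issue N runs while the planner works on issue N+1.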
@@ -50,10 +49,9 @@ Planner Round 1 → Wave 1 queue

 | Agent | Role File | Responsibility | New/Existing |
 |-------|-----------|----------------|--------------|
-| `planex-planner` | `~/.codex/agents/planex-planner.md` | Requirement decomposition → issue creation → solution design → queue orchestration | New |
+| `planex-planner` | `~/.codex/agents/planex-planner.md` | Requirement decomposition → issue creation → solution design → conflict check → per-issue output | New |
 | `planex-executor` | `~/.codex/agents/planex-executor.md` | Load solution → code implementation → testing → commit | New |
 | `issue-plan-agent` | `~/.codex/agents/issue-plan-agent.md` | Closed-loop: ACE exploration + solution generation | Existing |
-| `issue-queue-agent` | `~/.codex/agents/issue-queue-agent.md` | Solution ordering + conflict detection → execution queue | Existing |

 ## Input Types

@@ -88,9 +86,16 @@ const inputPayload = {
   text: textMatch ? textMatch[1] : args,
   planFile: planMatch ? planMatch[1] : null
 }

+// Initialize session directory for artifacts
+const slug = (issueIds[0] || 'batch').replace(/[^a-zA-Z0-9-]/g, '')
+const dateStr = new Date().toISOString().slice(0,10).replace(/-/g,'')
+const sessionId = `PEX-${slug}-${dateStr}`
+const sessionDir = `.workflow/.team/${sessionId}`
+shell(`mkdir -p "${sessionDir}/artifacts/solutions"`)
+
 ```

-### Phase 2: Planning (Deep Interaction with Planner)
+### Phase 2: Planning (Deep Interaction with Planner — Per-Issue Beat)

 ```javascript
 // Track all agents for cleanup
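The session-naming lines added in this hunk can be exercised on their own. A sketch: `buildSessionId` is a hypothetical extraction of the inline code, with an injectable clock so the result is deterministic.

```javascript
// Hypothetical extraction of the inline session-naming logic: sanitize the
// first issue id into a slug, append a YYYYMMDD date, prefix with PEX-.
function buildSessionId(issueIds, now = new Date()) {
  const slug = (issueIds[0] || 'batch').replace(/[^a-zA-Z0-9-]/g, '')
  const dateStr = now.toISOString().slice(0, 10).replace(/-/g, '')
  return `PEX-${slug}-${dateStr}`
}

console.log(buildSessionId(['ISS-123#4'], new Date('2026-02-28T00:00:00Z')))
// → PEX-ISS-1234-20260228
```

Note the sanitizer strips anything outside `[a-zA-Z0-9-]`, so shell-unsafe characters never reach the `mkdir -p` call.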
@@ -108,45 +113,42 @@ const plannerId = spawn_agent({

 ---

-Goal: Analyze the requirements and complete planning for the first wave (Wave 1). Output the execution queue.
+Goal: Analyze the requirements and output planning results issue by issue. Output each issue as soon as it is finished.

 Input:
 ${JSON.stringify(inputPayload, null, 2)}

+Session Dir: ${sessionDir}
+
 Scope:
-- Include: requirement analysis, issue creation, solution design, queue orchestration
+- Include: requirement analysis, issue creation, solution design, inline conflict check, writing intermediate artifacts
 - Exclude: code implementation, test execution, git operations

 Deliverables:
-Output must strictly follow this JSON format:
+Each issue's output must strictly follow this JSON format:
 \`\`\`json
 {
-  "wave": 1,
-  "status": "wave_ready" | "all_planned",
-  "issues": ["ISS-xxx", ...],
-  "queue": [
-    {
-      "issue_id": "ISS-xxx",
-      "solution_id": "SOL-xxx",
-      "title": "description",
-      "priority": "normal",
-      "depends_on": []
-    }
-  ],
+  "status": "issue_ready" | "all_planned",
+  "issue_id": "ISS-xxx",
+  "solution_id": "SOL-xxx",
+  "title": "description",
+  "priority": "normal",
+  "depends_on": [],
+  "solution_file": "${sessionDir}/artifacts/solutions/ISS-xxx.json",
   "remaining_issues": ["ISS-yyy", ...],
-  "summary": "planning summary for this wave"
+  "summary": "planning summary for this issue"
 }
 \`\`\`

 Quality bar:
 - Every issue must have a bound solution
-- The queue must be ordered by dependency
-- At most 5 issues per wave
+- Write each solution to an intermediate artifact file
+- The inline conflict check marks depends_on
 `
 })
 allAgentIds.push(plannerId)

-// Wait for planner Wave 1 output
+// Wait for planner first issue output
 let plannerResult = wait({ ids: [plannerId], timeout_ms: 900000 })

 if (plannerResult.timed_out) {
@@ -155,21 +157,21 @@ if (plannerResult.timed_out) {
 }

 // Parse planner output
-let waveData = parseWaveOutput(plannerResult.status[plannerId].completed)
+let issueData = parseIssueOutput(plannerResult.status[plannerId].completed)
 ```

-### Phase 3: Wave Execution Loop
+### Phase 3: Per-Issue Execution Loop

 ```javascript
 const executorResults = []
-let waveNum = 0
+let issueCount = 0

 while (true) {
-  waveNum++
+  issueCount++

-  // ─── Dispatch executors for current wave (Parallel Fan-out) ───
-  const waveExecutors = waveData.queue.map(entry =>
-    spawn_agent({
+  // ─── Dispatch executor for current issue (if valid) ───
+  if (issueData && issueData.issue_id) {
+    const executorId = spawn_agent({
       message: `
 ## TASK ASSIGNMENT

@@ -180,23 +182,25 @@ while (true) {

 ---

-Goal: Implement the solution for ${entry.issue_id}
+Goal: Implement the solution for ${issueData.issue_id}

-Issue: ${entry.issue_id}
-Solution: ${entry.solution_id}
-Title: ${entry.title}
-Priority: ${entry.priority}
-Dependencies: ${entry.depends_on?.join(', ') || 'none'}
+Issue: ${issueData.issue_id}
+Solution: ${issueData.solution_id}
+Title: ${issueData.title}
+Priority: ${issueData.priority}
+Dependencies: ${issueData.depends_on?.join(', ') || 'none'}
+Solution File: ${issueData.solution_file}
+Session Dir: ${sessionDir}

 Scope:
 - Include: load the solution plan, code implementation, test runs, git commit
-- Exclude: issue creation, solution modification, queue changes
+- Exclude: issue creation, solution modification

 Deliverables:
 Output must strictly follow this format:
 \`\`\`json
 {
-  "issue_id": "${entry.issue_id}",
+  "issue_id": "${issueData.issue_id}",
   "status": "success" | "failed",
   "files_changed": ["path/to/file", ...],
   "tests_passed": true | false,
@@ -214,65 +218,57 @@ Quality bar:
 - Every change must be committed
 `
     })
-  )
-  allAgentIds.push(...waveExecutors)
-
-  // ─── Check if more waves needed ───
-  if (waveData.status === 'all_planned') {
-    // No more waves — wait for current executors and finish
-    const execResults = wait({ ids: waveExecutors, timeout_ms: 1200000 })
-    waveExecutors.forEach((id, i) => {
-      executorResults.push({
-        wave: waveNum,
-        issue: waveData.queue[i].issue_id,
-        result: execResults.status[id]?.completed || 'timeout'
-      })
+    allAgentIds.push(executorId)
+    executorResults.push({
+      id: executorId,
+      issueId: issueData.issue_id,
+      index: issueCount
     })
+  }
+
+  // ─── Check if all planned ───
+  if (issueData?.status === 'all_planned') {
     break
   }

-  // ─── Request next wave from planner (while executors run) ───
+  // ─── Request next issue from planner ───
   send_input({
     id: plannerId,
-    message: `
-## WAVE ${waveNum} dispatched
-
-Created ${waveExecutors.length} executor agents for Wave ${waveNum}.
-
-## NEXT
-Continue planning the next wave (Wave ${waveNum + 1}).
-Remaining issues: ${JSON.stringify(waveData.remaining_issues)}
-
-Same output format as before. If all issues are planned, set status to "all_planned".
-`
+    message: `Issue ${issueData?.issue_id || 'unknown'} dispatched. Continue to next issue.`
   })

-  // ─── Wait for both: executors (current wave) + planner (next wave) ───
-  const allWaiting = [...waveExecutors, plannerId]
-  const batchResult = wait({ ids: allWaiting, timeout_ms: 1200000 })
+  // ─── Wait for planner next issue ───
+  const nextResult = wait({ ids: [plannerId], timeout_ms: 900000 })

-  // Collect executor results
-  waveExecutors.forEach((id, i) => {
-    executorResults.push({
-      wave: waveNum,
-      issue: waveData.queue[i].issue_id,
-      result: batchResult.status[id]?.completed || 'timeout'
-    })
-  })
-
-  // Parse next wave from planner
-  if (batchResult.status[plannerId]?.completed) {
-    waveData = parseWaveOutput(batchResult.status[plannerId].completed)
+  if (nextResult.timed_out) {
+    send_input({ id: plannerId, message: "Please output the planning results you have completed so far." })
+    const retryResult = wait({ ids: [plannerId], timeout_ms: 120000 })
+    if (retryResult.timed_out) break
+    issueData = parseIssueOutput(retryResult.status[plannerId].completed)
   } else {
-    // Planner timed out — wait more
-    const plannerRetry = wait({ ids: [plannerId], timeout_ms: 300000 })
-    if (plannerRetry.timed_out) {
-      // Abort pipeline
-      break
-    }
-    waveData = parseWaveOutput(plannerRetry.status[plannerId].completed)
+    issueData = parseIssueOutput(nextResult.status[plannerId].completed)
   }
 }

+// ─── Wait for all executor agents ───
+const executorIds = executorResults.map(e => e.id)
+if (executorIds.length > 0) {
+  const execResults = wait({ ids: executorIds, timeout_ms: 1200000 })
+
+  // Handle timeouts
+  if (execResults.timed_out) {
+    const pending = executorIds.filter(id => !execResults.status[id]?.completed)
+    pending.forEach(id => {
+      send_input({ id, message: "Please finalize current task and output results." })
+    })
+    wait({ ids: pending, timeout_ms: 120000 })
+  }
+
+  // Collect results
+  executorResults.forEach(entry => {
+    entry.result = execResults.status[entry.id]?.completed || 'timeout'
+  })
+}
 ```

 ### Phase 4: Aggregation & Cleanup
@@ -297,19 +293,18 @@ const failed = executorResults.filter(r => {
 const report = `
 ## PlanEx Pipeline Complete

-**Waves**: ${waveNum}
 **Total Issues**: ${executorResults.length}
 **Succeeded**: ${succeeded.length}
 **Failed**: ${failed.length}

-### Results by Wave
-${executorResults.map(r => `- Wave ${r.wave} | ${r.issue} | ${(() => {
+### Results
+${executorResults.map(r => `- ${r.issueId} | ${(() => {
   try { return JSON.parse(r.result).status } catch { return 'error' }
 })()}`).join('\n')}

 ${failed.length > 0 ? `### Failed Issues
-${failed.map(r => `- ${r.issue}: ${(() => {
-  try { return JSON.parse(r.result).error } catch { return r.result.slice(0, 200) }
+${failed.map(r => `- ${r.issueId}: ${(() => {
+  try { return JSON.parse(r.result).error } catch { return r.result?.slice(0, 200) || 'unknown' }
 })()}`).join('\n')}` : ''}
 `

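The results section of the report can be checked in isolation with stubbed executor results. A sketch: `resultsSection` is a hypothetical extraction of the inline template logic, assuming each `result` string holds the executor's raw output.

```javascript
// Build the per-issue results lines from stubbed executor results.
// Non-JSON output (e.g. the literal 'timeout' marker) counts as 'error'.
function resultsSection(executorResults) {
  return executorResults.map(r => {
    let status
    try { status = JSON.parse(r.result).status } catch { status = 'error' }
    return `- ${r.issueId} | ${status}`
  }).join('\n')
}

const section = resultsSection([
  { issueId: 'ISS-1', result: '{"status":"success"}' },
  { issueId: 'ISS-2', result: 'timeout' }
])
console.log(section)
// → - ISS-1 | success
//   - ISS-2 | error
```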
@@ -324,7 +319,7 @@ allAgentIds.forEach(id => {
 ## Helper Functions

 ```javascript
-function parseWaveOutput(output) {
+function parseIssueOutput(output) {
   // Extract JSON block from agent output
   const jsonMatch = output.match(/```json\s*([\s\S]*?)```/)
   if (jsonMatch) {
@@ -332,8 +327,8 @@ function parseWaveOutput(output) {
   }
   // Fallback: try parsing entire output as JSON
   try { return JSON.parse(output) } catch {}
-  // Last resort: return empty wave with all_planned
-  return { wave: 0, status: 'all_planned', queue: [], remaining_issues: [], summary: 'Parse failed' }
+  // Last resort: return empty with all_planned
+  return { status: 'all_planned', issue_id: null, remaining_issues: [], summary: 'Parse failed' }
 }
 ```

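The helper above is easy to test standalone. This sketch mirrors the new parseIssueOutput with the same three-stage fallback; the fence regex is built dynamically only so the snippet can itself live inside a fenced block.

```javascript
// Mirrors parseIssueOutput above: fenced json block → whole-output JSON →
// synthetic all_planned record so the orchestrator loop always terminates.
const FENCE = '`'.repeat(3)
const JSON_BLOCK = new RegExp(FENCE + 'json\\s*([\\s\\S]*?)' + FENCE)

function parseIssueOutput(output) {
  const jsonMatch = output.match(JSON_BLOCK)
  if (jsonMatch) {
    try { return JSON.parse(jsonMatch[1]) } catch {}
  }
  try { return JSON.parse(output) } catch {}
  return { status: 'all_planned', issue_id: null, remaining_issues: [], summary: 'Parse failed' }
}

const fenced = 'done\n' + FENCE + 'json\n{"status":"issue_ready","issue_id":"ISS-1"}\n' + FENCE
console.log(parseIssueOutput(fenced).issue_id)   // → ISS-1
console.log(parseIssueOutput('garbage').status)  // → all_planned
```

The "parse failed" fallback deliberately reports `all_planned`: a planner whose output can never be parsed would otherwise stall the loop forever.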
@@ -342,11 +337,11 @@ function parseWaveOutput(output) {
 ```javascript
 const CONFIG = {
   sessionDir: ".workflow/.team/PEX-{slug}-{date}/",
   artifactsDir: ".workflow/.team/PEX-{slug}-{date}/artifacts/",
   issueDataDir: ".workflow/issues/",
-  maxWaveSize: 5,
   plannerTimeout: 900000, // 15 min
   executorTimeout: 1200000, // 20 min
-  maxWaves: 10
+  maxIssues: 50
 }
 ```

@@ -356,10 +351,10 @@ const CONFIG = {

 | Scenario | Action |
 |----------|--------|
-| Planner wave timeout | send_input to urge convergence, retry wait 120s |
+| Planner issue timeout | send_input to urge convergence, retry wait 120s |
 | Executor timeout | Mark as failed, continue with other executors |
 | Batch wait partial timeout | Collect completed results, continue pipeline |
-| Pipeline stall (> 2 waves timeout) | Abort pipeline, output partial results |
+| Pipeline stall (> 3 issues timeout) | Abort pipeline, output partial results |

 ### Cleanup Protocol

@@ -379,5 +374,5 @@ allAgentIds.forEach(id => {
 | No issues created | Report error, abort pipeline |
 | Solution planning failure | Skip issue, report in final results |
 | Executor implementation failure | Mark as failed, continue with other executors |
-| All executors in wave fail | Report wave failure, continue to next wave |
-| Planner exits early | Treat as all_planned, finish current wave |
+| Inline conflict check failure | Use empty depends_on, continue |
+| Planner exits early | Treat as all_planned, finish current executors |

@@ -1,7 +1,7 @@
 ---
 name: planex-executor
 description: |
-  PlanEx execution role. Load the solution plan → code implementation → test verification → git commit.
+  PlanEx execution role. Load the solution plan from the intermediate artifact file (with CLI fallback) → code implementation → test verification → git commit.
   Each executor instance handles one issue's solution.
 color: green
 skill: issue-devpipeline
@@ -9,11 +9,11 @@ skill: issue-devpipeline

 # PlanEx Executor

-Code implementation role. Receives the issue + solution info dispatched by the orchestrator, loads the solution plan, implements the code changes, runs tests to verify, and commits the changes. Each executor instance handles one issue independently.
+Code implementation role. Receives the issue + solution info dispatched by the orchestrator, loads the solution plan from the intermediate artifact file (with CLI fallback), implements the code changes, runs tests to verify, and commits the changes. Each executor instance handles one issue independently.

 ## Core Capabilities

-1. **Solution loading**: Load the bound solution plan via `ccw issue solutions <id> --json`
+1. **Solution loading**: Load the solution plan from the intermediate artifact file (with `ccw issue solutions <id> --json` fallback)
 2. **Code implementation**: Implement code changes in the order of the solution plan's task list
 3. **Test verification**: Run the relevant tests to make sure changes are correct and do not break existing functionality
 4. **Change commit**: Commit the implemented code to git
@@ -179,10 +179,24 @@ function getUtc8ISOString() {
 ### Step 2: Solution Loading & Implementation

 ```javascript
-// ── Load solution plan ──
+// ── Load solution plan (dual-mode: artifact file first, CLI fallback) ──
 const issueId = taskAssignment.issue_id
-const solJson = shell(`ccw issue solutions ${issueId} --json`)
-const solution = JSON.parse(solJson)
+const solutionFile = taskAssignment.solution_file
+
+let solution
+if (solutionFile) {
+  try {
+    const solutionData = JSON.parse(read_file(solutionFile))
+    solution = solutionData.bound ? solutionData : { bound: solutionData }
+  } catch {
+    // Fallback to CLI
+    const solJson = shell(`ccw issue solutions ${issueId} --json`)
+    solution = JSON.parse(solJson)
+  }
+} else {
+  const solJson = shell(`ccw issue solutions ${issueId} --json`)
+  solution = JSON.parse(solJson)
+}

 if (!solution.bound) {
   outputError(`No bound solution for ${issueId}`)

@@ -1,23 +1,24 @@
 ---
 name: planex-planner
 description: |
-  PlanEx planning role. Requirement decomposition → issue creation → solution design → queue orchestration.
-  Outputs execution queues wave by wave; supports Deep Interaction multi-round exchange.
+  PlanEx planning role. Requirement decomposition → issue creation → solution design → inline conflict check.
+  Outputs execution info issue by issue; supports Deep Interaction multi-round exchange.
 color: blue
 skill: issue-devpipeline
 ---

 # PlanEx Planner

-Requirement analysis and planning role. Receives requirement input (issue IDs / text / plan file), performs requirement decomposition, issue creation, solution design (via issue-plan-agent), and queue orchestration (via issue-queue-agent), and outputs execution queues wave by wave for the orchestrator to dispatch executors.
+Requirement analysis and planning role. Receives requirement input (issue IDs / text / plan file), performs requirement decomposition, issue creation, solution design (via issue-plan-agent), and inline conflict checking, and outputs execution info issue by issue so the orchestrator can dispatch executors immediately.

 ## Core Capabilities

 1. **Requirement analysis**: Parse the input type and extract the requirement elements
 2. **Issue creation**: Decompose text/plan into structured issues (via `ccw issue new`)
 3. **Solution design**: Call issue-plan-agent to generate a solution for each issue
-4. **Queue orchestration**: Call issue-queue-agent to order solutions by dependency into an execution queue
-5. **Wave output**: At most 5 issues per wave, output a structured JSON queue
+4. **Inline conflict check**: Overlap detection based on files_touched + explicit dependency ordering
+5. **Intermediate artifacts**: Write each solution to a file for the executor to load directly
+6. **Per-issue output**: Output JSON as soon as each issue is done; the orchestrator dispatches immediately

 ## Execution Process

@@ -32,6 +33,7 @@ skill: issue-devpipeline
 - **Goal**: What to achieve
 - **Scope**: What's allowed and forbidden
 - **Input**: Input payload with type, issueIds, text, planFile
+- **Session Dir**: Path for writing solution artifacts
 - **Deliverables**: Expected JSON output format

 ### Step 2: Input Processing & Issue Creation
@@ -40,6 +42,7 @@ skill: issue-devpipeline

 ```javascript
 const input = taskAssignment.input
+const sessionDir = taskAssignment.session_dir

 if (input.type === 'issue_ids') {
   // Issue IDs already provided; use directly
@@ -68,28 +71,24 @@ if (input.type === 'plan_file') {
 }
 ```

-### Step 3: Solution Planning & Queue Formation
+### Step 3: Per-Issue Solution Planning & Artifact Writing

-Process issues in waves, at most 5 per wave.
+Process issue by issue: plan-agent → write intermediate artifact → conflict check → output JSON.

 ```javascript
-const WAVE_SIZE = 5
-const allIssues = [...issueIds]
-const waves = []
+const projectRoot = shell('pwd').trim()
+const dispatchedSolutions = []
+const remainingIssues = [...issueIds]

-for (let i = 0; i < allIssues.length; i += WAVE_SIZE) {
-  waves.push(allIssues.slice(i, i + WAVE_SIZE))
-}
+shell(`mkdir -p "${sessionDir}/artifacts/solutions"`)

-// Process the first wave (subsequent waves are triggered via send_input)
-const currentWave = waves[0]
-const remainingWaves = waves.slice(1)
-const remainingIssues = remainingWaves.flat()
+for (let i = 0; i < issueIds.length; i++) {
+  const issueId = issueIds[i]
+  remainingIssues.shift()

-// ── Solution Planning ──
-// Call issue-plan-agent to generate solutions for the current wave's issues
-const planAgent = spawn_agent({
-  message: `
+  // --- Step 3a: Spawn issue-plan-agent for single issue ---
+  const planAgent = spawn_agent({
+    message: `
 ## TASK ASSIGNMENT

 ### MANDATORY FIRST STEPS (Agent Execute)
@@ -97,89 +96,112 @@ const planAgent = spawn_agent({

 ---

-issue_ids: ${JSON.stringify(currentWave)}
-project_root: "${shell('pwd').trim()}"
+issue_ids: ["${issueId}"]
+project_root: "${projectRoot}"

 ## Requirements
-- Generate solutions for each issue
-- Auto-bind single solutions
+- Generate solution for this issue
+- Auto-bind single solution
 - For multiple solutions, select the most pragmatic one
 `
-})
-const planResult = wait({ ids: [planAgent], timeout_ms: 600000 })
-close_agent({ id: planAgent })
+  })
+  const planResult = wait({ ids: [planAgent], timeout_ms: 600000 })

-// ── Queue Formation ──
-// Call issue-queue-agent to form the execution queue
-const queueAgent = spawn_agent({
-  message: `
-## TASK ASSIGNMENT
+  if (planResult.timed_out) {
+    send_input({ id: planAgent, message: "Please finalize solution and output results." })
+    wait({ ids: [planAgent], timeout_ms: 120000 })
+  }

-### MANDATORY FIRST STEPS (Agent Execute)
-1. **Read role definition**: ~/.codex/agents/issue-queue-agent.md (MUST read first)
+  close_agent({ id: planAgent })

----
+  // --- Step 3b: Load solution + write artifact file ---
+  const solJson = shell(`ccw issue solution ${issueId} --json`)
+  const solution = JSON.parse(solJson)

-issue_ids: ${JSON.stringify(currentWave)}
-project_root: "${shell('pwd').trim()}"
+  const solutionFile = `${sessionDir}/artifacts/solutions/${issueId}.json`
+  write_file(solutionFile, JSON.stringify({
+    issue_id: issueId,
+    ...solution,
+    timestamp: new Date().toISOString()
+  }, null, 2))

-## Requirements
-- Order solutions by dependency (DAG)
-- Detect conflicts between solutions
-- Output execution queue
-`
-})
-const queueResult = wait({ ids: [queueAgent], timeout_ms: 300000 })
-close_agent({ id: queueAgent })
+  // --- Step 3c: Inline conflict check ---
+  const dependsOn = inlineConflictCheck(issueId, solution, dispatchedSolutions)

-// Read the generated queue file
-const queuePath = '.workflow/issues/queue/execution-queue.json'
-const queue = JSON.parse(readFile(queuePath))
+  // --- Step 3d: Track + output per-issue JSON ---
+  dispatchedSolutions.push({ issueId, solution, solutionFile })
+
+  const isLast = remainingIssues.length === 0
+
+  // Output per-issue JSON for orchestrator
+  console.log(JSON.stringify({
+    status: isLast ? "all_planned" : "issue_ready",
+    issue_id: issueId,
+    solution_id: solution.bound?.id || 'N/A',
+    title: solution.bound?.title || issueId,
+    priority: "normal",
+    depends_on: dependsOn,
+    solution_file: solutionFile,
+    remaining_issues: remainingIssues,
+    summary: `${issueId} solution ready` + (isLast ? ` (all ${issueIds.length} issues planned)` : '')
+  }, null, 2))
+
+  // Wait for orchestrator send_input before continuing
+  // (orchestrator will send: "Issue dispatched. Continue.")
+}
 ```

 ### Step 4: Output Delivery

-Output must strictly follow the JSON format the orchestrator requires.
+Output format (each issue output independently):

 ```json
 {
-  "wave": 1,
-  "status": "wave_ready",
-  "issues": ["ISS-xxx", "ISS-yyy"],
-  "queue": [
-    {
-      "issue_id": "ISS-xxx",
-      "solution_id": "SOL-xxx",
-      "title": "Implement feature A",
-      "priority": "normal",
-      "depends_on": []
-    },
-    {
-      "issue_id": "ISS-yyy",
-      "solution_id": "SOL-yyy",
-      "title": "Implement feature B",
-      "priority": "normal",
-      "depends_on": ["ISS-xxx"]
-    }
-  ],
-  "remaining_issues": ["ISS-zzz"],
-  "summary": "Wave 1 planning complete: 2 issues, ordered by dependency"
+  "status": "issue_ready",
+  "issue_id": "ISS-xxx",
+  "solution_id": "SOL-xxx",
+  "title": "Implement feature A",
+  "priority": "normal",
+  "depends_on": [],
+  "solution_file": ".workflow/.team/PEX-xxx/artifacts/solutions/ISS-xxx.json",
+  "remaining_issues": ["ISS-yyy", "ISS-zzz"],
+  "summary": "ISS-xxx solution ready"
 }
 ```

 **status values**:
-- `"wave_ready"` — this wave is done, more waves follow
-- `"all_planned"` — all issues planned (includes the final wave's queue)
+- `"issue_ready"` — this issue is done, more issues follow
+- `"all_planned"` — all issues planned (output of the last issue)

-### Multi-Round: Handling Subsequent Waves
+## Inline Conflict Check

-The orchestrator triggers planning of subsequent waves via `send_input`. After receiving a send_input:
+```javascript
+function inlineConflictCheck(issueId, solution, dispatchedSolutions) {
+  const currentFiles = solution.bound?.files_touched
+    || solution.bound?.affected_files || []
+  const blockedBy = []

-1. Parse the `remaining_issues` list
-2. Take the next batch (at most WAVE_SIZE)
-3. Repeat Step 3's solution planning + queue formation
-4. Output the next wave's JSON
-5. If no issues remain, set `status` to `"all_planned"`
+  // 1. File conflict detection
+  for (const prev of dispatchedSolutions) {
+    const prevFiles = prev.solution.bound?.files_touched
+      || prev.solution.bound?.affected_files || []
+    const overlap = currentFiles.filter(f => prevFiles.includes(f))
+    if (overlap.length > 0) {
+      blockedBy.push(prev.issueId)
+    }
+  }
+
+  // 2. Explicit dependencies
+  const explicitDeps = solution.bound?.dependencies?.on_issues || []
+  for (const depId of explicitDeps) {
+    if (!blockedBy.includes(depId)) {
+      blockedBy.push(depId)
+    }
+  }
+
+  return blockedBy
+}
+```
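The conflict check can be exercised standalone. This sketch repeats the function so the snippet runs on its own; the stub solutions and file paths are hypothetical.

```javascript
// Standalone check of the overlap rule: ISS-2 touches a file that ISS-1
// already touched, so it must be blocked by ISS-1. Stub data throughout.
function inlineConflictCheck(issueId, solution, dispatchedSolutions) {
  const currentFiles = solution.bound?.files_touched
    || solution.bound?.affected_files || []
  const blockedBy = []
  // 1. File conflict detection against already-dispatched solutions
  for (const prev of dispatchedSolutions) {
    const prevFiles = prev.solution.bound?.files_touched
      || prev.solution.bound?.affected_files || []
    const overlap = currentFiles.filter(f => prevFiles.includes(f))
    if (overlap.length > 0) blockedBy.push(prev.issueId)
  }
  // 2. Explicit dependencies declared on the bound solution
  for (const depId of solution.bound?.dependencies?.on_issues || []) {
    if (!blockedBy.includes(depId)) blockedBy.push(depId)
  }
  return blockedBy
}

const dispatched = [{ issueId: 'ISS-1', solution: { bound: { files_touched: ['src/a.ts'] } } }]
const deps = inlineConflictCheck('ISS-2',
  { bound: { files_touched: ['src/a.ts', 'src/b.ts'] } }, dispatched)
console.log(deps)  // → [ 'ISS-1' ]
```

Because only already-dispatched issues can appear in `blockedBy`, the dependencies an executor sees always point backwards in dispatch order.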

 ## Plan File Parsing

@@ -219,11 +241,11 @@ function parsePlanPhases(planContent) {

 ### MUST

-- Only do planning-related work (requirement analysis, issue creation, solution design, queue orchestration)
+- Only do planning-related work (requirement analysis, issue creation, solution design, conflict check)
 - Output strictly follows the JSON format
-- At most 5 issues per wave
-- Order the queue by dependencies
-- Reuse the existing issue-plan-agent and issue-queue-agent
+- Mark depends_on according to dependencies
+- Write each solution to an intermediate artifact file
+- Output JSON immediately after each issue completes

 ### MUST NOT

@@ -237,17 +259,19 @@ function parsePlanPhases(planContent) {

 **ALWAYS**:
 - Read role definition file as FIRST action
-- Output strictly formatted JSON for each wave
+- Output strictly formatted JSON for each issue
 - Include `remaining_issues` for orchestrator to track progress
-- Set correct `status` (`wave_ready` vs `all_planned`)
+- Set correct `status` (`issue_ready` vs `all_planned`)
+- Write solution artifact file before outputting JSON
+- Include `solution_file` path in output
 - Use `ccw issue new --json` for issue creation
-- Clean up spawned sub-agents (issue-plan-agent, issue-queue-agent)
+- Clean up spawned sub-agents (issue-plan-agent)

 **NEVER**:
 - Implement code (executor's job)
 - Output free-form text instead of structured JSON
 - Skip solution planning (every issue needs a bound solution)
-- Hold more than 5 issues in a single wave
+- Skip writing solution artifact file

 ## Error Handling

@@ -255,7 +279,8 @@ function parsePlanPhases(planContent) {
 |----------|--------|
 | Issue creation fails | Retry once with simplified text, skip if still fails |
 | issue-plan-agent timeout | Retry once, output partial results |
-| issue-queue-agent timeout | Output queue without dependency ordering |
+| Inline conflict check failure | Use empty depends_on, continue |
+| Solution artifact write failure | Report error in JSON output, continue |
 | Plan file not found | Report in output JSON: `"error": "plan file not found"` |
-| Empty input | Output: `"status": "all_planned", "queue": [], "error": "no input"` |
+| Empty input | Output: `"status": "all_planned", "error": "no input"` |
 | Sub-agent parse failure | Use raw output, include in summary |

@@ -1,13 +1,13 @@
 ---
 name: team-planex
-description: 2-member plan-and-execute pipeline with Wave Pipeline for concurrent planning and execution. Planner decomposes requirements into issues, generates solutions, forms execution queues. Executor implements solutions via configurable backends (agent/codex/gemini). Triggers on "team planex".
+description: 2-member plan-and-execute pipeline with per-issue beat pipeline for concurrent planning and execution. Planner decomposes requirements into issues, generates solutions, writes artifacts. Executor implements solutions via configurable backends (agent/codex/gemini). Triggers on "team planex".
 allowed-tools: spawn_agent, wait, send_input, close_agent, AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep
 argument-hint: "<issue-ids|--text 'description'|--plan path> [--exec=agent|codex|gemini|auto] [-y]"
 ---

 # Team PlanEx

-2-member plan-while-executing team. Planner and executor work in parallel via the Wave Pipeline: after the planner finishes one wave's queue, the orchestrator immediately spawns executor agents for that wave, while send_input lets the planner continue with the next wave.
+2-member plan-while-executing team. Planner and executor work in parallel via the per-issue beat pipeline: each time the planner finishes one issue's solution it outputs an ISSUE_READY signal, the orchestrator immediately spawns an executor agent for that issue, and send_input lets the planner continue with the next issue.

 ## Architecture Overview

@@ -16,7 +16,7 @@ argument-hint: "<issue-ids|--text 'description'|--plan path> [--exec=agent|codex
 │ Orchestrator (this file)                     │
 │ → Parse input → Spawn planner → Spawn exec   │
 └────────────────┬─────────────────────────────┘
-                 │ Wave Pipeline
+                 │ Per-Issue Beat Pipeline
         ┌───────┴───────┐
         ↓               ↓
   ┌─────────┐     ┌──────────┐
@@ -25,17 +25,16 @@ argument-hint: "<issue-ids|--text 'description'|--plan path> [--exec=agent|codex
   └─────────┘     └──────────┘
        │               │
  issue-plan-agent  code-developer
-issue-queue-agent (or codex/gemini CLI)
+                  (or codex/gemini CLI)
 ```

 ## Agent Registry

 | Agent | Role File | Responsibility | New/Existing |
 |-------|-----------|----------------|--------------|
-| `planex-planner` | `.codex/skills/team-planex/agents/planex-planner.md` | Requirement decomposition → issue creation → solution design → queue orchestration | New (skill-specific) |
+| `planex-planner` | `.codex/skills/team-planex/agents/planex-planner.md` | Requirement decomposition → issue creation → solution design → conflict check → per-issue dispatch | New (skill-specific) |
 | `planex-executor` | `.codex/skills/team-planex/agents/planex-executor.md` | Load solution → code implementation → testing → commit | New (skill-specific) |
 | `issue-plan-agent` | `~/.codex/agents/issue-plan-agent.md` | ACE exploration + solution generation + binding | Existing |
-| `issue-queue-agent` | `~/.codex/agents/issue-queue-agent.md` | Solution ordering + conflict detection | Existing |
 | `code-developer` | `~/.codex/agents/code-developer.md` | Code implementation (agent backend) | Existing |

 ## Input Types
@@ -86,11 +85,18 @@ if (explicitExec) {
|
||||
// Interactive: ask user for preferences
|
||||
// (orchestrator handles user interaction directly)
|
||||
}
|
||||
|
||||
// Initialize session directory for artifacts
|
||||
const slug = (issueIds[0] || 'batch').replace(/[^a-zA-Z0-9-]/g, '')
|
||||
const dateStr = new Date().toISOString().slice(0,10).replace(/-/g,'')
|
||||
const sessionId = `PEX-${slug}-${dateStr}`
|
||||
const sessionDir = `.workflow/.team/${sessionId}`
|
||||
shell(`mkdir -p "${sessionDir}/artifacts/solutions"`)
|
||||
```
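
The session-ID derivation above can be exercised standalone. A minimal sketch (inputs are hypothetical; `buildSessionId` is a name introduced here, not part of the commit):

```javascript
// Sketch of the Phase 1 session-ID scheme: PEX-<slug>-<yyyymmdd>.
// Assumption: issue IDs look like "ISS-20260215-001".
function buildSessionId(issueIds, now = new Date()) {
  // Slug from the first issue ID, or 'batch' when none were given
  const slug = (issueIds[0] || 'batch').replace(/[^a-zA-Z0-9-]/g, '')
  const dateStr = now.toISOString().slice(0, 10).replace(/-/g, '')
  return `PEX-${slug}-${dateStr}`
}

const fixed = new Date('2026-02-15T10:00:00Z')
console.log(buildSessionId(['ISS-20260215-001'], fixed))
// PEX-ISS-20260215-001-20260215
console.log(buildSessionId([], fixed))
// PEX-batch-20260215
```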

### Phase 2: Planning (Planner Agent — Deep Interaction)
### Phase 2: Planning (Planner Agent — Per-Issue Beat)

Spawn planner agent for wave-based planning. Uses send_input for multi-wave progression.
Spawn planner agent for per-issue planning. Uses send_input for issue-by-issue progression.

```javascript
// Build planner input context
@@ -110,7 +116,7 @@ const planner = spawn_agent({

---

Goal: Decompose requirements into waves of executable solutions
Goal: Decompose requirements into executable solutions (per-issue beat)

## Input
${plannerInput}
@@ -119,61 +125,64 @@ ${plannerInput}
execution_method: ${executionConfig.executionMethod}
code_review: ${executionConfig.codeReviewTool}

## Session Dir
session_dir: ${sessionDir}

## Deliverables
For EACH wave, output structured wave data:
For EACH issue, output structured data:

\`\`\`
WAVE_READY:
wave_number: N
issue_ids: [ISS-xxx, ...]
queue_path: .workflow/issues/queue/execution-queue.json
exec_tasks: [
{ issue_id: "ISS-xxx", solution_id: "SOL-xxx", title: "...", priority: "normal", depends_on: [] },
...
]
ISSUE_READY:
{
"issue_id": "ISS-xxx",
"solution_id": "SOL-xxx",
"title": "...",
"priority": "normal",
"depends_on": [],
"solution_file": "${sessionDir}/artifacts/solutions/ISS-xxx.json"
}
\`\`\`

After ALL waves planned, output:
After ALL issues planned, output:
\`\`\`
ALL_PLANNED:
total_waves: N
total_issues: N
{ "total_issues": N }
\`\`\`

## Quality bar
- Every issue has a bound solution
- Queue respects dependency DAG
- Wave boundaries are logical groupings
- Solution artifact written to file before output
- Inline conflict check determines depends_on
`
})

// Wait for Wave 1
const wave1 = wait({ ids: [planner], timeout_ms: 600000 })
// Wait for first ISSUE_READY
const firstIssue = wait({ ids: [planner], timeout_ms: 600000 })

if (wave1.timed_out) {
send_input({ id: planner, message: "Please finalize current wave and output WAVE_READY." })
if (firstIssue.timed_out) {
send_input({ id: planner, message: "Please finalize current issue and output ISSUE_READY." })
const retry = wait({ ids: [planner], timeout_ms: 120000 })
}

// Parse wave data from planner output
const wave1Data = parseWaveReady(wave1.status[planner].completed)
// Parse first issue data
const firstIssueData = parseIssueReady(firstIssue.status[planner].completed)
```
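
The ISSUE_READY contract can be round-tripped with a small parser sketch (the regex mirrors the one in the Helper Functions section; the sample planner output is hypothetical):

```javascript
// Extract the ISSUE_READY JSON payload from raw planner output.
function parseIssueReady(output) {
  const match = output.match(/ISSUE_READY:\s*\n([\s\S]*?)(?=\n```|$)/)
  if (!match) return null
  try { return JSON.parse(match[1]) } catch { return null }
}

// Hypothetical planner output following the Deliverables template above
const sample = [
  'ISSUE_READY:',
  JSON.stringify({
    issue_id: 'ISS-20260215-001',
    solution_id: 'SOL-001',
    priority: 'normal',
    depends_on: [],
    solution_file: '.workflow/.team/PEX-demo/artifacts/solutions/ISS-20260215-001.json'
  }, null, 2)
].join('\n')

console.log(parseIssueReady(sample).issue_id)   // ISS-20260215-001
console.log(parseIssueReady('no marker here'))  // null
```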

### Phase 3: Wave Pipeline (Planning + Execution Interleaved)
### Phase 3: Per-Issue Beat Pipeline (Planning + Execution Interleaved)

Pipeline: spawn executor for current wave while planner continues next wave.
Pipeline: spawn executor for current issue while planner continues next issue.

```javascript
const allAgentIds = [planner]
const executorAgents = []
let waveNum = 1
let allPlanned = false
let currentIssueOutput = firstIssue.status[planner].completed

while (!allPlanned) {
// --- Spawn executor for current wave ---
const waveData = parseWaveReady(currentWaveOutput)
// --- Spawn executor for current issue ---
const issueData = parseIssueReady(currentIssueOutput)

if (waveData && waveData.exec_tasks.length > 0) {
if (issueData) {
const executor = spawn_agent({
message: `
## TASK ASSIGNMENT
@@ -185,75 +194,82 @@ while (!allPlanned) {

---

Goal: Implement all solutions in Wave ${waveNum}
Goal: Implement solution for ${issueData.issue_id}

## Wave ${waveNum} Tasks
${JSON.stringify(waveData.exec_tasks, null, 2)}
## Task
${JSON.stringify([issueData], null, 2)}

## Execution Config
execution_method: ${executionConfig.executionMethod}
code_review: ${executionConfig.codeReviewTool}

## Solution File
solution_file: ${issueData.solution_file}

## Session Dir
session_dir: ${sessionDir}

## Deliverables
For each task, output:
\`\`\`
IMPL_COMPLETE:
issue_id: ISS-xxx
issue_id: ${issueData.issue_id}
status: success|failed
test_result: pass|fail
commit: <hash or N/A>
\`\`\`

After all wave tasks done:
\`\`\`
WAVE_DONE:
wave_number: ${waveNum}
completed: N
failed: N
\`\`\`

## Quality bar
- All existing tests pass after each implementation
- All existing tests pass after implementation
- Code follows project conventions
- One commit per solution
`
})
allAgentIds.push(executor)
executorAgents.push({ id: executor, wave: waveNum })
executorAgents.push({ id: executor, issueId: issueData.issue_id })
}

// --- Tell planner to continue next wave ---
if (!allPlanned) {
send_input({ id: planner, message: `Wave ${waveNum} dispatched to executor. Continue to Wave ${waveNum + 1}.` })
// --- Check if ALL_PLANNED was in this output ---
if (currentIssueOutput.includes("ALL_PLANNED")) {
allPlanned = true
break
}

// Wait for both: planner (next wave) + current executor
const activeIds = [planner]
if (executorAgents.length > 0) {
activeIds.push(executorAgents[executorAgents.length - 1].id)
}

const results = wait({ ids: activeIds, timeout_ms: 600000 })

// Check planner output
const plannerOutput = results.status[planner]?.completed || ""
if (plannerOutput.includes("ALL_PLANNED")) {
allPlanned = true
} else if (plannerOutput.includes("WAVE_READY")) {
waveNum++
currentWaveOutput = plannerOutput
// --- Tell planner to continue next issue ---
send_input({ id: planner, message: `Issue ${issueData?.issue_id || 'unknown'} dispatched. Continue to next issue.` })

// Wait for planner (next issue)
const plannerResult = wait({ ids: [planner], timeout_ms: 600000 })

if (plannerResult.timed_out) {
send_input({ id: planner, message: "Please finalize current issue and output results." })
const retry = wait({ ids: [planner], timeout_ms: 120000 })
currentIssueOutput = retry.status?.[planner]?.completed || ""
} else {
currentIssueOutput = plannerResult.status[planner]?.completed || ""
}

// Check for ALL_PLANNED
if (currentIssueOutput.includes("ALL_PLANNED")) {
// May contain a final ISSUE_READY before ALL_PLANNED
const finalIssue = parseIssueReady(currentIssueOutput)
if (finalIssue) {
// Spawn one more executor for the last issue
const lastExec = spawn_agent({
message: `... same executor spawn as above for ${finalIssue.issue_id} ...`
})
allAgentIds.push(lastExec)
executorAgents.push({ id: lastExec, issueId: finalIssue.issue_id })
}
allPlanned = true
}
}

// Wait for remaining executor agents
const pendingExecutors = executorAgents
.map(e => e.id)
.filter(id => !completedIds.includes(id))
// Wait for all remaining executor agents
const pendingExecutors = executorAgents.map(e => e.id)

if (pendingExecutors.length > 0) {
const finalResults = wait({ ids: pendingExecutors, timeout_ms: 900000 })

// Handle timeout
if (finalResults.timed_out) {
const pending = pendingExecutors.filter(id => !finalResults.status[id]?.completed)
pending.forEach(id => {
@@ -269,21 +285,21 @@ if (pendingExecutors.length > 0) {
```javascript
// Collect results from all executors
const pipelineResults = {
waves: [],
issues: [],
totalCompleted: 0,
totalFailed: 0
}

executorAgents.forEach(({ id, wave }) => {
executorAgents.forEach(({ id, issueId }) => {
const output = results.status[id]?.completed || ""
const waveDone = parseWaveDone(output)
pipelineResults.waves.push({
wave,
completed: waveDone?.completed || 0,
failed: waveDone?.failed || 0
const implResult = parseImplComplete(output)
pipelineResults.issues.push({
issueId,
status: implResult?.status || 'unknown',
commit: implResult?.commit || 'N/A'
})
pipelineResults.totalCompleted += waveDone?.completed || 0
pipelineResults.totalFailed += waveDone?.failed || 0
if (implResult?.status === 'success') pipelineResults.totalCompleted++
else pipelineResults.totalFailed++
})

// Output final summary
@@ -291,13 +307,13 @@ console.log(`
## PlanEx Pipeline Complete

### Summary
- Total Waves: ${waveNum}
- Total Completed: ${pipelineResults.totalCompleted}
- Total Failed: ${pipelineResults.totalFailed}
- Total Issues: ${executorAgents.length}
- Completed: ${pipelineResults.totalCompleted}
- Failed: ${pipelineResults.totalFailed}

### Wave Details
${pipelineResults.waves.map(w =>
`- Wave ${w.wave}: ${w.completed} completed, ${w.failed} failed`
### Issue Details
${pipelineResults.issues.map(i =>
`- ${i.issueId}: ${i.status} (commit: ${i.commit})`
).join('\n')}
`)

@@ -315,27 +331,26 @@ Since Codex agents have isolated contexts, use file-based coordination:

| File | Purpose | Writer | Reader |
|------|---------|--------|--------|
| `.workflow/.team/PEX-{slug}-{date}/wave-{N}.json` | Wave plan data | planner | orchestrator |
| `.workflow/.team/PEX-{slug}-{date}/exec-{issueId}.json` | Execution result | executor | orchestrator |
| `.workflow/.team/PEX-{slug}-{date}/pipeline-log.ndjson` | Event log | both | orchestrator |
| `.workflow/issues/queue/execution-queue.json` | Execution queue | planner (via issue-queue-agent) | executor |
| `{sessionDir}/artifacts/solutions/{issueId}.json` | Solution artifact | planner | executor |
| `{sessionDir}/exec-{issueId}.json` | Execution result | executor | orchestrator |
| `{sessionDir}/pipeline-log.ndjson` | Event log | both | orchestrator |

### Wave Data Format
### Solution Artifact Format

```json
{
"wave_number": 1,
"issue_ids": ["ISS-20260215-001", "ISS-20260215-002"],
"queue_path": ".workflow/issues/queue/execution-queue.json",
"exec_tasks": [
{
"issue_id": "ISS-20260215-001",
"solution_id": "SOL-001",
"title": "Implement auth module",
"priority": "high",
"depends_on": []
}
]
"issue_id": "ISS-20260215-001",
"bound": {
"id": "SOL-001",
"title": "Implement auth module",
"tasks": [...],
"files_touched": ["src/auth/login.ts"]
},
"execution_config": {
"execution_method": "Agent",
"code_review": "Skip"
},
"timestamp": "2026-02-15T10:00:00Z"
}
```
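
Before an executor consumes an artifact of this shape, a cheap guard can reject malformed files. A sketch (field names follow the example above; `isValidSolutionArtifact` is a name introduced here, not part of the commit):

```javascript
// Shape check for a solution artifact before dispatching it to an executor.
function isValidSolutionArtifact(artifact) {
  return Boolean(
    artifact &&
    typeof artifact.issue_id === 'string' &&
    artifact.bound &&
    typeof artifact.bound.id === 'string' &&
    // files_touched (or the older affected_files) is optional but must be an array when present
    Array.isArray(artifact.bound.files_touched ?? artifact.bound.affected_files ?? [])
  )
}

console.log(isValidSolutionArtifact({
  issue_id: 'ISS-20260215-001',
  bound: { id: 'SOL-001', title: 'Implement auth module', files_touched: ['src/auth/login.ts'] }
})) // true
console.log(isValidSolutionArtifact({ issue_id: 'ISS-x' })) // false
```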

@@ -358,7 +373,7 @@ Since Codex agents have isolated contexts, use file-based coordination:

| Timeout Scenario | Action |
|-----------------|--------|
| Planner wave timeout | send_input to urge convergence, retry wait |
| Planner issue timeout | send_input to urge convergence, retry wait |
| Executor impl timeout | send_input to finalize, record partial result |
| All agents timeout | Log error, abort with partial state |

@@ -380,28 +395,27 @@ allAgentIds.forEach(id => {

| Scenario | Resolution |
|----------|------------|
| Planner wave failure | Retry once via send_input, then abort pipeline |
| Executor impl failure | Record failure, continue with next wave tasks |
| Planner issue failure | Retry once via send_input, then skip issue |
| Executor impl failure | Record failure, continue with next issue |
| No issues created from text | Report to user, abort |
| Solution generation failure | Skip issue, continue with remaining |
| Queue formation failure | Create exec tasks without DAG ordering |
| Inline conflict check failure | Use empty depends_on, continue |
| Pipeline stall (no progress) | Timeout handling → urge convergence → abort |
| Missing role file | Log error, use inline fallback instructions |

## Helper Functions

```javascript
function parseWaveReady(output) {
const match = output.match(/WAVE_READY:\s*\n([\s\S]*?)(?=\n```|$)/)
function parseIssueReady(output) {
const match = output.match(/ISSUE_READY:\s*\n([\s\S]*?)(?=\n```|$)/)
if (!match) return null
// Parse structured wave data
return JSON.parse(match[1])
try { return JSON.parse(match[1]) } catch { return null }
}

function parseWaveDone(output) {
const match = output.match(/WAVE_DONE:\s*\n([\s\S]*?)(?=\n```|$)/)
function parseImplComplete(output) {
const match = output.match(/IMPL_COMPLETE:\s*\n([\s\S]*?)(?=\n```|$)/)
if (!match) return null
return JSON.parse(match[1])
try { return JSON.parse(match[1]) } catch { return null }
}

function resolveExecutor(method, taskCount) {

@@ -1,20 +1,20 @@
---
name: planex-executor
description: |
Execution agent for PlanEx pipeline. Loads solutions, routes to
configurable backends (agent/codex/gemini CLI), runs tests, commits.
Processes all tasks within a single wave assignment.
Execution agent for PlanEx pipeline. Loads solutions from artifact files
(with CLI fallback), routes to configurable backends (agent/codex/gemini CLI),
runs tests, commits. Processes all tasks within a single assignment.
color: green
skill: team-planex
---

# PlanEx Executor

加载 solution → 根据 execution_method 路由到对应后端(Agent/Codex/Gemini)→ 测试验证 → 提交。每次被 spawn 时处理一个 wave 的所有 exec tasks,按依赖顺序执行。
从中间产物文件加载 solution(兼容 CLI fallback)→ 根据 execution_method 路由到对应后端(Agent/Codex/Gemini)→ 测试验证 → 提交。每次被 spawn 时处理分配的 exec tasks,按依赖顺序执行。

## Core Capabilities

1. **Solution Loading**: 从 issue system 加载 bound solution plan
1. **Solution Loading**: 从中间产物文件加载 bound solution plan(兼容 CLI fallback)
2. **Multi-Backend Routing**: 根据 execution_method 选择 agent/codex/gemini 后端
3. **Test Verification**: 实现后运行测试验证
4. **Commit Management**: 每个 solution 完成后 git commit
@@ -209,9 +209,22 @@ for (const task of sorted) {
const issueId = task.issue_id
const taskStartTime = Date.now()

// --- Load solution ---
const solJson = shell(`ccw issue solution ${issueId} --json`)
const solution = JSON.parse(solJson)
// --- Load solution (dual-mode: artifact file first, CLI fallback) ---
let solution
const solutionFile = task.solution_file
if (solutionFile) {
try {
const solutionData = JSON.parse(read_file(solutionFile))
solution = solutionData.bound ? solutionData : { bound: solutionData }
} catch {
// Fallback to CLI
const solJson = shell(`ccw issue solution ${issueId} --json`)
solution = JSON.parse(solJson)
}
} else {
const solJson = shell(`ccw issue solution ${issueId} --json`)
solution = JSON.parse(solJson)
}

if (!solution.bound) {
recordTaskStart(issueId, task.title, 'N/A', '')
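
The dual-mode load can be factored into a pure helper with injected readers, which makes the fallback order testable. A sketch (`readFile` and `cliLoad` are stand-ins introduced here for `read_file()` and the `ccw issue solution` call):

```javascript
// Artifact-file-first load with CLI fallback, readers injected for testing.
function loadSolution(task, readFile, cliLoad) {
  if (task.solution_file) {
    try {
      const data = JSON.parse(readFile(task.solution_file))
      // Wrap legacy artifacts that store the bound solution at the top level
      return data.bound ? data : { bound: data }
    } catch {
      // Missing/corrupt artifact file: fall through to the CLI path
    }
  }
  return JSON.parse(cliLoad(task.issue_id))
}

const fromFile = loadSolution(
  { issue_id: 'ISS-1', solution_file: 's.json' },
  () => JSON.stringify({ bound: { id: 'SOL-1' } }),
  () => { throw new Error('CLI should not be called') }
)
console.log(fromFile.bound.id) // SOL-1

const fromCli = loadSolution(
  { issue_id: 'ISS-2', solution_file: 'missing.json' },
  () => { throw new Error('ENOENT') },
  () => JSON.stringify({ bound: { id: 'SOL-2' } })
)
console.log(fromCli.bound.id) // SOL-2
```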

@@ -2,22 +2,23 @@
name: planex-planner
description: |
Planning lead for PlanEx pipeline. Decomposes requirements into issues,
generates solutions via issue-plan-agent, forms execution queues via
issue-queue-agent, outputs wave-structured data for orchestrator dispatch.
generates solutions via issue-plan-agent, performs inline conflict check,
writes solution artifacts. Per-issue output for orchestrator dispatch.
color: blue
skill: team-planex
---

# PlanEx Planner

需求拆解 → issue 创建 → 方案设计 → 队列编排 → 输出 wave 数据。内部 spawn issue-plan-agent 和 issue-queue-agent 子代理,通过 Wave Pipeline 持续推进。每完成一个 wave 立即输出 WAVE_READY,等待 orchestrator send_input 继续下一 wave。
需求拆解 → issue 创建 → 方案设计 → inline 冲突检查 → 写中间产物 → 逐 issue 输出。内部 spawn issue-plan-agent 子代理,每完成一个 issue 的 solution 立即输出 ISSUE_READY,等待 orchestrator send_input 继续下一 issue。

## Core Capabilities

1. **Requirement Decomposition**: 将需求文本/plan 文件拆解为独立 issues
2. **Solution Planning**: 通过 issue-plan-agent 为每个 issue 生成 solution
3. **Queue Formation**: 通过 issue-queue-agent 排序 solutions 并检测冲突
4. **Wave Output**: 每个 wave 完成后输出结构化 WAVE_READY 数据
3. **Inline Conflict Check**: 基于 files_touched 重叠检测 + 显式依赖排序
4. **Solution Artifacts**: 将 solution 写入中间产物文件供 executor 加载
5. **Per-Issue Output**: 每个 issue 完成后立即输出 ISSUE_READY 数据

## Execution Process

@@ -32,7 +33,8 @@ skill: team-planex
- **Goal**: What to achieve
- **Input**: Issue IDs / text / plan file
- **Execution Config**: execution_method + code_review settings
- **Deliverables**: WAVE_READY + ALL_PLANNED structured output
- **Session Dir**: Path for writing solution artifacts
- **Deliverables**: ISSUE_READY + ALL_PLANNED structured output

### Step 2: Input Parsing & Issue Creation

@@ -40,6 +42,8 @@ Parse the input from TASK ASSIGNMENT and create issues as needed.

```javascript
const input = taskAssignment.input
const sessionDir = taskAssignment.session_dir
const executionConfig = taskAssignment.execution_config

// 1) 已有 Issue IDs
const issueIds = input.match(/ISS-\d{8}-\d{6}/g) || []
@@ -47,7 +51,6 @@ const issueIds = input.match(/ISS-\d{8}-\d{6}/g) || []
// 2) 文本输入 → 创建 issue
const textMatch = input.match(/text:\s*(.+)/)
if (textMatch && issueIds.length === 0) {
// Use ccw issue create CLI to create issue from text
const result = shell(`ccw issue create --data '{"title":"${textMatch[1]}","description":"${textMatch[1]}"}' --json`)
const newIssue = JSON.parse(result)
issueIds.push(newIssue.id)
@@ -58,11 +61,10 @@ const planMatch = input.match(/plan_file:\s*(\S+)/)
if (planMatch && issueIds.length === 0) {
const planContent = read_file(planMatch[1])

// Check if execution-plan.json from req-plan-with-file
try {
const content = JSON.parse(planContent)
if (content.waves && content.issue_ids) {
// execution-plan format: use wave structure directly
// execution-plan format: use issue_ids directly
executionPlan = content
issueIds = content.issue_ids
}
@@ -77,30 +79,20 @@ if (planMatch && issueIds.length === 0) {
}
```

### Step 3: Wave-Based Solution Planning
### Step 3: Per-Issue Solution Planning & Artifact Writing

Group issues into waves, spawn sub-agents for each wave.
Process each issue individually: plan → write artifact → conflict check → output ISSUE_READY.

```javascript
const projectRoot = shell('cd . && pwd').trim()
const dispatchedSolutions = []

// Group into waves (max 5 per wave, or use execution-plan wave structure)
const WAVE_SIZE = 5
let waves
if (executionPlan) {
waves = executionPlan.waves.map(w => w.issue_ids)
} else {
waves = []
for (let i = 0; i < issueIds.length; i += WAVE_SIZE) {
waves.push(issueIds.slice(i, i + WAVE_SIZE))
}
}
shell(`mkdir -p "${sessionDir}/artifacts/solutions"`)

let waveNum = 0
for (const waveIssues of waves) {
waveNum++
for (let i = 0; i < issueIds.length; i++) {
const issueId = issueIds[i]

// --- Step 3a: Spawn issue-plan-agent for solutions ---
// --- Step 3a: Spawn issue-plan-agent for single issue ---
const planAgent = spawn_agent({
message: `
## TASK ASSIGNMENT
@@ -112,116 +104,121 @@ for (const waveIssues of waves) {

---

Goal: Generate solutions for Wave ${waveNum} issues
Goal: Generate solution for issue ${issueId}

issue_ids: ${JSON.stringify(waveIssues)}
issue_ids: ["${issueId}"]
project_root: "${projectRoot}"

## Requirements
- Generate solutions for each issue
- Auto-bind single solutions
- Generate solution for this issue
- Auto-bind single solution
- For multiple solutions, select the most pragmatic one

## Deliverables
Structured output with solution bindings per issue.
Structured output with solution binding.
`
})

const planResult = wait({ ids: [planAgent], timeout_ms: 600000 })

if (planResult.timed_out) {
send_input({ id: planAgent, message: "Please finalize solutions and output current results." })
send_input({ id: planAgent, message: "Please finalize solution and output results." })
wait({ ids: [planAgent], timeout_ms: 120000 })
}

close_agent({ id: planAgent })

// --- Step 3b: Spawn issue-queue-agent for ordering ---
const queueAgent = spawn_agent({
message: `
## TASK ASSIGNMENT
// --- Step 3b: Load solution + write artifact file ---
const solJson = shell(`ccw issue solution ${issueId} --json`)
const solution = JSON.parse(solJson)

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/issue-queue-agent.md (MUST read first)
2. Read: .workflow/project-tech.json
const solutionFile = `${sessionDir}/artifacts/solutions/${issueId}.json`
write_file(solutionFile, JSON.stringify({
issue_id: issueId,
...solution,
execution_config: {
execution_method: executionConfig.executionMethod,
code_review: executionConfig.codeReviewTool
},
timestamp: new Date().toISOString()
}, null, 2))

---
// --- Step 3c: Inline conflict check ---
const blockedBy = inlineConflictCheck(issueId, solution, dispatchedSolutions)

Goal: Form execution queue for Wave ${waveNum}
// --- Step 3d: Output ISSUE_READY for orchestrator ---
dispatchedSolutions.push({ issueId, solution, solutionFile })

issue_ids: ${JSON.stringify(waveIssues)}
project_root: "${projectRoot}"

## Requirements
- Order solutions by dependency (DAG)
- Detect conflicts between solutions
- Output execution queue to .workflow/issues/queue/execution-queue.json

## Deliverables
Structured execution queue with dependency ordering.
`
})

const queueResult = wait({ ids: [queueAgent], timeout_ms: 300000 })

if (queueResult.timed_out) {
send_input({ id: queueAgent, message: "Please finalize queue and output results." })
wait({ ids: [queueAgent], timeout_ms: 60000 })
}

close_agent({ id: queueAgent })

// --- Step 3c: Read queue and output WAVE_READY ---
const queuePath = `.workflow/issues/queue/execution-queue.json`
const queue = JSON.parse(read_file(queuePath))

const execTasks = queue.queue.map(entry => ({
issue_id: entry.issue_id,
solution_id: entry.solution_id,
title: entry.title || entry.issue_id,
priority: entry.priority || "normal",
depends_on: entry.depends_on || []
}))

// Output structured wave data for orchestrator
console.log(`
WAVE_READY:
ISSUE_READY:
${JSON.stringify({
wave_number: waveNum,
issue_ids: waveIssues,
queue_path: queuePath,
exec_tasks: execTasks
}, null, 2)}
issue_id: issueId,
solution_id: solution.bound?.id || 'N/A',
title: solution.bound?.title || issueId,
priority: "normal",
depends_on: blockedBy,
solution_file: solutionFile
}, null, 2)}
`)

// Wait for orchestrator send_input before continuing to next wave
// (orchestrator will send: "Wave N dispatched. Continue to Wave N+1.")
// Wait for orchestrator send_input before continuing to next issue
// (orchestrator will send: "Issue dispatched. Continue to next issue.")
}
```

### Step 4: Finalization

After all waves are planned, output ALL_PLANNED signal.
After all issues are planned, output ALL_PLANNED signal.

```javascript
console.log(`
ALL_PLANNED:
${JSON.stringify({
total_waves: waveNum,
total_issues: issueIds.length
}, null, 2)}
`)
```

## Inline Conflict Check

```javascript
function inlineConflictCheck(issueId, solution, dispatchedSolutions) {
const currentFiles = solution.bound?.files_touched
|| solution.bound?.affected_files || []
const blockedBy = []

// 1. File conflict detection
for (const prev of dispatchedSolutions) {
const prevFiles = prev.solution.bound?.files_touched
|| prev.solution.bound?.affected_files || []
const overlap = currentFiles.filter(f => prevFiles.includes(f))
if (overlap.length > 0) {
blockedBy.push(prev.issueId)
}
}

// 2. Explicit dependencies
const explicitDeps = solution.bound?.dependencies?.on_issues || []
for (const depId of explicitDeps) {
if (!blockedBy.includes(depId)) {
blockedBy.push(depId)
}
}

return blockedBy
}
```
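
A standalone run of the conflict check on hypothetical solutions shows both paths: a `files_touched` overlap and an explicit `dependencies.on_issues` entry (self-contained sketch; the function body mirrors the one above):

```javascript
// Self-contained copy of the inline conflict check for demonstration.
function inlineConflictCheck(issueId, solution, dispatchedSolutions) {
  const currentFiles = solution.bound?.files_touched || solution.bound?.affected_files || []
  const blockedBy = []
  // 1. File conflict: any overlap with an already-dispatched solution blocks us
  for (const prev of dispatchedSolutions) {
    const prevFiles = prev.solution.bound?.files_touched || prev.solution.bound?.affected_files || []
    if (currentFiles.some(f => prevFiles.includes(f))) blockedBy.push(prev.issueId)
  }
  // 2. Explicit dependencies declared on the solution
  for (const depId of solution.bound?.dependencies?.on_issues || []) {
    if (!blockedBy.includes(depId)) blockedBy.push(depId)
  }
  return blockedBy
}

const dispatched = [
  { issueId: 'ISS-1', solution: { bound: { files_touched: ['src/auth/login.ts'] } } }
]

// Overlapping files → blocked by ISS-1
const overlapping = { bound: { files_touched: ['src/auth/login.ts', 'src/auth/token.ts'] } }
console.log(inlineConflictCheck('ISS-2', overlapping, dispatched)) // [ 'ISS-1' ]

// No file overlap, but an explicit dependency → also blocked by ISS-1
const unrelated = { bound: { files_touched: ['docs/README.md'], dependencies: { on_issues: ['ISS-1'] } } }
console.log(inlineConflictCheck('ISS-3', unrelated, dispatched)) // [ 'ISS-1' ]
```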

## Role Boundaries

### MUST

- 仅执行规划和拆解工作
- 每个 wave 完成后输出 WAVE_READY 结构化数据
- 所有 wave 完成后输出 ALL_PLANNED
- 通过 spawn_agent 调用 issue-plan-agent 和 issue-queue-agent
- 等待 orchestrator send_input 才继续下一 wave
- 每个 issue 完成后输出 ISSUE_READY 结构化数据
- 所有 issues 完成后输出 ALL_PLANNED
- 通过 spawn_agent 调用 issue-plan-agent(逐个 issue)
- 等待 orchestrator send_input 才继续下一 issue
- 将 solution 写入中间产物文件

### MUST NOT

@@ -267,16 +264,17 @@ function parsePlanPhases(planContent) {

**ALWAYS**:
- Read role definition file as FIRST action (Step 1)
- Follow structured output template (WAVE_READY / ALL_PLANNED)
- Follow structured output template (ISSUE_READY / ALL_PLANNED)
- Stay within planning boundaries (no code implementation)
- Spawn issue-plan-agent and issue-queue-agent for each wave
- Include all issue IDs and solution references in wave data
- Spawn issue-plan-agent for each issue individually
- Write solution artifact file before outputting ISSUE_READY
- Include solution_file path in ISSUE_READY data

**NEVER**:
- Modify source code files
- Skip context loading (Step 1)
- Produce unstructured or free-form output
- Continue to next wave without outputting WAVE_READY
- Continue to next issue without outputting ISSUE_READY
- Close without outputting ALL_PLANNED

## Error Handling
@@ -285,7 +283,8 @@ function parsePlanPhases(planContent) {
|----------|--------|
| Issue creation failure | Retry once with simplified text, report in output |
| issue-plan-agent timeout | Urge convergence via send_input, close and report partial |
| issue-queue-agent failure | Create exec tasks without DAG ordering |
| Inline conflict check failure | Use empty depends_on, continue |
| Solution artifact write failure | Report error, continue with ISSUE_READY output |
| Plan file not found | Report error in output with CLARIFICATION_NEEDED |
| Empty input (no issues, no text) | Output CLARIFICATION_NEEDED asking for requirements |
| Sub-agent produces invalid output | Report error, continue with available data |