feat: add CLI settings export/import functionality

- Implemented exportSettings and importSettings APIs for CLI settings.
- Added hooks useExportSettings and useImportSettings for managing export/import operations in the frontend.
- Updated SettingsPage to include buttons for exporting and importing CLI settings.
- Enhanced backend to handle export and import requests, including validation and conflict resolution.
- Introduced new data structures for exported settings and import options.
- Updated localization files to support new export/import features.
- Refactored CLI tool configurations to remove hardcoded model defaults, allowing dynamic model retrieval.
Author: catlog22
Date: 2026-02-25 21:40:24 +08:00
parent 4c2bf31525
commit b2b8688d26
24 changed files with 1287 additions and 651 deletions

View File

@@ -1,13 +1,13 @@
---
name: team-planex
-description: 2-member plan-and-execute pipeline with Wave Pipeline for concurrent planning and execution. Planner decomposes requirements into issues, generates solutions, forms execution queues. Executor implements solutions via configurable backends (agent/codex/gemini). Triggers on "team planex".
+description: 2-member plan-and-execute pipeline with per-issue beat pipeline for concurrent planning and execution. Planner decomposes requirements into issues, generates solutions, writes artifacts. Executor implements solutions via configurable backends (agent/codex/gemini). Triggers on "team planex".
allowed-tools: spawn_agent, wait, send_input, close_agent, AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep
argument-hint: "<issue-ids|--text 'description'|--plan path> [--exec=agent|codex|gemini|auto] [-y]"
---
# Team PlanEx
-2-member plan-while-executing team. Via the Wave Pipeline, planner and executor work in parallel: after the planner finishes a wave's queue, the orchestrator immediately spawns an executor agent to process that wave, while send_input tells the planner to continue with the next wave.
+2-member plan-while-executing team. Via the per-issue beat pipeline, planner and executor work in parallel: after the planner finishes an issue's solution it emits an ISSUE_READY signal; the orchestrator immediately spawns an executor agent to process that issue, while send_input tells the planner to continue with the next issue.
## Architecture Overview
@@ -16,7 +16,7 @@ argument-hint: "<issue-ids|--text 'description'|--plan path> [--exec=agent|codex
│ Orchestrator (this file) │
│ → Parse input → Spawn planner → Spawn exec │
└────────────────┬─────────────────────────────┘
-          Wave Pipeline
+          Per-Issue Beat Pipeline
┌───────┴───────┐
↓ ↓
┌─────────┐ ┌──────────┐
@@ -25,17 +25,16 @@ argument-hint: "<issue-ids|--text 'description'|--plan path> [--exec=agent|codex
└─────────┘ └──────────┘
│ │
issue-plan-agent code-developer
-issue-queue-agent   (or codex/gemini CLI)
+                    (or codex/gemini CLI)
```
## Agent Registry
| Agent | Role File | Responsibility | New/Existing |
|-------|-----------|----------------|--------------|
-| `planex-planner` | `.codex/skills/team-planex/agents/planex-planner.md` | Requirement decomposition → issue creation → solution design → queue orchestration | New (skill-specific) |
+| `planex-planner` | `.codex/skills/team-planex/agents/planex-planner.md` | Requirement decomposition → issue creation → solution design → conflict check → per-issue dispatch | New (skill-specific) |
| `planex-executor` | `.codex/skills/team-planex/agents/planex-executor.md` | Load solution → code implementation → testing → commit | New (skill-specific) |
| `issue-plan-agent` | `~/.codex/agents/issue-plan-agent.md` | ACE exploration + solution generation + binding | Existing |
-| `issue-queue-agent` | `~/.codex/agents/issue-queue-agent.md` | Solution ordering + conflict detection | Existing |
| `code-developer` | `~/.codex/agents/code-developer.md` | Code implementation (agent backend) | Existing |
## Input Types
@@ -86,11 +85,18 @@ if (explicitExec) {
// Interactive: ask user for preferences
// (orchestrator handles user interaction directly)
}
// Initialize session directory for artifacts
const slug = (issueIds[0] || 'batch').replace(/[^a-zA-Z0-9-]/g, '')
const dateStr = new Date().toISOString().slice(0,10).replace(/-/g,'')
const sessionId = `PEX-${slug}-${dateStr}`
const sessionDir = `.workflow/.team/${sessionId}`
shell(`mkdir -p "${sessionDir}/artifacts/solutions"`)
```
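Taken on its own, the session-ID logic above behaves as follows; this is a standalone repeat of the same lines with a hypothetical example issue ID (the `shell`/`mkdir` call is omitted):

```javascript
// Standalone sketch of the Phase 1 session-ID scheme (example issue ID; mkdir omitted)
const issueIds = ['ISS-20260215-001']
const slug = (issueIds[0] || 'batch').replace(/[^a-zA-Z0-9-]/g, '')
const dateStr = new Date().toISOString().slice(0, 10).replace(/-/g, '')
const sessionId = `PEX-${slug}-${dateStr}`       // e.g. PEX-ISS-20260215-001-<yyyymmdd>
const sessionDir = `.workflow/.team/${sessionId}`
```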
-### Phase 2: Planning (Planner Agent — Deep Interaction)
+### Phase 2: Planning (Planner Agent — Per-Issue Beat)
-Spawn planner agent for wave-based planning. Uses send_input for multi-wave progression.
+Spawn planner agent for per-issue planning. Uses send_input for issue-by-issue progression.
```javascript
// Build planner input context
@@ -110,7 +116,7 @@ const planner = spawn_agent({
---
-Goal: Decompose requirements into waves of executable solutions
+Goal: Decompose requirements into executable solutions (per-issue beat)
## Input
${plannerInput}
@@ -119,61 +125,64 @@ ${plannerInput}
execution_method: ${executionConfig.executionMethod}
code_review: ${executionConfig.codeReviewTool}
## Session Dir
session_dir: ${sessionDir}
## Deliverables
-For EACH wave, output structured wave data:
+For EACH issue, output structured data:
\`\`\`
-WAVE_READY:
-wave_number: N
-issue_ids: [ISS-xxx, ...]
-queue_path: .workflow/issues/queue/execution-queue.json
-exec_tasks: [
-  { issue_id: "ISS-xxx", solution_id: "SOL-xxx", title: "...", priority: "normal", depends_on: [] },
-  ...
-]
+ISSUE_READY:
+{
+  "issue_id": "ISS-xxx",
+  "solution_id": "SOL-xxx",
+  "title": "...",
+  "priority": "normal",
+  "depends_on": [],
+  "solution_file": "${sessionDir}/artifacts/solutions/ISS-xxx.json"
+}
\`\`\`
-After ALL waves planned, output:
+After ALL issues planned, output:
\`\`\`
ALL_PLANNED:
-total_waves: N
-total_issues: N
+{ "total_issues": N }
\`\`\`
## Quality bar
- Every issue has a bound solution
-- Queue respects dependency DAG
-- Wave boundaries are logical groupings
+- Solution artifact written to file before output
+- Inline conflict check determines depends_on
`
})
-// Wait for Wave 1
-const wave1 = wait({ ids: [planner], timeout_ms: 600000 })
+// Wait for first ISSUE_READY
+const firstIssue = wait({ ids: [planner], timeout_ms: 600000 })
-if (wave1.timed_out) {
-  send_input({ id: planner, message: "Please finalize current wave and output WAVE_READY." })
+if (firstIssue.timed_out) {
+  send_input({ id: planner, message: "Please finalize current issue and output ISSUE_READY." })
  const retry = wait({ ids: [planner], timeout_ms: 120000 })
}
-// Parse wave data from planner output
-const wave1Data = parseWaveReady(wave1.status[planner].completed)
+// Parse first issue data
+const firstIssueData = parseIssueReady(firstIssue.status[planner].completed)
```
-### Phase 3: Wave Pipeline (Planning + Execution Interleaved)
+### Phase 3: Per-Issue Beat Pipeline (Planning + Execution Interleaved)
-Pipeline: spawn executor for current wave while planner continues next wave.
+Pipeline: spawn executor for current issue while planner continues next issue.
```javascript
const allAgentIds = [planner]
const executorAgents = []
-let waveNum = 1
let allPlanned = false
+let currentIssueOutput = firstIssue.status[planner].completed
while (!allPlanned) {
-  // --- Spawn executor for current wave ---
-  const waveData = parseWaveReady(currentWaveOutput)
+  // --- Spawn executor for current issue ---
+  const issueData = parseIssueReady(currentIssueOutput)
-  if (waveData && waveData.exec_tasks.length > 0) {
+  if (issueData) {
const executor = spawn_agent({
message: `
## TASK ASSIGNMENT
@@ -185,75 +194,82 @@ while (!allPlanned) {
---
-Goal: Implement all solutions in Wave ${waveNum}
+Goal: Implement solution for ${issueData.issue_id}
-## Wave ${waveNum} Tasks
-${JSON.stringify(waveData.exec_tasks, null, 2)}
+## Task
+${JSON.stringify([issueData], null, 2)}
## Execution Config
execution_method: ${executionConfig.executionMethod}
code_review: ${executionConfig.codeReviewTool}
## Solution File
solution_file: ${issueData.solution_file}
## Session Dir
session_dir: ${sessionDir}
## Deliverables
For each task, output:
\`\`\`
IMPL_COMPLETE:
-issue_id: ISS-xxx
+issue_id: ${issueData.issue_id}
status: success|failed
test_result: pass|fail
commit: <hash or N/A>
\`\`\`
-After all wave tasks done:
-\`\`\`
-WAVE_DONE:
-wave_number: ${waveNum}
-completed: N
-failed: N
-\`\`\`
## Quality bar
-- All existing tests pass after each implementation
+- All existing tests pass after implementation
- Code follows project conventions
- One commit per solution
`
})
allAgentIds.push(executor)
-    executorAgents.push({ id: executor, wave: waveNum })
+    executorAgents.push({ id: executor, issueId: issueData.issue_id })
}
-  // --- Tell planner to continue next wave ---
-  if (!allPlanned) {
-    send_input({ id: planner, message: `Wave ${waveNum} dispatched to executor. Continue to Wave ${waveNum + 1}.` })
+  // --- Check if ALL_PLANNED was in this output ---
+  if (currentIssueOutput.includes("ALL_PLANNED")) {
+    allPlanned = true
+    break
  }
-  // Wait for both: planner (next wave) + current executor
-  const activeIds = [planner]
-  if (executorAgents.length > 0) {
-    activeIds.push(executorAgents[executorAgents.length - 1].id)
-  }
-  const results = wait({ ids: activeIds, timeout_ms: 600000 })
-  // Check planner output
-  const plannerOutput = results.status[planner]?.completed || ""
-  if (plannerOutput.includes("ALL_PLANNED")) {
-    allPlanned = true
-  } else if (plannerOutput.includes("WAVE_READY")) {
-    waveNum++
-    currentWaveOutput = plannerOutput
+  // --- Tell planner to continue next issue ---
+  send_input({ id: planner, message: `Issue ${issueData?.issue_id || 'unknown'} dispatched. Continue to next issue.` })
// Wait for planner (next issue)
const plannerResult = wait({ ids: [planner], timeout_ms: 600000 })
if (plannerResult.timed_out) {
send_input({ id: planner, message: "Please finalize current issue and output results." })
const retry = wait({ ids: [planner], timeout_ms: 120000 })
currentIssueOutput = retry.status?.[planner]?.completed || ""
} else {
currentIssueOutput = plannerResult.status[planner]?.completed || ""
}
// Check for ALL_PLANNED
if (currentIssueOutput.includes("ALL_PLANNED")) {
// May contain a final ISSUE_READY before ALL_PLANNED
const finalIssue = parseIssueReady(currentIssueOutput)
if (finalIssue) {
// Spawn one more executor for the last issue
const lastExec = spawn_agent({
message: `... same executor spawn as above for ${finalIssue.issue_id} ...`
})
allAgentIds.push(lastExec)
executorAgents.push({ id: lastExec, issueId: finalIssue.issue_id })
}
allPlanned = true
}
}
-// Wait for remaining executor agents
-const pendingExecutors = executorAgents
-  .map(e => e.id)
-  .filter(id => !completedIds.includes(id))
+// Wait for all remaining executor agents
+const pendingExecutors = executorAgents.map(e => e.id)
if (pendingExecutors.length > 0) {
const finalResults = wait({ ids: pendingExecutors, timeout_ms: 900000 })
// Handle timeout
if (finalResults.timed_out) {
const pending = pendingExecutors.filter(id => !finalResults.status[id]?.completed)
pending.forEach(id => {
@@ -269,21 +285,21 @@ if (pendingExecutors.length > 0) {
```javascript
// Collect results from all executors
const pipelineResults = {
-  waves: [],
+  issues: [],
totalCompleted: 0,
totalFailed: 0
}
-executorAgents.forEach(({ id, wave }) => {
+executorAgents.forEach(({ id, issueId }) => {
  const output = finalResults.status[id]?.completed || ""
-  const waveDone = parseWaveDone(output)
-  pipelineResults.waves.push({
-    wave,
-    completed: waveDone?.completed || 0,
-    failed: waveDone?.failed || 0
+  const implResult = parseImplComplete(output)
+  pipelineResults.issues.push({
+    issueId,
+    status: implResult?.status || 'unknown',
+    commit: implResult?.commit || 'N/A'
  })
-  pipelineResults.totalCompleted += waveDone?.completed || 0
-  pipelineResults.totalFailed += waveDone?.failed || 0
+  if (implResult?.status === 'success') pipelineResults.totalCompleted++
+  else pipelineResults.totalFailed++
})
// Output final summary
@@ -291,13 +307,13 @@ console.log(`
## PlanEx Pipeline Complete
### Summary
-- Total Waves: ${waveNum}
-- Total Completed: ${pipelineResults.totalCompleted}
-- Total Failed: ${pipelineResults.totalFailed}
+- Total Issues: ${executorAgents.length}
+- Completed: ${pipelineResults.totalCompleted}
+- Failed: ${pipelineResults.totalFailed}
-### Wave Details
-${pipelineResults.waves.map(w =>
-  `- Wave ${w.wave}: ${w.completed} completed, ${w.failed} failed`
+### Issue Details
+${pipelineResults.issues.map(i =>
+  `- ${i.issueId}: ${i.status} (commit: ${i.commit})`
).join('\n')}
`)
@@ -315,27 +331,26 @@ Since Codex agents have isolated contexts, use file-based coordination:
| File | Purpose | Writer | Reader |
|------|---------|--------|--------|
-| `.workflow/.team/PEX-{slug}-{date}/wave-{N}.json` | Wave plan data | planner | orchestrator |
-| `.workflow/.team/PEX-{slug}-{date}/exec-{issueId}.json` | Execution result | executor | orchestrator |
-| `.workflow/.team/PEX-{slug}-{date}/pipeline-log.ndjson` | Event log | both | orchestrator |
-| `.workflow/issues/queue/execution-queue.json` | Execution queue | planner (via issue-queue-agent) | executor |
+| `{sessionDir}/artifacts/solutions/{issueId}.json` | Solution artifact | planner | executor |
+| `{sessionDir}/exec-{issueId}.json` | Execution result | executor | orchestrator |
+| `{sessionDir}/pipeline-log.ndjson` | Event log | both | orchestrator |
-### Wave Data Format
+### Solution Artifact Format
```json
{
-  "wave_number": 1,
-  "issue_ids": ["ISS-20260215-001", "ISS-20260215-002"],
-  "queue_path": ".workflow/issues/queue/execution-queue.json",
-  "exec_tasks": [
-    {
-      "issue_id": "ISS-20260215-001",
-      "solution_id": "SOL-001",
-      "title": "Implement auth module",
-      "priority": "high",
-      "depends_on": []
-    }
-  ]
+  "issue_id": "ISS-20260215-001",
+  "bound": {
+    "id": "SOL-001",
+    "title": "Implement auth module",
+    "tasks": [...],
+    "files_touched": ["src/auth/login.ts"]
+  },
+  "execution_config": {
+    "execution_method": "Agent",
+    "code_review": "Skip"
+  },
+  "timestamp": "2026-02-15T10:00:00Z"
}
```
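Before dispatch, an artifact could be sanity-checked against this shape. A minimal sketch assuming the field names shown above (`validateArtifact` is an illustrative helper, not part of the skill):

```javascript
// Illustrative validator for the solution artifact format; not part of the skill's API.
function validateArtifact(artifact) {
  const errors = []
  if (!artifact.issue_id) errors.push('missing issue_id')
  if (!artifact.bound?.id) errors.push('missing bound.id')
  if (artifact.bound && !Array.isArray(artifact.bound.files_touched || []))
    errors.push('bound.files_touched must be an array')
  if (!artifact.execution_config?.execution_method)
    errors.push('missing execution_config.execution_method')
  return { ok: errors.length === 0, errors }
}
```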
@@ -358,7 +373,7 @@ Since Codex agents have isolated contexts, use file-based coordination:
| Timeout Scenario | Action |
|-----------------|--------|
-| Planner wave timeout | send_input to urge convergence, retry wait |
+| Planner issue timeout | send_input to urge convergence, retry wait |
| Executor impl timeout | send_input to finalize, record partial result |
| All agents timeout | Log error, abort with partial state |
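The first two rows share one pattern: wait, nudge on timeout, wait once more. A testable sketch with the platform tools injected (`waitWithNudge` is an illustrative name; the `wait`/`send_input` call shapes are assumed from the examples above):

```javascript
// Illustrative "urge convergence" wrapper; wait/send_input are injected so it can run standalone.
function waitWithNudge(tools, agentId, nudgeMessage, timeoutMs = 600000, retryMs = 120000) {
  let result = tools.wait({ ids: [agentId], timeout_ms: timeoutMs })
  if (result.timed_out) {
    // Timed out once: urge the agent to finalize, then give it a shorter retry window
    tools.send_input({ id: agentId, message: nudgeMessage })
    result = tools.wait({ ids: [agentId], timeout_ms: retryMs })
  }
  return result.status?.[agentId]?.completed || ''
}
```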
@@ -380,28 +395,27 @@ allAgentIds.forEach(id => {
| Scenario | Resolution |
|----------|------------|
-| Planner wave failure | Retry once via send_input, then abort pipeline |
-| Executor impl failure | Record failure, continue with next wave tasks |
+| Planner issue failure | Retry once via send_input, then skip issue |
+| Executor impl failure | Record failure, continue with next issue |
| No issues created from text | Report to user, abort |
| Solution generation failure | Skip issue, continue with remaining |
-| Queue formation failure | Create exec tasks without DAG ordering |
+| Inline conflict check failure | Use empty depends_on, continue |
| Pipeline stall (no progress) | Timeout handling → urge convergence → abort |
| Missing role file | Log error, use inline fallback instructions |
## Helper Functions
```javascript
-function parseWaveReady(output) {
-  const match = output.match(/WAVE_READY:\s*\n([\s\S]*?)(?=\n```|$)/)
+function parseIssueReady(output) {
+  const match = output.match(/ISSUE_READY:\s*\n([\s\S]*?)(?=\n```|$)/)
if (!match) return null
-  // Parse structured wave data
-  return JSON.parse(match[1])
+  try { return JSON.parse(match[1]) } catch { return null }
}
-function parseWaveDone(output) {
-  const match = output.match(/WAVE_DONE:\s*\n([\s\S]*?)(?=\n```|$)/)
+function parseImplComplete(output) {
+  const match = output.match(/IMPL_COMPLETE:\s*\n([\s\S]*?)(?=\n```|$)/)
if (!match) return null
-  return JSON.parse(match[1])
+  try { return JSON.parse(match[1]) } catch { return null }
}
function resolveExecutor(method, taskCount) {

View File

@@ -1,20 +1,20 @@
---
name: planex-executor
description: |
-  Execution agent for PlanEx pipeline. Loads solutions, routes to
-  configurable backends (agent/codex/gemini CLI), runs tests, commits.
-  Processes all tasks within a single wave assignment.
+  Execution agent for PlanEx pipeline. Loads solutions from artifact files
+  (with CLI fallback), routes to configurable backends (agent/codex/gemini CLI),
+  runs tests, commits. Processes all tasks within a single assignment.
color: green
skill: team-planex
---
# PlanEx Executor
-Load solution → route to the matching backend (Agent/Codex/Gemini) by execution_method → verify with tests → commit. Each spawn processes all exec tasks of one wave, in dependency order.
+Load solution from the intermediate artifact file (with CLI fallback) → route to the matching backend (Agent/Codex/Gemini) by execution_method → verify with tests → commit. Each spawn processes its assigned exec tasks, in dependency order.
## Core Capabilities
-1. **Solution Loading**: load the bound solution plan from the issue system
+1. **Solution Loading**: load the bound solution plan from the intermediate artifact file (with CLI fallback)
2. **Multi-Backend Routing**: choose the agent/codex/gemini backend based on execution_method
3. **Test Verification**: run tests to verify after implementation
4. **Commit Management**: git commit after each solution completes
@@ -209,9 +209,22 @@ for (const task of sorted) {
const issueId = task.issue_id
const taskStartTime = Date.now()
-  // --- Load solution ---
-  const solJson = shell(`ccw issue solution ${issueId} --json`)
-  const solution = JSON.parse(solJson)
+  // --- Load solution (dual-mode: artifact file first, CLI fallback) ---
let solution
const solutionFile = task.solution_file
if (solutionFile) {
try {
const solutionData = JSON.parse(read_file(solutionFile))
solution = solutionData.bound ? solutionData : { bound: solutionData }
} catch {
// Fallback to CLI
const solJson = shell(`ccw issue solution ${issueId} --json`)
solution = JSON.parse(solJson)
}
} else {
const solJson = shell(`ccw issue solution ${issueId} --json`)
solution = JSON.parse(solJson)
}
if (!solution.bound) {
recordTaskStart(issueId, task.title, 'N/A', '')

View File

@@ -2,22 +2,23 @@
name: planex-planner
description: |
Planning lead for PlanEx pipeline. Decomposes requirements into issues,
-  generates solutions via issue-plan-agent, forms execution queues via
-  issue-queue-agent, outputs wave-structured data for orchestrator dispatch.
+  generates solutions via issue-plan-agent, performs inline conflict check,
+  writes solution artifacts. Per-issue output for orchestrator dispatch.
color: blue
skill: team-planex
---
# PlanEx Planner
-Requirement decomposition → issue creation → solution design → queue orchestration → wave data output. Internally spawns issue-plan-agent and issue-queue-agent sub-agents and advances continuously through the Wave Pipeline. After finishing each wave, immediately output WAVE_READY, then wait for the orchestrator's send_input before continuing to the next wave.
+Requirement decomposition → issue creation → solution design → inline conflict check → write intermediate artifacts → per-issue output. Internally spawns the issue-plan-agent sub-agent; after finishing each issue's solution, immediately output ISSUE_READY, then wait for the orchestrator's send_input before continuing to the next issue.
## Core Capabilities
1. **Requirement Decomposition**: break requirement text/plan files into independent issues
2. **Solution Planning**: generate a solution for each issue via issue-plan-agent
-3. **Queue Formation**: order solutions and detect conflicts via issue-queue-agent
-4. **Wave Output**: output structured WAVE_READY data after each wave completes
+3. **Inline Conflict Check**: files_touched overlap detection + explicit dependency ordering
+4. **Solution Artifacts**: write solutions to intermediate artifact files for the executor to load
+5. **Per-Issue Output**: output ISSUE_READY data immediately after each issue completes
## Execution Process
@@ -32,7 +33,8 @@ skill: team-planex
- **Goal**: What to achieve
- **Input**: Issue IDs / text / plan file
- **Execution Config**: execution_method + code_review settings
-- **Deliverables**: WAVE_READY + ALL_PLANNED structured output
+- **Session Dir**: Path for writing solution artifacts
+- **Deliverables**: ISSUE_READY + ALL_PLANNED structured output
### Step 2: Input Parsing & Issue Creation
@@ -40,6 +42,8 @@ Parse the input from TASK ASSIGNMENT and create issues as needed.
```javascript
const input = taskAssignment.input
const sessionDir = taskAssignment.session_dir
const executionConfig = taskAssignment.execution_config
// 1) Existing issue IDs
const issueIds = input.match(/ISS-\d{8}-\d{3,6}/g) || []  // accepts 3- to 6-digit suffixes, e.g. ISS-20260215-001
@@ -47,7 +51,6 @@ const issueIds = input.match(/ISS-\d{8}-\d{6}/g) || []
// 2) Text input → create issue
const textMatch = input.match(/text:\s*(.+)/)
if (textMatch && issueIds.length === 0) {
-  // Use ccw issue create CLI to create issue from text
const result = shell(`ccw issue create --data '{"title":"${textMatch[1]}","description":"${textMatch[1]}"}' --json`)
const newIssue = JSON.parse(result)
issueIds.push(newIssue.id)
@@ -58,11 +61,10 @@ const planMatch = input.match(/plan_file:\s*(\S+)/)
if (planMatch && issueIds.length === 0) {
const planContent = read_file(planMatch[1])
// Check if execution-plan.json from req-plan-with-file
try {
const content = JSON.parse(planContent)
if (content.waves && content.issue_ids) {
-      // execution-plan format: use wave structure directly
+      // execution-plan format: use issue_ids directly
executionPlan = content
issueIds = content.issue_ids
}
@@ -77,30 +79,20 @@ if (planMatch && issueIds.length === 0) {
}
```
-### Step 3: Wave-Based Solution Planning
+### Step 3: Per-Issue Solution Planning & Artifact Writing
-Group issues into waves, spawn sub-agents for each wave.
+Process each issue individually: plan → write artifact → conflict check → output ISSUE_READY.
```javascript
const projectRoot = shell('cd . && pwd').trim()
+const dispatchedSolutions = []
-// Group into waves (max 5 per wave, or use execution-plan wave structure)
-const WAVE_SIZE = 5
-let waves
-if (executionPlan) {
-  waves = executionPlan.waves.map(w => w.issue_ids)
-} else {
-  waves = []
-  for (let i = 0; i < issueIds.length; i += WAVE_SIZE) {
-    waves.push(issueIds.slice(i, i + WAVE_SIZE))
-  }
-}
+shell(`mkdir -p "${sessionDir}/artifacts/solutions"`)
-let waveNum = 0
-for (const waveIssues of waves) {
-  waveNum++
+for (let i = 0; i < issueIds.length; i++) {
+  const issueId = issueIds[i]
-  // --- Step 3a: Spawn issue-plan-agent for solutions ---
+  // --- Step 3a: Spawn issue-plan-agent for single issue ---
const planAgent = spawn_agent({
message: `
## TASK ASSIGNMENT
@@ -112,116 +104,121 @@ for (const waveIssues of waves) {
---
-Goal: Generate solutions for Wave ${waveNum} issues
+Goal: Generate solution for issue ${issueId}
-issue_ids: ${JSON.stringify(waveIssues)}
+issue_ids: ["${issueId}"]
project_root: "${projectRoot}"
## Requirements
-- Generate solutions for each issue
-- Auto-bind single solutions
+- Generate solution for this issue
+- Auto-bind single solution
- For multiple solutions, select the most pragmatic one
## Deliverables
-Structured output with solution bindings per issue.
+Structured output with solution binding.
`
})
const planResult = wait({ ids: [planAgent], timeout_ms: 600000 })
if (planResult.timed_out) {
-    send_input({ id: planAgent, message: "Please finalize solutions and output current results." })
+    send_input({ id: planAgent, message: "Please finalize solution and output results." })
wait({ ids: [planAgent], timeout_ms: 120000 })
}
close_agent({ id: planAgent })
-  // --- Step 3b: Spawn issue-queue-agent for ordering ---
-  const queueAgent = spawn_agent({
-    message: `
-## TASK ASSIGNMENT
+  // --- Step 3b: Load solution + write artifact file ---
+  const solJson = shell(`ccw issue solution ${issueId} --json`)
+  const solution = JSON.parse(solJson)
-### MANDATORY FIRST STEPS (Agent Execute)
-1. **Read role definition**: ~/.codex/agents/issue-queue-agent.md (MUST read first)
-2. Read: .workflow/project-tech.json
+  const solutionFile = `${sessionDir}/artifacts/solutions/${issueId}.json`
+  write_file(solutionFile, JSON.stringify({
+    issue_id: issueId,
+    ...solution,
+    execution_config: {
+      execution_method: executionConfig.executionMethod,
+      code_review: executionConfig.codeReviewTool
+    },
+    timestamp: new Date().toISOString()
+  }, null, 2))
----
+  // --- Step 3c: Inline conflict check ---
+  const blockedBy = inlineConflictCheck(issueId, solution, dispatchedSolutions)
-Goal: Form execution queue for Wave ${waveNum}
+  // --- Step 3d: Output ISSUE_READY for orchestrator ---
+  dispatchedSolutions.push({ issueId, solution, solutionFile })
-issue_ids: ${JSON.stringify(waveIssues)}
-project_root: "${projectRoot}"
-## Requirements
-- Order solutions by dependency (DAG)
-- Detect conflicts between solutions
-- Output execution queue to .workflow/issues/queue/execution-queue.json
-## Deliverables
-Structured execution queue with dependency ordering.
-`
-  })
-  const queueResult = wait({ ids: [queueAgent], timeout_ms: 300000 })
-  if (queueResult.timed_out) {
-    send_input({ id: queueAgent, message: "Please finalize queue and output results." })
-    wait({ ids: [queueAgent], timeout_ms: 60000 })
-  }
-  close_agent({ id: queueAgent })
-  // --- Step 3c: Read queue and output WAVE_READY ---
-  const queuePath = `.workflow/issues/queue/execution-queue.json`
-  const queue = JSON.parse(read_file(queuePath))
-  const execTasks = queue.queue.map(entry => ({
-    issue_id: entry.issue_id,
-    solution_id: entry.solution_id,
-    title: entry.title || entry.issue_id,
-    priority: entry.priority || "normal",
-    depends_on: entry.depends_on || []
-  }))
-  // Output structured wave data for orchestrator
  console.log(`
-WAVE_READY:
+ISSUE_READY:
${JSON.stringify({
-  wave_number: waveNum,
-  issue_ids: waveIssues,
-  queue_path: queuePath,
-  exec_tasks: execTasks
-}, null, 2)}
+  issue_id: issueId,
+  solution_id: solution.bound?.id || 'N/A',
+  title: solution.bound?.title || issueId,
+  priority: "normal",
+  depends_on: blockedBy,
+  solution_file: solutionFile
+}, null, 2)}
`)
-  // Wait for orchestrator send_input before continuing to next wave
-  // (orchestrator will send: "Wave N dispatched. Continue to Wave N+1.")
+  // Wait for orchestrator send_input before continuing to next issue
+  // (orchestrator will send: "Issue dispatched. Continue to next issue.")
}
```
### Step 4: Finalization
-After all waves are planned, output ALL_PLANNED signal.
+After all issues are planned, output ALL_PLANNED signal.
```javascript
console.log(`
ALL_PLANNED:
${JSON.stringify({
-  total_waves: waveNum,
total_issues: issueIds.length
}, null, 2)}
`)
```
## Inline Conflict Check
```javascript
function inlineConflictCheck(issueId, solution, dispatchedSolutions) {
const currentFiles = solution.bound?.files_touched
|| solution.bound?.affected_files || []
const blockedBy = []
// 1. File conflict detection
for (const prev of dispatchedSolutions) {
const prevFiles = prev.solution.bound?.files_touched
|| prev.solution.bound?.affected_files || []
const overlap = currentFiles.filter(f => prevFiles.includes(f))
if (overlap.length > 0) {
blockedBy.push(prev.issueId)
}
}
// 2. Explicit dependencies
const explicitDeps = solution.bound?.dependencies?.on_issues || []
for (const depId of explicitDeps) {
if (!blockedBy.includes(depId)) {
blockedBy.push(depId)
}
}
return blockedBy
}
```
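A quick standalone check of the expected behaviour; the function is repeated verbatim so the snippet runs on its own, and the two solutions are hypothetical:

```javascript
// Copy of inlineConflictCheck above, plus a worked example with two hypothetical solutions.
function inlineConflictCheck(issueId, solution, dispatchedSolutions) {
  const currentFiles = solution.bound?.files_touched || solution.bound?.affected_files || []
  const blockedBy = []
  // 1. File conflict detection: any overlap blocks on the earlier issue
  for (const prev of dispatchedSolutions) {
    const prevFiles = prev.solution.bound?.files_touched || prev.solution.bound?.affected_files || []
    if (currentFiles.some(f => prevFiles.includes(f))) blockedBy.push(prev.issueId)
  }
  // 2. Explicit dependencies declared on the solution
  for (const depId of solution.bound?.dependencies?.on_issues || []) {
    if (!blockedBy.includes(depId)) blockedBy.push(depId)
  }
  return blockedBy
}

// ISS-002 touches login.ts, already claimed by ISS-001, so it must wait on ISS-001
const dispatched = [
  { issueId: 'ISS-001', solution: { bound: { files_touched: ['src/auth/login.ts'] } } }
]
const next = { bound: { files_touched: ['src/auth/login.ts', 'src/auth/token.ts'] } }
const blockedBy = inlineConflictCheck('ISS-002', next, dispatched)
```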
## Role Boundaries
### MUST
- Only perform planning and decomposition work
-- Output structured WAVE_READY data after each wave completes
-- Output ALL_PLANNED after all waves complete
-- Invoke issue-plan-agent and issue-queue-agent via spawn_agent
-- Wait for the orchestrator's send_input before continuing to the next wave
+- Output structured ISSUE_READY data after each issue completes
+- Output ALL_PLANNED after all issues complete
+- Invoke issue-plan-agent via spawn_agent (one issue at a time)
+- Wait for the orchestrator's send_input before continuing to the next issue
+- Write solutions to intermediate artifact files
### MUST NOT
@@ -267,16 +264,17 @@ function parsePlanPhases(planContent) {
**ALWAYS**:
- Read role definition file as FIRST action (Step 1)
-- Follow structured output template (WAVE_READY / ALL_PLANNED)
+- Follow structured output template (ISSUE_READY / ALL_PLANNED)
- Stay within planning boundaries (no code implementation)
-- Spawn issue-plan-agent and issue-queue-agent for each wave
-- Include all issue IDs and solution references in wave data
+- Spawn issue-plan-agent for each issue individually
+- Write solution artifact file before outputting ISSUE_READY
+- Include solution_file path in ISSUE_READY data
**NEVER**:
- Modify source code files
- Skip context loading (Step 1)
- Produce unstructured or free-form output
-- Continue to next wave without outputting WAVE_READY
+- Continue to next issue without outputting ISSUE_READY
- Close without outputting ALL_PLANNED
## Error Handling
@@ -285,7 +283,8 @@ function parsePlanPhases(planContent) {
|----------|--------|
| Issue creation failure | Retry once with simplified text, report in output |
| issue-plan-agent timeout | Urge convergence via send_input, close and report partial |
-| issue-queue-agent failure | Create exec tasks without DAG ordering |
+| Inline conflict check failure | Use empty depends_on, continue |
+| Solution artifact write failure | Report error, continue with ISSUE_READY output |
| Plan file not found | Report error in output with CLARIFICATION_NEEDED |
| Empty input (no issues, no text) | Output CLARIFICATION_NEEDED asking for requirements |
| Sub-agent produces invalid output | Report error, continue with available data |