mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-02-28 09:23:08 +08:00
feat: add SpecDialog component for editing spec frontmatter
- Implement SpecDialog for managing spec details including title, read mode, priority, and keywords.
- Add validation and keyword management functionality.
- Integrate SpecDialog into SpecsSettingsPage for editing specs.

feat: create index file for specs components

- Export SpecCard, SpecDialog, and related types from a new index file for better organization.

feat: implement SpecsSettingsPage for managing specs and hooks

- Create main settings page with tabs for Project Specs, Personal Specs, Hooks, Injection, and Settings.
- Integrate SpecDialog and HookDialog for editing specs and hooks.
- Add search functionality and mock data for specs and hooks.

feat: add spec management API routes

- Implement API endpoints for listing specs, getting spec details, updating frontmatter, rebuilding indices, and initializing the spec system.
- Handle errors and responses appropriately for each endpoint.
@@ -148,7 +148,7 @@ Phase 5: Plan Quality Check (MANDATORY)
│ ├─ Dependency correctness (no circular deps, proper ordering)
│ ├─ Acceptance criteria quality (quantified, testable)
│ ├─ Implementation steps sufficiency (2+ steps per task)
-│ └─ Constraint compliance (follows project-guidelines.json)
+│ └─ Constraint compliance (follows specs/*.md)
├─ Parse check results and categorize issues
└─ Decision:
├─ No issues → Return plan to orchestrator
@@ -848,7 +848,7 @@ After generating plan.json, **MUST** execute CLI quality check before returning
| **Dependencies** | No circular deps, correct ordering | Yes |
| **Acceptance Criteria** | Quantified and testable (not vague) | No |
| **Implementation Steps** | 2+ actionable steps per task | No |
-| **Constraint Compliance** | Follows project-guidelines.json | Yes |
+| **Constraint Compliance** | Follows specs/*.md | Yes |

### CLI Command Format

@@ -857,7 +857,7 @@ Use `ccw cli` with analysis mode to validate plan against quality dimensions:

```bash
ccw cli -p "Validate plan quality: completeness, granularity, dependencies, acceptance criteria, implementation steps, constraint compliance" \
  --tool gemini --mode analysis \
-  --context "@{plan_json_path} @.workflow/project-guidelines.json"
+  --context "@{plan_json_path} @.workflow/specs/*.md"
```

**Expected Output Structure**:

@@ -23,8 +23,8 @@ Check these items. Report results as a checklist.

- **project-tech.json**: Check `{projectRoot}/.workflow/project-tech.json`
  - If missing: Read `package.json` / `tsconfig.json` / `pyproject.toml` and generate a minimal version. Ask user: "Detected that the project uses [tech stack]. Is this correct? Anything to add?"
-- **project-guidelines.json**: Check `{projectRoot}/.workflow/project-guidelines.json`
-  - If missing: Scan for `.eslintrc`, `.prettierrc`, `ruff.toml` etc. Ask user: "project-guidelines.json not found. Are there specific coding conventions to follow?"
+- **specs/*.md**: Check `{projectRoot}/.workflow/specs/*.md`
+  - If missing: Scan for `.eslintrc`, `.prettierrc`, `ruff.toml` etc. Ask user: "specs/*.md not found. Are there specific coding conventions to follow?"
- **Test framework**: Detect from config files (jest.config, vitest.config, pytest.ini, etc.)
  - If missing: Ask user: "No test framework configuration detected. Please specify the test command (e.g. `npm test`, `pytest`), or enter 'skip' to skip test verification."

@@ -39,7 +39,7 @@ Print formatted checklist:
✓ Workspace: .workflow/.cycle/ ready
⚠ Git: 3 uncommitted changes
✓ project-tech.json: detected (Express + TypeORM + PostgreSQL)
-⚠ project-guidelines.json: not found (skipped)
+⚠ specs/*.md: not found (skipped)
✓ Test framework: jest (npm test)
```

@@ -173,7 +173,7 @@ Read the user's `$TASK` and score each dimension:

For dimensions still at score 1 after Q&A, auto-enhance from codebase:
- **Scope**: Use `Glob` and `Grep` to find related files, list them
- **Context**: Read `project-tech.json` and key config files
-- **Constraints**: Infer from `project-guidelines.json` and existing patterns
+- **Constraints**: Infer from `specs/*.md` and existing patterns

### 2.5 Assemble Refined Task

@@ -23,7 +23,7 @@ Check these items. Report results as a checklist.

- **project-tech.json**: Check `{projectRoot}/.workflow/project-tech.json`
  - If missing: WARN — Phase 1 will call `workflow:init` to generate it. Ask user: "Detected that the project uses [tech stack from package.json]. Is this correct? Anything to add?"
-- **project-guidelines.json**: Check `{projectRoot}/.workflow/project-guidelines.json`
+- **specs/*.md**: Check `{projectRoot}/.workflow/specs/*.md`
  - If missing: WARN — will be generated as an empty scaffold. Ask: "Are there specific coding conventions to follow?"
- **Test framework**: Detect from config files (jest.config, vitest.config, pytest.ini, etc.)
  - If missing: Ask: "No test framework detected. Specify the test command (e.g. `npm test`), or enter 'skip' to skip."
@@ -39,7 +39,7 @@ Print formatted checklist:
✓ .workflow/ directory ready
⚠ Git: 3 uncommitted changes
✓ project-tech.json: detected (Express + TypeORM + PostgreSQL)
-⚠ project-guidelines.json: not found (Phase 1 will generate an empty template)
+⚠ specs/*.md: not found (Phase 1 will generate an empty template)
✓ Test framework: jest (npm test)
```

@@ -163,7 +163,7 @@ Each dimension scores 0-2 (0=missing, 1=vague, 2=clear). **Total minimum: 6/10 t

For dimensions still at score 1 after Q&A, auto-enhance from codebase:
- **Scope**: Use `Glob` and `Grep` to find related files
- **Context**: Read `project-tech.json` and key config files
-- **Constraints**: Infer from `project-guidelines.json`
+- **Constraints**: Infer from `specs/*.md`

### 2.5 Assemble Structured Description

@@ -85,7 +85,7 @@ Step 1: Topic Understanding

Step 2: Exploration (Inline, No Agents)
├─ Detect codebase → search relevant modules, patterns
-│ ├─ Read project-tech.json / project-guidelines.json (if exists)
+│ ├─ Read project-tech.json / specs/*.md (if exists)
│ └─ Use Grep, Glob, Read, mcp__ace-tool__search_context
├─ Multi-perspective analysis (if selected, serial)
│ ├─ Single: Comprehensive analysis
@@ -298,7 +298,7 @@ const hasCodebase = Bash(`
if (hasCodebase !== 'none') {
  // 1. Read project metadata (if exists)
  //    - .workflow/project-tech.json (tech stack info)
-  //    - .workflow/project-guidelines.json (project conventions)
+  //    - .workflow/specs/*.md (project conventions)

  // 2. Search codebase for relevant content
  //    Use: Grep, Glob, Read, or mcp__ace-tool__search_context

@@ -359,7 +359,7 @@ const agentIds = perspectives.map(perspective => {
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-explore-agent.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
-3. Read: ${projectRoot}/.workflow/project-guidelines.json
+3. Read: ${projectRoot}/.workflow/specs/*.md

---

@@ -194,7 +194,7 @@ Use built-in tools directly to understand the task scope and identify sub-domain

**Analysis Activities**:
1. **Search for references** — Find related documentation, README files, and architecture guides
   - Use: `mcp__ace-tool__search_context`, Grep, Glob, Read
-   - Read: `.workflow/project-tech.json`, `.workflow/project-guidelines.json` (if exists)
+   - Read: `.workflow/project-tech.json`, `.workflow/specs/*.md` (if exists)
2. **Extract task keywords** — Identify key terms and concepts from the task description
3. **Identify ambiguities** — List any unclear points or multiple possible interpretations
4. **Clarify with user** — If ambiguities found, use AskUserQuestion for clarification

@@ -1,378 +0,0 @@
---
name: issue-devpipeline
description: |
  Plan-and-Execute pipeline with per-issue beat pattern.
  Orchestrator coordinates planner (Deep Interaction) and executors (Parallel Fan-out).
  Planner outputs per-issue solutions, executors implement solutions concurrently.
agents: 3
phases: 4
---

# Issue DevPipeline

A plan-while-executing pipeline. The orchestrator coordinates the planner and executor(s) through a per-issue beat pipeline: as soon as the planner finishes planning an issue it emits the result, the orchestrator immediately dispatches an executor agent for that issue, and the planner continues planning the next issue.

## Architecture Overview

```
┌─────────────────────────────────────────────────────────────┐
│  Orchestrator (this file)                                   │
│  → Parse input → Manage planner → Dispatch executors        │
└───────────┬──────────────────────────────────────┬──────────┘
            │                                      │
     ┌──────┴──────┐                    ┌──────────┴──────────┐
     │   Planner   │                    │    Executors (N)    │
     │   (Deep     │                    │  (Parallel Fan-out) │
     │  Interaction│                    │                     │
     │  per-issue) │                    │  exec-1 exec-2 ...  │
     └──────┬──────┘                    └──────────┬──────────┘
            │                                      │
     ┌──────┴──────┐                    ┌──────────┴──────────┐
     │ issue-plan  │                    │   code-developer    │
     │ (existing)  │                    │  (role reference)   │
     └─────────────┘                    └─────────────────────┘
```

**Per-Issue Beat Pipeline Flow**:
```
Planner → Issue 1 solution → ISSUE_READY
  ↓ (spawn executor for issue 1)
  ↓ send_input → Planner → Issue 2 solution → ISSUE_READY
  ↓ (spawn executor for issue 2)
  ...
  ↓ Planner outputs "all_planned"
  ↓ wait for all executor agents
  ↓ Aggregate results → Done
```
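
The beat pattern above can be sketched with stubbed primitives; the generator and helper below are illustrative stand-ins for the real `spawn_agent`/`wait`/`send_input` calls used in the phases that follow:

```javascript
// Minimal sketch of the per-issue beat: dispatch each issue as soon as the
// planner emits it, instead of waiting for the full plan. fakePlanner and
// runBeat are illustrative stubs, not part of the real pipeline.
function* fakePlanner() {
  yield { status: 'issue_ready', issue_id: 'ISS-1' }
  yield { status: 'issue_ready', issue_id: 'ISS-2' }
  yield { status: 'all_planned', issue_id: null }
}

function runBeat(planner) {
  const dispatched = []
  for (const beat of planner) {
    if (beat.issue_id) dispatched.push(beat.issue_id) // spawn executor here
    if (beat.status === 'all_planned') break          // planner is done
  }
  return dispatched
}

const order = runBeat(fakePlanner())
// Executors for ISS-1 and ISS-2 are dispatched before planning "finishes".
```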

## Agent Registry

| Agent | Role File | Responsibility | New/Existing |
|-------|-----------|----------------|--------------|
| `planex-planner` | `~/.codex/agents/planex-planner.md` | Requirement breakdown → issue creation → solution design → conflict check → per-issue output | New |
| `planex-executor` | `~/.codex/agents/planex-executor.md` | Load solution → implement code → test → commit | New |
| `issue-plan-agent` | `~/.codex/agents/issue-plan-agent.md` | Closed-loop: ACE exploration + solution generation | Existing |

## Input Types

Three input forms are supported (passed as orchestrator arguments):

| Input Type | Format | Example |
|------------|--------|---------|
| Issue IDs | Pass IDs directly | `ISS-20260215-001 ISS-20260215-002` |
| Requirement text | `--text '...'` | `--text 'Implement the user authentication module'` |
| Plan file | `--plan path` | `--plan plan/2026-02-15-auth.md` |
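
As a quick sketch, the three forms can be distinguished with the same regexes Phase 1 uses; `classifyInput` is an illustrative helper name, and the ID pattern accepts a 3-digit sequence to match the examples above:

```javascript
// Classify an orchestrator argument string into one of the input forms.
// Mirrors the Phase 1 parsing logic; helper name is an assumption.
function classifyInput(args) {
  const issueIds = args.match(/ISS-\d{8}-\d{3,6}/g) || []
  if (issueIds.length > 0) return 'issue_ids'
  if (/--text\s+['"]([^'"]+)['"]/.test(args)) return 'text'
  if (/--plan\s+(\S+)/.test(args)) return 'plan_file'
  return 'text_from_description'
}

const a = classifyInput('ISS-20260215-001 ISS-20260215-002')
const b = classifyInput("--text 'Implement the user authentication module'")
const c = classifyInput('--plan plan/2026-02-15-auth.md')
const d = classifyInput('just a free-form description')
```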

## Phase Execution

### Phase 1: Input Parsing (Orchestrator Inline)

```javascript
// Parse input arguments
const args = orchestratorInput
// Note: example IDs elsewhere in this doc use a 3-digit sequence
// (ISS-20260215-001), so the pattern accepts 3-6 trailing digits.
const issueIds = args.match(/ISS-\d{8}-\d{3,6}/g) || []
const textMatch = args.match(/--text\s+['"]([^'"]+)['"]/)
const planMatch = args.match(/--plan\s+(\S+)/)

let inputType = 'unknown'
if (issueIds.length > 0) inputType = 'issue_ids'
else if (textMatch) inputType = 'text'
else if (planMatch) inputType = 'plan_file'
else inputType = 'text_from_description'

const inputPayload = {
  type: inputType,
  issueIds: issueIds,
  text: textMatch ? textMatch[1] : args,
  planFile: planMatch ? planMatch[1] : null
}

// Initialize session directory for artifacts
const slug = (issueIds[0] || 'batch').replace(/[^a-zA-Z0-9-]/g, '')
const dateStr = new Date().toISOString().slice(0,10).replace(/-/g,'')
const sessionId = `PEX-${slug}-${dateStr}`
const sessionDir = `.workflow/.team/${sessionId}`
shell(`mkdir -p "${sessionDir}/artifacts/solutions"`)
```

### Phase 2: Planning (Deep Interaction with Planner — Per-Issue Beat)

```javascript
// Track all agents for cleanup
const allAgentIds = []

// Spawn planner agent
const plannerId = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/planex-planner.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json

---

Goal: Analyze the requirements and plan issue by issue. Emit each issue's result as soon as it is complete.

Input:
${JSON.stringify(inputPayload, null, 2)}

Session Dir: ${sessionDir}

Scope:
- Include: requirement analysis, issue creation, solution design, inline conflict check, writing intermediate artifacts
- Exclude: code implementation, test execution, git operations

Deliverables:
Each issue's output must strictly follow this JSON format:
\`\`\`json
{
  "status": "issue_ready" | "all_planned",
  "issue_id": "ISS-xxx",
  "solution_id": "SOL-xxx",
  "title": "description",
  "priority": "normal",
  "depends_on": [],
  "solution_file": "${sessionDir}/artifacts/solutions/ISS-xxx.json",
  "remaining_issues": ["ISS-yyy", ...],
  "summary": "planning summary for this issue"
}
\`\`\`

Quality bar:
- Every issue must have a bound solution
- Solutions are written to intermediate artifact files
- The inline conflict check marks depends_on
`
})
allAgentIds.push(plannerId)

// Wait for planner's first issue output
let plannerResult = wait({ ids: [plannerId], timeout_ms: 900000 })

if (plannerResult.timed_out) {
  send_input({ id: plannerId, message: "Please output the planning results completed so far as soon as possible." })
  plannerResult = wait({ ids: [plannerId], timeout_ms: 120000 })
}

// Parse planner output
let issueData = parseIssueOutput(plannerResult.status[plannerId].completed)
```

### Phase 3: Per-Issue Execution Loop

```javascript
const executorResults = []
let issueCount = 0

while (true) {
  issueCount++

  // ─── Dispatch executor for current issue (if valid) ───
  if (issueData && issueData.issue_id) {
    const executorId = spawn_agent({
      message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/planex-executor.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json

---

Goal: Implement the solution for ${issueData.issue_id}

Issue: ${issueData.issue_id}
Solution: ${issueData.solution_id}
Title: ${issueData.title}
Priority: ${issueData.priority}
Dependencies: ${issueData.depends_on?.join(', ') || 'none'}
Solution File: ${issueData.solution_file}
Session Dir: ${sessionDir}

Scope:
- Include: load solution plan, implement code, run tests, git commit
- Exclude: issue creation, solution modification

Deliverables:
Output must strictly follow this format:
\`\`\`json
{
  "issue_id": "${issueData.issue_id}",
  "status": "success" | "failed",
  "files_changed": ["path/to/file", ...],
  "tests_passed": true | false,
  "committed": true | false,
  "commit_hash": "abc123" | null,
  "error": null | "error description",
  "summary": "implementation summary"
}
\`\`\`

Quality bar:
- Every task in the solution plan must be implemented
- Existing tests must not break
- Follow the project's coding conventions
- Every change must be committed
`
    })
    allAgentIds.push(executorId)
    executorResults.push({
      id: executorId,
      issueId: issueData.issue_id,
      index: issueCount
    })
  }

  // ─── Check if all planned ───
  if (issueData?.status === 'all_planned') {
    break
  }

  // ─── Request next issue from planner ───
  send_input({
    id: plannerId,
    message: `Issue ${issueData?.issue_id || 'unknown'} dispatched. Continue to next issue.`
  })

  // ─── Wait for planner's next issue ───
  const nextResult = wait({ ids: [plannerId], timeout_ms: 900000 })

  if (nextResult.timed_out) {
    send_input({ id: plannerId, message: "Please output the planning results completed so far as soon as possible." })
    const retryResult = wait({ ids: [plannerId], timeout_ms: 120000 })
    if (retryResult.timed_out) break
    issueData = parseIssueOutput(retryResult.status[plannerId].completed)
  } else {
    issueData = parseIssueOutput(nextResult.status[plannerId].completed)
  }
}

// ─── Wait for all executor agents ───
const executorIds = executorResults.map(e => e.id)
if (executorIds.length > 0) {
  const execResults = wait({ ids: executorIds, timeout_ms: 1200000 })

  // Handle timeouts
  if (execResults.timed_out) {
    const pending = executorIds.filter(id => !execResults.status[id]?.completed)
    pending.forEach(id => {
      send_input({ id, message: "Please finalize current task and output results." })
    })
    wait({ ids: pending, timeout_ms: 120000 })
  }

  // Collect results
  executorResults.forEach(entry => {
    entry.result = execResults.status[entry.id]?.completed || 'timeout'
  })
}
```

### Phase 4: Aggregation & Cleanup

```javascript
// ─── Aggregate results ───
const succeeded = executorResults.filter(r => {
  try {
    const parsed = JSON.parse(r.result)
    return parsed.status === 'success'
  } catch { return false }
})

const failed = executorResults.filter(r => {
  try {
    const parsed = JSON.parse(r.result)
    return parsed.status === 'failed'
  } catch { return true }
})

// ─── Output final report ───
const report = `
## PlanEx Pipeline Complete

**Total Issues**: ${executorResults.length}
**Succeeded**: ${succeeded.length}
**Failed**: ${failed.length}

### Results
${executorResults.map(r => `- ${r.issueId} | ${(() => {
  try { return JSON.parse(r.result).status } catch { return 'error' }
})()}`).join('\n')}

${failed.length > 0 ? `### Failed Issues
${failed.map(r => `- ${r.issueId}: ${(() => {
  try { return JSON.parse(r.result).error } catch { return r.result?.slice(0, 200) || 'unknown' }
})()}`).join('\n')}` : ''}
`

console.log(report)

// ─── Lifecycle cleanup ───
allAgentIds.forEach(id => {
  try { close_agent({ id }) } catch { /* already closed */ }
})
```
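
The success/failed split above can be exercised with stub executor results; the sample payloads below are illustrative:

```javascript
// Stub executor results: anything that parses to status "success" counts as
// succeeded; parse failures (e.g. the literal string 'timeout') count as failed.
const results = [
  { issueId: 'ISS-1', result: '{"status":"success"}' },
  { issueId: 'ISS-2', result: '{"status":"failed","error":"tests"}' },
  { issueId: 'ISS-3', result: 'timeout' }
]

const ok = results.filter(r => {
  try { return JSON.parse(r.result).status === 'success' } catch { return false }
})
const bad = results.filter(r => {
  try { return JSON.parse(r.result).status === 'failed' } catch { return true }
})
// ok holds ISS-1; bad holds ISS-2 and the unparseable ISS-3.
```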

## Helper Functions

```javascript
function parseIssueOutput(output) {
  // Extract JSON block from agent output
  const jsonMatch = output.match(/```json\s*([\s\S]*?)```/)
  if (jsonMatch) {
    try { return JSON.parse(jsonMatch[1]) } catch {}
  }
  // Fallback: try parsing entire output as JSON
  try { return JSON.parse(output) } catch {}
  // Last resort: return empty with all_planned
  return { status: 'all_planned', issue_id: null, remaining_issues: [], summary: 'Parse failed' }
}
```
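
A usage sketch of `parseIssueOutput` (the helper is repeated so the example is self-contained; the planner reply text is illustrative):

```javascript
// parseIssueOutput copied from above; recovers a JSON object from a reply
// that wraps it in a fenced ```json block.
function parseIssueOutput(output) {
  const jsonMatch = output.match(/```json\s*([\s\S]*?)```/)
  if (jsonMatch) {
    try { return JSON.parse(jsonMatch[1]) } catch {}
  }
  try { return JSON.parse(output) } catch {}
  return { status: 'all_planned', issue_id: null, remaining_issues: [], summary: 'Parse failed' }
}

// Illustrative planner reply with prose around the JSON block
const fence = '`'.repeat(3)
const reply = `Planned the first issue.\n${fence}json\n{"status":"issue_ready","issue_id":"ISS-20260215-001"}\n${fence}\n`
const parsed = parseIssueOutput(reply)
```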

## Configuration

```javascript
const CONFIG = {
  sessionDir: ".workflow/.team/PEX-{slug}-{date}/",
  artifactsDir: ".workflow/.team/PEX-{slug}-{date}/artifacts/",
  issueDataDir: ".workflow/issues/",
  plannerTimeout: 900000,   // 15 min
  executorTimeout: 1200000, // 20 min
  maxIssues: 50
}
```
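
The `{slug}` and `{date}` placeholders in these paths are filled at runtime; a minimal sketch, with `resolveSessionPath` as an assumed helper name, following the Phase 1 `sessionId` construction:

```javascript
// Resolve the {slug}/{date} placeholders used in CONFIG paths.
// Helper name is an illustrative assumption.
function resolveSessionPath(template, slug, date) {
  return template.replace('{slug}', slug).replace('{date}', date)
}

const dir = resolveSessionPath(
  '.workflow/.team/PEX-{slug}-{date}/',
  'ISS-20260215-001',
  '20260215'
)
// dir: ".workflow/.team/PEX-ISS-20260215-001-20260215/"
```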

## Lifecycle Management

### Timeout Handling

| Scenario | Action |
|----------|--------|
| Planner issue timeout | send_input to push for convergence, then retry wait 120s |
| Executor timeout | Mark as failed, continue other executors |
| Batch wait partial timeout | Collect completed results, continue pipeline |
| Pipeline stall (> 3 issues timeout) | Abort pipeline, output partial results |

### Cleanup Protocol

```javascript
// All agents tracked in allAgentIds
// Final cleanup at end or on error
allAgentIds.forEach(id => {
  try { close_agent({ id }) } catch { /* already closed */ }
})
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Planner output parse failure | Retry with send_input asking for strict JSON |
| No issues created | Report error, abort pipeline |
| Solution planning failure | Skip issue, report in final results |
| Executor implementation failure | Mark as failed, continue with other executors |
| Inline conflict check failure | Use empty depends_on, continue |
| Planner exits early | Treat as all_planned, finish current executors |

@@ -1,451 +0,0 @@
---
name: planex-executor
description: |
  PlanEx execution role. Load the solution plan from an intermediate artifact file (with CLI fallback) → implement code → verify with tests → git commit.
  Each executor instance handles the solution for one issue.
color: green
skill: issue-devpipeline
---

# PlanEx Executor

Code implementation role. Receives the issue + solution info dispatched by the orchestrator, loads the solution plan from an intermediate artifact file (with CLI fallback), implements the code changes, runs tests to verify, and commits the changes. Each executor instance handles one issue independently.

## Core Capabilities

1. **Solution loading**: Load the solution plan from an intermediate artifact file (compatible with the `ccw issue solutions <id> --json` fallback)
2. **Code implementation**: Implement code changes in the order of the solution plan's task list
3. **Test verification**: Run the relevant tests to ensure changes are correct and do not break existing functionality
4. **Change commit**: Commit the implemented code to git

## Execution Logging

During execution, two log files **must** be maintained in real time, recording the status and details of each task.

### Session Folder

```javascript
// sessionFolder comes from session_dir in the TASK ASSIGNMENT, or falls back to a default path
const sessionFolder = taskAssignment.session_dir || `.workflow/.team/PEX-${issueId}`
```

### execution.md — Execution Overview

Initialized before implementation starts; task statuses are updated as tasks complete or fail.

```javascript
function initExecution(issueId, solution) {
  const executionMd = `# Execution Overview

## Session Info
- **Issue**: ${issueId}
- **Solution**: ${solution.bound?.id || 'N/A'}
- **Started**: ${getUtc8ISOString()}
- **Executor**: planex-executor (issue-devpipeline)
- **Execution Mode**: Direct inline

## Solution Tasks

| # | Task | Files | Status |
|---|------|-------|--------|
${(solution.bound?.tasks || []).map((t, i) =>
  `| ${i+1} | ${t.title || t.description || 'Task ' + (i+1)} | ${(t.files || []).join(', ') || '-'} | pending |`
).join('\n')}

## Execution Timeline
> Updated as tasks complete

## Execution Summary
> Updated after completion
`
  write_file(`${sessionFolder}/execution.md`, executionMd)
}
```

### execution-events.md — Event Stream

START/COMPLETE/FAIL events for each task are appended in real time.

```javascript
function initEvents(issueId) {
  const eventsHeader = `# Execution Events

**Issue**: ${issueId}
**Executor**: planex-executor (issue-devpipeline)
**Started**: ${getUtc8ISOString()}

---

`
  write_file(`${sessionFolder}/execution-events.md`, eventsHeader)
}

function appendEvent(content) {
  // Append to execution-events.md
  const existing = read_file(`${sessionFolder}/execution-events.md`)
  write_file(`${sessionFolder}/execution-events.md`, existing + content)
}

function recordTaskStart(task, index) {
  appendEvent(`## ${getUtc8ISOString()} — Task ${index + 1}: ${task.title || task.description || 'Unnamed'}

**Status**: ⏳ IN PROGRESS
**Files**: ${(task.files || []).join(', ') || 'TBD'}

### Execution Log
`)
}

function recordTaskComplete(task, index, filesModified, changeSummary, duration) {
  appendEvent(`
**Status**: ✅ COMPLETED
**Duration**: ${duration}
**Files Modified**: ${filesModified.join(', ')}

#### Changes Summary
${changeSummary}

---
`)
}

function recordTaskFailed(task, index, error, duration) {
  appendEvent(`
**Status**: ❌ FAILED
**Duration**: ${duration}
**Error**: ${error}

---
`)
}

function updateTaskStatus(taskIndex, status) {
  // Update the task row in the execution.md table: replace "pending" → status
  const content = read_file(`${sessionFolder}/execution.md`)
  let row = -1
  const updated = content.split('\n').map(line => {
    // Task rows start with "| <number> |"; update the taskIndex-th one
    if (/^\|\s*\d+\s*\|/.test(line) && ++row === taskIndex) {
      return line.replace(/\|[^|]*\|$/, `| ${status} |`)
    }
    return line
  }).join('\n')
  write_file(`${sessionFolder}/execution.md`, updated)
}

function finalizeExecution(totalTasks, succeeded, failedCount, filesModified) {
  const summary = `
## Execution Summary

- **Completed**: ${getUtc8ISOString()}
- **Total Tasks**: ${totalTasks}
- **Succeeded**: ${succeeded}
- **Failed**: ${failedCount}
- **Success Rate**: ${Math.round(succeeded / totalTasks * 100)}%
- **Files Modified**: ${filesModified.join(', ')}
`
  // Append summary to execution.md
  const content = read_file(`${sessionFolder}/execution.md`)
  write_file(`${sessionFolder}/execution.md`,
    content.replace('> Updated after completion', summary))

  // Append session footer to execution-events.md
  appendEvent(`
---

# Session Summary

- **Issue**: ${issueId}
- **Completed**: ${getUtc8ISOString()}
- **Tasks**: ${succeeded} completed, ${failedCount} failed
`)
}

function getUtc8ISOString() {
  return new Date(Date.now() + 8 * 3600000).toISOString().replace('Z', '+08:00')
}
```
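
The `getUtc8ISOString` helper above shifts the clock by 8 hours and relabels the zone suffix; repeated here as a self-contained check of the format it produces:

```javascript
// UTC+8 timestamp helper, copied from the logging block above.
function getUtc8ISOString() {
  return new Date(Date.now() + 8 * 3600000).toISOString().replace('Z', '+08:00')
}

const ts = getUtc8ISOString()
// Strings look like "2026-02-28T09:23:08.000+08:00".
```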

## Execution Process

### Step 1: Context Loading

**MANDATORY**: Execute these steps FIRST before any other action.

1. Read this role definition file (already done if you're reading this)
2. Read: `.workflow/project-tech.json` — understand the project technology stack
3. Read: `.workflow/project-guidelines.json` — understand project conventions
4. Parse the TASK ASSIGNMENT from the spawn message for:
   - **Goal**: implement the assigned issue's solution
   - **Issue ID**: the target issue identifier
   - **Solution ID**: the bound solution identifier
   - **Dependencies**: other issues this one depends on (should already be complete)
   - **Session Dir**: where the log files are stored
   - **Deliverables**: Expected JSON output format

### Step 2: Solution Loading & Implementation

```javascript
// ── Load solution plan (dual-mode: artifact file first, CLI fallback) ──
const issueId = taskAssignment.issue_id
const solutionFile = taskAssignment.solution_file

let solution
if (solutionFile) {
  try {
    const solutionData = JSON.parse(read_file(solutionFile))
    solution = solutionData.bound ? solutionData : { bound: solutionData }
  } catch {
    // Fallback to CLI
    const solJson = shell(`ccw issue solutions ${issueId} --json`)
    solution = JSON.parse(solJson)
  }
} else {
  const solJson = shell(`ccw issue solutions ${issueId} --json`)
  solution = JSON.parse(solJson)
}

if (!solution.bound) {
  outputError(`No bound solution for ${issueId}`)
  return
}

// ── Initialize execution logs ──
shell(`mkdir -p ${sessionFolder}`)
initExecution(issueId, solution)
initEvents(issueId)

// Update issue status
shell(`ccw issue update ${issueId} --status in-progress`)

// ── Implement according to solution plan ──
const plan = solution.bound
const tasks = plan.tasks || []
let succeeded = 0, failedCount = 0
const allFilesModified = []

for (let i = 0; i < tasks.length; i++) {
  const task = tasks[i]
  const startTime = Date.now()

  // Record START event
  recordTaskStart(task, i)

  try {
    // 1. Read target files
    // 2. Apply changes following existing patterns
    // 3. Write/Edit files
    // 4. Verify no syntax errors

    const endTime = Date.now()
    const duration = `${Math.round((endTime - startTime) / 1000)}s`
    const filesModified = getModifiedFiles()
    allFilesModified.push(...filesModified)

    // Record COMPLETE event
    recordTaskComplete(task, i, filesModified, changeSummary, duration)
    updateTaskStatus(i, 'completed')
    succeeded++
  } catch (error) {
    const endTime = Date.now()
    const duration = `${Math.round((endTime - startTime) / 1000)}s`

    // Record FAIL event
    recordTaskFailed(task, i, error.message, duration)
    updateTaskStatus(i, 'failed')
    failedCount++
  }
}
```

**Implementation principles**:
- Implement tasks in the order given by the solution plan
- Follow the project's existing code style and patterns
- Minimize changes; do not modify anything beyond the solution's scope
- Verify there are no syntax errors after each task completes

### Step 3: Testing, Commit & Finalize Logs

```javascript
// ── Detect test command ──
let testCmd = 'npm test'
try {
  const pkgJson = JSON.parse(read_file('package.json'))
  if (pkgJson.scripts?.test) testCmd = 'npm test'
  else if (pkgJson.scripts?.['test:unit']) testCmd = 'npm run test:unit'
} catch {
  if (fileExists('pytest.ini') || fileExists('setup.py')) testCmd = 'pytest'
  else if (fileExists('Cargo.toml')) testCmd = 'cargo test'
}

// ── Run tests ──
const testStartTime = Date.now()
appendEvent(`## ${getUtc8ISOString()} — Integration Test Verification

**Status**: ⏳ IN PROGRESS
**Command**: \`${testCmd}\`

### Test Log
`)

// shell() returns the combined output as a string (as elsewhere in this file),
// so pass/fail is judged from the output text
const testResult = shell(`${testCmd} 2>&1`)
let testsPassed = !testResult.includes('FAIL')

if (!testsPassed) {
  let retries = 0
  while (retries < 2 && !testsPassed) {
    appendEvent(`- Retry ${retries + 1}: fixing test failures...\n`)
    retries++
    // (the agent attempts a fix here before re-running the tests)
    const retestResult = shell(`${testCmd} 2>&1`)
    testsPassed = !retestResult.includes('FAIL')
  }
}

const testDuration = `${Math.round((Date.now() - testStartTime) / 1000)}s`

if (testsPassed) {
  appendEvent(`
**Status**: ✅ TESTS PASSED
**Duration**: ${testDuration}

---
`)
} else {
  appendEvent(`
**Status**: ❌ TESTS FAILED
**Duration**: ${testDuration}
**Output** (truncated):
\`\`\`
${testResult.slice(0, 500)}
\`\`\`

---
`)
}

// ── Commit if tests pass ──
let commitHash = null
let committed = false

if (testsPassed) {
  shell('git add -A')
  shell(`git commit -m "feat(${issueId}): implement solution ${solution.bound.id}"`)
  commitHash = shell('git rev-parse --short HEAD').trim()
  committed = true

  appendEvent(`## ${getUtc8ISOString()} — Git Commit

**Commit**: \`${commitHash}\`
**Message**: feat(${issueId}): implement solution ${solution.bound.id}

---
`)

  shell(`ccw issue update ${issueId} --status resolved`)
}

// ── Finalize execution logs ──
finalizeExecution(tasks.length, succeeded, failedCount, [...new Set(allFilesModified)])
```
|
||||

### Step 4: Output Delivery

Output strictly follows the JSON format required by the orchestrator:

```json
{
  "issue_id": "ISS-20260215-001",
  "status": "success",
  "files_changed": [
    "src/auth/login.ts",
    "src/auth/login.test.ts"
  ],
  "tests_passed": true,
  "committed": true,
  "commit_hash": "abc1234",
  "error": null,
  "summary": "Implemented user login; added 2 files; all tests pass",
  "execution_logs": {
    "execution_md": "${sessionFolder}/execution.md",
    "events_md": "${sessionFolder}/execution-events.md"
  }
}
```
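The required fields above can be checked before the result is returned. A minimal sketch (`validateExecutorOutput` is a hypothetical helper, not part of the ccw CLI):

```javascript
// Hypothetical helper: verify an executor result object carries every
// field the orchestrator expects before it is printed as JSON.
function validateExecutorOutput(result) {
  const required = [
    'issue_id', 'status', 'files_changed', 'tests_passed',
    'committed', 'commit_hash', 'error', 'summary', 'execution_logs'
  ]
  const missing = required.filter(key => !(key in result))
  return { ok: missing.length === 0, missing }
}

const check = validateExecutorOutput({
  issue_id: 'ISS-20260215-001',
  status: 'success',
  files_changed: [],
  tests_passed: true,
  committed: true,
  commit_hash: 'abc1234',
  error: null,
  summary: 'ok',
  execution_logs: {}
})
console.log(check.ok) // true when every required field is present
```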

**Output on failure**:

```json
{
  "issue_id": "ISS-20260215-001",
  "status": "failed",
  "files_changed": ["src/auth/login.ts"],
  "tests_passed": false,
  "committed": false,
  "commit_hash": null,
  "error": "Tests failing: login.test.ts:42 - Expected 200 got 401",
  "summary": "Implementation complete but tests failing; solution revision needed",
  "execution_logs": {
    "execution_md": "${sessionFolder}/execution.md",
    "events_md": "${sessionFolder}/execution-events.md"
  }
}
```

## Execution Log Output Structure

```
${sessionFolder}/
├── execution.md          # Execution overview: session info, task table, summary
└── execution-events.md   # Event stream: START/COMPLETE/FAIL details per task
```

| File | Purpose |
|------|---------|
| `execution.md` | Overview: solution task table, execution statistics, final result |
| `execution-events.md` | Timeline: detailed event records for each task and test verification |

## Role Boundaries

### MUST

- Handle only the single assigned issue
- Implement strictly according to the solution plan
- Read the target files to understand existing code before implementing
- Follow project coding conventions (from project-guidelines.json)
- Run tests to verify changes
- Output results in strict JSON format

### MUST NOT

- ❌ Create new issues
- ❌ Modify the solution or queue
- ❌ Implement functionality beyond the solution scope
- ❌ Skip tests and commit directly
- ❌ Modify files unrelated to the current issue
- ❌ Output results in non-JSON format

## Key Reminders

**ALWAYS**:
- Read role definition file as FIRST action (Step 1)
- **Initialize execution.md + execution-events.md BEFORE starting implementation**
- **Record START event before each solution task**
- **Record COMPLETE/FAIL event after each task with duration and details**
- **Finalize logs after testing and commit**
- Load solution plan before implementing
- Follow existing code patterns in the project
- Run tests before committing
- Report accurate `files_changed` list
- Include meaningful `summary` and `error` descriptions

**NEVER**:
- Modify files outside the solution scope
- Skip context loading (Step 1)
- Commit untested code
- Over-engineer beyond the solution plan
- Suppress test failures (`@ts-ignore`, `.skip`, etc.)
- Output unstructured text

## Error Handling

| Scenario | Action |
|----------|--------|
| Solution not found | Output `status: "failed"`, `error: "No bound solution"` |
| Target file not found | Create file if solution specifies, otherwise report error |
| Syntax/type errors after changes | Fix immediately, re-verify |
| Tests failing after 2 retries | Output `status: "failed"` with test output in error |
| Git commit failure | Output `committed: false`, include error |
| Issue status update failure | Log warning, continue with output |
@@ -1,286 +0,0 @@
---
name: planex-planner
description: |
  PlanEx planning role. Requirement breakdown → issue creation → solution design → inline conflict check.
  Emits execution info per issue; supports multi-round Deep Interaction.
color: blue
skill: issue-devpipeline
---

# PlanEx Planner

Requirement analysis and planning role. Receives requirement input (issue IDs / text / plan file), performs requirement breakdown, issue creation, solution design (via issue-plan-agent), and inline conflict checking, then emits execution info per issue so the orchestrator can dispatch executors immediately.

## Core Capabilities

1. **Requirement Analysis**: Parse the input type and extract requirement elements
2. **Issue Creation**: Break text/plan input into structured issues (via `ccw issue new`)
3. **Solution Design**: Invoke issue-plan-agent to generate a solution for each issue
4. **Inline Conflict Check**: Overlap detection based on files_touched plus explicit dependency ordering
5. **Intermediate Artifacts**: Write solutions to files for executors to load directly
6. **Per-Issue Output**: Emit JSON as soon as each issue completes so the orchestrator can dispatch immediately

## Execution Process

### Step 1: Context Loading

**MANDATORY**: Execute these steps FIRST before any other action.

1. Read this role definition file (already done if you're reading this)
2. Read: `.workflow/project-tech.json` — understand project technology stack
3. Read: `.workflow/project-guidelines.json` — understand project conventions
4. Parse the TASK ASSIGNMENT from the spawn message for:
   - **Goal**: What to achieve
   - **Scope**: What's allowed and forbidden
   - **Input**: Input payload with type, issueIds, text, planFile
   - **Session Dir**: Path for writing solution artifacts
   - **Deliverables**: Expected JSON output format

### Step 2: Input Processing & Issue Creation

Create issues according to the input type.

```javascript
const input = taskAssignment.input
const sessionDir = taskAssignment.session_dir

if (input.type === 'issue_ids') {
  // Issue IDs already provided; use them directly
  issueIds = input.issueIds
}

if (input.type === 'text' || input.type === 'text_from_description') {
  // Create an issue from text
  const result = shell(`ccw issue new --text '${input.text}' --json`)
  const issue = JSON.parse(result)
  issueIds = [issue.id]
}

if (input.type === 'plan_file') {
  // Read the plan file and parse phases/steps
  const planContent = readFile(input.planFile)
  const phases = parsePlanPhases(planContent)

  // Create one issue per phase
  issueIds = []
  for (const phase of phases) {
    const result = shell(`ccw issue new --text '${phase.title}: ${phase.description}' --json`)
    const issue = JSON.parse(result)
    issueIds.push(issue.id)
  }
}
```

### Step 3: Per-Issue Solution Planning & Artifact Writing

Process issues one by one: plan-agent → write intermediate artifact → conflict check → output JSON.

```javascript
const projectRoot = shell('pwd').trim()
const dispatchedSolutions = []
const remainingIssues = [...issueIds]

shell(`mkdir -p "${sessionDir}/artifacts/solutions"`)

for (let i = 0; i < issueIds.length; i++) {
  const issueId = issueIds[i]
  remainingIssues.shift()

  // --- Step 3a: Spawn issue-plan-agent for single issue ---
  const planAgent = spawn_agent({
    message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/issue-plan-agent.md (MUST read first)

---

issue_ids: ["${issueId}"]
project_root: "${projectRoot}"

## Requirements
- Generate solution for this issue
- Auto-bind single solution
- For multiple solutions, select the most pragmatic one
`
  })
  const planResult = wait({ ids: [planAgent], timeout_ms: 600000 })

  if (planResult.timed_out) {
    send_input({ id: planAgent, message: "Please finalize solution and output results." })
    wait({ ids: [planAgent], timeout_ms: 120000 })
  }

  close_agent({ id: planAgent })

  // --- Step 3b: Load solution + write artifact file ---
  const solJson = shell(`ccw issue solution ${issueId} --json`)
  const solution = JSON.parse(solJson)

  const solutionFile = `${sessionDir}/artifacts/solutions/${issueId}.json`
  write_file(solutionFile, JSON.stringify({
    issue_id: issueId,
    ...solution,
    timestamp: new Date().toISOString()
  }, null, 2))

  // --- Step 3c: Inline conflict check ---
  const dependsOn = inlineConflictCheck(issueId, solution, dispatchedSolutions)

  // --- Step 3d: Track + output per-issue JSON ---
  dispatchedSolutions.push({ issueId, solution, solutionFile })

  const isLast = remainingIssues.length === 0

  // Output per-issue JSON for orchestrator
  console.log(JSON.stringify({
    status: isLast ? "all_planned" : "issue_ready",
    issue_id: issueId,
    solution_id: solution.bound?.id || 'N/A',
    title: solution.bound?.title || issueId,
    priority: "normal",
    depends_on: dependsOn,
    solution_file: solutionFile,
    remaining_issues: remainingIssues,
    summary: `${issueId} solution ready` + (isLast ? ` (all ${issueIds.length} issues planned)` : '')
  }, null, 2))

  // Wait for orchestrator send_input before continuing
  // (orchestrator will send: "Issue dispatched. Continue.")
}
```

### Step 4: Output Delivery

Output format (each issue is emitted independently):

```json
{
  "status": "issue_ready",
  "issue_id": "ISS-xxx",
  "solution_id": "SOL-xxx",
  "title": "Implement feature A",
  "priority": "normal",
  "depends_on": [],
  "solution_file": ".workflow/.team/PEX-xxx/artifacts/solutions/ISS-xxx.json",
  "remaining_issues": ["ISS-yyy", "ISS-zzz"],
  "summary": "ISS-xxx solution ready"
}
```

**status values**:
- `"issue_ready"` — this issue is done; more issues remain
- `"all_planned"` — all issues have been planned (output of the last issue)

## Inline Conflict Check

```javascript
function inlineConflictCheck(issueId, solution, dispatchedSolutions) {
  const currentFiles = solution.bound?.files_touched
    || solution.bound?.affected_files || []
  const blockedBy = []

  // 1. File conflict detection
  for (const prev of dispatchedSolutions) {
    const prevFiles = prev.solution.bound?.files_touched
      || prev.solution.bound?.affected_files || []
    const overlap = currentFiles.filter(f => prevFiles.includes(f))
    if (overlap.length > 0) {
      blockedBy.push(prev.issueId)
    }
  }

  // 2. Explicit dependencies
  const explicitDeps = solution.bound?.dependencies?.on_issues || []
  for (const depId of explicitDeps) {
    if (!blockedBy.includes(depId)) {
      blockedBy.push(depId)
    }
  }

  return blockedBy
}
```
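A quick self-contained run of the check (the function is repeated so the snippet executes standalone; the issue IDs and file paths are illustrative):

```javascript
function inlineConflictCheck(issueId, solution, dispatchedSolutions) {
  const currentFiles = solution.bound?.files_touched
    || solution.bound?.affected_files || []
  const blockedBy = []
  // File overlap with previously dispatched solutions
  for (const prev of dispatchedSolutions) {
    const prevFiles = prev.solution.bound?.files_touched
      || prev.solution.bound?.affected_files || []
    if (currentFiles.some(f => prevFiles.includes(f))) blockedBy.push(prev.issueId)
  }
  // Explicit issue-level dependencies
  for (const depId of solution.bound?.dependencies?.on_issues || []) {
    if (!blockedBy.includes(depId)) blockedBy.push(depId)
  }
  return blockedBy
}

// ISS-002 touches a file already claimed by ISS-001 and explicitly depends on ISS-003
const dispatched = [
  { issueId: 'ISS-001', solution: { bound: { files_touched: ['src/auth/login.ts'] } } }
]
const current = {
  bound: {
    files_touched: ['src/auth/login.ts', 'src/auth/session.ts'],
    dependencies: { on_issues: ['ISS-003'] }
  }
}
console.log(inlineConflictCheck('ISS-002', current, dispatched))
// the returned array is ['ISS-001', 'ISS-003']
```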

## Plan File Parsing

```javascript
function parsePlanPhases(planContent) {
  const phases = []
  const phaseRegex = /^#{2,3}\s+(?:Phase|Step|阶段)\s*\d*[:.:]\s*(.+?)$/gm
  let match
  let lastIndex = 0
  let lastTitle = null

  while ((match = phaseRegex.exec(planContent)) !== null) {
    if (lastTitle !== null) {
      phases.push({ title: lastTitle, description: planContent.slice(lastIndex, match.index).trim() })
    }
    lastTitle = match[1].trim()
    lastIndex = match.index + match[0].length
  }

  if (lastTitle !== null) {
    phases.push({ title: lastTitle, description: planContent.slice(lastIndex).trim() })
  }

  if (phases.length === 0) {
    const titleMatch = planContent.match(/^#\s+(.+)$/m)
    phases.push({
      title: titleMatch ? titleMatch[1] : 'Plan Implementation',
      description: planContent.slice(0, 500)
    })
  }

  return phases
}
```
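Exercising the parser on a small plan document (the function is repeated here so the snippet runs standalone; the plan text is made up for illustration):

```javascript
function parsePlanPhases(planContent) {
  const phases = []
  const phaseRegex = /^#{2,3}\s+(?:Phase|Step|阶段)\s*\d*[:.:]\s*(.+?)$/gm
  let match
  let lastIndex = 0
  let lastTitle = null

  while ((match = phaseRegex.exec(planContent)) !== null) {
    if (lastTitle !== null) {
      phases.push({ title: lastTitle, description: planContent.slice(lastIndex, match.index).trim() })
    }
    lastTitle = match[1].trim()
    lastIndex = match.index + match[0].length
  }
  if (lastTitle !== null) {
    phases.push({ title: lastTitle, description: planContent.slice(lastIndex).trim() })
  }
  if (phases.length === 0) {
    // Fallback: whole document becomes a single phase
    const titleMatch = planContent.match(/^#\s+(.+)$/m)
    phases.push({
      title: titleMatch ? titleMatch[1] : 'Plan Implementation',
      description: planContent.slice(0, 500)
    })
  }
  return phases
}

const plan = [
  '# Auth Revamp',
  '',
  '## Phase 1: Extract session module',
  'Move session handling into src/auth/session.ts',
  '',
  '## Phase 2: Add login endpoint',
  'Implement POST /login with tests'
].join('\n')

const phases = parsePlanPhases(plan)
console.log(phases.map(p => p.title))
// titles are ['Extract session module', 'Add login endpoint']
```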

## Role Boundaries

### MUST

- Perform planning work only (requirement analysis, issue creation, solution design, conflict checking)
- Output strictly in JSON format
- Mark `depends_on` according to dependency relationships
- Write solutions to intermediate artifact files
- Emit JSON immediately after each issue completes

### MUST NOT

- ❌ Write or modify business code directly
- ❌ Run project tests
- ❌ Execute git commit
- ❌ Modify existing solutions
- ❌ Output results in non-JSON format

## Key Reminders

**ALWAYS**:
- Read role definition file as FIRST action
- Output strictly formatted JSON for each issue
- Include `remaining_issues` for orchestrator to track progress
- Set correct `status` (`issue_ready` vs `all_planned`)
- Write solution artifact file before outputting JSON
- Include `solution_file` path in output
- Use `ccw issue new --json` for issue creation
- Clean up spawned sub-agents (issue-plan-agent)

**NEVER**:
- Implement code (executor's job)
- Output free-form text instead of structured JSON
- Skip solution planning (every issue needs a bound solution)
- Skip writing solution artifact file

## Error Handling

| Scenario | Action |
|----------|--------|
| Issue creation fails | Retry once with simplified text, skip if still fails |
| issue-plan-agent timeout | Retry once, output partial results |
| Inline conflict check failure | Use empty depends_on, continue |
| Solution artifact write failure | Report error in JSON output, continue |
| Plan file not found | Report in output JSON: `"error": "plan file not found"` |
| Empty input | Output: `"status": "all_planned", "error": "no input"` |
| Sub-agent parse failure | Use raw output, include in summary |
@@ -232,7 +232,7 @@ const agentId = spawn_agent({
 ### MANDATORY FIRST STEPS (Agent Execute)
 1. **Read role definition**: ~/.codex/agents/{agent-type}.md (MUST read first)
 2. Read: {projectRoot}/.workflow/project-tech.json
-3. Read: {projectRoot}/.workflow/project-guidelines.json
+3. Read: {projectRoot}/.workflow/specs/*.md

 ## TASK CONTEXT
 ${taskContext}

@@ -118,7 +118,7 @@ selectedPerspectives.forEach(perspective => {
 ### MANDATORY FIRST STEPS (Agent Execute)
 1. **Read role definition**: ~/.codex/agents/cli-explore-agent.md (MUST read first)
 2. Read: {projectRoot}/.workflow/project-tech.json
-3. Read: {projectRoot}/.workflow/project-guidelines.json
+3. Read: {projectRoot}/.workflow/specs/*.md

 ---

@@ -196,7 +196,7 @@ if (selectedPerspectives.includes('security') || selectedPerspectives.includes('
 ### MANDATORY FIRST STEPS (Agent Execute)
 1. **Read role definition**: ~/.codex/agents/cli-explore-agent.md (MUST read first)
 2. Read: {projectRoot}/.workflow/project-tech.json
-3. Read: {projectRoot}/.workflow/project-guidelines.json
+3. Read: {projectRoot}/.workflow/specs/*.md

 ---

@@ -416,7 +416,7 @@ function buildDimensionPromptWithACE(dimension, iteration, previousFindings, ace
 ### MANDATORY FIRST STEPS (Agent Execute)
 1. **Read role definition**: ~/.codex/agents/cli-explore-agent.md (MUST read first)
 2. Read: {projectRoot}/.workflow/project-tech.json
-3. Read: {projectRoot}/.workflow/project-guidelines.json
+3. Read: {projectRoot}/.workflow/specs/*.md

 ---

@@ -1,820 +0,0 @@
---
name: issue-execute
description: Execute all solutions from issue queue with git commit after each solution. Supports batch processing and execution control.
argument-hint: "--queue=<id> [--worktree=<path|new>] [--skip-tests] [--skip-build] [--dry-run]"
---

# Issue Execute (Codex Version)

## Core Principle

**Serial Execution**: Execute solutions ONE BY ONE from the issue queue via `ccw issue next`. For each solution, complete all tasks sequentially (implement → test → verify), then commit once per solution with a formatted summary. Continue autonomously until the queue is empty.

## Project Context (MANDATORY FIRST STEPS)

Before starting execution, load project context:

1. **Read project tech stack**: `.workflow/project-tech.json`
2. **Read project guidelines**: `.workflow/project-guidelines.json`
3. **Read solution schema**: `~/.ccw/workflows/cli-templates/schemas/solution-schema.json`

This ensures execution follows project conventions and patterns.

## Parameters

- `--queue=<id>`: Queue ID to execute (REQUIRED)
- `--worktree=<path|new>`: Worktree path, or 'new' to create a new worktree
- `--skip-tests`: Skip test execution during solution implementation
- `--skip-build`: Skip build step
- `--dry-run`: Preview execution without making changes

## Queue ID Requirement (MANDATORY)

**`--queue <queue-id>` parameter is REQUIRED**

### When Queue ID Not Provided

```
List queues → Output options → Stop and wait for user
```

**Actions**:

1. `ccw issue queue list --brief --json` - Fetch queue list
2. Filter active/pending status, output formatted list
3. **Stop execution**, prompt user to rerun with `codex -p "@.codex/prompts/issue-execute.md --queue QUE-xxx"`

**No auto-selection** - User MUST explicitly specify queue-id

## Worktree Mode (Recommended for Parallel Execution)

When `--worktree` is specified, create or use a git worktree to isolate work.

**Usage**:
- `--worktree` - Create a new worktree with a timestamp-based name
- `--worktree <existing-path>` - Resume in an existing worktree (for recovery/continuation)

**Note**: `ccw issue` commands auto-detect worktrees and redirect to the main repo automatically.

```bash
# Step 0: Setup worktree before starting (run from MAIN REPO)

# Use absolute paths to avoid issues when running from subdirectories
REPO_ROOT=$(git rev-parse --show-toplevel)
WORKTREE_BASE="${REPO_ROOT}/.ccw/worktrees"

# Check if existing worktree path was provided
EXISTING_WORKTREE="${1:-}"  # Pass as argument or empty

if [[ -n "${EXISTING_WORKTREE}" && -d "${EXISTING_WORKTREE}" ]]; then
  # Resume mode: Use existing worktree
  WORKTREE_PATH="${EXISTING_WORKTREE}"
  WORKTREE_NAME=$(basename "${WORKTREE_PATH}")

  # Verify it's a valid git worktree
  if ! git -C "${WORKTREE_PATH}" rev-parse --is-inside-work-tree &>/dev/null; then
    echo "Error: ${EXISTING_WORKTREE} is not a valid git worktree"
    exit 1
  fi

  echo "Resuming in existing worktree: ${WORKTREE_PATH}"
else
  # Create mode: New worktree with timestamp
  WORKTREE_NAME="issue-exec-$(date +%Y%m%d-%H%M%S)"
  WORKTREE_PATH="${WORKTREE_BASE}/${WORKTREE_NAME}"

  # Ensure worktree base directory exists (gitignored)
  mkdir -p "${WORKTREE_BASE}"

  # Prune stale worktrees from previous interrupted executions
  git worktree prune

  # Create worktree from current branch
  git worktree add "${WORKTREE_PATH}" -b "${WORKTREE_NAME}"

  echo "Created new worktree: ${WORKTREE_PATH}"
fi

# Setup cleanup trap for graceful failure handling
cleanup_worktree() {
  echo "Cleaning up worktree due to interruption..."
  cd "${REPO_ROOT}" 2>/dev/null || true
  git worktree remove "${WORKTREE_PATH}" --force 2>/dev/null || true
  # Keep branch for debugging failed executions
  echo "Worktree removed. Branch '${WORKTREE_NAME}' kept for inspection."
}
trap cleanup_worktree EXIT INT TERM

# Change to worktree directory
cd "${WORKTREE_PATH}"

# ccw issue commands auto-detect worktree and use main repo's .workflow/
# So you can run ccw issue next/done directly from worktree
```

**Worktree Execution Pattern**:
```
0. [MAIN REPO] Validate queue ID (--queue required, or prompt user to select)
1. [WORKTREE] ccw issue next --queue <queue-id> → auto-redirects to main repo's .workflow/
2. [WORKTREE] Implement all tasks, run tests, git commit
3. [WORKTREE] ccw issue done <item_id> → auto-redirects to main repo
4. Repeat from step 1
```

**Note**: Add `.ccw/worktrees/` to `.gitignore` to prevent tracking worktree contents.

**Benefits:**
- Parallel executors don't conflict with each other
- Main working directory stays clean
- Easy cleanup after execution
- **Resume support**: Pass existing worktree path to continue interrupted executions

**Resume Examples:**
```bash
# List existing worktrees to find interrupted execution
git worktree list

# Resume in existing worktree (pass path as argument)
# The worktree path will be used instead of creating a new one
codex -p "@.codex/prompts/issue-execute.md --worktree /path/to/existing/worktree"
```

**Completion - User Choice:**

When all solutions are complete, output options and wait for the user to choose:

```
All solutions completed in worktree. Choose next action:

1. Merge to main - Merge worktree branch into main and cleanup
2. Create PR - Push branch and create pull request (Recommended for parallel execution)
3. Keep branch - Keep branch for manual handling, cleanup worktree only

Please respond with: 1, 2, or 3
```

**Based on user response:**

```bash
# Disable cleanup trap before intentional cleanup
trap - EXIT INT TERM

# Return to main repo first (use REPO_ROOT from setup)
cd "${REPO_ROOT}"

# Validate main repo state before merge (prevents conflicts)
validate_main_clean() {
  if [[ -n $(git status --porcelain) ]]; then
    echo "⚠️ Warning: Main repo has uncommitted changes."
    echo "Cannot auto-merge. Falling back to 'Create PR' option."
    return 1
  fi
  return 0
}

# Option 1: Merge to main (only if main is clean)
if validate_main_clean; then
  git merge "${WORKTREE_NAME}" --no-ff -m "Merge issue queue execution: ${WORKTREE_NAME}"
  git worktree remove "${WORKTREE_PATH}"
  git branch -d "${WORKTREE_NAME}"
else
  # Fall back to PR if main is dirty
  git push -u origin "${WORKTREE_NAME}"
  gh pr create --title "Issue Queue: ${WORKTREE_NAME}" --body "Automated issue queue execution (main had uncommitted changes)"
  git worktree remove "${WORKTREE_PATH}"
fi

# Option 2: Create PR (Recommended for parallel execution)
git push -u origin "${WORKTREE_NAME}"
gh pr create --title "Issue Queue: ${WORKTREE_NAME}" --body "Automated issue queue execution"
git worktree remove "${WORKTREE_PATH}"
# Branch kept on remote

# Option 3: Keep branch
git worktree remove "${WORKTREE_PATH}"
# Branch kept locally for manual handling
echo "Branch '${WORKTREE_NAME}' kept. Merge manually when ready."
```

**Parallel Execution Safety**: For parallel executors, "Create PR" is the safest option, as it avoids race conditions during merge. Multiple PRs can be reviewed and merged sequentially.

## Execution Flow

```
STEP 0: Validate queue ID (--queue required, or prompt user to select)

INIT: Fetch first solution via ccw issue next --queue <queue-id>

WHILE solution exists:
  1. Receive solution JSON from ccw issue next --queue <queue-id>
  2. Execute all tasks in solution.tasks sequentially:
     FOR each task:
       - IMPLEMENT: Follow task.implementation steps
       - TEST: Run task.test commands
       - VERIFY: Check task.acceptance criteria
  3. COMMIT: Stage all files, commit once with formatted summary
  4. Report completion via ccw issue done <item_id>
  5. Fetch next solution via ccw issue next --queue <queue-id>

WHEN queue empty:
  Output final summary
```
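The loop above can be sketched as a small driver function; `fetchNext`, `executeSolution`, and `reportDone` are hypothetical stand-ins for `ccw issue next`, the implement-test-commit cycle, and `ccw issue done`, injected so the control flow itself is testable:

```javascript
// Drain the queue: fetch → execute → report done, until the queue is empty.
// fetchNext() returns a solution item or { status: 'empty' };
// executeSolution(item) implements, tests, and commits one solution.
function drainQueue(fetchNext, executeSolution, reportDone) {
  const completed = []
  let item = fetchNext()
  while (item.status !== 'empty') {
    executeSolution(item)
    reportDone(item.item_id)
    completed.push(item.item_id)
    item = fetchNext()
  }
  return completed
}

// Mock run over a two-solution queue
const queue = [{ item_id: 'S-1', status: 'pending' }, { item_id: 'S-2', status: 'pending' }]
const done = []
const result = drainQueue(
  () => queue.shift() || { status: 'empty' },
  () => {},
  id => done.push(id)
)
console.log(result)
// both solutions are processed in order: ['S-1', 'S-2']
```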

## Step 1: Fetch First Solution

**Prerequisite**: Queue ID must be determined (either from the `--queue` argument or user selection in Step 0).

Run this command to get your first solution:

```javascript
// ccw auto-detects worktree and uses main repo's .workflow/
// QUEUE_ID is required - obtained from --queue argument or user selection
const result = shell_command({ command: `ccw issue next --queue ${QUEUE_ID}` })
```

This returns JSON with the full solution definition:
- `item_id`: Solution identifier in queue (e.g., "S-1")
- `issue_id`: Parent issue ID (e.g., "ISS-20251227-001")
- `solution_id`: Solution ID (e.g., "SOL-ISS-20251227-001-1")
- `solution`: Full solution with all tasks
- `execution_hints`: Timing and executor hints

If the response contains `{ "status": "empty" }`, all solutions are complete - skip to the final summary.

## Step 2: Parse Solution Response

Expected solution structure:

```json
{
  "item_id": "S-1",
  "issue_id": "ISS-20251227-001",
  "solution_id": "SOL-ISS-20251227-001-1",
  "status": "pending",
  "solution": {
    "id": "SOL-ISS-20251227-001-1",
    "description": "Description of solution approach",
    "tasks": [
      {
        "id": "T1",
        "title": "Task title",
        "scope": "src/module/",
        "action": "Create|Modify|Fix|Refactor|Add",
        "description": "What to do",
        "modification_points": [
          { "file": "path/to/file.ts", "target": "function name", "change": "description" }
        ],
        "implementation": [
          "Step 1: Do this",
          "Step 2: Do that"
        ],
        "test": {
          "commands": ["npm test -- --filter=xxx"],
          "unit": ["Unit test requirement 1", "Unit test requirement 2"]
        },
        "regression": ["Verify existing tests still pass"],
        "acceptance": {
          "criteria": ["Criterion 1: Must pass", "Criterion 2: Must verify"],
          "verification": ["Run test command", "Manual verification step"]
        },
        "commit": {
          "type": "feat|fix|test|refactor",
          "scope": "module",
          "message_template": "feat(scope): description"
        },
        "depends_on": [],
        "estimated_minutes": 30,
        "priority": 1
      }
    ],
    "exploration_context": {
      "relevant_files": ["path/to/reference.ts"],
      "patterns": "Follow existing pattern in xxx",
      "integration_points": "Used by other modules"
    },
    "analysis": {
      "risk": "low|medium|high",
      "impact": "low|medium|high",
      "complexity": "low|medium|high"
    },
    "score": 0.95,
    "is_bound": true
  },
  "execution_hints": {
    "executor": "codex",
    "estimated_minutes": 180
  }
}
```

## Step 2.1: Determine Execution Strategy

After parsing the solution, analyze the issue type and task actions to determine the appropriate execution strategy. The strategy defines additional verification steps and quality gates beyond the basic implement-test-verify cycle.

### Strategy Auto-Matching

**Matching Priority**:
1. Explicit `solution.strategy_type` if provided
2. Infer from `task.action` keywords (Debug, Fix, Feature, Refactor, Test, etc.)
3. Infer from `solution.description` and `task.title` content
4. Default to "standard" if no clear match

**Strategy Types and Matching Keywords**:

| Strategy Type | Match Keywords | Description |
|---------------|----------------|-------------|
| `debug` | Debug, Diagnose, Trace, Investigate | Bug diagnosis with logging and debugging |
| `bugfix` | Fix, Patch, Resolve, Correct | Bug fixing with root cause analysis |
| `feature` | Feature, Add, Implement, Create, Build | New feature development with full testing |
| `refactor` | Refactor, Restructure, Optimize, Cleanup | Code restructuring with behavior preservation |
| `test` | Test, Coverage, E2E, Integration | Test implementation with coverage checks |
| `performance` | Performance, Optimize, Speed, Memory | Performance optimization with benchmarking |
| `security` | Security, Vulnerability, CVE, Audit | Security fixes with vulnerability checks |
| `hotfix` | Hotfix, Urgent, Critical, Emergency | Urgent fixes with minimal changes |
| `documentation` | Documentation, Docs, Comment, README | Documentation updates with example validation |
| `chore` | Chore, Dependency, Config, Maintenance | Maintenance tasks with compatibility checks |
| `standard` | (default) | Standard implementation without extra steps |

### Strategy-Specific Execution Phases

Each strategy extends the basic cycle with additional quality gates:

#### 1. Debug → Reproduce → Instrument → Diagnose → Implement → Test → Verify → Cleanup

```
REPRODUCE → INSTRUMENT → DIAGNOSE → IMPLEMENT → TEST → VERIFY → CLEANUP
```

#### 2. Bugfix → Root Cause → Implement → Test → Edge Cases → Regression → Verify

```
ROOT_CAUSE → IMPLEMENT → TEST → EDGE_CASES → REGRESSION → VERIFY
```

#### 3. Feature → Design Review → Unit Tests → Implement → Integration Tests → Code Review → Docs → Verify

```
DESIGN_REVIEW → UNIT_TESTS → IMPLEMENT → INTEGRATION_TESTS → TEST → CODE_REVIEW → DOCS → VERIFY
```

#### 4. Refactor → Baseline Tests → Implement → Test → Behavior Check → Performance Compare → Verify

```
BASELINE_TESTS → IMPLEMENT → TEST → BEHAVIOR_PRESERVATION → PERFORMANCE_CMP → VERIFY
```

#### 5. Test → Coverage Baseline → Test Design → Implement → Coverage Check → Verify

```
COVERAGE_BASELINE → TEST_DESIGN → IMPLEMENT → COVERAGE_CHECK → VERIFY
```

#### 6. Performance → Profiling → Bottleneck → Implement → Benchmark → Test → Verify

```
PROFILING → BOTTLENECK → IMPLEMENT → BENCHMARK → TEST → VERIFY
```

#### 7. Security → Vulnerability Scan → Implement → Security Test → Penetration Test → Verify

```
VULNERABILITY_SCAN → IMPLEMENT → SECURITY_TEST → PENETRATION_TEST → VERIFY
```

#### 8. Hotfix → Impact Assessment → Implement → Test → Quick Verify → Verify

```
IMPACT_ASSESSMENT → IMPLEMENT → TEST → QUICK_VERIFY → VERIFY
```

#### 9. Documentation → Implement → Example Validation → Format Check → Link Validation → Verify

```
IMPLEMENT → EXAMPLE_VALIDATION → FORMAT_CHECK → LINK_VALIDATION → VERIFY
```

#### 10. Chore → Implement → Compatibility Check → Test → Changelog → Verify

```
IMPLEMENT → COMPATIBILITY_CHECK → TEST → CHANGELOG → VERIFY
```

#### 11. Standard → Implement → Test → Verify

```
IMPLEMENT → TEST → VERIFY
```

### Strategy Selection Implementation

**Pseudo-code for strategy matching**:

```javascript
function determineStrategy(solution) {
  // Priority 1: Explicit strategy type
  if (solution.strategy_type) {
    return solution.strategy_type
  }

  // Priority 2: Infer from task actions
  const actions = solution.tasks.map(t => t.action.toLowerCase())
  const titles = solution.tasks.map(t => t.title.toLowerCase())
  const description = solution.description.toLowerCase()
  const allText = [...actions, ...titles, description].join(' ')

  // Match keywords (order matters - more specific first)
  if (/hotfix|urgent|critical|emergency/.test(allText)) return 'hotfix'
  if (/debug|diagnose|trace|investigate/.test(allText)) return 'debug'
  if (/security|vulnerability|cve|audit/.test(allText)) return 'security'
  if (/performance|optimize|speed|memory|benchmark/.test(allText)) return 'performance'
  if (/refactor|restructure|cleanup/.test(allText)) return 'refactor'
  if (/test|coverage|e2e|integration/.test(allText)) return 'test'
  if (/documentation|docs|comment|readme/.test(allText)) return 'documentation'
  if (/chore|dependency|config|maintenance/.test(allText)) return 'chore'
  if (/fix|patch|resolve|correct/.test(allText)) return 'bugfix'
  if (/feature|add|implement|create|build/.test(allText)) return 'feature'

  // Default
  return 'standard'
}
```

**Usage in execution flow**:

```javascript
// After parsing solution (Step 2)
const strategy = determineStrategy(solution)
console.log(`Strategy selected: ${strategy}`)

// During task execution (Step 3), follow strategy-specific phases
for (const task of solution.tasks) {
  executeTaskWithStrategy(task, strategy)
}
```
|
||||
|
||||
## Step 2.5: Initialize Task Tracking

After parsing the solution and determining the strategy, use `update_plan` to track each task:

```javascript
// Initialize plan with all tasks from solution
update_plan({
  explanation: `Starting solution ${item_id}`,
  plan: solution.tasks.map(task => ({
    step: `${task.id}: ${task.title}`,
    status: "pending"
  }))
})
```

**Note**: Codex uses the `update_plan` tool for task tracking (not TodoWrite).

## Step 3: Execute Tasks Sequentially

Iterate through the `solution.tasks` array and execute each task.

**Before starting each task**, mark it as in_progress:
```javascript
// Update current task status
update_plan({
  explanation: `Working on ${task.id}: ${task.title}`,
  plan: tasks.map(t => ({
    step: `${t.id}: ${t.title}`,
    status: t.id === task.id ? "in_progress" : (t.completed ? "completed" : "pending")
  }))
})
```

**After completing each task** (verification passed), mark it as completed:
```javascript
// Mark task as completed (commit happens at solution level)
update_plan({
  explanation: `Completed ${task.id}: ${task.title}`,
  plan: tasks.map(t => ({
    step: `${t.id}: ${t.title}`,
    status: t.id === task.id ? "completed" : t.status
  }))
})
```

### Phase A: IMPLEMENT

1. **Read context files in parallel** using `multi_tool_use.parallel`:
   ```javascript
   // Read all relevant files in parallel for context
   multi_tool_use.parallel({
     tool_uses: solution.exploration_context.relevant_files.map(file => ({
       recipient_name: "functions.read_file",
       parameters: { path: file }
     }))
   })
   ```

2. Follow `task.implementation` steps in order
3. Apply changes to `task.modification_points` files
4. Follow `solution.exploration_context.patterns` for code style consistency
5. Run `task.regression` checks if specified to ensure no breakage

**Output format:**
```
## Implementing: [task.title] (Task [N]/[Total])

**Scope**: [task.scope]
**Action**: [task.action]

**Steps**:
1. ✓ [implementation step 1]
2. ✓ [implementation step 2]
...

**Files Modified**:
- path/to/file1.ts
- path/to/file2.ts
```

### Phase B: TEST

1. Run all commands in `task.test.commands`
2. Verify unit tests pass (`task.test.unit`)
3. Run integration tests if specified (`task.test.integration`)

**If tests fail**: Fix the code and re-run. Do NOT proceed until tests pass.

**Output format:**
```
## Testing: [task.title]

**Test Results**:
- [x] Unit tests: PASSED
- [x] Integration tests: PASSED (or N/A)
```

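The fix-and-re-run rule above can be sketched as a bounded retry loop. This is a simplification: `runTests`, `fix`, and `maxAttempts` are hypothetical stand-ins for the agent's actual test commands and fix passes, not part of the workflow's defined API.

```javascript
// Re-run tests after each fix attempt; stop once green or after maxAttempts.
function runUntilGreen(runTests, fix, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    if (runTests()) return { passed: true, attempts: attempt };
    fix(attempt); // the agent repairs the code, then tests run again
  }
  return { passed: false, attempts: maxAttempts };
}
```

In the real flow the loop is unbounded ("do NOT proceed until tests pass"); a cap is shown only so the sketch terminates.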
### Phase C: VERIFY

Check that all `task.acceptance.criteria` are met using the `task.acceptance.verification` steps:

```
## Verifying: [task.title]

**Acceptance Criteria**:
- [x] Criterion 1: Verified
- [x] Criterion 2: Verified
...

**Verification Steps**:
- [x] Run test command
- [x] Manual verification step

All criteria met: YES
```

**If any criterion fails**: Go back to the IMPLEMENT phase and fix.

### Repeat for Next Task

Continue to the next task in the `solution.tasks` array until all tasks are complete.

**Note**: Do NOT commit after each task. Commits happen at solution level after all tasks pass.

## Step 3.5: Commit Solution

After ALL tasks in the solution pass implementation, testing, and verification, commit once for the entire solution:

```bash
# Stage all modified files from all tasks
git add path/to/file1.ts path/to/file2.ts ...

# Commit with clean, standard format (NO solution metadata)
git commit -m "[commit_type]([scope]): [brief description of changes]"

# Example commits:
# feat(auth): add token refresh mechanism
# fix(payment): resolve timeout handling in checkout flow
# refactor(api): simplify error handling logic
```

**Commit Type Selection**:
- `feat`: New feature or capability
- `fix`: Bug fix
- `refactor`: Code restructuring without behavior change
- `test`: Adding or updating tests
- `docs`: Documentation changes
- `chore`: Maintenance tasks

**Commit Language**:
- Use a **Chinese** commit summary if the project's `CLAUDE.md` specifies Chinese response guidelines or the user explicitly requests Chinese
- Use an **English** commit summary by default or when the project targets international collaboration
- Check the project's existing commit history for language convention consistency

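The history check in the last bullet can be sketched as a small heuristic over recent commit subjects (for example, the output of `git log --format=%s -n 20`). This is an assumption-laden sketch: `detectCommitLanguage` is a hypothetical helper, and the CJK-character test is only a rough proxy for "Chinese convention".

```javascript
// Heuristic: infer the commit-subject language convention from recent history.
// `subjects` is an array of commit subject lines.
function detectCommitLanguage(subjects) {
  const cjk = /[\u4e00-\u9fff]/; // CJK Unified Ideographs range
  const chinese = subjects.filter(s => cjk.test(s)).length;
  // Majority vote; default to English for empty or mixed history.
  return chinese > subjects.length / 2 ? 'zh' : 'en';
}
```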
**Output format:**
```
## Solution Committed: [solution_id]

**Commit**: [commit hash]
**Type**: [commit_type]([scope])

**Changes**:
- [Feature/Fix/Improvement]: [What functionality was added/fixed/improved]
- [Specific change 1]
- [Specific change 2]

**Files Modified**:
- path/to/file1.ts - [Brief description of changes]
- path/to/file2.ts - [Brief description of changes]
- path/to/file3.ts - [Brief description of changes]

**Solution**: [solution_id] ([N] tasks completed)
```

## Step 4: Report Completion

After ALL tasks in the solution are complete and committed, report to the queue system with full solution metadata:

```javascript
// ccw auto-detects worktree and uses main repo's .workflow/
// Record ALL solution context here (NOT in git commit)
shell_command({
  command: `ccw issue done ${item_id} --result '${JSON.stringify({
    solution_id: solution.id,
    issue_id: issue_id,
    commit: {
      hash: commit_hash,
      type: commit_type,
      scope: commit_scope,
      message: commit_message
    },
    analysis: {
      risk: solution.analysis.risk,
      impact: solution.analysis.impact,
      complexity: solution.analysis.complexity
    },
    tasks_completed: solution.tasks.map(t => ({
      id: t.id,
      title: t.title,
      action: t.action,
      scope: t.scope
    })),
    files_modified: ["path1", "path2", ...],
    tests_passed: true,
    verification: {
      all_tests_passed: true,
      acceptance_criteria_met: true,
      regression_checked: true
    },
    summary: "[What was accomplished - brief description]"
  })}'`
})
```

**Complete Example**:

```javascript
shell_command({
  command: `ccw issue done S-1 --result '${JSON.stringify({
    solution_id: "SOL-ISS-20251227-001-1",
    issue_id: "ISS-20251227-001",
    commit: {
      hash: "a1b2c3d4",
      type: "feat",
      scope: "auth",
      message: "feat(auth): add token refresh mechanism"
    },
    analysis: {
      risk: "low",
      impact: "medium",
      complexity: "medium"
    },
    tasks_completed: [
      { id: "T1", title: "Implement refresh token endpoint", action: "Add", scope: "src/auth/" },
      { id: "T2", title: "Add token rotation logic", action: "Create", scope: "src/auth/services/" }
    ],
    files_modified: [
      "src/auth/routes/token.ts",
      "src/auth/services/refresh.ts",
      "src/auth/middleware/validate.ts"
    ],
    tests_passed: true,
    verification: {
      all_tests_passed: true,
      acceptance_criteria_met: true,
      regression_checked: true
    },
    summary: "Implemented token refresh mechanism with automatic rotation"
  })}'`
})
```

**If solution failed:**

```javascript
shell_command({
  command: `ccw issue done ${item_id} --fail --reason '${JSON.stringify({
    task_id: "TX",
    error_type: "test_failure",
    message: "Integration tests failed: timeout in token validation",
    files_attempted: ["path1", "path2"],
    commit: null
  })}'`
})
```

## Step 5: Continue to Next Solution

Fetch the next solution (using the same QUEUE_ID from Step 0/1):

```javascript
// ccw auto-detects worktree
// Continue using the same QUEUE_ID throughout execution
const result = shell_command({ command: `ccw issue next --queue ${QUEUE_ID}` })
```

**Output progress:**
```
✓ [N/M] Completed: [item_id] - [solution.description]
  Commit: [commit_hash] ([commit_type])
  Tasks: [task_count] completed
→ Fetching next solution...
```

**DO NOT STOP.** Return to Step 2 and continue until the queue is empty.

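The never-stop loop of Steps 2-5 can be sketched as a queue drain. This is a shape sketch only: `fetchNext` and `handle` are hypothetical stand-ins for `ccw issue next` and the per-solution lifecycle, and the real loop also reports results via `ccw issue done`.

```javascript
// Drain the queue: fetch, handle, repeat until the queue reports empty.
function drainQueue(fetchNext, handle) {
  const completed = [];
  for (;;) {
    const item = fetchNext();
    if (!item || item.status === 'empty') break; // queue exhausted
    completed.push(handle(item));
  }
  return completed;
}
```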
## Final Summary

When `ccw issue next` returns `{ "status": "empty" }`:

**If running in worktree mode**: Prompt the user for a merge/PR/keep choice (see "Completion - User Choice" above) before outputting the summary.

```markdown
## Issue Queue Execution Complete

**Total Solutions Executed**: N
**Total Tasks Executed**: M
**Total Commits**: N (one per solution)

**Solution Commits**:
| # | Solution | Tasks | Commit | Type |
|---|----------|-------|--------|------|
| 1 | SOL-xxx-1 | T1, T2 | abc123 | feat |
| 2 | SOL-xxx-2 | T1 | def456 | fix |
| 3 | SOL-yyy-1 | T1, T2, T3 | ghi789 | refactor |

**Files Modified**:
- path/to/file1.ts
- path/to/file2.ts

**Summary**:
[Overall what was accomplished]
```

## Execution Rules

1. **Never stop mid-queue** - Continue until the queue is empty
2. **One solution at a time** - Fully complete (all tasks + commit + report) before moving on
3. **Sequential within solution** - Complete each task's implement/test/verify before the next task
4. **Tests MUST pass** - Do not proceed if any task's tests fail
5. **One commit per solution** - All tasks share a single commit with a formatted summary
6. **Self-verify** - All acceptance criteria must pass before the solution commit
7. **Report accurately** - Use `ccw issue done` after each solution
8. **Handle failures gracefully** - If a solution fails, report via `ccw issue done --fail` and continue to the next
9. **Track with update_plan** - Use the update_plan tool for task progress tracking
10. **Worktree auto-detect** - `ccw issue` commands auto-redirect to the main repo from a worktree

## Error Handling

| Situation | Action |
|-----------|--------|
| `ccw issue next` returns empty | All done - output final summary |
| Tests fail | Fix code, re-run tests |
| Verification fails | Go back to implement phase |
| Solution commit fails | Check staging, retry commit |
| `ccw issue done` fails | Log error, continue to next solution |
| Any task unrecoverable | Call `ccw issue done --fail`, continue to next solution |

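The "report and keep going" rows in the table can be sketched as a guard around each solution. The helper names (`execute`, `report`) are hypothetical; in the real flow they correspond to the task lifecycle and `ccw issue done` / `ccw issue done --fail`.

```javascript
// Run one solution; on failure, report it and signal the caller to continue.
function runSolution(solution, execute, report) {
  try {
    execute(solution);
    report({ id: solution.id, status: 'done' });
    return true;
  } catch (err) {
    report({ id: solution.id, status: 'failed', reason: String(err) });
    return false; // caller moves on to the next solution instead of stopping
  }
}
```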
## CLI Command Reference

| Command | Purpose |
|---------|---------|
| `ccw issue queue list --brief --json` | List all queues (for queue selection) |
| `ccw issue next --queue QUE-xxx` | Fetch next solution from specified queue (**--queue required**) |
| `ccw issue done <id>` | Mark solution complete with result (auto-detects queue) |
| `ccw issue done <id> --fail --reason "..."` | Mark solution failed with structured reason |
| `ccw issue retry --queue QUE-xxx` | Reset failed items in specific queue |

## Start Execution

**Step 0: Validate Queue ID**

If `--queue` was NOT provided in the command arguments:
1. Run `ccw issue queue list --brief --json`
2. Filter and display active/pending queues to the user
3. **Stop execution**, prompt the user to rerun with `--queue QUE-xxx`

**Step 1: Fetch First Solution**

Once the queue ID is confirmed, begin by running:

```bash
ccw issue next --queue <queue-id>
```

Then follow the solution lifecycle for each solution until the queue is empty.

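Step 0's filtering of the `--brief --json` output can be sketched as below. The field names (`id`, `status`, `pending_count`) are assumptions about the CLI's brief output shape, not a documented contract.

```javascript
// Keep only queues worth offering to the user, formatted for display.
function selectableQueues(queues) {
  return queues
    .filter(q => q.status === 'active' || q.status === 'pending')
    .map(q => `${q.id} (${q.status}, ${q.pending_count ?? '?'} pending)`);
}
```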
@@ -1,399 +0,0 @@
---
name: issue-resolve
description: Unified issue resolution pipeline with source selection. Plan issues via AI exploration, convert from artifacts, import from brainstorm sessions, form execution queues, or export solutions to task JSON. Triggers on "issue:plan", "issue:queue", "issue:convert-to-plan", "issue:from-brainstorm", "export-to-tasks", "resolve issue", "plan issue", "queue issues", "convert plan to issue".
allowed-tools: spawn_agent, wait, send_input, close_agent, AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep
---

# Issue Resolve (Codex Version)

Unified issue resolution pipeline that orchestrates solution creation from multiple sources and queue formation for execution.

## Architecture Overview

```
┌─────────────────────────────────────────────────────────────────┐
│              Issue Resolve Orchestrator (SKILL.md)              │
│    → Source selection → Route to phase → Execute → Summary      │
└───────────────┬─────────────────────────────────────────────────┘
                │
                ├─ ASK_USER: Select issue source
                │
    ┌───────────┼───────────┬────────────┬───────────┐
    ↓           ↓           ↓            ↓           │
┌─────────┐ ┌─────────┐ ┌──────────┐ ┌─────────┐    │
│ Phase 1 │ │ Phase 2 │ │ Phase 3  │ │ Phase 4 │    │
│ Explore │ │ Convert │ │   From   │ │  Form   │    │
│ & Plan  │ │Artifact │ │Brainstorm│ │  Queue  │    │
└─────────┘ └─────────┘ └──────────┘ └─────────┘    │
     ↓           ↓           ↓            ↓         │
 Solutions   Solutions   Issue+Sol   Exec Queue     │
  (bound)     (bound)     (bound)     (ordered)     │
     │           │           │                      │
     └─────┬─────┴───────────┘                      │
           ↓ (optional --export-tasks)              │
     .task/TASK-*.json                              │
                                                    │
            ┌───────────────────────────────────────┘
            ↓
      /issue:execute
```

## Key Design Principles

1. **Source-Driven Routing**: ASK_USER selects the workflow, then load a single phase
2. **Progressive Phase Loading**: Only read the selected phase document
3. **CLI-First Data Access**: All issue/solution CRUD via `ccw issue` CLI commands
4. **Auto Mode Support**: `-y` flag skips source selection (defaults to Explore & Plan)

## Subagent API Reference

### spawn_agent
Create a new subagent with a task assignment.

```javascript
const agentId = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/{agent-type}.md (MUST read first)
2. Read: {projectRoot}/.workflow/project-tech.json
3. Read: {projectRoot}/.workflow/project-guidelines.json

## TASK CONTEXT
${taskContext}

## DELIVERABLES
${deliverables}
`
})
```

### wait
Get results from a subagent (the only way to retrieve results).

```javascript
const result = wait({
  ids: [agentId],
  timeout_ms: 600000 // 10 minutes
})

if (result.timed_out) {
  // Handle timeout - can continue waiting or send_input to prompt completion
}
```

### send_input
Continue interaction with an active subagent (for clarification or follow-up).

```javascript
send_input({
  id: agentId,
  message: `
## CLARIFICATION ANSWERS
${answers}

## NEXT STEP
Continue with updated analysis.
`
})
```

### close_agent
Clean up subagent resources (irreversible).

```javascript
close_agent({ id: agentId })
```

## Auto Mode

When `--yes` or `-y`: Skip source selection, use Explore & Plan for issue IDs, or auto-detect the source type for paths.

## Usage

```
codex -p "@.codex/prompts/issue-resolve.md <task description or issue IDs>"
codex -p "@.codex/prompts/issue-resolve.md [FLAGS] \"<input>\""

# Flags
-y, --yes          Skip all confirmations (auto mode)
--source <type>    Pre-select source: plan|convert|brainstorm|queue
--batch-size <n>   Max issues per agent batch (plan mode, default: 3)
--issue <id>       Bind to existing issue (convert mode)
--supplement       Add tasks to existing solution (convert mode)
--queues <n>       Number of parallel queues (queue mode, default: 1)
--export-tasks     Export solution tasks to .task/TASK-*.json (task-schema.json format)

# Examples
codex -p "@.codex/prompts/issue-resolve.md GH-123,GH-124"                                     # Explore & plan issues
codex -p "@.codex/prompts/issue-resolve.md --source plan --all-pending"                       # Plan all pending issues
codex -p "@.codex/prompts/issue-resolve.md --source convert \".workflow/.lite-plan/my-plan\"" # Convert artifact
codex -p "@.codex/prompts/issue-resolve.md --source brainstorm SESSION=\"BS-rate-limiting\""  # From brainstorm
codex -p "@.codex/prompts/issue-resolve.md --source queue"                                    # Form execution queue
codex -p "@.codex/prompts/issue-resolve.md -y GH-123"                                         # Auto mode, plan single issue
```

## Execution Flow

```
Input Parsing:
└─ Parse flags (--source, -y, --issue, etc.) and positional args

Source Selection:
├─ --source flag provided → Route directly
├─ Auto-detect from input:
│  ├─ Issue IDs (GH-xxx, ISS-xxx) → Explore & Plan
│  ├─ SESSION="..." → From Brainstorm
│  ├─ File/folder path → Convert from Artifact
│  └─ No input or --all-pending → Explore & Plan (all pending)
└─ Otherwise → ASK_USER to select source

Phase Execution (load one phase):
├─ Phase 1: Explore & Plan → phases/01-issue-plan.md
├─ Phase 2: Convert Artifact → phases/02-convert-to-plan.md
├─ Phase 3: From Brainstorm → phases/03-from-brainstorm.md
└─ Phase 4: Form Queue → phases/04-issue-queue.md

Post-Phase:
├─ Export to Task JSON (optional, with --export-tasks flag)
│  ├─ For each solution.tasks[] → write .task/TASK-{T-id}.json
│  └─ Generate plan.json (plan-overview-base-schema) from exported tasks
└─ Summary + Next steps recommendation
```

### Phase Reference Documents

| Phase | Document | Load When | Purpose |
|-------|----------|-----------|---------|
| Phase 1 | [phases/01-issue-plan.md](phases/01-issue-plan.md) | Source = Explore & Plan | Batch plan issues via issue-plan-agent |
| Phase 2 | [phases/02-convert-to-plan.md](phases/02-convert-to-plan.md) | Source = Convert Artifact | Convert lite-plan/session/markdown to solutions |
| Phase 3 | [phases/03-from-brainstorm.md](phases/03-from-brainstorm.md) | Source = From Brainstorm | Convert brainstorm ideas to issue + solution |
| Phase 4 | [phases/04-issue-queue.md](phases/04-issue-queue.md) | Source = Form Queue | Order bound solutions into execution queue |

## Core Rules

1. **Source Selection First**: Always determine the source before loading any phase
2. **Single Phase Load**: Only read the selected phase document, never load all phases
3. **CLI Data Access**: Use the `ccw issue` CLI for all issue/solution operations, NEVER read files directly
4. **Content Preservation**: Each phase contains complete execution logic from the original commands
5. **Auto-Detect Input**: Smart input parsing reduces the need for an explicit --source flag
6. **DO NOT STOP**: Continuous multi-phase workflow. After completing each phase, immediately proceed to the next
7. **Explicit Lifecycle**: Always close_agent after wait completes to free resources

## Input Processing

### Auto-Detection Logic

```javascript
function detectSource(input, flags) {
  // 1. Explicit --source flag
  if (flags.source) return flags.source;

  // 2. Auto-detect from input content
  const trimmed = input.trim();

  // Issue IDs pattern (GH-xxx, ISS-xxx, comma-separated)
  if (trimmed.match(/^[A-Z]+-\d+/i) || trimmed.includes(',')) {
    return 'plan';
  }

  // --all-pending or empty input → plan all pending
  if (flags.allPending || trimmed === '') {
    return 'plan';
  }

  // SESSION="..." pattern → brainstorm
  if (trimmed.includes('SESSION=')) {
    return 'brainstorm';
  }

  // File/folder path → convert
  if (trimmed.match(/\.(md|json)$/) || trimmed.includes('.workflow/')) {
    return 'convert';
  }

  // Cannot auto-detect → ask user
  return null;
}
```

### Source Selection (ASK_USER)

```javascript
// When source cannot be auto-detected
const answer = ASK_USER([{
  id: "source",
  type: "select",
  prompt: "How would you like to create/manage issue solutions?",
  options: [
    {
      label: "Explore & Plan (Recommended)",
      description: "AI explores codebase and generates solutions for issues"
    },
    {
      label: "Convert from Artifact",
      description: "Convert existing lite-plan, workflow session, or markdown to solution"
    },
    {
      label: "From Brainstorm",
      description: "Convert brainstorm session ideas into issue with solution"
    },
    {
      label: "Form Execution Queue",
      description: "Order bound solutions into execution queue for /issue:execute"
    }
  ]
}]); // BLOCKS (wait for user response)

// Route based on selection
const sourceMap = {
  "Explore & Plan": "plan",
  "Convert from Artifact": "convert",
  "From Brainstorm": "brainstorm",
  "Form Execution Queue": "queue"
};
```

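Note that the first option's label carries a "(Recommended)" suffix that does not appear as a `sourceMap` key, so routing has to normalize the selected label before lookup. A sketch of that normalization (the suffix-stripping regex is an assumption about how labels are decorated):

```javascript
// Map a selected label to a source key, tolerating decorations like "(Recommended)".
function routeSource(label, sourceMap) {
  const normalized = label.replace(/\s*\(.*\)\s*$/, '').trim();
  return sourceMap[normalized] || null;
}
```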
## Data Flow

```
User Input (issue IDs / artifact path / session ID / flags)
    ↓
[Parse Flags + Auto-Detect Source]
    ↓
[Source Selection] ← ASK_USER (if needed)
    ↓
[Read Selected Phase Document]
    ↓
[Execute Phase Logic]
    ↓
[Summary + Next Steps]
├─ After Plan/Convert/Brainstorm → Suggest /issue:queue or /issue:execute
└─ After Queue → Suggest /issue:execute

(Optional) Export to Task JSON (when --export-tasks flag is set):
├─ For each solution.tasks[] entry:
│  ├─ solution.task.id → id (prefixed as TASK-{T-id})
│  ├─ solution.task.title → title
│  ├─ solution.task.description → description
│  ├─ solution.task.action → action
│  ├─ solution.task.scope → scope
│  ├─ solution.task.modification_points[] → files[]
│  │  ├─ mp.file → files[].path
│  │  ├─ mp.target → files[].target
│  │  └─ mp.change → files[].changes[]
│  ├─ solution.task.acceptance → convergence
│  │  ├─ acceptance.criteria[] → convergence.criteria[]
│  │  └─ acceptance.verification[] → convergence.verification (joined)
│  ├─ solution.task.implementation → implementation[]
│  ├─ solution.task.test → test
│  ├─ solution.task.depends_on → depends_on
│  ├─ solution.task.commit → commit
│  └─ solution.task.priority → priority (1→critical, 2→high, 3→medium, 4-5→low)
├─ Output path: .workflow/issues/{issue-id}/.task/TASK-{T-id}.json
├─ Each file follows task-schema.json (IDENTITY + CONVERGENCE + FILES required)
├─ source.tool = "issue-resolve", source.issue_id = {issue-id}
│
└─ Generate plan.json (after all TASK-*.json exported):
   const issueDir = `.workflow/issues/${issueId}`
   const taskFiles = Glob(`${issueDir}/.task/TASK-*.json`)
   const taskIds = taskFiles.map(f => JSON.parse(Read(f)).id).sort()

   // Guard: skip plan.json if no tasks generated
   if (taskIds.length === 0) {
     console.warn('No tasks generated; skipping plan.json')
   } else {
     const planOverview = {
       summary: `Issue resolution plan for ${issueId}: ${issueTitle}`,
       approach: solution.approach || "AI-explored resolution strategy",
       task_ids: taskIds,
       task_count: taskIds.length,
       complexity: taskIds.length > 5 ? "High" : taskIds.length > 2 ? "Medium" : "Low",
       _metadata: {
         timestamp: getUtc8ISOString(),
         source: "issue-plan-agent",
         planning_mode: "agent-based",
         plan_type: "feature",
         schema_version: "2.0"
       }
     }
     Write(`${issueDir}/plan.json`, JSON.stringify(planOverview, null, 2))
   } // end guard

   Output path: .workflow/issues/{issue-id}/plan.json
```

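The field mapping above can be sketched as a converter function. The shape follows the tree: field names and the priority thresholds come from the mapping as written, but anything the schema does not show here (e.g. exact key names in `task-schema.json`) should be treated as an assumption.

```javascript
// Map one solution task to a task-schema.json-style object (sketch of the export mapping).
function toTaskJson(task, issueId) {
  const priorityLabel =
    task.priority === 1 ? 'critical' :
    task.priority === 2 ? 'high' :
    task.priority === 3 ? 'medium' : 'low'; // 4-5 → low
  return {
    id: `TASK-${task.id}`,
    title: task.title,
    description: task.description,
    action: task.action,
    scope: task.scope,
    files: (task.modification_points || []).map(mp => ({
      path: mp.file,
      target: mp.target,
      changes: [mp.change]
    })),
    convergence: {
      criteria: task.acceptance.criteria,
      verification: task.acceptance.verification.join('; ') // joined, per the tree
    },
    implementation: task.implementation,
    test: task.test,
    depends_on: task.depends_on,
    commit: task.commit,
    priority: priorityLabel,
    source: { tool: 'issue-resolve', issue_id: issueId }
  };
}
```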
## Task Tracking Pattern

```javascript
// Initialize plan with phase steps
update_plan({
  explanation: "Issue resolve workflow started",
  plan: [
    { step: "Select issue source", status: "completed" },
    { step: "Execute: [selected phase name]", status: "in_progress" },
    { step: "Summary & next steps", status: "pending" }
  ]
})
```

Phase-specific sub-tasks are attached when the phase executes (see individual phase docs for details).

## Core Guidelines

**Data Access Principle**: Issues and solutions files can grow very large. To avoid context overflow:

| Operation | Correct | Incorrect |
|-----------|---------|-----------|
| List issues (brief) | `ccw issue list --status pending --brief` | `Read('issues.jsonl')` |
| Read issue details | `ccw issue status <id> --json` | `Read('issues.jsonl')` |
| Update status | `ccw issue update <id> --status ...` | Direct file edit |
| Bind solution | `ccw issue bind <id> <sol-id>` | Direct file edit |
| Batch solutions | `ccw issue solutions --status planned --brief` | Loop individual queries |

**Output Options**:
- `--brief`: JSON with minimal fields (orchestrator use)
- `--json`: Full JSON (agent use only)

**ALWAYS** use CLI commands for CRUD operations. **NEVER** read entire `issues.jsonl` or `solutions/*.jsonl` directly.

## Error Handling

| Error | Resolution |
|-------|------------|
| No source detected | Show ASK_USER with all 4 options |
| Invalid source type | Show available sources, re-prompt |
| Phase execution fails | Report error, suggest manual intervention |
| No pending issues (plan) | Suggest creating issues first |
| No bound solutions (queue) | Suggest running plan/convert/brainstorm first |

## Post-Phase Next Steps

After successful phase execution, recommend the next action:

```javascript
// After Plan/Convert/Brainstorm (solutions created)
ASK_USER([{
  id: "next_action",
  type: "select",
  prompt: "Solutions created. What next?",
  options: [
    { label: "Form Queue", description: "Order solutions for execution (/issue:queue)" },
    { label: "Plan More Issues", description: "Continue creating solutions" },
    { label: "View Issues", description: "Review issue details" },
    { label: "Done", description: "Exit workflow" }
  ]
}]); // BLOCKS (wait for user response)

// After Queue (queue formed)
// → Suggest /issue:execute directly
```

## Related Commands

- `issue-manage` - Interactive issue CRUD operations
- `/issue:execute` - Execute queue with DAG-based parallel orchestration
- `ccw issue list` - List all issues
- `ccw issue status <id>` - View issue details

@@ -1,316 +0,0 @@
# Phase 1: Explore & Plan

## Overview

Batch plan issue resolution using **issue-plan-agent**, which combines exploration and planning into a single closed-loop workflow.

**Behavior:**
- Single solution per issue → auto-bind
- Multiple solutions → return for user selection
- Agent handles file generation

## Prerequisites

- Issue IDs provided (comma-separated) or `--all-pending` flag
- `ccw issue` CLI available
- `{projectRoot}/.workflow/issues/` directory exists or will be created

## Auto Mode

When `--yes` or `-y`: Auto-bind solutions without confirmation, use recommended settings.

## Core Guidelines

**Data Access Principle**: Issues and solutions files can grow very large. To avoid context overflow:

| Operation | Correct | Incorrect |
|-----------|---------|-----------|
| List issues (brief) | `ccw issue list --status pending --brief` | `Read('issues.jsonl')` |
| Read issue details | `ccw issue status <id> --json` | `Read('issues.jsonl')` |
| Update status | `ccw issue update <id> --status ...` | Direct file edit |
| Bind solution | `ccw issue bind <id> <sol-id>` | Direct file edit |

**Output Options**:
- `--brief`: JSON with minimal fields (id, title, status, priority, tags)
- `--json`: Full JSON (agent use only)

**Orchestration vs Execution**:
- **Command (orchestrator)**: Use `--brief` for minimal context
- **Agent (executor)**: Fetch full details → `ccw issue status <id> --json`

**ALWAYS** use CLI commands for CRUD operations. **NEVER** read entire `issues.jsonl` or `solutions/*.jsonl` directly.

## Execution Steps

### Step 1.1: Issue Loading (Brief Info Only)

```javascript
const batchSize = flags.batchSize || 3;
let issues = []; // {id, title, tags} - brief info for grouping only

// Default to --all-pending if no input provided
const useAllPending = flags.allPending || !userInput || userInput.trim() === '';

if (useAllPending) {
  // Get pending issues with brief metadata via CLI
  const result = Bash(`ccw issue list --status pending,registered --json`).trim();
  const parsed = result ? JSON.parse(result) : [];
  issues = parsed.map(i => ({ id: i.id, title: i.title || '', tags: i.tags || [] }));

  if (issues.length === 0) {
    console.log('No pending issues found.');
    return;
  }
  console.log(`Found ${issues.length} pending issues`);
} else {
  // Parse comma-separated issue IDs, fetch brief metadata
  const ids = userInput.includes(',')
    ? userInput.split(',').map(s => s.trim())
    : [userInput.trim()];

  for (const id of ids) {
    Bash(`ccw issue init ${id} --title "Issue ${id}" 2>/dev/null || true`);
    const info = Bash(`ccw issue status ${id} --json`).trim();
    const parsed = info ? JSON.parse(info) : {};
    issues.push({ id, title: parsed.title || '', tags: parsed.tags || [] });
  }
}
// Note: Agent fetches full issue content via `ccw issue status <id> --json`

// Intelligent grouping: Analyze issues by title/tags, group semantically similar ones
// Strategy: Same module/component, related bugs, feature clusters
// Constraint: Max ${batchSize} issues per batch
// Fallback shown here is plain chunking; refine semantically where possible
const batches = [];
for (let i = 0; i < issues.length; i += batchSize) {
  batches.push(issues.slice(i, i + batchSize));
}

console.log(`Processing ${issues.length} issues in ${batches.length} batch(es)`);

update_plan({
  explanation: "Issue loading complete, starting batch planning",
  plan: batches.map((_, i) => ({
    step: `Plan batch ${i+1}`,
    status: 'pending'
  }))
});
```

### Step 1.2: Unified Explore + Plan (issue-plan-agent) - PARALLEL

```javascript
Bash(`mkdir -p ${projectRoot}/.workflow/issues/solutions`);
const pendingSelections = []; // Collect multi-solution issues for user selection
const agentResults = []; // Collect all agent results for conflict aggregation

// Build prompts for all batches
const agentTasks = batches.map((batch, batchIndex) => {
  const issueList = batch.map(i => `- ${i.id}: ${i.title}${i.tags.length ? ` [${i.tags.join(', ')}]` : ''}`).join('\n');
  const batchIds = batch.map(i => i.id);

  const issuePrompt = `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/issue-plan-agent.md (MUST read first)
2. Read: {projectRoot}/.workflow/project-tech.json
3. Read: {projectRoot}/.workflow/project-guidelines.json

---

## Plan Issues

**Issues** (grouped by similarity):
${issueList}

**Project Root**: ${process.cwd()}

### Project Context (MANDATORY)
1. Read: {projectRoot}/.workflow/project-tech.json (technology stack, architecture)
2. Read: {projectRoot}/.workflow/project-guidelines.json (constraints and conventions)

### Workflow
1. Fetch issue details: ccw issue status <id> --json
2. **Analyze failure history** (if issue.feedback exists):
   - Extract failure details from issue.feedback (type='failure', stage='execute')
   - Parse error_type, message, task_id, solution_id from content JSON
   - Identify failure patterns: repeated errors, root causes, blockers
   - **Constraint**: Avoid repeating failed approaches
3. Load project context files
4. Explore codebase (ACE semantic search)
5. Plan solution with tasks (schema: solution-schema.json)
   - **If previous solution failed**: Reference failure analysis in solution.approach
   - Add explicit verification steps to prevent same failure mode
6. **If github_url exists**: Add final task to comment on GitHub issue
7. Write solution to: ${projectRoot}/.workflow/issues/solutions/{issue-id}.jsonl
8. **CRITICAL - Binding Decision**:
   - Single solution → **MUST execute**: ccw issue bind <issue-id> <solution-id>
   - Multiple solutions → Return pending_selection only (no bind)

### Failure-Aware Planning Rules
- **Extract failure patterns**: Parse issue.feedback where type='failure' and stage='execute'
- **Identify root causes**: Analyze error_type (test_failure, compilation, timeout, etc.)
- **Design alternative approach**: Create solution that addresses root cause
- **Add prevention steps**: Include explicit verification to catch same error earlier
- **Document lessons**: Reference previous failures in solution.approach

### Rules
- Solution ID format: SOL-{issue-id}-{uid} (uid: 4 random alphanumeric chars, e.g., a7x9)
- Single solution per issue → auto-bind via ccw issue bind
- Multiple solutions → register only, return pending_selection
- Tasks must have quantified acceptance.criteria

### Return Summary
{"bound":[{"issue_id":"...","solution_id":"...","task_count":N}],"pending_selection":[{"issue_id":"...","solutions":[{"id":"...","description":"...","task_count":N}]}]}
`;

  return { batchIndex, batchIds, issuePrompt, batch };
});

// Launch agents in parallel (max 10 concurrent)
const MAX_PARALLEL = 10;
for (let i = 0; i < agentTasks.length; i += MAX_PARALLEL) {
  const chunk = agentTasks.slice(i, i + MAX_PARALLEL);
  const agentIds = [];

  // Step 1: Spawn agents in parallel
  for (const { batchIndex, batchIds, issuePrompt, batch } of chunk) {
    updatePlanStep(`Plan batch ${batchIndex + 1}`, 'in_progress');
    const agentId = spawn_agent({
      message: issuePrompt
    });
    agentIds.push({ agentId, batchIndex });
  }

  console.log(`Launched ${agentIds.length} agents (chunk ${Math.floor(i/MAX_PARALLEL) + 1}/${Math.ceil(agentTasks.length/MAX_PARALLEL)})...`);

  // Step 2: Batch wait for all agents in this chunk
  const allIds = agentIds.map(a => a.agentId);
  const waitResult = wait({
    ids: allIds,
    timeout_ms: 600000 // 10 minutes
  });

  if (waitResult.timed_out) {
    console.log('Some agents timed out, continuing with completed results');
  }

  // Step 3: Collect results from completed agents
  for (const { agentId, batchIndex } of agentIds) {
    const agentStatus = waitResult.status[agentId];
    if (!agentStatus || !agentStatus.completed) {
      console.log(`Batch ${batchIndex + 1}: Agent did not complete, skipping`);
      updatePlanStep(`Plan batch ${batchIndex + 1}`, 'completed');
      continue;
    }

    const result = agentStatus.completed;

    // Extract JSON from potential markdown code blocks (agent may wrap in ```json...```)
    const jsonText = extractJsonFromMarkdown(result);
    let summary;
    try {
      summary = JSON.parse(jsonText);
    } catch (e) {
      console.log(`Batch ${batchIndex + 1}: Failed to parse agent result, skipping`);
      updatePlanStep(`Plan batch ${batchIndex + 1}`, 'completed');
      continue;
    }
    agentResults.push(summary); // Store for conflict aggregation

    // Verify binding for bound issues (agent should have executed bind)
    for (const item of summary.bound || []) {
      const status = JSON.parse(Bash(`ccw issue status ${item.issue_id} --json`).trim());
      if (status.bound_solution_id === item.solution_id) {
        console.log(`${item.issue_id}: ${item.solution_id} (${item.task_count} tasks)`);
      } else {
        // Fallback: agent failed to bind, execute here
        Bash(`ccw issue bind ${item.issue_id} ${item.solution_id}`);
        console.log(`${item.issue_id}: ${item.solution_id} (${item.task_count} tasks) [recovered]`);
      }
    }
    // Collect pending selections
    for (const pending of summary.pending_selection || []) {
      pendingSelections.push(pending);
    }
    updatePlanStep(`Plan batch ${batchIndex + 1}`, 'completed');
  }

  // Step 4: Batch cleanup - close all agents in this chunk
  allIds.forEach(id => close_agent({ id }));
}
```
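The helper `extractJsonFromMarkdown` is called in Step 1.2 but never defined in this document. A minimal sketch of what it is assumed to do (unwrap an optional fenced block, otherwise return the raw text):

```javascript
// Hypothetical helper (not part of the documented API): unwrap agent output
// that may arrive inside a fenced json code block, otherwise pass it through.
function extractJsonFromMarkdown(text) {
  const match = text.match(/```(?:json)?\s*([\s\S]*?)```/);
  return (match ? match[1] : text).trim();
}
```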
### Step 1.3: Solution Selection (if pending)

```javascript
// Handle multi-solution issues
for (const pending of pendingSelections) {
  if (pending.solutions.length === 0) continue;

  const options = pending.solutions.slice(0, 4).map(sol => ({
    label: `${sol.id} (${sol.task_count} tasks)`,
    description: sol.description || sol.approach || 'No description'
  }));

  const answer = ASK_USER([{
    id: pending.issue_id,
    type: "select",
    prompt: `Issue ${pending.issue_id}: which solution to bind?`,
    options: options
  }]); // BLOCKS (wait for user response)

  const selected = answer[Object.keys(answer)[0]];
  if (!selected || selected === 'Other') continue;

  const solId = selected.split(' ')[0];
  Bash(`ccw issue bind ${pending.issue_id} ${solId}`);
  console.log(`${pending.issue_id}: ${solId} bound`);
}
```

### Step 1.4: Summary

```javascript
// Count planned issues via CLI
const planned = JSON.parse(Bash(`ccw issue list --status planned --brief`) || '[]');
const plannedCount = planned.length;

console.log(`
## Done: ${issues.length} issues → ${plannedCount} planned

Next: \`/issue:queue\` → \`/issue:execute\`
`);
```

## Error Handling

| Error | Resolution |
|-------|------------|
| Issue not found | Auto-create via `ccw issue init` |
| ACE search fails | Agent falls back to ripgrep |
| No solutions generated | Display error, suggest manual planning |
| User cancels selection | Skip issue, continue with others |
| File conflicts | Agent detects and suggests resolution order |

## Bash Compatibility

**Avoid**: `$(cmd)`, `$var`, `for` loops — will be escaped incorrectly

**Use**: Simple commands + `&&` chains, quote comma params `"pending,registered"`

## Quality Checklist

Before completing, verify:

- [ ] All input issues have solutions in `solutions/{issue-id}.jsonl`
- [ ] Single solution issues are auto-bound (`bound_solution_id` set)
- [ ] Multi-solution issues returned in `pending_selection` for user choice
- [ ] Each solution has executable tasks with `modification_points`
- [ ] Task acceptance criteria are quantified (not vague)
- [ ] Conflicts detected and reported (if multiple issues touch same files)
- [ ] Issue status updated to `planned` after binding
- [ ] All spawned agents are properly closed via `close_agent`

## Post-Phase Update

After plan completion:
- All processed issues should have `status: planned` and `bound_solution_id` set
- Report: total issues processed, solutions bound, pending selections resolved
- Recommend next step: Form execution queue via Phase 4

# Phase 2: Convert from Artifact

## Overview

Converts various planning artifact formats into issue workflow solutions with intelligent detection and automatic binding.

**Supported Sources** (auto-detected):
- **lite-plan**: `{projectRoot}/.workflow/.lite-plan/{slug}/plan.json`
- **workflow-session**: `WFS-xxx` ID or `{projectRoot}/.workflow/active/{session}/` folder
- **markdown**: Any `.md` file with implementation/task content
- **json**: Direct JSON files matching plan-json-schema

## Prerequisites

- Source artifact path or WFS-xxx ID provided
- `ccw issue` CLI available
- `{projectRoot}/.workflow/issues/` directory exists or will be created

## Auto Mode

When `--yes` or `-y`: Skip confirmation, auto-create issue and bind solution.

## Command Options

| Option | Description | Default |
|--------|-------------|---------|
| `<SOURCE>` | Planning artifact path or WFS-xxx ID | Required |
| `--issue <id>` | Bind to existing issue instead of creating new | Auto-create |
| `--supplement` | Add tasks to existing solution (requires --issue) | false |
| `-y, --yes` | Skip all confirmations | false |

## Core Data Access Principle

**Important**: Use CLI commands for all issue/solution operations.

| Operation | Correct | Incorrect |
|-----------|---------|-----------|
| Get issue | `ccw issue status <id> --json` | Read issues.jsonl directly |
| Create issue | `ccw issue init <id> --title "..."` | Write to issues.jsonl |
| Bind solution | `ccw issue bind <id> <sol-id>` | Edit issues.jsonl |
| List solutions | `ccw issue solutions --issue <id> --brief` | Read solutions/*.jsonl |

## Solution Schema Reference

Target format for all extracted data (from solution-schema.json):

```typescript
interface Solution {
  id: string;                   // SOL-{issue-id}-{4-char-uid}
  description?: string;         // High-level summary
  approach?: string;            // Technical strategy
  tasks: Task[];                // Required: at least 1 task
  exploration_context?: object; // Optional: source context
  analysis?: { risk, impact, complexity };
  score?: number;               // 0.0-1.0
  is_bound: boolean;
  created_at: string;
  bound_at?: string;
}

interface Task {
  id: string;                   // T1, T2, T3... (pattern: ^T[0-9]+$)
  title: string;                // Required: action verb + target
  scope: string;                // Required: module path or feature area
  action: Action;               // Required: Create|Update|Implement|...
  description?: string;
  modification_points?: Array<{file, target, change}>;
  implementation: string[];     // Required: step-by-step guide
  test?: { unit?, integration?, commands?, coverage_target? };
  acceptance: { criteria: string[], verification: string[] }; // Required
  commit?: { type, scope, message_template, breaking? };
  depends_on?: string[];
  priority?: number;            // 1-5 (default: 3)
}

type Action = 'Create' | 'Update' | 'Implement' | 'Refactor' | 'Add' | 'Delete' | 'Configure' | 'Test' | 'Fix';
```
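For illustration, a minimal solution instance conforming to this schema might look like the following (all identifiers and values are made up):

```json
{
  "id": "SOL-ISSUE-001-a7x9",
  "description": "Add retry logic to the sync client",
  "approach": "Wrap network calls in an exponential-backoff helper",
  "tasks": [
    {
      "id": "T1",
      "title": "Implement backoff helper",
      "scope": "src/sync",
      "action": "Implement",
      "implementation": ["Add retry(fn, attempts) helper", "Apply it to fetchChanges()"],
      "acceptance": {
        "criteria": ["3 retries with 2x backoff before failing"],
        "verification": ["Unit test simulating two transient failures"]
      }
    }
  ],
  "is_bound": false,
  "created_at": "2025-01-01T00:00:00Z"
}
```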
## Execution Steps

### Step 2.1: Parse Arguments & Detect Source Type

```javascript
const input = userInput.trim();
const flags = parseFlags(userInput); // --issue, --supplement, -y/--yes

// Extract source path (first non-flag argument)
const source = extractSourceArg(input);

// Detect source type
function detectSourceType(source) {
  // Check for WFS-xxx pattern (workflow session ID)
  if (source.match(/^WFS-[\w-]+$/)) {
    return { type: 'workflow-session-id', path: `${projectRoot}/.workflow/active/${source}` };
  }

  // Check if directory
  const isDir = Bash(`test -d "${source}" && echo "dir" || echo "file"`).trim() === 'dir';

  if (isDir) {
    // Check for lite-plan indicator
    const hasPlanJson = Bash(`test -f "${source}/plan.json" && echo "yes" || echo "no"`).trim() === 'yes';
    if (hasPlanJson) {
      return { type: 'lite-plan', path: source };
    }

    // Check for workflow session indicator
    const hasSession = Bash(`test -f "${source}/workflow-session.json" && echo "yes" || echo "no"`).trim() === 'yes';
    if (hasSession) {
      return { type: 'workflow-session', path: source };
    }
  }

  // Check file extensions
  if (source.endsWith('.json')) {
    return { type: 'json-file', path: source };
  }
  if (source.endsWith('.md')) {
    return { type: 'markdown-file', path: source };
  }

  // Check if path exists at all
  const exists = Bash(`test -e "${source}" && echo "yes" || echo "no"`).trim() === 'yes';
  if (!exists) {
    throw new Error(`E001: Source not found: ${source}`);
  }

  return { type: 'unknown', path: source };
}

const sourceInfo = detectSourceType(source);
if (sourceInfo.type === 'unknown') {
  throw new Error(`E002: Unable to detect source format for: ${source}`);
}

console.log(`Detected source type: ${sourceInfo.type}`);
```

### Step 2.2: Extract Data Using Format-Specific Extractor

```javascript
let extracted = { title: '', approach: '', tasks: [], metadata: {} };

switch (sourceInfo.type) {
  case 'lite-plan':
    extracted = extractFromLitePlan(sourceInfo.path);
    break;
  case 'workflow-session':
  case 'workflow-session-id':
    extracted = extractFromWorkflowSession(sourceInfo.path);
    break;
  case 'markdown-file':
    extracted = await extractFromMarkdownAI(sourceInfo.path);
    break;
  case 'json-file':
    extracted = extractFromJsonFile(sourceInfo.path);
    break;
}

// Validate extraction
if (!extracted.tasks || extracted.tasks.length === 0) {
  throw new Error('E006: No tasks extracted from source');
}

// Ensure task IDs are normalized to T1, T2, T3...
extracted.tasks = normalizeTaskIds(extracted.tasks);

console.log(`Extracted: ${extracted.tasks.length} tasks`);
```

#### Extractor: Lite-Plan

```javascript
function extractFromLitePlan(folderPath) {
  const planJson = Read(`${folderPath}/plan.json`);
  const plan = JSON.parse(planJson);

  return {
    title: plan.summary?.split('.')[0]?.trim() || 'Untitled Plan',
    description: plan.summary,
    approach: plan.approach,
    tasks: plan.tasks.map(t => ({
      id: t.id,
      title: t.title,
      scope: t.scope || '',
      action: t.action || 'Implement',
      description: t.description || t.title,
      modification_points: t.modification_points || [],
      implementation: Array.isArray(t.implementation) ? t.implementation : [t.implementation || ''],
      test: t.verification ? {
        unit: t.verification.unit_tests,
        integration: t.verification.integration_tests,
        commands: t.verification.manual_checks
      } : {},
      acceptance: {
        criteria: Array.isArray(t.acceptance) ? t.acceptance : [t.acceptance || ''],
        verification: t.verification?.manual_checks || []
      },
      depends_on: t.depends_on || [],
      priority: 3
    })),
    metadata: {
      source_type: 'lite-plan',
      source_path: folderPath,
      complexity: plan.complexity,
      estimated_time: plan.estimated_time,
      exploration_angles: plan._metadata?.exploration_angles || [],
      original_timestamp: plan._metadata?.timestamp
    }
  };
}
```

#### Extractor: Workflow Session

```javascript
function extractFromWorkflowSession(sessionPath) {
  // Load session metadata
  const sessionJson = Read(`${sessionPath}/workflow-session.json`);
  const session = JSON.parse(sessionJson);

  // Load IMPL_PLAN.md for approach (if exists)
  let approach = '';
  const implPlanPath = `${sessionPath}/IMPL_PLAN.md`;
  const hasImplPlan = Bash(`test -f "${implPlanPath}" && echo "yes" || echo "no"`).trim() === 'yes';
  if (hasImplPlan) {
    const implPlan = Read(implPlanPath);
    // Extract overview/approach section
    const overviewMatch = implPlan.match(/##\s*(?:Overview|Approach|Strategy)\s*\n([\s\S]*?)(?=\n##|$)/i);
    approach = overviewMatch?.[1]?.trim() || implPlan.split('\n').slice(0, 10).join('\n');
  }

  // Load all task JSONs from .task folder
  const taskFiles = Glob({ pattern: `${sessionPath}/.task/IMPL-*.json` });
  const tasks = taskFiles.map(f => {
    const taskJson = Read(f);
    const task = JSON.parse(taskJson);
    return {
      id: task.id?.replace(/^IMPL-0*/, 'T') || 'T1', // IMPL-001 → T1
      title: task.title,
      scope: task.scope || inferScopeFromTask(task),
      action: capitalizeAction(task.type) || 'Implement',
      description: task.description,
      modification_points: task.implementation?.modification_points || [],
      implementation: task.implementation?.steps || [],
      test: task.implementation?.test || {},
      acceptance: {
        criteria: task.acceptance_criteria || [],
        verification: task.verification_steps || []
      },
      commit: task.commit,
      depends_on: (task.depends_on || []).map(d => d.replace(/^IMPL-0*/, 'T')),
      priority: task.priority || 3
    };
  });

  return {
    title: session.name || session.description?.split('.')[0] || 'Workflow Session',
    description: session.description || session.name,
    approach: approach || session.description,
    tasks: tasks,
    metadata: {
      source_type: 'workflow-session',
      source_path: sessionPath,
      session_id: session.id,
      created_at: session.created_at
    }
  };
}

function inferScopeFromTask(task) {
  if (task.implementation?.modification_points?.length) {
    const files = task.implementation.modification_points.map(m => m.file);
    // Find common directory prefix
    const dirs = files.map(f => f.split('/').slice(0, -1).join('/'));
    return [...new Set(dirs)][0] || '';
  }
  return '';
}

function capitalizeAction(type) {
  if (!type) return 'Implement';
  const map = { feature: 'Implement', bugfix: 'Fix', refactor: 'Refactor', test: 'Test', docs: 'Update' };
  return map[type.toLowerCase()] || type.charAt(0).toUpperCase() + type.slice(1);
}
```

#### Extractor: Markdown (AI-Assisted via Gemini)

```javascript
async function extractFromMarkdownAI(filePath) {
  const fileContent = Read(filePath);

  // Use Gemini CLI for intelligent extraction
  const cliPrompt = `PURPOSE: Extract implementation plan from markdown document for issue solution conversion. Must output ONLY valid JSON.
TASK: • Analyze document structure • Identify title/summary • Extract approach/strategy section • Parse tasks from any format (lists, tables, sections, code blocks) • Normalize each task to solution schema
MODE: analysis
CONTEXT: Document content provided below
EXPECTED: Valid JSON object with format:
{
  "title": "extracted title",
  "approach": "extracted approach/strategy",
  "tasks": [
    {
      "id": "T1",
      "title": "task title",
      "scope": "module or feature area",
      "action": "Implement|Update|Create|Fix|Refactor|Add|Delete|Configure|Test",
      "description": "what to do",
      "implementation": ["step 1", "step 2"],
      "acceptance": ["criteria 1", "criteria 2"]
    }
  ]
}
CONSTRAINTS: Output ONLY valid JSON - no markdown, no explanation | Action must be one of: Create, Update, Implement, Refactor, Add, Delete, Configure, Test, Fix | Tasks must have id, title, scope, action, implementation (array), acceptance (array)

DOCUMENT CONTENT:
${fileContent}`;

  // Execute Gemini CLI
  const result = Bash(`ccw cli -p '${cliPrompt.replace(/'/g, "'\\''")}' --tool gemini --mode analysis`, { timeout: 120000 });

  // Parse JSON from result (may be wrapped in markdown code block)
  let jsonText = result.trim();
  const jsonMatch = jsonText.match(/```(?:json)?\s*([\s\S]*?)```/);
  if (jsonMatch) {
    jsonText = jsonMatch[1].trim();
  }

  try {
    const extracted = JSON.parse(jsonText);

    // Normalize tasks
    const tasks = (extracted.tasks || []).map((t, i) => ({
      id: t.id || `T${i + 1}`,
      title: t.title || 'Untitled task',
      scope: t.scope || '',
      action: validateAction(t.action) || 'Implement',
      description: t.description || t.title,
      modification_points: t.modification_points || [],
      implementation: Array.isArray(t.implementation) ? t.implementation : [t.implementation || ''],
      test: t.test || {},
      acceptance: {
        criteria: Array.isArray(t.acceptance) ? t.acceptance : [t.acceptance || ''],
        verification: t.verification || []
      },
      depends_on: t.depends_on || [],
      priority: t.priority || 3
    }));

    return {
      title: extracted.title || 'Extracted Plan',
      description: extracted.summary || extracted.title,
      approach: extracted.approach || '',
      tasks: tasks,
      metadata: {
        source_type: 'markdown',
        source_path: filePath,
        extraction_method: 'gemini-ai'
      }
    };
  } catch (e) {
    // Provide more context for debugging
    throw new Error(`E005: Failed to extract tasks from markdown. Gemini response was not valid JSON. Error: ${e.message}. Response preview: ${jsonText.substring(0, 200)}...`);
  }
}

function validateAction(action) {
  const validActions = ['Create', 'Update', 'Implement', 'Refactor', 'Add', 'Delete', 'Configure', 'Test', 'Fix'];
  if (!action) return null;
  const normalized = action.charAt(0).toUpperCase() + action.slice(1).toLowerCase();
  return validActions.includes(normalized) ? normalized : null;
}
```
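The `validateAction` normalization is case-insensitive but strict about vocabulary: anything outside the allowed action list comes back as `null`, and the caller falls back to `'Implement'`. A quick standalone check (the function is repeated here so the snippet runs on its own):

```javascript
// Same logic as validateAction above, repeated so this snippet is self-contained.
function validateAction(action) {
  const validActions = ['Create', 'Update', 'Implement', 'Refactor', 'Add', 'Delete', 'Configure', 'Test', 'Fix'];
  if (!action) return null;
  const normalized = action.charAt(0).toUpperCase() + action.slice(1).toLowerCase();
  return validActions.includes(normalized) ? normalized : null;
}

console.log(validateAction('fix'));      // 'Fix' (casing normalized)
console.log(validateAction('REFACTOR')); // 'Refactor'
console.log(validateAction('remove'));   // null (not in the vocabulary)
```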
#### Extractor: JSON File

```javascript
function extractFromJsonFile(filePath) {
  const content = Read(filePath);
  const plan = JSON.parse(content);

  // Detect if it's already solution format or plan format
  if (plan.tasks && Array.isArray(plan.tasks)) {
    // Map tasks to normalized format
    const tasks = plan.tasks.map((t, i) => ({
      id: t.id || `T${i + 1}`,
      title: t.title,
      scope: t.scope || '',
      action: t.action || 'Implement',
      description: t.description || t.title,
      modification_points: t.modification_points || [],
      implementation: Array.isArray(t.implementation) ? t.implementation : [t.implementation || ''],
      test: t.test || t.verification || {},
      acceptance: normalizeAcceptance(t.acceptance),
      depends_on: t.depends_on || [],
      priority: t.priority || 3
    }));

    return {
      title: plan.summary?.split('.')[0] || plan.title || 'JSON Plan',
      description: plan.summary || plan.description,
      approach: plan.approach,
      tasks: tasks,
      metadata: {
        source_type: 'json',
        source_path: filePath,
        complexity: plan.complexity,
        original_metadata: plan._metadata
      }
    };
  }

  throw new Error('E002: JSON file does not contain valid plan structure (missing tasks array)');
}

function normalizeAcceptance(acceptance) {
  if (!acceptance) return { criteria: [], verification: [] };
  if (typeof acceptance === 'object' && acceptance.criteria) return acceptance;
  if (Array.isArray(acceptance)) return { criteria: acceptance, verification: [] };
  return { criteria: [String(acceptance)], verification: [] };
}
```

### Step 2.3: Normalize Task IDs

```javascript
function normalizeTaskIds(tasks) {
  return tasks.map((t, i) => ({
    ...t,
    id: `T${i + 1}`,
    // Also normalize depends_on references
    depends_on: (t.depends_on || []).map(d => {
      // Handle various ID formats: IMPL-001, T1, 1, etc.
      const num = d.match(/\d+/)?.[0];
      return num ? `T${parseInt(num)}` : d;
    })
  }));
}
```
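Applied to legacy IDs, the normalization produces sequential `T` IDs with dependencies remapped by number (the function is repeated here so the example runs standalone):

```javascript
// Same normalization as above, repeated for a self-contained demo.
function normalizeTaskIds(tasks) {
  return tasks.map((t, i) => ({
    ...t,
    id: `T${i + 1}`,
    depends_on: (t.depends_on || []).map(d => {
      const num = d.match(/\d+/)?.[0];
      return num ? `T${parseInt(num)}` : d;
    })
  }));
}

const out = normalizeTaskIds([
  { id: 'IMPL-001', title: 'a' },
  { id: 'IMPL-002', title: 'b', depends_on: ['IMPL-001'] }
]);
console.log(out.map(t => t.id)); // ['T1', 'T2']
console.log(out[1].depends_on);  // ['T1']
```

Note that IDs are renumbered by position while `depends_on` is remapped by the embedded number, so the mapping is only guaranteed consistent when the source IDs were already sequential.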
### Step 2.4: Resolve Issue (Create or Find)

```javascript
let issueId = flags.issue;
let existingSolution = null;

if (issueId) {
  // Validate issue exists
  let issueCheck;
  try {
    issueCheck = Bash(`ccw issue status ${issueId} --json 2>/dev/null`).trim();
    if (!issueCheck || issueCheck === '') {
      throw new Error('empty response');
    }
  } catch (e) {
    throw new Error(`E003: Issue not found: ${issueId}`);
  }

  const issue = JSON.parse(issueCheck);

  // Check if issue already has bound solution
  if (issue.bound_solution_id && !flags.supplement) {
    throw new Error(`E004: Issue ${issueId} already has bound solution (${issue.bound_solution_id}). Use --supplement to add tasks.`);
  }

  // Load existing solution for supplement mode
  if (flags.supplement && issue.bound_solution_id) {
    try {
      const solResult = Bash(`ccw issue solution ${issue.bound_solution_id} --json`).trim();
      existingSolution = JSON.parse(solResult);
      console.log(`Loaded existing solution with ${existingSolution.tasks.length} tasks`);
    } catch (e) {
      throw new Error(`Failed to load existing solution: ${e.message}`);
    }
  }
} else {
  // Create new issue via ccw issue create (auto-generates correct ID)
  // Smart extraction: title from content, priority from complexity
  const title = extracted.title || 'Converted Plan';
  const context = extracted.description || extracted.approach || title;

  // Auto-determine priority based on complexity
  const complexityMap = { high: 2, medium: 3, low: 4 };
  const priority = complexityMap[extracted.metadata.complexity?.toLowerCase()] || 3;

  try {
    // Use heredoc to avoid shell escaping issues
    const createResult = Bash(`ccw issue create << 'EOF'
{
  "title": ${JSON.stringify(title)},
  "context": ${JSON.stringify(context)},
  "priority": ${priority},
  "source": "converted"
}
EOF`).trim();

    // Parse result to get created issue ID
    const created = JSON.parse(createResult);
    issueId = created.id;
    console.log(`Created issue: ${issueId} (priority: ${priority})`);
  } catch (e) {
    throw new Error(`Failed to create issue: ${e.message}`);
  }
}
```


### Step 2.5: Generate Solution

```javascript
// Generate solution ID
function generateSolutionId(issueId) {
  const chars = 'abcdefghijklmnopqrstuvwxyz0123456789';
  let uid = '';
  for (let i = 0; i < 4; i++) {
    uid += chars[Math.floor(Math.random() * chars.length)];
  }
  return `SOL-${issueId}-${uid}`;
}

let solution;
const solutionId = generateSolutionId(issueId);

if (flags.supplement && existingSolution) {
  // Supplement mode: merge with existing solution
  const maxTaskId = Math.max(...existingSolution.tasks.map(t => parseInt(t.id.slice(1))));

  const newTasks = extracted.tasks.map((t, i) => ({
    ...t,
    id: `T${maxTaskId + i + 1}`
  }));

  solution = {
    ...existingSolution,
    tasks: [...existingSolution.tasks, ...newTasks],
    approach: existingSolution.approach + '\n\n[Supplementary] ' + (extracted.approach || ''),
    updated_at: new Date().toISOString()
  };

  console.log(`Supplementing: ${existingSolution.tasks.length} existing + ${newTasks.length} new = ${solution.tasks.length} total tasks`);
} else {
  // New solution
  solution = {
    id: solutionId,
    description: extracted.description || extracted.title,
    approach: extracted.approach,
    tasks: extracted.tasks,
    exploration_context: extracted.metadata.exploration_angles ? {
      exploration_angles: extracted.metadata.exploration_angles
    } : undefined,
    analysis: {
      risk: 'medium',
      impact: 'medium',
      complexity: extracted.metadata.complexity?.toLowerCase() || 'medium'
    },
    is_bound: false,
    created_at: new Date().toISOString(),
    _conversion_metadata: {
      source_type: extracted.metadata.source_type,
      source_path: extracted.metadata.source_path,
      converted_at: new Date().toISOString()
    }
  };
}
```
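The supplement branch renumbers incoming tasks past the highest existing `T<n>` ID. A minimal runnable sketch of that renumbering (the `renumberTasks` name is illustrative, not part of the command):

```javascript
// Continue task numbering after the highest existing T<n> id,
// so supplementary tasks never collide with existing ones.
function renumberTasks(existingTasks, newTasks) {
  const maxTaskId = Math.max(0, ...existingTasks.map(t => parseInt(t.id.slice(1), 10)));
  return newTasks.map((t, i) => ({ ...t, id: `T${maxTaskId + i + 1}` }));
}

const merged = renumberTasks(
  [{ id: 'T1' }, { id: 'T2' }],
  [{ id: 'T1', title: 'Add tests' }]  // incoming id is ignored
);
console.log(merged.map(t => t.id)); // → [ 'T3' ]
```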

### Step 2.6: Confirm & Persist

```javascript
// Display preview
console.log(`
## Conversion Summary

**Issue**: ${issueId}
**Solution**: ${flags.supplement ? existingSolution.id : solutionId}
**Tasks**: ${solution.tasks.length}
**Mode**: ${flags.supplement ? 'Supplement' : 'New'}

### Tasks:
${solution.tasks.map(t => `- ${t.id}: ${t.title} [${t.action}]`).join('\n')}
`);

// Confirm if not auto mode
if (!flags.yes && !flags.y) {
  const confirmed = CONFIRM(`Create solution for issue ${issueId} with ${solution.tasks.length} tasks?`); // BLOCKS (wait for user response)

  if (!confirmed) {
    console.log('Cancelled.');
    return;
  }
}

// Persist solution (following issue-plan-agent pattern)
Bash(`mkdir -p ${projectRoot}/.workflow/issues/solutions`);

const solutionFile = `${projectRoot}/.workflow/issues/solutions/${issueId}.jsonl`;

if (flags.supplement) {
  // Supplement mode: update existing solution line atomically
  try {
    const existingContent = Read(solutionFile);
    const lines = existingContent.trim().split('\n').filter(l => l);
    const updatedLines = lines.map(line => {
      const sol = JSON.parse(line);
      if (sol.id === existingSolution.id) {
        return JSON.stringify(solution);
      }
      return line;
    });
    // Atomic write: write entire content at once
    Write({ file_path: solutionFile, content: updatedLines.join('\n') + '\n' });
    console.log(`Updated solution: ${existingSolution.id}`);
  } catch (e) {
    throw new Error(`Failed to update solution: ${e.message}`);
  }

  // Note: No need to rebind - solution is already bound to issue
} else {
  // New solution: append to JSONL file (following issue-plan-agent pattern)
  try {
    const solutionLine = JSON.stringify(solution);

    // Read existing content, append new line, write atomically
    const existing = Bash(`test -f "${solutionFile}" && cat "${solutionFile}" || echo ""`).trim();
    const newContent = existing ? existing + '\n' + solutionLine + '\n' : solutionLine + '\n';
    Write({ file_path: solutionFile, content: newContent });

    console.log(`Created solution: ${solutionId}`);
  } catch (e) {
    throw new Error(`Failed to write solution: ${e.message}`);
  }

  // Bind solution to issue
  try {
    Bash(`ccw issue bind ${issueId} ${solutionId}`);
    console.log(`Bound solution to issue`);
  } catch (e) {
    // Cleanup: remove solution file on bind failure
    try {
      Bash(`rm -f "${solutionFile}"`);
    } catch (cleanupError) {
      // Ignore cleanup errors
    }
    throw new Error(`Failed to bind solution: ${e.message}`);
  }

  // Update issue status to planned
  try {
    Bash(`ccw issue update ${issueId} --status planned`);
  } catch (e) {
    throw new Error(`Failed to update issue status: ${e.message}`);
  }
}
```

### Step 2.7: Summary

```javascript
console.log(`
## Done

**Issue**: ${issueId}
**Solution**: ${flags.supplement ? existingSolution.id : solutionId}
**Tasks**: ${solution.tasks.length}
**Status**: planned

### Next Steps:
- \`/issue:queue\` → Form execution queue
- \`ccw issue status ${issueId}\` → View issue details
- \`ccw issue solution ${flags.supplement ? existingSolution.id : solutionId}\` → View solution
`);
```

## Error Handling

| Error | Code | Resolution |
|-------|------|------------|
| Source not found | E001 | Check path exists |
| Invalid source format | E002 | Verify file contains valid plan structure |
| Issue not found | E003 | Check issue ID or omit --issue to create new |
| Solution already bound | E004 | Use --supplement to add tasks |
| AI extraction failed | E005 | Check markdown structure, try simpler format |
| No tasks extracted | E006 | Source must contain at least 1 task |

## Post-Phase Update

After conversion completion:
- Issue created/updated with `status: planned` and `bound_solution_id` set
- Solution persisted in `{projectRoot}/.workflow/issues/solutions/{issue-id}.jsonl`
- Report: issue ID, solution ID, task count, mode (new/supplement)
- Recommend next step: Form execution queue via Phase 4
# Phase 3: From Brainstorm

## Overview

Bridge command that converts **brainstorm-with-file** session output into an executable **issue + solution** for parallel-dev-cycle consumption.

**Core workflow**: Load Session → Select Idea → Convert to Issue → Generate Solution → Bind & Ready

**Input sources**:
- **synthesis.json** - Main brainstorm results with top_ideas
- **perspectives.json** - Multi-CLI perspectives (creative/pragmatic/systematic)
- **.brainstorming/** - Synthesis artifacts (clarifications, enhancements from role analyses)

**Output**:
- **Issue** (ISS-YYYYMMDD-NNN) - Full context with clarifications
- **Solution** (SOL-{issue-id}-{uid}) - Structured tasks for parallel-dev-cycle

## Prerequisites

- Brainstorm session ID or path (e.g., `SESSION="BS-rate-limiting-2025-01-28"`)
- `synthesis.json` must exist in the session directory
- `ccw issue` CLI available

## Auto Mode

When `--yes` or `-y`: Auto-select highest-scored idea, skip confirmations, create issue directly.

## Arguments

| Argument | Required | Type | Default | Description |
|----------|----------|------|---------|-------------|
| SESSION | Yes | String | - | Session ID or path to `{projectRoot}/.workflow/.brainstorm/BS-xxx` |
| --idea | No | Integer | - | Pre-select idea by index (0-based) |
| --auto | No | Flag | false | Auto-select highest-scored idea |
| -y, --yes | No | Flag | false | Skip all confirmations |

## Data Structures

### Issue Schema (Output)

```typescript
interface Issue {
  id: string;                    // ISS-YYYYMMDD-NNN
  title: string;                 // From idea.title
  status: 'planned';             // Auto-set after solution binding
  priority: number;              // 1-5 (derived from idea.score)
  context: string;               // Full description with clarifications
  source: 'brainstorm';
  labels: string[];              // ['brainstorm', perspective, feasibility]

  // Structured fields
  expected_behavior: string;     // From key_strengths
  actual_behavior: string;       // From main_challenges
  affected_components: string[]; // Extracted from description

  _brainstorm_metadata: {
    session_id: string;
    idea_score: number;
    novelty: number;
    feasibility: string;
    clarifications_count: number;
  };
}
```

### Solution Schema (Output)

```typescript
interface Solution {
  id: string;           // SOL-{issue-id}-{4-char-uid}
  description: string;  // idea.title
  approach: string;     // idea.description
  tasks: Task[];        // Generated from idea.next_steps

  analysis: {
    risk: 'low' | 'medium' | 'high';
    impact: 'low' | 'medium' | 'high';
    complexity: 'low' | 'medium' | 'high';
  };

  is_bound: boolean;    // true
  created_at: string;
  bound_at: string;
}

interface Task {
  id: string;           // T1, T2, T3...
  title: string;        // Actionable task name
  scope: string;        // design|implementation|testing|documentation
  action: string;       // Implement|Design|Research|Test|Document
  description: string;

  implementation: string[];   // Step-by-step guide
  acceptance: {
    criteria: string[];       // What defines success
    verification: string[];   // How to verify
  };

  priority: number;           // 1-5
  depends_on: string[];       // Task dependencies
}
```
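A minimal object satisfying the `Task` interface above, with illustrative placeholder values (the content is invented for the example, not taken from a real session):

```javascript
// Every field from the Task interface is present.
const task = {
  id: 'T1',
  title: 'Implement token bucket limiter',
  scope: 'implementation',
  action: 'Implement',
  description: 'Add a token bucket rate limiter to the API gateway',
  implementation: ['Define bucket state', 'Add refill timer', 'Wire into middleware'],
  acceptance: {
    criteria: ['Requests over limit rejected with 429'],
    verification: ['Unit tests for refill and rejection paths']
  },
  priority: 3,
  depends_on: []
};
console.log(Object.keys(task).length); // → 9
```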

## Execution Steps

### Step 3.1: Session Loading

```
Phase 1: Session Loading
├─ Validate session path
├─ Load synthesis.json (required)
├─ Load perspectives.json (optional - multi-CLI insights)
├─ Load .brainstorming/** (optional - synthesis artifacts)
└─ Validate top_ideas array exists
```

### Step 3.2: Idea Selection

```
Phase 2: Idea Selection
├─ Auto mode: Select highest scored idea
├─ Pre-selected: Use --idea=N index
└─ Interactive: Display table, ask user to select
```

### Step 3.3: Enrich Issue Context

```
Phase 3: Enrich Issue Context
├─ Base: idea.description + key_strengths + main_challenges
├─ Add: Relevant clarifications (Requirements/Architecture/Feasibility)
├─ Add: Multi-perspective insights (creative/pragmatic/systematic)
└─ Add: Session metadata (session_id, completion date, clarification count)
```

### Step 3.4: Create Issue

```
Phase 4: Create Issue
├─ Generate issue data with enriched context
├─ Calculate priority from idea.score (0-10 → 1-5)
├─ Create via: ccw issue create (heredoc for JSON)
└─ Returns: ISS-YYYYMMDD-NNN
```

### Step 3.5: Generate Solution Tasks

```
Phase 5: Generate Solution Tasks
├─ T1: Research & Validate (if main_challenges exist)
├─ T2: Design & Specification (if key_strengths exist)
├─ T3+: Implementation tasks (from idea.next_steps)
└─ Each task includes: implementation steps + acceptance criteria
```

### Step 3.6: Bind Solution

```
Phase 6: Bind Solution
├─ Write solution to {projectRoot}/.workflow/issues/solutions/{issue-id}.jsonl
├─ Bind via: ccw issue bind {issue-id} {solution-id}
├─ Update issue status to 'planned'
└─ Returns: SOL-{issue-id}-{uid}
```

### Step 3.7: Next Steps

```
Phase 7: Next Steps
└─ Offer: Form queue | Convert another idea | View details | Done
```

## Context Enrichment Logic

### Base Context (Always Included)

- **Description**: `idea.description`
- **Why This Idea**: `idea.key_strengths[]`
- **Challenges to Address**: `idea.main_challenges[]`
- **Implementation Steps**: `idea.next_steps[]`

### Enhanced Context (If Available)

**From Synthesis Artifacts** (`.brainstorming/*/analysis*.md`):
- Extract clarifications matching categories: Requirements, Architecture, Feasibility
- Format: `**{Category}** ({role}): {question} → {answer}`
- Limit: Top 3 most relevant

**From Perspectives** (`perspectives.json`):
- **Creative**: First insight from `perspectives.creative.insights[0]`
- **Pragmatic**: First blocker from `perspectives.pragmatic.blockers[0]`
- **Systematic**: First pattern from `perspectives.systematic.patterns[0]`

**Session Metadata**:
- Session ID, Topic, Completion Date
- Clarifications count (if synthesis artifacts loaded)

## Task Generation Strategy

### Task 1: Research & Validation
**Trigger**: `idea.main_challenges.length > 0`
- **Title**: "Research & Validate Approach"
- **Scope**: design
- **Action**: Research
- **Implementation**: Investigate blockers, review similar implementations, validate with team
- **Acceptance**: Blockers documented, feasibility assessed, approach validated

### Task 2: Design & Specification
**Trigger**: `idea.key_strengths.length > 0`
- **Title**: "Design & Create Specification"
- **Scope**: design
- **Action**: Design
- **Implementation**: Create design doc, define success criteria, plan phases
- **Acceptance**: Design complete, metrics defined, plan outlined

### Task 3+: Implementation Tasks
**Trigger**: `idea.next_steps[]`
- **Title**: From `next_steps[i]` (max 60 chars)
- **Scope**: Inferred from keywords (test→testing, api→backend, ui→frontend)
- **Action**: Detected from verbs (implement, create, update, fix, test, document)
- **Implementation**: Execute step + follow design + write tests
- **Acceptance**: Step implemented + tests passing + code reviewed

### Fallback Task
**Trigger**: No tasks generated from above
- **Title**: `idea.title`
- **Scope**: implementation
- **Action**: Implement
- **Generic implementation + acceptance criteria**
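
The keyword and verb inference used for Task 3+ can be sketched as follows. The mappings mirror the bullets above; treat the exact lists as illustrative, since the command may use a richer mapping internally:

```javascript
// Infer scope from keywords in a next_steps entry.
function inferScope(step) {
  const s = step.toLowerCase();
  if (s.includes('test')) return 'testing';
  if (s.includes('api')) return 'backend';
  if (s.includes('ui')) return 'frontend';
  return 'implementation';
}

// Detect the action from the leading verb, falling back to 'Implement'.
function inferAction(step) {
  const verbs = ['implement', 'create', 'update', 'fix', 'test', 'document'];
  const first = step.toLowerCase().split(/\s+/)[0];
  const hit = verbs.find(v => first.startsWith(v));
  return hit ? hit[0].toUpperCase() + hit.slice(1) : 'Implement';
}

console.log(inferScope('Add API rate limit endpoint')); // → backend
console.log(inferAction('Test refill behavior'));       // → Test
```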

## Priority Calculation

### Issue Priority (1-5)
```
idea.score: 0-10
priority = max(1, min(5, ceil((11 - score) / 2)))

Examples:
  score 9-10 → priority 1 (critical)
  score 7-8  → priority 2 (high)
  score 5-6  → priority 3 (medium)
  score 3-4  → priority 4 (low)
  score 0-2  → priority 5 (lowest)
```
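The example table maps score bands to priorities as executable JavaScript (a direct transcription of the examples, clamped to 1-5):

```javascript
// Higher brainstorm score → lower (more urgent) priority number.
function scoreToPriority(score) {
  return Math.max(1, Math.min(5, Math.ceil((11 - score) / 2)));
}

for (const s of [10, 8, 6, 4, 1]) {
  console.log(`score ${s} → priority ${scoreToPriority(s)}`);
}
// → priorities 1, 2, 3, 4, 5
```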

### Task Priority (1-5)
- Research task: 1 (highest)
- Design task: 2
- Implementation tasks: 3 by default, decrement for later tasks
- Testing/documentation: 4-5

### Complexity Analysis
```
risk:       main_challenges.length > 2 ? 'high' : 'medium'
impact:     score >= 8 ? 'high' : score >= 6 ? 'medium' : 'low'
complexity: (main_challenges.length > 3 OR tasks.length > 5) ? 'high'
            : tasks.length > 3 ? 'medium' : 'low'
```
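The heuristic above in runnable form. The thresholds follow the pseudo-code directly; treat them as a sketch rather than a tuned policy, and the `analyze` name as illustrative:

```javascript
// Derive the solution's analysis block from idea metadata and tasks.
function analyze(idea, tasks) {
  const challenges = idea.main_challenges.length;
  return {
    risk: challenges > 2 ? 'high' : 'medium',
    impact: idea.score >= 8 ? 'high' : idea.score >= 6 ? 'medium' : 'low',
    complexity: (challenges > 3 || tasks.length > 5) ? 'high'
              : tasks.length > 3 ? 'medium' : 'low'
  };
}

const a = analyze({ score: 8.5, main_challenges: ['a', 'b', 'c'] }, [1, 2, 3, 4]);
console.log(a); // → { risk: 'high', impact: 'high', complexity: 'medium' }
```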

## CLI Integration

### Issue Creation
```bash
# Uses heredoc to avoid shell escaping
ccw issue create << 'EOF'
{
  "title": "...",
  "context": "...",
  "priority": 3,
  "source": "brainstorm",
  "labels": ["brainstorm", "creative", "feasibility-high"],
  ...
}
EOF
```

### Solution Binding
```bash
# Append solution to JSONL file
echo '{"id":"SOL-xxx","tasks":[...]}' >> ${projectRoot}/.workflow/issues/solutions/{issue-id}.jsonl

# Bind to issue
ccw issue bind {issue-id} {solution-id}

# Update status
ccw issue update {issue-id} --status planned
```

## Error Handling

| Error | Message | Resolution |
|-------|---------|------------|
| Session not found | synthesis.json missing | Check session ID, list available sessions |
| No ideas | top_ideas array empty | Complete brainstorm workflow first |
| Invalid idea index | Index out of range | Check valid range 0 to N-1 |
| Issue creation failed | ccw issue create error | Verify CLI endpoint working |
| Solution binding failed | Bind error | Check issue exists, retry |

## Examples

### Interactive Mode

```bash
codex -p "@.codex/prompts/issue-resolve.md --source brainstorm SESSION=\"BS-rate-limiting-2025-01-28\""

# Output:
# | # | Title | Score | Feasibility |
# |---|-------|-------|-------------|
# | 0 | Token Bucket Algorithm | 8.5 | High |
# | 1 | Sliding Window Counter | 7.2 | Medium |
# | 2 | Fixed Window | 6.1 | High |

# User selects: #0

# Result:
# Created issue: ISS-20250128-001
# Created solution: SOL-ISS-20250128-001-ab3d
# Bound solution to issue
# → Next: /issue:queue
```

### Auto Mode

```bash
codex -p "@.codex/prompts/issue-resolve.md --source brainstorm SESSION=\"BS-caching-2025-01-28\" --auto"

# Result:
# Auto-selected: Redis Cache Layer (Score: 9.2/10)
# Created issue: ISS-20250128-002
# Solution with 4 tasks
# → Status: planned
```

## Integration Flow

```
brainstorm-with-file
  │
  ├─ synthesis.json
  ├─ perspectives.json
  └─ .brainstorming/** (optional)
  │
  ▼
Phase 3: From Brainstorm ◄─── This phase
  │
  ├─ ISS-YYYYMMDD-NNN (enriched issue)
  └─ SOL-{issue-id}-{uid} (structured solution)
  │
  ▼
Phase 4: Form Queue
  │
  ▼
/issue:execute
  │
  ▼
RA → EP → CD → VAS
```

## Session Files Reference

### Input Files

```
{projectRoot}/.workflow/.brainstorm/BS-{slug}-{date}/
├── synthesis.json        # REQUIRED - Top ideas with scores
├── perspectives.json     # OPTIONAL - Multi-CLI insights
├── brainstorm.md         # Reference only
└── .brainstorming/       # OPTIONAL - Synthesis artifacts
    ├── system-architect/
    │   └── analysis.md   # Contains clarifications + enhancements
    ├── api-designer/
    │   └── analysis.md
    └── ...
```

### Output Files

```
{projectRoot}/.workflow/issues/
├── solutions/
│   └── ISS-YYYYMMDD-001.jsonl  # Created solution (JSONL)
└── (managed by ccw issue CLI)
```

## Post-Phase Update

After brainstorm conversion:
- Issue created with `status: planned`, enriched context from brainstorm session
- Solution bound with structured tasks derived from idea.next_steps
- Report: issue ID, solution ID, task count, idea score
- Recommend next step: Form execution queue via Phase 4
# Phase 4: Form Execution Queue

## Overview

Queue formation command using **issue-queue-agent** that analyzes all bound solutions, resolves **inter-solution** conflicts, and creates an ordered execution queue at **solution level**.

**Design Principle**: Queue items are **solutions**, not individual tasks. Each executor receives a complete solution with all its tasks.

## Prerequisites

- Issues with `status: planned` and `bound_solution_id` exist
- Solutions written in `{projectRoot}/.workflow/issues/solutions/{issue-id}.jsonl`
- `ccw issue` CLI available

## Auto Mode

When `--yes` or `-y`: Auto-confirm queue formation, use recommended conflict resolutions.

## Core Capabilities

- **Agent-driven**: issue-queue-agent handles all ordering logic
- **Solution-level granularity**: Queue items are solutions, not tasks
- **Conflict clarification**: High-severity conflicts prompt user decision
- **Semantic priority**: Calculated per solution (0.0-1.0)
- **Group assignment**: Parallel/Sequential execution groups for solutions

## Core Guidelines

**Data Access Principle**: Issues and queue files can grow very large. To avoid context overflow:

| Operation | Correct | Incorrect |
|-----------|---------|-----------|
| List issues (brief) | `ccw issue list --status planned --brief` | `Read('issues.jsonl')` |
| **Batch solutions (NEW)** | `ccw issue solutions --status planned --brief` | Loop `ccw issue solution <id>` |
| List queue (brief) | `ccw issue queue --brief` | `Read('queues/*.json')` |
| Read issue details | `ccw issue status <id> --json` | `Read('issues.jsonl')` |
| Get next item | `ccw issue next --json` | `Read('queues/*.json')` |
| Update status | `ccw issue update <id> --status ...` | Direct file edit |
| Sync from queue | `ccw issue update --from-queue` | Direct file edit |
| Read solution (single) | `ccw issue solution <id> --brief` | `Read('solutions/*.jsonl')` |

**Output Options**:
- `--brief`: JSON with minimal fields (id, status, counts)
- `--json`: Full JSON (agent use only)

**Orchestration vs Execution**:
- **Command (orchestrator)**: Use `--brief` for minimal context
- **Agent (executor)**: Fetch full details → `ccw issue status <id> --json`

**ALWAYS** use CLI commands for CRUD operations. **NEVER** read entire `issues.jsonl` or `queues/*.json` directly.

## Flags

| Flag | Description | Default |
|------|-------------|---------|
| `--queues <n>` | Number of parallel queues | 1 |
| `--issue <id>` | Form queue for specific issue only | All planned |
| `--append <id>` | Append issue to active queue (don't create new) | - |
| `--force` | Skip active queue check, always create new queue | false |

## CLI Subcommands Reference

```bash
ccw issue queue list                          List all queues with status
ccw issue queue add <issue-id>                Add issue to queue (interactive if active queue exists)
ccw issue queue add <issue-id> -f             Add to new queue without prompt (force)
ccw issue queue merge <src> --queue <target>  Merge source queue into target queue
ccw issue queue switch <queue-id>             Switch active queue
ccw issue queue archive                       Archive current queue
ccw issue queue delete <queue-id>             Delete queue from history
```

## Execution Steps

### Step 4.1: Solution Loading & Distribution

**Data Loading:**
- Use `ccw issue solutions --status planned --brief` to get all planned issues with solutions in **one call**
- Returns: Array of `{ issue_id, solution_id, is_bound, task_count, files_touched[], priority }`
- If no bound solutions found → display message, suggest running plan/convert/brainstorm first

**Build Solution Objects:**
```javascript
// Single CLI call replaces N individual queries
const result = Bash(`ccw issue solutions --status planned --brief`).trim();
const solutions = result ? JSON.parse(result) : [];

if (solutions.length === 0) {
  console.log('No bound solutions found. Run /issue:plan first.');
  return;
}

// solutions already in correct format:
// { issue_id, solution_id, is_bound, task_count, files_touched[], priority }
```

**Multi-Queue Distribution** (if `--queues > 1`):
- Use `files_touched` from brief output for partitioning
- Group solutions with overlapping files into the same queue

**Output:** Array of solution objects (or N arrays if multi-queue)
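
Grouping solutions by overlapping `files_touched` amounts to a transitive merge: any two solutions sharing a file must land in the same queue. A sketch of that partitioning (the `groupByFileOverlap` name is illustrative, not the orchestrator's actual helper):

```javascript
// Partition solutions so that any two with a shared file end up in
// the same group, including transitive overlaps.
function groupByFileOverlap(solutions) {
  const groups = [];
  for (const sol of solutions) {
    const hits = groups.filter(g => sol.files_touched.some(f => g.files.has(f)));
    const merged = { files: new Set(sol.files_touched), members: [sol] };
    for (const g of hits) {
      g.files.forEach(f => merged.files.add(f));
      merged.members.push(...g.members);
    }
    // Remove the absorbed groups and add the merged one
    for (const g of hits) groups.splice(groups.indexOf(g), 1);
    groups.push(merged);
  }
  return groups.map(g => g.members);
}

const parts = groupByFileOverlap([
  { solution_id: 'S-1', files_touched: ['a.ts'] },
  { solution_id: 'S-2', files_touched: ['b.ts'] },
  { solution_id: 'S-3', files_touched: ['a.ts', 'c.ts'] }
]);
console.log(parts.length); // → 2 (S-1 and S-3 share a.ts; S-2 stands alone)
```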

### Step 4.2: Agent-Driven Queue Formation

**Generate Queue IDs** (command layer, pass to agent):
```javascript
const timestamp = new Date().toISOString().replace(/[-:T]/g, '').slice(0, 14);
const numQueues = args.queues || 1;
const queueIds = numQueues === 1
  ? [`QUE-${timestamp}`]
  : Array.from({length: numQueues}, (_, i) => `QUE-${timestamp}-${i + 1}`);
```

**Agent Prompt** (same for each queue, with assigned solutions):
```
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/issue-queue-agent.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
3. Read: ${projectRoot}/.workflow/project-guidelines.json

---

## Order Solutions into Execution Queue

**Queue ID**: ${queueId}
**Solutions**: ${solutions.length} from ${issues.length} issues
**Project Root**: ${cwd}
**Queue Index**: ${queueIndex} of ${numQueues}

### Input
${JSON.stringify(solutions)}
// Each object: { issue_id, solution_id, task_count, files_touched[], priority }

### Workflow

Step 1: Build dependency graph from solutions (nodes=solutions, edges=file conflicts via files_touched)
Step 2: Use Gemini CLI for conflict analysis (5 types: file, API, data, dependency, architecture)
Step 3: For high-severity conflicts without clear resolution → add to `clarifications`
Step 4: Calculate semantic priority (base from issue priority + task_count boost)
Step 5: Assign execution groups: P* (parallel, no overlaps) / S* (sequential, shared files)
Step 6: Write queue JSON + update index

### Output Requirements

**Write files** (exactly 2):
- `${projectRoot}/.workflow/issues/queues/${queueId}.json` - Full queue with solutions, conflicts, groups
- `${projectRoot}/.workflow/issues/queues/index.json` - Update with new queue entry

**Return JSON**:
\`\`\`json
{
  "queue_id": "${queueId}",
  "total_solutions": N,
  "total_tasks": N,
  "execution_groups": [{"id": "P1", "type": "parallel", "count": N}],
  "issues_queued": ["ISS-xxx"],
  "clarifications": [{"conflict_id": "CFT-1", "question": "...", "options": [...]}]
}
\`\`\`

### Rules
- Solution granularity (NOT individual tasks)
- Queue Item ID format: S-1, S-2, S-3, ...
- Use provided Queue ID (do NOT generate new)
- `clarifications` only present if high-severity unresolved conflicts exist
- Use `files_touched` from input (already extracted by orchestrator)

### Done Criteria
- [ ] Queue JSON written with all solutions ordered
- [ ] Index updated with active_queue_id
- [ ] No circular dependencies
- [ ] Parallel groups have no file overlaps
- [ ] Return JSON matches required shape
```

**Launch Agents** (parallel if multi-queue):
```javascript
const numQueues = args.queues || 1;

if (numQueues === 1) {
  // Single queue: single agent call
  const agentId = spawn_agent({
    message: buildPrompt(queueIds[0], solutions)
  });

  // Wait for completion
  const result = wait({
    ids: [agentId],
    timeout_ms: 600000 // 10 minutes
  });

  // Close agent
  close_agent({ id: agentId });
} else {
  // Multi-queue: parallel agent calls
  const agentIds = [];

  // Step 1: Spawn all agents
  solutionGroups.forEach((group, i) => {
    const agentId = spawn_agent({
      message: buildPrompt(queueIds[i], group, i + 1, numQueues)
    });
    agentIds.push(agentId);
  });

  // Step 2: Batch wait for all agents
  const results = wait({
    ids: agentIds,
    timeout_ms: 600000 // 10 minutes
  });

  if (results.timed_out) {
    console.log('Some queue agents timed out, continuing with completed results');
  }

  // Step 3: Collect results (see Step 4.3 for clarification handling)

  // Step 4: Batch cleanup - close all agents
  agentIds.forEach(id => close_agent({ id }));
}
```

**Multi-Queue Index Update:**
- First queue sets `active_queue_id`
- All queues added to `queues` array with `queue_group` field linking them

### Step 4.3: Conflict Clarification

**Collect Agent Results** (multi-queue):
```javascript
// Collect clarifications from all agents
const allClarifications = [];
agentIds.forEach((agentId, i) => {
  const agentStatus = results.status[agentId];
  if (agentStatus && agentStatus.completed) {
    const parsed = JSON.parse(agentStatus.completed);
    for (const c of parsed.clarifications || []) {
      allClarifications.push({ ...c, queue_id: queueIds[i], agent_id: agentId });
    }
  }
});
```

**Check Agent Return:**
- Parse agent result JSON (or all results if multi-queue)
- If any `clarifications` array exists and is non-empty → user decision required

**Clarification Flow:**
```javascript
if (allClarifications.length > 0) {
  for (const clarification of allClarifications) {
    // Present to user via ASK_USER
    const answer = ASK_USER([{
      id: clarification.conflict_id,
      type: "select",
      prompt: `[${clarification.queue_id}] ${clarification.question}`,
      options: clarification.options
    }]); // BLOCKS (wait for user response)

    // Re-spawn agent with user decision (original agent already closed)
    // Create new agent with previous context + resolution
    const resolveAgentId = spawn_agent({
      message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/issue-queue-agent.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
3. Read: ${projectRoot}/.workflow/project-guidelines.json

---

## Conflict Resolution Update

**Queue ID**: ${clarification.queue_id}
**Conflict**: ${clarification.conflict_id} resolved: ${answer.selected}

### Instructions
1. Read existing queue file: ${projectRoot}/.workflow/issues/queues/${clarification.queue_id}.json
2. Update conflict resolution with user decision
3. Re-order affected solutions if needed
4. Write updated queue file
`
    });

    const resolveResult = wait({
      ids: [resolveAgentId],
      timeout_ms: 300000 // 5 minutes
    });

    close_agent({ id: resolveAgentId });
  }
}
```
|
||||
|
||||
### Step 4.4: Status Update & Summary
|
||||
|
||||
**Status Update** (MUST use CLI command, NOT direct file operations):
|
||||
|
||||
```bash
|
||||
# Option 1: Batch update from queue (recommended)
|
||||
ccw issue update --from-queue [queue-id] --json
|
||||
ccw issue update --from-queue --json # Use active queue
|
||||
ccw issue update --from-queue QUE-xxx --json # Use specific queue
|
||||
|
||||
# Option 2: Individual issue update
|
||||
ccw issue update <issue-id> --status queued
|
||||
```
|
||||
|
||||
**IMPORTANT**: Do NOT directly modify `issues.jsonl`. Always use CLI command to ensure proper validation and history tracking.
|
||||
|
||||
**Output** (JSON):
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"queue_id": "QUE-xxx",
|
||||
"queued": ["ISS-001", "ISS-002"],
|
||||
"queued_count": 2,
|
||||
"unplanned": ["ISS-003"],
|
||||
"unplanned_count": 1
|
||||
}
|
||||
```
|
||||
|
||||
**Behavior:**
|
||||
- Updates issues in queue to `status: 'queued'` (skips already queued/executing/completed)
|
||||
- Identifies planned issues with `bound_solution_id` NOT in queue → `unplanned` array
|
||||
- Optional `queue-id`: defaults to active queue if omitted
**Summary Output:**
- Display queue ID, solution count, task count
- Show unplanned issues (planned but NOT in queue)
- Show next step: `/issue:execute`

### Step 4.5: Active Queue Check & Decision

**After agent completes, check for active queue:**

```bash
ccw issue queue list --brief
```

**Decision:**
- If `active_queue_id` is null → `ccw issue queue switch <new-queue-id>` (activate new queue)
- If active queue exists → Use **ASK_USER** to prompt user

**ASK_USER:**
```javascript
ASK_USER([{
  id: "queue_action",
  type: "select",
  prompt: "Active queue exists. How would you like to proceed?",
  options: [
    { label: "Merge into existing queue", description: "Add new items to active queue, delete new queue" },
    { label: "Use new queue", description: "Switch to new queue, keep existing in history" },
    { label: "Cancel", description: "Delete new queue, keep existing active" }
  ]
}]); // BLOCKS (wait for user response)
```

**Action Commands:**

| User Choice | Commands |
|-------------|----------|
| **Merge into existing** | `ccw issue queue merge <new-queue-id> --queue <active-queue-id>` then `ccw issue queue delete <new-queue-id>` |
| **Use new queue** | `ccw issue queue switch <new-queue-id>` |
| **Cancel** | `ccw issue queue delete <new-queue-id>` |
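The choice-to-command mapping above can be sketched as a small dispatch helper (illustrative only; the command strings follow the table, and the runner is injected so the mapping stays testable without a real `ccw` binary):

```javascript
// Map the ASK_USER selection to the queue commands from the table above.
// Sketch under assumptions: labels match the ASK_USER options verbatim,
// and `run` is whatever shell executor the orchestrator provides.
function resolveQueueAction(choice, newQueueId, activeQueueId, run) {
  const plans = {
    'Merge into existing queue': [
      `ccw issue queue merge ${newQueueId} --queue ${activeQueueId}`,
      `ccw issue queue delete ${newQueueId}`
    ],
    'Use new queue': [`ccw issue queue switch ${newQueueId}`],
    'Cancel': [`ccw issue queue delete ${newQueueId}`]
  }
  const commands = plans[choice]
  if (!commands) throw new Error(`Unknown choice: ${choice}`)
  commands.forEach(cmd => run(cmd))
  return commands
}
```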
## Storage Structure (Queue History)

```
{projectRoot}/.workflow/issues/
├── issues.jsonl              # All issues (one per line)
├── queues/                   # Queue history directory
│   ├── index.json            # Queue index (active + history)
│   ├── {queue-id}.json       # Individual queue files
│   └── ...
└── solutions/
    ├── {issue-id}.jsonl      # Solutions for issue
    └── ...
```

### Queue Index Schema

```json
{
  "active_queue_id": "QUE-20251227-143000-1",
  "active_queue_group": "QGR-20251227-143000",
  "queues": [
    {
      "id": "QUE-20251227-143000-1",
      "queue_group": "QGR-20251227-143000",
      "queue_index": 1,
      "total_queues": 3,
      "status": "active",
      "issue_ids": ["ISS-xxx", "ISS-yyy"],
      "total_solutions": 3,
      "completed_solutions": 1,
      "created_at": "2025-12-27T14:30:00Z"
    }
  ]
}
```

**Multi-Queue Fields:**
- `queue_group`: Links queues created in same batch (format: `QGR-{timestamp}`)
- `queue_index`: Position in group (1-based)
- `total_queues`: Total queues in group
- `active_queue_group`: Current active group (for multi-queue execution)
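A reader for the multi-queue fields above can be sketched as follows (field names follow the queue index schema; an illustrative helper, not the canonical implementation):

```javascript
// Return the active group's queues in execution order (queue_index ascending).
// `index` is the parsed content of queues/index.json.
function activeGroupQueues(index) {
  return index.queues
    .filter(q => q.queue_group === index.active_queue_group)
    .sort((a, b) => a.queue_index - b.queue_index)
}
```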
**Note**: Queue file schema is produced by `issue-queue-agent`. See agent documentation for details.

## Error Handling

| Error | Resolution |
|-------|------------|
| No bound solutions | Display message, suggest phases 1-3 (plan/convert/brainstorm) |
| Circular dependency | List cycles, abort queue formation |
| High-severity conflict | Return `clarifications`, prompt user decision |
| User cancels clarification | Abort queue formation |
| **index.json not updated** | Auto-fix: Set active_queue_id to new queue |
| **Queue file missing solutions** | Abort with error, agent must regenerate |
| **User cancels queue add** | Display message, return without changes |
| **Merge with empty source** | Skip merge, display warning |
| **All items duplicate** | Skip merge, display "All items already exist" |

## Quality Checklist

Before completing, verify:

- [ ] All planned issues with `bound_solution_id` are included
- [ ] Queue JSON written to `queues/{queue-id}.json` (N files if multi-queue)
- [ ] Index updated in `queues/index.json` with `active_queue_id`
- [ ] Multi-queue: All queues share same `queue_group`
- [ ] No circular dependencies in solution DAG
- [ ] All conflicts resolved (auto or via user clarification)
- [ ] Parallel groups have no file overlaps
- [ ] Cross-queue: No file overlaps between queues
- [ ] Issue statuses updated to `queued`
- [ ] All spawned agents are properly closed via close_agent

## Post-Phase Update

After queue formation:
- All planned issues updated to `status: queued`
- Queue files written and index updated
- Report: queue ID(s), solution count, task count, execution groups
- Recommend next step: `/issue:execute` to begin execution

@@ -86,7 +86,7 @@ Cross-reference the task description against these documents for completeness.
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/requirements-analyst.md
2. Read: ${projectRoot}/.workflow/project-tech.json (if exists)
3. Read: ${projectRoot}/.workflow/specs/*.md (if exists)
4. Read: ${projectRoot}/.workflow/.cycle/${cycleId}.progress/coordination/feedback.md (if exists)

---

@@ -169,7 +169,7 @@ function spawnEPAgent(cycleId, state, progressDir) {
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/exploration-planner.md
2. Read: ${projectRoot}/.workflow/project-tech.json
3. Read: ${projectRoot}/.workflow/specs/*.md
4. Read: ${progressDir}/ra/requirements.md

---

@@ -1,442 +0,0 @@
---
name: plan-converter
description: Convert any planning/analysis/brainstorm output to .task/*.json multi-file format. Supports roadmap.jsonl, plan.json, plan-note.md, conclusions.json, synthesis.json.
argument-hint: "<input-file> [-o <output-file>]"
---

# Plan Converter

## Overview

Converts any planning artifact to **`.task/*.json` multi-file format** — the single standard consumed by `unified-execute-with-file`.

> **Schema**: `cat ~/.ccw/workflows/cli-templates/schemas/task-schema.json`

```bash
# Auto-detect format, output to same directory .task/
/codex:plan-converter ".workflow/.req-plan/RPLAN-auth-2025-01-21/roadmap.jsonl"

# Specify output directory
/codex:plan-converter ".workflow/.planning/CPLAN-xxx/plan-note.md" -o .task/

# Convert brainstorm synthesis
/codex:plan-converter ".workflow/.brainstorm/BS-xxx/synthesis.json"
```

**Supported inputs**: roadmap.jsonl, .task/*.json (per-domain), plan-note.md, conclusions.json, synthesis.json

**Output**: `.task/*.json` (one file per task, in same directory's `.task/` subfolder, or specified `-o` path)

## Task JSON Schema

Each task is a standalone JSON file (`.task/TASK-{id}.json`) following the unified schema:

> **Schema definition**: `cat ~/.ccw/workflows/cli-templates/schemas/task-schema.json`

**Field set used by producers** (plan-converter output):

```
IDENTITY (required): id, title, description
CLASSIFICATION (optional): type, priority, effort, action
SCOPE (optional): scope, excludes, focus_paths
DEPENDENCIES (required): depends_on, parallel_group, inputs, outputs
CONVERGENCE (required): convergence.criteria, convergence.verification, convergence.definition_of_done
FILES (optional): files[].path, files[].action, files[].changes, files[].change, files[].target, files[].conflict_risk
IMPLEMENTATION (optional): implementation[], test.manual_checks, test.success_metrics
PLANNING (optional): reference, rationale, risks
CONTEXT (optional): source.tool, source.session_id, source.original_id, evidence, risk_items
RUNTIME (filled at execution time): status, executed_at, result
```

**File naming**: `TASK-{id}.json` (preserves the original ID prefix: L0-, T1-, IDEA-, etc.)
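For orientation, here is a hypothetical `.task/TASK-001.json` covering the required field groups (IDENTITY, DEPENDENCIES, CONVERGENCE); all values are illustrative, not taken from a real session:

```json
{
  "id": "TASK-001",
  "title": "Add retry to fetch client",
  "description": "Wrap the HTTP client with bounded exponential backoff",
  "depends_on": [],
  "parallel_group": "G1",
  "inputs": [],
  "outputs": ["src/http/client.ts"],
  "convergence": {
    "criteria": ["Failed requests retry at most 3 times with backoff"],
    "verification": "npx jest tests/http/client.test.ts",
    "definition_of_done": "Transient network errors no longer surface to end users"
  }
}
```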
## Target Input

**$ARGUMENTS**

## Execution Process

```
Step 0: Parse arguments, resolve input path
Step 1: Detect input format
Step 2: Parse input → extract raw records
Step 3: Transform → unified task records
Step 4: Validate convergence quality
Step 5: Write .task/*.json output + display summary
```

## Implementation

### Step 0: Parse Arguments

```javascript
const args = $ARGUMENTS
const outputMatch = args.match(/-o\s+(\S+)/)
const outputPath = outputMatch ? outputMatch[1] : null
const inputPath = args.replace(/-o\s+\S+/, '').trim()

// Resolve absolute path
const projectRoot = Bash(`git rev-parse --show-toplevel 2>/dev/null || pwd`).trim()
const resolvedInput = path.isAbsolute(inputPath) ? inputPath : `${projectRoot}/${inputPath}`
```

### Step 1: Detect Format

```javascript
const filename = path.basename(resolvedInput)
const content = Read(resolvedInput)

function detectFormat(filename, content) {
  if (filename === 'roadmap.jsonl') return 'roadmap-jsonl'
  if (filename === 'tasks.jsonl') return 'tasks-jsonl' // legacy JSONL or per-domain
  if (filename === 'plan-note.md') return 'plan-note-md'
  if (filename === 'conclusions.json') return 'conclusions-json'
  if (filename === 'synthesis.json') return 'synthesis-json'
  if (filename.endsWith('.jsonl')) return 'generic-jsonl'
  if (filename.endsWith('.json')) {
    const parsed = JSON.parse(content)
    if (parsed.top_ideas) return 'synthesis-json'
    if (parsed.recommendations && parsed.key_conclusions) return 'conclusions-json'
    if (parsed.tasks && parsed.focus_area) return 'domain-plan-json'
    return 'unknown-json'
  }
  if (filename.endsWith('.md')) return 'plan-note-md'
  return 'unknown'
}
```

**Format Detection Table**:

| Filename | Format ID | Source Tool |
|----------|-----------|------------|
| `roadmap.jsonl` | roadmap-jsonl | req-plan-with-file |
| `tasks.jsonl` (legacy) / `.task/*.json` | tasks-jsonl / task-json | collaborative-plan-with-file |
| `plan-note.md` | plan-note-md | collaborative-plan-with-file |
| `conclusions.json` | conclusions-json | analyze-with-file |
| `synthesis.json` | synthesis-json | brainstorm-with-file |

### Step 2: Parse Input

#### roadmap-jsonl (req-plan-with-file)

```javascript
function parseRoadmapJsonl(content) {
  return content.split('\n')
    .filter(line => line.trim())
    .map(line => JSON.parse(line))
}
// Records have: id (L0/T1), name/title, goal/scope, convergence, depends_on, etc.
```

#### plan-note-md (collaborative-plan-with-file)

```javascript
function parsePlanNoteMd(content) {
  const tasks = []
  // 1. Extract YAML frontmatter for session metadata
  const frontmatter = extractYamlFrontmatter(content)

  // 2. Find all "## 任务池 - {Domain}" (task pool) sections
  const taskPoolSections = content.match(/## 任务池 - .+/g) || []

  // 3. For each section, extract tasks matching:
  //    ### TASK-{ID}: {Title} [{domain}]
  //    - **状态** (status): pending
  //    - **复杂度** (complexity): Medium
  //    - **依赖** (dependencies): TASK-xxx
  //    - **范围** (scope): ...
  //    - **修改点** (change points): `file:location`: change summary
  //    - **冲突风险** (conflict risk): Low
  taskPoolSections.forEach(section => {
    const sectionContent = extractSectionContent(content, section)
    const taskPattern = /### (TASK-\d+):\s+(.+?)\s+\[(.+?)\]/g
    let match
    while ((match = taskPattern.exec(sectionContent)) !== null) {
      const [_, id, title, domain] = match
      const taskBlock = extractTaskBlock(sectionContent, match.index)
      tasks.push({
        id, title, domain,
        ...parseTaskDetails(taskBlock)
      })
    }
  })
  return { tasks, frontmatter }
}
```
#### conclusions-json (analyze-with-file)

```javascript
function parseConclusionsJson(content) {
  const conclusions = JSON.parse(content)
  // Extract from: conclusions.recommendations[]
  //   { action, rationale, priority }
  // Also available: conclusions.key_conclusions[]
  return conclusions
}
```

#### synthesis-json (brainstorm-with-file)

```javascript
function parseSynthesisJson(content) {
  const synthesis = JSON.parse(content)
  // Extract from: synthesis.top_ideas[]
  //   { title, description, score, feasibility, next_steps, key_strengths, main_challenges }
  // Also available: synthesis.recommendations
  return synthesis
}
```

### Step 3: Transform to Unified Records

#### roadmap-jsonl → unified

```javascript
function transformRoadmap(records, sessionId) {
  return records.map(rec => {
    // roadmap.jsonl now uses unified field names (title, description, source)
    // Passthrough is mostly direct
    return {
      id: rec.id,
      title: rec.title,
      description: rec.description,
      type: rec.type || 'feature',
      effort: rec.effort,
      scope: rec.scope,
      excludes: rec.excludes,
      depends_on: rec.depends_on || [],
      parallel_group: rec.parallel_group,
      inputs: rec.inputs,
      outputs: rec.outputs,
      convergence: rec.convergence, // already unified format
      risk_items: rec.risk_items,
      source: rec.source || {
        tool: 'req-plan-with-file',
        session_id: sessionId,
        original_id: rec.id
      }
    }
  })
}
```

#### plan-note-md → unified

```javascript
function transformPlanNote(parsed) {
  const { tasks, frontmatter } = parsed
  return tasks.map(task => ({
    id: task.id,
    title: task.title,
    description: task.scope || task.title,
    type: task.type || inferTypeFromTitle(task.title),
    priority: task.priority || inferPriorityFromEffort(task.effort),
    effort: task.effort || 'medium',
    scope: task.scope,
    depends_on: task.depends_on || [],
    convergence: task.convergence || generateConvergence(task), // plan-note now has convergence
    files: task.files?.map(f => ({
      path: f.path || f.file,
      action: f.action || 'modify',
      changes: f.changes || (f.change ? [f.change] : undefined),
      change: f.change,
      target: f.target,
      conflict_risk: f.conflict_risk
    })),
    source: {
      tool: 'collaborative-plan-with-file',
      session_id: frontmatter.session_id,
      original_id: task.id
    }
  }))
}

// Generate convergence from task details when source lacks it (legacy fallback)
function generateConvergence(task) {
  return {
    criteria: [
      // Derive testable conditions from scope and files
      // e.g., "Modified files compile without errors"
      // e.g., scope-derived: "API endpoint returns expected response"
    ],
    verification: '// Derive from files — e.g., test commands',
    definition_of_done: '// Derive from scope — business language summary'
  }
}
```

#### conclusions-json → unified

```javascript
function transformConclusions(conclusions) {
  return conclusions.recommendations.map((rec, index) => ({
    id: `TASK-${String(index + 1).padStart(3, '0')}`,
    title: rec.action,
    description: rec.rationale,
    type: inferTypeFromAction(rec.action),
    priority: rec.priority,
    depends_on: [],
    convergence: {
      criteria: generateCriteriaFromAction(rec),
      verification: generateVerificationFromAction(rec),
      definition_of_done: generateDoDFromRationale(rec)
    },
    evidence: conclusions.key_conclusions.map(c => c.point),
    source: {
      tool: 'analyze-with-file',
      session_id: conclusions.session_id
    }
  }))
}

function inferTypeFromAction(action) {
  const lower = action.toLowerCase()
  if (/fix|resolve|repair|修复/.test(lower)) return 'fix'
  if (/refactor|restructure|extract|重构/.test(lower)) return 'refactor'
  if (/add|implement|create|新增|实现/.test(lower)) return 'feature'
  if (/improve|optimize|enhance|优化/.test(lower)) return 'enhancement'
  if (/test|coverage|validate|测试/.test(lower)) return 'testing'
  return 'feature'
}
```
#### synthesis-json → unified

```javascript
function transformSynthesis(synthesis) {
  return synthesis.top_ideas
    .filter(idea => idea.score >= 6) // Only viable ideas (score ≥ 6)
    .map((idea, index) => ({
      id: `IDEA-${String(index + 1).padStart(3, '0')}`,
      title: idea.title,
      description: idea.description,
      type: 'feature',
      priority: idea.score >= 8 ? 'high' : idea.score >= 6 ? 'medium' : 'low',
      effort: idea.feasibility >= 4 ? 'small' : idea.feasibility >= 2 ? 'medium' : 'large',
      depends_on: [],
      convergence: {
        criteria: idea.next_steps || [`${idea.title} implemented and functional`],
        verification: 'Manual validation of feature functionality',
        definition_of_done: idea.description
      },
      risk_items: idea.main_challenges || [],
      source: {
        tool: 'brainstorm-with-file',
        session_id: synthesis.session_id,
        original_id: `idea-${index + 1}`
      }
    }))
}
```

### Step 4: Validate Convergence Quality

All records must pass convergence quality checks before output.

```javascript
function validateConvergence(records) {
  const vaguePatterns = /正常|正确|好|可以|没问题|works|fine|good|correct/i
  const technicalPatterns = /compile|build|lint|npm|npx|jest|tsc|eslint/i
  const issues = []

  records.forEach(record => {
    const c = record.convergence
    if (!c) {
      issues.push({ id: record.id, field: 'convergence', issue: 'Missing entirely' })
      return
    }
    if (!c.criteria?.length) {
      issues.push({ id: record.id, field: 'criteria', issue: 'Empty criteria array' })
    }
    c.criteria?.forEach((criterion, i) => {
      if (vaguePatterns.test(criterion) && criterion.length < 15) {
        issues.push({ id: record.id, field: `criteria[${i}]`, issue: `Too vague: "${criterion}"` })
      }
    })
    if (!c.verification || c.verification.length < 10) {
      issues.push({ id: record.id, field: 'verification', issue: 'Too short or missing' })
    }
    if (technicalPatterns.test(c.definition_of_done)) {
      issues.push({ id: record.id, field: 'definition_of_done', issue: 'Should be business language' })
    }
  })

  return issues
}

// Auto-fix strategy:
// | Issue                | Fix                                          |
// |----------------------|----------------------------------------------|
// | Missing convergence  | Generate from title + description + files    |
// | Vague criteria       | Replace with specific condition from context |
// | Short verification   | Expand with file-based test suggestion       |
// | Technical DoD        | Rewrite in business language                 |
```
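The auto-fix strategy can be sketched as a pass over the validation issues (illustrative heuristics only; vague criteria and technical DoD are flagged for manual rewrite here, since rewriting them well needs task context):

```javascript
// Apply the auto-fix table to validation issues: regenerate missing
// convergence and pad short verification with a file-based suggestion.
// Issues it cannot fix automatically are returned for manual handling.
function autoFixConvergence(records, issues) {
  const byId = new Map(records.map(r => [r.id, r]))
  const unfixed = []
  for (const issue of issues) {
    const rec = byId.get(issue.id)
    if (!rec) continue
    if (issue.field === 'convergence') {
      // Missing convergence: generate from title + description + files
      rec.convergence = {
        criteria: [`${rec.title} implemented as described`],
        verification: `Review changes under ${rec.files?.[0]?.path || 'the task scope'}`,
        definition_of_done: rec.description || rec.title
      }
    } else if (issue.field === 'verification') {
      // Short verification: expand with a file-based test suggestion
      rec.convergence.verification =
        `Run the project test suite covering ${rec.files?.[0]?.path || rec.title}`
    } else {
      unfixed.push(issue) // vague criteria / technical DoD: report instead
    }
  }
  return unfixed
}
```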
### Step 5: Write .task/*.json Output & Summary

```javascript
// Determine output directory
const outputDir = outputPath
  || `${path.dirname(resolvedInput)}/.task`

// Create output directory
Bash(`mkdir -p ${outputDir}`)

// Clean records: remove undefined/null optional fields
const cleanedRecords = records.map(rec => {
  const clean = { ...rec }
  Object.keys(clean).forEach(key => {
    if (clean[key] === undefined || clean[key] === null) delete clean[key]
    if (Array.isArray(clean[key]) && clean[key].length === 0 && key !== 'depends_on') delete clean[key]
  })
  return clean
})

// Write individual task JSON files
cleanedRecords.forEach(record => {
  const filename = `${record.id}.json`
  Write(`${outputDir}/${filename}`, JSON.stringify(record, null, 2))
})

// Display summary
// | Source          | Format            | Records | Issues |
// |-----------------|-------------------|---------|--------|
// | roadmap.jsonl   | roadmap-jsonl     | 4       | 0      |
//
// Output: .workflow/.req-plan/RPLAN-xxx/.task/ (4 files)
// Records: 4 tasks with convergence criteria
// Quality: All convergence checks passed
```

---

## Conversion Matrix

| Source | Source Tool | ID Pattern | Has Convergence | Has Files | Has Priority | Has Source |
|--------|-----------|------------|-----------------|-----------|--------------|-----------|
| roadmap.jsonl (progressive) | req-plan | L0-L3 | **Yes** | No | No | **Yes** |
| roadmap.jsonl (direct) | req-plan | T1-TN | **Yes** | No | No | **Yes** |
| .task/TASK-*.json (per-domain) | collaborative-plan | TASK-NNN | **Yes** | **Yes** (detailed) | Optional | **Yes** |
| plan-note.md | collaborative-plan | TASK-NNN | **Generate** | **Yes** (from 修改点 entries) | From effort | No |
| conclusions.json | analyze | TASK-NNN | **Generate** | No | **Yes** | No |
| synthesis.json | brainstorm | IDEA-NNN | **Generate** | No | From score | No |

**Legend**: Yes = source already has it, Generate = converter produces it, No = not available

## Error Handling

| Situation | Action |
|-----------|--------|
| Input file not found | Report error, suggest checking path |
| Unknown format | Report error, list supported formats |
| Empty input | Report error, no output file created |
| Convergence validation fails | Auto-fix where possible, report remaining issues |
| Partial parse failure | Convert valid records, report skipped items |
| Output file exists | Overwrite with warning message |
| plan-note.md has empty sections | Skip empty domains, report in summary |

---

**Now execute plan-converter for**: $ARGUMENTS
|
||||
@@ -268,7 +268,7 @@ const hasCodebase = bash(`
|
||||
// 2. Codebase Exploration (only when hasCodebase !== 'none')
|
||||
if (hasCodebase !== 'none') {
|
||||
// Read project metadata (if exists)
|
||||
// .workflow/project-tech.json, .workflow/project-guidelines.json
|
||||
// .workflow/project-tech.json, .workflow/specs/*.md
|
||||
|
||||
// Search codebase for requirement-relevant context
|
||||
// Use: mcp__ace-tool__search_context, Grep, Glob, Read
|
||||
|
||||
@@ -304,7 +304,7 @@ const agentId = spawn_agent({
|
||||
### MANDATORY FIRST STEPS (Agent Execute)
|
||||
1. **Read role definition**: ~/.codex/agents/{agent-type}.md (MUST read first)
|
||||
2. Read: ${projectRoot}/.workflow/project-tech.json
|
||||
3. Read: ${projectRoot}/.workflow/project-guidelines.json
|
||||
3. Read: ${projectRoot}/.workflow/specs/*.md
|
||||
|
||||
---
|
||||
|
||||
|
||||
@@ -95,7 +95,7 @@ dimensions.forEach(dimension => {
|
||||
4. Validate file access: bash(ls -la ${targetFiles.join(' ')})
|
||||
5. Execute: cat ~/.ccw/workflows/cli-templates/schemas/review-dimension-results-schema.json (get output schema reference)
|
||||
6. Read: ${projectRoot}/.workflow/project-tech.json (technology stack and architecture context)
|
||||
7. Read: ${projectRoot}/.workflow/project-guidelines.json (user-defined constraints and conventions to validate against)
|
||||
7. Read: ${projectRoot}/.workflow/specs/*.md (user-defined constraints and conventions to validate against)
|
||||
|
||||
---
|
||||
|
||||
@@ -218,7 +218,7 @@ dimensions.forEach(dimension => {
|
||||
5. Read review state: ${reviewStateJsonPath}
|
||||
6. Execute: cat ~/.ccw/workflows/cli-templates/schemas/review-dimension-results-schema.json (get output schema reference)
|
||||
7. Read: ${projectRoot}/.workflow/project-tech.json (technology stack and architecture context)
|
||||
8. Read: ${projectRoot}/.workflow/project-guidelines.json (user-defined constraints and conventions to validate against)
|
||||
8. Read: ${projectRoot}/.workflow/specs/*.md (user-defined constraints and conventions to validate against)
|
||||
|
||||
---
|
||||
|
||||
@@ -337,7 +337,7 @@ const deepDiveAgentId = spawn_agent({
|
||||
5. Read test files: bash(find ${projectDir}/tests -name "*${basename(file, '.ts')}*" -type f)
|
||||
6. Execute: cat ~/.ccw/workflows/cli-templates/schemas/review-deep-dive-results-schema.json (get output schema reference)
|
||||
7. Read: ${projectRoot}/.workflow/project-tech.json (technology stack and architecture context)
|
||||
8. Read: ${projectRoot}/.workflow/project-guidelines.json (user-defined constraints for remediation compliance)
|
||||
8. Read: ${projectRoot}/.workflow/specs/*.md (user-defined constraints for remediation compliance)
|
||||
|
||||
---
|
||||
|
||||
|
||||
@@ -90,7 +90,7 @@ selectedFindings.forEach(finding => {
|
||||
5. Read test files: bash(find ${projectDir}/tests -name "*${basename(finding.file, '.ts')}*" -type f)
|
||||
6. Execute: cat ~/.ccw/workflows/cli-templates/schemas/review-deep-dive-results-schema.json (get output schema reference)
|
||||
7. Read: ${projectRoot}/.workflow/project-tech.json (technology stack and architecture context)
|
||||
8. Read: ${projectRoot}/.workflow/project-guidelines.json (user-defined constraints for remediation compliance)
|
||||
8. Read: ${projectRoot}/.workflow/specs/*.md (user-defined constraints for remediation compliance)
|
||||
|
||||
---
|
||||
|
||||
@@ -201,7 +201,7 @@ selectedFindings.forEach(finding => {
|
||||
5. Read test files: bash(find ${workflowDir}/tests -name "*${basename(finding.file, '.ts')}*" -type f)
|
||||
6. Execute: cat ~/.ccw/workflows/cli-templates/schemas/review-deep-dive-results-schema.json (get output schema reference)
|
||||
7. Read: ${projectRoot}/.workflow/project-tech.json (technology stack and architecture context)
|
||||
8. Read: ${projectRoot}/.workflow/project-guidelines.json (user-defined constraints for remediation compliance)
|
||||
8. Read: ${projectRoot}/.workflow/specs/*.md (user-defined constraints for remediation compliance)
|
||||
|
||||
---
|
||||
|
||||
|
||||
@@ -106,7 +106,7 @@ const agentId = spawn_agent({
|
||||
### MANDATORY FIRST STEPS (Agent Execute)
|
||||
1. **Read role definition**: ~/.codex/agents/cli-planning-agent.md (MUST read first)
|
||||
2. Read: ${projectRoot}/.workflow/project-tech.json
|
||||
3. Read: ${projectRoot}/.workflow/project-guidelines.json
|
||||
3. Read: ${projectRoot}/.workflow/specs/*.md
|
||||
|
||||
---
|
||||
|
||||
|
||||
@@ -61,7 +61,7 @@ const execAgentId = spawn_agent({
|
||||
### MANDATORY FIRST STEPS (Agent Execute)
|
||||
1. **Read role definition**: ~/.codex/agents/cli-execution-agent.md (MUST read first)
|
||||
2. Read: ${projectRoot}/.workflow/project-tech.json
|
||||
3. Read: ${projectRoot}/.workflow/project-guidelines.json
|
||||
3. Read: ${projectRoot}/.workflow/specs/*.md
|
||||
|
||||
---
|
||||
|
||||
|
||||
@@ -1,427 +0,0 @@
|
||||
---
|
||||
name: team-planex
|
||||
description: 2-member plan-and-execute pipeline with per-issue beat pipeline for concurrent planning and execution. Planner decomposes requirements into issues, generates solutions, writes artifacts. Executor implements solutions via configurable backends (agent/codex/gemini). Triggers on "team planex".
|
||||
allowed-tools: spawn_agent, wait, send_input, close_agent, AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep
|
||||
argument-hint: "<issue-ids|--text 'description'|--plan path> [--exec=agent|codex|gemini|auto] [-y]"
|
||||
---
|
||||
|
||||
# Team PlanEx
|
||||
|
||||
2 成员边规划边执行团队。通过逐 Issue 节拍流水线实现 planner 和 executor 并行工作:planner 每完成一个 issue 的 solution 后输出 ISSUE_READY 信号,orchestrator 立即 spawn executor agent 处理该 issue,同时 send_input 让 planner 继续下一 issue。
|
||||
|
||||
## Architecture Overview
|
||||
|
||||
```
|
||||
┌──────────────────────────────────────────────┐
|
||||
│ Orchestrator (this file) │
|
||||
│ → Parse input → Spawn planner → Spawn exec │
|
||||
└────────────────┬─────────────────────────────┘
|
||||
│ Per-Issue Beat Pipeline
|
||||
┌───────┴───────┐
|
||||
↓ ↓
|
||||
┌─────────┐ ┌──────────┐
|
||||
│ planner │ │ executor │
|
||||
│ (plan) │ │ (impl) │
|
||||
└─────────┘ └──────────┘
|
||||
│ │
|
||||
issue-plan-agent code-developer
|
||||
(or codex/gemini CLI)
|
||||
```
|
||||
|
||||
## Agent Registry
|
||||
|
||||
| Agent | Role File | Responsibility | New/Existing |
|
||||
|-------|-----------|----------------|--------------|
|
||||
| `planex-planner` | `.codex/skills/team-planex/agents/planex-planner.md` | 需求拆解 → issue 创建 → 方案设计 → 冲突检查 → 逐 issue 派发 | New (skill-specific) |
|
||||
| `planex-executor` | `.codex/skills/team-planex/agents/planex-executor.md` | 加载 solution → 代码实现 → 测试 → 提交 | New (skill-specific) |
|
||||
| `issue-plan-agent` | `~/.codex/agents/issue-plan-agent.md` | ACE exploration + solution generation + binding | Existing |
|
||||
| `code-developer` | `~/.codex/agents/code-developer.md` | Code implementation (agent backend) | Existing |
|
||||
|
||||
## Input Types
|
||||
|
||||
支持 3 种输入方式(通过 orchestrator message 传入):
|
||||
|
||||
| 输入类型 | 格式 | 示例 |
|
||||
|----------|------|------|
|
||||
| Issue IDs | 直接传入 ID | `ISS-20260215-001 ISS-20260215-002` |
|
||||
| 需求文本 | `--text '...'` | `--text '实现用户认证模块'` |
|
||||
| Plan 文件 | `--plan path` | `--plan plan/2026-02-15-auth.md` |
|
||||
|
||||
## Execution Method Selection
|
||||
|
||||
支持 3 种执行后端:
|
||||
|
||||
| Executor | 后端 | 适用场景 |
|
||||
|----------|------|----------|
|
||||
| `agent` | code-developer subagent | 简单任务、同步执行 |
|
||||
| `codex` | `ccw cli --tool codex --mode write` | 复杂任务、后台执行 |
|
||||
| `gemini` | `ccw cli --tool gemini --mode write` | 分析类任务、后台执行 |
|
||||
|
||||
## Phase Execution
### Phase 1: Input Parsing & Preference Collection

Parse user arguments and determine execution configuration.

```javascript
// Parse input from orchestrator message
const args = orchestratorMessage
const issueIds = args.match(/ISS-\d{8}-\d{6}/g) || []
const textMatch = args.match(/--text\s+['"]([^'"]+)['"]/)
const planMatch = args.match(/--plan\s+(\S+)/)
const autoYes = /\b(-y|--yes)\b/.test(args)
const explicitExec = args.match(/--exec[=\s]+(agent|codex|gemini|auto)/i)?.[1]

let executionConfig

if (explicitExec) {
  executionConfig = {
    executionMethod: explicitExec.charAt(0).toUpperCase() + explicitExec.slice(1),
    codeReviewTool: "Skip"
  }
} else if (autoYes) {
  executionConfig = { executionMethod: "Auto", codeReviewTool: "Skip" }
} else {
  // Interactive: ask user for preferences
  // (orchestrator handles user interaction directly)
}

// Initialize session directory for artifacts
const slug = (issueIds[0] || 'batch').replace(/[^a-zA-Z0-9-]/g, '')
const dateStr = new Date().toISOString().slice(0,10).replace(/-/g,'')
const sessionId = `PEX-${slug}-${dateStr}`
const sessionDir = `.workflow/.team/${sessionId}`
shell(`mkdir -p "${sessionDir}/artifacts/solutions"`)
```

### Phase 2: Planning (Planner Agent — Per-Issue Beat)

Spawn planner agent for per-issue planning. Uses send_input for issue-by-issue progression.

```javascript
// Build planner input context
let plannerInput = ""
if (issueIds.length > 0) plannerInput = `issue_ids: ${JSON.stringify(issueIds)}`
else if (textMatch) plannerInput = `text: ${textMatch[1]}`
else if (planMatch) plannerInput = `plan_file: ${planMatch[1]}`

const planner = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: .codex/skills/team-planex/agents/planex-planner.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json

---

Goal: Decompose requirements into executable solutions (per-issue beat)

## Input
${plannerInput}

## Execution Config
execution_method: ${executionConfig.executionMethod}
code_review: ${executionConfig.codeReviewTool}

## Session Dir
session_dir: ${sessionDir}

## Deliverables
For EACH issue, output structured data:

\`\`\`
ISSUE_READY:
{
  "issue_id": "ISS-xxx",
  "solution_id": "SOL-xxx",
  "title": "...",
  "priority": "normal",
  "depends_on": [],
  "solution_file": "${sessionDir}/artifacts/solutions/ISS-xxx.json"
}
\`\`\`

After ALL issues planned, output:
\`\`\`
ALL_PLANNED:
{ "total_issues": N }
\`\`\`

## Quality bar
- Every issue has a bound solution
- Solution artifact written to file before output
- Inline conflict check determines depends_on
`
})

// Wait for first ISSUE_READY
const firstIssue = wait({ ids: [planner], timeout_ms: 600000 })

if (firstIssue.timed_out) {
  send_input({ id: planner, message: "Please finalize current issue and output ISSUE_READY." })
  const retry = wait({ ids: [planner], timeout_ms: 120000 })
}

// Parse first issue data
const firstIssueData = parseIssueReady(firstIssue.status[planner].completed)
```
|
||||
|
||||
### Phase 3: Per-Issue Beat Pipeline (Planning + Execution Interleaved)

Pipeline: spawn an executor for the current issue while the planner continues with the next issue.

```javascript
const allAgentIds = [planner]
const executorAgents = []
let allPlanned = false
let currentIssueOutput = firstIssue.status[planner].completed

while (!allPlanned) {
  // --- Spawn executor for current issue ---
  const issueData = parseIssueReady(currentIssueOutput)

  if (issueData) {
    const executor = spawn_agent({
      message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: .codex/skills/team-planex/agents/planex-executor.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json

---

Goal: Implement solution for ${issueData.issue_id}

## Task
${JSON.stringify([issueData], null, 2)}

## Execution Config
execution_method: ${executionConfig.executionMethod}
code_review: ${executionConfig.codeReviewTool}

## Solution File
solution_file: ${issueData.solution_file}

## Session Dir
session_dir: ${sessionDir}

## Deliverables
\`\`\`
IMPL_COMPLETE:
issue_id: ${issueData.issue_id}
status: success|failed
test_result: pass|fail
commit: <hash or N/A>
\`\`\`

## Quality bar
- All existing tests pass after implementation
- Code follows project conventions
- One commit per solution
`
    })
    allAgentIds.push(executor)
    executorAgents.push({ id: executor, issueId: issueData.issue_id })
  }

  // --- Check if ALL_PLANNED was in this output ---
  if (currentIssueOutput.includes("ALL_PLANNED")) {
    allPlanned = true
    break
  }

  // --- Tell planner to continue with the next issue ---
  send_input({ id: planner, message: `Issue ${issueData?.issue_id || 'unknown'} dispatched. Continue to next issue.` })

  // Wait for planner (next issue)
  const plannerResult = wait({ ids: [planner], timeout_ms: 600000 })

  if (plannerResult.timed_out) {
    send_input({ id: planner, message: "Please finalize current issue and output results." })
    const retry = wait({ ids: [planner], timeout_ms: 120000 })
    currentIssueOutput = retry.status?.[planner]?.completed || ""
  } else {
    currentIssueOutput = plannerResult.status[planner]?.completed || ""
  }

  // Check for ALL_PLANNED
  if (currentIssueOutput.includes("ALL_PLANNED")) {
    // May contain a final ISSUE_READY before ALL_PLANNED
    const finalIssue = parseIssueReady(currentIssueOutput)
    if (finalIssue) {
      // Spawn one more executor for the last issue
      const lastExec = spawn_agent({
        message: `... same executor spawn as above for ${finalIssue.issue_id} ...`
      })
      allAgentIds.push(lastExec)
      executorAgents.push({ id: lastExec, issueId: finalIssue.issue_id })
    }
    allPlanned = true
  }
}

// Wait for all remaining executor agents
const pendingExecutors = executorAgents.map(e => e.id)

if (pendingExecutors.length > 0) {
  const finalResults = wait({ ids: pendingExecutors, timeout_ms: 900000 })

  if (finalResults.timed_out) {
    const pending = pendingExecutors.filter(id => !finalResults.status[id]?.completed)
    pending.forEach(id => {
      send_input({ id, message: "Please finalize current task and output results." })
    })
    wait({ ids: pending, timeout_ms: 120000 })
  }
}
```

### Phase 4: Result Aggregation & Cleanup

```javascript
// Collect results from all executors (finalResults is the Phase 3 wait result)
const pipelineResults = {
  issues: [],
  totalCompleted: 0,
  totalFailed: 0
}

executorAgents.forEach(({ id, issueId }) => {
  const output = finalResults.status[id]?.completed || ""
  const implResult = parseImplComplete(output)
  pipelineResults.issues.push({
    issueId,
    status: implResult?.status || 'unknown',
    commit: implResult?.commit || 'N/A'
  })
  if (implResult?.status === 'success') pipelineResults.totalCompleted++
  else pipelineResults.totalFailed++
})

// Output final summary
console.log(`
## PlanEx Pipeline Complete

### Summary
- Total Issues: ${executorAgents.length}
- Completed: ${pipelineResults.totalCompleted}
- Failed: ${pipelineResults.totalFailed}

### Issue Details
${pipelineResults.issues.map(i =>
  `- ${i.issueId}: ${i.status} (commit: ${i.commit})`
).join('\n')}
`)

// Cleanup ALL agents
allAgentIds.forEach(id => {
  try { close_agent({ id }) } catch { /* already closed */ }
})
```

## Coordination Protocol

### File-Based Communication

Since Codex agents have isolated contexts, coordination is file-based:

| File | Purpose | Writer | Reader |
|------|---------|--------|--------|
| `{sessionDir}/artifacts/solutions/{issueId}.json` | Solution artifact | planner | executor |
| `{sessionDir}/exec-{issueId}.json` | Execution result | executor | orchestrator |
| `{sessionDir}/pipeline-log.ndjson` | Event log | both | orchestrator |
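
Each record in `pipeline-log.ndjson` is one JSON object per line. A minimal sketch of a line builder — the field names (`ts`, `type`, `issue_id`) are illustrative, not a fixed schema:

```javascript
// Build one NDJSON event line for pipeline-log.ndjson.
// Field names (ts/type/issue_id) are illustrative, not a fixed schema.
function pipelineEvent(type, issueId, detail = {}) {
  return JSON.stringify({
    ts: new Date().toISOString(),
    type,
    issue_id: issueId,
    ...detail
  }) + '\n'
}

// Writers append one line per event, e.g.:
// appendFile(`${sessionDir}/pipeline-log.ndjson`, pipelineEvent('ISSUE_READY', 'ISS-xxx'))
```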

### Solution Artifact Format

```json
{
  "issue_id": "ISS-20260215-001",
  "bound": {
    "id": "SOL-001",
    "title": "Implement auth module",
    "tasks": [...],
    "files_touched": ["src/auth/login.ts"]
  },
  "execution_config": {
    "execution_method": "Agent",
    "code_review": "Skip"
  },
  "timestamp": "2026-02-15T10:00:00Z"
}
```

### Execution Result Format

```json
{
  "issue_id": "ISS-20260215-001",
  "status": "success",
  "executor": "agent",
  "test_result": "pass",
  "commit": "abc123",
  "files_changed": ["src/auth/login.ts", "src/auth/login.test.ts"]
}
```
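
Before aggregating, the orchestrator may want to sanity-check a parsed execution result. A minimal shape check mirroring the fields above — a sketch, not an official schema:

```javascript
// Minimal shape check for an execution result object (fields mirror the example above).
function isValidExecResult(r) {
  return typeof r?.issue_id === 'string'
    && ['success', 'failed'].includes(r?.status)
    && ['pass', 'fail', 'N/A'].includes(r?.test_result ?? 'N/A')
}
```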

## Lifecycle Management

### Timeout Handling

| Timeout Scenario | Action |
|-----------------|--------|
| Planner issue timeout | send_input to urge convergence, retry wait |
| Executor impl timeout | send_input to finalize, record partial result |
| All agents timeout | Log error, abort with partial state |

### Cleanup Protocol

```javascript
// Track all agents created during execution
const allAgentIds = []

// ... (agents added during phase execution) ...

// Final cleanup (end of orchestrator or on error)
allAgentIds.forEach(id => {
  try { close_agent({ id }) } catch { /* already closed */ }
})
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Planner issue failure | Retry once via send_input, then skip issue |
| Executor impl failure | Record failure, continue with next issue |
| No issues created from text | Report to user, abort |
| Solution generation failure | Skip issue, continue with remaining |
| Inline conflict check failure | Use empty depends_on, continue |
| Pipeline stall (no progress) | Timeout handling → urge convergence → abort |
| Missing role file | Log error, use inline fallback instructions |

## Helper Functions

```javascript
function parseIssueReady(output) {
  const match = output.match(/ISSUE_READY:\s*\n([\s\S]*?)(?=\n```|$)/)
  if (!match) return null
  try { return JSON.parse(match[1]) } catch { return null }
}

function parseImplComplete(output) {
  const match = output.match(/IMPL_COMPLETE:\s*\n([\s\S]*?)(?=\n```|$)/)
  if (!match) return null
  // Deliverable may arrive as JSON or as `key: value` lines — accept both
  try { return JSON.parse(match[1]) } catch { /* fall through */ }
  const fields = {}
  for (const line of match[1].split('\n')) {
    const kv = line.match(/^(\w+):\s*(.+)$/)
    if (kv) fields[kv[1]] = kv[2].trim()
  }
  return Object.keys(fields).length ? fields : null
}

function resolveExecutor(method, taskCount) {
  if (method.toLowerCase() === 'auto') {
    return taskCount <= 3 ? 'agent' : 'codex'
  }
  return method.toLowerCase()
}
```
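
A quick worked example of the extraction above, using the same regex against a sample planner output:

```javascript
// Same extraction as parseIssueReady above, applied to a sample planner output.
function parseIssueReady(output) {
  const match = output.match(/ISSUE_READY:\s*\n([\s\S]*?)(?=\n```|$)/)
  if (!match) return null
  try { return JSON.parse(match[1]) } catch { return null }
}

const sample = 'ISSUE_READY:\n{ "issue_id": "ISS-20260215-000001", "solution_id": "SOL-001" }'
const parsed = parseIssueReady(sample)
// parsed.issue_id === 'ISS-20260215-000001'
```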
.codex/skills/team-planex/agents/executor.md (218 lines, new file)

---
name: planex-executor
description: |
  PlanEx executor agent. Loads solution from artifact file → implements via Codex CLI
  (ccw cli --tool codex --mode write) → verifies tests → commits → reports.
  Deploy to: ~/.codex/agents/planex-executor.md
color: green
---

# PlanEx Executor

Single-issue implementation agent. Loads a solution from a JSON artifact, executes the
implementation via Codex CLI, verifies with tests, commits, and outputs a structured
completion report.

## Identity

- **Tag**: `[executor]`
- **Backend**: Codex CLI only (`ccw cli --tool codex --mode write`)
- **Granularity**: One issue per agent instance

## Core Responsibilities

| Action | Allowed |
|--------|---------|
| Read solution artifact from disk | ✅ |
| Implement via Codex CLI | ✅ |
| Run tests for verification | ✅ |
| git commit completed work | ✅ |
| Create or modify issues | ❌ |
| Spawn subagents | ❌ |
| Interact with user (AskUserQuestion) | ❌ |

---

## Execution Flow

### Step 1: Load Context

After reading the role definition:
- Read: `.workflow/project-tech.json`
- Read: `.workflow/specs/*.md`
- Extract the issue ID, solution file path, and session dir from the task message

### Step 2: Load Solution

Read the solution artifact:

```javascript
const solutionData = JSON.parse(Read(solutionFile))
// Artifact carries the plan under "bound" (see Solution Artifact Format)
const solution = solutionData.bound ? solutionData : solutionData.solution
```

If the file is not found or invalid:
- Log error: `[executor] ERROR: Solution file not found: ${solutionFile}`
- Output: `EXEC_FAILED:{issueId}:solution_file_missing`
- Stop execution

Verify the solution has the required fields:
- `solution.bound.title` or `solution.title`
- `solution.bound.tasks` or `solution.tasks`

### Step 3: Update Issue Status

```bash
ccw issue update ${issueId} --status executing
```

### Step 4: Codex CLI Execution

Build the execution prompt and invoke Codex:

```bash
ccw cli -p "$(cat <<'PROMPT_EOF'
## Issue
ID: ${issueId}
Title: ${solution.bound.title}

## Solution Plan
${JSON.stringify(solution.bound, null, 2)}

## Implementation Requirements
1. Follow the solution plan tasks in order
2. Write clean, minimal code following existing patterns
3. Read .workflow/specs/*.md for project conventions
4. Run tests after each significant change
5. Ensure all existing tests still pass
6. Do NOT over-engineer - implement exactly what the solution specifies

## Quality Checklist
- [ ] All solution tasks implemented
- [ ] No TypeScript/linting errors (run: npx tsc --noEmit)
- [ ] Existing tests pass
- [ ] New tests added where specified in solution
- [ ] No security vulnerabilities introduced

## Project Guidelines
@.workflow/specs/*.md
PROMPT_EOF
)" --tool codex --mode write --id planex-${issueId}
```

**Do NOT poll after spawn** — Codex CLI executes in the background and handles the implementation autonomously. Proceed to Step 5 only once the CLI completion signal arrives.

### Step 5: Verify Tests

Detect and run the project's test command:

```javascript
// Detection priority:
// 1. package.json scripts.test
// 2. package.json scripts.test:unit
// 3. pytest.ini / setup.cfg (Python)
// 4. Makefile test target

const testCmd = detectTestCommand()

if (testCmd) {
  const testResult = Bash(`${testCmd} 2>&1 || echo TEST_FAILED`)

  if (testResult.includes('TEST_FAILED') || testResult.includes('FAIL')) {
    // Report failure with resume command
    const resumeCmd = `ccw cli -p "Fix failing tests" --resume planex-${issueId} --tool codex --mode write`

    Write({
      file_path: `${sessionDir}/errors.json`,
      content: JSON.stringify({
        issue_id: issueId,
        type: 'test_failure',
        test_output: testResult.slice(0, 2000),
        resume_cmd: resumeCmd,
        timestamp: new Date().toISOString()
      }, null, 2)
    })

    // Output `EXEC_FAILED:${issueId}:tests_failing`, then stop execution
  }
}
```
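
`detectTestCommand()` is referenced but not defined here; a minimal sketch following the detection priority above — `pkg` (a parsed package.json object or null) and `fileExists` (a stand-in for a real filesystem check) are assumptions of this sketch:

```javascript
// Hypothetical detectTestCommand sketch following the priority list above.
// pkg: parsed package.json object (or null); fileExists: stand-in filesystem check.
function detectTestCommand(pkg, fileExists = () => false) {
  if (pkg?.scripts?.test) return 'npm test'
  if (pkg?.scripts?.['test:unit']) return 'npm run test:unit'
  if (fileExists('pytest.ini') || fileExists('setup.cfg')) return 'pytest'
  if (fileExists('Makefile')) return 'make test'
  return null // no known test command — caller skips verification
}
```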

### Step 6: Commit

```bash
git add -A
git commit -m "feat(${issueId}): ${solution.bound.title}"
```

If the commit fails (nothing to commit, pre-commit hook error):
- Log warning: `[executor] WARN: Commit failed for ${issueId}, continuing`
- Still proceed to Step 7

### Step 7: Update Issue & Report

```bash
ccw issue update ${issueId} --status completed
```

Output the completion report:

```
## [executor] Implementation Complete

**Issue**: ${issueId}
**Title**: ${solution.bound.title}
**Backend**: codex
**Tests**: ${testCmd ? 'passing' : 'skipped (no test command found)'}
**Commit**: ${commitHash}
**Status**: resolved

EXEC_DONE:${issueId}
```

---

## Resume Protocol

If Codex CLI execution fails or times out:

```bash
# Resume with the same session ID
ccw cli -p "Continue implementation from where stopped" \
  --resume planex-${issueId} \
  --tool codex --mode write \
  --id planex-${issueId}-retry
```

The resume command is always logged to `${sessionDir}/errors.json` on any failure.

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Solution file missing | Output `EXEC_FAILED:{id}:solution_file_missing`, stop |
| Solution JSON malformed | Output `EXEC_FAILED:{id}:solution_invalid`, stop |
| Issue status update fails | Log warning, continue |
| Codex CLI failure | Log resume command to errors.json, output `EXEC_FAILED:{id}:codex_failed` |
| Tests failing | Log test output + resume command, output `EXEC_FAILED:{id}:tests_failing` |
| Commit fails | Log warning, still output `EXEC_DONE:{id}` (implementation complete) |
| No test command found | Skip test step, proceed to commit |

## Key Reminders

**ALWAYS**:
- Output `EXEC_DONE:{issueId}` on its own line when implementation succeeds
- Output `EXEC_FAILED:{issueId}:{reason}` on its own line when implementation fails
- Log the resume command to errors.json on any failure
- Use the `[executor]` prefix in all status messages

**NEVER**:
- Use any execution backend other than Codex CLI
- Create, modify, or read issues beyond the assigned issueId
- Spawn subagents
- Ask the user for clarification (fail fast with a structured error)

---
name: planex-executor
description: |
  Execution agent for PlanEx pipeline. Loads solutions from artifact files
  (with CLI fallback), routes to configurable backends (agent/codex/gemini CLI),
  runs tests, commits. Processes all tasks within a single assignment.
color: green
skill: team-planex
---

# PlanEx Executor

Loads the solution from an intermediate artifact file (with CLI fallback) → routes to the backend selected by execution_method (Agent/Codex/Gemini) → verifies with tests → commits. Each spawn processes the assigned exec tasks in dependency order.

## Core Capabilities

1. **Solution Loading**: Load the bound solution plan from intermediate artifact files (with CLI fallback)
2. **Multi-Backend Routing**: Select the agent/codex/gemini backend based on execution_method
3. **Test Verification**: Run tests to verify after implementation
4. **Commit Management**: git commit after each completed solution
5. **Result Reporting**: Output structured IMPL_COMPLETE / WAVE_DONE data

## Execution Logging

During execution, the agent **MUST** maintain two log files in real time, recording each task's execution status and details.

### Session Folder

```javascript
// sessionFolder comes from session_dir in the TASK ASSIGNMENT, or falls back to a default path
const sessionFolder = taskAssignment.session_dir
  || `.workflow/.team/PEX-wave${waveNum}-${new Date().toISOString().slice(0,10)}`
```

### execution.md — Execution Overview

Initialize before implementation starts; update each task's status as it completes or fails.

```javascript
function initExecution(waveNum, execTasks, executionMethod) {
  const executionMd = `# Execution Overview

## Session Info
- **Wave**: ${waveNum}
- **Started**: ${getUtc8ISOString()}
- **Total Tasks**: ${execTasks.length}
- **Executor**: planex-executor (team-planex)
- **Execution Method**: ${executionMethod}
- **Execution Mode**: Sequential by dependency

## Task Overview

| # | Issue ID | Solution | Title | Priority | Dependencies | Status |
|---|----------|----------|-------|----------|--------------|--------|
${execTasks.map((t, i) =>
  `| ${i+1} | ${t.issue_id} | ${t.solution_id} | ${t.title} | ${t.priority} | ${(t.depends_on || []).join(', ') || '-'} | pending |`
).join('\n')}

## Execution Timeline
> Updated as tasks complete

## Execution Summary
> Updated after all tasks complete
`
  shell(`mkdir -p ${sessionFolder}`)
  write_file(`${sessionFolder}/execution.md`, executionMd)
}
```

### execution-events.md — Event Stream

Append a START/COMPLETE/FAIL record for each task in real time.

```javascript
function initEvents(waveNum) {
  const eventsHeader = `# Execution Events

**Wave**: ${waveNum}
**Executor**: planex-executor (team-planex)
**Started**: ${getUtc8ISOString()}

---

`
  write_file(`${sessionFolder}/execution-events.md`, eventsHeader)
}

function appendEvent(content) {
  const existing = read_file(`${sessionFolder}/execution-events.md`)
  write_file(`${sessionFolder}/execution-events.md`, existing + content)
}

function recordTaskStart(issueId, title, executor, files) {
  appendEvent(`## ${getUtc8ISOString()} — ${issueId}: ${title}

**Executor Backend**: ${executor}
**Status**: ⏳ IN PROGRESS
**Files**: ${files || 'TBD'}

### Execution Log
`)
}

function recordTaskComplete(issueId, executor, commitHash, filesModified, duration) {
  appendEvent(`
**Status**: ✅ COMPLETED
**Duration**: ${duration}
**Executor**: ${executor}
**Commit**: \`${commitHash}\`
**Files Modified**: ${filesModified.join(', ')}

---
`)
}

function recordTaskFailed(issueId, executor, error, resumeHint, duration) {
  appendEvent(`
**Status**: ❌ FAILED
**Duration**: ${duration}
**Executor**: ${executor}
**Error**: ${error}
${resumeHint ? `**Resume**: \`${resumeHint}\`` : ''}

---
`)
}

function recordTestVerification(issueId, passed, testOutput, duration) {
  appendEvent(`
#### Test Verification — ${issueId}
- **Result**: ${passed ? '✅ PASS' : '❌ FAIL'}
- **Duration**: ${duration}
${!passed ? `- **Output** (truncated):\n\`\`\`\n${testOutput.slice(0, 500)}\n\`\`\`\n` : ''}
`)
}

function updateTaskStatus(issueId, status) {
  // Update the task row in the execution.md table: replace its status cell
  const content = read_file(`${sessionFolder}/execution.md`)
  const updated = content.split('\n').map(line =>
    line.startsWith('|') && line.includes(issueId)
      ? line.replace(/\|[^|]*\|$/, `| ${status} |`)
      : line
  ).join('\n')
  write_file(`${sessionFolder}/execution.md`, updated)
}

function finalizeExecution(totalTasks, succeeded, failedCount) {
  const summary = `
## Execution Summary

- **Completed**: ${getUtc8ISOString()}
- **Total Tasks**: ${totalTasks}
- **Succeeded**: ${succeeded}
- **Failed**: ${failedCount}
- **Success Rate**: ${Math.round(succeeded / totalTasks * 100)}%
`
  const content = read_file(`${sessionFolder}/execution.md`)
  write_file(`${sessionFolder}/execution.md`,
    content.replace('> Updated after all tasks complete', summary))

  appendEvent(`
---

# Session Summary

- **Wave**: ${waveNum}
- **Completed**: ${getUtc8ISOString()}
- **Tasks**: ${succeeded} completed, ${failedCount} failed
`)
}

function getUtc8ISOString() {
  // UTC+8 wall-clock time rendered with an explicit +08:00 offset
  return new Date(Date.now() + 8 * 3600000).toISOString().replace('Z', '+08:00')
}
```

## Execution Process

### Step 1: Context Loading

**MANDATORY**: Execute these steps FIRST, before any other action.

1. Read this role definition file (already done if you're reading this)
2. Read: `.workflow/project-tech.json` — understand the project technology stack
3. Read: `.workflow/project-guidelines.json` — understand project conventions
4. Parse the TASK ASSIGNMENT from the spawn message for:
   - **Goal**: Which wave to implement
   - **Wave Tasks**: Array of exec_tasks with issue_id, solution_id, depends_on
   - **Execution Config**: execution_method + code_review settings
   - **Deliverables**: IMPL_COMPLETE + WAVE_DONE structured output

### Step 2: Implementation (Sequential by Dependency)

Process each task in the wave, respecting dependency order. **Record every task in the execution logs.**

```javascript
const tasks = taskAssignment.exec_tasks
const executionMethod = taskAssignment.execution_config.execution_method
const codeReview = taskAssignment.execution_config.code_review
const waveNum = taskAssignment.wave_number

// ── Initialize execution logs ──
initExecution(waveNum, tasks, executionMethod)
initEvents(waveNum)

let completed = 0
let failed = 0

// Sort by dependencies (topological order — tasks with no deps first)
const sorted = topologicalSort(tasks)

for (const task of sorted) {
  const issueId = task.issue_id
  const taskStartTime = Date.now()

  // --- Load solution (dual-mode: artifact file first, CLI fallback) ---
  let solution
  const solutionFile = task.solution_file
  if (solutionFile) {
    try {
      const solutionData = JSON.parse(read_file(solutionFile))
      solution = solutionData.bound ? solutionData : { bound: solutionData }
    } catch {
      // Fallback to CLI
      const solJson = shell(`ccw issue solution ${issueId} --json`)
      solution = JSON.parse(solJson)
    }
  } else {
    const solJson = shell(`ccw issue solution ${issueId} --json`)
    solution = JSON.parse(solJson)
  }

  if (!solution.bound) {
    recordTaskStart(issueId, task.title, 'N/A', '')
    recordTaskFailed(issueId, 'N/A', 'No bound solution', null,
      `${Math.round((Date.now() - taskStartTime) / 1000)}s`)
    updateTaskStatus(issueId, 'failed')

    console.log(`IMPL_COMPLETE:\n${JSON.stringify({
      issue_id: issueId,
      status: "failed",
      reason: "No bound solution",
      test_result: "N/A",
      commit: "N/A"
    }, null, 2)}`)
    failed++
    continue
  }

  // --- Update issue status ---
  shell(`ccw issue update ${issueId} --status executing`)

  // --- Resolve executor backend ---
  const taskCount = solution.bound.task_count || solution.bound.tasks?.length || 0
  const executor = resolveExecutor(executionMethod, taskCount)

  // --- Record START event ---
  const solutionFiles = (solution.bound.tasks || [])
    .flatMap(t => t.files || []).join(', ')
  recordTaskStart(issueId, task.title, executor, solutionFiles)
  updateTaskStatus(issueId, 'in_progress')

  // --- Build execution prompt ---
  const prompt = buildExecutionPrompt(issueId, solution)

  // --- Route to backend ---
  let implSuccess = false

  if (executor === 'agent') {
    // Spawn code-developer subagent (synchronous)
    appendEvent(`- Spawning code-developer agent...\n`)
    const devAgent = spawn_agent({
      message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/code-developer.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json

---

${prompt}
`
    })

    const devResult = wait({ ids: [devAgent], timeout_ms: 900000 })

    if (devResult.timed_out) {
      appendEvent(`- Agent timed out, urging convergence...\n`)
      send_input({ id: devAgent, message: "Please finalize implementation and output results." })
      wait({ ids: [devAgent], timeout_ms: 120000 })
    }

    close_agent({ id: devAgent })
    appendEvent(`- code-developer agent completed\n`)
    implSuccess = true

  } else if (executor === 'codex') {
    const fixedId = `planex-${issueId}`
    appendEvent(`- Executing via Codex CLI (id: ${fixedId})...\n`)
    shell(`ccw cli -p "${prompt}" --tool codex --mode write --id ${fixedId}`)
    appendEvent(`- Codex CLI completed\n`)
    implSuccess = true

  } else if (executor === 'gemini') {
    const fixedId = `planex-${issueId}`
    appendEvent(`- Executing via Gemini CLI (id: ${fixedId})...\n`)
    shell(`ccw cli -p "${prompt}" --tool gemini --mode write --id ${fixedId}`)
    appendEvent(`- Gemini CLI completed\n`)
    implSuccess = true
  }

  // --- Test verification ---
  let testCmd = 'npm test'
  try {
    const pkgJson = JSON.parse(read_file('package.json'))
    if (pkgJson.scripts?.test) testCmd = 'npm test'
    else if (pkgJson.scripts?.['test:unit']) testCmd = 'npm run test:unit'
  } catch { /* use default */ }

  const testStartTime = Date.now()
  appendEvent(`- Running tests: \`${testCmd}\`...\n`)
  const testResult = shell(`${testCmd} 2>&1 || echo "TEST_FAILED"`)
  const testPassed = !testResult.includes('TEST_FAILED') && !testResult.includes('FAIL')
  const testDuration = `${Math.round((Date.now() - testStartTime) / 1000)}s`

  recordTestVerification(issueId, testPassed, testResult, testDuration)

  if (!testPassed) {
    const duration = `${Math.round((Date.now() - taskStartTime) / 1000)}s`
    const resumeHint = executor !== 'agent'
      ? `ccw cli -p "Fix failing tests" --resume planex-${issueId} --tool ${executor} --mode write`
      : null

    recordTaskFailed(issueId, executor, 'Tests failing after implementation', resumeHint, duration)
    updateTaskStatus(issueId, 'failed')

    console.log(`IMPL_COMPLETE:\n${JSON.stringify({
      issue_id: issueId,
      status: "failed",
      reason: "Tests failing after implementation",
      executor: executor,
      test_result: "fail",
      test_output: testResult.slice(0, 500),
      commit: "N/A",
      resume_hint: resumeHint || "Re-spawn code-developer with fix instructions"
    }, null, 2)}`)
    failed++
    continue
  }

  // --- Optional code review ---
  if (codeReview && codeReview !== 'Skip') {
    appendEvent(`- Running code review (${codeReview})...\n`)
    executeCodeReview(codeReview, issueId)
  }

  // --- Git commit ---
  shell(`git add -A && git commit -m "feat(${issueId}): implement solution ${task.solution_id}"`)
  const commitHash = shell('git rev-parse --short HEAD').trim()

  appendEvent(`- Committed: \`${commitHash}\`\n`)

  // --- Update issue status ---
  shell(`ccw issue update ${issueId} --status completed`)

  // --- Record completion ---
  const duration = `${Math.round((Date.now() - taskStartTime) / 1000)}s`
  const filesModified = shell('git diff --name-only HEAD~1 HEAD').trim().split('\n')

  recordTaskComplete(issueId, executor, commitHash, filesModified, duration)
  updateTaskStatus(issueId, 'completed')

  console.log(`IMPL_COMPLETE:\n${JSON.stringify({
    issue_id: issueId,
    status: "success",
    executor: executor,
    test_result: "pass",
    commit: commitHash
  }, null, 2)}`)

  completed++
}
```
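
The `topologicalSort` helper used above is not defined in this file; one possible sketch (depth-first ordering on `depends_on`, skipping unknown or cyclic references — an assumption, not the canonical implementation):

```javascript
// Hypothetical topologicalSort sketch: emits each task after its depends_on entries.
// Unknown dependency IDs are ignored; a cycle breaks out rather than recursing forever.
function topologicalSort(tasks) {
  const byId = new Map(tasks.map(t => [t.issue_id, t]))
  const visited = new Set()
  const sorted = []
  const visit = (task, stack) => {
    if (visited.has(task.issue_id) || stack.has(task.issue_id)) return
    stack.add(task.issue_id)
    for (const dep of task.depends_on || []) {
      if (byId.has(dep)) visit(byId.get(dep), stack)
    }
    stack.delete(task.issue_id)
    visited.add(task.issue_id)
    sorted.push(task)
  }
  for (const t of tasks) visit(t, new Set())
  return sorted
}
```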

### Step 3: Wave Completion Report & Log Finalization

```javascript
// ── Finalize execution logs ──
finalizeExecution(sorted.length, completed, failed)

// ── Output structured wave result ──
console.log(`WAVE_DONE:\n${JSON.stringify({
  wave_number: waveNum,
  completed: completed,
  failed: failed,
  execution_logs: {
    execution_md: `${sessionFolder}/execution.md`,
    events_md: `${sessionFolder}/execution-events.md`
  }
}, null, 2)}`)
```
## Execution Log Output Structure

```
${sessionFolder}/
├── execution.md         # Execution overview: wave info, task table, summary
└── execution-events.md  # Event stream: per-task START/COMPLETE/FAIL + test verification details
```

| File | Purpose |
|------|---------|
| `execution.md` | Overview: wave task table (issue/solution/status), execution statistics, final result |
| `execution-events.md` | Timeline: per-task backend selection, implementation log, test verification, commit record |

## Execution Method Resolution

```javascript
function resolveExecutor(method, taskCount) {
  if (method.toLowerCase() === 'auto') {
    return taskCount <= 3 ? 'agent' : 'codex'
  }
  return method.toLowerCase() // 'agent' | 'codex' | 'gemini'
}
```

## Execution Prompt Builder

```javascript
function buildExecutionPrompt(issueId, solution) {
  return `
## Issue
ID: ${issueId}
Title: ${solution.bound.title || 'N/A'}

## Solution Plan
${JSON.stringify(solution.bound, null, 2)}

## Implementation Requirements
1. Follow the solution plan tasks in order
2. Write clean, minimal code following existing patterns
3. Run tests after each significant change
4. Ensure all existing tests still pass
5. Do NOT over-engineer — implement exactly what the solution specifies

## Quality Checklist
- [ ] All solution tasks implemented
- [ ] No TypeScript/linting errors
- [ ] Existing tests pass
- [ ] New tests added where appropriate
- [ ] No security vulnerabilities introduced
`
}
```

## Code Review (Optional)

```javascript
function executeCodeReview(reviewTool, issueId) {
  if (reviewTool === 'Gemini Review') {
    shell(`ccw cli -p "PURPOSE: Code review for ${issueId} implementation
TASK: Verify solution convergence, check test coverage, analyze quality
MODE: analysis
CONTEXT: @**/*
EXPECTED: Quality assessment with issue identification
CONSTRAINTS: Focus on solution adherence" --tool gemini --mode analysis`)
  } else if (reviewTool === 'Codex Review') {
    shell(`ccw cli --tool codex --mode review --uncommitted`)
  }
  // Agent Review: perform inline review (read diff, analyze)
}
```

## Role Boundaries

### MUST

- Only process exec tasks in the assigned wave
- Execute tasks in dependency order (topological sort)
- Output IMPL_COMPLETE after each task completes
- Output WAVE_DONE after all tasks complete
- Invoke code-developer via spawn_agent (agent backend)
- Run tests to verify the implementation

### MUST NOT

- ❌ Create issues (planner responsibility)
- ❌ Modify solutions or the queue (planner responsibility)
- ❌ Spawn issue-plan-agent or issue-queue-agent
- ❌ Process tasks outside the current wave
- ❌ Commit without running test verification

## Topological Sort

```javascript
function topologicalSort(tasks) {
  const taskMap = new Map(tasks.map(t => [t.issue_id, t]))
  const visited = new Set()
  const result = []

  function visit(id) {
    if (visited.has(id)) return
    visited.add(id)
    const task = taskMap.get(id)
    if (task?.depends_on) {
      task.depends_on.forEach(dep => visit(dep))
    }
    result.push(task)
  }

  tasks.forEach(t => visit(t.issue_id))
  return result.filter(Boolean)
}
```

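A quick sanity check of the sort above, run on a tiny hypothetical task set (issue IDs `A`/`B`/`C` are made up for illustration):

```javascript
// Same visit-based sort as above, exercised on a minimal example.
function topologicalSort(tasks) {
  const taskMap = new Map(tasks.map(t => [t.issue_id, t]))
  const visited = new Set()
  const result = []

  function visit(id) {
    if (visited.has(id)) return
    visited.add(id)
    const task = taskMap.get(id)
    if (task?.depends_on) {
      task.depends_on.forEach(dep => visit(dep))
    }
    result.push(task)
  }

  tasks.forEach(t => visit(t.issue_id))
  return result.filter(Boolean)
}

const tasks = [
  { issue_id: 'C', depends_on: ['A'] },
  { issue_id: 'A', depends_on: ['B'] },
  { issue_id: 'B', depends_on: [] }
]

// B has no deps, A depends on B, C depends on A
console.log(topologicalSort(tasks).map(t => t.issue_id)) // → [ 'B', 'A', 'C' ]
```

Note that, as in the original, there is no cycle detection: a dependency cycle is silently flattened in visit order rather than reported.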
## Key Reminders

**ALWAYS**:
- Read role definition file as FIRST action (Step 1)
- **Initialize execution.md + execution-events.md BEFORE starting any task**
- **Record START event before each task implementation**
- **Record COMPLETE/FAIL event after each task with duration and details**
- **Finalize logs at wave completion**
- Follow structured output template (IMPL_COMPLETE / WAVE_DONE)
- Verify tests pass before committing
- Respect dependency ordering within the wave
- Include executor backend info and commit hash in reports

**NEVER**:
- Skip test verification before commit
- Modify files outside of the assigned solution scope
- Produce unstructured output
- Continue to next task if current has unresolved blockers
- Create new issues or modify planning artifacts

## Error Handling

| Scenario | Action |
|----------|--------|
| Solution not found | Report IMPL_COMPLETE with status=failed, reason |
| code-developer timeout | Urge convergence via send_input, close and report |
| CLI execution failure | Include resume_hint in IMPL_COMPLETE output |
| Tests failing | Report with test_output excerpt and resume_hint |
| Git commit failure | Retry once, then report in IMPL_COMPLETE |
| Unknown execution_method | Fall back to 'agent' with warning |
| Dependency task failed | Skip dependent tasks, report as failed with reason |

@@ -1,290 +0,0 @@
---
name: planex-planner
description: |
  Planning lead for PlanEx pipeline. Decomposes requirements into issues,
  generates solutions via issue-plan-agent, performs inline conflict check,
  writes solution artifacts. Per-issue output for orchestrator dispatch.
color: blue
skill: team-planex
---

# PlanEx Planner

Requirement decomposition → issue creation → solution design → inline conflict check → artifact writing → per-issue output. Internally spawns the issue-plan-agent subagent; as soon as each issue's solution is complete, it outputs ISSUE_READY and waits for the orchestrator's send_input before continuing to the next issue.

## Core Capabilities

1. **Requirement Decomposition**: Split requirement text / plan files into independent issues
2. **Solution Planning**: Generate a solution for each issue via issue-plan-agent
3. **Inline Conflict Check**: Detect files_touched overlaps and order by explicit dependencies
4. **Solution Artifacts**: Write solutions to intermediate artifact files for the executor to load
5. **Per-Issue Output**: Output ISSUE_READY data immediately after each issue completes

## Execution Process

### Step 1: Context Loading

**MANDATORY**: Execute these steps FIRST before any other action.

1. Read this role definition file (already done if you're reading this)
2. Read: `.workflow/project-tech.json` — understand project technology stack
3. Read: `.workflow/project-guidelines.json` — understand project conventions
4. Parse the TASK ASSIGNMENT from the spawn message for:
   - **Goal**: What to achieve
   - **Input**: Issue IDs / text / plan file
   - **Execution Config**: execution_method + code_review settings
   - **Session Dir**: Path for writing solution artifacts
   - **Deliverables**: ISSUE_READY + ALL_PLANNED structured output

### Step 2: Input Parsing & Issue Creation

Parse the input from TASK ASSIGNMENT and create issues as needed.

```javascript
const input = taskAssignment.input
const sessionDir = taskAssignment.session_dir
const executionConfig = taskAssignment.execution_config

// 1) Existing issue IDs
let issueIds = input.match(/ISS-\d{8}-\d{6}/g) || []
let executionPlan = null

// 2) Text input → create an issue
const textMatch = input.match(/text:\s*(.+)/)
if (textMatch && issueIds.length === 0) {
  const result = shell(`ccw issue create --data '{"title":"${textMatch[1]}","description":"${textMatch[1]}"}' --json`)
  const newIssue = JSON.parse(result)
  issueIds.push(newIssue.id)
}

// 3) Plan file → parse and batch-create issues
const planMatch = input.match(/plan_file:\s*(\S+)/)
if (planMatch && issueIds.length === 0) {
  const planContent = read_file(planMatch[1])

  try {
    const content = JSON.parse(planContent)
    if (content.waves && content.issue_ids) {
      // execution-plan format: use issue_ids directly
      executionPlan = content
      issueIds = content.issue_ids
    }
  } catch {
    // Regular plan file: parse phases and create issues
    const phases = parsePlanPhases(planContent)
    for (const phase of phases) {
      const result = shell(`ccw issue create --data '{"title":"${phase.title}","description":"${phase.description}"}' --json`)
      issueIds.push(JSON.parse(result).id)
    }
  }
}
```

### Step 3: Per-Issue Solution Planning & Artifact Writing

Process each issue individually: plan → write artifact → conflict check → output ISSUE_READY.

```javascript
const projectRoot = shell('cd . && pwd').trim()
const dispatchedSolutions = []

shell(`mkdir -p "${sessionDir}/artifacts/solutions"`)

for (let i = 0; i < issueIds.length; i++) {
  const issueId = issueIds[i]

  // --- Step 3a: Spawn issue-plan-agent for single issue ---
  const planAgent = spawn_agent({
    message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/issue-plan-agent.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json

---

Goal: Generate solution for issue ${issueId}

issue_ids: ["${issueId}"]
project_root: "${projectRoot}"

## Requirements
- Generate solution for this issue
- Auto-bind single solution
- For multiple solutions, select the most pragmatic one

## Deliverables
Structured output with solution binding.
`
  })

  const planResult = wait({ ids: [planAgent], timeout_ms: 600000 })

  if (planResult.timed_out) {
    send_input({ id: planAgent, message: "Please finalize solution and output results." })
    wait({ ids: [planAgent], timeout_ms: 120000 })
  }

  close_agent({ id: planAgent })

  // --- Step 3b: Load solution + write artifact file ---
  const solJson = shell(`ccw issue solution ${issueId} --json`)
  const solution = JSON.parse(solJson)

  const solutionFile = `${sessionDir}/artifacts/solutions/${issueId}.json`
  write_file(solutionFile, JSON.stringify({
    issue_id: issueId,
    ...solution,
    execution_config: {
      execution_method: executionConfig.executionMethod,
      code_review: executionConfig.codeReviewTool
    },
    timestamp: new Date().toISOString()
  }, null, 2))

  // --- Step 3c: Inline conflict check ---
  const blockedBy = inlineConflictCheck(issueId, solution, dispatchedSolutions)

  // --- Step 3d: Output ISSUE_READY for orchestrator ---
  dispatchedSolutions.push({ issueId, solution, solutionFile })

  console.log(`
ISSUE_READY:
${JSON.stringify({
  issue_id: issueId,
  solution_id: solution.bound?.id || 'N/A',
  title: solution.bound?.title || issueId,
  priority: "normal",
  depends_on: blockedBy,
  solution_file: solutionFile
}, null, 2)}
`)

  // Wait for orchestrator send_input before continuing to next issue
  // (orchestrator will send: "Issue dispatched. Continue to next issue.")
}
```

### Step 4: Finalization

After all issues are planned, output the ALL_PLANNED signal.

```javascript
console.log(`
ALL_PLANNED:
${JSON.stringify({
  total_issues: issueIds.length
}, null, 2)}
`)
```

## Inline Conflict Check

```javascript
function inlineConflictCheck(issueId, solution, dispatchedSolutions) {
  const currentFiles = solution.bound?.files_touched
    || solution.bound?.affected_files || []
  const blockedBy = []

  // 1. File conflict detection
  for (const prev of dispatchedSolutions) {
    const prevFiles = prev.solution.bound?.files_touched
      || prev.solution.bound?.affected_files || []
    const overlap = currentFiles.filter(f => prevFiles.includes(f))
    if (overlap.length > 0) {
      blockedBy.push(prev.issueId)
    }
  }

  // 2. Explicit dependencies
  const explicitDeps = solution.bound?.dependencies?.on_issues || []
  for (const depId of explicitDeps) {
    if (!blockedBy.includes(depId)) {
      blockedBy.push(depId)
    }
  }

  return blockedBy
}
```

## Role Boundaries

### MUST

- Perform planning and decomposition work only
- Output structured ISSUE_READY data after each issue completes
- Output ALL_PLANNED after all issues complete
- Invoke issue-plan-agent via spawn_agent (one issue at a time)
- Wait for orchestrator send_input before continuing to the next issue
- Write solutions to intermediate artifact files

### MUST NOT

- ❌ Write or modify business code directly (executor responsibility)
- ❌ Spawn the code-developer agent (executor responsibility)
- ❌ Run project tests
- ❌ Git-commit code changes
- ❌ Modify solution content directly (issue-plan-agent responsibility)

## Plan File Parsing

```javascript
function parsePlanPhases(planContent) {
  const phases = []
  const phaseRegex = /^#{2,3}\s+(?:Phase|Step|阶段)\s*\d*[:.:]\s*(.+?)$/gm
  let match, lastIndex = 0, lastTitle = null

  while ((match = phaseRegex.exec(planContent)) !== null) {
    if (lastTitle !== null) {
      phases.push({ title: lastTitle, description: planContent.slice(lastIndex, match.index).trim() })
    }
    lastTitle = match[1].trim()
    lastIndex = match.index + match[0].length
  }

  if (lastTitle !== null) {
    phases.push({ title: lastTitle, description: planContent.slice(lastIndex).trim() })
  }

  if (phases.length === 0) {
    const titleMatch = planContent.match(/^#\s+(.+)$/m)
    phases.push({
      title: titleMatch ? titleMatch[1] : 'Plan Implementation',
      description: planContent.slice(0, 500)
    })
  }

  return phases
}
```

## Key Reminders

**ALWAYS**:
- Read role definition file as FIRST action (Step 1)
- Follow structured output template (ISSUE_READY / ALL_PLANNED)
- Stay within planning boundaries (no code implementation)
- Spawn issue-plan-agent for each issue individually
- Write solution artifact file before outputting ISSUE_READY
- Include solution_file path in ISSUE_READY data

**NEVER**:
- Modify source code files
- Skip context loading (Step 1)
- Produce unstructured or free-form output
- Continue to next issue without outputting ISSUE_READY
- Close without outputting ALL_PLANNED

## Error Handling

| Scenario | Action |
|----------|--------|
| Issue creation failure | Retry once with simplified text, report in output |
| issue-plan-agent timeout | Urge convergence via send_input, close and report partial |
| Inline conflict check failure | Use empty depends_on, continue |
| Solution artifact write failure | Report error, continue with ISSUE_READY output |
| Plan file not found | Report error in output with CLARIFICATION_NEEDED |
| Empty input (no issues, no text) | Output CLARIFICATION_NEEDED asking for requirements |
| Sub-agent produces invalid output | Report error, continue with available data |

.codex/skills/team-planex/agents/planner.md (new file, 184 lines)

@@ -0,0 +1,184 @@
---
name: planex-planner
description: |
  PlanEx planner agent. Issue decomposition + solution design with beat protocol.
  Outputs ISSUE_READY:{id} after each solution, waits for "Continue" signal.
  Deploy to: ~/.codex/agents/planex-planner.md
color: blue
---

# PlanEx Planner

Requirement decomposition → issue creation → solution design, one issue at a time.
Outputs `ISSUE_READY:{issueId}` after each solution and waits for the orchestrator to signal
"Continue". Only outputs `ALL_PLANNED:{count}` when all issues are processed.

## Identity

- **Tag**: `[planner]`
- **Beat Protocol**: ISSUE_READY per issue → wait → ALL_PLANNED when done
- **Boundary**: Planning only — no code writing, no test running, no git commits

## Core Responsibilities

| Action | Allowed |
|--------|---------|
| Parse input (Issue IDs / text / plan file) | ✅ |
| Create issues via CLI | ✅ |
| Generate solution via issue-plan-agent | ✅ |
| Write solution artifacts to disk | ✅ |
| Output ISSUE_READY / ALL_PLANNED signals | ✅ |
| Write or modify business code | ❌ |
| Run tests or git commit | ❌ |

---

## CLI Toolbox

| Command | Purpose |
|---------|---------|
| `ccw issue create --data '{"title":"...","description":"..."}' --json` | Create issue |
| `ccw issue status <id> --json` | Check issue status |
| `ccw issue plan <id>` | Plan single issue (generates solution) |

---

## Execution Flow

### Step 1: Load Context

After reading role definition, load project context:
- Read: `.workflow/project-tech.json`
- Read: `.workflow/specs/*.md`
- Extract session directory and artifacts directory from task message

### Step 2: Parse Input

Determine input type from task message:

| Detection | Condition | Action |
|-----------|-----------|--------|
| Issue IDs | `ISS-\d{8}-\d{6}` pattern | Use directly for planning |
| `--text '...'` | Flag in message | Create issue(s) first via CLI |
| `--plan <path>` | Flag in message | Read file, parse phases, batch create issues |

**Plan file parsing rules** (when `--plan` is used):
- Match `## Phase N: Title`, `## Step N: Title`, or `### N. Title`
- Each match → one issue (title + description from section content)
- Fallback: no structure found → entire file as single issue
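The parsing rules above can be sketched as a small helper (hypothetical code, not the shipped implementation; the exact heading regex is an assumption):

```javascript
// Hypothetical sketch of the plan-file parsing rules: each matched heading
// starts a new phase; text up to the next heading becomes its description.
function parsePlanPhases(planContent) {
  // Matches "## Phase N: Title", "## Step N: Title", and "### N. Title"
  const phaseRegex = /^#{2,3}\s+(?:(?:Phase|Step)\s*)?\d+[:.]\s*(.+)$/gm
  const phases = []
  let match, lastTitle = null, lastIndex = 0

  while ((match = phaseRegex.exec(planContent)) !== null) {
    if (lastTitle !== null) {
      phases.push({ title: lastTitle, description: planContent.slice(lastIndex, match.index).trim() })
    }
    lastTitle = match[1].trim()
    lastIndex = match.index + match[0].length
  }
  if (lastTitle !== null) {
    phases.push({ title: lastTitle, description: planContent.slice(lastIndex).trim() })
  }

  // Fallback: no recognizable structure → entire file as a single issue
  if (phases.length === 0) {
    phases.push({ title: 'Plan Implementation', description: planContent.trim() })
  }
  return phases
}

const plan = '# My Plan\n## Phase 1: Setup\ninit repo\n## Phase 2: Build\ncompile\n'
console.log(parsePlanPhases(plan).map(p => p.title)) // → [ 'Setup', 'Build' ]
```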

### Step 3: Issue Processing Loop (Beat Protocol)

For each issue, execute in sequence:

#### 3a. Generate Solution

Use the `issue-plan-agent` subagent to generate and bind a solution:

```
const agent = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/issue-plan-agent.md (MUST read first)
2. Read: .workflow/project-tech.json

---

issue_ids: ["${issueId}"]
project_root: "${projectRoot}"

## Requirements
- Generate solution for this issue
- Auto-bind single solution
- Output solution JSON when complete
`
})

const result = wait({ ids: [agent], timeout_ms: 600000 })
close_agent({ id: agent })
```

#### 3b. Write Solution Artifact

```javascript
// Extract solution from issue-plan-agent result
const solution = parseSolution(result)

Write({
  file_path: `${artifactsDir}/${issueId}.json`,
  content: JSON.stringify({
    session_id: sessionId,
    issue_id: issueId,
    solution: solution,
    planned_at: new Date().toISOString()
  }, null, 2)
})
```

#### 3c. Output Beat Signal

Output EXACTLY (no surrounding text on this line):
```
ISSUE_READY:{issueId}
```

Then STOP. Do not process the next issue. Wait for the "Continue" message from the orchestrator.

### Step 4: After All Issues

When every issue has been processed and confirmed with "Continue":

Output EXACTLY:
```
ALL_PLANNED:{totalCount}
```

Where `{totalCount}` is the integer count of issues planned.

---

## Issue Creation (when needed)

For `--text` input:

```bash
ccw issue create --data '{"title":"<title>","description":"<description>"}' --json
```

Parse returned JSON for `id` field → use as issue ID.

For `--plan` input, create issues one at a time:

```bash
# For each parsed phase/step:
ccw issue create --data '{"title":"<phase-title>","description":"<phase-content>"}' --json
```

Collect all created issue IDs before proceeding to Step 3.

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Issue creation failure | Retry once with simplified text, then report error |
| `issue-plan-agent` failure | Retry once, then skip issue with `ISSUE_SKIP:{issueId}:reason` signal |
| Plan file not found | Output error immediately, do not proceed |
| Artifact write failure | Log warning inline, still output ISSUE_READY (executor will handle missing file) |
| "Continue" not received after 5 min | Re-output `ISSUE_READY:{issueId}` once as reminder |

## Key Reminders

**ALWAYS**:
- Output `ISSUE_READY:{issueId}` on its own line with no surrounding text
- Wait after each ISSUE_READY — do NOT auto-continue
- Write solution file before outputting ISSUE_READY
- Use `[planner]` prefix in all status messages

**NEVER**:
- Output multiple ISSUE_READY signals before waiting for "Continue"
- Proceed to next issue without receiving "Continue"
- Write or modify any business logic files
- Run tests or execute git commands

.codex/skills/team-planex/orchestrator.md (new file, 286 lines)

@@ -0,0 +1,286 @@
---
name: team-planex
description: |
  Beat pipeline: planner decomposes requirements issue-by-issue, orchestrator spawns
  Codex executor per issue immediately. All execution via Codex CLI only.
agents: 2
phases: 3
---

# Team PlanEx (Codex)

Per-issue beat pipeline. As soon as the planner finishes a solution for one issue, it outputs an `ISSUE_READY` signal; the orchestrator immediately spawns an independent Codex executor to implement it in parallel, without waiting for the planner to finish planning everything.

## Architecture

```
Input (Issue IDs / --text / --plan)
  → Orchestrator: parse input → init session → spawn planner
  → Beat loop:
      wait(planner) → ISSUE_READY:{issueId} → spawn_agent(executor)
      → send_input(planner, "Continue")
  → ALL_PLANNED:{count} → close_agent(planner)
  → wait(all executors) → report
```

## Agent Registry

| Agent | Role File | Responsibility |
|-------|-----------|----------------|
| `planner` | `~/.codex/agents/planex-planner.md` | Issue decomp → solution design → ISSUE_READY signals |
| `executor` | `~/.codex/agents/planex-executor.md` | Codex CLI implementation per issue |

> Both agents must be deployed to `~/.codex/agents/` before use.
> Source: `.codex/skills/team-planex/agents/`

---

## Input Parsing

Supported input types (parse from `$ARGUMENTS`):

| Type | Detection | Handler |
|------|-----------|---------|
| Issue IDs | `ISS-\d{8}-\d{6}` regex | Pass directly to planner |
| Text | `--text '...'` flag | Planner creates issue(s) first |
| Plan file | `--plan <path>` flag | Planner reads file, batch creates issues |

---

## Session Setup

Before spawning agents, initialize the session directory:

```javascript
// Generate session slug from input description (max 20 chars, kebab-case)
const slug = toSlug(inputDescription).slice(0, 20)
const date = new Date().toISOString().slice(0, 10).replace(/-/g, '')
const sessionDir = `.workflow/.team/PEX-${slug}-${date}`
const artifactsDir = `${sessionDir}/artifacts/solutions`

Bash(`mkdir -p "${artifactsDir}"`)

// Write initial session state
Write({
  file_path: `${sessionDir}/team-session.json`,
  content: JSON.stringify({
    session_id: `PEX-${slug}-${date}`,
    input_type: inputType,
    input: rawInput,
    status: "running",
    started_at: new Date().toISOString(),
    executors: []
  }, null, 2)
})
```
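`toSlug` is used above but not defined in this skill; a minimal sketch under that assumption (lowercase kebab-case, non-alphanumeric runs collapsed to hyphens):

```javascript
// Hypothetical toSlug helper assumed by the session setup above.
function toSlug(text) {
  return text
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // collapse non-alphanumeric runs to '-'
    .replace(/^-+|-+$/g, '')     // trim leading/trailing hyphens
}

const slug = toSlug('Add SpecDialog component!').slice(0, 20)
console.log(slug) // → "add-specdialog-compo"
```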

---

## Phase 1: Spawn Planner

```javascript
const plannerAgent = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/planex-planner.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/specs/*.md

---

## Session
Session directory: ${sessionDir}
Artifacts directory: ${artifactsDir}

## Input
${inputType === 'issues' ? `Issue IDs: ${issueIds.join(' ')}` : ''}
${inputType === 'text' ? `Requirement: ${requirementText}` : ''}
${inputType === 'plan' ? `Plan file: ${planPath}` : ''}

## Beat Protocol (CRITICAL)
Process issues one at a time. After completing each issue's solution:
1. Write solution JSON to: ${artifactsDir}/{issueId}.json
2. Output EXACTLY this line: ISSUE_READY:{issueId}
3. STOP and wait — do NOT continue until you receive "Continue"

When ALL issues are processed:
1. Output EXACTLY: ALL_PLANNED:{totalCount}
`
})
```

---

## Phase 2: Beat Loop

The orchestrator coordinates the planner-executor pipeline:

```javascript
const executorIds = []
const executorIssueMap = {}

while (true) {
  // Wait for planner beat signal (up to 10 min per issue)
  const plannerOut = wait({ ids: [plannerAgent], timeout_ms: 600000 })

  // Handle timeout: urge convergence and retry
  if (plannerOut.timed_out) {
    send_input({
      id: plannerAgent,
      message: "Please output ISSUE_READY:{issueId} for current issue or ALL_PLANNED if done."
    })
    continue
  }

  const output = plannerOut.status[plannerAgent].completed

  // Detect ALL_PLANNED — pipeline complete
  if (output.includes('ALL_PLANNED')) {
    const match = output.match(/ALL_PLANNED:(\d+)/)
    const total = match ? parseInt(match[1]) : executorIds.length
    close_agent({ id: plannerAgent })
    break
  }

  // Detect ISSUE_READY — spawn executor immediately
  const issueMatch = output.match(/ISSUE_READY:(ISS-\d{8}-\d{6}|[A-Z0-9-]+)/)
  if (issueMatch) {
    const issueId = issueMatch[1]
    const solutionFile = `${artifactsDir}/${issueId}.json`

    const executorId = spawn_agent({
      message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/planex-executor.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/specs/*.md

---

## Issue
Issue ID: ${issueId}
Solution file: ${solutionFile}
Session: ${sessionDir}

## Execution
Load solution from file → implement via Codex CLI → verify tests → commit → report.
`
    })

    executorIds.push(executorId)
    executorIssueMap[executorId] = issueId

    // Signal planner to continue to next issue
    send_input({ id: plannerAgent, message: "Continue with next issue." })
    continue
  }

  // Unexpected output: urge convergence
  send_input({
    id: plannerAgent,
    message: "Output ISSUE_READY:{issueId} when solution is ready, or ALL_PLANNED when all done."
  })
}
```

---

## Phase 3: Wait All Executors

```javascript
if (executorIds.length > 0) {
  // Extended timeout: Codex CLI execution per issue (~10-20 min each)
  const execResults = wait({ ids: executorIds, timeout_ms: 1800000 })

  if (execResults.timed_out) {
    const completed = executorIds.filter(id => execResults.status[id]?.completed)
    const pending = executorIds.filter(id => !execResults.status[id]?.completed)
    // Log pending issues for manual follow-up
    if (pending.length > 0) {
      const pendingIssues = pending.map(id => executorIssueMap[id])
      Write({
        file_path: `${sessionDir}/pending-executors.json`,
        content: JSON.stringify({ pending_issues: pendingIssues, executor_ids: pending }, null, 2)
      })
    }
  }

  // Collect summaries
  const summaries = executorIds.map(id => ({
    issue_id: executorIssueMap[id],
    status: execResults.status[id]?.completed ? 'completed' : 'timeout',
    output: execResults.status[id]?.completed ?? null
  }))

  // Cleanup
  executorIds.forEach(id => {
    try { close_agent({ id }) } catch { /* already closed */ }
  })

  // Final report
  const completed = summaries.filter(s => s.status === 'completed').length
  const failed = summaries.filter(s => s.status === 'timeout').length

  return `
## Pipeline Complete

**Total issues**: ${executorIds.length}
**Completed**: ${completed}
**Timed out**: ${failed}

${summaries.map(s => `- ${s.issue_id}: ${s.status}`).join('\n')}

Session: ${sessionDir}
`
}
```

---

## User Commands

During execution, the user may issue:

| Command | Action |
|---------|--------|
| `check` / `status` | Show executor progress summary |
| `resume` / `continue` | Urge stalled planner or executor |
| `add <issue-ids>` | `send_input` to planner with new issue IDs |
| `add --text '...'` | `send_input` to planner to create and plan new issue |
| `add --plan <path>` | `send_input` to planner to parse and batch create from plan file |

**`add` handler** (inject mid-execution):

```javascript
// Get current planner agent ID from session state
const session = JSON.parse(Read(`${sessionDir}/team-session.json`))
const plannerAgentId = session.planner_agent_id // saved during Phase 1

send_input({
  id: plannerAgentId,
  message: `
## NEW ISSUES INJECTED
${newInput}

Process these after current issue (or immediately if idle).
Follow beat protocol: ISSUE_READY → wait for Continue → next issue.
`
})
```

---

## Error Handling
|
||||
|
||||
| Scenario | Resolution |
|
||||
|----------|------------|
|
||||
| Planner timeout (>10 min per issue) | `send_input` urge convergence, re-enter loop |
|
||||
| Planner never outputs ISSUE_READY | After 3 retries, `close_agent` + report stall |
|
||||
| Solution file not written | Executor reports error, logs to `${sessionDir}/errors.json` |
|
||||
| Executor (Codex CLI) failure | Executor handles resume; logs CLI resume command |
|
||||
| ALL_PLANNED never received | After 60 min total, close planner, wait remaining executors |
|
||||
| No issues to process | AskUserQuestion for clarification |
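
The planner rows of the table above can be mirrored in a small decision helper. This is a hedged sketch: the thresholds (10 min, 3 retries, 60 min) come from the table, but the function and action names are assumptions:

```javascript
// Illustrative only: choose a resolution for a stalled planner based on the
// escalation thresholds in the error-handling table. Names are assumed.
function plannerResolution({ minutesOnIssue, readyRetries, totalMinutes }) {
  if (totalMinutes >= 60) return 'close_planner_and_wait_for_executors'
  if (readyRetries >= 3) return 'close_agent_and_report_stall'
  if (minutesOnIssue > 10) return 'send_input_to_urge_convergence'
  return 'keep_waiting'
}
```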

@@ -89,7 +89,7 @@ const agentId = spawn_agent({
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/{agent-type}.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
-3. Read: ${projectRoot}/.workflow/project-guidelines.json
+3. Read: ${projectRoot}/.workflow/specs/*.md

## TASK CONTEXT
${taskContext}

@@ -76,7 +76,7 @@ const contextAgentId = spawn_agent({
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/test-context-search-agent.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
-3. Read: ${projectRoot}/.workflow/project-guidelines.json
+3. Read: ${projectRoot}/.workflow/specs/*.md

---

@@ -103,7 +103,7 @@ const contextAgentId = spawn_agent({
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/context-search-agent.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
-3. Read: ${projectRoot}/.workflow/project-guidelines.json
+3. Read: ${projectRoot}/.workflow/specs/*.md

---

@@ -177,7 +177,7 @@ const analysisAgentId = spawn_agent({
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-execution-agent.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
-3. Read: ${projectRoot}/.workflow/project-guidelines.json
+3. Read: ${projectRoot}/.workflow/specs/*.md

---

@@ -246,7 +246,7 @@ const taskGenAgentId = spawn_agent({
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/action-planning-agent.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
-3. Read: ${projectRoot}/.workflow/project-guidelines.json
+3. Read: ${projectRoot}/.workflow/specs/*.md

---

@@ -91,7 +91,7 @@ const analysisAgentId = spawn_agent({
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-planning-agent.md (MUST read first)
2. Read: {projectRoot}/.workflow/project-tech.json
-3. Read: {projectRoot}/.workflow/project-guidelines.json
+3. Read: {projectRoot}/.workflow/specs/*.md

---

@@ -158,7 +158,7 @@ const fixAgentId = spawn_agent({
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/test-fix-agent.md (MUST read first)
2. Read: {projectRoot}/.workflow/project-tech.json
-3. Read: {projectRoot}/.workflow/project-guidelines.json
+3. Read: {projectRoot}/.workflow/specs/*.md

---