mirror of https://github.com/catlog22/Claude-Code-Workflow.git (synced 2026-03-01)

---
name: team-planex
description: Two-member plan-and-execute team with a per-issue beat pipeline for concurrent planning and execution. The planner decomposes requirements into issues, generates solutions, and writes artifacts; the executor implements solutions via configurable backends (agent/codex/gemini). Triggers on "team planex".
allowed-tools: spawn_agent, wait, send_input, close_agent, AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep
argument-hint: "<issue-ids|--text 'description'|--plan path> [--exec=agent|codex|gemini|auto] [-y]"
---

# Team PlanEx

A two-member plan-while-executing team. A per-issue beat pipeline lets the planner and executor work in parallel: each time the planner finishes the solution for one issue it emits an ISSUE_READY signal, the orchestrator immediately spawns an executor agent for that issue, and a send_input tells the planner to continue with the next issue.

## Architecture Overview

```
┌──────────────────────────────────────────────┐
│ Orchestrator (this file)                     │
│ → Parse input → Spawn planner → Spawn exec   │
└────────────────┬─────────────────────────────┘
                 │ Per-Issue Beat Pipeline
         ┌───────┴───────┐
         ↓               ↓
    ┌─────────┐    ┌──────────┐
    │ planner │    │ executor │
    │ (plan)  │    │ (impl)   │
    └─────────┘    └──────────┘
         │               │
  issue-plan-agent  code-developer
                    (or codex/gemini CLI)
```

## Agent Registry

| Agent | Role File | Responsibility | New/Existing |
|-------|-----------|----------------|--------------|
| `planex-planner` | `.codex/skills/team-planex/agents/planex-planner.md` | Requirement decomposition → issue creation → solution design → conflict check → per-issue dispatch | New (skill-specific) |
| `planex-executor` | `.codex/skills/team-planex/agents/planex-executor.md` | Load solution → implement code → test → commit | New (skill-specific) |
| `issue-plan-agent` | `~/.codex/agents/issue-plan-agent.md` | ACE exploration + solution generation + binding | Existing |
| `code-developer` | `~/.codex/agents/code-developer.md` | Code implementation (agent backend) | Existing |

## Input Types

Three input forms are supported (passed in via the orchestrator message):

| Input Type | Format | Example |
|------------|--------|---------|
| Issue IDs | IDs passed directly | `ISS-20260215-001 ISS-20260215-002` |
| Requirement text | `--text '...'` | `--text 'Implement the user authentication module'` |
| Plan file | `--plan path` | `--plan plan/2026-02-15-auth.md` |

## Execution Method Selection

Three execution backends are supported:

| Executor | Backend | Best For |
|----------|---------|----------|
| `agent` | code-developer subagent | Simple tasks, synchronous execution |
| `codex` | `ccw cli --tool codex --mode write` | Complex tasks, background execution |
| `gemini` | `ccw cli --tool gemini --mode write` | Analysis-oriented tasks, background execution |

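The table above can be sketched as a small routing helper. This is a hedged illustration, not code from this skill: the `BACKENDS` map and `pickBackend` name are assumptions, with command strings taken from the table and `auto` resolution mirroring the `resolveExecutor` helper defined later in this file.

```javascript
// Hypothetical routing sketch based on the table above (names are assumptions).
const BACKENDS = {
  agent: { kind: 'subagent', run: 'code-developer' },
  codex: { kind: 'cli', run: 'ccw cli --tool codex --mode write' },
  gemini: { kind: 'cli', run: 'ccw cli --tool gemini --mode write' }
}

function pickBackend(method, taskCount) {
  const key = method.toLowerCase() === 'auto'
    ? (taskCount <= 3 ? 'agent' : 'codex')   // small waves stay synchronous
    : method.toLowerCase()
  return BACKENDS[key]
}
```
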
## Phase Execution

### Phase 1: Input Parsing & Preference Collection

Parse user arguments and determine execution configuration.

```javascript
// Parse input from orchestrator message
const args = orchestratorMessage
// Issue IDs look like ISS-20260215-001: 8-digit date + 3-digit (or longer) sequence
const issueIds = args.match(/ISS-\d{8}-\d{3,}/g) || []
const textMatch = args.match(/--text\s+['"]([^'"]+)['"]/)
const planMatch = args.match(/--plan\s+(\S+)/)
const autoYes = /\b(-y|--yes)\b/.test(args)
const explicitExec = args.match(/--exec[=\s]+(agent|codex|gemini|auto)/i)?.[1]

let executionConfig

if (explicitExec) {
  executionConfig = {
    executionMethod: explicitExec.charAt(0).toUpperCase() + explicitExec.slice(1),
    codeReviewTool: "Skip"
  }
} else if (autoYes) {
  executionConfig = { executionMethod: "Auto", codeReviewTool: "Skip" }
} else {
  // Interactive: ask user for preferences
  // (orchestrator handles user interaction directly)
}

// Initialize session directory for artifacts
const slug = (issueIds[0] || 'batch').replace(/[^a-zA-Z0-9-]/g, '')
const dateStr = new Date().toISOString().slice(0,10).replace(/-/g,'')
const sessionId = `PEX-${slug}-${dateStr}`
const sessionDir = `.workflow/.team/${sessionId}`
shell(`mkdir -p "${sessionDir}/artifacts/solutions"`)
```

### Phase 2: Planning (Planner Agent — Per-Issue Beat)

Spawn the planner agent for per-issue planning. It uses send_input for issue-by-issue progression.

```javascript
// Build planner input context
let plannerInput = ""
if (issueIds.length > 0) plannerInput = `issue_ids: ${JSON.stringify(issueIds)}`
else if (textMatch) plannerInput = `text: ${textMatch[1]}`
else if (planMatch) plannerInput = `plan_file: ${planMatch[1]}`

const planner = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: .codex/skills/team-planex/agents/planex-planner.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json

---

Goal: Decompose requirements into executable solutions (per-issue beat)

## Input
${plannerInput}

## Execution Config
execution_method: ${executionConfig.executionMethod}
code_review: ${executionConfig.codeReviewTool}

## Session Dir
session_dir: ${sessionDir}

## Deliverables
For EACH issue, output structured data:

\`\`\`
ISSUE_READY:
{
  "issue_id": "ISS-xxx",
  "solution_id": "SOL-xxx",
  "title": "...",
  "priority": "normal",
  "depends_on": [],
  "solution_file": "${sessionDir}/artifacts/solutions/ISS-xxx.json"
}
\`\`\`

After ALL issues planned, output:
\`\`\`
ALL_PLANNED:
{ "total_issues": N }
\`\`\`

## Quality bar
- Every issue has a bound solution
- Solution artifact written to file before output
- Inline conflict check determines depends_on
`
})

// Wait for first ISSUE_READY
let firstIssue = wait({ ids: [planner], timeout_ms: 600000 })

if (firstIssue.timed_out) {
  send_input({ id: planner, message: "Please finalize current issue and output ISSUE_READY." })
  // Reassign so the parse below sees the retried output, not the timed-out one
  firstIssue = wait({ ids: [planner], timeout_ms: 120000 })
}

// Parse first issue data
const firstIssueData = parseIssueReady(firstIssue.status[planner].completed)
```

### Phase 3: Per-Issue Beat Pipeline (Planning + Execution Interleaved)

Pipeline: spawn an executor for the current issue while the planner continues with the next.

```javascript
const allAgentIds = [planner]
const executorAgents = []
let allPlanned = false
let currentIssueOutput = firstIssue.status[planner].completed

while (!allPlanned) {
  // --- Spawn executor for current issue ---
  const issueData = parseIssueReady(currentIssueOutput)

  if (issueData) {
    const executor = spawn_agent({
      message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: .codex/skills/team-planex/agents/planex-executor.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json

---

Goal: Implement solution for ${issueData.issue_id}

## Task
${JSON.stringify([issueData], null, 2)}

## Execution Config
execution_method: ${executionConfig.executionMethod}
code_review: ${executionConfig.codeReviewTool}

## Solution File
solution_file: ${issueData.solution_file}

## Session Dir
session_dir: ${sessionDir}

## Deliverables
\`\`\`
IMPL_COMPLETE:
issue_id: ${issueData.issue_id}
status: success|failed
test_result: pass|fail
commit: <hash or N/A>
\`\`\`

## Quality bar
- All existing tests pass after implementation
- Code follows project conventions
- One commit per solution
`
    })
    allAgentIds.push(executor)
    executorAgents.push({ id: executor, issueId: issueData.issue_id })
  }

  // --- Check if ALL_PLANNED was in this output ---
  if (currentIssueOutput.includes("ALL_PLANNED")) {
    allPlanned = true
    break
  }

  // --- Tell planner to continue next issue ---
  send_input({ id: planner, message: `Issue ${issueData?.issue_id || 'unknown'} dispatched. Continue to next issue.` })

  // Wait for planner (next issue)
  const plannerResult = wait({ ids: [planner], timeout_ms: 600000 })

  if (plannerResult.timed_out) {
    send_input({ id: planner, message: "Please finalize current issue and output results." })
    const retry = wait({ ids: [planner], timeout_ms: 120000 })
    currentIssueOutput = retry.status?.[planner]?.completed || ""
  } else {
    currentIssueOutput = plannerResult.status[planner]?.completed || ""
  }

  // Check for ALL_PLANNED
  if (currentIssueOutput.includes("ALL_PLANNED")) {
    // May contain a final ISSUE_READY before ALL_PLANNED
    const finalIssue = parseIssueReady(currentIssueOutput)
    if (finalIssue) {
      // Spawn one more executor for the last issue
      const lastExec = spawn_agent({
        message: `... same executor spawn as above for ${finalIssue.issue_id} ...`
      })
      allAgentIds.push(lastExec)
      executorAgents.push({ id: lastExec, issueId: finalIssue.issue_id })
    }
    allPlanned = true
  }
}

// Wait for all remaining executor agents
const pendingExecutors = executorAgents.map(e => e.id)

if (pendingExecutors.length > 0) {
  const finalResults = wait({ ids: pendingExecutors, timeout_ms: 900000 })

  if (finalResults.timed_out) {
    const pending = pendingExecutors.filter(id => !finalResults.status[id]?.completed)
    pending.forEach(id => {
      send_input({ id, message: "Please finalize current task and output results." })
    })
    wait({ ids: pending, timeout_ms: 120000 })
  }
}
```

### Phase 4: Result Aggregation & Cleanup

```javascript
// Collect results from all executors
const pipelineResults = {
  issues: [],
  totalCompleted: 0,
  totalFailed: 0
}

executorAgents.forEach(({ id, issueId }) => {
  // finalResults is the wait() over all executors at the end of Phase 3
  const output = finalResults.status[id]?.completed || ""
  const implResult = parseImplComplete(output)
  pipelineResults.issues.push({
    issueId,
    status: implResult?.status || 'unknown',
    commit: implResult?.commit || 'N/A'
  })
  if (implResult?.status === 'success') pipelineResults.totalCompleted++
  else pipelineResults.totalFailed++
})

// Output final summary
console.log(`
## PlanEx Pipeline Complete

### Summary
- Total Issues: ${executorAgents.length}
- Completed: ${pipelineResults.totalCompleted}
- Failed: ${pipelineResults.totalFailed}

### Issue Details
${pipelineResults.issues.map(i =>
  `- ${i.issueId}: ${i.status} (commit: ${i.commit})`
).join('\n')}
`)

// Cleanup ALL agents
allAgentIds.forEach(id => {
  try { close_agent({ id }) } catch { /* already closed */ }
})
```

## Coordination Protocol

### File-Based Communication

Since Codex agents have isolated contexts, use file-based coordination:

| File | Purpose | Writer | Reader |
|------|---------|--------|--------|
| `{sessionDir}/artifacts/solutions/{issueId}.json` | Solution artifact | planner | executor |
| `{sessionDir}/exec-{issueId}.json` | Execution result | executor | orchestrator |
| `{sessionDir}/pipeline-log.ndjson` | Event log | both | orchestrator |

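The entry format of `pipeline-log.ndjson` is not specified here. A minimal sketch of one-JSON-object-per-line logging, where the field names (`ts`, `actor`, `event`, `issue_id`) and helper names are assumptions:

```javascript
// Hypothetical NDJSON event helpers; field names are assumptions, not specified above.
function logEvent(actor, event, issueId, extra = {}) {
  const entry = { ts: new Date().toISOString(), actor, event, issue_id: issueId, ...extra }
  return JSON.stringify(entry) + '\n'   // append this line to pipeline-log.ndjson
}

// Reading the log back is a line-by-line JSON.parse:
function parseLog(text) {
  return text.split('\n').filter(Boolean).map(JSON.parse)
}
```
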
### Solution Artifact Format

```json
{
  "issue_id": "ISS-20260215-001",
  "bound": {
    "id": "SOL-001",
    "title": "Implement auth module",
    "tasks": [...],
    "files_touched": ["src/auth/login.ts"]
  },
  "execution_config": {
    "execution_method": "Agent",
    "code_review": "Skip"
  },
  "timestamp": "2026-02-15T10:00:00Z"
}
```

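Before implementing, the executor checks this artifact for required fields. A minimal validation sketch against the shape above; the helper name and the returned-problems convention are assumptions:

```javascript
// Hypothetical helper: returns a list of problems, empty when the artifact is usable.
function validateSolutionArtifact(artifact) {
  const problems = []
  if (!artifact.issue_id) problems.push('missing issue_id')
  const bound = artifact.bound || artifact        // tolerate flat (unwrapped) artifacts
  if (!bound.title) problems.push('missing title')
  if (!Array.isArray(bound.tasks) || bound.tasks.length === 0) problems.push('missing tasks')
  return problems
}
```
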
### Execution Result Format

```json
{
  "issue_id": "ISS-20260215-001",
  "status": "success",
  "executor": "agent",
  "test_result": "pass",
  "commit": "abc123",
  "files_changed": ["src/auth/login.ts", "src/auth/login.test.ts"]
}
```

## Lifecycle Management

### Timeout Handling

| Timeout Scenario | Action |
|-----------------|--------|
| Planner issue timeout | send_input to urge convergence, retry wait |
| Executor impl timeout | send_input to finalize, record partial result |
| All agents timeout | Log error, abort with partial state |

### Cleanup Protocol

```javascript
// Track all agents created during execution
const allAgentIds = []

// ... (agents added during phase execution) ...

// Final cleanup (end of orchestrator or on error)
allAgentIds.forEach(id => {
  try { close_agent({ id }) } catch { /* already closed */ }
})
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Planner issue failure | Retry once via send_input, then skip issue |
| Executor impl failure | Record failure, continue with next issue |
| No issues created from text | Report to user, abort |
| Solution generation failure | Skip issue, continue with remaining |
| Inline conflict check failure | Use empty depends_on, continue |
| Pipeline stall (no progress) | Timeout handling → urge convergence → abort |
| Missing role file | Log error, use inline fallback instructions |

## Helper Functions

```javascript
function parseIssueReady(output) {
  const match = output.match(/ISSUE_READY:\s*\n([\s\S]*?)(?=\n```|$)/)
  if (!match) return null
  try { return JSON.parse(match[1]) } catch { return null }
}

function parseImplComplete(output) {
  const match = output.match(/IMPL_COMPLETE:\s*\n([\s\S]*?)(?=\n```|$)/)
  if (!match) return null
  try { return JSON.parse(match[1]) } catch { return null }
}

function resolveExecutor(method, taskCount) {
  if (method.toLowerCase() === 'auto') {
    return taskCount <= 3 ? 'agent' : 'codex'
  }
  return method.toLowerCase()
}
```

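For instance, `parseIssueReady` applied to a planner message shaped like the Deliverables block recovers the JSON payload. The helper is repeated here so the sketch is self-contained; the sample message is illustrative:

```javascript
// Same helper as above, shown with a sample planner output.
function parseIssueReady(output) {
  const match = output.match(/ISSUE_READY:\s*\n([\s\S]*?)(?=\n```|$)/)
  if (!match) return null
  try { return JSON.parse(match[1]) } catch { return null }
}

const sample = 'ISSUE_READY:\n{ "issue_id": "ISS-20260215-001", "solution_id": "SOL-001" }'
const parsed = parseIssueReady(sample)
// parsed.issue_id === 'ISS-20260215-001'
```
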
---

`.codex/skills/team-planex/agents/executor.md` (new file, 218 lines)

---
|
||||
name: planex-executor
|
||||
description: |
|
||||
PlanEx executor agent. Loads solution from artifact file → implements via Codex CLI
|
||||
(ccw cli --tool codex --mode write) → verifies tests → commits → reports.
|
||||
Deploy to: ~/.codex/agents/planex-executor.md
|
||||
color: green
|
||||
---
|
||||
|
||||
# PlanEx Executor

Single-issue implementation agent. Loads the solution from a JSON artifact, executes the implementation via Codex CLI, verifies with tests, commits, and outputs a structured completion report.

## Identity

- **Tag**: `[executor]`
- **Backend**: Codex CLI only (`ccw cli --tool codex --mode write`)
- **Granularity**: One issue per agent instance

## Core Responsibilities

| Action | Allowed |
|--------|---------|
| Read solution artifact from disk | ✅ |
| Implement via Codex CLI | ✅ |
| Run tests for verification | ✅ |
| git commit completed work | ✅ |
| Create or modify issues | ❌ |
| Spawn subagents | ❌ |
| Interact with user (AskUserQuestion) | ❌ |

---

## Execution Flow
|
||||
|
||||
### Step 1: Load Context
|
||||
|
||||
After reading role definition:
|
||||
- Read: `.workflow/project-tech.json`
|
||||
- Read: `.workflow/specs/*.md`
|
||||
- Extract issue ID, solution file path, session dir from task message
|
||||
|
||||
### Step 2: Load Solution
|
||||
|
||||
Read solution artifact:
|
||||
|
||||
```javascript
|
||||
const solutionData = JSON.parse(Read(solutionFile))
|
||||
const solution = solutionData.solution
|
||||
```
|
||||
|
||||
If file not found or invalid:
|
||||
- Log error: `[executor] ERROR: Solution file not found: ${solutionFile}`
|
||||
- Output: `EXEC_FAILED:{issueId}:solution_file_missing`
|
||||
- Stop execution
|
||||
|
||||
Verify solution has required fields:
|
||||
- `solution.bound.title` or `solution.title`
|
||||
- `solution.bound.tasks` or `solution.tasks`
|
||||
|
||||
### Step 3: Update Issue Status
|
||||
|
||||
```bash
|
||||
ccw issue update ${issueId} --status executing
|
||||
```
|
||||
|
||||
### Step 4: Codex CLI Execution

Build the execution prompt and invoke Codex:

```bash
ccw cli -p "$(cat <<'PROMPT_EOF'
## Issue
ID: ${issueId}
Title: ${solution.bound.title}

## Solution Plan
${JSON.stringify(solution.bound, null, 2)}

## Implementation Requirements
1. Follow the solution plan tasks in order
2. Write clean, minimal code following existing patterns
3. Read .workflow/specs/*.md for project conventions
4. Run tests after each significant change
5. Ensure all existing tests still pass
6. Do NOT over-engineer - implement exactly what the solution specifies

## Quality Checklist
- [ ] All solution tasks implemented
- [ ] No TypeScript/linting errors (run: npx tsc --noEmit)
- [ ] Existing tests pass
- [ ] New tests added where specified in solution
- [ ] No security vulnerabilities introduced

## Project Guidelines
@.workflow/specs/*.md
PROMPT_EOF
)" --tool codex --mode write --id planex-${issueId}
```

**STOP after spawn** — the Codex CLI executes in the background and handles the implementation autonomously; do NOT busy-poll from inside this agent. Proceed to Step 5 only once the CLI emits its completion signal.

### Step 5: Verify Tests

Detect and run the project's test command:

```javascript
// Detection priority:
// 1. package.json scripts.test
// 2. package.json scripts.test:unit
// 3. pytest.ini / setup.cfg (Python)
// 4. Makefile test target

const testCmd = detectTestCommand()

if (testCmd) {
  const testResult = Bash(`${testCmd} 2>&1 || echo TEST_FAILED`)

  if (testResult.includes('TEST_FAILED') || testResult.includes('FAIL')) {
    // Report failure with resume command
    const resumeCmd = `ccw cli -p "Fix failing tests" --resume planex-${issueId} --tool codex --mode write`

    Write({
      file_path: `${sessionDir}/errors.json`,
      content: JSON.stringify({
        issue_id: issueId,
        type: 'test_failure',
        test_output: testResult.slice(0, 2000),
        resume_cmd: resumeCmd,
        timestamp: new Date().toISOString()
      }, null, 2)
    })

    // Output `EXEC_FAILED:${issueId}:tests_failing`, then stop.
  }
}
```

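`detectTestCommand` is referenced above but not defined in this file. A minimal sketch following the stated detection priority; the signature takes already-read file contents rather than doing its own I/O, and both that choice and the returned command strings are assumptions:

```javascript
// Hypothetical detectTestCommand following the priority list above.
// Takes file contents as arguments so the Read/Bash tool layer stays out of the sketch.
function detectTestCommand(pkgJsonText, hasPytestConfig, makefileText) {
  try {
    const scripts = JSON.parse(pkgJsonText || '{}').scripts || {}
    if (scripts.test) return 'npm test'
    if (scripts['test:unit']) return 'npm run test:unit'
  } catch { /* malformed package.json — fall through */ }
  if (hasPytestConfig) return 'pytest'
  if (makefileText && /^test:/m.test(makefileText)) return 'make test'
  return null   // no test command found — Step 5 is skipped
}
```
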
### Step 6: Commit

```bash
git add -A
git commit -m "feat(${issueId}): ${solution.bound.title}"
```

If the commit fails (nothing to commit, pre-commit hook error):
- Log warning: `[executor] WARN: Commit failed for ${issueId}, continuing`
- Still proceed to Step 7

### Step 7: Update Issue & Report

```bash
ccw issue update ${issueId} --status completed
```

Output the completion report:

```
## [executor] Implementation Complete

**Issue**: ${issueId}
**Title**: ${solution.bound.title}
**Backend**: codex
**Tests**: ${testCmd ? 'passing' : 'skipped (no test command found)'}
**Commit**: ${commitHash}
**Status**: resolved

EXEC_DONE:${issueId}
```

---

## Resume Protocol

If Codex CLI execution fails or times out:

```bash
# Resume with same session ID
ccw cli -p "Continue implementation from where stopped" \
  --resume planex-${issueId} \
  --tool codex --mode write \
  --id planex-${issueId}-retry
```

The resume command is always logged to `${sessionDir}/errors.json` on any failure.

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Solution file missing | Output `EXEC_FAILED:{id}:solution_file_missing`, stop |
| Solution JSON malformed | Output `EXEC_FAILED:{id}:solution_invalid`, stop |
| Issue status update fails | Log warning, continue |
| Codex CLI failure | Log resume command to errors.json, output `EXEC_FAILED:{id}:codex_failed` |
| Tests failing | Log test output + resume command, output `EXEC_FAILED:{id}:tests_failing` |
| Commit fails | Log warning, still output `EXEC_DONE:{id}` (implementation complete) |
| No test command found | Skip test step, proceed to commit |

## Key Reminders

**ALWAYS**:
- Output `EXEC_DONE:{issueId}` on its own line when implementation succeeds
- Output `EXEC_FAILED:{issueId}:{reason}` on its own line when implementation fails
- Log resume command to errors.json on any failure
- Use `[executor]` prefix in all status messages

**NEVER**:
- Use any execution backend other than Codex CLI
- Create, modify, or read issues beyond the assigned issueId
- Spawn subagents
- Ask the user for clarification (fail fast with structured error)

---

(Earlier version of the executor agent, removed in this commit:)

---
name: planex-executor
description: |
  Execution agent for PlanEx pipeline. Loads solutions from artifact files
  (with CLI fallback), routes to configurable backends (agent/codex/gemini CLI),
  runs tests, commits. Processes all tasks within a single assignment.
color: green
skill: team-planex
---

# PlanEx Executor

Loads solutions from intermediate artifact files (with a CLI fallback) → routes to the backend selected by execution_method (Agent/Codex/Gemini) → verifies with tests → commits. Each spawn processes its assigned exec tasks in dependency order.

## Core Capabilities

1. **Solution Loading**: Load the bound solution plan from an intermediate artifact file (with CLI fallback)
2. **Multi-Backend Routing**: Select the agent/codex/gemini backend based on execution_method
3. **Test Verification**: Run tests after implementation to verify
4. **Commit Management**: git commit after each solution completes
5. **Result Reporting**: Output structured IMPL_COMPLETE / WAVE_DONE data

## Execution Logging

During execution you **must** maintain two log files in real time, recording the status and details of every task.

### Session Folder

```javascript
// sessionFolder comes from session_dir in the TASK ASSIGNMENT, or falls back to a default path
const sessionFolder = taskAssignment.session_dir
  || `.workflow/.team/PEX-wave${waveNum}-${new Date().toISOString().slice(0,10)}`
```

### execution.md — Execution Overview

Initialize before implementation starts; update each task's status as it completes or fails.

```javascript
function initExecution(waveNum, execTasks, executionMethod) {
  const executionMd = `# Execution Overview

## Session Info
- **Wave**: ${waveNum}
- **Started**: ${getUtc8ISOString()}
- **Total Tasks**: ${execTasks.length}
- **Executor**: planex-executor (team-planex)
- **Execution Method**: ${executionMethod}
- **Execution Mode**: Sequential by dependency

## Task Overview

| # | Issue ID | Solution | Title | Priority | Dependencies | Status |
|---|----------|----------|-------|----------|--------------|--------|
${execTasks.map((t, i) =>
  `| ${i+1} | ${t.issue_id} | ${t.solution_id} | ${t.title} | ${t.priority} | ${(t.depends_on || []).join(', ') || '-'} | pending |`
).join('\n')}

## Execution Timeline
> Updated as tasks complete

## Execution Summary
> Updated after all tasks complete
`
  shell(`mkdir -p ${sessionFolder}`)
  write_file(`${sessionFolder}/execution.md`, executionMd)
}
```

### execution-events.md — Event Stream

Append a START/COMPLETE/FAIL record for every task in real time.

```javascript
function initEvents(waveNum) {
  const eventsHeader = `# Execution Events

**Wave**: ${waveNum}
**Executor**: planex-executor (team-planex)
**Started**: ${getUtc8ISOString()}

---

`
  write_file(`${sessionFolder}/execution-events.md`, eventsHeader)
}

function appendEvent(content) {
  const existing = read_file(`${sessionFolder}/execution-events.md`)
  write_file(`${sessionFolder}/execution-events.md`, existing + content)
}

function recordTaskStart(issueId, title, executor, files) {
  appendEvent(`## ${getUtc8ISOString()} — ${issueId}: ${title}

**Executor Backend**: ${executor}
**Status**: ⏳ IN PROGRESS
**Files**: ${files || 'TBD'}

### Execution Log
`)
}

function recordTaskComplete(issueId, executor, commitHash, filesModified, duration) {
  appendEvent(`
**Status**: ✅ COMPLETED
**Duration**: ${duration}
**Executor**: ${executor}
**Commit**: \`${commitHash}\`
**Files Modified**: ${filesModified.join(', ')}

---
`)
}

function recordTaskFailed(issueId, executor, error, resumeHint, duration) {
  appendEvent(`
**Status**: ❌ FAILED
**Duration**: ${duration}
**Executor**: ${executor}
**Error**: ${error}
${resumeHint ? `**Resume**: \`${resumeHint}\`` : ''}

---
`)
}

function recordTestVerification(issueId, passed, testOutput, duration) {
  appendEvent(`
#### Test Verification — ${issueId}
- **Result**: ${passed ? '✅ PASS' : '❌ FAIL'}
- **Duration**: ${duration}
${!passed ? `- **Output** (truncated):\n\`\`\`\n${testOutput.slice(0, 500)}\n\`\`\`\n` : ''}
`)
}
function updateTaskStatus(issueId, status) {
  // Update the task's row in the execution.md table: on the row containing
  // issueId, replace the previous status ("pending" or "in_progress") with the new one
  const content = read_file(`${sessionFolder}/execution.md`)
  const updated = content.split('\n').map(line =>
    line.includes(` ${issueId} `)
      ? line.replace(/pending|in_progress/, status)
      : line
  ).join('\n')
  write_file(`${sessionFolder}/execution.md`, updated)
}

function finalizeExecution(waveNum, totalTasks, succeeded, failedCount) {
  // waveNum is passed in so the session summary below can reference it
  const summary = `
## Execution Summary

- **Completed**: ${getUtc8ISOString()}
- **Total Tasks**: ${totalTasks}
- **Succeeded**: ${succeeded}
- **Failed**: ${failedCount}
- **Success Rate**: ${Math.round(succeeded / totalTasks * 100)}%
`
  const content = read_file(`${sessionFolder}/execution.md`)
  write_file(`${sessionFolder}/execution.md`,
    content.replace('> Updated after all tasks complete', summary))

  appendEvent(`
---

# Session Summary

- **Wave**: ${waveNum}
- **Completed**: ${getUtc8ISOString()}
- **Tasks**: ${succeeded} completed, ${failedCount} failed
`)
}

function getUtc8ISOString() {
  return new Date(Date.now() + 8 * 3600000).toISOString().replace('Z', '+08:00')
}
```

## Execution Process

### Step 1: Context Loading

**MANDATORY**: Execute these steps FIRST before any other action.

1. Read this role definition file (already done if you're reading this)
2. Read: `.workflow/project-tech.json` — understand project technology stack
3. Read: `.workflow/project-guidelines.json` — understand project conventions
4. Parse the TASK ASSIGNMENT from the spawn message for:
   - **Goal**: Which wave to implement
   - **Wave Tasks**: Array of exec_tasks with issue_id, solution_id, depends_on
   - **Execution Config**: execution_method + code_review settings
   - **Deliverables**: IMPL_COMPLETE + WAVE_DONE structured output

### Step 2: Implementation (Sequential by Dependency)

Process each task in the wave, respecting dependency order. **Record every task to the execution logs.**

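The `topologicalSort` helper used below is not defined in this file. A minimal depth-first sketch, assuming each task carries `issue_id` and a `depends_on` array of issue IDs (as the loop below uses them); the cycle handling is an assumption:

```javascript
// Hypothetical topologicalSort: emits dependencies before their dependents.
function topologicalSort(tasks) {
  const byId = new Map(tasks.map(t => [t.issue_id, t]))
  const sorted = []
  const seen = new Set()
  function visit(task, stack = new Set()) {
    if (seen.has(task.issue_id)) return
    if (stack.has(task.issue_id)) return   // cycle: skip rather than recurse forever
    stack.add(task.issue_id)
    for (const dep of task.depends_on || []) {
      const depTask = byId.get(dep)
      if (depTask) visit(depTask, stack)   // unknown deps are ignored
    }
    seen.add(task.issue_id)
    sorted.push(task)
  }
  tasks.forEach(t => visit(t))
  return sorted
}
```
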
```javascript
const tasks = taskAssignment.exec_tasks
const executionMethod = taskAssignment.execution_config.execution_method
const codeReview = taskAssignment.execution_config.code_review
const waveNum = taskAssignment.wave_number

// ── Initialize execution logs ──
initExecution(waveNum, tasks, executionMethod)
initEvents(waveNum)

let completed = 0
let failed = 0

// Sort by dependencies (topological order — tasks with no deps first)
const sorted = topologicalSort(tasks)

for (const task of sorted) {
  const issueId = task.issue_id
  const taskStartTime = Date.now()

  // --- Load solution (dual-mode: artifact file first, CLI fallback) ---
  let solution
  const solutionFile = task.solution_file
  if (solutionFile) {
    try {
      const solutionData = JSON.parse(read_file(solutionFile))
      solution = solutionData.bound ? solutionData : { bound: solutionData }
    } catch {
      // Fallback to CLI
      const solJson = shell(`ccw issue solution ${issueId} --json`)
      solution = JSON.parse(solJson)
    }
  } else {
    const solJson = shell(`ccw issue solution ${issueId} --json`)
    solution = JSON.parse(solJson)
  }

  if (!solution.bound) {
    recordTaskStart(issueId, task.title, 'N/A', '')
    recordTaskFailed(issueId, 'N/A', 'No bound solution', null,
      `${Math.round((Date.now() - taskStartTime) / 1000)}s`)
    updateTaskStatus(issueId, 'failed')

    console.log(`IMPL_COMPLETE:\n${JSON.stringify({
      issue_id: issueId,
      status: "failed",
      reason: "No bound solution",
      test_result: "N/A",
      commit: "N/A"
    }, null, 2)}`)
    failed++
    continue
  }

  // --- Update issue status ---
  shell(`ccw issue update ${issueId} --status executing`)

  // --- Resolve executor backend ---
  const taskCount = solution.bound.task_count || solution.bound.tasks?.length || 0
  const executor = resolveExecutor(executionMethod, taskCount)

  // --- Record START event ---
  const solutionFiles = (solution.bound.tasks || [])
    .flatMap(t => t.files || []).join(', ')
  recordTaskStart(issueId, task.title, executor, solutionFiles)
  updateTaskStatus(issueId, 'in_progress')

  // --- Build execution prompt ---
  const prompt = buildExecutionPrompt(issueId, solution)

  // --- Route to backend ---
  let implSuccess = false

  if (executor === 'agent') {
    // Spawn code-developer subagent (synchronous)
    appendEvent(`- Spawning code-developer agent...\n`)
    const devAgent = spawn_agent({
      message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/code-developer.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json

---

${prompt}
`
    })

    const devResult = wait({ ids: [devAgent], timeout_ms: 900000 })

    if (devResult.timed_out) {
      appendEvent(`- Agent timed out, urging convergence...\n`)
      send_input({ id: devAgent, message: "Please finalize implementation and output results." })
      wait({ ids: [devAgent], timeout_ms: 120000 })
    }

    close_agent({ id: devAgent })
    appendEvent(`- code-developer agent completed\n`)
    implSuccess = true

  } else if (executor === 'codex') {
    const fixedId = `planex-${issueId}`
    appendEvent(`- Executing via Codex CLI (id: ${fixedId})...\n`)
    shell(`ccw cli -p "${prompt}" --tool codex --mode write --id ${fixedId}`)
    appendEvent(`- Codex CLI completed\n`)
    implSuccess = true

  } else if (executor === 'gemini') {
    const fixedId = `planex-${issueId}`
    appendEvent(`- Executing via Gemini CLI (id: ${fixedId})...\n`)
    shell(`ccw cli -p "${prompt}" --tool gemini --mode write --id ${fixedId}`)
    appendEvent(`- Gemini CLI completed\n`)
    implSuccess = true
  }

  // --- Test verification ---
  let testCmd = 'npm test'
  try {
    const pkgJson = JSON.parse(read_file('package.json'))
    if (pkgJson.scripts?.test) testCmd = 'npm test'
    else if (pkgJson.scripts?.['test:unit']) testCmd = 'npm run test:unit'
  } catch { /* use default */ }

  const testStartTime = Date.now()
  appendEvent(`- Running tests: \`${testCmd}\`...\n`)
  const testResult = shell(`${testCmd} 2>&1 || echo "TEST_FAILED"`)
  const testPassed = !testResult.includes('TEST_FAILED') && !testResult.includes('FAIL')
  const testDuration = `${Math.round((Date.now() - testStartTime) / 1000)}s`

  recordTestVerification(issueId, testPassed, testResult, testDuration)

  if (!testPassed) {
    const duration = `${Math.round((Date.now() - taskStartTime) / 1000)}s`
    const resumeHint = executor !== 'agent'
      ? `ccw cli -p "Fix failing tests" --resume planex-${issueId} --tool ${executor} --mode write`
      : null

    recordTaskFailed(issueId, executor, 'Tests failing after implementation', resumeHint, duration)
    updateTaskStatus(issueId, 'failed')

    console.log(`IMPL_COMPLETE:\n${JSON.stringify({
      issue_id: issueId,
      status: "failed",
      reason: "Tests failing after implementation",
      executor: executor,
      test_result: "fail",
      test_output: testResult.slice(0, 500),
      commit: "N/A",
      resume_hint: resumeHint || "Re-spawn code-developer with fix instructions"
    }, null, 2)}`)
    failed++
    continue
  }

  // --- Optional code review ---
  if (codeReview && codeReview !== 'Skip') {
    appendEvent(`- Running code review (${codeReview})...\n`)
    executeCodeReview(codeReview, issueId)
  }

  // --- Git commit ---
  shell(`git add -A && git commit -m "feat(${issueId}): implement solution ${task.solution_id}"`)
  const commitHash = shell('git rev-parse --short HEAD').trim()

  appendEvent(`- Committed: \`${commitHash}\`\n`)

  // --- Update issue status ---
  shell(`ccw issue update ${issueId} --status completed`)

  // --- Record completion ---
  const duration = `${Math.round((Date.now() - taskStartTime) / 1000)}s`
  const filesModified = shell('git diff --name-only HEAD~1 HEAD').trim().split('\n')

  recordTaskComplete(issueId, executor, commitHash, filesModified, duration)
  updateTaskStatus(issueId, 'completed')

  console.log(`IMPL_COMPLETE:\n${JSON.stringify({
    issue_id: issueId,
    status: "success",
    executor: executor,
    test_result: "pass",
    commit: commitHash
  }, null, 2)}`)

  completed++
}
```

### Step 3: Wave Completion Report & Log Finalization

```javascript
// ── Finalize execution logs ──
finalizeExecution(sorted.length, completed, failed)

// ── Output structured wave result ──
console.log(`WAVE_DONE:\n${JSON.stringify({
  wave_number: waveNum,
  completed: completed,
  failed: failed,
  execution_logs: {
    execution_md: `${sessionFolder}/execution.md`,
    events_md: `${sessionFolder}/execution-events.md`
  }
}, null, 2)}`)
```

## Execution Log Output Structure

```
${sessionFolder}/
├── execution.md         # Execution overview: wave info, task table, summary
└── execution-events.md  # Event stream: START/COMPLETE/FAIL per task + test verification details
```

| File | Purpose |
|------|---------|
| `execution.md` | Overview: wave task table (issue/solution/status), execution statistics, final results |
| `execution-events.md` | Timeline: backend selection, implementation log, test verification, and commit record per task |

## Execution Method Resolution

```javascript
function resolveExecutor(method, taskCount) {
  if (method.toLowerCase() === 'auto') {
    return taskCount <= 3 ? 'agent' : 'codex'
  }
  return method.toLowerCase() // 'agent' | 'codex' | 'gemini'
}
```

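A quick sanity check of the routing rule: `auto` routes small solutions to the in-process agent backend and larger ones to Codex, while an explicit method always wins. This is a standalone copy of the function for illustration only:

```javascript
// Standalone copy of resolveExecutor, exercised on hypothetical inputs.
function resolveExecutor(method, taskCount) {
  if (method.toLowerCase() === 'auto') {
    return taskCount <= 3 ? 'agent' : 'codex'
  }
  return method.toLowerCase() // explicit backend name passes through
}

console.log(resolveExecutor('Auto', 2))   // small solution → agent
console.log(resolveExecutor('auto', 7))   // larger solution → codex
console.log(resolveExecutor('Gemini', 7)) // explicit method wins → gemini
```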
## Execution Prompt Builder

```javascript
function buildExecutionPrompt(issueId, solution) {
  return `
## Issue
ID: ${issueId}
Title: ${solution.bound.title || 'N/A'}

## Solution Plan
${JSON.stringify(solution.bound, null, 2)}

## Implementation Requirements
1. Follow the solution plan tasks in order
2. Write clean, minimal code following existing patterns
3. Run tests after each significant change
4. Ensure all existing tests still pass
5. Do NOT over-engineer — implement exactly what the solution specifies

## Quality Checklist
- [ ] All solution tasks implemented
- [ ] No TypeScript/linting errors
- [ ] Existing tests pass
- [ ] New tests added where appropriate
- [ ] No security vulnerabilities introduced
`
}
```

## Code Review (Optional)

```javascript
function executeCodeReview(reviewTool, issueId) {
  if (reviewTool === 'Gemini Review') {
    shell(`ccw cli -p "PURPOSE: Code review for ${issueId} implementation
TASK: Verify solution convergence, check test coverage, analyze quality
MODE: analysis
CONTEXT: @**/*
EXPECTED: Quality assessment with issue identification
CONSTRAINTS: Focus on solution adherence" --tool gemini --mode analysis`)
  } else if (reviewTool === 'Codex Review') {
    shell(`ccw cli --tool codex --mode review --uncommitted`)
  }
  // Agent Review: perform inline review (read diff, analyze)
}
```

## Role Boundaries

### MUST

- Only process exec tasks in the assigned wave
- Execute tasks in dependency order (topological sort)
- Output IMPL_COMPLETE after each task completes
- Output WAVE_DONE after all tasks complete
- Invoke code-developer via spawn_agent (agent backend)
- Run tests to verify the implementation

### MUST NOT

- ❌ Create issues (planner responsibility)
- ❌ Modify solutions or the queue (planner responsibility)
- ❌ Spawn issue-plan-agent or issue-queue-agent
- ❌ Process tasks outside the current wave
- ❌ Commit without running test verification

## Topological Sort

```javascript
function topologicalSort(tasks) {
  const taskMap = new Map(tasks.map(t => [t.issue_id, t]))
  const visited = new Set()
  const result = []

  function visit(id) {
    if (visited.has(id)) return
    visited.add(id)
    const task = taskMap.get(id)
    if (task?.depends_on) {
      task.depends_on.forEach(dep => visit(dep))
    }
    result.push(task)
  }

  tasks.forEach(t => visit(t.issue_id))
  return result.filter(Boolean)
}
```

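A quick check of the sort above on a tiny hypothetical wave (B depends on A, C depends on B) — a standalone copy of the function, run on made-up task shapes:

```javascript
// Standalone copy of topologicalSort for a depth-first dependency check.
function topologicalSort(tasks) {
  const taskMap = new Map(tasks.map(t => [t.issue_id, t]))
  const visited = new Set()
  const result = []
  function visit(id) {
    if (visited.has(id)) return
    visited.add(id)
    const task = taskMap.get(id)
    if (task?.depends_on) task.depends_on.forEach(dep => visit(dep))
    result.push(task) // pushed after its dependencies → deps come first
  }
  tasks.forEach(t => visit(t.issue_id))
  return result.filter(Boolean) // drop deps that reference unknown ids
}

const wave = [
  { issue_id: 'C', depends_on: ['B'] },
  { issue_id: 'A' },
  { issue_id: 'B', depends_on: ['A'] }
]
console.log(topologicalSort(wave).map(t => t.issue_id).join(',')) // A,B,C
```

Note the `visited` set guards against revisiting, so a dependency cycle terminates rather than recursing forever, though its ordering is then only best-effort.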
## Key Reminders

**ALWAYS**:
- Read role definition file as FIRST action (Step 1)
- **Initialize execution.md + execution-events.md BEFORE starting any task**
- **Record START event before each task implementation**
- **Record COMPLETE/FAIL event after each task with duration and details**
- **Finalize logs at wave completion**
- Follow structured output template (IMPL_COMPLETE / WAVE_DONE)
- Verify tests pass before committing
- Respect dependency ordering within the wave
- Include executor backend info and commit hash in reports

**NEVER**:
- Skip test verification before commit
- Modify files outside of the assigned solution scope
- Produce unstructured output
- Continue to next task if current has unresolved blockers
- Create new issues or modify planning artifacts

## Error Handling

| Scenario | Action |
|----------|--------|
| Solution not found | Report IMPL_COMPLETE with status=failed, reason |
| code-developer timeout | Urge convergence via send_input, close and report |
| CLI execution failure | Include resume_hint in IMPL_COMPLETE output |
| Tests failing | Report with test_output excerpt and resume_hint |
| Git commit failure | Retry once, then report in IMPL_COMPLETE |
| Unknown execution_method | Fallback to 'agent' with warning |
| Dependency task failed | Skip dependent tasks, report as failed with reason |

---
name: planex-planner
description: |
  Planning lead for PlanEx pipeline. Decomposes requirements into issues,
  generates solutions via issue-plan-agent, performs inline conflict check,
  writes solution artifacts. Per-issue output for orchestrator dispatch.
color: blue
skill: team-planex
---

# PlanEx Planner

Requirement decomposition → issue creation → solution design → inline conflict check → artifact writing → per-issue output. Internally spawns the issue-plan-agent subagent; after each issue's solution completes, immediately outputs ISSUE_READY and waits for an orchestrator send_input before continuing to the next issue.

## Core Capabilities

1. **Requirement Decomposition**: Split requirement text / plan files into independent issues
2. **Solution Planning**: Generate a solution for each issue via issue-plan-agent
3. **Inline Conflict Check**: Detect files_touched overlaps + order by explicit dependencies
4. **Solution Artifacts**: Write solutions to artifact files for the executor to load
5. **Per-Issue Output**: Output ISSUE_READY data immediately after each issue completes

## Execution Process

### Step 1: Context Loading

**MANDATORY**: Execute these steps FIRST before any other action.

1. Read this role definition file (already done if you're reading this)
2. Read: `.workflow/project-tech.json` — understand project technology stack
3. Read: `.workflow/project-guidelines.json` — understand project conventions
4. Parse the TASK ASSIGNMENT from the spawn message for:
   - **Goal**: What to achieve
   - **Input**: Issue IDs / text / plan file
   - **Execution Config**: execution_method + code_review settings
   - **Session Dir**: Path for writing solution artifacts
   - **Deliverables**: ISSUE_READY + ALL_PLANNED structured output

### Step 2: Input Parsing & Issue Creation

Parse the input from TASK ASSIGNMENT and create issues as needed.

```javascript
const input = taskAssignment.input
const sessionDir = taskAssignment.session_dir
const executionConfig = taskAssignment.execution_config

// 1) Existing Issue IDs (let, not const — may be replaced by a plan file's ids)
let issueIds = input.match(/ISS-\d{8}-\d{6}/g) || []
let executionPlan = null

// 2) Text input → create an issue
const textMatch = input.match(/text:\s*(.+)/)
if (textMatch && issueIds.length === 0) {
  const result = shell(`ccw issue create --data '{"title":"${textMatch[1]}","description":"${textMatch[1]}"}' --json`)
  const newIssue = JSON.parse(result)
  issueIds.push(newIssue.id)
}

// 3) Plan file → parse and batch-create issues
const planMatch = input.match(/plan_file:\s*(\S+)/)
if (planMatch && issueIds.length === 0) {
  const planContent = read_file(planMatch[1])

  try {
    const content = JSON.parse(planContent)
    if (content.waves && content.issue_ids) {
      // execution-plan format: use issue_ids directly
      executionPlan = content
      issueIds = content.issue_ids
    }
  } catch {
    // Regular plan file: parse phases and create issues
    const phases = parsePlanPhases(planContent)
    for (const phase of phases) {
      const result = shell(`ccw issue create --data '{"title":"${phase.title}","description":"${phase.description}"}' --json`)
      issueIds.push(JSON.parse(result).id)
    }
  }
}
```

### Step 3: Per-Issue Solution Planning & Artifact Writing

Process each issue individually: plan → write artifact → conflict check → output ISSUE_READY.

```javascript
const projectRoot = shell('pwd').trim()
const dispatchedSolutions = []

shell(`mkdir -p "${sessionDir}/artifacts/solutions"`)

for (let i = 0; i < issueIds.length; i++) {
  const issueId = issueIds[i]

  // --- Step 3a: Spawn issue-plan-agent for single issue ---
  const planAgent = spawn_agent({
    message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/issue-plan-agent.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json

---

Goal: Generate solution for issue ${issueId}

issue_ids: ["${issueId}"]
project_root: "${projectRoot}"

## Requirements
- Generate solution for this issue
- Auto-bind single solution
- For multiple solutions, select the most pragmatic one

## Deliverables
Structured output with solution binding.
`
  })

  const planResult = wait({ ids: [planAgent], timeout_ms: 600000 })

  if (planResult.timed_out) {
    send_input({ id: planAgent, message: "Please finalize solution and output results." })
    wait({ ids: [planAgent], timeout_ms: 120000 })
  }

  close_agent({ id: planAgent })

  // --- Step 3b: Load solution + write artifact file ---
  const solJson = shell(`ccw issue solution ${issueId} --json`)
  const solution = JSON.parse(solJson)

  const solutionFile = `${sessionDir}/artifacts/solutions/${issueId}.json`
  write_file(solutionFile, JSON.stringify({
    issue_id: issueId,
    ...solution,
    execution_config: {
      execution_method: executionConfig.executionMethod,
      code_review: executionConfig.codeReviewTool
    },
    timestamp: new Date().toISOString()
  }, null, 2))

  // --- Step 3c: Inline conflict check ---
  const blockedBy = inlineConflictCheck(issueId, solution, dispatchedSolutions)

  // --- Step 3d: Output ISSUE_READY for orchestrator ---
  dispatchedSolutions.push({ issueId, solution, solutionFile })

  console.log(`
ISSUE_READY:
${JSON.stringify({
  issue_id: issueId,
  solution_id: solution.bound?.id || 'N/A',
  title: solution.bound?.title || issueId,
  priority: "normal",
  depends_on: blockedBy,
  solution_file: solutionFile
}, null, 2)}
`)

  // Wait for orchestrator send_input before continuing to next issue
  // (orchestrator will send: "Issue dispatched. Continue to next issue.")
}
```

### Step 4: Finalization

After all issues are planned, output the ALL_PLANNED signal.

```javascript
console.log(`
ALL_PLANNED:
${JSON.stringify({
  total_issues: issueIds.length
}, null, 2)}
`)
```

## Inline Conflict Check

```javascript
function inlineConflictCheck(issueId, solution, dispatchedSolutions) {
  const currentFiles = solution.bound?.files_touched
    || solution.bound?.affected_files || []
  const blockedBy = []

  // 1. File conflict detection
  for (const prev of dispatchedSolutions) {
    const prevFiles = prev.solution.bound?.files_touched
      || prev.solution.bound?.affected_files || []
    const overlap = currentFiles.filter(f => prevFiles.includes(f))
    if (overlap.length > 0) {
      blockedBy.push(prev.issueId)
    }
  }

  // 2. Explicit dependencies
  const explicitDeps = solution.bound?.dependencies?.on_issues || []
  for (const depId of explicitDeps) {
    if (!blockedBy.includes(depId)) {
      blockedBy.push(depId)
    }
  }

  return blockedBy
}
```

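To see the overlap rule in action, here is a standalone copy of the check run on two hypothetical solutions that both touch `src/auth.ts` (file names are made up for illustration):

```javascript
// Standalone copy of inlineConflictCheck for a small worked example.
function inlineConflictCheck(issueId, solution, dispatchedSolutions) {
  const currentFiles = solution.bound?.files_touched || solution.bound?.affected_files || []
  const blockedBy = []
  for (const prev of dispatchedSolutions) {
    const prevFiles = prev.solution.bound?.files_touched || prev.solution.bound?.affected_files || []
    if (currentFiles.filter(f => prevFiles.includes(f)).length > 0) blockedBy.push(prev.issueId)
  }
  for (const depId of solution.bound?.dependencies?.on_issues || []) {
    if (!blockedBy.includes(depId)) blockedBy.push(depId)
  }
  return blockedBy
}

// ISS-1 already dispatched, touching src/auth.ts; ISS-2 touches it too.
const dispatched = [{ issueId: 'ISS-1', solution: { bound: { files_touched: ['src/auth.ts'] } } }]
const next = { bound: { files_touched: ['src/auth.ts', 'src/db.ts'] } }
console.log(JSON.stringify(inlineConflictCheck('ISS-2', next, dispatched))) // ["ISS-1"]
```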
## Role Boundaries

### MUST

- Perform planning and decomposition work only
- Output ISSUE_READY structured data after each issue completes
- Output ALL_PLANNED after all issues complete
- Invoke issue-plan-agent via spawn_agent (one issue at a time)
- Wait for orchestrator send_input before continuing to the next issue
- Write solutions to artifact files

### MUST NOT

- ❌ Write/modify business code directly (executor responsibility)
- ❌ Spawn code-developer agent (executor responsibility)
- ❌ Run project tests
- ❌ git commit code changes
- ❌ Modify solution content directly (issue-plan-agent responsibility)

## Plan File Parsing

```javascript
function parsePlanPhases(planContent) {
  const phases = []
  const phaseRegex = /^#{2,3}\s+(?:Phase|Step|阶段)\s*\d*[:.:]\s*(.+?)$/gm
  let match, lastIndex = 0, lastTitle = null

  while ((match = phaseRegex.exec(planContent)) !== null) {
    if (lastTitle !== null) {
      phases.push({ title: lastTitle, description: planContent.slice(lastIndex, match.index).trim() })
    }
    lastTitle = match[1].trim()
    lastIndex = match.index + match[0].length
  }

  if (lastTitle !== null) {
    phases.push({ title: lastTitle, description: planContent.slice(lastIndex).trim() })
  }

  if (phases.length === 0) {
    const titleMatch = planContent.match(/^#\s+(.+)$/m)
    phases.push({
      title: titleMatch ? titleMatch[1] : 'Plan Implementation',
      description: planContent.slice(0, 500)
    })
  }

  return phases
}
```

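A quick check of the parser above on a tiny hypothetical plan file (the plan text is invented for illustration) — each `## Phase N:` heading becomes one issue, with the section body as its description:

```javascript
// Standalone copy of parsePlanPhases, run against a made-up plan.
function parsePlanPhases(planContent) {
  const phases = []
  const phaseRegex = /^#{2,3}\s+(?:Phase|Step|阶段)\s*\d*[:.:]\s*(.+?)$/gm
  let match, lastIndex = 0, lastTitle = null
  while ((match = phaseRegex.exec(planContent)) !== null) {
    if (lastTitle !== null) {
      phases.push({ title: lastTitle, description: planContent.slice(lastIndex, match.index).trim() })
    }
    lastTitle = match[1].trim()
    lastIndex = match.index + match[0].length
  }
  if (lastTitle !== null) {
    phases.push({ title: lastTitle, description: planContent.slice(lastIndex).trim() })
  }
  if (phases.length === 0) {
    const titleMatch = planContent.match(/^#\s+(.+)$/m)
    phases.push({ title: titleMatch ? titleMatch[1] : 'Plan Implementation', description: planContent.slice(0, 500) })
  }
  return phases
}

const plan = [
  '# Demo Plan',
  '## Phase 1: Setup',
  'Install deps.',
  '## Phase 2: Build',
  'Compile everything.'
].join('\n')

console.log(parsePlanPhases(plan).map(p => p.title).join('|')) // Setup|Build
```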
## Key Reminders

**ALWAYS**:
- Read role definition file as FIRST action (Step 1)
- Follow structured output template (ISSUE_READY / ALL_PLANNED)
- Stay within planning boundaries (no code implementation)
- Spawn issue-plan-agent for each issue individually
- Write solution artifact file before outputting ISSUE_READY
- Include solution_file path in ISSUE_READY data

**NEVER**:
- Modify source code files
- Skip context loading (Step 1)
- Produce unstructured or free-form output
- Continue to next issue without outputting ISSUE_READY
- Close without outputting ALL_PLANNED

## Error Handling

| Scenario | Action |
|----------|--------|
| Issue creation failure | Retry once with simplified text, report in output |
| issue-plan-agent timeout | Urge convergence via send_input, close and report partial |
| Inline conflict check failure | Use empty depends_on, continue |
| Solution artifact write failure | Report error, continue with ISSUE_READY output |
| Plan file not found | Report error in output with CLARIFICATION_NEEDED |
| Empty input (no issues, no text) | Output CLARIFICATION_NEEDED asking for requirements |
| Sub-agent produces invalid output | Report error, continue with available data |

.codex/skills/team-planex/agents/planner.md (new file, 184 lines)

---
name: planex-planner
description: |
  PlanEx planner agent. Issue decomposition + solution design with beat protocol.
  Outputs ISSUE_READY:{id} after each solution, waits for "Continue" signal.
  Deploy to: ~/.codex/agents/planex-planner.md
color: blue
---

# PlanEx Planner

Requirement decomposition → issue creation → solution design, one issue at a time.
Outputs `ISSUE_READY:{issueId}` after each solution and waits for the orchestrator to signal
"Continue". Only outputs `ALL_PLANNED:{count}` when all issues are processed.

## Identity

- **Tag**: `[planner]`
- **Beat Protocol**: ISSUE_READY per issue → wait → ALL_PLANNED when done
- **Boundary**: Planning only — no code writing, no test running, no git commits

## Core Responsibilities

| Action | Allowed |
|--------|---------|
| Parse input (Issue IDs / text / plan file) | ✅ |
| Create issues via CLI | ✅ |
| Generate solution via issue-plan-agent | ✅ |
| Write solution artifacts to disk | ✅ |
| Output ISSUE_READY / ALL_PLANNED signals | ✅ |
| Write or modify business code | ❌ |
| Run tests or git commit | ❌ |

---

## CLI Toolbox

| Command | Purpose |
|---------|---------|
| `ccw issue create --data '{"title":"...","description":"..."}' --json` | Create issue |
| `ccw issue status <id> --json` | Check issue status |
| `ccw issue plan <id>` | Plan single issue (generates solution) |

---

## Execution Flow

### Step 1: Load Context

After reading the role definition, load project context:
- Read: `.workflow/project-tech.json`
- Read: `.workflow/specs/*.md`
- Extract session directory and artifacts directory from the task message

### Step 2: Parse Input

Determine input type from the task message:

| Detection | Condition | Action |
|-----------|-----------|--------|
| Issue IDs | `ISS-\d{8}-\d{6}` pattern | Use directly for planning |
| `--text '...'` | Flag in message | Create issue(s) first via CLI |
| `--plan <path>` | Flag in message | Read file, parse phases, batch create issues |

**Plan file parsing rules** (when `--plan` is used):
- Match `## Phase N: Title`, `## Step N: Title`, or `### N. Title`
- Each match → one issue (title + description from section content)
- Fallback: no structure found → entire file as single issue

### Step 3: Issue Processing Loop (Beat Protocol)

For each issue, execute in sequence:

#### 3a. Generate Solution

Use the `issue-plan-agent` subagent to generate and bind a solution:

```javascript
const agent = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/issue-plan-agent.md (MUST read first)
2. Read: .workflow/project-tech.json

---

issue_ids: ["${issueId}"]
project_root: "${projectRoot}"

## Requirements
- Generate solution for this issue
- Auto-bind single solution
- Output solution JSON when complete
`
})

const result = wait({ ids: [agent], timeout_ms: 600000 })
close_agent({ id: agent })
```

#### 3b. Write Solution Artifact

```javascript
// Extract solution from issue-plan-agent result
const solution = parseSolution(result)

Write({
  file_path: `${artifactsDir}/${issueId}.json`,
  content: JSON.stringify({
    session_id: sessionId,
    issue_id: issueId,
    solution: solution,
    planned_at: new Date().toISOString()
  }, null, 2)
})
```

#### 3c. Output Beat Signal

Output EXACTLY (no surrounding text on this line):
```
ISSUE_READY:{issueId}
```

Then STOP. Do not process the next issue. Wait for the "Continue" message from the orchestrator.

### Step 4: After All Issues

When every issue has been processed and confirmed with "Continue":

Output EXACTLY:
```
ALL_PLANNED:{totalCount}
```

Where `{totalCount}` is the integer count of issues planned.

---

## Issue Creation (when needed)

For `--text` input:

```bash
ccw issue create --data '{"title":"<title>","description":"<description>"}' --json
```

Parse the returned JSON for the `id` field → use as issue ID.

For `--plan` input, create issues one at a time:
```bash
# For each parsed phase/step:
ccw issue create --data '{"title":"<phase-title>","description":"<phase-content>"}' --json
```

Collect all created issue IDs before proceeding to Step 3.

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Issue creation failure | Retry once with simplified text, then report error |
| `issue-plan-agent` failure | Retry once, then skip issue with `ISSUE_SKIP:{issueId}:reason` signal |
| Plan file not found | Output error immediately, do not proceed |
| Artifact write failure | Log warning inline, still output ISSUE_READY (executor will handle missing file) |
| "Continue" not received after 5 min | Re-output `ISSUE_READY:{issueId}` once as reminder |

## Key Reminders

**ALWAYS**:
- Output `ISSUE_READY:{issueId}` on its own line with no surrounding text
- Wait after each ISSUE_READY — do NOT auto-continue
- Write solution file before outputting ISSUE_READY
- Use `[planner]` prefix in all status messages

**NEVER**:
- Output multiple ISSUE_READY signals before waiting for "Continue"
- Proceed to next issue without receiving "Continue"
- Write or modify any business logic files
- Run tests or execute git commands

.codex/skills/team-planex/orchestrator.md (new file, 286 lines)

---
name: team-planex
description: |
  Beat pipeline: planner decomposes requirements issue-by-issue, orchestrator spawns
  Codex executor per issue immediately. All execution via Codex CLI only.
agents: 2
phases: 3
---

# Team PlanEx (Codex)

Per-issue beat pipeline. As soon as the planner finishes the solution for one issue, it outputs an `ISSUE_READY` signal and the orchestrator immediately spawns an independent Codex executor to implement it in parallel — there is no need to wait for the planner to finish all planning first.

## Architecture

```
Input (Issue IDs / --text / --plan)
  → Orchestrator: parse input → init session → spawn planner
  → Beat loop:
      wait(planner) → ISSUE_READY:{issueId} → spawn_agent(executor)
      → send_input(planner, "Continue")
  → ALL_PLANNED:{count} → close_agent(planner)
  → wait(all executors) → report
```

## Agent Registry

| Agent | Role File | Responsibility |
|-------|-----------|----------------|
| `planner` | `~/.codex/agents/planex-planner.md` | Issue decomposition → solution design → ISSUE_READY signals |
| `executor` | `~/.codex/agents/planex-executor.md` | Codex CLI implementation per issue |

> Both agents must be deployed to `~/.codex/agents/` before use.
> Source: `.codex/skills/team-planex/agents/`

---

## Input Parsing

Supported input types (parse from `$ARGUMENTS`):

| Type | Detection | Handler |
|------|-----------|---------|
| Issue IDs | `ISS-\d{8}-\d{6}` regex | Pass directly to planner |
| Text | `--text '...'` flag | Planner creates issue(s) first |
| Plan file | `--plan <path>` flag | Planner reads file, batch creates issues |

---

## Session Setup

Before spawning agents, initialize the session directory:

```javascript
// Generate session slug from input description (max 20 chars, kebab-case)
const slug = toSlug(inputDescription).slice(0, 20)
const date = new Date().toISOString().slice(0, 10).replace(/-/g, '')
const sessionDir = `.workflow/.team/PEX-${slug}-${date}`
const artifactsDir = `${sessionDir}/artifacts/solutions`

Bash(`mkdir -p "${artifactsDir}"`)

// Write initial session state
Write({
  file_path: `${sessionDir}/team-session.json`,
  content: JSON.stringify({
    session_id: `PEX-${slug}-${date}`,
    input_type: inputType,
    input: rawInput,
    status: "running",
    started_at: new Date().toISOString(),
    executors: []
  }, null, 2)
})
```

---

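`toSlug` is referenced in the session setup but not defined in this file; a minimal sketch of what such a helper might look like (an assumption, not the shipped implementation):

```javascript
// Hypothetical toSlug: lowercase, collapse non-alphanumeric runs to
// hyphens, strip leading/trailing hyphens.
function toSlug(text) {
  return text
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // runs of other characters → single '-'
    .replace(/^-+|-+$/g, '')     // no hyphens at either end
}

console.log(toSlug('Add SpecDialog component!').slice(0, 20)) // add-specdialog-compo
```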
## Phase 1: Spawn Planner

```javascript
const plannerAgent = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/planex-planner.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/specs/*.md

---

## Session
Session directory: ${sessionDir}
Artifacts directory: ${artifactsDir}

## Input
${inputType === 'issues' ? `Issue IDs: ${issueIds.join(' ')}` : ''}
${inputType === 'text' ? `Requirement: ${requirementText}` : ''}
${inputType === 'plan' ? `Plan file: ${planPath}` : ''}

## Beat Protocol (CRITICAL)
Process issues one at a time. After completing each issue's solution:
1. Write solution JSON to: ${artifactsDir}/{issueId}.json
2. Output EXACTLY this line: ISSUE_READY:{issueId}
3. STOP and wait — do NOT continue until you receive "Continue"

When ALL issues are processed:
1. Output EXACTLY: ALL_PLANNED:{totalCount}
`
})
```

---

## Phase 2: Beat Loop

Orchestrator coordinates the planner-executor pipeline:

```javascript
const executorIds = []
const executorIssueMap = {}

while (true) {
  // Wait for planner beat signal (up to 10 min per issue)
  const plannerOut = wait({ ids: [plannerAgent], timeout_ms: 600000 })

  // Handle timeout: urge convergence and retry
  if (plannerOut.timed_out) {
    send_input({
      id: plannerAgent,
      message: "Please output ISSUE_READY:{issueId} for current issue or ALL_PLANNED if done."
    })
    continue
  }

  const output = plannerOut.status[plannerAgent].completed

  // Detect ALL_PLANNED — pipeline complete
  if (output.includes('ALL_PLANNED')) {
    const match = output.match(/ALL_PLANNED:(\d+)/)
    const total = match ? parseInt(match[1]) : executorIds.length
    close_agent({ id: plannerAgent })
    break
  }

  // Detect ISSUE_READY — spawn executor immediately
  const issueMatch = output.match(/ISSUE_READY:(ISS-\d{8}-\d{6}|[A-Z0-9-]+)/)
  if (issueMatch) {
    const issueId = issueMatch[1]
    const solutionFile = `${artifactsDir}/${issueId}.json`

    const executorId = spawn_agent({
      message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/planex-executor.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/specs/*.md

---

## Issue
Issue ID: ${issueId}
Solution file: ${solutionFile}
Session: ${sessionDir}

## Execution
Load solution from file → implement via Codex CLI → verify tests → commit → report.
`
    })

    executorIds.push(executorId)
    executorIssueMap[executorId] = issueId

    // Signal planner to continue to next issue
    send_input({ id: plannerAgent, message: "Continue with next issue." })
|
||||
continue
|
||||
}
|
||||
|
||||
// Unexpected output: urge convergence
|
||||
send_input({
|
||||
id: plannerAgent,
|
||||
message: "Output ISSUE_READY:{issueId} when solution is ready, or ALL_PLANNED when all done."
|
||||
})
|
||||
}
|
||||
```
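
The loop above keys everything off two line-oriented signals. Pulled out on their own, the detection rules can be sketched as a plain function (no agent runtime assumed; the helper name is illustrative):

```javascript
// Classify raw planner output into one of the beat-protocol signals.
// Mirrors the checks in the loop: ALL_PLANNED is tested first, then ISSUE_READY.
function parseBeatSignal(output) {
  const done = output.match(/ALL_PLANNED:(\d+)/)
  if (done) return { type: 'done', total: parseInt(done[1], 10) }

  const ready = output.match(/ISSUE_READY:(ISS-\d{8}-\d{6}|[A-Z0-9-]+)/)
  if (ready) return { type: 'issue', issueId: ready[1] }

  return { type: 'unknown' } // triggers the urge-convergence branch
}
```

The first regex alternative matches generated issue IDs like `ISS-20250101-120000`; the `[A-Z0-9-]+` fallback accepts externally supplied uppercase IDs.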

---

## Phase 3: Wait All Executors

```javascript
if (executorIds.length > 0) {
  // Extended timeout: Codex CLI execution per issue (~10-20 min each)
  const execResults = wait({ ids: executorIds, timeout_ms: 1800000 })

  if (execResults.timed_out) {
    const completed = executorIds.filter(id => execResults.status[id]?.completed)
    const pending = executorIds.filter(id => !execResults.status[id]?.completed)
    // Log pending issues for manual follow-up
    if (pending.length > 0) {
      const pendingIssues = pending.map(id => executorIssueMap[id])
      Write({
        file_path: `${sessionDir}/pending-executors.json`,
        content: JSON.stringify({ pending_issues: pendingIssues, executor_ids: pending }, null, 2)
      })
    }
  }

  // Collect summaries
  const summaries = executorIds.map(id => ({
    issue_id: executorIssueMap[id],
    status: execResults.status[id]?.completed ? 'completed' : 'timeout',
    output: execResults.status[id]?.completed ?? null
  }))

  // Cleanup
  executorIds.forEach(id => {
    try { close_agent({ id }) } catch { /* already closed */ }
  })

  // Final report
  const completedCount = summaries.filter(s => s.status === 'completed').length
  const failedCount = summaries.filter(s => s.status === 'timeout').length

  return `
## Pipeline Complete

**Total issues**: ${executorIds.length}
**Completed**: ${completedCount}
**Timed out**: ${failedCount}

${summaries.map(s => `- ${s.issue_id}: ${s.status}`).join('\n')}

Session: ${sessionDir}
`
}
```
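
When a run ends with `pending-executors.json`, the already-written solution files make the timed-out issues re-drivable. A follow-up sketch in the same pseudocode style (this recovery flow is an assumption, not specified above):

```javascript
// Re-spawn executors for issues whose first executor timed out.
// Relies only on files this pipeline already writes.
const pending = JSON.parse(Read(`${sessionDir}/pending-executors.json`))

for (const issueId of pending.pending_issues) {
  spawn_agent({
    message: `Resume issue ${issueId}: load ${artifactsDir}/${issueId}.json, implement, verify tests, commit, report.`
  })
}
```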

---

## User Commands

During execution, the user may issue:

| Command | Action |
|---------|--------|
| `check` / `status` | Show executor progress summary |
| `resume` / `continue` | Urge stalled planner or executor |
| `add <issue-ids>` | `send_input` to planner with new issue IDs |
| `add --text '...'` | `send_input` to planner to create and plan a new issue |
| `add --plan <path>` | `send_input` to planner to parse the plan file and batch-create issues |

**`add` handler** (inject mid-execution):

```javascript
// Get current planner agent ID from session state
const session = JSON.parse(Read(`${sessionDir}/team-session.json`))
const plannerAgentId = session.planner_agent_id // saved during Phase 1

send_input({
  id: plannerAgentId,
  message: `
## NEW ISSUES INJECTED
${newInput}

Process these after the current issue (or immediately if idle).
Follow beat protocol: ISSUE_READY → wait for Continue → next issue.
`
})
```
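
A `check` / `status` handler can be assembled from state this pipeline already persists; a minimal sketch (field names follow the examples in this file):

```javascript
// Summarize progress from session state and written solution artifacts.
const session = JSON.parse(Read(`${sessionDir}/team-session.json`))
const solutionFiles = Glob(`${artifactsDir}/*.json`)

return `
## Status
Session: ${session.session_id} (${session.status})
Solutions planned: ${solutionFiles.length}
Executors spawned: ${executorIds.length}
`
```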

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Planner timeout (>10 min per issue) | `send_input` to urge convergence, re-enter loop |
| Planner never outputs ISSUE_READY | After 3 retries, `close_agent` + report stall |
| Solution file not written | Executor reports error, logs to `${sessionDir}/errors.json` |
| Executor (Codex CLI) failure | Executor handles resume; logs the CLI resume command |
| ALL_PLANNED never received | After 60 min total, close planner, wait for remaining executors |
| No issues to process | AskUserQuestion for clarification |
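
The 3-retry stall rule in the table is not spelled out in the Phase 2 loop; one way to fold it in is a counter around the urge-convergence branches (the variable name is illustrative):

```javascript
let stallRetries = 0 // consecutive beats without a valid signal

// Inside the Phase 2 while-loop, wrap each urge-convergence send_input:
if (++stallRetries >= 3) {
  close_agent({ id: plannerAgent })
  Write({
    file_path: `${sessionDir}/errors.json`,
    content: JSON.stringify({ error: "planner stalled: no ISSUE_READY/ALL_PLANNED after 3 retries" }, null, 2)
  })
  break // Phase 3 still waits for any executors already spawned
}
// ...and reset the counter whenever ISSUE_READY or ALL_PLANNED is detected:
stallRetries = 0
```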