feat: migrate all codex team skills from spawn_agents_on_csv to spawn_agent + wait_agent architecture

- Delete 21 old team skill directories using CSV-wave pipeline pattern (~100+ files)
- Delete old team-lifecycle (v3) and team-planex-v2
- Create generic team-worker.toml and team-supervisor.toml (replacing tlv4-specific TOMLs)
- Convert 19 team skills from Claude Code format (Agent/SendMessage/TaskCreate)
  to Codex format (spawn_agent/wait_agent/tasks.json/request_user_input)
- Update team-lifecycle-v4 to use generic agent types (team_worker/team_supervisor)
- Convert all coordinator role files: dispatch.md, monitor.md, role.md
- Convert all worker role files: remove run_in_background, fix Bash syntax
- Convert all specs/pipelines.md references
- Final state: 20 team skills, 217 .md files, zero Claude Code API residuals

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
This commit is contained in:
catlog22
2026-03-24 16:54:48 +08:00
parent 54283e5dbb
commit 1e560ab8e8
334 changed files with 28996 additions and 35516 deletions

View File

@@ -1,504 +1,130 @@
---
name: team-planex
description: |
Plan-and-execute pipeline with inverted control. Accepts issues.jsonl or roadmap
session from roadmap-with-file. Delegates planning to issue-plan-agent (background),
executes inline (main flow). Interleaved plan-execute loop.
description: Unified team skill for plan-and-execute pipeline. Pure router — coordinator always. Beat model is coordinator-only in monitor.md. Triggers on "team planex".
allowed-tools: spawn_agent(*), wait_agent(*), send_input(*), close_agent(*), report_agent_job_result(*), request_user_input(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*), mcp__ccw-tools__team_msg(*)
---
# Team PlanEx
Accepts `issues.jsonl` or a roadmap session as input; delegates planning and executes inline. Spawns issue-plan-agent to plan the next issue in the background while the main flow executes the currently planned issue inline. Alternating loop: plan → execute → plan next → execute → until all issues complete.
Unified team skill: plan-and-execute pipeline for issue-based development. Built on **team-worker agent architecture** — coordinator orchestrates, workers are team-worker agents loading role-specific instructions from `roles/<role>/role.md`.
## Architecture
```
┌────────────────────────────────────────────────────────────┐
│ SKILL.md (main flow = inline execution + loop control)     │
│                                                            │
│ Phase 1: Load issues.jsonl / roadmap session               │
│ Phase 2: Interleaved plan-execute loop                     │
│   ├── Take next unplanned issue → spawn planner (bg)       │
│   ├── Take next planned issue → execute inline             │
│   │     ├── Inline implementation (Read/Edit/Write/Bash)   │
│   │     ├── Verify tests + self-repair (max 3 retries)     │
│   │     └── git commit                                     │
│   └── Loop until all issues planned + executed             │
│ Phase 3: Summary report                                    │
└────────────────────────────────────────────────────────────┘

Planner (single reusable agent, background):
  issue-plan-agent spawn once → send_input per issue → write solution JSON → write .ready marker
Skill(skill="team-planex", args="task description")
|
SKILL.md (this file) = Router
|
+--------------+--------------+
| |
no --role flag --role <name>
| |
Coordinator Worker
roles/coordinator/role.md roles/<name>/role.md
|
+-- analyze -> dispatch -> spawn workers -> STOP
|
+---------------+---------------+
v v
[planner] [executor]
(team-worker agent, (team-worker agent,
loads roles/planner/role.md) loads roles/executor/role.md)
```
## Beat Model (Interleaved Plan-Execute Loop)
```
Interleaved Loop (Phase 2, single planner reused via send_input):
═══════════════════════════════════════════════════════════════
  Beat 1             Beat 2              Beat 3
    |                  |                   |
  spawn+plan(A)      send_input(C)       (drain)
    ↓                  ↓                   ↓
  poll → A.ready     poll → B.ready      poll → C.ready
    ↓                  ↓                   ↓
  exec(A)            exec(B)             exec(C)
    ↓                  ↓                   ↓
  send_input(B)      send_input(done)    done
  (eager delegate)   (all delegated)
═══════════════════════════════════════════════════════════════

Planner timeline (never idle between issues):
─────────────────────────────────────────────────────────────
Planner:  ├─plan(A)─┤├─plan(B)─┤├─plan(C)─┤ done
Main:     2a(spawn) ├─exec(A)──┤├─exec(B)──┤├─exec(C)──┤
                    ^2f→send(B)  ^2f→send(C)
─────────────────────────────────────────────────────────────

Single Beat Detail:
───────────────────────────────────────────────────────────────
1. Delegate next    2. Poll ready    3. Execute        4. Verify + commit
   (2a or 2f eager)    solutions        inline            + eager delegate
      |                   |                |                  |
  send_input(bg)     Glob(*.ready)    Read/Edit/Write    test → commit
                          |                               → send_input(next)
                     wait if none ready
───────────────────────────────────────────────────────────────
```
## Role Registry
| Role | Path | Prefix | Inner Loop |
|------|------|--------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | — | — |
| planner | [roles/planner/role.md](roles/planner/role.md) | PLAN-* | true |
| executor | [roles/executor/role.md](roles/executor/role.md) | EXEC-* | true |
## Role Router
Parse `$ARGUMENTS`:
- Has `--role <name>` -> Read `roles/<name>/role.md`, execute Phase 2-4
- No `--role` -> `roles/coordinator/role.md`, execute entry router
## Shared Constants
- **Session prefix**: `PEX`
- **Session path**: `.workflow/.team/PEX-<slug>-<date>/`
- **CLI tools**: `ccw cli --mode analysis` (read-only), `ccw cli --mode write` (modifications)
- **Message bus**: `mcp__ccw-tools__team_msg(session_id=<session-id>, ...)`
## Worker Spawn Template
Coordinator spawns workers using this template:
```
spawn_agent({
  agent_type: "team_worker",
  items: [
    { type: "text", text: `## Role Assignment
role: <role>
role_spec: <skill_root>/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
requirement: <task-description>
inner_loop: <true|false>
execution_method: <codex|gemini>

Read role_spec file (<skill_root>/roles/<role>/role.md) to load Phase 2-4 domain instructions.` },
    { type: "text", text: `## Task Context
task_id: <task-id>
title: <task-title>
description: <task-description>
pipeline_phase: <pipeline-phase>` },
    { type: "text", text: `## Upstream Context
<prev_context>` }
  ]
})
```
---
## Agent Registry
| Agent | Role File | Responsibility | Pattern |
|-------|-----------|----------------|---------|
| `issue-plan-agent` | `~/.codex/agents/issue-plan-agent.md` | Explore codebase + generate solutions, single instance reused via send_input | 2.3 Deep Interaction |
> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs and agent instructions are reduced to summaries, **you MUST immediately `Read` the corresponding agent.md to reload before continuing execution**.
---
## Subagent API Reference
### spawn_agent
```javascript
const agentId = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/issue-plan-agent.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json
---
${taskContext}
`
})
```
### wait
```javascript
const result = wait({
ids: [agentId],
timeout_ms: 600000 // 10 minutes
})
```
### send_input
Continue interaction with active agent (reuse for next issue).
```javascript
send_input({
id: agentId,
message: `
## NEXT ISSUE
issue_ids: ["<nextIssueId>"]
## Output Requirements
1. Generate solution for this issue
2. Write solution JSON to: <artifactsDir>/<nextIssueId>.json
3. Write ready marker to: <artifactsDir>/<nextIssueId>.ready
`
})
```
### close_agent
```javascript
close_agent({ id: agentId })
```
---
## Input
Accepts output from `roadmap-with-file` or direct `issues.jsonl` path.
Supported input forms (parse from `$ARGUMENTS`):
| Form | Detection | Example |
|------|-----------|---------|
| Roadmap session | `RMAP-` prefix or `--session` flag | `team-planex --session RMAP-auth-20260301` |
| Issues JSONL path | `.jsonl` extension | `team-planex .workflow/issues/issues.jsonl` |
| Issue IDs | `ISS-\d{8}-\d{3,6}` regex | `team-planex ISS-20260301-001 ISS-20260301-002` |
### Input Resolution
| Input Form | Resolution |
|------------|------------|
| `--session RMAP-*` | Read `.workflow/.roadmap/<sessionId>/roadmap.md` → extract issue IDs from Roadmap table → load issue data from `.workflow/issues/issues.jsonl` |
| `.jsonl` path | Read file → parse each line as JSON → collect all issues |
| Issue IDs | Use directly → fetch details via `ccw issue status <id> --json` |
### Issue Record Fields Used
| Field | Usage |
|-------|-------|
| `id` | Issue identifier for planning and execution |
| `title` | Commit message and reporting |
| `status` | Skip if already `completed` |
| `tags` | Wave ordering: `wave-1` before `wave-2` |
| `extended_context.notes.depends_on_issues` | Execution ordering |
### Wave Ordering
Issues are sorted by wave tag for execution order:
| Priority | Rule |
|----------|------|
| 1 | Lower wave number first (`wave-1` before `wave-2`) |
| 2 | Within same wave: issues without `depends_on_issues` first |
| 3 | Within same wave + no deps: original order from JSONL |
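A minimal comparator sketch of these rules. The helper names and the treatment of untagged issues (sorted last) are assumptions of this sketch, not part of the spec:

```javascript
// Extract wave number from tags like "wave-2"; untagged issues sort last (assumption).
function waveOf(issue) {
  const tag = (issue.tags || []).find(t => /^wave-\d+$/.test(t))
  return tag ? parseInt(tag.split("-")[1], 10) : Infinity
}

function hasDeps(issue) {
  const deps = issue.extended_context?.notes?.depends_on_issues || []
  return deps.length > 0
}

// Stable sort: lower wave first; within a wave, dependency-free issues first;
// ties keep original JSONL order (Array.prototype.sort is stable).
function waveSort(issues) {
  return [...issues].sort((a, b) => {
    const wa = waveOf(a), wb = waveOf(b)
    if (wa !== wb) return wa - wb
    return Number(hasDeps(a)) - Number(hasDeps(b))
  })
}
```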
---
## Session Setup
Initialize session directory before processing:
| Item | Value |
|------|-------|
| Slug | `toSlug(<first issue title>)` truncated to 20 chars |
| Date | `YYYYMMDD` format |
| Session dir | `.workflow/.team/PEX-<slug>-<date>` |
| Solutions dir | `<sessionDir>/artifacts/solutions` |
Create directories:
```bash
mkdir -p "<sessionDir>/artifacts/solutions"
```
Write `<sessionDir>/team-session.json`:
| Field | Value |
|-------|-------|
| `session_id` | `PEX-<slug>-<date>` |
| `input_type` | `roadmap` / `jsonl` / `issue_ids` |
| `source_session` | Roadmap session ID (if applicable) |
| `issue_ids` | Array of all issue IDs to process (wave-sorted) |
| `status` | `"running"` |
| `started_at` | ISO timestamp |
---
## Phase 1: Load Issues + Initialize
1. Parse `$ARGUMENTS` to determine input form
2. Resolve issues (see Input Resolution table)
3. Filter out issues with `status: completed`
4. Sort by wave ordering
5. Collect into `<issueQueue>` (ordered list of issue IDs to process)
6. Initialize session directory and `team-session.json`
7. Set `<plannedSet>` = {} , `<executedSet>` = {} , `<plannerAgent>` = null (single reusable planner)
---
## Phase 2: Plan-Execute Loop
Interleaved loop that keeps planner agent busy at all times. Each beat: (1) delegate next issue to planner if idle, (2) poll for ready solutions, (3) execute inline, (4) after execution completes, immediately delegate next issue to planner before polling again.
### Loop Entry
Set `<queueIndex>` = 0 (pointer into `<issueQueue>`).
### 2a. Delegate Next Issue to Planner
Single planner agent is spawned once and reused via `send_input` for subsequent issues.
| Condition | Action |
|-----------|--------|
| `<plannerAgent>` is null AND `<queueIndex>` < `<issueQueue>.length` | Spawn planner (first issue), advance `<queueIndex>` |
| `<plannerAgent>` exists, idle AND `<queueIndex>` < `<issueQueue>.length` | `send_input` next issue, advance `<queueIndex>` |
| `<plannerAgent>` is busy | Skip (wait for current planning to finish) |
| `<queueIndex>` >= `<issueQueue>.length` | No more issues to plan |
**First issue — spawn planner**:
```javascript
const plannerAgent = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/issue-plan-agent.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json
---
issue_ids: ["<issueId>"]
project_root: "<projectRoot>"
## Output Requirements
1. Generate solution for this issue
2. Write solution JSON to: <artifactsDir>/<issueId>.json
3. Write ready marker to: <artifactsDir>/<issueId>.ready
- Marker content: {"issue_id":"<issueId>","task_count":<task_count>,"file_count":<file_count>}
## Multi-Issue Mode
You will receive additional issues via follow-up messages. After completing each issue,
output results and wait for next instruction.
`
})
```
**Subsequent issues — send_input**:
```javascript
send_input({
id: plannerAgent,
message: `
## NEXT ISSUE
issue_ids: ["<nextIssueId>"]
## Output Requirements
1. Generate solution for this issue
2. Write solution JSON to: <artifactsDir>/<nextIssueId>.json
3. Write ready marker to: <artifactsDir>/<nextIssueId>.ready
- Marker content: {"issue_id":"<nextIssueId>","task_count":<task_count>,"file_count":<file_count>}
`
})
```
Record `<planningIssueId>` = current issue ID.
### 2b. Poll for Ready Solutions
Poll `<artifactsDir>/*.ready` using Glob.
| Condition | Action |
|-----------|--------|
| New `.ready` found (not in `<executedSet>`) | Load `<issueId>.json` solution → proceed to 2c |
| `<plannerAgent>` busy, no `.ready` yet | Check planner: `wait({ ids: [<plannerAgent>], timeout_ms: 30000 })` |
| Planner finished current issue | Mark planner idle, re-poll |
| Planner timed out (30s wait) | Re-poll (planner still working) |
| No `.ready`, planner idle, all issues delegated | Exit loop → Phase 3 |
| Idle >5 minutes total | Exit loop → Phase 3 |
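The polling rules above can be sketched as a loop. `glob`, `waitAgent`, and `executeInline` stand in for the real Glob / wait / Phase 2c machinery and are assumptions of this sketch:

```javascript
// One 2b polling cycle: ready solution → execute; otherwise bounded wait on planner.
async function pollBeat(state, { glob, waitAgent, executeInline }) {
  const IDLE_LIMIT_MS = 5 * 60 * 1000
  let idleMs = 0
  while (true) {
    const ready = (await glob(`${state.artifactsDir}/*.ready`))
      .map(p => p.replace(/^.*\//, "").replace(/\.ready$/, ""))
      .filter(id => !state.executedSet.has(id))
    if (ready.length > 0) return executeInline(ready[0])      // → 2c
    const allDelegated = state.queueIndex >= state.issueQueue.length
    if (!state.plannerBusy && allDelegated) return "phase3"   // all done → Phase 3
    if (state.plannerBusy) {
      // 30s bounded wait: a timeout just means the planner is still working.
      const r = await waitAgent({ ids: [state.plannerAgent], timeout_ms: 30000 })
      if (r.finished) state.plannerBusy = false
    }
    idleMs += 30000
    if (idleMs >= IDLE_LIMIT_MS) return "phase3"              // idle >5 min → Phase 3
  }
}
```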
### 2c. Inline Execution
Main flow implements the solution directly. For each task in `solution.tasks`, ordered by `depends_on` sequence:
| Step | Action | Tool |
|------|--------|------|
| 1. Read context | Read all files referenced in current task | Read |
| 2. Identify patterns | Note imports, naming conventions, existing structure | — (inline reasoning) |
| 3. Apply changes | Modify existing files or create new files | Edit (prefer) / Write (new files) |
| 4. Build check | Run project build command if available | Bash |
Build verification:
```bash
npm run build 2>&1 || echo BUILD_FAILED
```
| Build Result | Action |
|--------------|--------|
| Success | Proceed to 2d |
| Failure | Analyze error → fix source → rebuild (max 3 retries) |
| No build command | Skip, proceed to 2d |
### 2d. Verify Tests
Detect test command:
| Priority | Detection |
|----------|-----------|
| 1 | `package.json` → `scripts.test` |
| 2 | `package.json` → `scripts.test:unit` |
| 3 | `pytest.ini` / `setup.cfg` (Python) |
| 4 | `Makefile` test target |
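The detection priority can be sketched as a pure function over already-read file contents (`pkg` = parsed package.json or null, `files` = set of paths present — both names are assumptions of this sketch):

```javascript
// Return the test command per the priority table, or null when no tests exist.
function detectTestCommand(pkg, files) {
  if (pkg?.scripts?.test) return "npm test"
  if (pkg?.scripts?.["test:unit"]) return "npm run test:unit"
  if (files.has("pytest.ini") || files.has("setup.cfg")) return "pytest"
  if (files.has("Makefile")) return "make test"   // assumes a `test` target exists
  return null                                     // no test command: skip 2d
}
```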
Run tests. If tests fail → self-repair loop:
| Attempt | Action |
|---------|--------|
| 1-3 | Analyze test output → diagnose → fix source code → re-run tests |
| After 3 | Mark issue as failed, log to `<sessionDir>/errors.json`, continue |
### 2e. Git Commit
Stage and commit changes for this issue:
```bash
git add -A
git commit -m "feat(<issueId>): <solution-title>"
```
| Outcome | Action |
|---------|--------|
| Commit succeeds | Record commit hash |
| Commit fails (nothing to commit) | Warn, continue |
| Pre-commit hook fails | Attempt fix once, then warn and continue |
### 2f. Update Status + Eagerly Delegate Next
Update issue status:
```bash
ccw issue update <issueId> --status completed
```
Add `<issueId>` to `<executedSet>`.
**Eager delegation**: Immediately check planner state and delegate next issue before returning to poll:
| Planner State | Action |
|---------------|--------|
| Idle AND more issues in queue | `send_input` next issue → advance `<queueIndex>` |
| Busy (still planning) | Skip — planner already working |
| All issues delegated | Skip — nothing to delegate |
This ensures planner is never idle between beats. Return to 2b for next beat.
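Step 2f's delegation table can be sketched as follows (`sendInput` wraps `send_input`; the state field names are assumptions of this sketch):

```javascript
// Eagerly hand the planner its next issue before returning to poll (2b).
function eagerDelegate(state, sendInput) {
  if (state.plannerBusy) return "busy"                       // planner already working
  if (state.queueIndex >= state.issueQueue.length) return "drained"  // all delegated
  const next = state.issueQueue[state.queueIndex++]
  state.plannerBusy = true
  sendInput({ id: state.plannerAgent, issueId: next })       // "## NEXT ISSUE" message
  return next
}
```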
---
## Phase 3: Report
### 3a. Cleanup
Close the planner agent. Ignore cleanup failures.
```javascript
if (plannerAgent) {
try { close_agent({ id: plannerAgent }) } catch {}
}
```
### 3b. Generate Report
Update `<sessionDir>/team-session.json`:
| Field | Value |
|-------|-------|
| `status` | `"completed"` |
| `completed_at` | ISO timestamp |
| `results.total` | Total issues in queue |
| `results.completed` | Count in `<executedSet>` |
| `results.failed` | Count of failed issues |
Output summary:
```
## Pipeline Complete
**Total issues**: <total>
**Completed**: <completed>
**Failed**: <failed>
<per-issue status list>
Session: <sessionDir>
```
---
## File-Based Coordination Protocol
| File | Writer | Reader | Purpose |
|------|--------|--------|---------|
| `<artifactsDir>/<issueId>.json` | planner | main flow | Solution data |
| `<artifactsDir>/<issueId>.ready` | planner | main flow | Atomicity signal |
### Ready Marker Format
```json
{
"issue_id": "<issueId>",
  "task_count": <task_count>,
  "file_count": <file_count>
}
```
---
## Session Directory
```
.workflow/.team/PEX-<slug>-<date>/
├── team-session.json
├── artifacts/
│ └── solutions/
│ ├── <issueId>.json
│ └── <issueId>.ready
└── errors.json
```
---
## Lifecycle Management
### Timeout Protocol
| Phase | Default Timeout | On Timeout |
|-------|-----------------|------------|
| Phase 1 (Load) | 60s | Report error, stop |
| Phase 2 (Planner wait) | 600s per issue | Skip issue, write `.error` marker |
| Phase 2 (Execution) | No timeout | Self-repair up to 3 retries |
| Phase 2 (Loop idle) | 5 min total idle | Break loop → Phase 3 |
### Cleanup Protocol
At workflow end, close the planner agent:
```javascript
if (plannerAgent) {
try { close_agent({ id: plannerAgent }) } catch { /* already closed */ }
}
```
---
## Error Handling
| Phase | Scenario | Resolution |
|-------|----------|------------|
| 1 | Invalid input (no issues, bad JSONL) | Report error, stop |
| 1 | Roadmap session not found | Report error, stop |
| 1 | Issue fetch fails | Retry once, then skip issue |
| 2 | Planner spawn fails | Retry once, then skip issue |
| 2 | issue-plan-agent timeout (>10 min) | Skip issue, write `.error` marker, continue |
| 2 | Solution file corrupt / unreadable | Skip, log to `errors.json`, continue |
| 2 | Implementation error | Self-repair up to 3 retries per task |
| 2 | Tests failing after 3 retries | Mark issue failed, log, continue |
| 2 | Git commit fails | Warn, mark completed anyway |
| 2 | Polling idle >5 minutes | Break loop → Phase 3 |
| 3 | Agent cleanup fails | Ignore |
---
After spawning, use `wait_agent({ ids: [...], timeout_ms: 900000 })` to collect results, then `close_agent({ id })` each worker.
## User Commands
| Command | Action |
|---------|--------|
| `check` / `status` | Show progress: planned / executing / completed / failed counts |
| `resume` / `continue` | Re-enter loop from Phase 2, advance to next step |
| `add <issue-ids or --text '...' or --plan path>` | Append new tasks to planner queue |
## Session Directory
```
.workflow/.team/PEX-<slug>-<YYYY-MM-DD>/
├── .msg/
│ ├── messages.jsonl # Message bus log
│ └── meta.json # Session state
├── task-analysis.json # Coordinator analyze output
├── artifacts/
│ └── solutions/ # Planner solution output per issue
│ ├── <issueId-1>.json
│ └── <issueId-N>.json
└── wisdom/ # Cross-task knowledge
├── learnings.md
├── decisions.md
├── conventions.md
└── issues.md
```
## Specs Reference
- [specs/pipelines.md](specs/pipelines.md) — Pipeline definitions, task metadata registry, execution method selection
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Unknown command | Error with available command list |
| Role not found | Error with role registry |
| Role spec file not found | Error with expected path (roles/<name>/role.md) |
| team-worker agent unavailable | Error: requires .claude/agents/team-worker.md |
| Planner issue planning failure | Retry once, then skip to next issue |
| Executor impl failure | Report to coordinator, continue with next EXEC-* task |
| Pipeline stall | Coordinator monitors, escalate to user |
| Worker no response | Report waiting task, suggest user `resume` |

View File

@@ -1,301 +0,0 @@
# Team PlanEx — Agent Instruction
This instruction is loaded by team-worker agents when spawned with `role: planner` or `role: executor`.
---
## Role-Based Execution
### Planner Role
**Responsibility**: Explore codebase, generate implementation solution for issue.
**Input**:
- `issue_ids`: Array of issue IDs to plan (from spawn message or send_input)
- `session`: Session directory path
- `session_id`: Session identifier
**Execution Protocol**:
1. **Read issue details**:
```bash
ccw issue status {issue_id} --json
```
2. **Explore codebase** (use CLI analysis tools):
```bash
ccw cli -p "PURPOSE: Explore codebase for {issue_title}
TASK: • Identify relevant files • Find existing patterns • Locate integration points
CONTEXT: @**/* | Memory: Issue {issue_id}
EXPECTED: Exploration findings with file paths and patterns
CONSTRAINTS: Read-only analysis" --tool gemini --mode analysis --rule analysis-trace-code-execution
```
3. **Generate solution**:
- Break down into 2-7 implementation tasks
- Define task dependencies (topological order)
- Specify files to modify per task
- Define convergence criteria per task
- Assess complexity: Low (1-2 files), Medium (3-5 files), High (6+ files)
4. **Write solution file**:
```javascript
Write(`{session}/artifacts/solutions/{issue_id}.json`, JSON.stringify({
issue_id: "{issue_id}",
title: "{issue_title}",
approach: "Strategy pattern with...",
complexity: "Medium",
tasks: [
{
task_id: "EXEC-001",
title: "Create interface",
description: "Define provider interface...",
files: ["src/auth/providers/oauth-provider.ts"],
depends_on: [],
convergence_criteria: ["Interface compiles", "Types exported"]
}
],
exploration_findings: {
existing_patterns: ["Strategy pattern in payment module"],
tech_stack: ["TypeScript", "Express"],
integration_points: ["User service"]
}
}, null, 2))
```
5. **Write ready marker**:
```javascript
Write(`{session}/artifacts/solutions/{issue_id}.ready`, JSON.stringify({
issue_id: "{issue_id}",
task_count: tasks.length,
file_count: uniqueFiles.length
}))
```
6. **Report to coordinator** (via team_msg):
```javascript
mcp__ccw-tools__team_msg({
operation: "log",
session_id: "{session_id}",
from: "planner",
to: "coordinator",
type: "plan_ready",
summary: "Planning complete for {issue_id}",
data: {
issue_id: "{issue_id}",
solution_path: "artifacts/solutions/{issue_id}.json",
task_count: tasks.length
}
})
```
7. **Wait for next issue** (multi-issue mode):
- After completing one issue, output results and wait
- Coordinator will send next issue via send_input
- Repeat steps 1-6 for each issue
**Success Criteria**:
- Solution file written with valid JSON
- Ready marker created
- Message sent to coordinator
- All tasks have valid dependencies (no cycles)
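The "no cycles" criterion can be checked with a topological pass over `solution.tasks`. This is a sketch (Kahn's algorithm), not part of the planner spec:

```javascript
// Returns true when depends_on edges over solution.tasks form a DAG.
function hasNoCycles(tasks) {
  const indeg = new Map(tasks.map(t => [t.task_id, (t.depends_on || []).length]))
  const queue = tasks.filter(t => indeg.get(t.task_id) === 0).map(t => t.task_id)
  let seen = 0
  while (queue.length) {
    const id = queue.shift()
    seen++
    for (const t of tasks) {
      if ((t.depends_on || []).includes(id)) {
        indeg.set(t.task_id, indeg.get(t.task_id) - 1)
        if (indeg.get(t.task_id) === 0) queue.push(t.task_id)
      }
    }
  }
  return seen === tasks.length   // every task reached ⇒ no cycle
}
```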
---
### Executor Role
**Responsibility**: Execute implementation tasks from planner solution.
**Input**:
- `issue_id`: Issue to implement
- `session`: Session directory path
- `session_id`: Session identifier
- `execution_method`: `codex` or `gemini` (from coordinator)
- `inner_loop`: `true` (executor uses inner loop for self-repair)
**Execution Protocol**:
1. **Read solution file**:
```javascript
const solution = JSON.parse(Read(`{session}/artifacts/solutions/{issue_id}.json`))
```
2. **For each task in solution.tasks** (ordered by depends_on):
a. **Report start**:
```javascript
mcp__ccw-tools__team_msg({
operation: "log",
session_id: "{session_id}",
from: "executor",
to: "coordinator",
type: "impl_start",
summary: "Starting {task_id}",
data: { task_id: "{task_id}", issue_id: "{issue_id}" }
})
```
b. **Read context files**:
```javascript
for (const file of task.files) {
Read(file) // Load existing code
}
```
c. **Identify patterns**:
- Note imports, naming conventions, existing structure
- Follow project patterns from exploration_findings
d. **Apply changes**:
- Use Edit for existing files (prefer)
- Use Write for new files
- Follow convergence criteria from task
e. **Build check** (if build command exists):
```bash
npm run build 2>&1 || echo BUILD_FAILED
```
- If build fails: analyze error → fix → rebuild (max 3 retries)
f. **Verify convergence**:
- Check each criterion in task.convergence_criteria
- If not met: self-repair loop (max 3 iterations)
g. **Report progress**:
```javascript
mcp__ccw-tools__team_msg({
operation: "log",
session_id: "{session_id}",
from: "executor",
to: "coordinator",
type: "impl_progress",
summary: "Completed {task_id}",
data: { task_id: "{task_id}", progress_pct: (taskIndex / totalTasks) * 100 }
})
```
3. **Run tests** (after all tasks complete):
```bash
npm test 2>&1
```
- If tests fail: self-repair loop (max 3 retries)
- Target: 95% pass rate
4. **Git commit**:
```bash
git add -A
git commit -m "feat({issue_id}): {solution.title}"
```
5. **Report completion**:
```javascript
mcp__ccw-tools__team_msg({
operation: "log",
session_id: "{session_id}",
from: "executor",
to: "coordinator",
type: "impl_complete",
summary: "Completed {issue_id}",
data: {
task_id: "{task_id}",
issue_id: "{issue_id}",
files_modified: modifiedFiles,
commit_hash: commitHash
}
})
```
6. **Update issue status**:
```bash
ccw issue update {issue_id} --status completed
```
**Success Criteria**:
- All tasks completed in dependency order
- Build passes (if build command exists)
- Tests pass (95% target)
- Git commit created
- Issue status updated to completed
---
## Inner Loop Protocol
Both roles support inner loop for self-repair:
| Scenario | Max Iterations | Action |
|----------|---------------|--------|
| Build failure | 3 | Analyze error → fix source → rebuild |
| Test failure | 3 | Analyze failure → fix source → re-run tests |
| Convergence not met | 3 | Check criteria → adjust implementation → re-verify |
After 3 failed iterations: report error to coordinator, mark task as failed.
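The inner loop shared by the three scenarios can be sketched generically (`attempt`, `verify`, `repair` are placeholders for the scenario-specific steps — assumptions of this sketch):

```javascript
// Generic inner loop: attempt → verify → repair, capped at 3 iterations.
async function innerLoop(attempt, verify, repair, maxIter = 3) {
  for (let i = 1; i <= maxIter; i++) {
    const result = await attempt()
    if (await verify(result)) return { ok: true, iterations: i }
    await repair(result)          // analyze failure → fix source
  }
  return { ok: false, iterations: maxIter }   // report to coordinator, mark failed
}
```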
---
## CLI Tool Usage
### Analysis (Planner)
```bash
ccw cli -p "PURPOSE: {goal}
TASK: • {step1} • {step2}
CONTEXT: @**/* | Memory: {context}
EXPECTED: {deliverable}
CONSTRAINTS: Read-only" --tool gemini --mode analysis --rule {template}
```
### Implementation (Executor, optional)
```bash
ccw cli -p "PURPOSE: {goal}
TASK: • {step1} • {step2}
CONTEXT: @{files} | Memory: {context}
EXPECTED: {deliverable}
CONSTRAINTS: {constraints}" --tool {execution_method} --mode write --rule development-implement-feature
```
Use CLI tools when:
- Planner: Always use for codebase exploration
- Executor: Use for complex tasks (High complexity), direct implementation for Low/Medium
---
## Error Handling
| Error | Resolution |
|-------|------------|
| Solution file not found | Report error to coordinator, skip issue |
| Solution JSON corrupt | Report error, skip issue |
| Build fails after 3 retries | Mark task failed, report to coordinator |
| Tests fail after 3 retries | Mark task failed, report to coordinator |
| Git commit fails | Warn, mark completed anyway |
| CLI tool timeout | Fallback to direct implementation |
| Dependency task failed | Skip dependent tasks, report to coordinator |
---
## Wisdom Directory
Record learnings in `{session}/wisdom/`:
| File | Content |
|------|---------|
| `learnings.md` | Patterns discovered, gotchas, best practices |
| `decisions.md` | Architecture decisions, trade-offs |
| `conventions.md` | Code style, naming conventions |
| `issues.md` | Issue-specific notes, blockers resolved |
Append to these files during execution to share knowledge across issues.
---
## Output Format
No structured output required. Workers communicate via:
- Solution files (planner)
- Message bus (both roles)
- Git commits (executor)
- Wisdom files (both roles)
Coordinator monitors message bus and meta.json for state tracking.

View File

@@ -0,0 +1,52 @@
# Analyze Task
Parse plan-and-execute input -> detect input type -> determine execution method -> assess scope.
**CONSTRAINT**: Text-level analysis only. NO source code reading, NO codebase exploration.
## Signal Detection
| Input Pattern | Type | Action |
|--------------|------|--------|
| `ISS-\d{8}-\d{6}` pattern | Issue IDs | Use directly |
| `--text '...'` flag | Text requirement | Create issues via CLI |
| `--plan <path>` flag | Plan file | Read file, parse phases |
## Execution Method Selection
| Condition | Execution Method |
|-----------|-----------------|
| `--exec=codex` specified | Codex |
| `--exec=gemini` specified | Gemini |
| `-y` or `--yes` flag present | Auto (default Gemini) |
| No flags (interactive) | request_user_input -> user choice |
| Auto + task_count <= 3 | Gemini |
| Auto + task_count > 3 | Codex |
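The selection table above reduces to a small decision function (`flags` as parsed `$ARGUMENTS` is an assumption of this sketch):

```javascript
// Explicit --exec wins; -y/--yes resolves auto mode by task count; else ask the user.
function selectExecutionMethod(flags, taskCount) {
  if (flags.exec === "codex") return "codex"
  if (flags.exec === "gemini") return "gemini"
  if (flags.yes) return taskCount > 3 ? "codex" : "gemini"   // auto (default Gemini)
  return "ask-user"                                          // request_user_input
}
```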
## Scope Assessment
| Factor | Complexity |
|--------|------------|
| Issue count 1-3 | Low |
| Issue count 4-10 | Medium |
| Issue count > 10 | High |
| Cross-cutting concern | +1 level |
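As a sketch, the scope table maps issue count to a base level, bumped one level (capped at High) for a cross-cutting concern:

```javascript
// Scope assessment per the table above.
function assessScope(issueCount, crossCutting) {
  const levels = ["Low", "Medium", "High"]
  let idx = issueCount <= 3 ? 0 : issueCount <= 10 ? 1 : 2
  if (crossCutting) idx = Math.min(idx + 1, 2)   // +1 level, capped at High
  return levels[idx]
}
```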
## Output
Write <session>/task-analysis.json:
```json
{
"task_description": "<original>",
"input_type": "<issues|text|plan>",
"raw_input": "<original input>",
"execution_method": "<codex|gemini>",
"issue_count_estimate": 0,
"complexity": { "score": 0, "level": "Low|Medium|High" },
"pipeline_type": "plan-execute",
"roles": [
{ "name": "planner", "prefix": "PLAN", "inner_loop": true },
{ "name": "executor", "prefix": "EXEC", "inner_loop": true }
]
}
```

View File

@@ -0,0 +1,80 @@
# Command: dispatch
## Purpose
Create the initial task chain for team-planex pipeline. Creates PLAN-001 for planner. EXEC-* tasks are NOT pre-created -- planner creates them at runtime per issue.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Input type | Phase 1 requirements | Yes |
| Raw input | Phase 1 requirements | Yes |
| Session folder | Phase 2 session init | Yes |
| Execution method | Phase 1 requirements | Yes |
## Phase 3: Task Chain Creation
### Task Creation
Create a single PLAN-001 task entry in tasks.json:
```json
[
{
"id": "PLAN-001",
"title": "PLAN-001: Requirement decomposition and solution design",
"description": "Decompose requirements into issues and generate solutions.\n\nInput type: <issues|text|plan>\nInput: <raw-input>\nSession: <session-folder>\nExecution method: <agent|codex|gemini>\n\n## Instructions\n1. Parse input to get issue list\n2. For each issue: call issue-plan-agent -> write solution artifact\n3. After each solution: create EXEC-* task entry in tasks.json with solution_file path, role: executor\n4. After all issues: send all_planned signal\n\nInnerLoop: true",
"status": "pending",
"role": "planner",
"prefix": "PLAN",
"deps": [],
"findings": "",
"error": ""
}
]
```
### EXEC-* Task Template (for planner reference)
Planner creates EXEC-* task entries in tasks.json at runtime using this template:
```json
{
"id": "EXEC-00N",
"title": "EXEC-00N: Implement <issue-title>",
"description": "Implement solution for issue <issueId>.\n\nIssue ID: <issueId>\nSolution file: <session-folder>/artifacts/solutions/<issueId>.json\nSession: <session-folder>\nExecution method: <agent|codex|gemini>\n\nInnerLoop: true",
"status": "pending",
"role": "executor",
"prefix": "EXEC",
"deps": [],
"findings": "",
"error": ""
}
```
### Add Command Task Template
When coordinator handles `add` command, create additional PLAN tasks in tasks.json:
```json
{
"id": "PLAN-00N",
"title": "PLAN-00N: Additional requirement decomposition",
"description": "Additional requirements to decompose.\n\nInput type: <issues|text|plan>\nInput: <new-input>\nSession: <session-folder>\nExecution method: <execution-method>\n\nInnerLoop: true",
"status": "pending",
"role": "planner",
"prefix": "PLAN",
"deps": [],
"findings": "",
"error": ""
}
```
## Phase 4: Validation
| Check | Criteria |
|-------|----------|
| PLAN-001 created | tasks.json contains PLAN-001 entry |
| Description complete | Contains Input, Session, Execution method |
| No orphans | All tasks have valid role |

View File

@@ -0,0 +1,164 @@
# Command: monitor
## Purpose
Event-driven pipeline coordination with Spawn-and-Stop pattern. Three wake-up sources: worker results, user `check`, user `resume`.
## Constants
| Constant | Value | Description |
|----------|-------|-------------|
| SPAWN_MODE | spawn_agent | All workers spawned via `spawn_agent` |
| ONE_STEP_PER_INVOCATION | true | Coordinator does one operation then STOPS |
| WORKER_AGENT | team_worker | All workers are team_worker agents |
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Session file | `<session-folder>/.msg/meta.json` | Yes |
| Task list | Read tasks.json | Yes |
| Active workers | session.active_workers[] | Yes |
## Phase 3: Handler Routing
### Wake-up Source Detection
| Priority | Condition | Handler |
|----------|-----------|---------|
| 1 | Message contains `[planner]` or `[executor]` tag | handleCallback |
| 2 | Contains "check" or "status" | handleCheck |
| 3 | Contains "resume", "continue", or "next" | handleResume |
| 4 | None of the above (initial spawn) | handleSpawnNext |
---
### Handler: handleCallback
```
Receive result from wait_agent for [<role>]
+- Match role: planner or executor
+- Progress update (not final)?
| +- YES -> Update session -> STOP
+- Task status = completed?
| +- YES -> remove from active_workers -> update session
| | +- Close agent: close_agent({ id: <agentId> })
| | +- role = planner?
| | | +- Check for new EXEC-* tasks in tasks.json (planner creates them)
| | | +- -> handleSpawnNext (spawn executor for new EXEC-* tasks)
| | +- role = executor?
| | +- Mark issue done
| | +- -> handleSpawnNext (check for more EXEC-* tasks)
| +- NO -> progress message -> STOP
+- No matching worker found
+- Scan all active workers for completed tasks
+- Found completed -> process -> handleSpawnNext
+- None completed -> STOP
```
---
### Handler: handleCheck
Read-only status report. No advancement.
```
[coordinator] PlanEx Pipeline Status
[coordinator] Progress: <completed>/<total> (<percent>%)
[coordinator] Task Graph:
PLAN-001: <status-icon> <summary>
EXEC-001: <status-icon> <issue-title>
EXEC-002: <status-icon> <issue-title>
...
done=completed >>>=running o=pending
[coordinator] Active Workers:
> <subject> (<role>) - running <elapsed>
[coordinator] Ready to spawn: <subjects>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```
Then STOP.
---
### Handler: handleResume
```
Load active_workers
+- No active workers -> handleSpawnNext
+- Has active workers -> check each:
+- completed -> mark done, log
+- in_progress -> still running
+- other -> worker failure -> reset to pending
After:
+- Some completed -> handleSpawnNext
+- All running -> report status -> STOP
+- All failed -> handleSpawnNext (retry)
```
---
### Handler: handleSpawnNext
```
Collect task states from tasks.json
+- Filter tasks: PLAN-* and EXEC-* prefixes
+- readySubjects: pending + not blocked (no deps or all deps completed)
+- NONE ready + work in progress -> report waiting -> STOP
+- NONE ready + nothing running -> PIPELINE_COMPLETE -> Phase 5
+- HAS ready tasks -> for each:
+- Inner Loop role AND already has active_worker for that role?
| +- YES -> SKIP spawn (existing worker picks up via inner loop)
| +- NO -> spawn below
+- Determine role from task prefix:
| +- PLAN-* -> planner
| +- EXEC-* -> executor
+- Spawn team_worker:
const agentId = spawn_agent({
agent_type: "team_worker",
items: [{ type: "text", text: `## Role Assignment
role: <role>
role_spec: ~ or <project>/.codex/skills/team-planex/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
team_name: <team-name>
requirement: <task-description>
inner_loop: true
execution_method: <method>` }]
})
// Collect results:
const result = wait_agent({ ids: [agentId], timeout_ms: 900000 })
// Process result, update tasks.json
close_agent({ id: agentId })
+- Add to session.active_workers
Update session -> output summary -> STOP
```
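The `readySubjects` filter used by handleSpawnNext (pending, with no deps or all deps completed) can be sketched as:

```typescript
// Minimal task shape; deps holds dependency task ids.
interface Task { id: string; status: string; deps: string[]; }

// A task is ready when it is pending and every dependency is completed.
function readyTasks(tasks: Task[]): Task[] {
  const done = new Set(tasks.filter(t => t.status === "completed").map(t => t.id));
  return tasks.filter(t => t.status === "pending" && t.deps.every(d => done.has(d)));
}
```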
---
## Phase 4: Validation
| Check | Criteria |
|-------|----------|
| Session state consistent | active_workers matches in_progress tasks |
| No orphaned tasks | Every in_progress has active_worker |
| Pipeline completeness | All expected EXEC-* tasks accounted for |
## Worker Failure Handling
1. Reset task -> pending in tasks.json
2. Log via team_msg (type: error)
3. Report to user
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Session file not found | Error, suggest re-initialization |
| Unknown role callback | Log, scan for other completions |
| All workers running on resume | Report status, suggest check later |
| Pipeline stall (no ready + no running + has pending) | Check deps chains, report |

@@ -0,0 +1,140 @@
# Coordinator Role
Orchestrate team-planex: analyze -> dispatch -> spawn -> monitor -> report.
## Identity
- Name: coordinator | Tag: [coordinator]
- Responsibility: Parse input -> Create team -> Dispatch PLAN-001 -> Spawn planner -> Monitor results -> Spawn executors -> Report
## Boundaries
### MUST
- Parse user input (Issue IDs / --text / --plan) and determine execution method
- Create session and initialize tasks.json
- Dispatch tasks via `commands/dispatch.md`
- Monitor progress via `commands/monitor.md` with Spawn-and-Stop pattern
- Maintain session state (.msg/meta.json)
### MUST NOT
- Execute planning or implementation work directly (delegate to workers)
- Modify solution artifacts or code (workers own their deliverables)
- Call implementation CLI tools (code-developer, etc.) directly
- Skip dependency validation when creating task chains
## Command Execution Protocol
When coordinator needs to execute a command:
1. Read `commands/<command>.md`
2. Follow the workflow defined in the command
3. Commands are inline execution guides, NOT separate agents
4. Execute synchronously, complete before proceeding
## Entry Router
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker result | Result from wait_agent contains [planner] or [executor] tag | -> handleCallback (monitor.md) |
| Status check | Args contain "check" or "status" | -> handleCheck (monitor.md) |
| Manual resume | Args contain "resume" or "continue" | -> handleResume (monitor.md) |
| Add tasks | Args contain "add" | -> handleAdd |
| Interrupted session | Active/paused session exists in `.workflow/.team/PEX-*` | -> Phase 0 |
| New session | None of above | -> Phase 1 |
For callback/check/resume: load `@commands/monitor.md` and execute the appropriate handler, then STOP.
### handleAdd
1. Parse new input (Issue IDs / `--text` / `--plan`)
2. Get current max PLAN-* sequence from tasks.json
3. Add new PLAN-00N task entry to tasks.json with role: "planner"
4. If planner already sent `all_planned` (check team_msg) -> signal planner to re-enter loop
5. STOP
## Phase 0: Session Resume Check
1. Scan `.workflow/.team/PEX-*/.msg/meta.json` for sessions with status "active" or "paused"
2. No sessions -> Phase 1
3. Single session -> resume (Session Reconciliation)
4. Multiple sessions -> request_user_input for selection
**Session Reconciliation**:
1. Read tasks.json -> reconcile session state vs task status
2. Reset in_progress tasks -> pending (they were interrupted)
3. Rebuild team if needed (create session + spawn needed workers)
4. Kick first executable task -> Phase 4
## Phase 1: Input Parsing + Execution Method
TEXT-LEVEL ONLY. No source code reading.
1. Delegate to @commands/analyze.md -> produces task-analysis.json
2. Parse arguments: Extract input type (Issue IDs / --text / --plan) and optional flags (--exec, -y)
3. Determine execution method (see specs/pipelines.md Selection Decision Table):
- Explicit `--exec` flag -> use specified method
- `-y` / `--yes` flag -> Auto mode
- No flags -> request_user_input for method choice
4. Store requirements: input_type, raw_input, execution_method
5. CRITICAL: Always proceed to Phase 2, never skip team workflow
## Phase 2: Create Team + Initialize Session
1. Resolve workspace paths (MUST do first):
- `project_root` = result of `Bash("pwd")`
- `skill_root` = `<project_root>/.codex/skills/team-planex`
2. Generate session ID: `PEX-<slug>-<date>`
3. Create session folder: `.workflow/.team/<session-id>/`
4. Create subdirectories: `artifacts/solutions/`, `wisdom/`
5. Initialize `tasks.json` (empty array)
6. Initialize wisdom files (learnings.md, decisions.md, conventions.md, issues.md)
7. Initialize meta.json with pipeline metadata:
```typescript
mcp__ccw-tools__team_msg({
operation: "log", session_id: "<id>", from: "coordinator",
type: "state_update", summary: "Session initialized",
data: {
pipeline_mode: "plan-execute",
pipeline_stages: ["planner", "executor"],
roles: ["coordinator", "planner", "executor"],
team_name: "planex",
input_type: "<issues|text|plan>",
execution_method: "<codex|gemini>"
}
})
```
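The session-id step (2) above can be sketched as follows; the slug rules are an assumption, only the `PEX-<slug>-<date>` shape comes from this document:

```typescript
// Build a session id like "PEX-auth-system-20260308" from a requirement.
function sessionId(requirement: string, date: Date): string {
  const slug = requirement.toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")   // non-alphanumeric runs -> single dash
    .replace(/^-+|-+$/g, "")       // trim leading/trailing dashes
    .split("-").slice(0, 3).join("-"); // keep the slug short (assumption)
  const ymd = date.toISOString().slice(0, 10).replace(/-/g, "");
  return `PEX-${slug}-${ymd}`;
}
```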
## Phase 3: Create Task Chain
Delegate to `@commands/dispatch.md`:
1. Read `roles/coordinator/commands/dispatch.md`
2. Execute its workflow to create PLAN-001 task in tasks.json
3. PLAN-001 contains input info + execution method in description
## Phase 4: Spawn-and-Stop
1. Load `@commands/monitor.md`
2. Execute `handleSpawnNext` to find ready tasks and spawn planner worker
3. Output status summary
4. STOP (idle, wait for worker result)
**ONE_STEP_PER_INVOCATION**: true -- coordinator does one operation per wake-up, then STOPS.
## Phase 5: Report + Completion Action
1. Load session state -> count completed tasks, duration
2. List deliverables with output paths
3. Update session status -> "completed"
4. Execute Completion Action per session.completion_action:
- interactive -> request_user_input (Archive/Keep/Export)
- auto_archive -> Archive & Clean (status=completed, clean up session)
- auto_keep -> Keep Active (status=paused)
- auto_yes -> Archive & Clean without prompting
## Error Handling
| Error | Resolution |
|-------|------------|
| Session file not found | Error, suggest re-initialization |
| Unknown worker callback | Log, scan for other completions |
| Pipeline stall | Check missing tasks, report to user |
| Worker crash | Reset task to pending, re-spawn on next beat |
| All workers running on resume | Report status, suggest check later |

@@ -0,0 +1,91 @@
---
role: executor
prefix: EXEC
inner_loop: true
message_types:
success: impl_complete
error: impl_failed
---
# Executor
Single-issue implementation agent. Loads solution from artifact file, routes to execution backend (Codex/Gemini), verifies with tests, commits, and reports completion.
## Phase 2: Task & Solution Loading
| Input | Source | Required |
|-------|--------|----------|
| Issue ID | Task description `Issue ID:` field | Yes |
| Solution file | Task description `Solution file:` field | Yes |
| Session folder | Task description `Session:` field | Yes |
| Execution method | Task description `Execution method:` field | Yes |
| Wisdom | `<session>/wisdom/` | No |
1. Extract issue ID, solution file path, session folder, execution method
2. Load solution JSON from file (file-first)
3. If file not found -> fallback: `ccw issue solution <issueId> --json`
4. Load wisdom files for conventions and patterns
5. Verify solution has required fields: title, tasks
## Phase 3: Implementation
### Backend Selection
| Method | Backend | CLI Tool |
|--------|---------|----------|
| `codex` | `ccw cli --tool codex --mode write` | CLI |
| `gemini` | `ccw cli --tool gemini --mode write` | CLI |
### CLI Backend (Codex/Gemini)
```bash
ccw cli -p "PURPOSE: Implement solution for issue <issueId>; success = all tasks completed, tests pass
TASK: <solution.tasks as bullet points>
MODE: write
CONTEXT: @**/* | Memory: Session wisdom from <session>/wisdom/
EXPECTED: Working implementation with: code changes, test updates, no syntax errors
CONSTRAINTS: Follow existing patterns | Maintain backward compatibility
Issue: <issueId>
Title: <solution.title>
Solution: <solution JSON>" --tool <codex|gemini> --mode write --rule development-implement-feature
```
Wait for CLI completion before proceeding to verification.
## Phase 4: Verification + Commit
### Test Verification
| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Tests | Detect and run project test command | All pass |
| Syntax | IDE diagnostics or `tsc --noEmit` | No errors |
If tests fail: retry implementation once, then report `impl_failed`.
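The retry-once policy can be sketched with an injected attempt function (a hypothetical wrapper, not a real API; the real attempt runs the CLI backend plus verification):

```typescript
// Run implementation + verification once; on failure retry exactly once.
// Returns the message type the executor should report (per frontmatter).
function runWithRetry(attempt: () => boolean): "impl_complete" | "impl_failed" {
  if (attempt()) return "impl_complete";
  return attempt() ? "impl_complete" : "impl_failed";
}
```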
### Commit
```bash
git add -A
git commit -m "feat(<issueId>): <solution.title>"
```
### Update Issue Status
```bash
ccw issue update <issueId> --status completed
```
### Report
Report results via `team_msg` (type: `impl_complete`) for the coordinator to read.
## Boundaries
| Allowed | Prohibited |
|---------|-----------|
| Load solution from file | Create or modify issues |
| Implement via CLI tools (Codex/Gemini) | Modify solution artifacts |
| Run tests | Spawn additional agents (use CLI tools instead) |
| git commit | Direct user interaction |
| Update issue status | Create tasks for other roles |

@@ -0,0 +1,112 @@
---
role: planner
prefix: PLAN
inner_loop: true
message_types:
success: issue_ready
error: error
---
# Planner
Requirement decomposition -> issue creation -> solution design -> EXEC-* task creation. Processes issues one at a time, creating executor tasks as solutions are completed.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Input type + raw input | Task description | Yes |
| Session folder | Task description `Session:` field | Yes |
| Execution method | Task description `Execution method:` field | Yes |
| Wisdom | `<session>/wisdom/` | No |
1. Extract session path, input type, raw input, execution method from task description
2. Load wisdom files if available
3. Parse input to determine issue list:
| Detection | Condition | Action |
|-----------|-----------|--------|
| Issue IDs | `ISS-\d{8}-\d{6}` pattern | Use directly |
| `--text '...'` | Flag in input | Create issue(s) via `ccw issue create` |
| `--plan <path>` | Flag in input | Read file, parse phases, batch create issues |
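The detection table can be sketched as a classifier; the precedence order is an assumption:

```typescript
// Classify planner input per the detection table: issue ids, --text, or --plan.
function detectInputType(input: string): "issues" | "text" | "plan" | "unknown" {
  if (/ISS-\d{8}-\d{6}/.test(input)) return "issues"; // explicit issue ids
  if (/--text\s+/.test(input)) return "text";          // inline requirement
  if (/--plan\s+\S+/.test(input)) return "plan";       // plan file path
  return "unknown";
}
```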
## Phase 3: Issue Processing Loop
For each issue, execute in sequence:
### 3a. Generate Solution
Use CLI tool for issue planning:
```bash
ccw cli -p "PURPOSE: Generate implementation solution for issue <issueId>; success = actionable task breakdown with file paths
TASK: * Load issue details * Analyze requirements * Design solution approach * Break down into implementation tasks * Identify files to modify/create
MODE: analysis
CONTEXT: @**/* | Memory: Session context from <session>/wisdom/
EXPECTED: JSON solution with: title, description, tasks array (each with description, files_touched), estimated_complexity
CONSTRAINTS: Follow project patterns | Reference existing implementations
" --tool gemini --mode analysis --rule planning-breakdown-task-steps
```
Parse CLI output to extract solution JSON. If CLI fails, fallback to `ccw issue solution <issueId> --json`.
### 3b. Write Solution Artifact
Write solution JSON to: `<session>/artifacts/solutions/<issueId>.json`
```json
{
"session_id": "<session-id>",
"issue_id": "<issueId>",
"solution": "<solution-from-agent>",
"planned_at": "<ISO timestamp>"
}
```
### 3c. Check Conflicts
Extract `files_touched` from solution. Compare against prior solutions in session.
Overlapping files -> log warning to `wisdom/issues.md`, continue.
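The overlap check in step 3c can be sketched as follows (field names hypothetical; input is the set of solutions planned so far in the session):

```typescript
// Report files touched by more than one solution: file -> owning issue ids.
function findConflicts(
  solutions: { issueId: string; filesTouched: string[] }[]
): Record<string, string[]> {
  const byFile: Record<string, string[]> = {};
  for (const s of solutions)
    for (const f of s.filesTouched)
      (byFile[f] = byFile[f] || []).push(s.issueId);
  const conflicts: Record<string, string[]> = {};
  for (const f of Object.keys(byFile))
    if (byFile[f].length > 1) conflicts[f] = byFile[f];
  return conflicts;
}
```

Each entry in the result is one warning line for `wisdom/issues.md`; planning continues regardless.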
### 3d. Create EXEC-* Task
Add new task entry to tasks.json:
```json
{
"id": "EXEC-00N",
"title": "EXEC-00N: Implement <issue-title>",
"description": "Implement solution for issue <issueId>.\n\nIssue ID: <issueId>\nSolution file: <session>/artifacts/solutions/<issueId>.json\nSession: <session>\nExecution method: <method>\n\nInnerLoop: true",
"status": "pending",
"role": "executor",
"prefix": "EXEC",
"deps": [],
"findings": "",
"error": ""
}
```
### 3e. Signal issue_ready
Write result via team_msg:
- type: `issue_ready`
### 3f. Continue Loop
Process next issue. Do NOT wait for executor.
## Phase 4: Completion Signal
After all issues processed:
1. Send `all_planned` message via team_msg
2. Summary: total issues planned, EXEC-* tasks created
## Boundaries
| Allowed | Prohibited |
|---------|-----------|
| Parse input, create issues | Write/modify business code |
| Generate solutions (CLI) | Run tests |
| Write solution artifacts | git commit |
| Create EXEC-* tasks in tasks.json | Call code-developer |
| Conflict checking | Direct user interaction |

@@ -1,198 +0,0 @@
# Team PlanEx — Tasks Schema
## Task Metadata Registry
Team PlanEx uses a **message bus + state file** architecture instead of CSV. Tasks are tracked in `.msg/meta.json` with state updates via `team_msg`.
### Task State Fields
| Field | Type | Description | Example |
|-------|------|-------------|---------|
| `task_id` | string | Unique task identifier | `"PLAN-001"` |
| `issue_id` | string | Source issue identifier | `"ISS-20260308-001"` |
| `title` | string | Task title from solution | `"Implement OAuth2 provider"` |
| `role` | enum | Worker role: `planner`, `executor` | `"executor"` |
| `status` | enum | Task status: `pending`, `in_progress`, `completed`, `failed` | `"completed"` |
| `assigned_to` | string | Worker agent name | `"executor"` |
| `depends_on` | array | Dependency task IDs | `["PLAN-001"]` |
| `files` | array | Files to modify | `["src/auth/oauth.ts"]` |
| `convergence_criteria` | array | Success criteria | `["Tests pass", "No lint errors"]` |
| `started_at` | string | ISO timestamp | `"2026-03-08T10:00:00+08:00"` |
| `completed_at` | string | ISO timestamp | `"2026-03-08T10:15:00+08:00"` |
| `error` | string | Error message if failed | `""` |
---
## Message Bus Schema
### Message Types
| Type | From | To | Data Schema | Description |
|------|------|----|-----------|----|
| `plan_ready` | planner | coordinator | `{issue_id, solution_path, task_count}` | Planning complete |
| `impl_start` | executor | coordinator | `{task_id, issue_id}` | Implementation started |
| `impl_complete` | executor | coordinator | `{task_id, issue_id, files_modified[], commit_hash}` | Implementation complete |
| `impl_progress` | executor | coordinator | `{task_id, progress_pct, current_step}` | Progress update |
| `error` | any | coordinator | `{task_id, error_type, message}` | Error occurred |
| `state_update` | any | coordinator | `{role, state: {}}` | Role state update (auto-synced to meta.json) |
### Message Format (NDJSON)
```jsonl
{"id":"MSG-001","ts":"2026-03-08T10:00:00+08:00","from":"planner","to":"coordinator","type":"plan_ready","summary":"Planning complete for ISS-001","data":{"issue_id":"ISS-20260308-001","solution_path":"artifacts/solutions/ISS-20260308-001.json","task_count":3}}
{"id":"MSG-002","ts":"2026-03-08T10:05:00+08:00","from":"executor","to":"coordinator","type":"impl_start","summary":"Starting EXEC-001","data":{"task_id":"EXEC-001","issue_id":"ISS-20260308-001"}}
{"id":"MSG-003","ts":"2026-03-08T10:15:00+08:00","from":"executor","to":"coordinator","type":"impl_complete","summary":"Completed EXEC-001","data":{"task_id":"EXEC-001","issue_id":"ISS-20260308-001","files_modified":["src/auth/oauth.ts"],"commit_hash":"abc123"}}
```
---
## State File Schema (meta.json)
### Structure
```json
{
"session_id": "PEX-auth-system-20260308",
"pipeline_mode": "plan-execute",
"execution_method": "codex",
"status": "running",
"started_at": "2026-03-08T10:00:00+08:00",
"issues": {
"ISS-20260308-001": {
"status": "completed",
"solution_path": "artifacts/solutions/ISS-20260308-001.json",
"tasks": ["EXEC-001", "EXEC-002"],
"completed_at": "2026-03-08T10:30:00+08:00"
}
},
"tasks": {
"EXEC-001": {
"task_id": "EXEC-001",
"issue_id": "ISS-20260308-001",
"title": "Implement OAuth2 provider",
"role": "executor",
"status": "completed",
"assigned_to": "executor",
"depends_on": [],
"files": ["src/auth/oauth.ts"],
"convergence_criteria": ["Tests pass", "No lint errors"],
"started_at": "2026-03-08T10:05:00+08:00",
"completed_at": "2026-03-08T10:15:00+08:00",
"error": ""
}
},
"roles": {
"coordinator": {
"status": "active",
"current_phase": "execution",
"last_update": "2026-03-08T10:15:00+08:00"
},
"planner": {
"status": "idle",
"issues_planned": 5,
"last_update": "2026-03-08T10:10:00+08:00"
},
"executor": {
"status": "active",
"current_task": "EXEC-002",
"tasks_completed": 1,
"last_update": "2026-03-08T10:15:00+08:00"
}
}
}
```
---
## Solution File Schema
Planner generates solution files in `artifacts/solutions/<issue-id>.json`:
```json
{
"issue_id": "ISS-20260308-001",
"title": "Implement OAuth2 authentication",
"approach": "Strategy pattern with provider abstraction",
"complexity": "Medium",
"tasks": [
{
"task_id": "EXEC-001",
"title": "Create OAuth2 provider interface",
"description": "Define provider interface with authorize/token/refresh methods",
"files": ["src/auth/providers/oauth-provider.ts"],
"depends_on": [],
"convergence_criteria": [
"Interface compiles without errors",
"Type definitions exported"
]
},
{
"task_id": "EXEC-002",
"title": "Implement Google OAuth2 provider",
"description": "Concrete implementation for Google OAuth2",
"files": ["src/auth/providers/google-oauth.ts"],
"depends_on": ["EXEC-001"],
"convergence_criteria": [
"Tests pass",
"Handles token refresh",
"Error handling complete"
]
}
],
"exploration_findings": {
"existing_patterns": ["Strategy pattern in payment module"],
"tech_stack": ["TypeScript", "Express", "Passport.js"],
"integration_points": ["User service", "Session store"]
}
}
```
---
## Execution Method Selection
Coordinator selects execution method based on issue complexity:
| Complexity | Method | Criteria |
|------------|--------|----------|
| Low | `gemini` | 1-2 files, simple logic, no architecture changes |
| Medium | `codex` | 3-5 files, moderate complexity, existing patterns |
| High | `codex` | 6+ files, complex logic, architecture changes |
Stored in the `execution_method` field of `meta.json`.
---
## Validation Rules
| Rule | Check | Error |
|------|-------|-------|
| Unique task IDs | No duplicate `task_id` in meta.json | "Duplicate task ID: {task_id}" |
| Valid deps | All `depends_on` task IDs exist | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {task_id}" |
| No circular deps | Dependency graph is acyclic | "Circular dependency detected" |
| Valid status | status ∈ {pending, in_progress, completed, failed} | "Invalid status: {status}" |
| Valid role | role ∈ {planner, executor} | "Invalid role: {role}" |
| Issue exists | issue_id exists in issues registry | "Unknown issue: {issue_id}" |
| Solution file exists | solution_path points to valid file | "Solution file not found: {path}" |
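The circular-dependency rule can be checked with a standard DFS over `depends_on` edges; a minimal sketch (it also catches self-dependencies as one-node cycles):

```typescript
// Detect a cycle in the task dependency graph. Keys are task ids,
// values are their depends_on lists.
function hasCycle(deps: Record<string, string[]>): boolean {
  const state: Record<string, number> = {}; // 0=unvisited 1=in-stack 2=done
  const visit = (id: string): boolean => {
    if (state[id] === 1) return true;  // back edge -> cycle
    if (state[id] === 2) return false; // already cleared
    state[id] = 1;
    for (const d of deps[id] ?? []) if (visit(d)) return true;
    state[id] = 2;
    return false;
  };
  return Object.keys(deps).some(id => visit(id));
}
```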
---
## Cross-Role Context Flow
| Source | Target | Mechanism |
|--------|--------|-----------|
| Planner solution | Executor | Read solution JSON from artifacts/solutions/ |
| Executor progress | Coordinator | Message bus (impl_progress, impl_complete) |
| Coordinator state | All workers | Read meta.json state field |
| Any role state update | meta.json | Auto-sync via team_msg type="state_update" |
---
## Discovery Types
Team PlanEx does not use discoveries.ndjson. All context is stored in:
- Solution files (planner output)
- Message bus (real-time communication)
- meta.json (persistent state)
- wisdom/ directory (cross-task knowledge)

@@ -0,0 +1,93 @@
# PlanEx Pipeline Definitions
## Pipeline Diagram
Issue-based beat pipeline — planner creates EXEC-* tasks at runtime as solutions are completed.
```
PLAN-001 ──> [planner] issue-1 solution -> EXEC-001
issue-2 solution -> EXEC-002
...
issue-N solution -> EXEC-00N
all_planned signal
EXEC-001 ──> [executor] implement issue-1
EXEC-002 ──> [executor] implement issue-2
...
EXEC-00N ──> [executor] implement issue-N
```
## Beat Cycle
Event-driven Spawn-and-Stop. Each beat = coordinator wake -> process callback -> spawn next -> STOP.
```
Event Coordinator Workers
----------------------------------------------------------------------
User invokes -------> Phase 1-3:
Parse input
Init session folder + tasks.json
Create PLAN-001
Phase 4:
spawn planner ---------> [planner] Phase 1-5
STOP (idle) |
|
callback <-- planner issue_ready -(per issue)-----------+
handleCallback:
detect new EXEC-* tasks
spawn executor ---------------------> [executor] Phase 1-5
STOP (idle) |
|
callback <-- executor impl_complete ---------------------+
handleCallback:
mark issue done
check next ready EXEC-*
spawn next executor / STOP
```
## Task Metadata Registry
| Task ID | Role | Dependencies | Description |
|---------|------|-------------|-------------|
| PLAN-001 | planner | (none) | Requirement decomposition: parse input, create issues, generate solutions, create EXEC-* tasks |
| EXEC-001 | executor | PLAN-001 (created at runtime by planner) | Implement solution for issue #1 |
| EXEC-002 | executor | PLAN-001 (created at runtime by planner) | Implement solution for issue #2 |
| EXEC-00N | executor | PLAN-001 (created at runtime by planner) | Implement solution for issue #N |
> EXEC-* tasks are created by planner at runtime (per-issue beat), not predefined in the task chain.
## Execution Method Selection
| Condition | Execution Method |
|-----------|-----------------|
| `--exec=codex` specified | codex |
| `--exec=gemini` specified | gemini |
| `-y` or `--yes` flag present | Auto (default gemini) |
| No flags (interactive) | request_user_input -> user choice |
| Auto + task_count <= 3 | gemini |
| Auto + task_count > 3 | codex |
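The selection rules above can be sketched as one resolver (the interactive return value is a placeholder; the real flow calls `request_user_input`):

```typescript
// Resolve execution method from CLI-style args, falling back to the
// auto heuristic (-y/--yes) or interactive choice.
function selectMethod(args: string, taskCount: number): string {
  const exec = args.match(/--exec[= ](\w+)/);
  if (exec) return exec[1];                      // explicit --exec wins
  if (/(^|\s)(-y|--yes)(\s|$)/.test(args))
    return taskCount <= 3 ? "gemini" : "codex";  // auto mode heuristic
  return "ask_user";                             // interactive: request_user_input
}
```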
## Input Type Detection
| Input Pattern | Type | Action |
|--------------|------|--------|
| `ISS-\d{8}-\d{6}` pattern | Issue IDs | Use directly |
| `--text '...'` flag | Text requirement | Create issues via `ccw issue create` |
| `--plan <path>` flag | Plan file | Read file, parse phases, batch create issues |
## Checkpoints
| Trigger | Condition | Action |
|---------|-----------|--------|
| Planner complete | all_planned signal received | Wait for remaining EXEC-* executors to finish |
| Pipeline stall | No ready tasks + no running tasks + has pending | Coordinator checks dependency chains, escalates to user |
| Executor blocked | More than 2 tasks blocked | Coordinator escalates to user |
## Scope Assessment
| Factor | Complexity |
|--------|------------|
| Issue count 1-3 | Low |
| Issue count 4-10 | Medium |
| Issue count > 10 | High |
| Cross-cutting concern | +1 level |
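The assessment table can be sketched as:

```typescript
// Map issue count to a base complexity, bumped one level (capped at High)
// when a cross-cutting concern is present.
function assessScope(issueCount: number, crossCutting: boolean): "Low" | "Medium" | "High" {
  const levels = ["Low", "Medium", "High"] as const;
  let i = issueCount <= 3 ? 0 : issueCount <= 10 ? 1 : 2;
  if (crossCutting) i = Math.min(i + 1, 2);
  return levels[i];
}
```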