mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-03-10 17:11:04 +08:00
Add unit tests for various components and stores in the terminal dashboard
- Implement tests for AssociationHighlight, DashboardToolbar, QueuePanel, SessionGroupTree, and TerminalDashboardPage to ensure proper functionality and state management.
- Create tests for cliSessionStore, issueQueueIntegrationStore, queueExecutionStore, queueSchedulerStore, sessionManagerStore, and terminalGridStore to validate state resets and workspace scoping.
- Mock necessary dependencies and state management hooks to isolate tests and ensure accurate behavior.
New file: `.codex/skills/team-planex-v2/SKILL.md` (599 lines)
---
name: team-planex-v2
description: Hybrid team skill for the plan-and-execute pipeline. CSV wave is the primary mechanism for planning and execution. The planner decomposes requirements into issues and solutions, then the executor implements each issue via CLI tools. Supports issue IDs, text input, and plan-file input.
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] [--exec=codex|gemini|qwen] \"issue IDs or --text 'description' or --plan path\""
allowed-tools: spawn_agents_on_csv, spawn_agent, wait, send_input, close_agent, Read, Write, Edit, Bash, Glob, Grep, AskUserQuestion
---

## Auto Mode

When `--yes` or `-y` is set: auto-confirm task decomposition, skip interactive validation, and use defaults.

# Team PlanEx

## Usage

```bash
$team-planex-v2 "ISS-20260308-120000 ISS-20260308-120001"
$team-planex-v2 -c 3 "--text 'Add rate limiting to all API endpoints'"
$team-planex-v2 -y "--plan .workflow/specs/roadmap.md --exec=codex"
$team-planex-v2 --continue "planex-rate-limit-20260308"
```

**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 3)
- `--continue`: Resume an existing session
- `--exec=codex|gemini|qwen`: Force the execution method for implementation

**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)

---
## Overview

Plan-and-execute pipeline for issue-based development. The planner decomposes requirements into individual issues with solution plans, then executors implement each issue independently.

**Execution Model**: Hybrid -- CSV wave pipeline (primary) + individual agent spawn (secondary)

```
TEAM PLANEX WORKFLOW

Phase 0: Pre-Wave Interactive (Input Analysis)
+-- Parse input type (issue IDs / --text / --plan)
+-- Determine execution method (codex/gemini/auto)
+-- Create issues from text/plan if needed
+-- Output: refined issue list for decomposition

Phase 1: Requirement -> CSV + Classification
+-- Planning wave: generate solutions for each issue
+-- Execution wave: implement each issue independently
+-- Classify tasks: csv-wave (default) | interactive
+-- Compute dependency waves (topological sort)
+-- Generate tasks.csv with wave + exec_mode columns
+-- User validates task breakdown (skip if -y)

Phase 2: Wave Execution Engine (Extended)
+-- For each wave (1..N):
|   +-- Build wave CSV (filter csv-wave tasks for this wave)
|   +-- Inject previous findings into prev_context column
|   +-- spawn_agents_on_csv(wave CSV)
|   +-- Merge all results into master tasks.csv
|   +-- Check: any failed? -> skip dependents
+-- discoveries.ndjson shared across all modes (append-only)

Phase 3: Results Aggregation
+-- Export final results.csv
+-- Generate context.md with all findings
+-- Display summary: completed/failed/skipped per wave
+-- Offer: view results | retry failed | done
```

---
## Task Classification Rules

Each task is classified by `exec_mode`:

| exec_mode | Mechanism | Criteria |
|-----------|-----------|----------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot, structured I/O, no multi-round interaction |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round, clarification needed |

**Classification Decision**:

| Task Property | Classification |
|---------------|---------------|
| Solution planning per issue (PLAN-*) | `csv-wave` |
| Code implementation per issue (EXEC-*) | `csv-wave` |
| Complex multi-issue coordination (rare) | `interactive` |

> In the standard PlanEx pipeline, all tasks default to `csv-wave`. Interactive mode is reserved for edge cases requiring multi-round coordination.

---
## CSV Schema

### tasks.csv (Master State)

```csv
id,title,description,role,issue_ids,input_type,raw_input,exec_mode,execution_method,deps,context_from,wave,status,findings,artifact_path,error
"PLAN-001","Plan issue-1","Generate solution for ISS-20260308-120000","planner","ISS-20260308-120000","issues","ISS-20260308-120000","csv-wave","","","","1","pending","","",""
"PLAN-002","Plan issue-2","Generate solution for ISS-20260308-120001","planner","ISS-20260308-120001","issues","ISS-20260308-120001","csv-wave","","","","1","pending","","",""
"EXEC-001","Implement issue-1","Implement solution for ISS-20260308-120000","executor","ISS-20260308-120000","","","csv-wave","gemini","PLAN-001","PLAN-001","2","pending","","",""
"EXEC-002","Implement issue-2","Implement solution for ISS-20260308-120001","executor","ISS-20260308-120001","","","csv-wave","gemini","PLAN-002","PLAN-002","2","pending","","",""
```

**Columns**:

| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (PLAN-NNN, EXEC-NNN) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description |
| `role` | Input | Worker role: planner or executor |
| `issue_ids` | Input | Semicolon-separated issue IDs this task covers |
| `input_type` | Input | Input type: issues, text, or plan (planner tasks only) |
| `raw_input` | Input | Raw input text (planner tasks only) |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `execution_method` | Input | codex, gemini, qwen, or empty (executor tasks only) |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` -> `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `artifact_path` | Output | Path to generated artifact (solution file, build result) |
| `error` | Output | Error message if failed (empty if success) |

### Per-Wave CSV (Temporary)

Each wave generates a temporary `wave-{N}.csv` with an extra `prev_context` column (csv-wave tasks only).
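The `toCsv` and `parseCsv` helpers used throughout this skill are assumed rather than defined. A minimal sketch, assuming RFC 4180-style quoting as in the sample rows above (no embedded newlines in cells):

```javascript
// Minimal CSV helpers assumed by this skill (hypothetical; quoting per RFC 4180,
// no support for embedded newlines inside cells).
const esc = v => `"${String(v ?? '').replace(/"/g, '""')}"`

function toCsv(rows) {
  const cols = Object.keys(rows[0])
  return [cols.join(','), ...rows.map(r => cols.map(c => esc(r[c])).join(','))].join('\n')
}

function splitCsvLine(line) {
  const cells = []
  let cur = '', inQuotes = false
  for (let i = 0; i < line.length; i++) {
    const ch = line[i]
    if (inQuotes) {
      if (ch === '"' && line[i + 1] === '"') { cur += '"'; i++ } // escaped quote
      else if (ch === '"') inQuotes = false
      else cur += ch
    } else if (ch === '"') inQuotes = true
    else if (ch === ',') { cells.push(cur); cur = '' }
    else cur += ch
  }
  cells.push(cur)
  return cells
}

function parseCsv(text) {
  const [header, ...lines] = text.trim().split('\n')
  const cols = splitCsvLine(header)
  return lines.filter(Boolean).map(line => {
    const cells = splitCsvLine(line)
    return Object.fromEntries(cols.map((c, i) => [c, cells[i] ?? '']))
  })
}
```

Any helper that round-trips the quoted sample rows above would do; this sketch keeps the master CSV writable and re-parsable between waves.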
---

## Output Artifacts

| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state -- all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 3 |
| `discoveries.ndjson` | Shared exploration board (all agents) | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 3 |
| `artifacts/solutions/{issueId}.json` | Planner solution artifacts | Created by planner agents |
| `builds/{issueId}.json` | Executor build results | Created by executor agents |

---
## Session Structure

```
.workflow/.csv-wave/{session-id}/
+-- tasks.csv             # Master state (all tasks)
+-- results.csv           # Final results export
+-- discoveries.ndjson    # Shared discovery board
+-- context.md            # Human-readable report
+-- wave-{N}.csv          # Temporary per-wave input
+-- artifacts/
|   +-- solutions/        # Planner output
|       +-- {issueId}.json
+-- builds/               # Executor output
|   +-- {issueId}.json
+-- wisdom/               # Cross-task knowledge
    +-- learnings.md
    +-- decisions.md
    +-- conventions.md
    +-- issues.md
```

---
## Implementation

### Session Initialization

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1]) : 3

const requirement = $ARGUMENTS
  .replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
  .trim()

// Parse execution method
let executionMethod = 'gemini' // default
const execMatch = requirement.match(/--exec=(\w+)/)
if (execMatch) executionMethod = execMatch[1]

// Detect input type
const issueIdPattern = /ISS-\d{8}-\d{6}/g
const textMatch = requirement.match(/--text\s+'([^']+)'/)
const planMatch = requirement.match(/--plan\s+(\S+)/)

let inputType = 'issues'
let rawInput = requirement
let issueIds = requirement.match(issueIdPattern) || []

if (textMatch) {
  inputType = 'text'
  rawInput = textMatch[1]
  issueIds = [] // will be created by planner
} else if (planMatch) {
  inputType = 'plan'
  rawInput = planMatch[1]
  issueIds = [] // will be parsed from plan file
}

// If no input detected, ask user
if (issueIds.length === 0 && inputType === 'issues') {
  const answer = AskUserQuestion("No input detected. Provide issue IDs, or use --text 'description' or --plan <path>:")
  issueIds = answer.match(issueIdPattern) || []
  if (issueIds.length === 0 && !answer.includes('--text') && !answer.includes('--plan')) {
    inputType = 'text'
    rawInput = answer
  }
}

// Execution method selection (interactive if no flag)
if (!execMatch && !AUTO_YES) {
  const methodChoice = AskUserQuestion({
    questions: [{
      question: "Select execution method for implementation:",
      options: [
        { label: "Gemini", description: "gemini-2.5-pro (recommended for <= 3 tasks)" },
        { label: "Codex", description: "gpt-5.2 (recommended for > 3 tasks)" },
        { label: "Auto", description: "Auto-select based on task count" }
      ]
    }]
  })
  if (methodChoice === 'Codex') executionMethod = 'codex'
  else if (methodChoice === 'Auto') executionMethod = 'auto'
}

const slug = (issueIds[0] || rawInput).toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
const sessionId = `planex-${slug}-${dateStr}`
const sessionFolder = `.workflow/.csv-wave/${sessionId}`

Bash(`mkdir -p ${sessionFolder}/{artifacts/solutions,builds,wisdom}`)

Write(`${sessionFolder}/discoveries.ndjson`, `# Discovery Board - ${sessionId}\n# Format: NDJSON\n`)

// Initialize wisdom files
Write(`${sessionFolder}/wisdom/learnings.md`, `# Learnings\n\nAccumulated during ${sessionId}\n`)
Write(`${sessionFolder}/wisdom/decisions.md`, `# Decisions\n\n`)
Write(`${sessionFolder}/wisdom/conventions.md`, `# Conventions\n\n`)
Write(`${sessionFolder}/wisdom/issues.md`, `# Issues\n\n`)

// Store session metadata
Write(`${sessionFolder}/session.json`, JSON.stringify({
  session_id: sessionId,
  pipeline_type: 'plan-execute',
  input_type: inputType,
  raw_input: rawInput,
  issue_ids: issueIds,
  execution_method: executionMethod,
  created_at: getUtc8ISOString()
}, null, 2))
```

---
### Phase 0: Pre-Wave Interactive (Input Analysis)

**Objective**: Parse and normalize input into a list of issue IDs ready for the planning wave.

**Input Type Handling**:

| Input Type | Processing |
|------------|-----------|
| `issues` (ISS-* IDs) | Use directly, verify they exist via `ccw issue status` |
| `text` (--text flag) | Create issues via `ccw issue create --title ... --context ...` |
| `plan` (--plan flag) | Read the plan file, parse phases/tasks, batch-create issues |

For `text` input:

```bash
# Create an issue from the text description
ccw issue create --title "<derived-title>" --context "<raw_input>"
# Parse the output for the new issue ID
```

For `plan` input:

```bash
# Read the plan file: planContent = Read("<plan-path>")
# Parse phases/sections into individual issues
# Create each as a separate issue via ccw issue create
```

After processing, update session.json with the resolved issue_ids.
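The plan-file format is not fixed by this skill. As one possible sketch, assuming the hypothetical convention that each `## ` heading in the plan marks one issue to create:

```javascript
// Hypothetical plan parser: treats each "## " heading in the plan file as one
// issue to batch-create via `ccw issue create`. The real plan format may differ.
function parsePlanToIssues(planContent) {
  return planContent
    .split('\n')
    .filter(line => line.startsWith('## '))
    .map(line => ({ title: line.slice(3).trim() }))
}
```

Each resulting title would then be passed to `ccw issue create --title ...`, and the returned IDs collected into `issueIds`.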
**Success Criteria**:
- All inputs resolved to valid issue IDs
- Session metadata updated with final issue list

---
### Phase 1: Requirement -> CSV + Classification

**Objective**: Generate tasks.csv with PLAN-* tasks (wave 1) and EXEC-* tasks (wave 2).

**Two-Wave Structure**:

Wave 1 (Planning): One PLAN-NNN task per issue, all independent (no deps), concurrent execution.
Wave 2 (Execution): One EXEC-NNN task per issue, each depends on its corresponding PLAN-NNN.

**Task Generation**:

```javascript
const tasks = []

// Wave 1: Planning tasks (one per issue)
for (let i = 0; i < issueIds.length; i++) {
  const n = String(i + 1).padStart(3, '0')
  tasks.push({
    id: `PLAN-${n}`,
    title: `Plan ${issueIds[i]}`,
    description: `Generate implementation solution for issue ${issueIds[i]}. Analyze requirements, design solution approach, break down into implementation tasks, identify files to modify/create.`,
    role: 'planner',
    issue_ids: issueIds[i],
    input_type: inputType,
    raw_input: inputType === 'issues' ? issueIds[i] : rawInput,
    exec_mode: 'csv-wave',
    execution_method: '',
    deps: '',
    context_from: '',
    wave: '1',
    status: 'pending',
    findings: '', artifact_path: '', error: ''
  })
}

// Wave 2: Execution tasks (one per issue, each depends on its corresponding PLAN)
for (let i = 0; i < issueIds.length; i++) {
  const n = String(i + 1).padStart(3, '0')
  // Resolve execution method
  let method = executionMethod
  if (method === 'auto') {
    method = issueIds.length <= 3 ? 'gemini' : 'codex'
  }
  tasks.push({
    id: `EXEC-${n}`,
    title: `Implement ${issueIds[i]}`,
    description: `Implement solution for issue ${issueIds[i]}. Load solution artifact, execute implementation via CLI, run tests, commit.`,
    role: 'executor',
    issue_ids: issueIds[i],
    input_type: '',
    raw_input: '',
    exec_mode: 'csv-wave',
    execution_method: method,
    deps: `PLAN-${n}`,
    context_from: `PLAN-${n}`,
    wave: '2',
    status: 'pending',
    findings: '', artifact_path: '', error: ''
  })
}

Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))
```

**User Validation**: Display the task breakdown with wave assignments (skip if AUTO_YES).

**Success Criteria**:
- tasks.csv created with valid schema and wave assignments
- PLAN-* tasks in wave 1, EXEC-* tasks in wave 2
- Each EXEC-* depends on its corresponding PLAN-*
- No circular dependencies
- User approved (or AUTO_YES)
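In this two-wave pipeline the wave numbers are fixed, but the general wave computation Phase 1 refers to (a topological sort over `deps`) can be sketched as follows, including the circular-dependency detection the error-handling rules require. This is an illustrative helper, not part of the skill's defined API; it assumes every ID listed in `deps` exists in the task list:

```javascript
// Assign each task the smallest wave consistent with its deps (longest-path
// layering). Throws on circular dependencies, matching the error-handling table.
function computeWaves(tasks) {
  const byId = new Map(tasks.map(t => [t.id, t]))
  const waveOf = new Map()
  const visiting = new Set()
  const visit = (id) => {
    if (waveOf.has(id)) return waveOf.get(id)
    if (visiting.has(id)) throw new Error(`Circular dependency at ${id}`)
    visiting.add(id)
    const deps = (byId.get(id).deps || '').split(';').filter(Boolean)
    const wave = deps.length === 0 ? 1 : 1 + Math.max(...deps.map(visit))
    visiting.delete(id)
    waveOf.set(id, wave)
    return wave
  }
  for (const t of tasks) t.wave = String(visit(t.id))
  return tasks
}
```

For the standard pipeline this reproduces the fixed layout: dep-free PLAN-* tasks land in wave 1 and each EXEC-* lands one wave after its PLAN-*.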
---

### Phase 2: Wave Execution Engine (Extended)

**Objective**: Execute tasks wave-by-wave with context propagation between the planning and execution waves.

```javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
let tasks = parseCsv(masterCsv)
const maxWave = Math.max(...tasks.map(t => parseInt(t.wave)))

for (let wave = 1; wave <= maxWave; wave++) {
  console.log(`\nWave ${wave}/${maxWave} (${wave === 1 ? 'Planning' : 'Execution'})`)

  // 1. Filter tasks for this wave
  const waveTasks = tasks.filter(t => parseInt(t.wave) === wave && t.status === 'pending')

  // 2. Check dependencies - skip if upstream failed
  for (const task of waveTasks) {
    const depIds = (task.deps || '').split(';').filter(Boolean)
    const depStatuses = depIds.map(id => tasks.find(t => t.id === id)?.status)
    if (depStatuses.some(s => s === 'failed' || s === 'skipped')) {
      task.status = 'skipped'
      task.error = `Dependency failed: ${depIds.filter((id, i) =>
        ['failed','skipped'].includes(depStatuses[i])).join(', ')}`
    }
  }

  const pendingTasks = waveTasks.filter(t => t.status === 'pending')
  if (pendingTasks.length === 0) {
    console.log(`Wave ${wave}: No pending tasks, skipping...`)
    continue
  }

  // 3. Build prev_context from completed upstream tasks
  for (const task of pendingTasks) {
    const contextIds = (task.context_from || '').split(';').filter(Boolean)
    const prevFindings = contextIds.map(id => {
      const src = tasks.find(t => t.id === id)
      if (!src?.findings) return ''
      return `## [${src.id}] ${src.title}\n${src.findings}\nArtifact: ${src.artifact_path || 'N/A'}`
    }).filter(Boolean).join('\n\n')
    task.prev_context = prevFindings
  }

  // 4. Write wave CSV
  Write(`${sessionFolder}/wave-${wave}.csv`, toCsv(pendingTasks))

  // 5. Execute wave
  spawn_agents_on_csv({
    csv_path: `${sessionFolder}/wave-${wave}.csv`,
    id_column: "id",
    instruction: Read(".codex/skills/team-planex-v2/instructions/agent-instruction.md"),
    max_concurrency: maxConcurrency,
    max_runtime_seconds: 1200,
    output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
    output_schema: {
      type: "object",
      properties: {
        id: { type: "string" },
        status: { type: "string", enum: ["completed", "failed"] },
        findings: { type: "string" },
        artifact_path: { type: "string" },
        error: { type: "string" }
      }
    }
  })

  // 6. Merge results into master CSV
  const results = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
  for (const r of results) {
    const t = tasks.find(t => t.id === r.id)
    if (t) Object.assign(t, r)
  }
  Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))

  // 7. Cleanup temp files
  Bash(`rm -f ${sessionFolder}/wave-${wave}.csv ${sessionFolder}/wave-${wave}-results.csv`)

  // 8. Display wave summary
  const completed = results.filter(r => r.status === 'completed').length
  const failed = results.filter(r => r.status === 'failed').length
  console.log(`Wave ${wave} Complete: ${completed} completed, ${failed} failed`)
}
```

**Success Criteria**:
- All waves executed in order
- Each wave's results merged into the master CSV before the next wave starts
- Dependent tasks skipped when a predecessor failed
- discoveries.ndjson accumulated across all waves
- Planning wave completes before the execution wave starts

---
### Phase 3: Results Aggregation

**Objective**: Generate final results and a human-readable report.

```javascript
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
const completed = tasks.filter(t => t.status === 'completed')
const failed = tasks.filter(t => t.status === 'failed')
const skipped = tasks.filter(t => t.status === 'skipped')

const planTasks = tasks.filter(t => t.role === 'planner')
const execTasks = tasks.filter(t => t.role === 'executor')

// Export results.csv
Bash(`cp ${sessionFolder}/tasks.csv ${sessionFolder}/results.csv`)

// Generate context.md
let contextMd = `# PlanEx Pipeline Report\n\n`
contextMd += `**Session**: ${sessionId}\n`
contextMd += `**Input Type**: ${inputType}\n`
contextMd += `**Execution Method**: ${executionMethod}\n`
contextMd += `**Issues**: ${issueIds.join(', ')}\n\n`

contextMd += `## Summary\n\n`
contextMd += `| Status | Count |\n|--------|-------|\n`
contextMd += `| Completed | ${completed.length} |\n`
contextMd += `| Failed | ${failed.length} |\n`
contextMd += `| Skipped | ${skipped.length} |\n\n`

contextMd += `## Planning Wave\n\n`
for (const t of planTasks) {
  const icon = t.status === 'completed' ? '[OK]' : t.status === 'failed' ? '[FAIL]' : '[SKIP]'
  contextMd += `${icon} **${t.id}**: ${t.title}\n`
  if (t.findings) contextMd += `  ${t.findings.substring(0, 200)}\n`
  if (t.artifact_path) contextMd += `  Solution: ${t.artifact_path}\n`
  contextMd += `\n`
}

contextMd += `## Execution Wave\n\n`
for (const t of execTasks) {
  const icon = t.status === 'completed' ? '[OK]' : t.status === 'failed' ? '[FAIL]' : '[SKIP]'
  contextMd += `${icon} **${t.id}**: ${t.title}\n`
  if (t.findings) contextMd += `  ${t.findings.substring(0, 200)}\n`
  if (t.error) contextMd += `  Error: ${t.error}\n`
  contextMd += `\n`
}

contextMd += `## Deliverables\n\n`
contextMd += `| Artifact | Path |\n|----------|------|\n`
contextMd += `| Solution Plans | ${sessionFolder}/artifacts/solutions/ |\n`
contextMd += `| Build Results | ${sessionFolder}/builds/ |\n`
contextMd += `| Discovery Board | ${sessionFolder}/discoveries.ndjson |\n`

Write(`${sessionFolder}/context.md`, contextMd)

// Display summary
console.log(`
PlanEx Pipeline Complete
Input: ${inputType} (${issueIds.length} issues)
Planning: ${planTasks.filter(t => t.status === 'completed').length}/${planTasks.length} completed
Execution: ${execTasks.filter(t => t.status === 'completed').length}/${execTasks.length} completed
Failed: ${failed.length} | Skipped: ${skipped.length}
Output: ${sessionFolder}
`)
```

**Success Criteria**:
- results.csv exported (all tasks)
- context.md generated
- Summary displayed to user

---
## Shared Discovery Board Protocol

Both planner and executor agents share the same discoveries.ndjson file:

```jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"PLAN-001","type":"solution_designed","data":{"issue_id":"ISS-20260308-120000","approach":"refactor","task_count":4,"estimated_files":6}}
{"ts":"2026-03-08T10:05:00Z","worker":"PLAN-002","type":"conflict_warning","data":{"issue_ids":["ISS-20260308-120000","ISS-20260308-120001"],"overlapping_files":["src/auth/handler.ts"]}}
{"ts":"2026-03-08T10:10:00Z","worker":"EXEC-001","type":"impl_result","data":{"issue_id":"ISS-20260308-120000","files_changed":3,"tests_pass":true,"commit":"abc123"}}
```

**Discovery Types**:

| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `solution_designed` | `issue_id` | `{issue_id, approach, task_count, estimated_files}` | Planner: solution plan completed |
| `conflict_warning` | `issue_ids` | `{issue_ids, overlapping_files}` | Planner: file overlap detected between issues |
| `pattern_found` | `pattern+location` | `{pattern, location, description}` | Any: code pattern identified |
| `impl_result` | `issue_id` | `{issue_id, files_changed, tests_pass, commit}` | Executor: implementation outcome |
| `test_failure` | `issue_id` | `{issue_id, test_file, error_msg}` | Executor: test failure details |
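A minimal sketch of a board reader consistent with this protocol: it skips the `#` header lines the board file is initialized with, ignores malformed lines as the error-handling rules require, and deduplicates by type plus a simplified key (the real dedup key varies per type, as in the table above):

```javascript
// Hypothetical discovery-board reader. Tolerant of corrupt lines; keeps the
// latest entry per (type, issue_id-or-data) key. Simplified vs. the per-type
// dedup keys listed in the protocol table.
function loadDiscoveries(ndjsonText) {
  const latest = new Map()
  for (const line of ndjsonText.split('\n')) {
    if (!line.trim() || line.startsWith('#')) continue // board header/comments
    let entry
    try { entry = JSON.parse(line) } catch { continue } // ignore malformed lines
    const key = `${entry.type}:${JSON.stringify(entry.data?.issue_id ?? entry.data)}`
    latest.set(key, entry) // later entries win
  }
  return [...latest.values()]
}
```

Because the board is append-only, "latest entry wins" gives each agent the most recent state per discovery without ever rewriting the file.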
---

## Error Handling

| Error | Resolution |
|-------|------------|
| Circular dependency | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent EXEC tasks |
| Planner fails to create solution | Mark PLAN task failed, skip corresponding EXEC task |
| Executor fails implementation | Mark as failed, report in context.md |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| No input provided | Ask user for input via AskUserQuestion |
| Issue creation fails (text/plan input) | Report error, suggest manual issue creation |
| Continue mode: no session found | List available sessions, prompt user to select |
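The CSV parse-error resolution above (validate before execution, show the line number) could be sketched as a simple cell-count check. This is an illustrative helper, assuming cells contain no embedded newlines:

```javascript
// Count cells in one CSV line, treating commas inside quoted cells as data.
function cellCount(line) {
  let count = 1, inQuotes = false
  for (let i = 0; i < line.length; i++) {
    if (line[i] === '"') inQuotes = !inQuotes
    else if (line[i] === ',' && !inQuotes) count++
  }
  return count
}

// Validate that every data row matches the header's cell count, reporting
// 1-based line numbers for mismatches (as the error-handling table requires).
function validateCsv(text) {
  const lines = text.trim().split('\n')
  const expected = cellCount(lines[0])
  const errors = []
  lines.slice(1).forEach((line, i) => {
    const n = cellCount(line)
    if (n !== expected) errors.push(`line ${i + 2}: expected ${expected} cells, got ${n}`)
  })
  return errors
}
```

Running this over a wave CSV before `spawn_agents_on_csv` turns a mid-wave parse failure into an upfront, line-numbered report.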
---

## Core Rules

1. **Start Immediately**: First action is session initialization, then input parsing
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged
3. **CSV is Source of Truth**: Master tasks.csv holds all state
4. **CSV First**: Default to csv-wave for all tasks; interactive only for edge cases
5. **Context Propagation**: prev_context built from master CSV, not from memory
6. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson
7. **Skip on Failure**: If PLAN-N failed, skip EXEC-N automatically
8. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged
9. **Two-Wave Pipeline**: Wave 1 = Planning (PLAN-*), Wave 2 = Execution (EXEC-*)
10. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped
---

New file: `.codex/skills/team-planex-v2/instructions/agent-instruction.md` (193 lines)

---
# Agent Instruction -- Team PlanEx

CSV agent instruction template for `spawn_agents_on_csv`. Each agent receives this template with its row's column values substituted via `{column_name}` placeholders.

---

## TASK ASSIGNMENT

### MANDATORY FIRST STEPS

1. Read shared discoveries: `.workflow/.csv-wave/{session_id}/discoveries.ndjson` (if it exists; skip if not)
2. Read project context: `.workflow/project-tech.json` (if it exists)
3. Read wisdom files: `.workflow/.csv-wave/{session_id}/wisdom/` (conventions, learnings)

---
## Your Task

**Task ID**: {id}
**Title**: {title}
**Description**: {description}
**Role**: {role}
**Issue IDs**: {issue_ids}
**Input Type**: {input_type}
**Raw Input**: {raw_input}
**Execution Method**: {execution_method}

### Previous Tasks' Findings (Context)

{prev_context}

---
## Execution Protocol

### Role Router

Determine your execution steps based on `{role}`:

| Role | Execution Steps |
|------|----------------|
| planner | Step A: Solution Planning |
| executor | Step B: Implementation |

---
### Step A: Solution Planning (planner role)

1. Parse the issue ID from `{issue_ids}`
2. Determine the input source from `{input_type}`:

| Input Type | Action |
|------------|--------|
| `issues` | Load issue details: `Bash("ccw issue status {issue_ids} --json")` |
| `text` | Create an issue from text: `Bash("ccw issue create --title '<derived>' --context '{raw_input}'")` |
| `plan` | Read the plan file: `Read("{raw_input}")`, parse into issue requirements |

3. Generate the solution via CLI:

```bash
ccw cli -p "PURPOSE: Generate implementation solution for issue <issueId>; success = actionable task breakdown with file paths
TASK: * Load issue details * Analyze requirements * Design solution approach * Break down into implementation tasks * Identify files to modify/create
MODE: analysis
CONTEXT: @**/* | Memory: Session wisdom
EXPECTED: JSON solution with: title, description, tasks array (each with description, files_touched), estimated_complexity
CONSTRAINTS: Follow project patterns | Reference existing implementations
" --tool gemini --mode analysis --rule planning-breakdown-task-steps
```

4. Parse the CLI output to extract the solution JSON

5. Write the solution artifact:

```javascript
Write("<session>/artifacts/solutions/<issueId>.json", JSON.stringify({
  session_id: "<session-id>",
  issue_id: "<issueId>",
  solution: solutionFromCli,
  planned_at: new Date().toISOString()
}))
```

6. Check for file conflicts with other solutions in the session:
   - Read the other solution files in `<session>/artifacts/solutions/`
   - Compare `files_touched` lists
   - If overlapping files are found, log a warning to discoveries.ndjson
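The conflict check in step 6 can be sketched as follows, assuming each solution artifact carries a `tasks` array whose entries list `files_touched` (the shape requested from the planning CLI call above):

```javascript
// Hypothetical conflict check: given this issue's solution artifact and the
// other artifacts already in <session>/artifacts/solutions/, return the files
// claimed by more than one issue, ready to log as a conflict_warning.
function findOverlaps(mySolution, otherSolutions) {
  const mine = new Set(mySolution.solution.tasks.flatMap(t => t.files_touched || []))
  const overlaps = []
  for (const other of otherSolutions) {
    const shared = other.solution.tasks
      .flatMap(t => t.files_touched || [])
      .filter(f => mine.has(f))
    if (shared.length) {
      overlaps.push({ issue_id: other.issue_id, files: [...new Set(shared)] })
    }
  }
  return overlaps
}
```

Each returned entry maps directly onto the `conflict_warning` data schema (`issue_ids` plus `overlapping_files`).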
7. Share discoveries to the board:

```bash
echo '{"ts":"<ISO8601>","worker":"{id}","type":"solution_designed","data":{"issue_id":"<issueId>","approach":"<approach>","task_count":<N>,"estimated_files":<M>}}' >> <session>/discoveries.ndjson
```

---
### Step B: Implementation (executor role)
|
||||
|
||||
1. Parse issue ID from `{issue_ids}`
|
||||
|
||||
2. Load solution artifact:
|
||||
- Primary: Read file from prev_context artifact_path
|
||||
- Fallback: `Read("<session>/artifacts/solutions/<issueId>.json")`
|
||||
- Last resort: `Bash("ccw issue solutions <issueId> --json")`

3. Load wisdom files for conventions and patterns

4. Determine execution backend from `{execution_method}`:

   | Method | CLI Command |
   |--------|-------------|
   | codex | `ccw cli --tool codex --mode write --id exec-<issueId>` |
   | gemini | `ccw cli --tool gemini --mode write --id exec-<issueId>` |
   | qwen | `ccw cli --tool qwen --mode write --id exec-<issueId>` |

5. Execute implementation via CLI:
   ```bash
   ccw cli -p "PURPOSE: Implement solution for issue <issueId>; success = all tasks completed, tests pass
   TASK: <solution.tasks as bullet points>
   MODE: write
   CONTEXT: @**/* | Memory: Solution plan, session wisdom
   EXPECTED: Working implementation with code changes, test updates, no syntax errors
   CONSTRAINTS: Follow existing patterns | Maintain backward compatibility
   Issue: <issueId>
   Title: <solution.title>
   Solution: <solution JSON>" --tool <execution_method> --mode write --rule development-implement-feature
   ```

6. Verify implementation:

   | Check | Method | Pass Criteria |
   |-------|--------|---------------|
   | Tests | Detect and run project test command | All pass |
   | Syntax | IDE diagnostics or `tsc --noEmit` | No errors |

   If tests fail: retry implementation once, then report as failed.
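
The verify-and-retry policy amounts to this small loop (a sketch; `implement` and `verify` stand in for the CLI execution and the test/syntax checks above):

```python
def run_with_retry(implement, verify, retries=1):
    """Run implement(), then verify(); on failure re-run implement() up to `retries` times."""
    implement()
    for attempt in range(retries + 1):
        if verify():
            return True
        if attempt < retries:
            implement()  # one more try before reporting failure
    return False
```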

7. Commit changes:
   ```bash
   git add -A
   git commit -m "feat(<issueId>): <solution.title>"
   ```

8. Update issue status:
   ```bash
   ccw issue update <issueId> --status completed
   ```

9. Share discoveries to board:
   ```bash
   echo '{"ts":"<ISO8601>","worker":"{id}","type":"impl_result","data":{"issue_id":"<issueId>","files_changed":<N>,"tests_pass":<bool>,"commit":"<hash>"}}' >> <session>/discoveries.ndjson
   ```

---

## Share Discoveries (ALL ROLES)

After completing your work, append findings to the shared discovery board:

```bash
echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> <session>/discoveries.ndjson
```
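
The append shown above can be wrapped in a small helper (a sketch; the record shape follows the NDJSON format used throughout this skill):

```python
import json
import datetime

def share_discovery(board_path, worker, dtype, data):
    """Append one NDJSON discovery record to the shared board."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "worker": worker,
        "type": dtype,
        "data": data,
    }
    with open(board_path, "a") as board:
        board.write(json.dumps(record) + "\n")
    return record
```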

**Discovery Types to Share**:

| Type | Data Schema | When to Use |
|------|-------------|-------------|
| `solution_designed` | `{issue_id, approach, task_count, estimated_files}` | Planner: solution plan completed |
| `conflict_warning` | `{issue_ids, overlapping_files}` | Planner: file overlap between issues |
| `pattern_found` | `{pattern, location, description}` | Any: code pattern identified |
| `impl_result` | `{issue_id, files_changed, tests_pass, commit}` | Executor: implementation outcome |
| `test_failure` | `{issue_id, test_file, error_msg}` | Executor: test failure |

---

## Output (report_agent_job_result)

Return JSON:

```json
{
  "id": "{id}",
  "status": "completed | failed",
  "findings": "Key discoveries and implementation notes (max 500 chars)",
  "artifact_path": "relative path to main artifact (e.g., artifacts/solutions/ISS-xxx.json or builds/ISS-xxx.json)",
  "error": ""
}
```

---

## Quality Checklist

Before reporting complete:

- [ ] Mandatory first steps completed (discoveries, project context, wisdom)
- [ ] Role-specific execution steps followed
- [ ] At least 1 discovery shared to board
- [ ] Artifact file written to session folder
- [ ] Findings include actionable details (file paths, task counts, etc.)
- [ ] prev_context findings were incorporated where available
206
.codex/skills/team-planex-v2/schemas/tasks-schema.md
Normal file
@@ -0,0 +1,206 @@

# Team PlanEx -- CSV Schema

## Master CSV: tasks.csv

### Column Definitions

#### Input Columns (Set by Decomposer)

| Column | Type | Required | Description | Example |
|--------|------|----------|-------------|---------|
| `id` | string | Yes | Unique task identifier | `"PLAN-001"` |
| `title` | string | Yes | Short task title | `"Plan ISS-20260308-120000"` |
| `description` | string | Yes | Detailed task description (self-contained) | `"Generate implementation solution for issue ISS-20260308-120000"` |
| `role` | enum | Yes | Worker role: `planner` or `executor` | `"planner"` |
| `issue_ids` | string | Yes | Semicolon-separated issue IDs | `"ISS-20260308-120000"` |
| `input_type` | string | No | Input source type (planner only): `issues`, `text`, or `plan` | `"issues"` |
| `raw_input` | string | No | Raw input text (planner only) | `"ISS-20260308-120000"` |
| `exec_mode` | enum | Yes | Execution mechanism: `csv-wave` or `interactive` | `"csv-wave"` |
| `execution_method` | string | No | CLI tool for EXEC tasks: codex, gemini, qwen, or empty | `"gemini"` |
| `deps` | string | No | Semicolon-separated dependency task IDs | `"PLAN-001"` |
| `context_from` | string | No | Semicolon-separated task IDs for context | `"PLAN-001"` |

#### Computed Columns (Set by Wave Engine)

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `wave` | integer | Wave number (1-based, from topological sort) | `2` |
| `prev_context` | string | Aggregated findings from context_from tasks (per-wave CSV only) | `"[PLAN-001] Designed 4-task solution..."` |

#### Output Columns (Set by Agent)

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `status` | enum | `pending` -> `completed` / `failed` / `skipped` | `"completed"` |
| `findings` | string | Key discoveries (max 500 chars) | `"Solution designed with 4 implementation tasks..."` |
| `artifact_path` | string | Path to generated artifact file | `"artifacts/solutions/ISS-20260308-120000.json"` |
| `error` | string | Error message if failed | `""` |

---

### exec_mode Values

| Value | Mechanism | Description |
|-------|-----------|-------------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot batch execution within wave |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round individual execution (edge cases) |

> In standard PlanEx, all tasks use `csv-wave`. Interactive mode is reserved for rare multi-round coordination scenarios.

---

### Role Values

| Role | Task Prefixes | Responsibility |
|------|---------------|----------------|
| `planner` | PLAN-* | Requirement decomposition, solution design, issue creation |
| `executor` | EXEC-* | Solution implementation, testing, verification, commit |

---

### Example Data

```csv
id,title,description,role,issue_ids,input_type,raw_input,exec_mode,execution_method,deps,context_from,wave,status,findings,artifact_path,error
"PLAN-001","Plan issue-1","Generate solution for ISS-20260308-120000","planner","ISS-20260308-120000","issues","ISS-20260308-120000","csv-wave","","","","1","pending","","",""
"PLAN-002","Plan issue-2","Generate solution for ISS-20260308-120001","planner","ISS-20260308-120001","issues","ISS-20260308-120001","csv-wave","","","","1","pending","","",""
"EXEC-001","Implement issue-1","Implement solution for ISS-20260308-120000","executor","ISS-20260308-120000","","","csv-wave","gemini","PLAN-001","PLAN-001","2","pending","","",""
"EXEC-002","Implement issue-2","Implement solution for ISS-20260308-120001","executor","ISS-20260308-120001","","","csv-wave","gemini","PLAN-002","PLAN-002","2","pending","","",""
```

---

### Column Lifecycle

```
Decomposer (Phase 1)       Wave Engine (Phase 2)       Agent (Execution)
---------------------      --------------------        -----------------
id               --------> id               ---------> id
title            --------> title            ---------> (reads)
description      --------> description      ---------> (reads)
role             --------> role             ---------> (reads)
issue_ids        --------> issue_ids        ---------> (reads)
input_type       --------> input_type       ---------> (reads, planner)
raw_input        --------> raw_input        ---------> (reads, planner)
exec_mode        --------> exec_mode        ---------> (reads)
execution_method --------> execution_method ---------> (reads, executor)
deps             --------> deps             ---------> (reads)
context_from     --------> context_from     ---------> (reads)
                           wave             ---------> (reads)
                           prev_context     ---------> (reads)
                                                       status
                                                       findings
                                                       artifact_path
                                                       error
```

---

## Output Schema (JSON)

Agent output via `report_agent_job_result` (csv-wave tasks):

```json
{
  "id": "PLAN-001",
  "status": "completed",
  "findings": "Designed solution for ISS-20260308-120000: 4 implementation tasks, 6 files affected. Approach: refactor authentication handler to support token refresh.",
  "artifact_path": "artifacts/solutions/ISS-20260308-120000.json",
  "error": ""
}
```

---

## Discovery Types

| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `solution_designed` | `issue_id` | `{issue_id, approach, task_count, estimated_files}` | Planner: solution plan completed |
| `conflict_warning` | `issue_ids` | `{issue_ids, overlapping_files}` | Planner: file overlap between issues |
| `pattern_found` | `pattern+location` | `{pattern, location, description}` | Any: code pattern identified |
| `impl_result` | `issue_id` | `{issue_id, files_changed, tests_pass, commit}` | Executor: implementation outcome |
| `test_failure` | `issue_id` | `{issue_id, test_file, error_msg}` | Executor: test failure details |

### Discovery NDJSON Format

```jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"PLAN-001","type":"solution_designed","data":{"issue_id":"ISS-20260308-120000","approach":"refactor","task_count":4,"estimated_files":6}}
{"ts":"2026-03-08T10:05:00Z","worker":"PLAN-002","type":"conflict_warning","data":{"issue_ids":["ISS-20260308-120000","ISS-20260308-120001"],"overlapping_files":["src/auth/handler.ts"]}}
{"ts":"2026-03-08T10:10:00Z","worker":"EXEC-001","type":"impl_result","data":{"issue_id":"ISS-20260308-120000","files_changed":3,"tests_pass":true,"commit":"abc123"}}
```

> All agents (planner and executor) read/write the same discoveries.ndjson file.
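
A reader that honors the dedup keys above might look like this (a sketch; the "later lines win" policy is an assumption, since the table does not specify which duplicate survives):

```python
import json

DEDUP_KEYS = {
    "solution_designed": ("issue_id",),
    "conflict_warning": ("issue_ids",),
    "pattern_found": ("pattern", "location"),
    "impl_result": ("issue_id",),
    "test_failure": ("issue_id",),
}

def read_board(path):
    """Read discoveries.ndjson, keeping only the latest record per dedup key."""
    latest = {}
    with open(path) as board:
        for line in board:
            line = line.strip()
            if not line:
                continue
            rec = json.loads(line)
            fields = DEDUP_KEYS.get(rec["type"], ())
            # json.dumps makes list-valued keys (issue_ids) hashable.
            key = (rec["type"],) + tuple(json.dumps(rec["data"].get(f)) for f in fields)
            latest[key] = rec  # later lines win
    return list(latest.values())
```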

---

## Cross-Wave Context Flow

| Source | Target | Mechanism |
|--------|--------|-----------|
| PLAN-N findings | EXEC-N prev_context | Injected via prev_context column in wave-2.csv |
| PLAN-N artifact_path | EXEC-N | Executor reads solution file from artifact_path |
| Any agent discovery | Any agent | Shared via discoveries.ndjson |
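
The prev_context injection can be sketched as a simple aggregation over `context_from` (the `" | "` separator between multiple sources is an assumption; the `[ID] findings` prefix follows the example in the Computed Columns table):

```python
def build_prev_context(task, completed):
    """Aggregate findings from a task's context_from tasks as '[ID] findings' entries."""
    parts = []
    for src_id in [c for c in task.get("context_from", "").split(";") if c]:
        src = completed.get(src_id)
        if src and src.get("findings"):
            parts.append(f"[{src_id}] {src['findings']}")
    return " | ".join(parts)
```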

---

## Pipeline Structure

### Standard Two-Wave Pipeline

| Wave | Tasks | Role | Parallelism |
|------|-------|------|-------------|
| 1 | PLAN-001..N | planner | All concurrent (up to max_concurrency) |
| 2 | EXEC-001..N | executor | All concurrent (up to max_concurrency) |

Each EXEC-NNN depends on its corresponding PLAN-NNN. If PLAN-NNN fails, EXEC-NNN is automatically skipped.
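
The skip rule generalizes to any dependency chain. A sketch, assuming tasks are visited in wave order so each task's dependencies are already decided:

```python
def propagate_skips(tasks):
    """Mark a task skipped when any dependency failed or was itself skipped."""
    status = {t["id"]: t["status"] for t in tasks}
    for task in tasks:  # tasks assumed sorted by wave
        deps = [d for d in task.get("deps", "").split(";") if d]
        if any(status[d] in ("failed", "skipped") for d in deps):
            task["status"] = "skipped"
            status[task["id"]] = "skipped"
    return tasks
```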

---

## Solution Artifact Schema

Written by planner agents to `artifacts/solutions/{issueId}.json`:

```json
{
  "session_id": "planex-xxx-20260308",
  "issue_id": "ISS-20260308-120000",
  "solution": {
    "title": "Add rate limiting middleware",
    "approach": "Create express middleware with sliding window",
    "tasks": [
      {
        "order": 1,
        "description": "Create rate limiter middleware in src/middleware/rate-limit.ts",
        "files_touched": ["src/middleware/rate-limit.ts"]
      },
      {
        "order": 2,
        "description": "Add per-route configuration in src/config/routes.ts",
        "files_touched": ["src/config/routes.ts"]
      }
    ],
    "estimated_complexity": "Medium",
    "estimated_files": 4
  },
  "planned_at": "2026-03-08T10:00:00Z"
}
```

---

## Validation Rules

| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" |
| context_from valid | All context IDs exist and are in earlier waves | "Invalid context_from: {id}" |
| exec_mode valid | Value is `csv-wave` or `interactive` | "Invalid exec_mode: {value}" |
| Role valid | Value in {planner, executor} | "Invalid role: {role}" |
| Description non-empty | Every task has a description | "Empty description for task: {id}" |
| Status enum | status in {pending, completed, failed, skipped} | "Invalid status" |
| EXEC deps on PLAN | Every EXEC-N must depend on PLAN-N | "EXEC task without PLAN dependency: {id}" |
| Issue IDs non-empty | Every task has at least one issue_id | "No issue_ids for task: {id}" |
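
Several of the structural rules above can be checked with a short pass over the task list. A sketch covering a subset of the rules (error strings follow the table's templates):

```python
def validate_tasks(tasks):
    """Check a subset of the validation rules; returns a list of error strings."""
    errors = []
    seen = set()
    for t in tasks:
        if t["id"] in seen:
            errors.append(f"Duplicate task ID: {t['id']}")
        seen.add(t["id"])
    for t in tasks:
        deps = [d for d in t.get("deps", "").split(";") if d]
        for d in deps:
            if d not in seen:
                errors.append(f"Unknown dependency: {d}")
            if d == t["id"]:
                errors.append(f"Self-dependency: {t['id']}")
        if t.get("role") not in ("planner", "executor"):
            errors.append(f"Invalid role: {t.get('role')}")
        if not t.get("description"):
            errors.append(f"Empty description for task: {t['id']}")
        if t["id"].startswith("EXEC-") and not any(d.startswith("PLAN-") for d in deps):
            errors.append(f"EXEC task without PLAN dependency: {t['id']}")
    return errors
```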