mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-03-28 20:01:17 +08:00
feat: migrate all codex team skills from spawn_agents_on_csv to spawn_agent + wait_agent architecture
- Delete 21 old team skill directories using the CSV-wave pipeline pattern (~100+ files)
- Delete old team-lifecycle (v3) and team-planex-v2
- Create generic team-worker.toml and team-supervisor.toml (replacing tlv4-specific TOMLs)
- Convert 19 team skills from Claude Code format (Agent/SendMessage/TaskCreate) to Codex format (spawn_agent/wait_agent/tasks.json/request_user_input)
- Update team-lifecycle-v4 to use generic agent types (team_worker/team_supervisor)
- Convert all coordinator role files: dispatch.md, monitor.md, role.md
- Convert all worker role files: remove run_in_background, fix Bash syntax
- Convert all specs/pipelines.md references
- Final state: 20 team skills, 217 .md files, zero Claude Code API residuals

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@@ -1,751 +1,166 @@
---
name: team-issue
description: Hybrid team skill for issue resolution. CSV wave is primary for exploration, planning, integration, and implementation; interactive agents handle review gates with fix cycles. Supports Quick, Full, and Batch pipelines.
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] [--mode=quick|full|batch] \"issue-ids or --all-pending\""
allowed-tools: spawn_agents_on_csv, spawn_agent, wait, send_input, close_agent, Read, Write, Edit, Bash, Glob, Grep, request_user_input
---

## Auto Mode

When `--yes` or `-y`: auto-confirm task decomposition, skip interactive validation, use defaults.

# Team Issue Resolution

## Usage

Orchestrate the issue resolution pipeline: explore context -> plan solution -> review (optional) -> marshal queue -> implement. Supports Quick, Full, and Batch pipelines with a review-fix cycle.

```bash
$team-issue "ISS-20260308-120000 ISS-20260308-120001"
$team-issue -c 4 "ISS-20260308-120000 --mode=full"
$team-issue -y "--all-pending"
$team-issue --continue "issue-auth-fix-20260308"
```

**Flags**:

- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 3)
- `--continue`: Resume existing session
- `--mode=quick|full|batch`: Force pipeline mode (default: auto-detect)

**Output Directory**: `.workflow/.csv-wave/{session-id}/`

**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)

---

## Overview

Orchestrate the issue resolution pipeline: explore context, plan solution, review (optional), marshal queue, implement. Supports Quick, Full, and Batch pipelines with a review-fix cycle.

**Execution Model**: Hybrid -- CSV wave pipeline (primary) + individual agent spawn (secondary)

## Architecture

```
+---------------------------------------------------------------------------+
| TEAM ISSUE RESOLUTION WORKFLOW
+---------------------------------------------------------------------------+
|
| Phase 1: Requirement Parsing + Pipeline Selection
| +-- Parse issue IDs (GH-\d+, ISS-\d{8}-\d{6}, --all-pending)
| +-- Auto-detect pipeline mode (quick/full/batch)
| +-- Determine execution method (codex/gemini/auto)
| +-- Generate tasks.csv with wave + exec_mode columns
| +-- User validates task breakdown (skip if -y)
|
| Phase 2: Wave Execution Engine (Extended)
| +-- For each wave (1..N):
| |   +-- Execute pre-wave interactive tasks (if any)
| |   +-- Build wave CSV (filter csv-wave tasks for this wave)
| |   +-- Inject previous findings into prev_context column
| |   +-- spawn_agents_on_csv(wave CSV)
| |   +-- Execute post-wave interactive tasks (if any)
| |   +-- Merge all results into master tasks.csv
| |   +-- Check: any failed? -> skip dependents
| +-- discoveries.ndjson shared across all modes (append-only)
|
| Phase 3: Post-Wave Interactive (Review Gate)
| +-- Reviewer agent: multi-dimensional review with verdict
| +-- Fix cycle: rejected -> revise solution -> re-review (max 2)
| +-- Final aggregation / report
|
| Phase 4: Results Aggregation
| +-- Export final results.csv
| +-- Generate context.md with all findings
| +-- Display summary: completed/failed/skipped per wave
| +-- Offer: view results | retry failed | done
|
+---------------------------------------------------------------------------+

Skill(skill="team-issue", args="<issue-ids> [--mode=<mode>]")
                |
   SKILL.md (this file) = Router
                |
   +--------------+--------------+
   |                             |
no --role flag            --role <name>
   |                             |
Coordinator                   Worker
roles/coordinator/role.md     roles/<name>/role.md
   |
   +-- clarify -> dispatch -> spawn workers -> STOP
                |
   +-------+-------+-------+-------+
   v       v       v       v       v
[explor] [plann] [review] [integ] [imple]
```
---

## Role Registry

| Role | Path | Prefix | Inner Loop |
|------|------|--------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | — | — |
| explorer | [roles/explorer/role.md](roles/explorer/role.md) | EXPLORE-* | false |
| planner | [roles/planner/role.md](roles/planner/role.md) | SOLVE-* | false |
| reviewer | [roles/reviewer/role.md](roles/reviewer/role.md) | AUDIT-* | false |
| integrator | [roles/integrator/role.md](roles/integrator/role.md) | MARSHAL-* | false |
| implementer | [roles/implementer/role.md](roles/implementer/role.md) | BUILD-* | false |

## Role Router

Parse `$ARGUMENTS`:

- Has `--role <name>` -> read `roles/<name>/role.md`, execute Phases 2-4
- No `--role` -> read `roles/coordinator/role.md`, execute the entry router

## Task Classification Rules

Each task is classified by `exec_mode`:

| exec_mode | Mechanism | Criteria |
|-----------|-----------|----------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot, structured I/O, no multi-round interaction |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round, clarification, review gates |

**Classification Decision**:

| Task Property | Classification |
|---------------|----------------|
| Codebase exploration (EXPLORE-*) | `csv-wave` |
| Solution planning (SOLVE-*) | `csv-wave` |
| Queue formation / integration (MARSHAL-*) | `csv-wave` |
| Code implementation (BUILD-*) | `csv-wave` |
| Technical review with verdict (AUDIT-*) | `interactive` |
| Solution revision after rejection (SOLVE-fix-*) | `csv-wave` |

## Shared Constants

- **Session prefix**: `TISL`
- **Session path**: `.workflow/.team/TISL-<slug>-<date>/`
- **Team name**: `issue`
- **CLI tools**: `ccw cli --mode analysis` (read-only), `ccw cli --mode write` (modifications)
- **Message bus**: `mcp__ccw-tools__team_msg(session_id=<session-id>, ...)`
---

## Worker Spawn Template

Workers are spawned with the `spawn_agent` template shown under Session Structure below.

## CSV Schema

### tasks.csv (Master State)

```csv
id,title,description,role,issue_ids,exec_mode,execution_method,deps,context_from,wave,status,findings,artifact_path,error
"EXPLORE-001","Context analysis","Analyze issue context and map codebase impact for ISS-20260308-120000","explorer","ISS-20260308-120000","csv-wave","","","","1","pending","","",""
"SOLVE-001","Solution design","Design solution and decompose into implementation tasks","planner","ISS-20260308-120000","csv-wave","","EXPLORE-001","EXPLORE-001","2","pending","","",""
"AUDIT-001","Technical review","Review solution for feasibility, risk, and completeness","reviewer","ISS-20260308-120000","interactive","","SOLVE-001","SOLVE-001","3","pending","","",""
"MARSHAL-001","Queue formation","Form execution queue with conflict detection","integrator","ISS-20260308-120000","csv-wave","","AUDIT-001","SOLVE-001","4","pending","","",""
"BUILD-001","Implementation","Implement solution plan and verify with tests","implementer","ISS-20260308-120000","csv-wave","gemini","MARSHAL-001","EXPLORE-001;SOLVE-001","5","pending","","",""
```

**Columns**:

| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (EXPLORE-NNN, SOLVE-NNN, AUDIT-NNN, MARSHAL-NNN, BUILD-NNN) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description |
| `role` | Input | Worker role: explorer, planner, reviewer, integrator, implementer |
| `issue_ids` | Input | Semicolon-separated issue IDs this task covers |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `execution_method` | Input | codex, gemini, qwen, or empty (for non-BUILD tasks) |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` -> `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `artifact_path` | Output | Path to generated artifact (context report, solution, queue, etc.) |
| `error` | Output | Error message if failed (empty if success) |

### Per-Wave CSV (Temporary)

Each wave generates a temporary `wave-{N}.csv` with an extra `prev_context` column (csv-wave tasks only).
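The phase pseudocode later in this document relies on `parseCsv`/`toCsv` helpers that the skill does not define. A minimal sketch under the assumption that data fields are double-quoted as in the examples above (embedded quotes doubled per RFC 4180); the helper names mirror the pseudocode, but this concrete implementation is illustrative, not part of the skill:

```javascript
// Parse one CSV line, honoring double-quoted fields with embedded commas.
function parseCsvLine(line) {
  const fields = []
  let cur = '', inQuotes = false
  for (let i = 0; i < line.length; i++) {
    const ch = line[i]
    if (inQuotes) {
      if (ch === '"' && line[i + 1] === '"') { cur += '"'; i++ } // escaped quote
      else if (ch === '"') inQuotes = false
      else cur += ch
    } else if (ch === '"') inQuotes = true
    else if (ch === ',') { fields.push(cur); cur = '' }
    else cur += ch
  }
  fields.push(cur)
  return fields
}

// CSV text -> array of row objects keyed by the header row.
function parseCsv(text) {
  const lines = text.trim().split('\n')
  const header = parseCsvLine(lines[0])
  return lines.slice(1).map(line => {
    const values = parseCsvLine(line)
    return Object.fromEntries(header.map((h, i) => [h, values[i] ?? '']))
  })
}

// Array of row objects -> CSV text (unquoted header, quoted data fields).
function toCsv(rows) {
  const header = Object.keys(rows[0])
  const quote = v => `"${String(v ?? '').replace(/"/g, '""')}"`
  return [header.join(','), ...rows.map(r => header.map(h => quote(r[h])).join(','))].join('\n')
}
```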
---

## Agent Registry (Interactive Agents)

| Agent | Role File | Pattern | Responsibility | Position |
|-------|-----------|---------|----------------|----------|
| reviewer | agents/reviewer.md | 2.3 (structured review) | Multi-dimensional solution review with verdict | post-wave (after SOLVE wave) |

> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs, **you MUST immediately `Read` the corresponding agent.md** to reload.

---

## Output Artifacts

| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state -- all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 4 |
| `discoveries.ndjson` | Shared exploration board (all agents, both modes) | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 4 |
| `explorations/context-{issueId}.json` | Explorer context reports | Created by explorer agents |
| `solutions/solution-{issueId}.json` | Planner solution plans | Created by planner agents |
| `audits/audit-report.json` | Reviewer audit report | Created by reviewer agent |
| `queue/execution-queue.json` | Integrator execution queue | Created by integrator agent |
| `builds/build-{issueId}.json` | Implementer build results | Created by implementer agents |
| `interactive/{id}-result.json` | Results from interactive tasks | Created per interactive task |
---

## Session Structure

```
.workflow/.csv-wave/{session-id}/
+-- tasks.csv              # Master state (all tasks, both modes)
+-- results.csv            # Final results export
+-- discoveries.ndjson     # Shared discovery board (all agents)
+-- context.md             # Human-readable report
+-- wave-{N}.csv           # Temporary per-wave input (csv-wave only)
+-- explorations/          # Explorer output
|   +-- context-{issueId}.json
+-- solutions/             # Planner output
|   +-- solution-{issueId}.json
+-- audits/                # Reviewer output
|   +-- audit-report.json
+-- queue/                 # Integrator output
|   +-- execution-queue.json
+-- builds/                # Implementer output
|   +-- build-{issueId}.json
+-- interactive/           # Interactive task artifacts
|   +-- {id}-result.json
+-- wisdom/                # Cross-task knowledge
    +-- learnings.md
    +-- decisions.md
    +-- conventions.md
    +-- issues.md
```

Coordinator spawns workers using this template:

```
spawn_agent({
  agent_type: "team_worker",
  items: [
    { type: "text", text: `## Role Assignment
role: <role>
role_spec: <skill_root>/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
requirement: <task-description>
inner_loop: false

Read role_spec file (<skill_root>/roles/<role>/role.md) to load Phase 2-4 domain instructions.` },

    { type: "text", text: `## Task Context
task_id: <task-id>
title: <task-title>
description: <task-description>
pipeline_phase: <pipeline-phase>` },

    { type: "text", text: `## Upstream Context
<prev_context>` }
  ]
})
```

---

After spawning, use `wait_agent({ ids: [...], timeout_ms: 900000 })` to collect results, then `close_agent({ id })` each worker.
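The spawn/wait/close lifecycle runs under the `-c` concurrency cap. A minimal sketch of batching worker specs so each batch is spawned, waited on, and closed before the next begins; `chunkBatches` is a hypothetical helper, and the commented loop stubs the platform tools (`spawn_agent`, `wait_agent`, `close_agent`), which are not callable here:

```javascript
// Split worker specs into batches of at most maxConcurrency.
function chunkBatches(specs, maxConcurrency) {
  const batches = []
  for (let i = 0; i < specs.length; i += maxConcurrency) {
    batches.push(specs.slice(i, i + maxConcurrency))
  }
  return batches
}

// Intended use with the platform tools (illustrative only):
// for (const batch of chunkBatches(workerSpecs, maxConcurrency)) {
//   const ids = batch.map(spec => spawn_agent(spec))
//   wait_agent({ ids, timeout_ms: 900000 })
//   ids.forEach(id => close_agent({ id }))
// }
```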

## Implementation

**Parallel spawn** (Batch mode, N explorer or M implementer instances):

```
spawn_agent({
  agent_type: "team_worker",
  items: [
    { type: "text", text: `## Role Assignment
role: <role>
role_spec: <skill_root>/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
requirement: <task-description>
agent_name: <role>-<N>
inner_loop: false

Read role_spec file (<skill_root>/roles/<role>/role.md) to load Phase 2-4 domain instructions.` },

    { type: "text", text: `## Task Context
task_id: <task-id>
title: <task-title>
description: <task-description>
pipeline_phase: <pipeline-phase>` },

    { type: "text", text: `## Upstream Context
<prev_context>` }
  ]
})
```

After spawning, use `wait_agent({ ids: [...], timeout_ms: 900000 })` to collect results, then `close_agent({ id })` each worker.

### Session Initialization

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1]) : 3

const requirement = $ARGUMENTS
  .replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
  .trim()

// Parse issue IDs
const issueIdPattern = /(?:GH-\d+|ISS-\d{8}-\d{6})/g
let issueIds = requirement.match(issueIdPattern) || []

// Parse mode override
const modeMatch = requirement.match(/--mode=(\w+)/)
let pipelineMode = modeMatch ? modeMatch[1] : null

// Handle --all-pending
if (requirement.includes('--all-pending')) {
  const result = Bash("ccw issue list --status registered,pending --json")
  issueIds = JSON.parse(result).map(i => i.id)
}

// If no issue IDs, ask user
if (issueIds.length === 0) {
  const answer = request_user_input({
    questions: [{
      question: "No issue IDs found. Please provide issue IDs.",
      header: "Issue IDs",
      id: "issue_input",
      options: [
        { label: "Enter IDs", description: "Provide issue IDs (e.g., ISS-20260308-120000)" },
        { label: "Cancel", description: "Abort the pipeline" }
      ]
    }]
  })
  if (answer.answers.issue_input.answers[0] === "Cancel") return
  issueIds = answer.answers.issue_input.answers[0].match(issueIdPattern) || []
  if (issueIds.length === 0) return // abort
}

// Auto-detect pipeline mode
if (!pipelineMode) {
  // Load issue priorities
  const priorities = []
  for (const id of issueIds) {
    const info = JSON.parse(Bash(`ccw issue status ${id} --json`))
    priorities.push(info.priority || 0)
  }
  const hasHighPriority = priorities.some(p => p >= 4)

  if (issueIds.length <= 2 && !hasHighPriority) pipelineMode = 'quick'
  else if (issueIds.length <= 4) pipelineMode = 'full'
  else pipelineMode = 'batch'
}

// Execution method selection
let executionMethod = 'gemini' // default
const execMatch = requirement.match(/--exec=(\w+)/)
if (execMatch) executionMethod = execMatch[1]

const slug = issueIds[0].toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
const sessionId = `issue-${slug}-${dateStr}`
const sessionFolder = `.workflow/.csv-wave/${sessionId}`

Bash(`mkdir -p ${sessionFolder}/{explorations,solutions,audits,queue,builds,interactive,wisdom}`)

Write(`${sessionFolder}/discoveries.ndjson`, `# Discovery Board - ${sessionId}\n# Format: NDJSON\n`)

// Initialize wisdom files
Write(`${sessionFolder}/wisdom/learnings.md`, `# Learnings\n\nAccumulated during ${sessionId}\n`)
Write(`${sessionFolder}/wisdom/decisions.md`, `# Decisions\n\n`)
Write(`${sessionFolder}/wisdom/conventions.md`, `# Conventions\n\n`)
Write(`${sessionFolder}/wisdom/issues.md`, `# Issues\n\n`)

// Store session metadata
Write(`${sessionFolder}/session.json`, JSON.stringify({
  session_id: sessionId,
  pipeline_mode: pipelineMode,
  issue_ids: issueIds,
  execution_method: executionMethod,
  fix_cycles: 0,
  max_fix_cycles: 2,
  created_at: getUtc8ISOString()
}, null, 2))
```

---

### Phase 1: Requirement -> CSV + Classification

**Objective**: Parse issue IDs, determine the pipeline mode, and generate tasks.csv with wave and exec_mode assignments.

## User Commands

| Command | Action |
|---------|--------|
| `check` / `status` | View execution status graph, no advancement |
| `resume` / `continue` | Check worker states, advance to the next step |

## Session Directory

**Decomposition Rules**:

| Pipeline | Tasks Generated |
|----------|-----------------|
| quick | EXPLORE-001, SOLVE-001, MARSHAL-001, BUILD-001 (4 tasks, waves 1-4) |
| full | EXPLORE-001, SOLVE-001, AUDIT-001, MARSHAL-001, BUILD-001 (5 tasks, waves 1-5) |
| batch | EXPLORE-001..N, SOLVE-001..N, AUDIT-001, MARSHAL-001, BUILD-001..M (N+N+1+1+M tasks) |

**Classification Rules**:

| Task Prefix | Role | exec_mode | Rationale |
|-------------|------|-----------|-----------|
| EXPLORE-* | explorer | csv-wave | One-shot codebase analysis |
| SOLVE-* | planner | csv-wave | One-shot solution design via CLI |
| SOLVE-fix-* | planner | csv-wave | One-shot revision addressing feedback |
| AUDIT-* | reviewer | interactive | Multi-round review with verdict routing |
| MARSHAL-* | integrator | csv-wave | One-shot queue formation |
| BUILD-* | implementer | csv-wave | One-shot implementation via CLI |

**Wave Computation**: Kahn's BFS topological sort with depth tracking (csv-wave tasks only).
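The wave computation can be sketched as follows. This is a minimal illustration of Kahn's BFS with depth tracking, not the skill's actual implementation; the `computeWaves` name and plain `{ id, deps }` row shape are assumptions. As a side effect it also surfaces circular dependencies, which Phase 1's success criteria require to be absent:

```javascript
// Assign wave = 1 + max(wave of deps) via Kahn's algorithm.
// If the queue drains before every task is waved, a cycle exists.
function computeWaves(tasks) {
  const indegree = new Map(tasks.map(t => [t.id, 0]))
  const dependents = new Map(tasks.map(t => [t.id, []]))
  for (const t of tasks) {
    for (const dep of (t.deps || '').split(';').filter(Boolean)) {
      indegree.set(t.id, indegree.get(t.id) + 1)
      dependents.get(dep).push(t.id)
    }
  }
  const waves = new Map()
  let queue = tasks.filter(t => indegree.get(t.id) === 0).map(t => t.id)
  queue.forEach(id => waves.set(id, 1))
  while (queue.length > 0) {
    const next = []
    for (const id of queue) {
      for (const childId of dependents.get(id)) {
        const remaining = indegree.get(childId) - 1
        indegree.set(childId, remaining)
        // Depth tracking: child sits one wave after its deepest dependency
        waves.set(childId, Math.max(waves.get(childId) || 1, waves.get(id) + 1))
        if (remaining === 0) next.push(childId)
      }
    }
    queue = next
  }
  if (waves.size !== tasks.length) throw new Error('Circular dependency detected')
  return waves
}
```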

**User Validation**: Display the task breakdown with wave + exec_mode assignments (skip if AUTO_YES).

**Task Generation by Pipeline Mode**:

Quick pipeline:

```csv
id,title,description,role,issue_ids,exec_mode,execution_method,deps,context_from,wave,status,findings,artifact_path,error
"EXPLORE-001","Context analysis","Analyze issue context and map codebase impact","explorer","<issue-ids>","csv-wave","","","","1","pending","","",""
"SOLVE-001","Solution design","Design solution and decompose into implementation tasks","planner","<issue-ids>","csv-wave","","EXPLORE-001","EXPLORE-001","2","pending","","",""
"MARSHAL-001","Queue formation","Form execution queue with conflict detection and ordering","integrator","<issue-ids>","csv-wave","","SOLVE-001","SOLVE-001","3","pending","","",""
"BUILD-001","Implementation","Implement solution plan and verify with tests","implementer","<issue-ids>","csv-wave","<exec-method>","MARSHAL-001","EXPLORE-001;SOLVE-001","4","pending","","",""
```

Full pipeline (adds AUDIT-001 as an interactive task between SOLVE and MARSHAL):

```csv
"AUDIT-001","Technical review","Review solution for feasibility, risk, and completeness","reviewer","<issue-ids>","interactive","","SOLVE-001","SOLVE-001","3","pending","","",""
"MARSHAL-001","Queue formation","...","integrator","<issue-ids>","csv-wave","","AUDIT-001","SOLVE-001","4","pending","","",""
"BUILD-001","Implementation","...","implementer","<issue-ids>","csv-wave","<exec-method>","MARSHAL-001","EXPLORE-001;SOLVE-001","5","pending","","",""
```

Batch pipeline (parallel EXPLORE, sequential SOLVE, then AUDIT, MARSHAL, deferred BUILD):

- EXPLORE-001..N with wave=1, no deps
- SOLVE-001..N with wave=2, deps on all EXPLORE-*
- AUDIT-001 with wave=3, deps on all SOLVE-*, interactive
- MARSHAL-001 with wave=4, deps on AUDIT-001
- BUILD-001..M created after MARSHAL completes (deferred)

**Success Criteria**:

- tasks.csv created with valid schema, wave, and exec_mode assignments
- No circular dependencies
- User approved (or AUTO_YES)

```
.workflow/.team/TISL-<slug>-<date>/
├── session.json           # Session metadata + pipeline + fix_cycles
├── task-analysis.json     # Coordinator analyze output
├── .msg/
│   ├── messages.jsonl     # Message bus log
│   └── meta.json          # Session state + cross-role state
├── wisdom/                # Cross-task knowledge
│   ├── learnings.md
│   ├── decisions.md
│   ├── conventions.md
│   └── issues.md
├── explorations/          # Explorer output
│   └── context-<issueId>.json
├── solutions/             # Planner output
│   └── solution-<issueId>.json
├── audits/                # Reviewer output
│   └── audit-report.json
├── queue/                 # Integrator output (also .workflow/issues/queue/)
└── builds/                # Implementer output
```

## Specs Reference
|
||||
|
||||
---
|
||||
|
||||
### Phase 2: Wave Execution Engine (Extended)
|
||||
|
||||
**Objective**: Execute tasks wave-by-wave with hybrid mechanism support and cross-wave context propagation.
|
||||
|
||||
```javascript
|
||||
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
|
||||
let tasks = parseCsv(masterCsv)
|
||||
const maxWave = Math.max(...tasks.map(t => parseInt(t.wave)))
|
||||
let fixCycles = 0
|
||||
|
||||
for (let wave = 1; wave <= maxWave; wave++) {
|
||||
console.log(`\nWave ${wave}/${maxWave}`)
|
||||
|
||||
// 1. Separate tasks by exec_mode
|
||||
const waveTasks = tasks.filter(t => parseInt(t.wave) === wave)
|
||||
const csvTasks = waveTasks.filter(t => t.exec_mode === 'csv-wave' && t.status === 'pending')
|
||||
const interactiveTasks = waveTasks.filter(t => t.exec_mode === 'interactive' && t.status === 'pending')
|
||||
|
||||
// 2. Check dependencies - skip if upstream failed
|
||||
for (const task of waveTasks) {
|
||||
const depIds = (task.deps || '').split(';').filter(Boolean)
|
||||
const depStatuses = depIds.map(id => tasks.find(t => t.id === id)?.status)
|
||||
if (depStatuses.some(s => s === 'failed' || s === 'skipped')) {
|
||||
task.status = 'skipped'
|
||||
task.error = `Dependency failed: ${depIds.filter((id, i) =>
|
||||
['failed','skipped'].includes(depStatuses[i])).join(', ')}`
|
||||
}
|
||||
}
|
||||
|
||||
// 3. Execute csv-wave tasks
|
||||
const pendingCsv = csvTasks.filter(t => t.status === 'pending')
|
||||
if (pendingCsv.length > 0) {
|
||||
// Build prev_context for each task
|
||||
for (const task of pendingCsv) {
|
||||
const contextIds = (task.context_from || '').split(';').filter(Boolean)
|
||||
const prevFindings = contextIds.map(id => {
|
||||
const src = tasks.find(t => t.id === id)
|
||||
if (!src?.findings) return ''
|
||||
return `## [${src.id}] ${src.title}\n${src.findings}`
|
||||
}).filter(Boolean).join('\n\n')
|
||||
task.prev_context = prevFindings
|
||||
}
|
||||
|
||||
// Write wave CSV
|
||||
Write(`${sessionFolder}/wave-${wave}.csv`, toCsv(pendingCsv))
|
||||
|
||||
// Execute
|
||||
spawn_agents_on_csv({
|
||||
csv_path: `${sessionFolder}/wave-${wave}.csv`,
|
||||
id_column: "id",
|
||||
instruction: Read("~ or <project>/.codex/skills/team-issue/instructions/agent-instruction.md"),
|
||||
max_concurrency: maxConcurrency,
|
||||
max_runtime_seconds: 1200,
|
||||
output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
|
||||
output_schema: {
|
||||
type: "object",
|
||||
properties: {
|
||||
id: { type: "string" },
|
||||
status: { type: "string", enum: ["completed", "failed"] },
|
||||
findings: { type: "string" },
|
||||
artifact_path: { type: "string" },
|
||||
error: { type: "string" }
|
||||
}
|
||||
}
|
||||
})
|
||||
|
||||
// Merge results
|
||||
const results = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
|
||||
for (const r of results) {
|
||||
const t = tasks.find(t => t.id === r.id)
|
||||
if (t) Object.assign(t, r)
|
||||
}
|
||||
|
||||
// Cleanup temp files
|
||||
Bash(`rm -f ${sessionFolder}/wave-${wave}.csv ${sessionFolder}/wave-${wave}-results.csv`)
|
||||
}
|
||||
|
||||
// 4. Execute interactive tasks (post-wave)
|
||||
const pendingInteractive = interactiveTasks.filter(t => t.status === 'pending')
|
||||
for (const task of pendingInteractive) {
|
||||
// Read agent definition
|
||||
const agentDef = Read(`~ or <project>/.codex/skills/team-issue/agents/reviewer.md`)
|
||||
|
||||
// Build context from upstream tasks
|
||||
const contextIds = (task.context_from || '').split(';').filter(Boolean)
|
||||
const prevContext = contextIds.map(id => {
|
||||
const src = tasks.find(t => t.id === id)
|
||||
if (!src?.findings) return ''
|
||||
return `## [${src.id}] ${src.title}\n${src.findings}\nArtifact: ${src.artifact_path || 'N/A'}`
|
||||
}).filter(Boolean).join('\n\n')
|
||||
|
||||
const agent = spawn_agent({
|
||||
message: `## TASK ASSIGNMENT
|
||||
|
||||
### MANDATORY FIRST STEPS (Agent Execute)
|
||||
1. **Read role definition**: ~ or <project>/.codex/skills/team-issue/agents/reviewer.md (MUST read first)
|
||||
2. Read: ${sessionFolder}/discoveries.ndjson (shared discoveries)
|
||||
3. Read: .workflow/project-tech.json (if exists)
|
||||
|
||||
---
|
||||
|
||||
Goal: ${task.description}
|
||||
Issue IDs: ${task.issue_ids}
|
||||
Session: ${sessionFolder}
|
||||
Scope: Review all solutions in ${sessionFolder}/solutions/ for technical feasibility, risk, and completeness
|
||||
|
||||
Deliverables:
|
||||
- Audit report at ${sessionFolder}/audits/audit-report.json
|
||||
- Per-issue verdict: approved (>=80), concerns (60-79), rejected (<60)
|
||||
- Overall verdict
|
||||
|
||||
### Previous Context
|
||||
${prevContext}`
|
||||
})
|
||||
|
||||
const result = wait({ ids: [agent], timeout_ms: 600000 })
|
||||
|
||||
if (result.timed_out) {
|
||||
send_input({ id: agent, message: "Please finalize and output current findings immediately." })
|
||||
const retry = wait({ ids: [agent], timeout_ms: 120000 })
|
||||
}
|
||||
|
||||
// Store interactive result
|
||||
Write(`${sessionFolder}/interactive/${task.id}-result.json`, JSON.stringify({
|
||||
task_id: task.id,
|
||||
status: "completed",
|
||||
findings: "Review completed",
|
||||
timestamp: getUtc8ISOString()
|
||||
}))
|
||||
|
||||
close_agent({ id: agent })
|
||||
|
||||
// Parse review verdict from audit report
|
||||
let verdict = 'approved'
|
||||
try {
|
||||
const auditReport = JSON.parse(Read(`${sessionFolder}/audits/audit-report.json`))
|
||||
verdict = auditReport.overall_verdict || 'approved'
|
||||
} catch (e) { /* default to approved */ }
|
||||
|
||||
task.status = 'completed'
|
||||
task.findings = `Review verdict: ${verdict}`
|
||||
|
||||
// Handle review-fix cycle
|
||||
if (verdict === 'rejected' && fixCycles < 2) {
|
||||
fixCycles++
|
||||
// Create SOLVE-fix and AUDIT re-review tasks
|
||||
const fixTask = {
|
||||
id: `SOLVE-fix-${String(fixCycles).padStart(3, '0')}`,
|
||||
title: `Revise solution (fix cycle ${fixCycles})`,
|
||||
description: `Revise solution addressing reviewer feedback. Read audit report for rejection reasons.`,
|
        role: 'planner',
        issue_ids: task.issue_ids,
        exec_mode: 'csv-wave',
        execution_method: '',
        deps: task.id,
        context_from: task.id,
        wave: String(parseInt(task.wave) + 1),
        status: 'pending',
        findings: '', artifact_path: '', error: ''
      }
      const reReviewTask = {
        id: `AUDIT-${String(fixCycles + 1).padStart(3, '0')}`,
        title: `Re-review revised solution (cycle ${fixCycles})`,
        description: `Re-review revised solution focusing on previously rejected dimensions.`,
        role: 'reviewer',
        issue_ids: task.issue_ids,
        exec_mode: 'interactive',
        execution_method: '',
        deps: fixTask.id,
        context_from: fixTask.id,
        wave: String(parseInt(task.wave) + 2),
        status: 'pending',
        findings: '', artifact_path: '', error: ''
      }
      tasks.push(fixTask, reReviewTask)
      // Adjust MARSHAL and BUILD waves
      for (const t of tasks) {
        if (t.id.startsWith('MARSHAL') || t.id.startsWith('BUILD')) {
          t.wave = String(parseInt(reReviewTask.wave) + (t.id.startsWith('MARSHAL') ? 1 : 2))
          if (t.id.startsWith('MARSHAL')) t.deps = reReviewTask.id
        }
      }
    } else if (verdict === 'rejected' && fixCycles >= 2) {
      // Force proceed with warning
      console.log(`WARNING: Fix cycle limit (${fixCycles}) reached. Forcing proceed to MARSHAL.`)
    }
  }

  // 5. Merge all results into master CSV
  Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))

  // 6. Handle deferred BUILD task creation (batch mode after MARSHAL)
  const completedMarshal = tasks.find(t => t.id === 'MARSHAL-001' && t.status === 'completed')
  if (completedMarshal && pipelineMode === 'batch') {
    try {
      const queue = JSON.parse(Read(`${sessionFolder}/queue/execution-queue.json`))
      const buildCount = queue.parallel_groups?.length || 1
      for (let b = 1; b <= Math.min(buildCount, 3); b++) {
        const buildIssues = queue.parallel_groups[b-1]?.issues || issueIds
        tasks.push({
          id: `BUILD-${String(b).padStart(3, '0')}`,
          title: `Implementation group ${b}`,
          description: `Implement solutions for issues in parallel group ${b}`,
          role: 'implementer',
          issue_ids: buildIssues.join(';'),
          exec_mode: 'csv-wave',
          execution_method: executionMethod,
          deps: 'MARSHAL-001',
          context_from: 'EXPLORE-001;SOLVE-001',
          wave: String(parseInt(completedMarshal.wave) + 1),
          status: 'pending',
          findings: '', artifact_path: '', error: ''
        })
      }
      Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))
    } catch (e) { /* single BUILD fallback */ }
  }
}
```

**Success Criteria**:
- All waves executed in order
- Both csv-wave and interactive tasks handled per wave
- Each wave's results merged into master CSV before next wave starts
- Dependent tasks skipped when predecessor failed
- discoveries.ndjson accumulated across all waves and mechanisms
- Review-fix cycles handled (max 2)
- Deferred BUILD tasks created after MARSHAL (batch mode)

---

### Phase 3: Post-Wave Interactive

**Objective**: Handle any remaining interactive tasks after all waves complete. In most cases, the review gate is handled inline during Phase 2 wave execution.

If any interactive tasks remain unprocessed (e.g., from dynamically added fix cycles), execute them using the same spawn_agent protocol as Phase 2.

**Success Criteria**:
- All interactive tasks completed or skipped
- Fix cycle limit respected

---

### Phase 4: Results Aggregation

**Objective**: Generate final results and human-readable report.

```javascript
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
const completed = tasks.filter(t => t.status === 'completed')
const failed = tasks.filter(t => t.status === 'failed')
const skipped = tasks.filter(t => t.status === 'skipped')

// Export results.csv
Bash(`cp ${sessionFolder}/tasks.csv ${sessionFolder}/results.csv`)

// Generate context.md
let contextMd = `# Issue Resolution Report\n\n`
contextMd += `**Session**: ${sessionId}\n`
contextMd += `**Pipeline**: ${pipelineMode}\n`
contextMd += `**Issues**: ${issueIds.join(', ')}\n`
contextMd += `**Fix Cycles**: ${fixCycles}/2\n\n`

contextMd += `## Summary\n\n`
contextMd += `| Status | Count |\n|--------|-------|\n`
contextMd += `| Completed | ${completed.length} |\n`
contextMd += `| Failed | ${failed.length} |\n`
contextMd += `| Skipped | ${skipped.length} |\n\n`

contextMd += `## Task Details\n\n`
for (const t of tasks) {
  const icon = t.status === 'completed' ? '[OK]' : t.status === 'failed' ? '[FAIL]' : '[SKIP]'
  contextMd += `${icon} **${t.id}**: ${t.title} (${t.role})\n`
  if (t.findings) contextMd += `  Findings: ${t.findings.substring(0, 200)}\n`
  if (t.artifact_path) contextMd += `  Artifact: ${t.artifact_path}\n`
  if (t.error) contextMd += `  Error: ${t.error}\n`
  contextMd += `\n`
}

contextMd += `## Deliverables\n\n`
contextMd += `| Artifact | Path |\n|----------|------|\n`
contextMd += `| Context Reports | ${sessionFolder}/explorations/ |\n`
contextMd += `| Solution Plans | ${sessionFolder}/solutions/ |\n`
contextMd += `| Audit Report | ${sessionFolder}/audits/audit-report.json |\n`
contextMd += `| Execution Queue | ${sessionFolder}/queue/execution-queue.json |\n`
contextMd += `| Build Results | ${sessionFolder}/builds/ |\n`

Write(`${sessionFolder}/context.md`, contextMd)

// Display summary
console.log(`
Issue Resolution Complete
Pipeline: ${pipelineMode}
Completed: ${completed.length} | Failed: ${failed.length} | Skipped: ${skipped.length}
Fix Cycles Used: ${fixCycles}/2
Output: ${sessionFolder}
`)
```

**Success Criteria**:
- results.csv exported (all tasks, both modes)
- context.md generated
- Summary displayed to user

---

## Shared Discovery Board Protocol

Both csv-wave and interactive agents share the same discoveries.ndjson file:

```jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"EXPLORE-001","type":"file_found","data":{"path":"src/auth/handler.ts","relevance":"high","purpose":"Main auth handler"}}
{"ts":"2026-03-08T10:01:00Z","worker":"EXPLORE-001","type":"pattern_found","data":{"pattern":"middleware-chain","location":"src/middleware/","description":"Express middleware chain pattern"}}
{"ts":"2026-03-08T10:05:00Z","worker":"SOLVE-001","type":"solution_approach","data":{"issue_id":"ISS-20260308-120000","approach":"refactor","estimated_files":5}}
{"ts":"2026-03-08T10:10:00Z","worker":"BUILD-001","type":"impl_result","data":{"issue_id":"ISS-20260308-120000","files_changed":3,"tests_pass":true}}
```

**Discovery Types**:

| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `file_found` | `path` | `{path, relevance, purpose}` | Relevant file discovered |
| `pattern_found` | `pattern+location` | `{pattern, location, description}` | Code pattern identified |
| `dependency_found` | `from+to` | `{from, to, type}` | Dependency relationship |
| `solution_approach` | `issue_id` | `{issue_id, approach, estimated_files}` | Solution strategy |
| `conflict_found` | `files` | `{issues, files, resolution}` | File conflict between issues |
| `impl_result` | `issue_id` | `{issue_id, files_changed, tests_pass}` | Implementation outcome |
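The dedup keys above can be applied when re-reading the board. A minimal sketch (an illustrative helper, not part of the skill's required API) that deduplicates entries by the per-type keys and skips malformed lines, per the error-handling rule for a corrupt discoveries.ndjson:

```javascript
// Derive a dedup key per discovery type, mirroring the table above.
const DEDUP_KEYS = {
  file_found: d => d.path,
  pattern_found: d => `${d.pattern}:${d.location}`,
  dependency_found: d => `${d.from}->${d.to}`,
  solution_approach: d => d.issue_id,
  conflict_found: d => (d.files || []).join(','),
  impl_result: d => d.issue_id
}

function dedupDiscoveries(ndjsonText) {
  const seen = new Set()
  const out = []
  for (const line of ndjsonText.split('\n')) {
    if (!line.trim()) continue
    let entry
    try { entry = JSON.parse(line) } catch { continue } // ignore malformed lines
    const keyFn = DEDUP_KEYS[entry.type]
    const key = keyFn ? `${entry.type}:${keyFn(entry.data || {})}` : line
    if (seen.has(key)) continue // first writer wins; board is append-only
    seen.add(key)
    out.push(entry)
  }
  return out
}
```

The first occurrence of a key wins, which matches the append-only rule: later duplicates are simply not re-surfaced to readers.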

---

- [specs/pipelines.md](specs/pipelines.md) — Pipeline definitions and task registry

## Error Handling

| Error | Resolution |
|-------|------------|
| Circular dependency | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Urge convergence via send_input, then close if still timed out |
| Interactive agent failed | Mark as failed, skip dependents |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| Review rejection exceeds 2 rounds | Force convergence to MARSHAL with warning |
| No issues found for given IDs | Report error, ask user for valid IDs |
| Deferred BUILD count unknown | Read execution-queue.json after MARSHAL completes |
| Continue mode: no session found | List available sessions, prompt user to select |
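The first row above (circular dependency detected during wave computation) can be sketched as a depth-first wave assignment. This is an assumed implementation, not the skill's mandated one; `deps` is read as the ';'-separated id list used elsewhere in this skill:

```javascript
// Assign each task a wave number derived from its deps; a task's wave is
// one past the deepest dependency. Throws on a cycle, per the error table.
function computeWaves(tasks) {
  const byId = new Map(tasks.map(t => [t.id, t]))
  const wave = new Map()
  const visiting = new Set()
  function resolve(id) {
    if (wave.has(id)) return wave.get(id)
    if (visiting.has(id)) throw new Error(`Circular dependency involving ${id}`)
    visiting.add(id)
    const deps = (byId.get(id).deps || '').split(';').filter(Boolean)
    const w = deps.length ? Math.max(...deps.map(resolve)) + 1 : 1
    visiting.delete(id)
    wave.set(id, w)
    return w
  }
  for (const t of tasks) resolve(t.id)
  return wave
}
```

Tasks with no dependencies land in wave 1, so the quick-pipeline chain EXPLORE -> SOLVE -> BUILD produces waves 1, 2, 3.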

---

## Core Rules

1. **Start Immediately**: First action is session initialization, then Phase 1
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged
3. **CSV is Source of Truth**: Master tasks.csv holds all state (both csv-wave and interactive)
4. **CSV First**: Default to csv-wave for tasks; only use interactive when the interaction pattern requires it
5. **Context Propagation**: prev_context built from master CSV, not from memory
6. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson -- both mechanisms share it
7. **Skip on Failure**: If a dependency failed, skip the dependent task (regardless of mechanism)
8. **Lifecycle Balance**: Every spawn_agent MUST have a matching close_agent
9. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged
10. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped

---

## Coordinator Role Constraints (Main Agent)

**CRITICAL**: The coordinator (main agent executing this skill) is responsible for **orchestration only**, NOT implementation.

15. **Coordinator Does NOT Execute Code**: The main agent MUST NOT write, modify, or implement any code directly. All implementation work is delegated to spawned team agents. The coordinator only:
    - Spawns agents with task assignments
    - Waits for agent callbacks
    - Merges results and coordinates workflow
    - Manages workflow transitions between phases

16. **Patient Waiting is Mandatory**: Agent execution takes significant time (typically 10-30 minutes per phase, sometimes longer). The coordinator MUST:
    - Wait patiently for `wait_agent()` calls to complete
    - NOT skip workflow steps due to perceived delays
    - NOT assume agents have failed just because they're taking time
    - Trust the timeout mechanisms defined in the skill

17. **Use send_input for Clarification**: When agents need guidance or appear stuck, the coordinator MUST:
    - Use `send_input()` to ask questions or provide clarification
    - NOT skip the agent or move to the next phase prematurely
    - Give agents the opportunity to respond before escalating
    - Example: `send_input({ id: agent_id, message: "Please provide status update or clarify blockers" })`

18. **No Workflow Shortcuts**: The coordinator MUST NOT:
    - Skip phases or stages defined in the workflow
    - Bypass required approval or review steps
    - Execute dependent tasks before prerequisites complete
    - Assume task completion without explicit agent callback
    - Make up or fabricate agent results

19. **Respect Long-Running Processes**: This is a complex multi-agent workflow that requires patience:
    - Total execution time may range from 30-90 minutes or longer
    - Each phase may take 10-30 minutes depending on complexity
    - The coordinator must remain active and attentive throughout the entire process
    - Do not terminate or skip steps due to time concerns

| Scenario | Resolution |
|----------|------------|
| Unknown command | Error with available command list |
| Role not found | Error with role registry |
| CLI tool fails | Worker fallback to direct implementation |
| Fast-advance conflict | Coordinator reconciles on next callback |
| Completion action fails | Default to Keep Active |
| Review rejection exceeds 2 rounds | Force convergence to integrator |
| No issues found for given IDs | Coordinator reports error to user |
| Deferred BUILD count unknown | Defer to MARSHAL callback |
@@ -1,204 +0,0 @@

# Reviewer Agent

Technical review agent for issue solutions. Performs multi-dimensional review with a scored verdict. Used as an interactive agent within the team-issue pipeline when review gates are required (full/batch modes).

## Identity

- **Type**: `interactive`
- **Responsibility**: Multi-dimensional solution review with verdict routing

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Read all solution artifacts and explorer context before reviewing
- Score across three weighted dimensions: Technical Feasibility (40%), Risk (30%), Completeness (30%)
- Produce structured output with per-issue and overall verdicts
- Include file:line references in findings
- Write audit report to session audits folder

### MUST NOT

- Skip the MANDATORY FIRST STEPS role loading
- Modify solution artifacts or code
- Produce unstructured output
- Review without reading explorer context (when available)
- Skip any scoring dimension

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | file | Load solution artifacts and context files |
| `Bash` | shell | Run `ccw issue solutions <id> --json` to load bound solutions |
| `Grep` | search | Search codebase for pattern conformance checks |
| `Glob` | search | Find relevant files for coverage validation |
| `Write` | file | Write audit report |

---

## Execution

### Phase 1: Context Loading

**Objective**: Load all inputs needed for review.

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Solution artifacts | Yes | `<session>/solutions/solution-<issueId>.json` |
| Explorer context | No | `<session>/explorations/context-<issueId>.json` |
| Bound solutions | Yes | `ccw issue solutions <issueId> --json` |
| Discoveries | No | `<session>/discoveries.ndjson` |
| Wisdom files | No | `<session>/wisdom/` |

**Steps**:

1. Read session folder path from spawn message
2. Extract issue IDs from spawn message
3. Load explorer context reports for each issue
4. Load bound solutions for each issue via CLI
5. Load discoveries for cross-reference

---

### Phase 2: Multi-Dimensional Review

**Objective**: Score each solution across three weighted dimensions.

**Technical Feasibility (40%)**:

| Criterion | Check | Score Impact |
|-----------|-------|-------------|
| File Coverage | Solution covers all affected files from explorer context | High |
| Dependency Awareness | Considers dependency cascade effects | Medium |
| API Compatibility | Maintains backward compatibility | High |
| Pattern Conformance | Follows existing code patterns | Medium |

**Risk Assessment (30%)**:

| Criterion | Check | Score Impact |
|-----------|-------|-------------|
| Scope Creep | Solution stays within issue boundary (task_count <= 10) | High |
| Breaking Changes | No destructive modifications | High |
| Side Effects | No unforeseen side effects | Medium |
| Rollback Path | Can roll back if issues occur | Low |

**Completeness (30%)**:

| Criterion | Check | Score Impact |
|-----------|-------|-------------|
| All Tasks Defined | Task decomposition is complete (count > 0) | High |
| Test Coverage | Includes test plan | Medium |
| Edge Cases | Considers boundary conditions | Low |

**Score Calculation**:

```
total_score = round(
  technical_feasibility.score * 0.4 +
  risk_assessment.score * 0.3 +
  completeness.score * 0.3
)
```

**Verdict Rules**:

| Score | Verdict | Description |
|-------|---------|-------------|
| >= 80 | approved | Solution is ready for implementation |
| 60-79 | concerns | Minor issues noted, proceed with warnings |
| < 60 | rejected | Solution needs revision before proceeding |
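The score calculation and verdict rules combine as one small function; a minimal sketch (the sample inputs below reproduce the example audit report further down: 90/80/82 rounds to 85, approved):

```javascript
// Weighted total per the Score Calculation block, then verdict per the table.
function computeVerdict(dims) {
  const total = Math.round(
    dims.technical_feasibility.score * 0.4 +
    dims.risk_assessment.score * 0.3 +
    dims.completeness.score * 0.3
  )
  const verdict = total >= 80 ? 'approved' : total >= 60 ? 'concerns' : 'rejected'
  return { total_score: total, verdict }
}
```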

---

### Phase 3: Compile Audit Report

**Objective**: Write structured audit report.

**Steps**:

1. Compute per-issue scores and verdicts
2. Compute overall verdict (any rejected -> overall rejected)
3. Write audit report to `<session>/audits/audit-report.json`:

```json
{
  "session_id": "<session-id>",
  "review_timestamp": "<ISO8601>",
  "issues_reviewed": [
    {
      "issue_id": "<issueId>",
      "solution_id": "<solutionId>",
      "total_score": 85,
      "verdict": "approved",
      "technical_feasibility": {
        "score": 90,
        "findings": ["Good file coverage", "API compatible"]
      },
      "risk_assessment": {
        "score": 80,
        "findings": ["No breaking changes", "Rollback via git revert"]
      },
      "completeness": {
        "score": 82,
        "findings": ["5 tasks defined", "Test plan included"]
      }
    }
  ],
  "overall_verdict": "approved",
  "overall_score": 85,
  "review_count": 1,
  "rejection_reasons": [],
  "actionable_feedback": []
}
```

4. For rejected solutions: include specific rejection reasons and actionable feedback for the SOLVE-fix task

---

## Structured Output Template

```
## Summary
- Review of <N> solutions: <verdict>

## Findings
- Finding 1: specific description with file:line reference
- Finding 2: specific description with file:line reference

## Per-Issue Verdicts
- <issueId>: <score>/100 (<verdict>)
  - Technical: <score>/100
  - Risk: <score>/100
  - Completeness: <score>/100

## Overall Verdict
<approved|concerns|rejected> (score: <N>/100)

## Rejection Feedback (if rejected)
1. Specific concern with remediation suggestion
2. Specific concern with remediation suggestion

## Open Questions
1. Question needing clarification (if any)
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Solution file not found | Report in Open Questions, score as 0 for completeness |
| Explorer context missing | Proceed with reduced confidence, note in findings |
| Bound solution not found via CLI | Attempt file-based fallback, report if still missing |
| Processing failure | Output partial results with clear status indicator |
| Timeout approaching | Output current findings with "PARTIAL" status |

@@ -1,198 +0,0 @@

# Agent Instruction -- Team Issue Resolution

CSV agent instruction template for `spawn_agents_on_csv`. Each agent receives this template with its row's column values substituted via `{column_name}` placeholders.

---

## TASK ASSIGNMENT

### MANDATORY FIRST STEPS

1. Read shared discoveries: `.workflow/.csv-wave/{session_id}/discoveries.ndjson` (if exists, skip if not)
2. Read project context: `.workflow/project-tech.json` (if exists)
3. Read wisdom files: `.workflow/.csv-wave/{session_id}/wisdom/` (conventions, learnings)

---

## Your Task

**Task ID**: {id}
**Title**: {title}
**Description**: {description}
**Role**: {role}
**Issue IDs**: {issue_ids}
**Execution Method**: {execution_method}

### Previous Tasks' Findings (Context)

{prev_context}

---

## Execution Protocol

### Role Router

Determine your execution steps based on `{role}`:

| Role | Execution Steps |
|------|----------------|
| explorer | Step A: Codebase Exploration |
| planner | Step B: Solution Design |
| integrator | Step C: Queue Formation |
| implementer | Step D: Implementation |

---

### Step A: Codebase Exploration (explorer role)

1. Extract issue ID from `{issue_ids}` (pattern: `GH-\d+` or `ISS-\d{8}-\d{6}`)
2. Load issue details: `Bash("ccw issue status <issueId> --json")`
3. Assess complexity from issue keywords:

| Signal | Weight |
|--------|--------|
| Structural change (refactor, architect) | +2 |
| Cross-cutting (multiple, across) | +2 |
| Integration (api, database) | +1 |
| High priority (>= 4) | +1 |

4. Explore codebase:
   - Use `mcp__ace-tool__search_context` for semantic search based on issue keywords
   - Read relevant files to understand context
   - Map dependencies and integration points
   - Check git log for related changes

5. Write context report:
```javascript
// Write to session explorations folder
Write("<session>/explorations/context-<issueId>.json", JSON.stringify({
  issue_id: "<issueId>",
  issue: { id, title, priority, status, labels, feedback },
  relevant_files: [{ path, relevance }],
  dependencies: [],
  impact_scope: "low|medium|high",
  existing_patterns: [],
  related_changes: [],
  key_findings: [],
  complexity_assessment: "Low|Medium|High"
}))
```

6. Share discoveries to board

---

### Step B: Solution Design (planner role)

1. Extract issue ID from `{issue_ids}`
2. Load explorer context (if available): Read upstream artifact from prev_context
3. Check if this is a revision task (SOLVE-fix-*): If yes, read audit report for rejection feedback
4. Generate solution via CLI:
```bash
ccw cli -p "PURPOSE: Design solution for issue <issueId> and decompose into implementation tasks; success = solution with task breakdown
TASK: * Load issue details * Analyze explorer context * Design solution approach * Break into tasks * Generate solution JSON
MODE: analysis
CONTEXT: @**/* | Memory: Issue <issueId>, Explorer findings from prev_context
EXPECTED: Solution JSON with: issue_id, solution_id, approach, tasks[], estimated_files, dependencies
CONSTRAINTS: Follow existing patterns | Minimal changes
" --tool gemini --mode analysis --rule planning-breakdown-task-steps
```
5. Write solution artifact:
```javascript
Write("<session>/solutions/solution-<issueId>.json", solutionJson)
```
6. Bind solution to issue: `Bash("ccw issue bind <issueId> <solutionId>")`

---

### Step C: Queue Formation (integrator role)

1. Extract issue IDs from `{issue_ids}`
2. Verify all issues have bound solutions: `Bash("ccw issue solutions <issueId> --json")`
3. Analyze file conflicts between solutions
4. Build dependency graph for execution ordering
5. Determine parallel execution groups
6. Write execution queue:
```javascript
Write("<session>/queue/execution-queue.json", JSON.stringify({
  queue: [{ issue_id, solution_id, order, depends_on: [], estimated_files: [] }],
  conflicts: [{ issues: [], files: [], resolution: "" }],
  parallel_groups: [{ group: 0, issues: [] }]
}))
```
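Steps 3-5 can be sketched as a greedy grouping in which no two issues in the same parallel group touch the same file. This is one plausible policy, not a mandated algorithm; `formParallelGroups` and its input shape are illustrative:

```javascript
// Place each solution into the first parallel group whose issues share no
// files with it; open a new group when every existing group conflicts.
function formParallelGroups(solutions) {
  const groups = [] // each: { group, issues, files }
  for (const sol of solutions) {
    let target = groups.find(g => sol.estimated_files.every(f => !g.files.has(f)))
    if (!target) {
      target = { group: groups.length, issues: [], files: new Set() }
      groups.push(target)
    }
    target.issues.push(sol.issue_id)
    for (const f of sol.estimated_files) target.files.add(f)
  }
  return groups.map(({ group, issues }) => ({ group, issues }))
}
```

Issues in the same group can then be implemented in parallel, while groups themselves run in order.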

---

### Step D: Implementation (implementer role)

1. Extract issue ID from `{issue_ids}`
2. Load bound solution: `Bash("ccw issue solutions <issueId> --json")`
3. Load explorer context (from prev_context or file)
4. Determine execution backend from `{execution_method}`:

| Method | CLI Command |
|--------|-------------|
| codex | `ccw cli --tool codex --mode write --id issue-<issueId>` |
| gemini | `ccw cli --tool gemini --mode write --id issue-<issueId>` |
| qwen | `ccw cli --tool qwen --mode write --id issue-<issueId>` |

5. Execute implementation:
```bash
ccw cli -p "PURPOSE: Implement solution for issue <issueId>; success = all tasks completed, tests pass
TASK: <solution.tasks as bullet points>
MODE: write
CONTEXT: @**/* | Memory: Solution plan, explorer context
EXPECTED: Working implementation with code changes, test updates
CONSTRAINTS: Follow existing patterns | Maintain backward compatibility
" --tool <execution_method> --mode write --rule development-implement-feature
```

6. Verify: Run tests, check for errors
7. Update issue status: `Bash("ccw issue update <issueId> --status resolved")`

---

## Share Discoveries (ALL ROLES)

After completing your work, append findings to the shared discovery board:

```bash
echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> <session>/discoveries.ndjson
```

**Discovery Types to Share**:

| Type | Data Schema | When to Use |
|------|-------------|-------------|
| `file_found` | `{path, relevance, purpose}` | Explorer: relevant file discovered |
| `pattern_found` | `{pattern, location, description}` | Explorer: code pattern identified |
| `dependency_found` | `{from, to, type}` | Explorer: module dependency found |
| `solution_approach` | `{issue_id, approach, estimated_files}` | Planner: solution strategy |
| `conflict_found` | `{issues, files, resolution}` | Integrator: file conflict |
| `impl_result` | `{issue_id, files_changed, tests_pass}` | Implementer: build outcome |

---

## Output (report_agent_job_result)

Return JSON:

```json
{
  "id": "{id}",
  "status": "completed | failed",
  "findings": "Key discoveries and implementation notes (max 500 chars)",
  "artifact_path": "relative path to main artifact file (e.g., explorations/context-ISS-xxx.json)",
  "error": ""
}
```

---

## Quality Checklist

Before reporting complete:

- [ ] Mandatory first steps completed (discoveries, project context, wisdom)
- [ ] Role-specific execution steps followed
- [ ] At least 1 discovery shared to board
- [ ] Artifact file written to session folder
- [ ] Findings include file:line references where applicable
- [ ] prev_context findings were incorporated
@@ -0,0 +1,64 @@

# Analyze Task

Parse user issue description -> detect required capabilities -> assess complexity -> select pipeline mode.

**CONSTRAINT**: Text-level analysis only. NO source code reading, NO codebase exploration.

## Signal Detection

| Keywords | Capability | Prefix |
|----------|------------|--------|
| explore, analyze, context, impact, understand | explorer | EXPLORE |
| plan, solve, design, solution, approach | planner | SOLVE |
| review, audit, validate, feasibility | reviewer | AUDIT |
| marshal, integrate, queue, conflict, order | integrator | MARSHAL |
| build, implement, execute, code, develop | implementer | BUILD |
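The table above can be sketched as a simple keyword scan. `SIGNALS` and `detectCapabilities` are illustrative names, and matching is plain case-insensitive substring search, which is an assumption:

```javascript
// Keyword-to-capability mapping mirroring the Signal Detection table.
const SIGNALS = [
  { keywords: ['explore', 'analyze', 'context', 'impact', 'understand'], capability: 'explorer', prefix: 'EXPLORE' },
  { keywords: ['plan', 'solve', 'design', 'solution', 'approach'], capability: 'planner', prefix: 'SOLVE' },
  { keywords: ['review', 'audit', 'validate', 'feasibility'], capability: 'reviewer', prefix: 'AUDIT' },
  { keywords: ['marshal', 'integrate', 'queue', 'conflict', 'order'], capability: 'integrator', prefix: 'MARSHAL' },
  { keywords: ['build', 'implement', 'execute', 'code', 'develop'], capability: 'implementer', prefix: 'BUILD' }
]

function detectCapabilities(description) {
  const text = description.toLowerCase()
  return SIGNALS
    .filter(s => s.keywords.some(k => text.includes(k)))
    .map(({ capability, prefix }) => ({ capability, prefix }))
}
```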

## Dependency Graph

Natural ordering tiers:

- Tier 0: explorer (context analysis -- no dependencies)
- Tier 1: planner (requires explorer output)
- Tier 2: reviewer (requires planner output, full/batch modes only)
- Tier 3: integrator (requires reviewer or planner output)
- Tier 4: implementer (requires integrator output)
## Complexity Scoring

| Factor | Points |
|--------|--------|
| Issue count > 2 | +3 |
| Issue count > 4 | +2 more |
| Any high-priority issue (priority >= 4) | +2 |
| Multiple issue types / cross-cutting | +2 |
| Simple / single issue | -2 |

Results:

- 0-2: Low -> quick (4 tasks: EXPLORE -> SOLVE -> MARSHAL -> BUILD)
- 3-4: Medium -> full (5 tasks: EXPLORE -> SOLVE -> AUDIT -> MARSHAL -> BUILD)
- 5+: High -> batch (N+N+1+1+M tasks, parallel exploration and implementation)
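The scoring table and thresholds combine roughly as below. Two details are assumptions not stated in the table: "Simple / single issue" is read as exactly one issue, and negative totals are clamped to 0 so they fall in the Low band:

```javascript
// Score per the factor table, then map to level and pipeline per the thresholds.
function assessComplexity(issueCount, priorities, crossCutting) {
  let score = 0
  if (issueCount > 2) score += 3
  if (issueCount > 4) score += 2        // "+2 more" on top of the first factor
  if (priorities.some(p => p >= 4)) score += 2
  if (crossCutting) score += 2
  if (issueCount === 1) score -= 2      // assumed reading of "simple / single issue"
  score = Math.max(0, score)            // assumed clamp; bands start at 0
  const level = score <= 2 ? 'Low' : score <= 4 ? 'Medium' : 'High'
  const pipeline = { Low: 'quick', Medium: 'full', High: 'batch' }[level]
  return { score, level, pipeline }
}
```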

## Pipeline Selection

| Complexity | Pipeline | Tasks |
|------------|----------|-------|
| Low | quick | EXPLORE -> SOLVE -> MARSHAL -> BUILD |
| Medium | full | EXPLORE -> SOLVE -> AUDIT -> MARSHAL -> BUILD |
| High | batch | EXPLORE-001..N (parallel) -> SOLVE-001..N -> AUDIT -> MARSHAL -> BUILD-001..M (parallel) |

## Output

Write <session>/task-analysis.json:

```json
{
  "task_description": "<original>",
  "pipeline_type": "<quick|full|batch>",
  "issue_ids": ["<id1>", "<id2>"],
  "capabilities": [{ "name": "<cap>", "prefix": "<PREFIX>", "keywords": ["..."] }],
  "dependency_graph": { "<TASK-ID>": { "role": "<role>", "blockedBy": ["..."], "priority": "P0|P1|P2" } },
  "roles": [{ "name": "<role>", "prefix": "<PREFIX>", "inner_loop": false }],
  "complexity": { "score": 0, "level": "Low|Medium|High" },
  "parallel_explorers": 1,
  "parallel_builders": 1
}
```
273
.codex/skills/team-issue/roles/coordinator/commands/dispatch.md
Normal file
@@ -0,0 +1,273 @@
|
||||
# Dispatch
|
||||
|
||||
Create the issue resolution task chain with correct dependencies and structured task descriptions based on selected pipeline mode.
|
||||
|
||||
## Context Loading
|
||||
|
||||
| Input | Source | Required |
|
||||
|-------|--------|----------|
|
||||
| Requirement | From coordinator Phase 1 | Yes |
|
||||
| Session folder | From coordinator Phase 2 | Yes |
|
||||
| Pipeline mode | From session.json mode | Yes |
|
||||
| Issue IDs | From session.json issue_ids | Yes |
|
||||
| Execution method | From session.json execution_method | Yes |
|
||||
| Code review | From session.json code_review | No |
|
||||
|
||||
1. Load requirement, pipeline mode, issue IDs, and execution method from session.json
|
||||
2. Determine task chain from pipeline mode
|
||||
|
||||
## Task Description Template
|
||||
|
||||
Every task is built as a JSON entry in the tasks array:
|
||||
|
||||
```json
|
||||
{
|
||||
"id": "<TASK-ID>",
|
||||
"title": "<TASK-ID>",
|
||||
"description": "PURPOSE: <what this task achieves> | Success: <completion criteria>\nTASK:\n - <step 1>\n - <step 2>\n - <step 3>\nCONTEXT:\n - Session: <session-folder>\n - Issue IDs: <issue-id-list>\n - Upstream artifacts: <artifact-list>\nEXPECTED: <deliverable path> + <quality criteria>\nCONSTRAINTS: <scope limits>\n---\nInnerLoop: false\nexecution_method: <method>\ncode_review: <setting>",
|
||||
"status": "pending",
|
||||
"role": "<role>",
|
||||
"prefix": "<PREFIX>",
|
||||
"deps": ["<dependency-list>"],
|
||||
"findings": "",
|
||||
"error": ""
|
||||
}
|
||||
```
|
||||
|
||||
## Pipeline Router
|
||||
|
||||
| Mode | Action |
|
||||
|------|--------|
|
||||
| quick | Create 4 tasks (EXPLORE -> SOLVE -> MARSHAL -> BUILD) |
|
||||
| full | Create 5 tasks (EXPLORE -> SOLVE -> AUDIT -> MARSHAL -> BUILD) |
|
||||
| batch | Create N+N+1+1+M tasks (EXPLORE-001..N -> SOLVE-001..N -> AUDIT-001 -> MARSHAL-001 -> BUILD-001..M) |
|
||||
|
||||
---

### Quick Pipeline

Build tasks array and write to tasks.json:

**EXPLORE-001** (explorer):
```json
{
  "id": "EXPLORE-001",
  "title": "EXPLORE-001",
  "description": "PURPOSE: Analyze issue context and map codebase impact | Success: Context report with relevant files and dependencies\nTASK:\n - Load issue details via ccw issue status\n - Explore codebase for relevant files and patterns\n - Assess complexity and impact scope\nCONTEXT:\n - Session: <session-folder>\n - Issue IDs: <issue-id-list>\nEXPECTED: <session>/explorations/context-<issueId>.json with relevant files, dependencies, and impact assessment\nCONSTRAINTS: Exploration and analysis only, no solution design\n---\nInnerLoop: false",
  "status": "pending",
  "role": "explorer",
  "prefix": "EXPLORE",
  "deps": [],
  "findings": "",
  "error": ""
}
```

**SOLVE-001** (planner):
```json
{
  "id": "SOLVE-001",
  "title": "SOLVE-001",
  "description": "PURPOSE: Design solution and decompose into implementation tasks | Success: Bound solution with task decomposition\nTASK:\n - Load explorer context report\n - Generate solution plan via CLI\n - Bind solution to issue\nCONTEXT:\n - Session: <session-folder>\n - Issue IDs: <issue-id-list>\n - Upstream artifacts: explorations/context-<issueId>.json\nEXPECTED: <session>/solutions/solution-<issueId>.json with solution plan and task list\nCONSTRAINTS: Solution design only, no code implementation\n---\nInnerLoop: false",
  "status": "pending",
  "role": "planner",
  "prefix": "SOLVE",
  "deps": ["EXPLORE-001"],
  "findings": "",
  "error": ""
}
```

**MARSHAL-001** (integrator):
```json
{
  "id": "MARSHAL-001",
  "title": "MARSHAL-001",
  "description": "PURPOSE: Form execution queue with conflict detection and ordering | Success: Execution queue file with resolved conflicts\nTASK:\n - Verify all issues have bound solutions\n - Detect file conflicts between solutions\n - Produce ordered execution queue with DAG-based parallel groups\nCONTEXT:\n - Session: <session-folder>\n - Issue IDs: <issue-id-list>\n - Upstream artifacts: solutions/solution-<issueId>.json\nEXPECTED: .workflow/issues/queue/execution-queue.json with queue, conflicts, parallel groups\nCONSTRAINTS: Queue formation only, no implementation\n---\nInnerLoop: false",
  "status": "pending",
  "role": "integrator",
  "prefix": "MARSHAL",
  "deps": ["SOLVE-001"],
  "findings": "",
  "error": ""
}
```

**BUILD-001** (implementer):
```json
{
  "id": "BUILD-001",
  "title": "BUILD-001",
  "description": "PURPOSE: Implement solution plan and verify with tests | Success: Code changes committed, tests pass\nTASK:\n - Load bound solution and explorer context\n - Route to execution backend (Auto/Codex/Gemini)\n - Run tests and verify implementation\n - Commit changes\nCONTEXT:\n - Session: <session-folder>\n - Issue IDs: <issue-id-list>\n - Upstream artifacts: explorations/context-<issueId>.json, solutions/solution-<issueId>.json, queue/execution-queue.json\nEXPECTED: <session>/builds/ with implementation results, tests passing\nCONSTRAINTS: Follow solution plan, no scope creep\n---\nInnerLoop: false\nexecution_method: <execution_method>\ncode_review: <code_review>",
  "status": "pending",
  "role": "implementer",
  "prefix": "BUILD",
  "deps": ["MARSHAL-001"],
  "findings": "",
  "error": ""
}
```

---

### Full Pipeline

Creates 5 tasks. EXPLORE-001 and SOLVE-001 are the same as Quick, then an AUDIT gate runs before MARSHAL and BUILD.

**AUDIT-001** (reviewer):
```json
{
  "id": "AUDIT-001",
  "title": "AUDIT-001",
  "description": "PURPOSE: Review solution for technical feasibility, risk, and completeness | Success: Clear verdict (approved/concerns/rejected) with scores\nTASK:\n - Load explorer context and bound solution\n - Score across 3 dimensions: technical feasibility (40%), risk (30%), completeness (30%)\n - Produce verdict: approved (>=80), concerns (60-79), rejected (<60)\nCONTEXT:\n - Session: <session-folder>\n - Issue IDs: <issue-id-list>\n - Upstream artifacts: explorations/context-<issueId>.json, solutions/solution-<issueId>.json\nEXPECTED: <session>/audits/audit-report.json with per-issue scores and overall verdict\nCONSTRAINTS: Review only, do not modify solutions\n---\nInnerLoop: false",
  "status": "pending",
  "role": "reviewer",
  "prefix": "AUDIT",
  "deps": ["SOLVE-001"],
  "findings": "",
  "error": ""
}
```

**MARSHAL-001**: Same as Quick, but `deps: ["AUDIT-001"]`.

**BUILD-001**: Same as Quick, `deps: ["MARSHAL-001"]`.

---

### Batch Pipeline

Creates tasks in parallel batches. Issue count = N, BUILD tasks = M (from queue parallel groups).

**EXPLORE-001..N** (explorer, parallel):

For each issue in issue_ids (up to 5), create an EXPLORE task with a distinct role:

| Issue Count | Role Assignment |
|-------------|-----------------|
| N = 1 | role: "explorer" |
| N > 1 | role: "explorer-1", "explorer-2", ..., "explorer-N" (max 5) |
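The role assignment above can be sketched as a small helper (`explorerRoles` is hypothetical, introduced here for illustration only):

```javascript
// Sketch of batch-mode explorer role assignment:
// N = 1 -> "explorer"; N > 1 -> "explorer-1".."explorer-N", capped at 5.
function explorerRoles(issueCount) {
  const n = Math.min(issueCount, 5);
  if (n <= 1) return ["explorer"];
  return Array.from({ length: n }, (_, i) => `explorer-${i + 1}`);
}
```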
```json
{
  "id": "EXPLORE-<NNN>",
  "title": "EXPLORE-<NNN>",
  "description": "PURPOSE: Analyze issue <issueId> context and map codebase impact | Success: Context report for <issueId>\nTASK:\n - Load issue details for <issueId>\n - Explore codebase for relevant files\n - Assess complexity and impact scope\nCONTEXT:\n - Session: <session-folder>\n - Issue IDs: <issueId>\nEXPECTED: <session>/explorations/context-<issueId>.json\nCONSTRAINTS: Single issue scope, exploration only\n---\nInnerLoop: false",
  "status": "pending",
  "role": "explorer-<N>",
  "prefix": "EXPLORE",
  "deps": [],
  "findings": "",
  "error": ""
}
```

**SOLVE-001..N** (planner, sequential after all EXPLORE):

```json
{
  "id": "SOLVE-<NNN>",
  "title": "SOLVE-<NNN>",
  "description": "PURPOSE: Design solution for <issueId> | Success: Bound solution with tasks\nTASK:\n - Load explorer context for <issueId>\n - Generate solution plan\n - Bind solution\nCONTEXT:\n - Session: <session-folder>\n - Issue IDs: <issueId>\n - Upstream artifacts: explorations/context-<issueId>.json\nEXPECTED: <session>/solutions/solution-<issueId>.json\nCONSTRAINTS: Solution design only\n---\nInnerLoop: false",
  "status": "pending",
  "role": "planner",
  "prefix": "SOLVE",
  "deps": ["EXPLORE-001", "...", "EXPLORE-<N>"],
  "findings": "",
  "error": ""
}
```

**AUDIT-001** (reviewer, batch review):
```json
{
  "id": "AUDIT-001",
  "title": "AUDIT-001",
  "description": "PURPOSE: Batch review all solutions | Success: Verdict for each solution\nTASK:\n - Load all explorer contexts and bound solutions\n - Score each solution across 3 dimensions\n - Produce per-issue verdicts and overall verdict\nCONTEXT:\n - Session: <session-folder>\n - Issue IDs: <all-issue-ids>\n - Upstream artifacts: explorations/*.json, solutions/*.json\nEXPECTED: <session>/audits/audit-report.json with batch results\nCONSTRAINTS: Review only\n---\nInnerLoop: false",
  "status": "pending",
  "role": "reviewer",
  "prefix": "AUDIT",
  "deps": ["SOLVE-001", "...", "SOLVE-<N>"],
  "findings": "",
  "error": ""
}
```

**MARSHAL-001** (integrator): `deps: ["AUDIT-001"]`.

**BUILD-001..M** (implementer, DAG parallel):

> Note: In Batch mode, BUILD task count M is not known at dispatch time (it depends on MARSHAL's queue output). Defer BUILD task creation to handleCallback when MARSHAL completes. The coordinator creates BUILD tasks dynamically after reading execution-queue.json.

When M is known (deferred creation after MARSHAL), assign distinct roles:

| Build Count | Role Assignment |
|-------------|-----------------|
| M <= 2 | role: "implementer" |
| M > 2 | role: "implementer-1", ..., "implementer-M" (max 3) |

```json
{
  "id": "BUILD-<NNN>",
  "title": "BUILD-<NNN>",
  "description": "PURPOSE: Implement solution for <issueId> | Success: Code committed, tests pass\nTASK:\n - Load bound solution and explorer context\n - Execute implementation via <execution_method>\n - Run tests, commit\nCONTEXT:\n - Session: <session-folder>\n - Issue IDs: <issueId>\n - Upstream artifacts: explorations/context-<issueId>.json, solutions/solution-<issueId>.json, queue/execution-queue.json\nEXPECTED: <session>/builds/ with results\nCONSTRAINTS: Follow solution plan\n---\nInnerLoop: false\nexecution_method: <execution_method>\ncode_review: <code_review>",
  "status": "pending",
  "role": "implementer-<M>",
  "prefix": "BUILD",
  "deps": ["MARSHAL-001"],
  "findings": "",
  "error": ""
}
```

---

### Review-Fix Cycle (Full/Batch modes)

When AUDIT rejects a solution, the coordinator creates fix tasks dynamically in handleCallback -- NOT at dispatch time.

**SOLVE-fix-001** (planner, revision) -- added to tasks.json dynamically:
```json
{
  "id": "SOLVE-fix-001",
  "title": "SOLVE-fix-001",
  "description": "PURPOSE: Revise solution addressing reviewer feedback (fix cycle <round>) | Success: Revised solution addressing rejection reasons\nTASK:\n - Read reviewer feedback from audit report\n - Design alternative approach addressing concerns\n - Re-bind revised solution\nCONTEXT:\n - Session: <session-folder>\n - Issue IDs: <rejected-issue-ids>\n - Upstream artifacts: audits/audit-report.json\n - Reviewer feedback: <rejection-reasons>\nEXPECTED: <session>/solutions/solution-<issueId>.json (revised)\nCONSTRAINTS: Address reviewer concerns specifically\n---\nInnerLoop: false",
  "status": "pending",
  "role": "planner",
  "prefix": "SOLVE",
  "deps": ["AUDIT-001"],
  "findings": "",
  "error": ""
}
```

**AUDIT-002** (reviewer, re-review) -- added to tasks.json dynamically:
```json
{
  "id": "AUDIT-002",
  "title": "AUDIT-002",
  "description": "PURPOSE: Re-review revised solution (fix cycle <round>) | Success: Verdict on revised solution\nTASK:\n - Load revised solution\n - Re-evaluate previously rejected dimensions\n - Produce updated verdict\nCONTEXT:\n - Session: <session-folder>\n - Issue IDs: <rejected-issue-ids>\n - Upstream artifacts: solutions/solution-<issueId>.json (revised), audits/audit-report.json\nEXPECTED: <session>/audits/audit-report.json (updated)\nCONSTRAINTS: Focus on previously rejected dimensions\n---\nInnerLoop: false",
  "status": "pending",
  "role": "reviewer",
  "prefix": "AUDIT",
  "deps": ["SOLVE-fix-001"],
  "findings": "",
  "error": ""
}
```

## Validation

1. Verify all tasks created by reading tasks.json
2. Check dependency chain integrity:
   - No circular dependencies
   - All deps references exist
   - First task(s) have empty deps (EXPLORE tasks)
3. Log task count and pipeline mode
4. Verify mode-specific constraints:

| Mode | Constraint |
|------|-----------|
| quick | Exactly 4 tasks, no AUDIT |
| full | Exactly 5 tasks, includes AUDIT |
| batch | N EXPLORE + N SOLVE + 1 AUDIT + 1 MARSHAL + deferred BUILD |
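The dependency-chain checks in step 2 can be sketched as follows. `validateTasks` is a hypothetical helper, assuming each task carries the `id` and `deps` fields from the schema above:

```javascript
// Sketch of the Validation step: all deps exist, no cycles.
// Roots (empty deps) pass trivially; a back edge during DFS means a cycle.
function validateTasks(tasks) {
  const byId = new Map(tasks.map((t) => [t.id, t]));
  for (const t of tasks) {
    for (const d of t.deps) {
      if (!byId.has(d)) return { ok: false, error: `missing dep ${d} for ${t.id}` };
    }
  }
  const state = new Map(); // undefined = unvisited, 1 = on stack, 2 = done
  const visit = (id) => {
    if (state.get(id) === 2) return true;
    if (state.get(id) === 1) return false; // back edge -> cycle
    state.set(id, 1);
    for (const d of byId.get(id).deps) if (!visit(d)) return false;
    state.set(id, 2);
    return true;
  };
  for (const t of tasks) if (!visit(t.id)) return { ok: false, error: "circular dependency" };
  return { ok: true };
}
```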
.codex/skills/team-issue/roles/coordinator/commands/monitor.md | 194 lines (new file)
@@ -0,0 +1,194 @@
# Monitor Pipeline

Event-driven pipeline coordination. Beat model: coordinator wake -> process -> spawn -> STOP.

## Constants

- SPAWN_MODE: spawn_agent
- ONE_STEP_PER_INVOCATION: true
- FAST_ADVANCE_AWARE: true
- WORKER_AGENT: team_worker
- MAX_FIX_CYCLES: 2

## Handler Router

| Source | Handler |
|--------|---------|
| Message contains [explorer], [planner], [reviewer], [integrator], [implementer] | handleCallback |
| "consensus_blocked" | handleConsensus |
| "capability_gap" | handleAdapt |
| "check" or "status" | handleCheck |
| "resume" or "continue" | handleResume |
| All tasks completed | handleComplete |
| Default | handleSpawnNext |

## handleCallback

Worker completed. Process and advance.

1. Parse message to identify role and task ID:

| Message Pattern | Role Detection |
|----------------|---------------|
| `[explorer]` or task ID `EXPLORE-*` | explorer |
| `[planner]` or task ID `SOLVE-*` | planner |
| `[reviewer]` or task ID `AUDIT-*` | reviewer |
| `[integrator]` or task ID `MARSHAL-*` | integrator |
| `[implementer]` or task ID `BUILD-*` | implementer |

2. Mark task as completed: read tasks.json, update the matching entry's status to "completed", write back
3. Record completion in session state

4. **Review gate check** (when reviewer completes):
   - If completed task is AUDIT-* AND pipeline is full or batch:
     - Read audit report from `<session>/audits/audit-report.json`
     - Read .msg/meta.json for fix_cycles

| Verdict | fix_cycles < max | Action |
|---------|-----------------|--------|
| rejected | Yes | Increment fix_cycles, create SOLVE-fix + AUDIT re-review tasks (add to tasks.json per dispatch.md Review-Fix Cycle), proceed to handleSpawnNext |
| rejected | No (>= max) | Force proceed -- log warning, unblock MARSHAL |
| concerns | - | Log concerns, proceed to MARSHAL (non-blocking) |
| approved | - | Proceed to MARSHAL via handleSpawnNext |

   - Log team_msg with type "review_result" or "fix_required"
   - If force proceeding past rejection, mark skipped fix tasks as completed (skip)

5. **Deferred BUILD task creation** (when integrator completes):
   - If completed task is MARSHAL-* AND pipeline is batch:
     - Read execution queue from `.workflow/issues/queue/execution-queue.json`
     - Parse parallel_groups to determine BUILD task count M
     - Create BUILD-001..M tasks dynamically (add to tasks.json per dispatch.md Batch Pipeline BUILD section)
     - Proceed to handleSpawnNext

6. Close completed agent: `close_agent({ id: <agentId> })`
7. Proceed to handleSpawnNext
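The review-gate verdict routing in step 4 can be sketched as a pure function (`reviewGate` is a hypothetical helper name):

```javascript
// Sketch of the review gate: decide the next action from the audit
// verdict and the remaining fix-cycle budget (MAX_FIX_CYCLES = 2).
function reviewGate(verdict, fixCycles, maxFixCycles = 2) {
  if (verdict === "rejected") {
    return fixCycles < maxFixCycles
      ? { action: "create_fix_tasks", fixCycles: fixCycles + 1 }
      : { action: "force_proceed", fixCycles }; // log warning, unblock MARSHAL
  }
  // "concerns" is non-blocking; "approved" proceeds directly
  return { action: "proceed_to_marshal", fixCycles };
}
```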
## handleCheck

Read-only status report, then STOP.

```
[coordinator] Pipeline Status (<pipeline-mode>)
[coordinator] Progress: <done>/<total> (<pct>%)
[coordinator] Active: <workers with elapsed time>
[coordinator] Ready: <pending tasks with resolved deps>
[coordinator] Fix Cycles: <fix_cycles>/<max_fix_cycles>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```

## handleResume

1. Audit task list: Tasks stuck in "in_progress" -> reset to "pending"
2. Proceed to handleSpawnNext

## handleSpawnNext

Find ready tasks, spawn workers, STOP.

1. Collect: completedSubjects, inProgressSubjects, readySubjects
2. No ready + work in progress -> report waiting, STOP
3. No ready + nothing in progress -> handleComplete
4. Has ready -> for each:
   a. Update tasks.json entry status -> "in_progress"
   b. team_msg log -> task_unblocked
   c. Spawn team_worker:

   ```
   const agentId = spawn_agent({
     agent_type: "team_worker",
     items: [{ type: "text", text: `## Role Assignment
   role: <role>
   role_spec: ~ or <project>/.codex/skills/team-issue/roles/<role>/role.md
   session: <session-folder>
   session_id: <session-id>
   team_name: issue
   requirement: <task-description>
   inner_loop: false

   Read role_spec file to load Phase 2-4 domain instructions.
   Execute built-in Phase 1 (task discovery) -> role Phase 2-4 -> built-in Phase 5 (report).` }]
   })
   ```

   d. Collect results: `wait_agent({ ids: [agentId], timeout_ms: 900000 })`
   e. Read discoveries from output files
   f. Update tasks.json with results
   g. Close agent: `close_agent({ id: agentId })`

5. Parallel spawn rules:

| Pipeline | Scenario | Spawn Behavior |
|----------|----------|---------------|
| Quick | All stages | One worker at a time |
| Full | All stages | One worker at a time |
| Batch | EXPLORE-001..N unblocked | Spawn ALL N explorer workers in parallel (max 5) |
| Batch | BUILD-001..M unblocked | Spawn ALL M implementer workers in parallel (max 3) |
| Batch | Other stages | One worker at a time |

**Parallel spawn** (Batch mode with multiple ready tasks for the same role):
```
const agentIds = []
for (const task of readyTasks) {
  agentIds.push(spawn_agent({
    agent_type: "team_worker",
    items: [{ type: "text", text: `## Role Assignment
role: <role>
role_spec: ~ or <project>/.codex/skills/team-issue/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
team_name: issue
requirement: <task-description>
agent_name: <role>-<N>
inner_loop: false

Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery, owner=<role>-<N>) -> role Phase 2-4 -> built-in Phase 5 (report).` }]
  }))
}
const results = wait_agent({ ids: agentIds, timeout_ms: 900000 })
// Process results, close agents
for (const id of agentIds) { close_agent({ id }) }
```

6. Update session, output summary, STOP
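The ready-task selection plus the parallel spawn caps above can be sketched as one helper (`findReadyTasks` is hypothetical; it assumes task entries carry the `status`, `deps`, and `prefix` fields from dispatch.md):

```javascript
// Sketch of handleSpawnNext's selection: a task is ready when pending
// and every dep is completed. Batch-mode caps: 5 for EXPLORE, 3 for
// BUILD; every other case spawns one worker at a time.
function findReadyTasks(tasks, pipeline) {
  const done = new Set(tasks.filter((t) => t.status === "completed").map((t) => t.id));
  const ready = tasks.filter(
    (t) => t.status === "pending" && t.deps.every((d) => done.has(d))
  );
  if (pipeline !== "batch") return ready.slice(0, 1);
  const caps = { EXPLORE: 5, BUILD: 3 };
  const prefix = ready[0]?.prefix;
  const sameStage = ready.filter((t) => t.prefix === prefix);
  return sameStage.slice(0, caps[prefix] ?? 1);
}
```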
## handleComplete

Pipeline done. Generate report and completion action.

Completion check by mode:

| Mode | Completion Condition |
|------|---------------------|
| quick | All 4 tasks completed |
| full | All 5 tasks (+ any fix cycle tasks) completed |
| batch | All N EXPLORE + N SOLVE + 1 AUDIT + 1 MARSHAL + M BUILD (+ any fix cycle tasks) completed |

1. Verify all tasks completed by reading tasks.json
2. If any tasks are not completed, return to handleSpawnNext
3. If all completed -> transition to coordinator Phase 5

## handleConsensus

Handle consensus_blocked signals.

| Severity | Action |
|----------|--------|
| HIGH | Pause pipeline, notify user with findings summary |
| MEDIUM | Log finding, attempt to continue |
| LOW | Log finding, continue pipeline |

## handleAdapt

Capability gap reported mid-pipeline.

1. Parse gap description
2. Check if existing role covers it -> redirect
3. Role count < 6 -> generate dynamic role-spec in <session>/role-specs/
4. Create new task (add to tasks.json), spawn worker
5. Role count >= 6 -> merge or pause

## Fast-Advance Reconciliation

On every coordinator wake:
1. Read team_msg entries with type="fast_advance"
2. Sync active_workers with spawned successors
3. No duplicate spawns
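The no-duplicate-spawns rule can be sketched as a filter. This is a hypothetical helper; the `data.task_id` field on fast_advance messages is an assumption about the team_msg payload shape, not confirmed by the skill files:

```javascript
// Sketch of fast-advance reconciliation: drop spawn candidates whose
// task ID already appears in a fast_advance message, so a successor
// spawned by a worker is never spawned again by the coordinator.
// NOTE: the message shape { data: { task_id } } is an assumption.
function reconcileSpawns(candidateIds, fastAdvanceMsgs) {
  const alreadySpawned = new Set(fastAdvanceMsgs.map((m) => m.data.task_id));
  return candidateIds.filter((id) => !alreadySpawned.has(id));
}
```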
.codex/skills/team-issue/roles/coordinator/role.md | 175 lines (new file)
@@ -0,0 +1,175 @@
---
role: coordinator
---

# Coordinator — Issue Resolution Team

Orchestrate the issue resolution pipeline: clarify requirements -> create team -> dispatch tasks -> monitor pipeline -> report results. Supports quick, full, and batch modes.

## Identity
- Name: coordinator | Tag: [coordinator]
- Responsibility: Issue clarification -> Mode detection -> Create team -> Dispatch tasks -> Monitor pipeline -> Report results

## Boundaries

### MUST
- Use `team_worker` agent type for all worker spawns
- Follow Command Execution Protocol for dispatch and monitor commands
- Respect pipeline stage dependencies (deps)
- Stop after spawning workers -- wait for results via wait_agent
- Handle review-fix cycles with max 2 iterations
- Execute completion action in Phase 5

### MUST NOT
- Implement domain logic (exploring, planning, reviewing, implementing) -- workers handle this
- Spawn workers without creating tasks first
- Skip review gate in full/batch modes
- Force-advance pipeline past failed review
- Modify source code directly -- delegate to implementer worker
- Call CLI tools directly for implementation tasks

## Command Execution Protocol

When the coordinator needs to execute a specific phase:
1. Read `commands/<command>.md`
2. Follow the workflow defined in the command
3. Commands are inline execution guides, NOT separate agents
4. Execute synchronously, complete before proceeding

## Entry Router

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker result | Result from wait_agent contains [explorer], [planner], [reviewer], [integrator], [implementer] | -> handleCallback (monitor.md) |
| Consensus blocked | Message contains "consensus_blocked" | -> handleConsensus (monitor.md) |
| Status check | Args contain "check" or "status" | -> handleCheck (monitor.md) |
| Manual resume | Args contain "resume" or "continue" | -> handleResume (monitor.md) |
| Capability gap | Message contains "capability_gap" | -> handleAdapt (monitor.md) |
| Pipeline complete | All tasks completed | -> handleComplete (monitor.md) |
| Interrupted session | Active session in .workflow/.team/TISL-* | -> Phase 0 |
| New session | None of above | -> Phase 1 |

For callback/check/resume/consensus/adapt/complete: load `@commands/monitor.md`, execute handler, STOP.

## Phase 0: Session Resume Check

1. Scan `.workflow/.team/TISL-*/session.json` for active/paused sessions
2. No sessions -> Phase 1
3. Single session -> reconcile (read tasks.json, reset in_progress->pending, rebuild team, spawn first ready task)
4. Multiple -> request_user_input for selection

## Phase 1: Requirement Clarification

TEXT-LEVEL ONLY. No source code reading.

1. Parse issue IDs and mode from $ARGUMENTS:

| Pattern | Extraction |
|---------|------------|
| `GH-\d+` | GitHub issue ID |
| `ISS-\d{8}-\d{6}` | Local issue ID |
| `--mode=<mode>` | Explicit mode override |
| `--all-pending` | Load all pending issues via `Bash("ccw issue list --status registered,pending --json")` |

2. If no issue IDs found -> request_user_input for clarification

3. **Mode auto-detection** (when `--mode` not specified):

| Condition | Mode |
|-----------|------|
| Issue count <= 2 AND no high-priority (priority < 4) | `quick` |
| Issue count <= 2 AND has high-priority (priority >= 4) | `full` |
| 3-4 issues | `full` |
| Issue count >= 5 | `batch` |
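The auto-detection table above can be expressed as a small function (`detectMode` is a hypothetical helper; it assumes each issue object carries a numeric `priority`):

```javascript
// Sketch of mode auto-detection: priority >= 4 counts as high priority,
// mirroring the detection table.
function detectMode(issues) {
  const n = issues.length;
  if (n >= 5) return "batch";
  if (n >= 3) return "full";
  const highPriority = issues.some((i) => i.priority >= 4);
  return highPriority ? "full" : "quick";
}
```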
4. **Execution method selection** for BUILD phase (default: Auto):

| Option | Trigger |
|--------|---------|
| codex | task_count > 3 or explicit `--exec=codex` |
| gemini | task_count <= 3 or explicit `--exec=gemini` |
| qwen | explicit `--exec=qwen` |
| Auto | Auto-select based on task_count |

5. Record requirements: issue_ids, mode, execution_method, code_review settings

## Phase 2: Create Team + Initialize Session

1. Resolve workspace paths (MUST do first):
   - `project_root` = result of `Bash("pwd")`
   - `skill_root` = `<project_root>/.codex/skills/team-issue`
2. Generate session ID: `TISL-<issue-slug>-<date>`
3. Create session folder structure:
   ```
   Bash("mkdir -p .workflow/.team/TISL-<slug>-<date>/{explorations,solutions,audits,queue,builds,wisdom,.msg}")
   ```
4. Create session folder + initialize `tasks.json` (empty array)
5. Write session.json with pipeline_mode, issue_ids, execution_method, fix_cycles=0, max_fix_cycles=2
6. Initialize meta.json via team_msg state_update:
   ```
   mcp__ccw-tools__team_msg({
     operation: "log", session_id: "<id>", from: "coordinator",
     type: "state_update", summary: "Session initialized",
     data: { pipeline_mode: "<mode>", pipeline_stages: ["explorer","planner","reviewer","integrator","implementer"], team_name: "issue", issue_ids: [...], fix_cycles: 0 }
   })
   ```
7. Initialize wisdom files (learnings.md, decisions.md, conventions.md, issues.md)

## Phase 3: Create Task Chain

Delegate to @commands/dispatch.md:
1. Read pipeline mode and issue IDs from session.json
2. Build tasks array and write to tasks.json with correct deps
3. Update session.json with task count

## Phase 4: Spawn-and-Stop

Delegate to @commands/monitor.md#handleSpawnNext:
1. Find ready tasks (pending + deps resolved)
2. Spawn team_worker agents (see SKILL.md Spawn Template)
3. Output status summary
4. STOP

## Phase 5: Report + Completion Action

1. Load session state -> count completed tasks, calculate duration
2. List deliverables:

| Deliverable | Path |
|-------------|------|
| Context Reports | <session>/explorations/context-*.json |
| Solution Plans | <session>/solutions/solution-*.json |
| Audit Reports | <session>/audits/audit-report.json |
| Execution Queue | .workflow/issues/queue/execution-queue.json |
| Build Results | <session>/builds/ |

3. Output pipeline summary: issue count, pipeline mode, fix cycles used, issues resolved

4. Execute completion action (interactive):
   ```
   request_user_input({
     questions: [{ question: "Issue pipeline complete. What would you like to do?",
       options: [
         { label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and team" },
         { label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
         { label: "New Batch", description: "Return to Phase 1 with new issue IDs" }
       ]
     }]
   })
   ```

| Choice | Steps |
|--------|-------|
| Archive & Clean | Verify all completed -> update session status="completed" -> clean up session -> output final summary |
| Keep Active | Update session status="paused" -> output: "Resume with: Skill(skill='team-issue', args='resume')" |
| New Batch | Return to Phase 1 |

## Error Handling

| Error | Resolution |
|-------|------------|
| No issue IDs provided | request_user_input for clarification |
| Session corruption | Attempt recovery, fallback to manual |
| Worker crash | Reset task to pending, respawn |
| Review rejection exceeds 2 rounds | Force convergence to MARSHAL |
| Deferred BUILD count unknown | Read execution-queue.json after MARSHAL completes |
.codex/skills/team-issue/roles/explorer/role.md | 94 lines (new file)
@@ -0,0 +1,94 @@
---
role: explorer
prefix: EXPLORE
inner_loop: false
message_types: [context_ready, error]
---

# Issue Explorer

Analyze issue context, explore the codebase for relevant files, map dependencies and impact scope. Produce a shared context report for planner, reviewer, and implementer.

## Phase 2: Issue Loading & Context Setup

| Input | Source | Required |
|-------|--------|----------|
| Issue ID | Task description (GH-\d+ or ISS-\d{8}-\d{6}) | Yes |
| Issue details | `ccw issue status <id> --json` | Yes |
| Session path | Extracted from task description | Yes |
| wisdom meta | <session>/wisdom/.msg/meta.json | No |

1. Extract issue ID from task description via regex: `(?:GH-\d+|ISS-\d{8}-\d{6})`
2. If no issue ID found -> report error, STOP
3. Load issue details:

```
Bash("ccw issue status <issueId> --json")
```

4. Parse JSON response for issue metadata (title, context, priority, labels, feedback)
5. Load wisdom files from `<session>/wisdom/` if available
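Step 1's extraction can be sketched directly from the regex above (`extractIssueId` is a hypothetical helper name):

```javascript
// Sketch of Phase 2 step 1: extract the first GitHub (GH-<n>) or
// local (ISS-<date>-<time>) issue ID from a task description.
function extractIssueId(description) {
  const m = description.match(/(?:GH-\d+|ISS-\d{8}-\d{6})/);
  return m ? m[0] : null; // null -> report error, STOP
}
```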
## Phase 3: Codebase Exploration & Impact Analysis

**Complexity assessment determines exploration depth**:

| Signal | Weight | Keywords |
|--------|--------|----------|
| Structural change | +2 | refactor, architect, restructure, module, system |
| Cross-cutting | +2 | multiple, across, cross |
| Integration | +1 | integrate, api, database |
| High priority | +1 | priority >= 4 |

| Score | Complexity | Strategy |
|-------|------------|----------|
| >= 4 | High | Deep exploration via CLI tool |
| 2-3 | Medium | Hybrid: ACE search + selective CLI |
| 0-1 | Low | Direct ACE search only |
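The two tables above combine into one scoring function. A minimal sketch, assuming each keyword group contributes its weight at most once regardless of how many of its keywords match (`assessComplexity` is a hypothetical helper):

```javascript
// Sketch of the complexity assessment: sum the keyword-signal weights,
// add 1 for priority >= 4, then map the score to a complexity tier.
function assessComplexity(text, priority) {
  const lower = text.toLowerCase();
  const signals = [
    { weight: 2, keywords: ["refactor", "architect", "restructure", "module", "system"] },
    { weight: 2, keywords: ["multiple", "across", "cross"] },
    { weight: 1, keywords: ["integrate", "api", "database"] },
  ];
  let score = 0;
  for (const s of signals) {
    if (s.keywords.some((k) => lower.includes(k))) score += s.weight;
  }
  if (priority >= 4) score += 1;
  if (score >= 4) return "High";
  if (score >= 2) return "Medium";
  return "Low";
}
```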
**Exploration execution**:

| Complexity | Execution |
|------------|-----------|
| Low | Direct ACE search: `mcp__ace-tool__search_context(project_root_path, query)` |
| Medium/High | CLI exploration: `Bash("ccw cli -p \"<exploration_prompt>\" --tool gemini --mode analysis")` |

**CLI exploration prompt template**:

```
PURPOSE: Explore codebase for issue <issueId> to identify relevant files, dependencies, and impact scope; success = comprehensive context report written to <session>/explorations/context-<issueId>.json

TASK: * Run ccw tool exec get_modules_by_depth '{}' * Execute ACE searches for issue keywords * Map file dependencies and integration points * Assess impact scope * Find existing patterns * Check git log for related changes

MODE: analysis

CONTEXT: @**/* | Memory: Issue <issueId> - <issue.title> (Priority: <issue.priority>)

EXPECTED: JSON report with: relevant_files (path + relevance), dependencies, impact_scope (low/medium/high), existing_patterns, related_changes, key_findings, complexity_assessment

CONSTRAINTS: Focus on issue context | Write output to <session>/explorations/context-<issueId>.json
```

**Report schema**:

```json
{
  "issue_id": "string",
  "issue": { "id": "", "title": "", "priority": 0, "status": "", "labels": [], "feedback": "" },
  "relevant_files": [{ "path": "", "relevance": "" }],
  "dependencies": [],
  "impact_scope": "low | medium | high",
  "existing_patterns": [],
  "related_changes": [],
  "key_findings": [],
  "complexity_assessment": "Low | Medium | High"
}
```

## Phase 4: Context Report & Wisdom Contribution

1. Write context report to `<session>/explorations/context-<issueId>.json`
2. If no file was produced by the agent, build a minimal report from ACE results
3. Update `<session>/wisdom/.msg/meta.json` under the `explorer` namespace:
   - Read existing -> merge `{ "explorer": { issue_id, complexity, impact_scope, file_count } }` -> write back
4. Contribute discoveries to `<session>/wisdom/learnings.md` if new patterns found
.codex/skills/team-issue/roles/implementer/role.md | 87 lines (new file)
@@ -0,0 +1,87 @@
---
role: implementer
prefix: BUILD
inner_loop: false
message_types: [impl_complete, impl_failed, error]
---

# Issue Implementer

Load the solution plan, route to an execution backend (Codex/Gemini/Qwen), run tests, and commit. The execution method is determined by the coordinator during task creation. Supports parallel instances for batch mode.

## Modes

| Backend | Condition | Method |
|---------|-----------|--------|
| codex | task_count > 3 or explicit | `ccw cli --tool codex --mode write --id issue-<issueId>` |
| gemini | task_count <= 3 or explicit | `ccw cli --tool gemini --mode write --id issue-<issueId>` |
| qwen | explicit | `ccw cli --tool qwen --mode write --id issue-<issueId>` |

## Phase 2: Load Solution & Resolve Executor
|
||||
|
||||
| Input | Source | Required |
|
||||
|-------|--------|----------|
|
||||
| Issue ID | Task description (GH-\d+ or ISS-\d{8}-\d{6}) | Yes |
|
||||
| Bound solution | `ccw issue solutions <id> --json` | Yes |
|
||||
| Explorer context | `<session>/explorations/context-<issueId>.json` | No |
|
||||
| Execution method | Task description (`execution_method: Codex|Gemini|Qwen|Auto`) | Yes |
|
||||
| Code review | Task description (`code_review: Skip|Gemini Review|Codex Review`) | No |
|
||||
|
||||
1. Extract issue ID from task description
|
||||
2. If no issue ID -> report error, STOP
|
||||
3. Load bound solution: `Bash("ccw issue solutions <issueId> --json")`
|
||||
4. If no bound solution -> report error, STOP
|
||||
5. Load explorer context (if available)
|
||||
6. Resolve execution method (Auto: task_count <= 3 -> gemini, else codex)
|
||||
7. Update issue status: `Bash("ccw issue update <issueId> --status in-progress")`
|
||||
|
||||
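The Auto resolution rule in step 6 is small enough to state directly (a sketch; the threshold and backend names come from the Modes table above):

```python
def resolve_executor(execution_method: str, task_count: int) -> str:
    """Map the task's execution_method field to a concrete CLI backend."""
    method = execution_method.strip().lower()
    if method in ("codex", "gemini", "qwen"):
        return method                      # explicit choice wins
    # Auto: small plans go to gemini, larger ones to codex
    return "gemini" if task_count <= 3 else "codex"
```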
## Phase 3: Implementation (Multi-Backend Routing)

**Execution prompt template** (all backends):

```
## Issue
ID: <issueId>
Title: <solution.bound.title>

## Solution Plan
<solution.bound JSON>

## Codebase Context (from explorer)
Relevant files: <explorerContext.relevant_files>
Existing patterns: <explorerContext.existing_patterns>
Dependencies: <explorerContext.dependencies>

## Implementation Requirements
1. Follow the solution plan tasks in order
2. Write clean, minimal code following existing patterns
3. Run tests after each significant change
4. Ensure all existing tests still pass
5. Do NOT over-engineer

## Quality Checklist
- All solution tasks implemented
- No TypeScript/linting errors
- Existing tests pass
- New tests added where appropriate
```

Route by executor:
- **codex**: `Bash("ccw cli -p \"<prompt>\" --tool codex --mode write --id issue-<issueId>")`
- **gemini**: `Bash("ccw cli -p \"<prompt>\" --tool gemini --mode write --id issue-<issueId>")`
- **qwen**: `Bash("ccw cli -p \"<prompt>\" --tool qwen --mode write --id issue-<issueId>")`

On CLI failure, resume: `ccw cli -p "Continue" --resume issue-<issueId> --tool <tool> --mode write`
## Phase 4: Verify & Commit

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Tests pass | Detect and run test command | No new failures |
| Code review | Optional, per task config | Review output logged |

- Tests pass -> optional code review -> `ccw issue update <issueId> --status resolved` -> report `impl_complete`
- Tests fail -> report `impl_failed` with truncated test output

Update `<session>/wisdom/.msg/meta.json` under `implementer` namespace:
- Read existing -> merge `{ "implementer": { issue_id, executor, test_status, review_status } }` -> write back
**New file**: `.codex/skills/team-issue/roles/integrator/role.md` (+84 lines)
---
role: integrator
prefix: MARSHAL
inner_loop: false
message_types: [queue_ready, conflict_found, error]
---

# Issue Integrator

Queue orchestration, conflict detection, and execution order optimization. Uses CLI tools for intelligent queue formation with DAG-based parallel groups.

## Phase 2: Collect Bound Solutions

| Input | Source | Required |
|-------|--------|----------|
| Issue IDs | Task description (GH-\d+ or ISS-\d{8}-\d{6}) | Yes |
| Bound solutions | `ccw issue solutions <id> --json` | Yes |
| wisdom meta | `<session>/wisdom/.msg/meta.json` | No |

1. Extract issue IDs from task description via regex
2. Verify all issues have bound solutions:

```
Bash("ccw issue solutions <issueId> --json")
```

3. Check for unbound issues:

| Condition | Action |
|-----------|--------|
| All issues bound | Proceed to Phase 3 |
| Any issue unbound | Report error to coordinator, STOP |
## Phase 3: Queue Formation via CLI

**CLI invocation**:

```
Bash("ccw cli -p \"
PURPOSE: Form execution queue for <count> issues with conflict detection and optimal ordering; success = DAG-based queue with parallel groups written to execution-queue.json

TASK: * Load all bound solutions from .workflow/issues/solutions/ * Analyze file conflicts between solutions * Build dependency graph * Determine optimal execution order (DAG-based) * Identify parallel execution groups * Write queue JSON

MODE: analysis

CONTEXT: @.workflow/issues/solutions/**/*.json | Memory: Issues to queue: <issueIds>

EXPECTED: Queue JSON with: ordered issue list, conflict analysis, parallel_groups (issues that can run concurrently), depends_on relationships
Write to: .workflow/issues/queue/execution-queue.json

CONSTRAINTS: Resolve file conflicts | Optimize for parallelism | Maintain dependency order
\" --tool gemini --mode analysis")
```

**Parse queue result**:

```
Read(".workflow/issues/queue/execution-queue.json")
```

**Queue schema**:

```json
{
  "queue": [{ "issue_id": "", "solution_id": "", "order": 0, "depends_on": [], "estimated_files": [] }],
  "conflicts": [{ "issues": [], "files": [], "resolution": "" }],
  "parallel_groups": [{ "group": 0, "issues": [] }]
}
```
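How parallel groups fall out of the `depends_on` relationships in the schema above can be sketched as a DAG layering (an illustration, not the CLI's actual algorithm):

```python
def parallel_groups(queue: list[dict]) -> list[list[str]]:
    """Layer issues so each group depends only on earlier groups."""
    deps = {item["issue_id"]: set(item["depends_on"]) for item in queue}
    groups, placed = [], set()
    while len(placed) < len(deps):
        ready = sorted(i for i, d in deps.items()
                       if i not in placed and d <= placed)
        if not ready:                      # nothing schedulable: cycle
            raise ValueError("circular depends_on detected")
        groups.append(ready)
        placed.update(ready)
    return groups
```

Issues within one group touch disjoint dependency sets and are candidates for concurrent BUILD execution.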
## Phase 4: Conflict Resolution & Reporting

**Queue validation**:

| Condition | Action |
|-----------|--------|
| Queue file exists, no unresolved conflicts | Report `queue_ready` |
| Queue file exists, has unresolved conflicts | Report `conflict_found` for user decision |
| Queue file not found | Report `error`, STOP |

**Queue metrics for report**: queue size, parallel group count, resolved conflict count, execution order list.

Update `<session>/wisdom/.msg/meta.json` under `integrator` namespace:
- Read existing -> merge `{ "integrator": { queue_size, parallel_groups, conflict_count } }` -> write back
**New file**: `.codex/skills/team-issue/roles/planner/role.md` (+81 lines)
---
role: planner
prefix: SOLVE
inner_loop: false
additional_prefixes: [SOLVE-fix]
message_types: [solution_ready, multi_solution, error]
---

# Issue Planner

Design solutions and decompose into implementation tasks. Uses CLI tools for ACE exploration and solution generation. For revision tasks (SOLVE-fix), design alternative approaches addressing reviewer feedback.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Issue ID | Task description (GH-\d+ or ISS-\d{8}-\d{6}) | Yes |
| Explorer context | `<session>/explorations/context-<issueId>.json` | No |
| Review feedback | Task description (for SOLVE-fix tasks) | No |
| wisdom meta | `<session>/wisdom/.msg/meta.json` | No |

1. Extract issue ID from task description via regex: `(?:GH-\d+|ISS-\d{8}-\d{6})`
2. If no issue ID found -> report error, STOP
3. Load explorer context report (if available):

```
Read("<session>/explorations/context-<issueId>.json")
```

4. Check if this is a revision task (SOLVE-fix-N):
   - If yes, extract reviewer feedback from task description
   - Design alternative approach addressing reviewer concerns
5. Load wisdom files for accumulated codebase knowledge
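The extraction regex from step 1 in action (a sketch; the pattern is taken verbatim from this spec, the helper name is illustrative):

```python
import re

ISSUE_ID_RE = re.compile(r"(?:GH-\d+|ISS-\d{8}-\d{6})")

def extract_issue_ids(task_description: str) -> list[str]:
    """All issue IDs mentioned in a task description, in order, deduplicated."""
    seen: dict[str, None] = {}
    for match in ISSUE_ID_RE.findall(task_description):
        seen.setdefault(match)
    return list(seen)
```

The same regex is used by the reviewer, integrator, and implementer roles to pull issue IDs out of their task descriptions.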
## Phase 3: Solution Generation via CLI

**CLI invocation**:

```
Bash("ccw cli -p \"
PURPOSE: Design solution for issue <issueId> and decompose into implementation tasks; success = solution bound to issue with task breakdown

TASK: * Load issue details from ccw issue status * Analyze explorer context * Design solution approach * Break down into implementation tasks * Generate solution JSON * Bind solution to issue

MODE: analysis

CONTEXT: @**/* | Memory: Issue <issueId> - <issue.title> (Priority: <issue.priority>)
Explorer findings: <explorerContext.key_findings>
Relevant files: <explorerContext.relevant_files>
Complexity: <explorerContext.complexity_assessment>

EXPECTED: Solution JSON with: issue_id, solution_id, approach, tasks (ordered list with descriptions), estimated_files, dependencies
Write to: <session>/solutions/solution-<issueId>.json
Then bind: ccw issue bind <issueId> <solution_id>

CONSTRAINTS: Follow existing patterns | Minimal changes | Address reviewer feedback if SOLVE-fix task
\" --tool gemini --mode analysis")
```

**Expected CLI output**: Solution file path and binding confirmation

**Parse result**:

```
Read("<session>/solutions/solution-<issueId>.json")
```
## Phase 4: Solution Selection & Reporting

**Outcome routing**:

| Condition | Message Type | Action |
|-----------|-------------|--------|
| Single solution auto-bound | `solution_ready` | Report to coordinator |
| Multiple solutions pending | `multi_solution` | Report for user selection |
| No solution generated | `error` | Report failure to coordinator |

Write solution summary to `<session>/solutions/solution-<issueId>.json`.

Update `<session>/wisdom/.msg/meta.json` under `planner` namespace:
- Read existing -> merge `{ "planner": { issue_id, solution_id, task_count, is_revision } }` -> write back
**New file**: `.codex/skills/team-issue/roles/reviewer/role.md` (+86 lines)
---
role: reviewer
prefix: AUDIT
inner_loop: false
message_types: [approved, concerns, rejected, error]
---

# Issue Reviewer

Review solution plans for technical feasibility, risk, and completeness. Quality gate role between plan and execute phases. Provides clear verdicts: approved, rejected, or concerns.

## Phase 2: Context & Solution Loading

| Input | Source | Required |
|-------|--------|----------|
| Issue IDs | Task description (GH-\d+ or ISS-\d{8}-\d{6}) | Yes |
| Explorer context | `<session>/explorations/context-<issueId>.json` | No |
| Bound solution | `ccw issue solutions <id> --json` | Yes |
| wisdom meta | `<session>/wisdom/.msg/meta.json` | No |

1. Extract issue IDs from task description via regex
2. Load explorer context reports for each issue
3. Load bound solutions for each issue:

```
Bash("ccw issue solutions <issueId> --json")
```
## Phase 3: Multi-Dimensional Review

Review each solution across three weighted dimensions:

**Technical Feasibility (40%)**:

| Criterion | Check |
|-----------|-------|
| File Coverage | Solution covers all affected files from explorer context |
| Dependency Awareness | Considers dependency cascade effects |
| API Compatibility | Maintains backward compatibility |
| Pattern Conformance | Follows existing code patterns (ACE semantic validation) |

**Risk Assessment (30%)**:

| Criterion | Check |
|-----------|-------|
| Scope Creep | Solution stays within issue boundary (task_count <= 10) |
| Breaking Changes | No destructive modifications |
| Side Effects | No unforeseen side effects |
| Rollback Path | Can rollback if issues occur |

**Completeness (30%)**:

| Criterion | Check |
|-----------|-------|
| All Tasks Defined | Task decomposition is complete (count > 0) |
| Test Coverage | Includes test plan |
| Edge Cases | Considers boundary conditions |

**Score calculation**:

```
total_score = round(
  technical_feasibility.score * 0.4 +
  risk_assessment.score * 0.3 +
  completeness.score * 0.3
)
```

**Verdict rules**:

| Score | Verdict | Message Type |
|-------|---------|-------------|
| >= 80 | approved | `approved` |
| 60-79 | concerns | `concerns` |
| < 60 | rejected | `rejected` |
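The scoring and verdict rules above, expressed as one function (a sketch; dimension scores are assumed to be on a 0-100 scale):

```python
# Weights from the three review dimensions defined in this spec.
WEIGHTS = {"technical_feasibility": 0.4, "risk_assessment": 0.3, "completeness": 0.3}

def verdict(scores: dict[str, float]) -> tuple[int, str]:
    """Weighted total (rounded) and the verdict it maps to."""
    total = round(sum(scores[dim] * w for dim, w in WEIGHTS.items()))
    if total >= 80:
        return total, "approved"
    if total >= 60:
        return total, "concerns"
    return total, "rejected"
```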
## Phase 4: Compile Audit Report

1. Write audit report to `<session>/audits/audit-report.json`:
   - Per-issue: issueId, solutionId, total_score, verdict, per-dimension scores and findings
   - Overall verdict (any rejected -> overall rejected)

2. Update `<session>/wisdom/.msg/meta.json` under `reviewer` namespace:
   - Read existing -> merge `{ "reviewer": { overall_verdict, review_count, scores } }` -> write back

3. For rejected solutions, include specific rejection reasons and actionable feedback for SOLVE-fix task creation
**Deleted file** (198 lines):
# Team Issue Resolution -- CSV Schema

## Master CSV: tasks.csv

### Column Definitions

#### Input Columns (Set by Decomposer)

| Column | Type | Required | Description | Example |
|--------|------|----------|-------------|---------|
| `id` | string | Yes | Unique task identifier | `"EXPLORE-001"` |
| `title` | string | Yes | Short task title | `"Context analysis"` |
| `description` | string | Yes | Detailed task description (self-contained) | `"Analyze issue context and map codebase impact for ISS-20260308-120000"` |
| `role` | enum | Yes | Worker role: explorer, planner, reviewer, integrator, implementer | `"explorer"` |
| `issue_ids` | string | Yes | Semicolon-separated issue IDs | `"ISS-20260308-120000;ISS-20260308-120001"` |
| `exec_mode` | enum | Yes | Execution mechanism: `csv-wave` or `interactive` | `"csv-wave"` |
| `execution_method` | string | No | CLI tool for BUILD tasks: codex, gemini, qwen, or empty | `"gemini"` |
| `deps` | string | No | Semicolon-separated dependency task IDs | `"EXPLORE-001"` |
| `context_from` | string | No | Semicolon-separated task IDs for context | `"EXPLORE-001"` |

#### Computed Columns (Set by Wave Engine)

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `wave` | integer | Wave number (1-based, from topological sort) | `2` |
| `prev_context` | string | Aggregated findings from context_from tasks (per-wave CSV only) | `"[EXPLORE-001] Found 5 relevant files..."` |

#### Output Columns (Set by Agent)

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `status` | enum | `pending` -> `completed` / `failed` / `skipped` | `"completed"` |
| `findings` | string | Key discoveries (max 500 chars) | `"Identified 3 affected modules..."` |
| `artifact_path` | string | Path to generated artifact file | `"explorations/context-ISS-20260308-120000.json"` |
| `error` | string | Error message if failed | `""` |
---

### exec_mode Values

| Value | Mechanism | Description |
|-------|-----------|-------------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot batch execution within wave |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round individual execution (review gates) |

Interactive tasks appear in the master CSV for dependency tracking but are NOT included in wave-{N}.csv files.

---

### Role Values

| Role | Task Prefixes | Responsibility |
|------|---------------|----------------|
| `explorer` | EXPLORE-* | Codebase exploration, context analysis, impact assessment |
| `planner` | SOLVE-*, SOLVE-fix-* | Solution design, task decomposition, revision |
| `reviewer` | AUDIT-* | Technical review with multi-dimensional scoring |
| `integrator` | MARSHAL-* | Queue formation, conflict detection, execution ordering |
| `implementer` | BUILD-* | Code implementation, testing, verification |

---

### Example Data

```csv
id,title,description,role,issue_ids,exec_mode,execution_method,deps,context_from,wave,status,findings,artifact_path,error
"EXPLORE-001","Context analysis","Analyze issue context and map codebase impact for ISS-20260308-120000","explorer","ISS-20260308-120000","csv-wave","","","","1","pending","","",""
"SOLVE-001","Solution design","Design solution and decompose into implementation tasks for ISS-20260308-120000","planner","ISS-20260308-120000","csv-wave","","EXPLORE-001","EXPLORE-001","2","pending","","",""
"AUDIT-001","Technical review","Review solution for feasibility risk and completeness","reviewer","ISS-20260308-120000","interactive","","SOLVE-001","SOLVE-001","3","pending","","",""
"MARSHAL-001","Queue formation","Form execution queue with conflict detection and optimal ordering","integrator","ISS-20260308-120000","csv-wave","","AUDIT-001","SOLVE-001","4","pending","","",""
"BUILD-001","Implementation","Implement solution plan and verify with tests","implementer","ISS-20260308-120000","csv-wave","gemini","MARSHAL-001","EXPLORE-001;SOLVE-001","5","pending","","",""
```
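The `wave` numbers in the example follow from a topological layering of `deps`. A sketch of how a wave engine could compute the column (illustrative, not the engine's actual code):

```python
def assign_waves(tasks: dict[str, list[str]]) -> dict[str, int]:
    """Map task id -> 1-based wave number from its dependency ids."""
    waves: dict[str, int] = {}
    while len(waves) < len(tasks):
        progressed = False
        for tid, deps in tasks.items():
            if tid in waves or any(d not in waves for d in deps):
                continue  # already placed, or a dep is not yet placed
            waves[tid] = 1 + max((waves[d] for d in deps), default=0)
            progressed = True
        if not progressed:
            raise ValueError("circular dependency detected")
    return waves
```

Applied to the example CSV's `deps` column, this reproduces waves 1 through 5.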
---

### Column Lifecycle

```
Decomposer (Phase 1)      Wave Engine (Phase 2)     Agent (Execution)
---------------------     ---------------------     -----------------
id               ------->  id               ------->  id
title            ------->  title            ------->  (reads)
description      ------->  description      ------->  (reads)
role             ------->  role             ------->  (reads)
issue_ids        ------->  issue_ids        ------->  (reads)
exec_mode        ------->  exec_mode        ------->  (reads)
execution_method ------->  execution_method ------->  (reads)
deps             ------->  deps             ------->  (reads)
context_from     ------->  context_from     ------->  (reads)
                           wave             ------->  (reads)
                           prev_context     ------->  (reads)
                                                      status
                                                      findings
                                                      artifact_path
                                                      error
```
---

## Output Schema (JSON)

Agent output via `report_agent_job_result` (csv-wave tasks):

```json
{
  "id": "EXPLORE-001",
  "status": "completed",
  "findings": "Identified 5 relevant files in src/auth/. Impact scope: medium. Key dependency: shared/utils/token.ts. Existing pattern: middleware-chain in src/middleware/.",
  "artifact_path": "explorations/context-ISS-20260308-120000.json",
  "error": ""
}
```

Interactive tasks output via structured text or JSON written to `interactive/{id}-result.json`.
---

## Discovery Types

| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `file_found` | `path` | `{path, relevance, purpose}` | Relevant file discovered during exploration |
| `pattern_found` | `pattern+location` | `{pattern, location, description}` | Code pattern identified |
| `dependency_found` | `from+to` | `{from, to, type}` | Dependency relationship between modules |
| `solution_approach` | `issue_id` | `{issue_id, approach, estimated_files}` | Solution strategy chosen |
| `conflict_found` | `files` | `{issues, files, resolution}` | File conflict between issue solutions |
| `impl_result` | `issue_id` | `{issue_id, files_changed, tests_pass}` | Implementation outcome |

### Discovery NDJSON Format

```jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"EXPLORE-001","type":"file_found","data":{"path":"src/auth/handler.ts","relevance":"high","purpose":"Main auth request handler"}}
{"ts":"2026-03-08T10:01:00Z","worker":"EXPLORE-001","type":"pattern_found","data":{"pattern":"middleware-chain","location":"src/middleware/","description":"Express middleware chain pattern used across all route handlers"}}
{"ts":"2026-03-08T10:05:00Z","worker":"SOLVE-001","type":"solution_approach","data":{"issue_id":"ISS-20260308-120000","approach":"refactor-extract","estimated_files":5}}
{"ts":"2026-03-08T10:15:00Z","worker":"MARSHAL-001","type":"conflict_found","data":{"issues":["ISS-20260308-120000","ISS-20260308-120001"],"files":["src/auth/handler.ts"],"resolution":"sequential"}}
```

> Both csv-wave and interactive agents read/write the same discoveries.ndjson file.
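Deduplication over the NDJSON stream can be driven by the per-type dedup keys in the table above (a sketch; the key fields come from the Dedup Key column, the first-wins policy is an assumption):

```python
import json

DEDUP_KEYS = {
    "file_found": ("path",), "pattern_found": ("pattern", "location"),
    "dependency_found": ("from", "to"), "solution_approach": ("issue_id",),
    "conflict_found": ("files",), "impl_result": ("issue_id",),
}

def dedup_discoveries(lines: list[str]) -> list[dict]:
    """Keep the first discovery per (type, dedup-key) combination."""
    seen, kept = set(), []
    for line in lines:
        entry = json.loads(line)
        fields = DEDUP_KEYS.get(entry["type"], ())
        key = (entry["type"], json.dumps([entry["data"].get(f) for f in fields]))
        if key not in seen:
            seen.add(key)
            kept.append(entry)
    return kept
```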
---

## Cross-Mechanism Context Flow

| Source | Target | Mechanism |
|--------|--------|-----------|
| CSV task findings | Interactive task | Injected via spawn message (prev_context) |
| Interactive task result | CSV task prev_context | Read from interactive/{id}-result.json |
| Any agent discovery | Any agent | Shared via discoveries.ndjson |
---

## Pipeline-Specific Schemas

### Quick Pipeline (4 tasks, 4 waves)

| Wave | Tasks | exec_mode |
|------|-------|-----------|
| 1 | EXPLORE-001 | csv-wave |
| 2 | SOLVE-001 | csv-wave |
| 3 | MARSHAL-001 | csv-wave |
| 4 | BUILD-001 | csv-wave |

### Full Pipeline (5 tasks, 5 waves)

| Wave | Tasks | exec_mode |
|------|-------|-----------|
| 1 | EXPLORE-001 | csv-wave |
| 2 | SOLVE-001 | csv-wave |
| 3 | AUDIT-001 | interactive |
| 4 | MARSHAL-001 | csv-wave |
| 5 | BUILD-001 | csv-wave |

### Batch Pipeline (N+N+1+1+M tasks)

| Wave | Tasks | exec_mode | Parallelism |
|------|-------|-----------|-------------|
| 1 | EXPLORE-001..N | csv-wave | max 5 concurrent |
| 2 | SOLVE-001..N | csv-wave | sequential |
| 3 | AUDIT-001 | interactive | 1 |
| 4 | MARSHAL-001 | csv-wave | 1 |
| 5 | BUILD-001..M (deferred) | csv-wave | max 3 concurrent |
---

## Validation Rules

| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" |
| context_from valid | All context IDs exist and are in earlier waves | "Invalid context_from: {id}" |
| exec_mode valid | Value is `csv-wave` or `interactive` | "Invalid exec_mode: {value}" |
| Role valid | Value in {explorer, planner, reviewer, integrator, implementer} | "Invalid role: {role}" |
| Description non-empty | Every task has a description | "Empty description for task: {id}" |
| Status enum | status in {pending, completed, failed, skipped} | "Invalid status: {status}" |
| Cross-mechanism deps | Interactive->CSV deps resolve correctly | "Cross-mechanism dependency unresolvable: {id}" |
| Issue IDs non-empty | Every task has at least one issue_id | "No issue_ids for task: {id}" |
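A few of the structural rules above, applied to a task list (a sketch covering unique IDs, known deps, and self-deps; error strings follow the table):

```python
def validate_tasks(tasks: list[dict]) -> list[str]:
    """Check unique IDs, known dependencies, and self-dependencies."""
    errors, ids, seen = [], {t["id"] for t in tasks}, set()
    for t in tasks:
        if t["id"] in seen:
            errors.append(f"Duplicate task ID: {t['id']}")
        seen.add(t["id"])
        for dep in t.get("deps", []):
            if dep == t["id"]:
                errors.append(f"Self-dependency: {t['id']}")
            elif dep not in ids:
                errors.append(f"Unknown dependency: {dep}")
    return errors
```

The circular-dependency rule is covered separately by the topological sort that computes waves.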
**New file**: `.codex/skills/team-issue/specs/pipelines.md` (+124 lines)
# Pipeline Definitions — team-issue

## Available Pipelines

### Quick Pipeline (4 beats, strictly serial)

```
EXPLORE-001 → SOLVE-001 → MARSHAL-001 → BUILD-001
[explorer]    [planner]   [integrator]   [implementer]
```

Use when: 1-2 simple issues, no high-priority (priority < 4).

### Full Pipeline (5 beats, with review gate)

```
EXPLORE-001 → SOLVE-001 → AUDIT-001 ─┬─(approved/concerns)→ MARSHAL-001 → BUILD-001
[explorer]    [planner]   [reviewer]  │
                                      └─(rejected, round<2)→ SOLVE-fix-001 → AUDIT-002 → MARSHAL-001 → BUILD-001
```

Use when: 1-4 issues with high-priority, or 3-4 issues regardless of priority.

### Batch Pipeline (parallel windows)

```
[EXPLORE-001..N](parallel, max 5) → [SOLVE-001..N](sequential) → AUDIT-001 → MARSHAL-001 → [BUILD-001..M](parallel, max 3, deferred)
```

Use when: 5+ issues.

Note: BUILD tasks are created dynamically after MARSHAL completes and execution-queue.json is available.
## Task Metadata Registry

| Task ID | Role | Phase | Dependencies | Description |
|---------|------|-------|-------------|-------------|
| EXPLORE-001 | explorer | explore | (none) | Context analysis and impact assessment |
| EXPLORE-002..N | explorer | explore | (none) | Parallel exploration (Batch mode only, max 5) |
| SOLVE-001 | planner | plan | EXPLORE-001 (or all EXPLORE-*) | Solution design and task decomposition |
| SOLVE-002..N | planner | plan | all EXPLORE-* | Parallel solution design (Batch mode only) |
| AUDIT-001 | reviewer | review | SOLVE-001 (or all SOLVE-*) | Technical feasibility and risk review (Full/Batch) |
| SOLVE-fix-001 | planner | fix | AUDIT-001 | Revised solution addressing reviewer feedback |
| AUDIT-002 | reviewer | re-review | SOLVE-fix-001 | Re-review of revised solution |
| MARSHAL-001 | integrator | integrate | AUDIT-001 (or last SOLVE-* in quick) | Conflict detection and queue orchestration |
| BUILD-001 | implementer | implement | MARSHAL-001 | Code implementation and result submission |
| BUILD-002..M | implementer | implement | MARSHAL-001 | Parallel implementation (Batch, deferred creation) |
## Mode Auto-Detection

| Condition | Mode |
|-----------|------|
| User specifies `--mode=<M>` | Use specified mode |
| Issue count <= 2 AND no high-priority (priority < 4) | `quick` |
| Issue count <= 2 AND has high-priority (priority >= 4) | `full` |
| 3-4 issues | `full` |
| Issue count >= 5 | `batch` |
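The detection table above as a function (a sketch; priority >= 4 counts as high-priority, per the table):

```python
def detect_mode(issue_priorities: list[int], explicit: str = "") -> str:
    """Pick quick/full/batch from issue count and priorities."""
    if explicit:
        return explicit                    # user-specified --mode wins
    count = len(issue_priorities)
    if count >= 5:
        return "batch"
    if count >= 3:
        return "full"
    return "full" if any(p >= 4 for p in issue_priorities) else "quick"
```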
## Checkpoints

| Trigger | Location | Behavior |
|---------|----------|----------|
| Review gate | After AUDIT-* | approved/concerns → MARSHAL; rejected → SOLVE-fix (max 2 rounds) |
| Review loop limit | fix_cycles >= 2 | Force proceed to MARSHAL with warnings |
| Deferred BUILD creation | After MARSHAL-* (batch) | Read execution-queue.json, create BUILD tasks |
| Pipeline stall | No ready + no running | Check missing tasks, report to user |
## Completion Conditions

| Mode | Completion Condition |
|------|---------------------|
| quick | All 4 tasks completed |
| full | All 5 tasks (+ any fix cycle tasks) completed |
| batch | All N EXPLORE + N SOLVE + 1 AUDIT + 1 MARSHAL + M BUILD (+ any fix cycle tasks) completed |

## Parallel Spawn Rules

| Pipeline | Stage | Max Parallel |
|----------|-------|-------------|
| Batch | EXPLORE-001..N | min(N, 5) |
| Batch | BUILD-001..M | min(M, 3) |
| All | All other stages | 1 |
## Shared State (meta.json)

| Role | State Key |
|------|-----------|
| explorer | `explorer` (issue_id, complexity, impact_scope, file_count) |
| planner | `planner` (issue_id, solution_id, task_count, is_revision) |
| reviewer | `reviewer` (overall_verdict, review_count, scores) |
| integrator | `integrator` (queue_size, parallel_groups, conflict_count) |
| implementer | `implementer` (issue_id, executor, test_status, review_status) |

## Message Types

| Role | Types |
|------|-------|
| coordinator | `pipeline_selected`, `review_result`, `fix_required`, `task_unblocked`, `error`, `shutdown` |
| explorer | `context_ready`, `error` |
| planner | `solution_ready`, `multi_solution`, `error` |
| reviewer | `approved`, `concerns`, `rejected`, `error` |
| integrator | `queue_ready`, `conflict_found`, `error` |
| implementer | `impl_complete`, `impl_failed`, `error` |
## Review-Fix Cycle

```
AUDIT verdict: rejected
│
├─ fix_cycles < 2 → create SOLVE-fix-<N> + AUDIT-<N+1> → spawn planner → wait
│                        ↑
│                        (repeat if rejected again)
│
└─ fix_cycles >= 2 → force proceed to MARSHAL with rejection warning logged
```
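The gate decision above as a pure function (a sketch; verdict names and the 2-round cap come from this spec, the return labels are illustrative):

```python
def next_step(verdict: str, fix_cycles: int, max_rounds: int = 2) -> str:
    """What the coordinator schedules after an AUDIT verdict."""
    if verdict in ("approved", "concerns"):
        return "MARSHAL"
    if verdict == "rejected" and fix_cycles < max_rounds:
        return "SOLVE-fix"                 # plus a follow-up AUDIT task
    return "MARSHAL-with-warning"          # loop limit reached, force proceed
```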
## Deferred BUILD Creation (Batch Mode)

BUILD tasks are not created during initial dispatch. After MARSHAL-001 completes:

1. Read `.workflow/issues/queue/execution-queue.json`
2. Parse `parallel_groups` to determine M
3. Create BUILD-001..M tasks with `addBlockedBy: ["MARSHAL-001"]`
4. Assign owners: M <= 2 → "implementer"; M > 2 → "implementer-1".."implementer-M" (max 3)
5. Spawn implementer workers via handleSpawnNext
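Step 4's owner naming could be sketched as follows; note the round-robin distribution over at most 3 named workers is an assumption, since the spec only fixes the names and the cap:

```python
def assign_owners(m: int, cap: int = 3) -> list[str]:
    """Owner names for BUILD-001..M, per the naming rule in step 4."""
    if m <= 2:
        return ["implementer"] * m
    workers = [f"implementer-{i}" for i in range(1, min(m, cap) + 1)]
    # assumption: BUILD tasks round-robin over at most `cap` named workers
    return [workers[i % len(workers)] for i in range(m)]
```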
**New file**: `.codex/skills/team-issue/specs/team-config.json` (+70 lines)
|
||||
{
|
||||
"team_name": "issue",
|
||||
"team_display_name": "Issue Resolution",
|
||||
"description": "Unified team skill for issue processing pipeline (plan → queue → execute). Issue creation handled by issue-discover, CRUD by issue-manage.",
|
||||
"version": "1.0.0",
|
||||
|
||||
"roles": {
|
||||
"coordinator": {
|
||||
"task_prefix": null,
|
||||
"responsibility": "Pipeline orchestration, mode selection, task chain creation, progress monitoring",
|
||||
"message_types": ["task_assigned", "pipeline_update", "escalation", "shutdown", "error"]
|
||||
},
|
||||
"explorer": {
|
||||
"task_prefix": "EXPLORE",
|
||||
"responsibility": "Issue context analysis, codebase exploration, dependency identification, impact assessment",
|
||||
"message_types": ["context_ready", "impact_assessed", "error"],
|
||||
"reuses_agent": "cli-explore-agent"
|
||||
},
|
||||
"planner": {
|
||||
"task_prefix": "SOLVE",
|
||||
"responsibility": "Solution design, task decomposition via issue-plan-agent",
|
||||
"message_types": ["solution_ready", "multi_solution", "error"],
|
||||
"reuses_agent": "issue-plan-agent"
|
||||
},
|
||||
"reviewer": {
|
||||
"task_prefix": "AUDIT",
|
||||
"responsibility": "Solution review, technical feasibility validation, risk assessment",
|
||||
"message_types": ["approved", "rejected", "concerns", "error"],
|
||||
"reuses_agent": null
|
||||
},
|
||||
"integrator": {
|
||||
"task_prefix": "MARSHAL",
|
||||
"responsibility": "Queue formation, conflict detection, execution order optimization via issue-queue-agent",
|
||||
"message_types": ["queue_ready", "conflict_found", "error"],
|
||||
"reuses_agent": "issue-queue-agent"
|
||||
},
|
||||
"implementer": {
|
||||
"task_prefix": "BUILD",
|
||||
"responsibility": "Code implementation, test verification, result submission via code-developer",
|
||||
"message_types": ["impl_complete", "impl_failed", "error"],
|
||||
"reuses_agent": "code-developer"
|
||||
}
|
||||
},
|
||||
|
||||
"pipelines": {
|
||||
"quick": {
|
||||
"description": "Simple issues, skip review",
|
||||
"task_chain": ["EXPLORE-001", "SOLVE-001", "MARSHAL-001", "BUILD-001"]
|
||||
},
|
||||
"full": {
|
||||
"description": "Complex issues, with CP-2 review cycle (max 2 rounds)",
|
||||
"task_chain": ["EXPLORE-001", "SOLVE-001", "AUDIT-001", "MARSHAL-001", "BUILD-001"]
|
||||
},
|
||||
"batch": {
|
||||
"description": "Batch processing 5-100 issues with parallel exploration and execution",
|
||||
"task_chain": "EXPLORE-001..N(batch≤5) → SOLVE-001..N(batch≤3) → AUDIT-001 → MARSHAL-001 → BUILD-001..M(DAG parallel)"
|
||||
}
|
||||
},
|
||||
|
||||
"collaboration_patterns": ["CP-1", "CP-2", "CP-3", "CP-5"],
|
||||
|
||||
"session_dirs": {
|
||||
"base": ".workflow/.team-plan/issue/",
|
||||
"context": ".workflow/.team-plan/issue/context-{issueId}.json",
|
||||
"audit": ".workflow/.team-plan/issue/audit-report.json",
|
||||
"queue": ".workflow/issues/queue/execution-queue.json",
|
||||
"solutions": ".workflow/issues/solutions/",
|
||||
"messages": ".workflow/.team-msg/issue/"
|
||||
}
|
||||
}
|
||||