mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-03-11 17:21:03 +08:00
---
name: team-lifecycle-v4
description: Full lifecycle team skill — specification, planning, implementation, testing, and review. Supports spec-only, impl-only, full-lifecycle, and frontend pipelines with optional supervisor checkpoints.
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] \"task description\""
allowed-tools: spawn_agents_on_csv, spawn_agent, wait, send_input, close_agent, Read, Write, Edit, Bash, Glob, Grep, AskUserQuestion
---

## Auto Mode

When `--yes` or `-y`: Auto-confirm task decomposition, skip interactive validation, use defaults.

# Team Lifecycle v4

## Usage

```bash
$team-lifecycle-v4 "Design and implement a user authentication system"
$team-lifecycle-v4 -c 4 "Full lifecycle: build a REST API for order management"
$team-lifecycle-v4 -y "Implement dark mode toggle in settings page"
$team-lifecycle-v4 --continue "tlv4-auth-system-20260308"
```

**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 3)
- `--continue`: Resume existing session
- `--no-supervision`: Skip CHECKPOINT tasks (supervisor opt-out)

**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)

---

## Overview

Full lifecycle software development orchestration: requirement analysis, specification writing (product brief, requirements, architecture, epics), quality gating, implementation planning, code implementation, testing, and code review. Supports multiple pipeline modes with optional supervisor checkpoints at phase transition points.

**Execution Model**: Hybrid -- CSV wave pipeline (primary) + individual agent spawn (secondary for supervisor checkpoints and requirement clarification)

```
+--------------------------------------------------------------------+
|                    TEAM LIFECYCLE v4 WORKFLOW                      |
+--------------------------------------------------------------------+
|                                                                    |
| Phase 0: Pre-Wave Interactive                                      |
| +-- Requirement clarification + pipeline selection                 |
| +-- Complexity scoring + signal detection                          |
| +-- Output: refined requirements for decomposition                 |
|                                                                    |
| Phase 1: Requirement -> CSV + Classification                       |
| +-- Parse task into lifecycle tasks per selected pipeline          |
| +-- Assign roles: analyst, writer, planner, executor, tester,      |
| |   reviewer, supervisor                                           |
| +-- Classify tasks: csv-wave | interactive (exec_mode)             |
| +-- Compute dependency waves (topological sort -> depth grouping)  |
| +-- Generate tasks.csv with wave + exec_mode columns               |
| +-- User validates task breakdown (skip if -y)                     |
|                                                                    |
| Phase 2: Wave Execution Engine (Extended)                          |
| +-- For each wave (1..N):                                          |
| |   +-- Execute pre-wave interactive tasks (if any)                |
| |   +-- Build wave CSV (filter csv-wave tasks for this wave)       |
| |   +-- Inject previous findings into prev_context column          |
| |   +-- spawn_agents_on_csv(wave CSV)                              |
| |   +-- Execute post-wave interactive tasks (if any)               |
| |   +-- Handle CHECKPOINT tasks via interactive supervisor         |
| |   +-- Merge all results into master tasks.csv                    |
| |   +-- Check: any failed? -> skip dependents                      |
| +-- discoveries.ndjson shared across all modes (append-only)       |
|                                                                    |
| Phase 3: Post-Wave Interactive                                     |
| +-- Quality gate evaluation (QUALITY-001)                          |
| +-- User approval checkpoint before implementation                 |
| +-- Complexity-based implementation routing                        |
|                                                                    |
| Phase 4: Results Aggregation                                       |
| +-- Export final results.csv                                       |
| +-- Generate context.md with all findings                          |
| +-- Display summary: completed/failed/skipped per wave             |
| +-- Offer: view results | retry failed | done                      |
|                                                                    |
+--------------------------------------------------------------------+
```

---

## Task Classification Rules

Each task is classified by `exec_mode`:

| exec_mode | Mechanism | Criteria |
|-----------|-----------|----------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot, structured I/O, no multi-round interaction |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round, clarification, checkpoint evaluation |

**Classification Decision**:

| Task Property | Classification |
|---------------|---------------|
| Research / analysis (RESEARCH-*) | `csv-wave` |
| Document generation (DRAFT-*) | `csv-wave` |
| Implementation planning (PLAN-*) | `csv-wave` |
| Code implementation (IMPL-*) | `csv-wave` |
| Test execution (TEST-*) | `csv-wave` |
| Code review (REVIEW-*) | `csv-wave` |
| Quality gate scoring (QUALITY-*) | `csv-wave` |
| Supervisor checkpoints (CHECKPOINT-*) | `interactive` |
| Requirement clarification (Phase 0) | `interactive` |
| Quality gate user approval | `interactive` |

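For id-bearing tasks, the decision table reduces to a prefix rule. A minimal sketch (illustrative only; `classifyExecMode` is a hypothetical helper, and non-task interactive steps such as Phase 0 clarification are driven by the orchestrator directly):

```javascript
// Hypothetical helper: map a task id prefix to its exec_mode per the
// decision table above. Only CHECKPOINT-* tasks run interactively;
// every other lifecycle task defaults to csv-wave.
function classifyExecMode(taskId) {
  const prefix = taskId.split('-')[0]
  return prefix === 'CHECKPOINT' ? 'interactive' : 'csv-wave'
}
```
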
---

## CSV Schema

### tasks.csv (Master State)

```csv
id,title,description,role,pipeline_phase,deps,context_from,exec_mode,wave,status,findings,quality_score,supervision_verdict,error
"RESEARCH-001","Domain research","Explore domain, extract structured context, identify constraints","analyst","research","","","csv-wave","1","pending","","","",""
"DRAFT-001","Product brief","Generate product brief from research context","writer","product-brief","RESEARCH-001","RESEARCH-001","csv-wave","2","pending","","","",""
"CHECKPOINT-001","Brief-PRD consistency","Verify terminology alignment and scope consistency between brief and PRD","supervisor","checkpoint","DRAFT-002","DRAFT-001;DRAFT-002","interactive","4","pending","","","",""
```

**Columns**:

| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (string) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description |
| `role` | Input | Worker role: analyst, writer, planner, executor, tester, reviewer, supervisor |
| `pipeline_phase` | Input | Lifecycle phase: research, product-brief, requirements, architecture, epics, checkpoint, readiness, planning, implementation, validation, review |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` -> `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `quality_score` | Output | Quality gate score (0-100) for QUALITY-* tasks |
| `supervision_verdict` | Output | `pass` / `warn` / `block` for CHECKPOINT-* tasks |
| `error` | Output | Error message if failed (empty if success) |

### Per-Wave CSV (Temporary)

Each wave generates a temporary `wave-{N}.csv` with an extra `prev_context` column (csv-wave tasks only).

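The implementation snippets below call `parseCsv` without defining it. One minimal sketch of such a helper for the quoted format above (an assumption, since the real helper is not shown; it does not handle newlines embedded inside fields):

```javascript
// Minimal CSV parser for the quoted, comma-separated rows shown above.
// Returns an array of row objects keyed by the header line.
function parseCsv(text) {
  const parseLine = (line) => {
    const cells = []
    let cell = '', inQuotes = false
    for (let i = 0; i < line.length; i++) {
      const ch = line[i]
      if (inQuotes) {
        if (ch === '"' && line[i + 1] === '"') { cell += '"'; i++ }  // escaped quote
        else if (ch === '"') inQuotes = false
        else cell += ch
      } else if (ch === '"') inQuotes = true
      else if (ch === ',') { cells.push(cell); cell = '' }
      else cell += ch
    }
    cells.push(cell)
    return cells
  }
  const [headerLine, ...rows] = text.trim().split('\n')
  const headers = parseLine(headerLine)
  return rows.map(r => Object.fromEntries(parseLine(r).map((c, i) => [headers[i], c])))
}
```
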
---

## Agent Registry (Interactive Agents)

| Agent | Role File | Pattern | Responsibility | Position |
|-------|-----------|---------|----------------|----------|
| requirement-clarifier | agents/requirement-clarifier.md | 2.3 (wait-respond) | Parse task, detect signals, select pipeline mode | standalone (Phase 0) |
| supervisor | agents/supervisor.md | 2.3 (wait-respond) | Verify cross-artifact consistency at phase transitions | post-wave (after checkpoint dependencies complete) |
| quality-gate | agents/quality-gate.md | 2.3 (wait-respond) | Evaluate quality and present user approval | post-wave (after QUALITY-001 completes) |

> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs, **you MUST immediately `Read` the corresponding agent.md** to reload.

---

## Output Artifacts

| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state -- all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 4 |
| `discoveries.ndjson` | Shared exploration board (all agents, both modes) | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 4 |
| `interactive/{id}-result.json` | Results from interactive tasks | Created per interactive task |

---

## Session Structure

```
.workflow/.csv-wave/{session-id}/
+-- tasks.csv              # Master state (all tasks, both modes)
+-- results.csv            # Final results export
+-- discoveries.ndjson     # Shared discovery board (all agents)
+-- context.md             # Human-readable report
+-- wave-{N}.csv           # Temporary per-wave input (csv-wave only)
+-- spec/                  # Specification artifacts
|   +-- spec-config.json
|   +-- discovery-context.json
|   +-- product-brief.md
|   +-- requirements/
|   +-- architecture.md
|   +-- epics.md
+-- plan/                  # Implementation plan
|   +-- plan.json
|   +-- .task/TASK-*.json
+-- artifacts/             # Review and checkpoint reports
|   +-- CHECKPOINT-*-report.md
|   +-- review-report.md
+-- wisdom/                # Cross-task knowledge
+-- explorations/          # Shared exploration cache
+-- interactive/           # Interactive task artifacts
    +-- {id}-result.json
```

---

## Implementation

### Session Initialization

```javascript
// UTC+8 wall-clock timestamp: shift the epoch by 8h, then relabel the
// trailing 'Z' so the string is not mislabeled as UTC
const getUtc8ISOString = () =>
  new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString().replace('Z', '+08:00')

// Parse flags
const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const noSupervision = $ARGUMENTS.includes('--no-supervision')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1], 10) : 3

// Clean requirement text (remove flags)
const requirement = $ARGUMENTS
  .replace(/--yes|-y|--continue|--no-supervision|--concurrency\s+\d+|-c\s+\d+/g, '')
  .trim()

const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
let sessionId = `tlv4-${slug}-${dateStr}`
let sessionFolder = `.workflow/.csv-wave/${sessionId}`

// Continue mode: find existing session
if (continueMode) {
  const existing = Bash(`ls -dt .workflow/.csv-wave/tlv4-* 2>/dev/null | head -1`).trim()
  if (existing) {
    sessionId = existing.split('/').pop()
    sessionFolder = existing
    // Read existing tasks.csv, find incomplete waves, resume from Phase 2
  }
}

Bash(`mkdir -p ${sessionFolder}/{spec,plan,plan/.task,artifacts,wisdom,explorations,interactive}`)
```

---

### Phase 0: Pre-Wave Interactive

**Objective**: Clarify requirement, detect capabilities, select pipeline mode.

**Execution**:

```javascript
const clarifier = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: .codex/skills/team-lifecycle-v4/agents/requirement-clarifier.md (MUST read first)
2. Read: .workflow/project-tech.json (if exists)

---

Goal: Analyze task requirement and select appropriate pipeline
Requirement: ${requirement}

### Task
1. Parse task description for capability signals:
   - spec/design/document/requirements -> spec-only
   - implement/build/fix/code -> impl-only
   - full/lifecycle/end-to-end -> full-lifecycle
   - frontend/UI/react/vue -> fe-only or fullstack
2. Score complexity (per capability +1, cross-domain +2, parallel tracks +1, serial depth >3 +1)
3. Return structured result with pipeline_type, capabilities, complexity
`
})

const clarifierResult = wait({ ids: [clarifier], timeout_ms: 120000 })
if (clarifierResult.timed_out) {
  send_input({ id: clarifier, message: "Please finalize and output current findings." })
  wait({ ids: [clarifier], timeout_ms: 60000 })
}
close_agent({ id: clarifier })

// parsedPipelineType / parsedCapabilities / parsedComplexity are extracted
// from the clarifier's final structured message
Write(`${sessionFolder}/interactive/requirement-clarifier-result.json`, JSON.stringify({
  task_id: "requirement-clarification",
  status: "completed",
  pipeline_type: parsedPipelineType,
  capabilities: parsedCapabilities,
  complexity: parsedComplexity,
  timestamp: getUtc8ISOString()
}))
```

If not AUTO_YES, confirm pipeline selection:

```javascript
if (!AUTO_YES) {
  const answer = AskUserQuestion({
    questions: [{
      question: `Requirement: "${requirement}"\nDetected pipeline: ${pipeline_type} (complexity: ${complexity.level})\nRoles: ${capabilities.map(c => c.name).join(', ')}\n\nApprove?`,
      header: "Pipeline Selection",
      multiSelect: false,
      options: [
        { label: "Approve", description: `Use ${pipeline_type} pipeline` },
        { label: "Spec Only", description: "Research -> draft specs -> quality gate" },
        { label: "Impl Only", description: "Plan -> implement -> test + review" },
        { label: "Full Lifecycle", description: "Spec pipeline + implementation pipeline" }
      ]
    }]
  })
}
```

**Success Criteria**:
- Refined requirements available for Phase 1 decomposition
- Interactive agents closed, results stored

---

### Phase 1: Requirement -> CSV + Classification

**Objective**: Build tasks.csv from the selected pipeline mode with proper wave assignments.

**Decomposition Rules**:

| Pipeline | Tasks | Wave Structure |
|----------|-------|---------------|
| spec-only | RESEARCH-001 -> DRAFT-001 -> DRAFT-002 -> [CHECKPOINT-001] -> DRAFT-003 -> DRAFT-004 -> [CHECKPOINT-002] -> QUALITY-001 | 8 waves (6 csv + 2 interactive checkpoints) |
| impl-only | PLAN-001 -> [CHECKPOINT-003] -> IMPL-001 -> TEST-001 + REVIEW-001 | 4 waves (3 csv + 1 interactive) |
| full-lifecycle | spec-only pipeline + impl-only pipeline (PLAN blocked by QUALITY-001) | 12 waves |

**Pipeline Task Definitions**:

#### Spec-Only Pipeline

| Task ID | Role | Wave | Deps | exec_mode | Description |
|---------|------|------|------|-----------|-------------|
| RESEARCH-001 | analyst | 1 | (none) | csv-wave | Research domain, extract structured context |
| DRAFT-001 | writer | 2 | RESEARCH-001 | csv-wave | Generate product brief |
| DRAFT-002 | writer | 3 | DRAFT-001 | csv-wave | Generate requirements PRD |
| CHECKPOINT-001 | supervisor | 4 | DRAFT-002 | interactive | Brief-PRD consistency check |
| DRAFT-003 | writer | 5 | CHECKPOINT-001 | csv-wave | Generate architecture design |
| DRAFT-004 | writer | 6 | DRAFT-003 | csv-wave | Generate epics and stories |
| CHECKPOINT-002 | supervisor | 7 | DRAFT-004 | interactive | Full spec consistency check |
| QUALITY-001 | reviewer | 8 | CHECKPOINT-002 | csv-wave | Quality gate scoring |

#### Impl-Only Pipeline

| Task ID | Role | Wave | Deps | exec_mode | Description |
|---------|------|------|------|-----------|-------------|
| PLAN-001 | planner | 1 | (none) | csv-wave | Break down into implementation steps |
| CHECKPOINT-003 | supervisor | 2 | PLAN-001 | interactive | Plan-input alignment check |
| IMPL-001 | executor | 3 | CHECKPOINT-003 | csv-wave | Execute implementation plan |
| TEST-001 | tester | 4 | IMPL-001 | csv-wave | Run tests, fix failures |
| REVIEW-001 | reviewer | 4 | IMPL-001 | csv-wave | Code review |

When `--no-supervision` is set, skip all CHECKPOINT-* tasks entirely and adjust wave numbers and dependencies accordingly (e.g., DRAFT-003 then depends directly on DRAFT-002).

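That dependency rewiring can be sketched as follows (a hypothetical helper, not part of the skill; waves are recomputed afterwards by the topological sort, and chained checkpoints are not expected by the pipeline definitions above):

```javascript
// Drop CHECKPOINT-* tasks and rewire each dependent to the checkpoint's own
// dependencies, e.g. DRAFT-003 ends up depending directly on DRAFT-002.
// Tasks are {id, deps} with semicolon-separated deps, as in tasks.csv.
function stripCheckpoints(tasks) {
  const checkpoints = new Map(
    tasks.filter(t => t.id.startsWith('CHECKPOINT')).map(t => [t.id, t.deps])
  )
  return tasks
    .filter(t => !checkpoints.has(t.id))
    .map(t => ({
      ...t,
      deps: t.deps
        .split(';').filter(Boolean)
        .flatMap(d => checkpoints.has(d) ? checkpoints.get(d).split(';').filter(Boolean) : [d])
        .join(';')
    }))
}
```
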

**Classification Rules**:

All lifecycle work tasks (research, drafting, planning, implementation, testing, review, quality) are `csv-wave`. Supervisor checkpoints are `interactive` (post-wave, spawned by the orchestrator to verify cross-artifact consistency). Quality gate user approval is `interactive`.

**Wave Computation**: Kahn's BFS topological sort with depth tracking (csv-wave tasks only).

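A sketch of that computation (illustrative; the skill's actual implementation is not shown). Each task's wave is one more than the deepest wave among its dependencies, and a pass that finds no ready task signals a circular dependency:

```javascript
// Kahn-style level BFS: repeatedly take tasks whose deps all have waves,
// assigning wave = max(dep waves) + 1 (or 1 for root tasks).
// Throws when no task is ready, i.e. on a circular dependency.
function computeWaves(tasks) {
  const waves = new Map()
  const pending = [...tasks]
  while (pending.length > 0) {
    const ready = pending.filter(t =>
      t.deps.split(';').filter(Boolean).every(d => waves.has(d)))
    if (ready.length === 0) throw new Error('Circular dependency detected')
    for (const t of ready) {
      const depWaves = t.deps.split(';').filter(Boolean).map(d => waves.get(d))
      waves.set(t.id, depWaves.length ? Math.max(...depWaves) + 1 : 1)
      pending.splice(pending.indexOf(t), 1)
    }
  }
  return waves
}
```

Applied to the impl-only pipeline this reproduces the wave column of the table above, with TEST-001 and REVIEW-001 sharing wave 4.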

**User Validation**: Display task breakdown with wave + exec_mode assignment (skip if AUTO_YES).

**Success Criteria**:
- tasks.csv created with valid schema, wave, and exec_mode assignments
- No circular dependencies
- User approved (or AUTO_YES)

---

### Phase 2: Wave Execution Engine (Extended)

**Objective**: Execute tasks wave-by-wave with hybrid mechanism support and cross-wave context propagation.

```javascript
const failedIds = new Set()
const skippedIds = new Set()

for (let wave = 1; wave <= maxWave; wave++) {
  console.log(`\n## Wave ${wave}/${maxWave}\n`)

  // 1. Read current master CSV
  const masterCsv = parseCsv(Read(`${sessionFolder}/tasks.csv`))

  // 2. Separate csv-wave and interactive tasks for this wave
  const waveTasks = masterCsv.filter(row => parseInt(row.wave, 10) === wave)
  const csvTasks = waveTasks.filter(t => t.exec_mode === 'csv-wave')
  const interactiveTasks = waveTasks.filter(t => t.exec_mode === 'interactive')

  // 3. Skip tasks whose deps failed
  const executableCsvTasks = []
  for (const task of csvTasks) {
    const deps = task.deps.split(';').filter(Boolean)
    if (deps.some(d => failedIds.has(d) || skippedIds.has(d))) {
      skippedIds.add(task.id)
      updateMasterCsvRow(sessionFolder, task.id, {
        status: 'skipped', error: 'Dependency failed or skipped'
      })
      continue
    }
    executableCsvTasks.push(task)
  }

  // 4. Build prev_context for each csv-wave task
  for (const task of executableCsvTasks) {
    const contextIds = task.context_from.split(';').filter(Boolean)
    const prevFindings = contextIds
      .map(id => {
        const prevRow = masterCsv.find(r => r.id === id)
        if (prevRow && prevRow.status === 'completed' && prevRow.findings) {
          return `[Task ${id}: ${prevRow.title}] ${prevRow.findings}`
        }
        // Check interactive results
        try {
          const interactiveResult = JSON.parse(Read(`${sessionFolder}/interactive/${id}-result.json`))
          return `[Task ${id}] ${JSON.stringify(interactiveResult.key_findings || interactiveResult.findings || '')}`
        } catch { return null }
      })
      .filter(Boolean)
      .join('\n')
    task.prev_context = prevFindings || 'No previous context available'
  }

  // 5. Write wave CSV and execute csv-wave tasks
  if (executableCsvTasks.length > 0) {
    const waveHeader = 'id,title,description,role,pipeline_phase,deps,context_from,exec_mode,wave,prev_context'
    const waveRows = executableCsvTasks.map(t =>
      [t.id, t.title, t.description, t.role, t.pipeline_phase, t.deps, t.context_from, t.exec_mode, t.wave, t.prev_context]
        .map(cell => `"${String(cell).replace(/"/g, '""')}"`)
        .join(',')
    )
    Write(`${sessionFolder}/wave-${wave}.csv`, [waveHeader, ...waveRows].join('\n'))

    const waveResult = spawn_agents_on_csv({
      csv_path: `${sessionFolder}/wave-${wave}.csv`,
      id_column: "id",
      instruction: Read(`.codex/skills/team-lifecycle-v4/instructions/agent-instruction.md`)
        .replace(/{session-id}/g, sessionId),
      max_concurrency: maxConcurrency,
      max_runtime_seconds: 900,
      output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
      output_schema: {
        type: "object",
        properties: {
          id: { type: "string" },
          status: { type: "string", enum: ["completed", "failed"] },
          findings: { type: "string" },
          quality_score: { type: "string" },
          supervision_verdict: { type: "string" },
          error: { type: "string" }
        },
        required: ["id", "status", "findings"]
      }
    })

    // Merge results into master CSV
    const waveResults = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
    for (const result of waveResults) {
      updateMasterCsvRow(sessionFolder, result.id, {
        status: result.status,
        findings: result.findings || '',
        quality_score: result.quality_score || '',
        supervision_verdict: result.supervision_verdict || '',
        error: result.error || ''
      })
      if (result.status === 'failed') failedIds.add(result.id)
    }

    Bash(`rm -f "${sessionFolder}/wave-${wave}.csv"`)
  }

  // 6. Execute post-wave interactive tasks (supervisor checkpoints)
  for (const task of interactiveTasks) {
    if (task.status !== 'pending') continue
    const deps = task.deps.split(';').filter(Boolean)
    if (deps.some(d => failedIds.has(d) || skippedIds.has(d))) {
      skippedIds.add(task.id)
      continue
    }

    // Spawn supervisor agent for CHECKPOINT tasks
    const supervisorAgent = spawn_agent({
      message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: .codex/skills/team-lifecycle-v4/agents/supervisor.md (MUST read first)
2. Read: ${sessionFolder}/discoveries.ndjson (shared discoveries)

---

Goal: Execute checkpoint verification
Session: ${sessionFolder}
Task ID: ${task.id}
Description: ${task.description}
Scope: ${task.deps}

### Context
Read upstream artifacts and verify cross-artifact consistency.
Produce verdict: pass (score >= 0.8), warn (0.5-0.79), block (< 0.5).
Write report to ${sessionFolder}/artifacts/${task.id}-report.md.
`
    })

    const checkpointResult = wait({ ids: [supervisorAgent], timeout_ms: 300000 })
    if (checkpointResult.timed_out) {
      send_input({ id: supervisorAgent, message: "Please finalize your checkpoint evaluation now." })
      wait({ ids: [supervisorAgent], timeout_ms: 120000 })
    }
    close_agent({ id: supervisorAgent })

    // Parse checkpoint verdict: parsedVerdict / parsedScore are extracted
    // from the supervisor's final message
    Write(`${sessionFolder}/interactive/${task.id}-result.json`, JSON.stringify({
      task_id: task.id, status: "completed",
      supervision_verdict: parsedVerdict,
      supervision_score: parsedScore,
      timestamp: getUtc8ISOString()
    }))

    // Handle verdict
    if (parsedVerdict === 'block') {
      if (!AUTO_YES) {
        const answer = AskUserQuestion({
          questions: [{
            question: `Checkpoint ${task.id} BLOCKED (score: ${parsedScore}). What to do?`,
            header: "Checkpoint Blocked",
            options: [
              { label: "Override", description: "Proceed despite block" },
              { label: "Revise upstream", description: "Go back and fix issues" },
              { label: "Abort", description: "Stop pipeline" }
            ]
          }]
        })
        // Handle user choice
      }
    }

    updateMasterCsvRow(sessionFolder, task.id, {
      status: 'completed',
      findings: `Checkpoint verdict: ${parsedVerdict} (score: ${parsedScore})`,
      supervision_verdict: parsedVerdict
    })
  }

  // 7. Handle special post-wave logic
  // After QUALITY-001: pause for user approval before implementation
  // After PLAN-001: read complexity for conditional routing
}
```

**Success Criteria**:
- All waves executed in order
- Both csv-wave and interactive tasks handled per wave
- Each wave's results merged into master CSV before next wave starts
- Dependent tasks skipped when a predecessor failed
- discoveries.ndjson accumulated across all waves and mechanisms
- Supervisor checkpoints evaluated with proper verdict routing

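The loop above leans on an `updateMasterCsvRow` helper that is never shown. One plausible sketch keeps a pure core that patches a parsed row set; the real helper is assumed to re-read, patch, and re-write `tasks.csv` via `Read`/`Write` around it:

```javascript
// Hypothetical pure core of updateMasterCsvRow: patch one row by id and
// return the updated row set. Serialization back to tasks.csv (with the
// same quoting as the wave CSV) is left to the caller.
function patchRow(rows, id, patch) {
  return rows.map(row => (row.id === id ? { ...row, ...patch } : row))
}
```
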
---

### Phase 3: Post-Wave Interactive

**Objective**: Handle quality gate user approval and complexity-based implementation routing.

After QUALITY-001 completes (spec pipelines):
1. Read quality score from QUALITY-001 findings
2. If score >= 80%: present user approval for implementation (if full-lifecycle)
3. If score 60-79%: suggest revisions, offer retry
4. If score < 60%: return to writer for rework

After PLAN-001 completes (impl pipelines):
1. Read plan.json complexity assessment
2. Route by complexity:
   - Low (1-2 modules): direct IMPL-001
   - Medium (3-4 modules): parallel IMPL-{1..N}
   - High (5+ modules): detailed architecture first, then parallel IMPL

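The quality-gate branch can be sketched as a tiny routing function (illustrative only; the action labels are hypothetical, the thresholds are the ones listed above, on the 0-100 `quality_score` scale):

```javascript
// Hypothetical router for the post-QUALITY-001 branch above.
function routeByQualityScore(score) {
  if (score >= 80) return 'user-approval'      // proceed toward implementation
  if (score >= 60) return 'suggest-revisions'  // offer retry
  return 'rework'                              // send back to writer
}
```
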

**Success Criteria**:
- Post-wave interactive processing complete
- Interactive agents closed, results stored

---

### Phase 4: Results Aggregation

**Objective**: Generate final results and human-readable report.

```javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
Write(`${sessionFolder}/results.csv`, masterCsv)

const tasks = parseCsv(masterCsv)
const completed = tasks.filter(t => t.status === 'completed')
const failed = tasks.filter(t => t.status === 'failed')
const skipped = tasks.filter(t => t.status === 'skipped')

const contextContent = `# Team Lifecycle v4 Report

**Session**: ${sessionId}
**Requirement**: ${requirement}
**Pipeline**: ${pipeline_type}
**Completed**: ${getUtc8ISOString()}

---

## Summary

| Metric | Count |
|--------|-------|
| Total Tasks | ${tasks.length} |
| Completed | ${completed.length} |
| Failed | ${failed.length} |
| Skipped | ${skipped.length} |
| Supervision | ${noSupervision ? 'Disabled' : 'Enabled'} |

---

## Pipeline Execution

${waveDetails}

---

## Deliverables

${deliverablesList}

---

## Quality Gates

${qualityGateResults}

---

## Checkpoint Reports

${checkpointResults}
`

Write(`${sessionFolder}/context.md`, contextContent)
```

If not AUTO_YES, offer completion action:

```javascript
if (!AUTO_YES) {
  AskUserQuestion({
    questions: [{
      question: "Pipeline complete. What would you like to do?",
      header: "Completion",
      multiSelect: false,
      options: [
        { label: "Archive & Clean (Recommended)", description: "Archive session" },
        { label: "Keep Active", description: "Keep session for follow-up work" },
        { label: "Export Results", description: "Export deliverables to target directory" }
      ]
    }]
  })
}
```

**Success Criteria**:
- results.csv exported (all tasks, both modes)
- context.md generated
- All interactive agents closed
- Summary displayed to user

---

## Shared Discovery Board Protocol
|
||||
|
||||
All agents across all waves share `discoveries.ndjson`. This enables cross-role knowledge sharing.
|
||||
|
||||
**Discovery Types**:
|
||||
|
||||
| Type | Dedup Key | Data Schema | Description |
|
||||
|------|-----------|-------------|-------------|
|
||||
| `research` | `data.dimension` | `{dimension, findings[], constraints[], integration_points[]}` | Research findings |
|
||||
| `spec_artifact` | `data.doc_type` | `{doc_type, path, sections[], key_decisions[]}` | Specification document artifact |
|
||||
| `exploration` | `data.angle` | `{angle, relevant_files[], patterns[], recommendations[]}` | Codebase exploration finding |
|
||||
| `plan_task` | `data.task_id` | `{task_id, title, files[], complexity, convergence_criteria[]}` | Implementation task definition |
|
||||
| `implementation` | `data.task_id` | `{task_id, files_modified[], approach, changes_summary}` | Implementation result |
|
||||
| `test_result` | `data.framework` | `{framework, pass_rate, failures[], fix_iterations}` | Test execution result |
|
||||
| `review_finding` | `data.file` | `{file, line, severity, dimension, description, suggested_fix}` | Code review finding |
|
||||
| `checkpoint` | `data.checkpoint_id` | `{checkpoint_id, verdict, score, risks[], blocks[]}` | Supervisor checkpoint result |
|
||||
| `quality_gate` | `data.gate_id` | `{gate_id, score, dimensions{}, verdict}` | Quality gate assessment |
|
||||
|
||||
**Format**: NDJSON, each line is self-contained JSON:

```jsonl
{"ts":"2026-03-08T10:00:00+08:00","worker":"RESEARCH-001","type":"research","data":{"dimension":"domain","findings":["Auth system needs OAuth2 + RBAC"],"constraints":["Must support SSO"],"integration_points":["User service API"]}}
{"ts":"2026-03-08T10:15:00+08:00","worker":"DRAFT-001","type":"spec_artifact","data":{"doc_type":"product-brief","path":"spec/product-brief.md","sections":["Vision","Problem","Users","Goals"],"key_decisions":["OAuth2 over custom auth"]}}
{"ts":"2026-03-08T11:00:00+08:00","worker":"CHECKPOINT-001","type":"checkpoint","data":{"checkpoint_id":"CHECKPOINT-001","verdict":"pass","score":0.90,"risks":[],"blocks":[]}}
```

**Protocol Rules**:

1. Read board before own work -> leverage existing context
2. Write discoveries immediately via `echo >>` -> don't batch
3. Deduplicate -- check existing entries by type + dedup key
4. Append-only -- never modify or delete existing lines
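The read/append/dedup cycle can be sketched in Python. This is an illustrative sketch only; the helper names are invented, and real workers append via `echo >>` as shown above:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def append_discovery(board: Path, worker: str, dtype: str, data: dict) -> None:
    # Append-only (rule 4): one self-contained JSON object per line.
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "worker": worker,
        "type": dtype,
        "data": data,
    }
    with board.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def read_discoveries(board: Path) -> list[dict]:
    # Malformed lines are skipped, not fatal (see Error Handling).
    entries = []
    if not board.exists():
        return entries
    for line in board.read_text(encoding="utf-8").splitlines():
        try:
            entries.append(json.loads(line))
        except json.JSONDecodeError:
            continue
    return entries

def is_duplicate(entries: list[dict], dtype: str, dedup_key: str, value) -> bool:
    # Rule 3: deduplicate by type + dedup key before appending.
    return any(e.get("type") == dtype and e.get("data", {}).get(dedup_key) == value
               for e in entries)
```

A worker would call `read_discoveries` once at start and `is_duplicate` before each append.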

---

## Error Handling

| Error | Resolution |
|-------|------------|
| Circular dependency | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Urge convergence via send_input, then close if still timed out |
| Interactive agent failed | Mark as failed, skip dependents |
| Supervisor checkpoint blocked | AskUserQuestion: Override / Revise / Abort |
| Quality gate failed (< 60%) | Return to writer for rework |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| CLI tool fails | Agent fallback to direct implementation |
| Continue mode: no session found | List available sessions, prompt user to select |

---

## Core Rules

1. **Start Immediately**: First action is session initialization, then Phase 0/1
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged
3. **CSV is Source of Truth**: Master tasks.csv holds all state (both csv-wave and interactive)
4. **CSV First**: Default to csv-wave for tasks; only use interactive when interaction pattern requires it
5. **Context Propagation**: prev_context built from master CSV, not from memory
6. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson -- both mechanisms share it
7. **Skip on Failure**: If a dependency failed, skip the dependent task (regardless of mechanism)
8. **Lifecycle Balance**: Every spawn_agent MUST have a matching close_agent
9. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged
10. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped
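Rule 5's prev_context aggregation can be sketched as follows (a sketch, assuming the master CSV rows are already parsed into dicts with the schema's `id`/`status`/`findings` columns; `build_prev_context` is an invented name):

```python
def build_prev_context(rows: list[dict], context_from: str) -> str:
    # Aggregate findings of the referenced tasks; the master CSV, not memory,
    # is the source of truth (rules 3 and 5).
    ids = [t for t in context_from.split(";") if t]
    by_id = {row["id"]: row for row in rows}
    parts = []
    for task_id in ids:
        row = by_id.get(task_id)
        # Only completed tasks with non-empty findings contribute context.
        if row and row.get("status") == "completed" and row.get("findings"):
            parts.append(f"[Task {task_id}] {row['findings']}")
    return " | ".join(parts)
```

Failed or skipped upstream tasks simply contribute nothing, which pairs with rule 7's skip-on-failure behavior.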

---

# Team Lifecycle v4 — Agent Instruction

This instruction is loaded by team-worker agents when spawned with roles: `analyst`, `writer`, `planner`, `executor`, `tester`, `reviewer`.

---

## Role-Based Execution

### Analyst Role

**Responsibility**: Research domain, extract structured context, identify constraints.

**Input**:
- `id`: Task ID (e.g., `RESEARCH-001`)
- `title`: Task title
- `description`: Detailed task description with PURPOSE/TASK/CONTEXT/EXPECTED/CONSTRAINTS
- `role`: `analyst`
- `pipeline_phase`: `research`
- `prev_context`: Previous tasks' findings (empty for wave 1)

**Execution Protocol**:

1. **Read shared discoveries**:
   ```javascript
   const discoveries = Read(`{session}/discoveries.ndjson`)
   ```

2. **Explore domain** (use CLI analysis tools):
   ```bash
   ccw cli -p "PURPOSE: Research domain for {requirement}
   TASK: • Identify problem statement • Define target users • Extract constraints • Map integration points
   CONTEXT: @**/* | Memory: {requirement}
   EXPECTED: Structured research context with problem/users/domain/constraints
   CONSTRAINTS: Read-only analysis" --tool gemini --mode analysis --rule analysis-trace-code-execution
   ```

3. **Extract structured context**:
   - Problem statement: What problem are we solving?
   - Target users: Who will use this?
   - Domain: What domain/industry?
   - Constraints: Technical, business, regulatory constraints
   - Integration points: External systems, APIs, services

4. **Write discovery context**:
   ```javascript
   Write(`{session}/spec/discovery-context.json`, JSON.stringify({
     problem_statement: "Users need OAuth2 authentication with SSO support",
     target_users: ["Enterprise customers", "Internal teams"],
     domain: "Authentication & Authorization",
     constraints: ["Must support SAML", "GDPR compliance", "99.9% uptime"],
     integration_points: ["User service API", "Session store", "Audit log"],
     exploration_dimensions: ["Security", "Scalability", "User experience"]
   }, null, 2))
   ```

5. **Share discoveries**:
   ```bash
   echo '{"ts":"2026-03-08T10:00:00+08:00","worker":"{id}","type":"research","data":{"dimension":"domain","findings":["Auth system needs OAuth2 + RBAC"],"constraints":["Must support SSO"],"integration_points":["User service API"]}}' >> {session}/discoveries.ndjson
   ```

6. **Report result**:
   ```javascript
   report_agent_job_result({
     id: "{id}",
     status: "completed",
     findings: "Explored domain: identified OAuth2+RBAC auth pattern, 5 integration points, TypeScript/React stack. Key constraint: must support SSO.",
     quality_score: "",
     supervision_verdict: "",
     error: ""
   })
   ```

**Success Criteria**:
- Discovery context written with all required fields
- Problem statement clear and actionable
- Constraints identified
- Integration points mapped

---

### Writer Role

**Responsibility**: Generate specification documents (product brief, requirements, architecture, epics).

**Input**:
- `id`: Task ID (e.g., `DRAFT-001`)
- `title`: Task title
- `description`: Detailed task description
- `role`: `writer`
- `pipeline_phase`: `product-brief`, `requirements`, `architecture`, or `epics`
- `context_from`: Upstream task IDs
- `prev_context`: Previous tasks' findings
- `inner_loop`: `true` (writer uses inner loop for revision)

**Execution Protocol**:

1. **Read upstream artifacts**:
   ```javascript
   const discoveryContext = JSON.parse(Read(`{session}/spec/discovery-context.json`))
   const productBrief = Read(`{session}/spec/product-brief.md`) // if exists
   ```

2. **Generate document based on pipeline_phase**:

   **Product Brief** (DRAFT-001):
   ```markdown
   # Product Brief: OAuth2 Authentication System

   ## Vision
   Enable enterprise customers to authenticate users via OAuth2 with SSO support.

   ## Problem
   Current authentication system lacks OAuth2 support, blocking enterprise adoption.

   ## Target Users
   - Enterprise customers requiring SSO
   - Internal teams needing centralized auth

   ## Success Goals
   - 99.9% uptime
   - <200ms auth latency
   - GDPR compliant
   - Support 10k concurrent users

   ## Key Decisions
   - Use OAuth2 over custom auth
   - Support SAML for SSO
   - Implement RBAC for authorization
   ```

   **Requirements PRD** (DRAFT-002):
   ```markdown
   # Requirements: OAuth2 Authentication

   ## Functional Requirements

   ### FR-001: OAuth2 Authorization Flow
   **Priority**: Must Have
   **Description**: Implement OAuth2 authorization code flow
   **Acceptance Criteria**:
   - User redirected to OAuth provider
   - Authorization code exchanged for access token
   - Token stored securely in session

   ### FR-002: SSO Integration
   **Priority**: Must Have
   **Description**: Support SAML-based SSO
   **Acceptance Criteria**:
   - SAML assertion validated
   - User attributes mapped to internal user model
   - Session created with SSO context

   ## User Stories

   ### US-001: Enterprise User Login
   **As an** enterprise user
   **I want to** log in via my company's SSO
   **So that** I don't need separate credentials

   **Acceptance Criteria**:
   - Given I'm on the login page
   - When I click "Login with SSO"
   - Then I'm redirected to my company's SSO provider
   - And I'm logged in after successful authentication
   ```

   **Architecture Design** (DRAFT-003):
   ````markdown
   # Architecture: OAuth2 Authentication

   ## Component Diagram
   [User] -> [Auth Gateway] -> [OAuth Provider]
                  |
                  v
           [Session Store]
                  |
                  v
           [User Service]

   ## Tech Stack
   - **Backend**: Node.js + Express
   - **OAuth Library**: Passport.js
   - **Session Store**: Redis
   - **Database**: PostgreSQL

   ## Architecture Decision Records

   ### ADR-001: Use Passport.js for OAuth
   **Status**: Accepted
   **Context**: Need OAuth2 + SAML support
   **Decision**: Use Passport.js with passport-oauth2 and passport-saml strategies
   **Consequences**: Mature library, good community support, but adds dependency

   ## Data Model
   ```sql
   CREATE TABLE users (
     id UUID PRIMARY KEY,
     email VARCHAR(255) UNIQUE,
     oauth_provider VARCHAR(50),
     oauth_id VARCHAR(255)
   );
   ```

   ## Integration Points
   - User Service API: GET /users/:id, POST /users
   - Session Store: Redis SET/GET with TTL
   - Audit Log: POST /audit/events
   ````

   **Epics and Stories** (DRAFT-004):
   ```markdown
   # Epics: OAuth2 Authentication

   ## Epic 1: OAuth2 Core Flow
   **Priority**: Must Have (MVP)
   **Estimate**: 13 story points

   ### Stories
   1. **STORY-001**: Implement authorization endpoint (3 pts)
   2. **STORY-002**: Implement token exchange (5 pts)
   3. **STORY-003**: Implement token refresh (3 pts)
   4. **STORY-004**: Add session management (2 pts)

   ## Epic 2: SSO Integration
   **Priority**: Must Have (MVP)
   **Estimate**: 8 story points

   ### Stories
   1. **STORY-005**: Integrate SAML provider (5 pts)
   2. **STORY-006**: Map SAML attributes (3 pts)

   ## Epic 3: RBAC Authorization
   **Priority**: Should Have
   **Estimate**: 8 story points

   ### Stories
   1. **STORY-007**: Define role model (2 pts)
   2. **STORY-008**: Implement permission checks (3 pts)
   3. **STORY-009**: Add role assignment UI (3 pts)
   ```

3. **Write document to spec/ directory**:
   ```javascript
   Write(`{session}/spec/{doc-type}.md`, documentContent)
   ```

4. **Share discoveries**:
   ```bash
   echo '{"ts":"2026-03-08T10:15:00+08:00","worker":"{id}","type":"spec_artifact","data":{"doc_type":"product-brief","path":"spec/product-brief.md","sections":["Vision","Problem","Users","Goals"],"key_decisions":["OAuth2 over custom auth"]}}' >> {session}/discoveries.ndjson
   ```

5. **Report result**:
   ```javascript
   report_agent_job_result({
     id: "{id}",
     status: "completed",
     findings: "Generated product brief with vision, problem statement, target users, success goals. Key decision: OAuth2 over custom auth.",
     quality_score: "",
     supervision_verdict: "",
     error: ""
   })
   ```

**Success Criteria**:
- Document follows template structure
- All required sections present
- Terminology consistent with upstream docs
- Key decisions documented

---

### Planner Role

**Responsibility**: Break down requirements into implementation tasks.

**Input**:
- `id`: Task ID (e.g., `PLAN-001`)
- `title`: Task title
- `description`: Detailed task description
- `role`: `planner`
- `pipeline_phase`: `planning`
- `context_from`: Upstream task IDs (e.g., `QUALITY-001`)
- `prev_context`: Previous tasks' findings
- `inner_loop`: `true` (planner uses inner loop for refinement)

**Execution Protocol**:

1. **Read spec artifacts**:
   ```javascript
   const requirements = Read(`{session}/spec/requirements.md`)
   const architecture = Read(`{session}/spec/architecture.md`)
   const epics = Read(`{session}/spec/epics.md`)
   ```

2. **Explore codebase** (use CLI analysis tools):
   ```bash
   ccw cli -p "PURPOSE: Explore codebase for {requirement}
   TASK: • Identify relevant files • Find existing patterns • Locate integration points
   CONTEXT: @**/* | Memory: {requirement}
   EXPECTED: Exploration findings with file paths and patterns
   CONSTRAINTS: Read-only analysis" --tool gemini --mode analysis --rule analysis-trace-code-execution
   ```

3. **Generate implementation plan**:
   ```javascript
   const plan = {
     requirement: "{requirement}",
     complexity: "Medium", // Low (1-2 modules), Medium (3-4), High (5+)
     approach: "Strategy pattern for OAuth providers",
     tasks: [
       {
         task_id: "TASK-001",
         title: "Create OAuth provider interface",
         description: "Define provider interface with authorize/token/refresh methods",
         files: ["src/auth/providers/oauth-provider.ts"],
         depends_on: [],
         convergence_criteria: [
           "Interface compiles without errors",
           "Type definitions exported"
         ]
       },
       {
         task_id: "TASK-002",
         title: "Implement Google OAuth provider",
         description: "Concrete implementation for Google OAuth2",
         files: ["src/auth/providers/google-oauth.ts"],
         depends_on: ["TASK-001"],
         convergence_criteria: [
           "Tests pass",
           "Handles token refresh",
           "Error handling complete"
         ]
       }
     ],
     exploration_findings: {
       existing_patterns: ["Strategy pattern in payment module"],
       tech_stack: ["TypeScript", "Express", "Passport.js"],
       integration_points: ["User service", "Session store"]
     }
   }
   Write(`{session}/plan/plan.json`, JSON.stringify(plan, null, 2))
   ```

4. **Write per-task files**:
   ```javascript
   for (const task of plan.tasks) {
     Write(`{session}/plan/.task/${task.task_id}.json`, JSON.stringify(task, null, 2))
   }
   ```

5. **Share discoveries**:
   ```bash
   echo '{"ts":"2026-03-08T11:00:00+08:00","worker":"{id}","type":"plan_task","data":{"task_id":"TASK-001","title":"Create OAuth provider interface","files":["src/auth/providers/oauth-provider.ts"],"complexity":"Low","convergence_criteria":["Interface compiles"]}}' >> {session}/discoveries.ndjson
   ```

6. **Report result**:
   ```javascript
   report_agent_job_result({
     id: "{id}",
     status: "completed",
     findings: "Generated implementation plan with 2 tasks. Complexity: Medium. Approach: Strategy pattern for OAuth providers. Identified existing strategy pattern in payment module.",
     quality_score: "",
     supervision_verdict: "",
     error: ""
   })
   ```

**Success Criteria**:
- plan.json written with valid structure
- 2-7 tasks defined
- Task dependencies form DAG (no cycles)
- Convergence criteria defined per task
- Complexity assessed

---

### Executor Role

**Responsibility**: Execute implementation plan tasks.

**Input**:
- `id`: Task ID (e.g., `IMPL-001`)
- `title`: Task title
- `description`: Detailed task description
- `role`: `executor`
- `pipeline_phase`: `implementation`
- `context_from`: Upstream task IDs (e.g., `PLAN-001`)
- `prev_context`: Previous tasks' findings
- `inner_loop`: `true` (executor uses inner loop for self-repair)

**Execution Protocol**:

1. **Read implementation plan**:
   ```javascript
   const plan = JSON.parse(Read(`{session}/plan/plan.json`))
   ```

2. **For each task in plan.tasks** (ordered by depends_on):

   a. **Read context files**:
      ```javascript
      for (const file of task.files) {
        if (fileExists(file)) Read(file)
      }
      ```

   b. **Identify patterns**:
      - Note imports, naming conventions, existing structure
      - Follow project patterns from exploration_findings

   c. **Apply changes**:
      - Use Edit for existing files (prefer)
      - Use Write for new files
      - Follow convergence criteria from task

   d. **Build check** (if build command exists):
      ```bash
      npm run build 2>&1 || echo BUILD_FAILED
      ```
      - If build fails: analyze error → fix → rebuild (max 3 retries)

   e. **Verify convergence**:
      - Check each criterion in task.convergence_criteria
      - If not met: self-repair loop (max 3 iterations)

3. **Share discoveries**:
   ```bash
   echo '{"ts":"2026-03-08T11:00:00+08:00","worker":"{id}","type":"implementation","data":{"task_id":"IMPL-001","files_modified":["src/auth/oauth.ts","src/auth/rbac.ts"],"approach":"Strategy pattern for auth providers","changes_summary":"Created OAuth2 provider, RBAC middleware, session management"}}' >> {session}/discoveries.ndjson
   ```

4. **Report result**:
   ```javascript
   report_agent_job_result({
     id: "{id}",
     status: "completed",
     findings: "Implemented 2 tasks: OAuth provider interface + Google OAuth implementation. Modified 2 files. All convergence criteria met.",
     quality_score: "",
     supervision_verdict: "",
     error: ""
   })
   ```
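Step 2's "ordered by depends_on" iteration can be sketched as a ready-set loop over the plan's task objects. This is illustrative only; `order_by_deps` is an invented name, and the task shape follows plan.json above:

```python
def order_by_deps(tasks: list[dict]) -> list[dict]:
    # Emit tasks whose dependencies have all completed; a stall means a cycle.
    done: set[str] = set()
    ordered: list[dict] = []
    remaining = list(tasks)
    while remaining:
        ready = [t for t in remaining
                 if all(d in done for d in t.get("depends_on", []))]
        if not ready:
            # Mirrors the orchestrator's "Circular dependency" error handling.
            stuck = ", ".join(t["task_id"] for t in remaining)
            raise ValueError(f"circular dependency among: {stuck}")
        for t in ready:
            ordered.append(t)
            done.add(t["task_id"])
        remaining = [t for t in remaining if t["task_id"] not in done]
    return ordered
```

The executor would then run steps 2a-2e on each task in the returned order.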

**Success Criteria**:
- All tasks completed in dependency order
- Build passes (if build command exists)
- All convergence criteria met
- Code follows project patterns

---

### Tester Role

**Responsibility**: Run tests, fix failures, achieve 95% pass rate.

**Input**:
- `id`: Task ID (e.g., `TEST-001`)
- `title`: Task title
- `description`: Detailed task description
- `role`: `tester`
- `pipeline_phase`: `validation`
- `context_from`: Upstream task IDs (e.g., `IMPL-001`)
- `prev_context`: Previous tasks' findings

**Execution Protocol**:

1. **Detect test framework**:
   ```javascript
   const packageJson = JSON.parse(Read('package.json'))
   const testCommand = packageJson.scripts?.test || packageJson.scripts?.['test:unit']
   ```

2. **Run affected tests first** (if possible):
   ```bash
   npm test -- --changed
   ```

3. **Run full test suite**:
   ```bash
   npm test 2>&1
   ```

4. **Parse test results**:
   - Total tests
   - Passed tests
   - Failed tests
   - Pass rate = passed / total

5. **Self-repair loop** (if pass rate < 95%):
   - Analyze test output
   - Diagnose failure cause
   - Fix source code
   - Re-run tests
   - Max 10 iterations

6. **Share discoveries**:
   ```bash
   echo '{"ts":"2026-03-08T11:30:00+08:00","worker":"{id}","type":"test_result","data":{"framework":"vitest","pass_rate":98,"failures":["timeout in SSO integration test"],"fix_iterations":2}}' >> {session}/discoveries.ndjson
   ```

7. **Report result**:
   ```javascript
   report_agent_job_result({
     id: "{id}",
     status: "completed",
     findings: "Ran 50 tests. Pass rate: 98% (49/50). Fixed 2 failures in 2 iterations. Remaining failure: timeout in SSO integration test (non-blocking).",
     quality_score: "",
     supervision_verdict: "",
     error: ""
   })
   ```

**Success Criteria**:
- Test suite executed
- Pass rate >= 95%
- Failures fixed (max 10 iterations)
- Test results documented

---

### Reviewer Role

**Responsibility**: Multi-dimensional code review or quality gate scoring.

**Input**:
- `id`: Task ID (e.g., `REVIEW-001` or `QUALITY-001`)
- `title`: Task title
- `description`: Detailed task description
- `role`: `reviewer`
- `pipeline_phase`: `review` or `readiness`
- `context_from`: Upstream task IDs
- `prev_context`: Previous tasks' findings

**Execution Protocol**:

**For Code Review** (REVIEW-*):

1. **Read implementation files**:
   ```javascript
   const plan = JSON.parse(Read(`{session}/plan/plan.json`))
   const modifiedFiles = plan.tasks.flatMap(t => t.files)
   ```

2. **Multi-dimensional review**:
   - **Quality**: Code style, naming, structure
   - **Security**: Input validation, auth checks, SQL injection
   - **Architecture**: Follows design, proper abstractions
   - **Requirements**: Covers all FRs, acceptance criteria met

3. **Determine verdict**:
   - `BLOCK`: Critical issues, cannot merge
   - `CONDITIONAL`: Minor issues, can merge with fixes
   - `APPROVE`: No issues, ready to merge

4. **Write review report**:
   ```markdown
   # Code Review: {id}

   ## Verdict: APPROVE

   ## Quality (8/10)
   - Code style consistent
   - Naming clear and semantic
   - Minor: some functions could be extracted

   ## Security (9/10)
   - Input validation present
   - Auth checks correct
   - SQL injection prevented

   ## Architecture (8/10)
   - Follows strategy pattern
   - Proper abstractions
   - Minor: could use dependency injection

   ## Requirements Coverage (10/10)
   - All FRs implemented
   - Acceptance criteria met
   - Edge cases handled

   ## Issues
   (none)

   ## Recommendations
   1. Extract validation logic to separate module
   2. Add dependency injection for testability
   ```

**For Quality Gate** (QUALITY-*):

1. **Read all spec artifacts**:
   ```javascript
   const productBrief = Read(`{session}/spec/product-brief.md`)
   const requirements = Read(`{session}/spec/requirements.md`)
   const architecture = Read(`{session}/spec/architecture.md`)
   const epics = Read(`{session}/spec/epics.md`)
   ```

2. **Score 4 dimensions** (25% each):
   - **Completeness**: All sections present, no gaps
   - **Consistency**: Terminology aligned, decisions traced
   - **Traceability**: Vision → requirements → architecture → epics
   - **Depth**: Sufficient detail for implementation

3. **Calculate overall score**:
   ```javascript
   const score = (completeness + consistency + traceability + depth) / 4
   ```

4. **Determine gate verdict**:
   - `>= 80%`: PASS (proceed to implementation)
   - `60-79%`: REVIEW (revisions recommended)
   - `< 60%`: FAIL (return to writer for rework)

5. **Write quality report**:
   ```markdown
   # Quality Gate: {id}

   ## Overall Score: 82.5%

   ## Dimension Scores
   - Completeness: 90%
   - Consistency: 85%
   - Traceability: 80%
   - Depth: 75%

   ## Verdict: PASS

   ## Findings
   - All spec documents present and complete
   - Terminology consistent across docs
   - Clear trace from vision to epics
   - Sufficient detail for implementation
   - Minor: architecture could include more error handling details
   ```

6. **Share discoveries**:
   ```bash
   echo '{"ts":"2026-03-08T12:00:00+08:00","worker":"{id}","type":"quality_gate","data":{"gate_id":"QUALITY-001","score":82.5,"dimensions":{"completeness":90,"consistency":85,"traceability":80,"depth":75},"verdict":"pass"}}' >> {session}/discoveries.ndjson
   ```

7. **Report result**:
   ```javascript
   report_agent_job_result({
     id: "{id}",
     status: "completed",
     findings: "Quality gate: Completeness 90%, Consistency 85%, Traceability 80%, Depth 75%. Overall: 82.5% PASS.",
     quality_score: "82.5",
     supervision_verdict: "",
     error: ""
   })
   ```
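Steps 3-4 (equal-weight averaging plus the threshold verdicts) can be sketched as follows; `quality_gate` is an invented name for illustration:

```python
def quality_gate(dimensions: dict[str, float]) -> tuple[float, str]:
    # Equal 25% weights: the overall score is the plain average
    # of the four dimension scores (each on a 0-100 scale).
    score = sum(dimensions.values()) / len(dimensions)
    if score >= 80:
        verdict = "PASS"      # proceed to implementation
    elif score >= 60:
        verdict = "REVIEW"    # revisions recommended
    else:
        verdict = "FAIL"      # return to writer for rework
    return score, verdict
```

With the example dimensions above (90, 85, 80, 75) this yields 82.5 and PASS.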

**Success Criteria**:
- All dimensions scored
- Report written with findings
- Verdict determined
- Score >= 80% for quality gate pass

---

## Inner Loop Protocol

Roles with `inner_loop: true` support self-repair:

| Scenario | Max Iterations | Action |
|----------|---------------|--------|
| Build failure | 3 | Analyze error → fix source → rebuild |
| Test failure | 10 | Analyze failure → fix source → re-run tests |
| Convergence not met | 3 | Check criteria → adjust implementation → re-verify |
| Document incomplete | 2 | Review template → add missing sections → re-validate |

After max iterations: report error, mark task as failed.
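The bounded retry pattern shared by all four scenarios can be sketched as a small helper (a sketch; `inner_loop`, `attempt`, and `repair` are invented names):

```python
from typing import Callable

def inner_loop(attempt: Callable[[], bool],
               repair: Callable[[], None],
               max_iterations: int) -> bool:
    # Try up to max_iterations times, repairing after each failure.
    # Returning False means the budget is spent: the caller reports
    # the error and marks the task as failed.
    for _ in range(max_iterations):
        if attempt():
            return True
        repair()
    return False
```

A tester would call it with `max_iterations=10` and an `attempt` that re-runs the suite; an executor would use `max_iterations=3` around its build check.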

---

## Shared Discovery Board

All roles read/write `{session}/discoveries.ndjson`:

**Discovery Types**:
- `research`: Research findings
- `spec_artifact`: Specification document
- `exploration`: Codebase exploration
- `plan_task`: Implementation task definition
- `implementation`: Implementation result
- `test_result`: Test execution result
- `review_finding`: Code review finding
- `checkpoint`: Supervisor checkpoint result
- `quality_gate`: Quality gate assessment

**Protocol**:
1. Read discoveries at start
2. Append discoveries during execution (never modify existing)
3. Deduplicate by type + dedup key

---

## Error Handling

| Error | Resolution |
|-------|------------|
| Upstream artifact not found | Report error, mark failed |
| Spec document invalid format | Report error, mark failed |
| Plan JSON corrupt | Report error, mark failed |
| Build fails after 3 retries | Mark task failed, report error |
| Tests fail after 10 retries | Mark task failed, report error |
| CLI tool timeout | Fallback to direct implementation |
| Dependency task failed | Skip dependent tasks, report error |

---

## Output Format

All roles use `report_agent_job_result` with this schema:

```json
{
  "id": "{id}",
  "status": "completed" | "failed",
  "findings": "Key discoveries (max 500 chars)",
  "quality_score": "0-100 (reviewer only)",
  "supervision_verdict": "pass|warn|block (supervisor only)",
  "error": ""
}
```

---

# Team Lifecycle v4 -- CSV Schema

## Master CSV: tasks.csv

### Column Definitions

#### Input Columns (Set by Decomposer)

| Column | Type | Required | Description | Example |
|--------|------|----------|-------------|---------|
| `id` | string | Yes | Unique task identifier | `"RESEARCH-001"` |
| `title` | string | Yes | Short task title | `"Domain research"` |
| `description` | string | Yes | Detailed task description (self-contained) | `"Explore domain, extract structured context..."` |
| `role` | string | Yes | Worker role: analyst, writer, planner, executor, tester, reviewer, supervisor | `"analyst"` |
| `pipeline_phase` | string | Yes | Lifecycle phase: research, product-brief, requirements, architecture, epics, checkpoint, readiness, planning, implementation, validation, review | `"research"` |
| `deps` | string | No | Semicolon-separated dependency task IDs | `"RESEARCH-001"` |
| `context_from` | string | No | Semicolon-separated task IDs for context | `"RESEARCH-001"` |
| `exec_mode` | enum | Yes | Execution mechanism: `csv-wave` or `interactive` | `"csv-wave"` |

#### Computed Columns (Set by Wave Engine)

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `wave` | integer | Wave number (1-based, from topological sort) | `2` |
| `prev_context` | string | Aggregated findings from context_from tasks (per-wave CSV only) | `"[Task RESEARCH-001] Explored domain..."` |
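The `wave` column's topological computation can be sketched as 1-based dependency depth; `assign_waves` is an invented name, and the cycle check mirrors the orchestrator's "Circular dependency" error handling:

```python
def assign_waves(tasks: dict[str, list[str]]) -> dict[str, int]:
    # tasks maps task id -> list of dependency ids.
    # A task's wave is one more than the deepest of its dependencies,
    # so independent tasks share wave 1 and can run concurrently.
    waves: dict[str, int] = {}
    visiting: set[str] = set()

    def depth(tid: str) -> int:
        if tid in waves:
            return waves[tid]
        if tid in visiting:
            raise ValueError(f"circular dependency at {tid}")
        visiting.add(tid)
        deps = tasks.get(tid, [])
        waves[tid] = 1 + max((depth(d) for d in deps), default=0)
        visiting.discard(tid)
        return waves[tid]

    for tid in tasks:
        depth(tid)
    return waves
```

Tasks that end up with the same wave number (e.g. TEST-001 and REVIEW-001 in the example below, which share a dependency) execute in the same wave.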
|
||||
|
||||
#### Output Columns (Set by Agent)
|
||||
|
||||
| Column | Type | Description | Example |
|
||||
|--------|------|-------------|---------|
|
||||
| `status` | enum | `pending` -> `completed` / `failed` / `skipped` | `"completed"` |
|
||||
| `findings` | string | Key discoveries (max 500 chars) | `"Identified 5 integration points..."` |
|
||||
| `quality_score` | string | Quality gate score (0-100) for reviewer tasks | `"85"` |
|
||||
| `supervision_verdict` | string | Checkpoint verdict: `pass` / `warn` / `block` | `"pass"` |
|
||||
| `error` | string | Error message if failed | `""` |
|
||||
|
||||
---
|
||||
|
||||
### exec_mode Values
|
||||
|
||||
| Value | Mechanism | Description |
|
||||
|-------|-----------|-------------|
|
||||
| `csv-wave` | `spawn_agents_on_csv` | One-shot batch execution within wave |
|
||||
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round individual execution |
|
||||
|
||||
Interactive tasks appear in master CSV for dependency tracking but are NOT included in wave-{N}.csv files.
|
||||
|
||||
---
|
||||
|
||||
### Example Data

```csv
id,title,description,role,pipeline_phase,deps,context_from,exec_mode,wave,status,findings,quality_score,supervision_verdict,error
"RESEARCH-001","Domain research","Explore domain and competitors. Extract structured context: problem statement, target users, domain, constraints, exploration dimensions. Use CLI analysis tools.","analyst","research","","","csv-wave","1","pending","","","",""
"DRAFT-001","Product brief","Generate product brief from research context. Include vision statement, problem definition, target users, success goals. Use templates/product-brief.md template.","writer","product-brief","RESEARCH-001","RESEARCH-001","csv-wave","2","pending","","","",""
"DRAFT-002","Requirements PRD","Generate requirements PRD with functional requirements (FR-NNN), acceptance criteria, MoSCoW prioritization, user stories.","writer","requirements","DRAFT-001","DRAFT-001","csv-wave","3","pending","","","",""
"CHECKPOINT-001","Brief-PRD consistency","Verify: vision->requirements trace, terminology alignment, scope consistency, decision continuity, artifact existence.","supervisor","checkpoint","DRAFT-002","DRAFT-001;DRAFT-002","interactive","4","pending","","","",""
"DRAFT-003","Architecture design","Generate architecture with component diagram, tech stack justification, ADRs, data model, integration points.","writer","architecture","CHECKPOINT-001","DRAFT-002;CHECKPOINT-001","csv-wave","5","pending","","","",""
"DRAFT-004","Epics and stories","Generate 2-8 epics with 3-12 stories each. Include MVP subset, story format with ACs and estimates.","writer","epics","DRAFT-003","DRAFT-003","csv-wave","6","pending","","","",""
"CHECKPOINT-002","Full spec consistency","Verify: 4-doc terminology, decision chain, architecture-epics alignment, quality trend, open questions.","supervisor","checkpoint","DRAFT-004","DRAFT-001;DRAFT-002;DRAFT-003;DRAFT-004","interactive","7","pending","","","",""
"QUALITY-001","Readiness gate","Score spec quality across Completeness, Consistency, Traceability, Depth (25% each). Gate: >=80% pass, 60-79% review, <60% fail.","reviewer","readiness","CHECKPOINT-002","DRAFT-001;DRAFT-002;DRAFT-003;DRAFT-004","csv-wave","8","pending","","","",""
"PLAN-001","Implementation planning","Explore codebase, generate plan.json + TASK-*.json (2-7 tasks), assess complexity (Low/Medium/High).","planner","planning","QUALITY-001","QUALITY-001","csv-wave","9","pending","","","",""
"CHECKPOINT-003","Plan-input alignment","Verify: plan covers requirements, complexity sanity, dependency chain, execution method, upstream context.","supervisor","checkpoint","PLAN-001","PLAN-001","interactive","10","pending","","","",""
"IMPL-001","Code implementation","Execute implementation plan tasks. Follow existing code patterns. Run convergence checks.","executor","implementation","CHECKPOINT-003","PLAN-001","csv-wave","11","pending","","","",""
"TEST-001","Test execution","Detect test framework. Run affected tests first, then full suite. Fix failures (max 10 iterations, 95% target).","tester","validation","IMPL-001","IMPL-001","csv-wave","12","pending","","","",""
"REVIEW-001","Code review","Multi-dimensional code review: quality, security, architecture, requirements coverage. Verdict: BLOCK/CONDITIONAL/APPROVE.","reviewer","review","IMPL-001","IMPL-001","csv-wave","12","pending","","","",""
```
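As a rough illustration (not part of the skill itself), the wave engine's grouping of rows like these can be sketched in Python — the inlined rows are a trimmed subset of the columns and tasks above:

```python
import csv
import io
from collections import defaultdict

# A few rows shaped like the example CSV, inlined for illustration.
rows = io.StringIO(
    "id,role,deps,exec_mode,wave\n"
    "RESEARCH-001,analyst,,csv-wave,1\n"
    "DRAFT-001,writer,RESEARCH-001,csv-wave,2\n"
    "CHECKPOINT-001,supervisor,DRAFT-002,interactive,4\n"
)

# Group task IDs by their wave number; waves execute in ascending order.
waves = defaultdict(list)
for task in csv.DictReader(rows):
    waves[int(task["wave"])].append(task["id"])

for wave in sorted(waves):
    print(f"wave {wave}: {waves[wave]}")
```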
---
### Column Lifecycle

```
Decomposer (Phase 1)      Wave Engine (Phase 2)      Agent (Execution)
---------------------     ---------------------      -----------------
id             ---------> id              ---------> id
title          ---------> title           ---------> (reads)
description    ---------> description     ---------> (reads)
role           ---------> role            ---------> (reads)
pipeline_phase ---------> pipeline_phase  ---------> (reads)
deps           ---------> deps            ---------> (reads)
context_from   ---------> context_from    ---------> (reads)
exec_mode      ---------> exec_mode       ---------> (reads)
                          wave            ---------> (reads)
                          prev_context    ---------> (reads)
                                                     status
                                                     findings
                                                     quality_score
                                                     supervision_verdict
                                                     error
```
---
## Output Schema (JSON)

Agent output via `report_agent_job_result` (csv-wave tasks):

```json
{
  "id": "RESEARCH-001",
  "status": "completed",
  "findings": "Explored domain: identified OAuth2+RBAC auth pattern, 5 integration points, TypeScript/React stack. Key constraint: must support SSO.",
  "quality_score": "",
  "supervision_verdict": "",
  "error": ""
}
```
Quality gate output:

```json
{
  "id": "QUALITY-001",
  "status": "completed",
  "findings": "Quality gate: Completeness 90%, Consistency 85%, Traceability 80%, Depth 75%. Overall: 82.5% PASS.",
  "quality_score": "82",
  "supervision_verdict": "",
  "error": ""
}
```
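The gate math behind this example can be sketched as follows — the function name and dict shape are illustrative; only the equal 25%-per-dimension weighting and the >=80 / 60-79 / <60 thresholds come from the pipeline definition:

```python
def gate_verdict(dimensions: dict[str, float]) -> tuple[float, str]:
    # Equal weights (25% each for four dimensions) reduce to a plain average.
    score = sum(dimensions.values()) / len(dimensions)
    if score >= 80:
        return score, "PASS"
    if score >= 60:
        return score, "REVIEW"
    return score, "FAIL"

score, verdict = gate_verdict(
    {"completeness": 90, "consistency": 85, "traceability": 80, "depth": 75}
)
print(score, verdict)  # 82.5 PASS — matching the example findings above
```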
Interactive tasks (CHECKPOINT-*) write their results as JSON to `interactive/{id}-result.json`.
---
## Discovery Types

| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `research` | `data.dimension` | `{dimension, findings[], constraints[], integration_points[]}` | Research context |
| `spec_artifact` | `data.doc_type` | `{doc_type, path, sections[], key_decisions[]}` | Specification document |
| `exploration` | `data.angle` | `{angle, relevant_files[], patterns[], recommendations[]}` | Codebase exploration |
| `plan_task` | `data.task_id` | `{task_id, title, files[], complexity, convergence_criteria[]}` | Plan task definition |
| `implementation` | `data.task_id` | `{task_id, files_modified[], approach, changes_summary}` | Implementation result |
| `test_result` | `data.framework` | `{framework, pass_rate, failures[], fix_iterations}` | Test result |
| `review_finding` | `data.file` | `{file, line, severity, dimension, description, suggested_fix}` | Review finding |
| `checkpoint` | `data.checkpoint_id` | `{checkpoint_id, verdict, score, risks[], blocks[]}` | Checkpoint result |
| `quality_gate` | `data.gate_id` | `{gate_id, score, dimensions{}, verdict}` | Quality assessment |
### Discovery NDJSON Format

```jsonl
{"ts":"2026-03-08T10:00:00+08:00","worker":"RESEARCH-001","type":"research","data":{"dimension":"domain","findings":["Auth system needs OAuth2 + RBAC"],"constraints":["Must support SSO"],"integration_points":["User service API"]}}
{"ts":"2026-03-08T10:15:00+08:00","worker":"DRAFT-001","type":"spec_artifact","data":{"doc_type":"product-brief","path":"spec/product-brief.md","sections":["Vision","Problem","Users","Goals"],"key_decisions":["OAuth2 over custom auth"]}}
{"ts":"2026-03-08T11:00:00+08:00","worker":"IMPL-001","type":"implementation","data":{"task_id":"IMPL-001","files_modified":["src/auth/oauth.ts","src/auth/rbac.ts"],"approach":"Strategy pattern for auth providers","changes_summary":"Created OAuth2 provider, RBAC middleware, session management"}}
{"ts":"2026-03-08T11:30:00+08:00","worker":"TEST-001","type":"test_result","data":{"framework":"vitest","pass_rate":98,"failures":["timeout in SSO integration test"],"fix_iterations":2}}
```
> Both csv-wave and interactive agents read/write the same discoveries.ndjson file.
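A minimal sketch of how a consumer might deduplicate this stream with the per-type dedup keys from the table above (last write wins; only a few type-to-key mappings are shown, and the inlined lines are abbreviated examples):

```python
import json

# Subset of the Discovery Types table: discovery type -> field inside "data"
# that serves as the dedup key.
DEDUP_KEYS = {
    "research": "dimension",
    "spec_artifact": "doc_type",
    "implementation": "task_id",
    "test_result": "framework",
}

# Two research discoveries on the same dimension; the later one supersedes.
lines = [
    '{"worker":"RESEARCH-001","type":"research","data":{"dimension":"domain","findings":["OAuth2 + RBAC"]}}',
    '{"worker":"RESEARCH-002","type":"research","data":{"dimension":"domain","findings":["SSO required"]}}',
]

latest = {}
for line in lines:
    discovery = json.loads(line)
    key_field = DEDUP_KEYS.get(discovery["type"])
    if key_field:
        # Keyed by (type, dedup value); later lines overwrite earlier ones.
        latest[(discovery["type"], discovery["data"][key_field])] = discovery

print(len(latest))  # the two "domain" entries collapse to one
```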
---
## Cross-Mechanism Context Flow

| Source | Target | Mechanism |
|--------|--------|-----------|
| CSV task findings | Interactive task | Injected via spawn message or send_input |
| Interactive task result | CSV task prev_context | Read from interactive/{id}-result.json |
| Any agent discovery | Any agent | Shared via discoveries.ndjson |
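A hypothetical resolver for this flow — the row shape, `findings` field, and run-directory layout are assumptions based on the table above, not a prescribed implementation:

```python
import json
from pathlib import Path

def resolve_context(context_from: str, rows: dict[str, dict], run_dir: Path) -> list[str]:
    """Collect prev_context parts for a task's semicolon-separated context_from list."""
    parts = []
    for dep_id in filter(None, context_from.split(";")):
        row = rows[dep_id]
        if row["exec_mode"] == "interactive":
            # Interactive results live outside the CSV, in per-task JSON files.
            result_path = run_dir / "interactive" / f"{dep_id}-result.json"
            result = json.loads(result_path.read_text())
            parts.append(result.get("findings", ""))
        else:
            # csv-wave results are read straight from the findings column.
            parts.append(row["findings"])
    return parts
```

Usage would pair each `context_from` entry with either the CSV `findings` column or the matching `interactive/{id}-result.json` file, so downstream tasks see one uniform context list regardless of mechanism.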
---
## Validation Rules

| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" |
| context_from valid | All context IDs exist and are in earlier waves | "Invalid context_from: {id}" |
| exec_mode valid | Value is `csv-wave` or `interactive` | "Invalid exec_mode: {value}" |
| Description non-empty | Every task has a description | "Empty description for task: {id}" |
| Status enum | status in {pending, completed, failed, skipped} | "Invalid status: {status}" |
| Valid role | role in {analyst, writer, planner, executor, tester, reviewer, supervisor} | "Invalid role: {role}" |
| Valid pipeline_phase | pipeline_phase in {research, product-brief, requirements, architecture, epics, checkpoint, readiness, planning, implementation, validation, review} | "Invalid pipeline_phase: {value}" |
| Cross-mechanism deps | Interactive->CSV deps resolve correctly | "Cross-mechanism dependency unresolvable: {id}" |
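The structural checks above can be sketched as a validator — a minimal illustration using Kahn's algorithm for cycle detection; the function name and exact error strings are illustrative:

```python
def validate(tasks: list[dict]) -> list[str]:
    errors = []
    ids = [t["id"] for t in tasks]
    if len(ids) != len(set(ids)):
        errors.append("duplicate task IDs")
    # deps[task] = its prerequisite IDs (semicolon-separated in the CSV).
    deps = {t["id"]: [d for d in t["deps"].split(";") if d] for t in tasks}
    dependents = {tid: [] for tid in deps}
    indegree = {tid: 0 for tid in deps}
    for tid, prereqs in deps.items():
        for d in prereqs:
            if d == tid:
                errors.append(f"Self-dependency: {tid}")
            elif d not in deps:
                errors.append(f"Unknown dependency: {d}")
            else:
                dependents[d].append(tid)
                indegree[tid] += 1
    # Kahn's algorithm: repeatedly remove tasks with no remaining prereqs.
    ready = [tid for tid, n in indegree.items() if n == 0]
    seen = 0
    while ready:
        node = ready.pop()
        seen += 1
        for nxt in dependents[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if seen < len(deps):  # some tasks were never freed -> a cycle remains
        cyclic = sorted(tid for tid, n in indegree.items() if n > 0)
        errors.append(f"Circular dependency detected involving: {cyclic}")
    return errors
```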