feat: migrate all codex team skills from spawn_agents_on_csv to spawn_agent + wait_agent architecture

- Delete 21 old team skill directories using CSV-wave pipeline pattern (~100+ files)
- Delete old team-lifecycle (v3) and team-planex-v2
- Create generic team-worker.toml and team-supervisor.toml (replacing tlv4-specific TOMLs)
- Convert 19 team skills from Claude Code format (Agent/SendMessage/TaskCreate)
  to Codex format (spawn_agent/wait_agent/tasks.json/request_user_input)
- Update team-lifecycle-v4 to use generic agent types (team_worker/team_supervisor)
- Convert all coordinator role files: dispatch.md, monitor.md, role.md
- Convert all worker role files: remove run_in_background, fix Bash syntax
- Convert all specs/pipelines.md references
- Final state: 20 team skills, 217 .md files, zero Claude Code API residuals

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
This commit is contained in:
catlog22
2026-03-24 16:54:48 +08:00
parent 54283e5dbb
commit 1e560ab8e8
334 changed files with 28996 additions and 35516 deletions


@@ -1,178 +1,236 @@
# Tasks Schema (JSON)

## 1. Overview
Codex uses `tasks.json` as the single source of truth for task state management, replacing Claude Code's `TaskCreate`/`TaskUpdate` API calls and CSV-based state tracking. Each session has one `tasks.json` file and multiple `discoveries/{task_id}.json` files.
## 2. tasks.json Top-Level Structure
```json
{
  "session_id": "string — unique session identifier (e.g., tlv4-auth-system-20260324)",
  "pipeline": "string — one of: spec-only | impl-only | full-lifecycle | fe-only | fullstack | full-lifecycle-fe",
  "requirement": "string — original user requirement text",
  "created_at": "string — ISO 8601 timestamp with timezone",
  "supervision": "boolean — whether CHECKPOINT tasks are active (default: true)",
  "completed_waves": "number[] — list of completed wave numbers",
  "active_agents": "object — map of task_id -> agent_id for currently running agents",
  "tasks": "object — map of task_id -> TaskEntry"
}
```
### Required Fields
| Field | Type | Description |
|-------|------|-------------|
| `session_id` | string | Unique session identifier, format: `tlv4-<topic>-<YYYYMMDD>` |
| `pipeline` | string | Selected pipeline name from pipelines.md |
| `requirement` | string | Original user requirement, verbatim |
| `created_at` | string | ISO 8601 creation timestamp |
| `supervision` | boolean | Enable/disable CHECKPOINT tasks |
| `completed_waves` | number[] | Waves that have finished execution |
| `active_agents` | object | Runtime tracking: `{ "TASK-ID": "agent-id" }` |
| `tasks` | object | Task registry: `{ "TASK-ID": TaskEntry }` |
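A minimal loader can enforce these required fields up front. The sketch below is illustrative, not part of the skill itself; the `load_session` name and type-checking approach are assumptions.

```python
import json

# Required top-level fields of tasks.json and their expected Python types
# (list/dict stand in for number[]/object).
REQUIRED_FIELDS = {
    "session_id": str,
    "pipeline": str,
    "requirement": str,
    "created_at": str,
    "supervision": bool,
    "completed_waves": list,
    "active_agents": dict,
    "tasks": dict,
}

def load_session(path):
    """Load tasks.json and verify every required top-level field is present
    with the expected type."""
    with open(path) as f:
        session = json.load(f)
    for field, expected in REQUIRED_FIELDS.items():
        if field not in session:
            raise ValueError(f"tasks.json missing required field: {field}")
        if not isinstance(session[field], expected):
            raise TypeError(f"{field} must be {expected.__name__}")
    return session
```

A coordinator would call this once at session start and fail fast on a malformed file rather than discover the problem mid-wave.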
## 3. TaskEntry Schema
```json
{
  "title": "string — short task title",
  "description": "string — detailed task description",
  "role": "string — role name matching roles/<role>/role.md",
  "pipeline_phase": "string — phase from pipelines.md Task Metadata Registry",
  "deps": "string[] — task IDs that must complete before this task starts",
  "context_from": "string[] — task IDs whose discoveries to load as upstream context",
  "wave": "number — execution wave (1-based, determines parallel grouping)",
  "status": "string — one of: pending | in_progress | completed | failed | skipped",
  "findings": "string | null — summary of task output (max 500 chars)",
  "quality_score": "number | null — 0-100, set by reviewer roles only",
  "supervision_verdict": "string | null — pass | warn | block, set by CHECKPOINT tasks only",
  "error": "string | null — error description if status is failed or skipped"
}
```
### Field Definitions
| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `title` | string | Yes | - | Human-readable task name |
| `description` | string | Yes | - | What the task should accomplish |
| `role` | string | Yes | - | One of: analyst, writer, planner, implementer, tester, reviewer, supervisor, orchestrator, architect, security-expert, performance-optimizer, data-engineer, devops-engineer, ml-engineer |
| `pipeline_phase` | string | Yes | - | One of: research, product-brief, requirements, architecture, epics, readiness, checkpoint, planning, arch-detail, orchestration, implementation, validation, review |
| `deps` | string[] | Yes | `[]` | Task IDs that block execution. All must be `completed` before this task starts |
| `context_from` | string[] | Yes | `[]` | Task IDs whose `discoveries/{id}.json` files are loaded as upstream context |
| `wave` | number | Yes | - | Execution wave number. Tasks in the same wave run in parallel |
| `status` | string | Yes | `"pending"` | Current task state |
| `findings` | string\|null | No | `null` | Populated on completion. Summary of key output |
| `quality_score` | number\|null | No | `null` | Only set by QUALITY-* and REVIEW-* tasks |
| `supervision_verdict` | string\|null | No | `null` | Only set by CHECKPOINT-* tasks |
| `error` | string\|null | No | `null` | Set when status is `failed` or `skipped` |
### Status Lifecycle

```
pending -> in_progress -> completed
                       -> failed
pending -> skipped   (when upstream dependency failed/skipped)
```
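The legal status transitions can be encoded as a small whitelist. The `check_transition` helper below is a hypothetical sketch, not an API the skill ships:

```python
# Allowed (old, new) status pairs per the lifecycle above.
VALID_TRANSITIONS = {
    ("pending", "in_progress"),
    ("in_progress", "completed"),
    ("in_progress", "failed"),
    ("pending", "skipped"),
}

def check_transition(old, new):
    """Raise if a status change is not one of the allowed transitions."""
    if (old, new) not in VALID_TRANSITIONS:
        raise ValueError(f"Invalid status transition: {old} -> {new}")
```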
## 4. discoveries/{task_id}.json Schema
Each task writes a discovery file on completion. This replaces Claude Code's `team_msg(type="state_update")`.
```json
{
"task_id": "string — matches the task key in tasks.json",
"worker": "string — same as task_id (identifies the producing agent)",
"timestamp": "string — ISO 8601 completion timestamp",
"type": "string — same as pipeline_phase",
"status": "string — completed | failed",
"findings": "string — summary (max 500 chars)",
"quality_score": "number | null",
"supervision_verdict": "string | null — pass | warn | block",
"error": "string | null",
"data": {
"key_findings": "string[] — max 5 items, each under 100 chars",
"decisions": "string[] — include rationale, not just choice",
"files_modified": "string[] — only for implementation tasks",
"verification": "string — self-validated | peer-reviewed | tested",
"risks_logged": "number — CHECKPOINT only: count of risks",
"blocks_detected": "number — CHECKPOINT only: count of blocking issues"
},
"artifacts_produced": "string[] — paths to generated artifact files"
}
```
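A completing agent could produce this file with a helper along these lines. The `write_discovery` name and argument list are illustrative assumptions; only the output layout follows the schema above.

```python
import json
import os
from datetime import datetime, timezone

def write_discovery(session_dir, task_id, task, key_findings, decisions, artifacts):
    """Write discoveries/{task_id}.json for a completed task.

    `task` is the TaskEntry from tasks.json; `session_dir` holding a
    discoveries/ subdirectory is an assumed layout.
    """
    entry = {
        "task_id": task_id,
        "worker": task_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "type": task["pipeline_phase"],
        "status": "completed",
        "findings": (task.get("findings") or "")[:500],   # max 500 chars
        "quality_score": task.get("quality_score"),
        "supervision_verdict": task.get("supervision_verdict"),
        "error": None,
        "data": {
            "key_findings": key_findings[:5],             # max 5 items
            "decisions": decisions,
            "verification": "self-validated",
        },
        "artifacts_produced": artifacts,
    }
    out_dir = os.path.join(session_dir, "discoveries")
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, f"{task_id}.json")
    with open(path, "w") as f:
        json.dump(entry, f, indent=2)
    return path
```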
| Source | Target | Mechanism |
|--------|--------|-----------|
| CSV task findings | Interactive task | Injected via spawn message or send_input |
| Interactive task result | CSV task prev_context | Read from interactive/{id}-result.json |
| Any agent discovery | Any agent | Shared via discoveries.ndjson |
## 5. Validation Rules
### Structural Validation
| Rule | Description |
|------|-------------|
| Unique IDs | Every key in `tasks` must be unique |
| Valid deps | Every entry in `deps` must reference an existing task ID |
| Valid context_from | Every entry in `context_from` must reference an existing task ID |
| No cycles | Dependency graph must be a DAG (no circular dependencies) |
| Wave ordering | If task A depends on task B, then A.wave > B.wave |
| Role exists | `role` must match a directory in `.codex/skills/team-lifecycle-v4/roles/` |
| Pipeline phase valid | `pipeline_phase` must be one of the defined phases |
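These structural rules can be checked in one pass plus a topological sort. The `validate_structure` sketch below assumes `tasks` is the parsed `tasks` map from tasks.json; it covers reference validity, wave ordering, and cycle detection, and leaves role/phase lookups to the caller.

```python
def validate_structure(tasks):
    """Check deps/context_from references, wave ordering, and acyclicity."""
    for tid, t in tasks.items():
        for ref in t["deps"] + t["context_from"]:
            if ref not in tasks:
                raise ValueError(f"Unknown reference {ref} in task {tid}")
        for dep in t["deps"]:
            # If A depends on B, A.wave must be strictly greater than B.wave.
            if tasks[dep]["wave"] >= t["wave"]:
                raise ValueError(f"Wave ordering violated: {tid} depends on {dep}")
    # Cycle check via Kahn's algorithm: a topological sort must consume all tasks.
    indeg = {tid: len(t["deps"]) for tid, t in tasks.items()}
    ready = [tid for tid, d in indeg.items() if d == 0]
    seen = 0
    while ready:
        tid = ready.pop()
        seen += 1
        for other, t in tasks.items():
            if tid in t["deps"]:
                indeg[other] -= 1
                if indeg[other] == 0:
                    ready.append(other)
    if seen != len(tasks):
        raise ValueError("Circular dependency detected")
```

Note that strict wave ordering already rules out cycles; the explicit sort mirrors the "No cycles" rule so each table row maps to a check.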
### Runtime Validation
| Rule | Description |
|------|-------------|
| Status transitions | Only valid transitions: pending->in_progress, in_progress->completed/failed, pending->skipped |
| Dependency check | A task can only move to `in_progress` if all `deps` are `completed` |
| Skip propagation | If any dep is `failed` or `skipped`, task is automatically `skipped` |
| Discovery required | A `completed` task MUST have a corresponding `discoveries/{task_id}.json` file |
| Findings required | A `completed` task MUST have non-null `findings` |
| Error required | A `failed` or `skipped` task MUST have non-null `error` |
| Supervision fields | CHECKPOINT tasks MUST set `supervision_verdict` on completion |
| Quality fields | QUALITY-*/REVIEW-* tasks SHOULD set `quality_score` on completion |
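Skip propagation and the completion invariants can be enforced mechanically. Both helpers below are illustrative sketches; the `discoveries/{task_id}.json` layout is the only part taken from the schema.

```python
import os

def propagate_skips(tasks):
    """Mark pending tasks skipped when any dep is failed or skipped,
    repeating until no more tasks change (transitive propagation)."""
    changed = True
    while changed:
        changed = False
        for t in tasks.values():
            if t["status"] == "pending" and any(
                tasks[d]["status"] in ("failed", "skipped") for d in t["deps"]
            ):
                t["status"] = "skipped"
                t["error"] = "upstream dependency failed or skipped"
                changed = True

def enforce_completion_invariants(session_dir, tasks):
    """Completed tasks need findings and a discovery file; failed or
    skipped tasks need a non-null error."""
    for tid, t in tasks.items():
        if t["status"] == "completed":
            if not t.get("findings"):
                raise ValueError(f"Completed task without findings: {tid}")
            path = os.path.join(session_dir, "discoveries", f"{tid}.json")
            if not os.path.exists(path):
                raise ValueError(f"Missing discovery file for completed task: {tid}")
        elif t["status"] in ("failed", "skipped") and not t.get("error"):
            raise ValueError(f"{t['status']} task without error: {tid}")
```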
## 6. Semantic Mapping: Claude Code <-> Codex
### TaskCreate Mapping
| Claude Code `TaskCreate` Field | Codex `tasks.json` Equivalent |
|-------------------------------|-------------------------------|
| `title` | `tasks[id].title` |
| `description` | `tasks[id].description` |
| `assignee` (role) | `tasks[id].role` |
| `status: "open"` | `tasks[id].status: "pending"` |
| `metadata.pipeline_phase` | `tasks[id].pipeline_phase` |
| `metadata.deps` | `tasks[id].deps` |
| `metadata.context_from` | `tasks[id].context_from` |
| `metadata.wave` | `tasks[id].wave` |
### TaskUpdate Mapping
| Claude Code `TaskUpdate` Operation | Codex Equivalent |
|------------------------------------|------------------|
| `status: "in_progress"` | Write `tasks[id].status = "in_progress"` in tasks.json |
| `status: "completed"` + findings | Write `tasks[id].status = "completed"` + Write `discoveries/{id}.json` |
| `status: "failed"` + error | Write `tasks[id].status = "failed"` + `tasks[id].error` |
| Attach result metadata | Write `discoveries/{id}.json` with full data payload |
### team_msg Mapping
| Claude Code `team_msg` Operation | Codex Equivalent |
|---------------------------------|------------------|
| `team_msg(operation="get_state", role=<upstream>)` | Read `tasks.json` + Read `discoveries/{upstream_id}.json` |
| `team_msg(type="state_update", payload={...})` | Write `discoveries/{task_id}.json` |
| `team_msg(type="broadcast", ...)` | Write to `wisdom/*.md` (session-wide visibility) |
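Reading upstream context in Codex therefore reduces to plain file reads. A possible sketch (the helper name and signature are assumptions):

```python
import json
import os

def load_upstream_context(session_dir, task):
    """Codex replacement for team_msg(operation="get_state"): read the
    discovery files of every task listed in context_from.

    Missing files are tolerated so a worker can start even if an optional
    upstream produced no discovery.
    """
    context = {}
    for upstream in task["context_from"]:
        path = os.path.join(session_dir, "discoveries", f"{upstream}.json")
        if os.path.exists(path):
            with open(path) as f:
                context[upstream] = json.load(f)
    return context
```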
## 7. Column Lifecycle Correspondence
Maps the conceptual "columns" from Claude Code's task board to tasks.json status values.
| Claude Code Column | tasks.json Status | Transition Trigger |
|-------------------|-------------------|-------------------|
| Backlog | `pending` | Task created in tasks.json |
| In Progress | `in_progress` | `spawn_agent` called for task |
| Blocked | `pending` (deps not met) | Implicit: deps not all `completed` |
| Done | `completed` | Agent writes discovery + coordinator updates status |
| Failed | `failed` | Agent reports error or timeout |
| Skipped | `skipped` | Upstream dependency failed/skipped |
## 8. Example: Full tasks.json
```json
{
"session_id": "tlv4-auth-system-20260324",
"pipeline": "full-lifecycle",
"requirement": "Design and implement user authentication system with OAuth2 and RBAC",
"created_at": "2026-03-24T10:00:00+08:00",
"supervision": true,
"completed_waves": [1],
"active_agents": {
"DRAFT-001": "agent-abc123"
},
"tasks": {
"RESEARCH-001": {
"title": "Domain research",
"description": "Explore auth domain: OAuth2 flows, RBAC patterns, competitor analysis, integration constraints",
"role": "analyst",
"pipeline_phase": "research",
"deps": [],
"context_from": [],
"wave": 1,
"status": "completed",
"findings": "Identified OAuth2+RBAC pattern, 5 integration points, SSO requirement from enterprise customers",
"quality_score": null,
"supervision_verdict": null,
"error": null
},
"DRAFT-001": {
"title": "Product brief",
"description": "Generate product brief from research context, define vision, problem, users, success metrics",
"role": "writer",
"pipeline_phase": "product-brief",
"deps": ["RESEARCH-001"],
"context_from": ["RESEARCH-001"],
"wave": 2,
"status": "in_progress",
"findings": null,
"quality_score": null,
"supervision_verdict": null,
"error": null
},
"DRAFT-002": {
"title": "Requirements PRD",
"description": "Write requirements document with functional/non-functional reqs, user stories, acceptance criteria",
"role": "writer",
"pipeline_phase": "requirements",
"deps": ["DRAFT-001"],
"context_from": ["DRAFT-001"],
"wave": 3,
"status": "pending",
"findings": null,
"quality_score": null,
"supervision_verdict": null,
"error": null
}
}
}
```
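Given this snapshot, a coordinator can classify tasks without extra state: wave 1 is done, DRAFT-001 is running, and DRAFT-002 stays blocked until DRAFT-001 completes. The `next_actions` classifier below is a hypothetical sketch of that logic:

```python
def next_actions(session):
    """Split tasks into running / runnable / blocked based only on
    status and deps in tasks.json."""
    tasks = session["tasks"]
    running = [tid for tid, t in tasks.items() if t["status"] == "in_progress"]
    runnable = [
        tid for tid, t in tasks.items()
        if t["status"] == "pending"
        and all(tasks[d]["status"] == "completed" for d in t["deps"])
    ]
    blocked = [
        tid for tid, t in tasks.items()
        if t["status"] == "pending" and tid not in runnable
    ]
    return running, runnable, blocked
```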