mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-03-28 20:01:17 +08:00
feat: migrate all codex team skills from spawn_agents_on_csv to spawn_agent + wait_agent architecture
- Delete 21 old team skill directories using the CSV-wave pipeline pattern (~100+ files)
- Delete old team-lifecycle (v3) and team-planex-v2
- Create generic team-worker.toml and team-supervisor.toml (replacing tlv4-specific TOMLs)
- Convert 19 team skills from Claude Code format (Agent/SendMessage/TaskCreate) to Codex format (spawn_agent/wait_agent/tasks.json/request_user_input)
- Update team-lifecycle-v4 to use generic agent types (team_worker/team_supervisor)
- Convert all coordinator role files: dispatch.md, monitor.md, role.md
- Convert all worker role files: remove run_in_background, fix Bash syntax
- Convert all specs/pipelines.md references
- Final state: 20 team skills, 217 .md files, zero Claude Code API residuals

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
512
.codex/skills/team-lifecycle-v4/MIGRATION-PLAN.md
Normal file
@@ -0,0 +1,512 @@

# Team Lifecycle v4 — Codex Migration Plan

## Goal

Migrate the Claude Code version of `.claude/skills/team-lifecycle-v4` to a Codex version at `.codex/skills/team-lifecycle-v4`, using the **generic worker + role-document injection** pattern (aligned with the Claude Code design). Drop `spawn_agents_on_csv` and implement isomorphic orchestration with `spawn_agent` + `wait_agent` + `send_input` + `close_agent`.

## Design Principles

1. **Generic agents, roles differentiated by role documents** — define only 2 TOMLs (worker + supervisor); role-specific behavior comes from `roles/<role>/role.md`
2. **Role documents reused as-is** — roles/specs/templates migrate unchanged from `.claude/skills/`
3. **Structured parameters via items** — replace message strings; pass role assignment, task description, and upstream context as separate segments
4. **JSON replaces CSV** — `tasks.json` manages state; discoveries are written per file to `discoveries/{id}.json`
5. **Isomorphic versions** — Codex call primitives map 1:1 onto Claude Code

## Platform Call Mapping

| Orchestration concept | Claude Code | Codex |
|----------------------|-------------|-------|
| Worker spawn | `Agent({ subagent_type: "team-worker", prompt })` | `spawn_agent({ agent_type: "tlv4_worker", items })` |
| Supervisor spawn | `Agent({ subagent_type: "team-supervisor", prompt })` | `spawn_agent({ agent_type: "tlv4_supervisor", items })` |
| Supervisor wake | `SendMessage({ recipient: "supervisor", content })` | `send_input({ id: supervisorId, items })` |
| Supervisor shutdown | `SendMessage({ type: "shutdown_request" })` | `close_agent({ id: supervisorId })` |
| Wait for completion | background callback -> monitor.md | `wait_agent({ ids, timeout_ms })` |
| Task state | `TaskCreate` / `TaskUpdate` | `tasks.json` file read/write |
| Team management | `TeamCreate` / `TeamDelete` | session folder init / cleanup |
| Message bus | `mcp__ccw-tools__team_msg` | `discoveries/{id}.json` + `session-state.json` |
| User interaction | `AskUserQuestion` | `request_user_input` |
| Role loading | `@roles/<role>/role.md` in prompt | `Read roles/<role>/role.md` directive in items text |

## Directory Structure (After Migration)

```
.codex/
├── agents/
│   ├── tlv4-worker.toml              # generic worker (NEW)
│   └── tlv4-supervisor.toml          # resident supervisor (NEW)
└── skills/
    └── team-lifecycle-v4/
        ├── SKILL.md                  # main orchestration (REWRITE)
        ├── MIGRATION-PLAN.md         # this document
        ├── roles/                    # copied from .claude/
        │   ├── coordinator/
        │   │   ├── role.md
        │   │   └── commands/
        │   │       ├── analyze.md
        │   │       ├── dispatch.md
        │   │       └── monitor.md
        │   ├── analyst/role.md
        │   ├── writer/role.md
        │   ├── planner/role.md
        │   ├── executor/
        │   │   ├── role.md
        │   │   └── commands/
        │   │       ├── implement.md
        │   │       └── fix.md
        │   ├── tester/role.md
        │   ├── reviewer/
        │   │   ├── role.md
        │   │   └── commands/
        │   │       ├── review-code.md
        │   │       └── review-spec.md
        │   └── supervisor/role.md
        ├── specs/                    # copied from .claude/
        │   ├── pipelines.md
        │   ├── quality-gates.md
        │   └── knowledge-transfer.md
        ├── templates/                # copied from .claude/
        │   ├── product-brief.md
        │   ├── requirements.md
        │   ├── architecture.md
        │   └── epics.md
        └── schemas/
            └── tasks-schema.md       # REWRITE: CSV -> JSON
```

---

## Step 1: Agent TOML Definitions

### `.codex/agents/tlv4-worker.toml`

```toml
name = "tlv4_worker"
description = "Generic team-lifecycle-v4 worker. Role-specific behavior loaded from role.md at spawn time."
model = "gpt-5.4"
model_reasoning_effort = "high"
sandbox_mode = "workspace-write"

developer_instructions = """
You are a team-lifecycle-v4 worker agent.

## Boot Protocol
1. Read the role_spec file path from your task assignment (MUST read first)
2. Read session state from the session path
3. Execute the role-specific Phases 2-4 defined in role.md
4. Write deliverables to the session artifacts directory
5. Write findings to discoveries/{task_id}.json
6. Report via report_agent_job_result

## Output Schema
{
  "id": "<task_id>",
  "status": "completed | failed",
  "findings": "<max 500 chars>",
  "quality_score": "<0-100, reviewer only>",
  "supervision_verdict": "",
  "error": ""
}
"""
```
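
The Output Schema above can be checked mechanically before merging a worker's result into `tasks.json`. A minimal sketch of such a check — the `validateWorkerOutput` name and its exact rules are assumptions for illustration, not part of the plan:

```javascript
// Hypothetical sketch: validate a worker output object against the
// Output Schema declared in tlv4-worker.toml developer_instructions.
function validateWorkerOutput(out) {
  const errors = [];
  if (typeof out.id !== "string" || out.id.length === 0) {
    errors.push("id must be a non-empty string");
  }
  if (!["completed", "failed"].includes(out.status)) {
    errors.push("status must be completed | failed");
  }
  if (typeof out.findings !== "string" || out.findings.length > 500) {
    errors.push("findings must be a string of max 500 chars");
  }
  return errors; // empty array means the output is schema-conformant
}
```

The orchestrator could mark a task `failed` when this returns a non-empty list, instead of trusting the discovery file blindly.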

### `.codex/agents/tlv4-supervisor.toml`

```toml
name = "tlv4_supervisor"
description = "Resident supervisor for team-lifecycle-v4. Woken via send_input for checkpoint verification."
model = "gpt-5.4"
model_reasoning_effort = "high"
sandbox_mode = "read-only"

developer_instructions = """
You are a team-lifecycle-v4 supervisor agent (resident pattern).

## Boot Protocol
1. Read the role_spec file path from your task assignment (MUST read first)
2. Load baseline context from the session
3. Report ready, then wait for checkpoint requests via send_input

## Per Checkpoint
1. Read the artifacts specified in the checkpoint request
2. Verify cross-artifact consistency per the role.md definitions
3. Issue a verdict: pass (>= 0.8), warn (0.5-0.79), block (< 0.5)
4. Write a report to artifacts/CHECKPOINT-{id}-report.md
5. Report findings

## Constraints
- Read-only: never modify artifacts
- Never issue pass when critical inconsistencies exist
- Never block for minor style issues
"""
```

---

## Step 2: SKILL.md Rewrite

### Core Changes

| Area | Current | After rewrite |
|------|---------|---------------|
| allowed-tools | `spawn_agents_on_csv, spawn_agent, wait, send_input, close_agent, ...` | `spawn_agent, wait_agent, send_input, close_agent, report_agent_job_result, Read, Write, Edit, Bash, Glob, Grep, request_user_input` |
| Execution model | hybrid: CSV wave (primary) + spawn_agent (secondary) | unified: spawn_agent + wait_agent (all tasks) |
| State management | tasks.csv (CSV) | tasks.json (JSON) |
| Discovery board | discoveries.ndjson (shared append) | discoveries/{task_id}.json (one file per task) |
| exec_mode classification | csv-wave / interactive | removed — all tasks use spawn_agent uniformly |
| Wave CSV construction | generate wave-{N}.csv, spawn_agents_on_csv | loop spawn_agent + batch wait_agent |

### Worker Spawn Template

```javascript
// Mirrors the Claude Code Agent({ subagent_type: "team-worker", prompt }) pattern
spawn_agent({
  agent_type: "tlv4_worker",
  items: [
    // Segment 1: role assignment (mirrors the Role Assignment block of the Claude Code prompt)
    { type: "text", text: `## Role Assignment
role: ${task.role}
role_spec: ${skillRoot}/roles/${task.role}/role.md
session: ${sessionFolder}
session_id: ${sessionId}
requirement: ${requirement}
inner_loop: ${hasInnerLoop(task.role)}` },

    // Segment 2: read directive (core — preserves the role-document reference pattern)
    { type: "text", text: `Read role_spec file (${skillRoot}/roles/${task.role}/role.md) to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role Phase 2-4 -> built-in Phase 5 (report).` },

    // Segment 3: task context
    { type: "text", text: `## Task Context
task_id: ${task.id}
title: ${task.title}
description: ${task.description}
pipeline_phase: ${task.pipeline_phase}` },

    // Segment 4: upstream discoveries
    { type: "text", text: `## Upstream Context\n${task.prev_context}` }
  ]
})
```
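
The `hasInnerLoop(task.role)` helper used above is not defined in this plan. Going by the Role Registry table in SKILL.md (inner loop is true for writer, planner, and executor, false for the rest), one possible sketch — the function name and shape are assumptions:

```javascript
// Hypothetical helper: derive the inner_loop flag from the role name.
// The true/false values follow the Role Registry table in SKILL.md.
const INNER_LOOP_ROLES = new Set(["writer", "planner", "executor"]);

function hasInnerLoop(role) {
  return INNER_LOOP_ROLES.has(role);
}
```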

### Supervisor Spawn Template

```javascript
// Spawn — once (Phase 2 init, mirrors Claude Code Agent({ subagent_type: "team-supervisor" }))
const supervisorId = spawn_agent({
  agent_type: "tlv4_supervisor",
  items: [
    { type: "text", text: `## Role Assignment
role: supervisor
role_spec: ${skillRoot}/roles/supervisor/role.md
session: ${sessionFolder}
session_id: ${sessionId}
requirement: ${requirement}

Read role_spec file (${skillRoot}/roles/supervisor/role.md) to load checkpoint definitions.
Init: load baseline context, report ready, go idle.
Wake cycle: orchestrator sends checkpoint requests via send_input.` }
  ]
})
```

### Supervisor Wake Template

```javascript
// Mirrors Claude Code SendMessage({ recipient: "supervisor", content })
send_input({
  id: supervisorId,
  items: [
    { type: "text", text: `## Checkpoint Request
task_id: ${task.id}
scope: [${task.deps}]
pipeline_progress: ${done}/${total} tasks completed` }
  ]
})
wait_agent({ ids: [supervisorId], timeout_ms: 300000 })
```

### Supervisor Shutdown

```javascript
// Mirrors Claude Code SendMessage({ type: "shutdown_request" })
close_agent({ id: supervisorId })
```

### Wave Execution Engine

```javascript
for (let wave = 1; wave <= maxWave; wave++) {
  const state = JSON.parse(Read(`${sessionFolder}/tasks.json`))
  const waveTasks = Object.values(state.tasks).filter(t => t.wave === wave && t.status === 'pending')

  // Skip tasks whose dependencies failed or were skipped
  const executableTasks = []
  for (const task of waveTasks) {
    if (task.deps.some(d => ['failed', 'skipped'].includes(state.tasks[d]?.status))) {
      state.tasks[task.id].status = 'skipped'
      state.tasks[task.id].error = 'Dependency failed or skipped'
      continue
    }
    executableTasks.push(task)
  }

  // Build prev_context
  for (const task of executableTasks) {
    const contextParts = task.context_from
      .map(id => {
        const prev = state.tasks[id]
        if (prev?.status === 'completed' && prev.findings) {
          return `[Task ${id}: ${prev.title}] ${prev.findings}`
        }
        try {
          const disc = JSON.parse(Read(`${sessionFolder}/discoveries/${id}.json`))
          return `[Task ${id}] ${disc.findings || JSON.stringify(disc.key_findings || '')}`
        } catch { return null }
      })
      .filter(Boolean)
    task.prev_context = contextParts.join('\n') || 'No previous context available'
  }

  // Separate regular tasks from CHECKPOINT tasks
  const regularTasks = executableTasks.filter(t => !t.id.startsWith('CHECKPOINT-'))
  const checkpointTasks = executableTasks.filter(t => t.id.startsWith('CHECKPOINT-'))

  // 1) Spawn regular tasks concurrently
  const agentMap = [] // [{ agentId, taskId }]
  for (const task of regularTasks) {
    state.tasks[task.id].status = 'in_progress'
    const agentId = spawn_agent({
      agent_type: "tlv4_worker",
      items: [
        { type: "text", text: `## Role Assignment
role: ${task.role}
role_spec: ${skillRoot}/roles/${task.role}/role.md
session: ${sessionFolder}
session_id: ${sessionId}
requirement: ${requirement}
inner_loop: ${hasInnerLoop(task.role)}` },
        { type: "text", text: `Read role_spec file (${skillRoot}/roles/${task.role}/role.md) to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role Phase 2-4 -> built-in Phase 5 (report).` },
        { type: "text", text: `## Task Context
task_id: ${task.id}
title: ${task.title}
description: ${task.description}
pipeline_phase: ${task.pipeline_phase}` },
        { type: "text", text: `## Upstream Context\n${task.prev_context}` }
      ]
    })
    agentMap.push({ agentId, taskId: task.id })
  }

  // 2) Wait for the batch
  if (agentMap.length > 0) {
    wait_agent({ ids: agentMap.map(a => a.agentId), timeout_ms: 900000 })
  }

  // 3) Collect results and merge into tasks.json
  for (const { agentId, taskId } of agentMap) {
    try {
      const disc = JSON.parse(Read(`${sessionFolder}/discoveries/${taskId}.json`))
      state.tasks[taskId].status = disc.status || 'completed'
      state.tasks[taskId].findings = disc.findings || ''
      state.tasks[taskId].quality_score = disc.quality_score || null
      state.tasks[taskId].error = disc.error || null
    } catch {
      state.tasks[taskId].status = 'failed'
      state.tasks[taskId].error = 'No discovery file produced'
    }
    close_agent({ id: agentId })
  }

  // 4) Run CHECKPOINT tasks (wake the supervisor via send_input)
  for (const task of checkpointTasks) {
    send_input({
      id: supervisorId,
      items: [
        { type: "text", text: `## Checkpoint Request
task_id: ${task.id}
scope: [${task.deps.join(', ')}]
pipeline_progress: ${completedCount}/${totalCount} tasks completed` }
      ]
    })
    wait_agent({ ids: [supervisorId], timeout_ms: 300000 })

    // Read the checkpoint report
    try {
      const report = Read(`${sessionFolder}/artifacts/${task.id}-report.md`)
      const verdict = parseVerdict(report) // pass | warn | block
      state.tasks[task.id].status = 'completed'
      state.tasks[task.id].findings = `Verdict: ${verdict.decision} (score: ${verdict.score})`
      state.tasks[task.id].supervision_verdict = verdict.decision

      if (verdict.decision === 'block') {
        const action = request_user_input({
          questions: [{
            question: `Checkpoint ${task.id} BLOCKED (score: ${verdict.score}). Choose action.`,
            header: "Blocked",
            id: "blocked_action",
            options: [
              { label: "Override", description: "Proceed despite block" },
              { label: "Revise upstream", description: "Go back and fix issues" },
              { label: "Abort", description: "Stop pipeline" }
            ]
          }]
        })
        // Handle user choice...
      }
    } catch {
      state.tasks[task.id].status = 'failed'
      state.tasks[task.id].error = 'Supervisor report not produced'
    }
  }

  // 5) Persist tasks.json
  Write(`${sessionFolder}/tasks.json`, JSON.stringify(state, null, 2))
}
```
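
The `parseVerdict` helper referenced in the engine is left undefined by this plan. As one possible sketch — assuming the checkpoint report contains a line such as `Verdict: warn (score: 0.72)`, which is an assumption about the report format rather than something the plan specifies:

```javascript
// Hypothetical helper: extract { decision, score } from a checkpoint report.
// Assumes the report contains a line like "Verdict: pass (score: 0.85)".
function parseVerdict(report) {
  const m = report.match(/Verdict:\s*(pass|warn|block)\s*\(score:\s*([\d.]+)\)/i);
  if (!m) return { decision: "block", score: 0 }; // fail closed when unparseable
  return { decision: m[1].toLowerCase(), score: parseFloat(m[2]) };
}
```

Failing closed (treating an unparseable report as `block`) matches the supervisor constraint of never passing when consistency cannot be verified.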

---

## Step 3: Copy Role Documents

Copy the following directories from `.claude/skills/team-lifecycle-v4/` to `.codex/skills/team-lifecycle-v4/`:

| Source | Target | Notes |
|--------|--------|-------|
| `roles/` (all) | `roles/` | copy as-is; platform-specific calls in coordinator need adaptation |
| `specs/pipelines.md` | `specs/pipelines.md` | copy as-is |
| `specs/quality-gates.md` | `specs/quality-gates.md` | copy as-is |
| `specs/knowledge-transfer.md` | `specs/knowledge-transfer.md` | needs adaptation: team_msg -> discoveries/ files |
| `templates/` (all) | `templates/` | copy as-is |

### Role Document Adaptation Points

**`roles/coordinator/role.md`** — needs rewriting:
- Phase 2: `TeamCreate` -> session folder init
- Phase 3: `TaskCreate` -> tasks.json writes
- Phase 4: `Agent(team-worker)` -> `spawn_agent(tlv4_worker)`
- callback handling in monitor.md -> `wait_agent` result handling

**`roles/coordinator/commands/monitor.md`** — needs rewriting:
- handleCallback -> wait_agent result parsing
- handleSpawnNext -> spawn_agent loop
- SendMessage(supervisor) -> send_input(supervisorId)

**`specs/knowledge-transfer.md`** — needs adaptation:
- `team_msg(operation="get_state")` -> read tasks.json
- `team_msg(type="state_update")` -> write discoveries/{id}.json
- exploration cache protocol unchanged

**Remaining role documents** (analyst, writer, planner, executor, tester, reviewer, supervisor):
- core execution logic unchanged
- `ccw cli` calls unchanged (the CLI tools are shared by both platforms)
- discovery output changes to writing `discoveries/{task_id}.json` (replacing team_msg)
- `report_agent_job_result` replaces team_msg state_update

---

## Step 4: Tasks Schema Rewrite

### Current: tasks.csv

```csv
id,title,description,role,pipeline_phase,deps,context_from,exec_mode,wave,status,findings,quality_score,supervision_verdict,error
```
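
A row-by-row converter from the old CSV columns to the new JSON task entries might look like the sketch below; the `csvRowToTask` name and the simple quoted-field splitter are assumptions for illustration (it assumes every field is double-quoted with no embedded quotes, as in the sample rows):

```javascript
// Hypothetical sketch: convert one tasks.csv data row into a tasks.json entry.
const COLUMNS = ["id", "title", "description", "role", "pipeline_phase", "deps",
  "context_from", "exec_mode", "wave", "status", "findings",
  "quality_score", "supervision_verdict", "error"];

function csvRowToTask(row) {
  // Split on quoted fields; strip the surrounding quotes.
  const values = row.match(/"([^"]*)"/g).map(v => v.slice(1, -1));
  const rec = Object.fromEntries(COLUMNS.map((c, i) => [c, values[i]]));
  return [rec.id, {
    title: rec.title,
    description: rec.description,
    role: rec.role,
    pipeline_phase: rec.pipeline_phase,
    deps: rec.deps ? rec.deps.split(";") : [],           // semicolon-separated in CSV
    context_from: rec.context_from ? rec.context_from.split(";") : [],
    wave: parseInt(rec.wave, 10),
    status: rec.status,
    findings: rec.findings || null,
    quality_score: rec.quality_score ? Number(rec.quality_score) : null,
    supervision_verdict: rec.supervision_verdict || null,
    error: rec.error || null                             // exec_mode is dropped entirely
  }];
}
```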

### After rewrite: tasks.json

```json
{
  "session_id": "tlv4-auth-system-20260324",
  "pipeline": "full-lifecycle",
  "requirement": "Design and implement user authentication system",
  "created_at": "2026-03-24T10:00:00+08:00",
  "supervision": true,
  "completed_waves": [],
  "active_agents": {},
  "tasks": {
    "RESEARCH-001": {
      "title": "Domain research",
      "description": "Explore domain, extract structured context...",
      "role": "analyst",
      "pipeline_phase": "research",
      "deps": [],
      "context_from": [],
      "wave": 1,
      "status": "pending",
      "findings": null,
      "quality_score": null,
      "supervision_verdict": null,
      "error": null
    },
    "DRAFT-001": {
      "title": "Product brief",
      "description": "Generate product brief from research context...",
      "role": "writer",
      "pipeline_phase": "product-brief",
      "deps": ["RESEARCH-001"],
      "context_from": ["RESEARCH-001"],
      "wave": 2,
      "status": "pending",
      "findings": null,
      "quality_score": null,
      "supervision_verdict": null,
      "error": null
    }
  }
}
```
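
The `wave` field is computed from `deps` by topological depth (a task with no dependencies lands in wave 1; otherwise one wave after its deepest dependency). A minimal sketch of that computation — `computeWaves` is an illustrative name, and the task graph is assumed acyclic:

```javascript
// Hypothetical sketch: assign 1-based wave numbers by dependency depth.
// tasks: { [id]: { deps: [...] } } — assumed to be a DAG.
function computeWaves(tasks) {
  const wave = {};
  const visit = id => {
    if (wave[id]) return wave[id]; // already computed (waves are >= 1, hence truthy)
    wave[id] = 1 + Math.max(0, ...tasks[id].deps.map(visit));
    return wave[id];
  };
  Object.keys(tasks).forEach(visit);
  return wave;
}
```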

### Discovery File: discoveries/{task_id}.json

```json
{
  "task_id": "RESEARCH-001",
  "worker": "RESEARCH-001",
  "timestamp": "2026-03-24T10:15:00+08:00",
  "type": "research",
  "status": "completed",
  "findings": "Explored domain: identified OAuth2+RBAC pattern, 5 integration points.",
  "quality_score": null,
  "supervision_verdict": null,
  "error": null,
  "data": {
    "dimension": "domain",
    "findings": ["Auth system needs OAuth2 + RBAC"],
    "constraints": ["Must support SSO"],
    "integration_points": ["User service API"]
  },
  "artifacts_produced": ["spec/discovery-context.json"]
}
```

---

## Step 5: Delete Old Files

After the migration, delete the files no longer needed in the Codex version:

| File | Reason |
|------|--------|
| `agents/agent-instruction.md` | role logic lives in roles/; the generic protocol lives in the TOML developer_instructions |
| `agents/requirement-clarifier.md` | requirement clarification merged into coordinator/role.md Phase 1 |
| `agents/supervisor.md` | migrated to roles/supervisor/role.md |
| `agents/quality-gate.md` | migrated to roles/reviewer/role.md (QUALITY-* task handling) |
| `schemas/tasks-schema.md` (old) | superseded by the JSON schema version |

---

## Implementation Order

| Step | Content | Depends on | Complexity |
|------|---------|------------|------------|
| **1** | Create the 2 TOML agent definitions | none | low |
| **2** | Copy roles/specs/templates from .claude/ | none | low (pure copy) |
| **3** | Rewrite tasks-schema.md (CSV -> JSON) | none | low |
| **4** | Rewrite the SKILL.md main orchestration | 1, 2, 3 | high (core work) |
| **5** | Adapt coordinator role.md + commands/ | 4 | medium |
| **6** | Adapt knowledge-transfer.md | 3 | low |
| **7** | Adapt worker role documents (discovery output) | 3 | low |
| **8** | Delete old files, clean up | all | low |

Steps 1-3 can run in parallel; step 4 is the critical path; steps 5-7 depend on step 4 but can run in parallel with each other.
@@ -1,775 +1,218 @@
---
name: team-lifecycle-v4
description: Full lifecycle team skill — specification, planning, implementation, testing, and review. Supports spec-only, impl-only, full-lifecycle, and frontend pipelines with optional supervisor checkpoints.
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] \"task description\""
allowed-tools: spawn_agents_on_csv, spawn_agent, wait, send_input, close_agent, Read, Write, Edit, Bash, Glob, Grep, request_user_input
description: Full lifecycle team skill with clean architecture. SKILL.md is a universal router — all roles read it. Beat model is coordinator-only. Structure is roles/ + specs/ + templates/. Triggers on "team lifecycle v4".
allowed-tools: spawn_agent(*), wait_agent(*), send_input(*), close_agent(*), report_agent_job_result(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*), request_user_input(*)
---

## Auto Mode

When `--yes` or `-y`: Auto-confirm task decomposition, skip interactive validation, use defaults.

# Team Lifecycle v4

## Usage
Orchestrate multi-agent software development: specification -> planning -> implementation -> testing -> review.

```bash
$team-lifecycle-v4 "Design and implement a user authentication system"
$team-lifecycle-v4 -c 4 "Full lifecycle: build a REST API for order management"
$team-lifecycle-v4 -y "Implement dark mode toggle in settings page"
$team-lifecycle-v4 --continue "tlv4-auth-system-20260308"
```

**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 3)
- `--continue`: Resume existing session
- `--no-supervision`: Skip CHECKPOINT tasks (supervisor opt-out)

**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)

---

## Overview

Full lifecycle software development orchestration: requirement analysis, specification writing (product brief, requirements, architecture, epics), quality gating, implementation planning, code implementation, testing, and code review. Supports multiple pipeline modes with optional supervisor checkpoints at phase transition points.

**Execution Model**: Hybrid -- CSV wave pipeline (primary) + individual agent spawn (secondary for supervisor checkpoints and requirement clarification)
## Architecture

```
+--------------------------------------------------------------------------+
|                        TEAM LIFECYCLE v4 WORKFLOW                        |
+--------------------------------------------------------------------------+
|                                                                          |
|  Phase 0: Pre-Wave Interactive                                           |
|    +-- Requirement clarification + pipeline selection                    |
|    +-- Complexity scoring + signal detection                             |
|    +-- Output: refined requirements for decomposition                    |
|                                                                          |
|  Phase 1: Requirement -> CSV + Classification                            |
|    +-- Parse task into lifecycle tasks per selected pipeline             |
|    +-- Assign roles: analyst, writer, planner, executor, tester,         |
|    |     reviewer, supervisor                                            |
|    +-- Classify tasks: csv-wave | interactive (exec_mode)                |
|    +-- Compute dependency waves (topological sort -> depth grouping)     |
|    +-- Generate tasks.csv with wave + exec_mode columns                  |
|    +-- User validates task breakdown (skip if -y)                        |
|                                                                          |
|  Phase 2: Wave Execution Engine (Extended)                               |
|    +-- For each wave (1..N):                                             |
|    |     +-- Execute pre-wave interactive tasks (if any)                 |
|    |     +-- Build wave CSV (filter csv-wave tasks for this wave)        |
|    |     +-- Inject previous findings into prev_context column           |
|    |     +-- spawn_agents_on_csv(wave CSV)                               |
|    |     +-- Execute post-wave interactive tasks (if any)                |
|    |     +-- Handle CHECKPOINT tasks via interactive supervisor          |
|    |     +-- Merge all results into master tasks.csv                     |
|    |     +-- Check: any failed? -> skip dependents                       |
|    +-- discoveries.ndjson shared across all modes (append-only)          |
|                                                                          |
|  Phase 3: Post-Wave Interactive                                          |
|    +-- Quality gate evaluation (QUALITY-001)                             |
|    +-- User approval checkpoint before implementation                    |
|    +-- Complexity-based implementation routing                           |
|                                                                          |
|  Phase 4: Results Aggregation                                            |
|    +-- Export final results.csv                                          |
|    +-- Generate context.md with all findings                             |
|    +-- Display summary: completed/failed/skipped per wave                |
|    +-- Offer: view results | retry failed | done                         |
|                                                                          |
+--------------------------------------------------------------------------+
Skill(skill="team-lifecycle-v4", args="task description")
                    |
       SKILL.md (this file) = Router
                    |
     +--------------+--------------+
     |                             |
no --role flag               --role <name>
     |                             |
Coordinator                    Worker
roles/coordinator/role.md      roles/<name>/role.md
     |
     +-- analyze -> dispatch -> spawn -> wait -> collect
                      |
             +--------+---+--------+
             v        v            v
        spawn_agent  ...      spawn_agent
        (team_worker)       (team_supervisor)
        per-task             resident agent
        lifecycle           send_input-driven
             |                     |
             +-- wait_agent -------+
                       |
                collect results
```

---
## Role Registry

## Task Classification Rules

| Role | Path | Prefix | Inner Loop |
|------|------|--------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | -- | -- |
| analyst | [roles/analyst/role.md](roles/analyst/role.md) | RESEARCH-* | false |
| writer | [roles/writer/role.md](roles/writer/role.md) | DRAFT-* | true |
| planner | [roles/planner/role.md](roles/planner/role.md) | PLAN-* | true |
| executor | [roles/executor/role.md](roles/executor/role.md) | IMPL-* | true |
| tester | [roles/tester/role.md](roles/tester/role.md) | TEST-* | false |
| reviewer | [roles/reviewer/role.md](roles/reviewer/role.md) | REVIEW-*, QUALITY-*, IMPROVE-* | false |
| supervisor | [roles/supervisor/role.md](roles/supervisor/role.md) | CHECKPOINT-* | false |

Each task is classified by `exec_mode`:
## Role Router

| exec_mode | Mechanism | Criteria |
|-----------|-----------|----------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot, structured I/O, no multi-round interaction |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round, clarification, checkpoint evaluation |
Parse `$ARGUMENTS`:
- Has `--role <name>` -> Read `roles/<name>/role.md`, execute Phase 2-4
- No `--role` -> `roles/coordinator/role.md`, execute entry router

**Classification Decision**:
## Shared Constants

| Task Property | Classification |
|---------------|---------------|
| Research / analysis (RESEARCH-*) | `csv-wave` |
| Document generation (DRAFT-*) | `csv-wave` |
| Implementation planning (PLAN-*) | `csv-wave` |
| Code implementation (IMPL-*) | `csv-wave` |
| Test execution (TEST-*) | `csv-wave` |
| Code review (REVIEW-*) | `csv-wave` |
| Quality gate scoring (QUALITY-*) | `csv-wave` |
| Supervisor checkpoints (CHECKPOINT-*) | `interactive` |
| Requirement clarification (Phase 0) | `interactive` |
| Quality gate user approval | `interactive` |
- **Session prefix**: `TLV4`
- **Session path**: `.workflow/.team/TLV4-<slug>-<date>/`
- **State file**: `<session>/tasks.json`
- **Discovery files**: `<session>/discoveries/{task_id}.json`
- **CLI tools**: `ccw cli --mode analysis` (read-only), `ccw cli --mode write` (modifications)

---
## Worker Spawn Template

## CSV Schema

### tasks.csv (Master State)

```csv
id,title,description,role,pipeline_phase,deps,context_from,exec_mode,wave,status,findings,quality_score,supervision_verdict,error
"RESEARCH-001","Domain research","Explore domain, extract structured context, identify constraints","analyst","research","","","csv-wave","1","pending","","","",""
"DRAFT-001","Product brief","Generate product brief from research context","writer","product-brief","RESEARCH-001","RESEARCH-001","csv-wave","2","pending","","","",""
"CHECKPOINT-001","Brief-PRD consistency","Verify terminology alignment and scope consistency between brief and PRD","supervisor","checkpoint","DRAFT-002","DRAFT-001;DRAFT-002","interactive","4","pending","","","",""
```

**Columns**:

| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (string) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description |
| `role` | Input | Worker role: analyst, writer, planner, executor, tester, reviewer, supervisor |
| `pipeline_phase` | Input | Lifecycle phase: research, product-brief, requirements, architecture, epics, checkpoint, readiness, planning, implementation, validation, review |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` -> `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `quality_score` | Output | Quality gate score (0-100) for QUALITY-* tasks |
| `supervision_verdict` | Output | `pass` / `warn` / `block` for CHECKPOINT-* tasks |
| `error` | Output | Error message if failed (empty if success) |

### Per-Wave CSV (Temporary)

Each wave generates a temporary `wave-{N}.csv` with an extra `prev_context` column (csv-wave tasks only).

---

## Agent Registry (Interactive Agents)

| Agent | Role File | Pattern | Responsibility | Position |
|-------|-----------|---------|----------------|----------|
| requirement-clarifier | agents/requirement-clarifier.md | 2.3 (wait-respond) | Parse task, detect signals, select pipeline mode | standalone (Phase 0) |
| supervisor | agents/supervisor.md | 2.3 (wait-respond) | Verify cross-artifact consistency at phase transitions | post-wave (after checkpoint dependencies complete) |
| quality-gate | agents/quality-gate.md | 2.3 (wait-respond) | Evaluate quality and present user approval | post-wave (after QUALITY-001 completes) |

> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs, **you MUST immediately `Read` the corresponding agent.md** to reload.

---

## Output Artifacts

| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state -- all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 4 |
| `discoveries.ndjson` | Shared exploration board (all agents, both modes) | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 4 |
| `interactive/{id}-result.json` | Results from interactive tasks | Created per interactive task |

---

## Session Structure
Coordinator spawns workers using this template:
|
||||
|
||||
```
|
||||
.workflow/.csv-wave/{session-id}/
|
||||
+-- tasks.csv # Master state (all tasks, both modes)
|
||||
+-- results.csv # Final results export
|
||||
+-- discoveries.ndjson # Shared discovery board (all agents)
|
||||
+-- context.md # Human-readable report
|
||||
+-- wave-{N}.csv # Temporary per-wave input (csv-wave only)
|
||||
+-- spec/ # Specification artifacts
|
||||
| +-- spec-config.json
|
||||
| +-- discovery-context.json
|
||||
| +-- product-brief.md
|
||||
| +-- requirements/
|
||||
| +-- architecture.md
|
||||
| +-- epics.md
|
||||
+-- plan/ # Implementation plan
|
||||
| +-- plan.json
|
||||
| +-- .task/TASK-*.json
|
||||
+-- artifacts/ # Review and checkpoint reports
|
||||
| +-- CHECKPOINT-*-report.md
|
||||
| +-- review-report.md
|
||||
+-- wisdom/ # Cross-task knowledge
|
||||
+-- explorations/ # Shared exploration cache
|
||||
+-- interactive/ # Interactive task artifacts
|
||||
+-- {id}-result.json
|
||||
```
|
||||
spawn_agent({
|
||||
agent_type: "team_worker",
|
||||
items: [
|
||||
{ type: "text", text: `## Role Assignment
|
||||
role: <role>
|
||||
role_spec: <skill_root>/roles/<role>/role.md
|
||||
session: <session-folder>
|
||||
session_id: <session-id>
|
||||
requirement: <task-description>
|
||||
inner_loop: <true|false>
|
||||
|
||||
---
|
||||
Read role_spec file (<skill_root>/roles/<role>/role.md) to load Phase 2-4 domain instructions.
|
||||
Execute built-in Phase 1 (task discovery) -> role Phase 2-4 -> built-in Phase 5 (report).` },
|
||||
|
||||
## Implementation
|
||||
{ type: "text", text: `## Task Context
|
||||
task_id: <task-id>
|
||||
title: <task-title>
|
||||
description: <task-description>
|
||||
pipeline_phase: <pipeline-phase>` },
|
||||
|
||||
### Session Initialization

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

// Parse flags
const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const noSupervision = $ARGUMENTS.includes('--no-supervision')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1]) : 3

// Clean requirement text (remove flags)
const requirement = $ARGUMENTS
  .replace(/--yes|-y|--continue|--no-supervision|--concurrency\s+\d+|-c\s+\d+/g, '')
  .trim()

const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
let sessionId = `tlv4-${slug}-${dateStr}`
let sessionFolder = `.workflow/.csv-wave/${sessionId}`

// Continue mode: find existing session
if (continueMode) {
  const existing = Bash(`ls -dt .workflow/.csv-wave/tlv4-* 2>/dev/null | head -1`).trim()
  if (existing) {
    sessionId = existing.split('/').pop()
    sessionFolder = existing
    // Read existing tasks.csv, find incomplete waves, resume from Phase 2
  }
}

Bash(`mkdir -p ${sessionFolder}/{spec,plan,plan/.task,artifacts,wisdom,explorations,interactive}`)
```
---

### Phase 0: Pre-Wave Interactive

**Objective**: Clarify requirement, detect capabilities, select pipeline mode.

**Execution**:
```javascript
const clarifier = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~ or <project>/.codex/skills/team-lifecycle-v4/agents/requirement-clarifier.md (MUST read first)
2. Read: .workflow/project-tech.json (if exists)

---

Goal: Analyze task requirement and select appropriate pipeline
Requirement: ${requirement}

### Task
1. Parse task description for capability signals:
   - spec/design/document/requirements -> spec-only
   - implement/build/fix/code -> impl-only
   - full/lifecycle/end-to-end -> full-lifecycle
   - frontend/UI/react/vue -> fe-only or fullstack
2. Score complexity (per capability +1, cross-domain +2, parallel tracks +1, serial depth >3 +1)
3. Return structured result with pipeline_type, capabilities, complexity
`
})

const clarifierResult = wait({ ids: [clarifier], timeout_ms: 120000 })
if (clarifierResult.timed_out) {
  send_input({ id: clarifier, message: "Please finalize and output current findings." })
  wait({ ids: [clarifier], timeout_ms: 60000 })
}
close_agent({ id: clarifier })

Write(`${sessionFolder}/interactive/requirement-clarifier-result.json`, JSON.stringify({
  task_id: "requirement-clarification",
  status: "completed",
  pipeline_type: parsedPipelineType,
  capabilities: parsedCapabilities,
  complexity: parsedComplexity,
  timestamp: getUtc8ISOString()
}))
```
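The capability-signal parsing the clarifier is asked to perform (steps 1-2 above) can be sketched as a plain function. The keyword buckets come from the task list; the scoring weights and the tie-breaking order are simplified assumptions, not the skill's exact rules:

```javascript
// Hypothetical sketch of the clarifier's pipeline selection.
// Keyword sets mirror the signal list above; scoring is simplified.
function selectPipeline(requirement) {
  const text = requirement.toLowerCase()
  const signals = {
    'spec-only': /\b(spec|design|document|requirements)\b/,
    'impl-only': /\b(implement|build|fix|code)\b/,
    'full-lifecycle': /\b(full|lifecycle|end-to-end)\b/
  }
  // Each matched bucket counts as one detected capability (+1 each).
  const capabilities = Object.keys(signals).filter(k => signals[k].test(text))
  const pipeline = capabilities.includes('full-lifecycle') ? 'full-lifecycle'
    : capabilities.includes('impl-only') ? 'impl-only'
    : 'spec-only'
  // Cross-domain (+2) approximated as "more than one bucket matched".
  const complexity = capabilities.length + (capabilities.length > 1 ? 2 : 0)
  return { pipeline, capabilities, complexity }
}
```

A requirement such as "implement and build the auth module" would route to `impl-only`, while one mentioning only specs and design would route to `spec-only`.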
If not AUTO_YES, confirm pipeline selection:

```javascript
if (!AUTO_YES) {
  const answer = request_user_input({
    questions: [{
      question: `Requirement: "${requirement}" — Detected: ${pipeline_type}. Approve or override?`,
      header: "Pipeline",
      id: "pipeline_select",
      options: [
        { label: "Approve (Recommended)", description: `Use ${pipeline_type} pipeline (complexity: ${complexity.level})` },
        { label: "Spec Only", description: "Research -> draft specs -> quality gate" },
        { label: "Impl/Full", description: "Implementation pipeline or full lifecycle" }
      ]
    }]
  })
}
```

## Supervisor Spawn Template

Supervisor is a **resident agent** (independent from team_worker). Spawned once during session init, woken via send_input for each CHECKPOINT task.
### Spawn (Phase 2 -- once per session)

```
supervisorId = spawn_agent({
  agent_type: "team_supervisor",
  items: [
    { type: "text", text: `## Role Assignment
role: supervisor
role_spec: <skill_root>/roles/supervisor/role.md
session: <session-folder>
session_id: <session-id>
requirement: <task-description>

Read role_spec file (<skill_root>/roles/supervisor/role.md) to load checkpoint definitions.
Init: load baseline context, report ready, go idle.
Wake cycle: orchestrator sends checkpoint requests via send_input.` }
  ]
})
```

**Phase 0 Success Criteria**:

- Refined requirements available for Phase 1 decomposition
- Interactive agents closed, results stored

### Wake (per CHECKPOINT task)
---

### Phase 1: Requirement -> CSV + Classification

**Objective**: Build tasks.csv from selected pipeline mode with proper wave assignments.

**Decomposition Rules**:

| Pipeline | Tasks | Wave Structure |
|----------|-------|----------------|
| spec-only | RESEARCH-001 -> DRAFT-001 -> DRAFT-002 -> [CHECKPOINT-001] -> DRAFT-003 -> DRAFT-004 -> [CHECKPOINT-002] -> QUALITY-001 | 8 waves (6 csv + 2 interactive checkpoints) |
| impl-only | PLAN-001 -> [CHECKPOINT-003] -> IMPL-001 -> TEST-001 + REVIEW-001 | 4 waves (3 csv + 1 interactive) |
| full-lifecycle | spec-only pipeline + impl-only pipeline (PLAN blocked by QUALITY-001) | 12 waves |

**Pipeline Task Definitions**:
#### Spec-Only Pipeline

| Task ID | Role | Wave | Deps | exec_mode | Description |
|---------|------|------|------|-----------|-------------|
| RESEARCH-001 | analyst | 1 | (none) | csv-wave | Research domain, extract structured context |
| DRAFT-001 | writer | 2 | RESEARCH-001 | csv-wave | Generate product brief |
| DRAFT-002 | writer | 3 | DRAFT-001 | csv-wave | Generate requirements PRD |
| CHECKPOINT-001 | supervisor | 4 | DRAFT-002 | interactive | Brief-PRD consistency check |
| DRAFT-003 | writer | 5 | CHECKPOINT-001 | csv-wave | Generate architecture design |
| DRAFT-004 | writer | 6 | DRAFT-003 | csv-wave | Generate epics and stories |
| CHECKPOINT-002 | supervisor | 7 | DRAFT-004 | interactive | Full spec consistency check |
| QUALITY-001 | reviewer | 8 | CHECKPOINT-002 | csv-wave | Quality gate scoring |

#### Impl-Only Pipeline

| Task ID | Role | Wave | Deps | exec_mode | Description |
|---------|------|------|------|-----------|-------------|
| PLAN-001 | planner | 1 | (none) | csv-wave | Break down into implementation steps |
| CHECKPOINT-003 | supervisor | 2 | PLAN-001 | interactive | Plan-input alignment check |
| IMPL-001 | executor | 3 | CHECKPOINT-003 | csv-wave | Execute implementation plan |
| TEST-001 | tester | 4 | IMPL-001 | csv-wave | Run tests, fix failures |
| REVIEW-001 | reviewer | 4 | IMPL-001 | csv-wave | Code review |

When `--no-supervision` is set, skip all CHECKPOINT-* tasks entirely and adjust wave numbers and dependencies accordingly (e.g., DRAFT-003 depends directly on DRAFT-002).
**Classification Rules**:

All lifecycle work tasks (research, drafting, planning, implementation, testing, review, quality) are `csv-wave`. Supervisor checkpoints are `interactive` (post-wave, spawned by the orchestrator to verify cross-artifact consistency). Quality gate user approval is `interactive`.

**Wave Computation**: Kahn's BFS topological sort with depth tracking (csv-wave tasks only).
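The wave computation can be sketched with Kahn's algorithm, assigning each task a wave equal to one plus its longest dependency-chain depth. Here `deps` is modeled as an array of task IDs (tasks.csv stores it semicolon-joined), and a leftover unvisited task signals a cycle:

```javascript
// Kahn's BFS topological sort with depth tracking.
// Throws when the dependency graph contains a cycle.
function computeWaves(tasks) {
  const indegree = new Map(tasks.map(t => [t.id, 0]))
  const dependents = new Map(tasks.map(t => [t.id, []]))
  for (const t of tasks) {
    for (const dep of t.deps) {
      indegree.set(t.id, indegree.get(t.id) + 1)
      dependents.get(dep).push(t.id)
    }
  }
  const wave = new Map()
  let frontier = tasks.filter(t => indegree.get(t.id) === 0).map(t => t.id)
  let depth = 1
  let visited = 0
  while (frontier.length > 0) {
    const next = []
    for (const id of frontier) {
      wave.set(id, depth)       // wave = 1 + longest dependency chain
      visited++
      for (const child of dependents.get(id)) {
        indegree.set(child, indegree.get(child) - 1)
        if (indegree.get(child) === 0) next.push(child)
      }
    }
    frontier = next
    depth++
  }
  if (visited !== tasks.length) throw new Error('Circular dependency detected')
  return wave
}
```

This also gives the circular-dependency detection the error-handling table relies on: any cycle leaves tasks unvisited, which aborts wave planning.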
**User Validation**: Display task breakdown with wave + exec_mode assignment (skip if AUTO_YES).

**Success Criteria**:

- tasks.csv created with valid schema, wave, and exec_mode assignments
- No circular dependencies
- User approved (or AUTO_YES)

---

### Phase 2: Wave Execution Engine (Extended)

**Objective**: Execute tasks wave-by-wave with hybrid mechanism support and cross-wave context propagation.
```javascript
const failedIds = new Set()
const skippedIds = new Set()

for (let wave = 1; wave <= maxWave; wave++) {
  console.log(`\n## Wave ${wave}/${maxWave}\n`)

  // 1. Read current master CSV
  const masterCsv = parseCsv(Read(`${sessionFolder}/tasks.csv`))

  // 2. Separate csv-wave and interactive tasks for this wave
  const waveTasks = masterCsv.filter(row => parseInt(row.wave) === wave)
  const csvTasks = waveTasks.filter(t => t.exec_mode === 'csv-wave')
  const interactiveTasks = waveTasks.filter(t => t.exec_mode === 'interactive')

  // 3. Skip tasks whose deps failed
  const executableCsvTasks = []
  for (const task of csvTasks) {
    const deps = task.deps.split(';').filter(Boolean)
    if (deps.some(d => failedIds.has(d) || skippedIds.has(d))) {
      skippedIds.add(task.id)
      updateMasterCsvRow(sessionFolder, task.id, {
        status: 'skipped', error: 'Dependency failed or skipped'
      })
      continue
    }
    executableCsvTasks.push(task)
  }

  // 4. Build prev_context for each csv-wave task
  for (const task of executableCsvTasks) {
    const contextIds = task.context_from.split(';').filter(Boolean)
    const prevFindings = contextIds
      .map(id => {
        const prevRow = masterCsv.find(r => r.id === id)
        if (prevRow && prevRow.status === 'completed' && prevRow.findings) {
          return `[Task ${id}: ${prevRow.title}] ${prevRow.findings}`
        }
        // Check interactive results
        try {
          const interactiveResult = JSON.parse(Read(`${sessionFolder}/interactive/${id}-result.json`))
          return `[Task ${id}] ${JSON.stringify(interactiveResult.key_findings || interactiveResult.findings || '')}`
        } catch { return null }
      })
      .filter(Boolean)
      .join('\n')
    task.prev_context = prevFindings || 'No previous context available'
  }

  // 5. Write wave CSV and execute csv-wave tasks
  if (executableCsvTasks.length > 0) {
    const waveHeader = 'id,title,description,role,pipeline_phase,deps,context_from,exec_mode,wave,prev_context'
    const waveRows = executableCsvTasks.map(t =>
      [t.id, t.title, t.description, t.role, t.pipeline_phase, t.deps, t.context_from, t.exec_mode, t.wave, t.prev_context]
        .map(cell => `"${String(cell).replace(/"/g, '""')}"`)
        .join(',')
    )
    Write(`${sessionFolder}/wave-${wave}.csv`, [waveHeader, ...waveRows].join('\n'))

    const waveResult = spawn_agents_on_csv({
      csv_path: `${sessionFolder}/wave-${wave}.csv`,
      id_column: "id",
      instruction: Read(`~ or <project>/.codex/skills/team-lifecycle-v4/instructions/agent-instruction.md`)
        .replace(/{session-id}/g, sessionId),
      max_concurrency: maxConcurrency,
      max_runtime_seconds: 900,
      output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
      output_schema: {
        type: "object",
        properties: {
          id: { type: "string" },
          status: { type: "string", enum: ["completed", "failed"] },
          findings: { type: "string" },
          quality_score: { type: "string" },
          supervision_verdict: { type: "string" },
          error: { type: "string" }
        },
        required: ["id", "status", "findings"]
      }
    })

    // Merge results into master CSV
    const waveResults = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
    for (const result of waveResults) {
      updateMasterCsvRow(sessionFolder, result.id, {
        status: result.status,
        findings: result.findings || '',
        quality_score: result.quality_score || '',
        supervision_verdict: result.supervision_verdict || '',
        error: result.error || ''
      })
      if (result.status === 'failed') failedIds.add(result.id)
    }

    Bash(`rm -f "${sessionFolder}/wave-${wave}.csv"`)
  }

  // 6. Execute post-wave interactive tasks (supervisor checkpoints)
  for (const task of interactiveTasks) {
    if (task.status !== 'pending') continue
    const deps = task.deps.split(';').filter(Boolean)
    if (deps.some(d => failedIds.has(d) || skippedIds.has(d))) {
      skippedIds.add(task.id)
      continue
    }

    // Spawn supervisor agent for CHECKPOINT tasks
    const supervisorAgent = spawn_agent({
      message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~ or <project>/.codex/skills/team-lifecycle-v4/agents/supervisor.md (MUST read first)
2. Read: ${sessionFolder}/discoveries.ndjson (shared discoveries)

---

Goal: Execute checkpoint verification
Session: ${sessionFolder}
Task ID: ${task.id}
Description: ${task.description}
Scope: ${task.deps}

### Context
Read upstream artifacts and verify cross-artifact consistency.
Produce verdict: pass (score >= 0.8), warn (0.5-0.79), block (< 0.5).
Write report to ${sessionFolder}/artifacts/${task.id}-report.md.
`
    })

    const checkpointResult = wait({ ids: [supervisorAgent], timeout_ms: 300000 })
    if (checkpointResult.timed_out) {
      send_input({ id: supervisorAgent, message: "Please finalize your checkpoint evaluation now." })
      wait({ ids: [supervisorAgent], timeout_ms: 120000 })
    }
    close_agent({ id: supervisorAgent })

    // Parse checkpoint verdict
    Write(`${sessionFolder}/interactive/${task.id}-result.json`, JSON.stringify({
      task_id: task.id, status: "completed",
      supervision_verdict: parsedVerdict,
      supervision_score: parsedScore,
      timestamp: getUtc8ISOString()
    }))

    // Handle verdict
    if (parsedVerdict === 'block') {
      if (!AUTO_YES) {
        const answer = request_user_input({
          questions: [{
            question: `Checkpoint ${task.id} BLOCKED (score: ${parsedScore}). Choose action.`,
            header: "Blocked",
            id: "blocked_action",
            options: [
              { label: "Override", description: "Proceed despite block" },
              { label: "Revise upstream", description: "Go back and fix issues" },
              { label: "Abort", description: "Stop pipeline" }
            ]
          }]
        })
        // Handle user choice
      }
    }

    updateMasterCsvRow(sessionFolder, task.id, {
      status: 'completed',
      findings: `Checkpoint verdict: ${parsedVerdict} (score: ${parsedScore})`,
      supervision_verdict: parsedVerdict
    })
  }

  // 7. Handle special post-wave logic
  // After QUALITY-001: pause for user approval before implementation
  // After PLAN-001: read complexity for conditional routing
}
```
```
send_input({
  id: supervisorId,
  items: [
    { type: "text", text: `## Checkpoint Request
task_id: <CHECKPOINT-NNN>
scope: [<upstream-task-ids>]
pipeline_progress: <done>/<total> tasks completed` }
  ]
})
wait_agent({ ids: [supervisorId], timeout_ms: 300000 })
```

**Success Criteria**:

- All waves executed in order
- Both csv-wave and interactive tasks handled per wave
- Each wave's results merged into master CSV before next wave starts
- Dependent tasks skipped when predecessor failed
- discoveries.ndjson accumulated across all waves and mechanisms
- Supervisor checkpoints evaluated with proper verdict routing
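The checkpoint thresholds used above (pass at score >= 0.8, warn at 0.5-0.79, block below 0.5) reduce to a small helper; the function name is illustrative:

```javascript
// Map a supervisor checkpoint score to its verdict, per the spec thresholds.
function scoreToVerdict(score) {
  if (score >= 0.8) return 'pass'
  if (score >= 0.5) return 'warn'
  return 'block'
}
```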
### Shutdown (pipeline complete)
---

### Phase 3: Post-Wave Interactive

**Objective**: Handle quality gate user approval and complexity-based implementation routing.

After QUALITY-001 completes (spec pipelines):

1. Read quality score from QUALITY-001 findings
2. If score >= 80%: present user approval for implementation (if full-lifecycle)
3. If score 60-79%: suggest revisions, offer retry
4. If score < 60%: return to writer for rework

After PLAN-001 completes (impl pipelines):

1. Read plan.json complexity assessment
2. Route by complexity:
   - Low (1-2 modules): direct IMPL-001
   - Medium (3-4 modules): parallel IMPL-{1..N}
   - High (5+ modules): detailed architecture first, then parallel IMPL
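Both routing rules above can be sketched as pure functions. The thresholds (80/60 and the module-count bands) come from the lists; the function names and return labels are illustrative:

```javascript
// Quality gate routing: score bands per the spec pipeline rules.
function routeQualityGate(score) {
  if (score >= 80) return 'approve-implementation'
  if (score >= 60) return 'suggest-revisions'
  return 'rework'
}

// Implementation routing: complexity bands by module count.
function routeImplementation(moduleCount) {
  if (moduleCount <= 2) return 'direct'            // direct IMPL-001
  if (moduleCount <= 4) return 'parallel'          // parallel IMPL-{1..N}
  return 'architecture-first'                      // architecture, then parallel IMPL
}
```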
**Success Criteria**:

- Post-wave interactive processing complete
- Interactive agents closed, results stored

---
### Phase 4: Results Aggregation

**Objective**: Generate final results and human-readable report.

```javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
Write(`${sessionFolder}/results.csv`, masterCsv)

const tasks = parseCsv(masterCsv)
const completed = tasks.filter(t => t.status === 'completed')
const failed = tasks.filter(t => t.status === 'failed')
const skipped = tasks.filter(t => t.status === 'skipped')

const contextContent = `# Team Lifecycle v4 Report

**Session**: ${sessionId}
**Requirement**: ${requirement}
**Pipeline**: ${pipeline_type}
**Completed**: ${getUtc8ISOString()}

---

## Summary

| Metric | Count |
|--------|-------|
| Total Tasks | ${tasks.length} |
| Completed | ${completed.length} |
| Failed | ${failed.length} |
| Skipped | ${skipped.length} |
| Supervision | ${noSupervision ? 'Disabled' : 'Enabled'} |

---

## Pipeline Execution

${waveDetails}

---

## Deliverables

${deliverablesList}

---

## Quality Gates

${qualityGateResults}

---

## Checkpoint Reports

${checkpointResults}
`

Write(`${sessionFolder}/context.md`, contextContent)
```
Then shut down the resident supervisor:

```
close_agent({ id: supervisorId })
```
If not AUTO_YES, offer completion action:

```javascript
if (!AUTO_YES) {
  request_user_input({
    questions: [{
      question: "Pipeline complete. Choose next action.",
      header: "Done",
      id: "completion",
      options: [
        { label: "Archive (Recommended)", description: "Archive session" },
        { label: "Keep Active", description: "Keep session for follow-up work" },
        { label: "Export Results", description: "Export deliverables to target directory" }
      ]
    }]
  })
}
```

## Wave Execution Engine

For each wave in the pipeline:
1. **Load state** -- Read `<session>/tasks.json`, filter tasks for current wave
2. **Skip failed deps** -- Mark tasks whose dependencies failed/skipped as `skipped`
3. **Build upstream context** -- For each task, gather findings from `context_from` tasks via tasks.json and `discoveries/{id}.json`
4. **Separate task types** -- Split into regular tasks and CHECKPOINT tasks
5. **Spawn regular tasks** -- For each regular task, call `spawn_agent({ agent_type: "team_worker", items: [...] })`, collect agent IDs
6. **Wait** -- `wait_agent({ ids: [...], timeout_ms: 900000 })`
7. **Collect results** -- Read `discoveries/{task_id}.json` for each agent, update tasks.json status/findings/error, then `close_agent({ id })` each worker
8. **Execute checkpoints** -- For each CHECKPOINT task, `send_input` to supervisor, `wait_agent`, read checkpoint report from `artifacts/`, parse verdict
9. **Handle block** -- If verdict is `block`, prompt user via `request_user_input` with options: Override / Revise upstream / Abort
10. **Persist** -- Write updated state to `<session>/tasks.json`
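Step 3 (upstream context assembly) can be sketched as a pure helper. The field shapes here (`context_from` as an array, `findings`, `key_findings`) are assumptions about the tasks.json and discovery-file schemas, not a fixed contract:

```javascript
// Assemble the Upstream Context item text for one task from completed
// task findings and per-task discovery files (passed in as plain objects).
function buildUpstreamContext(task, tasksById, discoveriesById) {
  return (task.context_from || [])
    .map(id => {
      const prev = tasksById[id]
      if (prev && prev.status === 'completed' && prev.findings) {
        return `[Task ${id}: ${prev.title}] ${prev.findings}`
      }
      // Fall back to the task's discovery file, if one was produced.
      const disc = discoveriesById[id]
      return disc ? `[Task ${id}] ${JSON.stringify(disc.key_findings || '')}` : null
    })
    .filter(Boolean)
    .join('\n') || 'No previous context available'
}
```

The joined string becomes the `## Upstream Context` item passed to `spawn_agent`; tasks with no usable upstream data fall through to the default placeholder.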
## User Commands

| Command | Action |
|---------|--------|
| `check` / `status` | View execution status graph |
| `resume` / `continue` | Advance to next step |
| `revise <TASK-ID> [feedback]` | Revise specific task |
| `feedback <text>` | Inject feedback for revision |
| `recheck` | Re-run quality check |
| `improve [dimension]` | Auto-improve weakest dimension |
## Completion Action

When pipeline completes, coordinator presents:

```
request_user_input({
  questions: [{
    question: "Pipeline complete. What would you like to do?",
    header: "Completion",
    multiSelect: false,
    options: [
      { label: "Archive & Clean (Recommended)", description: "Archive session, clean up resources" },
      { label: "Keep Active", description: "Keep session for follow-up work" },
      { label: "Export Results", description: "Export deliverables to target directory" }
    ]
  }]
})
```

**Success Criteria**:

- results.csv exported (all tasks, both modes)
- context.md generated
- All interactive agents closed
- Summary displayed to user
## Specs Reference

- [specs/pipelines.md](specs/pipelines.md) -- Pipeline definitions and task registry
- [specs/quality-gates.md](specs/quality-gates.md) -- Quality gate criteria and scoring
- [specs/knowledge-transfer.md](specs/knowledge-transfer.md) -- Artifact and state transfer protocols

---

## Shared Discovery Board Protocol

All agents across all waves share `discoveries.ndjson`. This enables cross-role knowledge sharing.
**Discovery Types**:

| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `research` | `data.dimension` | `{dimension, findings[], constraints[], integration_points[]}` | Research findings |
| `spec_artifact` | `data.doc_type` | `{doc_type, path, sections[], key_decisions[]}` | Specification document artifact |
| `exploration` | `data.angle` | `{angle, relevant_files[], patterns[], recommendations[]}` | Codebase exploration finding |
| `plan_task` | `data.task_id` | `{task_id, title, files[], complexity, convergence_criteria[]}` | Implementation task definition |
| `implementation` | `data.task_id` | `{task_id, files_modified[], approach, changes_summary}` | Implementation result |
| `test_result` | `data.framework` | `{framework, pass_rate, failures[], fix_iterations}` | Test execution result |
| `review_finding` | `data.file` | `{file, line, severity, dimension, description, suggested_fix}` | Code review finding |
| `checkpoint` | `data.checkpoint_id` | `{checkpoint_id, verdict, score, risks[], blocks[]}` | Supervisor checkpoint result |
| `quality_gate` | `data.gate_id` | `{gate_id, score, dimensions{}, verdict}` | Quality gate assessment |

**Format**: NDJSON, each line is self-contained JSON:

```jsonl
{"ts":"2026-03-08T10:00:00+08:00","worker":"RESEARCH-001","type":"research","data":{"dimension":"domain","findings":["Auth system needs OAuth2 + RBAC"],"constraints":["Must support SSO"],"integration_points":["User service API"]}}
{"ts":"2026-03-08T10:15:00+08:00","worker":"DRAFT-001","type":"spec_artifact","data":{"doc_type":"product-brief","path":"spec/product-brief.md","sections":["Vision","Problem","Users","Goals"],"key_decisions":["OAuth2 over custom auth"]}}
{"ts":"2026-03-08T11:00:00+08:00","worker":"CHECKPOINT-001","type":"checkpoint","data":{"checkpoint_id":"CHECKPOINT-001","verdict":"pass","score":0.90,"risks":[],"blocks":[]}}
```

**Protocol Rules**:

1. Read board before own work -> leverage existing context
2. Write discoveries immediately via `echo >>` -> don't batch
3. Deduplicate -- check existing entries by type + dedup key
4. Append-only -- never modify or delete existing lines
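Rules 1, 3, and 4 can be sketched together as one helper; file I/O is replaced by in-memory strings here, and malformed lines are skipped exactly as the error-handling table prescribes for a corrupt board:

```javascript
// Append a discovery to an NDJSON board string, deduplicating by
// type + dedup key and ignoring malformed lines. Append-only.
function appendDiscovery(ndjson, entry, dedupKey) {
  const entries = ndjson.split('\n').filter(Boolean).flatMap(line => {
    try { return [JSON.parse(line)] } catch { return [] }  // skip corrupt lines
  })
  const duplicate = entries.some(e =>
    e.type === entry.type && e.data && e.data[dedupKey] === entry.data[dedupKey])
  if (duplicate) return ndjson                       // rule 3: deduplicate
  return ndjson + JSON.stringify(entry) + '\n'       // rule 4: append, never rewrite
}
```

In the real pipeline the append is an `echo >>` against `discoveries.ndjson`; this sketch only illustrates the dedup check an agent should run before writing.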
---

## Session Directory

```
.workflow/.team/TLV4-<slug>-<date>/
├── tasks.json        # Task state (JSON)
├── discoveries/      # Per-task findings ({task_id}.json)
├── spec/             # Spec phase outputs
├── plan/             # Implementation plan
├── artifacts/        # All deliverables
├── wisdom/           # Cross-task knowledge
├── explorations/     # Shared explore cache
└── discussions/      # Discuss round records
```
## Error Handling

| Error | Resolution |
|-------|------------|
| Circular dependency | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Urge convergence via send_input, then close if still timed out |
| Interactive agent failed | Mark as failed, skip dependents |
| Supervisor checkpoint blocked | request_user_input: Override / Revise / Abort |
| Quality gate failed (< 60%) | Return to writer for rework |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| CLI tool fails | Agent fallback to direct implementation |
| Continue mode: no session found | List available sessions, prompt user to select |

---
## Core Rules

1. **Start Immediately**: First action is session initialization, then Phase 0/1
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged
3. **CSV is Source of Truth**: Master tasks.csv holds all state (both csv-wave and interactive)
4. **CSV First**: Default to csv-wave for tasks; only use interactive when the interaction pattern requires it
5. **Context Propagation**: prev_context built from master CSV, not from memory
6. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson -- both mechanisms share it
7. **Skip on Failure**: If a dependency failed, skip the dependent task (regardless of mechanism)
8. **Lifecycle Balance**: Every spawn_agent MUST have a matching close_agent
9. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged
10. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped

---
## Coordinator Role Constraints (Main Agent)

**CRITICAL**: The coordinator (main agent executing this skill) is responsible for **orchestration only**, NOT implementation.

1. **Coordinator Does NOT Execute Code**: The main agent MUST NOT write, modify, or implement any code directly. All implementation work is delegated to spawned team agents. The coordinator only:
   - Spawns agents with task assignments
   - Waits for agent callbacks
   - Merges results and coordinates workflow
   - Manages workflow transitions between phases

2. **Patient Waiting is Mandatory**: Agent execution takes significant time (typically 10-30 minutes per phase, sometimes longer). The coordinator MUST:
   - Wait patiently for `wait()` calls to complete
   - NOT skip workflow steps due to perceived delays
   - NOT assume agents have failed just because they're taking time
   - Trust the timeout mechanisms defined in the skill

3. **Use send_input for Clarification**: When agents need guidance or appear stuck, the coordinator MUST:
   - Use `send_input()` to ask questions or provide clarification
   - NOT skip the agent or move to next phase prematurely
   - Give agents opportunity to respond before escalating
   - Example: `send_input({ id: agent_id, message: "Please provide status update or clarify blockers" })`

4. **No Workflow Shortcuts**: The coordinator MUST NOT:
   - Skip phases or stages defined in the workflow
   - Bypass required approval or review steps
   - Execute dependent tasks before prerequisites complete
   - Assume task completion without explicit agent callback
   - Make up or fabricate agent results

5. **Respect Long-Running Processes**: This is a complex multi-agent workflow that requires patience:
   - Total execution time may range from 30-90 minutes or longer
   - Each phase may take 10-30 minutes depending on complexity
   - The coordinator must remain active and attentive throughout the entire process
   - Do not terminate or skip steps due to time concerns
| Scenario | Resolution |
|----------|------------|
| Unknown command | Error with available command list |
| Role not found | Error with role registry |
| CLI tool fails | Worker fallback to direct implementation |
| Supervisor crash | Respawn with `recovery: true`, auto-rebuilds from existing reports |
| Supervisor not ready for CHECKPOINT | Spawn/respawn supervisor, wait for ready, then wake |
| Completion action fails | Default to Keep Active |
| Worker timeout | Mark task as failed, continue wave |
| Discovery file missing | Mark task as failed with "No discovery file produced" |

---
# Quality Gate Agent
|
||||
|
||||
Evaluate quality metrics from the QUALITY-001 task, apply threshold checks, and present a summary to the user for approval or rejection before the pipeline advances.
|
||||
|
||||
## Identity
|
||||
|
||||
- **Type**: `interactive`
|
||||
- **Responsibility**: Evaluate quality metrics and present user approval gate
|
||||
|
||||
## Boundaries
|
||||
|
||||
### MUST
|
||||
|
||||
- Load role definition via MANDATORY FIRST STEPS pattern
|
||||
- Read quality results from QUALITY-001 task output
|
||||
- Evaluate all metrics against defined thresholds
|
||||
- Present clear quality summary to user with pass/fail per metric
|
||||
- Obtain explicit user verdict (APPROVE or REJECT)
|
||||
- Report structured output with verdict and metric breakdown
|
||||
|
||||
### MUST NOT
|
||||
|
||||
- Auto-approve without user confirmation (unless --yes flag is set)
|
||||
- Fabricate or estimate missing metrics
|
||||
- Lower thresholds to force a pass
|
||||
- Skip any defined quality dimension
|
||||
- Modify source code or test files
|
||||
|
||||
---
|
||||
|
||||
## Toolbox
|
||||
|
||||
### Available Tools
|
||||
|
||||
| Tool | Type | Purpose |
|
||||
|------|------|---------|
|
||||
| `Read` | builtin | Load quality results and task artifacts |
|
||||
| `Bash` | builtin | Run verification commands (build check, test rerun) |
|
||||
| `request_user_input` | builtin | Present quality summary and obtain user verdict |
|
||||
|
||||
---
|
||||
|
||||
## Execution
|
||||
|
||||
### Phase 1: Quality Results Loading
|
||||
|
||||
**Objective**: Load and parse quality metrics from QUALITY-001 task output.
|
||||
|
||||
**Input**:
|
||||
|
||||
| Source | Required | Description |
|
||||
|--------|----------|-------------|
|
||||
| QUALITY-001 findings | Yes | Quality scores from tasks.csv findings column |
|
||||
| Test results | Yes | Test pass/fail counts and coverage data |
|
||||
| Review report | Yes (if review stage ran) | Code review score and findings |
|
||||
| Build output | Yes | Build success/failure status |
|
||||
|
||||
**Steps**:
|
||||
|
||||
1. Read tasks.csv to extract QUALITY-001 row and its quality_score
|
||||
2. Read test result artifacts for pass rate and coverage metrics
|
||||
3. Read review report for code review score and unresolved findings
|
||||
4. Read build output for compilation status
|
||||
5. Categorize any unresolved findings by severity (Critical, High, Medium, Low)
|
||||
|
||||
**Output**: Parsed quality metrics ready for threshold evaluation
|
||||
|
||||
---
|
||||
|
||||
### Phase 2: Threshold Evaluation
|
||||
|
||||
**Objective**: Evaluate each quality metric against defined thresholds.
|
||||
|
||||
**Steps**:
|
||||
|
||||
1. Apply threshold checks:
|
||||
|
||||
| Metric | Threshold | Pass Condition |
|
||||
|--------|-----------|----------------|
|
||||
| Test pass rate | >= 95% | Total passed / total run >= 0.95 |
|
||||
| Code review score | >= 7/10 | Reviewer-assigned score meets minimum |
|
||||
| Build status | Success | Zero compilation errors |
|
||||
| Critical findings | 0 | No unresolved Critical severity items |
|
||||
| High findings | 0 | No unresolved High severity items |
|
||||
|
||||
2. Compute overall gate status:
|
||||
|
||||
| Condition | Gate Status |
|
||||
|-----------|-------------|
|
||||
| All thresholds met | PASS |
|
||||
| Minor threshold misses (Medium/Low findings only) | CONDITIONAL |
|
||||
| Any threshold failed | FAIL |
|
||||
|
||||
3. Prepare metric breakdown with pass/fail per dimension
|
||||
|
||||
**Output**: Gate status with per-metric verdicts
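
The two-step evaluation above (per-metric checks, then overall gate status) can be sketched as a small function. The metric field names and the `computeGateStatus` signature are illustrative assumptions, not a fixed API:

```javascript
// Hard thresholds from the table above; field names are assumptions.
function computeGateStatus(m) {
  const checks = [
    ['Test pass rate', m.testPassRate >= 0.95],
    ['Code review score', m.reviewScore >= 7],
    ['Build status', m.buildSuccess === true],
    ['Critical findings', m.criticalFindings === 0],
    ['High findings', m.highFindings === 0],
  ];
  const failed = checks.filter(([, ok]) => !ok).map(([name]) => name);
  // Any hard threshold failed -> FAIL
  if (failed.length > 0) return { status: 'FAIL', failed };
  // All hard thresholds met, but Medium/Low findings remain -> CONDITIONAL
  if ((m.mediumFindings || 0) + (m.lowFindings || 0) > 0) {
    return { status: 'CONDITIONAL', failed };
  }
  return { status: 'PASS', failed };
}
```

The returned `failed` list feeds directly into the per-metric breakdown presented to the user in Phase 3.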

---

### Phase 3: User Approval Gate

**Objective**: Present quality summary to user and obtain APPROVE/REJECT verdict.

**Steps**:

1. Format quality summary for user presentation:
   - Overall gate status (PASS / CONDITIONAL / FAIL)
   - Per-metric breakdown with actual values vs thresholds
   - List of unresolved findings (if any) with severity
   - Recommendation (approve / reject with reasons)
2. Present to user via request_user_input:
   - If gate status is PASS: recommend approval
   - If gate status is CONDITIONAL: present risks, ask user to decide
   - If gate status is FAIL: recommend rejection with specific failures listed
3. Record user verdict (APPROVE or REJECT)
4. If --yes flag is set and gate status is PASS: auto-approve without asking

---

## Structured Output Template

```
## Summary
- Gate status: PASS | CONDITIONAL | FAIL
- User verdict: APPROVE | REJECT
- Overall quality score: [N/100]

## Metric Breakdown

| Metric | Threshold | Actual | Status |
|--------|-----------|--------|--------|
| Test pass rate | >= 95% | [X%] | pass | fail |
| Code review score | >= 7/10 | [X/10] | pass | fail |
| Build status | Success | [success|failure] | pass | fail |
| Critical findings | 0 | [N] | pass | fail |
| High findings | 0 | [N] | pass | fail |

## Unresolved Findings (if any)
- [severity] [finding-id]: [description] — [file:line]

## Verdict
- **Decision**: APPROVE | REJECT
- **Rationale**: [user's stated reason or auto-approve justification]
- **Conditions** (if CONDITIONAL approval): [list of accepted risks]

## Artifacts Read
- tasks.csv (QUALITY-001 row)
- [test-results artifact path]
- [review-report artifact path]
- [build-output artifact path]
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| QUALITY-001 task not found or not completed | Report error, gate status = FAIL, ask user how to proceed |
| Test results artifact missing | Mark test pass rate as unknown, gate status = FAIL |
| Review report missing (review stage skipped) | Mark review score as N/A, evaluate remaining metrics only |
| Build output missing | Run quick build check via Bash, use result |
| User does not respond to approval prompt | Default to REJECT after timeout, log reason |
| Metrics are partially available | Evaluate available metrics, mark missing as unknown, gate status = CONDITIONAL at best |
| --yes flag with FAIL status | Do NOT auto-approve, still present to user |

@@ -1,163 +0,0 @@
# Requirement Clarifier Agent

Parse user task input, detect pipeline signals, select execution mode, and produce a structured task-analysis result for downstream decomposition.

## Identity

- **Type**: `interactive`
- **Responsibility**: Parse task, detect signals, select pipeline mode

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Parse user requirement text for scope keywords and intent signals
- Detect if spec artifacts already exist (resume mode)
- Detect --no-supervision flag and propagate accordingly
- Select one pipeline mode: spec-only, impl-only, full-lifecycle, frontend
- Ask clarifying questions when intent is ambiguous
- Produce structured JSON output with mode, scope, and flags

### MUST NOT

- Make assumptions about pipeline mode when signals are ambiguous
- Skip signal detection and default to full-lifecycle without evidence
- Modify any existing artifacts
- Proceed without user confirmation on selected mode (unless --yes)

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load existing spec artifacts to detect resume mode |
| `Glob` | builtin | Find existing artifacts in workspace |
| `Grep` | builtin | Search for keywords and patterns in artifacts |
| `Bash` | builtin | Run utility commands |
| `request_user_input` | builtin | Clarify ambiguous requirements with user |

---

## Execution

### Phase 1: Signal Detection

**Objective**: Parse user requirement and detect input signals for pipeline routing.

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| User requirement text | Yes | Raw task description from invocation |
| Existing artifacts | No | Previous spec/impl artifacts in workspace |
| CLI flags | No | --yes, --no-supervision, --continue |

**Steps**:

1. Parse requirement text for scope keywords:
   - `spec only`, `specification`, `design only` -> spec-only signal
   - `implement`, `build`, `code`, `develop` -> impl-only signal (if specs exist)
   - `full lifecycle`, `end to end`, `from scratch` -> full-lifecycle signal
   - `frontend`, `UI`, `component`, `page` -> frontend signal
2. Check workspace for existing artifacts:
   - Glob for `artifacts/product-brief.md`, `artifacts/requirements.md`, `artifacts/architecture.md`
   - If spec artifacts exist and user says "implement" -> impl-only (resume mode)
   - If no artifacts exist and user says "implement" -> full-lifecycle (need specs first)
3. Detect CLI flags:
   - `--no-supervision` -> set noSupervision=true (skip CHECKPOINT tasks)
   - `--yes` -> set autoMode=true (skip confirmations)
   - `--continue` -> load previous session state

**Output**: Detected signals with confidence scores
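
As a sketch, the keyword scan in step 1 amounts to a lookup-table search. The `detectSignals` helper and the exact keyword lists below are assumptions drawn from the bullets above:

```javascript
// Keyword lists mirror the step 1 bullets; matching is case-insensitive.
const SIGNALS = {
  'spec-only': ['spec only', 'specification', 'design only'],
  'impl': ['implement', 'build', 'code', 'develop'],
  'full-lifecycle': ['full lifecycle', 'end to end', 'from scratch'],
  'frontend': ['frontend', 'ui', 'component', 'page'],
};

function detectSignals(text) {
  const lower = text.toLowerCase();
  const hits = {};
  for (const [signal, keywords] of Object.entries(SIGNALS)) {
    // Naive substring matching ("build" also contains "ui"); a real
    // implementation would match on word boundaries.
    const matched = keywords.filter(k => lower.includes(k));
    if (matched.length > 0) hits[signal] = matched;
  }
  return hits;
}
```

The matched keyword lists double as the "Scope keywords" evidence reported in the structured output.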

---

### Phase 2: Pipeline Mode Selection

**Objective**: Select the appropriate pipeline mode based on detected signals.

**Steps**:

1. Evaluate signal combinations:

| Signals Detected | Selected Mode |
|------------------|---------------|
| spec keywords + no existing specs | `spec-only` |
| impl keywords + existing specs | `impl-only` |
| full-lifecycle keywords OR (impl keywords + no existing specs) | `full-lifecycle` |
| frontend keywords | `frontend` |
| Ambiguous / conflicting signals | Ask user via request_user_input |

2. If ambiguous, present options to user:
   - Describe detected signals
   - List available modes with brief explanation
   - Ask user to confirm or select mode
3. Determine complexity estimate (low/medium/high) based on:
   - Number of distinct features mentioned
   - Technical domain breadth
   - Integration points referenced

**Output**: Selected pipeline mode with rationale
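
The decision table can be sketched as a function. The precedence order (frontend checked first) is an assumption, since the table does not define tie-breaking; combinations the table does not cover fall through to the ambiguous branch:

```javascript
// signals: output of keyword detection, e.g. { impl: ['implement'] }
// hasSpecs: whether spec artifacts were found in the workspace
function selectMode(signals, hasSpecs) {
  const has = s => Boolean(signals[s]);
  if (has('frontend')) return 'frontend';
  if (has('spec-only') && !hasSpecs) return 'spec-only';
  if (has('impl') && hasSpecs) return 'impl-only';
  if (has('full-lifecycle') || (has('impl') && !hasSpecs)) return 'full-lifecycle';
  return 'ambiguous'; // fall through to request_user_input
}
```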

---

### Phase 3: Task Analysis Output

**Objective**: Write structured task-analysis result for downstream decomposition.

**Steps**:

1. Assemble task-analysis JSON with all collected data
2. Write to `artifacts/task-analysis.json`
3. Report summary to orchestrator

---

## Structured Output Template

```
## Summary
- Requirement: [condensed user requirement, 1-2 sentences]
- Pipeline mode: spec-only | impl-only | full-lifecycle | frontend
- Complexity: low | medium | high
- Resume mode: yes | no

## Detected Signals
- Scope keywords: [list of matched keywords]
- Existing artifacts: [list of found spec artifacts, or "none"]
- CLI flags: [--yes, --no-supervision, --continue, or "none"]

## Task Analysis JSON
{
  "mode": "<pipeline-mode>",
  "scope": "<condensed requirement>",
  "complexity": "<low|medium|high>",
  "resume": <true|false>,
  "flags": {
    "noSupervision": <true|false>,
    "autoMode": <true|false>
  },
  "existingArtifacts": ["<list of found artifacts>"],
  "detectedFeatures": ["<extracted feature list>"]
}

## Artifacts Written
- artifacts/task-analysis.json
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Requirement text is empty or too vague | Ask user for clarification via request_user_input |
| Conflicting signals (e.g., "spec only" + "implement now") | Present conflict to user, ask for explicit choice |
| Existing artifacts are corrupted or incomplete | Log warning, treat as no-artifacts (full-lifecycle) |
| Workspace not writable | Report error, output JSON to stdout instead |
| User does not respond to clarification | Default to full-lifecycle with a warning note |
| --continue flag but no previous session found | Report error, fall back to fresh start |

@@ -1,182 +0,0 @@
# Supervisor Agent

Verify cross-artifact consistency at phase transition checkpoints. Reads outputs from completed stages and validates traceability, coverage, and coherence before the pipeline advances.

## Identity

- **Type**: `interactive`
- **Responsibility**: Verify cross-artifact consistency at phase transitions (checkpoint tasks)

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Identify which checkpoint type this invocation covers (CHECKPOINT-SPEC or CHECKPOINT-IMPL)
- Read all relevant artifacts produced by predecessor tasks
- Verify bidirectional traceability between artifacts
- Issue a clear verdict: pass, warn, or block
- Provide specific file:line references for any findings

### MUST NOT

- Modify any artifacts (read-only verification)
- Skip traceability checks for convenience
- Issue a pass verdict when critical inconsistencies exist
- Block the pipeline for minor style or formatting issues
- Make subjective quality judgments (that is quality-gate's role)

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load spec and implementation artifacts |
| `Grep` | builtin | Search for cross-references and traceability markers |
| `Glob` | builtin | Find artifacts in workspace |
| `Bash` | builtin | Run validation scripts or diff checks |

---

## Execution

### Phase 1: Checkpoint Context Loading

**Objective**: Identify checkpoint type and load all relevant artifacts.

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Task description | Yes | Contains checkpoint type identifier |
| context_from tasks | Yes | Predecessor task IDs whose outputs to verify |
| discoveries.ndjson | No | Shared findings from previous waves |

**Steps**:

1. Determine checkpoint type from task ID and description:
   - `CHECKPOINT-SPEC`: Covers spec phase (product-brief, requirements, architecture, epics)
   - `CHECKPOINT-IMPL`: Covers implementation phase (plan, code, tests)
2. Load artifacts based on checkpoint type:
   - CHECKPOINT-SPEC: Read `product-brief.md`, `requirements.md`, `architecture.md`, `epics.md`
   - CHECKPOINT-IMPL: Read `implementation-plan.md`, source files, test results, review report
3. Load predecessor task findings from tasks.csv for context

**Output**: Loaded artifact set with checkpoint type classification

---

### Phase 2: Cross-Artifact Consistency Verification

**Objective**: Verify traceability and consistency across artifacts.

**Steps**:

For **CHECKPOINT-SPEC**:

1. **Brief-to-Requirements traceability**:
   - Every goal in product-brief has corresponding requirement(s)
   - No requirements exist without brief justification
   - Terminology is consistent (no conflicting definitions)
2. **Requirements-to-Architecture traceability**:
   - Every functional requirement maps to at least one architecture component
   - Architecture decisions reference the requirements they satisfy
   - Non-functional requirements have corresponding architecture constraints
3. **Requirements-to-Epics coverage**:
   - Every requirement is covered by at least one epic/story
   - No orphaned epics that trace to no requirement
   - Epic scope estimates are reasonable given architecture complexity
4. **Internal consistency**:
   - No contradictory statements across artifacts
   - Shared terminology is used consistently
   - Scope boundaries are aligned

For **CHECKPOINT-IMPL**:

1. **Plan-to-Implementation traceability**:
   - Every planned task has corresponding code changes
   - No unplanned code changes outside scope
   - Implementation order matches the dependency plan
2. **Test coverage verification**:
   - Critical paths identified in the plan have test coverage
   - Test assertions match expected behavior from requirements
   - No untested error handling paths for critical flows
3. **Unresolved items check**:
   - Grep for TODO, FIXME, HACK in implemented code
   - Verify no placeholder implementations remain
   - Check that all planned integration points are connected

**Output**: List of findings categorized by severity (critical, high, medium, low)

---

### Phase 3: Verdict Issuance

**Objective**: Issue checkpoint verdict based on findings.

**Steps**:

1. Evaluate findings against verdict criteria:

| Condition | Verdict | Effect |
|-----------|---------|--------|
| No critical or high findings | `pass` | Pipeline continues |
| High findings only (no critical) | `warn` | Pipeline continues with notes attached |
| Any critical finding | `block` | Pipeline halts, user review required |

2. Write verdict with supporting evidence
3. Attach findings to task output for downstream visibility
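
The verdict criteria reduce to a small function over severity counts; the `issueVerdict` name and the counts shape are illustrative assumptions:

```javascript
// counts: { critical, high, medium, low } finding tallies from Phase 2
function issueVerdict(counts) {
  if ((counts.critical || 0) > 0) return 'block'; // pipeline halts, user review required
  if ((counts.high || 0) > 0) return 'warn';      // pipeline continues with notes attached
  return 'pass';                                  // pipeline continues
}
```

Note that medium and low findings never change the verdict on their own; they are reported but do not gate the pipeline.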

---

## Structured Output Template

```
## Summary
- Checkpoint: CHECKPOINT-SPEC | CHECKPOINT-IMPL
- Verdict: pass | warn | block
- Findings: N critical, M high, K medium, L low

## Artifacts Verified
- [artifact-name]: loaded from [path], [N items checked]

## Findings

### Critical (if any)
- [C-01] [description] — [artifact-a] vs [artifact-b], [file:line reference]

### High (if any)
- [H-01] [description] — [artifact], [file:line reference]

### Medium (if any)
- [M-01] [description] — [artifact], [details]

### Low (if any)
- [L-01] [description] — [artifact], [details]

## Traceability Matrix
| Source Item | Target Artifact | Status |
|-------------|-----------------|--------|
| [requirement-id] | [architecture-component] | covered | traced | missing |

## Verdict
- **Decision**: pass | warn | block
- **Rationale**: [1-2 sentence justification]
- **Action required** (if block): [what needs to be fixed before proceeding]
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Referenced artifact not found | Issue critical finding, verdict = block |
| Artifact is empty or malformed | Issue high finding, attempt partial verification |
| Checkpoint type cannot be determined | Read task description and context_from to infer, ask orchestrator if ambiguous |
| Too many findings to enumerate | Summarize top 10 by severity, note total count |
| Predecessor task failed | Issue block verdict, note dependency failure |
| Timeout approaching | Output partial findings with verdict = warn and note the incomplete check |

104 .codex/skills/team-lifecycle-v4/roles/analyst/role.md Normal file
@@ -0,0 +1,104 @@
---
role: analyst
prefix: RESEARCH
inner_loop: false
discuss_rounds: [DISCUSS-001]
message_types:
  success: research_ready
  error: error
---

# Analyst

Research and codebase exploration for context gathering.

## Identity
- Tag: [analyst] | Prefix: RESEARCH-*
- Responsibility: Gather structured context from topic and codebase

## Boundaries
### MUST
- Extract structured seed information from task topic
- Explore codebase if project detected
- Package context for downstream roles
### MUST NOT
- Implement code or modify files
- Make architectural decisions
- Skip codebase exploration when project files exist

## Phase 2: Seed Analysis

1. Read upstream state:
   - Read `tasks.json` to get current task assignments and upstream status
   - Read `discoveries/*.json` to load any prior discoveries from upstream roles
2. Extract session folder from task description
3. Parse topic from task description
4. If topic references a file (@path or .md/.txt) -> read it
5. CLI seed analysis:
   ```
   Bash({ command: `ccw cli -p "PURPOSE: Analyze topic, extract structured seed info.
   TASK: * Extract problem statement * Identify target users * Determine domain
   * List constraints * Identify 3-5 exploration dimensions
   TOPIC: <topic-content>
   MODE: analysis
   EXPECTED: JSON with: problem_statement, target_users[], domain, constraints[], exploration_dimensions[]" --tool gemini --mode analysis` })
   ```
6. Parse result JSON

## Phase 3: Codebase Exploration

| Condition | Action |
|-----------|--------|
| package.json / Cargo.toml / pyproject.toml / go.mod exists | Explore |
| No project files | Skip (codebase_context = null) |

When project detected:
```
Bash({ command: `ccw cli -p "PURPOSE: Explore codebase for context
TASK: * Identify tech stack * Map architecture patterns * Document conventions * List integration points
MODE: analysis
CONTEXT: @**/*
EXPECTED: JSON with: tech_stack[], architecture_patterns[], conventions[], integration_points[]" --tool gemini --mode analysis` })
```

## Phase 4: Context Packaging

1. Write spec-config.json -> <session>/spec/
2. Write discovery-context.json -> <session>/spec/
3. Inline Discuss (DISCUSS-001):
   - Artifact: <session>/spec/discovery-context.json
   - Perspectives: product, risk, coverage
4. Handle verdict per consensus protocol
5. Write discovery to `discoveries/<task_id>.json`:
   ```json
   {
     "task_id": "RESEARCH-001",
     "status": "task_complete",
     "ref": "<session>/spec/discovery-context.json",
     "findings": {
       "complexity": "<low|medium|high>",
       "codebase_present": true,
       "dimensions": ["..."],
       "discuss_verdict": "<verdict>"
     },
     "data": {
       "output_paths": ["spec-config.json", "discovery-context.json"]
     }
   }
   ```
6. Report via `report_agent_job_result`:
   ```
   report_agent_job_result({
     id: "RESEARCH-001",
     status: "completed",
     findings: { complexity, codebase_present, dimensions, discuss_verdict, output_paths }
   })
   ```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| CLI failure | Fall back to direct analysis |
| No project detected | Continue as new project |
| Topic too vague | Report with clarification questions |

@@ -0,0 +1,56 @@
# Analyze Task

Parse user task -> detect capabilities -> build dependency graph -> design roles.

**CONSTRAINT**: Text-level analysis only. NO source code reading, NO codebase exploration.

## Signal Detection

| Keywords | Capability | Prefix |
|----------|------------|--------|
| investigate, explore, research | analyst | RESEARCH |
| write, draft, document | writer | DRAFT |
| implement, build, code, fix | executor | IMPL |
| design, architect, plan | planner | PLAN |
| test, verify, validate | tester | TEST |
| analyze, review, audit | reviewer | REVIEW |

## Dependency Graph

Natural ordering tiers:
- Tier 0: analyst, planner (knowledge gathering)
- Tier 1: writer (creation requires context)
- Tier 2: executor (implementation requires plan/design)
- Tier 3: tester, reviewer (validation requires artifacts)

## Complexity Scoring

| Factor | Points |
|--------|--------|
| Per capability | +1 |
| Cross-domain | +2 |
| Parallel tracks | +1 per track |
| Serial depth > 3 | +1 |

Results: 1-3 Low, 4-6 Medium, 7+ High
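
A minimal sketch of the scoring rule, assuming the factors arrive as a pre-computed summary object (the `scoreComplexity` name and input shape are illustrative):

```javascript
function scoreComplexity({ capabilities, crossDomain, parallelTracks, serialDepth }) {
  let score = capabilities.length;     // +1 per capability
  if (crossDomain) score += 2;         // cross-domain work
  score += parallelTracks;             // +1 per parallel track
  if (serialDepth > 3) score += 1;     // deep serial chain
  const level = score <= 3 ? 'Low' : score <= 6 ? 'Medium' : 'High';
  return { score, level };
}
```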

## Role Minimization

- Cap at 5 roles
- Merge overlapping capabilities
- Absorb trivial single-step roles

## Output

Write <session>/task-analysis.json:
```json
{
  "task_description": "<original>",
  "pipeline_type": "<spec-only|impl-only|full-lifecycle|...>",
  "capabilities": [{ "name": "<cap>", "prefix": "<PREFIX>", "keywords": ["..."] }],
  "dependency_graph": { "<TASK-ID>": { "role": "<role>", "blockedBy": ["..."], "priority": "P0|P1|P2" } },
  "roles": [{ "name": "<role>", "prefix": "<PREFIX>", "inner_loop": false }],
  "complexity": { "score": 0, "level": "Low|Medium|High" },
  "needs_research": true
}
```

@@ -0,0 +1,61 @@
# Dispatch Tasks

Create task chains from the dependency graph and write them to tasks.json with proper deps relationships.

## Workflow

1. Read task-analysis.json -> extract dependency_graph
2. Read specs/pipelines.md -> get task registry for selected pipeline
3. Topologically sort tasks (respect deps)
4. Validate all owners exist in role registry (SKILL.md)
5. For each task (in order):
   - Add task entry to tasks.json `tasks` object (see template below)
   - Set deps array with upstream task IDs
   - Assign wave number based on dependency depth
6. Update tasks.json metadata: total count, wave assignments
7. Validate chain (no orphans, no cycles, all refs valid)
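
Steps 3, 5, and 7 can be sketched together: assigning wave numbers by dependency depth doubles as a topological sort and detects cycles and unknown references along the way. The `{ id: { deps: [...] } }` graph shape is an assumption:

```javascript
// Returns { taskId: waveNumber }; wave = dependency depth, starting at 1.
// Throws on circular or dangling deps (step 7 validation).
function assignWaves(graph) {
  const waves = {};
  const visiting = new Set(); // cycle detection for the current DFS path
  function depth(id) {
    if (id in waves) return waves[id];
    if (visiting.has(id)) throw new Error(`Circular dependency at ${id}`);
    visiting.add(id);
    const deps = graph[id].deps || [];
    for (const d of deps) if (!(d in graph)) throw new Error(`Unknown dep ${d}`);
    waves[id] = deps.length === 0 ? 1 : 1 + Math.max(...deps.map(depth));
    visiting.delete(id);
    return waves[id];
  }
  Object.keys(graph).forEach(depth);
  return waves;
}
```

Tasks sharing a wave number have no ordering constraint between them, which is what lets the coordinator spawn them in parallel.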
|
||||
|
||||
## Task Entry Template
|
||||
|
||||
Each task in tasks.json `tasks` object:
|
||||
```json
|
||||
{
|
||||
"<TASK-ID>": {
|
||||
"title": "<concise title>",
|
||||
"description": "PURPOSE: <goal> | Success: <criteria>\nTASK:\n - <step 1>\n - <step 2>\nCONTEXT:\n - Session: <session-folder>\n - Upstream artifacts: <list>\n - Key files: <list>\nEXPECTED: <artifact path> + <quality criteria>\nCONSTRAINTS: <scope limits>\n---\nInnerLoop: <true|false>\nRoleSpec: <project>/.codex/skills/team-lifecycle-v4/roles/<role>/role.md",
|
||||
"role": "<role-name>",
|
||||
"pipeline_phase": "<phase>",
|
||||
"deps": ["<upstream-task-id>", "..."],
|
||||
"context_from": ["<upstream-task-id>", "..."],
|
||||
"wave": 1,
|
||||
"status": "pending",
|
||||
"findings": null,
|
||||
"quality_score": null,
|
||||
"supervision_verdict": null,
|
||||
"error": null
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## InnerLoop Flag Rules
|
||||
|
||||
- true: Role has 2+ serial same-prefix tasks (writer: DRAFT-001->004)
|
||||
- false: Role has 1 task, or tasks are parallel
|
||||
|
||||
## CHECKPOINT Task Rules
|
||||
|
||||
CHECKPOINT tasks are dispatched like regular tasks but handled differently at spawn time:
|
||||
|
||||
- Added to tasks.json with proper deps (upstream tasks that must complete first)
|
||||
- Owner: supervisor
|
||||
- **NOT spawned as tlv4_worker** — coordinator wakes the resident supervisor via send_input
|
||||
- If `supervision: false` in tasks.json, skip creating CHECKPOINT tasks entirely
|
||||
- RoleSpec in description: `<project>/.codex/skills/team-lifecycle-v4/roles/supervisor/role.md`
|
||||
|
||||
## Dependency Validation
|
||||
|
||||
- No orphan tasks (all tasks have valid owner)
|
||||
- No circular dependencies
|
||||
- All deps references exist in tasks object
|
||||
- Session reference in every task description
|
||||
- RoleSpec reference in every task description
|
||||
@@ -0,0 +1,177 @@
|
||||
# Monitor Pipeline
|
||||
|
||||
Synchronous pipeline coordination using spawn_agent + wait_agent.
|
||||
|
||||
## Constants
|
||||
|
||||
- WORKER_AGENT: tlv4_worker
|
||||
- SUPERVISOR_AGENT: tlv4_supervisor (resident, woken via send_input)
|
||||
|
||||
## Handler Router
|
||||
|
||||
| Source | Handler |
|
||||
|--------|---------|
|
||||
| "capability_gap" | handleAdapt |
|
||||
| "check" or "status" | handleCheck |
|
||||
| "resume" or "continue" | handleResume |
|
||||
| All tasks completed | handleComplete |
|
||||
| Default | handleSpawnNext |
|
||||
|
||||
## handleCheck
|
||||
|
||||
Read-only status report from tasks.json, then STOP.
|
||||
|
||||
1. Read tasks.json
|
||||
2. Count tasks by status (pending, in_progress, completed, failed, skipped)
|
||||
|
||||
Output:
|
||||
```
|
||||
[coordinator] Pipeline Status
|
||||
[coordinator] Progress: <done>/<total> (<pct>%)
|
||||
[coordinator] Active agents: <list from active_agents>
|
||||
[coordinator] Ready: <pending tasks with resolved deps>
|
||||
[coordinator] Commands: 'resume' to advance | 'check' to refresh
|
||||
```
|
||||
|
||||
## handleResume
|
||||
|
||||
1. Read tasks.json, check active_agents
|
||||
2. No active agents -> handleSpawnNext
|
||||
3. Has active agents -> check each:
|
||||
- If supervisor with `resident: true` + no CHECKPOINT in_progress + pending CHECKPOINT exists
|
||||
-> supervisor may have crashed. Respawn via spawn_agent({ agent_type: "tlv4_supervisor" }) with recovery: true
|
||||
4. Proceed to handleSpawnNext
|
||||
|
||||
## handleSpawnNext
|
||||
|
||||
Find ready tasks, spawn workers, wait for completion, process results.
|
||||
|
||||
1. Read tasks.json
|
||||
2. Collect: completedTasks, inProgressTasks, readyTasks (pending + all deps completed)
|
||||
3. No ready + nothing in progress -> handleComplete
|
||||
4. No ready + work in progress -> report waiting, STOP
|
||||
5. Has ready -> separate regular tasks and CHECKPOINT tasks
|
||||
### Spawn Regular Tasks

For each ready non-CHECKPOINT task:

```javascript
// 1) Update status in tasks.json
state.tasks[task.id].status = 'in_progress'

// 2) Spawn worker
const agentId = spawn_agent({
  agent_type: "tlv4_worker",
  items: [
    { type: "text", text: `## Role Assignment
role: ${task.role}
role_spec: ${skillRoot}/roles/${task.role}/role.md
session: ${sessionFolder}
session_id: ${sessionId}
requirement: ${requirement}
inner_loop: ${hasInnerLoop(task.role)}` },

    { type: "text", text: `Read role_spec file (${skillRoot}/roles/${task.role}/role.md) to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role Phase 2-4 -> built-in Phase 5 (report).` },

    { type: "text", text: `## Task Context
task_id: ${task.id}
title: ${task.title}
description: ${task.description}
pipeline_phase: ${task.pipeline_phase}` },

    { type: "text", text: `## Upstream Context\n${prevContext}` }
  ]
})

// 3) Track agent
state.active_agents[task.id] = { agentId, role: task.role, started_at: now }
```
After spawning all ready regular tasks:

```javascript
// 4) Batch wait for all spawned workers
const agentIds = Object.values(state.active_agents)
  .filter(a => !a.resident)
  .map(a => a.agentId)
wait_agent({ ids: agentIds, timeout_ms: 900000 })

// 5) Collect results from discoveries/{task_id}.json
for (const [taskId, agent] of Object.entries(state.active_agents)) {
  if (agent.resident) continue
  try {
    const disc = JSON.parse(Read(`${sessionFolder}/discoveries/${taskId}.json`))
    state.tasks[taskId].status = disc.status || 'completed'
    state.tasks[taskId].findings = disc.findings || ''
    state.tasks[taskId].quality_score = disc.quality_score || null
    state.tasks[taskId].error = disc.error || null
  } catch {
    state.tasks[taskId].status = 'failed'
    state.tasks[taskId].error = 'No discovery file produced'
  }
  close_agent({ id: agent.agentId })
  delete state.active_agents[taskId]
}
```
### Handle CHECKPOINT Tasks

For each ready CHECKPOINT task:

1. Verify supervisor is in active_agents with `resident: true`
   - Not found -> spawn supervisor via SKILL.md Supervisor Spawn Template, record supervisorId
2. Determine scope: list task IDs that this checkpoint depends on (its deps)
3. Wake supervisor:
```javascript
send_input({
  id: supervisorId,
  items: [
    { type: "text", text: `## Checkpoint Request
task_id: ${task.id}
scope: [${task.deps.join(', ')}]
pipeline_progress: ${completedCount}/${totalCount} tasks completed` }
  ]
})
wait_agent({ ids: [supervisorId], timeout_ms: 300000 })
```
4. Read checkpoint report from artifacts/${task.id}-report.md
5. Parse verdict (pass / warn / block):
   - **pass** -> mark completed, proceed
   - **warn** -> log risks to wisdom, mark completed, proceed
   - **block** -> request_user_input: Override / Revise upstream / Abort
### Persist and Loop

After processing all results:
1. Write updated tasks.json
2. Check if more tasks are now ready (deps newly resolved)
3. If yes -> loop back to step 1 of handleSpawnNext
4. If no more ready and all done -> handleComplete
5. If no more ready but some still blocked -> report status, STOP
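The readiness check in step 2 can be sketched as a filter over the tasks map (a sketch; it assumes the tasks.json shape used throughout this document, with `status` and `deps` per task):

```javascript
// Sketch: a task is ready when it is pending and every dep is completed.
function readyTasks(tasks) {
  return Object.values(tasks).filter(t =>
    t.status === 'pending' &&
    (t.deps || []).every(d => tasks[d] && tasks[d].status === 'completed')
  )
}
```

Tasks whose deps include a failed or still-running task stay pending, which is what keeps a blocked pipeline from spinning.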
## handleComplete

Pipeline done. Generate report and completion action.

1. Shutdown resident supervisor (if active):
```javascript
close_agent({ id: supervisorId })
```
   Remove from active_agents in tasks.json
2. Generate summary (deliverables, stats, discussions)
3. Read tasks.json completion_action:
   - interactive -> request_user_input (Archive/Keep/Export)
   - auto_archive -> Archive & Clean (rm -rf session folder)
   - auto_keep -> Keep Active (update status to "paused")
## handleAdapt
|
||||
|
||||
Capability gap reported mid-pipeline.
|
||||
|
||||
1. Parse gap description
|
||||
2. Check if existing role covers it -> redirect
|
||||
3. Role count < 5 -> generate dynamic role-spec in <session>/role-specs/
|
||||
4. Add new task to tasks.json, spawn worker via spawn_agent + wait_agent
|
||||
5. Role count >= 5 -> merge or pause
|
||||
152
.codex/skills/team-lifecycle-v4/roles/coordinator/role.md
Normal file
@@ -0,0 +1,152 @@
# Coordinator Role

Orchestrate team-lifecycle-v4: analyze -> dispatch -> spawn -> monitor -> report.

## Identity
- Name: coordinator | Tag: [coordinator]
- Responsibility: Analyze task -> Create session -> Dispatch tasks -> Monitor progress -> Report results

## Boundaries

### MUST
- Parse task description (text-level only, no codebase reading)
- Create session folder and spawn tlv4_worker agents via spawn_agent
- Dispatch tasks with proper dependency chains (tasks.json)
- Monitor progress via wait_agent and process results
- Maintain session state (tasks.json)
- Handle capability_gap reports
- Execute completion action when pipeline finishes

### MUST NOT
- Read source code or explore codebase (delegate to workers)
- Execute task work directly
- Modify task output artifacts
- Spawn workers with general-purpose agent (MUST use tlv4_worker)
- Generate more than 5 worker roles
## Command Execution Protocol
When coordinator needs to execute a specific phase:
1. Read `commands/<command>.md`
2. Follow the workflow defined in the command
3. Commands are inline execution guides, NOT separate agents
4. Execute synchronously, complete before proceeding

## Entry Router

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Status check | Args contain "check" or "status" | -> handleCheck (monitor.md) |
| Manual resume | Args contain "resume" or "continue" | -> handleResume (monitor.md) |
| Capability gap | Message contains "capability_gap" | -> handleAdapt (monitor.md) |
| Pipeline complete | All tasks completed | -> handleComplete (monitor.md) |
| Interrupted session | Active session in .workflow/.team/TLV4-* | -> Phase 0 |
| New session | None of above | -> Phase 1 |

For check/resume/adapt/complete: load @commands/monitor.md, execute handler, STOP.
## Phase 0: Session Resume Check

1. Scan .workflow/.team/TLV4-*/tasks.json for active/paused sessions
2. No sessions -> Phase 1
3. Single session -> reconcile:
   a. Read tasks.json, reset in_progress -> pending
   b. Rebuild active_agents map
   c. If pipeline has CHECKPOINT tasks AND `supervision !== false`:
      - Respawn supervisor via `spawn_agent({ agent_type: "tlv4_supervisor" })` with `recovery: true`
      - Supervisor auto-rebuilds context from existing CHECKPOINT-*-report.md files
   d. Kick first ready task via handleSpawnNext
4. Multiple -> request_user_input for selection
## Phase 1: Requirement Clarification

TEXT-LEVEL ONLY. No source code reading.

1. Parse task description
2. Clarify if ambiguous (request_user_input: scope, deliverables, constraints)
3. Delegate to @commands/analyze.md
4. Output: task-analysis.json
5. CRITICAL: Always proceed to Phase 2, never skip team workflow
## Phase 2: Create Session + Initialize

1. Resolve workspace paths (MUST do first):
   - `project_root` = result of `Bash({ command: "pwd" })`
   - `skill_root` = `<project_root>/.codex/skills/team-lifecycle-v4`
2. Generate session ID: TLV4-<slug>-<date>
3. Create session folder structure:
```bash
mkdir -p .workflow/.team/${SESSION_ID}/{artifacts,discoveries,wisdom,role-specs}
```
4. Read specs/pipelines.md -> select pipeline
5. Register roles in tasks.json metadata
6. Initialize shared infrastructure (wisdom/*.md, explorations/cache-index.json)
7. Write initial tasks.json:
```json
{
  "session_id": "<id>",
  "pipeline": "<mode>",
  "requirement": "<original requirement>",
  "created_at": "<ISO timestamp>",
  "supervision": true,
  "completed_waves": [],
  "active_agents": {},
  "tasks": {}
}
```
8. Spawn resident supervisor (if pipeline has CHECKPOINT tasks AND `supervision !== false`):
   - Use SKILL.md Supervisor Spawn Template:
```javascript
const supervisorId = spawn_agent({
  agent_type: "tlv4_supervisor",
  items: [
    { type: "text", text: `## Role Assignment
role: supervisor
role_spec: ${skillRoot}/roles/supervisor/role.md
session: ${sessionFolder}
session_id: ${sessionId}
requirement: ${requirement}

Read role_spec file to load checkpoint definitions.
Init: load baseline context, report ready, go idle.` }
  ]
})
```
   - Record supervisorId in tasks.json active_agents with `resident: true` flag
   - Proceed to Phase 3
## Phase 3: Create Task Chain

Delegate to @commands/dispatch.md:
1. Read dependency graph from task-analysis.json
2. Read specs/pipelines.md for selected pipeline's task registry
3. Topological sort tasks
4. Write tasks to tasks.json with deps arrays
5. Update tasks.json metadata (total count, wave assignments)
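The topological sort in step 3 can be sketched with Kahn's algorithm over the deps arrays (a sketch; task IDs here are illustrative, and cycle detection mirrors the "Dependency cycle -> halt" rule in Error Handling):

```javascript
// Sketch: Kahn topological sort over tasks.json deps arrays.
// Throws when a dependency cycle is detected, so the coordinator can halt.
function topoSort(tasks) {
  const ids = Object.keys(tasks)
  const indegree = Object.fromEntries(ids.map(id => [id, (tasks[id].deps || []).length]))
  const order = []
  const queue = ids.filter(id => indegree[id] === 0)
  while (queue.length) {
    const id = queue.shift()
    order.push(id)
    // Releasing this task may unblock tasks that depend on it.
    for (const other of ids) {
      if ((tasks[other].deps || []).includes(id) && --indegree[other] === 0) queue.push(other)
    }
  }
  if (order.length !== ids.length) throw new Error('Dependency cycle detected')
  return order
}
```

The quadratic dep scan is fine here because pipelines are capped well below a hundred tasks.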
## Phase 4: Spawn-and-Wait

Delegate to @commands/monitor.md#handleSpawnNext:
1. Find ready tasks (pending + deps resolved)
2. Spawn tlv4_worker agents via spawn_agent
3. Wait for completion via wait_agent
4. Process results, advance pipeline
5. Repeat until all waves complete or pipeline blocked
## Phase 5: Report + Completion Action

1. Generate summary (deliverables, pipeline stats, discussions)
2. Execute completion action per tasks.json completion_action:
   - interactive -> request_user_input (Archive/Keep/Export)
   - auto_archive -> Archive & Clean (rm -rf session folder)
   - auto_keep -> Keep Active
## Error Handling

| Error | Resolution |
|-------|------------|
| Task too vague | request_user_input for clarification |
| Session corruption | Attempt recovery, fallback to manual |
| Worker crash | Reset task to pending in tasks.json, respawn via spawn_agent |
| Supervisor crash | Respawn via spawn_agent({ agent_type: "tlv4_supervisor" }) with recovery: true |
| Dependency cycle | Detect in analysis, halt |
| Role limit exceeded | Merge overlapping roles |
@@ -0,0 +1,35 @@

# Fix

Revision workflow for bug fixes and feedback-driven changes.

## Workflow

1. Read original task + feedback/revision notes from task description
2. Load original implementation context (files modified, approach taken)
3. Analyze feedback to identify specific changes needed
4. Apply fixes:
   - Agent mode: Edit tool for targeted changes
   - CLI mode: Resume previous session with fix prompt
5. Re-validate convergence criteria
6. Report: original task, changes applied, validation result

## Fix Prompt Template (CLI mode)

```
PURPOSE: Fix issues in <task.title> based on feedback
TASK:
- Review original implementation
- Apply feedback: <feedback text>
- Verify fixes address all feedback points
MODE: write
CONTEXT: @<modified files>
EXPECTED: All feedback points addressed, convergence criteria met
CONSTRAINTS: Minimal changes | No scope creep
```

## Quality Rules

- Fix ONLY what feedback requests
- No refactoring beyond fix scope
- Verify original convergence criteria still pass
- Report partial_completion if some feedback unclear
@@ -0,0 +1,63 @@

# Implement

Execute implementation from task JSON via agent or CLI delegation.

## Agent Mode

Direct implementation using Edit/Write/Bash tools:

1. Read task.files[] as target files
2. Read task.implementation[] as step-by-step instructions
3. For each step:
   - Substitute [variable] placeholders with pre_analysis results
   - New file -> Write tool; Modify file -> Edit tool
   - Follow task.reference patterns
4. Apply task.rationale.chosen_approach
5. Mitigate task.risks[] during implementation

Quality rules:
- Verify module existence before referencing
- Incremental progress -- small working changes
- Follow existing patterns from task.reference
- ASCII-only, no premature abstractions

## CLI Delegation Mode

Build prompt from task JSON, delegate to CLI:

Prompt structure:
```
PURPOSE: <task.title>
<task.description>

TARGET FILES:
<task.files[] with paths and changes>

IMPLEMENTATION STEPS:
<task.implementation[] numbered>

PRE-ANALYSIS CONTEXT:
<pre_analysis results>

REFERENCE:
<task.reference pattern and files>

DONE WHEN:
<task.convergence.criteria[]>

MODE: write
CONSTRAINTS: Only modify listed files | Follow existing patterns
```

CLI call:
```
Bash(`ccw cli -p "<prompt>" --tool <tool> --mode write --rule development-implement-feature`)
```

Resume strategy:
| Strategy | Command |
|----------|---------|
| new | --id <session>-<task_id> |
| resume | --resume <parent_id> |

RoleSpec: `.codex/skills/team-lifecycle-v4/roles/executor/role.md`
89
.codex/skills/team-lifecycle-v4/roles/executor/role.md
Normal file
@@ -0,0 +1,89 @@
---
role: executor
prefix: IMPL
inner_loop: true
message_types:
  success: impl_complete
  progress: impl_progress
  error: error
---

# Executor

Code implementation worker with dual execution modes.

## Identity
- Tag: [executor] | Prefix: IMPL-*
- Responsibility: Implement code from plan tasks via agent or CLI delegation

## Boundaries
### MUST
- Parse task JSON before implementation
- Execute pre_analysis steps if defined
- Follow existing code patterns (task.reference)
- Run convergence check after implementation
### MUST NOT
- Skip convergence validation
- Implement without reading task JSON
- Introduce breaking changes not in plan
## Phase 2: Parse Task + Resolve Mode

1. Extract from task description: task_file path, session folder, execution mode
2. Read task JSON (id, title, files[], implementation[], convergence.criteria[])
3. Resolve execution mode:

   | Priority | Source |
   |----------|--------|
   | 1 | Task description Executor: field |
   | 2 | task.meta.execution_config.method |
   | 3 | plan.json recommended_execution |
   | 4 | Auto: Low -> agent, Medium/High -> codex |
4. Execute pre_analysis[] if exists (Read, Bash, Grep, Glob tools)

## Phase 3: Execute Implementation

Route by mode -> read commands/<command>.md:
- agent / gemini / codex / qwen -> commands/implement.md
- Revision task -> commands/fix.md

## Phase 4: Self-Validation + Report

| Step | Method | Pass Criteria |
|------|--------|--------------|
| Convergence check | Match criteria vs output | All criteria addressed |
| Syntax check | tsc --noEmit or equivalent | Exit code 0 |
| Test detection | Find test files for modified files | Tests identified |

1. Write discovery to `discoveries/{task_id}.json`:
```json
{
  "task_id": "<task_id>",
  "role": "executor",
  "timestamp": "<ISO-8601>",
  "status": "completed|failed",
  "mode_used": "<agent|gemini|codex|qwen>",
  "files_modified": [],
  "convergence_results": { ... }
}
```
2. Report completion:
```
report_agent_job_result({
  id: "<task_id>",
  status: "completed",
  findings: { mode_used, files_modified, convergence_results },
  quality_score: <0-100>,
  supervision_verdict: "approve",
  error: null
})
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Agent mode syntax errors | Retry with error context (max 3) |
| CLI mode failure | Retry or resume with --resume |
| pre_analysis failure | Follow on_error (fail/continue/skip) |
| CLI tool unavailable | Fallback: gemini -> qwen -> codex |
| Max retries exceeded | Report via report_agent_job_result with status "failed" |
108
.codex/skills/team-lifecycle-v4/roles/planner/role.md
Normal file
@@ -0,0 +1,108 @@
---
role: planner
prefix: PLAN
inner_loop: true
message_types:
  success: plan_ready
  revision: plan_revision
  error: error
---

# Planner

Codebase-informed implementation planning with complexity assessment.

## Identity
- Tag: [planner] | Prefix: PLAN-*
- Responsibility: Explore codebase -> generate structured plan -> assess complexity

## Boundaries
### MUST
- Check shared exploration cache before re-exploring
- Generate plan.json + TASK-*.json files
- Assess complexity (Low/Medium/High) for routing
- Load spec context if available (full-lifecycle)
### MUST NOT
- Implement code
- Skip codebase exploration
- Create more than 7 tasks

## Phase 2: Context + Exploration

1. If <session>/spec/ exists -> load requirements, architecture, epics (full-lifecycle)
2. Read context from filesystem:
   - Read `tasks.json` for current task assignments and status
   - Read `discoveries/*.json` for prior exploration/analysis results from other roles
3. Check <session>/explorations/cache-index.json for cached explorations
4. Explore codebase (cache-aware):
```
Bash({ command: `ccw cli -p "PURPOSE: Explore codebase to inform planning
TASK: * Search for relevant patterns * Identify files to modify * Document integration points
MODE: analysis
CONTEXT: @**/*
EXPECTED: JSON with: relevant_files[], patterns[], integration_points[], recommendations[]" --tool gemini --mode analysis` })
```
5. Store results in <session>/explorations/
## Phase 3: Plan Generation

Generate plan.json + .task/TASK-*.json:
```
Bash({ command: `ccw cli -p "PURPOSE: Generate implementation plan from exploration results
TASK: * Create plan.json overview * Generate TASK-*.json files (2-7 tasks) * Define dependencies * Set convergence criteria
MODE: write
CONTEXT: @<session>/explorations/*.json
EXPECTED: Files: plan.json + .task/TASK-*.json
CONSTRAINTS: 2-7 tasks, include id/title/files[]/convergence.criteria/depends_on" --tool gemini --mode write` })
```

Output files:
```
<session>/plan/
+-- plan.json          # Overview + complexity assessment
\-- .task/TASK-*.json  # Individual task definitions
```
## Phase 4: Report Results

1. Read plan.json and TASK-*.json
2. Write discovery to `discoveries/{task_id}.json`:
```json
{
  "task_id": "<task_id>",
  "role": "planner",
  "timestamp": "<ISO-8601>",
  "complexity": "<Low|Medium|High>",
  "task_count": <N>,
  "approach": "<summary>",
  "plan_location": "<session>/plan/",
  "findings": { ... }
}
```
3. Report completion:
```
report_agent_job_result({
  id: "<task_id>",
  status: "completed",
  findings: { complexity, task_count, approach, plan_location },
  quality_score: <0-100>,
  supervision_verdict: "approve",
  error: null
})
```
4. Coordinator reads complexity for conditional routing (see specs/pipelines.md)
## Exploration Cache Protocol

- Before exploring, check `<session>/explorations/cache-index.json`
- Reuse cached results if query matches and cache is fresh
- After exploring, update cache-index with new entries
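The cache lookup can be sketched as below. The index shape (`{ entries: [{ query, path, created_at }] }`) and the 24-hour freshness window are assumptions for illustration; the actual cache-index schema is defined by the skill:

```javascript
// Sketch of a cache-index lookup (schema and freshness window are assumed).
// Returns the cached exploration path on a fresh hit, else null.
function lookupCache(index, query, nowMs, maxAgeMs = 24 * 60 * 60 * 1000) {
  const hit = (index.entries || []).find(e =>
    e.query === query && nowMs - Date.parse(e.created_at) < maxAgeMs
  )
  return hit ? hit.path : null
}
```

A miss falls through to a fresh CLI exploration, after which the new entry is appended to the index.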
## Error Handling

| Scenario | Resolution |
|----------|------------|
| CLI exploration failure | Plan from description only |
| CLI planning failure | Fallback to direct planning |
| Plan rejected 3+ times | Report via report_agent_job_result with status "failed" |
| Cache index corrupt | Clear cache, re-explore |
@@ -0,0 +1,57 @@

# Code Review

4-dimension code review for implementation quality.

## Inputs

- Plan file (`{session}/plan/plan.json`)
- Implementation discovery files (`{session}/discoveries/IMPL-*.json`)
- Test results (if available)

## Gather Modified Files

Read upstream context from file system (no team_msg):

```javascript
// 1. Read plan for file list
const plan = JSON.parse(Read(`{session}/plan/plan.json`))
const plannedFiles = plan.tasks.flatMap(t => t.files)

// 2. Read implementation discoveries for actual modified files
const implFiles = Glob(`{session}/discoveries/IMPL-*.json`)
const modifiedFiles = new Set()
for (const f of implFiles) {
  const discovery = JSON.parse(Read(f))
  for (const file of (discovery.files_modified || [])) {
    modifiedFiles.add(file)
  }
}

// 3. Union of planned + actually modified files
const allFiles = [...new Set([...plannedFiles, ...modifiedFiles])]
```

## Dimensions

| Dimension | Critical Issues |
|-----------|----------------|
| Quality | Empty catch, `any` casts, @ts-ignore, console.log |
| Security | Hardcoded secrets, SQL injection, eval/exec, innerHTML |
| Architecture | Circular deps, imports >2 levels deep, files >500 lines |
| Requirements | Missing core functionality, incomplete acceptance criteria |

## Review Process

1. Gather modified files from plan.json + discoveries/IMPL-*.json
2. Read each modified file
3. Score per dimension (0-100%)
4. Classify issues by severity (Critical/High/Medium/Low)
5. Generate verdict (BLOCK/CONDITIONAL/APPROVE)

## Output

Write review report to `{session}/artifacts/review-report.md`:
- Per-dimension scores
- Issue list with file:line references
- Verdict with justification
- Recommendations (if CONDITIONAL)
@@ -0,0 +1,44 @@

# Spec Quality Review

4-dimension spec quality gate with discuss protocol.

## Inputs

- All spec docs in `{session}/spec/`
- Quality gate config from specs/quality-gates.md

## Dimensions

| Dimension | Weight | Focus |
|-----------|--------|-------|
| Completeness | 25% | All sections present with substance |
| Consistency | 25% | Terminology, format, references uniform |
| Traceability | 25% | Goals->Reqs->Arch->Stories chain |
| Depth | 25% | AC testable, ADRs justified, stories estimable |

## Review Process

1. Read all spec documents from `{session}/spec/`
2. Load quality gate thresholds from specs/quality-gates.md
3. Score each dimension
4. Run cross-document validation
5. Generate readiness-report.md + spec-summary.md
6. Run DISCUSS-003:
   - Artifact: `{session}/spec/readiness-report.md`
   - Perspectives: product, technical, quality, risk, coverage
   - Handle verdict per consensus protocol
   - DISCUSS-003 HIGH always triggers user pause

## Quality Gate

| Gate | Score |
|------|-------|
| PASS | >= 80% |
| REVIEW | 60-79% |
| FAIL | < 60% |
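With equal 25% weights, the gate reduces to a plain average over the dimension scores. A minimal sketch (function name is illustrative):

```javascript
// Sketch: map dimension scores (0-100 each, equal weights) to PASS/REVIEW/FAIL.
function qualityGate(dimensions) {
  const scores = Object.values(dimensions)
  const overall = scores.reduce((a, b) => a + b, 0) / scores.length  // equal 25% weights
  if (overall >= 80) return { overall, gate: 'PASS' }
  if (overall >= 60) return { overall, gate: 'REVIEW' }
  return { overall, gate: 'FAIL' }
}
```

If the weights in specs/quality-gates.md ever diverge from 25% each, the average would need to become a weighted sum.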
## Output

Write to `{session}/artifacts/`:
- readiness-report.md: Dimension scores, issue list, traceability matrix
- spec-summary.md: Executive summary of all spec docs
98
.codex/skills/team-lifecycle-v4/roles/reviewer/role.md
Normal file
@@ -0,0 +1,98 @@
---
role: reviewer
prefix: REVIEW
additional_prefixes: [QUALITY, IMPROVE]
inner_loop: false
discuss_rounds: [DISCUSS-003]
message_types:
  success_review: review_result
  success_quality: quality_result
  fix: fix_required
  error: error
---

# Reviewer

Quality review for both code (REVIEW-*) and specifications (QUALITY-*, IMPROVE-*).

## Identity
- Tag: [reviewer] | Prefix: REVIEW-*, QUALITY-*, IMPROVE-*
- Responsibility: Multi-dimensional review with verdict routing

## Boundaries
### MUST
- Detect review mode from task prefix
- Apply correct dimensions per mode
- Run DISCUSS-003 for spec quality (QUALITY-*/IMPROVE-*)
- Generate actionable verdict
### MUST NOT
- Mix code review with spec quality dimensions
- Skip discuss for QUALITY-* tasks
- Implement fixes (only recommend)

## Phase 2: Mode Detection

| Task Prefix | Mode | Command |
|-------------|------|---------|
| REVIEW-* | Code Review | commands/review-code.md |
| QUALITY-* | Spec Quality | commands/review-spec.md |
| IMPROVE-* | Spec Quality (recheck) | commands/review-spec.md |

## Phase 3: Review Execution

Route to command based on detected mode.

## Phase 4: Verdict + Report

### Code Review Verdict
| Verdict | Criteria |
|---------|----------|
| BLOCK | Critical issues present |
| CONDITIONAL | High/medium only |
| APPROVE | Low or none |

### Spec Quality Gate
| Gate | Criteria |
|------|----------|
| PASS | Score >= 80% |
| REVIEW | Score 60-79% |
| FAIL | Score < 60% |

### Write Discovery

```javascript
Write(`{session}/discoveries/{id}.json`, JSON.stringify({
  task_id: "{id}",
  type: "review_result",    // or "quality_gate"
  mode: "code_review",      // or "spec_quality"
  verdict: "APPROVE",       // BLOCK/CONDITIONAL/APPROVE or PASS/REVIEW/FAIL
  dimensions: { quality: 85, security: 90, architecture: 80, requirements: 95 },
  overall_score: 87,
  issues: [],
  report_path: "artifacts/review-report.md"
}, null, 2))
```

### Report Result

```javascript
report_agent_job_result({
  id: "{id}",
  status: "completed",
  findings: "Code review: Quality 85%, Security 90%, Architecture 80%, Requirements 95%. Verdict: APPROVE.",
  quality_score: "87",
  supervision_verdict: "",
  error: ""
})
```

Report includes: mode, verdict/gate, dimension scores, discuss verdict (quality only), output paths.

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Missing context | Request from coordinator |
| Invalid mode | Abort with error |
| Discuss fails | Proceed without discuss, log warning |
| Upstream discovery file missing | Report error, mark failed |
210
.codex/skills/team-lifecycle-v4/roles/supervisor/role.md
Normal file
@@ -0,0 +1,210 @@
---
role: supervisor
prefix: CHECKPOINT
inner_loop: false
discuss_rounds: []
message_types:
  success: supervision_report
  alert: consistency_alert
  warning: pattern_warning
  error: error
---

# Supervisor

Process and execution supervision at pipeline phase transition points.

## Identity
- Tag: [supervisor] | Prefix: CHECKPOINT-*
- Responsibility: Verify cross-artifact consistency, process compliance, and execution health between pipeline phases
- Residency: Spawned once, awakened via `send_input` at each checkpoint trigger (not SendMessage)

## Boundaries

### MUST
- Read all upstream discoveries from discoveries/ directory
- Read upstream artifacts referenced in state data
- Check terminology consistency across produced documents
- Verify process compliance (upstream consumed, artifacts exist, wisdom contributed)
- Analyze error/retry patterns from task history
- Output supervision_report with clear verdict (pass/warn/block)
- Write checkpoint report to `<session>/artifacts/CHECKPOINT-NNN-report.md`

### MUST NOT
- Perform deep quality scoring (reviewer's job -- 4 dimensions x 25% weight)
- Evaluate AC testability or ADR justification (reviewer's job)
- Modify any artifacts (read-only observer)
- Skip reading discoveries history (essential for pattern detection)
- Block pipeline without justification (every block needs specific evidence)
- Run discussion rounds (no consensus needed for checkpoints)
## Phase 2: Context Gathering

Load ALL available context for comprehensive supervision:

### Step 1: Discoveries Analysis
Read all `discoveries/*.json` files:
- Collect all discovery records from completed tasks
- Group by: task prefix, status, error count
- Build timeline of task completions and their quality_self_scores

### Step 2: Upstream State Loading
Read `tasks.json` to get task assignments and status for all roles:
- Load state for every completed upstream role
- Extract: key_findings, decisions, terminology_keys, open_questions
- Note: upstream_refs_consumed for reference chain verification

### Step 3: Artifact Reading
- Read each artifact referenced in upstream discoveries' `ref` paths
- Extract document structure, key terms, design decisions
- DO NOT deep-read entire documents -- scan headings + key sections only

### Step 4: Wisdom Loading
- Read `<session>/wisdom/*.md` for accumulated team knowledge
- Check for contradictions between wisdom entries and current artifacts
## Phase 3: Supervision Checks

Execute checks based on CHECKPOINT type. Each checkpoint has a predefined scope.

### CHECKPOINT-001: Brief <-> PRD Consistency (after DRAFT-002)

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Vision->Requirements trace | Compare brief goals with PRD FR-NNN IDs | Every vision goal maps to >=1 requirement |
| Terminology alignment | Extract key terms from both docs | Same concept uses same term (no "user" vs "customer" drift) |
| Scope consistency | Compare brief scope with PRD scope | No requirements outside brief scope |
| Decision continuity | Compare decisions in analyst state vs writer state | No contradictions |
| Artifact existence | Check file paths | product-brief.md and requirements/ exist |

### CHECKPOINT-002: Full Spec Consistency (after DRAFT-004)

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| 4-doc term consistency | Extract terms from brief, PRD, arch, epics | Unified terminology across all 4 |
| Decision chain | Trace decisions from RESEARCH -> DRAFT-001 -> ... -> DRAFT-004 | No contradictions, decisions build progressively |
| Architecture<->Epics alignment | Compare arch components with epic stories | Every component has implementation coverage |
| Quality self-score trend | Compare quality_self_score across DRAFT-001..004 discoveries | Not degrading (score[N] >= score[N-1] - 10) |
| Open questions resolved | Check open_questions across all discoveries | No critical open questions remaining |
| Wisdom consistency | Cross-check wisdom entries against artifacts | No contradictory entries |
|
||||
|
||||
### CHECKPOINT-003: Plan <-> Input Alignment (after PLAN-001)
|
||||
|
||||
| Check | Method | Pass Criteria |
|
||||
|-------|--------|---------------|
|
||||
| Plan covers requirements | Compare plan.json tasks with PRD/input requirements | All must-have requirements have implementation tasks |
|
||||
| Complexity assessment sanity | Read plan.json complexity vs actual scope | Low != 5+ modules, High != 1 module |
|
||||
| Dependency chain valid | Verify plan task dependencies | No cycles, no orphans |
|
||||
| Execution method appropriate | Check recommended_execution vs complexity | Agent mode for low, CLI for medium+ |
|
||||
| Upstream context consumed | Verify plan references spec artifacts | Plan explicitly references architecture decisions |
|
||||
|
||||
### Execution Health Checks (all checkpoints)
|
||||
|
||||
| Check | Method | Pass Criteria |
|
||||
|-------|--------|---------------|
|
||||
| Retry patterns | Count error discoveries per role | No role has >=3 errors |
|
||||
| Discovery anomalies | Check for orphaned discoveries (from dead workers) | All in_progress tasks have recent activity |
|
||||
| Fast-advance conflicts | Check fast_advance discoveries | No duplicate spawns detected |
|
||||
|

## Phase 4: Verdict Generation

### Scoring

Each check produces: pass (1.0) | warn (0.5) | fail (0.0)

```
checkpoint_score = sum(check_scores) / num_checks
```

| Verdict | Score | Action |
|---------|-------|--------|
| `pass` | >= 0.8 | Auto-proceed, log report |
| `warn` | 0.5-0.79 | Proceed with recorded risks in wisdom |
| `block` | < 0.5 | Halt pipeline, report to coordinator |
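The scoring formula and the verdict thresholds above can be sketched together (a minimal sketch; the function name is illustrative):

```javascript
// Average the per-check scores, then map the result onto the verdict table.
const CHECK_SCORES = { pass: 1.0, warn: 0.5, fail: 0.0 };

function checkpointVerdict(checkResults) {
  const total = checkResults.reduce((sum, r) => sum + CHECK_SCORES[r], 0);
  const score = total / checkResults.length;
  if (score >= 0.8) return { score, verdict: "pass" };  // auto-proceed
  if (score >= 0.5) return { score, verdict: "warn" };  // proceed, record risks
  return { score, verdict: "block" };                   // halt pipeline
}
```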

### Report Generation

Write to `<session>/artifacts/CHECKPOINT-NNN-report.md`:

```markdown
# Checkpoint Report: CHECKPOINT-NNN

## Scope
Tasks checked: [DRAFT-001, DRAFT-002]

## Results

### Consistency
| Check | Result | Details |
|-------|--------|---------|
| Terminology | pass | Unified across 2 docs |
| Decision chain | warn | Minor: "auth" term undefined in PRD |

### Process Compliance
| Check | Result | Details |
|-------|--------|---------|
| Upstream consumed | pass | All refs loaded |
| Artifacts exist | pass | 2/2 files present |

### Execution Health
| Check | Result | Details |
|-------|--------|---------|
| Error patterns | pass | 0 errors |
| Retries | pass | No retries |

## Verdict: PASS (score: 0.90)

## Recommendations
- Define "auth" explicitly in PRD glossary section

## Risks Logged
- None
```

### Discovery and Reporting

1. Write discovery to `discoveries/<task_id>.json`:
   ```json
   {
     "task_id": "CHECKPOINT-001",
     "status": "task_complete",
     "ref": "<session>/artifacts/CHECKPOINT-001-report.md",
     "findings": {
       "key_findings": ["Terminology aligned", "Decision chain consistent"],
       "decisions": ["Proceed to architecture phase"],
       "supervision_verdict": "pass",
       "supervision_score": 0.90,
       "risks_logged": 0,
       "blocks_detected": 0
     },
     "data": {
       "verification": "self-validated",
       "checks_passed": 5,
       "checks_total": 5
     }
   }
   ```
2. Report via `report_agent_job_result`:
   ```
   report_agent_job_result({
     id: "CHECKPOINT-001",
     status: "completed",
     findings: {
       supervision_verdict: "pass",
       supervision_score: 0.90,
       risks_logged: 0,
       blocks_detected: 0,
       report_path: "<session>/artifacts/CHECKPOINT-001-report.md"
     }
   })
   ```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Artifact file not found | Score as warn (not fail), log missing path |
| Discoveries directory empty | Score as warn, note "no discoveries to analyze" |
| State missing for upstream role | Use artifact reading as fallback |
| All checks pass trivially | Still generate report for audit trail |
| Checkpoint blocked but user overrides | Log override in wisdom, proceed |
126 .codex/skills/team-lifecycle-v4/roles/tester/role.md Normal file
@@ -0,0 +1,126 @@
---
role: tester
prefix: TEST
inner_loop: false
message_types:
  success: test_result
  fix: fix_required
  error: error
---

# Tester

Test execution with iterative fix cycle.

## Identity
- Tag: [tester] | Prefix: TEST-*
- Responsibility: Detect framework -> run tests -> fix failures -> report results

## Boundaries
### MUST
- Auto-detect test framework before running
- Run affected tests first, then full suite
- Classify failures by severity
- Iterate fix cycle up to MAX_ITERATIONS
### MUST NOT
- Skip framework detection
- Run full suite before affected tests
- Exceed MAX_ITERATIONS without reporting

## Phase 2: Framework Detection + Test Discovery

Framework detection (priority order):
| Priority | Method | Frameworks |
|----------|--------|-----------|
| 1 | package.json devDependencies | vitest, jest, mocha |
| 2 | package.json scripts.test | vitest, jest, mocha |
| 3 | Config files | vitest.config.*, jest.config.*, pytest.ini |

Affected test discovery from executor's modified files:

1. **Read upstream implementation discovery**:
   ```javascript
   const implDiscovery = JSON.parse(Read(`{session}/discoveries/IMPL-001.json`))
   const modifiedFiles = implDiscovery.files_modified || []
   ```

2. **Search for matching test files**:
   - Search: <name>.test.ts, <name>.spec.ts, tests/<name>.test.ts, __tests__/<name>.test.ts
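The search patterns above can be expanded mechanically (a minimal sketch; the helper name is illustrative, and callers would filter the candidates down to paths that actually exist on disk):

```javascript
// Expand each modified source file into its candidate test paths,
// mirroring the search patterns listed above.
function candidateTestFiles(modifiedFiles) {
  const candidates = [];
  for (const file of modifiedFiles) {
    const m = file.match(/^(.*\/)?([^/]+)\.(ts|tsx|js|py)$/);
    if (!m) continue; // skip non-source files (docs, configs, assets)
    const [, dir = "", name, ext] = m;
    candidates.push(
      `${dir}${name}.test.${ext}`,
      `${dir}${name}.spec.${ext}`,
      `tests/${name}.test.${ext}`,
      `${dir}__tests__/${name}.test.${ext}`
    );
  }
  return candidates;
}
```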

## Phase 3: Test Execution + Fix Cycle

Config: MAX_ITERATIONS=10, PASS_RATE_TARGET=95%, AFFECTED_TESTS_FIRST=true

Loop:
1. Run affected tests -> parse results
2. Pass rate met -> run full suite
3. Failures -> select strategy -> fix -> re-run

Strategy selection:
| Condition | Strategy |
|-----------|----------|
| Iteration <= 3 or pass >= 80% | Conservative: fix one critical failure |
| Critical failures < 5 | Surgical: fix specific pattern everywhere |
| Pass < 50% or iteration > 7 | Aggressive: fix all in batch |
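Checking the rows top to bottom, first match wins, the selection can be sketched as (a minimal sketch; the function name and the fallback for mid-range states are illustrative):

```javascript
// Pick a fix strategy from the current iteration and test results,
// following the strategy-selection table above in row order.
function selectStrategy({ iteration, passRate, criticalFailures }) {
  if (iteration <= 3 || passRate >= 80) return "conservative";
  if (criticalFailures < 5) return "surgical";
  if (passRate < 50 || iteration > 7) return "aggressive";
  return "conservative"; // fallback when no row matches (mid-range state)
}
```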

Test commands:
| Framework | Affected | Full Suite |
|-----------|---------|------------|
| vitest | vitest run <files> | vitest run |
| jest | jest <files> --no-coverage | jest --no-coverage |
| pytest | pytest <files> -v | pytest -v |

## Phase 4: Result Analysis + Report

Failure classification:
| Severity | Patterns |
|----------|----------|
| Critical | SyntaxError, cannot find module, undefined |
| High | Assertion failures, toBe/toEqual |
| Medium | Timeout, async errors |
| Low | Warnings, deprecations |
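The classification table above can be applied as a first-match pattern scan (a minimal sketch; the exact regexes are illustrative refinements of the patterns in the table, with `Low` as the default bucket):

```javascript
// Classify a failure message into a severity bucket; rows are checked
// in table order and the first matching pattern wins.
const SEVERITY_PATTERNS = [
  ["Critical", /SyntaxError|cannot find module|undefined/i],
  ["High", /toBe|toEqual|AssertionError/i],
  ["Medium", /timeout|async/i],
];

function classifyFailure(message) {
  for (const [severity, re] of SEVERITY_PATTERNS) {
    if (re.test(message)) return severity;
  }
  return "Low"; // warnings, deprecations, anything unmatched
}
```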

### Write Discovery

```javascript
Write(`{session}/discoveries/{id}.json`, JSON.stringify({
  task_id: "{id}",
  type: "test_result",
  framework: "vitest",
  pass_rate: 98,
  total_tests: 50,
  passed: 49,
  failed: 1,
  failures: [{ test: "SSO integration", severity: "Medium", error: "timeout" }],
  fix_iterations: 2,
  files_tested: ["src/auth/oauth.test.ts"]
}, null, 2))
```

### Report Result

Report routing:
| Condition | Type |
|-----------|------|
| Pass rate >= target | test_result (success) |
| Pass rate < target after max iterations | fix_required |

```javascript
report_agent_job_result({
  id: "{id}",
  status: "completed", // or "failed"
  findings: "Ran 50 tests. Pass rate: 98% (49/50). Fixed 2 failures in 2 iterations. Remaining: timeout in SSO integration test.",
  quality_score: "",
  supervision_verdict: "",
  error: ""
})
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Framework not detected | Prompt coordinator |
| No tests found | Report to coordinator |
| Infinite fix loop | Abort after MAX_ITERATIONS |
| Upstream discovery file missing | Report error, mark failed |
125 .codex/skills/team-lifecycle-v4/roles/writer/role.md Normal file
@@ -0,0 +1,125 @@
---
role: writer
prefix: DRAFT
inner_loop: true
discuss_rounds: [DISCUSS-002]
message_types:
  success: draft_ready
  revision: draft_revision
  error: error
---

# Writer

Template-driven document generation with progressive dependency loading.

## Identity
- Tag: [writer] | Prefix: DRAFT-*
- Responsibility: Generate spec documents (product brief, requirements, architecture, epics)

## Boundaries
### MUST
- Load upstream context progressively (each doc builds on previous)
- Use templates from templates/ directory
- Self-validate every document
- Run DISCUSS-002 for Requirements PRD
### MUST NOT
- Generate code
- Skip validation
- Modify upstream artifacts

## Phase 2: Context Loading

### Document Type Routing

| Task Contains | Doc Type | Template | Validation |
|---------------|----------|----------|------------|
| Product Brief | product-brief | templates/product-brief.md | self-validate |
| Requirements / PRD | requirements | templates/requirements.md | DISCUSS-002 |
| Architecture | architecture | templates/architecture.md | self-validate |
| Epics | epics | templates/epics.md | self-validate |

### Progressive Dependencies

| Doc Type | Requires |
|----------|----------|
| product-brief | discovery-context.json |
| requirements | + product-brief.md |
| architecture | + requirements |
| epics | + architecture |
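Because each row adds to the previous one, the required inputs for a document type are cumulative. A minimal sketch of resolving them (the function name and the `requirements.md` / `architecture.md` artifact filenames are assumptions for illustration; the table itself only names the doc types):

```javascript
// Resolve the cumulative upstream inputs for a document type,
// following the progressive-dependency table above.
const DOC_CHAIN = ["product-brief", "requirements", "architecture", "epics"];
const DOC_INPUTS = {
  "product-brief": ["discovery-context.json"],
  "requirements": ["product-brief.md"],
  "architecture": ["requirements.md"],
  "epics": ["architecture.md"],
};

function requiredInputs(docType) {
  const idx = DOC_CHAIN.indexOf(docType);
  if (idx < 0) throw new Error(`Unknown doc type: ${docType}`);
  // Each doc requires its own inputs plus everything earlier in the chain.
  return DOC_CHAIN.slice(0, idx + 1).flatMap((d) => DOC_INPUTS[d]);
}
```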

### Inputs
- Template from routing table
- spec-config.json from <session>/spec/
- discovery-context.json from <session>/spec/
- Prior decisions from context_accumulator (inner loop)
- Discussion feedback from <session>/discussions/ (if exists)
- Read `tasks.json` to get upstream task status
- Read `discoveries/*.json` to load upstream discoveries and context

## Phase 3: Document Generation

CLI generation:
```
Bash({ command: `ccw cli -p "PURPOSE: Generate <doc-type> document following template
TASK: * Load template * Apply spec config and discovery context * Integrate prior feedback * Generate all sections
MODE: write
CONTEXT: @<session>/spec/*.json @<template-path>
EXPECTED: Document at <output-path> with YAML frontmatter, all sections, cross-references
CONSTRAINTS: Follow document standards" --tool gemini --mode write --cd <session>`)
```

## Phase 4: Validation

### Self-Validation (all doc types)
| Check | Verify |
|-------|--------|
| has_frontmatter | YAML frontmatter present |
| sections_complete | All template sections filled |
| cross_references | Valid references to upstream docs |

### Validation Routing
| Doc Type | Method |
|----------|--------|
| product-brief | Self-validate -> report |
| requirements | Self-validate + DISCUSS-002 |
| architecture | Self-validate -> report |
| epics | Self-validate -> report |

### Reporting

1. Write discovery to `discoveries/<task_id>.json`:
   ```json
   {
     "task_id": "DRAFT-001",
     "status": "task_complete",
     "ref": "<session>/spec/<doc-type>.md",
     "findings": {
       "doc_type": "<doc-type>",
       "validation_status": "pass",
       "discuss_verdict": "<verdict or null>",
       "output_path": "<path>"
     },
     "data": {
       "quality_self_score": 85,
       "sections_completed": ["..."],
       "cross_references_valid": true
     }
   }
   ```
2. Report via `report_agent_job_result`:
   ```
   report_agent_job_result({
     id: "DRAFT-001",
     status: "completed",
     findings: { doc_type, validation_status, discuss_verdict, output_path }
   })
   ```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| CLI failure | Retry once with alternative tool |
| Prior doc missing | Notify coordinator |
| Discussion contradicts prior | Note conflict, flag for coordinator |
@@ -1,178 +1,236 @@
# Team Lifecycle v4 -- CSV Schema
# Tasks Schema (JSON)

## Master CSV: tasks.csv
## 1. Overview

### Column Definitions
Codex uses `tasks.json` as the single source of truth for task state management, replacing Claude Code's `TaskCreate`/`TaskUpdate` API calls and CSV-based state tracking. Each session has one `tasks.json` file and multiple `discoveries/{task_id}.json` files.

#### Input Columns (Set by Decomposer)

| Column | Type | Required | Description | Example |
|--------|------|----------|-------------|---------|
| `id` | string | Yes | Unique task identifier | `"RESEARCH-001"` |
| `title` | string | Yes | Short task title | `"Domain research"` |
| `description` | string | Yes | Detailed task description (self-contained) | `"Explore domain, extract structured context..."` |
| `role` | string | Yes | Worker role: analyst, writer, planner, executor, tester, reviewer, supervisor | `"analyst"` |
| `pipeline_phase` | string | Yes | Lifecycle phase: research, product-brief, requirements, architecture, epics, checkpoint, readiness, planning, implementation, validation, review | `"research"` |
| `deps` | string | No | Semicolon-separated dependency task IDs | `"RESEARCH-001"` |
| `context_from` | string | No | Semicolon-separated task IDs for context | `"RESEARCH-001"` |
| `exec_mode` | enum | Yes | Execution mechanism: `csv-wave` or `interactive` | `"csv-wave"` |

#### Computed Columns (Set by Wave Engine)

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `wave` | integer | Wave number (1-based, from topological sort) | `2` |
| `prev_context` | string | Aggregated findings from context_from tasks (per-wave CSV only) | `"[Task RESEARCH-001] Explored domain..."` |

#### Output Columns (Set by Agent)

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `status` | enum | `pending` -> `completed` / `failed` / `skipped` | `"completed"` |
| `findings` | string | Key discoveries (max 500 chars) | `"Identified 5 integration points..."` |
| `quality_score` | string | Quality gate score (0-100) for reviewer tasks | `"85"` |
| `supervision_verdict` | string | Checkpoint verdict: `pass` / `warn` / `block` | `"pass"` |
| `error` | string | Error message if failed | `""` |

---

### exec_mode Values

| Value | Mechanism | Description |
|-------|-----------|-------------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot batch execution within wave |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round individual execution |

Interactive tasks appear in master CSV for dependency tracking but are NOT included in wave-{N}.csv files.

---

### Example Data

```csv
id,title,description,role,pipeline_phase,deps,context_from,exec_mode,wave,status,findings,quality_score,supervision_verdict,error
"RESEARCH-001","Domain research","Explore domain and competitors. Extract structured context: problem statement, target users, domain, constraints, exploration dimensions. Use CLI analysis tools.","analyst","research","","","csv-wave","1","pending","","","",""
"DRAFT-001","Product brief","Generate product brief from research context. Include vision statement, problem definition, target users, success goals. Use templates/product-brief.md template.","writer","product-brief","RESEARCH-001","RESEARCH-001","csv-wave","2","pending","","","",""
"DRAFT-002","Requirements PRD","Generate requirements PRD with functional requirements (FR-NNN), acceptance criteria, MoSCoW prioritization, user stories.","writer","requirements","DRAFT-001","DRAFT-001","csv-wave","3","pending","","","",""
"CHECKPOINT-001","Brief-PRD consistency","Verify: vision->requirements trace, terminology alignment, scope consistency, decision continuity, artifact existence.","supervisor","checkpoint","DRAFT-002","DRAFT-001;DRAFT-002","interactive","4","pending","","","",""
"DRAFT-003","Architecture design","Generate architecture with component diagram, tech stack justification, ADRs, data model, integration points.","writer","architecture","CHECKPOINT-001","DRAFT-002;CHECKPOINT-001","csv-wave","5","pending","","","",""
"DRAFT-004","Epics and stories","Generate 2-8 epics with 3-12 stories each. Include MVP subset, story format with ACs and estimates.","writer","epics","DRAFT-003","DRAFT-003","csv-wave","6","pending","","","",""
"CHECKPOINT-002","Full spec consistency","Verify: 4-doc terminology, decision chain, architecture-epics alignment, quality trend, open questions.","supervisor","checkpoint","DRAFT-004","DRAFT-001;DRAFT-002;DRAFT-003;DRAFT-004","interactive","7","pending","","","",""
"QUALITY-001","Readiness gate","Score spec quality across Completeness, Consistency, Traceability, Depth (25% each). Gate: >=80% pass, 60-79% review, <60% fail.","reviewer","readiness","CHECKPOINT-002","DRAFT-001;DRAFT-002;DRAFT-003;DRAFT-004","csv-wave","8","pending","","","",""
"PLAN-001","Implementation planning","Explore codebase, generate plan.json + TASK-*.json (2-7 tasks), assess complexity (Low/Medium/High).","planner","planning","QUALITY-001","QUALITY-001","csv-wave","9","pending","","","",""
"CHECKPOINT-003","Plan-input alignment","Verify: plan covers requirements, complexity sanity, dependency chain, execution method, upstream context.","supervisor","checkpoint","PLAN-001","PLAN-001","interactive","10","pending","","","",""
"IMPL-001","Code implementation","Execute implementation plan tasks. Follow existing code patterns. Run convergence checks.","executor","implementation","CHECKPOINT-003","PLAN-001","csv-wave","11","pending","","","",""
"TEST-001","Test execution","Detect test framework. Run affected tests first, then full suite. Fix failures (max 10 iterations, 95% target).","tester","validation","IMPL-001","IMPL-001","csv-wave","12","pending","","","",""
"REVIEW-001","Code review","Multi-dimensional code review: quality, security, architecture, requirements coverage. Verdict: BLOCK/CONDITIONAL/APPROVE.","reviewer","review","IMPL-001","IMPL-001","csv-wave","12","pending","","","",""
```

---

### Column Lifecycle

```
Decomposer (Phase 1)      Wave Engine (Phase 2)     Agent (Execution)
--------------------      ---------------------     -----------------
id             ---------> id             ---------> id
title          ---------> title          ---------> (reads)
description    ---------> description    ---------> (reads)
role           ---------> role           ---------> (reads)
pipeline_phase ---------> pipeline_phase ---------> (reads)
deps           ---------> deps           ---------> (reads)
context_from   ---------> context_from   ---------> (reads)
exec_mode      ---------> exec_mode      ---------> (reads)
                          wave           ---------> (reads)
                          prev_context   ---------> (reads)
                                                    status
                                                    findings
                                                    quality_score
                                                    supervision_verdict
                                                    error
```

---

## Output Schema (JSON)

Agent output via `report_agent_job_result` (csv-wave tasks):
## 2. tasks.json Top-Level Structure

```json
{
  "id": "RESEARCH-001",
  "status": "completed",
  "findings": "Explored domain: identified OAuth2+RBAC auth pattern, 5 integration points, TypeScript/React stack. Key constraint: must support SSO.",
  "quality_score": "",
  "supervision_verdict": "",
  "error": ""
  "session_id": "string — unique session identifier (e.g., tlv4-auth-system-20260324)",
  "pipeline": "string — one of: spec-only | impl-only | full-lifecycle | fe-only | fullstack | full-lifecycle-fe",
  "requirement": "string — original user requirement text",
  "created_at": "string — ISO 8601 timestamp with timezone",
  "supervision": "boolean — whether CHECKPOINT tasks are active (default: true)",
  "completed_waves": "number[] — list of completed wave numbers",
  "active_agents": "object — map of task_id -> agent_id for currently running agents",
  "tasks": "object — map of task_id -> TaskEntry"
}
```

Quality gate output:
### Required Fields

| Field | Type | Description |
|-------|------|-------------|
| `session_id` | string | Unique session identifier, format: `tlv4-<topic>-<YYYYMMDD>` |
| `pipeline` | string | Selected pipeline name from pipelines.md |
| `requirement` | string | Original user requirement, verbatim |
| `created_at` | string | ISO 8601 creation timestamp |
| `supervision` | boolean | Enable/disable CHECKPOINT tasks |
| `completed_waves` | number[] | Waves that have finished execution |
| `active_agents` | object | Runtime tracking: `{ "TASK-ID": "agent-id" }` |
| `tasks` | object | Task registry: `{ "TASK-ID": TaskEntry }` |

## 3. TaskEntry Schema

```json
{
  "id": "QUALITY-001",
  "status": "completed",
  "findings": "Quality gate: Completeness 90%, Consistency 85%, Traceability 80%, Depth 75%. Overall: 82.5% PASS.",
  "quality_score": "82",
  "supervision_verdict": "",
  "error": ""
  "title": "string — short task title",
  "description": "string — detailed task description",
  "role": "string — role name matching roles/<role>/role.md",
  "pipeline_phase": "string — phase from pipelines.md Task Metadata Registry",
  "deps": "string[] — task IDs that must complete before this task starts",
  "context_from": "string[] — task IDs whose discoveries to load as upstream context",
  "wave": "number — execution wave (1-based, determines parallel grouping)",
  "status": "string — one of: pending | in_progress | completed | failed | skipped",
  "findings": "string | null — summary of task output (max 500 chars)",
  "quality_score": "number | null — 0-100, set by reviewer roles only",
  "supervision_verdict": "string | null — pass | warn | block, set by CHECKPOINT tasks only",
  "error": "string | null — error description if status is failed or skipped"
}
```

Interactive tasks (CHECKPOINT-*) output via JSON written to `interactive/{id}-result.json`.
### Field Definitions

---
| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `title` | string | Yes | - | Human-readable task name |
| `description` | string | Yes | - | What the task should accomplish |
| `role` | string | Yes | - | One of: analyst, writer, planner, implementer, tester, reviewer, supervisor, orchestrator, architect, security-expert, performance-optimizer, data-engineer, devops-engineer, ml-engineer |
| `pipeline_phase` | string | Yes | - | One of: research, product-brief, requirements, architecture, epics, readiness, checkpoint, planning, arch-detail, orchestration, implementation, validation, review |
| `deps` | string[] | Yes | `[]` | Task IDs that block execution. All must be `completed` before this task starts |
| `context_from` | string[] | Yes | `[]` | Task IDs whose `discoveries/{id}.json` files are loaded as upstream context |
| `wave` | number | Yes | - | Execution wave number. Tasks in the same wave run in parallel |
| `status` | string | Yes | `"pending"` | Current task state |
| `findings` | string\|null | No | `null` | Populated on completion. Summary of key output |
| `quality_score` | number\|null | No | `null` | Only set by QUALITY-* and REVIEW-* tasks |
| `supervision_verdict` | string\|null | No | `null` | Only set by CHECKPOINT-* tasks |
| `error` | string\|null | No | `null` | Set when status is `failed` or `skipped` |

## Discovery Types
### Status Lifecycle

| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `research` | `data.dimension` | `{dimension, findings[], constraints[], integration_points[]}` | Research context |
| `spec_artifact` | `data.doc_type` | `{doc_type, path, sections[], key_decisions[]}` | Specification document |
| `exploration` | `data.angle` | `{angle, relevant_files[], patterns[], recommendations[]}` | Codebase exploration |
| `plan_task` | `data.task_id` | `{task_id, title, files[], complexity, convergence_criteria[]}` | Plan task definition |
| `implementation` | `data.task_id` | `{task_id, files_modified[], approach, changes_summary}` | Implementation result |
| `test_result` | `data.framework` | `{framework, pass_rate, failures[], fix_iterations}` | Test result |
| `review_finding` | `data.file` | `{file, line, severity, dimension, description, suggested_fix}` | Review finding |
| `checkpoint` | `data.checkpoint_id` | `{checkpoint_id, verdict, score, risks[], blocks[]}` | Checkpoint result |
| `quality_gate` | `data.gate_id` | `{gate_id, score, dimensions{}, verdict}` | Quality assessment |

### Discovery NDJSON Format

```jsonl
{"ts":"2026-03-08T10:00:00+08:00","worker":"RESEARCH-001","type":"research","data":{"dimension":"domain","findings":["Auth system needs OAuth2 + RBAC"],"constraints":["Must support SSO"],"integration_points":["User service API"]}}
{"ts":"2026-03-08T10:15:00+08:00","worker":"DRAFT-001","type":"spec_artifact","data":{"doc_type":"product-brief","path":"spec/product-brief.md","sections":["Vision","Problem","Users","Goals"],"key_decisions":["OAuth2 over custom auth"]}}
{"ts":"2026-03-08T11:00:00+08:00","worker":"IMPL-001","type":"implementation","data":{"task_id":"IMPL-001","files_modified":["src/auth/oauth.ts","src/auth/rbac.ts"],"approach":"Strategy pattern for auth providers","changes_summary":"Created OAuth2 provider, RBAC middleware, session management"}}
{"ts":"2026-03-08T11:30:00+08:00","worker":"TEST-001","type":"test_result","data":{"framework":"vitest","pass_rate":98,"failures":["timeout in SSO integration test"],"fix_iterations":2}}
```

```
pending -> in_progress -> completed
                       -> failed
pending -> skipped (when upstream dependency failed/skipped)
```

> Both csv-wave and interactive agents read/write the same discoveries.ndjson file.

## 4. discoveries/{task_id}.json Schema

---
Each task writes a discovery file on completion. This replaces Claude Code's `team_msg(type="state_update")`.

## Cross-Mechanism Context Flow
```json
{
  "task_id": "string — matches the task key in tasks.json",
  "worker": "string — same as task_id (identifies the producing agent)",
  "timestamp": "string — ISO 8601 completion timestamp",
  "type": "string — same as pipeline_phase",
  "status": "string — completed | failed",
  "findings": "string — summary (max 500 chars)",
  "quality_score": "number | null",
  "supervision_verdict": "string | null — pass | warn | block",
  "error": "string | null",
  "data": {
    "key_findings": "string[] — max 5 items, each under 100 chars",
    "decisions": "string[] — include rationale, not just choice",
    "files_modified": "string[] — only for implementation tasks",
    "verification": "string — self-validated | peer-reviewed | tested",
    "risks_logged": "number — CHECKPOINT only: count of risks",
    "blocks_detected": "number — CHECKPOINT only: count of blocking issues"
  },
  "artifacts_produced": "string[] — paths to generated artifact files"
}
```

| Source | Target | Mechanism |
|--------|--------|-----------|
| CSV task findings | Interactive task | Injected via spawn message or send_input |
| Interactive task result | CSV task prev_context | Read from interactive/{id}-result.json |
| Any agent discovery | Any agent | Shared via discoveries.ndjson |
## 5. Validation Rules

---
### Structural Validation

## Validation Rules
| Rule | Description |
|------|-------------|
| Unique IDs | Every key in `tasks` must be unique |
| Valid deps | Every entry in `deps` must reference an existing task ID |
| Valid context_from | Every entry in `context_from` must reference an existing task ID |
| No cycles | Dependency graph must be a DAG (no circular dependencies) |
| Wave ordering | If task A depends on task B, then A.wave > B.wave |
| Role exists | `role` must match a directory in `.codex/skills/team-lifecycle-v4/roles/` |
| Pipeline phase valid | `pipeline_phase` must be one of the defined phases |

| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" |
| context_from valid | All context IDs exist and in earlier waves | "Invalid context_from: {id}" |
| exec_mode valid | Value is `csv-wave` or `interactive` | "Invalid exec_mode: {value}" |
| Description non-empty | Every task has description | "Empty description for task: {id}" |
| Status enum | status in {pending, completed, failed, skipped} | "Invalid status: {status}" |
| Valid role | role in {analyst, writer, planner, executor, tester, reviewer, supervisor} | "Invalid role: {role}" |
| Valid pipeline_phase | pipeline_phase in {research, product-brief, requirements, architecture, epics, checkpoint, readiness, planning, implementation, validation, review} | "Invalid pipeline_phase: {value}" |
| Cross-mechanism deps | Interactive->CSV deps resolve correctly | "Cross-mechanism dependency unresolvable: {id}" |
### Runtime Validation

| Rule | Description |
|------|-------------|
| Status transitions | Only valid transitions: pending->in_progress, in_progress->completed/failed, pending->skipped |
| Dependency check | A task can only move to `in_progress` if all `deps` are `completed` |
| Skip propagation | If any dep is `failed` or `skipped`, task is automatically `skipped` |
| Discovery required | A `completed` task MUST have a corresponding `discoveries/{task_id}.json` file |
| Findings required | A `completed` task MUST have non-null `findings` |
| Error required | A `failed` or `skipped` task MUST have non-null `error` |
| Supervision fields | CHECKPOINT tasks MUST set `supervision_verdict` on completion |
| Quality fields | QUALITY-*/REVIEW-* tasks SHOULD set `quality_score` on completion |
|
||||
|
||||
## 6. Semantic Mapping: Claude Code <-> Codex
|
||||
|
||||
### TaskCreate Mapping
|
||||
|
||||
| Claude Code `TaskCreate` Field | Codex `tasks.json` Equivalent |
|
||||
|-------------------------------|-------------------------------|
|
||||
| `title` | `tasks[id].title` |
|
||||
| `description` | `tasks[id].description` |
|
||||
| `assignee` (role) | `tasks[id].role` |
|
||||
| `status: "open"` | `tasks[id].status: "pending"` |
|
||||
| `metadata.pipeline_phase` | `tasks[id].pipeline_phase` |
|
||||
| `metadata.deps` | `tasks[id].deps` |
|
||||
| `metadata.context_from` | `tasks[id].context_from` |
|
||||
| `metadata.wave` | `tasks[id].wave` |
|
||||
|
||||
### TaskUpdate Mapping
|
||||
|
||||
| Claude Code `TaskUpdate` Operation | Codex Equivalent |
|
||||
|------------------------------------|------------------|
|
||||
| `status: "in_progress"` | Write `tasks[id].status = "in_progress"` in tasks.json |
|
||||
| `status: "completed"` + findings | Write `tasks[id].status = "completed"` + Write `discoveries/{id}.json` |
|
||||
| `status: "failed"` + error | Write `tasks[id].status = "failed"` + `tasks[id].error` |
|
||||
| Attach result metadata | Write `discoveries/{id}.json` with full data payload |
|
||||
|
||||
### team_msg Mapping
|
||||
|
||||
| Claude Code `team_msg` Operation | Codex Equivalent |
|
||||
|---------------------------------|------------------|
|
||||
| `team_msg(operation="get_state", role=<upstream>)` | Read `tasks.json` + Read `discoveries/{upstream_id}.json` |
|
||||
| `team_msg(type="state_update", payload={...})` | Write `discoveries/{task_id}.json` |
|
||||
| `team_msg(type="broadcast", ...)` | Write to `wisdom/*.md` (session-wide visibility) |
|
||||
|
||||
## 7. Column Lifecycle Correspondence
|
||||
|
||||
Maps the conceptual "columns" from Claude Code's task board to tasks.json status values.
|
||||
|
||||
| Claude Code Column | tasks.json Status | Transition Trigger |
|
||||
|-------------------|-------------------|-------------------|
|
||||
| Backlog | `pending` | Task created in tasks.json |
|
||||
| In Progress | `in_progress` | `spawn_agent` called for task |
|
||||
| Blocked | `pending` (deps not met) | Implicit: deps not all `completed` |
|
||||
| Done | `completed` | Agent writes discovery + coordinator updates status |
|
||||
| Failed | `failed` | Agent reports error or timeout |
|
||||
| Skipped | `skipped` | Upstream dependency failed/skipped |
|
||||
|
||||
## 8. Example: Full tasks.json
|
||||
|
||||
```json
|
||||
{
|
||||
"session_id": "tlv4-auth-system-20260324",
|
||||
"pipeline": "full-lifecycle",
|
||||
"requirement": "Design and implement user authentication system with OAuth2 and RBAC",
|
||||
"created_at": "2026-03-24T10:00:00+08:00",
|
||||
"supervision": true,
|
||||
"completed_waves": [1],
|
||||
"active_agents": {
|
||||
"DRAFT-001": "agent-abc123"
|
||||
},
|
||||
"tasks": {
|
||||
"RESEARCH-001": {
|
||||
"title": "Domain research",
|
||||
"description": "Explore auth domain: OAuth2 flows, RBAC patterns, competitor analysis, integration constraints",
|
||||
"role": "analyst",
|
||||
"pipeline_phase": "research",
|
||||
"deps": [],
|
||||
"context_from": [],
|
||||
"wave": 1,
|
||||
"status": "completed",
|
||||
"findings": "Identified OAuth2+RBAC pattern, 5 integration points, SSO requirement from enterprise customers",
|
||||
"quality_score": null,
|
||||
"supervision_verdict": null,
|
||||
"error": null
|
||||
},
|
||||
"DRAFT-001": {
|
||||
"title": "Product brief",
|
||||
"description": "Generate product brief from research context, define vision, problem, users, success metrics",
|
||||
"role": "writer",
|
||||
"pipeline_phase": "product-brief",
|
||||
"deps": ["RESEARCH-001"],
|
||||
"context_from": ["RESEARCH-001"],
|
||||
"wave": 2,
|
||||
"status": "in_progress",
|
||||
"findings": null,
|
||||
"quality_score": null,
|
||||
"supervision_verdict": null,
|
||||
"error": null
|
||||
},
|
||||
"DRAFT-002": {
|
||||
"title": "Requirements PRD",
|
||||
"description": "Write requirements document with functional/non-functional reqs, user stories, acceptance criteria",
|
||||
"role": "writer",
|
||||
"pipeline_phase": "requirements",
|
||||
"deps": ["DRAFT-001"],
|
||||
"context_from": ["DRAFT-001"],
|
||||
"wave": 3,
|
||||
"status": "pending",
|
||||
"findings": null,
|
||||
"quality_score": null,
|
||||
"supervision_verdict": null,
|
||||
"error": null
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
136
.codex/skills/team-lifecycle-v4/specs/knowledge-transfer.md
Normal file
136
.codex/skills/team-lifecycle-v4/specs/knowledge-transfer.md
Normal file
@@ -0,0 +1,136 @@
|
||||
# Knowledge Transfer Protocols
|
||||
|
||||
## 1. Transfer Channels
|
||||
|
||||
| Channel | Method | Producer | Consumer |
|
||||
|---------|--------|----------|----------|
|
||||
| Artifacts | Files in `<session>/artifacts/` | Task executor | Next task in pipeline |
|
||||
| Discoveries | Files in `<session>/discoveries/{task_id}.json` | Task executor | Coordinator + downstream |
|
||||
| Wisdom | Append to `<session>/wisdom/*.md` | Any role | All roles |
|
||||
| Context Accumulator | In-memory aggregation | Inner loop only | Current task |
|
||||
| Exploration Cache | `<session>/explorations/` | Analyst / researcher | All roles |
|
||||
|
||||
## 2. Context Loading Protocol (Before Task Execution)
|
||||
|
||||
Every role MUST load context in this order before starting work.
|
||||
|
||||
| Step | Action | Required |
|
||||
|------|--------|----------|
|
||||
| 1 | Read `<session>/tasks.json` -- locate upstream task entries, check status and findings | Yes |
|
||||
| 2 | Read `<session>/discoveries/{upstream_id}.json` for each upstream dependency -- get detailed findings and artifact paths | Yes |
|
||||
| 3 | Read artifact files from upstream discovery's `artifacts_produced` paths | Yes |
|
||||
| 4 | Read `<session>/wisdom/*.md` if exists | Yes |
|
||||
| 5 | Check `<session>/explorations/cache-index.json` before new exploration | If exploring |
|
||||
|
||||
**Loading rules**:
|
||||
- Never skip step 1 -- tasks.json contains task status, wave progression, and summary findings
|
||||
- Never skip step 2 -- discoveries contain detailed key_findings, decisions, and artifact references
|
||||
- If artifact path in discovery does not exist, log warning and continue
|
||||
- Wisdom files are append-only -- read all entries, newest last
|
||||
|
||||
## 3. Context Publishing Protocol (After Task Completion)
|
||||
|
||||
| Step | Action | Required |
|
||||
|------|--------|----------|
|
||||
| 1 | Write deliverable to `<session>/artifacts/<task-id>-<name>.md` | Yes |
|
||||
| 2 | Write `<session>/discoveries/{task_id}.json` with payload (see schema below) | Yes |
|
||||
| 3 | Append wisdom entries for learnings, decisions, issues found | If applicable |
|
||||
|
||||
## 4. Discovery File Schema
|
||||
|
||||
Written to `<session>/discoveries/{task_id}.json` on task completion.
|
||||
|
||||
```json
|
||||
{
|
||||
"task_id": "<TASK-NNN>",
|
||||
"worker": "<TASK-NNN>",
|
||||
"timestamp": "2026-03-24T10:15:00+08:00",
|
||||
"type": "<pipeline_phase>",
|
||||
"status": "completed | failed",
|
||||
"findings": "Summary string (max 500 chars)",
|
||||
"quality_score": null,
|
||||
"supervision_verdict": null,
|
||||
"error": null,
|
||||
"data": {
|
||||
"key_findings": [
|
||||
"Finding 1",
|
||||
"Finding 2"
|
||||
],
|
||||
"decisions": [
|
||||
"Decision with rationale"
|
||||
],
|
||||
"files_modified": [
|
||||
"path/to/file.ts"
|
||||
],
|
||||
"verification": "self-validated | peer-reviewed | tested"
|
||||
},
|
||||
"artifacts_produced": [
|
||||
"<session>/artifacts/<task-id>-<name>.md"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
**Field rules**:
|
||||
- `artifacts_produced`: Always artifact paths, never inline content
|
||||
- `data.key_findings`: Max 5 items, each under 100 chars
|
||||
- `data.decisions`: Include rationale, not just the choice
|
||||
- `data.files_modified`: Only for implementation tasks
|
||||
- `data.verification`: One of `self-validated`, `peer-reviewed`, `tested`
|
||||
|
||||
**Supervisor-specific extensions** (CHECKPOINT tasks only):
|
||||
|
||||
```json
|
||||
{
|
||||
"supervision_verdict": "pass | warn | block",
|
||||
"supervision_score": 0.85,
|
||||
"data": {
|
||||
"risks_logged": 0,
|
||||
"blocks_detected": 0
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
- `supervision_verdict`: Required for CHECKPOINT tasks. Determines pipeline progression.
|
||||
- `supervision_score`: Float 0.0-1.0. Aggregate of individual check scores.
|
||||
- `data.risks_logged`: Count of risks written to wisdom/issues.md.
|
||||
- `data.blocks_detected`: Count of blocking issues found. >0 implies verdict=block.
|
||||
|
||||
## 5. Exploration Cache Protocol
|
||||
|
||||
Prevents redundant research across tasks.
|
||||
|
||||
| Step | Action |
|
||||
|------|--------|
|
||||
| 1 | Read `<session>/explorations/cache-index.json` |
|
||||
| 2 | If angle already explored, read cached result from `explore-<angle>.json` |
|
||||
| 3 | If not cached, perform exploration |
|
||||
| 4 | Write result to `<session>/explorations/explore-<angle>.json` |
|
||||
| 5 | Update `cache-index.json` with new entry |
|
||||
|
||||
**cache-index.json format**:
|
||||
```json
|
||||
{
|
||||
"entries": [
|
||||
{
|
||||
"angle": "competitor-analysis",
|
||||
"file": "explore-competitor-analysis.json",
|
||||
"created_by": "RESEARCH-001",
|
||||
"timestamp": "2026-01-15T10:30:00Z"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
**Rules**:
|
||||
- Cache key is the exploration `angle` (normalized to kebab-case)
|
||||
- Cache entries never expire within a session
|
||||
- Any role can read cached explorations; only the creator updates them
|
||||
|
||||
## 6. Platform Mapping Reference
|
||||
|
||||
| Claude Code Operation | Codex Equivalent |
|
||||
|----------------------|------------------|
|
||||
| `team_msg(operation="get_state", role=<upstream>)` | Read `tasks.json` + Read `discoveries/{upstream_id}.json` |
|
||||
| `team_msg(type="state_update", payload={...})` | Write `discoveries/{task_id}.json` |
|
||||
| `TaskCreate` / `TaskUpdate` status fields | Read/Write `tasks.json` task entries |
|
||||
| In-memory state aggregation | Parse `tasks.json` + glob `discoveries/*.json` |
|
||||
125
.codex/skills/team-lifecycle-v4/specs/pipelines.md
Normal file
125
.codex/skills/team-lifecycle-v4/specs/pipelines.md
Normal file
@@ -0,0 +1,125 @@
|
||||
# Pipeline Definitions
|
||||
|
||||
## 1. Pipeline Selection Criteria
|
||||
|
||||
| Keywords | Pipeline |
|
||||
|----------|----------|
|
||||
| spec, design, document, requirements | `spec-only` |
|
||||
| implement, build, fix, code | `impl-only` |
|
||||
| full, lifecycle, end-to-end | `full-lifecycle` |
|
||||
| frontend, UI, react, vue | `fe-only` or `fullstack` |
|
||||
| Ambiguous / unclear | request_user_input |
|
||||
|
||||
## 2. Spec-Only Pipeline
|
||||
|
||||
**6 tasks + 2 optional checkpoints**
|
||||
|
||||
```
|
||||
RESEARCH-001 -> DRAFT-001 -> DRAFT-002 -> [CHECKPOINT-001] -> DRAFT-003 -> DRAFT-004 -> [CHECKPOINT-002] -> QUALITY-001
|
||||
```
|
||||
|
||||
| Task | Role | Description |
|
||||
|------|------|-------------|
|
||||
| RESEARCH-001 | analyst | Research domain, competitors, constraints |
|
||||
| DRAFT-001 | writer | Product brief, self-validate |
|
||||
| DRAFT-002 | writer | Requirements PRD |
|
||||
| CHECKPOINT-001 | supervisor | Brief<->PRD consistency, terminology alignment |
|
||||
| DRAFT-003 | writer | Architecture design, self-validate |
|
||||
| DRAFT-004 | writer | Epics & stories, self-validate |
|
||||
| CHECKPOINT-002 | supervisor | Full spec consistency (4 docs), quality trend |
|
||||
| QUALITY-001 | reviewer | Quality gate scoring |
|
||||
|
||||
**Checkpoint**: After QUALITY-001 -- pause for user approval before any implementation.
|
||||
|
||||
**Supervision opt-out**: Set `supervision: false` in tasks.json to skip CHECKPOINT tasks.
|
||||
|
||||
## 3. Impl-Only Pipeline
|
||||
|
||||
**4 tasks + 1 optional checkpoint**
|
||||
|
||||
```
|
||||
PLAN-001 -> [CHECKPOINT-003] -> IMPL-001 -> TEST-001 + REVIEW-001
|
||||
```
|
||||
|
||||
| Task | Role | Description |
|
||||
|------|------|-------------|
|
||||
| PLAN-001 | planner | Break down into implementation steps, assess complexity |
|
||||
| CHECKPOINT-003 | supervisor | Plan<->input alignment, complexity sanity check |
|
||||
| IMPL-001 | implementer | Execute implementation plan |
|
||||
| TEST-001 | tester | Validate against acceptance criteria |
|
||||
| REVIEW-001 | reviewer | Code review |
|
||||
|
||||
TEST-001 and REVIEW-001 run in parallel after IMPL-001 completes.
|
||||
|
||||
**Supervision opt-out**: Set `supervision: false` in tasks.json to skip CHECKPOINT tasks.
|
||||
|
||||
## 4. Full-Lifecycle Pipeline
|
||||
|
||||
**10 tasks + 3 optional checkpoints = spec-only (6+2) + impl (4+1)**
|
||||
|
||||
```
|
||||
[Spec pipeline with CHECKPOINT-001/002] -> PLAN-001(blockedBy: QUALITY-001) -> [CHECKPOINT-003] -> IMPL-001 -> TEST-001 + REVIEW-001
|
||||
```
|
||||
|
||||
PLAN-001 is blocked until QUALITY-001 passes and user approves the checkpoint.
|
||||
|
||||
**Supervision opt-out**: Set `supervision: false` in tasks.json to skip all CHECKPOINT tasks.
|
||||
|
||||
## 5. Frontend Pipelines
|
||||
|
||||
| Pipeline | Description |
|
||||
|----------|-------------|
|
||||
| `fe-only` | Frontend implementation only: PLAN-001 -> IMPL-001 (fe-implementer) -> TEST-001 + REVIEW-001 |
|
||||
| `fullstack` | Backend + frontend: PLAN-001 -> IMPL-001 (backend) + IMPL-002 (frontend) -> TEST-001 + REVIEW-001 |
|
||||
| `full-lifecycle-fe` | Full spec pipeline -> fullstack impl pipeline |
|
||||
|
||||
## 6. Conditional Routing
|
||||
|
||||
PLAN-001 outputs a complexity assessment that determines the impl topology.
|
||||
|
||||
| Complexity | Modules | Route |
|
||||
|------------|---------|-------|
|
||||
| Low | 1-2 | PLAN-001 -> IMPL-001 -> TEST + REVIEW |
|
||||
| Medium | 3-4 | PLAN-001 -> ORCH-001 -> IMPL-{1..N} (parallel) -> TEST + REVIEW |
|
||||
| High | 5+ | PLAN-001 -> ARCH-001 -> ORCH-001 -> IMPL-{1..N} -> TEST + REVIEW |
|
||||
|
||||
- **ORCH-001** (orchestrator): Coordinates parallel IMPL tasks, manages dependencies
|
||||
- **ARCH-001** (architect): Detailed architecture decisions before orchestration
|
||||
|
||||
## 7. Task Metadata Registry
|
||||
|
||||
| Task ID | Role | Phase | Depends On | Priority |
|
||||
|---------|------|-------|------------|----------|
|
||||
| RESEARCH-001 | analyst | research | - | P0 |
|
||||
| DRAFT-001 | writer | product-brief | RESEARCH-001 | P0 |
|
||||
| DRAFT-002 | writer | requirements | DRAFT-001 | P0 |
|
||||
| DRAFT-003 | writer | architecture | DRAFT-002 | P0 |
|
||||
| DRAFT-004 | writer | epics | DRAFT-003 | P0 |
|
||||
| QUALITY-001 | reviewer | readiness | CHECKPOINT-002 (or DRAFT-004) | P0 |
|
||||
| CHECKPOINT-001 | supervisor | checkpoint | DRAFT-002 | P1 |
|
||||
| CHECKPOINT-002 | supervisor | checkpoint | DRAFT-004 | P1 |
|
||||
| CHECKPOINT-003 | supervisor | checkpoint | PLAN-001 | P1 |
|
||||
| PLAN-001 | planner | planning | QUALITY-001 (or user input) | P0 |
|
||||
| ARCH-001 | architect | arch-detail | PLAN-001 | P1 |
|
||||
| ORCH-001 | orchestrator | orchestration | PLAN-001 or ARCH-001 | P1 |
|
||||
| IMPL-001 | implementer | implementation | PLAN-001 or ORCH-001 | P0 |
|
||||
| IMPL-{N} | implementer | implementation | ORCH-001 | P0 |
|
||||
| TEST-001 | tester | validation | IMPL-* | P0 |
|
||||
| REVIEW-001 | reviewer | review | IMPL-* | P0 |
|
||||
|
||||
## 8. Dynamic Specialist Injection
|
||||
|
||||
When task content or user request matches trigger keywords, inject a specialist task.
|
||||
|
||||
| Trigger Keywords | Specialist Role | Task Prefix | Priority | Insert After |
|
||||
|------------------|----------------|-------------|----------|--------------|
|
||||
| security, vulnerability, OWASP | security-expert | SECURITY-* | P0 | PLAN |
|
||||
| performance, optimization, latency | performance-optimizer | PERF-* | P1 | IMPL |
|
||||
| data, pipeline, ETL, migration | data-engineer | DATA-* | P0 | parallel with IMPL |
|
||||
| devops, CI/CD, deployment, infra | devops-engineer | DEVOPS-* | P1 | IMPL |
|
||||
| ML, model, training, inference | ml-engineer | ML-* | P0 | parallel with IMPL |
|
||||
|
||||
**Injection rules**:
|
||||
- Specialist tasks inherit the session context and wisdom
|
||||
- They write discoveries/{task_id}.json on completion like any other task
|
||||
- P0 specialists block downstream tasks; P1 run in parallel
|
||||
130
.codex/skills/team-lifecycle-v4/specs/quality-gates.md
Normal file
130
.codex/skills/team-lifecycle-v4/specs/quality-gates.md
Normal file
@@ -0,0 +1,130 @@
|
||||
# Quality Gates
|
||||
|
||||
## 1. Quality Thresholds
|
||||
|
||||
| Result | Score | Action |
|
||||
|--------|-------|--------|
|
||||
| Pass | >= 80% | Proceed to next phase |
|
||||
| Review | 60-79% | Revise flagged items, re-evaluate |
|
||||
| Fail | < 60% | Return to producer for rework |
|
||||
|
||||
## 2. Scoring Dimensions
|
||||
|
||||
| Dimension | Weight | Criteria |
|
||||
|-----------|--------|----------|
|
||||
| Completeness | 25% | All required sections present with substantive content |
|
||||
| Consistency | 25% | Terminology, formatting, cross-references are uniform |
|
||||
| Traceability | 25% | Clear chain: Goals -> Requirements -> Architecture -> Stories |
|
||||
| Depth | 25% | ACs are testable, ADRs justified, stories estimable |
|
||||
|
||||
**Score** = weighted average of all dimensions (0-100 per dimension).
|
||||
|
||||
## 3. Per-Phase Quality Gates
|
||||
|
||||
### Phase 2: Product Brief
|
||||
|
||||
| Check | Pass Criteria |
|
||||
|-------|---------------|
|
||||
| Vision statement | Clear, one-paragraph, measurable outcome |
|
||||
| Problem definition | Specific pain points with evidence |
|
||||
| Target users | Defined personas or segments |
|
||||
| Success goals | Quantifiable metrics (KPIs) |
|
||||
| Success metrics | Measurement method specified |
|
||||
|
||||
### Phase 3: Requirements PRD
|
||||
|
||||
| Check | Pass Criteria |
|
||||
|-------|---------------|
|
||||
| Functional requirements | Each has unique ID (FR-NNN) |
|
||||
| Acceptance criteria | Testable given/when/then format |
|
||||
| Prioritization | MoSCoW applied to all requirements |
|
||||
| User stories | Format: As a [role], I want [goal], so that [benefit] |
|
||||
| Non-functional reqs | Performance, security, scalability addressed |
|
||||
|
||||
### Phase 4: Architecture
|
||||
|
||||
| Check | Pass Criteria |
|
||||
|-------|---------------|
|
||||
| Component diagram | All major components identified with boundaries |
|
||||
| Tech stack | Each choice justified against alternatives |
|
||||
| ADRs | At least 1 ADR per major decision, with status |
|
||||
| Data model | Entities, relationships, key fields defined |
|
||||
| Integration points | APIs, protocols, data formats specified |
|
||||
|
||||
### Phase 5: Epics & Stories
|
||||
|
||||
| Check | Pass Criteria |
|
||||
|-------|---------------|
|
||||
| Epic count | 2-8 epics (too few = too broad, too many = too granular) |
|
||||
| MVP subset | Clearly marked MVP epics/stories |
|
||||
| Stories per epic | 3-12 stories each |
|
||||
| Story format | Title, description, ACs, estimate present |
|
||||
|
||||
### Phase 6: Readiness Gate
|
||||
|
||||
| Check | Pass Criteria |
|
||||
|-------|---------------|
|
||||
| All docs exist | Brief, PRD, Architecture, Epics all present |
|
||||
| Cross-refs valid | All document references resolve correctly |
|
||||
| Overall score | >= 60% across all dimensions |
|
||||
| No P0 issues | Zero Error-class issues outstanding |
|
||||
|
||||
## 4. Cross-Document Validation
|
||||
|
||||
| Source | Target | Validation |
|
||||
|--------|--------|------------|
|
||||
| Brief goals | PRD requirements | Every goal has >= 1 requirement |
|
||||
| PRD requirements | Architecture components | Every requirement maps to a component |
|
||||
| PRD requirements | Epic stories | Every requirement covered by >= 1 story |
|
||||
| Architecture components | Epic stories | Every component has implementation stories |
|
||||
| Brief success metrics | Epic ACs | Metrics traceable to acceptance criteria |
|
||||
|
||||
## 5. Code Review Dimensions
|
||||
|
||||
For REVIEW-* tasks during implementation phases.
|
||||
|
||||
### Quality
|
||||
|
||||
| Check | Severity |
|
||||
|-------|----------|
|
||||
| Empty catch blocks | Error |
|
||||
| `as any` type casts | Warning |
|
||||
| `@ts-ignore` / `@ts-expect-error` | Warning |
|
||||
| `console.log` in production code | Warning |
|
||||
| Unused imports/variables | Info |
|
||||
|
||||
### Security
|
||||
|
||||
| Check | Severity |
|
||||
|-------|----------|
|
||||
| Hardcoded secrets/credentials | Error |
|
||||
| SQL injection vectors | Error |
|
||||
| `eval()` or `Function()` usage | Error |
|
||||
| `innerHTML` assignment | Warning |
|
||||
| Missing input validation | Warning |
|
||||
|
||||
### Architecture
|
||||
|
||||
| Check | Severity |
|
||||
|-------|----------|
|
||||
| Circular dependencies | Error |
|
||||
| Deep cross-boundary imports (3+ levels) | Warning |
|
||||
| Files > 500 lines | Warning |
|
||||
| Functions > 50 lines | Info |
|
||||
|
||||
### Requirements Coverage
|
||||
|
||||
| Check | Severity |
|
||||
|-------|----------|
|
||||
| Core functionality implemented | Error if missing |
|
||||
| Acceptance criteria covered | Error if missing |
|
||||
| Edge cases handled | Warning |
|
||||
| Error states handled | Warning |
|
||||
|
||||
## 6. Issue Classification
|
||||
|
||||
| Class | Label | Action |
|
||||
|-------|-------|--------|
|
||||
| Error | Must fix | Blocks progression, must resolve before proceeding |
|
||||
| Warning | Should fix | Should resolve, can proceed with justification |
|
||||
| Info | Nice to have | Optional improvement, log for future |
|
||||
254
.codex/skills/team-lifecycle-v4/templates/architecture.md
Normal file
254
.codex/skills/team-lifecycle-v4/templates/architecture.md
Normal file
@@ -0,0 +1,254 @@
|
||||
# Architecture Document Template (Directory Structure)
|
||||
|
||||
Template for generating architecture decision documents as a directory of individual ADR files in Phase 4.
|
||||
|
||||
## Usage Context
|
||||
|
||||
| Phase | Usage |
|
||||
|-------|-------|
|
||||
| Phase 4 (Architecture) | Generate `architecture/` directory from requirements analysis |
|
||||
| Output Location | `{workDir}/architecture/` |
|
||||
|
||||
## Output Structure
|
||||
|
||||
```
|
||||
{workDir}/architecture/
|
||||
├── _index.md # Overview, components, tech stack, data model, security
|
||||
├── ADR-001-{slug}.md # Individual Architecture Decision Record
|
||||
├── ADR-002-{slug}.md
|
||||
└── ...
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Template: _index.md
|
||||
|
||||
```markdown
|
||||
---
|
||||
session_id: {session_id}
|
||||
phase: 4
|
||||
document_type: architecture-index
|
||||
status: draft
|
||||
generated_at: {timestamp}
|
||||
version: 1
|
||||
dependencies:
|
||||
- ../spec-config.json
|
||||
- ../product-brief.md
|
||||
- ../requirements/_index.md
|
||||
---
|
||||
|
||||
# Architecture: {product_name}
|
||||
|
||||
{executive_summary - high-level architecture approach and key decisions}
|
||||
|
||||
## System Overview
|
||||
|
||||
### Architecture Style
|
||||
{description of chosen architecture style: microservices, monolith, serverless, etc.}
|
||||
|
||||
### System Context Diagram
|
||||
|
||||
```mermaid
|
||||
C4Context
|
||||
title System Context Diagram
|
||||
Person(user, "User", "Primary user")
|
||||
System(system, "{product_name}", "Core system")
|
||||
System_Ext(ext1, "{external_system}", "{description}")
|
||||
Rel(user, system, "Uses")
|
||||
Rel(system, ext1, "Integrates with")
|
||||
```
|
||||
|
||||
## Component Architecture
|
||||
|
||||
### Component Diagram
|
||||
|
||||
```mermaid
|
||||
graph TD
|
||||
subgraph "{product_name}"
|
||||
A[Component A] --> B[Component B]
|
||||
B --> C[Component C]
|
||||
A --> D[Component D]
|
||||
end
|
||||
B --> E[External Service]
|
||||
```
|
||||
|
||||
### Component Descriptions
|
||||
|
||||
| Component | Responsibility | Technology | Dependencies |
|
||||
|-----------|---------------|------------|--------------|
|
||||
| {component_name} | {what it does} | {tech stack} | {depends on} |
|
||||
|
||||
## Technology Stack
|
||||
|
||||
### Core Technologies
|
||||
|
||||
| Layer | Technology | Version | Rationale |
|
||||
|-------|-----------|---------|-----------|
|
||||
| Frontend | {technology} | {version} | {why chosen} |
|
||||
| Backend | {technology} | {version} | {why chosen} |
|
||||
| Database | {technology} | {version} | {why chosen} |
|
||||
| Infrastructure | {technology} | {version} | {why chosen} |
|
||||
|
||||
### Key Libraries & Frameworks
|
||||
|
||||
| Library | Purpose | License |
|
||||
|---------|---------|---------|
|
||||
| {library_name} | {purpose} | {license} |
|
||||
|
||||
## Architecture Decision Records
|
||||
|
||||
| ADR | Title | Status | Key Choice |
|
||||
|-----|-------|--------|------------|
|
||||
| [ADR-001](ADR-001-{slug}.md) | {title} | Accepted | {one-line summary} |
|
||||
| [ADR-002](ADR-002-{slug}.md) | {title} | Accepted | {one-line summary} |
|
||||
| [ADR-003](ADR-003-{slug}.md) | {title} | Proposed | {one-line summary} |
|
||||
|
||||
## Data Architecture
|
||||
|
||||
### Data Model
|
||||
|
||||
```mermaid
|
||||
erDiagram
|
||||
ENTITY_A ||--o{ ENTITY_B : "has many"
|
||||
ENTITY_A {
|
||||
string id PK
|
||||
string name
|
||||
datetime created_at
|
||||
}
|
||||
ENTITY_B {
|
||||
string id PK
|
||||
string entity_a_id FK
|
||||
string value
|
||||
}
|
||||
```
|
||||
|
||||
### Data Storage Strategy
|
||||
|
||||
| Data Type | Storage | Retention | Backup |
|
||||
|-----------|---------|-----------|--------|
|
||||
| {type} | {storage solution} | {retention policy} | {backup strategy} |
|
||||
|
||||
## API Design
|
||||
|
||||
### API Overview
|
||||
|
||||
| Endpoint | Method | Purpose | Auth |
|
||||
|----------|--------|---------|------|
|
||||
| {/api/resource} | {GET/POST/etc} | {purpose} | {auth type} |
|
||||
|
||||
## Security Architecture
|
||||
|
||||
### Security Controls
|
||||
|
||||
| Control | Implementation | Requirement |
|
||||
|---------|---------------|-------------|
|
||||
| Authentication | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
|
||||
| Authorization | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
|
||||
| Data Protection | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
|
||||
|
||||
## Infrastructure & Deployment
|
||||
|
||||
### Deployment Architecture
|
||||
|
||||
{description of deployment model: containers, serverless, VMs, etc.}
|
||||
|
||||
### Environment Strategy
|
||||
|
||||
| Environment | Purpose | Configuration |
|
||||
|-------------|---------|---------------|
|
||||
| Development | Local development | {config} |
|
||||
| Staging | Pre-production testing | {config} |
|
||||
| Production | Live system | {config} |
|
||||
|
||||
## Codebase Integration
|
||||
|
||||
{if has_codebase is true:}
|
||||
|
||||
### Existing Code Mapping
|
||||
|
||||
| New Component | Existing Module | Integration Type | Notes |
|
||||
|--------------|----------------|------------------|-------|
|
||||
| {component} | {existing module path} | Extend/Replace/New | {notes} |
|
||||
|
||||
### Migration Notes
|
||||
{any migration considerations for existing code}
|
||||
|
||||
## Quality Attributes
|
||||
|
||||
| Attribute | Target | Measurement | ADR Reference |
|
||||
|-----------|--------|-------------|---------------|
|
||||
| Performance | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
|
||||
| Scalability | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
|
||||
| Reliability | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
|
||||
|
||||
## Risks & Mitigations
|
| Risk | Impact | Probability | Mitigation |
|------|--------|-------------|------------|
| {risk} | High/Medium/Low | High/Medium/Low | {mitigation approach} |

## Open Questions

- [ ] {architectural question 1}
- [ ] {architectural question 2}

## References

- Derived from: [Requirements](../requirements/_index.md), [Product Brief](../product-brief.md)
- Next: [Epics & Stories](../epics/_index.md)
```

---

## Template: ADR-NNN-{slug}.md (Individual Architecture Decision Record)

```markdown
---
id: ADR-{NNN}
status: Accepted
traces_to: [{REQ-NNN}, {NFR-X-NNN}]
date: {timestamp}
---

# ADR-{NNN}: {decision_title}

## Context

{what is the situation that motivates this decision}

## Decision

{what is the chosen approach}

## Alternatives Considered

| Option | Pros | Cons |
|--------|------|------|
| {option_1 - chosen} | {pros} | {cons} |
| {option_2} | {pros} | {cons} |
| {option_3} | {pros} | {cons} |

## Consequences

- **Positive**: {positive outcomes}
- **Negative**: {tradeoffs accepted}
- **Risks**: {risks to monitor}

## Traces

- **Requirements**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md), [NFR-X-{NNN}](../requirements/NFR-X-{NNN}-{slug}.md)
- **Implemented by**: [EPIC-{NNN}](../epics/EPIC-{NNN}-{slug}.md) (added in Phase 5)
```

---

## Variable Descriptions

| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{NNN}` | Auto-increment | ADR/requirement number |
| `{slug}` | Auto-generated | Kebab-case from decision title |
| `{has_codebase}` | spec-config.json | Whether existing codebase exists |
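The `{NNN}` auto-increment and `{slug}` kebab-case conventions above can be sketched as follows; `slugify` and `next_id` are hypothetical helper names for illustration, not part of the skill:

```python
import re

def slugify(title: str) -> str:
    """Kebab-case slug: lowercase alphanumeric runs joined by hyphens."""
    return "-".join(re.findall(r"[a-z0-9]+", title.lower()))

def next_id(prefix: str, existing: list[str]) -> str:
    """Zero-padded auto-increment (ADR-001, ADR-002, ...) from existing IDs."""
    nums = [int(m.group(1)) for e in existing
            if (m := re.match(rf"{prefix}-(\d+)$", e))]
    return f"{prefix}-{max(nums, default=0) + 1:03d}"

# e.g. a new ADR filename following the {NNN}-{slug} pattern
adr_id = next_id("ADR", ["ADR-001", "ADR-002"])
filename = f"{adr_id}-{slugify('Use Event Sourcing for Audit Log')}.md"
# filename == "ADR-003-use-event-sourcing-for-audit-log.md"
```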
196
.codex/skills/team-lifecycle-v4/templates/epics.md
Normal file
@@ -0,0 +1,196 @@
# Epics & Stories Template (Directory Structure)

Template for generating epic/story breakdown as a directory of individual Epic files in Phase 5.

## Usage Context

| Phase | Usage |
|-------|-------|
| Phase 5 (Epics & Stories) | Generate `epics/` directory from requirements decomposition |
| Output Location | `{workDir}/epics/` |

## Output Structure

```
{workDir}/epics/
├── _index.md           # Overview table + dependency map + MVP scope + execution order
├── EPIC-001-{slug}.md  # Individual Epic with its Stories
├── EPIC-002-{slug}.md
└── ...
```

---

## Template: _index.md

```markdown
---
session_id: {session_id}
phase: 5
document_type: epics-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
  - ../spec-config.json
  - ../product-brief.md
  - ../requirements/_index.md
  - ../architecture/_index.md
---

# Epics & Stories: {product_name}

{executive_summary - overview of epic structure and MVP scope}

## Epic Overview

| Epic ID | Title | Priority | MVP | Stories | Est. Size |
|---------|-------|----------|-----|---------|-----------|
| [EPIC-001](EPIC-001-{slug}.md) | {title} | Must | Yes | {n} | {S/M/L/XL} |
| [EPIC-002](EPIC-002-{slug}.md) | {title} | Must | Yes | {n} | {S/M/L/XL} |
| [EPIC-003](EPIC-003-{slug}.md) | {title} | Should | No | {n} | {S/M/L/XL} |

## Dependency Map

```mermaid
graph LR
    EPIC-001 --> EPIC-002
    EPIC-001 --> EPIC-003
    EPIC-002 --> EPIC-004
    EPIC-003 --> EPIC-005
```

### Dependency Notes
{explanation of why these dependencies exist and suggested execution order}

### Recommended Execution Order
1. [EPIC-{NNN}](EPIC-{NNN}-{slug}.md): {reason - foundational}
2. [EPIC-{NNN}](EPIC-{NNN}-{slug}.md): {reason - depends on #1}
3. ...

## MVP Scope

### MVP Epics
{list of epics included in MVP with justification, linking to each}

### MVP Definition of Done
- [ ] {MVP completion criterion 1}
- [ ] {MVP completion criterion 2}
- [ ] {MVP completion criterion 3}

## Traceability Matrix

| Requirement | Epic | Stories | Architecture |
|-------------|------|---------|--------------|
| [REQ-001](../requirements/REQ-001-{slug}.md) | [EPIC-001](EPIC-001-{slug}.md) | STORY-001-001, STORY-001-002 | [ADR-001](../architecture/ADR-001-{slug}.md) |
| [REQ-002](../requirements/REQ-002-{slug}.md) | [EPIC-001](EPIC-001-{slug}.md) | STORY-001-003 | Component B |
| [REQ-003](../requirements/REQ-003-{slug}.md) | [EPIC-002](EPIC-002-{slug}.md) | STORY-002-001 | [ADR-002](../architecture/ADR-002-{slug}.md) |

## Estimation Summary

| Size | Meaning | Count |
|------|---------|-------|
| S | Small - well-understood, minimal risk | {n} |
| M | Medium - some complexity, moderate risk | {n} |
| L | Large - significant complexity, should consider splitting | {n} |
| XL | Extra Large - high complexity, must split before implementation | {n} |

## Risks & Considerations

| Risk | Affected Epics | Mitigation |
|------|---------------|------------|
| {risk description} | [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) | {mitigation} |

## Open Questions

- [ ] {question about scope or implementation 1}
- [ ] {question about scope or implementation 2}

## References

- Derived from: [Requirements](../requirements/_index.md), [Architecture](../architecture/_index.md)
- Handoff to: execution workflows (lite-plan, plan, req-plan)
```
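The Recommended Execution Order in the template above follows directly from the Dependency Map: blocking edges form a partial order that can be linearized with a topological sort (Kahn's algorithm). A minimal sketch, assuming blocking dependencies are collected into a plain dict; the `execution_order` helper is illustrative, not part of the skill:

```python
from collections import deque

def execution_order(deps: dict[str, list[str]]) -> list[str]:
    """Kahn's algorithm: epics whose blockers are all done come first."""
    indegree = {epic: len(blockers) for epic, blockers in deps.items()}
    dependents: dict[str, list[str]] = {epic: [] for epic in deps}
    for epic, blockers in deps.items():
        for b in blockers:
            dependents[b].append(epic)
    queue = deque(sorted(e for e, d in indegree.items() if d == 0))
    order = []
    while queue:
        epic = queue.popleft()
        order.append(epic)
        for nxt in dependents[epic]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    if len(order) != len(deps):
        raise ValueError("dependency cycle among epics")
    return order

# Mirrors the sample Dependency Map above
deps = {
    "EPIC-001": [],
    "EPIC-002": ["EPIC-001"],
    "EPIC-003": ["EPIC-001"],
    "EPIC-004": ["EPIC-002"],
    "EPIC-005": ["EPIC-003"],
}
```

A cycle in the map raises an error, which is also a useful lint when generating `_index.md`.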

---

## Template: EPIC-NNN-{slug}.md (Individual Epic)

```markdown
---
id: EPIC-{NNN}
priority: {Must|Should|Could}
mvp: {true|false}
size: {S|M|L|XL}
requirements: [REQ-{NNN}]
architecture: [ADR-{NNN}]
dependencies: [EPIC-{NNN}]
status: draft
---

# EPIC-{NNN}: {epic_title}

**Priority**: {Must|Should|Could}
**MVP**: {Yes|No}
**Estimated Size**: {S|M|L|XL}

## Description

{detailed epic description}

## Requirements

- [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md): {title}
- [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md): {title}

## Architecture

- [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md): {title}
- Component: {component_name}

## Dependencies

- [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) (blocking): {reason}
- [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) (soft): {reason}

## Stories

### STORY-{EPIC}-001: {story_title}

**User Story**: As a {persona}, I want to {action} so that {benefit}.

**Acceptance Criteria**:
- [ ] {criterion 1}
- [ ] {criterion 2}
- [ ] {criterion 3}

**Size**: {S|M|L|XL}
**Traces to**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md)

---

### STORY-{EPIC}-002: {story_title}

**User Story**: As a {persona}, I want to {action} so that {benefit}.

**Acceptance Criteria**:
- [ ] {criterion 1}
- [ ] {criterion 2}

**Size**: {S|M|L|XL}
**Traces to**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md)
```

---

## Variable Descriptions

| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{EPIC}` | Auto-increment | Epic number (3 digits) |
| `{NNN}` | Auto-increment | Story/requirement number |
| `{slug}` | Auto-generated | Kebab-case from epic/story title |
| `{S\|M\|L\|XL}` | CLI analysis | Relative size estimate |
133
.codex/skills/team-lifecycle-v4/templates/product-brief.md
Normal file
@@ -0,0 +1,133 @@
# Product Brief Template

Template for generating product brief documents in Phase 2.

## Usage Context

| Phase | Usage |
|-------|-------|
| Phase 2 (Product Brief) | Generate product-brief.md from multi-CLI analysis |
| Output Location | `{workDir}/product-brief.md` |

---

## Template

```markdown
---
session_id: {session_id}
phase: 2
document_type: product-brief
status: draft
generated_at: {timestamp}
stepsCompleted: []
version: 1
dependencies:
  - spec-config.json
---

# Product Brief: {product_name}

{executive_summary - 2-3 sentences capturing the essence of the product/feature}

## Vision

{vision_statement - clear, aspirational 1-3 sentence statement of what success looks like}

## Problem Statement

### Current Situation
{description of the current state and pain points}

### Impact
{quantified impact of the problem - who is affected, how much, how often}

## Target Users

{for each user persona:}

### {Persona Name}
- **Role**: {user's role/context}
- **Needs**: {primary needs related to this product}
- **Pain Points**: {current frustrations}
- **Success Criteria**: {what success looks like for this user}

## Goals & Success Metrics

| Goal ID | Goal | Success Metric | Target |
|---------|------|----------------|--------|
| G-001 | {goal description} | {measurable metric} | {specific target} |
| G-002 | {goal description} | {measurable metric} | {specific target} |

## Scope

### In Scope
- {feature/capability 1}
- {feature/capability 2}
- {feature/capability 3}

### Out of Scope
- {explicitly excluded item 1}
- {explicitly excluded item 2}

### Assumptions
- {key assumption 1}
- {key assumption 2}

## Competitive Landscape

| Aspect | Current State | Proposed Solution | Advantage |
|--------|--------------|-------------------|-----------|
| {aspect} | {how it's done now} | {our approach} | {differentiator} |

## Constraints & Dependencies

### Technical Constraints
- {constraint 1}
- {constraint 2}

### Business Constraints
- {constraint 1}

### Dependencies
- {external dependency 1}
- {external dependency 2}

## Multi-Perspective Synthesis

### Product Perspective
{summary of product/market analysis findings}

### Technical Perspective
{summary of technical feasibility and constraints}

### User Perspective
{summary of user journey and UX considerations}

### Convergent Themes
{themes where all perspectives agree}

### Conflicting Views
{areas where perspectives differ, with notes on resolution approach}

## Open Questions

- [ ] {unresolved question 1}
- [ ] {unresolved question 2}

## References

- Derived from: [spec-config.json](spec-config.json)
- Next: [Requirements PRD](requirements.md)
```

## Variable Descriptions

| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | Seed analysis | Product/feature name |
| `{executive_summary}` | CLI synthesis | 2-3 sentence summary |
| `{vision_statement}` | CLI product perspective | Aspirational vision |
| All `{...}` fields | CLI analysis outputs | Filled from multi-perspective analysis |
224
.codex/skills/team-lifecycle-v4/templates/requirements.md
Normal file
@@ -0,0 +1,224 @@
# Requirements PRD Template (Directory Structure)

Template for generating Product Requirements Document as a directory of individual requirement files in Phase 3.

## Usage Context

| Phase | Usage |
|-------|-------|
| Phase 3 (Requirements) | Generate `requirements/` directory from product brief expansion |
| Output Location | `{workDir}/requirements/` |

## Output Structure

```
{workDir}/requirements/
├── _index.md             # Summary + MoSCoW table + traceability matrix + links
├── REQ-001-{slug}.md     # Individual functional requirement
├── REQ-002-{slug}.md
├── NFR-P-001-{slug}.md   # Non-functional: Performance
├── NFR-S-001-{slug}.md   # Non-functional: Security
├── NFR-SC-001-{slug}.md  # Non-functional: Scalability
├── NFR-U-001-{slug}.md   # Non-functional: Usability
└── ...
```

---

## Template: _index.md

```markdown
---
session_id: {session_id}
phase: 3
document_type: requirements-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
  - ../spec-config.json
  - ../product-brief.md
---

# Requirements: {product_name}

{executive_summary - brief overview of what this PRD covers and key decisions}

## Requirement Summary

| Priority | Count | Coverage |
|----------|-------|----------|
| Must Have | {n} | {description of must-have scope} |
| Should Have | {n} | {description of should-have scope} |
| Could Have | {n} | {description of could-have scope} |
| Won't Have | {n} | {description of explicitly excluded} |

## Functional Requirements

| ID | Title | Priority | Traces To |
|----|-------|----------|-----------|
| [REQ-001](REQ-001-{slug}.md) | {title} | Must | [G-001](../product-brief.md#goals--success-metrics) |
| [REQ-002](REQ-002-{slug}.md) | {title} | Must | [G-001](../product-brief.md#goals--success-metrics) |
| [REQ-003](REQ-003-{slug}.md) | {title} | Should | [G-002](../product-brief.md#goals--success-metrics) |

## Non-Functional Requirements

### Performance

| ID | Title | Target |
|----|-------|--------|
| [NFR-P-001](NFR-P-001-{slug}.md) | {title} | {target value} |

### Security

| ID | Title | Standard |
|----|-------|----------|
| [NFR-S-001](NFR-S-001-{slug}.md) | {title} | {standard/framework} |

### Scalability

| ID | Title | Target |
|----|-------|--------|
| [NFR-SC-001](NFR-SC-001-{slug}.md) | {title} | {target value} |

### Usability

| ID | Title | Target |
|----|-------|--------|
| [NFR-U-001](NFR-U-001-{slug}.md) | {title} | {target value} |

## Data Requirements

### Data Entities

| Entity | Description | Key Attributes |
|--------|-------------|----------------|
| {entity_name} | {description} | {attr1, attr2, attr3} |

### Data Flows

{description of key data flows, optionally with Mermaid diagram}

## Integration Requirements

| System | Direction | Protocol | Data Format | Notes |
|--------|-----------|----------|-------------|-------|
| {system_name} | Inbound/Outbound/Both | {REST/gRPC/etc} | {JSON/XML/etc} | {notes} |

## Constraints & Assumptions

### Constraints
- {technical or business constraint 1}
- {technical or business constraint 2}

### Assumptions
- {assumption 1 - must be validated}
- {assumption 2 - must be validated}

## Priority Rationale

{explanation of MoSCoW prioritization decisions, especially for Should/Could boundaries}

## Traceability Matrix

| Goal | Requirements |
|------|-------------|
| G-001 | [REQ-001](REQ-001-{slug}.md), [REQ-002](REQ-002-{slug}.md), [NFR-P-001](NFR-P-001-{slug}.md) |
| G-002 | [REQ-003](REQ-003-{slug}.md), [NFR-S-001](NFR-S-001-{slug}.md) |

## Open Questions

- [ ] {unresolved question 1}
- [ ] {unresolved question 2}

## References

- Derived from: [Product Brief](../product-brief.md)
- Next: [Architecture](../architecture/_index.md)
```

---

## Template: REQ-NNN-{slug}.md (Individual Functional Requirement)

```markdown
---
id: REQ-{NNN}
type: functional
priority: {Must|Should|Could|Won't}
traces_to: [G-{NNN}]
status: draft
---

# REQ-{NNN}: {requirement_title}

**Priority**: {Must|Should|Could|Won't}

## Description

{detailed requirement description}

## User Story

As a {persona}, I want to {action} so that {benefit}.

## Acceptance Criteria

- [ ] {specific, testable criterion 1}
- [ ] {specific, testable criterion 2}
- [ ] {specific, testable criterion 3}

## Traces

- **Goal**: [G-{NNN}](../product-brief.md#goals--success-metrics)
- **Architecture**: [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md) (if applicable)
- **Implemented by**: [EPIC-{NNN}](../epics/EPIC-{NNN}-{slug}.md) (added in Phase 5)
```

---

## Template: NFR-{type}-NNN-{slug}.md (Individual Non-Functional Requirement)

```markdown
---
id: NFR-{type}-{NNN}
type: non-functional
category: {Performance|Security|Scalability|Usability}
priority: {Must|Should|Could}
status: draft
---

# NFR-{type}-{NNN}: {requirement_title}

**Category**: {Performance|Security|Scalability|Usability}
**Priority**: {Must|Should|Could}

## Requirement

{detailed requirement description}

## Metric & Target

| Metric | Target | Measurement Method |
|--------|--------|--------------------|
| {metric} | {target value} | {how measured} |

## Traces

- **Goal**: [G-{NNN}](../product-brief.md#goals--success-metrics)
- **Architecture**: [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md) (if applicable)
```

---

## Variable Descriptions

| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{NNN}` | Auto-increment | Requirement number (zero-padded 3 digits) |
| `{slug}` | Auto-generated | Kebab-case from requirement title |
| `{type}` | Category | P (Performance), S (Security), SC (Scalability), U (Usability) |
| `{Must\|Should\|Could\|Won't}` | User input / auto | MoSCoW priority tag |
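As one illustration of how these variables compose, the Requirement Summary counts in `_index.md` could be tallied from the `priority:` frontmatter lines of the individual REQ/NFR files. A minimal sketch, assuming the simple frontmatter shown in the templates above; the `moscow_counts` helper is hypothetical:

```python
import re
from collections import Counter
from pathlib import Path

def moscow_counts(req_dir: str) -> Counter:
    """Tally MoSCoW priorities from 'priority:' frontmatter across *.md files."""
    counts: Counter = Counter()
    for path in Path(req_dir).glob("*.md"):
        if path.name == "_index.md":  # the index itself carries no priority
            continue
        text = path.read_text(encoding="utf-8")
        m = re.search(r"^priority:\s*(\S+)", text, re.MULTILINE)
        if m:
            counts[m.group(1)] += 1
    return counts
```

The resulting `Counter` maps directly onto the `{n}` cells of the Requirement Summary table.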