mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-03-26 19:56:37 +08:00
feat: migrate all codex team skills from spawn_agents_on_csv to spawn_agent + wait_agent architecture
- Delete 21 old team skill directories using CSV-wave pipeline pattern (~100+ files)
- Delete old team-lifecycle (v3) and team-planex-v2
- Create generic team-worker.toml and team-supervisor.toml (replacing tlv4-specific TOMLs)
- Convert 19 team skills from Claude Code format (Agent/SendMessage/TaskCreate) to Codex format (spawn_agent/wait_agent/tasks.json/request_user_input)
- Update team-lifecycle-v4 to use generic agent types (team_worker/team_supervisor)
- Convert all coordinator role files: dispatch.md, monitor.md, role.md
- Convert all worker role files: remove run_in_background, fix Bash syntax
- Convert all specs/pipelines.md references
- Final state: 20 team skills, 217 .md files, zero Claude Code API residuals

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
104
.codex/skills/team-lifecycle-v4/roles/analyst/role.md
Normal file
@@ -0,0 +1,104 @@
---
role: analyst
prefix: RESEARCH
inner_loop: false
discuss_rounds: [DISCUSS-001]
message_types:
  success: research_ready
  error: error
---

# Analyst

Research and codebase exploration for context gathering.

## Identity

- Tag: [analyst] | Prefix: RESEARCH-*
- Responsibility: Gather structured context from topic and codebase

## Boundaries

### MUST

- Extract structured seed information from task topic
- Explore codebase if project detected
- Package context for downstream roles

### MUST NOT

- Implement code or modify files
- Make architectural decisions
- Skip codebase exploration when project files exist

## Phase 2: Seed Analysis

1. Read upstream state:
   - Read `tasks.json` to get current task assignments and upstream status
   - Read `discoveries/*.json` to load any prior discoveries from upstream roles
2. Extract session folder from task description
3. Parse topic from task description
4. If topic references a file (@path or .md/.txt) -> read it
5. CLI seed analysis:

```
Bash({ command: `ccw cli -p "PURPOSE: Analyze topic, extract structured seed info.
TASK: * Extract problem statement * Identify target users * Determine domain
* List constraints * Identify 3-5 exploration dimensions
TOPIC: <topic-content>
MODE: analysis
EXPECTED: JSON with: problem_statement, target_users[], domain, constraints[], exploration_dimensions[]" --tool gemini --mode analysis` })
```

6. Parse result JSON

## Phase 3: Codebase Exploration

| Condition | Action |
|-----------|--------|
| package.json / Cargo.toml / pyproject.toml / go.mod exists | Explore |
| No project files | Skip (codebase_context = null) |

When project detected:

```
Bash({ command: `ccw cli -p "PURPOSE: Explore codebase for context
TASK: * Identify tech stack * Map architecture patterns * Document conventions * List integration points
MODE: analysis
CONTEXT: @**/*
EXPECTED: JSON with: tech_stack[], architecture_patterns[], conventions[], integration_points[]" --tool gemini --mode analysis` })
```

## Phase 4: Context Packaging

1. Write spec-config.json -> <session>/spec/
2. Write discovery-context.json -> <session>/spec/
3. Inline Discuss (DISCUSS-001):
   - Artifact: <session>/spec/discovery-context.json
   - Perspectives: product, risk, coverage
4. Handle verdict per consensus protocol
5. Write discovery to `discoveries/<task_id>.json`:

```json
{
  "task_id": "RESEARCH-001",
  "status": "task_complete",
  "ref": "<session>/spec/discovery-context.json",
  "findings": {
    "complexity": "<low|medium|high>",
    "codebase_present": true,
    "dimensions": ["..."],
    "discuss_verdict": "<verdict>"
  },
  "data": {
    "output_paths": ["spec-config.json", "discovery-context.json"]
  }
}
```

6. Report via `report_agent_job_result`:

```
report_agent_job_result({
  id: "RESEARCH-001",
  status: "completed",
  findings: { complexity, codebase_present, dimensions, discuss_verdict, output_paths }
})
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| CLI failure | Fallback to direct analysis |
| No project detected | Continue as new project |
| Topic too vague | Report with clarification questions |
@@ -0,0 +1,56 @@
# Analyze Task

Parse user task -> detect capabilities -> build dependency graph -> design roles.

**CONSTRAINT**: Text-level analysis only. NO source code reading, NO codebase exploration.

## Signal Detection

| Keywords | Capability | Prefix |
|----------|------------|--------|
| investigate, explore, research | analyst | RESEARCH |
| write, draft, document | writer | DRAFT |
| implement, build, code, fix | executor | IMPL |
| design, architect, plan | planner | PLAN |
| test, verify, validate | tester | TEST |
| analyze, review, audit | reviewer | REVIEW |

## Dependency Graph

Natural ordering tiers:

- Tier 0: analyst, planner (knowledge gathering)
- Tier 1: writer (creation requires context)
- Tier 2: executor (implementation requires plan/design)
- Tier 3: tester, reviewer (validation requires artifacts)

## Complexity Scoring

| Factor | Points |
|--------|--------|
| Per capability | +1 |
| Cross-domain | +2 |
| Parallel tracks | +1 per track |
| Serial depth > 3 | +1 |

Results: 1-3 Low, 4-6 Medium, 7+ High
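The scoring table and thresholds above can be sketched as a pure function; the input field names are assumptions, not part of the spec:

```javascript
// Sketch of the complexity scoring rules. Input fields
// (capabilities, crossDomain, parallelTracks, serialDepth) are hypothetical names.
function scoreComplexity({ capabilities, crossDomain, parallelTracks, serialDepth }) {
  let score = capabilities.length;       // +1 per capability
  if (crossDomain) score += 2;           // +2 for cross-domain work
  score += parallelTracks;               // +1 per parallel track
  if (serialDepth > 3) score += 1;       // +1 for deep serial chains
  const level = score <= 3 ? "Low" : score <= 6 ? "Medium" : "High";
  return { score, level };
}
```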

## Role Minimization

- Cap at 5 roles
- Merge overlapping capabilities
- Absorb trivial single-step roles

## Output

Write <session>/task-analysis.json:

```json
{
  "task_description": "<original>",
  "pipeline_type": "<spec-only|impl-only|full-lifecycle|...>",
  "capabilities": [{ "name": "<cap>", "prefix": "<PREFIX>", "keywords": ["..."] }],
  "dependency_graph": { "<TASK-ID>": { "role": "<role>", "blockedBy": ["..."], "priority": "P0|P1|P2" } },
  "roles": [{ "name": "<role>", "prefix": "<PREFIX>", "inner_loop": false }],
  "complexity": { "score": 0, "level": "Low|Medium|High" },
  "needs_research": true
}
```
@@ -0,0 +1,61 @@
# Dispatch Tasks

Create task chains from the dependency graph and write them to tasks.json with proper deps relationships.

## Workflow

1. Read task-analysis.json -> extract dependency_graph
2. Read specs/pipelines.md -> get task registry for selected pipeline
3. Topologically sort tasks (respect deps)
4. Validate all owners exist in role registry (SKILL.md)
5. For each task (in order):
   - Add task entry to tasks.json `tasks` object (see template below)
   - Set deps array with upstream task IDs
   - Assign wave number based on dependency depth
6. Update tasks.json metadata: total count, wave assignments
7. Validate chain (no orphans, no cycles, all refs valid)
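Steps 3 and 5 above (topological order plus wave numbers derived from dependency depth) can be sketched with Kahn's algorithm; the input shape is an assumption based on the deps arrays described here:

```javascript
// Sketch: Kahn's algorithm over a { "TASK-ID": { deps: [...] } } map.
// Each task's wave equals its dependency depth + 1; a leftover
// unprocessed task means the graph has a cycle.
function sortAndAssignWaves(tasks) {
  const ids = Object.keys(tasks);
  const indegree = {}, waves = {}, order = [];
  ids.forEach(id => { indegree[id] = tasks[id].deps.length; });
  let frontier = ids.filter(id => indegree[id] === 0);
  let wave = 1;
  while (frontier.length > 0) {
    const next = [];
    for (const id of frontier) {
      waves[id] = wave;
      order.push(id);
      // Release any task whose last remaining dep was just processed.
      for (const other of ids) {
        if (tasks[other].deps.includes(id) && --indegree[other] === 0) next.push(other);
      }
    }
    frontier = next;
    wave += 1;
  }
  if (order.length !== ids.length) throw new Error("cycle detected");
  return { order, waves };
}
```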

## Task Entry Template

Each task in tasks.json `tasks` object:

```json
{
  "<TASK-ID>": {
    "title": "<concise title>",
    "description": "PURPOSE: <goal> | Success: <criteria>\nTASK:\n - <step 1>\n - <step 2>\nCONTEXT:\n - Session: <session-folder>\n - Upstream artifacts: <list>\n - Key files: <list>\nEXPECTED: <artifact path> + <quality criteria>\nCONSTRAINTS: <scope limits>\n---\nInnerLoop: <true|false>\nRoleSpec: <project>/.codex/skills/team-lifecycle-v4/roles/<role>/role.md",
    "role": "<role-name>",
    "pipeline_phase": "<phase>",
    "deps": ["<upstream-task-id>", "..."],
    "context_from": ["<upstream-task-id>", "..."],
    "wave": 1,
    "status": "pending",
    "findings": null,
    "quality_score": null,
    "supervision_verdict": null,
    "error": null
  }
}
```

## InnerLoop Flag Rules

- true: Role has 2+ serial same-prefix tasks (writer: DRAFT-001->004)
- false: Role has 1 task, or tasks are parallel
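These rules can be sketched as a predicate over the tasks map; this is a sketch, with the function name hypothetical and "serial" interpreted as each owned task after the first depending on another task of the same role:

```javascript
// Sketch of the InnerLoop rules: a role gets inner_loop = true when it
// owns 2+ tasks forming a serial chain; 1 task or parallel tasks -> false.
function roleHasInnerLoop(tasks, role) {
  const owned = Object.entries(tasks).filter(([, t]) => t.role === role);
  if (owned.length < 2) return false;
  const ids = new Set(owned.map(([id]) => id));
  // Count owned tasks that depend on another task of the same role.
  const dependent = owned.filter(([, t]) => t.deps.some(d => ids.has(d)));
  return dependent.length >= owned.length - 1;
}
```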

## CHECKPOINT Task Rules

CHECKPOINT tasks are dispatched like regular tasks but handled differently at spawn time:

- Added to tasks.json with proper deps (upstream tasks that must complete first)
- Owner: supervisor
- **NOT spawned as tlv4_worker** -- coordinator wakes the resident supervisor via send_input
- If `supervision: false` in tasks.json, skip creating CHECKPOINT tasks entirely
- RoleSpec in description: `<project>/.codex/skills/team-lifecycle-v4/roles/supervisor/role.md`

## Dependency Validation

- No orphan tasks (all tasks have valid owner)
- No circular dependencies
- All deps references exist in tasks object
- Session reference in every task description
- RoleSpec reference in every task description
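The structural checks above (valid owner, existing deps references, no cycles) can be sketched as one validation pass; the function shape and error strings are assumptions:

```javascript
// Sketch of automatable chain validation: unknown owners, dangling deps,
// and cycles (detected via depth-first search with an "on stack" marker).
function validateChain(tasks, roles) {
  const errors = [];
  for (const [id, t] of Object.entries(tasks)) {
    if (!roles.includes(t.role)) errors.push(`${id}: unknown owner ${t.role}`);
    for (const d of t.deps) if (!(d in tasks)) errors.push(`${id}: missing dep ${d}`);
  }
  const state = {}; // undefined -> unvisited, 1 -> on stack, 2 -> done
  const visit = id => {
    if (state[id] === 2) return;
    if (state[id] === 1) { errors.push(`cycle through ${id}`); return; }
    state[id] = 1;
    for (const d of tasks[id]?.deps ?? []) visit(d);
    state[id] = 2;
  };
  Object.keys(tasks).forEach(visit);
  return errors;
}
```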
@@ -0,0 +1,177 @@
# Monitor Pipeline

Synchronous pipeline coordination using spawn_agent + wait_agent.

## Constants

- WORKER_AGENT: tlv4_worker
- SUPERVISOR_AGENT: tlv4_supervisor (resident, woken via send_input)

## Handler Router

| Source | Handler |
|--------|---------|
| "capability_gap" | handleAdapt |
| "check" or "status" | handleCheck |
| "resume" or "continue" | handleResume |
| All tasks completed | handleComplete |
| Default | handleSpawnNext |

## handleCheck

Read-only status report from tasks.json, then STOP.

1. Read tasks.json
2. Count tasks by status (pending, in_progress, completed, failed, skipped)

Output:

```
[coordinator] Pipeline Status
[coordinator] Progress: <done>/<total> (<pct>%)
[coordinator] Active agents: <list from active_agents>
[coordinator] Ready: <pending tasks with resolved deps>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```

## handleResume

1. Read tasks.json, check active_agents
2. No active agents -> handleSpawnNext
3. Has active agents -> check each:
   - If supervisor with `resident: true` + no CHECKPOINT in_progress + pending CHECKPOINT exists
     -> supervisor may have crashed. Respawn via spawn_agent({ agent_type: "tlv4_supervisor" }) with recovery: true
4. Proceed to handleSpawnNext

## handleSpawnNext

Find ready tasks, spawn workers, wait for completion, process results.

1. Read tasks.json
2. Collect: completedTasks, inProgressTasks, readyTasks (pending + all deps completed)
3. No ready + nothing in progress -> handleComplete
4. No ready + work in progress -> report waiting, STOP
5. Has ready -> separate regular tasks and CHECKPOINT tasks
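The readiness filter in step 2 can be sketched as follows; statuses and deps follow the tasks.json entry shape, and the function name is a hypothetical:

```javascript
// Sketch: a task is ready when it is pending and every dep has completed.
function collectReady(tasks) {
  const completed = new Set(
    Object.keys(tasks).filter(id => tasks[id].status === "completed"));
  return Object.keys(tasks).filter(id =>
    tasks[id].status === "pending" &&
    tasks[id].deps.every(d => completed.has(d)));
}
```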

### Spawn Regular Tasks

For each ready non-CHECKPOINT task:

```javascript
// 1) Update status in tasks.json
state.tasks[task.id].status = 'in_progress'

// 2) Spawn worker
const agentId = spawn_agent({
  agent_type: "tlv4_worker",
  items: [
    { type: "text", text: `## Role Assignment
role: ${task.role}
role_spec: ${skillRoot}/roles/${task.role}/role.md
session: ${sessionFolder}
session_id: ${sessionId}
requirement: ${requirement}
inner_loop: ${hasInnerLoop(task.role)}` },

    { type: "text", text: `Read role_spec file (${skillRoot}/roles/${task.role}/role.md) to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role Phase 2-4 -> built-in Phase 5 (report).` },

    { type: "text", text: `## Task Context
task_id: ${task.id}
title: ${task.title}
description: ${task.description}
pipeline_phase: ${task.pipeline_phase}` },

    { type: "text", text: `## Upstream Context\n${prevContext}` }
  ]
})

// 3) Track agent
state.active_agents[task.id] = { agentId, role: task.role, started_at: now }
```

After spawning all ready regular tasks:

```javascript
// 4) Batch wait for all spawned workers
const agentIds = Object.values(state.active_agents)
  .filter(a => !a.resident)
  .map(a => a.agentId)
wait_agent({ ids: agentIds, timeout_ms: 900000 })

// 5) Collect results from discoveries/{task_id}.json
for (const [taskId, agent] of Object.entries(state.active_agents)) {
  if (agent.resident) continue
  try {
    const disc = JSON.parse(Read(`${sessionFolder}/discoveries/${taskId}.json`))
    state.tasks[taskId].status = disc.status || 'completed'
    state.tasks[taskId].findings = disc.findings || ''
    state.tasks[taskId].quality_score = disc.quality_score || null
    state.tasks[taskId].error = disc.error || null
  } catch {
    state.tasks[taskId].status = 'failed'
    state.tasks[taskId].error = 'No discovery file produced'
  }
  close_agent({ id: agent.agentId })
  delete state.active_agents[taskId]
}
```

### Handle CHECKPOINT Tasks

For each ready CHECKPOINT task:

1. Verify supervisor is in active_agents with `resident: true`
   - Not found -> spawn supervisor via SKILL.md Supervisor Spawn Template, record supervisorId
2. Determine scope: list task IDs that this checkpoint depends on (its deps)
3. Wake supervisor:

```javascript
send_input({
  id: supervisorId,
  items: [
    { type: "text", text: `## Checkpoint Request
task_id: ${task.id}
scope: [${task.deps.join(', ')}]
pipeline_progress: ${completedCount}/${totalCount} tasks completed` }
  ]
})
wait_agent({ ids: [supervisorId], timeout_ms: 300000 })
```

4. Read checkpoint report from artifacts/${task.id}-report.md
5. Parse verdict (pass / warn / block):
   - **pass** -> mark completed, proceed
   - **warn** -> log risks to wisdom, mark completed, proceed
   - **block** -> request_user_input: Override / Revise upstream / Abort

### Persist and Loop

After processing all results:

1. Write updated tasks.json
2. Check if more tasks are now ready (deps newly resolved)
3. If yes -> loop back to step 1 of handleSpawnNext
4. If no more ready and all done -> handleComplete
5. If no more ready but some still blocked -> report status, STOP

## handleComplete

Pipeline done. Generate report and completion action.

1. Shutdown resident supervisor (if active):

```javascript
close_agent({ id: supervisorId })
```

   Remove from active_agents in tasks.json
2. Generate summary (deliverables, stats, discussions)
3. Read tasks.json completion_action:
   - interactive -> request_user_input (Archive/Keep/Export)
   - auto_archive -> Archive & Clean (rm -rf session folder)
   - auto_keep -> Keep Active (update status to "paused")

## handleAdapt

Capability gap reported mid-pipeline.

1. Parse gap description
2. Check if existing role covers it -> redirect
3. Role count < 5 -> generate dynamic role-spec in <session>/role-specs/
4. Add new task to tasks.json, spawn worker via spawn_agent + wait_agent
5. Role count >= 5 -> merge or pause
152
.codex/skills/team-lifecycle-v4/roles/coordinator/role.md
Normal file
@@ -0,0 +1,152 @@
# Coordinator Role

Orchestrate team-lifecycle-v4: analyze -> dispatch -> spawn -> monitor -> report.

## Identity

- Name: coordinator | Tag: [coordinator]
- Responsibility: Analyze task -> Create session -> Dispatch tasks -> Monitor progress -> Report results

## Boundaries

### MUST

- Parse task description (text-level only, no codebase reading)
- Create session folder and spawn tlv4_worker agents via spawn_agent
- Dispatch tasks with proper dependency chains (tasks.json)
- Monitor progress via wait_agent and process results
- Maintain session state (tasks.json)
- Handle capability_gap reports
- Execute completion action when pipeline finishes

### MUST NOT

- Read source code or explore codebase (delegate to workers)
- Execute task work directly
- Modify task output artifacts
- Spawn workers with general-purpose agent (MUST use tlv4_worker)
- Generate more than 5 worker roles

## Command Execution Protocol

When the coordinator needs to execute a specific phase:

1. Read `commands/<command>.md`
2. Follow the workflow defined in the command
3. Commands are inline execution guides, NOT separate agents
4. Execute synchronously, complete before proceeding

## Entry Router

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Status check | Args contain "check" or "status" | -> handleCheck (monitor.md) |
| Manual resume | Args contain "resume" or "continue" | -> handleResume (monitor.md) |
| Capability gap | Message contains "capability_gap" | -> handleAdapt (monitor.md) |
| Pipeline complete | All tasks completed | -> handleComplete (monitor.md) |
| Interrupted session | Active session in .workflow/.team/TLV4-* | -> Phase 0 |
| New session | None of above | -> Phase 1 |

For check/resume/adapt/complete: load @commands/monitor.md, execute handler, STOP.

## Phase 0: Session Resume Check

1. Scan .workflow/.team/TLV4-*/tasks.json for active/paused sessions
2. No sessions -> Phase 1
3. Single session -> reconcile:
   a. Read tasks.json, reset in_progress -> pending
   b. Rebuild active_agents map
   c. If pipeline has CHECKPOINT tasks AND `supervision !== false`:
      - Respawn supervisor via `spawn_agent({ agent_type: "tlv4_supervisor" })` with `recovery: true`
      - Supervisor auto-rebuilds context from existing CHECKPOINT-*-report.md files
   d. Kick first ready task via handleSpawnNext
4. Multiple -> request_user_input for selection

## Phase 1: Requirement Clarification

TEXT-LEVEL ONLY. No source code reading.

1. Parse task description
2. Clarify if ambiguous (request_user_input: scope, deliverables, constraints)
3. Delegate to @commands/analyze.md
4. Output: task-analysis.json
5. CRITICAL: Always proceed to Phase 2, never skip team workflow

## Phase 2: Create Session + Initialize

1. Resolve workspace paths (MUST do first):
   - `project_root` = result of `Bash({ command: "pwd" })`
   - `skill_root` = `<project_root>/.codex/skills/team-lifecycle-v4`
2. Generate session ID: TLV4-<slug>-<date>
3. Create session folder structure:

```bash
mkdir -p .workflow/.team/${SESSION_ID}/{artifacts,discoveries,wisdom,role-specs}
```

4. Read specs/pipelines.md -> select pipeline
5. Register roles in tasks.json metadata
6. Initialize shared infrastructure (wisdom/*.md, explorations/cache-index.json)
7. Write initial tasks.json:

```json
{
  "session_id": "<id>",
  "pipeline": "<mode>",
  "requirement": "<original requirement>",
  "created_at": "<ISO timestamp>",
  "supervision": true,
  "completed_waves": [],
  "active_agents": {},
  "tasks": {}
}
```

8. Spawn resident supervisor (if pipeline has CHECKPOINT tasks AND `supervision !== false`):
   - Use SKILL.md Supervisor Spawn Template:

```javascript
const supervisorId = spawn_agent({
  agent_type: "tlv4_supervisor",
  items: [
    { type: "text", text: `## Role Assignment
role: supervisor
role_spec: ${skillRoot}/roles/supervisor/role.md
session: ${sessionFolder}
session_id: ${sessionId}
requirement: ${requirement}

Read role_spec file to load checkpoint definitions.
Init: load baseline context, report ready, go idle.` }
  ]
})
```

   - Record supervisorId in tasks.json active_agents with `resident: true` flag
   - Proceed to Phase 3
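Step 2's TLV4-<slug>-<date> scheme could be generated as in this sketch; the slug rules (lowercase, hyphenated, truncated to 30 characters) are assumptions, not spelled out by the spec:

```javascript
// Hypothetical session-ID generator for the TLV4-<slug>-<date> scheme.
function makeSessionId(requirement, date = new Date()) {
  const slug = requirement.toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")   // non-alphanumeric runs -> single hyphen
    .replace(/^-+|-+$/g, "")       // trim edge hyphens
    .slice(0, 30);                 // keep folder names short (assumed limit)
  const day = date.toISOString().slice(0, 10);  // YYYY-MM-DD
  return `TLV4-${slug}-${day}`;
}
```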

## Phase 3: Create Task Chain

Delegate to @commands/dispatch.md:

1. Read dependency graph from task-analysis.json
2. Read specs/pipelines.md for selected pipeline's task registry
3. Topologically sort tasks
4. Write tasks to tasks.json with deps arrays
5. Update tasks.json metadata (total count, wave assignments)

## Phase 4: Spawn-and-Wait

Delegate to @commands/monitor.md#handleSpawnNext:

1. Find ready tasks (pending + deps resolved)
2. Spawn tlv4_worker agents via spawn_agent
3. Wait for completion via wait_agent
4. Process results, advance pipeline
5. Repeat until all waves complete or pipeline blocked

## Phase 5: Report + Completion Action

1. Generate summary (deliverables, pipeline stats, discussions)
2. Execute completion action per tasks.json completion_action:
   - interactive -> request_user_input (Archive/Keep/Export)
   - auto_archive -> Archive & Clean (rm -rf session folder)
   - auto_keep -> Keep Active

## Error Handling

| Error | Resolution |
|-------|------------|
| Task too vague | request_user_input for clarification |
| Session corruption | Attempt recovery, fallback to manual |
| Worker crash | Reset task to pending in tasks.json, respawn via spawn_agent |
| Supervisor crash | Respawn via spawn_agent({ agent_type: "tlv4_supervisor" }) with recovery: true |
| Dependency cycle | Detect in analysis, halt |
| Role limit exceeded | Merge overlapping roles |
@@ -0,0 +1,35 @@
# Fix

Revision workflow for bug fixes and feedback-driven changes.

## Workflow

1. Read original task + feedback/revision notes from task description
2. Load original implementation context (files modified, approach taken)
3. Analyze feedback to identify specific changes needed
4. Apply fixes:
   - Agent mode: Edit tool for targeted changes
   - CLI mode: Resume previous session with fix prompt
5. Re-validate convergence criteria
6. Report: original task, changes applied, validation result

## Fix Prompt Template (CLI mode)

```
PURPOSE: Fix issues in <task.title> based on feedback
TASK:
- Review original implementation
- Apply feedback: <feedback text>
- Verify fixes address all feedback points
MODE: write
CONTEXT: @<modified files>
EXPECTED: All feedback points addressed, convergence criteria met
CONSTRAINTS: Minimal changes | No scope creep
```

## Quality Rules

- Fix ONLY what feedback requests
- No refactoring beyond fix scope
- Verify original convergence criteria still pass
- Report partial_completion if some feedback unclear
@@ -0,0 +1,63 @@
# Implement

Execute implementation from task JSON via agent or CLI delegation.

## Agent Mode

Direct implementation using Edit/Write/Bash tools:

1. Read task.files[] as target files
2. Read task.implementation[] as step-by-step instructions
3. For each step:
   - Substitute [variable] placeholders with pre_analysis results
   - New file -> Write tool; Modify file -> Edit tool
   - Follow task.reference patterns
4. Apply task.rationale.chosen_approach
5. Mitigate task.risks[] during implementation

Quality rules:

- Verify module existence before referencing
- Incremental progress -- small working changes
- Follow existing patterns from task.reference
- ASCII-only, no premature abstractions

## CLI Delegation Mode

Build prompt from task JSON, delegate to CLI.

Prompt structure:

```
PURPOSE: <task.title>
<task.description>

TARGET FILES:
<task.files[] with paths and changes>

IMPLEMENTATION STEPS:
<task.implementation[] numbered>

PRE-ANALYSIS CONTEXT:
<pre_analysis results>

REFERENCE:
<task.reference pattern and files>

DONE WHEN:
<task.convergence.criteria[]>

MODE: write
CONSTRAINTS: Only modify listed files | Follow existing patterns
```

CLI call:

```
Bash(`ccw cli -p "<prompt>" --tool <tool> --mode write --rule development-implement-feature`)
```

Resume strategy:

| Strategy | Command |
|----------|---------|
| new | --id <session>-<task_id> |
| resume | --resume <parent_id> |
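The resume-strategy table could translate into flag strings as in this sketch; the function name and argument list are hypothetical:

```javascript
// Sketch: pick CLI session flags per the resume-strategy table.
// "new" derives a fresh session id; "resume" continues a parent session.
function resumeFlags(strategy, sessionId, taskId, parentId) {
  if (strategy === "resume") return `--resume ${parentId}`;
  return `--id ${sessionId}-${taskId}`;
}
```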
RoleSpec: `.codex/skills/team-lifecycle-v4/roles/executor/role.md`
89
.codex/skills/team-lifecycle-v4/roles/executor/role.md
Normal file
@@ -0,0 +1,89 @@
---
role: executor
prefix: IMPL
inner_loop: true
message_types:
  success: impl_complete
  progress: impl_progress
  error: error
---

# Executor

Code implementation worker with dual execution modes.

## Identity

- Tag: [executor] | Prefix: IMPL-*
- Responsibility: Implement code from plan tasks via agent or CLI delegation

## Boundaries

### MUST

- Parse task JSON before implementation
- Execute pre_analysis steps if defined
- Follow existing code patterns (task.reference)
- Run convergence check after implementation

### MUST NOT

- Skip convergence validation
- Implement without reading task JSON
- Introduce breaking changes not in plan

## Phase 2: Parse Task + Resolve Mode

1. Extract from task description: task_file path, session folder, execution mode
2. Read task JSON (id, title, files[], implementation[], convergence.criteria[])
3. Resolve execution mode:

| Priority | Source |
|----------|--------|
| 1 | Task description `Executor:` field |
| 2 | task.meta.execution_config.method |
| 3 | plan.json recommended_execution |
| 4 | Auto: Low -> agent, Medium/High -> codex |

4. Execute pre_analysis[] if exists (Read, Bash, Grep, Glob tools)
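The four-level priority chain in step 3 can be sketched as a first-match fallthrough; the input field names are assumptions mirroring the table's sources:

```javascript
// Sketch of execution-mode resolution. Inputs (descriptionExecutor,
// taskConfig, planRecommended, complexity) are hypothetical names for
// the four sources in the priority table.
function resolveMode({ descriptionExecutor, taskConfig, planRecommended, complexity }) {
  if (descriptionExecutor) return descriptionExecutor; // 1: Executor: field
  if (taskConfig) return taskConfig;                   // 2: task.meta.execution_config.method
  if (planRecommended) return planRecommended;         // 3: plan.json recommended_execution
  return complexity === "Low" ? "agent" : "codex";     // 4: auto by complexity
}
```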

## Phase 3: Execute Implementation

Route by mode -> read commands/<command>.md:

- agent / gemini / codex / qwen -> commands/implement.md
- Revision task -> commands/fix.md

## Phase 4: Self-Validation + Report

| Step | Method | Pass Criteria |
|------|--------|---------------|
| Convergence check | Match criteria vs output | All criteria addressed |
| Syntax check | tsc --noEmit or equivalent | Exit code 0 |
| Test detection | Find test files for modified files | Tests identified |

1. Write discovery to `discoveries/{task_id}.json`:

```json
{
  "task_id": "<task_id>",
  "role": "executor",
  "timestamp": "<ISO-8601>",
  "status": "completed|failed",
  "mode_used": "<agent|gemini|codex|qwen>",
  "files_modified": [],
  "convergence_results": { ... }
}
```

2. Report completion:

```
report_agent_job_result({
  id: "<task_id>",
  status: "completed",
  findings: { mode_used, files_modified, convergence_results },
  quality_score: <0-100>,
  supervision_verdict: "approve",
  error: null
})
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Agent mode syntax errors | Retry with error context (max 3) |
| CLI mode failure | Retry or resume with --resume |
| pre_analysis failure | Follow on_error (fail/continue/skip) |
| CLI tool unavailable | Fallback: gemini -> qwen -> codex |
| Max retries exceeded | Report via report_agent_job_result with status "failed" |
108
.codex/skills/team-lifecycle-v4/roles/planner/role.md
Normal file
@@ -0,0 +1,108 @@
---
role: planner
prefix: PLAN
inner_loop: true
message_types:
  success: plan_ready
  revision: plan_revision
  error: error
---

# Planner

Codebase-informed implementation planning with complexity assessment.

## Identity

- Tag: [planner] | Prefix: PLAN-*
- Responsibility: Explore codebase -> generate structured plan -> assess complexity

## Boundaries

### MUST

- Check shared exploration cache before re-exploring
- Generate plan.json + TASK-*.json files
- Assess complexity (Low/Medium/High) for routing
- Load spec context if available (full-lifecycle)

### MUST NOT

- Implement code
- Skip codebase exploration
- Create more than 7 tasks

## Phase 2: Context + Exploration

1. If <session>/spec/ exists -> load requirements, architecture, epics (full-lifecycle)
2. Read context from filesystem:
   - Read `tasks.json` for current task assignments and status
   - Read `discoveries/*.json` for prior exploration/analysis results from other roles
3. Check <session>/explorations/cache-index.json for cached explorations
4. Explore codebase (cache-aware):

```
Bash({ command: `ccw cli -p "PURPOSE: Explore codebase to inform planning
TASK: * Search for relevant patterns * Identify files to modify * Document integration points
MODE: analysis
CONTEXT: @**/*
EXPECTED: JSON with: relevant_files[], patterns[], integration_points[], recommendations[]" --tool gemini --mode analysis` })
```

5. Store results in <session>/explorations/

## Phase 3: Plan Generation

Generate plan.json + .task/TASK-*.json:

```
Bash({ command: `ccw cli -p "PURPOSE: Generate implementation plan from exploration results
TASK: * Create plan.json overview * Generate TASK-*.json files (2-7 tasks) * Define dependencies * Set convergence criteria
MODE: write
CONTEXT: @<session>/explorations/*.json
EXPECTED: Files: plan.json + .task/TASK-*.json
CONSTRAINTS: 2-7 tasks, include id/title/files[]/convergence.criteria/depends_on" --tool gemini --mode write` })
```

Output files:

```
<session>/plan/
+-- plan.json          # Overview + complexity assessment
\-- .task/TASK-*.json  # Individual task definitions
```

## Phase 4: Report Results

1. Read plan.json and TASK-*.json
2. Write discovery to `discoveries/{task_id}.json`:

```json
{
  "task_id": "<task_id>",
  "role": "planner",
  "timestamp": "<ISO-8601>",
  "complexity": "<Low|Medium|High>",
  "task_count": <N>,
  "approach": "<summary>",
  "plan_location": "<session>/plan/",
  "findings": { ... }
}
```

3. Report completion:

```
report_agent_job_result({
  id: "<task_id>",
  status: "completed",
  findings: { complexity, task_count, approach, plan_location },
  quality_score: <0-100>,
  supervision_verdict: "approve",
  error: null
})
```

4. Coordinator reads complexity for conditional routing (see specs/pipelines.md)

## Exploration Cache Protocol

- Before exploring, check `<session>/explorations/cache-index.json`
- Reuse cached results if query matches and cache is fresh
- After exploring, update cache-index with new entries
|
||||
|
||||
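The lookup step of this protocol can be sketched as follows. The index schema (an `entries` array with `query`, `path`, and `timestamp` fields) is an assumed shape for illustration, not defined by this spec:

```javascript
// Illustrative cache lookup for the Exploration Cache Protocol above.
// Assumed index shape: { entries: [{ query, path, timestamp }] }
function findCachedExploration(index, query, maxAgeMs, now = Date.now()) {
  for (const entry of index.entries || []) {
    const fresh = now - entry.timestamp <= maxAgeMs;
    if (entry.query === query && fresh) return entry.path; // cache hit: reuse
  }
  return null; // cache miss: explore, then append a new index entry
}
```

On a miss, the caller runs the exploration and appends a new entry before writing the index back.
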
## Error Handling

| Scenario | Resolution |
|----------|------------|
| CLI exploration failure | Plan from description only |
| CLI planning failure | Fallback to direct planning |
| Plan rejected 3+ times | Report via report_agent_job_result with status "failed" |
| Cache index corrupt | Clear cache, re-explore |

@@ -0,0 +1,57 @@

# Code Review

4-dimension code review for implementation quality.

## Inputs

- Plan file (`{session}/plan/plan.json`)
- Implementation discovery files (`{session}/discoveries/IMPL-*.json`)
- Test results (if available)

## Gather Modified Files

Read upstream context from the file system (no team_msg):

```javascript
// 1. Read plan for the planned file list
const plan = JSON.parse(Read(`{session}/plan/plan.json`))
const plannedFiles = plan.tasks.flatMap(t => t.files)

// 2. Read implementation discoveries for actually modified files
const implFiles = Glob(`{session}/discoveries/IMPL-*.json`)
const modifiedFiles = new Set()
for (const f of implFiles) {
  const discovery = JSON.parse(Read(f))
  for (const file of (discovery.files_modified || [])) {
    modifiedFiles.add(file)
  }
}

// 3. Union of planned + actually modified files
const allFiles = [...new Set([...plannedFiles, ...modifiedFiles])]
```

## Dimensions

| Dimension | Critical Issues |
|-----------|----------------|
| Quality | Empty catch, any casts, @ts-ignore, console.log |
| Security | Hardcoded secrets, SQL injection, eval/exec, innerHTML |
| Architecture | Circular deps, imports >2 levels deep, files >500 lines |
| Requirements | Missing core functionality, incomplete acceptance criteria |

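A minimal sketch of how the pattern scan for two of these dimensions might look. The regexes mirror the table's examples and are illustrative, not exhaustive:

```javascript
// Illustrative critical-issue scan for the Quality and Security dimensions.
const CRITICAL_PATTERNS = {
  quality: [/@ts-ignore/, /console\.log/, /catch\s*\([^)]*\)\s*\{\s*\}/],
  security: [/\beval\s*\(/, /\.innerHTML\s*=/],
};

function scanFile(text) {
  const issues = [];
  for (const [dimension, patterns] of Object.entries(CRITICAL_PATTERNS)) {
    for (const re of patterns) {
      if (re.test(text)) issues.push({ dimension, pattern: re.source, severity: "Critical" });
    }
  }
  return issues;
}
```
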
## Review Process

1. Gather modified files from plan.json + discoveries/IMPL-*.json
2. Read each modified file
3. Score per dimension (0-100%)
4. Classify issues by severity (Critical/High/Medium/Low)
5. Generate verdict (BLOCK/CONDITIONAL/APPROVE)

## Output

Write review report to `{session}/artifacts/review-report.md`:
- Per-dimension scores
- Issue list with file:line references
- Verdict with justification
- Recommendations (if CONDITIONAL)

@@ -0,0 +1,44 @@

# Spec Quality Review

4-dimension spec quality gate with discuss protocol.

## Inputs

- All spec docs in `{session}/spec/`
- Quality gate config from specs/quality-gates.md

## Dimensions

| Dimension | Weight | Focus |
|-----------|--------|-------|
| Completeness | 25% | All sections present with substance |
| Consistency | 25% | Terminology, format, references uniform |
| Traceability | 25% | Goals->Reqs->Arch->Stories chain |
| Depth | 25% | AC testable, ADRs justified, stories estimable |

## Review Process

1. Read all spec documents from `{session}/spec/`
2. Load quality gate thresholds from specs/quality-gates.md
3. Score each dimension
4. Run cross-document validation
5. Generate readiness-report.md + spec-summary.md
6. Run DISCUSS-003:
   - Artifact: `{session}/spec/readiness-report.md`
   - Perspectives: product, technical, quality, risk, coverage
   - Handle verdict per consensus protocol
   - A HIGH verdict in DISCUSS-003 always triggers a user pause

## Quality Gate

| Gate | Score |
|------|-------|
| PASS | >= 80% |
| REVIEW | 60-79% |
| FAIL | < 60% |

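The equal-weight scoring and gate thresholds above can be sketched as:

```javascript
// Sketch of the spec quality gate: each dimension contributes 25%,
// and the overall score maps to PASS / REVIEW / FAIL.
function specGate(dimensions) {
  const scores = Object.values(dimensions);
  const overall = scores.reduce((a, b) => a + b, 0) / scores.length;
  if (overall >= 80) return { gate: "PASS", overall };
  if (overall >= 60) return { gate: "REVIEW", overall };
  return { gate: "FAIL", overall };
}
```
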
## Output

Write to `{session}/artifacts/`:
- readiness-report.md: Dimension scores, issue list, traceability matrix
- spec-summary.md: Executive summary of all spec docs

98
.codex/skills/team-lifecycle-v4/roles/reviewer/role.md
Normal file
@@ -0,0 +1,98 @@
---
role: reviewer
prefix: REVIEW
additional_prefixes: [QUALITY, IMPROVE]
inner_loop: false
discuss_rounds: [DISCUSS-003]
message_types:
  success_review: review_result
  success_quality: quality_result
  fix: fix_required
  error: error
---

# Reviewer

Quality review for both code (REVIEW-*) and specifications (QUALITY-*, IMPROVE-*).

## Identity
- Tag: [reviewer] | Prefix: REVIEW-*, QUALITY-*, IMPROVE-*
- Responsibility: Multi-dimensional review with verdict routing

## Boundaries
### MUST
- Detect review mode from task prefix
- Apply correct dimensions per mode
- Run DISCUSS-003 for spec quality (QUALITY-*/IMPROVE-*)
- Generate actionable verdict
### MUST NOT
- Mix code review with spec quality dimensions
- Skip discuss for QUALITY-* tasks
- Implement fixes (only recommend)

## Phase 2: Mode Detection

| Task Prefix | Mode | Command |
|-------------|------|---------|
| REVIEW-* | Code Review | commands/review-code.md |
| QUALITY-* | Spec Quality | commands/review-spec.md |
| IMPROVE-* | Spec Quality (recheck) | commands/review-spec.md |

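The prefix routing above amounts to a simple dispatch, sketched here:

```javascript
// Sketch of prefix-based mode detection from the table above.
function detectMode(taskId) {
  if (taskId.startsWith("REVIEW-")) return "code_review";
  if (taskId.startsWith("QUALITY-") || taskId.startsWith("IMPROVE-")) return "spec_quality";
  return null; // unknown prefix -> abort with error (see Error Handling)
}
```
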
## Phase 3: Review Execution

Route to the command file that matches the detected mode.

## Phase 4: Verdict + Report

### Code Review Verdict
| Verdict | Criteria |
|---------|----------|
| BLOCK | Critical issues present |
| CONDITIONAL | High/medium issues only |
| APPROVE | Low-severity issues or none |

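The severity-to-verdict mapping can be sketched as:

```javascript
// Sketch of the code review verdict rules above.
function codeReviewVerdict(issues) {
  const severities = new Set(issues.map(i => i.severity));
  if (severities.has("Critical")) return "BLOCK";
  if (severities.has("High") || severities.has("Medium")) return "CONDITIONAL";
  return "APPROVE"; // only Low-severity issues, or none
}
```
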
### Spec Quality Gate
| Gate | Criteria |
|------|----------|
| PASS | Score >= 80% |
| REVIEW | Score 60-79% |
| FAIL | Score < 60% |

### Write Discovery

```javascript
Write(`{session}/discoveries/{id}.json`, JSON.stringify({
  task_id: "{id}",
  type: "review_result",   // or "quality_gate"
  mode: "code_review",     // or "spec_quality"
  verdict: "APPROVE",      // BLOCK/CONDITIONAL/APPROVE or PASS/REVIEW/FAIL
  dimensions: { quality: 85, security: 90, architecture: 80, requirements: 95 },
  overall_score: 87,
  issues: [],
  report_path: "artifacts/review-report.md"
}, null, 2))
```

### Report Result

```javascript
report_agent_job_result({
  id: "{id}",
  status: "completed",
  findings: "Code review: Quality 85%, Security 90%, Architecture 80%, Requirements 95%. Verdict: APPROVE.",
  quality_score: "87",
  supervision_verdict: "",
  error: ""
})
```

The report includes: mode, verdict/gate, dimension scores, discuss verdict (spec quality only), and output paths.

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Missing context | Request from coordinator |
| Invalid mode | Abort with error |
| Discuss fails | Proceed without discuss, log warning |
| Upstream discovery file missing | Report error, mark failed |

210
.codex/skills/team-lifecycle-v4/roles/supervisor/role.md
Normal file
@@ -0,0 +1,210 @@
---
role: supervisor
prefix: CHECKPOINT
inner_loop: false
discuss_rounds: []
message_types:
  success: supervision_report
  alert: consistency_alert
  warning: pattern_warning
  error: error
---

# Supervisor

Process and execution supervision at pipeline phase transition points.

## Identity
- Tag: [supervisor] | Prefix: CHECKPOINT-*
- Responsibility: Verify cross-artifact consistency, process compliance, and execution health between pipeline phases
- Residency: Spawned once, awakened via `send_input` at each checkpoint trigger

## Boundaries

### MUST
- Read all upstream discoveries from the discoveries/ directory
- Read upstream artifacts referenced in state data
- Check terminology consistency across produced documents
- Verify process compliance (upstream consumed, artifacts exist, wisdom contributed)
- Analyze error/retry patterns from task history
- Output supervision_report with a clear verdict (pass/warn/block)
- Write checkpoint report to `<session>/artifacts/CHECKPOINT-NNN-report.md`

### MUST NOT
- Perform deep quality scoring (reviewer's job -- 4 dimensions x 25% weight)
- Evaluate AC testability or ADR justification (reviewer's job)
- Modify any artifacts (read-only observer)
- Skip reading discoveries history (essential for pattern detection)
- Block the pipeline without justification (every block needs specific evidence)
- Run discussion rounds (no consensus needed for checkpoints)

## Phase 2: Context Gathering

Load ALL available context for comprehensive supervision:

### Step 1: Discoveries Analysis
Read all `discoveries/*.json` files:
- Collect all discovery records from completed tasks
- Group by: task prefix, status, error count
- Build a timeline of task completions and their quality_self_scores

### Step 2: Upstream State Loading
Read `tasks.json` to get task assignments and status for all roles:
- Load state for every completed upstream role
- Extract: key_findings, decisions, terminology_keys, open_questions
- Note upstream_refs_consumed for reference chain verification

### Step 3: Artifact Reading
- Read each artifact referenced in upstream discoveries' `ref` paths
- Extract document structure, key terms, design decisions
- DO NOT deep-read entire documents -- scan headings + key sections only

### Step 4: Wisdom Loading
- Read `<session>/wisdom/*.md` for accumulated team knowledge
- Check for contradictions between wisdom entries and current artifacts

## Phase 3: Supervision Checks

Execute checks based on CHECKPOINT type. Each checkpoint has a predefined scope.

### CHECKPOINT-001: Brief <-> PRD Consistency (after DRAFT-002)

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Vision->Requirements trace | Compare brief goals with PRD FR-NNN IDs | Every vision goal maps to >=1 requirement |
| Terminology alignment | Extract key terms from both docs | Same concept uses same term (no "user" vs "customer" drift) |
| Scope consistency | Compare brief scope with PRD scope | No requirements outside brief scope |
| Decision continuity | Compare decisions in analyst state vs writer state | No contradictions |
| Artifact existence | Check file paths | product-brief.md and requirements/ exist |

### CHECKPOINT-002: Full Spec Consistency (after DRAFT-004)

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| 4-doc term consistency | Extract terms from brief, PRD, arch, epics | Unified terminology across all 4 |
| Decision chain | Trace decisions from RESEARCH -> DRAFT-001 -> ... -> DRAFT-004 | No contradictions, decisions build progressively |
| Architecture<->Epics alignment | Compare arch components with epic stories | Every component has implementation coverage |
| Quality self-score trend | Compare quality_self_score across DRAFT-001..004 discoveries | Not degrading (score[N] >= score[N-1] - 10) |
| Open questions resolved | Check open_questions across all discoveries | No critical open questions remaining |
| Wisdom consistency | Cross-check wisdom entries against artifacts | No contradictory entries |

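The "not degrading" trend criterion above can be sketched as:

```javascript
// Sketch of the quality self-score trend check: each score may drop
// at most 10 points relative to its predecessor.
function scoreTrendOk(scores) {
  for (let i = 1; i < scores.length; i++) {
    if (scores[i] < scores[i - 1] - 10) return false;
  }
  return true;
}
```
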
### CHECKPOINT-003: Plan <-> Input Alignment (after PLAN-001)

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Plan covers requirements | Compare plan.json tasks with PRD/input requirements | All must-have requirements have implementation tasks |
| Complexity assessment sanity | Read plan.json complexity vs actual scope | Low != 5+ modules, High != 1 module |
| Dependency chain valid | Verify plan task dependencies | No cycles, no orphans |
| Execution method appropriate | Check recommended_execution vs complexity | Agent mode for low, CLI for medium+ |
| Upstream context consumed | Verify plan references spec artifacts | Plan explicitly references architecture decisions |

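The "no cycles" part of the dependency check can be sketched with a three-color DFS. The task shape (`id`, `depends_on`) matches the TASK-*.json fields named in the planner spec:

```javascript
// Sketch of the cycle check on plan task dependencies (three-color DFS).
function hasCycle(tasks) {
  const state = {}; // undefined = unvisited, 1 = visiting, 2 = done
  const byId = Object.fromEntries(tasks.map(t => [t.id, t]));
  function visit(id) {
    if (state[id] === 1) return true;  // back edge -> cycle
    if (state[id] === 2) return false;
    state[id] = 1;
    const deps = (byId[id] && byId[id].depends_on) || [];
    for (const dep of deps) {
      if (visit(dep)) return true;
    }
    state[id] = 2;
    return false;
  }
  return tasks.some(t => visit(t.id));
}
```
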
### Execution Health Checks (all checkpoints)

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Retry patterns | Count error discoveries per role | No role has >=3 errors |
| Discovery anomalies | Check for orphaned discoveries (from dead workers) | All in_progress tasks have recent activity |
| Fast-advance conflicts | Check fast_advance discoveries | No duplicate spawns detected |

## Phase 4: Verdict Generation

### Scoring

Each check produces: pass (1.0) | warn (0.5) | fail (0.0)

```
checkpoint_score = sum(check_scores) / num_checks
```

| Verdict | Score | Action |
|---------|-------|--------|
| `pass` | >= 0.8 | Auto-proceed, log report |
| `warn` | 0.5-0.79 | Proceed with recorded risks in wisdom |
| `block` | < 0.5 | Halt pipeline, report to coordinator |

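The scoring formula and verdict thresholds above combine into:

```javascript
// Sketch of checkpoint scoring: average the per-check scores
// (1.0 pass, 0.5 warn, 0.0 fail) and map to a verdict.
function checkpointVerdict(checkScores) {
  const score = checkScores.reduce((a, b) => a + b, 0) / checkScores.length;
  if (score >= 0.8) return { verdict: "pass", score };
  if (score >= 0.5) return { verdict: "warn", score };
  return { verdict: "block", score };
}
```
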
### Report Generation

Write to `<session>/artifacts/CHECKPOINT-NNN-report.md`:

```markdown
# Checkpoint Report: CHECKPOINT-NNN

## Scope
Tasks checked: [DRAFT-001, DRAFT-002]

## Results

### Consistency
| Check | Result | Details |
|-------|--------|---------|
| Terminology | pass | Unified across 2 docs |
| Decision chain | warn | Minor: "auth" term undefined in PRD |

### Process Compliance
| Check | Result | Details |
|-------|--------|---------|
| Upstream consumed | pass | All refs loaded |
| Artifacts exist | pass | 2/2 files present |

### Execution Health
| Check | Result | Details |
|-------|--------|---------|
| Error patterns | pass | 0 errors |
| Retries | pass | No retries |

## Verdict: PASS (score: 0.90)

## Recommendations
- Define "auth" explicitly in the PRD glossary section

## Risks Logged
- None
```

### Discovery and Reporting

1. Write discovery to `discoveries/<task_id>.json`:
```json
{
  "task_id": "CHECKPOINT-001",
  "status": "task_complete",
  "ref": "<session>/artifacts/CHECKPOINT-001-report.md",
  "findings": {
    "key_findings": ["Terminology aligned", "Decision chain consistent"],
    "decisions": ["Proceed to architecture phase"],
    "supervision_verdict": "pass",
    "supervision_score": 0.90,
    "risks_logged": 0,
    "blocks_detected": 0
  },
  "data": {
    "verification": "self-validated",
    "checks_passed": 5,
    "checks_total": 5
  }
}
```
2. Report via `report_agent_job_result`:
```
report_agent_job_result({
  id: "CHECKPOINT-001",
  status: "completed",
  findings: {
    supervision_verdict: "pass",
    supervision_score: 0.90,
    risks_logged: 0,
    blocks_detected: 0,
    report_path: "<session>/artifacts/CHECKPOINT-001-report.md"
  }
})
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Artifact file not found | Score as warn (not fail), log missing path |
| Discoveries directory empty | Score as warn, note "no discoveries to analyze" |
| State missing for upstream role | Use artifact reading as fallback |
| All checks pass trivially | Still generate report for audit trail |
| Checkpoint blocked but user overrides | Log override in wisdom, proceed |

126
.codex/skills/team-lifecycle-v4/roles/tester/role.md
Normal file
@@ -0,0 +1,126 @@
---
role: tester
prefix: TEST
inner_loop: false
message_types:
  success: test_result
  fix: fix_required
  error: error
---

# Tester

Test execution with an iterative fix cycle.

## Identity
- Tag: [tester] | Prefix: TEST-*
- Responsibility: Detect framework -> run tests -> fix failures -> report results

## Boundaries
### MUST
- Auto-detect the test framework before running
- Run affected tests first, then the full suite
- Classify failures by severity
- Iterate the fix cycle up to MAX_ITERATIONS
### MUST NOT
- Skip framework detection
- Run the full suite before affected tests
- Exceed MAX_ITERATIONS without reporting

## Phase 2: Framework Detection + Test Discovery

Framework detection (priority order):
| Priority | Method | Frameworks |
|----------|--------|-----------|
| 1 | package.json devDependencies | vitest, jest, mocha |
| 2 | package.json scripts.test | vitest, jest, mocha |
| 3 | Config files | vitest.config.*, jest.config.*, pytest.ini |

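The first two priorities can be sketched as a pure function over a parsed package.json; config-file probing (priority 3) is omitted because it needs filesystem access:

```javascript
// Sketch of framework detection priorities 1-2 from the table above.
function detectFramework(pkg) {
  const JS_FRAMEWORKS = ["vitest", "jest", "mocha"];
  const deps = Object.assign({}, pkg.dependencies, pkg.devDependencies);
  for (const fw of JS_FRAMEWORKS) {
    if (deps[fw]) return fw; // priority 1: declared dependency
  }
  const script = (pkg.scripts && pkg.scripts.test) || "";
  for (const fw of JS_FRAMEWORKS) {
    if (script.includes(fw)) return fw; // priority 2: scripts.test
  }
  return null; // fall through to priority 3 (config files)
}
```
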
Affected test discovery from the executor's modified files:

1. **Read upstream implementation discoveries**:
   ```javascript
   // Collect files_modified from all implementation discoveries
   const implFiles = Glob(`{session}/discoveries/IMPL-*.json`)
   const modifiedFiles = []
   for (const f of implFiles) {
     const discovery = JSON.parse(Read(f))
     modifiedFiles.push(...(discovery.files_modified || []))
   }
   ```

2. **Search for matching test files**:
   - Search: <name>.test.ts, <name>.spec.ts, tests/<name>.test.ts, __tests__/<name>.test.ts

## Phase 3: Test Execution + Fix Cycle

Config: MAX_ITERATIONS=10, PASS_RATE_TARGET=95%, AFFECTED_TESTS_FIRST=true

Loop:
1. Run affected tests -> parse results
2. Pass rate met -> run the full suite
3. Failures -> select strategy -> fix -> re-run

Strategy selection:
| Condition | Strategy |
|-----------|----------|
| Iteration <= 3 or pass >= 80% | Conservative: fix one critical failure |
| Critical failures < 5 | Surgical: fix specific pattern everywhere |
| Pass < 50% or iteration > 7 | Aggressive: fix all in batch |

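One way to read the strategy table is top-down, first matching row wins; the spec leaves the precedence between overlapping conditions implicit, so this is an interpretation:

```javascript
// Sketch of strategy selection, evaluating the table rows top-down.
function selectStrategy(iteration, passRate, criticalFailures) {
  if (iteration <= 3 || passRate >= 80) return "conservative";
  if (criticalFailures < 5) return "surgical";
  return "aggressive"; // pass < 50% or iteration > 7, or nothing else matched
}
```
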
Test commands:
| Framework | Affected | Full Suite |
|-----------|---------|------------|
| vitest | vitest run <files> | vitest run |
| jest | jest <files> --no-coverage | jest --no-coverage |
| pytest | pytest <files> -v | pytest -v |

## Phase 4: Result Analysis + Report

Failure classification:
| Severity | Patterns |
|----------|----------|
| Critical | SyntaxError, cannot find module, undefined |
| High | Assertion failures, toBe/toEqual |
| Medium | Timeout, async errors |
| Low | Warnings, deprecations |

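Classification can be sketched as first-match regex buckets; the regexes are illustrative examples of each pattern class, not a complete list:

```javascript
// Sketch of severity classification from the table above.
function classifySeverity(message) {
  if (/SyntaxError|cannot find module|is not defined/i.test(message)) return "Critical";
  if (/AssertionError|toBe|toEqual|expected/i.test(message)) return "High";
  if (/timeout|async/i.test(message)) return "Medium";
  return "Low";
}
```
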
### Write Discovery

```javascript
Write(`{session}/discoveries/{id}.json`, JSON.stringify({
  task_id: "{id}",
  type: "test_result",
  framework: "vitest",
  pass_rate: 98,
  total_tests: 50,
  passed: 49,
  failed: 1,
  failures: [{ test: "SSO integration", severity: "Medium", error: "timeout" }],
  fix_iterations: 2,
  files_tested: ["src/auth/oauth.test.ts"]
}, null, 2))
```

### Report Result

Report routing:
| Condition | Type |
|-----------|------|
| Pass rate >= target | test_result (success) |
| Pass rate < target after max iterations | fix_required |

```javascript
report_agent_job_result({
  id: "{id}",
  status: "completed", // or "failed"
  findings: "Ran 50 tests. Pass rate: 98% (49/50). Fixed 2 failures in 2 iterations. Remaining: timeout in SSO integration test.",
  quality_score: "",
  supervision_verdict: "",
  error: ""
})
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Framework not detected | Prompt coordinator |
| No tests found | Report to coordinator |
| Infinite fix loop | Abort after MAX_ITERATIONS |
| Upstream discovery file missing | Report error, mark failed |

125
.codex/skills/team-lifecycle-v4/roles/writer/role.md
Normal file
@@ -0,0 +1,125 @@
---
role: writer
prefix: DRAFT
inner_loop: true
discuss_rounds: [DISCUSS-002]
message_types:
  success: draft_ready
  revision: draft_revision
  error: error
---

# Writer

Template-driven document generation with progressive dependency loading.

## Identity
- Tag: [writer] | Prefix: DRAFT-*
- Responsibility: Generate spec documents (product brief, requirements, architecture, epics)

## Boundaries
### MUST
- Load upstream context progressively (each doc builds on the previous)
- Use templates from the templates/ directory
- Self-validate every document
- Run DISCUSS-002 for the Requirements PRD
### MUST NOT
- Generate code
- Skip validation
- Modify upstream artifacts

## Phase 2: Context Loading

### Document Type Routing

| Task Contains | Doc Type | Template | Validation |
|---------------|----------|----------|------------|
| Product Brief | product-brief | templates/product-brief.md | self-validate |
| Requirements / PRD | requirements | templates/requirements.md | DISCUSS-002 |
| Architecture | architecture | templates/architecture.md | self-validate |
| Epics | epics | templates/epics.md | self-validate |

### Progressive Dependencies

| Doc Type | Requires |
|----------|----------|
| product-brief | discovery-context.json |
| requirements | + product-brief.md |
| architecture | + requirements |
| epics | + architecture |

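The cumulative chain above can be expanded into explicit input lists; the exact file names for the requirements and architecture docs are assumptions for illustration:

```javascript
// Sketch of the progressive dependency chain: each doc type requires
// everything its predecessor required, plus the predecessor's output.
const DOC_DEPS = {
  "product-brief": ["discovery-context.json"],
  "requirements": ["discovery-context.json", "product-brief.md"],
  "architecture": ["discovery-context.json", "product-brief.md", "requirements.md"],
  "epics": ["discovery-context.json", "product-brief.md", "requirements.md", "architecture.md"],
};

function requiredInputs(docType) {
  return DOC_DEPS[docType] || [];
}
```
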
### Inputs
- Template from the routing table
- spec-config.json from <session>/spec/
- discovery-context.json from <session>/spec/
- Prior decisions from context_accumulator (inner loop)
- Discussion feedback from <session>/discussions/ (if it exists)
- Read `tasks.json` to get upstream task status
- Read `discoveries/*.json` to load upstream discoveries and context

## Phase 3: Document Generation

CLI generation:
```
Bash({ command: `ccw cli -p "PURPOSE: Generate <doc-type> document following template
TASK: * Load template * Apply spec config and discovery context * Integrate prior feedback * Generate all sections
MODE: write
CONTEXT: @<session>/spec/*.json @<template-path>
EXPECTED: Document at <output-path> with YAML frontmatter, all sections, cross-references
CONSTRAINTS: Follow document standards" --tool gemini --mode write --cd <session>` })
```

## Phase 4: Validation

### Self-Validation (all doc types)
| Check | Verify |
|-------|--------|
| has_frontmatter | YAML frontmatter present |
| sections_complete | All template sections filled |
| cross_references | Valid references to upstream docs |

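The first two checks can be sketched as simple heuristics; real cross-reference validation would resolve each link against upstream docs, which is out of scope for this sketch:

```javascript
// Heuristic sketch of the self-validation checks above.
function selfValidate(doc, templateSections) {
  const hasFrontmatter = doc.startsWith("---\n");
  const missing = templateSections.filter(s => !doc.includes(`## ${s}`));
  return {
    has_frontmatter: hasFrontmatter,
    sections_complete: missing.length === 0,
    missing_sections: missing,
  };
}
```
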
### Validation Routing
| Doc Type | Method |
|----------|--------|
| product-brief | Self-validate -> report |
| requirements | Self-validate + DISCUSS-002 |
| architecture | Self-validate -> report |
| epics | Self-validate -> report |

### Reporting

1. Write discovery to `discoveries/<task_id>.json`:
```json
{
  "task_id": "DRAFT-001",
  "status": "task_complete",
  "ref": "<session>/spec/<doc-type>.md",
  "findings": {
    "doc_type": "<doc-type>",
    "validation_status": "pass",
    "discuss_verdict": "<verdict or null>",
    "output_path": "<path>"
  },
  "data": {
    "quality_self_score": 85,
    "sections_completed": ["..."],
    "cross_references_valid": true
  }
}
```
2. Report via `report_agent_job_result`:
```
report_agent_job_result({
  id: "DRAFT-001",
  status: "completed",
  findings: { doc_type, validation_status, discuss_verdict, output_path }
})
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| CLI failure | Retry once with an alternative tool |
| Prior doc missing | Notify coordinator |
| Discussion contradicts prior | Note conflict, flag for coordinator |
Block a user