refactor: deep Codex v4 API conversion for all 20 team skills

Upgrade all team-* skills from mechanical v3→v4 API renames to deep
v4 tool integration with skill-adaptive patterns:

- list_agents: health checks in handleResume, cleanup verification in
  handleComplete, added to allowed-tools and coordinator toolbox
- Named targeting: task_name uses task-id (e.g. EXPLORE-001) instead
  of generic <role>-worker, enabling send_message/assign_task by name
- Message semantics: send_message for supplementary cross-agent context
  vs assign_task for triggering work, with skill-specific examples
- Model selection: per-role reasoning_effort guidance matching each
  skill's actual roles (not generic boilerplate)
- timeout_ms: added to all wait_agent calls, timed_out handling in
  all 18 monitor.md files
- Skill-adaptive v4 sections: ultra-analyze N-parallel coordination,
  lifecycle-v4 supervisor assign_task/send_message distinction,
  brainstorm ideator parallel patterns, iterdev generator-critic loops,
  frontend-debug iterative debug assign_task, perf-opt benchmark
  context sharing, executor lightweight trimmed v4, etc.

60 files changed across 20 team skills (SKILL.md, monitor.md, role.md)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
catlog22
2026-03-27 22:25:32 +08:00
parent 3d39ac6ac8
commit 88ea7fc6d7
60 changed files with 2318 additions and 215 deletions

View File

@@ -19,7 +19,7 @@
| Worker spawn | `Agent({ subagent_type: "team-worker", prompt })` | `spawn_agent({ agent_type: "tlv4_worker", items })` |
| Supervisor spawn | `Agent({ subagent_type: "team-supervisor", prompt })` | `spawn_agent({ agent_type: "tlv4_supervisor", items })` |
| Supervisor wake | `SendMessage({ recipient: "supervisor", content })` | `send_input({ id: supervisorId, items })` |
-| Supervisor shutdown | `SendMessage({ type: "shutdown_request" })` | `close_agent({ id: supervisorId })` |
+| Supervisor shutdown | `SendMessage({ type: "shutdown_request" })` | `close_agent({ target: supervisorId })` |
| Wait for completion | background callback -> monitor.md | `wait_agent({ ids, timeout_ms })` |
| Task status | `TaskCreate` / `TaskUpdate` | read/write `tasks.json` |
| Team management | `TeamCreate` / `TeamDelete` | session folder init / cleanup |
@@ -222,14 +222,14 @@ scope: [${task.deps}]
pipeline_progress: ${done}/${total} tasks completed` }
]
})
-wait_agent({ ids: [supervisorId], timeout_ms: 300000 })
+wait_agent({ targets: [supervisorId], timeout_ms: 300000 })
```
### Supervisor Shutdown
```javascript
// Mirrors Claude Code's SendMessage({ type: "shutdown_request" })
-close_agent({ id: supervisorId })
+close_agent({ target: supervisorId })
```
### Wave Execution Engine
@@ -300,7 +300,7 @@ pipeline_phase: ${task.pipeline_phase}` },
// 2) Batch wait
if (agentMap.length > 0) {
-wait_agent({ ids: agentMap.map(a => a.agentId), timeout_ms: 900000 })
+wait_agent({ targets: agentMap.map(a => a.agentId), timeout_ms: 900000 })
}
// 3) Collect results and merge into tasks.json
@@ -315,7 +315,7 @@ pipeline_phase: ${task.pipeline_phase}` },
state.tasks[taskId].status = 'failed'
state.tasks[taskId].error = 'No discovery file produced'
}
-close_agent({ id: agentId })
+close_agent({ target: agentId })
}
// 4) Execute CHECKPOINT tasks (wake supervisor via send_input)
@@ -329,7 +329,7 @@ scope: [${task.deps.join(', ')}]
pipeline_progress: ${completedCount}/${totalCount} tasks completed` }
]
})
-wait_agent({ ids: [supervisorId], timeout_ms: 300000 })
+wait_agent({ targets: [supervisorId], timeout_ms: 300000 })
// Read checkpoint report
try {

View File

@@ -1,7 +1,7 @@
---
name: team-lifecycle-v4
description: Full lifecycle team skill with clean architecture. SKILL.md is a universal router — all roles read it. Beat model is coordinator-only. Structure is roles/ + specs/ + templates/. Triggers on "team lifecycle v4".
-allowed-tools: spawn_agent(*), wait_agent(*), send_input(*), close_agent(*), report_agent_job_result(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*), request_user_input(*)
+allowed-tools: spawn_agent(*), wait_agent(*), send_message(*), assign_task(*), close_agent(*), list_agents(*), report_agent_job_result(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*), request_user_input(*)
---
# Team Lifecycle v4
@@ -29,7 +29,7 @@ Skill(skill="team-lifecycle-v4", args="task description")
spawn_agent ... spawn_agent
(team_worker) (team_supervisor)
per-task resident agent
-lifecycle          send_input-driven
+lifecycle          assign_task-driven
| |
+-- wait_agent --------+
|
@@ -63,7 +63,8 @@ Before calling ANY tool, apply this check:
| Tool Call | Verdict | Reason |
|-----------|---------|--------|
-| `spawn_agent`, `wait_agent`, `close_agent`, `send_input` | ALLOWED | Orchestration |
+| `spawn_agent`, `wait_agent`, `close_agent`, `send_message`, `assign_task` | ALLOWED | Orchestration |
+| `list_agents` | ALLOWED | Agent health check |
| `request_user_input` | ALLOWED | User interaction |
| `mcp__ccw-tools__team_msg` | ALLOWED | Message bus |
| `Read/Write` on `.workflow/.team/` files | ALLOWED | Session state |
@@ -94,6 +95,8 @@ Coordinator spawns workers using this template:
```
spawn_agent({
agent_type: "team_worker",
task_name: "<task-id>",
fork_context: false,
items: [
{ type: "text", text: `## Role Assignment
role: <role>
@@ -120,13 +123,15 @@ pipeline_phase: <pipeline-phase>` },
## Supervisor Spawn Template
-Supervisor is a **resident agent** (independent from team_worker). Spawned once during session init, woken via send_input for each CHECKPOINT task.
+Supervisor is a **resident agent** (independent from team_worker). Spawned once during session init, woken via assign_task for each CHECKPOINT task.
### Spawn (Phase 2 -- once per session)
```
supervisorId = spawn_agent({
agent_type: "team_supervisor",
task_name: "supervisor",
fork_context: false,
items: [
{ type: "text", text: `## Role Assignment
role: supervisor
@@ -137,7 +142,7 @@ requirement: <task-description>
Read role_spec file (<skill_root>/roles/supervisor/role.md) to load checkpoint definitions.
Init: load baseline context, report ready, go idle.
-Wake cycle: orchestrator sends checkpoint requests via send_input.` }
+Wake cycle: orchestrator sends checkpoint requests via assign_task.` }
]
})
```
@@ -145,8 +150,8 @@ Wake cycle: orchestrator sends checkpoint requests via send_input.` }
### Wake (per CHECKPOINT task)
```
-send_input({
-  id: supervisorId,
+assign_task({
+  target: "supervisor",
items: [
{ type: "text", text: `## Checkpoint Request
task_id: <CHECKPOINT-NNN>
@@ -154,13 +159,38 @@ scope: [<upstream-task-ids>]
pipeline_progress: <done>/<total> tasks completed` }
]
})
-wait_agent({ ids: [supervisorId], timeout_ms: 300000 })
+wait_agent({ targets: ["supervisor"], timeout_ms: 300000 })
```
### Shutdown (pipeline complete)
```
-close_agent({ id: supervisorId })
+close_agent({ target: "supervisor" })
```
### Model Selection Guide
| Role | model | reasoning_effort | Rationale |
|------|-------|-------------------|-----------|
| Analyst (RESEARCH-*) | (default) | medium | Read-heavy exploration, less reasoning needed |
| Writer (DRAFT-*) | (default) | high | Spec writing requires precision and completeness |
| Planner (PLAN-*) | (default) | high | Architecture decisions need full reasoning |
| Executor (IMPL-*) | (default) | high | Code generation needs precision |
| Tester (TEST-*) | (default) | high | Test generation requires deep code understanding |
| Reviewer (REVIEW-*, QUALITY-*, IMPROVE-*) | (default) | high | Deep analysis for quality assessment |
| Supervisor (CHECKPOINT-*) | (default) | medium | Gate checking, report aggregation |
Override model/reasoning_effort in spawn_agent when cost optimization is needed:
```
spawn_agent({
agent_type: "team_worker",
task_name: "<task-id>",
fork_context: false,
model: "<model-override>",
reasoning_effort: "<effort-level>",
items: [...]
})
```
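As an illustration only, the Model Selection Guide above can be encoded as a lookup helper. The task-ID prefixes and effort levels come from the table; the helper function and its name are illustrative, not part of the skill:

```javascript
// Illustrative helper: map a task ID prefix to the reasoning_effort
// recommended in the Model Selection Guide table above.
const EFFORT_BY_PREFIX = {
  RESEARCH: "medium",
  DRAFT: "high",
  PLAN: "high",
  IMPL: "high",
  TEST: "high",
  REVIEW: "high",
  QUALITY: "high",
  IMPROVE: "high",
  CHECKPOINT: "medium",
};

function effortFor(taskId) {
  // Task IDs look like "PLAN-001"; fall back to "medium" for unknown prefixes.
  const prefix = taskId.split("-")[0];
  return EFFORT_BY_PREFIX[prefix] || "medium";
}
```

A coordinator could then pass `reasoning_effort: effortFor(task.id)` into the spawn_agent override shown above.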
## Wave Execution Engine
@@ -172,9 +202,9 @@ For each wave in the pipeline:
3. **Build upstream context** -- For each task, gather findings from `context_from` tasks via tasks.json and `discoveries/{id}.json`
4. **Separate task types** -- Split into regular tasks and CHECKPOINT tasks
5. **Spawn regular tasks** -- For each regular task, call `spawn_agent({ agent_type: "team_worker", items: [...] })`, collect agent IDs
-6. **Wait** -- `wait_agent({ ids: [...], timeout_ms: 900000 })`
-7. **Collect results** -- Read `discoveries/{task_id}.json` for each agent, update tasks.json status/findings/error, then `close_agent({ id })` each worker
-8. **Execute checkpoints** -- For each CHECKPOINT task, `send_input` to supervisor, `wait_agent`, read checkpoint report from `artifacts/`, parse verdict
+6. **Wait** -- `wait_agent({ targets: [...], timeout_ms: 900000 })`
+7. **Collect results** -- Read `discoveries/{task_id}.json` for each agent, update tasks.json status/findings/error, then `close_agent({ target })` each worker
+8. **Execute checkpoints** -- For each CHECKPOINT task, `assign_task` to supervisor, `wait_agent`, read checkpoint report from `artifacts/`, parse verdict
9. **Handle block** -- If verdict is `block`, prompt user via `request_user_input` with options: Override / Revise upstream / Abort
10. **Persist** -- Write updated state to `<session>/tasks.json`
@@ -189,6 +219,39 @@ For each wave in the pipeline:
| `recheck` | Re-run quality check |
| `improve [dimension]` | Auto-improve weakest dimension |
## v4 Agent Coordination
### Message Semantics
| Intent | API | Example |
|--------|-----|---------|
| Queue supplementary info (don't interrupt) | `send_message` | Send planning results to running implementers |
| Wake resident supervisor for checkpoint | `assign_task` | Trigger CHECKPOINT-* evaluation on supervisor |
| Supervisor reports back to coordinator | `send_message` | Supervisor sends checkpoint verdict as supplementary info |
| Check running agents | `list_agents` | Verify agent + supervisor health during resume |
**CRITICAL**: The supervisor is a **resident agent** woken via `assign_task`, NOT `send_message`. Regular workers complete and are closed; the supervisor persists across checkpoints. See "Supervisor Spawn Template" above.
### Agent Health Check
Use `list_agents({})` in handleResume and handleComplete:
```
// Reconcile session state with actual running agents
const running = list_agents({})
// Compare with tasks.json active_agents
// Reset orphaned tasks (in_progress but agent gone) to pending
// ALSO check supervisor: if supervisor missing but CHECKPOINT tasks pending -> respawn
```
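The reconciliation sketched in the comments above could look like this in runnable form. Note the function name and inputs are illustrative stand-ins: the real coordinator calls the v4 `list_agents` tool, whereas here the running-agent names are a plain argument so the logic can be exercised in isolation:

```javascript
// Runnable sketch of session-state reconciliation (assumed shapes, not
// the real v4 runtime). Resets orphaned tasks and flags a missing
// supervisor when CHECKPOINT work is still pending.
function reconcile(state, runningNames) {
  const running = new Set(runningNames);
  for (const [taskId, agent] of Object.entries(state.active_agents)) {
    if (agent.resident) continue; // resident supervisor handled below
    if (!running.has(taskId)) {
      // Agent gone but task still in_progress: reset so it can respawn.
      state.tasks[taskId].status = "pending";
      delete state.active_agents[taskId];
    }
  }
  // Supervisor check: respawn needed if CHECKPOINT work remains but it is gone.
  const checkpointsPending = Object.entries(state.tasks).some(
    ([id, t]) => id.startsWith("CHECKPOINT") && t.status === "pending"
  );
  return {
    needSupervisorRespawn: checkpointsPending && !running.has("supervisor"),
  };
}
```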
### Named Agent Targeting
Workers are spawned with `task_name: "<task-id>"` enabling direct addressing:
- `send_message({ target: "IMPL-001", items: [...] })` -- queue planning context to running implementer
- `assign_task({ target: "supervisor", items: [...] })` -- wake supervisor for checkpoint
- `close_agent({ target: "IMPL-001" })` -- cleanup regular worker by name
- `close_agent({ target: "supervisor" })` -- shutdown supervisor at pipeline end
## Completion Action
When pipeline completes, coordinator presents:

View File

@@ -48,7 +48,7 @@ CHECKPOINT tasks are dispatched like regular tasks but handled differently at sp
- Added to tasks.json with proper deps (upstream tasks that must complete first)
- Owner: supervisor
-- **NOT spawned as tlv4_worker** — coordinator wakes the resident supervisor via send_input
+- **NOT spawned as tlv4_worker** — coordinator wakes the resident supervisor via assign_task
- If `supervision: false` in tasks.json, skip creating CHECKPOINT tasks entirely
- RoleSpec in description: `<project>/.codex/skills/team-lifecycle-v4/roles/supervisor/role.md`

View File

@@ -5,7 +5,7 @@ Synchronous pipeline coordination using spawn_agent + wait_agent.
## Constants
- WORKER_AGENT: tlv4_worker
-- SUPERVISOR_AGENT: tlv4_supervisor (resident, woken via send_input)
+- SUPERVISOR_AGENT: tlv4_supervisor (resident, woken via assign_task)
## Handler Router
@@ -35,6 +35,16 @@ Output:
## handleResume
**Agent Health Check** (v4):
```
// Verify actual running agents match session state
const runningAgents = list_agents({})
// For each active_agent in tasks.json:
// - If agent NOT in runningAgents -> agent crashed
// - Reset that task to pending, remove from active_agents
// This prevents stale agent references from blocking the pipeline
```
1. Read tasks.json, check active_agents
2. No active agents -> handleSpawnNext
3. Has active agents -> check each:
@@ -63,6 +73,7 @@ state.tasks[task.id].status = 'in_progress'
// 2) Spawn worker
const agentId = spawn_agent({
agent_type: "tlv4_worker",
task_name: task.id, // e.g., "PLAN-001" — enables named targeting
items: [
{ type: "text", text: `## Role Assignment
role: ${task.role}
@@ -92,30 +103,52 @@ state.active_agents[task.id] = { agentId, role: task.role, started_at: now }
After spawning all ready regular tasks:
```javascript
-// 4) Batch wait for all spawned workers
-const agentIds = Object.values(state.active_agents)
-  .filter(a => !a.resident)
-  .map(a => a.agentId)
-wait_agent({ ids: agentIds, timeout_ms: 900000 })
-// 5) Collect results from discoveries/{task_id}.json
-for (const [taskId, agent] of Object.entries(state.active_agents)) {
-  if (agent.resident) continue
-  try {
-    const disc = JSON.parse(Read(`${sessionFolder}/discoveries/${taskId}.json`))
-    state.tasks[taskId].status = disc.status || 'completed'
-    state.tasks[taskId].findings = disc.findings || ''
-    state.tasks[taskId].quality_score = disc.quality_score || null
-    state.tasks[taskId].error = disc.error || null
-  } catch {
-    state.tasks[taskId].status = 'failed'
-    state.tasks[taskId].error = 'No discovery file produced'
-  }
-  close_agent({ id: agent.agentId })
-  delete state.active_agents[taskId]
-}
+// 4) Batch wait — use task_name for stable targeting (v4)
+const taskNames = Object.entries(state.active_agents)
+  .filter(([_, a]) => !a.resident)
+  .map(([taskId]) => taskId)
+const waitResult = wait_agent({ targets: taskNames, timeout_ms: 900000 })
+if (waitResult.timed_out) {
+  for (const taskId of taskNames) {
+    state.tasks[taskId].status = 'timed_out'
+    close_agent({ target: taskId })
+    delete state.active_agents[taskId]
+  }
+} else {
+  // 5) Collect results from discoveries/{task_id}.json
+  for (const [taskId, agent] of Object.entries(state.active_agents)) {
+    if (agent.resident) continue
+    try {
+      const disc = JSON.parse(Read(`${sessionFolder}/discoveries/${taskId}.json`))
+      state.tasks[taskId].status = disc.status || 'completed'
+      state.tasks[taskId].findings = disc.findings || ''
+      state.tasks[taskId].quality_score = disc.quality_score || null
+      state.tasks[taskId].error = disc.error || null
+    } catch {
+      state.tasks[taskId].status = 'failed'
+      state.tasks[taskId].error = 'No discovery file produced'
+    }
+    close_agent({ target: taskId }) // Use task_name, not agentId
+    delete state.active_agents[taskId]
+  }
+}
```
**Cross-Agent Supplementary Context** (v4):
When spawning workers in a later pipeline phase, send upstream results as supplementary context to already-running workers:
```
// Example: Send planning results to running implementers
send_message({
target: "<running-agent-task-name>",
items: [{ type: "text", text: `## Supplementary Context\n${upstreamFindings}` }]
})
// Note: send_message queues info without interrupting the agent's current work
```
Use `send_message` (not `assign_task`) for supplementary info that enriches but doesn't redirect the agent's current task.
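The queue-without-interrupt behavior described above can be modeled with a minimal mailbox. Everything here is an illustrative stub of the assumed semantics, not the real v4 message bus: messages accumulate while the worker is busy and are drained only between work items.

```javascript
// Minimal model of "queue without interrupting": sendMessage never
// preempts; the worker drains the mailbox between work items.
class Mailbox {
  constructor() {
    this.queue = [];
  }
  sendMessage(text) {
    this.queue.push(text); // queued, current work continues
  }
  drain() {
    const all = this.queue;
    this.queue = [];
    return all;
  }
}

const box = new Mailbox();
box.sendMessage("## Supplementary Context\nplanning results");
box.sendMessage("## Supplementary Context\nschema notes");
// Worker finishes its current item, then reads everything at once:
const received = box.drain();
```

This is why supplementary context sent mid-phase cannot redirect a worker: it is only seen once the current task step completes.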
### Handle CHECKPOINT Tasks
For each ready CHECKPOINT task:
@@ -125,7 +158,7 @@ For each ready CHECKPOINT task:
2. Determine scope: list task IDs that this checkpoint depends on (its deps)
3. Wake supervisor:
```javascript
-send_input({
+assign_task({
id: supervisorId,
items: [
{ type: "text", text: `## Checkpoint Request
@@ -134,7 +167,8 @@ For each ready CHECKPOINT task:
pipeline_progress: ${completedCount}/${totalCount} tasks completed` }
]
})
-wait_agent({ ids: [supervisorId], timeout_ms: 300000 })
+const cpResult = wait_agent({ targets: [supervisorId], timeout_ms: 300000 })
+if (cpResult.timed_out) { /* mark checkpoint timed_out, close supervisor, STOP */ }
```
4. Read checkpoint report from artifacts/${task.id}-report.md
5. Parse verdict (pass / warn / block):
@@ -153,11 +187,19 @@ After processing all results:
## handleComplete
**Cleanup Verification** (v4):
```
// Verify all agents are properly closed
const remaining = list_agents({})
// If any team agents still running -> close_agent each
// Ensures clean session shutdown
```
Pipeline done. Generate report and completion action.
1. Shutdown resident supervisor (if active):
```javascript
-close_agent({ id: supervisorId })
+close_agent({ target: supervisorId })
```
Remove from active_agents in tasks.json
2. Generate summary (deliverables, stats, discussions)

View File

@@ -6,7 +6,7 @@ Orchestrate team-lifecycle-v4: analyze -> dispatch -> spawn -> monitor -> report
**You are a dispatcher, not a doer.** Your ONLY outputs are:
- Session state files (`.workflow/.team/` directory)
-- `spawn_agent` / `wait_agent` / `close_agent` / `send_input` calls
+- `spawn_agent` / `wait_agent` / `close_agent` / `send_message` / `assign_task` calls
- Status reports to the user
- `request_user_input` prompts
@@ -38,6 +38,8 @@ WRONG: Bash("npm test"), Bash("tsc"), etc. — worker work
- Maintain session state (tasks.json)
- Handle capability_gap reports
- Execute completion action when pipeline finishes
- Use `send_message` for supplementary context (non-interrupting) and `assign_task` for triggering new work
- Use `list_agents` for session resume health checks and cleanup verification
### MUST NOT
- Read source code or explore codebase (delegate to workers)
@@ -166,6 +168,16 @@ Delegate to @commands/monitor.md#handleSpawnNext:
- auto_archive -> Archive & Clean (rm -rf session folder)
- auto_keep -> Keep Active
## v4 Coordination Patterns
### Message Semantics
- **send_message**: Queue supplementary info to a running agent. Does NOT interrupt current processing. Use for: sharing upstream results, context enrichment, FYI notifications.
- **assign_task**: Assign new work and trigger processing. Use for: waking idle agents, redirecting work, requesting new output.
### Agent Lifecycle Management
- **list_agents({})**: Returns all running agents. Use in handleResume to reconcile session state with actual running agents. Use in handleComplete to verify clean shutdown.
- **Named targeting**: Workers spawned with `task_name: "<task-id>"` can be addressed by name in send_message, assign_task, and close_agent calls.
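The two message semantics above can be captured in a tiny decision helper. The intent names are illustrative (they are not part of the v4 API); only the returned tool names come from the text:

```javascript
// Hypothetical decision helper reflecting the semantics above:
// send_message for non-interrupting context, assign_task to trigger work.
function chooseApi(intent) {
  switch (intent) {
    case "share-context":
    case "fyi":
      return "send_message"; // queued, does not interrupt current processing
    case "wake-idle":
    case "redirect-work":
    case "request-output":
      return "assign_task"; // assigns new work and triggers processing
    default:
      throw new Error(`unknown intent: ${intent}`);
  }
}
```

A coordinator following this rule would, for example, use `chooseApi("share-context")` when forwarding upstream findings and `chooseApi("wake-idle")` when triggering a supervisor checkpoint.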
## Error Handling
| Error | Resolution |

View File

@@ -17,7 +17,7 @@ Process and execution supervision at pipeline phase transition points.
## Identity
- Tag: [supervisor] | Prefix: CHECKPOINT-*
- Responsibility: Verify cross-artifact consistency, process compliance, and execution health between pipeline phases
-- Residency: Spawned once, awakened via `send_input` at each checkpoint trigger (not SendMessage)
+- Residency: Spawned once, awakened via `assign_task` at each checkpoint trigger (not SendMessage)
## Boundaries