refactor: deep Codex v4 API conversion for all 20 team skills

Upgrade all team-* skills from mechanical v3→v4 API renames to deep
v4 tool integration with skill-adaptive patterns:

- list_agents: health checks in handleResume, cleanup verification in
  handleComplete, added to allowed-tools and coordinator toolbox
- Named targeting: task_name uses task-id (e.g. EXPLORE-001) instead
  of generic <role>-worker, enabling send_message/assign_task by name
- Message semantics: send_message for supplementary cross-agent context
  vs assign_task for triggering work, with skill-specific examples
- Model selection: per-role reasoning_effort guidance matching each
  skill's actual roles (not generic boilerplate)
- timeout_ms: added to all wait_agent calls, timed_out handling in
  all 18 monitor.md files
- Skill-adaptive v4 sections: ultra-analyze N-parallel coordination,
  lifecycle-v4 supervisor assign_task/send_message distinction,
  brainstorm ideator parallel patterns, iterdev generator-critic loops,
  frontend-debug iterative debug assign_task, perf-opt benchmark
  context sharing, executor lightweight trimmed v4, etc.

60 files changed across 20 team skills (SKILL.md, monitor.md, role.md)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

@@ -1,7 +1,7 @@
---
name: team-perf-opt
description: Unified team skill for performance optimization. Coordinator orchestrates pipeline, workers are team-worker agents. Supports single/fan-out/independent parallel modes. Triggers on "team perf-opt".
-allowed-tools: spawn_agent(*), wait_agent(*), send_input(*), close_agent(*), report_agent_job_result(*), request_user_input(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*), mcp__ace-tool__search_context(*)
+allowed-tools: spawn_agent(*), wait_agent(*), send_message(*), assign_task(*), close_agent(*), list_agents(*), report_agent_job_result(*), request_user_input(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*), mcp__ace-tool__search_context(*)
---
# Team Performance Optimization
@@ -65,7 +65,8 @@ Before calling ANY tool, apply this check:
| Tool Call | Verdict | Reason |
|-----------|---------|--------|
-| `spawn_agent`, `wait_agent`, `close_agent`, `send_input` | ALLOWED | Orchestration |
+| `spawn_agent`, `wait_agent`, `close_agent`, `send_message`, `assign_task` | ALLOWED | Orchestration |
+| `list_agents` | ALLOWED | Agent health check |
| `request_user_input` | ALLOWED | User interaction |
| `mcp__ccw-tools__team_msg` | ALLOWED | Message bus |
| `Read/Write` on `.workflow/.team/` files | ALLOWED | Session state |
@@ -96,6 +97,8 @@ Coordinator spawns workers using this template:
```
spawn_agent({
  agent_type: "team_worker",
+  task_name: "<task-id>",
+  fork_context: false,
  items: [
    { type: "text", text: `## Role Assignment
role: <role>
@@ -119,11 +122,37 @@ pipeline_phase: <pipeline-phase>` },
})
```
-After spawning, use `wait_agent({ ids: [...], timeout_ms: 900000 })` to collect results, then `close_agent({ id })` each worker.
+After spawning, use `wait_agent({ targets: [...], timeout_ms: 900000 })` to collect results, then `close_agent({ target })` on each worker.
**Inner Loop roles** (optimizer): Set `inner_loop: true`.
**Single-task roles** (profiler, strategist, benchmarker, reviewer): Set `inner_loop: false`.
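A minimal collect-and-close sketch (the per-target result shape and its `timed_out` status are assumptions, mirroring the timed_out handling added to the monitor files):
```
// Wait for all workers, then release the ones that finished.
// ASSUMPTION: wait_agent returns one result per target with
// { target, status }, where status may be "timed_out".
const results = wait_agent({ targets: ["PROFILE-001", "IMPL-001"], timeout_ms: 900000 })
for (const r of results) {
  if (r.status === "timed_out") {
    // Don't kill a slow worker outright; queue a non-interrupting nudge
    send_message({ target: r.target, items: [{ type: "text", text: "Status check: report progress or blockers." }] })
  } else {
    close_agent({ target: r.target })
  }
}
```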
### Model Selection Guide
Performance optimization is measurement-driven. Profiler and benchmarker need consistent context for before/after comparison.
| Role | reasoning_effort | Rationale |
|------|-------------------|-----------|
| profiler | high | Must identify subtle bottlenecks from profiling data |
| strategist | high | Optimization strategy requires understanding tradeoffs |
| optimizer | high | Performance-critical code changes need precision |
| benchmarker | medium | Benchmark execution follows defined measurement plan |
| reviewer | high | Must verify optimizations don't introduce regressions |
### Benchmark Context Sharing with fork_context
For before/after comparison, spawn the benchmarker with a forked context so it sees the profiler's baseline:
```
spawn_agent({
  agent_type: "team_worker",
  task_name: "BENCH-001",
  fork_context: true, // Share context so benchmarker sees profiler's baseline metrics
  reasoning_effort: "medium",
  items: [...]
})
```
## User Commands
| Command | Action |
@@ -155,6 +184,42 @@ After spawning, use `wait_agent({ ids: [...], timeout_ms: 900000 })` to collect
+-- .msg/meta.json # Session metadata
```
## v4 Agent Coordination
### Message Semantics
| Intent | API | Example |
|--------|-----|---------|
| Queue supplementary info (don't interrupt) | `send_message` | Send baseline metrics to running optimizer |
| Assign fix after benchmark regression | `assign_task` | Assign FIX task when benchmark shows regression |
| Check running agents | `list_agents` | Verify agent health during resume |
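A hedged sketch of the distinction (payload text and file names are illustrative):
```
// Non-interrupting: queue supplementary context for a running optimizer
send_message({
  target: "IMPL-001",
  items: [{ type: "text", text: "Baseline metrics available at artifacts/baseline-metrics.json" }]
})

// Work-triggering: assign a fix task after a benchmark regression
assign_task({
  target: "IMPL-001",
  items: [{ type: "text", text: "FIX: benchmark regressed against baseline; see BENCH-001 report" }]
})
```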
### Agent Health Check
Use `list_agents({})` in handleResume and handleComplete:
```
// Reconcile session state with actual running agents.
// ASSUMPTION: list_agents returns a list with per-agent task_name;
// adjust the field access to the real API shape.
const running = list_agents({})
const alive = new Set(running.map(a => a.task_name))
// Reset orphaned tasks (in_progress but agent gone) to pending
for (const t of session.tasks)
  if (t.status === "in_progress" && !alive.has(t.id)) t.status = "pending"
```
### Named Agent Targeting
Workers are spawned with `task_name: "<task-id>"`, enabling direct addressing:
- `send_message({ target: "IMPL-001", items: [...] })` -- send strategy details to optimizer
- `assign_task({ target: "IMPL-001", items: [...] })` -- assign fix after benchmark regression
- `close_agent({ target: "BENCH-001" })` -- cleanup after benchmarking completes
### Baseline-to-Result Pipeline
Profiler baseline metrics flow through the pipeline and must reach the benchmarker for comparison:
1. PROFILE-001 produces `baseline-metrics.json` in artifacts/
2. Coordinator includes baseline reference in upstream context for all downstream workers
3. BENCH-001 reads baseline and compares against post-optimization measurements
4. If a regression is detected, the coordinator auto-creates a FIX task with regression details (see the sketch below)
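A sketch of the regression check in step 4 (`Read` is used in the document's pseudo tool-call style; the metric field names and artifact paths are assumptions about the artifact format):
```
// Compare BENCH-001 results against the PROFILE-001 baseline.
// ASSUMPTION: both artifacts expose a p95_ms field.
const baseline = JSON.parse(Read("artifacts/baseline-metrics.json"))
const bench = JSON.parse(Read("artifacts/bench-results.json"))
if (bench.p95_ms > baseline.p95_ms) {
  assign_task({
    target: "IMPL-001",
    items: [{ type: "text", text: `FIX: p95 regressed ${baseline.p95_ms}ms -> ${bench.p95_ms}ms` }]
  })
}
```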
## Completion Action
When the pipeline completes: