feat: add multi-mode workflow planning skill with session management and task generation

Commit 121e834459 by catlog22, 2026-03-02 15:25:56 +08:00 (parent 2c2b9d6e29)
28 changed files with 6478 additions and 533 deletions


@@ -0,0 +1,349 @@
---
name: team-perf-opt
description: Unified team skill for performance optimization. Uses team-worker agent architecture with role-spec files for domain logic. Coordinator orchestrates pipeline, workers are team-worker agents. Triggers on "team perf-opt".
allowed-tools: Task, TaskCreate, TaskList, TaskGet, TaskUpdate, TeamCreate, TeamDelete, SendMessage, AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep, mcp__ace-tool__search_context
---
# Team Performance Optimization
Unified team skill: Profile application performance, identify bottlenecks, design optimization strategies, implement changes, benchmark improvements, and review code quality. Built on **team-worker agent architecture** -- all worker roles share a single agent definition with role-specific Phase 2-4 loaded from markdown specs.
## Architecture
```
+---------------------------------------------------+
| Skill(skill="team-perf-opt") |
| args="<task-description>" |
+-------------------+-------------------------------+
|
Orchestration Mode (auto -> coordinator)
|
Coordinator (inline)
Phase 0-5 orchestration
|
+-------+-------+-------+-------+
v v v v v
[tw] [tw] [tw] [tw] [tw]
profiler strate- optim- bench- review-
gist izer marker er
Subagents (callable by workers, not team members):
[explore] [discuss]
[tw] = team-worker agent
```
## Role Router
This skill is **coordinator-only**. Workers do NOT invoke this skill -- they are spawned as `team-worker` agents directly.
### Input Parsing
Parse `$ARGUMENTS`. No `--role` needed -- always routes to coordinator.
### Role Registry
| Role | Spec | Task Prefix | Type | Inner Loop |
|------|------|-------------|------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | (none) | orchestrator | - |
| profiler | [role-specs/profiler.md](role-specs/profiler.md) | PROFILE-* | orchestration | false |
| strategist | [role-specs/strategist.md](role-specs/strategist.md) | STRATEGY-* | orchestration | false |
| optimizer | [role-specs/optimizer.md](role-specs/optimizer.md) | IMPL-* / FIX-* | code_generation | true |
| benchmarker | [role-specs/benchmarker.md](role-specs/benchmarker.md) | BENCH-* | validation | false |
| reviewer | [role-specs/reviewer.md](role-specs/reviewer.md) | REVIEW-* / QUALITY-* | read_only_analysis | false |
### Subagent Registry
| Subagent | Spec | Callable By | Purpose |
|----------|------|-------------|---------|
| explore | [subagents/explore-subagent.md](subagents/explore-subagent.md) | profiler, optimizer | Shared codebase exploration for performance-critical code paths |
| discuss | [subagents/discuss-subagent.md](subagents/discuss-subagent.md) | strategist, reviewer | Multi-perspective discussion for optimization approaches and review findings |
### Dispatch
Always route to coordinator. Coordinator reads `roles/coordinator/role.md` and executes its phases.
### Orchestration Mode
User just provides task description.
**Invocation**: `Skill(skill="team-perf-opt", args="<task-description>")`
**Lifecycle**:
```
User provides task description
-> coordinator Phase 1-3: Requirement clarification -> TeamCreate -> Create task chain
-> coordinator Phase 4: spawn first batch workers (background) -> STOP
-> Worker (team-worker agent) executes -> SendMessage callback -> coordinator advances
-> Loop until pipeline complete -> Phase 5 report + completion action
```
**User Commands** (wake paused coordinator):
| Command | Action |
|---------|--------|
| `check` / `status` | Output execution status graph, no advancement |
| `resume` / `continue` | Check worker states, advance next step |
| `revise <TASK-ID> [feedback]` | Create revision task + cascade downstream |
| `feedback <text>` | Analyze feedback impact, create targeted revision chain |
| `recheck` | Re-run quality check |
| `improve [dimension]` | Auto-improve weakest dimension |
---
## Command Execution Protocol
When coordinator needs to execute a command (dispatch, monitor):
1. **Read the command file**: `roles/coordinator/commands/<command-name>.md`
2. **Follow the workflow** defined in the command file (Phase 2-4 structure)
3. **Commands are inline execution guides** -- NOT separate agents or subprocesses
4. **Execute synchronously** -- complete the command workflow before proceeding
Example:
```
Phase 3 needs task dispatch
-> Read roles/coordinator/commands/dispatch.md
-> Execute Phase 2 (Context Loading)
-> Execute Phase 3 (Task Chain Creation)
-> Execute Phase 4 (Validation)
-> Continue to coordinator Phase 4
```
---
## Coordinator Spawn Template
### v5 Worker Spawn (all roles)
When coordinator spawns workers, use `team-worker` agent with role-spec path:
```
Task({
subagent_type: "team-worker",
description: "Spawn <role> worker",
team_name: <team-name>,
name: "<role>",
run_in_background: true,
prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-perf-opt/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: <team-name>
requirement: <task-description>
inner_loop: <true|false>
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role-spec Phase 2-4 -> built-in Phase 5 (report).`
})
```
**Inner Loop roles** (optimizer): Set `inner_loop: true`. The team-worker agent handles the loop internally.
**Single-task roles** (profiler, strategist, benchmarker, reviewer): Set `inner_loop: false`.
---
## Pipeline Definitions
### Pipeline Diagram
```
Pipeline: Linear with Review-Fix Cycle
=====================================================================
Stage 1 Stage 2 Stage 3 Stage 4
(W:1) (W:2) (W:3) (W:4)
+-----------+
PROFILE-001 --> STRATEGY-001 --> IMPL-001 --> | BENCH-001 |
[profiler] [strategist] [optimizer] | [bench] |
^ +-----------+
| |
| +-----------+
+<--FIX--->| REVIEW-001|
| | [reviewer]|
| +-----------+
| |
(max 3 iterations) v
COMPLETE
=====================================================================
```
### Cadence Control
**Beat model**: Event-driven, each beat = coordinator wake -> process -> spawn -> STOP.
```
Beat Cycle (single beat)
======================================================================
Event Coordinator Workers
----------------------------------------------------------------------
callback/resume --> +- handleCallback -+
| mark completed |
| check pipeline |
+- handleSpawnNext -+
| find ready tasks |
| spawn workers ---+--> [team-worker A] Phase 1-5
| (parallel OK) --+--> [team-worker B] Phase 1-5
+- STOP (idle) -----+ |
|
callback <-----------------------------------------+
(next beat) SendMessage + TaskUpdate(completed)
======================================================================
Fast-Advance (skips coordinator for simple linear successors)
======================================================================
[Worker A] Phase 5 complete
+- 1 ready task? simple successor?
| --> spawn team-worker B directly
| --> log fast_advance to message bus (coordinator syncs on next wake)
+- complex case? --> SendMessage to coordinator
======================================================================
```
```
Beat View: Performance Optimization Pipeline
======================================================================
Event Coordinator Workers
----------------------------------------------------------------------
new task --> +- Phase 1-3: clarify -+
| TeamCreate |
| create PROFILE-001 |
+- Phase 4: spawn ------+--> [profiler] Phase 1-5
+- STOP (idle) ---------+ |
|
callback <----------------------------------------------+
(profiler done) --> +- handleCallback ------+ profile_complete
| mark PROFILE done |
| spawn strategist ----+--> [strategist] Phase 1-5
+- STOP ----------------+ |
|
callback <----------------------------------------------+
(strategist done)--> +- handleCallback ------+ strategy_complete
| mark STRATEGY done |
| spawn optimizer -----+--> [optimizer] Phase 1-5
+- STOP ----------------+ |
|
callback <----------------------------------------------+
(optimizer done) --> +- handleCallback ------+ impl_complete
| mark IMPL done |
| spawn bench+reviewer-+--> [benchmarker] Phase 1-5
| (parallel) -------+--> [reviewer] Phase 1-5
+- STOP ----------------+ | |
| |
callback x2 <--------------------------------------+-----------+
--> +- handleCallback ------+
| both done? |
| YES + pass -> Phase 5|
| NO / fail -> FIX task|
| spawn optimizer -----+--> [optimizer] FIX-001
+- STOP or Phase 5 -----+
======================================================================
```
**Checkpoints**:
| Checkpoint | Trigger | Location | Behavior |
|------------|---------|----------|----------|
| CP-1 | PROFILE-001 complete | After Stage 1 | User reviews bottleneck report, can refine scope |
| CP-2 | STRATEGY-001 complete | After Stage 2 | User reviews optimization plan, can adjust priorities |
| CP-3 | REVIEW/BENCH fail | Stage 4 | Auto-create FIX task, re-enter Stage 3 (max 3x) |
| CP-4 | All tasks complete | Phase 5 | Interactive completion action |
### Task Metadata Registry
| Task ID | Role | Phase | Dependencies | Description |
|---------|------|-------|-------------|-------------|
| PROFILE-001 | profiler | Stage 1 | (none) | Profile application, identify bottlenecks |
| STRATEGY-001 | strategist | Stage 2 | PROFILE-001 | Design optimization plan from bottleneck report |
| IMPL-001 | optimizer | Stage 3 | STRATEGY-001 | Implement highest-priority optimizations |
| BENCH-001 | benchmarker | Stage 4 | IMPL-001 | Run benchmarks, compare vs baseline |
| REVIEW-001 | reviewer | Stage 4 | IMPL-001 | Review optimization code for correctness |
| FIX-001 | optimizer | Stage 3 (cycle) | REVIEW-001 or BENCH-001 | Fix issues found in review/benchmark |
---
## Completion Action
When the pipeline completes (all tasks done, coordinator Phase 5):
```
AskUserQuestion({
questions: [{
question: "Team pipeline complete. What would you like to do?",
header: "Completion",
multiSelect: false,
options: [
{ label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and team resources" },
{ label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
{ label: "Export Results", description: "Export deliverables to a specified location, then clean" }
]
}]
})
```
| Choice | Action |
|--------|--------|
| Archive & Clean | Update session status="completed" -> TeamDelete(perf-opt) -> output final summary |
| Keep Active | Update session status="paused" -> output resume instructions: `Skill(skill="team-perf-opt", args="resume")` |
| Export Results | AskUserQuestion for target path -> copy deliverables -> Archive & Clean |
---
## Session Directory
```
.workflow/<session-id>/
+-- session.json # Session metadata + status
+-- artifacts/
| +-- baseline-metrics.json # Profiler: before-optimization metrics
| +-- bottleneck-report.md # Profiler: ranked bottleneck findings
| +-- optimization-plan.md # Strategist: prioritized optimization plan
| +-- benchmark-results.json # Benchmarker: after-optimization metrics
| +-- review-report.md # Reviewer: code review findings
+-- explorations/
| +-- cache-index.json # Shared explore cache
| +-- <hash>.md # Cached exploration results
+-- wisdom/
| +-- patterns.md # Discovered patterns and conventions
| +-- shared-memory.json # Cross-role structured data
+-- discussions/
| +-- DISCUSS-OPT.md # Strategy discussion record
| +-- DISCUSS-REVIEW.md # Review discussion record
```
## Session Resume
Coordinator supports `resume` / `continue` for interrupted sessions:
1. Scan session directory for sessions with status "active" or "paused"
2. Multiple matches -> AskUserQuestion for selection
3. Audit TaskList -> reconcile session state <-> task status
4. Reset in_progress -> pending (interrupted tasks)
5. Rebuild team and spawn needed workers only
6. Create missing tasks with correct blockedBy
7. Kick first executable task -> Phase 4 coordination loop
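Steps 3-4 (audit and reset of interrupted tasks) can be sketched as a pure reconciliation pass. The task shape is illustrative:

```javascript
// Sketch: on resume, any task left in_progress by an interrupted
// worker is reset to pending so it can be re-dispatched. Completed
// and pending tasks pass through unchanged. Task shape is illustrative.
function resetInterruptedTasks(tasks) {
  return tasks.map((t) =>
    t.status === "in_progress" ? { ...t, status: "pending" } : t
  );
}
```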
## Shared Resources
| Resource | Path | Usage |
|----------|------|-------|
| Performance Baseline | [<session>/artifacts/baseline-metrics.json](<session>/artifacts/baseline-metrics.json) | Before-optimization metrics for comparison |
| Bottleneck Report | [<session>/artifacts/bottleneck-report.md](<session>/artifacts/bottleneck-report.md) | Profiler output consumed by strategist |
| Optimization Plan | [<session>/artifacts/optimization-plan.md](<session>/artifacts/optimization-plan.md) | Strategist output consumed by optimizer |
| Benchmark Results | [<session>/artifacts/benchmark-results.json](<session>/artifacts/benchmark-results.json) | Benchmarker output consumed by reviewer |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Role spec file not found | Error with expected path (role-specs/<name>.md) |
| Command file not found | Fallback to inline execution in coordinator role.md |
| Subagent spec not found | Error with expected path (subagents/<name>-subagent.md) |
| Fast-advance orphan detected | Coordinator resets task to pending on next check |
| consensus_blocked HIGH | Coordinator creates revision task or pauses pipeline |
| team-worker agent unavailable | Error: requires .claude/agents/team-worker.md |
| Completion action timeout | Default to Keep Active |
| Profiling tool not available | Fallback to static analysis methods |
| Benchmark regression detected | Auto-create FIX task with regression details |
| Review-fix cycle exceeds 3 iterations | Escalate to user with summary of remaining issues |


@@ -0,0 +1,85 @@
---
prefix: BENCH
inner_loop: false
message_types:
success: bench_complete
error: error
fix: fix_required
---
# Performance Benchmarker
Run benchmarks comparing before/after optimization metrics. Validate that improvements meet plan success criteria and detect any regressions.
## Phase 2: Environment & Baseline Loading
| Input | Source | Required |
|-------|--------|----------|
| Baseline metrics | <session>/artifacts/baseline-metrics.json | Yes |
| Optimization plan | <session>/artifacts/optimization-plan.md | Yes |
| shared-memory.json | <session>/wisdom/shared-memory.json | Yes |
1. Extract session path from task description
2. Read baseline metrics -- extract pre-optimization performance numbers
3. Read optimization plan -- extract success criteria and target thresholds
4. Load shared-memory.json for project type and optimization scope
5. Detect available benchmark tools from project:
| Signal | Benchmark Tool | Method |
|--------|---------------|--------|
| package.json + vitest/jest | Test runner benchmarks | Run existing perf tests |
| package.json + webpack/vite | Bundle analysis | Compare build output sizes |
| Cargo.toml + criterion | Rust benchmarks | cargo bench |
| go.mod | Go benchmarks | go test -bench |
| Makefile with bench target | Custom benchmarks | make bench |
| No tooling detected | Manual measurement | Timed execution via Bash |
6. Get changed files scope from shared-memory (optimizer namespace)
## Phase 3: Benchmark Execution
Run benchmarks matching detected project type:
**Frontend benchmarks**:
- Compare bundle size before/after (build output analysis)
- Measure render performance for affected components
- Check for dependency weight changes
**Backend benchmarks**:
- Measure endpoint response times for affected routes
- Profile memory usage under simulated load
- Verify database query performance improvements
**CLI / Library benchmarks**:
- Measure execution time for representative workloads
- Compare memory peak usage
- Test throughput under sustained load
**All project types**:
- Run existing test suite to verify no regressions
- Collect post-optimization metrics matching baseline format
- Calculate improvement percentages per metric
## Phase 4: Result Analysis
Compare against baseline and plan criteria:
| Metric | Threshold | Verdict |
|--------|-----------|---------|
| Target improvement vs baseline | Meets plan success criteria | PASS |
| No regression in unrelated metrics | < 5% degradation allowed | PASS |
| All plan success criteria met | Every criterion satisfied | PASS |
| Improvement below target | > 50% of target achieved | WARN |
| Regression detected | Any unrelated metric degrades > 5% | FAIL -> fix_required |
| Plan criteria not met | Any criterion not satisfied | FAIL -> fix_required |
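The verdict rules above can be sketched per metric, with the overall verdict as the worst per-metric result. Values and the metric shape are illustrative (`improvement` and `target` as percentages):

```javascript
// Sketch of the verdict table: regression > 5% fails outright,
// meeting the target passes, >= 50% of target warns, else fails.
// Metric shape is illustrative.
function metricVerdict({ improvement, target, regressionPct = 0 }) {
  if (regressionPct > 5) return "FAIL";           // unrelated metric degraded > 5%
  if (improvement >= target) return "PASS";       // meets plan success criterion
  if (improvement >= target * 0.5) return "WARN"; // > 50% of target achieved
  return "FAIL";                                  // criterion not met
}

// Overall verdict is the worst per-metric verdict.
function overallVerdict(metrics) {
  const rank = { PASS: 0, WARN: 1, FAIL: 2 };
  return metrics
    .map(metricVerdict)
    .reduce((worst, v) => (rank[v] > rank[worst] ? v : worst), "PASS");
}
```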
1. Write benchmark results to `<session>/artifacts/benchmark-results.json`:
- Per-metric: name, baseline value, current value, improvement %, verdict
- Overall verdict: PASS / WARN / FAIL
- Regression details (if any)
2. Update `<session>/wisdom/shared-memory.json` under `benchmarker` namespace:
- Read existing -> merge `{ "benchmarker": { verdict, improvements, regressions } }` -> write back
3. If verdict is FAIL, include detailed feedback in message for FIX task creation:
- Which metrics failed, by how much, suggested investigation areas


@@ -0,0 +1,76 @@
---
prefix: IMPL
inner_loop: true
additional_prefixes: [FIX]
subagents: [explore]
message_types:
success: impl_complete
error: error
fix: fix_required
---
# Code Optimizer
Implement optimization changes following the strategy plan. For FIX tasks, apply targeted corrections based on review/benchmark feedback.
## Modes
| Mode | Task Prefix | Trigger | Focus |
|------|-------------|---------|-------|
| Implement | IMPL | Strategy plan ready | Apply optimizations per plan priority |
| Fix | FIX | Review/bench feedback | Targeted fixes for identified issues |
## Phase 2: Plan & Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Optimization plan | <session>/artifacts/optimization-plan.md | Yes (IMPL) |
| Review/bench feedback | From task description | Yes (FIX) |
| shared-memory.json | <session>/wisdom/shared-memory.json | Yes |
| Wisdom files | <session>/wisdom/patterns.md | No |
| Context accumulator | From prior IMPL/FIX tasks | Yes (inner loop) |
1. Extract session path and task mode (IMPL or FIX) from task description
2. For IMPL: read optimization plan -- extract priority-ordered changes and success criteria
3. For FIX: parse review/benchmark feedback for specific issues to address
4. Use `explore` subagent to load implementation context for target files
5. For inner loop: load context_accumulator from prior IMPL/FIX tasks to avoid re-reading
## Phase 3: Code Implementation
Implementation backend selection:
| Backend | Condition | Method |
|---------|-----------|--------|
| CLI | Multi-file optimization with clear plan | ccw cli --tool gemini --mode write |
| Direct | Single-file changes or targeted fixes | Inline Edit/Write tools |
For IMPL tasks:
- Apply optimizations in plan priority order (P0 first, then P1, etc.)
- Follow implementation guidance from plan (target files, patterns)
- Preserve existing behavior -- optimization must not break functionality
For FIX tasks:
- Read specific issues from review/benchmark feedback
- Apply targeted corrections to flagged code locations
- Verify the fix addresses the exact concern raised
General rules:
- Make minimal, focused changes per optimization
- Add comments only where optimization logic is non-obvious
- Preserve existing code style and conventions
## Phase 4: Self-Validation
| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Syntax | IDE diagnostics or build check | No new errors |
| File integrity | Verify all planned files exist and are modified | All present |
| Acceptance | Match optimization plan success criteria | All target metrics addressed |
| No regression | Run existing tests if available | No new failures |
If validation fails, attempt auto-fix (max 2 attempts) before reporting error.
Append to context_accumulator for next IMPL/FIX task:
- Files modified, optimizations applied, validation results
- Any discovered patterns or caveats for subsequent iterations
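The validate-then-auto-fix loop (max 2 attempts) can be sketched with hypothetical `validate`/`autoFix` callbacks:

```javascript
// Sketch: run validation, attempt auto-fix up to maxAttempts times,
// then report remaining failures. validate() returns a list of
// failures; autoFix(failures) tries to resolve them. Both callbacks
// are hypothetical.
function validateWithAutoFix(validate, autoFix, maxAttempts = 2) {
  let failures = validate();
  for (let attempt = 0; failures.length > 0 && attempt < maxAttempts; attempt++) {
    autoFix(failures);
    failures = validate();
  }
  return failures.length === 0
    ? { ok: true }
    : { ok: false, failures }; // report error with remaining failures
}
```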


@@ -0,0 +1,73 @@
---
prefix: PROFILE
inner_loop: false
subagents: [explore]
message_types:
success: profile_complete
error: error
---
# Performance Profiler
Profile application performance to identify CPU, memory, I/O, network, and rendering bottlenecks. Produce quantified baseline metrics and a ranked bottleneck report.
## Phase 2: Context & Environment Detection
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| shared-memory.json | <session>/wisdom/shared-memory.json | No |
1. Extract session path and target scope from task description
2. Detect project type by scanning for framework markers:
| Signal File | Project Type | Profiling Focus |
|-------------|-------------|-----------------|
| package.json + React/Vue/Angular | Frontend | Render time, bundle size, FCP/LCP/CLS |
| package.json + Express/Fastify/NestJS | Backend Node | CPU hotspots, memory, DB queries |
| Cargo.toml / go.mod / pom.xml | Native/JVM Backend | CPU, memory, GC tuning |
| Mixed framework markers | Full-stack | Split into FE + BE profiling passes |
| CLI entry / bin/ directory | CLI Tool | Startup time, throughput, memory peak |
| No detection | Generic | All profiling dimensions |
3. Use `explore` subagent to map performance-critical code paths within target scope
4. Detect available profiling tools (test runners, benchmark harnesses, linting tools)
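The project-type detection in step 2 can be sketched as an ordered rule scan over marker files. The inputs (`files`, `deps`) are assumed to be pre-collected; rules mirror the table, first match wins:

```javascript
// Sketch: detect project type from marker files and package.json
// dependency names. Inputs are pre-collected and illustrative.
function detectProjectType(files, deps = []) {
  const has = (f) => files.includes(f);
  const dep = (names) => names.some((n) => deps.includes(n));
  const frontend = has("package.json") && dep(["react", "vue", "angular"]);
  const backend = has("package.json") && dep(["express", "fastify", "nestjs"]);
  if (frontend && backend) return "full-stack";     // mixed markers
  if (frontend) return "frontend";
  if (backend) return "backend-node";
  if (has("Cargo.toml") || has("go.mod") || has("pom.xml")) return "native-jvm-backend";
  if (has("bin/")) return "cli-tool";
  return "generic";                                  // no detection
}
```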
## Phase 3: Performance Profiling
Execute profiling based on detected project type:
**Frontend profiling**:
- Analyze bundle size and dependency weight via build output
- Identify render-blocking resources and heavy components
- Check for unnecessary re-renders, large DOM trees, unoptimized assets
**Backend profiling**:
- Trace hot code paths via execution analysis or instrumented runs
- Identify slow database queries, N+1 patterns, missing indexes
- Check memory allocation patterns and potential leaks
**CLI / Library profiling**:
- Measure startup time and critical path latency
- Profile throughput under representative workloads
- Identify memory peaks and allocation churn
**All project types**:
- Collect quantified baseline metrics (timing, memory, throughput)
- Rank top 3-5 bottlenecks by severity (Critical / High / Medium)
- Record evidence: file paths, line numbers, measured values
## Phase 4: Report Generation
1. Write baseline metrics to `<session>/artifacts/baseline-metrics.json`:
- Key metric names, measured values, units, measurement method
- Timestamp and environment details
2. Write bottleneck report to `<session>/artifacts/bottleneck-report.md`:
- Ranked list of bottlenecks with severity, location (file:line), measured impact
- Evidence summary per bottleneck
- Detected project type and profiling methods used
3. Update `<session>/wisdom/shared-memory.json` under `profiler` namespace:
- Read existing -> merge `{ "profiler": { project_type, bottleneck_count, top_bottleneck, scope } }` -> write back


@@ -0,0 +1,69 @@
---
prefix: REVIEW
inner_loop: false
additional_prefixes: [QUALITY]
discuss_rounds: [DISCUSS-REVIEW]
subagents: [discuss]
message_types:
success: review_complete
error: error
fix: fix_required
---
# Optimization Reviewer
Review optimization code changes for correctness, side effects, regression risks, and adherence to best practices. Provide structured verdicts with actionable feedback.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Optimization code changes | From IMPL task artifacts / git diff | Yes |
| Optimization plan | <session>/artifacts/optimization-plan.md | Yes |
| Benchmark results | <session>/artifacts/benchmark-results.json | No |
| shared-memory.json | <session>/wisdom/shared-memory.json | Yes |
1. Extract session path from task description
2. Read optimization plan -- understand intended changes and success criteria
3. Load shared-memory.json for optimizer namespace (files modified, patterns applied)
4. Identify changed files from optimizer context -- read each modified file
5. If benchmark results available, read for cross-reference with code quality
## Phase 3: Multi-Dimension Review
Analyze optimization changes across five dimensions:
| Dimension | Focus | Severity |
|-----------|-------|----------|
| Correctness | Logic errors, off-by-one, race conditions, null safety | Critical |
| Side effects | Unintended behavior changes, API contract breaks, data loss | Critical |
| Maintainability | Code clarity, complexity increase, naming, documentation | High |
| Regression risk | Impact on unrelated code paths, implicit dependencies | High |
| Best practices | Idiomatic patterns, framework conventions, optimization anti-patterns | Medium |
Per-dimension review process:
- Scan modified files for patterns matching each dimension
- Record findings with severity (Critical / High / Medium / Low)
- Include specific file:line references and suggested fixes
If any Critical findings detected, invoke `discuss` subagent (DISCUSS-REVIEW round) to validate the assessment before issuing verdict.
## Phase 4: Verdict & Feedback
Classify overall verdict based on findings:
| Verdict | Condition | Action |
|---------|-----------|--------|
| APPROVE | No Critical or High findings | Send review_complete |
| REVISE | Has High findings, no Critical | Send fix_required with detailed feedback |
| REJECT | Has Critical findings or fundamental approach flaw | Send fix_required + flag for strategist escalation |
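The verdict table maps directly onto finding severities; a sketch (finding shape is illustrative):

```javascript
// Sketch: classify overall review verdict from finding severities,
// mirroring the table above. Finding shape is illustrative.
function classifyVerdict(findings) {
  const count = (sev) => findings.filter((f) => f.severity === sev).length;
  if (count("Critical") > 0) return "REJECT"; // fix_required + strategist escalation
  if (count("High") > 0) return "REVISE";     // fix_required with feedback
  return "APPROVE";                           // review_complete
}
```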
1. Write review report to `<session>/artifacts/review-report.md`:
- Per-dimension findings with severity, file:line, description
- Overall verdict with rationale
- Specific fix instructions for REVISE/REJECT verdicts
2. Update `<session>/wisdom/shared-memory.json` under `reviewer` namespace:
- Read existing -> merge `{ "reviewer": { verdict, finding_count, critical_count, dimensions_reviewed } }` -> write back
3. If DISCUSS-REVIEW was triggered, record discussion summary in `<session>/discussions/DISCUSS-REVIEW.md`


@@ -0,0 +1,73 @@
---
prefix: STRATEGY
inner_loop: false
discuss_rounds: [DISCUSS-OPT]
subagents: [discuss]
message_types:
success: strategy_complete
error: error
---
# Optimization Strategist
Analyze bottleneck reports and baseline metrics to design a prioritized optimization plan with concrete strategies, expected improvements, and risk assessments.
## Phase 2: Analysis Loading
| Input | Source | Required |
|-------|--------|----------|
| Bottleneck report | <session>/artifacts/bottleneck-report.md | Yes |
| Baseline metrics | <session>/artifacts/baseline-metrics.json | Yes |
| shared-memory.json | <session>/wisdom/shared-memory.json | Yes |
| Wisdom files | <session>/wisdom/patterns.md | No |
1. Extract session path from task description
2. Read bottleneck report -- extract ranked bottleneck list with severities
3. Read baseline metrics -- extract current performance numbers
4. Load shared-memory.json for profiler findings (project_type, scope)
5. Assess overall optimization complexity:
| Bottleneck Count | Severity Mix | Complexity |
|-----------------|-------------|------------|
| 1-2 | All Medium | Low |
| 2-3 | Mix of High/Medium | Medium |
| 3+ or any Critical | Any Critical present | High |
## Phase 3: Strategy Formulation
For each bottleneck, select optimization approach by type:
| Bottleneck Type | Strategies | Risk Level |
|----------------|-----------|------------|
| CPU hotspot | Algorithm optimization, memoization, caching, worker threads | Medium |
| Memory leak/bloat | Pool reuse, lazy initialization, WeakRef, scope cleanup | High |
| I/O bound | Batching, async pipelines, streaming, connection pooling | Medium |
| Network latency | Request coalescing, compression, CDN, prefetching | Low |
| Rendering | Virtualization, memoization, CSS containment, code splitting | Medium |
| Database | Index optimization, query rewriting, caching layer, denormalization | High |
Prioritize optimizations by impact/effort ratio:
| Priority | Criteria |
|----------|----------|
| P0 (Critical) | High impact + Low effort -- quick wins |
| P1 (High) | High impact + Medium effort |
| P2 (Medium) | Medium impact + Low effort |
| P3 (Low) | Low impact or High effort -- defer |
If complexity is High, invoke `discuss` subagent (DISCUSS-OPT round) to evaluate trade-offs between competing strategies before finalizing the plan.
Define measurable success criteria per optimization (target metric value or improvement %).
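The impact/effort prioritization above can be sketched as a small lookup, using the table's High/Medium/Low labels:

```javascript
// Sketch: assign P0-P3 from impact and effort labels, mirroring
// the priority table above.
function assignPriority(impact, effort) {
  if (impact === "High" && effort === "Low") return "P0";    // quick win
  if (impact === "High" && effort === "Medium") return "P1";
  if (impact === "Medium" && effort === "Low") return "P2";
  return "P3";                                               // defer
}
```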
## Phase 4: Plan Output
1. Write optimization plan to `<session>/artifacts/optimization-plan.md`:
- Priority-ordered list of optimizations
- Per optimization: target bottleneck, strategy, expected improvement %, risk level
- Success criteria: specific metric thresholds to verify
- Implementation guidance: files to modify, patterns to apply
2. Update `<session>/wisdom/shared-memory.json` under `strategist` namespace:
- Read existing -> merge `{ "strategist": { complexity, optimization_count, priorities, discuss_used } }` -> write back
3. If DISCUSS-OPT was triggered, record discussion summary in `<session>/discussions/DISCUSS-OPT.md`


@@ -0,0 +1,175 @@
# Command: Dispatch
Create the performance optimization task chain with correct dependencies and structured task descriptions.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| User requirement | From coordinator Phase 1 | Yes |
| Session folder | From coordinator Phase 2 | Yes |
| Pipeline definition | From SKILL.md Pipeline Definitions | Yes |
1. Load user requirement and optimization scope from session.json
2. Load pipeline stage definitions from SKILL.md Task Metadata Registry
3. Determine if single-pass or multi-pass optimization is needed
## Phase 3: Task Chain Creation
### Task Description Template
Every task description uses structured format for clarity:
```
TaskCreate({
subject: "<TASK-ID>",
owner: "<role>",
description: "PURPOSE: <what this task achieves> | Success: <measurable completion criteria>
TASK:
- <step 1: specific action>
- <step 2: specific action>
- <step 3: specific action>
CONTEXT:
- Session: <session-folder>
- Scope: <optimization-scope>
- Upstream artifacts: <artifact-1>, <artifact-2>
- Shared memory: <session>/wisdom/shared-memory.json
EXPECTED: <deliverable path> + <quality criteria>
CONSTRAINTS: <scope limits, focus areas>
---
InnerLoop: <true|false>",
blockedBy: [<dependency-list>],
status: "pending"
})
```
### Task Chain
Create tasks in dependency order:
**PROFILE-001** (profiler, Stage 1):
```
TaskCreate({
subject: "PROFILE-001",
description: "PURPOSE: Profile application performance to identify bottlenecks | Success: Baseline metrics captured, top 3-5 bottlenecks ranked by severity
TASK:
- Detect project type and available profiling tools
- Execute profiling across relevant dimensions (CPU, memory, I/O, network, rendering)
- Collect baseline metrics and rank bottlenecks by severity
CONTEXT:
- Session: <session-folder>
- Scope: <optimization-scope>
- Shared memory: <session>/wisdom/shared-memory.json
EXPECTED: <session>/artifacts/baseline-metrics.json + <session>/artifacts/bottleneck-report.md | Quantified metrics with evidence
CONSTRAINTS: Focus on <optimization-scope> | Profile before any changes
---
InnerLoop: false",
status: "pending"
})
```
**STRATEGY-001** (strategist, Stage 2):
```
TaskCreate({
subject: "STRATEGY-001",
description: "PURPOSE: Design prioritized optimization plan from bottleneck analysis | Success: Actionable plan with measurable success criteria per optimization
TASK:
- Analyze bottleneck report and baseline metrics
- Select optimization strategies per bottleneck type
- Prioritize by impact/effort ratio, define success criteria
CONTEXT:
- Session: <session-folder>
- Scope: <optimization-scope>
- Upstream artifacts: baseline-metrics.json, bottleneck-report.md
- Shared memory: <session>/wisdom/shared-memory.json
EXPECTED: <session>/artifacts/optimization-plan.md | Priority-ordered with improvement targets
CONSTRAINTS: Focus on highest-impact optimizations | Risk assessment required
---
InnerLoop: false",
blockedBy: ["PROFILE-001"],
status: "pending"
})
```
**IMPL-001** (optimizer, Stage 3):
```
TaskCreate({
subject: "IMPL-001",
description: "PURPOSE: Implement optimization changes per strategy plan | Success: All planned optimizations applied, code compiles, existing tests pass
TASK:
- Load optimization plan and identify target files
- Apply optimizations in priority order (P0 first)
- Validate changes compile and pass existing tests
CONTEXT:
- Session: <session-folder>
- Scope: <optimization-scope>
- Upstream artifacts: optimization-plan.md
- Shared memory: <session>/wisdom/shared-memory.json
EXPECTED: Modified source files + validation passing | Optimizations applied without regressions
CONSTRAINTS: Preserve existing behavior | Minimal changes per optimization | Follow code conventions
---
InnerLoop: true",
blockedBy: ["STRATEGY-001"],
status: "pending"
})
```
**BENCH-001** (benchmarker, Stage 4 - parallel):
```
TaskCreate({
subject: "BENCH-001",
description: "PURPOSE: Benchmark optimization results against baseline | Success: All plan success criteria met, no regressions detected
TASK:
- Load baseline metrics and plan success criteria
- Run benchmarks matching project type
- Compare before/after metrics, calculate improvements
CONTEXT:
- Session: <session-folder>
- Scope: <optimization-scope>
- Upstream artifacts: baseline-metrics.json, optimization-plan.md
- Shared memory: <session>/wisdom/shared-memory.json
EXPECTED: <session>/artifacts/benchmark-results.json | Per-metric comparison with verdicts
CONSTRAINTS: Must compare against baseline | Flag any regressions
---
InnerLoop: false",
blockedBy: ["IMPL-001"],
status: "pending"
})
```
**REVIEW-001** (reviewer, Stage 4 - parallel):
```
TaskCreate({
subject: "REVIEW-001",
description: "PURPOSE: Review optimization code for correctness, side effects, and regression risks | Success: All dimensions reviewed, verdict issued
TASK:
- Load modified files and optimization plan
- Review across 5 dimensions: correctness, side effects, maintainability, regression risk, best practices
- Issue verdict: APPROVE, REVISE, or REJECT with actionable feedback
CONTEXT:
- Session: <session-folder>
- Scope: <optimization-scope>
- Upstream artifacts: optimization-plan.md, benchmark-results.json (if available)
- Shared memory: <session>/wisdom/shared-memory.json
EXPECTED: <session>/artifacts/review-report.md | Per-dimension findings with severity
CONSTRAINTS: Focus on optimization changes only | Provide specific file:line references
---
InnerLoop: false",
blockedBy: ["IMPL-001"],
status: "pending"
})
```
## Phase 4: Validation
Verify task chain integrity:
| Check | Method | Expected |
|-------|--------|----------|
| All 5 tasks created | TaskList count | 5 tasks |
| Dependencies correct | STRATEGY blocks on PROFILE, IMPL blocks on STRATEGY, BENCH+REVIEW block on IMPL | All valid |
| No circular dependencies | Trace dependency graph | Acyclic |
| All task IDs use correct prefixes | PROFILE-*, STRATEGY-*, IMPL-*, BENCH-*, REVIEW-* | Match role registry |
| Structured descriptions complete | Each has PURPOSE/TASK/CONTEXT/EXPECTED/CONSTRAINTS | All present |
If validation fails, fix the specific task and re-validate.
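The checks in the table above can be sketched as a small validation pass. A minimal sketch, assuming tasks are plain dicts with `subject` and `blockedBy` fields (field names mirror the TaskCreate templates above but are otherwise illustrative):

```python
def validate_chain(tasks):
    """Validate task count, ID prefixes, known dependencies, and acyclicity."""
    ids = {t["subject"] for t in tasks}
    prefixes = ("PROFILE", "STRATEGY", "IMPL", "BENCH", "REVIEW")
    assert len(tasks) == 5, f"expected 5 tasks, got {len(tasks)}"
    for t in tasks:
        assert t["subject"].startswith(prefixes), f"bad prefix: {t['subject']}"
        for dep in t.get("blockedBy", []):
            assert dep in ids, f"unknown dependency: {dep}"
    # Detect cycles with a depth-first walk over blockedBy edges.
    edges = {t["subject"]: t.get("blockedBy", []) for t in tasks}
    state = {}  # node -> "visiting" | "done"

    def visit(node):
        if state.get(node) == "done":
            return
        assert state.get(node) != "visiting", f"cycle through {node}"
        state[node] = "visiting"
        for dep in edges[node]:
            visit(dep)
        state[node] = "done"

    for node in edges:
        visit(node)
    return True
```

On failure the assertion message names the specific task, matching the "fix the specific task and re-validate" rule.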


@@ -0,0 +1,201 @@
# Command: Monitor
Handle all coordinator monitoring events: worker callbacks, status checks, pipeline advancement, and completion.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Session state | <session>/session.json | Yes |
| Task list | TaskList() | Yes |
| Trigger event | From Entry Router detection | Yes |
| Pipeline definition | From SKILL.md | Yes |
1. Load session.json for current state and fix cycle count
2. Run TaskList() to get current task statuses
3. Identify trigger event type from Entry Router
## Phase 3: Event Handlers
### handleCallback
Triggered when a worker sends completion message.
1. Parse message to identify role and task ID
2. Mark task as completed:
```
TaskUpdate({ taskId: "<task-id>", status: "completed" })
```
3. Record completion in session state
4. Check if checkpoint feedback is configured for this stage:
| Completed Task | Checkpoint | Action |
|---------------|------------|--------|
| PROFILE-001 | CP-1 | Notify user: bottleneck report ready for review |
| STRATEGY-001 | CP-2 | Notify user: optimization plan ready for review |
| BENCH-001 or REVIEW-001 | CP-3 | Check verdicts (see Review-Fix Cycle below) |
5. Proceed to handleSpawnNext
### handleSpawnNext
Find and spawn the next ready tasks.
1. Scan task list for tasks where:
- Status is "pending"
- All blockedBy tasks have status "completed"
2. For each ready task, spawn team-worker:
```
Task({
subagent_type: "team-worker",
description: "Spawn <role> worker for <task-id>",
team_name: "perf-opt",
name: "<role>",
run_in_background: true,
prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-perf-opt/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: perf-opt
requirement: <task-description>
inner_loop: <true|false>
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`
})
```
3. For Stage 4 (BENCH-001 + REVIEW-001): spawn both in parallel since both block on IMPL-001
4. STOP after spawning -- wait for next callback
### Review-Fix Cycle (CP-3)
When both BENCH-001 and REVIEW-001 are completed:
1. Read benchmark verdict from shared-memory (benchmarker namespace)
2. Read review verdict from shared-memory (reviewer namespace)
| Bench Verdict | Review Verdict | Action |
|--------------|----------------|--------|
| PASS | APPROVE | -> handleComplete |
| PASS | REVISE | Create FIX task with review feedback |
| FAIL | APPROVE | Create FIX task with benchmark feedback |
| FAIL | REVISE/REJECT | Create FIX task with combined feedback |
| Any | REJECT | Create FIX task + flag for strategist re-evaluation |
3. Check fix cycle count:
| Cycle Count | Action |
|-------------|--------|
| < 3 | Create FIX task, increment cycle count |
| >= 3 | Escalate to user with summary of remaining issues |
4. Create FIX task if needed:
```
TaskCreate({
subject: "FIX-<N>",
description: "PURPOSE: Fix issues identified by review/benchmark | Success: All flagged issues resolved
TASK:
- Address review findings: <specific-findings>
- Fix benchmark regressions: <specific-regressions>
- Re-validate after fixes
CONTEXT:
- Session: <session-folder>
- Upstream artifacts: review-report.md, benchmark-results.json
- Shared memory: <session>/wisdom/shared-memory.json
EXPECTED: Fixed source files | All flagged issues addressed
CONSTRAINTS: Targeted fixes only | Do not introduce new changes
---
InnerLoop: true",
blockedBy: [],
status: "pending"
})
```
5. Create new BENCH and REVIEW tasks blocked on FIX task
6. Proceed to handleSpawnNext (spawns optimizer for FIX task)
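The verdict matrix and the fix-cycle cap above reduce to a small decision function. A minimal sketch (the action names are illustrative labels, not tool calls):

```python
def next_action(bench, review, fix_cycles, max_cycles=3):
    """Map benchmark/review verdicts and the cycle count to the next step."""
    if bench == "PASS" and review == "APPROVE":
        return "complete"                # -> handleComplete
    if fix_cycles >= max_cycles:
        return "escalate"                # summarize remaining issues to the user
    if review == "REJECT":
        return "fix+reevaluate"          # FIX task plus strategist re-evaluation flag
    return "fix"                         # FIX task with review and/or benchmark feedback
```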
### handleCheck
Output current pipeline status without advancing.
1. Build status graph from task list:
```
Pipeline Status:
[DONE] PROFILE-001 (profiler) -> bottleneck-report.md
[DONE] STRATEGY-001 (strategist) -> optimization-plan.md
[RUN] IMPL-001 (optimizer) -> implementing...
[WAIT] BENCH-001 (benchmarker) -> blocked by IMPL-001
[WAIT] REVIEW-001 (reviewer) -> blocked by IMPL-001
Fix Cycles: 0/3
Session: <session-id>
```
2. Output status -- do NOT advance pipeline
### handleResume
Resume pipeline after user pause or interruption.
1. Audit task list for inconsistencies:
- Tasks stuck in "in_progress" -> reset to "pending"
- Tasks with completed blockers but still "pending" -> include in spawn list
2. Proceed to handleSpawnNext
### handleConsensus
Handle consensus_blocked signals from discuss rounds.
| Severity | Action |
|----------|--------|
| HIGH | Pause pipeline, notify user with findings summary |
| MEDIUM | Create revision task for the blocked role |
| LOW | Log finding, continue pipeline |
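The severity routing above is a straight lookup. A minimal sketch with illustrative action labels:

```python
def route_consensus(severity):
    """Map a consensus_blocked severity to the coordinator action in the table."""
    return {
        "HIGH": "pause_and_notify_user",    # pause pipeline, summarize findings
        "MEDIUM": "create_revision_task",   # revision task for the blocked role
        "LOW": "log_and_continue",          # record finding, keep pipeline moving
    }[severity]
```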
### handleComplete
Triggered when all pipeline tasks are completed and no fix cycles remain.
1. Verify all tasks have status "completed":
```
TaskList()
```
2. If any task is not completed, return to handleSpawnNext
3. If all completed, transition to coordinator Phase 5 (Report + Completion Action)
### handleRevise
Triggered by user "revise <TASK-ID> [feedback]" command.
1. Parse target task ID and optional feedback
2. Create revision task with same role but updated requirements
3. Set blockedBy to empty (immediate execution)
4. Cascade: create new downstream tasks that depend on the revised task
5. Proceed to handleSpawnNext
### handleFeedback
Triggered by user "feedback <text>" command.
1. Analyze feedback text to determine impact scope
2. Identify which pipeline stage and role should handle the feedback
3. Create targeted revision task
4. Proceed to handleSpawnNext
## Phase 4: State Persistence
After every handler execution:
1. Update session.json with current state (active tasks, fix cycle count, last event)
2. Verify task list consistency
3. STOP and wait for next event
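The persistence step can be sketched as a merge-and-write over session.json. A minimal sketch, assuming the state keys shown (`active_tasks`, `fix_cycle_count`, `last_event` are illustrative names, not a fixed schema):

```python
import json
import pathlib

def persist_state(session_dir, active_tasks, fix_cycles, last_event):
    """Merge the latest coordinator state into session.json after a handler runs."""
    path = pathlib.Path(session_dir) / "session.json"
    state = json.loads(path.read_text()) if path.exists() else {}
    state.update({
        "active_tasks": active_tasks,
        "fix_cycle_count": fix_cycles,
        "last_event": last_event,
    })
    path.write_text(json.dumps(state, indent=2))
    return state
```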


@@ -0,0 +1,252 @@
# Coordinator - Performance Optimization Team
**Role**: coordinator
**Type**: Orchestrator
**Team**: perf-opt
Orchestrates the performance optimization pipeline: manages task chains, spawns team-worker agents, handles review-fix cycles, and drives the pipeline to completion.
## Boundaries
### MUST
- Use `team-worker` agent type for all worker spawns (NOT `general-purpose`)
- Follow Command Execution Protocol for dispatch and monitor commands
- Respect pipeline stage dependencies (blockedBy)
- Stop after spawning workers -- wait for callbacks
- Handle review-fix cycles with max 3 iterations
- Execute completion action in Phase 5
### MUST NOT
- Implement domain logic (profiling, optimizing, reviewing) -- workers handle this
- Spawn workers without creating tasks first
- Skip checkpoints when configured
- Force-advance pipeline past failed review/benchmark
- Modify source code directly -- delegate to optimizer worker
---
## Command Execution Protocol
When coordinator needs to execute a command (dispatch, monitor):
1. **Read the command file**: `roles/coordinator/commands/<command-name>.md`
2. **Follow the workflow** defined in the command file (Phase 2-4 structure)
3. **Commands are inline execution guides** -- NOT separate agents or subprocesses
4. **Execute synchronously** -- complete the command workflow before proceeding
Example:
```
Phase 3 needs task dispatch
-> Read roles/coordinator/commands/dispatch.md
-> Execute Phase 2 (Context Loading)
-> Execute Phase 3 (Task Chain Creation)
-> Execute Phase 4 (Validation)
-> Continue to coordinator Phase 4
```
---
## Entry Router
When coordinator is invoked, detect invocation type:
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains role tag [profiler], [strategist], [optimizer], [benchmarker], [reviewer] | -> handleCallback |
| Consensus blocked | Message contains "consensus_blocked" | -> handleConsensus |
| Status check | Arguments contain "check" or "status" | -> handleCheck |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume |
| Pipeline complete | All tasks have status "completed" | -> handleComplete |
| Interrupted session | Active/paused session exists | -> Phase 0 (Resume Check) |
| New session | None of above | -> Phase 1 (Requirement Clarification) |
For callback/check/resume/complete: load `commands/monitor.md` and execute matched handler, then STOP.
### Router Implementation
1. **Load session context** (if exists):
- Scan `.workflow/.team/PERF-OPT-*/team-session.json` for active/paused sessions
- If found, extract session folder path and status
2. **Parse $ARGUMENTS** for detection keywords:
- Check for role name tags in message content
- Check for "check", "status", "resume", "continue" keywords
- Check for "consensus_blocked" signal
3. **Route to handler**:
- For monitor handlers: Read `commands/monitor.md`, execute matched handler, STOP
- For Phase 0: Execute Session Resume Check below
- For Phase 1: Execute Requirement Clarification below
---
## Phase 0: Session Resume Check
Triggered when an active/paused session is detected on coordinator entry.
1. Load session.json from detected session folder
2. Audit task list:
```
TaskList()
```
3. Reconcile session state vs task status:
| Task Status | Session Expects | Action |
|-------------|----------------|--------|
| in_progress | Should be running | Reset to pending (worker was interrupted) |
| completed | Already tracked | Skip |
| pending + unblocked | Ready to run | Include in spawn list |
4. Rebuild team if not active:
```
TeamCreate({ team_name: "perf-opt" })
```
5. Spawn workers for ready tasks -> Phase 4 coordination loop
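The reconciliation table above can be sketched as one pass over the task list. A minimal sketch, assuming tasks are dicts with `subject`, `status`, and `blockedBy` (reset tasks are returned separately; after being reset to pending they are re-evaluated on the next spawn pass):

```python
def reconcile(tasks):
    """Return (tasks to reset to pending, tasks ready to spawn) per the table."""
    done = {t["subject"] for t in tasks if t["status"] == "completed"}
    reset = [t["subject"] for t in tasks if t["status"] == "in_progress"]
    ready = [
        t["subject"] for t in tasks
        if t["status"] == "pending"
        and all(dep in done for dep in t.get("blockedBy", []))
    ]
    return reset, ready
```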
---
## Phase 1: Requirement Clarification
1. Parse user task description from $ARGUMENTS
2. Identify optimization target:
| Signal | Target |
|--------|--------|
| Specific file/module mentioned | Scoped optimization |
| "slow", "performance", generic | Full application profiling |
| Specific metric mentioned (FCP, memory, startup) | Targeted metric optimization |
3. If target is unclear, ask for clarification:
```
AskUserQuestion({
questions: [{
question: "What should I optimize? Provide a target scope or describe the performance issue.",
header: "Scope"
}]
})
```
4. Record optimization requirement with scope and target metrics
---
## Phase 2: Session & Team Setup
1. Create session directory:
```
Bash("mkdir -p .workflow/<session-id>/artifacts .workflow/<session-id>/explorations .workflow/<session-id>/wisdom .workflow/<session-id>/discussions")
```
2. Write session.json with status="active", team_name, requirement, timestamp
3. Initialize shared-memory.json:
```
Write("<session>/wisdom/shared-memory.json", { "session_id": "<session-id>", "requirement": "<requirement>" })
```
4. Create team:
```
TeamCreate({ team_name: "perf-opt" })
```
---
## Phase 3: Task Chain Creation
Execute `commands/dispatch.md` inline (Command Execution Protocol):
1. Read `roles/coordinator/commands/dispatch.md`
2. Follow dispatch Phase 2 (context loading) -> Phase 3 (task chain creation) -> Phase 4 (validation)
3. Result: all pipeline tasks created with correct blockedBy dependencies
---
## Phase 4: Spawn & Coordination Loop
### Initial Spawn
Find first unblocked task and spawn its worker:
```
Task({
subagent_type: "team-worker",
description: "Spawn profiler worker",
team_name: "perf-opt",
name: "profiler",
run_in_background: true,
prompt: `## Role Assignment
role: profiler
role_spec: .claude/skills/team-perf-opt/role-specs/profiler.md
session: <session-folder>
session_id: <session-id>
team_name: perf-opt
requirement: <requirement>
inner_loop: false
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`
})
```
**STOP** after spawning. Wait for worker callback.
### Coordination (via monitor.md handlers)
All subsequent coordination is handled by `commands/monitor.md` handlers triggered by worker callbacks:
- handleCallback -> mark task done -> check pipeline -> handleSpawnNext
- handleSpawnNext -> find ready tasks -> spawn team-worker agents -> STOP
- handleComplete -> all done -> Phase 5
---
## Phase 5: Report + Completion Action
1. Load session state -> count completed tasks, calculate duration
2. List deliverables:
| Deliverable | Path |
|-------------|------|
| Baseline Metrics | <session>/artifacts/baseline-metrics.json |
| Bottleneck Report | <session>/artifacts/bottleneck-report.md |
| Optimization Plan | <session>/artifacts/optimization-plan.md |
| Benchmark Results | <session>/artifacts/benchmark-results.json |
| Review Report | <session>/artifacts/review-report.md |
3. Include discussion summaries if discuss rounds were used
4. Output pipeline summary: task count, duration, improvement metrics from benchmark results
5. **Completion Action** (interactive):
```
AskUserQuestion({
questions: [{
question: "Team pipeline complete. What would you like to do?",
header: "Completion",
multiSelect: false,
options: [
{ label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and team resources" },
{ label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
{ label: "Export Results", description: "Export deliverables to a specified location, then clean" }
]
}]
})
```
6. Handle user choice:
| Choice | Steps |
|--------|-------|
| Archive & Clean | TaskList -> verify all completed -> update session status="completed" -> TeamDelete("perf-opt") -> output final summary with artifact paths |
| Keep Active | Update session status="paused" -> output: "Session paused. Resume with: Skill(skill='team-perf-opt', args='resume')" |
| Export Results | AskUserQuestion for target directory -> copy all artifacts -> Archive & Clean flow |


@@ -0,0 +1,208 @@
{
"version": "5.0.0",
"team_name": "perf-opt",
"team_display_name": "Performance Optimization",
"skill_name": "team-perf-opt",
"skill_path": ".claude/skills/team-perf-opt/",
"worker_agent": "team-worker",
"pipeline_type": "Linear with Review-Fix Cycle",
"completion_action": "interactive",
"has_inline_discuss": true,
"has_shared_explore": true,
"has_checkpoint_feedback": true,
"has_session_resume": true,
"roles": [
{
"name": "coordinator",
"type": "orchestrator",
"description": "Orchestrates performance optimization pipeline, manages task chains, handles review-fix cycles",
"spec_path": "roles/coordinator/role.md",
"tools": ["Task", "TaskCreate", "TaskList", "TaskGet", "TaskUpdate", "TeamCreate", "TeamDelete", "SendMessage", "AskUserQuestion", "Read", "Write", "Bash", "Glob", "Grep"]
},
{
"name": "profiler",
"type": "orchestration",
"description": "Profiles application performance, identifies CPU/memory/IO/network/rendering bottlenecks",
"role_spec": "role-specs/profiler.md",
"inner_loop": false,
"frontmatter": {
"prefix": "PROFILE",
"inner_loop": false,
"additional_prefixes": [],
"discuss_rounds": [],
"subagents": ["explore"],
"message_types": {
"success": "profile_complete",
"error": "error"
}
},
"weight": 1,
"tools": ["Read", "Bash", "Glob", "Grep", "Task", "mcp__ace-tool__search_context"]
},
{
"name": "strategist",
"type": "orchestration",
"description": "Analyzes bottleneck reports, designs prioritized optimization plans with concrete strategies",
"role_spec": "role-specs/strategist.md",
"inner_loop": false,
"frontmatter": {
"prefix": "STRATEGY",
"inner_loop": false,
"additional_prefixes": [],
"discuss_rounds": ["DISCUSS-OPT"],
"subagents": ["discuss"],
"message_types": {
"success": "strategy_complete",
"error": "error"
}
},
"weight": 2,
"tools": ["Read", "Bash", "Glob", "Grep", "Task", "mcp__ace-tool__search_context"]
},
{
"name": "optimizer",
"type": "code_generation",
"description": "Implements optimization changes following the strategy plan",
"role_spec": "role-specs/optimizer.md",
"inner_loop": true,
"frontmatter": {
"prefix": "IMPL",
"inner_loop": true,
"additional_prefixes": ["FIX"],
"discuss_rounds": [],
"subagents": ["explore"],
"message_types": {
"success": "impl_complete",
"error": "error",
"fix": "fix_required"
}
},
"weight": 3,
"tools": ["Read", "Write", "Edit", "Bash", "Glob", "Grep", "Task", "mcp__ace-tool__search_context"]
},
{
"name": "benchmarker",
"type": "validation",
"description": "Runs benchmarks, compares before/after metrics, validates performance improvements",
"role_spec": "role-specs/benchmarker.md",
"inner_loop": false,
"frontmatter": {
"prefix": "BENCH",
"inner_loop": false,
"additional_prefixes": [],
"discuss_rounds": [],
"subagents": [],
"message_types": {
"success": "bench_complete",
"error": "error",
"fix": "fix_required"
}
},
"weight": 4,
"tools": ["Read", "Bash", "Glob", "Grep", "Task"]
},
{
"name": "reviewer",
"type": "read_only_analysis",
"description": "Reviews optimization code for correctness, side effects, and regression risks",
"role_spec": "role-specs/reviewer.md",
"inner_loop": false,
"frontmatter": {
"prefix": "REVIEW",
"inner_loop": false,
"additional_prefixes": ["QUALITY"],
"discuss_rounds": ["DISCUSS-REVIEW"],
"subagents": ["discuss"],
"message_types": {
"success": "review_complete",
"error": "error",
"fix": "fix_required"
}
},
"weight": 4,
"tools": ["Read", "Bash", "Glob", "Grep", "Task", "mcp__ace-tool__search_context"]
}
],
"pipeline": {
"stages": [
{
"stage": 1,
"name": "Performance Profiling",
"roles": ["profiler"],
"blockedBy": [],
"fast_advance": true
},
{
"stage": 2,
"name": "Optimization Strategy",
"roles": ["strategist"],
"blockedBy": ["PROFILE"],
"fast_advance": true
},
{
"stage": 3,
"name": "Code Optimization",
"roles": ["optimizer"],
"blockedBy": ["STRATEGY"],
"fast_advance": false
},
{
"stage": 4,
"name": "Benchmark & Review",
"roles": ["benchmarker", "reviewer"],
"blockedBy": ["IMPL"],
"fast_advance": false,
"parallel": true,
"review_fix_cycle": {
"trigger": "REVIEW or BENCH finds issues",
"target_stage": 3,
"max_iterations": 3
}
}
],
"diagram": "See pipeline-diagram section"
},
"subagents": [
{
"name": "explore",
"agent_type": "cli-explore-agent",
"callable_by": ["profiler", "optimizer"],
"purpose": "Shared codebase exploration for performance-critical code paths",
"has_cache": true,
"cache_domain": "explorations"
},
{
"name": "discuss",
"agent_type": "cli-discuss-agent",
"callable_by": ["strategist", "reviewer"],
"purpose": "Multi-perspective discussion for optimization approaches and review findings",
"has_cache": false
}
],
"shared_resources": [
{
"name": "Performance Baseline",
"path": "<session>/artifacts/baseline-metrics.json",
"usage": "Before-optimization metrics for comparison"
},
{
"name": "Bottleneck Report",
"path": "<session>/artifacts/bottleneck-report.md",
"usage": "Profiler output consumed by strategist"
},
{
"name": "Optimization Plan",
"path": "<session>/artifacts/optimization-plan.md",
"usage": "Strategist output consumed by optimizer"
},
{
"name": "Benchmark Results",
"path": "<session>/artifacts/benchmark-results.json",
"usage": "Benchmarker output consumed by reviewer"
}
]
}


@@ -0,0 +1,89 @@
# Discuss Subagent
Multi-perspective discussion for evaluating optimization strategies and reviewing code change quality. Used by strategist (DISCUSS-OPT) and reviewer (DISCUSS-REVIEW) when complex trade-offs require multi-angle analysis.
## Design Rationale
Complex optimization decisions (e.g., choosing between algorithmic change vs caching layer) and nuanced code review findings (e.g., evaluating whether a side effect is acceptable) benefit from structured multi-perspective analysis. This subagent provides that analysis inline without spawning additional team members.
## Invocation
Called by the strategist or reviewer after their primary analysis, when complexity warrants multi-perspective evaluation:
```
Task({
subagent_type: "cli-discuss-agent",
run_in_background: false,
description: "Discuss <round-id>: <topic> for performance optimization",
prompt: `Conduct a multi-perspective discussion on the following topic.
Round: <round-id>
Topic: <discussion-topic>
Session: <session-folder>
Context:
<relevant-context-from-calling-role>
Perspectives to consider:
- Performance impact: Will this actually improve the target metric?
- Risk assessment: What could go wrong? Side effects? Regressions?
- Maintainability: Is the optimized code still understandable and maintainable?
- Alternative approaches: Are there simpler or safer ways to achieve the same goal?
Evaluate trade-offs and provide a structured recommendation with:
- Consensus verdict: proceed / revise / escalate
- Confidence level: high / medium / low
- Key trade-offs identified
- Recommended approach with rationale
- Dissenting perspectives (if any)`
})
```
## Round Configuration
| Round | Artifact | Topic | Calling Role |
|-------|----------|------------|-------------|
| DISCUSS-OPT | <session>/discussions/DISCUSS-OPT.md | Optimization strategy trade-offs | strategist |
| DISCUSS-REVIEW | <session>/discussions/DISCUSS-REVIEW.md | Code review finding validation | reviewer |
## Integration with Calling Role
The calling role is responsible for:
1. **Before calling**: Complete primary analysis, identify the specific trade-off or finding needing discussion
2. **Calling**: Invoke subagent with round ID, topic, and relevant context
3. **After calling**:
| Result | Action |
|--------|--------|
| consensus_reached (proceed) | Incorporate recommendation into output, continue |
| consensus_reached (revise) | Adjust findings/strategy based on discussion insights |
| consensus_blocked (HIGH) | Report to coordinator via message with severity |
| consensus_blocked (MEDIUM) | Include in output with recommendation for revision |
| consensus_blocked (LOW) | Note in output, proceed with original assessment |
## Output Schema
```json
{
"round_id": "<DISCUSS-OPT|DISCUSS-REVIEW>",
"topic": "<discussion-topic>",
"verdict": "<proceed|revise|escalate>",
"confidence": "<high|medium|low>",
"trade_offs": [
{ "dimension": "<performance|risk|maintainability>", "pro": "<benefit>", "con": "<cost>" }
],
"recommendation": "<recommended-approach>",
"rationale": "<reasoning>",
"dissenting_views": ["<alternative-perspective>"]
}
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Single perspective analysis fails | Continue with partial perspectives |
| All analyses fail | Return basic recommendation from calling role's primary analysis |
| Artifact not found | Return error immediately |
| Discussion inconclusive | Return "revise" verdict with low confidence |


@@ -0,0 +1,108 @@
# Explore Subagent
Shared codebase exploration for discovering performance-critical code paths, module structures, and optimization opportunities. Results are cached to avoid redundant exploration across profiler and optimizer roles.
## Design Rationale
Codebase exploration is a read-only operation shared between profiler (mapping bottlenecks) and optimizer (understanding implementation context). Caching explorations avoids redundant work when optimizer re-explores paths the profiler already mapped.
## Invocation
Called by the profiler or optimizer when codebase context is needed for performance analysis or implementation:
```
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: "Explore codebase for performance-critical paths in <target-scope>",
prompt: `Explore the codebase to identify performance-critical code paths.
Target scope: <target-scope>
Session: <session-folder>
Focus: <exploration-focus>
Tasks:
1. Map the module structure and entry points within scope
2. Identify hot code paths (frequently called functions, critical loops)
3. Find performance-relevant patterns (caching, lazy loading, async, pooling)
4. Note any existing performance optimizations or benchmark harnesses
5. List key files with their roles in the performance-critical path
Output a structured exploration report with:
- Module map (key files and their relationships)
- Hot path analysis (call chains, loop nests, recursive patterns)
- Existing optimization patterns found
- Performance-relevant configuration (caching, pooling, batching settings)
- Recommended investigation targets for profiling`
})
```
## Cache Mechanism
### Cache Index Schema
`<session-folder>/explorations/cache-index.json`:
```json
{
"entries": [
{
"key": "<scope-hash>",
"scope": "<target-scope>",
"focus": "<exploration-focus>",
"timestamp": "<ISO-8601>",
"result_file": "<hash>.md"
}
]
}
```
### Cache Lookup Rules
| Condition | Action |
|-----------|--------|
| Exact scope+focus match exists | Return cached result from <hash>.md |
| No match | Execute subagent, cache result to <hash>.md, update index |
| Cache file missing but index has entry | Remove stale entry, re-execute |
| Cache entry created earlier in the current session | Use cached result (explorations are stable within a session) |
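The lookup rules above can be sketched as a single function. A minimal sketch: the key derivation (SHA-256 of `scope|focus`, truncated to 16 hex chars) is an assumption, since the schema only specifies an opaque `<scope-hash>`:

```python
import hashlib
import json
import pathlib

def cache_lookup(session_dir, scope, focus):
    """Return cached exploration text on a scope+focus hit, else None."""
    base = pathlib.Path(session_dir) / "explorations"
    index = base / "cache-index.json"
    if not index.exists():
        return None
    entries = json.loads(index.read_text()).get("entries", [])
    # Assumed hash scheme: sha256 over "scope|focus", first 16 hex chars.
    key = hashlib.sha256(f"{scope}|{focus}".encode()).hexdigest()[:16]
    for entry in entries:
        if entry["key"] == key:
            result = base / entry["result_file"]
            if result.exists():
                return result.read_text()
            # Stale entry: cache file missing -> drop it so the caller re-executes.
            entries.remove(entry)
            index.write_text(json.dumps({"entries": entries}, indent=2))
            return None
    return None
```

On a miss (`None`), the calling role invokes the subagent and appends a fresh entry plus `<hash>.md` result file.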
## Integration with Calling Role
The calling role is responsible for:
1. **Before calling**: Determine target scope and exploration focus
2. **Calling**: Check cache first, invoke subagent only on cache miss
3. **After calling**:
| Result | Action |
|--------|--------|
| Exploration successful | Use findings to inform profiling/implementation |
| Exploration partial | Use available findings, note gaps |
| Exploration failed | Proceed without exploration context, use direct file reading |
## Output Schema
```json
{
"scope": "<target-scope>",
"module_map": [
{ "file": "<path>", "role": "<description>", "hot_path": true }
],
"hot_paths": [
{ "chain": "<call-chain>", "frequency": "<high|medium|low>", "files": ["<path>"] }
],
"existing_optimizations": [
{ "type": "<pattern>", "location": "<file:line>", "description": "<what>" }
],
"investigation_targets": ["<file-or-pattern>"]
}
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Single exploration angle fails | Continue with partial results |
| All exploration fails | Return basic result from direct file listing |
| Target scope not found | Return error immediately |
| Cache corrupt | Clear cache-index.json, re-execute |