feat: add multi-mode workflow planning skill with session management and task generation

catlog22
2026-03-02 15:25:56 +08:00
parent 2c2b9d6e29
commit 121e834459
28 changed files with 6478 additions and 533 deletions


@@ -0,0 +1,175 @@
# Command: Dispatch
Create the performance optimization task chain with correct dependencies and structured task descriptions.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| User requirement | From coordinator Phase 1 | Yes |
| Session folder | From coordinator Phase 2 | Yes |
| Pipeline definition | From SKILL.md Pipeline Definitions | Yes |
1. Load user requirement and optimization scope from session.json
2. Load pipeline stage definitions from SKILL.md Task Metadata Registry
3. Determine if single-pass or multi-pass optimization is needed
## Phase 3: Task Chain Creation
### Task Description Template
Every task description uses a structured format for clarity:
```
TaskCreate({
subject: "<TASK-ID>",
owner: "<role>",
description: "PURPOSE: <what this task achieves> | Success: <measurable completion criteria>
TASK:
- <step 1: specific action>
- <step 2: specific action>
- <step 3: specific action>
CONTEXT:
- Session: <session-folder>
- Scope: <optimization-scope>
- Upstream artifacts: <artifact-1>, <artifact-2>
- Shared memory: <session>/wisdom/shared-memory.json
EXPECTED: <deliverable path> + <quality criteria>
CONSTRAINTS: <scope limits, focus areas>
---
InnerLoop: <true|false>",
blockedBy: [<dependency-list>],
status: "pending"
})
```
### Task Chain
Create tasks in dependency order:
**PROFILE-001** (profiler, Stage 1):
```
TaskCreate({
subject: "PROFILE-001",
description: "PURPOSE: Profile application performance to identify bottlenecks | Success: Baseline metrics captured, top 3-5 bottlenecks ranked by severity
TASK:
- Detect project type and available profiling tools
- Execute profiling across relevant dimensions (CPU, memory, I/O, network, rendering)
- Collect baseline metrics and rank bottlenecks by severity
CONTEXT:
- Session: <session-folder>
- Scope: <optimization-scope>
- Shared memory: <session>/wisdom/shared-memory.json
EXPECTED: <session>/artifacts/baseline-metrics.json + <session>/artifacts/bottleneck-report.md | Quantified metrics with evidence
CONSTRAINTS: Focus on <optimization-scope> | Profile before any changes
---
InnerLoop: false",
status: "pending"
})
```
**STRATEGY-001** (strategist, Stage 2):
```
TaskCreate({
subject: "STRATEGY-001",
description: "PURPOSE: Design prioritized optimization plan from bottleneck analysis | Success: Actionable plan with measurable success criteria per optimization
TASK:
- Analyze bottleneck report and baseline metrics
- Select optimization strategies per bottleneck type
- Prioritize by impact/effort ratio, define success criteria
CONTEXT:
- Session: <session-folder>
- Scope: <optimization-scope>
- Upstream artifacts: baseline-metrics.json, bottleneck-report.md
- Shared memory: <session>/wisdom/shared-memory.json
EXPECTED: <session>/artifacts/optimization-plan.md | Priority-ordered with improvement targets
CONSTRAINTS: Focus on highest-impact optimizations | Risk assessment required
---
InnerLoop: false",
blockedBy: ["PROFILE-001"],
status: "pending"
})
```
**IMPL-001** (optimizer, Stage 3):
```
TaskCreate({
subject: "IMPL-001",
description: "PURPOSE: Implement optimization changes per strategy plan | Success: All planned optimizations applied, code compiles, existing tests pass
TASK:
- Load optimization plan and identify target files
- Apply optimizations in priority order (P0 first)
- Validate changes compile and pass existing tests
CONTEXT:
- Session: <session-folder>
- Scope: <optimization-scope>
- Upstream artifacts: optimization-plan.md
- Shared memory: <session>/wisdom/shared-memory.json
EXPECTED: Modified source files + validation passing | Optimizations applied without regressions
CONSTRAINTS: Preserve existing behavior | Minimal changes per optimization | Follow code conventions
---
InnerLoop: true",
blockedBy: ["STRATEGY-001"],
status: "pending"
})
```
**BENCH-001** (benchmarker, Stage 4 - parallel):
```
TaskCreate({
subject: "BENCH-001",
description: "PURPOSE: Benchmark optimization results against baseline | Success: All plan success criteria met, no regressions detected
TASK:
- Load baseline metrics and plan success criteria
- Run benchmarks matching project type
- Compare before/after metrics, calculate improvements
CONTEXT:
- Session: <session-folder>
- Scope: <optimization-scope>
- Upstream artifacts: baseline-metrics.json, optimization-plan.md
- Shared memory: <session>/wisdom/shared-memory.json
EXPECTED: <session>/artifacts/benchmark-results.json | Per-metric comparison with verdicts
CONSTRAINTS: Must compare against baseline | Flag any regressions
---
InnerLoop: false",
blockedBy: ["IMPL-001"],
status: "pending"
})
```
**REVIEW-001** (reviewer, Stage 4 - parallel):
```
TaskCreate({
subject: "REVIEW-001",
description: "PURPOSE: Review optimization code for correctness, side effects, and regression risks | Success: All dimensions reviewed, verdict issued
TASK:
- Load modified files and optimization plan
- Review across 5 dimensions: correctness, side effects, maintainability, regression risk, best practices
- Issue verdict: APPROVE, REVISE, or REJECT with actionable feedback
CONTEXT:
- Session: <session-folder>
- Scope: <optimization-scope>
- Upstream artifacts: optimization-plan.md, benchmark-results.json (if available)
- Shared memory: <session>/wisdom/shared-memory.json
EXPECTED: <session>/artifacts/review-report.md | Per-dimension findings with severity
CONSTRAINTS: Focus on optimization changes only | Provide specific file:line references
---
InnerLoop: false",
blockedBy: ["IMPL-001"],
status: "pending"
})
```
## Phase 4: Validation
Verify task chain integrity:
| Check | Method | Expected |
|-------|--------|----------|
| All 5 tasks created | TaskList count | 5 tasks |
| Dependencies correct | STRATEGY blocks on PROFILE, IMPL blocks on STRATEGY, BENCH+REVIEW block on IMPL | All valid |
| No circular dependencies | Trace dependency graph | Acyclic |
| All task IDs use correct prefixes | PROFILE-*, STRATEGY-*, IMPL-*, BENCH-*, REVIEW-* | Match role registry |
| Structured descriptions complete | Each has PURPOSE/TASK/CONTEXT/EXPECTED/CONSTRAINTS | All present |
If validation fails, fix the specific task and re-validate.
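The dependency and prefix checks above can be sketched as a small validation routine. This is a sketch under assumptions: the `TaskSpec` shape is hypothetical, with field names mirroring the TaskCreate calls; the real TaskList output may differ.

```typescript
// Hypothetical task shape; fields mirror the TaskCreate calls above.
interface TaskSpec {
  subject: string;
  blockedBy: string[];
}

// Prefixes from the role registry table in Phase 4.
const VALID_PREFIXES = ["PROFILE-", "STRATEGY-", "IMPL-", "BENCH-", "REVIEW-"];

// Detect cycles in the blockedBy graph via depth-first search.
function isAcyclic(tasks: TaskSpec[]): boolean {
  const deps = new Map(tasks.map(t => [t.subject, t.blockedBy]));
  const state = new Map<string, "visiting" | "done">();
  const visit = (id: string): boolean => {
    if (state.get(id) === "done") return true;
    if (state.get(id) === "visiting") return false; // back edge = cycle
    state.set(id, "visiting");
    for (const dep of deps.get(id) ?? []) {
      if (!visit(dep)) return false;
    }
    state.set(id, "done");
    return true;
  };
  return tasks.every(t => visit(t.subject));
}

// Run the Phase 4 checks; an empty result means the chain is valid.
function validateChain(tasks: TaskSpec[]): string[] {
  const errors: string[] = [];
  if (tasks.length !== 5) errors.push(`expected 5 tasks, got ${tasks.length}`);
  for (const t of tasks) {
    if (!VALID_PREFIXES.some(p => t.subject.startsWith(p))) {
      errors.push(`unknown prefix: ${t.subject}`);
    }
  }
  if (!isAcyclic(tasks)) errors.push("circular dependency detected");
  return errors;
}
```

Against the five-task chain defined above, `validateChain` returns no errors; a chain with a back edge (e.g. PROFILE blocked on REVIEW) would be flagged as circular.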


@@ -0,0 +1,201 @@
# Command: Monitor
Handle all coordinator monitoring events: worker callbacks, status checks, pipeline advancement, and completion.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Session state | <session>/session.json | Yes |
| Task list | TaskList() | Yes |
| Trigger event | From Entry Router detection | Yes |
| Pipeline definition | From SKILL.md | Yes |
1. Load session.json for current state and fix cycle count
2. Run TaskList() to get current task statuses
3. Identify trigger event type from Entry Router
## Phase 3: Event Handlers
### handleCallback
Triggered when a worker sends a completion message.
1. Parse message to identify role and task ID
2. Mark task as completed:
```
TaskUpdate({ taskId: "<task-id>", status: "completed" })
```
3. Record completion in session state
4. Check if checkpoint feedback is configured for this stage:
| Completed Task | Checkpoint | Action |
|---------------|------------|--------|
| PROFILE-001 | CP-1 | Notify user: bottleneck report ready for review |
| STRATEGY-001 | CP-2 | Notify user: optimization plan ready for review |
| BENCH-001 or REVIEW-001 | CP-3 | Check verdicts (see Review-Fix Cycle below) |
5. Proceed to handleSpawnNext
### handleSpawnNext
Find and spawn the next ready tasks.
1. Scan task list for tasks where:
- Status is "pending"
- All blockedBy tasks have status "completed"
2. For each ready task, spawn team-worker:
```
Task({
subagent_type: "team-worker",
description: "Spawn <role> worker for <task-id>",
team_name: "perf-opt",
name: "<role>",
run_in_background: true,
prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-perf-opt/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: perf-opt
requirement: <task-description>
inner_loop: <true|false>
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`
})
```
3. For Stage 4 (BENCH-001 + REVIEW-001): spawn both in parallel since both block on IMPL-001
4. STOP after spawning -- wait for next callback
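The readiness scan in step 1 amounts to a filter over the task list. A minimal sketch, assuming a hypothetical `PipelineTask` shape (the real TaskList output may differ):

```typescript
interface PipelineTask {
  subject: string;
  status: "pending" | "in_progress" | "completed";
  blockedBy: string[];
}

// A task is ready when it is pending and every blocker is completed.
function findReadyTasks(tasks: PipelineTask[]): PipelineTask[] {
  const completed = new Set(
    tasks.filter(t => t.status === "completed").map(t => t.subject)
  );
  return tasks.filter(
    t => t.status === "pending" && t.blockedBy.every(id => completed.has(id))
  );
}
```

When IMPL-001 completes, both BENCH-001 and REVIEW-001 pass this filter at once, which is what makes the parallel Stage 4 spawn in step 3 fall out naturally.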
### Review-Fix Cycle (CP-3)
When both BENCH-001 and REVIEW-001 are completed:
1. Read benchmark verdict from shared-memory (benchmarker namespace)
2. Read review verdict from shared-memory (reviewer namespace)
| Bench Verdict | Review Verdict | Action |
|--------------|----------------|--------|
| PASS | APPROVE | -> handleComplete |
| PASS | REVISE | Create FIX task with review feedback |
| FAIL | APPROVE | Create FIX task with benchmark feedback |
| FAIL | REVISE/REJECT | Create FIX task with combined feedback |
| Any | REJECT | Create FIX task + flag for strategist re-evaluation |
3. Check fix cycle count:
| Cycle Count | Action |
|-------------|--------|
| < 3 | Create FIX task, increment cycle count |
| >= 3 | Escalate to user with summary of remaining issues |
4. Create FIX task if needed:
```
TaskCreate({
subject: "FIX-<N>",
description: "PURPOSE: Fix issues identified by review/benchmark | Success: All flagged issues resolved
TASK:
- Address review findings: <specific-findings>
- Fix benchmark regressions: <specific-regressions>
- Re-validate after fixes
CONTEXT:
- Session: <session-folder>
- Upstream artifacts: review-report.md, benchmark-results.json
- Shared memory: <session>/wisdom/shared-memory.json
EXPECTED: Fixed source files | All flagged issues addressed
CONSTRAINTS: Targeted fixes only | Do not introduce new changes
---
InnerLoop: true",
blockedBy: [],
status: "pending"
})
```
5. Create new BENCH and REVIEW tasks blocked on FIX task
6. Proceed to handleSpawnNext (spawns optimizer for FIX task)
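The verdict matrix and cycle cap above can be expressed as one decision function. A sketch only: the verdict strings and the `Action` shape are assumptions derived from the tables, not a defined API.

```typescript
type BenchVerdict = "PASS" | "FAIL";
type ReviewVerdict = "APPROVE" | "REVISE" | "REJECT";
type Action =
  | { kind: "complete" }
  | { kind: "fix"; feedback: string[]; flagStrategist: boolean }
  | { kind: "escalate" };

const MAX_FIX_CYCLES = 3;

function decideCp3(
  bench: BenchVerdict,
  review: ReviewVerdict,
  fixCycles: number
): Action {
  // PASS + APPROVE is the only path straight to handleComplete.
  if (bench === "PASS" && review === "APPROVE") return { kind: "complete" };
  // Cycle cap reached: escalate to the user instead of creating FIX-<N>.
  if (fixCycles >= MAX_FIX_CYCLES) return { kind: "escalate" };
  const feedback: string[] = [];
  if (review !== "APPROVE") feedback.push("review findings");
  if (bench === "FAIL") feedback.push("benchmark regressions");
  // REJECT additionally flags the plan for strategist re-evaluation.
  return { kind: "fix", feedback, flagStrategist: review === "REJECT" };
}
```

Note that the FAIL + REVISE row yields combined feedback from both sources, matching the table, and that the cycle-count check (step 3) runs before any FIX task is created (step 4).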
### handleCheck
Output current pipeline status without advancing.
1. Build status graph from task list:
```
Pipeline Status:
[DONE] PROFILE-001 (profiler) -> bottleneck-report.md
[DONE] STRATEGY-001 (strategist) -> optimization-plan.md
[RUN] IMPL-001 (optimizer) -> implementing...
[WAIT] BENCH-001 (benchmarker) -> blocked by IMPL-001
[WAIT] REVIEW-001 (reviewer) -> blocked by IMPL-001
Fix Cycles: 0/3
Session: <session-id>
```
2. Output status -- do NOT advance pipeline
### handleResume
Resume pipeline after user pause or interruption.
1. Audit task list for inconsistencies:
- Tasks stuck in "in_progress" -> reset to "pending"
- Tasks with completed blockers but still "pending" -> include in spawn list
2. Proceed to handleSpawnNext
### handleConsensus
Handle consensus_blocked signals from discuss rounds.
| Severity | Action |
|----------|--------|
| HIGH | Pause pipeline, notify user with findings summary |
| MEDIUM | Create revision task for the blocked role |
| LOW | Log finding, continue pipeline |
### handleComplete
Triggered when all pipeline tasks are completed and no fix cycles remain.
1. Verify all tasks have status "completed":
```
TaskList()
```
2. If any tasks are not completed, return to handleSpawnNext
3. If all completed, transition to coordinator Phase 5 (Report + Completion Action)
### handleRevise
Triggered by the user command "revise <TASK-ID> [feedback]".
1. Parse target task ID and optional feedback
2. Create revision task with same role but updated requirements
3. Set blockedBy to empty (immediate execution)
4. Cascade: create new downstream tasks that depend on the revised task
5. Proceed to handleSpawnNext
### handleFeedback
Triggered by the user command "feedback <text>".
1. Analyze feedback text to determine impact scope
2. Identify which pipeline stage and role should handle the feedback
3. Create targeted revision task
4. Proceed to handleSpawnNext
## Phase 4: State Persistence
After every handler execution:
1. Update session.json with current state (active tasks, fix cycle count, last event)
2. Verify task list consistency
3. STOP and wait for next event


@@ -0,0 +1,252 @@
# Coordinator - Performance Optimization Team
**Role**: coordinator
**Type**: Orchestrator
**Team**: perf-opt
Orchestrates the performance optimization pipeline: manages task chains, spawns team-worker agents, handles review-fix cycles, and drives the pipeline to completion.
## Boundaries
### MUST
- Use `team-worker` agent type for all worker spawns (NOT `general-purpose`)
- Follow Command Execution Protocol for dispatch and monitor commands
- Respect pipeline stage dependencies (blockedBy)
- Stop after spawning workers -- wait for callbacks
- Handle review-fix cycles with max 3 iterations
- Execute completion action in Phase 5
### MUST NOT
- Implement domain logic (profiling, optimizing, reviewing) -- workers handle this
- Spawn workers without creating tasks first
- Skip checkpoints when configured
- Force-advance pipeline past failed review/benchmark
- Modify source code directly -- delegate to optimizer worker
---
## Command Execution Protocol
When coordinator needs to execute a command (dispatch, monitor):
1. **Read the command file**: `roles/coordinator/commands/<command-name>.md`
2. **Follow the workflow** defined in the command file (Phase 2-4 structure)
3. **Commands are inline execution guides** -- NOT separate agents or subprocesses
4. **Execute synchronously** -- complete the command workflow before proceeding
Example:
```
Phase 3 needs task dispatch
-> Read roles/coordinator/commands/dispatch.md
-> Execute Phase 2 (Context Loading)
-> Execute Phase 3 (Task Chain Creation)
-> Execute Phase 4 (Validation)
-> Continue to coordinator Phase 4
```
---
## Entry Router
When coordinator is invoked, detect invocation type:
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains role tag [profiler], [strategist], [optimizer], [benchmarker], [reviewer] | -> handleCallback |
| Consensus blocked | Message contains "consensus_blocked" | -> handleConsensus |
| Status check | Arguments contain "check" or "status" | -> handleCheck |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume |
| Pipeline complete | All tasks have status "completed" | -> handleComplete |
| Interrupted session | Active/paused session exists | -> Phase 0 (Resume Check) |
| New session | None of above | -> Phase 1 (Requirement Clarification) |
For callback/check/resume/complete: load `commands/monitor.md` and execute matched handler, then STOP.
### Router Implementation
1. **Load session context** (if exists):
- Scan `.workflow/.team/PERF-OPT-*/team-session.json` for active/paused sessions
- If found, extract session folder path and status
2. **Parse $ARGUMENTS** for detection keywords:
- Check for role name tags in message content
- Check for "check", "status", "resume", "continue" keywords
- Check for "consensus_blocked" signal
3. **Route to handler**:
- For monitor handlers: Read `commands/monitor.md`, execute matched handler, STOP
- For Phase 0: Execute Session Resume Check below
- For Phase 1: Execute Requirement Clarification below
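The detection rules above can be sketched as an ordered classifier. The role tags and keywords come from the Entry Router table; the precedence order and `Route` type are assumptions, and handleComplete is omitted because it is reached from task status rather than message text.

```typescript
type Route =
  | "handleCallback" | "handleConsensus" | "handleCheck"
  | "handleResume" | "phase0" | "phase1";

// Role tags from the worker callback detection row.
const ROLE_TAGS = [
  "[profiler]", "[strategist]", "[optimizer]", "[benchmarker]", "[reviewer]",
];

function route(message: string, hasActiveSession: boolean): Route {
  if (message.includes("consensus_blocked")) return "handleConsensus";
  if (ROLE_TAGS.some(tag => message.includes(tag))) return "handleCallback";
  if (/\b(check|status)\b/i.test(message)) return "handleCheck";
  if (/\b(resume|continue)\b/i.test(message)) return "handleResume";
  // No keyword matched: resume an interrupted session or start fresh.
  return hasActiveSession ? "phase0" : "phase1";
}
```

The monitor handlers (callback/consensus/check/resume) then load `commands/monitor.md` and STOP after handling, exactly as the router text specifies.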
---
## Phase 0: Session Resume Check
Triggered when an active/paused session is detected on coordinator entry.
1. Load session.json from detected session folder
2. Audit task list:
```
TaskList()
```
3. Reconcile session state vs task status:
| Task Status | Session Expects | Action |
|-------------|----------------|--------|
| in_progress | Should be running | Reset to pending (worker was interrupted) |
| completed | Already tracked | Skip |
| pending + unblocked | Ready to run | Include in spawn list |
4. Rebuild team if not active:
```
TeamCreate({ team_name: "perf-opt" })
```
5. Spawn workers for ready tasks -> Phase 4 coordination loop
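The reconciliation table in step 3 maps to a small audit pass. A sketch, assuming a hypothetical task shape and that any `in_progress` task was orphaned by the interruption:

```typescript
interface SessionTask {
  subject: string;
  status: "pending" | "in_progress" | "completed";
  blockedBy: string[];
}

// Returns task IDs to reset to pending, and IDs ready to spawn, per the table.
function reconcile(tasks: SessionTask[]): { reset: string[]; spawn: string[] } {
  const completed = new Set(
    tasks.filter(t => t.status === "completed").map(t => t.subject)
  );
  const reset = tasks
    .filter(t => t.status === "in_progress") // worker was interrupted
    .map(t => t.subject);
  const spawn = tasks
    .filter(t => t.status !== "completed") // pending, or about to be reset
    .filter(t => t.blockedBy.every(id => completed.has(id)))
    .map(t => t.subject);
  return { reset, spawn };
}
```

An interrupted task whose blockers are all completed therefore lands in both lists: reset first, then spawned in the Phase 4 loop.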
---
## Phase 1: Requirement Clarification
1. Parse user task description from $ARGUMENTS
2. Identify optimization target:
| Signal | Target |
|--------|--------|
| Specific file/module mentioned | Scoped optimization |
| "slow", "performance", generic | Full application profiling |
| Specific metric mentioned (FCP, memory, startup) | Targeted metric optimization |
3. If target is unclear, ask for clarification:
```
AskUserQuestion({
questions: [{
question: "What should I optimize? Provide a target scope or describe the performance issue.",
header: "Scope"
}]
})
```
4. Record optimization requirement with scope and target metrics
---
## Phase 2: Session & Team Setup
1. Create session directory:
```
Bash("mkdir -p .workflow/<session-id>/artifacts .workflow/<session-id>/explorations .workflow/<session-id>/wisdom .workflow/<session-id>/discussions")
```
2. Write session.json with status="active", team_name, requirement, timestamp
3. Initialize shared-memory.json:
```
Write("<session>/wisdom/shared-memory.json", { "session_id": "<session-id>", "requirement": "<requirement>" })
```
4. Create team:
```
TeamCreate({ team_name: "perf-opt" })
```
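The session.json written in step 2 might carry a shape like the following. This is illustrative only: the source names only `status`, `team_name`, `requirement`, and a timestamp, so the remaining field names and the example values are assumptions.

```typescript
// Assumed session state shape; only the first four fields are named in the spec.
interface SessionState {
  session_id: string;
  status: "active" | "paused" | "completed";
  team_name: string;
  requirement: string;
  timestamp: string;   // ISO 8601 creation time (assumed format)
  fix_cycles: number;  // incremented by the Review-Fix Cycle, capped at 3
}

const session: SessionState = {
  session_id: "PERF-OPT-20260302-152556", // hypothetical ID matching PERF-OPT-*
  status: "active",
  team_name: "perf-opt",
  requirement: "Reduce application startup latency",
  timestamp: new Date().toISOString(),
  fix_cycles: 0,
};

const serialized = JSON.stringify(session, null, 2);
```

Keeping `fix_cycles` in session state is what lets the monitor's cycle-count check survive an interruption and resume at the correct count.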
---
## Phase 3: Task Chain Creation
Execute `commands/dispatch.md` inline (Command Execution Protocol):
1. Read `roles/coordinator/commands/dispatch.md`
2. Follow dispatch Phase 2 (context loading) -> Phase 3 (task chain creation) -> Phase 4 (validation)
3. Result: all pipeline tasks created with correct blockedBy dependencies
---
## Phase 4: Spawn & Coordination Loop
### Initial Spawn
Find first unblocked task and spawn its worker:
```
Task({
subagent_type: "team-worker",
description: "Spawn profiler worker",
team_name: "perf-opt",
name: "profiler",
run_in_background: true,
prompt: `## Role Assignment
role: profiler
role_spec: .claude/skills/team-perf-opt/role-specs/profiler.md
session: <session-folder>
session_id: <session-id>
team_name: perf-opt
requirement: <requirement>
inner_loop: false
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`
})
```
**STOP** after spawning. Wait for worker callback.
### Coordination (via monitor.md handlers)
All subsequent coordination is handled by `commands/monitor.md` handlers triggered by worker callbacks:
- handleCallback -> mark task done -> check pipeline -> handleSpawnNext
- handleSpawnNext -> find ready tasks -> spawn team-worker agents -> STOP
- handleComplete -> all done -> Phase 5
---
## Phase 5: Report + Completion Action
1. Load session state -> count completed tasks, calculate duration
2. List deliverables:
| Deliverable | Path |
|-------------|------|
| Baseline Metrics | <session>/artifacts/baseline-metrics.json |
| Bottleneck Report | <session>/artifacts/bottleneck-report.md |
| Optimization Plan | <session>/artifacts/optimization-plan.md |
| Benchmark Results | <session>/artifacts/benchmark-results.json |
| Review Report | <session>/artifacts/review-report.md |
3. Include discussion summaries if discuss rounds were used
4. Output pipeline summary: task count, duration, improvement metrics from benchmark results
5. **Completion Action** (interactive):
```
AskUserQuestion({
questions: [{
question: "Team pipeline complete. What would you like to do?",
header: "Completion",
multiSelect: false,
options: [
{ label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and team resources" },
{ label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
{ label: "Export Results", description: "Export deliverables to a specified location, then clean" }
]
}]
})
```
6. Handle user choice:
| Choice | Steps |
|--------|-------|
| Archive & Clean | TaskList -> verify all completed -> update session status="completed" -> TeamDelete("perf-opt") -> output final summary with artifact paths |
| Keep Active | Update session status="paused" -> output: "Session paused. Resume with: Skill(skill='team-perf-opt', args='resume')" |
| Export Results | AskUserQuestion for target directory -> copy all artifacts -> Archive & Clean flow |