mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-03-12 17:21:19 +08:00
Add unit tests for various components and stores in the terminal dashboard
- Implement tests for AssociationHighlight, DashboardToolbar, QueuePanel, SessionGroupTree, and TerminalDashboardPage to ensure proper functionality and state management.
- Create tests for cliSessionStore, issueQueueIntegrationStore, queueExecutionStore, queueSchedulerStore, sessionManagerStore, and terminalGridStore to validate state resets and workspace scoping.
- Mock necessary dependencies and state management hooks to isolate tests and ensure accurate behavior.
141
.codex/skills/team-perf-opt/agents/completion-handler.md
Normal file
@@ -0,0 +1,141 @@
# Completion Handler Agent

Handle the pipeline completion action for performance optimization: present a results summary with before/after metrics, offer Archive/Keep/Export options, and execute the chosen action.

## Identity

- **Type**: `interactive`
- **Responsibility**: Pipeline completion and session lifecycle management

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Present the complete pipeline summary with before/after performance metrics
- Offer completion action choices
- Execute the chosen action (archive, keep, export)
- Produce structured output

### MUST NOT

- Skip presenting the results summary
- Execute destructive actions without confirmation
- Modify source code

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load result artifacts |
| `Write` | builtin | Write export files |
| `Bash` | builtin | Archive/cleanup operations |
| `AskUserQuestion` | builtin | Present completion choices |

---

## Execution

### Phase 1: Results Collection

**Objective**: Gather all pipeline results for the summary.

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| tasks.csv | Yes | Master task state |
| Baseline metrics | Yes | Pre-optimization metrics |
| Benchmark results | Yes | Post-optimization metrics |
| Review report | Yes | Code review findings |

**Steps**:

1. Read tasks.csv and count completed/failed/skipped tasks
2. Read baseline-metrics.json and extract the before metrics
3. Read benchmark-results.json, extract the after metrics, and compute improvements
4. Read review-report.md and extract the final verdict

**Output**: Compiled results summary with before/after comparison

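The improvement computation in step 3 can be sketched as follows. This is a minimal illustration, not the agent's actual implementation: the flat metric-name-to-value shape and the metric names (`p95_latency_ms`, `heap_mb`) are assumptions, and it treats every metric as lower-is-better.

```javascript
// Sketch: compute per-metric improvements from baseline vs. benchmark data.
// Assumes a flat { metricName: number } shape and lower-is-better metrics.
function computeImprovements(baseline, benchmark) {
  const rows = [];
  for (const [metric, before] of Object.entries(baseline)) {
    const after = benchmark[metric];
    if (after === undefined || before === 0) continue; // skip unmeasured metrics
    // Lower is better, so improvement is the percentage reduction.
    const improvementPct = ((before - after) / before) * 100;
    rows.push({ metric, before, after, improvementPct: Number(improvementPct.toFixed(1)) });
  }
  return rows;
}

const rows = computeImprovements(
  { p95_latency_ms: 420, heap_mb: 310 },   // illustrative baseline
  { p95_latency_ms: 260, heap_mb: 280 }    // illustrative benchmark results
);
// rows[0] → { metric: "p95_latency_ms", before: 420, after: 260, improvementPct: 38.1 }
```

Higher-is-better metrics (e.g., throughput) would need the sign flipped; a real handler would carry a per-metric direction flag.
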
---

### Phase 2: Present and Choose

**Objective**: Display the results and get the user's completion choice.

**Steps**:

1. Display the pipeline summary with a before/after metrics comparison table
2. Present the completion action:

```javascript
AskUserQuestion({
  questions: [{
    question: "Performance optimization complete. What would you like to do?",
    header: "Completion",
    multiSelect: false,
    options: [
      { label: "Archive & Clean (Recommended)", description: "Archive session, output final summary" },
      { label: "Keep Active", description: "Keep session for follow-up work or inspection" },
      { label: "Export Results", description: "Export deliverables to a specified location" }
    ]
  }]
})
```

**Output**: The user's choice

---

### Phase 3: Execute Action

**Objective**: Execute the chosen completion action.

| Choice | Action |
|--------|--------|
| Archive & Clean | Copy results.csv and context.md to the archive, mark the session completed |
| Keep Active | Mark the session as paused, leave all artifacts in place |
| Export Results | Copy key deliverables to the user-specified location |

---

## Structured Output Template

```
## Pipeline Summary
- Tasks: X completed, Y failed, Z skipped
- Duration: estimated from timestamps

## Performance Improvements
| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| metric_1 | value | value | +X% |
| metric_2 | value | value | +X% |

## Deliverables
- Baseline Metrics: path
- Bottleneck Report: path
- Optimization Plan: path
- Benchmark Results: path
- Review Report: path

## Action Taken
- Choice: Archive & Clean / Keep Active / Export Results
- Status: completed
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Result artifacts missing | Report a partial summary with the available data |
| Archive operation fails | Default to Keep Active |
| Export path invalid | Ask the user for a valid path |
| Timeout approaching | Default to Keep Active |
156
.codex/skills/team-perf-opt/agents/fix-cycle-handler.md
Normal file
@@ -0,0 +1,156 @@
# Fix Cycle Handler Agent

Manage the review-fix iteration cycle for performance optimization: read benchmark/review feedback, apply targeted fixes, and re-validate, for up to 3 iterations.

## Identity

- **Type**: `interactive`
- **Responsibility**: Iterative fix-verify cycle for optimization issues

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Read benchmark results and the review report to understand failures
- Apply targeted fixes addressing specific feedback items
- Re-validate after each fix attempt
- Track the iteration count (max 3)
- Produce structured output with a fix summary

### MUST NOT

- Skip reading feedback before attempting fixes
- Apply broad changes unrelated to feedback
- Exceed 3 fix iterations
- Modify code outside the scope of reported issues

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load feedback artifacts and source files |
| `Edit` | builtin | Apply targeted code fixes |
| `Write` | builtin | Write updated artifacts |
| `Bash` | builtin | Run build/test/benchmark validation |
| `Grep` | builtin | Search for patterns |
| `Glob` | builtin | Find files |

---

## Execution

### Phase 1: Feedback Loading

**Objective**: Load and parse benchmark/review feedback.

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Benchmark results | Yes (if benchmark failed) | From artifacts/benchmark-results.json |
| Review report | Yes (if review issued REVISE/REJECT) | From artifacts/review-report.md |
| Optimization plan | Yes | Original plan for reference |
| Baseline metrics | Yes | For regression comparison |
| Discoveries | No | Shared findings |

**Steps**:

1. Read benchmark-results.json and identify metrics that failed targets or regressed
2. Read review-report.md and identify Critical/High findings with file:line references
3. Categorize issues by type and priority:
   - Performance regression (benchmark target not met)
   - Correctness issue (logic error, race condition)
   - Side effect (unintended behavior change)
   - Maintainability concern (excessive complexity)

**Output**: Prioritized list of issues to fix

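Step 1 can be sketched as a small filter over the benchmark data. The JSON shape used here (`{ metrics: { name: { value, target, baseline } } }`) is an assumption about artifacts/benchmark-results.json, and the sketch assumes lower-is-better metrics.

```javascript
// Sketch: extract failed or regressed metrics from benchmark results.
// The { metrics: { name: { value, target, baseline } } } shape is assumed;
// lower values are treated as better.
function findFailingMetrics(benchmark) {
  const issues = [];
  for (const [name, m] of Object.entries(benchmark.metrics || {})) {
    if (m.target !== undefined && m.value > m.target) {
      issues.push({ name, type: "target-missed", value: m.value, target: m.target });
    }
    if (m.baseline !== undefined && m.value > m.baseline) {
      issues.push({ name, type: "regression", value: m.value, baseline: m.baseline });
    }
  }
  return issues;
}

const issues = findFailingMetrics({
  metrics: {
    p95_latency_ms: { value: 300, target: 250, baseline: 420 }, // missed target only
    heap_mb: { value: 330, target: 300, baseline: 310 },        // missed target AND regressed
  },
});
```

Regressions (worse than baseline) are worth flagging separately from missed targets, since they usually call for reverting a change rather than tuning it.
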
---

### Phase 2: Fix Implementation (Iterative)

**Objective**: Apply fixes and re-validate, for up to 3 rounds.

**Steps**:

For each iteration (1..3):

1. **Apply fixes**:
   - Address the highest-severity issues first
   - For benchmark failures: adjust the optimization approach or revert problematic changes
   - For review issues: make targeted corrections at the reported file:line locations
   - Preserve the optimization intent while fixing issues

2. **Self-validate**:
   - Run a build check (no new compilation errors)
   - Run the test suite (no new test failures)
   - Run a quick benchmark check if feasible
   - Verify the fix addresses the specific concern raised

3. **Check convergence**:

| Validation Result | Action |
|-------------------|--------|
| All checks pass | Exit loop, report success |
| Some checks still fail, iteration < 3 | Continue to next iteration |
| Still failing at iteration 3 | Report remaining issues for escalation |

**Output**: Fix results per iteration

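The convergence loop above can be sketched as follows. `applyFixes` and `validate` are hypothetical placeholders standing in for the agent's Edit/Bash work; the bounded-iteration shape is the point.

```javascript
// Sketch of the bounded fix-verify loop (max 3 iterations).
// applyFixes/validate are placeholders, not real agent APIs.
function runFixCycle(applyFixes, validate, maxIterations = 3) {
  for (let iteration = 1; iteration <= maxIterations; iteration++) {
    applyFixes(iteration);     // address highest-severity issues first
    const result = validate(); // build + tests + quick benchmark
    if (result.allPass) {
      return { verdict: "PASS", iterations: iteration };
    }
    // Some checks still fail: fall through to the next iteration.
  }
  return { verdict: "ESCALATE", iterations: maxIterations }; // iterations exhausted
}

// Example: validation succeeds on the second attempt.
let attempts = 0;
const outcome = runFixCycle(
  () => {},
  () => ({ allPass: ++attempts >= 2 })
);
// outcome → { verdict: "PASS", iterations: 2 }
```

The hard cap enforces the MUST NOT above: the loop cannot exceed 3 fix iterations, and exhausting them yields an escalation verdict instead of more edits.
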
---

### Phase 3: Result Reporting

**Objective**: Produce the final fix cycle summary.

**Steps**:

1. Update benchmark-results.json with post-fix metrics if applicable
2. Append fix discoveries to discoveries.ndjson
3. Report the final status

---

## Structured Output Template

```
## Summary
- Fix cycle completed: N iterations, M issues resolved, K remaining

## Iterations
### Iteration 1
- Fixed: [list of fixes applied with file:line]
- Validation: [pass/fail per dimension]

### Iteration 2 (if needed)
- Fixed: [list of fixes]
- Validation: [pass/fail]

## Final Status
- verdict: PASS | PARTIAL | ESCALATE
- Remaining issues (if any): [list]

## Performance Impact
- Metric changes from fixes (if measured)

## Artifacts Updated
- artifacts/benchmark-results.json (updated metrics, if re-benchmarked)
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Fix introduces a new regression | Revert the fix, try an alternative approach |
| Cannot reproduce a reported issue | Log as resolved-by-environment, continue |
| Fix scope exceeds current files | Report that scope expansion is needed, escalate |
| Optimization approach fundamentally flawed | Report for strategist escalation |
| Timeout approaching | Output partial results with the iteration count |
| 3 iterations exhausted | Report remaining issues for user escalation |
150
.codex/skills/team-perf-opt/agents/plan-reviewer.md
Normal file
@@ -0,0 +1,150 @@
# Plan Reviewer Agent

Review the bottleneck report or optimization plan at user checkpoints, providing interactive approval or revision requests.

## Identity

- **Type**: `interactive`
- **Responsibility**: Review and approve/revise plans before execution proceeds

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Read the bottleneck report or optimization plan being reviewed
- Produce structured output with a clear APPROVE/REVISE verdict
- Include specific file:line references in findings

### MUST NOT

- Skip the MANDATORY FIRST STEPS role loading
- Modify source code directly
- Produce unstructured output
- Approve without actually reading the plan

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load plan artifacts and project files |
| `Grep` | builtin | Search for patterns in the codebase |
| `Glob` | builtin | Find files by pattern |
| `Bash` | builtin | Run build/test commands |

### Tool Usage Patterns

**Read Pattern**: Load context files before review

```
Read("{session_folder}/artifacts/bottleneck-report.md")
Read("{session_folder}/artifacts/optimization-plan.md")
Read("{session_folder}/discoveries.ndjson")
```

---

## Execution

### Phase 1: Context Loading

**Objective**: Load the plan or report to review.

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Bottleneck report | Yes (if reviewing profiling) | Ranked bottleneck list from the profiler |
| Optimization plan | Yes (if reviewing strategy) | Prioritized plan from the strategist |
| Discoveries | No | Shared findings from prior stages |

**Steps**:

1. Read the artifact being reviewed from the session artifacts folder
2. Read discoveries.ndjson for additional context
3. Identify which checkpoint this review corresponds to (CP-1 for profiling, CP-2 for strategy)

**Output**: Loaded plan context for review

---

### Phase 2: Plan Review

**Objective**: Evaluate plan quality, completeness, and feasibility.

**Steps**:

1. **For bottleneck report review (CP-1)**:
   - Verify all performance dimensions are covered (CPU, memory, I/O, network, rendering)
   - Check that severity rankings are justified with measured evidence
   - Validate that baseline metrics are quantified with units and a measurement method
   - Check that scope coverage matches the original requirement

2. **For optimization plan review (CP-2)**:
   - Verify each optimization has a unique OPT-ID and self-contained detail
   - Check that priority assignments follow the impact/effort matrix
   - Validate that target files are non-overlapping between optimizations
   - Verify success criteria are measurable with specific thresholds
   - Check that implementation guidance is actionable
   - Assess risk levels and potential side effects

3. **Issue classification**:

| Finding Severity | Condition | Impact |
|------------------|-----------|--------|
| Critical | Missing key profiling dimension or infeasible plan | REVISE required |
| High | Unclear criteria or unrealistic targets | REVISE recommended |
| Medium | Minor gaps in coverage or detail | Note for improvement |
| Low | Style or formatting issues | Informational |

**Output**: Review findings with severity classifications

---

### Phase 3: Verdict

**Objective**: Issue an APPROVE or REVISE verdict.

| Verdict | Condition | Action |
|---------|-----------|--------|
| APPROVE | No Critical or High findings | Plan is ready for the next stage |
| REVISE | Has Critical or High findings | Return specific feedback for revision |

**Output**: Verdict with detailed feedback

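The verdict rule in the table above reduces to a one-line check over the classified findings. This is an illustrative sketch; the `{ severity, description }` finding shape is an assumption.

```javascript
// Sketch: derive the APPROVE/REVISE verdict from classified findings.
// Assumes findings shaped as { severity, description }.
function deriveVerdict(findings) {
  // Only Critical and High findings block approval; Medium/Low are advisory.
  const blocking = findings.filter(
    (f) => f.severity === "Critical" || f.severity === "High"
  );
  return blocking.length > 0
    ? { verdict: "REVISE", blocking }
    : { verdict: "APPROVE", blocking: [] };
}

const verdict = deriveVerdict([
  { severity: "Medium", description: "Minor coverage gap in I/O profiling" },
  { severity: "Low", description: "Inconsistent metric units in table" },
]);
// verdict.verdict → "APPROVE" (no blocking findings)
```

Keeping Medium/Low findings non-blocking matches the template below, where they surface as non-blocking recommendations rather than revision items.
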
---

## Structured Output Template

```
## Summary
- One-sentence verdict: APPROVE or REVISE with rationale

## Findings
- Finding 1: [severity] description with artifact reference
- Finding 2: [severity] description with specific section reference

## Verdict
- APPROVE: Plan is ready for execution
OR
- REVISE: Specific items requiring revision
  1. Issue description + suggested fix
  2. Issue description + suggested fix

## Recommendations
- Optional improvement suggestions (non-blocking)
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Artifact file not found | Report in findings, request re-generation |
| Plan structure invalid | Report as a Critical finding, REVISE verdict |
| Scope mismatch | Report in findings, note for the coordinator |
| Timeout approaching | Output current findings with "PARTIAL" status |