TASK ASSIGNMENT
MANDATORY FIRST STEPS
- Read shared discoveries: {session_folder}/discoveries.ndjson (if it exists; skip if not)
- Read project context: .workflow/project-tech.json (if exists)
- Read task schema: ~/.codex/skills/team-perf-opt/schemas/tasks-schema.md or ./.codex/skills/team-perf-opt/schemas/tasks-schema.md
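The "read if it exists, skip if not" steps above can be sketched as a small shell guard. The `{session_folder}` value here is an assumed example path, not the template's real placeholder expansion:

```shell
# Guarded reads for the mandatory first steps; a missing file is skipped, not an error.
session_folder="${session_folder:-/tmp/perf-session-example}"   # assumed example value for {session_folder}
mkdir -p "$session_folder"

for f in "$session_folder/discoveries.ndjson" ".workflow/project-tech.json"; do
  if [ -f "$f" ]; then
    echo "reading: $f"
    cat "$f"
  else
    echo "skipping (not found): $f"
  fi
done
```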
Your Task
- Task ID: {id}
- Title: {title}
- Description: {description}
- Role: {role}
- Bottleneck Type: {bottleneck_type}
- Priority: {priority}
- Target Files: {target_files}
Previous Tasks' Findings (Context)
{prev_context}
Execution Protocol
- Read discoveries: Load {session_folder}/discoveries.ndjson for shared exploration findings
- Use context: Apply previous tasks' findings from prev_context above
- Execute by role:
If role = profiler:
- Detect project type by scanning for framework markers:
- Frontend (React/Vue/Angular): render time, bundle size, FCP/LCP/CLS
- Backend Node (Express/Fastify/NestJS): CPU hotspots, memory, DB queries
- Native/JVM Backend (Cargo/Go/Java): CPU, memory, GC tuning
- CLI Tool: startup time, throughput, memory peak
- Trace hot code paths and CPU hotspots within target scope
- Identify memory allocation patterns and potential leaks
- Measure I/O and network latency where applicable
- Collect quantified baseline metrics (timing, memory, throughput)
- Rank top 3-5 bottlenecks by severity (Critical/High/Medium)
- Record evidence: file paths, line numbers, measured values
- Write {session_folder}/artifacts/baseline-metrics.json (metrics)
- Write {session_folder}/artifacts/bottleneck-report.md (ranked bottlenecks)
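A minimal sketch of what baseline-metrics.json might contain. The field names here are an assumption, not a fixed schema; what matters per the steps above is that each metric is quantified with a value, a unit, and the context where it was measured:

```shell
# Write an illustrative baseline-metrics.json (field names and values are assumed examples).
session_folder="${session_folder:-/tmp/perf-session-example}"   # assumed example value for {session_folder}
mkdir -p "$session_folder/artifacts"

cat > "$session_folder/artifacts/baseline-metrics.json" <<'EOF'
{
  "project_type": "backend-node",
  "collected_at": "2024-01-01T00:00:00Z",
  "metrics": [
    { "name": "p95_response_ms", "value": 120.0, "unit": "ms", "context": "GET /api/items" },
    { "name": "heap_peak_mb",    "value": 310.5, "unit": "MB", "context": "load test, 100 rps" }
  ]
}
EOF

# Sanity-check that the artifact is valid JSON before handing it to the strategist.
python3 -m json.tool "$session_folder/artifacts/baseline-metrics.json" > /dev/null && echo "baseline ok"
```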
If role = strategist:
- Read bottleneck report and baseline from {session_folder}/artifacts/
- For each bottleneck, select optimization strategy by type:
- CPU: algorithm optimization, memoization, caching, worker threads
- MEMORY: pool reuse, lazy init, WeakRef, scope cleanup
- IO: batching, async pipelines, streaming, connection pooling
- NETWORK: request coalescing, compression, CDN, prefetching
- RENDERING: virtualization, memoization, CSS containment, code splitting
- DATABASE: index optimization, query rewriting, caching layer
- Prioritize by impact/effort: P0 (high impact+low effort) to P3
- Assign unique OPT-IDs (OPT-001, 002, ...) with non-overlapping file targets
- Define measurable success criteria (target metric value or improvement %)
- Write {session_folder}/artifacts/optimization-plan.md
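One plan entry might look like the following. The layout is a sketch and the file path, strategy, and numbers are invented for illustration; the steps above only require a unique OPT-ID, a priority, non-overlapping file targets, and a measurable success criterion:

```shell
# Append one illustrative entry to optimization-plan.md (entry structure is an assumption).
session_folder="${session_folder:-/tmp/perf-session-example}"   # assumed example value for {session_folder}
mkdir -p "$session_folder/artifacts"

cat >> "$session_folder/artifacts/optimization-plan.md" <<'EOF'
## OPT-001 (P0) - Memoize hot-path filtering
- Bottleneck: CPU - O(n^2) filter in src/hot-path.ts (hypothetical target file)
- Strategy: memoization (cache filter results keyed by query)
- Target files: src/hot-path.ts
- Success criterion: p95 filter time < 20 ms (baseline: 120 ms)
EOF

grep -c '^## OPT-' "$session_folder/artifacts/optimization-plan.md"
```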
If role = optimizer:
- Read optimization plan from {session_folder}/artifacts/optimization-plan.md
- Apply optimizations in priority order (P0 first)
- Preserve existing behavior -- optimization must not break functionality
- Make minimal, focused changes per optimization
- Add comments only where optimization logic is non-obvious
- Preserve existing code style and conventions
If role = benchmarker:
- Read baseline from {session_folder}/artifacts/baseline-metrics.json
- Read plan from {session_folder}/artifacts/optimization-plan.md
- Run benchmarks matching detected project type:
- Frontend: bundle size, render performance
- Backend: endpoint response times, memory under load, DB query times
- CLI: execution time, memory peak, throughput
- Run test suite to verify no regressions
- Collect post-optimization metrics matching baseline format
- Calculate improvement percentages per metric
- Compare against plan success criteria
- Write {session_folder}/artifacts/benchmark-results.json
- Set verdict: PASS (meets criteria) / WARN (partial) / FAIL (regression or criteria not met)
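The improvement-percentage calculation is straightforward; a sketch with invented baseline and post-optimization values (for latency-style metrics, lower is better, so improvement is the relative reduction):

```shell
# Compute per-metric improvement from baseline vs post-optimization values (example numbers).
baseline=120.0   # assumed baseline p95 latency in ms
current=84.0     # assumed post-optimization p95 latency in ms

improvement=$(awk -v b="$baseline" -v c="$current" 'BEGIN { printf "%.1f", (b - c) / b * 100 }')
echo "improvement: ${improvement}%"   # prints "improvement: 30.0%"
```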
If role = reviewer:
- Read plan from {session_folder}/artifacts/optimization-plan.md
- Review changed files across 5 dimensions:
- Correctness: logic errors, race conditions, null safety
- Side effects: unintended behavior changes, API contract breaks
- Maintainability: code clarity, complexity increase, naming
- Regression risk: impact on unrelated code paths
- Best practices: idiomatic patterns, no optimization anti-patterns
- Write {session_folder}/artifacts/review-report.md
- Set verdict: APPROVE / REVISE / REJECT
- Share discoveries: Append exploration findings to the shared board:
  echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> {session_folder}/discoveries.ndjson
- Report result: Return JSON via report_agent_job_result
Discovery Types to Share
- bottleneck_found: {type, location, severity, description} -- Bottleneck identified
- hotspot_found: {file, function, cpu_pct, description} -- CPU hotspot
- memory_issue: {file, type, size_mb, description} -- Memory problem
- io_issue: {operation, latency_ms, description} -- I/O issue
- db_issue: {query, latency_ms, description} -- Database issue
- file_modified: {file, change, lines_added} -- File change recorded
- metric_measured: {metric, value, unit, context} -- Metric measured
- pattern_found: {pattern_name, location, description} -- Pattern identified
- artifact_produced: {name, path, producer, type} -- Deliverable created
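A concrete append for one of the types above, following the echo pattern from the execution protocol. The worker id, location, and description are invented example values:

```shell
# Append one bottleneck_found discovery to the shared board (field values are illustrative).
session_folder="${session_folder:-/tmp/perf-session-example}"   # assumed example value for {session_folder}
mkdir -p "$session_folder"

printf '%s\n' '{"ts":"2024-01-01T00:00:00Z","worker":"task-3","type":"bottleneck_found","data":{"type":"CPU","location":"src/hot-path.ts:88","severity":"High","description":"O(n^2) dedupe in hot loop"}}' \
  >> "$session_folder/discoveries.ndjson"

# Verify the appended line is valid JSON so later readers can parse the board line by line.
tail -n 1 "$session_folder/discoveries.ndjson" \
  | python3 -c 'import json,sys; json.loads(sys.stdin.read())' && echo "entry ok"
```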
Output (report_agent_job_result)
Return JSON:
{
  "id": "{id}",
  "status": "completed" | "failed",
  "findings": "Key discoveries and implementation notes (max 500 chars)",
  "verdict": "PASS|WARN|FAIL|APPROVE|REVISE|REJECT or empty",
  "artifacts_produced": "semicolon-separated artifact paths",
  "error": ""
}
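Since the report is consumed as JSON, it is worth validating the payload string before the tool call. A sketch with an invented result; `report_agent_job_result` itself is the tool invocation, this only checks the payload is well-formed:

```shell
# Sanity-check the result payload before reporting (values are illustrative).
result='{"id":"task-3","status":"completed","findings":"Memoized hot filter; p95 120ms -> 84ms","verdict":"PASS","artifacts_produced":"artifacts/benchmark-results.json","error":""}'

echo "$result" | python3 -m json.tool > /dev/null && echo "payload ok"
```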