mirror of https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-03-03 15:43:11 +08:00
3.0 KiB
| prefix | inner_loop | subagents | message_types |
|---|---|---|---|
| PROFILE | false | | |
# Performance Profiler
Profile application performance to identify CPU, memory, I/O, network, and rendering bottlenecks. Produce quantified baseline metrics and a ranked bottleneck report.
## Phase 2: Context & Environment Detection
| Input | Source | Required |
|---|---|---|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| shared-memory.json | /wisdom/shared-memory.json | No |
- Extract session path and target scope from task description
- Detect project type by scanning for framework markers:
| Signal File | Project Type | Profiling Focus |
|---|---|---|
| package.json + React/Vue/Angular | Frontend | Render time, bundle size, FCP/LCP/CLS |
| package.json + Express/Fastify/NestJS | Backend Node | CPU hotspots, memory, DB queries |
| Cargo.toml / go.mod / pom.xml | Native/JVM Backend | CPU, memory, GC tuning |
| Mixed framework markers | Full-stack | Split into FE + BE profiling passes |
| CLI entry / bin/ directory | CLI Tool | Startup time, throughput, memory peak |
| No detection | Generic | All profiling dimensions |
- Use the `explore` subagent to map performance-critical code paths within the target scope
- Detect available profiling tools (test runners, benchmark harnesses, linting tools)
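The detection table above amounts to a marker-file scan. A minimal sketch follows; the function name `detect_project_type` and the returned labels are illustrative, not part of the workflow spec:

```python
import json
from pathlib import Path

def detect_project_type(root: str) -> str:
    """Classify a project by the framework markers from the detection table."""
    base = Path(root)
    pkg = base / "package.json"
    if pkg.exists():
        deps = {}
        try:
            data = json.loads(pkg.read_text())
            deps = {**data.get("dependencies", {}), **data.get("devDependencies", {})}
        except (OSError, json.JSONDecodeError):
            pass
        frontend = {"react", "vue", "@angular/core"} & deps.keys()
        backend = {"express", "fastify", "@nestjs/core"} & deps.keys()
        if frontend and backend:
            return "fullstack"      # split into FE + BE profiling passes
        if frontend:
            return "frontend"       # render time, bundle size, FCP/LCP/CLS
        if backend:
            return "backend-node"   # CPU hotspots, memory, DB queries
    if any((base / m).exists() for m in ("Cargo.toml", "go.mod", "pom.xml")):
        return "native-jvm-backend" # CPU, memory, GC tuning
    if (base / "bin").is_dir():
        return "cli"                # startup time, throughput, memory peak
    return "generic"                # no detection: all profiling dimensions
```

Checks are ordered so the more specific signals (package.json dependencies) win before the coarser ones (marker files, a `bin/` directory).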
## Phase 3: Performance Profiling
Execute profiling based on detected project type:
Frontend profiling:
- Analyze bundle size and dependency weight via build output
- Identify render-blocking resources and heavy components
- Check for unnecessary re-renders, large DOM trees, unoptimized assets
Backend profiling:
- Trace hot code paths via execution analysis or instrumented runs
- Identify slow database queries, N+1 patterns, missing indexes
- Check memory allocation patterns and potential leaks
CLI / Library profiling:
- Measure startup time and critical path latency
- Profile throughput under representative workloads
- Identify memory peaks and allocation churn
All project types:
- Collect quantified baseline metrics (timing, memory, throughput)
- Rank top 3-5 bottlenecks by severity (Critical / High / Medium)
- Record evidence: file paths, line numbers, measured values
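The ranking step above can be sketched as a sort over collected findings. The record shape (`location`, `severity`, `measured_ms`) and the function name are assumptions for illustration:

```python
from dataclasses import dataclass

# Lower value sorts first: Critical before High before Medium.
SEVERITY_ORDER = {"Critical": 0, "High": 1, "Medium": 2}

@dataclass
class Bottleneck:
    location: str       # evidence as file:line
    severity: str       # "Critical", "High", or "Medium"
    measured_ms: float  # measured impact in milliseconds

def rank_bottlenecks(found: list, limit: int = 5) -> list:
    """Order by severity, then by measured impact; keep the top 3-5."""
    ranked = sorted(found, key=lambda b: (SEVERITY_ORDER[b.severity], -b.measured_ms))
    return ranked[:limit]
```

Keeping the measured value and location on each record means the report phase can cite evidence (file:line plus the number) without re-measuring.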
## Phase 4: Report Generation

- Write baseline metrics to `<session>/artifacts/baseline-metrics.json`:
  - Key metric names, measured values, units, measurement method
  - Timestamp and environment details
- Write bottleneck report to `<session>/artifacts/bottleneck-report.md`:
  - Ranked list of bottlenecks with severity, location (file:line), measured impact
  - Evidence summary per bottleneck
  - Detected project type and profiling methods used
- Update `<session>/wisdom/shared-memory.json` under the `profiler` namespace:
  - Read existing -> merge `{ "profiler": { project_type, bottleneck_count, top_bottleneck, scope } }` -> write back
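The read-merge-write update can be sketched as below; the helper name `update_shared_memory` is illustrative, and only the `profiler` namespace and file layout come from the spec:

```python
import json
from pathlib import Path

def update_shared_memory(path: str, profiler_entry: dict) -> None:
    """Read existing shared-memory.json, merge under 'profiler', write back."""
    p = Path(path)
    existing = {}
    if p.exists():
        existing = json.loads(p.read_text())
    # Merge into the namespace rather than overwrite the file,
    # so entries written by other agents survive.
    existing.setdefault("profiler", {}).update(profiler_entry)
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(json.dumps(existing, indent=2))
```

The `setdefault(...).update(...)` pattern also preserves earlier `profiler` keys when only some fields change between runs.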