Add roles for fixer, reproducer, tester, verifier, and supervisor with detailed workflows

- Introduced `fixer` role for implementing code fixes based on RCA reports, including phases for parsing RCA, planning fixes, implementing changes, and documenting results.
- Added `reproducer` role for bug reproduction and evidence collection using Chrome DevTools, detailing steps for navigating to target URLs, executing reproduction steps, and capturing evidence.
- Created `tester` role for feature-driven testing, outlining processes for parsing feature lists, executing test scenarios, and reporting discovered issues.
- Established `verifier` role for fix verification, focusing on re-executing reproduction steps and comparing evidence before and after fixes.
- Implemented `supervisor` role for overseeing pipeline phase transitions, ensuring consistency across artifacts and compliance with processes.
- Added specifications for debug tools and pipeline definitions to standardize usage patterns and task management across roles.
Commit 80d8954b7a (parent 0d01e7bc50)
Author: catlog22
Date: 2026-03-07 22:52:40 +08:00
27 changed files with 3274 additions and 443 deletions


@@ -0,0 +1,170 @@
---
name: team-frontend-debug
description: Frontend debugging team using Chrome DevTools MCP. Dual-mode — feature-list testing or bug-report debugging. Triggers on "team-frontend-debug", "frontend debug".
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Agent(*), AskUserQuestion(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*), mcp__chrome-devtools__*(*)
---
# Frontend Debug Team
Dual-mode frontend debugging: feature-list testing or bug-report debugging, powered by Chrome DevTools MCP.
## Architecture
```
Skill(skill="team-frontend-debug", args="feature list or bug description")
      |
SKILL.md (this file) = Router
      |
      +---------------+---------------+
      |                               |
no --role flag                  --role <name>
      |                               |
Coordinator                        Worker
roles/coordinator/role.md          roles/<name>/role.md
      |
      +-- analyze input -> select pipeline -> dispatch -> spawn -> STOP
      |
      +-----------------+---------------------+
      v                 v                     v
[test-pipeline]   [debug-pipeline]        [shared]
tester(DevTools)  reproducer(DevTools)    analyzer
                  fixer
                  verifier
```
## Pipeline Modes
| Input | Pipeline | Flow |
|-------|----------|------|
| Feature list / 功能清单 | `test-pipeline` | TEST → ANALYZE → FIX → VERIFY |
| Bug report / 错误描述 | `debug-pipeline` | REPRODUCE → ANALYZE → FIX → VERIFY |
## Role Registry
| Role | Path | Prefix | Inner Loop |
|------|------|--------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | — | — |
| tester | [roles/tester/role.md](roles/tester/role.md) | TEST-* | true |
| reproducer | [roles/reproducer/role.md](roles/reproducer/role.md) | REPRODUCE-* | false |
| analyzer | [roles/analyzer/role.md](roles/analyzer/role.md) | ANALYZE-* | false |
| fixer | [roles/fixer/role.md](roles/fixer/role.md) | FIX-* | true |
| verifier | [roles/verifier/role.md](roles/verifier/role.md) | VERIFY-* | false |
## Role Router
Parse `$ARGUMENTS`:
- Has `--role <name>` → Read `roles/<name>/role.md`, execute Phase 2-4
- No `--role` → Read `roles/coordinator/role.md`, execute entry router
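The router above can be sketched as a plain function (illustrative only; in practice the skill runtime performs this parse on `$ARGUMENTS`):

```python
import re

def route(arguments: str) -> str:
    """Return the role spec path to load for a given argument string."""
    match = re.search(r"--role\s+(\S+)", arguments)
    if match:
        # Worker mode: load the named role spec and execute Phases 2-4
        return f"roles/{match.group(1)}/role.md"
    # No --role flag: act as coordinator and run the entry router
    return "roles/coordinator/role.md"
```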
## Shared Constants
- **Session prefix**: `TFD`
- **Session path**: `.workflow/.team/TFD-<slug>-<date>/`
- **CLI tools**: `ccw cli --mode analysis` (read-only), `ccw cli --mode write` (modifications)
- **Message bus**: `mcp__ccw-tools__team_msg(session_id=<session-id>, ...)`
## Chrome DevTools MCP Tools
All browser inspection operations use Chrome DevTools MCP; the reproducer and verifier roles are its primary consumers.
| Tool | Purpose |
|------|---------|
| `mcp__chrome-devtools__navigate_page` | Navigate to target URL |
| `mcp__chrome-devtools__take_screenshot` | Capture visual state |
| `mcp__chrome-devtools__take_snapshot` | Capture DOM/a11y tree |
| `mcp__chrome-devtools__list_console_messages` | Read console logs |
| `mcp__chrome-devtools__get_console_message` | Get specific console message |
| `mcp__chrome-devtools__list_network_requests` | Monitor network activity |
| `mcp__chrome-devtools__get_network_request` | Inspect request/response detail |
| `mcp__chrome-devtools__performance_start_trace` | Start performance recording |
| `mcp__chrome-devtools__performance_stop_trace` | Stop and analyze trace |
| `mcp__chrome-devtools__click` | Simulate user click |
| `mcp__chrome-devtools__fill` | Fill form inputs |
| `mcp__chrome-devtools__hover` | Hover over elements |
| `mcp__chrome-devtools__evaluate_script` | Execute JavaScript in page |
| `mcp__chrome-devtools__wait_for` | Wait for element/text |
| `mcp__chrome-devtools__list_pages` | List open browser tabs |
| `mcp__chrome-devtools__select_page` | Switch active tab |
## Worker Spawn Template
Coordinator spawns workers using this template:
```
Agent({
subagent_type: "team-worker",
description: "Spawn <role> worker",
team_name: <team-name>,
name: "<role>",
run_in_background: true,
prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-frontend-debug/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
team_name: <team-name>
requirement: <task-description>
inner_loop: <true|false>
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role Phase 2-4 -> built-in Phase 5 (report).`
})
```
## User Commands
| Command | Action |
|---------|--------|
| `check` / `status` | View execution status graph |
| `resume` / `continue` | Advance to next step |
| `revise <TASK-ID> [feedback]` | Revise specific task |
| `feedback <text>` | Inject feedback for revision |
| `retry <TASK-ID>` | Re-run a failed task |
## Completion Action
When pipeline completes, coordinator presents:
```
AskUserQuestion({
questions: [{
question: "Pipeline complete. What would you like to do?",
header: "Completion",
multiSelect: false,
options: [
{ label: "Archive & Clean (Recommended)", description: "Archive session, clean up team" },
{ label: "Keep Active", description: "Keep session for follow-up debugging" },
{ label: "Export Results", description: "Export debug report and patches" }
]
}]
})
```
## Specs Reference
- [specs/pipelines.md](specs/pipelines.md) — Pipeline definitions and task registry
- [specs/debug-tools.md](specs/debug-tools.md) — Chrome DevTools MCP usage patterns and evidence collection
## Session Directory
```
.workflow/.team/TFD-<slug>-<date>/
├── team-session.json # Session state + role registry
├── evidence/ # Screenshots, snapshots, network logs
├── artifacts/ # Test reports, RCA reports, patches, verification reports
├── wisdom/ # Cross-task debug knowledge
└── .msg/ # Team message bus
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| All features pass testing | Report success, pipeline completes without ANALYZE/FIX/VERIFY |
| Bug not reproducible | Reproducer reports failure, coordinator asks user for more details |
| Browser not available | Report error, suggest manual reproduction steps |
| Analysis inconclusive | Analyzer requests more evidence via iteration loop |
| Fix introduces regression | Verifier reports fail, coordinator dispatches re-fix |
| No issues found in test | Skip downstream tasks, report all-pass |
| Unknown command | Error with available command list |
| Role not found | Error with role registry |


@@ -0,0 +1,207 @@
---
role: analyzer
prefix: ANALYZE
inner_loop: false
message_types:
success: rca_ready
iteration: need_more_evidence
error: error
---
# Analyzer
Root cause analysis from debug evidence.
## Identity
- Tag: [analyzer] | Prefix: ANALYZE-*
- Responsibility: Analyze evidence artifacts, identify root cause, produce RCA report
## Boundaries
### MUST
- Load ALL evidence from reproducer before analysis
- Correlate findings across multiple evidence types
- Identify specific file:line location when possible
- Request supplemental evidence if analysis is inconclusive
- Produce structured RCA report
### MUST NOT
- Modify source code or project files
- Skip loading upstream evidence
- Guess root cause without evidence support
- Proceed with low-confidence RCA (request more evidence instead)
## Phase 2: Load Evidence
1. Read upstream artifacts via team_msg(operation="get_state", role="reproducer")
2. Extract evidence paths from reproducer's state_update ref
3. Load evidence-summary.json from session evidence/
4. Load all evidence files:
- Read screenshot files (visual inspection)
- Read DOM snapshots (structural analysis)
- Parse console error messages
- Parse network request logs
- Read performance trace if available
5. Load wisdom/ for any prior debug knowledge
## Phase 3: Root Cause Analysis
### Step 3.1: Console Error Analysis
Analyze console output first, since most bugs leave console evidence:
1. Filter console messages by type: error > warn > log
2. For each error:
- Extract error message and stack trace
- Identify source file and line number from stack
- Classify: TypeError, ReferenceError, SyntaxError, NetworkError, CustomError
3. Map errors to reproduction steps (correlation by timing)
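Step 2 can be sketched as follows. This is a minimal illustration that only handles Chrome-style `at fn (url:line:col)` frames; stack formats vary by browser, and the sample URL in the usage is hypothetical:

```python
import re

def locate_error(stack: str):
    """Pull (file, line) from the first Chrome-style stack frame, or None."""
    # Chrome frames look like: "    at handleClick (http://host/src/app.js:42:13)"
    match = re.search(r"at .*?\(?(\S+?):(\d+):\d+\)?", stack)
    if match:
        return match.group(1), int(match.group(2))
    return None
```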
### Step 3.2: Network Analysis
If network evidence collected:
1. Identify failed requests (4xx, 5xx, timeout, CORS)
2. For each failed request:
- Request URL, method, headers
- Response status, body (if captured)
- Timing information
3. Check for:
- Missing authentication tokens
- Incorrect API endpoints
- CORS policy violations
- Request/response payload issues
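The failure triage in step 1 reduces to a filter like this sketch, assuming each request record carries `url` and `status` fields (the exact MCP response schema is an assumption):

```python
def failed_requests(requests):
    """Return requests worth investigating: HTTP 4xx/5xx or no response at all."""
    suspicious = []
    for req in requests:
        status = req.get("status")  # None = no response (timeout, CORS-blocked)
        if status is None or status >= 400:
            suspicious.append(req)
    return suspicious
```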
### Step 3.3: DOM Structure Analysis
If snapshots collected:
1. Compare before/after snapshots
2. Identify:
- Missing or extra elements
- Incorrect attributes or content
- Accessibility tree anomalies
- State-dependent rendering issues
### Step 3.4: Performance Analysis
If performance trace collected:
1. Identify long tasks (>50ms)
2. Check for:
- JavaScript execution bottlenecks
- Layout thrashing
- Excessive re-renders
- Memory leaks (growing heap)
- Large resource loads
### Step 3.5: Cross-Correlation
Combine findings from all dimensions:
1. Build timeline of events leading to bug
2. Identify the earliest trigger point
3. Trace from trigger to visible symptom
4. Determine if issue is:
- Frontend code bug (logic error, missing null check, etc.)
- Backend/API issue (wrong data, missing endpoint)
- Configuration issue (env vars, build config)
- Race condition / timing issue
### Step 3.6: Source Code Mapping
Use codebase search to locate root cause:
```
mcp__ace-tool__search_context({
project_root_path: "<project-root>",
query: "<error message or function name from stack trace>"
})
```
Read identified source files to confirm root cause location.
### Step 3.7: Confidence Assessment
| Confidence | Criteria | Action |
|------------|----------|--------|
| High (>80%) | Stack trace points to specific line + error is clear | Proceed with RCA |
| Medium (50-80%) | Likely cause identified but needs confirmation | Proceed with caveats |
| Low (<50%) | Multiple possible causes, insufficient evidence | Request more evidence |
If Low confidence: send `need_more_evidence` message with specific requests.
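The table reduces to a small decision rule (sketch, thresholds as in the table):

```python
def next_action(confidence: float) -> str:
    """Map analysis confidence (0-1) to the analyzer's next move."""
    if confidence > 0.8:
        return "proceed"               # clear stack trace, specific line
    if confidence >= 0.5:
        return "proceed_with_caveats"  # likely cause, flag for confirmation
    return "need_more_evidence"        # triggers the iteration protocol
```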
## Phase 4: RCA Report
Write `<session>/artifacts/ANALYZE-001-rca.md`:
```markdown
# Root Cause Analysis Report
## Bug Summary
- **Description**: <bug description>
- **URL**: <target url>
- **Reproduction**: <success/partial/failed>
## Root Cause
- **Category**: <JS Error | Network | Rendering | Performance | State>
- **Confidence**: <High | Medium | Low>
- **Source File**: <file path>
- **Source Line**: <line number>
- **Root Cause**: <detailed explanation>
## Evidence Chain
1. <evidence 1 -> finding>
2. <evidence 2 -> finding>
3. <correlation -> root cause>
## Fix Recommendation
- **Approach**: <description of recommended fix>
- **Files to modify**: <list>
- **Risk level**: <Low | Medium | High>
- **Estimated scope**: <lines of code / number of files>
## Additional Observations
- <any related issues found>
- <potential regression risks>
```
Send state_update:
```json
{
"status": "task_complete",
"task_id": "ANALYZE-001",
"ref": "<session>/artifacts/ANALYZE-001-rca.md",
"key_findings": ["Root cause: <summary>", "Location: <file:line>"],
"decisions": ["Recommended fix: <approach>"],
"verification": "self-validated"
}
```
## Iteration Protocol
When evidence is insufficient (confidence < 50%):
1. Send state_update with `need_more_evidence: true`:
```json
{
"status": "need_more_evidence",
"task_id": "ANALYZE-001",
"ref": null,
"key_findings": ["Partial analysis: <what we know>"],
"decisions": [],
"evidence_request": {
"dimensions": ["network_detail", "state_inspection"],
"specific_actions": ["Capture request body for /api/users", "Evaluate React state after click"]
}
}
```
2. Coordinator creates REPRODUCE-002 + ANALYZE-002
3. ANALYZE-002 loads both original and supplemental evidence
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Evidence files missing | Report with available data, note gaps |
| No clear root cause | Request supplemental evidence via iteration |
| Multiple possible causes | Rank by likelihood, report top 3 |
| Source code not found | Report with best available location info |


@@ -0,0 +1,174 @@
# Analyze Input
Parse user input -> detect mode (feature-test vs bug-report) -> build dependency graph -> assign roles.
**CONSTRAINT**: Text-level analysis only. NO source code reading, NO codebase exploration.
## Step 1: Detect Input Mode
```
if input contains: 功能, feature, 清单, list, 测试, test, 完成, done, 验收
→ mode = "test-pipeline"
elif input contains: bug, 错误, 报错, crash, 问题, 不工作, 白屏, 异常
→ mode = "debug-pipeline"
else
→ AskUserQuestion to clarify
```
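A runnable sketch of the keyword router above (real input would also need normalization; the sketch only applies lowercasing):

```python
TEST_KEYWORDS = ("功能", "feature", "清单", "list", "测试", "test", "完成", "done", "验收")
DEBUG_KEYWORDS = ("bug", "错误", "报错", "crash", "问题", "不工作", "白屏", "异常")

def detect_mode(user_input: str):
    """Return the pipeline mode, or None when ambiguous (ask the user)."""
    text = user_input.lower()
    if any(k in text for k in TEST_KEYWORDS):
        return "test-pipeline"
    if any(k in text for k in DEBUG_KEYWORDS):
        return "debug-pipeline"
    return None  # ambiguous: fall back to AskUserQuestion
```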
```
AskUserQuestion({
questions: [{
question: "请确认调试模式",
header: "Mode",
multiSelect: false,
options: [
{ label: "功能测试", description: "根据功能清单逐项测试,发现并修复问题" },
{ label: "Bug修复", description: "针对已知Bug进行复现、分析和修复" }
]
}]
})
```
---
## Mode A: Feature Test (test-pipeline)
### Parse Feature List
Extract from user input:
| Field | Source | Required |
|-------|--------|----------|
| base_url | URL in text or AskUserQuestion | Yes |
| features | Feature list (bullet points, numbered list, or free text) | Yes |
| test_depth | User preference or default "standard" | Auto |
Parse features into structured format:
```json
[
{ "id": "F-001", "name": "用户登录", "url": "/login", "description": "..." },
{ "id": "F-002", "name": "数据列表", "url": "/dashboard", "description": "..." }
]
```
If base_url missing:
```
AskUserQuestion({
questions: [{
question: "请提供应用的访问地址",
header: "Base URL",
multiSelect: false,
options: [
{ label: "localhost:3000", description: "本地开发服务器" },
{ label: "localhost:5173", description: "Vite默认端口" },
{ label: "Custom", description: "自定义URL" }
]
}]
})
```
### Complexity Scoring (Test Mode)
| Factor | Points |
|--------|--------|
| Per feature | +1 |
| Features > 5 | +2 |
| Features > 10 | +3 |
| Cross-page workflows | +1 |
Results: 1-3 Low, 4-6 Medium, 7+ High
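The scoring table can be sketched as a function (cross-page workflow detection is reduced to a boolean flag for illustration):

```python
def score_test_complexity(num_features: int, cross_page: bool = False):
    """Apply the test-mode scoring table and return (score, level)."""
    score = num_features          # +1 per feature
    if num_features > 5:
        score += 2
    if num_features > 10:
        score += 3
    if cross_page:
        score += 1
    level = "Low" if score <= 3 else "Medium" if score <= 6 else "High"
    return score, level
```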
### Output (Test Mode)
```json
{
"mode": "test-pipeline",
"base_url": "<url>",
"features": [
{ "id": "F-001", "name": "<name>", "url": "<path>", "description": "<desc>" }
],
"pipeline_type": "test-pipeline",
"dependency_graph": {
"TEST-001": { "role": "tester", "blockedBy": [], "priority": "P0" },
"ANALYZE-001": { "role": "analyzer", "blockedBy": ["TEST-001"], "priority": "P0", "conditional": true },
"FIX-001": { "role": "fixer", "blockedBy": ["ANALYZE-001"], "priority": "P0", "conditional": true },
"VERIFY-001": { "role": "verifier", "blockedBy": ["FIX-001"], "priority": "P0", "conditional": true }
},
"roles": [
{ "name": "tester", "prefix": "TEST", "inner_loop": true },
{ "name": "analyzer", "prefix": "ANALYZE", "inner_loop": false },
{ "name": "fixer", "prefix": "FIX", "inner_loop": true },
{ "name": "verifier", "prefix": "VERIFY", "inner_loop": false }
],
"complexity": { "score": 0, "level": "Low|Medium|High" }
}
```
---
## Mode B: Bug Report (debug-pipeline)
### Parse Bug Report
Extract from user input:
| Field | Source | Required |
|-------|--------|----------|
| bug_description | User text | Yes |
| target_url | URL in text or AskUserQuestion | Yes |
| reproduction_steps | Steps in text or AskUserQuestion | Yes |
| expected_behavior | User description | Recommended |
| actual_behavior | User description | Recommended |
| severity | User indication or auto-assess | Auto |
### Debug Dimension Detection
| Keywords | Dimension | Evidence Needed |
|----------|-----------|-----------------|
| 渲染, 样式, 显示, 布局, CSS | UI/Rendering | screenshot, snapshot |
| 请求, API, 接口, 网络, 超时 | Network | network_requests |
| 错误, 报错, 异常, crash | JavaScript Error | console_messages |
| 慢, 卡顿, 性能, 加载 | Performance | performance_trace |
| 状态, 数据, 更新, 不同步 | State Management | console + snapshot |
| 交互, 点击, 输入, 表单 | User Interaction | click/fill + screenshot |
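Dimension detection is a keyword scan over the bug report. A sketch, where dimension keys other than the two shown in the debug-mode output (`ui_rendering`, `javascript_error`) are assumed names:

```python
DIMENSION_KEYWORDS = {
    "ui_rendering":     ("渲染", "样式", "显示", "布局", "css"),
    "network":          ("请求", "api", "接口", "网络", "超时"),
    "javascript_error": ("错误", "报错", "异常", "crash"),
    "performance":      ("慢", "卡顿", "性能", "加载"),
    "state":            ("状态", "数据", "更新", "不同步"),
    "interaction":      ("交互", "点击", "输入", "表单"),
}

def detect_dimensions(bug_description: str):
    """Return every debug dimension whose keywords appear in the report."""
    text = bug_description.lower()
    return [dim for dim, keys in DIMENSION_KEYWORDS.items()
            if any(k in text for k in keys)]
```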
### Complexity Scoring (Debug Mode)
| Factor | Points |
|--------|--------|
| Single dimension (e.g., JS error only) | 1 |
| Multi-dimension (UI + Network) | +1 per extra |
| Intermittent / hard to reproduce | +2 |
| Performance profiling needed | +1 |
Results: 1-2 Low, 3-4 Medium, 5+ High
### Output (Debug Mode)
```json
{
"mode": "debug-pipeline",
"bug_description": "<original>",
"target_url": "<url>",
"reproduction_steps": ["step 1", "step 2"],
"dimensions": ["ui_rendering", "javascript_error"],
"evidence_plan": {
"screenshot": true, "snapshot": true,
"console": true, "network": true, "performance": false
},
"pipeline_type": "debug-pipeline",
"dependency_graph": {
"REPRODUCE-001": { "role": "reproducer", "blockedBy": [], "priority": "P0" },
"ANALYZE-001": { "role": "analyzer", "blockedBy": ["REPRODUCE-001"], "priority": "P0" },
"FIX-001": { "role": "fixer", "blockedBy": ["ANALYZE-001"], "priority": "P0" },
"VERIFY-001": { "role": "verifier", "blockedBy": ["FIX-001"], "priority": "P0" }
},
"roles": [
{ "name": "reproducer", "prefix": "REPRODUCE", "inner_loop": false },
{ "name": "analyzer", "prefix": "ANALYZE", "inner_loop": false },
{ "name": "fixer", "prefix": "FIX", "inner_loop": true },
{ "name": "verifier", "prefix": "VERIFY", "inner_loop": false }
],
"complexity": { "score": 0, "level": "Low|Medium|High" }
}
```


@@ -0,0 +1,256 @@
# Dispatch Debug Tasks
Create task chains from dependency graph with proper blockedBy relationships.
## Workflow
1. Read task-analysis.json -> extract pipeline_type and dependency_graph
2. Read specs/pipelines.md -> get task registry for selected pipeline
3. Topological sort tasks (respect blockedBy)
4. Validate all owners exist in role registry (SKILL.md)
5. For each task (in order):
- TaskCreate with structured description (see template below)
- TaskUpdate with blockedBy + owner assignment
6. Update team-session.json with pipeline.tasks_total
7. Validate chain (no orphans, no cycles, all refs valid)
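Steps 3 and 7 amount to a Kahn-style topological sort that doubles as validation: unknown `blockedBy` references and cycles both surface as errors. A sketch over the `dependency_graph` shape from task-analysis.json:

```python
def order_tasks(graph):
    """Topologically sort {task_id: {"blockedBy": [...]}}; raise on bad refs or cycles."""
    for tid, spec in graph.items():
        for dep in spec["blockedBy"]:
            if dep not in graph:
                raise ValueError(f"{tid} blocked by unknown task {dep}")
    remaining = {tid: set(spec["blockedBy"]) for tid, spec in graph.items()}
    order = []
    while remaining:
        ready = [tid for tid, deps in remaining.items() if not deps]
        if not ready:
            raise ValueError("circular dependency detected")
        for tid in sorted(ready):   # sorted for deterministic ordering
            order.append(tid)
            del remaining[tid]
        for deps in remaining.values():
            deps.difference_update(ready)
    return order
```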
## Task Description Template
```
PURPOSE: <goal> | Success: <criteria>
TASK:
- <step 1>
- <step 2>
CONTEXT:
- Session: <session-folder>
- Base URL / Bug URL: <url>
- Upstream artifacts: <list>
EXPECTED: <artifact path> + <quality criteria>
CONSTRAINTS: <scope limits>
---
InnerLoop: <true|false>
RoleSpec: .claude/skills/team-frontend-debug/roles/<role>/role.md
```
---
## Test Pipeline Tasks (mode: test-pipeline)
### TEST-001: Feature Testing
```
PURPOSE: Test all features from feature list and discover issues | Success: All features tested with pass/fail results
TASK:
- Parse feature list from task description
- For each feature: navigate to URL, explore page, generate test scenarios
- Execute test scenarios using Chrome DevTools MCP (click, fill, hover, etc.)
- Capture evidence: screenshots, console logs, network requests
- Classify results: pass / fail / warning
- Compile test report with discovered issues
CONTEXT:
- Session: <session-folder>
- Base URL: <base-url>
- Features: <feature-list-from-task-analysis>
EXPECTED: <session>/artifacts/TEST-001-report.md + <session>/artifacts/TEST-001-issues.json
CONSTRAINTS: Use Chrome DevTools MCP only | Do not modify any code | Test all listed features
---
InnerLoop: true
RoleSpec: .claude/skills/team-frontend-debug/roles/tester/role.md
```
### ANALYZE-001 (Test Mode): Analyze Discovered Issues
```
PURPOSE: Analyze issues discovered by tester to identify root causes | Success: RCA for each discovered issue
TASK:
- Load test report and issues list from TEST-001
- For each high/medium severity issue: analyze evidence, identify root cause
- Correlate console errors, network failures, DOM anomalies to source code
- Produce consolidated RCA report covering all issues
CONTEXT:
- Session: <session-folder>
- Upstream: <session>/artifacts/TEST-001-issues.json
- Test evidence: <session>/evidence/
EXPECTED: <session>/artifacts/ANALYZE-001-rca.md with root causes for all issues
CONSTRAINTS: Read-only analysis | Skip low-severity warnings unless user requests
---
InnerLoop: false
RoleSpec: .claude/skills/team-frontend-debug/roles/analyzer/role.md
```
**Conditional**: If TEST-001 reports zero issues → skip ANALYZE-001, FIX-001, VERIFY-001. Pipeline completes.
### FIX-001 (Test Mode): Fix All Issues
```
PURPOSE: Fix all identified issues from RCA | Success: All high/medium issues resolved
TASK:
- Load consolidated RCA report from ANALYZE-001
- For each root cause: locate code, implement fix
- Run syntax/type check after all modifications
- Document all changes
CONTEXT:
- Session: <session-folder>
- Upstream: <session>/artifacts/ANALYZE-001-rca.md
EXPECTED: Modified source files + <session>/artifacts/FIX-001-changes.md
CONSTRAINTS: Minimal changes per issue | Follow existing code style
---
InnerLoop: true
RoleSpec: .claude/skills/team-frontend-debug/roles/fixer/role.md
```
### VERIFY-001 (Test Mode): Re-Test After Fix
```
PURPOSE: Re-run failed test scenarios to verify fixes | Success: Previously failed scenarios now pass
TASK:
- Load original test report (failed scenarios only)
- Re-execute failed scenarios using Chrome DevTools MCP
- Capture evidence and compare with original
- Report pass/fail per scenario
CONTEXT:
- Session: <session-folder>
- Original test report: <session>/artifacts/TEST-001-report.md
- Fix changes: <session>/artifacts/FIX-001-changes.md
- Failed features: <from TEST-001-issues.json>
EXPECTED: <session>/artifacts/VERIFY-001-report.md with pass/fail per previously-failed scenario
CONSTRAINTS: Only re-test failed scenarios | Use Chrome DevTools MCP only
---
InnerLoop: false
RoleSpec: .claude/skills/team-frontend-debug/roles/verifier/role.md
```
---
## Debug Pipeline Tasks (mode: debug-pipeline)
### REPRODUCE-001: Evidence Collection
```
PURPOSE: Reproduce reported bug and collect debug evidence | Success: Bug reproduced with evidence artifacts
TASK:
- Navigate to target URL
- Execute reproduction steps using Chrome DevTools MCP
- Capture evidence: screenshots, DOM snapshots, console logs, network requests
- If performance dimension: run performance trace
- Package all evidence into session evidence/ directory
CONTEXT:
- Session: <session-folder>
- Bug URL: <target-url>
- Steps: <reproduction-steps>
- Evidence plan: <from task-analysis.json>
EXPECTED: <session>/evidence/ directory with all captures + reproduction report
CONSTRAINTS: Use Chrome DevTools MCP only | Do not modify any code
---
InnerLoop: false
RoleSpec: .claude/skills/team-frontend-debug/roles/reproducer/role.md
```
### ANALYZE-001 (Debug Mode): Root Cause Analysis
```
PURPOSE: Analyze evidence to identify root cause | Success: RCA report with specific file:line location
TASK:
- Load evidence from REPRODUCE-001
- Analyze console errors and stack traces
- Analyze failed/abnormal network requests
- Compare DOM snapshot against expected structure
- Correlate findings to source code location
CONTEXT:
- Session: <session-folder>
- Upstream: <session>/evidence/
- Bug description: <bug-description>
EXPECTED: <session>/artifacts/ANALYZE-001-rca.md with root cause, file:line, fix recommendation
CONSTRAINTS: Read-only analysis | Request more evidence if inconclusive
---
InnerLoop: false
RoleSpec: .claude/skills/team-frontend-debug/roles/analyzer/role.md
```
### FIX-001 (Debug Mode): Code Fix
```
PURPOSE: Fix the identified bug | Success: Code changes that resolve the root cause
TASK:
- Load RCA report from ANALYZE-001
- Locate the problematic code
- Implement fix following existing code patterns
- Run syntax/type check on modified files
CONTEXT:
- Session: <session-folder>
- Upstream: <session>/artifacts/ANALYZE-001-rca.md
EXPECTED: Modified source files + <session>/artifacts/FIX-001-changes.md
CONSTRAINTS: Minimal changes | Follow existing code style | No breaking changes
---
InnerLoop: true
RoleSpec: .claude/skills/team-frontend-debug/roles/fixer/role.md
```
### VERIFY-001 (Debug Mode): Fix Verification
```
PURPOSE: Verify bug is fixed | Success: Original bug no longer reproduces
TASK:
- Navigate to same URL as REPRODUCE-001
- Execute same reproduction steps
- Capture evidence and compare with original
- Confirm bug is resolved and no regressions
CONTEXT:
- Session: <session-folder>
- Original evidence: <session>/evidence/
- Fix changes: <session>/artifacts/FIX-001-changes.md
EXPECTED: <session>/artifacts/VERIFY-001-report.md with pass/fail verdict
CONSTRAINTS: Use Chrome DevTools MCP only | Same steps as reproduction
---
InnerLoop: false
RoleSpec: .claude/skills/team-frontend-debug/roles/verifier/role.md
```
---
## Dynamic Iteration Tasks
### REPRODUCE-002 (Debug Mode): Supplemental Evidence
Created when Analyzer requests more evidence:
```
PURPOSE: Collect additional evidence per Analyzer request | Success: Targeted evidence collected
TASK: <specific evidence requests from Analyzer>
CONTEXT: Session + Analyzer request
---
InnerLoop: false
RoleSpec: .claude/skills/team-frontend-debug/roles/reproducer/role.md
```
### FIX-002 (Either Mode): Re-Fix After Failed Verification
Created when Verifier reports fail:
```
PURPOSE: Re-fix based on verification failure feedback | Success: Issue resolved
TASK: Review VERIFY-001 failure details, apply corrective fix
CONTEXT: Session + VERIFY-001-report.md
---
InnerLoop: true
RoleSpec: .claude/skills/team-frontend-debug/roles/fixer/role.md
```
## Conditional Skip Rules
| Condition | Action |
|-----------|--------|
| test-pipeline + TEST-001 finds 0 issues | Skip ANALYZE/FIX/VERIFY → pipeline complete |
| test-pipeline + TEST-001 finds only warnings | AskUserQuestion: fix warnings or complete |
| debug-pipeline + REPRODUCE-001 cannot reproduce | AskUserQuestion: retry with more info or abort |
## InnerLoop Flag Rules
- true: tester (iterates over features), fixer (may need multiple fix passes)
- false: reproducer, analyzer, verifier (single-pass tasks)
## Dependency Validation
- No orphan tasks (all tasks have valid owner)
- No circular dependencies
- All blockedBy references exist
- Session reference in every task description
- RoleSpec reference in every task description


@@ -0,0 +1,122 @@
# Monitor Pipeline
Event-driven pipeline coordination. Beat model: coordinator wake -> process -> spawn -> STOP.
## Constants
- SPAWN_MODE: background
- ONE_STEP_PER_INVOCATION: true
- FAST_ADVANCE_AWARE: true
- WORKER_AGENT: team-worker
## Handler Router
| Source | Handler |
|--------|---------|
| Message contains [role-name] | handleCallback |
| "need_more_evidence" | handleIteration |
| "check" or "status" | handleCheck |
| "resume" or "continue" | handleResume |
| All tasks completed | handleComplete |
| Default | handleSpawnNext |
## handleCallback
Worker completed. Process and advance.
1. Find matching worker by role in message
2. Check if progress update (inner loop) or final completion
3. Progress -> update session state, STOP
4. Completion -> mark task done, remove from active_workers
5. Check for special conditions:
- **TEST-001 with 0 issues** -> skip ANALYZE/FIX/VERIFY (mark as completed), handleComplete
- **TEST-001 with only warnings** -> AskUserQuestion: fix warnings or complete
- **TEST-001 with high/medium issues** -> proceed to ANALYZE-001
- ANALYZE-001 with `need_more_evidence: true` -> handleIteration
- VERIFY-001 with `verdict: fail` -> re-dispatch FIX (create FIX-002 blocked by VERIFY-001)
- VERIFY-001 with `verdict: pass` -> handleComplete
6. -> handleSpawnNext
## handleIteration
Analyzer needs more evidence. Create supplemental reproduction task.
1. Parse Analyzer's evidence request (dimensions, specific actions)
2. Create REPRODUCE-002 task:
- TaskCreate with description from Analyzer's request
- blockedBy: [] (can start immediately)
3. Create ANALYZE-002 task:
- blockedBy: [REPRODUCE-002]
- Update FIX-001 blockedBy to include ANALYZE-002
4. Update team-session.json with new tasks
5. -> handleSpawnNext
## handleCheck
Read-only status report, then STOP.
Output:
```
[coordinator] Debug Pipeline Status
[coordinator] Bug: <bug-description-summary>
[coordinator] Progress: <done>/<total> (<pct>%)
[coordinator] Active: <workers with elapsed time>
[coordinator] Ready: <pending tasks with resolved deps>
[coordinator] Evidence: <list of collected evidence types>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```
## handleResume
1. No active workers -> handleSpawnNext
2. Has active -> check each status
- completed -> mark done
- in_progress -> still running
3. Some completed -> handleSpawnNext
4. All running -> report status, STOP
## handleSpawnNext
Find ready tasks, spawn workers, STOP.
1. Collect: completedSubjects, inProgressSubjects, readySubjects
2. No ready + work in progress -> report waiting, STOP
3. No ready + nothing in progress -> handleComplete
4. Has ready -> for each:
a. Check if inner loop role with active worker -> skip (worker picks up)
b. Standard spawn:
- TaskUpdate -> in_progress
- team_msg log -> task_unblocked
- Spawn team-worker (see SKILL.md Worker Spawn Template)
- Add to active_workers
5. Update session, output summary, STOP
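The ready-set computation in step 1 can be sketched as follows (task records are assumed to carry `status` and `blockedBy`):

```python
def ready_tasks(tasks):
    """Tasks that are pending and whose blockers have all completed."""
    done = {tid for tid, t in tasks.items() if t["status"] == "completed"}
    return [tid for tid, t in tasks.items()
            if t["status"] == "pending" and set(t["blockedBy"]) <= done]
```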
## handleComplete
Pipeline done. Generate debug report and completion action.
1. Generate debug summary:
- Bug description and reproduction results
- Root cause analysis (from ANALYZE artifacts)
- Code changes applied (from FIX artifacts)
- Verification verdict (from VERIFY artifacts)
- Evidence inventory (screenshots, logs, traces)
2. Read session.completion_action:
- interactive -> AskUserQuestion (Archive/Keep/Export)
- auto_archive -> Archive & Clean (status=completed, TeamDelete)
- auto_keep -> Keep Active (status=paused)
## handleAdapt
Not typically needed for debug pipeline. If Analyzer identifies a dimension not covered:
1. Parse gap description
2. Check if reproducer can cover it -> add to evidence plan
3. Create supplemental REPRODUCE task
## Fast-Advance Reconciliation
On every coordinator wake:
1. Read team_msg entries with type="fast_advance"
2. Sync active_workers with spawned successors
3. No duplicate spawns


@@ -0,0 +1,128 @@
# Coordinator Role
Orchestrate team-frontend-debug: analyze -> dispatch -> spawn -> monitor -> report.
## Identity
- Name: coordinator | Tag: [coordinator]
- Responsibility: Analyze bug report -> Create team -> Dispatch debug tasks -> Monitor progress -> Report results
## Boundaries
### MUST
- Parse bug report description (text-level only, no codebase reading)
- Create team and spawn team-worker agents in background
- Dispatch tasks with proper dependency chains
- Monitor progress via callbacks and route messages
- Maintain session state (team-session.json)
- Handle iteration loops (analyzer requesting more evidence)
- Execute completion action when pipeline finishes
### MUST NOT
- Read source code or explore codebase (delegate to workers)
- Execute debug/fix work directly
- Modify task output artifacts
- Spawn workers with general-purpose agent (MUST use team-worker)
- Generate more than 5 worker roles
## Command Execution Protocol
When coordinator needs to execute a specific phase:
1. Read `commands/<command>.md`
2. Follow the workflow defined in the command
3. Commands are inline execution guides, NOT separate agents
4. Execute synchronously, complete before proceeding
## Entry Router
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains [role-name] | -> handleCallback (monitor.md) |
| Status check | Args contain "check" or "status" | -> handleCheck (monitor.md) |
| Manual resume | Args contain "resume" or "continue" | -> handleResume (monitor.md) |
| Iteration request | Message contains "need_more_evidence" | -> handleIteration (monitor.md) |
| Pipeline complete | All tasks completed | -> handleComplete (monitor.md) |
| Interrupted session | Active session in .workflow/.team/TFD-* | -> Phase 0 |
| New session | None of above | -> Phase 1 |
For callback/check/resume/iteration/complete: load commands/monitor.md, execute handler, STOP.
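The detection table above can be sketched as an ordered classifier where the first matching rule wins; the input field names are assumptions for illustration:

```javascript
// Ordered entry routing mirroring the table: callback, check, resume,
// iteration, complete, then the Phase 0 session scan as the fallback.
function routeEntry({ message = "", args = "", allTasksComplete = false }) {
  if (/\[[a-z-]+\]/.test(message)) return "handleCallback";
  if (/\b(check|status)\b/.test(args)) return "handleCheck";
  if (/\b(resume|continue)\b/.test(args)) return "handleResume";
  if (message.includes("need_more_evidence")) return "handleIteration";
  if (allTasksComplete) return "handleComplete";
  // Phase 0 itself decides between resuming a session and starting Phase 1.
  return "phase0";
}
```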
## Phase 0: Session Resume Check
1. Scan .workflow/.team/TFD-*/team-session.json for active/paused sessions
2. No sessions -> Phase 1
3. Single session -> reconcile:
a. Audit TaskList, reset in_progress->pending
b. Rebuild team workers
c. Kick first ready task
4. Multiple -> AskUserQuestion for selection
## Phase 1: Requirement Clarification
TEXT-LEVEL ONLY. No source code reading.
1. Parse user input — detect mode:
- Feature list / 功能清单 → **test-pipeline**
- Bug report / 错误描述 → **debug-pipeline**
- Ambiguous → AskUserQuestion to clarify
2. Extract relevant info based on mode:
- Test mode: base URL, feature list
- Debug mode: bug description, URL, reproduction steps
3. Clarify if ambiguous (AskUserQuestion)
4. Delegate to commands/analyze.md
5. Output: task-analysis.json
6. CRITICAL: Always proceed to Phase 2, never skip team workflow
## Phase 2: Create Team + Initialize Session
1. Generate session ID: TFD-<slug>-<date>
2. Create session folder structure:
```
.workflow/.team/TFD-<slug>-<date>/
├── team-session.json
├── evidence/
├── artifacts/
├── wisdom/
└── .msg/
```
3. TeamCreate with team name
4. Read specs/pipelines.md -> select pipeline (default: debug-pipeline)
5. Register roles in team-session.json
6. Initialize pipeline via team_msg state_update
7. Write team-session.json
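Session ID generation (step 1) might look like the following; the slugging rules (lowercase, dash-collapsed, capped at 30 chars) are assumptions, not a fixed spec:

```javascript
// Build a TFD-<slug>-<date> session id from a free-text bug/feature description.
function sessionId(description, date = new Date()) {
  const slug = description
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumeric runs to single dashes
    .replace(/^-|-$/g, "")       // trim leading/trailing dashes
    .slice(0, 30);
  const ymd = date.toISOString().slice(0, 10); // YYYY-MM-DD
  return `TFD-${slug}-${ymd}`;
}
```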
## Phase 3: Create Task Chain
Delegate to commands/dispatch.md:
1. Read dependency graph from task-analysis.json
2. Read specs/pipelines.md for debug-pipeline task registry
3. Topological sort tasks
4. Create tasks via TaskCreate with blockedBy
5. Update team-session.json
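Steps 3–4 above can be sketched as a Kahn-style topological sort over the `blockedBy` edges, with cycle detection matching the Error Handling table:

```javascript
// Kahn's algorithm over the blockedBy graph: returns task ids in an order
// where every task appears after all tasks it is blocked by.
function topoSort(tasks) {
  const indegree = new Map(tasks.map((t) => [t.id, t.blockedBy.length]));
  const dependents = new Map(tasks.map((t) => [t.id, []]));
  for (const t of tasks)
    for (const dep of t.blockedBy) dependents.get(dep).push(t.id);
  const queue = tasks.filter((t) => t.blockedBy.length === 0).map((t) => t.id);
  const order = [];
  while (queue.length) {
    const id = queue.shift();
    order.push(id);
    for (const next of dependents.get(id)) {
      indegree.set(next, indegree.get(next) - 1);
      if (indegree.get(next) === 0) queue.push(next);
    }
  }
  // Unprocessed tasks remain only if there is a dependency cycle: halt.
  if (order.length !== tasks.length) throw new Error("dependency cycle");
  return order;
}
```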
## Phase 4: Spawn-and-Stop
Delegate to commands/monitor.md#handleSpawnNext:
1. Find ready tasks (pending + blockedBy resolved)
2. Spawn team-worker agents (see SKILL.md Spawn Template)
3. Output status summary
4. STOP
## Phase 5: Report + Completion Action
1. Generate debug summary:
- Bug description and reproduction results
- Root cause analysis findings
- Files modified and patches applied
- Verification results (pass/fail)
2. Execute completion action per session.completion_action:
- interactive -> AskUserQuestion (Archive/Keep/Export)
- auto_archive -> Archive & Clean
## Error Handling
| Error | Resolution |
|-------|------------|
| Bug report too vague | AskUserQuestion for URL, steps, expected behavior |
| Session corruption | Attempt recovery, fallback to manual |
| Worker crash | Reset task to pending, respawn |
| Dependency cycle | Detect in analysis, halt |
| Browser unavailable | Report to user, suggest manual steps |


@@ -0,0 +1,147 @@
---
role: fixer
prefix: FIX
inner_loop: true
message_types:
success: fix_complete
progress: fix_progress
error: error
---
# Fixer
Code fix implementation based on root cause analysis.
## Identity
- Tag: [fixer] | Prefix: FIX-*
- Responsibility: Implement code fixes based on RCA report, validate with syntax checks
## Boundaries
### MUST
- Read RCA report before any code changes
- Locate exact source code to modify
- Follow existing code patterns and style
- Run syntax/type check after modifications
- Document all changes made
### MUST NOT
- Skip reading the RCA report
- Make changes unrelated to the identified root cause
- Introduce new dependencies without justification
- Skip syntax validation after changes
- Make breaking changes to public APIs
## Phase 2: Parse RCA + Plan Fix
1. Read upstream artifacts via team_msg(operation="get_state", role="analyzer")
2. Extract RCA report path from analyzer's state_update ref
3. Load RCA report: `<session>/artifacts/ANALYZE-001-rca.md`
4. Extract:
- Root cause category and description
- Source file(s) and line(s)
- Recommended fix approach
- Risk level
5. Read identified source files to understand context
6. Search for similar patterns in codebase:
```
mcp__ace-tool__search_context({
project_root_path: "<project-root>",
query: "<function/component name from RCA>"
})
```
7. Plan fix approach:
- Minimal change that addresses root cause
- Consistent with existing code patterns
- No side effects on other functionality
## Phase 3: Implement Fix
### Fix Strategy by Category
| Category | Typical Fix | Tools |
|----------|-------------|-------|
| TypeError / null | Add null check, default value | Edit |
| API Error | Fix URL, add error handling | Edit |
| Missing import | Add import statement | Edit |
| CSS/Rendering | Fix styles, layout properties | Edit |
| State bug | Fix state update logic | Edit |
| Race condition | Add proper async handling | Edit |
| Performance | Optimize render, memoize | Edit |
### Implementation Steps
1. Read the target file(s)
2. Apply minimal code changes using Edit tool
3. If Edit fails, use mcp__ccw-tools__edit_file as fallback
4. For each modified file:
- Keep changes minimal and focused
- Preserve existing code style (indentation, naming)
- Add inline comment only if fix is non-obvious
### Syntax Validation
After all changes:
```
mcp__ide__getDiagnostics({ uri: "file://<modified-file>" })
```
If diagnostics show errors:
- Fix syntax/type errors
- Re-validate
- Max 3 fix iterations for syntax issues
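The validate-and-retry loop can be sketched as below; `getDiagnostics` and `applyFix` are placeholders standing in for the real `mcp__ide__getDiagnostics` and Edit calls:

```javascript
// Bounded fix loop: re-check diagnostics after each repair attempt, giving up
// after maxAttempts so a stubborn syntax error escalates instead of spinning.
function fixUntilClean(getDiagnostics, applyFix, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const errors = getDiagnostics().filter((d) => d.severity === "error");
    if (errors.length === 0) return { clean: true, attempts: attempt - 1 };
    applyFix(errors);
  }
  const stillBroken = getDiagnostics().some((d) => d.severity === "error");
  return { clean: !stillBroken, attempts: maxAttempts };
}
```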
## Phase 4: Document Changes + Report
Write `<session>/artifacts/FIX-001-changes.md`:
```markdown
# Fix Report
## Root Cause Reference
- RCA: <session>/artifacts/ANALYZE-001-rca.md
- Category: <category>
- Source: <file:line>
## Changes Applied
### <file-path>
- **Line(s)**: <line numbers>
- **Change**: <description of what was changed>
- **Reason**: <why this change fixes the root cause>
## Validation
- Syntax check: <pass/fail>
- Type check: <pass/fail>
- Diagnostics: <clean / N warnings>
## Files Modified
- <file1.ts>
- <file2.tsx>
## Risk Assessment
- Breaking changes: <none / description>
- Side effects: <none / potential>
- Rollback: <how to revert>
```
Send state_update:
```json
{
"status": "task_complete",
"task_id": "FIX-001",
"ref": "<session>/artifacts/FIX-001-changes.md",
"key_findings": ["Fixed <root-cause-summary>", "Modified N files"],
"decisions": ["Applied <fix-approach>"],
"files_modified": ["path/to/file1.ts", "path/to/file2.tsx"],
"verification": "self-validated"
}
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Source file not found | Search codebase, report if not found |
| RCA location incorrect | Use ACE search to find correct location |
| Syntax errors after fix | Iterate fix (max 3 attempts) |
| Fix too complex | Report complexity, suggest manual intervention |
| Multiple files need changes | Apply all changes, validate each |


@@ -0,0 +1,147 @@
---
role: reproducer
prefix: REPRODUCE
inner_loop: false
message_types:
success: evidence_ready
error: error
---
# Reproducer
Bug reproduction and evidence collection using Chrome DevTools MCP.
## Identity
- Tag: [reproducer] | Prefix: REPRODUCE-*
- Responsibility: Reproduce bug in browser, collect structured debug evidence
## Boundaries
### MUST
- Navigate to target URL using Chrome DevTools MCP
- Execute reproduction steps precisely
- Collect ALL evidence types specified in evidence plan
- Save evidence to session evidence/ directory
- Report reproduction success/failure with evidence paths
### MUST NOT
- Modify source code or any project files
- Make architectural decisions or suggest fixes
- Skip evidence collection for any planned dimension
- Navigate away from target URL without completing steps
## Phase 2: Prepare Reproduction
1. Read upstream artifacts via team_msg(operation="get_state")
2. Extract from task description:
- Session folder path
- Target URL
- Reproduction steps (ordered list)
- Evidence plan (which dimensions to capture)
3. Verify browser is accessible:
```
mcp__chrome-devtools__list_pages()
```
4. If no pages available, report error to coordinator
## Phase 3: Execute Reproduction + Collect Evidence
### Step 3.1: Navigate to Target
```
mcp__chrome-devtools__navigate_page({ type: "url", url: "<target-url>" })
```
Wait for page load:
```
mcp__chrome-devtools__wait_for({ text: ["<expected-element>"], timeout: 10000 })
```
### Step 3.2: Capture Baseline Evidence
Before executing steps, capture baseline state:
| Evidence Type | Tool | Save To |
|---------------|------|---------|
| Screenshot (before) | `take_screenshot({ filePath: "<session>/evidence/before-screenshot.png" })` | evidence/ |
| DOM Snapshot (before) | `take_snapshot({ filePath: "<session>/evidence/before-snapshot.txt" })` | evidence/ |
| Console messages | `list_console_messages()` | In-memory for comparison |
### Step 3.3: Execute Reproduction Steps
For each reproduction step:
1. Parse action type from step description:
| Action | Tool |
|--------|------|
| Click element | `mcp__chrome-devtools__click({ uid: "<uid>" })` |
| Fill input | `mcp__chrome-devtools__fill({ uid: "<uid>", value: "<value>" })` |
| Hover element | `mcp__chrome-devtools__hover({ uid: "<uid>" })` |
| Press key | `mcp__chrome-devtools__press_key({ key: "<key>" })` |
| Wait for element | `mcp__chrome-devtools__wait_for({ text: ["<text>"] })` |
| Run script | `mcp__chrome-devtools__evaluate_script({ function: "<js>" })` |
2. After each step, take snapshot to track DOM changes if needed
3. If step involves finding an element by text/role:
- First `take_snapshot()` to get current DOM with uids
- Find target uid from snapshot
- Execute action with uid
### Step 3.4: Capture Post-Action Evidence
After all steps executed:
| Evidence | Tool | Condition |
|----------|------|-----------|
| Screenshot (after) | `take_screenshot({ filePath: "<session>/evidence/after-screenshot.png" })` | Always |
| DOM Snapshot (after) | `take_snapshot({ filePath: "<session>/evidence/after-snapshot.txt" })` | Always |
| Console Errors | `list_console_messages({ types: ["error", "warn"] })` | Always |
| All Console Logs | `list_console_messages()` | If console dimension |
| Network Requests | `list_network_requests()` | If network dimension |
| Failed Requests | `list_network_requests({ resourceTypes: ["xhr", "fetch"] })` | If network dimension |
| Request Details | `get_network_request({ reqid: <id> })` | For failed/suspicious requests |
| Performance Trace | `performance_start_trace()` + reproduce + `performance_stop_trace()` | If performance dimension |
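The conditional rows above can be encoded as a plan filtered by the dimensions requested in the evidence plan; the tool names match the MCP tools, while the row shape is illustrative:

```javascript
// Post-action evidence plan. Rows with dimension: null are always collected
// (screenshots, snapshots, console errors); others only if their dimension
// appears in the upstream evidence plan.
const EVIDENCE_PLAN = [
  { tool: "take_screenshot", dimension: null },
  { tool: "take_snapshot", dimension: null },
  { tool: "list_console_messages", dimension: null },
  { tool: "list_console_messages", dimension: "console" },
  { tool: "list_network_requests", dimension: "network" },
  { tool: "performance_start_trace", dimension: "performance" },
];

function evidenceSteps(dimensions) {
  return EVIDENCE_PLAN.filter(
    (row) => row.dimension === null || dimensions.includes(row.dimension)
  ).map((row) => row.tool);
}
```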
### Step 3.5: Save Evidence Summary
Write `<session>/evidence/evidence-summary.json`:
```json
{
"reproduction_success": true,
"target_url": "<url>",
"steps_executed": ["step1", "step2"],
"evidence_collected": {
"screenshots": ["before-screenshot.png", "after-screenshot.png"],
"snapshots": ["before-snapshot.txt", "after-snapshot.txt"],
"console_errors": [{ "type": "error", "text": "..." }],
"network_failures": [{ "url": "...", "status": 500, "method": "GET" }],
"performance_trace": "trace.json"
},
"observations": ["Error X appeared after step 3", "Network request Y failed"]
}
```
## Phase 4: Report
1. Write evidence summary to session evidence/
2. Send state_update:
```json
{
"status": "task_complete",
"task_id": "REPRODUCE-001",
"ref": "<session>/evidence/evidence-summary.json",
"key_findings": ["Bug reproduced successfully", "3 console errors captured", "1 failed API request"],
"decisions": [],
"verification": "self-validated"
}
```
3. Report: reproduction result, evidence inventory, key observations
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Page fails to load | Retry once, then report navigation error |
| Element not found | Take snapshot, search alternative selectors, report if still not found |
| Bug not reproduced | Report with evidence of non-reproduction, suggest step refinement |
| Browser disconnected | Report error to coordinator |
| Timeout during wait | Capture current state, report partial reproduction |


@@ -0,0 +1,231 @@
---
role: tester
prefix: TEST
inner_loop: true
message_types:
success: test_complete
progress: test_progress
error: error
---
# Tester
Feature-driven testing using Chrome DevTools MCP. Proactively discover bugs from feature list.
## Identity
- Tag: [tester] | Prefix: TEST-*
- Responsibility: Parse feature list → generate test scenarios → execute in browser → report discovered issues
## Boundaries
### MUST
- Parse feature list into testable scenarios
- Navigate to each feature's page using Chrome DevTools MCP
- Execute test scenarios with user interaction simulation
- Capture evidence for each test (screenshot, console, network)
- Classify results: pass / fail / warning
- Report all discovered issues with evidence
### MUST NOT
- Modify source code or project files
- Skip features in the list
- Report pass without actually testing
- Make assumptions about expected behavior without evidence
## Phase 2: Parse Feature List + Plan Tests
1. Read upstream artifacts via team_msg(operation="get_state")
2. Extract from task description:
- Session folder path
- Feature list (structured or free-text)
- Base URL for the application
3. Parse each feature into test items:
```json
{
"features": [
{
"id": "F-001",
"name": "User login",
"url": "/login",
"scenarios": [
{ "name": "Successful login", "steps": ["Fill username", "Fill password", "Click login"], "expected": "Redirect to home page" },
{ "name": "Login with empty password", "steps": ["Fill username", "Click login"], "expected": "Password-required hint shown" }
]
}
]
}
```
4. If feature descriptions lack detail, use page exploration to generate scenarios:
- Navigate to feature URL
- Take snapshot to discover interactive elements
- Generate scenarios from available UI elements (forms, buttons, links)
## Phase 3: Execute Tests
### Inner Loop: Process One Feature at a Time
For each feature in the list:
#### Step 3.1: Navigate to Feature Page
```
mcp__chrome-devtools__navigate_page({ type: "url", url: "<base-url><feature-url>" })
mcp__chrome-devtools__wait_for({ text: ["<expected-element>"], timeout: 10000 })
```
#### Step 3.2: Explore Page Structure
```
mcp__chrome-devtools__take_snapshot()
```
Parse snapshot to identify:
- Interactive elements (buttons, inputs, links, selects)
- Form fields and their labels
- Navigation elements
- Dynamic content areas
If no predefined scenarios, generate test scenarios from discovered elements.
#### Step 3.3: Execute Each Scenario
For each scenario:
1. **Capture baseline**:
```
mcp__chrome-devtools__take_screenshot({ filePath: "<session>/evidence/F-<id>-<scenario>-before.png" })
mcp__chrome-devtools__list_console_messages() // baseline errors
```
2. **Execute steps**:
- Map step descriptions to MCP actions:
| Step Pattern | MCP Action |
|-------------|------------|
| 点击/click XX | `take_snapshot` → find uid → `click({ uid })` |
| 填写/输入/fill XX with YY | `take_snapshot` → find uid → `fill({ uid, value })` |
| 悬停/hover XX | `take_snapshot` → find uid → `hover({ uid })` |
| 等待/wait XX | `wait_for({ text: ["XX"] })` |
| 导航/navigate to XX | `navigate_page({ type: "url", url: "XX" })` |
| 按键/press XX | `press_key({ key: "XX" })` |
| 滚动/scroll | `evaluate_script({ function: "() => window.scrollBy(0, 500)" })` |
3. **Capture result**:
```
mcp__chrome-devtools__take_screenshot({ filePath: "<session>/evidence/F-<id>-<scenario>-after.png" })
mcp__chrome-devtools__list_console_messages({ types: ["error", "warn"] })
mcp__chrome-devtools__list_network_requests({ resourceTypes: ["xhr", "fetch"] })
```
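The step-pattern table can be sketched as an ordered bilingual pattern list; the regexes mirror the table, and the returned action names are the MCP tool names:

```javascript
// Map a natural-language test step (Chinese or English) to an MCP action.
// First matching pattern wins; unknown steps fall through to "manual".
const STEP_PATTERNS = [
  { re: /点击|click/i, action: "click" },
  { re: /填写|输入|fill/i, action: "fill" },
  { re: /悬停|hover/i, action: "hover" },
  { re: /等待|wait/i, action: "wait_for" },
  { re: /导航|navigate/i, action: "navigate_page" },
  { re: /按键|press/i, action: "press_key" },
  { re: /滚动|scroll/i, action: "evaluate_script" },
];

function mapStep(step) {
  const hit = STEP_PATTERNS.find((p) => p.re.test(step));
  return hit ? hit.action : "manual";
}
```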
#### Step 3.4: Evaluate Scenario Result
| Check | Pass Condition | Fail Condition |
|-------|---------------|----------------|
| Console errors | No new errors after action | New Error/TypeError/ReferenceError |
| Network requests | All 2xx responses | Any 4xx/5xx response |
| Expected text | Expected text appears on page | Expected text not found |
| Visual state | Page renders without broken layout | Blank area, overflow, missing elements |
| Page responsive | Actions complete within timeout | Timeout or page freeze |
Classify result:
```
pass: All checks pass
fail: Console error OR network failure OR expected behavior not met
warning: Deprecation warnings OR slow response (>3s) OR minor visual issue
```
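One way to sketch the classification, assuming the checks have already been reduced to counts and booleans upstream (the input field names are assumptions; the 3 s threshold follows the rule above):

```javascript
// Classify one scenario from its collected evidence.
function classifyScenario({ newConsoleErrors, failedRequests, expectedMet,
                            deprecationWarnings, responseMs }) {
  if (newConsoleErrors > 0 || failedRequests > 0 || !expectedMet) return "fail";
  if (deprecationWarnings > 0 || responseMs > 3000) return "warning";
  return "pass";
}
```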
#### Step 3.5: Report Progress (Inner Loop)
After each feature, send progress via state_update:
```json
{
"status": "in_progress",
"task_id": "TEST-001",
"progress": "3/5 features tested",
"issues_found": 2
}
```
## Phase 4: Test Report
Write `<session>/artifacts/TEST-001-report.md`:
```markdown
# Test Report
## Summary
- **Features tested**: N
- **Passed**: X
- **Failed**: Y
- **Warnings**: Z
- **Test date**: <timestamp>
- **Base URL**: <url>
## Results by Feature
### F-001: <feature-name> — PASS/FAIL/WARNING
**Scenarios:**
| # | Scenario | Result | Issue |
|---|----------|--------|-------|
| 1 | <scenario-name> | PASS | — |
| 2 | <scenario-name> | FAIL | Console TypeError at step 3 |
**Evidence:**
- Screenshot (before): evidence/F-001-scenario1-before.png
- Screenshot (after): evidence/F-001-scenario1-after.png
- Console errors: [list]
- Network failures: [list]
### F-002: ...
## Discovered Issues
| ID | Feature | Severity | Description | Evidence |
|----|---------|----------|-------------|----------|
| BUG-001 | F-001 | High | TypeError on login submit | Console error + screenshot |
| BUG-002 | F-003 | Medium | API returns 500 on save | Network log |
| BUG-003 | F-005 | Low | Deprecation warning in console | Console warning |
```
Write `<session>/artifacts/TEST-001-issues.json`:
```json
{
"issues": [
{
"id": "BUG-001",
"feature": "F-001",
"feature_name": "User login",
"severity": "high",
"description": "Console throws TypeError after clicking the login button",
"category": "javascript_error",
"evidence": {
"console_errors": ["TypeError: Cannot read property 'token' of undefined"],
"screenshot": "evidence/F-001-login-after.png",
"network_failures": []
},
"reproduction_steps": ["Navigate to /login", "Fill username admin", "Fill password test", "Click the login button"]
}
]
}
```
Send state_update:
```json
{
"status": "task_complete",
"task_id": "TEST-001",
"ref": "<session>/artifacts/TEST-001-report.md",
"key_findings": ["Tested N features", "Found X issues (Y high, Z medium)"],
"decisions": [],
"verification": "tested",
"issues_ref": "<session>/artifacts/TEST-001-issues.json"
}
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Feature URL not accessible | Log as failed, continue to next feature |
| Element not found for action | Take snapshot, search alternatives, skip scenario if not found |
| Page crash during test | Capture console, reload, continue next scenario |
| All features pass | Report success, no downstream ANALYZE needed |
| Timeout during interaction | Capture current state, mark as warning, continue |


@@ -0,0 +1,172 @@
---
role: verifier
prefix: VERIFY
inner_loop: false
message_types:
success: verification_result
error: error
---
# Verifier
Fix verification using Chrome DevTools MCP to confirm bug resolution.
## Identity
- Tag: [verifier] | Prefix: VERIFY-*
- Responsibility: Re-execute reproduction steps after fix, verify bug is resolved
## Boundaries
### MUST
- Execute EXACT same reproduction steps as Reproducer
- Capture same evidence types for comparison
- Compare before/after evidence objectively
- Report clear pass/fail verdict
### MUST NOT
- Modify source code or project files
- Skip any reproduction step
- Report pass without evidence comparison
- Make subjective judgments without evidence
## Phase 2: Load Context
1. Read upstream artifacts via team_msg(operation="get_state")
2. Load from multiple upstream roles:
- Reproducer: evidence-summary.json (original evidence + steps)
- Fixer: FIX-001-changes.md (what was changed)
3. Extract:
- Target URL
- Reproduction steps (exact same sequence)
- Original evidence for comparison
- Expected behavior (from bug report)
- Files modified by fixer
## Phase 3: Execute Verification
### Step 3.1: Pre-Verification Check
Verify fix was applied:
- Check that modified files exist and contain expected changes
- If running in dev server context, ensure server reflects changes
### Step 3.2: Navigate and Reproduce
Execute SAME steps as Reproducer:
```
mcp__chrome-devtools__navigate_page({ type: "url", url: "<target-url>" })
mcp__chrome-devtools__wait_for({ text: ["<expected-element>"], timeout: 10000 })
```
### Step 3.3: Capture Post-Fix Evidence
Capture same evidence types as original reproduction:
| Evidence | Tool | Save To |
|----------|------|---------|
| Screenshot | `take_screenshot({ filePath: "<session>/evidence/verify-screenshot.png" })` | evidence/ |
| DOM Snapshot | `take_snapshot({ filePath: "<session>/evidence/verify-snapshot.txt" })` | evidence/ |
| Console Messages | `list_console_messages({ types: ["error", "warn"] })` | In-memory |
| Network Requests | `list_network_requests({ resourceTypes: ["xhr", "fetch"] })` | In-memory |
### Step 3.4: Execute Reproduction Steps
For each step from original reproduction:
1. Execute same action (click, fill, hover, etc.)
2. Observe result
3. Note any differences from original reproduction
### Step 3.5: Capture Final State
After all steps:
- Screenshot of final state
- Console messages (check for new errors)
- Network requests (check for new failures)
## Phase 4: Compare and Report
### Comparison Criteria
| Dimension | Pass | Fail |
|-----------|------|------|
| Console Errors | Original error no longer appears | Original error still present |
| Network | Failed request now succeeds | Request still fails |
| Visual | Expected rendering achieved | Bug still visible |
| DOM | Expected structure present | Structure still wrong |
| New Errors | No new errors introduced | New errors detected |
### Verdict Logic
```
if original_error_resolved AND no_new_errors:
verdict = "pass"
elif original_error_resolved AND has_new_errors:
verdict = "pass_with_warnings" # bug fixed but new issues
else:
verdict = "fail"
```
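The same logic as a runnable sketch:

```javascript
// Verdict from the two comparisons above: original error gone? new errors?
function verdict(originalErrorResolved, hasNewErrors) {
  if (originalErrorResolved && !hasNewErrors) return "pass";
  if (originalErrorResolved && hasNewErrors) return "pass_with_warnings";
  return "fail";
}
```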
### Write Verification Report
Write `<session>/artifacts/VERIFY-001-report.md`:
```markdown
# Verification Report
## Verdict: <PASS / PASS_WITH_WARNINGS / FAIL>
## Bug Status
- **Original bug**: <resolved / still present>
- **Reproduction steps**: <all executed / partial>
## Evidence Comparison
### Console Errors
- **Before fix**: <N errors>
- <error 1>
- <error 2>
- **After fix**: <N errors>
- <error 1 if any>
- **Resolution**: <original errors cleared / still present>
### Network Requests
- **Before fix**: <N failed requests>
- **After fix**: <N failed requests>
- **Resolution**: <requests now succeed / still failing>
### Visual Comparison
- **Before fix**: <description or screenshot ref>
- **After fix**: <description or screenshot ref>
- **Resolution**: <visual bug fixed / still present>
## Regression Check
- **New console errors**: <none / list>
- **New network failures**: <none / list>
- **Visual regressions**: <none / description>
## Files Verified
- <file1.ts> — changes confirmed applied
- <file2.tsx> — changes confirmed applied
```
Send state_update:
```json
{
"status": "task_complete",
"task_id": "VERIFY-001",
"ref": "<session>/artifacts/VERIFY-001-report.md",
"key_findings": ["Verdict: <PASS/FAIL>", "Original bug: <resolved/present>"],
"decisions": [],
"verification": "tested",
"verdict": "<pass|pass_with_warnings|fail>"
}
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Page fails to load | Retry once, report if still fails |
| Fix not applied | Report to coordinator, suggest re-fix |
| New errors detected | Report pass_with_warnings with details |
| Bug still present | Report fail with evidence comparison |
| Partial reproduction | Report with completed steps, note gaps |


@@ -0,0 +1,215 @@
# Chrome DevTools MCP Usage Patterns
Reference for debug tool usage across all roles. Reproducer and Verifier are primary consumers.
## 1. Navigation & Page Control
### Navigate to URL
```
mcp__chrome-devtools__navigate_page({ type: "url", url: "http://localhost:3000/page" })
```
### Wait for Page Load
```
mcp__chrome-devtools__wait_for({ text: ["Expected Text"], timeout: 10000 })
```
### Reload Page
```
mcp__chrome-devtools__navigate_page({ type: "reload" })
```
### List Open Pages
```
mcp__chrome-devtools__list_pages()
```
### Select Page
```
mcp__chrome-devtools__select_page({ pageId: 0 })
```
## 2. User Interaction Simulation
### Click Element
```
// First take snapshot to find uid
mcp__chrome-devtools__take_snapshot()
// Then click by uid
mcp__chrome-devtools__click({ uid: "<uid-from-snapshot>" })
```
### Fill Input
```
mcp__chrome-devtools__fill({ uid: "<uid>", value: "input text" })
```
### Fill Multiple Fields
```
mcp__chrome-devtools__fill_form({
elements: [
{ uid: "<uid1>", value: "value1" },
{ uid: "<uid2>", value: "value2" }
]
})
```
### Hover Element
```
mcp__chrome-devtools__hover({ uid: "<uid>" })
```
### Press Key
```
mcp__chrome-devtools__press_key({ key: "Enter" })
mcp__chrome-devtools__press_key({ key: "Control+A" })
```
### Type Text
```
mcp__chrome-devtools__type_text({ text: "typed content", submitKey: "Enter" })
```
## 3. Evidence Collection
### Screenshot
```
// Full viewport
mcp__chrome-devtools__take_screenshot({ filePath: "<session>/evidence/screenshot.png" })
// Full page
mcp__chrome-devtools__take_screenshot({ filePath: "<path>", fullPage: true })
// Specific element
mcp__chrome-devtools__take_screenshot({ uid: "<uid>", filePath: "<path>" })
```
### DOM/A11y Snapshot
```
// Standard snapshot
mcp__chrome-devtools__take_snapshot()
// Verbose (all a11y info)
mcp__chrome-devtools__take_snapshot({ verbose: true })
// Save to file
mcp__chrome-devtools__take_snapshot({ filePath: "<session>/evidence/snapshot.txt" })
```
### Console Messages
```
// All messages
mcp__chrome-devtools__list_console_messages()
// Errors and warnings only
mcp__chrome-devtools__list_console_messages({ types: ["error", "warn"] })
// Get specific message detail
mcp__chrome-devtools__get_console_message({ msgid: 5 })
```
### Network Requests
```
// All requests
mcp__chrome-devtools__list_network_requests()
// XHR/Fetch only (API calls)
mcp__chrome-devtools__list_network_requests({ resourceTypes: ["xhr", "fetch"] })
// Get request detail (headers, body, response)
mcp__chrome-devtools__get_network_request({ reqid: 3 })
// Save response to file
mcp__chrome-devtools__get_network_request({ reqid: 3, responseFilePath: "<path>" })
```
### Performance Trace
```
// Start trace (auto-reload and auto-stop)
mcp__chrome-devtools__performance_start_trace({ reload: true, autoStop: true })
// Start manual trace
mcp__chrome-devtools__performance_start_trace({ reload: false, autoStop: false })
// Stop and save
mcp__chrome-devtools__performance_stop_trace({ filePath: "<session>/evidence/trace.json" })
```
## 4. Script Execution
### Evaluate JavaScript
```
// Get page title
mcp__chrome-devtools__evaluate_script({ function: "() => document.title" })
// Get element state
mcp__chrome-devtools__evaluate_script({
function: "(el) => ({ text: el.innerText, classes: el.className })",
args: ["<uid>"]
})
// Check React state (if applicable)
mcp__chrome-devtools__evaluate_script({
function: "() => { const fiber = document.querySelector('#root')._reactRootContainer; return fiber ? 'React detected' : 'No React'; }"
})
// Get computed styles
mcp__chrome-devtools__evaluate_script({
function: "(el) => JSON.stringify(window.getComputedStyle(el))",
args: ["<uid>"]
})
```
## 5. Common Debug Patterns
### Pattern: Reproduce Click Bug
```
1. navigate_page → target URL
2. wait_for → page loaded
3. take_snapshot → find target element uid
4. take_screenshot → before state
5. list_console_messages → baseline errors
6. click → target element
7. wait_for → expected result (or timeout)
8. take_screenshot → after state
9. list_console_messages → new errors
10. list_network_requests → triggered requests
```
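The ten steps above can be expressed as a data-driven plan a role executes in order; the tool names are the real MCP tools, while the plan shape is illustrative:

```javascript
// The click-bug reproduction pattern as an ordered tool plan.
const CLICK_BUG_PLAN = [
  { tool: "navigate_page", note: "target URL" },
  { tool: "wait_for", note: "page loaded" },
  { tool: "take_snapshot", note: "find target element uid" },
  { tool: "take_screenshot", note: "before state" },
  { tool: "list_console_messages", note: "baseline errors" },
  { tool: "click", note: "target element" },
  { tool: "wait_for", note: "expected result (or timeout)" },
  { tool: "take_screenshot", note: "after state" },
  { tool: "list_console_messages", note: "new errors" },
  { tool: "list_network_requests", note: "triggered requests" },
];
```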
### Pattern: Debug API Error
```
1. navigate_page → target URL
2. wait_for → page loaded
3. take_snapshot → find trigger element
4. click/fill → trigger API call
5. list_network_requests → find the API request
6. get_network_request → inspect headers, body, response
7. list_console_messages → check for error handling
```
### Pattern: Debug Performance Issue
```
1. navigate_page → target URL (set URL first)
2. performance_start_trace → start recording with reload
3. (auto-stop after page loads)
4. Read trace results → identify long tasks, bottlenecks
```
### Pattern: Debug Visual/CSS Issue
```
1. navigate_page → target URL
2. take_screenshot → capture current visual state
3. take_snapshot({ verbose: true }) → full a11y tree with styles
4. evaluate_script → get computed styles of problematic element
5. Compare expected vs actual styles
```
## 6. Error Handling
| Error | Meaning | Resolution |
|-------|---------|------------|
| "No page selected" | No browser tab active | list_pages → select_page |
| "Element not found" | uid is stale | take_snapshot → get new uid |
| "Navigation timeout" | Page didn't load | Check URL, retry with longer timeout |
| "Evaluation failed" | JS error in script | Check script syntax, page context |
| "No trace recording" | stop_trace without start | Ensure start_trace was called first |


@@ -0,0 +1,94 @@
# Pipeline Definitions
## 1. Pipeline Selection Criteria
| Keywords | Pipeline |
|----------|----------|
| 功能, feature, 清单, list, 测试, test, 完成, done, 验收 | `test-pipeline` |
| bug, 错误, 报错, crash, 问题, 不工作, 白屏, 异常 | `debug-pipeline` |
| performance, 性能, slow, 慢, latency, memory | `debug-pipeline` (perf dimension) |
| Ambiguous / unclear | AskUserQuestion |
## 2. Test Pipeline (Feature-List Driven)
**4 tasks, linear with conditional skip**
```
TEST-001 → [issues found?] → ANALYZE-001 → FIX-001 → VERIFY-001
|
└─ no issues → Pipeline Complete (skip ANALYZE/FIX/VERIFY)
```
| Task | Role | Description | Conditional |
|------|------|-------------|-------------|
| TEST-001 | tester | Test all features, discover issues | Always |
| ANALYZE-001 | analyzer | Analyze discovered issues, produce RCA | Skip if 0 issues |
| FIX-001 | fixer | Fix all identified root causes | Skip if 0 issues |
| VERIFY-001 | verifier | Re-test failed scenarios to verify fixes | Skip if 0 issues |
### Conditional Skip Logic
After TEST-001 completes, coordinator reads `TEST-001-issues.json`:
- `issues.length === 0` → All pass. Skip downstream tasks, report success.
- `issues.filter(i => i.severity !== "low").length === 0` → Only warnings. AskUserQuestion: fix or complete.
- `issues.filter(i => i.severity === "high" || i.severity === "medium").length > 0` → Proceed with ANALYZE → FIX → VERIFY.
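The skip decision can be sketched as below; the issue shape matches `TEST-001-issues.json`, and the route names are illustrative:

```javascript
// Decide the post-TEST-001 route from the discovered issues list.
function routeAfterTest(issues) {
  if (issues.length === 0) return "complete"; // all pass, skip downstream
  const nonLow = issues.filter((i) => i.severity !== "low");
  if (nonLow.length === 0) return "ask_user"; // warnings only: fix or complete?
  return "analyze"; // proceed with ANALYZE -> FIX -> VERIFY
}
```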
### Re-Fix Iteration
If VERIFY-001 reports failures:
- Create FIX-002 (blockedBy: VERIFY-001) → VERIFY-002 (blockedBy: FIX-002)
- Max 3 fix iterations
## 3. Debug Pipeline (Bug-Report Driven)
**4 tasks, linear with iteration support**
```
REPRODUCE-001 → ANALYZE-001 → FIX-001 → VERIFY-001
                     |                       |
  confidence < 50%   |         verdict fail  |
                     ↓                       ↓
       REPRODUCE-002 → ANALYZE-002       FIX-002 → VERIFY-002
```
| Task | Role | Description |
|------|------|-------------|
| REPRODUCE-001 | reproducer | Reproduce bug, collect evidence |
| ANALYZE-001 | analyzer | Analyze evidence, produce RCA report |
| FIX-001 | fixer | Implement code fix based on RCA |
| VERIFY-001 | verifier | Verify fix with same reproduction steps |
### Iteration Rules
- **Analyzer → Reproducer**: if Analyzer confidence < 50%, the coordinator creates REPRODUCE-002 → ANALYZE-002
- **Verifier → Fixer**: if the Verifier verdict is fail, the coordinator creates FIX-002 → VERIFY-002
### Maximum Iterations
- Max reproduction iterations: 2
- Max fix iterations: 3
- After max iterations: report to user for manual intervention
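The caps can be enforced with a small guard before creating an iteration task; the counter semantics here are an assumption:

```javascript
// Gate iteration-task creation against the per-kind caps above.
const MAX_ITERATIONS = { reproduce: 2, fix: 3 };

// completedCount = iterations already run of this kind
// (e.g. FIX-001 and FIX-002 done -> completedCount = 2).
function canIterate(kind, completedCount) {
  return completedCount < MAX_ITERATIONS[kind];
}
```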
## 4. Task Metadata Registry
| Task ID | Role | Pipeline | Depends On | Priority |
|---------|------|----------|------------|----------|
| TEST-001 | tester | test | - | P0 |
| REPRODUCE-001 | reproducer | debug | - | P0 |
| ANALYZE-001 | analyzer | both | TEST-001 or REPRODUCE-001 | P0 |
| FIX-001 | fixer | both | ANALYZE-001 | P0 |
| VERIFY-001 | verifier | both | FIX-001 | P0 |
| REPRODUCE-002 | reproducer | debug | (dynamic) | P0 |
| ANALYZE-002 | analyzer | debug | REPRODUCE-002 | P0 |
| FIX-002 | fixer | both | VERIFY-001 | P0 |
| VERIFY-002 | verifier | both | FIX-002 | P0 |
## 5. Evidence Types Registry
| Dimension | Evidence | MCP Tool | Collector Roles |
|-----------|----------|----------|----------------|
| Visual | Screenshots | take_screenshot | tester, reproducer, verifier |
| DOM | A11y snapshots | take_snapshot | tester, reproducer, verifier |
| Console | Error/warn messages | list_console_messages | tester, reproducer, verifier |
| Network | API requests/responses | list/get_network_request | tester, reproducer, verifier |
| Performance | Trace recording | performance_start/stop_trace | reproducer, verifier |
| Interaction | User actions | click/fill/hover | tester, reproducer, verifier |