mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-03-12 17:21:19 +08:00
Add unit tests for various components and stores in the terminal dashboard

- Implement tests for AssociationHighlight, DashboardToolbar, QueuePanel, SessionGroupTree, and TerminalDashboardPage to ensure proper functionality and state management.
- Create tests for cliSessionStore, issueQueueIntegrationStore, queueExecutionStore, queueSchedulerStore, sessionManagerStore, and terminalGridStore to validate state resets and workspace scoping.
- Mock necessary dependencies and state management hooks to isolate tests and ensure accurate behavior.
`.codex/skills/team-frontend-debug/SKILL.md` (new file, 783 lines)

---
name: team-frontend-debug
description: Frontend debugging team using Chrome DevTools MCP. Dual-mode -- feature-list testing or bug-report debugging. Covers reproduction, root cause analysis, code fixes, and verification. CSV wave pipeline with conditional skip and iteration loops.
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] \"feature list or bug description\""
allowed-tools: spawn_agents_on_csv, spawn_agent, wait, send_input, close_agent, Read, Write, Edit, Bash, Glob, Grep, AskUserQuestion
---

## Auto Mode

When `--yes` or `-y`: Auto-confirm task decomposition, skip interactive validation, use defaults.

# Frontend Debug Team

## Usage

```bash
$team-frontend-debug "Test features: login, dashboard, user profile at localhost:3000"
$team-frontend-debug "Bug: clicking save button on /settings causes white screen"
$team-frontend-debug -y "Test: 1. User registration 2. Email verification 3. Password reset"
$team-frontend-debug --continue "tfd-login-bug-20260308"
```

**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 2)
- `--continue`: Resume existing session

**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)

---

## Overview

Dual-mode frontend debugging: feature-list testing or bug-report debugging, powered by Chrome DevTools MCP. Roles: tester (test-pipeline), reproducer (debug-pipeline), analyzer, fixer, verifier. Supports conditional skip (all tests pass -> no downstream tasks), iteration loops (analyzer requesting more evidence, verifier triggering re-fix), and Chrome DevTools-based browser interaction.

**Execution Model**: Hybrid -- CSV wave pipeline (primary) + individual agent spawn (secondary)

```
+-------------------------------------------------------------------+
|                      FRONTEND DEBUG WORKFLOW                      |
+-------------------------------------------------------------------+
|                                                                   |
|  Phase 0: Pre-Wave Interactive (Input Analysis)                   |
|  +- Parse user input (feature list or bug report)                 |
|  +- Detect mode: test-pipeline or debug-pipeline                  |
|  +- Extract: base URL, features/steps, evidence plan              |
|  +- Output: refined requirements for decomposition                |
|                                                                   |
|  Phase 1: Requirement -> CSV + Classification                     |
|  +- Select pipeline (test or debug)                               |
|  +- Build dependency graph from pipeline definition               |
|  +- Classify tasks: csv-wave | interactive (exec_mode)            |
|  +- Compute dependency waves (topological sort)                   |
|  +- Generate tasks.csv with wave + exec_mode columns              |
|  +- User validates task breakdown (skip if -y)                    |
|                                                                   |
|  Phase 2: Wave Execution Engine (Extended)                        |
|  +- For each wave (1..N):                                         |
|  |  +- Execute pre-wave interactive tasks (if any)                |
|  |  +- Build wave CSV (filter csv-wave tasks for this wave)       |
|  |  +- Inject previous findings into prev_context column          |
|  |  +- spawn_agents_on_csv(wave CSV)                              |
|  |  +- Execute post-wave interactive tasks (if any)               |
|  |  +- Merge all results into master tasks.csv                    |
|  |  +- Conditional skip: TEST-001 with 0 issues -> done           |
|  |  +- Iteration: ANALYZE needs more evidence -> REPRODUCE-002    |
|  |  +- Re-fix: VERIFY fails -> FIX-002 -> VERIFY-002              |
|  +- discoveries.ndjson shared across all modes (append-only)      |
|                                                                   |
|  Phase 3: Post-Wave Interactive (Completion Action)               |
|  +- Pipeline completion report with debug summary                 |
|  +- Interactive completion choice (Archive/Keep/Export)           |
|  +- Final aggregation / report                                    |
|                                                                   |
|  Phase 4: Results Aggregation                                     |
|  +- Export final results.csv                                      |
|  +- Generate context.md with all findings                         |
|  +- Display summary: completed/failed/skipped per wave            |
|  +- Offer: view results | retry failed | done                     |
|                                                                   |
+-------------------------------------------------------------------+
```

---

## Pipeline Modes

| Input Pattern | Pipeline | Flow |
|---------------|----------|------|
| Feature list / function checklist / test items | `test-pipeline` | TEST -> ANALYZE -> FIX -> VERIFY |
| Bug report / error description / crash report | `debug-pipeline` | REPRODUCE -> ANALYZE -> FIX -> VERIFY |

### Pipeline Selection Keywords

| Keywords | Pipeline |
|----------|----------|
| feature, test, list, check, verify functions, validate | `test-pipeline` |
| bug, error, crash, broken, white screen, not working | `debug-pipeline` |
| performance, slow, latency, memory leak | `debug-pipeline` (perf dimension) |
| Ambiguous / unclear | AskUserQuestion to clarify |

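The keyword tables above suggest a simple scoring heuristic. A minimal sketch, assuming keyword counting with a tie falling through to clarification -- the `detectPipeline` name and the keyword lists below are illustrative, not part of the skill's defined interface:

```javascript
// Illustrative keyword lists mirroring the selection tables above.
const TEST_KEYWORDS = ['feature', 'test', 'list', 'check', 'verify', 'validate']
const DEBUG_KEYWORDS = ['bug', 'error', 'crash', 'broken', 'white screen',
  'not working', 'performance', 'slow', 'latency', 'memory leak']

function detectPipeline(input) {
  const text = input.toLowerCase()
  const testHits = TEST_KEYWORDS.filter(k => text.includes(k)).length
  const debugHits = DEBUG_KEYWORDS.filter(k => text.includes(k)).length
  if (debugHits > testHits) return 'debug-pipeline'
  if (testHits > debugHits) return 'test-pipeline'
  return 'ambiguous' // caller falls back to AskUserQuestion
}
```

An `ambiguous` result corresponds to the last row of the table: the orchestrator asks the user rather than guessing.
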
---

## Task Classification Rules

Each task is classified by `exec_mode`:

| exec_mode | Mechanism | Criteria |
|-----------|-----------|----------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot, structured I/O, no multi-round interaction |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round, progress updates, inner loop |

**Classification Decision**:

| Task Property | Classification |
|---------------|---------------|
| Feature testing with inner loop (tester iterates over features) | `csv-wave` |
| Bug reproduction (single pass) | `csv-wave` |
| Root cause analysis (single pass) | `csv-wave` |
| Code fix implementation | `csv-wave` |
| Fix verification (single pass) | `csv-wave` |
| Conditional skip gate (evaluating TEST results) | `interactive` |
| Pipeline completion action | `interactive` |

---

## CSV Schema

### tasks.csv (Master State)

```csv
id,title,description,role,pipeline_mode,base_url,evidence_dimensions,deps,context_from,exec_mode,wave,status,findings,artifacts_produced,issues_count,verdict,error
"TEST-001","Feature testing","PURPOSE: Test all features from list | Success: All features tested with evidence","tester","test-pipeline","http://localhost:3000","screenshot;console;network","","","csv-wave","1","pending","","","","",""
"ANALYZE-001","Root cause analysis","PURPOSE: Analyze discovered issues | Success: RCA for each issue","analyzer","test-pipeline","","console;network","TEST-001","TEST-001","csv-wave","2","pending","","","","",""
```

**Columns**:

| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (PREFIX-NNN: TEST, REPRODUCE, ANALYZE, FIX, VERIFY) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description with PURPOSE/TASK/CONTEXT/EXPECTED/CONSTRAINTS |
| `role` | Input | Role name: `tester`, `reproducer`, `analyzer`, `fixer`, `verifier` |
| `pipeline_mode` | Input | Pipeline: `test-pipeline` or `debug-pipeline` |
| `base_url` | Input | Target URL for browser-based tasks (empty for non-browser tasks) |
| `evidence_dimensions` | Input | Semicolon-separated evidence types: `screenshot`, `console`, `network`, `snapshot`, `performance` |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` -> `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `artifacts_produced` | Output | Semicolon-separated paths of produced artifacts |
| `issues_count` | Output | Number of issues found (tester/analyzer), empty for others |
| `verdict` | Output | Verification verdict: `pass`, `pass_with_warnings`, `fail` (verifier only) |
| `error` | Output | Error message if failed (empty if success) |

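The `parseCsv`/`toCsv` helpers used by the implementation sections below are left undefined in this document. A minimal sketch, under the assumption that field values never contain embedded newlines (quoted commas and doubled quotes are handled):

```javascript
// Parse one CSV line into fields, honoring quotes and "" escapes.
function parseCsvLine(line) {
  const fields = []
  let cur = '', inQuotes = false
  for (let i = 0; i < line.length; i++) {
    const ch = line[i]
    if (inQuotes) {
      if (ch === '"' && line[i + 1] === '"') { cur += '"'; i++ }
      else if (ch === '"') inQuotes = false
      else cur += ch
    } else if (ch === '"') inQuotes = true
    else if (ch === ',') { fields.push(cur); cur = '' }
    else cur += ch
  }
  fields.push(cur)
  return fields
}

// Parse CSV text into an array of objects keyed by the header row.
function parseCsv(text) {
  const lines = text.trim().split('\n')
  const header = parseCsvLine(lines[0])
  return lines.slice(1).map(line => {
    const values = parseCsvLine(line)
    return Object.fromEntries(header.map((h, i) => [h, values[i] ?? '']))
  })
}

// Serialize an array of objects back to CSV, quoting every field.
function toCsv(rows) {
  const header = Object.keys(rows[0])
  const quote = v => `"${String(v ?? '').replace(/"/g, '""')}"`
  return [header.join(','), ...rows.map(r => header.map(h => quote(r[h])).join(','))].join('\n')
}
```

Round-tripping through `toCsv` then `parseCsv` preserves all fields, which is what the wave engine relies on when merging results back into `tasks.csv`.
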
### Per-Wave CSV (Temporary)

Each wave generates a temporary `wave-{N}.csv` with an extra `prev_context` column (csv-wave tasks only).

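The `prev_context` column is filled by the `buildPrevContext` helper invoked in Phase 2. Its exact output format is not specified in this document; a minimal sketch that concatenates findings from the `context_from` tasks:

```javascript
// Assemble upstream findings for a task from its context_from column.
// The "[ID] findings | artifacts: ..." line format here is an assumption.
function buildPrevContext(task, tasks) {
  const sourceIds = (task.context_from || '').split(';').filter(Boolean)
  return sourceIds
    .map(id => tasks.find(t => t.id === id))
    .filter(t => t && t.findings)
    .map(t => `[${t.id}] ${t.findings}` +
      (t.artifacts_produced ? ` | artifacts: ${t.artifacts_produced}` : ''))
    .join('\n')
}
```
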
---

## Agent Registry (Interactive Agents)

| Agent | Role File | Pattern | Responsibility | Position |
|-------|-----------|---------|----------------|----------|
| Conditional Skip Gate | agents/conditional-skip-gate.md | 2.3 (send_input cycle) | Evaluate TEST results and skip downstream if no issues | post-wave |
| Iteration Handler | agents/iteration-handler.md | 2.3 (send_input cycle) | Handle analyzer's need_more_evidence request | post-wave |
| Completion Handler | agents/completion-handler.md | 2.3 (send_input cycle) | Handle pipeline completion action (Archive/Keep/Export) | standalone |

> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs, **you MUST immediately `Read` the corresponding agent.md** to reload.

---

## Chrome DevTools MCP Tools

All browser inspection operations use Chrome DevTools MCP. Tester, reproducer, and verifier are the primary consumers. These tools are available to CSV wave agents.

| Tool | Purpose |
|------|---------|
| `mcp__chrome-devtools__navigate_page` | Navigate to target URL |
| `mcp__chrome-devtools__take_screenshot` | Capture visual state |
| `mcp__chrome-devtools__take_snapshot` | Capture DOM/a11y tree |
| `mcp__chrome-devtools__list_console_messages` | Read console logs |
| `mcp__chrome-devtools__get_console_message` | Get specific console message |
| `mcp__chrome-devtools__list_network_requests` | Monitor network activity |
| `mcp__chrome-devtools__get_network_request` | Inspect request/response detail |
| `mcp__chrome-devtools__performance_start_trace` | Start performance recording |
| `mcp__chrome-devtools__performance_stop_trace` | Stop and analyze trace |
| `mcp__chrome-devtools__click` | Simulate user click |
| `mcp__chrome-devtools__fill` | Fill form inputs |
| `mcp__chrome-devtools__hover` | Hover over elements |
| `mcp__chrome-devtools__evaluate_script` | Execute JavaScript in page |
| `mcp__chrome-devtools__wait_for` | Wait for element/text |
| `mcp__chrome-devtools__list_pages` | List open browser tabs |
| `mcp__chrome-devtools__select_page` | Switch active tab |
| `mcp__chrome-devtools__press_key` | Press keyboard keys |

---

## Output Artifacts

| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state -- all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 4 |
| `discoveries.ndjson` | Shared exploration board (all agents, both modes) | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 4 |
| `task-analysis.json` | Phase 0/1 output: mode, features/steps, dimensions | Created in Phase 1 |
| `role-instructions/` | Per-role instruction templates for CSV agents | Created in Phase 1 |
| `artifacts/` | All deliverables: test reports, RCA reports, fix changes, verification reports | Created by agents |
| `evidence/` | Screenshots, snapshots, network logs, performance traces | Created by tester/reproducer/verifier |
| `interactive/{id}-result.json` | Results from interactive tasks | Created per interactive task |

---

## Session Structure

```
.workflow/.csv-wave/{session-id}/
+-- tasks.csv              # Master state (all tasks, both modes)
+-- results.csv            # Final results export
+-- discoveries.ndjson     # Shared discovery board (all agents)
+-- context.md             # Human-readable report
+-- task-analysis.json     # Phase 1 analysis output
+-- wave-{N}.csv           # Temporary per-wave input (csv-wave only)
+-- role-instructions/     # Per-role instruction templates
|   +-- tester.md          # (test-pipeline)
|   +-- reproducer.md      # (debug-pipeline)
|   +-- analyzer.md
|   +-- fixer.md
|   +-- verifier.md
+-- artifacts/             # All deliverables
|   +-- TEST-001-report.md
|   +-- TEST-001-issues.json
|   +-- ANALYZE-001-rca.md
|   +-- FIX-001-changes.md
|   +-- VERIFY-001-report.md
+-- evidence/              # Browser evidence
|   +-- F-001-login-before.png
|   +-- F-001-login-after.png
|   +-- before-screenshot.png
|   +-- after-screenshot.png
|   +-- before-snapshot.txt
|   +-- after-snapshot.txt
|   +-- evidence-summary.json
+-- interactive/           # Interactive task artifacts
|   +-- {id}-result.json
+-- wisdom/                # Cross-task knowledge
    +-- learnings.md
```

---

## Implementation

### Session Initialization

```javascript
// Shift the clock by 8 hours so the ISO string reads as UTC+8 wall time
// (note: the trailing 'Z' is retained even though the value is not UTC).
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1], 10) : 2

const requirement = $ARGUMENTS
  .replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
  .trim()

const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
const sessionId = `tfd-${slug}-${dateStr}`
const sessionFolder = `.workflow/.csv-wave/${sessionId}`

Bash(`mkdir -p ${sessionFolder}/artifacts ${sessionFolder}/evidence ${sessionFolder}/role-instructions ${sessionFolder}/interactive ${sessionFolder}/wisdom`)

Write(`${sessionFolder}/discoveries.ndjson`, '')
Write(`${sessionFolder}/wisdom/learnings.md`, '# Debug Learnings\n')
```

---

### Phase 0: Pre-Wave Interactive (Input Analysis)

**Objective**: Parse user input, detect mode (test vs debug), extract parameters.

**Workflow**:

1. **Parse user input** from $ARGUMENTS

2. **Check for existing sessions** (continue mode):
   - Scan `.workflow/.csv-wave/tfd-*/tasks.csv` for sessions with pending tasks
   - If `--continue`: resume the specified or most recent session, skip to Phase 2

3. **Detect mode**:

   | Input Pattern | Mode |
   |---------------|------|
   | Contains: feature, test, list, check, verify | `test-pipeline` |
   | Contains: bug, error, crash, broken, not working | `debug-pipeline` |
   | Ambiguous | AskUserQuestion to clarify |

4. **Extract parameters by mode**:

   **Test Mode**:
   - `base_url`: URL in text or AskUserQuestion
   - `features`: Parse feature list (bullet points, numbered list, free text)
   - Generate structured feature items with id, name, url

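The Test Mode feature parsing above can be sketched as follows; `parseFeatureList` and the `F-NNN` id scheme it emits are illustrative (matching the evidence filenames shown later), and the fallback for free text assumes the base URL has already been extracted:

```javascript
// Parse a feature list from free text into structured items:
// bullet points / numbered items, or comma-separated text after a colon.
function parseFeatureList(input) {
  const listItems = input.match(/(?:^|\n)\s*(?:[-*]|\d+\.)\s+([^\n]+)/g)
  let names
  if (listItems) {
    names = listItems.map(s => s.replace(/(?:^|\n)\s*(?:[-*]|\d+\.)\s+/, '').trim())
  } else {
    // Free text: take everything after the first colon, split on commas.
    const afterColon = input.includes(':') ? input.split(':').slice(1).join(':') : input
    names = afterColon.split(',').map(s => s.trim()).filter(Boolean)
  }
  return names.map((name, i) => ({
    id: `F-${String(i + 1).padStart(3, '0')}`,
    name
  }))
}
```
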
   **Debug Mode**:
   - `bug_description`: Bug description text
   - `target_url`: URL in text or AskUserQuestion
   - `reproduction_steps`: Steps in text or AskUserQuestion
   - `evidence_plan`: Detect dimensions from keywords (UI, network, console, performance)

5. **Dimension Detection** (debug mode):

   | Keywords | Dimension |
   |----------|-----------|
   | render, style, display, layout, CSS | screenshot, snapshot |
   | request, API, network, timeout | network |
   | error, crash, exception | console |
   | slow, performance, lag, memory | performance |
   | interaction, click, input, form | screenshot, console |

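The dimension table above maps directly onto a keyword rule list. A sketch, where `DIMENSION_RULES`, `detectDimensions`, and the default dimensions are assumptions rather than part of the skill's defined interface:

```javascript
// Keyword -> evidence-dimension rules mirroring the table above.
const DIMENSION_RULES = [
  { keywords: ['render', 'style', 'display', 'layout', 'css'], dims: ['screenshot', 'snapshot'] },
  { keywords: ['request', 'api', 'network', 'timeout'], dims: ['network'] },
  { keywords: ['error', 'crash', 'exception'], dims: ['console'] },
  { keywords: ['slow', 'performance', 'lag', 'memory'], dims: ['performance'] },
  { keywords: ['interaction', 'click', 'input', 'form'], dims: ['screenshot', 'console'] }
]

function detectDimensions(bugDescription) {
  const text = bugDescription.toLowerCase()
  const dims = new Set(['screenshot', 'console'])  // assumed baseline defaults
  for (const rule of DIMENSION_RULES) {
    if (rule.keywords.some(k => text.includes(k))) rule.dims.forEach(d => dims.add(d))
  }
  return [...dims].join(';')  // matches the evidence_dimensions CSV format
}
```
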
**Success Criteria**:
- Mode determined (test-pipeline or debug-pipeline)
- Base URL and features/steps extracted
- Evidence dimensions identified

---

### Phase 1: Requirement -> CSV + Classification

**Objective**: Build task dependency graph, generate tasks.csv and per-role instruction templates.

**Decomposition Rules**:

1. **Pipeline Definition**:

   **Test Pipeline** (4 tasks, conditional):
   ```
   TEST-001 -> [issues?] -> ANALYZE-001 -> FIX-001 -> VERIFY-001
                   |
                   +-- no issues -> Pipeline Complete (skip downstream)
   ```

   **Debug Pipeline** (4 tasks, linear with iteration):
   ```
   REPRODUCE-001 -> ANALYZE-001 -> FIX-001 -> VERIFY-001
        ^                                         |
        |               (if fail)                 |
        +--------- REPRODUCE-002 <----------------+
   ```

2. **Task Description Template**: Every task uses the PURPOSE/TASK/CONTEXT/EXPECTED/CONSTRAINTS format with session path, base URL, and upstream artifact references

3. **Role Instruction Generation**: Write per-role instruction templates to `role-instructions/{role}.md` using the base instruction template customized for each role

**Classification Rules**:

| Task Property | exec_mode |
|---------------|-----------|
| Feature testing (tester with inner loop) | `csv-wave` |
| Bug reproduction (single pass) | `csv-wave` |
| Root cause analysis (single pass) | `csv-wave` |
| Code fix (may need multiple passes) | `csv-wave` |
| Fix verification (single pass) | `csv-wave` |
| All standard pipeline tasks | `csv-wave` |

**Wave Computation**: Kahn's BFS topological sort with depth tracking.

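The wave computation can be sketched as Kahn's algorithm where each BFS frontier becomes one wave; `computeWaves` is an illustrative name, operating on rows with the semicolon-separated `deps` format from the CSV schema:

```javascript
// Assign 1-based wave numbers: tasks with no unmet deps form wave 1,
// tasks depending only on wave-1 tasks form wave 2, and so on.
function computeWaves(tasks) {
  const ids = new Set(tasks.map(t => t.id))
  const indegree = new Map(tasks.map(t =>
    [t.id, (t.deps || '').split(';').filter(d => ids.has(d)).length]))
  const assigned = new Map()
  let wave = 1
  while (assigned.size < tasks.length) {
    const frontier = tasks.filter(t => !assigned.has(t.id) && indegree.get(t.id) === 0)
    if (frontier.length === 0) throw new Error('Circular dependency detected')
    const frontierIds = new Set(frontier.map(t => t.id))
    for (const t of frontier) assigned.set(t.id, wave)
    // Release tasks whose deps were just placed in this wave.
    for (const t of tasks) {
      for (const dep of (t.deps || '').split(';').filter(Boolean)) {
        if (frontierIds.has(dep)) indegree.set(t.id, indegree.get(t.id) - 1)
      }
    }
    wave++
  }
  for (const t of tasks) t.wave = assigned.get(t.id)
  return tasks
}
```

The empty-frontier check doubles as the "no circular dependencies" success criterion below.
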
```javascript
// Generate per-role instruction templates
const roles = pipelineMode === 'test-pipeline'
  ? ['tester', 'analyzer', 'fixer', 'verifier']
  : ['reproducer', 'analyzer', 'fixer', 'verifier']

for (const role of roles) {
  const instruction = generateRoleInstruction(role, sessionFolder, pipelineMode)
  Write(`${sessionFolder}/role-instructions/${role}.md`, instruction)
}

const tasks = buildTasksCsv(pipelineMode, requirement, sessionFolder, baseUrl, evidencePlan)
Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))
Write(`${sessionFolder}/task-analysis.json`, JSON.stringify(analysisResult, null, 2))
```

**User Validation**: Display task breakdown (skip if AUTO_YES).

**Success Criteria**:
- tasks.csv created with valid schema and wave assignments
- Role instruction templates generated
- task-analysis.json written
- No circular dependencies

---

### Phase 2: Wave Execution Engine (Extended)

**Objective**: Execute tasks wave-by-wave with conditional skip, iteration loops, and re-fix cycles.

```javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
let tasks = parseCsv(masterCsv)
// CSV values are strings -- normalize wave to a number before comparisons
tasks.forEach(t => { t.wave = Number(t.wave) })
let maxWave = Math.max(...tasks.map(t => t.wave))
let fixRound = 0
const MAX_FIX_ROUNDS = 3
const MAX_REPRODUCE_ROUNDS = 2

for (let wave = 1; wave <= maxWave; wave++) {
  console.log(`\nWave ${wave}/${maxWave}`)

  const waveTasks = tasks.filter(t => t.wave === wave && t.status === 'pending')
  const csvTasks = waveTasks.filter(t => t.exec_mode === 'csv-wave')
  const interactiveTasks = waveTasks.filter(t => t.exec_mode === 'interactive')

  // Check dependencies -- skip tasks whose deps failed
  for (const task of waveTasks) {
    const depIds = (task.deps || '').split(';').filter(Boolean)
    const depStatuses = depIds.map(id => tasks.find(t => t.id === id)?.status)
    if (depStatuses.some(s => s === 'failed' || s === 'skipped')) {
      task.status = 'skipped'
      task.error = `Dependency failed: ${depIds.filter((id, i) =>
        ['failed','skipped'].includes(depStatuses[i])).join(', ')}`
    }
  }

  // Execute pre-wave interactive tasks (if any)
  for (const task of interactiveTasks.filter(t => t.status === 'pending')) {
    // Determine agent file based on task type
    const agentFile = task.id.includes('skip') ? 'agents/conditional-skip-gate.md'
      : task.id.includes('iter') ? 'agents/iteration-handler.md'
      : 'agents/completion-handler.md'

    Read(agentFile)
    const agent = spawn_agent({
      message: `## TASK ASSIGNMENT\n\n### MANDATORY FIRST STEPS\n1. Read: ${agentFile}\n2. Read: ${sessionFolder}/discoveries.ndjson\n\nGoal: ${task.description}\nSession: ${sessionFolder}\n\n### Previous Context\n${buildPrevContext(task, tasks)}`
    })
    const result = wait({ ids: [agent], timeout_ms: 600000 })
    if (result.timed_out) {
      send_input({ id: agent, message: "Please finalize and output current findings." })
      wait({ ids: [agent], timeout_ms: 120000 })
    }
    Write(`${sessionFolder}/interactive/${task.id}-result.json`, JSON.stringify({
      task_id: task.id, status: "completed", findings: parseFindings(result),
      timestamp: getUtc8ISOString()
    }))
    close_agent({ id: agent })
    task.status = 'completed'
    task.findings = parseFindings(result)
  }

  // Build prev_context for csv-wave tasks
  const pendingCsvTasks = csvTasks.filter(t => t.status === 'pending')
  for (const task of pendingCsvTasks) {
    task.prev_context = buildPrevContext(task, tasks)
  }

  if (pendingCsvTasks.length > 0) {
    Write(`${sessionFolder}/wave-${wave}.csv`, toCsv(pendingCsvTasks))

    const waveInstruction = buildWaveInstruction(pendingCsvTasks, sessionFolder, wave)

    spawn_agents_on_csv({
      csv_path: `${sessionFolder}/wave-${wave}.csv`,
      id_column: "id",
      instruction: waveInstruction,
      max_concurrency: maxConcurrency,
      max_runtime_seconds: 1200,
      output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
      output_schema: {
        type: "object",
        properties: {
          id: { type: "string" },
          status: { type: "string", enum: ["completed", "failed"] },
          findings: { type: "string" },
          artifacts_produced: { type: "string" },
          issues_count: { type: "string" },
          verdict: { type: "string" },
          error: { type: "string" }
        }
      }
    })

    // Merge results into master CSV
    const results = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
    for (const r of results) {
      const t = tasks.find(t => t.id === r.id)
      if (t) Object.assign(t, r)
    }

    // Conditional Skip: TEST-001 with 0 issues
    const testResult = results.find(r => r.id === 'TEST-001')
    if (testResult && parseInt(testResult.issues_count || '0', 10) === 0) {
      // Skip all downstream tasks
      tasks.filter(t => t.wave > wave && t.status === 'pending').forEach(t => {
        t.status = 'skipped'
        t.error = 'No issues found in testing -- skipped'
      })
      console.log('All features passed. No issues found. Pipeline complete.')
    }

    // Iteration: Analyzer needs more evidence
    const analyzerResult = results.find(r => r.id.startsWith('ANALYZE') && r.findings?.includes('need_more_evidence'))
    if (analyzerResult) {
      const reproduceRound = tasks.filter(t => t.id.startsWith('REPRODUCE')).length
      if (reproduceRound < MAX_REPRODUCE_ROUNDS) {
        const newRepId = `REPRODUCE-${String(reproduceRound + 1).padStart(3, '0')}`
        const newAnalyzeId = `ANALYZE-${String(tasks.filter(t => t.id.startsWith('ANALYZE')).length + 1).padStart(3, '0')}`
        tasks.push({
          id: newRepId, title: 'Supplemental evidence collection',
          description: `PURPOSE: Collect additional evidence per Analyzer request | Success: Targeted evidence collected`,
          role: 'reproducer', pipeline_mode: tasks[0].pipeline_mode,
          base_url: tasks[0].base_url, evidence_dimensions: tasks[0].evidence_dimensions,
          deps: '', context_from: analyzerResult.id,
          exec_mode: 'csv-wave', wave: wave + 1, status: 'pending',
          findings: '', artifacts_produced: '', issues_count: '', verdict: '', error: ''
        })
        tasks.push({
          id: newAnalyzeId, title: 'Re-analysis with supplemental evidence',
          description: `PURPOSE: Re-analyze with additional evidence | Success: Higher-confidence RCA`,
          role: 'analyzer', pipeline_mode: tasks[0].pipeline_mode,
          base_url: '', evidence_dimensions: '',
          deps: newRepId, context_from: `${analyzerResult.id};${newRepId}`,
          exec_mode: 'csv-wave', wave: wave + 2, status: 'pending',
          findings: '', artifacts_produced: '', issues_count: '', verdict: '', error: ''
        })
        // Update FIX task deps
        const fixTask = tasks.find(t => t.id === 'FIX-001' && t.status === 'pending')
        if (fixTask) fixTask.deps = newAnalyzeId
      }
    }

    // Re-fix: Verifier verdict = fail
    const verifyResult = results.find(r => r.id.startsWith('VERIFY') && r.verdict === 'fail')
    if (verifyResult && fixRound < MAX_FIX_ROUNDS) {
      fixRound++
      const newFixId = `FIX-${String(fixRound + 1).padStart(3, '0')}`
      const newVerifyId = `VERIFY-${String(fixRound + 1).padStart(3, '0')}`
      tasks.push({
        id: newFixId, title: `Re-fix (round ${fixRound + 1})`,
        description: `PURPOSE: Re-fix based on verification failure | Success: Issue resolved`,
        role: 'fixer', pipeline_mode: tasks[0].pipeline_mode,
        base_url: '', evidence_dimensions: '',
        deps: verifyResult.id, context_from: verifyResult.id,
        exec_mode: 'csv-wave', wave: wave + 1, status: 'pending',
        findings: '', artifacts_produced: '', issues_count: '', verdict: '', error: ''
      })
      tasks.push({
        id: newVerifyId, title: `Re-verify (round ${fixRound + 1})`,
        description: `PURPOSE: Re-verify after fix | Success: Bug resolved`,
        role: 'verifier', pipeline_mode: tasks[0].pipeline_mode,
        base_url: tasks[0].base_url, evidence_dimensions: tasks[0].evidence_dimensions,
        deps: newFixId, context_from: newFixId,
        exec_mode: 'csv-wave', wave: wave + 2, status: 'pending',
        findings: '', artifacts_produced: '', issues_count: '', verdict: '', error: ''
      })
    }
  }

  // Update master CSV
  Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))

  // Cleanup temp files
  Bash(`rm -f ${sessionFolder}/wave-${wave}.csv ${sessionFolder}/wave-${wave}-results.csv`)

  // Recalculate maxWave (may have grown from iteration/re-fix)
  maxWave = Math.max(maxWave, ...tasks.map(t => t.wave))

  // Display wave summary
  const completed = waveTasks.filter(t => t.status === 'completed').length
  const failed = waveTasks.filter(t => t.status === 'failed').length
  const skipped = waveTasks.filter(t => t.status === 'skipped').length
  console.log(`Wave ${wave} Complete: ${completed} completed, ${failed} failed, ${skipped} skipped`)
}
```

**Success Criteria**:
- All waves executed in order
- Conditional skip handled (TEST with 0 issues)
- Iteration loops handled (analyzer need_more_evidence)
- Re-fix cycles handled (verifier fail verdict)
- discoveries.ndjson accumulated across all waves
- Max iteration/fix bounds respected

---

### Phase 3: Post-Wave Interactive (Completion Action)

**Objective**: Pipeline completion report with debug summary.

```javascript
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
const completed = tasks.filter(t => t.status === 'completed')
const pipelineMode = tasks[0]?.pipeline_mode

console.log(`
============================================
FRONTEND DEBUG COMPLETE

Pipeline: ${pipelineMode} | ${completed.length}/${tasks.length} tasks
Fix Rounds: ${fixRound}/${MAX_FIX_ROUNDS}
Session: ${sessionFolder}

Results:
${completed.map(t => `  [DONE] ${t.id} (${t.role}): ${t.findings?.substring(0, 80) || 'completed'}`).join('\n')}
============================================
`)

if (!AUTO_YES) {
  AskUserQuestion({
    questions: [{
      question: "Debug pipeline complete. What would you like to do?",
      header: "Completion",
      multiSelect: false,
      options: [
        { label: "Archive & Clean (Recommended)", description: "Archive session, output final summary" },
        { label: "Keep Active", description: "Keep session for follow-up debugging" },
        { label: "Export Results", description: "Export debug report and patches" }
      ]
    }]
  })
}
```

**Success Criteria**:
- User informed of debug pipeline results
- Completion action taken

---

### Phase 4: Results Aggregation
|
||||
|
||||
**Objective**: Generate final results and human-readable report.
|
||||
|
||||
```javascript
|
||||
Bash(`cp ${sessionFolder}/tasks.csv ${sessionFolder}/results.csv`)
|
||||
|
||||
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
|
||||
let contextMd = `# Frontend Debug Report\n\n`
|
||||
contextMd += `**Session**: ${sessionId}\n`
|
||||
contextMd += `**Pipeline**: ${tasks[0]?.pipeline_mode}\n`
|
||||
contextMd += `**Date**: ${getUtc8ISOString().substring(0, 10)}\n\n`
|
||||
|
||||
contextMd += `## Summary\n`
|
||||
contextMd += `| Status | Count |\n|--------|-------|\n`
|
||||
contextMd += `| Completed | ${tasks.filter(t => t.status === 'completed').length} |\n`
|
||||
contextMd += `| Failed | ${tasks.filter(t => t.status === 'failed').length} |\n`
|
||||
contextMd += `| Skipped | ${tasks.filter(t => t.status === 'skipped').length} |\n\n`
|
||||
|
||||
const maxWave = Math.max(...tasks.map(t => t.wave))
|
||||
contextMd += `## Wave Execution\n\n`
|
||||
for (let w = 1; w <= maxWave; w++) {
|
||||
const waveTasks = tasks.filter(t => t.wave === w)
|
||||
contextMd += `### Wave ${w}\n\n`
|
||||
for (const t of waveTasks) {
|
||||
const icon = t.status === 'completed' ? '[DONE]' : t.status === 'failed' ? '[FAIL]' : '[SKIP]'
|
||||
contextMd += `${icon} **${t.title}** [${t.role}]`
|
||||
if (t.verdict) contextMd += ` Verdict: ${t.verdict}`
|
||||
if (t.issues_count) contextMd += ` Issues: ${t.issues_count}`
|
||||
contextMd += ` ${t.findings || ''}\n\n`
|
||||
}
|
||||
}
|
||||
|
||||
// Debug-specific sections
|
||||
const verifyTasks = tasks.filter(t => t.role === 'verifier' && t.verdict)
|
||||
if (verifyTasks.length > 0) {
|
||||
contextMd += `## Verification Results\n\n`
|
||||
for (const v of verifyTasks) {
|
||||
contextMd += `- **${v.id}**: ${v.verdict}\n`
|
||||
}
|
||||
}
|
||||
|
||||
Write(`${sessionFolder}/context.md`, contextMd)
|
||||
console.log(`Results exported to: ${sessionFolder}/results.csv`)
|
||||
console.log(`Report generated at: ${sessionFolder}/context.md`)
|
||||
```

**Success Criteria**:
- results.csv exported
- context.md generated with debug summary
- Summary displayed to user

---

## Shared Discovery Board Protocol

All agents share a single `discoveries.ndjson` file.

**Format**: One JSON object per line (NDJSON):

```jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"TEST-001","type":"feature_tested","data":{"feature":"F-001","name":"Login","result":"fail","issues":2}}
{"ts":"2026-03-08T10:05:00Z","worker":"REPRODUCE-001","type":"bug_reproduced","data":{"url":"/settings","steps":3,"console_errors":2,"network_failures":1}}
{"ts":"2026-03-08T10:10:00Z","worker":"ANALYZE-001","type":"root_cause_found","data":{"category":"TypeError","file":"src/components/Settings.tsx","line":142,"confidence":"high"}}
{"ts":"2026-03-08T10:15:00Z","worker":"FIX-001","type":"file_modified","data":{"file":"src/components/Settings.tsx","change":"Added null check","lines_added":3}}
{"ts":"2026-03-08T10:20:00Z","worker":"VERIFY-001","type":"verification_result","data":{"verdict":"pass","original_error_resolved":true,"new_errors":0}}
```

**Discovery Types**:

| Type | Data Schema | Description |
|------|-------------|-------------|
| `feature_tested` | `{feature, name, result, issues}` | Feature test result |
| `bug_reproduced` | `{url, steps, console_errors, network_failures}` | Bug reproduction result |
| `evidence_collected` | `{dimension, file, description}` | Evidence artifact saved |
| `root_cause_found` | `{category, file, line, confidence}` | Root cause identified |
| `file_modified` | `{file, change, lines_added}` | Code fix applied |
| `verification_result` | `{verdict, original_error_resolved, new_errors}` | Fix verification result |
| `issue_found` | `{file, line, severity, description}` | Issue discovered |

**Protocol**:

1. Agents MUST read discoveries.ndjson at start of execution
2. Agents MUST append relevant discoveries during execution
3. Agents MUST NOT modify or delete existing entries
4. Deduplication by `{type, data.file}` key
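The dedup rule in step 4 can be sketched as a small helper that also tolerates corrupt lines (the function name and return shape are illustrative, not part of the skill contract):

```javascript
// Deduplicate discovery entries by the {type, data.file} key.
// Malformed lines are skipped, matching the "discoveries.ndjson corrupt" rule.
function dedupeDiscoveries(ndjsonText) {
  const seen = new Set()
  const entries = []
  for (const line of ndjsonText.split('\n')) {
    if (!line.trim()) continue
    let entry
    try { entry = JSON.parse(line) } catch { continue } // ignore corrupt lines
    const key = `${entry.type}::${entry.data?.file ?? ''}`
    if (seen.has(key)) continue
    seen.add(key)
    entries.push(entry)
  }
  return entries
}
```

The first entry for a given key wins, so later duplicate reports from other workers are dropped.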

---

## Conditional Skip Logic

After TEST-001 completes, evaluate issues:

| Condition | Action |
|-----------|--------|
| `issues_count === 0` | Skip ANALYZE/FIX/VERIFY. Pipeline complete with all-pass. |
| Only low-severity warnings | AskUserQuestion: fix warnings or complete |
| High/medium severity issues | Proceed with ANALYZE -> FIX -> VERIFY |
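The table above maps onto a small decision function (a sketch; `evaluateSkipGate` and its return shape are illustrative):

```javascript
// Gate decision after TEST-001, per the conditional skip table.
function evaluateSkipGate(issues) {
  if (issues.length === 0) return { action: 'all_pass', skipDownstream: true }
  const blocking = issues.filter(i => i.severity === 'high' || i.severity === 'medium')
  if (blocking.length === 0) return { action: 'ask_user', skipDownstream: false }
  return { action: 'proceed', skipDownstream: false }
}
```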

---

## Iteration Rules

| Trigger | Condition | Action | Max |
|---------|-----------|--------|-----|
| Analyzer -> Reproducer | Confidence < 50% | Create REPRODUCE-002 -> ANALYZE-002 | 2 reproduction rounds |
| Verifier -> Fixer | Verdict = fail | Create FIX-002 -> VERIFY-002 | 3 fix rounds |
| Max iterations reached | Round >= max | Report to user for manual intervention | -- |
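The iteration bounds reduce to one guard (names are illustrative; the orchestrator tracks completed rounds per loop):

```javascript
// Max 2 reproduction rounds, max 3 fix rounds, then escalate.
const ITERATION_LIMITS = { reproduction: 2, fix: 3 }

function nextIterationAction(loop, completedRounds) {
  // loop: 'reproduction' | 'fix'
  if (completedRounds >= ITERATION_LIMITS[loop]) return 'escalate_to_user'
  return loop === 'reproduction' ? 'spawn_reproduce_round' : 'spawn_fix_round'
}
```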

---

## Error Handling

| Error | Resolution |
|-------|------------|
| Circular dependency | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Urge convergence via send_input, then close if still timed out |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| All features pass test | Skip downstream tasks, report success |
| Bug not reproducible | Report failure, ask user for more details |
| Browser not available | Report error, suggest manual reproduction steps |
| Analysis inconclusive | Request more evidence via iteration loop |
| Fix introduces regression | Verifier reports fail, dispatch re-fix |
| Max iterations reached | Escalate to user for manual intervention |
| Continue mode: no session found | List available sessions, prompt user to select |

---

## Core Rules

1. **Start Immediately**: First action is session initialization, then Phase 0/1
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged
3. **CSV is Source of Truth**: Master tasks.csv holds all state (both csv-wave and interactive)
4. **CSV First**: Default to csv-wave for tasks; only use interactive when the interaction pattern requires it
5. **Context Propagation**: prev_context is built from the master CSV, not from memory
6. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson
7. **Skip on Failure**: If a dependency failed, skip the dependent task
8. **Conditional Skip**: If TEST finds 0 issues, skip all downstream tasks
9. **Iteration Bounds**: Max 2 reproduction rounds, max 3 fix rounds
10. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged
11. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped

---

`.codex/skills/team-frontend-debug/agents/completion-handler.md` (new file, 142 lines)

# Completion Handler Agent

Interactive agent for handling the pipeline completion action. Presents the debug summary and offers Archive/Keep/Export choices.

## Identity

- **Type**: `interactive`
- **Role File**: `agents/completion-handler.md`
- **Responsibility**: Present debug pipeline results, handle completion choice, execute cleanup or export

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Read all task results from master CSV
- Present debug summary (reproduction, RCA, fix, verification)
- Wait for user choice before acting
- Produce structured output following template

### MUST NOT

- Skip the MANDATORY FIRST STEPS role loading
- Delete session files without user approval
- Modify task artifacts
- Produce unstructured output

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | built-in | Load task results and artifacts |
| `AskUserQuestion` | built-in | Get user completion choice |
| `Write` | built-in | Store completion result |
| `Bash` | built-in | Execute archive/export operations |

---

## Execution

### Phase 1: Results Loading

**Objective**: Load all task results and build debug summary

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| tasks.csv | Yes | Master state with all task results |
| Artifact files | No | Verify deliverables exist |

**Steps**:

1. Read master tasks.csv
2. Parse all completed tasks and their artifacts
3. Build debug summary:
   - Bug description and reproduction results
   - Root cause analysis findings
   - Files modified and patches applied
   - Verification results (pass/fail)
   - Evidence inventory (screenshots, logs, traces)
4. Calculate pipeline statistics
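Step 4 might look like this over the parsed tasks.csv rows (a sketch; field names follow schemas/tasks-schema.md, the function name is hypothetical):

```javascript
// Aggregate pipeline statistics from parsed tasks.csv rows.
function pipelineStats(tasks) {
  const byStatus = { completed: 0, failed: 0, skipped: 0 }
  for (const t of tasks) {
    if (t.status in byStatus) byStatus[t.status]++
  }
  // Last verifier verdict stands as the final verdict, if any.
  const verdicts = tasks.filter(t => t.role === 'verifier' && t.verdict).map(t => t.verdict)
  return { total: tasks.length, ...byStatus, finalVerdict: verdicts[verdicts.length - 1] ?? 'n/a' }
}
```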

**Output**: Debug summary ready for user

---

### Phase 2: Completion Choice

**Objective**: Present debug results and get user action

**Steps**:

1. Display pipeline summary with debug details
2. Present completion choice:

```javascript
AskUserQuestion({
  questions: [{
    question: "Debug pipeline complete. What would you like to do?",
    header: "Completion",
    multiSelect: false,
    options: [
      { label: "Archive & Clean (Recommended)", description: "Archive session, output final summary" },
      { label: "Keep Active", description: "Keep session for follow-up debugging" },
      { label: "Export Results", description: "Export debug report and patches" }
    ]
  }]
})
```

3. Handle response:

| Response | Action |
|----------|--------|
| Archive & Clean | Mark session completed, output final summary |
| Keep Active | Mark session paused, keep all evidence/artifacts |
| Export Results | Copy RCA report, fix changes, verification report to project directory |

**Output**: Completion action result

---

## Structured Output Template

```
## Summary
- Pipeline mode: <test-pipeline|debug-pipeline>
- Tasks completed: <count>/<total>
- Fix rounds: <count>/<max>
- Final verdict: <pass|pass_with_warnings|fail>

## Debug Summary
- Bug: <description>
- Root cause: <category at file:line>
- Fix: <description of changes>
- Verification: <pass/fail>

## Evidence Inventory
- Screenshots: <count>
- Console logs: <captured/not captured>
- Network logs: <captured/not captured>
- Performance trace: <captured/not captured>

## Action Taken
- Choice: <archive|keep|export>
- Session status: <completed|paused|exported>
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| tasks.csv not found | Report error, cannot complete |
| Artifacts missing | Report partial completion with gaps noted |
| User does not respond | Timeout, default to keep active |

---

`agents/conditional-skip-gate.md` (new file, 130 lines)

# Conditional Skip Gate Agent

Interactive agent for evaluating TEST-001 results and determining whether to skip downstream tasks (ANALYZE, FIX, VERIFY) when no issues are found.

## Identity

- **Type**: `interactive`
- **Role File**: `agents/conditional-skip-gate.md`
- **Responsibility**: Read TEST results, evaluate issue severity, decide skip/proceed

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Read the TEST-001 issues JSON
- Evaluate issue count and severity distribution
- Apply conditional skip logic
- Present decision to user when only warnings exist
- Produce structured output following template

### MUST NOT

- Skip the MANDATORY FIRST STEPS role loading
- Auto-skip when high/medium issues exist
- Modify test artifacts directly
- Produce unstructured output

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | built-in | Load test results and issues |
| `AskUserQuestion` | built-in | Get user decision on warnings |
| `Write` | built-in | Store gate decision result |

---

## Execution

### Phase 1: Load Test Results

**Objective**: Load TEST-001 issues and evaluate severity

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| TEST-001-issues.json | Yes | Discovered issues with severity |
| TEST-001-report.md | No | Full test report |

**Steps**:

1. Extract session path from task assignment
2. Read TEST-001-issues.json
3. Parse issues array
4. Count by severity: high, medium, low, warning
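The severity tally in step 4 is a simple count (a sketch assuming TEST-001-issues.json entries carry a `severity` field):

```javascript
// Tally issues by severity; unknown severities are ignored.
function tallySeverity(issues) {
  const counts = { high: 0, medium: 0, low: 0, warning: 0 }
  for (const issue of issues) {
    if (issue.severity in counts) counts[issue.severity]++
  }
  return counts
}
```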

**Output**: Issue severity distribution

---

### Phase 2: Skip Decision

**Objective**: Apply conditional skip logic

**Steps**:

1. Evaluate issues:

| Condition | Action |
|-----------|--------|
| `issues.length === 0` | Skip all downstream. Report "all_pass". |
| Only low/warning severity | Ask user: fix or complete |
| Any high/medium severity | Proceed with ANALYZE -> FIX -> VERIFY |

2. If only warnings, present choice:

```javascript
AskUserQuestion({
  questions: [{
    question: "Testing found only low-severity warnings. How would you like to proceed?",
    header: "Test Results",
    multiSelect: false,
    options: [
      { label: "Fix warnings", description: "Proceed with analysis and fixes for warnings" },
      { label: "Complete", description: "Accept current state, skip remaining tasks" }
    ]
  }]
})
```

3. Handle response and record decision

**Output**: Skip/proceed directive

---

## Structured Output Template

```
## Summary
- Test report evaluated: TEST-001
- Issues found: <total>
- High: <count>, Medium: <count>, Low: <count>, Warning: <count>
- Decision: <all_pass|skip_warnings|proceed>

## Findings
- All features tested: <count>
- Pass rate: <percentage>

## Decision Details
- Action: <skip-downstream|proceed-with-fixes>
- Downstream tasks affected: ANALYZE-001, FIX-001, VERIFY-001
- User choice: <if applicable>
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| TEST-001-issues.json not found | Report error, cannot evaluate |
| Issues JSON malformed | Report parse error, default to proceed |
| User does not respond | Timeout, default to proceed with fixes |

---

`.codex/skills/team-frontend-debug/agents/iteration-handler.md` (new file, 120 lines)

# Iteration Handler Agent

Interactive agent for handling the analyzer's request for more evidence. Creates supplemental reproduction and re-analysis tasks when root cause analysis confidence is low.

## Identity

- **Type**: `interactive`
- **Role File**: `agents/iteration-handler.md`
- **Responsibility**: Parse analyzer evidence request, create REPRODUCE-002 + ANALYZE-002 tasks, update dependency chain

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Read the analyzer's need_more_evidence request
- Parse specific evidence dimensions and actions requested
- Create supplemental reproduction task description
- Create re-analysis task description
- Update FIX dependency to point to new ANALYZE task
- Produce structured output following template

### MUST NOT

- Skip the MANDATORY FIRST STEPS role loading
- Ignore the analyzer's specific requests
- Create tasks beyond iteration bounds (max 2 reproduction rounds)
- Modify existing task artifacts

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | built-in | Load analyzer output and session state |
| `Write` | built-in | Store iteration handler result |

---

## Execution

### Phase 1: Parse Evidence Request

**Objective**: Understand what additional evidence the analyzer needs

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Analyzer findings | Yes | Contains need_more_evidence with specifics |
| Session state | No | Current iteration count |

**Steps**:

1. Extract session path from task assignment
2. Read analyzer's findings or RCA report (partial)
3. Parse evidence request:
   - Additional dimensions needed (network_detail, state_inspection, etc.)
   - Specific actions (capture request body, evaluate React state, etc.)
4. Check current iteration count

**Output**: Parsed evidence request

---

### Phase 2: Create Iteration Tasks

**Objective**: Build task descriptions for supplemental reproduction and re-analysis

**Steps**:

1. Check iteration bounds:

| Condition | Action |
|-----------|--------|
| Reproduction rounds < 2 | Create REPRODUCE-002 + ANALYZE-002 |
| Reproduction rounds >= 2 | Escalate to user for manual investigation |

2. Build REPRODUCE-002 description with specific evidence requests from analyzer
3. Build ANALYZE-002 description that loads both original and supplemental evidence
4. Record new tasks and dependency updates

**Output**: Task descriptions for dynamic wave extension

---

## Structured Output Template

```
## Summary
- Analyzer evidence request processed
- Iteration round: <current>/<max>
- Action: <create-reproduction|escalate>

## Evidence Request
- Dimensions needed: <list>
- Specific actions: <list>

## Tasks Created
- REPRODUCE-002: <description summary>
- ANALYZE-002: <description summary>

## Dependency Updates
- FIX-001 deps updated: ANALYZE-001 -> ANALYZE-002
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Evidence request unclear | Use all default dimensions |
| Max iterations reached | Escalate to user |
| Session state missing | Default to iteration round 1 |

---

(new file, 272 lines)

# Agent Instruction Template -- Team Frontend Debug

Base instruction template for CSV wave agents. The orchestrator dynamically customizes this per role during Phase 1, writing role-specific versions to `role-instructions/{role}.md`.

## Purpose

| Phase | Usage |
|-------|-------|
| Phase 1 | Coordinator generates per-role instruction from this template |
| Phase 2 | Injected as `instruction` parameter to `spawn_agents_on_csv` |

---

## Base Instruction Template

```markdown
## TASK ASSIGNMENT -- Team Frontend Debug

### MANDATORY FIRST STEPS
1. Read shared discoveries: <session-folder>/discoveries.ndjson (if exists, skip if not)
2. Read project context: .workflow/project-tech.json (if exists)

---

## Your Task

**Task ID**: {id}
**Title**: {title}
**Role**: {role}
**Pipeline Mode**: {pipeline_mode}
**Base URL**: {base_url}
**Evidence Dimensions**: {evidence_dimensions}

### Task Description
{description}

### Previous Tasks' Findings (Context)
{prev_context}

---

## Execution Protocol

1. **Read discoveries**: Load <session-folder>/discoveries.ndjson for shared exploration findings
2. **Use context**: Apply previous tasks' findings from prev_context above
3. **Execute task**: Follow role-specific instructions below
4. **Share discoveries**: Append exploration findings to shared board:
   ```bash
   echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> <session-folder>/discoveries.ndjson
   ```
5. **Report result**: Return JSON via report_agent_job_result

### Discovery Types to Share
- `feature_tested`: {feature, name, result, issues} -- Feature test result
- `bug_reproduced`: {url, steps, console_errors, network_failures} -- Bug reproduction outcome
- `evidence_collected`: {dimension, file, description} -- Evidence artifact saved
- `root_cause_found`: {category, file, line, confidence} -- Root cause identified
- `file_modified`: {file, change, lines_added} -- Code fix applied
- `verification_result`: {verdict, original_error_resolved, new_errors} -- Verification outcome
- `issue_found`: {file, line, severity, description} -- Issue discovered

---

## Output (report_agent_job_result)

Return JSON:
{
  "id": "{id}",
  "status": "completed" | "failed",
  "findings": "Key discoveries and implementation notes (max 500 chars)",
  "artifacts_produced": "semicolon-separated paths of produced files",
  "issues_count": "",
  "verdict": "",
  "error": ""
}
```

---

## Role-Specific Customization

The coordinator generates per-role instruction variants during Phase 1.

### For Tester Role (test-pipeline)

```
3. **Execute**:
   - Parse feature list from task description
   - For each feature:
     a. Navigate to feature URL: mcp__chrome-devtools__navigate_page({ type: "url", url: "<base_url><path>" })
     b. Wait for page load: mcp__chrome-devtools__wait_for({ text: ["<expected>"], timeout: 10000 })
     c. Explore page structure: mcp__chrome-devtools__take_snapshot()
     d. Generate test scenarios from UI elements if not predefined
     e. Capture baseline: take_screenshot (before), list_console_messages
     f. Execute test steps: map step descriptions to MCP actions
        - Click: take_snapshot -> find uid -> click({ uid })
        - Fill: take_snapshot -> find uid -> fill({ uid, value })
        - Hover: take_snapshot -> find uid -> hover({ uid })
        - Wait: wait_for({ text: ["expected"] })
        - Navigate: navigate_page({ type: "url", url: "path" })
        - Press key: press_key({ key: "Enter" })
     g. Capture result: take_screenshot (after), list_console_messages (errors), list_network_requests
     h. Evaluate: console errors? network failures? expected text present? visual issues?
     i. Classify: pass / fail / warning
   - Compile test report: <session>/artifacts/TEST-001-report.md
   - Compile issues list: <session>/artifacts/TEST-001-issues.json
   - Set issues_count in output
```

### For Reproducer Role (debug-pipeline)

```
3. **Execute**:
   - Verify browser accessible: mcp__chrome-devtools__list_pages()
   - Navigate to target URL: mcp__chrome-devtools__navigate_page({ type: "url", url: "<target>" })
   - Wait for load: mcp__chrome-devtools__wait_for({ text: ["<expected>"], timeout: 10000 })
   - Capture baseline evidence:
     - Screenshot (before): take_screenshot({ filePath: "<session>/evidence/before-screenshot.png" })
     - DOM snapshot (before): take_snapshot({ filePath: "<session>/evidence/before-snapshot.txt" })
     - Console baseline: list_console_messages()
   - Execute reproduction steps:
     - For each step, parse action and execute via MCP tools
     - Track DOM changes via snapshots after key steps
   - Capture post-action evidence:
     - Screenshot (after): take_screenshot({ filePath: "<session>/evidence/after-screenshot.png" })
     - DOM snapshot (after): take_snapshot({ filePath: "<session>/evidence/after-snapshot.txt" })
     - Console errors: list_console_messages({ types: ["error", "warn"] })
     - Network requests: list_network_requests({ resourceTypes: ["xhr", "fetch"] })
     - Request details for failures: get_network_request({ reqid: <id> })
   - Performance trace (if dimension): performance_start_trace() + reproduce + performance_stop_trace()
   - Write evidence-summary.json to <session>/evidence/
```

### For Analyzer Role

```
3. **Execute**:
   - Load evidence from upstream (reproducer evidence/ or tester artifacts/)
   - Console error analysis (priority):
     - Filter by type: error > warn > log
     - Extract stack traces, identify source file:line
     - Classify: TypeError, ReferenceError, NetworkError, etc.
   - Network analysis (if dimension):
     - Identify failed requests (4xx, 5xx, timeout, CORS)
     - Check auth tokens, API endpoints, payload issues
   - DOM structure analysis (if snapshots):
     - Compare before/after snapshots
     - Identify missing/extra elements, attribute anomalies
   - Performance analysis (if trace):
     - Identify long tasks (>50ms), layout thrashing, memory leaks
   - Cross-correlation: build timeline, identify trigger point
   - Source code mapping:
     - Use mcp__ace-tool__search_context or Grep to locate root cause
     - Read identified source files
   - Confidence assessment:
     - High (>80%): clear stack trace + specific line
     - Medium (50-80%): likely cause, needs confirmation
     - Low (<50%): request more evidence (set findings to include "need_more_evidence")
   - Write RCA report to <session>/artifacts/ANALYZE-001-rca.md
   - Set issues_count in output
```

### For Fixer Role

```
3. **Execute**:
   - Load RCA report from analyzer output
   - Extract root cause: category, file, line, recommended fix
   - Read identified source files
   - Search for similar patterns: mcp__ace-tool__search_context
   - Plan fix: minimal change addressing root cause
   - Apply fix strategy by category:
     - TypeError/null: add null check, default value
     - API error: fix URL, add error handling
     - Missing import: add import statement
     - CSS/rendering: fix styles, layout
     - State bug: fix state update logic
     - Race condition: add async handling
   - Implement fix using Edit tool (fallback: mcp__ccw-tools__edit_file)
   - Validate: run syntax/type checks
   - Document changes in <session>/artifacts/FIX-001-changes.md
```

### For Verifier Role

```
3. **Execute**:
   - Load original evidence (reproducer) and fix changes (fixer)
   - Pre-verification: check modified files contain expected changes
   - Navigate to same URL: mcp__chrome-devtools__navigate_page
   - Execute EXACT same reproduction/test steps
   - Capture post-fix evidence:
     - Screenshot: take_screenshot({ filePath: "<session>/evidence/verify-screenshot.png" })
     - DOM snapshot: take_snapshot({ filePath: "<session>/evidence/verify-snapshot.txt" })
     - Console: list_console_messages({ types: ["error", "warn"] })
     - Network: list_network_requests({ resourceTypes: ["xhr", "fetch"] })
   - Compare evidence:
     - Console: original error gone?
     - Network: failed request now succeeds?
     - Visual: expected rendering achieved?
     - New errors: any regression?
   - Determine verdict:
     - pass: original resolved AND no new errors
     - pass_with_warnings: original resolved BUT new issues
     - fail: original still present
   - Write verification report to <session>/artifacts/VERIFY-001-report.md
   - Set verdict in output
```
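The verdict rule in the verifier steps reduces to a pure function (a sketch; the inputs are derived from the evidence comparison above):

```javascript
// pass: original resolved AND no new errors
// pass_with_warnings: original resolved BUT new issues
// fail: original still present
function determineVerdict(originalResolved, newErrorCount) {
  if (!originalResolved) return 'fail'
  return newErrorCount > 0 ? 'pass_with_warnings' : 'pass'
}
```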

---

## Chrome DevTools MCP Reference

### Common Patterns

**Navigate and Wait**:
```
mcp__chrome-devtools__navigate_page({ type: "url", url: "<url>" })
mcp__chrome-devtools__wait_for({ text: ["<expected>"], timeout: 10000 })
```

**Find Element and Interact**:
```
mcp__chrome-devtools__take_snapshot() // Get uids
mcp__chrome-devtools__click({ uid: "<uid>" })
mcp__chrome-devtools__fill({ uid: "<uid>", value: "<value>" })
```

**Capture Evidence**:
```
mcp__chrome-devtools__take_screenshot({ filePath: "<path>" })
mcp__chrome-devtools__list_console_messages({ types: ["error", "warn"] })
mcp__chrome-devtools__list_network_requests({ resourceTypes: ["xhr", "fetch"] })
```

**Debug API Error**:
```
mcp__chrome-devtools__list_network_requests() // Find request
mcp__chrome-devtools__get_network_request({ reqid: <id> }) // Inspect details
```

---

## Quality Requirements

All agents must verify before reporting complete:

| Requirement | Criteria |
|-------------|----------|
| Files produced | Verify all claimed artifacts exist via Read |
| Evidence captured | All planned dimensions have evidence files |
| Findings accuracy | Findings reflect actual observations |
| Discovery sharing | At least 1 discovery shared to board |
| Error reporting | Non-empty error field if status is failed |
| Verdict set | verifier role sets verdict field |
| Issues count set | tester/analyzer roles set issues_count field |

---

## Placeholder Reference

| Placeholder | Resolved By | When |
|-------------|-------------|------|
| `<session-folder>` | Skill designer (Phase 1) | Literal path baked into instruction |
| `{id}` | spawn_agents_on_csv | Runtime from CSV row |
| `{title}` | spawn_agents_on_csv | Runtime from CSV row |
| `{description}` | spawn_agents_on_csv | Runtime from CSV row |
| `{role}` | spawn_agents_on_csv | Runtime from CSV row |
| `{pipeline_mode}` | spawn_agents_on_csv | Runtime from CSV row |
| `{base_url}` | spawn_agents_on_csv | Runtime from CSV row |
| `{evidence_dimensions}` | spawn_agents_on_csv | Runtime from CSV row |
| `{prev_context}` | spawn_agents_on_csv | Runtime from CSV row |

---

`.codex/skills/team-frontend-debug/schemas/tasks-schema.md` (new file, 198 lines)

# Team Frontend Debug -- CSV Schema

## Master CSV: tasks.csv

### Column Definitions

#### Input Columns (Set by Decomposer)

| Column | Type | Required | Description | Example |
|--------|------|----------|-------------|---------|
| `id` | string | Yes | Unique task identifier (PREFIX-NNN) | `"TEST-001"` |
| `title` | string | Yes | Short task title | `"Feature testing"` |
| `description` | string | Yes | Detailed task description (self-contained) with PURPOSE/TASK/CONTEXT/EXPECTED/CONSTRAINTS | `"PURPOSE: Test all features from list..."` |
| `role` | enum | Yes | Worker role: `tester`, `reproducer`, `analyzer`, `fixer`, `verifier` | `"tester"` |
| `pipeline_mode` | enum | Yes | Pipeline mode: `test-pipeline` or `debug-pipeline` | `"test-pipeline"` |
| `base_url` | string | No | Target URL for browser-based tasks | `"http://localhost:3000"` |
| `evidence_dimensions` | string | No | Semicolon-separated evidence types to collect | `"screenshot;console;network"` |
| `deps` | string | No | Semicolon-separated dependency task IDs | `"TEST-001"` |
| `context_from` | string | No | Semicolon-separated task IDs for context | `"TEST-001"` |
| `exec_mode` | enum | Yes | Execution mechanism: `csv-wave` or `interactive` | `"csv-wave"` |
|
||||
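
The list-valued fields (`deps`, `context_from`, `evidence_dimensions`) all use the same semicolon convention, so a single helper can parse them. This is a minimal sketch; the helper name is illustrative, not part of the schema.

```python
def split_list_field(value: str) -> list[str]:
    """Split a semicolon-separated CSV field (deps, context_from,
    evidence_dimensions) into a clean list. An empty or whitespace-only
    field yields an empty list rather than [''].
    """
    return [part.strip() for part in value.split(";") if part.strip()]

deps = split_list_field("TEST-001;ANALYZE-001")
empty = split_list_field("")
```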

#### Computed Columns (Set by Wave Engine)

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `wave` | integer | Wave number (1-based, from topological sort) | `2` |
| `prev_context` | string | Aggregated findings from context_from tasks (per-wave CSV only) | `"[TEST-001] Found 3 issues: 2 high, 1 medium..."` |

#### Output Columns (Set by Agent)

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `status` | enum | `pending` -> `completed` / `failed` / `skipped` | `"completed"` |
| `findings` | string | Key discoveries (max 500 chars) | `"Tested 5 features: 3 pass, 2 fail. BUG-001: TypeError on login. BUG-002: API 500 on save."` |
| `artifacts_produced` | string | Semicolon-separated paths of produced artifacts | `"artifacts/TEST-001-report.md;artifacts/TEST-001-issues.json"` |
| `issues_count` | string | Number of issues found (tester/analyzer only, empty for others) | `"2"` |
| `verdict` | string | Verification verdict: `pass`, `pass_with_warnings`, `fail` (verifier only) | `"pass"` |
| `error` | string | Error message if failed | `""` |

---

### exec_mode Values

| Value | Mechanism | Description |
|-------|-----------|-------------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot batch execution within a wave |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round individual execution |

Interactive tasks appear in the master CSV for dependency tracking but are NOT included in wave-{N}.csv files.

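
The partition rule above can be expressed as a filter when the wave engine writes each `wave-{N}.csv`: only `csv-wave` tasks for that wave are emitted. A minimal sketch with in-memory rows (field names trimmed for brevity; a real master CSV carries the full column set):

```python
import csv
import io

rows = [
    {"id": "TEST-001", "exec_mode": "csv-wave", "wave": "1"},
    {"id": "CONFIRM-001", "exec_mode": "interactive", "wave": "1"},
]

# Only csv-wave tasks of the current wave go into wave-1.csv;
# interactive tasks stay in the master CSV and are driven by
# spawn_agent/wait/send_input/close_agent instead.
wave_rows = [r for r in rows if r["exec_mode"] == "csv-wave" and r["wave"] == "1"]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "exec_mode", "wave"])
writer.writeheader()
writer.writerows(wave_rows)
wave_csv = buf.getvalue()
```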
---

### Role Prefixes

| Role | Prefix | Pipeline | Inner Loop |
|------|--------|----------|------------|
| tester | TEST | test-pipeline | Yes (iterates over features) |
| reproducer | REPRODUCE | debug-pipeline | No |
| analyzer | ANALYZE | both | No |
| fixer | FIX | both | Yes (may need multiple fix passes) |
| verifier | VERIFY | both | No |

---

### Example Data (Test Pipeline)

```csv
id,title,description,role,pipeline_mode,base_url,evidence_dimensions,deps,context_from,exec_mode,wave,status,findings,artifacts_produced,issues_count,verdict,error
"TEST-001","Feature testing","PURPOSE: Test all features from feature list and discover issues | Success: All features tested with pass/fail results\nTASK:\n- Parse feature list\n- Navigate to each feature URL using Chrome DevTools\n- Execute test scenarios (click, fill, hover)\n- Capture evidence: screenshots, console logs, network requests\n- Classify results: pass/fail/warning\nCONTEXT:\n- Session: .workflow/.csv-wave/tfd-login-test-20260308\n- Base URL: http://localhost:3000\n- Features: Login, Dashboard, Profile\nEXPECTED: artifacts/TEST-001-report.md + artifacts/TEST-001-issues.json\nCONSTRAINTS: Chrome DevTools MCP only | No code modifications","tester","test-pipeline","http://localhost:3000","screenshot;console;network","","","csv-wave","1","pending","","","","",""
"ANALYZE-001","Root cause analysis","PURPOSE: Analyze discovered issues to identify root causes | Success: RCA for each high/medium issue\nTASK:\n- Load test report and issues list\n- Analyze console errors, network failures, DOM anomalies\n- Map to source code locations\nCONTEXT:\n- Session: .workflow/.csv-wave/tfd-login-test-20260308\n- Upstream: artifacts/TEST-001-issues.json\nEXPECTED: artifacts/ANALYZE-001-rca.md","analyzer","test-pipeline","","console;network","TEST-001","TEST-001","csv-wave","2","pending","","","","",""
"FIX-001","Fix all issues","PURPOSE: Fix identified issues | Success: All high/medium issues resolved\nTASK:\n- Load RCA report\n- Locate and fix each root cause\n- Run syntax/type checks\nCONTEXT:\n- Session: .workflow/.csv-wave/tfd-login-test-20260308\n- Upstream: artifacts/ANALYZE-001-rca.md\nEXPECTED: Modified source files + artifacts/FIX-001-changes.md","fixer","test-pipeline","","","ANALYZE-001","ANALYZE-001","csv-wave","3","pending","","","","",""
"VERIFY-001","Verify fixes","PURPOSE: Re-test failed scenarios to verify fixes | Success: Previously failed scenarios now pass\nTASK:\n- Re-execute failed test scenarios\n- Capture evidence and compare\n- Report pass/fail per scenario\nCONTEXT:\n- Session: .workflow/.csv-wave/tfd-login-test-20260308\n- Original: artifacts/TEST-001-report.md\n- Fix: artifacts/FIX-001-changes.md\nEXPECTED: artifacts/VERIFY-001-report.md","verifier","test-pipeline","http://localhost:3000","screenshot;console;network","FIX-001","FIX-001;TEST-001","csv-wave","4","pending","","","","",""
```

### Example Data (Debug Pipeline)

```csv
id,title,description,role,pipeline_mode,base_url,evidence_dimensions,deps,context_from,exec_mode,wave,status,findings,artifacts_produced,issues_count,verdict,error
"REPRODUCE-001","Bug reproduction","PURPOSE: Reproduce bug and collect evidence | Success: Bug reproduced with artifacts\nTASK:\n- Navigate to target URL\n- Execute reproduction steps\n- Capture screenshots, snapshots, console logs, network\nCONTEXT:\n- Session: .workflow/.csv-wave/tfd-save-crash-20260308\n- Bug URL: http://localhost:3000/settings\n- Steps: 1. Click save 2. Observe white screen\nEXPECTED: evidence/ directory with all captures","reproducer","debug-pipeline","http://localhost:3000/settings","screenshot;console;network;snapshot","","","csv-wave","1","pending","","","","",""
"ANALYZE-001","Root cause analysis","PURPOSE: Analyze evidence to find root cause | Success: RCA with file:line location\nTASK:\n- Load evidence from reproducer\n- Analyze console errors and stack traces\n- Map to source code\nCONTEXT:\n- Session: .workflow/.csv-wave/tfd-save-crash-20260308\n- Upstream: evidence/\nEXPECTED: artifacts/ANALYZE-001-rca.md","analyzer","debug-pipeline","","","REPRODUCE-001","REPRODUCE-001","csv-wave","2","pending","","","","",""
"FIX-001","Code fix","PURPOSE: Fix the identified bug | Success: Root cause resolved\nTASK:\n- Load RCA report\n- Implement fix\n- Validate syntax\nCONTEXT:\n- Session: .workflow/.csv-wave/tfd-save-crash-20260308\n- Upstream: artifacts/ANALYZE-001-rca.md\nEXPECTED: Modified files + artifacts/FIX-001-changes.md","fixer","debug-pipeline","","","ANALYZE-001","ANALYZE-001","csv-wave","3","pending","","","","",""
"VERIFY-001","Fix verification","PURPOSE: Verify bug is fixed | Success: Original bug no longer reproduces\nTASK:\n- Same reproduction steps as REPRODUCE-001\n- Capture evidence and compare\n- Confirm resolution\nCONTEXT:\n- Session: .workflow/.csv-wave/tfd-save-crash-20260308\n- Original: evidence/\n- Fix: artifacts/FIX-001-changes.md\nEXPECTED: artifacts/VERIFY-001-report.md","verifier","debug-pipeline","http://localhost:3000/settings","screenshot;console;network;snapshot","FIX-001","FIX-001;REPRODUCE-001","csv-wave","4","pending","","","","",""
```

---

### Column Lifecycle

```
Decomposer (Phase 1)       Wave Engine (Phase 2)      Agent (Execution)
--------------------       ---------------------      -----------------
id                  ---->  id                  ---->  id
title               ---->  title               ---->  (reads)
description         ---->  description         ---->  (reads)
role                ---->  role                ---->  (reads)
pipeline_mode       ---->  pipeline_mode       ---->  (reads)
base_url            ---->  base_url            ---->  (reads)
evidence_dimensions ---->  evidence_dimensions ---->  (reads)
deps                ---->  deps                ---->  (reads)
context_from        ---->  context_from        ---->  (reads)
exec_mode           ---->  exec_mode           ---->  (reads)
                           wave                ---->  (reads)
                           prev_context        ---->  (reads)
                                                      status
                                                      findings
                                                      artifacts_produced
                                                      issues_count
                                                      verdict
                                                      error
```

---

## Output Schema (JSON)

Agents report results for csv-wave tasks via `report_agent_job_result`.

Tester output:

```json
{
  "id": "TEST-001",
  "status": "completed",
  "findings": "Tested 5 features: 3 pass, 2 fail. BUG-001: TypeError on login submit. BUG-002: API 500 on profile save.",
  "artifacts_produced": "artifacts/TEST-001-report.md;artifacts/TEST-001-issues.json",
  "issues_count": "2",
  "verdict": "",
  "error": ""
}
```

Verifier output:

```json
{
  "id": "VERIFY-001",
  "status": "completed",
  "findings": "Original bug resolved. Login error no longer appears. No new console errors. No new network failures.",
  "artifacts_produced": "artifacts/VERIFY-001-report.md",
  "issues_count": "",
  "verdict": "pass",
  "error": ""
}
```

Interactive tasks output structured text or JSON written to `interactive/{id}-result.json`.

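
The role-specific output rules (verdict only from the verifier, issues_count from tester/analyzer, a non-empty error on failure) can be checked when a result is ingested. A minimal sketch; it assumes the task's `role` is joined in from the CSV row, since the output JSON itself does not carry it:

```python
def validate_output(result: dict) -> list[str]:
    """Return a list of rule violations for one agent result.

    Rules checked (from the schema above):
    - verdict may only be set by the verifier role
    - tester/analyzer must set issues_count
    - a failed status requires a non-empty error message
    """
    errors = []
    role = result.get("role", "")
    if result.get("verdict") and role != "verifier":
        errors.append("verdict set by non-verifier role")
    if role in ("tester", "analyzer") and not result.get("issues_count"):
        errors.append("missing issues_count")
    if result.get("status") == "failed" and not result.get("error"):
        errors.append("failed status without error message")
    return errors

ok = validate_output({"role": "verifier", "status": "completed",
                      "verdict": "pass", "issues_count": "", "error": ""})
bad = validate_output({"role": "tester", "status": "failed",
                       "verdict": "", "issues_count": "", "error": ""})
```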
---

## Discovery Types

| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `feature_tested` | `data.feature` | `{feature, name, result, issues}` | Feature test result |
| `bug_reproduced` | `data.url` | `{url, steps, console_errors, network_failures}` | Bug reproduction result |
| `evidence_collected` | `data.dimension+data.file` | `{dimension, file, description}` | Evidence artifact saved |
| `root_cause_found` | `data.file+data.line` | `{category, file, line, confidence}` | Root cause identified |
| `file_modified` | `data.file` | `{file, change, lines_added}` | Code fix applied |
| `verification_result` | `data.verdict` | `{verdict, original_error_resolved, new_errors}` | Fix verification |
| `issue_found` | `data.file+data.line` | `{file, line, severity, description}` | Issue discovered |

### Discovery NDJSON Format

```jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"TEST-001","type":"feature_tested","data":{"feature":"F-001","name":"Login","result":"fail","issues":1}}
{"ts":"2026-03-08T10:05:00Z","worker":"REPRODUCE-001","type":"bug_reproduced","data":{"url":"/settings","steps":3,"console_errors":2,"network_failures":0}}
{"ts":"2026-03-08T10:10:00Z","worker":"ANALYZE-001","type":"root_cause_found","data":{"category":"TypeError","file":"src/components/Settings.tsx","line":142,"confidence":"high"}}
{"ts":"2026-03-08T10:15:00Z","worker":"FIX-001","type":"file_modified","data":{"file":"src/components/Settings.tsx","change":"Added null check for user object","lines_added":3}}
```

> Both csv-wave and interactive agents read and write the same discoveries.ndjson file.

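
Deduplication against the per-type keys in the table above can be done in one pass over the NDJSON lines, keeping the first record for each `(type, key fields)` combination. A minimal sketch under that assumption:

```python
import json

# Per-type dedup key fields, transcribed from the Discovery Types table.
DEDUP_KEYS = {
    "feature_tested": ("feature",),
    "bug_reproduced": ("url",),
    "evidence_collected": ("dimension", "file"),
    "root_cause_found": ("file", "line"),
    "file_modified": ("file",),
    "verification_result": ("verdict",),
    "issue_found": ("file", "line"),
}

def dedup_discoveries(ndjson_lines: list[str]) -> list[dict]:
    """Keep the first discovery per (type, dedup-key) combination."""
    seen = set()
    kept = []
    for line in ndjson_lines:
        rec = json.loads(line)
        fields = DEDUP_KEYS.get(rec["type"], ())
        key = (rec["type"],) + tuple(str(rec["data"].get(f)) for f in fields)
        if key not in seen:
            seen.add(key)
            kept.append(rec)
    return kept

lines = [
    '{"ts":"t1","worker":"TEST-001","type":"feature_tested","data":{"feature":"F-001","result":"fail"}}',
    '{"ts":"t2","worker":"TEST-002","type":"feature_tested","data":{"feature":"F-001","result":"fail"}}',
    '{"ts":"t3","worker":"TEST-001","type":"feature_tested","data":{"feature":"F-002","result":"pass"}}',
]
unique = dedup_discoveries(lines)
```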
---

## Cross-Mechanism Context Flow

| Source | Target | Mechanism |
|--------|--------|-----------|
| CSV task findings | Interactive task | Injected via spawn message or send_input |
| Interactive task result | CSV task prev_context | Read from interactive/{id}-result.json |
| Any agent discovery | Any agent | Shared via discoveries.ndjson |

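
Building a task's `prev_context` amounts to collecting the `findings` of each `context_from` task and prefixing each with its task ID, as in the `"[TEST-001] Found 3 issues..."` example. The ` | ` separator below is an assumption for illustration; any delimiter that keeps entries distinguishable works.

```python
def build_prev_context(task: dict, completed: dict[str, dict]) -> str:
    """Aggregate findings from a task's context_from sources.

    `completed` maps task IDs to their result rows; sources without
    findings (or not yet completed) are skipped silently.
    """
    parts = []
    for src_id in [s for s in task.get("context_from", "").split(";") if s]:
        src = completed.get(src_id)
        if src and src.get("findings"):
            parts.append(f"[{src_id}] {src['findings']}")
    return " | ".join(parts)

ctx = build_prev_context(
    {"id": "VERIFY-001", "context_from": "FIX-001;TEST-001"},
    {"FIX-001": {"findings": "2 files patched"},
     "TEST-001": {"findings": "2 issues found"}},
)
```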
---

## Validation Rules

| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" |
| context_from valid | All context IDs exist and are in earlier waves | "Invalid context_from: {id}" |
| exec_mode valid | Value is `csv-wave` or `interactive` | "Invalid exec_mode: {value}" |
| Description non-empty | Every task has a description | "Empty description for task: {id}" |
| Status enum | status in {pending, completed, failed, skipped} | "Invalid status: {status}" |
| Role valid | role in {tester, reproducer, analyzer, fixer, verifier} | "Invalid role: {role}" |
| Pipeline mode valid | pipeline_mode in {test-pipeline, debug-pipeline} | "Invalid pipeline_mode: {mode}" |
| Verdict valid | verdict in {pass, pass_with_warnings, fail, ""} | "Invalid verdict: {verdict}" |
| Base URL for browser tasks | tester/reproducer/verifier have non-empty base_url | "Missing base_url for browser task: {id}" |
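
Several of these rules are straightforward set checks over the parsed tasks. The sketch below covers a subset (unique IDs, dep existence, self-deps, role/exec_mode enums, non-empty descriptions); cycle detection and wave-ordering checks would layer on top via the topological sort. Function and variable names are illustrative.

```python
VALID_ROLES = {"tester", "reproducer", "analyzer", "fixer", "verifier"}
VALID_EXEC = {"csv-wave", "interactive"}

def validate_tasks(tasks: list[dict]) -> list[str]:
    """Apply a subset of the schema's validation rules, returning
    error strings in the format given by the Validation Rules table."""
    errors = []
    ids = [t["id"] for t in tasks]
    known = set(ids)
    for i in known:
        if ids.count(i) > 1:
            errors.append(f"Duplicate task ID: {i}")
    for t in tasks:
        for d in [x for x in t.get("deps", "").split(";") if x]:
            if d == t["id"]:
                errors.append(f"Self-dependency: {t['id']}")
            elif d not in known:
                errors.append(f"Unknown dependency: {d}")
        if t.get("role") not in VALID_ROLES:
            errors.append(f"Invalid role: {t.get('role')}")
        if t.get("exec_mode") not in VALID_EXEC:
            errors.append(f"Invalid exec_mode: {t.get('exec_mode')}")
        if not t.get("description"):
            errors.append(f"Empty description for task: {t['id']}")
    return errors

errs = validate_tasks([
    {"id": "TEST-001", "deps": "", "role": "tester",
     "exec_mode": "csv-wave", "description": "PURPOSE: ..."},
    {"id": "FIX-001", "deps": "GHOST-001", "role": "fixer",
     "exec_mode": "csv-wave", "description": "PURPOSE: ..."},
])
```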