
---
prefix: TESTRUN
inner_loop: true
message_types:
  success: tests_passed
  failure: tests_failed
  coverage: coverage_report
  error: error
---

# Test Executor

Execute tests, collect coverage, and attempt auto-fixes for failures. Acts as the Critic in the Generator-Critic loop, reporting pass rate and coverage for the coordinator's GC decisions.

## Phase 2: Context Loading

| Input | Source | Required |
| --- | --- | --- |
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| Test directory | Task description (`Input:`) | Yes |
| Coverage target | Task description (default: 80%) | Yes |
| `.msg/meta.json` | `<session>/wisdom/.msg/meta.json` | No |
  1. Extract session path and test directory from task description
  2. Extract coverage target (default: 80%)
  3. Read .msg/meta.json for framework info (from strategist namespace)
  4. Determine test framework:
| Framework | Run Command |
| --- | --- |
| Jest | `npx jest --coverage --json --outputFile=<session>/results/jest-output.json` |
| Pytest | `python -m pytest --cov --cov-report=json:<session>/results/coverage.json -v` |
| Vitest | `npx vitest run --coverage --reporter=json` |
  5. Find test files to execute:

```
Glob("<session>/<test-dir>/**/*")
```
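Steps 3–4 above can be sketched as a small helper. This is an illustrative sketch, not the skill's actual implementation: the `strategist.framework` key inside `meta.json` is an assumption, and the run-command table is copied from the spec.

```python
import json
from pathlib import Path

# Run commands per framework, as listed in the table above.
# "{session}" is a placeholder for the session path.
RUN_COMMANDS = {
    "jest": "npx jest --coverage --json --outputFile={session}/results/jest-output.json",
    "pytest": "python -m pytest --cov --cov-report=json:{session}/results/coverage.json -v",
    "vitest": "npx vitest run --coverage --reporter=json",
}

def detect_framework(session: str) -> str:
    """Read the strategist namespace of meta.json if present; fall back to jest."""
    meta_path = Path(session) / "wisdom" / ".msg" / "meta.json"
    if meta_path.exists():
        meta = json.loads(meta_path.read_text())
        framework = meta.get("strategist", {}).get("framework")  # assumed key
        if framework in RUN_COMMANDS:
            return framework
    return "jest"  # assumed default when no metadata is available
```

The fallback default is a guess; the spec only says the framework comes from the strategist namespace.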

## Phase 3: Test Execution + Fix Cycle

Iterative test-fix cycle (max 3 iterations):

| Step | Action |
| --- | --- |
| 1 | Run test command |
| 2 | Parse results: pass rate + coverage |
| 3 | If `pass_rate >= 0.95` AND `coverage >= target` → success, exit |
| 4 | Extract failing test details |
| 5 | Delegate fix to CLI tool (gemini write mode) |
| 6 | Increment iteration; if iteration >= 3 → exit with failures |

```
Bash("<test-command> 2>&1 || true")
```

Auto-fix delegation (on failure):

```
Bash({
  command: `ccw cli -p "PURPOSE: Fix test failures to achieve pass rate >= 0.95; success = all tests pass
TASK: • Analyze test failure output • Identify root causes • Fix test code only (not source) • Preserve test intent
MODE: write
CONTEXT: @<session>/<test-dir>/**/* | Memory: Test framework: <framework>, iteration <N>/3
EXPECTED: Fixed test files with: corrected assertions, proper async handling, fixed imports, maintained coverage
CONSTRAINTS: Only modify test files | Preserve test structure | No source code changes
Test failures:
<test-output>" --tool gemini --mode write --cd <session>`,
  run_in_background: false
})
```

Save results to `<session>/results/run-<N>.json`.
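For step 2 of the cycle, parsing Jest's `--json` output is straightforward: the report includes `numTotalTests` and `numPassedTests` counters. A minimal sketch (coverage extraction is omitted, since Jest's coverage map needs separate handling):

```python
import json

def parse_jest_pass_rate(path: str) -> float:
    """Compute pass rate from a Jest --json output file."""
    with open(path) as f:
        data = json.load(f)
    total = data.get("numTotalTests", 0)
    return data.get("numPassedTests", 0) / total if total else 0.0
```

For Pytest, the equivalent numbers would come from the `--cov-report=json` coverage file plus the test summary, with a different schema.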

## Phase 4: Defect Pattern Extraction & State Update

Extract defect patterns from failures:

| Pattern Type | Detection Keywords |
| --- | --- |
| Null reference | "null", "undefined", "Cannot read property" |
| Async timing | "timeout", "async", "await", "promise" |
| Import errors | "Cannot find module", "import" |
| Type mismatches | "type", "expected", "received" |

Record effective test patterns (if `pass_rate > 0.8`):

| Pattern | Detection |
| --- | --- |
| Happy path | "should succeed", "valid input" |
| Edge cases | "edge", "boundary", "limit" |
| Error handling | "should fail", "error", "throw" |
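Both tables describe the same keyword-matching technique. A sketch of how the extraction could work — the pattern names and the matcher are illustrative, only the keywords come from the tables above:

```python
# Keyword maps transcribed from the two tables above.
DEFECT_KEYWORDS = {
    "null_reference": ["null", "undefined", "cannot read property"],
    "async_timing": ["timeout", "async", "await", "promise"],
    "import_error": ["cannot find module", "import"],
    "type_mismatch": ["type", "expected", "received"],
}

EFFECTIVE_KEYWORDS = {
    "happy_path": ["should succeed", "valid input"],
    "edge_case": ["edge", "boundary", "limit"],
    "error_handling": ["should fail", "error", "throw"],
}

def match_patterns(text, keyword_map):
    """Return every pattern whose keywords appear in the text (case-insensitive)."""
    lowered = text.lower()
    return [name for name, keywords in keyword_map.items()
            if any(kw in lowered for kw in keywords)]
```

A failure message can match several patterns at once (e.g. a `TypeError` about an undefined property hits both null-reference and type keywords); recording all matches is fine since these feed aggregate statistics, not exclusive labels.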

Update `<session>/wisdom/.msg/meta.json` under the `executor` namespace:

  • Merge `{ "executor": { pass_rate, coverage, defect_patterns, effective_patterns, coverage_history_entry } }`
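The merge can be sketched as a read-merge-write over `meta.json`. This is an assumption about the merge semantics — in particular, appending `coverage_history_entry` to a `coverage_history` list rather than overwriting it is a guess from the field's name:

```python
import json
from pathlib import Path

def update_executor_namespace(session, results):
    """Merge results into the executor namespace of <session>/wisdom/.msg/meta.json."""
    path = Path(session) / "wisdom" / ".msg" / "meta.json"
    meta = json.loads(path.read_text()) if path.exists() else {}
    executor = meta.setdefault("executor", {})
    # Assumed: history entries accumulate across runs instead of replacing each other.
    entry = results.pop("coverage_history_entry", None)
    if entry is not None:
        executor.setdefault("coverage_history", []).append(entry)
    executor.update(results)  # pass_rate, coverage, defect_patterns, effective_patterns
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(meta, indent=2))
    return meta
```

Merging into the namespace (rather than rewriting the whole file) keeps the strategist's keys intact for other roles.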