| prefix | inner_loop | additional_prefixes | subagents | message_types |
|---|---|---|---|---|
| QARUN | true | QARUN-gc | | success: `tests_passed`, failure: `tests_failed`, coverage: `coverage_report`, error: `error` |

# Test Executor

Runs test suites, collects coverage data, and performs automatic fix cycles when tests fail. Implements the execution side of the Generator-Executor (GC) loop.

## Phase 2: Environment Detection

| Input | Source | Required |
|---|---|---|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| `.msg/meta.json` | `<session>/wisdom/.msg/meta.json` | Yes |
| Test strategy | `meta.json` → `test_strategy` | Yes |
| Generated tests | `meta.json` → `generated_tests` | Yes |
| Target layer | task description `layer: L1/L2/L3` | Yes |
1. Extract session path and target layer from task description
2. Read `.msg/meta.json` for strategy and generated test file list
3. Detect test command by framework:

   | Framework | Command |
   |---|---|
   | vitest | `npx vitest run --coverage --reporter=json --outputFile=test-results.json` |
   | jest | `npx jest --coverage --json --outputFile=test-results.json` |
   | pytest | `python -m pytest --cov --cov-report=json -v` |
   | mocha | `npx mocha --reporter json > test-results.json` |
   | unknown | `npm test -- --coverage` |

4. Get test files from `generated_tests[targetLayer].files`
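The detection step can be sketched as follows. This is a minimal illustration, assuming the framework is inferred from `package.json` dependencies or the presence of pytest config files; the helper names and file checks here are hypothetical, not part of the spec.

```python
import json
import os

# Command table mirroring the framework table above
TEST_COMMANDS = {
    "vitest": "npx vitest run --coverage --reporter=json --outputFile=test-results.json",
    "jest": "npx jest --coverage --json --outputFile=test-results.json",
    "pytest": "python -m pytest --cov --cov-report=json -v",
    "mocha": "npx mocha --reporter json > test-results.json",
    "unknown": "npm test -- --coverage",
}

def detect_framework(project_root: str) -> str:
    """Guess the test framework from project files (heuristic sketch)."""
    pkg_path = os.path.join(project_root, "package.json")
    if os.path.exists(pkg_path):
        with open(pkg_path) as f:
            pkg = json.load(f)
        deps = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}
        for fw in ("vitest", "jest", "mocha"):
            if fw in deps:
                return fw
    # Python projects: look for common pytest markers
    if os.path.exists(os.path.join(project_root, "pytest.ini")) or \
       os.path.exists(os.path.join(project_root, "conftest.py")):
        return "pytest"
    return "unknown"
```

A caller would then look up `TEST_COMMANDS[detect_framework(root)]` to get the command to run.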

## Phase 3: Iterative Test-Fix Cycle

Max iterations: 5. Pass threshold: 95% pass rate, or all tests passing.

Per iteration:

1. Run the test command and capture output
2. Parse results: extract passed/failed counts; parse coverage from output or `coverage/coverage-summary.json`
3. If all tests pass (0 failures), exit loop (success)
4. If pass rate >= 95% and iteration >= 2, exit loop (good enough)
5. If iteration >= MAX, exit loop (report current state)
6. Extract failure details (error lines, assertion failures)
7. Delegate the fix to the code-developer subagent with constraints:
   - ONLY modify test files, NEVER modify source code
   - Fix: incorrect assertions, missing imports, wrong mocks, setup issues
   - Do NOT: skip tests, add `@ts-ignore`, use `as any`
8. Increment iteration and repeat

## Phase 4: Result Analysis & Output

1. Build result data: layer, framework, iterations, pass_rate, coverage, tests_passed, tests_failed, all_passed
2. Save results to `<session>/results/run-<layer>.json`
3. Save the last test output to `<session>/results/output-<layer>.txt`
4. Update `<session>/wisdom/.msg/meta.json` under `execution_results[layer]` and top-level `execution_results.pass_rate`, `execution_results.coverage`
5. Message type: `tests_passed` if `all_passed`, else `tests_failed`
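The output steps above can be sketched as one helper. Paths and field names follow the spec; the function itself (`save_results`) is an illustrative assumption, not part of the role's defined interface.

```python
import json
import os

def save_results(session: str, layer: str, result: dict, raw_output: str) -> str:
    """Persist run results and update meta.json; return the message type (sketch)."""
    results_dir = os.path.join(session, "results")
    os.makedirs(results_dir, exist_ok=True)

    # Step 2: <session>/results/run-<layer>.json
    with open(os.path.join(results_dir, f"run-{layer}.json"), "w") as f:
        json.dump(result, f, indent=2)

    # Step 3: <session>/results/output-<layer>.txt
    with open(os.path.join(results_dir, f"output-{layer}.txt"), "w") as f:
        f.write(raw_output)

    # Step 4: update wisdom/.msg/meta.json under execution_results
    meta_path = os.path.join(session, "wisdom", ".msg", "meta.json")
    meta = {}
    if os.path.exists(meta_path):
        with open(meta_path) as f:
            meta = json.load(f)
    exec_results = meta.setdefault("execution_results", {})
    exec_results[layer] = result
    exec_results["pass_rate"] = result["pass_rate"]
    exec_results["coverage"] = result.get("coverage")
    os.makedirs(os.path.dirname(meta_path), exist_ok=True)
    with open(meta_path, "w") as f:
        json.dump(meta, f, indent=2)

    # Step 5: message type depends on all_passed
    return "tests_passed" if result.get("all_passed") else "tests_failed"
```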