Claude-Code-Workflow/.claude/skills/team-quality-assurance/roles/executor/role.md
catlog22 56c0429412 fix: enable dynamic inner_loop for parallel task execution across 6 team skills
- team-lifecycle-v4: executor inner_loop true→dynamic, add dynamicImplDispatch for PLAN-001 callback
- team-testing: executor inner_loop true→dynamic for comprehensive pipeline parallel TESTRUN
- team-quality-assurance: executor inner_loop true→dynamic for full mode parallel QARUN
- team-perf-opt: optimizer inner_loop true→dynamic for fan-out/independent parallel IMPL branches
- team-arch-opt: refactorer inner_loop true→dynamic for fan-out/independent parallel REFACTOR branches
- team-coordinate: fix dependency_graph schema mismatch, needs_research sequencing,
  handleSpawnNext role→task level check, add output_tag to template, precise inner_loop rules

All handleSpawnNext implementations now read the task description's `InnerLoop:` field instead of the role-level default, enabling same-role workers to run in parallel when their tasks have no mutual dependencies.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-29 21:37:35 +08:00


| role | prefix | inner_loop | additional_prefixes | message_types |
| --- | --- | --- | --- | --- |
| executor | QARUN | dynamic | QARUN-gc | success: `tests_passed`, failure: `tests_failed`, coverage: `coverage_report`, error: `error` |

# Test Executor

inner_loop: dynamic — Dispatch sets per-task: true for serial layer chains (discovery/testing mode), false for parallel layer execution (full mode where QARUN-L1-001 and QARUN-L2-001 run independently). When false, each QARUN task gets its own worker.
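The dispatch-side decision can be sketched as follows. This is a minimal illustration, not the skill's actual code: the function names, the `QAMode` type, and the task-description format are assumptions based on the mode names and `InnerLoop:` field mentioned in this doc.

```typescript
// Hypothetical sketch of the per-task inner_loop decision described above.
type QAMode = "discovery" | "testing" | "full";

function resolveInnerLoop(mode: QAMode): boolean {
  // Serial layer chains keep one worker looping over tasks (inner_loop: true);
  // full mode fans QARUN layer tasks out to independent workers (inner_loop: false).
  return mode !== "full";
}

// Each spawned task then carries its own "InnerLoop:" field in the description,
// which handleSpawnNext reads instead of the role-level default:
function taskDescription(layer: "L1" | "L2" | "L3", mode: QAMode): string {
  return `layer: ${layer}\nInnerLoop: ${resolveInnerLoop(mode)}`;
}
```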

Run test suites, collect coverage data, and perform automatic fix cycles when tests fail. Implements the execution side of the Generator-Executor (GC) loop.

## Phase 2: Environment Detection

| Input | Source | Required |
| --- | --- | --- |
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| `.msg/meta.json` | `<session>/wisdom/.msg/meta.json` | Yes |
| Test strategy | `meta.json` -> `test_strategy` | Yes |
| Generated tests | `meta.json` -> `generated_tests` | Yes |
| Target layer | task description `layer: L1/L2/L3` | Yes |
1. Extract session path and target layer from the task description
2. Load validation specs: run `ccw spec load --category validation` for verification rules and acceptance criteria
3. Read `.msg/meta.json` for the strategy and generated test file list
4. Detect the test command by framework:

   | Framework | Command |
   | --- | --- |
   | vitest | `npx vitest run --coverage --reporter=json --outputFile=test-results.json` |
   | jest | `npx jest --coverage --json --outputFile=test-results.json` |
   | pytest | `python -m pytest --cov --cov-report=json -v` |
   | mocha | `npx mocha --reporter json > test-results.json` |
   | unknown | `npm test -- --coverage` |

5. Get test files from `generated_tests[targetLayer].files`
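Step 4 can be sketched as a lookup table plus a detection pass. This is illustrative only: it assumes JS frameworks are found in `package.json` devDependencies (pytest would need a separate check, e.g. for `pytest.ini`), and the function name is hypothetical.

```typescript
// Commands copied from the framework table above.
const COMMANDS: Record<string, string> = {
  vitest: "npx vitest run --coverage --reporter=json --outputFile=test-results.json",
  jest: "npx jest --coverage --json --outputFile=test-results.json",
  pytest: "python -m pytest --cov --cov-report=json -v",
  mocha: "npx mocha --reporter json > test-results.json",
};
const FALLBACK = "npm test -- --coverage"; // "unknown" row

// Hypothetical detector: pick the first known JS framework in devDependencies.
function detectTestCommand(devDeps: Record<string, string>): string {
  for (const framework of ["vitest", "jest", "mocha"]) {
    if (framework in devDeps) return COMMANDS[framework];
  }
  return FALLBACK;
}
```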

## Phase 3: Iterative Test-Fix Cycle

Max iterations: 5. Pass threshold: 95% or all tests pass.

Per iteration:

1. Run the test command and capture output
2. Parse results: extract passed/failed counts; parse coverage from output or `coverage/coverage-summary.json`
3. If all tests pass (0 failures) -> exit loop (success)
4. If pass rate >= 95% and iteration >= 2 -> exit loop (good enough)
5. If iteration >= MAX -> exit loop (report current state)
6. Extract failure details (error lines, assertion failures)
7. Delegate the fix via the CLI tool with constraints:
   - ONLY modify test files; NEVER modify source code
   - Fix: incorrect assertions, missing imports, wrong mocks, setup issues
   - Do NOT: skip tests, add `@ts-ignore`, use `as any`
8. Increment iteration and repeat
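The loop's exit conditions can be condensed into one predicate. A minimal sketch of steps 3-5, assuming the constants from this phase; the `RunResult` shape and `shouldExit` name are illustrative, not the skill's API.

```typescript
interface RunResult {
  passed: number;
  failed: number;
}

const MAX_ITERATIONS = 5;      // "Max iterations: 5"
const PASS_THRESHOLD = 0.95;   // "Pass threshold: 95%"

// Returns true when the test-fix cycle should stop.
function shouldExit(result: RunResult, iteration: number): boolean {
  const total = result.passed + result.failed;
  const passRate = total === 0 ? 1 : result.passed / total;
  if (result.failed === 0) return true;                           // success
  if (passRate >= PASS_THRESHOLD && iteration >= 2) return true;  // good enough
  if (iteration >= MAX_ITERATIONS) return true;                   // report current state
  return false;
}
```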

## Phase 4: Result Analysis & Output

1. Build result data: layer, framework, iterations, pass_rate, coverage, tests_passed, tests_failed, all_passed
2. Save results to `<session>/results/run-<layer>.json`
3. Save the last test output to `<session>/results/output-<layer>.txt`
4. Update `<session>/wisdom/.msg/meta.json` under `execution_results[layer]` and at the top level `execution_results.pass_rate`, `execution_results.coverage`
5. Message type: `tests_passed` if `all_passed`, else `tests_failed`
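The result record and message-type selection above can be sketched as follows. Field names follow step 1; the `buildResult` builder itself is a hypothetical helper, not the role's actual code.

```typescript
interface ExecutionResult {
  layer: string;
  framework: string;
  iterations: number;
  pass_rate: number;
  coverage: number;
  tests_passed: number;
  tests_failed: number;
  all_passed: boolean;
}

// Assemble the record written to <session>/results/run-<layer>.json (step 1).
function buildResult(
  layer: string, framework: string, iterations: number,
  passed: number, failed: number, coverage: number,
): ExecutionResult {
  const total = passed + failed;
  return {
    layer, framework, iterations,
    pass_rate: total === 0 ? 1 : passed / total,
    coverage,
    tests_passed: passed,
    tests_failed: failed,
    all_passed: failed === 0,
  };
}

// Message type reported back to the coordinator (step 5).
const messageType = (r: ExecutionResult) =>
  r.all_passed ? "tests_passed" : "tests_failed";
```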