| role | prefix | inner_loop | additional_prefixes | message_types |
|---|---|---|---|---|
| executor | QARUN | dynamic | | |
# Test Executor
`inner_loop: dynamic`. Dispatch sets this per task: `true` for serial layer chains (discovery/testing mode), `false` for parallel layer execution (full mode, where QARUN-L1-001 and QARUN-L2-001 run independently). When `false`, each QARUN task gets its own worker.
Run test suites, collect coverage data, and perform automatic fix cycles when tests fail. Implements the execution side of the Generator-Executor (GC) loop.
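The per-task `inner_loop` decision above can be sketched as follows. This is an illustrative sketch only: the mode names, the `QarunTask` shape, and `assignInnerLoop` are assumptions for this example, not the actual Dispatch API.

```typescript
// Hypothetical sketch of per-task inner_loop assignment (not the real Dispatch API).
type Mode = "discovery" | "testing" | "full";

interface QarunTask {
  id: string;
  innerLoop: boolean; // true = chain serially in one worker, false = own worker
}

function assignInnerLoop(mode: Mode, taskIds: string[]): QarunTask[] {
  // Serial modes chain layers in a single worker; full mode fans layers out
  // so QARUN-L1-001 and QARUN-L2-001 can run independently.
  const serial = mode === "discovery" || mode === "testing";
  return taskIds.map((id) => ({ id, innerLoop: serial }));
}
```

Because the flag lives on the task rather than the role, two workers with the same role can run in parallel whenever their tasks have no mutual dependencies.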
## Phase 2: Environment Detection
| Input | Source | Required |
|---|---|---|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | `<session>/wisdom/.msg/meta.json` | Yes |
| Test strategy | meta.json -> test_strategy | Yes |
| Generated tests | meta.json -> generated_tests | Yes |
| Target layer | Task description layer: L1/L2/L3 | Yes |
- Extract session path and target layer from task description
- Load validation specs: run `ccw spec load --category validation` for verification rules and acceptance criteria
- Read `.msg/meta.json` for the strategy and generated test file list
- Detect test command by framework:
| Framework | Command |
|---|---|
| vitest | `npx vitest run --coverage --reporter=json --outputFile=test-results.json` |
| jest | `npx jest --coverage --json --outputFile=test-results.json` |
| pytest | `python -m pytest --cov --cov-report=json -v` |
| mocha | `npx mocha --reporter json > test-results.json` |
| unknown | `npm test -- --coverage` |
- Get test files from `generated_tests[targetLayer].files`
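The framework-to-command mapping above can be sketched as a simple lookup with the `npm test` fallback. `testCommand` is an illustrative helper name, not part of the actual tool; the commands themselves are taken verbatim from the table.

```typescript
// Framework -> test command mapping from the table above.
const TEST_COMMANDS: Record<string, string> = {
  vitest: "npx vitest run --coverage --reporter=json --outputFile=test-results.json",
  jest: "npx jest --coverage --json --outputFile=test-results.json",
  pytest: "python -m pytest --cov --cov-report=json -v",
  mocha: "npx mocha --reporter json > test-results.json",
};

function testCommand(framework: string): string {
  // Unknown frameworks fall back to plain npm test with coverage.
  return TEST_COMMANDS[framework] ?? "npm test -- --coverage";
}
```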
## Phase 3: Iterative Test-Fix Cycle
Max iterations: 5. Pass threshold: 95% or all tests pass.
Per iteration:
- Run test command, capture output
- Parse results: extract passed/failed counts, parse coverage from output or `coverage/coverage-summary.json`
- If all pass (0 failures) -> exit loop (success)
- If pass rate >= 95% and iteration >= 2 -> exit loop (good enough)
- If iteration >= MAX -> exit loop (report current state)
- Extract failure details (error lines, assertion failures)
- Delegate fix via CLI tool with constraints:
- ONLY modify test files, NEVER modify source code
- Fix: incorrect assertions, missing imports, wrong mocks, setup issues
- Do NOT: skip tests, add `@ts-ignore`, use `as any`
- Increment iteration, repeat
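The exit conditions above can be sketched as a single pure function; the thresholds come from the text, while `exitReason` and its return labels are illustrative names.

```typescript
// Sketch of the iteration exit logic: thresholds from the spec above.
const MAX_ITERATIONS = 5;
const PASS_THRESHOLD = 0.95;

type Exit = "success" | "good_enough" | "max_iterations" | null;

function exitReason(iteration: number, passed: number, failed: number): Exit {
  const total = passed + failed;
  const passRate = total === 0 ? 1 : passed / total;
  if (failed === 0) return "success";                        // all tests pass
  if (passRate >= PASS_THRESHOLD && iteration >= 2) return "good_enough";
  if (iteration >= MAX_ITERATIONS) return "max_iterations";  // report current state
  return null; // keep iterating: extract failures, delegate fix, re-run
}
```

Requiring at least two iterations before accepting a 95% pass rate prevents the loop from stopping on a near-miss without attempting a single fix cycle.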
## Phase 4: Result Analysis & Output
- Build result data: layer, framework, iterations, pass_rate, coverage, tests_passed, tests_failed, all_passed
- Save results to `<session>/results/run-<layer>.json`
- Save last test output to `<session>/results/output-<layer>.txt`
- Update `<session>/wisdom/.msg/meta.json` under `execution_results[layer]` and top-level `execution_results.pass_rate`, `execution_results.coverage`
- Message type: `tests_passed` if all_passed, else `tests_failed`
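The result payload and message-type choice can be sketched as follows. The field names are taken from the list above; the `RunResult` interface and the `buildResult`/`messageType` helpers are assumptions made for illustration.

```typescript
// Sketch of the per-layer result payload described above (field names from the spec).
interface RunResult {
  layer: string;
  framework: string;
  iterations: number;
  pass_rate: number;
  coverage: number;
  tests_passed: number;
  tests_failed: number;
  all_passed: boolean;
}

function buildResult(layer: string, framework: string, iterations: number,
                     passed: number, failed: number, coverage: number): RunResult {
  const total = passed + failed;
  return {
    layer, framework, iterations, coverage,
    pass_rate: total === 0 ? 1 : passed / total,
    tests_passed: passed,
    tests_failed: failed,
    all_passed: failed === 0,
  };
}

// The outgoing message type depends only on whether every test passed.
const messageType = (r: RunResult): string =>
  r.all_passed ? "tests_passed" : "tests_failed";
```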