Mirror of https://github.com/catlog22/Claude-Code-Workflow.git, synced 2026-03-05 16:13:08 +08:00
- Implemented the 'monitor' command for the coordinator role to handle monitoring events, task completion, and pipeline management.
- Created role specifications for the coordinator, detailing responsibilities, command execution protocols, and session management.
- Added role specifications for the analyst, discussant, explorer, and synthesizer in the ultra-analyze skill, defining their context loading, analysis, and synthesis processes.
2.6 KiB
| prefix | inner_loop | additional_prefixes | subagents | message_types |
|---|---|---|---|---|
| QARUN | true | | | |
# Test Executor

Run test suites, collect coverage data, and perform automatic fix cycles when tests fail. Implements the execution side of the Generator-Executor (GE) loop.
## Phase 2: Environment Detection
| Input | Source | Required |
|---|---|---|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | `<session>/wisdom/.msg/meta.json` | Yes |
| Test strategy | meta.json -> `test_strategy` | Yes |
| Generated tests | meta.json -> `generated_tests` | Yes |
| Target layer | Task description layer: L1/L2/L3 | Yes |
- Extract session path and target layer from task description
- Read .msg/meta.json for strategy and generated test file list
- Detect test command by framework:
| Framework | Command |
|---|---|
| vitest | `npx vitest run --coverage --reporter=json --outputFile=test-results.json` |
| jest | `npx jest --coverage --json --outputFile=test-results.json` |
| pytest | `python -m pytest --cov --cov-report=json -v` |
| mocha | `npx mocha --reporter json > test-results.json` |
| unknown | `npm test -- --coverage` |
- Get test files from `generated_tests[targetLayer].files`
## Phase 3: Iterative Test-Fix Cycle
Max iterations: 5. Pass threshold: 95% or all tests pass.
Per iteration:
- Run test command, capture output
- Parse results: extract passed/failed counts; parse coverage from output or `coverage/coverage-summary.json`
- If all tests pass (0 failures) -> exit loop (success)
- If pass rate >= 95% and iteration >= 2 -> exit loop (good enough)
- If iteration >= MAX -> exit loop (report current state)
- Extract failure details (error lines, assertion failures)
- Delegate fix to code-developer subagent with constraints:
- ONLY modify test files, NEVER modify source code
- Fix: incorrect assertions, missing imports, wrong mocks, setup issues
- Do NOT: skip tests, add `@ts-ignore`, or use `as any`
- Increment iteration, repeat
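The exit conditions above can be expressed as a single predicate. This is a minimal sketch of the loop-control logic using the thresholds stated in Phase 3 (5 iterations max, 95% pass rate); the function name and return shape are illustrative, not from the spec.

```python
MAX_ITERATIONS = 5
PASS_THRESHOLD = 0.95

def should_exit(iteration: int, passed: int, failed: int) -> tuple[bool, str]:
    """Decide whether to leave the test-fix loop, with the reason."""
    total = passed + failed
    pass_rate = passed / total if total else 0.0
    if failed == 0 and total > 0:
        return True, "success"          # all tests pass
    if pass_rate >= PASS_THRESHOLD and iteration >= 2:
        return True, "good enough"      # >= 95% after at least two iterations
    if iteration >= MAX_ITERATIONS:
        return True, "max iterations"   # report current state
    return False, "continue"
```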
## Phase 4: Result Analysis & Output
- Build result data: layer, framework, iterations, pass_rate, coverage, tests_passed, tests_failed, all_passed
- Save results to `<session>/results/run-<layer>.json`
- Save last test output to `<session>/results/output-<layer>.txt`
- Update `<session>/wisdom/.msg/meta.json` under `execution_results[layer]` and top-level `execution_results.pass_rate`, `execution_results.coverage`
- Message type: `tests_passed` if all_passed, else `tests_failed`
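The result record and message-type rule above can be sketched as follows. The field names and file path match the spec; the helper function names are assumptions for illustration.

```python
import json
from pathlib import Path

def build_result(layer: str, framework: str, iterations: int,
                 passed: int, failed: int, coverage: float) -> dict:
    """Assemble the Phase 4 result record (fields listed in the spec)."""
    total = passed + failed
    return {
        "layer": layer,
        "framework": framework,
        "iterations": iterations,
        "pass_rate": passed / total if total else 0.0,
        "coverage": coverage,
        "tests_passed": passed,
        "tests_failed": failed,
        "all_passed": failed == 0,
    }

def message_type(result: dict) -> str:
    """tests_passed if everything passed, else tests_failed."""
    return "tests_passed" if result["all_passed"] else "tests_failed"

def save_result(session: str, result: dict) -> Path:
    """Write the record to <session>/results/run-<layer>.json."""
    out = Path(session) / "results" / f"run-{result['layer']}.json"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(result, indent=2))
    return out
```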