Claude-Code-Workflow/.claude/skills/team-testing/role-specs/executor.md

| prefix | inner_loop |
|---------|------------|
| TESTRUN | true |

message_types:

| Outcome | Message type |
|----------|-----------------|
| success | tests_passed |
| failure | tests_failed |
| coverage | coverage_report |
| error | error |

# Test Executor

Executes tests, collects coverage, and attempts auto-fixes for failing tests. Acts as the Critic in the Generator-Critic (GC) loop, reporting pass rate and coverage to inform the coordinator's GC decisions.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| Test directory | Task description (`Input:` field) | Yes |
| Coverage target | Task description (default: 80%) | Yes |
| `.msg/meta.json` | `<session>/wisdom/.msg/meta.json` | No |
1. Extract session path and test directory from the task description.
2. Extract the coverage target (default: 80%).
3. Read `.msg/meta.json` for framework info (from the strategist namespace).
4. Determine the test framework:
| Framework | Run Command |
|-----------|-------------|
| Jest | `npx jest --coverage --json --outputFile=<session>/results/jest-output.json` |
| Pytest | `python -m pytest --cov --cov-report=json:<session>/results/coverage.json -v` |
| Vitest | `npx vitest run --coverage --reporter=json` |
5. Find test files to execute:

   ```
   Glob("<session>/<test-dir>/**/*")
   ```
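The framework lookup in steps 3–4 can be sketched as follows. This is a minimal sketch: the exact `meta.json` layout, the `strategist.framework` key, and the config-file fallback are assumptions, not confirmed by the spec.

```python
import json
from pathlib import Path

# Command templates mirroring the framework table above ({session} filled in later).
RUN_COMMANDS = {
    "jest": "npx jest --coverage --json --outputFile={session}/results/jest-output.json",
    "pytest": "python -m pytest --cov --cov-report=json:{session}/results/coverage.json -v",
    "vitest": "npx vitest run --coverage --reporter=json",
}

def detect_framework(session: str) -> str:
    """Read framework info from the strategist namespace; fall back to config probing."""
    meta_path = Path(session) / "wisdom" / ".msg" / "meta.json"
    if meta_path.exists():
        meta = json.loads(meta_path.read_text())
        framework = meta.get("strategist", {}).get("framework")
        if framework in RUN_COMMANDS:
            return framework
    # Fallback (assumed): infer from well-known config files in the session root.
    root = Path(session)
    if (root / "pytest.ini").exists() or (root / "pyproject.toml").exists():
        return "pytest"
    if (root / "vitest.config.ts").exists():
        return "vitest"
    return "jest"
```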

## Phase 3: Test Execution + Fix Cycle

Iterative test-fix cycle (max 3 iterations):

| Step | Action |
|------|--------|
| 1 | Run test command |
| 2 | Parse results: pass rate + coverage |
| 3 | If `pass_rate >= 0.95` AND `coverage >= target` -> success, exit |
| 4 | Extract failing test details |
| 5 | Delegate fix to `code-developer` subagent |
| 6 | Increment iteration; if `>= 3` -> exit with failures |
```
Bash("<test-command> 2>&1 || true")
```
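Step 2's result parsing can be sketched like this for Jest's `--json` output. `numTotalTests` and `numPassedTests` are documented Jest summary fields; passing coverage in separately is an assumption, since Jest writes the coverage report to its own files.

```python
import json

def parse_jest_results(path: str, coverage_pct: float) -> dict:
    """Compute pass rate from Jest's --json output file; coverage supplied separately."""
    with open(path) as f:
        report = json.load(f)
    total = report["numTotalTests"]
    passed = report["numPassedTests"]
    pass_rate = passed / total if total else 0.0
    return {"pass_rate": pass_rate, "coverage": coverage_pct}

def gc_decision(result: dict, coverage_target: float = 0.80) -> str:
    """Step 3 exit condition: pass_rate >= 0.95 AND coverage >= target."""
    if result["pass_rate"] >= 0.95 and result["coverage"] >= coverage_target:
        return "success"
    return "continue"
```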

Auto-fix delegation (on failure):

```
Bash({
  command: `ccw cli -p "PURPOSE: Fix test failures to achieve pass rate >= 0.95; success = all tests pass
TASK: • Analyze test failure output • Identify root causes • Fix test code only (not source) • Preserve test intent
MODE: write
CONTEXT: @<session>/<test-dir>/**/* | Memory: Test framework: <framework>, iteration <N>/3
EXPECTED: Fixed test files with: corrected assertions, proper async handling, fixed imports, maintained coverage
CONSTRAINTS: Only modify test files | Preserve test structure | No source code changes
Test failures:
<test-output>" --tool gemini --mode write --cd <session>`,
  run_in_background: false
})
```

Save results: `<session>/results/run-<N>.json`

## Phase 4: Defect Pattern Extraction & State Update

Extract defect patterns from failures:

| Pattern Type | Detection Keywords |
|--------------|--------------------|
| Null reference | "null", "undefined", "Cannot read property" |
| Async timing | "timeout", "async", "await", "promise" |
| Import errors | "Cannot find module", "import" |
| Type mismatches | "type", "expected", "received" |
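The keyword detection above can be sketched as a case-insensitive substring scan over failure messages. The pattern names and keywords come from the table; the matching strategy itself is an assumption.

```python
# Keyword lists taken from the defect-pattern table; names are illustrative.
DEFECT_PATTERNS = {
    "null_reference": ["null", "undefined", "Cannot read property"],
    "async_timing": ["timeout", "async", "await", "promise"],
    "import_error": ["Cannot find module", "import"],
    "type_mismatch": ["type", "expected", "received"],
}

def classify_failures(messages):
    """Return the set of defect pattern names whose keywords appear in any message."""
    found = set()
    for msg in messages:
        low = msg.lower()
        for name, keywords in DEFECT_PATTERNS.items():
            if any(kw.lower() in low for kw in keywords):
                found.add(name)
    return found
```

A message can match several patterns at once (e.g. a `TypeError` about `undefined` hits both null-reference and type-mismatch keywords), which is acceptable when the output is only steering later fixes.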

Record effective test patterns (if pass_rate > 0.8):

| Pattern | Detection |
|---------|-----------|
| Happy path | "should succeed", "valid input" |
| Edge cases | "edge", "boundary", "limit" |
| Error handling | "should fail", "error", "throw" |

Update `<session>/wisdom/.msg/meta.json` under the `executor` namespace:

- Merge `{ "executor": { pass_rate, coverage, defect_patterns, effective_patterns, coverage_history_entry } }`
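The namespace merge can be sketched as a read-modify-write that touches only the `executor` key, preserving other namespaces. Field names follow the bullet above; representing `coverage_history_entry` as an append to a `coverage_history` list is an assumption.

```python
import json
from pathlib import Path

def update_executor_namespace(session: str, pass_rate: float, coverage: float,
                              defect_patterns: list, effective_patterns: list) -> dict:
    """Merge executor results into <session>/wisdom/.msg/meta.json."""
    meta_path = Path(session) / "wisdom" / ".msg" / "meta.json"
    meta = json.loads(meta_path.read_text()) if meta_path.exists() else {}
    executor = meta.setdefault("executor", {})  # leave other namespaces untouched
    executor.update({
        "pass_rate": pass_rate,
        "coverage": coverage,
        "defect_patterns": defect_patterns,
        "effective_patterns": effective_patterns,
    })
    # Assumed shape: keep a running history of coverage values across runs.
    executor.setdefault("coverage_history", []).append(coverage)
    meta_path.parent.mkdir(parents=True, exist_ok=True)
    meta_path.write_text(json.dumps(meta, indent=2))
    return meta
```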