
| prefix | inner_loop | message_types |
|--------|------------|---------------|
| VERIFY | false | success: verify_passed, failure: verify_failed, fix: fix_required, error: error |

# Tester

Test validator. Handles test execution, fix cycles, and regression detection.

## Phase 2: Environment Detection

| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| `.msg/meta.json` | `<session>/.msg/meta.json` | Yes |
| Changed files | Git diff | Yes |
1. Extract session path from task description
2. Read `.msg/meta.json` for shared context
3. Get changed files via git diff
4. Detect test framework and command:

| Detection | Method |
|-----------|--------|
| Test command | Check package.json scripts, pytest.ini, Makefile |
| Coverage tool | Check for nyc, coverage.py, jest --coverage config |

Common commands: `npm test`, `pytest`, `go test ./...`, `cargo test`. A detection sketch follows below.
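A minimal detection sketch in Python, assuming a first-match priority order over the marker files named above (the helper name, the priority order, and the `make test` target are assumptions, not part of this spec):

```python
from pathlib import Path
import json

def detect_test_command(repo: Path) -> str | None:
    """Best-effort test command detection; first matching marker wins."""
    pkg = repo / "package.json"
    if pkg.exists():
        scripts = json.loads(pkg.read_text()).get("scripts", {})
        if "test" in scripts:
            return "npm test"
    if (repo / "pytest.ini").exists():
        return "pytest"
    if (repo / "go.mod").exists():
        return "go test ./..."
    if (repo / "Cargo.toml").exists():
        return "cargo test"
    if (repo / "Makefile").exists():
        return "make test"  # assumption: the Makefile defines a `test` target
    return None
```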

## Phase 3: Execution + Fix Cycle

Iterative test-fix cycle (max 5 iterations):

| Step | Action |
|------|--------|
| 1 | Run test command |
| 2 | Parse results, check pass rate |
| 3 | Pass rate >= 95% -> exit loop (success) |
| 4 | Extract failing test details |
| 5 | Apply fix using CLI tool |
| 6 | Increment iteration counter |
| 7 | iteration >= MAX (5) -> exit loop (report failures) |
| 8 | Go to Step 1 |
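A sketch of this loop in Python, mirroring the table step for step. `run_tests` (returning a parsed result with `pass_rate` and `failures`) and `delegate_fix` are hypothetical stand-ins for the detected test command and the CLI delegation described next:

```python
MAX_ITERATIONS = 5
PASS_THRESHOLD = 0.95

def test_fix_cycle(run_tests, delegate_fix) -> dict:
    """Run -> parse -> fix until the pass rate clears the bar or MAX is hit."""
    iteration = 0
    while True:
        result = run_tests()                        # Step 1
        if result.pass_rate >= PASS_THRESHOLD:      # Steps 2-3: success exit
            return {"passed": True,
                    "pass_rate": result.pass_rate,
                    "iterations": iteration}
        delegate_fix(result.failures)               # Steps 4-5: fix via CLI tool
        iteration += 1                              # Step 6
        if iteration >= MAX_ITERATIONS:             # Step 7: give up, report failures
            return {"passed": False,
                    "pass_rate": result.pass_rate,
                    "iterations": iteration}
        # Step 8: loop back to Step 1
```

Note that, exactly as in the table, the MAX check happens after the final fix but before a re-run, so the last fix attempt is verified by Phase 4's full-suite run rather than inside the loop.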

Fix delegation: use the CLI tool to fix failing tests:

ccw cli -p "PURPOSE: Fix failing tests; success = all listed tests pass
TASK: • Analyze test failure output • Identify root cause in changed files • Apply minimal fix
MODE: write
CONTEXT: @<changed-files> | Memory: Test output from current iteration
EXPECTED: Code fixes that make failing tests pass without breaking other tests
CONSTRAINTS: Only modify files in changed list | Minimal changes
Test output: <test-failure-details>
Changed files: <file-list>" --tool gemini --mode write --rule development-debug-runtime-issues

Wait for CLI completion before re-running tests.
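One way to drive this delegation programmatically (a sketch: the flags are taken verbatim from the command above, but the prompt assembly and the blocking wait via `subprocess.run` are assumptions about how a tester implementation might wrap `ccw`):

```python
import subprocess

def delegate_fix(failure_details: str, changed_files: list[str]) -> None:
    """Build the fix prompt and block until the CLI tool finishes."""
    prompt = (
        "PURPOSE: Fix failing tests; success = all listed tests pass\n"
        "TASK: • Analyze test failure output • Identify root cause in changed files"
        " • Apply minimal fix\n"
        "MODE: write\n"
        f"CONTEXT: @{' @'.join(changed_files)} | Memory: Test output from current iteration\n"
        "EXPECTED: Code fixes that make failing tests pass without breaking other tests\n"
        "CONSTRAINTS: Only modify files in changed list | Minimal changes\n"
        f"Test output: {failure_details}\n"
        f"Changed files: {', '.join(changed_files)}"
    )
    # subprocess.run blocks, which satisfies "wait for CLI completion
    # before re-running tests".
    subprocess.run(
        ["ccw", "cli", "-p", prompt,
         "--tool", "gemini", "--mode", "write",
         "--rule", "development-debug-runtime-issues"],
        check=True,
    )
```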

## Phase 4: Regression Check + Report

1. Run full test suite for regression: `<test-command> --all`

   | Check | Method | Pass Criteria |
   |-------|--------|---------------|
   | Regression | Run full test suite | No FAIL in output |
   | Coverage | Run coverage tool | >= 80% (if configured) |

2. Write verification results to `<session>/verify/verify-<num>.json`:
   - `verify_id`, `pass_rate`, `iterations`, `passed`, `timestamp`, `regression_passed`
3. Determine message type (first matching row wins, so the MAX condition takes precedence over plain failure; see the sketch after this list):

   | Condition | Message Type |
   |-----------|--------------|
   | passRate >= 0.95 | verify_passed |
   | passRate < 0.95 && iterations >= MAX | fix_required |
   | passRate < 0.95 | verify_failed |

4. Update `.msg/meta.json` with a test_patterns entry
5. Write discoveries to `wisdom/issues.md`
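A sketch of the reporting step under the rules above. The file layout, result fields, and message-type conditions follow this spec; the helper signature, the timestamp format, and the shape of the `test_patterns` entry in `meta.json` are assumptions:

```python
import json
import time
from pathlib import Path

MAX_ITERATIONS = 5

def message_type(pass_rate: float, iterations: int) -> str:
    """First matching condition wins: MAX is checked before plain failure."""
    if pass_rate >= 0.95:
        return "verify_passed"
    if iterations >= MAX_ITERATIONS:
        return "fix_required"
    return "verify_failed"

def write_report(session: Path, verify_num: int, pass_rate: float,
                 iterations: int, regression_passed: bool) -> str:
    # Verification result fields as listed in step 2.
    result = {
        "verify_id": f"verify-{verify_num}",
        "pass_rate": pass_rate,
        "iterations": iterations,
        "passed": pass_rate >= 0.95,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S%z"),
        "regression_passed": regression_passed,
    }
    out = session / "verify" / f"verify-{verify_num}.json"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(result, indent=2))

    # Assumption: test_patterns is a list the tester appends to in shared context.
    meta_path = session / ".msg" / "meta.json"
    meta = json.loads(meta_path.read_text())
    meta.setdefault("test_patterns", []).append(
        {"verify_id": result["verify_id"], "pass_rate": pass_rate}
    )
    meta_path.write_text(json.dumps(meta, indent=2))

    return message_type(pass_rate, iterations)
```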