mirror of https://github.com/catlog22/Claude-Code-Workflow.git (synced 2026-03-06 16:31:12 +08:00)
| prefix | inner_loop | message_types |
|---|---|---|
| VERIFY | false | verify_passed, fix_required, verify_failed |
Tester
Test validator: runs tests, drives fix cycles, and detects regressions.
Phase 2: Environment Detection
| Input | Source | Required |
|---|---|---|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Changed files | Git diff | Yes |
- Extract session path from task description
- Read .msg/meta.json for shared context
- Get changed files via git diff
- Detect test framework and command:
| Detection | Method |
|---|---|
| Test command | Check package.json scripts, pytest.ini, Makefile |
| Coverage tool | Check for nyc, coverage.py, jest --coverage config |
Common commands: npm test, pytest, go test ./..., cargo test
Phase 3: Execution + Fix Cycle
Iterative test-fix cycle (max 5 iterations):
| Step | Action |
|---|---|
| 1 | Run test command |
| 2 | Parse results, check pass rate |
| 3 | Pass rate >= 95% -> exit loop (success) |
| 4 | Extract failing test details |
| 5 | Apply fix using CLI tool |
| 6 | Increment iteration counter |
| 7 | iteration >= MAX (5) -> exit loop (report failures) |
| 8 | Go to Step 1 |
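The eight steps above reduce to a bounded loop. A minimal sketch, where `run_tests` and `apply_fix` are hypothetical callables standing in for the test command and the CLI fix delegation described below:

```python
def run_fix_cycle(run_tests, apply_fix, max_iterations=5):
    """Iterate test -> fix until pass rate >= 95% or the budget is spent.

    run_tests() -> (pass_rate: float, failures: list)
    apply_fix(failures) delegates fixing to the CLI tool.
    """
    iterations = 0
    while True:
        pass_rate, failures = run_tests()       # Steps 1-2: run and parse
        if pass_rate >= 0.95:                   # Step 3: success exit
            return {"passed": True, "iterations": iterations}
        apply_fix(failures)                     # Steps 4-5: extract and fix
        iterations += 1                         # Step 6: count the attempt
        if iterations >= max_iterations:        # Step 7: budget exhausted
            return {"passed": False, "iterations": iterations,
                    "failures": failures}
        # Step 8: loop back to Step 1
```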
Fix delegation: use the CLI tool to fix failing tests:
ccw cli -p "PURPOSE: Fix failing tests; success = all listed tests pass
TASK: • Analyze test failure output • Identify root cause in changed files • Apply minimal fix
MODE: write
CONTEXT: @<changed-files> | Memory: Test output from current iteration
EXPECTED: Code fixes that make failing tests pass without breaking other tests
CONSTRAINTS: Only modify files in changed list | Minimal changes
Test output: <test-failure-details>
Changed files: <file-list>" --tool gemini --mode write --rule development-debug-runtime-issues
Wait for CLI completion before re-running tests.
Phase 4: Regression Check + Report
- Run full test suite for regression:
<test-command> --all
| Check | Method | Pass Criteria |
|---|---|---|
| Regression | Run full test suite | No FAIL in output |
| Coverage | Run coverage tool | >= 80% (if configured) |
- Write verification results to <session>/verify/verify-<num>.json:
  verify_id, pass_rate, iterations, passed, timestamp, regression_passed
- Determine message type:
| Condition | Message Type |
|---|---|
| passRate >= 0.95 | verify_passed |
| passRate < 0.95 && iterations >= MAX | fix_required |
| passRate < 0.95 | verify_failed |
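The condition table above is order-sensitive (the iteration check must precede the plain failure case). A minimal sketch of the same mapping:

```python
MAX_ITERATIONS = 5

def message_type(pass_rate: float, iterations: int) -> str:
    """Map pass rate and iteration count to a message type."""
    if pass_rate >= 0.95:
        return "verify_passed"
    if iterations >= MAX_ITERATIONS:
        return "fix_required"   # budget exhausted, escalate
    return "verify_failed"
```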
- Update .msg/meta.json with test_patterns entry
- Write discoveries to wisdom/issues.md
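The Phase 4 verification record can be written as below. A minimal sketch, assuming the field list given above; the `passed` derivation and the ISO-8601 timestamp format are assumptions, not specified by the workflow:

```python
import json
import time
from pathlib import Path

def write_verify_record(session: Path, num: int, pass_rate: float,
                        iterations: int, regression_passed: bool) -> Path:
    """Write <session>/verify/verify-<num>.json with the Phase 4 fields."""
    record = {
        "verify_id": f"verify-{num}",
        "pass_rate": pass_rate,
        "iterations": iterations,
        # Assumption: overall pass requires the 95% gate and no regression
        "passed": pass_rate >= 0.95 and regression_passed,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "regression_passed": regression_passed,
    }
    out = session / "verify" / f"verify-{num}.json"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(record, indent=2))
    return out
```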