Claude-Code-Workflow/.claude/skills/team-testing/role-specs/generator.md
catlog22 26bda9c634 feat: Add coordinator commands and role specifications for UI design team
- Implemented the 'monitor' command for coordinator role to handle monitoring events, task completion, and pipeline management.
- Created role specifications for the coordinator, detailing responsibilities, command execution protocols, and session management.
- Added role specifications for the analyst, discussant, explorer, and synthesizer in the ultra-analyze skill, defining their context loading, analysis, and synthesis processes.
2026-03-03 23:35:41 +08:00


| prefix | inner_loop | message_types |
|--------|------------|---------------|
| TESTGEN | true | success, revision, error; tests_generated, tests_revised, error |

# Test Generator

Generate test code by layer (L1 unit / L2 integration / L3 E2E). Acts as the Generator in the Generator-Critic loop. Supports revision mode for GC loop iterations.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Task description | Task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| Test strategy | `<session>/strategy/test-strategy.md` | Yes |
| .msg/meta.json | `<session>/wisdom/.msg/meta.json` | No |
1. Extract session path and layer from task description
2. Read test strategy:

   ```
   Read("<session>/strategy/test-strategy.md")
   ```

3. Read source files to test (from strategy `priority_files`, limit 20)
4. Read `.msg/meta.json` for framework and scope context
5. Detect revision mode:

   | Condition | Mode |
   |-----------|------|
   | Task subject contains "fix" or "revised" | Revision (load previous failures) |
   | Otherwise | Fresh generation |

   For revision mode:

   - Read the latest result file for failure details
   - Read effective test patterns from `.msg/meta.json`

6. Read wisdom files if available
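Step 5's mode detection can be sketched as a plain substring check (a minimal sketch; `detectMode` and its return values are illustrative names, not part of the spec):

```javascript
// Decide generation mode from the task subject (step 5 above):
// "fix" or "revised" anywhere in the subject selects revision mode,
// which additionally loads previous failures and effective patterns.
function detectMode(taskSubject) {
  const subject = taskSubject.toLowerCase();
  return subject.includes("fix") || subject.includes("revised")
    ? "revision"
    : "fresh";
}
```

Note that a bare substring match also fires on words like "prefix"; a word-boundary regex is stricter if that matters.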

## Phase 3: Test Generation

Strategy selection by complexity:

| File Count | Strategy |
|------------|----------|
| <= 3 files | Direct: inline Write/Edit |
| 4-5 files | Single code-developer agent |
| > 5 files | Batch: group by module, one agent per batch |
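The thresholds above can be encoded directly (a sketch; `pickStrategy` and the returned labels are hypothetical):

```javascript
// Choose a generation strategy from the number of source files,
// using the thresholds from the table above.
function pickStrategy(fileCount) {
  if (fileCount <= 3) return "direct";       // inline Write/Edit
  if (fileCount <= 5) return "single-agent"; // one code-developer agent
  return "batch";                            // group by module, one agent per batch
}
```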

Direct generation (per source file):

1. Generate test path: `<session>/tests/<layer>/<test-file>`
2. Generate test code: happy path, edge cases, error handling
3. Write test file

Agent delegation (medium/high complexity):

```
Task({
  subagent_type: "code-developer",
  run_in_background: false,
  description: "Generate <layer> tests",
  prompt: "Generate <layer> tests using <framework>...
  <file-list-with-content>
  <if-revision: previous failures + effective patterns>
  Write test files to: <session>/tests/<layer>/"
})
```

Output verification:

```
Glob("<session>/tests/<layer>/**/*")
```

## Phase 4: Self-Validation & State Update

Validation checks:

| Check | Method | Action on Fail |
|-------|--------|----------------|
| Syntax | `tsc --noEmit` or equivalent | Auto-fix imports/types |
| File count | Count generated files | Report issue |
| Import resolution | Check for broken imports | Fix import paths |

Update `<session>/wisdom/.msg/meta.json` under the `generator` namespace:

- Merge `{ "generator": { test_files, layer, round, is_revision } }`