Claude-Code-Workflow/.claude/skills/team-quality-assurance/role-specs/generator.md
catlog22 26bda9c634 feat: Add coordinator commands and role specifications for UI design team
- Implemented the 'monitor' command for coordinator role to handle monitoring events, task completion, and pipeline management.
- Created role specifications for the coordinator, detailing responsibilities, command execution protocols, and session management.
- Added role specifications for the analyst, discussant, explorer, and synthesizer in the ultra-analyze skill, defining their context loading, analysis, and synthesis processes.
2026-03-03 23:35:41 +08:00


| prefix | inner_loop | additional_prefixes | subagents |
|--------|------------|---------------------|-----------|
| QAGEN  | false      | QAGEN-fix           |           |

message_types:

| Outcome | Message type    |
|---------|-----------------|
| success | tests_generated |
| revised | tests_revised   |
| error   | error           |

# Test Generator

Generate test code according to the strategist's strategy and layer plan. Support L1 unit tests, L2 integration tests, and L3 E2E tests. Follow the project's existing test patterns and framework conventions.

## Phase 2: Strategy & Pattern Loading

| Input            | Source                               | Required |
|------------------|--------------------------------------|----------|
| Task description | Task subject/description             | Yes      |
| Session path     | Extracted from task description      | Yes      |
| .msg/meta.json   | `<session>/wisdom/.msg/meta.json`    | Yes      |
| Test strategy    | `meta.json -> test_strategy`         | Yes      |
| Target layer     | Task description `layer: L1/L2/L3`   | Yes      |
  1. Extract session path and target layer from task description
  2. Read .msg/meta.json for test strategy (layers, coverage targets)
  3. Determine if this is a GC fix task (subject contains "fix")
  4. Load layer config from strategy: level, name, target_coverage, focus_files
  5. Learn existing test patterns: find 3 similar test files via `Glob(**/*.{test,spec}.{ts,tsx,js,jsx})`
  6. Detect test conventions: file location (colocated vs tests), import style, describe/it nesting, framework (vitest/jest/pytest)
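The strategy-loading steps above can be sketched as follows. The field names (`layers`, `level`, `name`, `target_coverage`, `focus_files`) mirror the layer config listed in step 4, but the canonical schema belongs to the strategist role, so treat this as an assumption-laden sketch rather than the real implementation:

```typescript
// Assumed shape of meta.json -> test_strategy; field names follow the
// layer config named in step 4, but the canonical schema is owned by
// the strategist role.
interface LayerConfig {
  level: "L1" | "L2" | "L3";
  name: string;            // e.g. "unit", "integration", "e2e"
  target_coverage: number; // fraction, e.g. 0.8
  focus_files: string[];
}

interface TestStrategy {
  layers: LayerConfig[];
}

// Step 4: pick the config for the layer named in the task description.
function loadLayerConfig(strategy: TestStrategy, targetLayer: string): LayerConfig {
  const layer = strategy.layers.find((l) => l.level === targetLayer);
  if (!layer) {
    throw new Error(`test_strategy has no config for layer ${targetLayer}`);
  }
  return layer;
}
```

Failing fast when the requested layer is absent keeps a mis-addressed task from silently generating tests against the wrong coverage target.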

## Phase 3: Test Code Generation

Mode selection:

| Condition        | Mode |
|------------------|------|
| GC fix task      | Read failure info from `<session>/results/run-<layer>.json`; fix failing tests only |
| <= 3 focus files | Direct: inline Read source -> Write test file |
| > 3 focus files  | Batch by module; delegate to the code-developer subagent |
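The table above reduces to a small decision function. `selectMode` and its mode strings are illustrative names, not part of the spec; the "fix" substring check stands in for step 3 of Phase 2:

```typescript
type Mode = "gc_fix" | "direct" | "batch";

// Mirrors the mode-selection table: a GC fix task wins outright,
// otherwise the focus-file count picks direct vs batched generation.
function selectMode(taskSubject: string, focusFileCount: number): Mode {
  if (taskSubject.toLowerCase().includes("fix")) return "gc_fix";
  return focusFileCount <= 3 ? "direct" : "batch";
}
```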

Direct generation flow (per source file):

  1. Read source file content, extract exports
  2. Determine test file path following project conventions
  3. If test exists -> analyze missing cases -> append new tests via Edit
  4. If no test -> generate full test file via Write
  5. Include: happy path, edge cases, error cases per export
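Step 2's path derivation might look like the sketch below. The `tests/` directory name and the `colocated`/`separate` labels are assumptions about what convention detection in Phase 2 produces, not values from the spec:

```typescript
// Step 2: derive the test file path from the detected project convention.
// "colocated" puts foo.test.ts next to foo.ts; "separate" mirrors the
// source path under a top-level tests/ directory (name is an assumption).
function testFilePath(sourcePath: string, convention: "colocated" | "separate"): string {
  const dot = sourcePath.lastIndexOf(".");
  const base = sourcePath.slice(0, dot);
  const ext = sourcePath.slice(dot); // ".ts", ".tsx", ...
  if (convention === "colocated") return `${base}.test${ext}`;
  return `tests/${base}.test${ext}`;
}
```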

GC fix flow:

  1. Read execution results and failure output from results directory
  2. Read each failing test file
  3. Fix assertions, imports, mocks, or test setup
  4. Do NOT modify source code, and do NOT skip or ignore tests
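Steps 1-2 of the GC fix flow amount to grouping failure output by test file so each failing file is read and patched once. The `RunResult` shape below is a hypothetical reading of `run-<layer>.json`; the real schema comes from the test-runner role and may differ:

```typescript
// Hypothetical shape of <session>/results/run-<layer>.json -- the real
// schema is produced by the test-runner role and may differ.
interface RunResult {
  status: "success" | "error";
  failures: { file: string; test: string; message: string }[];
}

// Steps 1-2 of the GC fix flow: group failure messages by test file so
// each failing file can be read and patched in a single pass.
function failuresByFile(run: RunResult): Map<string, string[]> {
  const byFile = new Map<string, string[]>();
  for (const f of run.failures) {
    const msgs = byFile.get(f.file) ?? [];
    msgs.push(`${f.test}: ${f.message}`);
    byFile.set(f.file, msgs);
  }
  return byFile;
}
```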

General rules:

  • Follow existing test patterns exactly (imports, naming, structure)
  • Target coverage per layer config
  • Do NOT use `any` type assertions or `@ts-ignore`

## Phase 4: Self-Validation & Output

  1. Collect the generated/modified test files
  2. Run a syntax check (TypeScript: `tsc --noEmit`, or the framework-specific equivalent)
  3. Auto-fix syntax errors (max 3 attempts)
  4. Write test metadata to `<session>/wisdom/.msg/meta.json` under `generated_tests[layer]`:
    • layer, files list, count, syntax_clean, mode, gc_fix flag
  5. Message type: `tests_generated` for new tests, `tests_revised` for GC fix iterations
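Steps 2-3 are a bounded check-and-fix loop. In the sketch below, `check` and `fix` stand in for `tsc --noEmit` and the generator's own edit pass, so this is only a control-flow illustration under those assumptions:

```typescript
// Steps 2-3: run a syntax check and auto-fix, retrying up to 3 times.
// `check` returns the current syntax errors (empty when clean); `fix`
// stands in for the generator's edit pass over the failing files.
function validateWithRetries(
  check: () => string[],
  fix: (errors: string[]) => void,
  maxAttempts = 3,
): boolean {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const errors = check();
    if (errors.length === 0) return true; // syntax_clean
    fix(errors);
  }
  // One final check after the last fix attempt.
  return check().length === 0;
}
```

The boolean result maps directly onto the `syntax_clean` flag recorded in step 4.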