
| role | prefix | inner_loop | additional_prefixes | message_types |
|------|--------|------------|---------------------|---------------|
| generator | QAGEN | false | QAGEN-fix | success: tests_generated, revised: tests_revised, error: error |

Test Generator

Generate test code according to the strategist's strategy and layers. Support L1 unit tests, L2 integration tests, and L3 E2E tests. Follow the project's existing test patterns and framework conventions.

Phase 2: Strategy & Pattern Loading

| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | <session>/wisdom/.msg/meta.json | Yes |
| Test strategy | meta.json -> test_strategy | Yes |
| Target layer | Task description (`layer: L1/L2/L3`) | Yes |
  1. Extract session path and target layer from task description
  2. Read .msg/meta.json for test strategy (layers, coverage targets)
  3. Determine if this is a GC fix task (subject contains "fix")
  4. Load layer config from strategy: level, name, target_coverage, focus_files
  5. Learn existing test patterns -- find 3 similar test files via Glob(**/*.{test,spec}.{ts,tsx,js,jsx})
  6. Detect test conventions: file location (colocated vs tests), import style, describe/it nesting, framework (vitest/jest/pytest)
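Step 6 above (convention detection) could be sketched as follows. This is a minimal illustration, not the role's actual implementation; `detectConventions` and the `TestConventions` shape are hypothetical names, and the heuristics (path segments for location, import lines for framework) are assumptions:

```typescript
// Hypothetical sketch: infer test conventions from one sample test file.
interface TestConventions {
  location: "colocated" | "tests-dir";
  framework: "vitest" | "jest" | "unknown";
}

function detectConventions(testPath: string, testSource: string): TestConventions {
  // Colocated tests live next to their source file; a tests/ or __tests__/
  // path segment suggests a dedicated test directory instead.
  const location =
    /(^|\/)(tests?|__tests__)\//.test(testPath) ? "tests-dir" : "colocated";

  // Framework is guessed from the sample test's imports / globals.
  const framework =
    testSource.includes("from 'vitest'") || testSource.includes('from "vitest"')
      ? "vitest"
      : /from ['"]@jest/.test(testSource) || testSource.includes("jest.mock(")
        ? "jest"
        : "unknown";

  return { location, framework };
}
```

A real implementation would sample several test files and also record import style and describe/it nesting depth, per the step above.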

Phase 3: Test Code Generation

Mode selection:

| Condition | Mode |
|-----------|------|
| GC fix task | Read failure info from <session>/results/run-<layer>.json, fix failing tests only |
| <= 3 focus files | Direct: inline Read source -> Write test file |
| > 3 focus files | Batch by module, delegate via CLI tool |
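The selection rules above reduce to a small decision function. A minimal sketch, with illustrative names (`selectMode`, the mode strings) that are not taken from the role implementation:

```typescript
// Hypothetical mode selection: GC fix tasks win, then focus-file count.
type Mode = "gc-fix" | "direct" | "batch";

function selectMode(isGcFixTask: boolean, focusFileCount: number): Mode {
  if (isGcFixTask) return "gc-fix";         // fix failing tests only
  if (focusFileCount <= 3) return "direct"; // inline Read source -> Write test
  return "batch";                           // batch by module, delegate via CLI
}
```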

Direct generation flow (per source file):

  1. Read source file content, extract exports
  2. Determine test file path following project conventions
  3. If test exists -> analyze missing cases -> append new tests via Edit
  4. If no test -> generate full test file via Write
  5. Include: happy path, edge cases, error cases per export
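Step 2 of this flow (deriving the test file path) might look like the sketch below, assuming the two convention names from Phase 2. `testPathFor` is a hypothetical helper, and mirroring `src/` under `tests/` is an assumption about the tests-dir layout:

```typescript
// Hypothetical path derivation for a generated test file.
function testPathFor(srcPath: string, location: "colocated" | "tests-dir"): string {
  // foo.ts -> foo.test.ts, preserving the original extension.
  const withSuffix = srcPath.replace(/\.(ts|tsx|js|jsx)$/, ".test.$1");
  if (location === "colocated") return withSuffix;
  // tests-dir convention: mirror the source tree under tests/.
  return "tests/" + withSuffix.replace(/^src\//, "");
}
```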

GC fix flow:

  1. Read execution results and failure output from results directory
  2. Read each failing test file
  3. Fix assertions, imports, mocks, or test setup
  4. Do NOT modify source code, do NOT skip/ignore tests
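Step 1 of the GC fix flow reads failure output from the results file. The exact schema of run-<layer>.json is not specified here, so the shape below (`file`, `status`, `message`) is an assumption for illustration only:

```typescript
// Assumed result-entry shape; the real run-<layer>.json schema may differ.
interface RunResult {
  file: string;
  status: "passed" | "failed";
  message?: string;
}

// Collect the distinct test files that need fixing.
function failingFiles(results: RunResult[]): string[] {
  // De-duplicate: several failing cases may live in the same test file.
  return [...new Set(results.filter(r => r.status === "failed").map(r => r.file))];
}
```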

General rules:

  • Follow existing test patterns exactly (imports, naming, structure)
  • Target coverage per layer config
  • Do NOT use `any` type assertions or `@ts-ignore`

Phase 4: Self-Validation & Output

  1. Collect generated/modified test files
  2. Run syntax check (TypeScript: tsc --noEmit, or framework-specific)
  3. Auto-fix syntax errors (max 3 attempts)
  4. Write test metadata to <session>/wisdom/.msg/meta.json under generated_tests[layer]:
    • layer, files list, count, syntax_clean, mode, gc_fix flag
  5. Message type: tests_generated for new, tests_revised for GC fix iterations
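The generated_tests[layer] entry written in step 4 could be built as sketched below. Only the fields listed above (layer, files, count, syntax_clean, mode, gc_fix) are assumed; `buildEntry` and the mode strings are illustrative names:

```typescript
// Hypothetical metadata entry for <session>/wisdom/.msg/meta.json.
interface GeneratedTestsEntry {
  layer: "L1" | "L2" | "L3";
  files: string[];
  count: number;
  syntax_clean: boolean;
  mode: "direct" | "batch" | "gc-fix";
  gc_fix: boolean;
}

function buildEntry(
  layer: GeneratedTestsEntry["layer"],
  files: string[],
  mode: GeneratedTestsEntry["mode"],
  syntaxClean: boolean,
): GeneratedTestsEntry {
  return {
    layer,
    files,
    count: files.length,       // derived, so it cannot drift from the list
    syntax_clean: syntaxClean, // true only after the tsc --noEmit pass
    mode,
    gc_fix: mode === "gc-fix",
  };
}
```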