refactor: optimize test workflow commands and extract templates

1. Optimize test-task-generate.md structure
   - Simplify from 417 lines to 217 lines (~48% reduction)
   - Reference task-generate-agent.md structure for consistency
   - Enhance agent prompt with clear specifications:
     * Explicit agent configuration reference
     * Detailed TEST_ANALYSIS_RESULTS.md integration mapping
     * Complete test-fix cycle configuration
     * Clear task structure requirements (IMPL-001 test-gen, IMPL-002+ test-fix)
   - Fix format errors (remove nested code blocks in agent prompt)
   - Emphasize "planning only" nature (does NOT execute tests)

2. Refactor test-concept-enhanced.md to use cli-execute-agent
   - Simplify from 463 lines to 142 lines (~69% reduction)
   - Change from direct Gemini execution to cli-execute-agent delegation
   - Extract Gemini prompt template to separate file:
     * Create ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt
     * Use $(cat ~/.claude/...) pattern for template reference
   - Clarify responsibility separation:
     * Command: Workflow coordination and validation
     * Agent: Execute Gemini analysis and generate outputs
   - Agent generates both gemini-test-analysis.md and TEST_ANALYSIS_RESULTS.md
   - Remove verbose embedded prompts and output format descriptions

Key improvements:
- Modular template management for easier maintenance
- Clear command vs agent responsibility separation
- Consistent documentation structure across workflow commands
- Enhanced agent prompts ensure correct output generation
- Use ~/.claude paths for installed workflow location
catlog22
2025-11-24 19:33:02 +08:00
parent adbb2070bb
commit aabc6294f4
3 changed files with 405 additions and 748 deletions

File: test-concept-enhanced.md

@@ -1,6 +1,6 @@
---
name: test-concept-enhanced
description: Coordinate test analysis workflow using cli-execute-agent to generate test strategy via Gemini
argument-hint: "--session WFS-test-session-id --context path/to/test-context-package.json"
examples:
- /workflow:tools:test-concept-enhanced --session WFS-test-auth --context .workflow/active/WFS-test-auth/.process/test-context-package.json
@@ -9,7 +9,7 @@ examples:
# Test Concept Enhanced Command
## Overview
Workflow coordinator that delegates test analysis to cli-execute-agent. The agent executes Gemini to analyze test coverage gaps and implementation context, and generates a comprehensive test generation strategy.
## Core Philosophy
- **Coverage-Driven**: Focus on identified test gaps from context analysis
@@ -19,15 +19,16 @@ Specialized analysis tool for test generation workflows that uses Gemini to anal
- **No Code Generation**: Strategy and planning only, actual test generation happens in task execution
## Core Responsibilities
- Coordinate test analysis workflow using cli-execute-agent
- Validate test-context-package.json prerequisites
- Execute Gemini analysis via agent for test strategy generation
- Validate agent outputs (gemini-test-analysis.md, TEST_ANALYSIS_RESULTS.md)
## Execution Lifecycle
### Phase 1: Context Preparation (Command Responsibility)
**Command prepares session context and validates prerequisites.**
1. **Session Validation**
- Load `.workflow/active/{test_session_id}/workflow-session.json`
@@ -40,423 +41,100 @@ Specialized analysis tool for test generation workflows that uses Gemini to anal
- Extract coverage gaps and framework details
3. **Strategy Determination**
   - **Simple** (1-3 files): Single Gemini analysis
   - **Medium** (4-6 files): Comprehensive analysis
   - **Complex** (>6 files): Modular analysis approach
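A minimal sketch of this strategy gate, assuming `jq` is available and that `test-context-package.json` exposes the `test_coverage.missing_tests[]` array referenced by the analysis template (the session id is an example):

```bash
# Hypothetical strategy selection based on the number of missing test files
SESSION="WFS-test-auth"                # example session id
PKG=".workflow/active/${SESSION}/.process/test-context-package.json"

COUNT=$(jq '.test_coverage.missing_tests | length' "$PKG")

if   (( COUNT == 0 )); then echo "No coverage gaps - skip test generation"
elif (( COUNT <= 3 )); then echo "Simple: single Gemini analysis"
elif (( COUNT <= 6 )); then echo "Medium: comprehensive analysis"
else                        echo "Complex: modular analysis approach"
fi
```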
### Phase 2: Test Analysis Execution (Agent Responsibility)

**Purpose**: Analyze test coverage gaps and generate comprehensive test strategy.

**Agent Invocation**:
```javascript
Task(
  subagent_type="cli-execute-agent",
  description="Analyze test coverage gaps and generate test strategy",
  prompt=`
## TASK OBJECTIVE
Analyze test requirements and generate comprehensive test generation strategy using Gemini CLI

## EXECUTION CONTEXT
Session: {test_session_id}
Source Session: {source_session_id}
Working Dir: .workflow/active/{test_session_id}/.process
Template: ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt

## EXECUTION STEPS
1. Execute Gemini analysis:
   cd .workflow/active/{test_session_id}/.process && gemini -p "$(cat ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt)" --approval-mode yolo

2. Generate TEST_ANALYSIS_RESULTS.md:
   Synthesize gemini-test-analysis.md into standardized format for task generation.
   Include: coverage assessment, test framework, test requirements, generation strategy, implementation targets

## EXPECTED OUTPUTS
1. gemini-test-analysis.md - Raw Gemini analysis
2. TEST_ANALYSIS_RESULTS.md - Standardized test requirements document

## QUALITY VALIDATION
- Both output files exist and are complete
- All required sections present in TEST_ANALYSIS_RESULTS.md
- Test requirements are actionable and quantified
- Test scenarios cover happy path, errors, edge cases
- Dependencies and mocks clearly identified
`
)
```

**Output Files**:
- `.workflow/active/{test_session_id}/.process/gemini-test-analysis.md`
- `.workflow/active/{test_session_id}/.process/TEST_ANALYSIS_RESULTS.md`
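For reference, the delegated step can be reproduced from a shell. This is an illustrative sketch assuming the installed template path above; the existence guards are added here for safety rather than specified by the command:

```bash
# Hypothetical manual run of the step the agent executes
SESSION="WFS-test-auth"                # example session id
TEMPLATE="$HOME/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt"
WORKDIR=".workflow/active/${SESSION}/.process"

[[ -f "$TEMPLATE" ]] || { echo "missing template: $TEMPLATE" >&2; exit 1; }
[[ -f "$WORKDIR/test-context-package.json" ]] || { echo "run test-context-gather first" >&2; exit 1; }

# Template body becomes the prompt - the $(cat ...) pattern this command introduces
cd "$WORKDIR" && gemini -p "$(cat "$TEMPLATE")" --approval-mode yolo
```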
### Phase 3: Output Validation (Command Responsibility)

**Command validates agent outputs.**

- Verify `gemini-test-analysis.md` exists and is complete
- Validate `TEST_ANALYSIS_RESULTS.md` generated by agent
- Check required sections present
- Confirm test requirements are actionable
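A sketch of what this validation pass might look like in shell form, assuming the section names listed in the agent prompt's synthesis step (these headings are an assumption about the generated document):

```bash
# Illustrative output validation for the two agent artifacts
SESSION="WFS-test-auth"                # example session id
PROC=".workflow/active/${SESSION}/.process"

for f in gemini-test-analysis.md TEST_ANALYSIS_RESULTS.md; do
  [[ -s "$PROC/$f" ]] || { echo "missing or empty: $f" >&2; exit 1; }
done

# Assumed section headings, per the synthesis step in Phase 2
for section in "Coverage Assessment" "Test Framework" "Test Requirements" \
               "Test Generation Strategy" "Implementation Targets"; do
  grep -q "$section" "$PROC/TEST_ANALYSIS_RESULTS.md" \
    || echo "WARN: section not found: $section"
done
```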
## Error Handling
### Validation Errors
| Error | Resolution |
|-------|------------|
| Missing context package | Run test-context-gather first |
| No coverage gaps | Skip test generation, proceed to execution |
| No test framework detected | Configure test framework |
| Invalid source session | Complete implementation first |
### Execution Errors

| Error | Recovery |
|-------|----------|
| Gemini timeout | Reduce scope, analyze by module |
| Output incomplete | Retry with focused analysis |
| No output file | Check directory permissions |
**Fallback Strategy**: If Gemini fails, generate a basic TEST_ANALYSIS_RESULTS.md from the context package, using coverage gaps and framework info to create minimal requirements and guidance for manual test planning.
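A minimal sketch of such a fallback, assuming the `test_coverage` and `test_framework` fields used elsewhere in this workflow; the exact package schema is owned by test-context-gather:

```bash
# Hypothetical fallback writer: derive a skeletal TEST_ANALYSIS_RESULTS.md
SESSION="WFS-test-auth"                # example session id
PKG=".workflow/active/${SESSION}/.process/test-context-package.json"
OUT=".workflow/active/${SESSION}/.process/TEST_ANALYSIS_RESULTS.md"

{
  echo "# Test Generation Analysis Results (fallback)"
  echo "## Coverage Assessment"
  jq -r '.test_coverage.missing_tests[] | "- \(.)"' "$PKG"
  echo "## Test Framework"
  jq -r '.test_framework // "unknown - configure manually"' "$PKG"
} > "$OUT"
```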
## Integration

### Performance
- Focused analysis: Only analyze files with missing tests
- Pattern reuse: Study existing tests for quick extraction
- Timeout: 20-minute limit for analysis

### Called By
- `/workflow:test-gen` (Phase 4: Analysis)

### Requires
- `/workflow:tools:test-context-gather` output (test-context-package.json)

### Followed By
- `/workflow:tools:test-task-generate` - Generates test task JSON with code-developer invocation

### Success Criteria
- Valid TEST_ANALYSIS_RESULTS.md generated
- All missing tests documented with actionable requirements
- Test scenarios cover happy path, errors, edge cases, integration
- Dependencies and mocks clearly identified
- Test generation strategy is practical
- Output follows existing test conventions

File: test-task-generate.md

@@ -1,416 +1,216 @@
---
name: test-task-generate
description: Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests
argument-hint: "[--use-codex] [--cli-execute] --session WFS-test-session-id"
examples:
- /workflow:tools:test-task-generate --session WFS-test-auth
- /workflow:tools:test-task-generate --use-codex --session WFS-test-auth
- /workflow:tools:test-task-generate --cli-execute --session WFS-test-auth
- /workflow:tools:test-task-generate --cli-execute --use-codex --session WFS-test-auth
---
# Generate Test Planning Documents Command
## Overview
Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent. This command produces **test planning artifacts only** - it does NOT execute tests or implement code. Actual test execution requires a separate command (e.g., `/workflow:test-cycle-execute`).
## Core Philosophy
- **Planning Only**: Generate test planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) - does NOT execute tests
- **Agent-Driven Document Generation**: Delegate test plan generation to action-planning-agent
- **Two-Phase Flow**: Context Preparation (command) → Test Document Generation (agent)
- **Memory-First**: Reuse loaded documents from conversation memory
- **Pre-Selected Templates**: Command selects correct test template based on `--cli-execute` flag **before** invoking agent
- **Agent Simplicity**: Agent receives pre-selected template and focuses only on content generation
- **MCP-Enhanced**: Use MCP tools for test pattern research and analysis
- **Path Clarity**: All `focus_paths` prefer absolute paths (e.g., `D:\\project\\src\\module`), or clear relative paths from project root
- **Test-First**: Generate comprehensive test coverage before execution
- **Iterative Refinement**: Test-fix-retest cycle until all tests pass
- **Surgical Fixes**: Minimal code changes, no refactoring during test fixes
- **Auto-Revert**: Rollback all changes if max iterations reached
## Test-Specific Execution Modes
### Test Generation (IMPL-001)
- **Agent Mode** (default): @code-developer generates tests within agent context
- **CLI Execute Mode** (`--cli-execute`): Use Codex CLI for autonomous test generation
### Test Execution & Fix (IMPL-002+)
- **Manual Mode** (default): Gemini diagnosis → user applies fixes
- **Codex Mode** (`--use-codex`): Gemini diagnosis → Codex applies fixes with resume mechanism
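The resume mechanism keeps all fix iterations in one Codex conversation. Schematically (the prompt text is illustrative; the `codex exec "..." resume --last` pattern is the one this workflow relies on):

```bash
# Iteration 1: first fix attempt, fresh conversation
codex exec "Apply the fix proposed in fix-iteration-1-diagnosis.md"

# Iterations 2..5: resume the last conversation so Codex retains
# context from previous attempts and avoids repeating failed fixes
codex exec "Retest failed; apply the follow-up fix from the latest diagnosis" resume --last
```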
## Document Generation Lifecycle
### Phase 1: Context Preparation (Command Responsibility)
**Command prepares test session paths and metadata for planning document generation.**
**Test Session Path Structure**:
```
.workflow/active/WFS-test-{session-id}/
├── workflow-session.json            # Test session metadata
├── .process/
│   ├── TEST_ANALYSIS_RESULTS.md     # Test requirements and strategy
│   ├── test-context-package.json    # Test patterns and coverage
│   └── context-package.json         # General context artifacts
├── .task/                           # Output: Test task JSON files
├── IMPL_PLAN.md                     # Output: Test implementation plan
└── TODO_LIST.md                     # Output: Test TODO list
```
**Command Preparation**:
1. **Assemble Test Session Paths** for agent prompt:
   - `session_metadata_path`
   - `test_analysis_results_path` (REQUIRED)
   - `test_context_package_path`
   - Output directory paths
2. **Provide Metadata** (simple values):
   - `session_id`
   - `execution_mode` (agent-mode | cli-execute-mode)
   - `use_codex` flag (true | false)
   - `source_session_id` (if exists)
   - `mcp_capabilities` (available MCP tools)
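As an illustrative guard, the command-side preparation reduces to checking that every input path exists before the agent is invoked (paths per the session structure above; the session id is an example):

```bash
# Hypothetical prerequisite check before invoking action-planning-agent
SESSION_DIR=".workflow/active/WFS-test-auth"
REQUIRED=(
  "$SESSION_DIR/workflow-session.json"
  "$SESSION_DIR/.process/TEST_ANALYSIS_RESULTS.md"      # REQUIRED primary source
  "$SESSION_DIR/.process/test-context-package.json"
)

for f in "${REQUIRED[@]}"; do
  [[ -f "$f" ]] || { echo "missing prerequisite: $f" >&2; exit 1; }
done

mkdir -p "$SESSION_DIR/.task"                           # ensure output directory
```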
### Phase 2: Test Document Generation (Agent Responsibility)
**Purpose**: Generate test-specific IMPL_PLAN.md, task JSONs, and TODO_LIST.md - planning documents only, NOT test execution.
**Agent Invocation**:
```javascript
Task(
  subagent_type="action-planning-agent",
  description="Generate test planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md)",
  prompt=`
## TASK OBJECTIVE
Generate test planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) for test workflow session

IMPORTANT: This is TEST PLANNING ONLY - you are generating planning documents, NOT executing tests.
CRITICAL: Follow the progressive loading strategy defined in your agent specification (load context incrementally, memory-first).

## AGENT CONFIGURATION REFERENCE
All test task generation rules, schemas, and quality standards are defined in your agent specification:
@.claude/agents/action-planning-agent.md

Refer to your specification for:
- Test Task JSON Schema (6-field structure with test-specific metadata)
- Test IMPL_PLAN.md Structure (test_session variant with test-fix cycle)
- TODO_LIST.md Format (with test phase indicators)
- Progressive Loading Strategy (memory-first, load TEST_ANALYSIS_RESULTS.md as primary source)
- Quality Validation Rules (task count limits, requirement quantification)

## SESSION PATHS
Input:
- Session Metadata: .workflow/active/{test-session-id}/workflow-session.json
- TEST_ANALYSIS_RESULTS: .workflow/active/{test-session-id}/.process/TEST_ANALYSIS_RESULTS.md (REQUIRED - primary requirements source)
- Test Context Package: .workflow/active/{test-session-id}/.process/test-context-package.json
- Context Package: .workflow/active/{test-session-id}/.process/context-package.json
- Source Session Summaries: .workflow/active/{source-session-id}/.summaries/IMPL-*.md (if exists)

Output:
- Task Dir: .workflow/active/{test-session-id}/.task/
- IMPL_PLAN: .workflow/active/{test-session-id}/IMPL_PLAN.md
- TODO_LIST: .workflow/active/{test-session-id}/TODO_LIST.md

## CONTEXT METADATA
Session ID: {test-session-id}
Workflow Type: test_session
Planning Mode: {agent-mode | cli-execute-mode}
Use Codex: {true | false}
Source Session: {source-session-id} (if exists)
MCP Capabilities: {exa_code, exa_web, code_index}

## TEST-SPECIFIC REQUIREMENTS SUMMARY
(Detailed specifications in your agent definition)

### Task Structure Requirements
- Minimum 2 tasks: IMPL-001 (test generation) + IMPL-002 (test execution & fix)
- Expandable for complex projects: Add IMPL-003+ (per-module, integration, E2E tests)

Task Configuration:
IMPL-001 (Test Generation):
- meta.type: "test-gen"
- meta.agent: "@code-developer" (agent-mode) OR CLI execution (cli-execute-mode)
- flow_control: Test generation strategy from TEST_ANALYSIS_RESULTS.md

IMPL-002+ (Test Execution & Fix):
- meta.type: "test-fix"
- meta.agent: "@test-fix-agent"
- meta.use_codex: true/false (based on flag)
- flow_control: Test-fix cycle with iteration limits and diagnosis configuration

### Test-Fix Cycle Specification (IMPL-002+)
Required flow_control fields:
- max_iterations: 5
- diagnosis_tool: "gemini"
- diagnosis_template: "~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt"
- fix_mode: "manual" OR "codex" (based on use_codex flag)
- cycle_pattern: "test → gemini_diagnose → fix → retest"
- exit_conditions: ["all_tests_pass", "max_iterations_reached"]
- auto_revert_on_failure: true

### TEST_ANALYSIS_RESULTS.md Mapping
PRIMARY requirements source - extract and map to task JSONs:
- Test framework config → meta.test_framework
- Coverage targets → meta.coverage_target
- Test requirements → context.requirements (quantified with explicit counts)
- Test generation strategy → IMPL-001 flow_control.implementation_approach
- Implementation targets → context.files_to_test (absolute paths)

## EXPECTED DELIVERABLES
1. Test Task JSON Files (.task/IMPL-*.json)
   - 6-field schema with quantified requirements from TEST_ANALYSIS_RESULTS.md
   - Test-specific metadata: type, agent, use_codex, test_framework, coverage_target
   - Artifact references from test-context-package.json
   - Absolute paths in context.files_to_test

2. Test Implementation Plan (IMPL_PLAN.md)
   - Template: ~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt
   - Test-specific frontmatter: workflow_type="test_session", test_framework, source_session_id
   - Test-Fix-Retest Cycle section with diagnosis configuration
   - Source session context integration (if applicable)

3. TODO List (TODO_LIST.md)
   - Hierarchical structure with test phase containers
   - Links to task JSONs with status markers
   - Matches task JSON hierarchy

## QUALITY STANDARDS
Hard Constraints:
- Task count: minimum 2, maximum 12
- All requirements quantified from TEST_ANALYSIS_RESULTS.md
- Test framework configuration validated
- use_codex flag correctly set in IMPL-002+ tasks
- Absolute paths for all focus_paths
- Acceptance criteria include verification commands

## SUCCESS CRITERIA
- All test planning documents generated successfully
- Return completion status: task count, test framework, coverage targets, source session status
`
)
```
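For illustration only, an IMPL-002 fragment satisfying the flow_control fields above might look as follows; the authoritative schema lives in action-planning-agent.md, so the exact field placement here is an assumption:

```bash
# Hypothetical IMPL-002 task JSON fragment, written out for inspection
cat > /tmp/IMPL-002.sketch.json <<'EOF'
{
  "meta": {
    "type": "test-fix",
    "agent": "@test-fix-agent",
    "use_codex": false
  },
  "flow_control": {
    "max_iterations": 5,
    "diagnosis_tool": "gemini",
    "diagnosis_template": "~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt",
    "fix_mode": "manual",
    "cycle_pattern": "test → gemini_diagnose → fix → retest",
    "exit_conditions": ["all_tests_pass", "max_iterations_reached"],
    "auto_revert_on_failure": true
  }
}
EOF
```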
## Test Task Structure Reference
This section provides quick reference for test task JSON structure. For complete implementation details, see the agent invocation prompt in Phase 2 above.
**Quick Reference**:
- Minimum 2 tasks: IMPL-001 (test-gen) + IMPL-002 (test-fix)
- Expandable for complex projects (IMPL-003+)
- IMPL-001: `meta.agent: "@code-developer"`, test generation approach
- IMPL-002: `meta.agent: "@test-fix-agent"`, `meta.use_codex: {flag}`, test-fix cycle
- See Phase 2 agent prompt for full schema and requirements
## Output Files Structure
```
.workflow/active/WFS-test-[session]/
├── workflow-session.json                 # Test session metadata
├── IMPL_PLAN.md                          # Test validation plan
├── TODO_LIST.md                          # Progress tracking
├── .task/
│   ├── IMPL-001.json                     # Test generation task
│   └── IMPL-002.json                     # Test-fix task with cycle spec
├── .process/
│   ├── TEST_ANALYSIS_RESULTS.md          # From test-concept-enhanced
│   ├── test-context-package.json         # From test-context-gather
│   ├── initial-test.log                  # Phase 1: Initial test results
│   ├── fix-iteration-1-diagnosis.md      # Gemini diagnosis, iteration 1
│   ├── fix-iteration-1-changes.log       # Codex changes, iteration 1
│   ├── fix-iteration-1-retest.log        # Retest results, iteration 1
│   ├── fix-iteration-N-*.md/log          # Subsequent iterations
│   └── final-test.log                    # Phase 3: Final validation
└── .summaries/
    ├── IMPL-001-summary.md               # Test generation summary
    └── IMPL-002-summary.md               # Success report OR failure report
```
## Error Handling
### Input Validation Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Not a test session | Missing workflow_type: "test_session" | Verify session created by test-gen |
| Source session not found | Invalid source_session_id | Check source session exists |
| No implementation summaries | Source session incomplete | Ensure source session has completed tasks |
### Test Framework Discovery Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| No test command found | Unknown framework | Manual test command specification |
| No test files found | Tests not written | Request user to write tests first |
| Test dependencies missing | Incomplete setup | Run dependency installation |
### Generation Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Invalid JSON structure | Template error | Fix task generation logic |
| Missing required fields | Incomplete metadata | Validate session metadata |
## Integration & Usage
### Command Chain
- **Called By**: `/workflow:test-gen` (Phase 4), `/workflow:test-fix-gen` (Phase 4)
- **Invokes**: `action-planning-agent` for test planning document generation
- **Followed By**: `/workflow:test-cycle-execute` or `/workflow:execute` (user-triggered)
### Usage Examples
```bash
# Agent mode (default)
/workflow:tools:test-task-generate --session WFS-test-auth

# With automated Codex fixes
/workflow:tools:test-task-generate --use-codex --session WFS-test-auth

# CLI execution mode for test generation
/workflow:tools:test-task-generate --cli-execute --session WFS-test-auth

# Both flags combined
/workflow:tools:test-task-generate --cli-execute --use-codex --session WFS-test-auth
```
### Execution Modes
- **Agent mode** (default): Uses `action-planning-agent` with agent-mode task template
- **CLI mode** (`--cli-execute`): Uses Gemini/Qwen/Codex with cli-mode task template for IMPL-001
- **Codex fixes** (`--use-codex`): Enables automated fixes in IMPL-002 task
### Flag Behavior
- **No flags**: `meta.use_codex=false` (manual fixes), agent-mode test generation
- **--use-codex**: `meta.use_codex=true` (Codex automated fixes in IMPL-002+)
- **--cli-execute**: CLI tool execution mode for IMPL-001 test generation
- **Both flags**: CLI generation + automated Codex fixes
## Agent Execution Notes
The `@test-fix-agent` will execute the task by following the `flow_control.implementation_approach` specification:
1. **Load task JSON**: Read complete test-fix task from `.task/IMPL-002.json`
2. **Check meta.use_codex**: Determine fix mode (manual or automated)
3. **Execute pre_analysis**: Load source context, discover framework, analyze tests
4. **Phase 1**: Run initial test suite
5. **Phase 2**: If failures, enter iterative loop:
- Use Gemini for diagnosis (analysis mode with bug-fix template)
- Check meta.use_codex flag:
- If false (default): Present fix suggestions to user for manual application
- If true (--use-codex): Use Codex resume for automated fixes (maintains context)
- Retest and check for regressions
- Repeat max 5 times
6. **Phase 3**: Generate summary and certify code
7. **Error Recovery**: Revert changes if max iterations reached
**Bug Diagnosis Template**: Uses `~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt` template for systematic root cause analysis, code path tracing, and targeted fix recommendations.
**Codex Usage**: The agent uses `codex exec "..." resume --last` pattern ONLY when meta.use_codex=true (--use-codex flag present) to maintain conversation context across multiple fix iterations, ensuring consistency and learning from previous attempts.
### Output
- Test task JSON files in `.task/` directory (minimum 2: IMPL-001.json + IMPL-002.json)
- IMPL_PLAN.md with test strategy and fix cycle specification
- TODO_LIST.md with test phase indicators
- Session ready for test execution: `/workflow:execute` or `/workflow:test-cycle-execute`

File: cli-templates/prompts/test/test-concept-analysis.txt (new)

@@ -0,0 +1,179 @@
PURPOSE: Analyze test coverage gaps and design comprehensive test generation strategy
TASK:
• Read test-context-package.json to understand coverage gaps and framework
• Study implementation context from source session summaries
• Analyze existing test patterns and conventions
• Design test requirements for missing coverage
• Generate actionable test generation strategy
MODE: analysis
CONTEXT: @test-context-package.json @../../{source-session-id}/.summaries/*.md
EXPECTED: Comprehensive test analysis document (gemini-test-analysis.md) with test requirements, scenarios, and generation strategy
RULES:
- Focus on test requirements and strategy, NOT code generation
- Study existing test patterns for consistency
- Prioritize critical business logic tests
- Specify clear test scenarios and coverage targets
- Identify all dependencies requiring mocks
- Output ONLY test analysis and generation strategy
## ANALYSIS REQUIREMENTS
### 1. Implementation Understanding
- Load all implementation summaries from source session
- Understand implemented features, APIs, and business logic
- Extract key functions, classes, and modules
- Identify integration points and dependencies
### 2. Existing Test Pattern Analysis
- Study existing test files for patterns and conventions
- Identify test structure (describe/it, test suites, fixtures)
- Analyze assertion patterns and mocking strategies
- Extract test setup/teardown patterns
### 3. Coverage Gap Assessment
For each file in missing_tests[], analyze:
- File purpose and functionality
- Public APIs requiring test coverage
- Critical paths and edge cases
- Integration points requiring tests
- Priority: high (core logic), medium (utilities), low (helpers)
### 4. Test Requirements Specification
For each missing test file, specify:
- Test scope: What needs to be tested
- Test scenarios: Happy path, error cases, edge cases, integration
- Test data: Required fixtures, mocks, test data
- Dependencies: External services, databases, APIs to mock
- Coverage targets: Functions/methods requiring tests
### 5. Test Generation Strategy
- Determine test generation approach for each file
- Identify reusable test patterns from existing tests
- Plan test data and fixture requirements
- Define mocking strategy for dependencies
- Specify expected test file structure
## EXPECTED OUTPUT FORMAT
Write comprehensive analysis to gemini-test-analysis.md:
# Test Generation Analysis
## 1. Implementation Context Summary
- **Source Session**: {source_session_id}
- **Implemented Features**: {feature_summary}
- **Changed Files**: {list_of_implementation_files}
- **Tech Stack**: {technologies_used}
## 2. Test Coverage Assessment
- **Existing Tests**: {count} files
- **Missing Tests**: {count} files
- **Coverage Percentage**: {percentage}%
- **Priority Breakdown**:
- High Priority: {count} files (core business logic)
- Medium Priority: {count} files (utilities, helpers)
- Low Priority: {count} files (configuration, constants)
## 3. Existing Test Pattern Analysis
- **Test Framework**: {framework_name_and_version}
- **File Naming Convention**: {pattern}
- **Test Structure**: {describe_it_or_other}
- **Assertion Style**: {expect_assert_should}
- **Mocking Strategy**: {mocking_framework_and_patterns}
- **Setup/Teardown**: {beforeEach_afterEach_patterns}
- **Test Data**: {fixtures_factories_builders}
## 4. Test Requirements by File
### File: {implementation_file_path}
**Test File**: {suggested_test_file_path}
**Priority**: {high|medium|low}
#### Scope
- {description_of_what_needs_testing}
#### Test Scenarios
1. **Happy Path Tests**
- {scenario_1}
- {scenario_2}
2. **Error Handling Tests**
- {error_scenario_1}
- {error_scenario_2}
3. **Edge Case Tests**
- {edge_case_1}
- {edge_case_2}
4. **Integration Tests** (if applicable)
- {integration_scenario_1}
- {integration_scenario_2}
#### Test Data & Fixtures
- {required_test_data}
- {required_mocks}
- {required_fixtures}
#### Dependencies to Mock
- {external_service_1}
- {external_service_2}
#### Coverage Targets
- Function: {function_name} - {test_requirements}
- Function: {function_name} - {test_requirements}
---
[Repeat for each missing test file]
---
## 5. Test Generation Strategy
### Overall Approach
- {strategy_description}
### Test Generation Order
1. {file_1} - {rationale}
2. {file_2} - {rationale}
3. {file_3} - {rationale}
### Reusable Patterns
- {pattern_1_from_existing_tests}
- {pattern_2_from_existing_tests}
### Test Data Strategy
- {approach_to_test_data_and_fixtures}
### Mocking Strategy
- {approach_to_mocking_dependencies}
### Quality Criteria
- Code coverage target: {percentage}%
- Test scenarios per function: {count}
- Integration test coverage: {approach}
## 6. Implementation Targets
**Purpose**: Identify new test files to create
**Format**: New test files only (no existing files to modify)
**Test Files to Create**:
1. **Target**: `tests/auth/TokenValidator.test.ts`
- **Type**: Create new test file
- **Purpose**: Test TokenValidator class
- **Scenarios**: 15 test cases covering validation logic, error handling, edge cases
- **Dependencies**: Mock JWT library, test fixtures for tokens
2. **Target**: `tests/middleware/errorHandler.test.ts`
- **Type**: Create new test file
- **Purpose**: Test error handling middleware
- **Scenarios**: 8 test cases for different error types and response formats
- **Dependencies**: Mock Express req/res/next, error fixtures
[List all test files to create]
## 7. Success Metrics
- **Test Coverage Goal**: {target_percentage}%
- **Test Quality**: All scenarios covered (happy, error, edge, integration)
- **Convention Compliance**: Follow existing test patterns
- **Maintainability**: Clear test descriptions, reusable fixtures