refactor: optimize test workflow commands and extract templates

1. Optimize test-task-generate.md structure
   - Simplify from 417 lines to 217 lines (~48% reduction)
   - Reference task-generate-agent.md structure for consistency
   - Enhance agent prompt with clear specifications:
     * Explicit agent configuration reference
     * Detailed TEST_ANALYSIS_RESULTS.md integration mapping
     * Complete test-fix cycle configuration
     * Clear task structure requirements (IMPL-001 test-gen, IMPL-002+ test-fix; see the sketch below)
   - Fix format errors (remove nested code blocks in agent prompt)
   - Emphasize "planning only" nature (does NOT execute tests)
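
   For illustration, a generated task list under this structure might look
   like the following (a hypothetical sketch; only the ID scheme and task
   types are specified by the command, the titles are invented):

     IMPL-001  Generate missing test files            (test-gen)
     IMPL-002  Run test suite, fix failing tests      (test-fix)
     IMPL-003  Re-run suite, fix remaining failures   (test-fix)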

2. Refactor test-concept-enhanced.md to use cli-execute-agent
   - Simplify from 463 lines to 142 lines (~69% reduction)
   - Change from direct Gemini execution to cli-execute-agent delegation
   - Extract Gemini prompt template to separate file:
     * Create ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt
     * Use $(cat ~/.claude/...) pattern for template reference (see the sketch below)
   - Clarify responsibility separation:
     * Command: Workflow coordination and validation
     * Agent: Execute Gemini analysis and generate outputs
   - Agent generates both gemini-test-analysis.md and TEST_ANALYSIS_RESULTS.md
   - Remove verbose embedded prompts and output format descriptions
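
   For illustration, the template-reference pattern expands the extracted
   prompt file into the CLI invocation at run time. A minimal sketch,
   assuming the Gemini CLI's -p flag (the actual call is assembled by
   cli-execute-agent):

     gemini -p "$(cat ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt)"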

Key improvements:
- Modular template management for easier maintenance
- Clear command vs agent responsibility separation
- Consistent documentation structure across workflow commands
- Enhanced agent prompts ensure correct output generation
- Use ~/.claude paths so template references resolve to the installed workflow location

catlog22, 2025-11-24 19:33:02 +08:00
parent adbb2070bb
commit aabc6294f4
3 changed files with 405 additions and 748 deletions

New file: test-concept-analysis.txt

@@ -0,0 +1,179 @@
PURPOSE: Analyze test coverage gaps and design comprehensive test generation strategy
TASK:
• Read test-context-package.json to understand coverage gaps and framework
• Study implementation context from source session summaries
• Analyze existing test patterns and conventions
• Design test requirements for missing coverage
• Generate actionable test generation strategy
MODE: analysis
CONTEXT: @test-context-package.json @../../../{source-session-id}/.summaries/*.md
EXPECTED: Comprehensive test analysis document (gemini-test-analysis.md) with test requirements, scenarios, and generation strategy
RULES:
- Focus on test requirements and strategy, NOT code generation
- Study existing test patterns for consistency
- Prioritize critical business logic tests
- Specify clear test scenarios and coverage targets
- Identify all dependencies requiring mocks
- Output ONLY test analysis and generation strategy

## ANALYSIS REQUIREMENTS
### 1. Implementation Understanding
- Load all implementation summaries from source session
- Understand implemented features, APIs, and business logic
- Extract key functions, classes, and modules
- Identify integration points and dependencies
### 2. Existing Test Pattern Analysis
- Study existing test files for patterns and conventions
- Identify test structure (describe/it, test suites, fixtures)
- Analyze assertion patterns and mocking strategies
- Extract test setup/teardown patterns
### 3. Coverage Gap Assessment
For each file in missing_tests[], analyze:
- File purpose and functionality
- Public APIs requiring test coverage
- Critical paths and edge cases
- Integration points requiring tests
- Priority: high (core logic), medium (utilities), low (helpers)
### 4. Test Requirements Specification
For each missing test file, specify:
- Test scope: What needs to be tested
- Test scenarios: Happy path, error cases, edge cases, integration
- Test data: Required fixtures, mocks, test data
- Dependencies: External services, databases, APIs to mock
- Coverage targets: Functions/methods requiring tests
### 5. Test Generation Strategy
- Determine test generation approach for each file
- Identify reusable test patterns from existing tests
- Plan test data and fixture requirements
- Define mocking strategy for dependencies
- Specify expected test file structure

## EXPECTED OUTPUT FORMAT
Write comprehensive analysis to gemini-test-analysis.md:
# Test Generation Analysis
## 1. Implementation Context Summary
- **Source Session**: {source_session_id}
- **Implemented Features**: {feature_summary}
- **Changed Files**: {list_of_implementation_files}
- **Tech Stack**: {technologies_used}
## 2. Test Coverage Assessment
- **Existing Tests**: {count} files
- **Missing Tests**: {count} files
- **Coverage Percentage**: {percentage}%
- **Priority Breakdown**:
  - High Priority: {count} files (core business logic)
  - Medium Priority: {count} files (utilities, helpers)
  - Low Priority: {count} files (configuration, constants)
## 3. Existing Test Pattern Analysis
- **Test Framework**: {framework_name_and_version}
- **File Naming Convention**: {pattern}
- **Test Structure**: {describe_it_or_other}
- **Assertion Style**: {expect_assert_should}
- **Mocking Strategy**: {mocking_framework_and_patterns}
- **Setup/Teardown**: {beforeEach_afterEach_patterns}
- **Test Data**: {fixtures_factories_builders}
## 4. Test Requirements by File
### File: {implementation_file_path}
**Test File**: {suggested_test_file_path}
**Priority**: {high|medium|low}
#### Scope
- {description_of_what_needs_testing}
#### Test Scenarios
1. **Happy Path Tests**
   - {scenario_1}
   - {scenario_2}
2. **Error Handling Tests**
   - {error_scenario_1}
   - {error_scenario_2}
3. **Edge Case Tests**
   - {edge_case_1}
   - {edge_case_2}
4. **Integration Tests** (if applicable)
   - {integration_scenario_1}
   - {integration_scenario_2}
#### Test Data & Fixtures
- {required_test_data}
- {required_mocks}
- {required_fixtures}
#### Dependencies to Mock
- {external_service_1}
- {external_service_2}
#### Coverage Targets
- Function: {function_name} - {test_requirements}
- Function: {function_name} - {test_requirements}
---
[Repeat for each missing test file]
---
## 5. Test Generation Strategy
### Overall Approach
- {strategy_description}
### Test Generation Order
1. {file_1} - {rationale}
2. {file_2} - {rationale}
3. {file_3} - {rationale}
### Reusable Patterns
- {pattern_1_from_existing_tests}
- {pattern_2_from_existing_tests}
### Test Data Strategy
- {approach_to_test_data_and_fixtures}
### Mocking Strategy
- {approach_to_mocking_dependencies}
### Quality Criteria
- Code coverage target: {percentage}%
- Test scenarios per function: {count}
- Integration test coverage: {approach}
## 6. Implementation Targets
**Purpose**: Identify new test files to create
**Format**: New test files only (no existing files to modify)
**Test Files to Create**:
1. **Target**: `tests/auth/TokenValidator.test.ts`
   - **Type**: Create new test file
   - **Purpose**: Test TokenValidator class
   - **Scenarios**: 15 test cases covering validation logic, error handling, edge cases
   - **Dependencies**: Mock JWT library, test fixtures for tokens
2. **Target**: `tests/middleware/errorHandler.test.ts`
   - **Type**: Create new test file
   - **Purpose**: Test error handling middleware
   - **Scenarios**: 8 test cases for different error types and response formats
   - **Dependencies**: Mock Express req/res/next, error fixtures
[List all test files to create]
## 7. Success Metrics
- **Test Coverage Goal**: {target_percentage}%
- **Test Quality**: All scenarios covered (happy, error, edge, integration)
- **Convention Compliance**: Follow existing test patterns
- **Maintainability**: Clear test descriptions, reusable fixtures