refactor: optimize test workflow commands and extract templates

1. Optimize test-task-generate.md structure
   - Simplify from 417 lines to 217 lines (~48% reduction)
   - Reference task-generate-agent.md structure for consistency
   - Enhance agent prompt with clear specifications:
     * Explicit agent configuration reference
     * Detailed TEST_ANALYSIS_RESULTS.md integration mapping
     * Complete test-fix cycle configuration
     * Clear task structure requirements (IMPL-001 test-gen, IMPL-002+ test-fix)
   - Fix format errors (remove nested code blocks in agent prompt)
   - Emphasize "planning only" nature (does NOT execute tests)

2. Refactor test-concept-enhanced.md to use cli-execute-agent
   - Simplify from 463 lines to 142 lines (~69% reduction)
   - Change from direct Gemini execution to cli-execute-agent delegation
   - Extract Gemini prompt template to separate file:
     * Create ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt
     * Use $(cat ~/.claude/...) pattern for template reference (see the sketch after this list)
   - Clarify responsibility separation:
     * Command: Workflow coordination and validation
     * Agent: Execute Gemini analysis and generate outputs
   - Agent generates both gemini-test-analysis.md and TEST_ANALYSIS_RESULTS.md
   - Remove verbose embedded prompts and output format descriptions
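A minimal sketch of the template-reference pattern (the WFS-test-auth session id is an illustrative placeholder taken from the command's examples; the invocation mirrors the one in the diff below):

```bash
# Substitute the extracted template file's contents as the Gemini prompt
# instead of embedding a multi-hundred-line prompt string in the command doc.
cd .workflow/active/WFS-test-auth/.process && \
  gemini -p "$(cat ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt)" \
    --approval-mode yolo
```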

Key improvements:
- Modular template management for easier maintenance
- Clear command vs agent responsibility separation
- Consistent documentation structure across workflow commands
- Enhanced agent prompts ensure correct output generation (see the validation sketch below)
- Use ~/.claude paths to reference the installed workflow location
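
As a concrete illustration of the command/agent split, a hypothetical shell sketch of the Phase 3 checks the command performs after the agent returns (file names and the section heading come from the diff below; SESSION is an illustrative placeholder):

```bash
# Verify both agent outputs exist and are non-empty, then spot-check that
# the standardized results document contains a required section.
SESSION=WFS-test-auth
PROC=".workflow/active/$SESSION/.process"
for f in gemini-test-analysis.md TEST_ANALYSIS_RESULTS.md; do
  [ -s "$PROC/$f" ] || { echo "missing or empty: $f" >&2; exit 1; }
done
grep -q '^## 1. Coverage Assessment' "$PROC/TEST_ANALYSIS_RESULTS.md" \
  || echo "warning: Coverage Assessment section not found" >&2
```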
catlog22
2025-11-24 19:33:02 +08:00
parent adbb2070bb
commit aabc6294f4
3 changed files with 405 additions and 748 deletions


@@ -1,6 +1,6 @@
---
name: test-concept-enhanced
-description: Analyze test requirements and generate test generation strategy using Gemini with test-context package
+description: Coordinate test analysis workflow using cli-execute-agent to generate test strategy via Gemini
argument-hint: "--session WFS-test-session-id --context path/to/test-context-package.json"
examples:
- /workflow:tools:test-concept-enhanced --session WFS-test-auth --context .workflow/active/WFS-test-auth/.process/test-context-package.json
@@ -9,7 +9,7 @@ examples:
# Test Concept Enhanced Command
## Overview
-Specialized analysis tool for test generation workflows that uses Gemini to analyze test coverage gaps, implementation context, and generate comprehensive test generation strategies.
+Workflow coordinator that delegates test analysis to cli-execute-agent. Agent executes Gemini to analyze test coverage gaps, implementation context, and generate comprehensive test generation strategies.
## Core Philosophy
- **Coverage-Driven**: Focus on identified test gaps from context analysis
@@ -19,15 +19,16 @@ Specialized analysis tool for test generation workflows that uses Gemini to anal
- **No Code Generation**: Strategy and planning only, actual test generation happens in task execution
## Core Responsibilities
-- Parse test-context-package.json from test-context-gather
-- Analyze implementation summaries and coverage gaps
-- Study existing test patterns and conventions
-- Generate test generation strategy using Gemini
-- Produce TEST_ANALYSIS_RESULTS.md for task generation
+- Coordinate test analysis workflow using cli-execute-agent
+- Validate test-context-package.json prerequisites
+- Execute Gemini analysis via agent for test strategy generation
+- Validate agent outputs (gemini-test-analysis.md, TEST_ANALYSIS_RESULTS.md)
## Execution Lifecycle
-### Phase 1: Validation & Preparation
+### Phase 1: Context Preparation (Command Responsibility)
+**Command prepares session context and validates prerequisites.**
1. **Session Validation**
- Load `.workflow/active/{test_session_id}/workflow-session.json`
@@ -40,423 +41,100 @@ Specialized analysis tool for test generation workflows that uses Gemini to anal
- Extract coverage gaps and framework details
3. **Strategy Determination**
-   - **Simple Test Generation** (1-3 files): Single Gemini analysis
-   - **Medium Test Generation** (4-6 files): Gemini comprehensive analysis
-   - **Complex Test Generation** (>6 files): Gemini analysis with modular approach
+   - **Simple** (1-3 files): Single Gemini analysis
+   - **Medium** (4-6 files): Comprehensive analysis
+   - **Complex** (>6 files): Modular analysis approach
-### Phase 2: Gemini Test Analysis
+### Phase 2: Test Analysis Execution (Agent Responsibility)
-**Tool Configuration**:
-```bash
-cd .workflow/active/{test_session_id}/.process && gemini -p "
-PURPOSE: Analyze test coverage gaps and design comprehensive test generation strategy
-TASK: Study implementation context, existing tests, and generate test requirements for missing coverage
-MODE: analysis
-CONTEXT: @{.workflow/active/{test_session_id}/.process/test-context-package.json}
+**Purpose**: Analyze test coverage gaps and generate comprehensive test strategy.
-**MANDATORY FIRST STEP**: Read and analyze test-context-package.json to understand:
-- Test coverage gaps from test_coverage.missing_tests[]
-- Implementation context from source_context.implementation_summaries[]
-- Existing test patterns from test_framework.conventions
-- Changed files requiring tests from source_context.implementation_summaries[].changed_files
+**Agent Invocation**:
+```javascript
+Task(
+  subagent_type="cli-execute-agent",
+  description="Analyze test coverage gaps and generate test strategy",
+  prompt=`
+## TASK OBJECTIVE
+Analyze test requirements and generate comprehensive test generation strategy using Gemini CLI
-**ANALYSIS REQUIREMENTS**:
+## EXECUTION CONTEXT
+Session: {test_session_id}
+Source Session: {source_session_id}
+Working Dir: .workflow/active/{test_session_id}/.process
+Template: ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt
-1. **Implementation Understanding**
-   - Load all implementation summaries from source session
-   - Understand implemented features, APIs, and business logic
-   - Extract key functions, classes, and modules
-   - Identify integration points and dependencies
+## EXECUTION STEPS
+1. Execute Gemini analysis:
+   cd .workflow/active/{test_session_id}/.process && gemini -p "$(cat ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt)" --approval-mode yolo
-2. **Existing Test Pattern Analysis**
-   - Study existing test files for patterns and conventions
-   - Identify test structure (describe/it, test suites, fixtures)
-   - Analyze assertion patterns and mocking strategies
-   - Extract test setup/teardown patterns
+2. Generate TEST_ANALYSIS_RESULTS.md:
+   Synthesize gemini-test-analysis.md into standardized format for task generation
+   Include: coverage assessment, test framework, test requirements, generation strategy, implementation targets
-3. **Coverage Gap Assessment**
-   - For each file in missing_tests[], analyze:
-     - File purpose and functionality
-     - Public APIs requiring test coverage
-     - Critical paths and edge cases
-     - Integration points requiring tests
-   - Prioritize tests: high (core logic), medium (utilities), low (helpers)
+## EXPECTED OUTPUTS
+1. gemini-test-analysis.md - Raw Gemini analysis
+2. TEST_ANALYSIS_RESULTS.md - Standardized test requirements document
-4. **Test Requirements Specification**
-   - For each missing test file, specify:
-     - **Test scope**: What needs to be tested
-     - **Test scenarios**: Happy path, error cases, edge cases, integration
-     - **Test data**: Required fixtures, mocks, test data
-     - **Dependencies**: External services, databases, APIs to mock
-     - **Coverage targets**: Functions/methods requiring tests
-5. **Test Generation Strategy**
-   - Determine test generation approach for each file
-   - Identify reusable test patterns from existing tests
-   - Plan test data and fixture requirements
-   - Define mocking strategy for dependencies
-   - Specify expected test file structure
-EXPECTED OUTPUT - Write to gemini-test-analysis.md:
-# Test Generation Analysis
-## 1. Implementation Context Summary
-- **Source Session**: {source_session_id}
-- **Implemented Features**: {feature_summary}
-- **Changed Files**: {list_of_implementation_files}
-- **Tech Stack**: {technologies_used}
-## 2. Test Coverage Assessment
-- **Existing Tests**: {count} files
-- **Missing Tests**: {count} files
-- **Coverage Percentage**: {percentage}%
-- **Priority Breakdown**:
-  - High Priority: {count} files (core business logic)
-  - Medium Priority: {count} files (utilities, helpers)
-  - Low Priority: {count} files (configuration, constants)
-## 3. Existing Test Pattern Analysis
-- **Test Framework**: {framework_name_and_version}
-- **File Naming Convention**: {pattern}
-- **Test Structure**: {describe_it_or_other}
-- **Assertion Style**: {expect_assert_should}
-- **Mocking Strategy**: {mocking_framework_and_patterns}
-- **Setup/Teardown**: {beforeEach_afterEach_patterns}
-- **Test Data**: {fixtures_factories_builders}
-## 4. Test Requirements by File
-### File: {implementation_file_path}
-**Test File**: {suggested_test_file_path}
-**Priority**: {high|medium|low}
-#### Scope
-- {description_of_what_needs_testing}
-#### Test Scenarios
-1. **Happy Path Tests**
-   - {scenario_1}
-   - {scenario_2}
-2. **Error Handling Tests**
-   - {error_scenario_1}
-   - {error_scenario_2}
-3. **Edge Case Tests**
-   - {edge_case_1}
-   - {edge_case_2}
-4. **Integration Tests** (if applicable)
-   - {integration_scenario_1}
-   - {integration_scenario_2}
-#### Test Data & Fixtures
-- {required_test_data}
-- {required_mocks}
-- {required_fixtures}
-#### Dependencies to Mock
-- {external_service_1}
-- {external_service_2}
-#### Coverage Targets
-- Function: {function_name} - {test_requirements}
-- Function: {function_name} - {test_requirements}
----
-[Repeat for each missing test file]
----
-## 5. Test Generation Strategy
-### Overall Approach
-- {strategy_description}
-### Test Generation Order
-1. {file_1} - {rationale}
-2. {file_2} - {rationale}
-3. {file_3} - {rationale}
-### Reusable Patterns
-- {pattern_1_from_existing_tests}
-- {pattern_2_from_existing_tests}
-### Test Data Strategy
-- {approach_to_test_data_and_fixtures}
-### Mocking Strategy
-- {approach_to_mocking_dependencies}
-### Quality Criteria
-- Code coverage target: {percentage}%
-- Test scenarios per function: {count}
-- Integration test coverage: {approach}
-## 6. Implementation Targets
-**Purpose**: Identify new test files to create
-**Format**: New test files only (no existing files to modify)
-**Test Files to Create**:
-1. **Target**: `tests/auth/TokenValidator.test.ts`
-   - **Type**: Create new test file
-   - **Purpose**: Test TokenValidator class
-   - **Scenarios**: 15 test cases covering validation logic, error handling, edge cases
-   - **Dependencies**: Mock JWT library, test fixtures for tokens
-2. **Target**: `tests/middleware/errorHandler.test.ts`
-   - **Type**: Create new test file
-   - **Purpose**: Test error handling middleware
-   - **Scenarios**: 8 test cases for different error types and response formats
-   - **Dependencies**: Mock Express req/res/next, error fixtures
-[List all test files to create]
-## 7. Success Metrics
-- **Test Coverage Goal**: {target_percentage}%
-- **Test Quality**: All scenarios covered (happy, error, edge, integration)
-- **Convention Compliance**: Follow existing test patterns
-- **Maintainability**: Clear test descriptions, reusable fixtures
-RULES:
-- Focus on TEST REQUIREMENTS and GENERATION STRATEGY, NOT code generation
-- Study existing test patterns thoroughly for consistency
-- Prioritize critical business logic tests
-- Specify clear test scenarios and coverage targets
-- Identify all dependencies requiring mocks
-- **MUST write output to .workflow/active/{test_session_id}/.process/gemini-test-analysis.md**
-- Do NOT generate actual test code or implementation
-- Output ONLY test analysis and generation strategy
-" --approval-mode yolo
+## QUALITY VALIDATION
+- Both output files exist and are complete
+- All required sections present in TEST_ANALYSIS_RESULTS.md
+- Test requirements are actionable and quantified
+- Test scenarios cover happy path, errors, edge cases
+- Dependencies and mocks clearly identified
+`
+)
+```
-**Output Location**: `.workflow/active/{test_session_id}/.process/gemini-test-analysis.md`
+**Output Files**:
+- `.workflow/active/{test_session_id}/.process/gemini-test-analysis.md`
+- `.workflow/active/{test_session_id}/.process/TEST_ANALYSIS_RESULTS.md`
-### Phase 3: Results Synthesis
+### Phase 3: Output Validation (Command Responsibility)
-1. **Output Validation**
-   - Verify `gemini-test-analysis.md` exists and is complete
-   - Validate all required sections present
-   - Check test requirements are actionable
+**Command validates agent outputs.**
-2. **Quality Assessment**
-   - Test scenarios cover happy path, errors, edge cases
-   - Dependencies and mocks clearly identified
-   - Test generation strategy is practical
-   - Coverage targets are reasonable
-### Phase 4: TEST_ANALYSIS_RESULTS.md Generation
-Synthesize Gemini analysis into standardized format:
-```markdown
-# Test Generation Analysis Results
-## Executive Summary
-- **Test Session**: {test_session_id}
-- **Source Session**: {source_session_id}
-- **Analysis Timestamp**: {timestamp}
-- **Coverage Gap**: {missing_test_count} files require tests
-- **Test Framework**: {framework}
-- **Overall Strategy**: {high_level_approach}
----
-## 1. Coverage Assessment
-### Current Coverage
-- **Existing Tests**: {count} files
-- **Implementation Files**: {count} files
-- **Coverage Percentage**: {percentage}%
-### Missing Tests (Priority Order)
-1. **High Priority** ({count} files)
-   - {file_1} - {reason}
-   - {file_2} - {reason}
-2. **Medium Priority** ({count} files)
-   - {file_1} - {reason}
-3. **Low Priority** ({count} files)
-   - {file_1} - {reason}
----
-## 2. Test Framework & Conventions
-### Framework Configuration
-- **Framework**: {framework_name}
-- **Version**: {version}
-- **Test Pattern**: {file_pattern}
-- **Test Directory**: {directory_structure}
-### Conventions
-- **File Naming**: {convention}
-- **Test Structure**: {describe_it_blocks}
-- **Assertions**: {assertion_library}
-- **Mocking**: {mocking_framework}
-- **Setup/Teardown**: {beforeEach_afterEach}
-### Example Pattern (from existing tests)
-```
-{example_test_structure_from_analysis}
-```
----
-## 3. Test Requirements by File
-[For each missing test, include:]
-### Test File: {test_file_path}
-**Implementation**: {implementation_file}
-**Priority**: {high|medium|low}
-**Estimated Test Count**: {count}
-#### Test Scenarios
-1. **Happy Path**: {scenarios}
-2. **Error Handling**: {scenarios}
-3. **Edge Cases**: {scenarios}
-4. **Integration**: {scenarios}
-#### Dependencies & Mocks
-- {dependency_1_to_mock}
-- {dependency_2_to_mock}
-#### Test Data Requirements
-- {fixture_1}
-- {fixture_2}
----
-## 4. Test Generation Strategy
-### Generation Approach
-{overall_strategy_description}
-### Generation Order
-1. {test_file_1} - {rationale}
-2. {test_file_2} - {rationale}
-3. {test_file_3} - {rationale}
-### Reusable Components
-- **Test Fixtures**: {common_fixtures}
-- **Mock Patterns**: {common_mocks}
-- **Helper Functions**: {test_helpers}
-### Quality Targets
-- **Coverage Goal**: {percentage}%
-- **Scenarios per Function**: {min_count}
-- **Integration Coverage**: {approach}
----
-## 5. Implementation Targets
-**Purpose**: New test files to create (code-developer will generate these)
-**Test Files to Create**:
-1. **Target**: `tests/auth/TokenValidator.test.ts`
-   - **Implementation Source**: `src/auth/TokenValidator.ts`
-   - **Test Scenarios**: 15 (validation, error handling, edge cases)
-   - **Dependencies**: Mock JWT library, token fixtures
-   - **Priority**: High
-2. **Target**: `tests/middleware/errorHandler.test.ts`
-   - **Implementation Source**: `src/middleware/errorHandler.ts`
-   - **Test Scenarios**: 8 (error types, response formats)
-   - **Dependencies**: Mock Express, error fixtures
-   - **Priority**: High
-[List all test files with full specifications]
----
-## 6. Success Criteria
-### Coverage Metrics
-- Achieve {target_percentage}% code coverage
-- All public APIs have tests
-- Critical paths fully covered
-### Quality Standards
-- All test scenarios covered (happy, error, edge, integration)
-- Follow existing test conventions
-- Clear test descriptions and assertions
-- Maintainable test structure
-### Validation Approach
-- Run full test suite after generation
-- Verify coverage with coverage tool
-- Manual review of test quality
-- Integration test validation
----
-## 7. Reference Information
-### Source Context
-- **Implementation Summaries**: {paths}
-- **Existing Tests**: {example_tests}
-- **Documentation**: {relevant_docs}
-### Analysis Tools
-- **Gemini Analysis**: gemini-test-analysis.md
-- **Coverage Tools**: {coverage_tool_if_detected}
-```
-**Output Location**: `.workflow/active/{test_session_id}/.process/TEST_ANALYSIS_RESULTS.md`
+- Verify `gemini-test-analysis.md` exists and is complete
+- Validate `TEST_ANALYSIS_RESULTS.md` generated by agent
+- Check required sections present
+- Confirm test requirements are actionable
## Error Handling
### Validation Errors
-| Error | Cause | Resolution |
-|-------|-------|------------|
-| Missing context package | test-context-gather not run | Run test-context-gather first |
-| No coverage gaps | All files have tests | Skip test generation, proceed to test execution |
-| No test framework detected | Missing test dependencies | Request user to configure test framework |
-| Invalid source session | Source session incomplete | Complete implementation first |
+| Error | Resolution |
+|-------|------------|
+| Missing context package | Run test-context-gather first |
+| No coverage gaps | Skip test generation, proceed to execution |
+| No test framework detected | Configure test framework |
+| Invalid source session | Complete implementation first |
-### Gemini Execution Errors
-| Error | Cause | Recovery |
-|-------|-------|----------|
-| Timeout | Large project analysis | Reduce scope, analyze by module |
-| Output incomplete | Token limit exceeded | Retry with focused analysis |
-| No output file | Write permission error | Check directory permissions |
+### Execution Errors
+| Error | Recovery |
+|-------|----------|
+| Gemini timeout | Reduce scope, analyze by module |
+| Output incomplete | Retry with focused analysis |
+| No output file | Check directory permissions |
-### Fallback Strategy
-- If Gemini fails, generate basic TEST_ANALYSIS_RESULTS.md from context package
-- Use coverage gaps and framework info to create minimal requirements
-- Provide guidance for manual test planning
+**Fallback Strategy**: Generate basic TEST_ANALYSIS_RESULTS.md from context package if Gemini fails
-## Performance Optimization
+## Integration & Usage
-- **Focused Analysis**: Only analyze files with missing tests
-- **Pattern Reuse**: Study existing tests for quick pattern extraction
-- **Parallel Operations**: Load implementation summaries in parallel
-- **Timeout Management**: 20-minute limit for Gemini analysis
+### Command Chain
+- **Called By**: `/workflow:test-gen` (Phase 4: Analysis)
+- **Requires**: `test-context-package.json` from `/workflow:tools:test-context-gather`
+- **Followed By**: `/workflow:tools:test-task-generate`
-## Integration
+### Performance
+- Focused analysis: Only analyze files with missing tests
+- Pattern reuse: Study existing tests for quick extraction
+- Timeout: 20-minute limit for analysis
-### Called By
-- `/workflow:test-gen` (Phase 4: Analysis)
-### Requires
-- `/workflow:tools:test-context-gather` output (test-context-package.json)
-### Followed By
-- `/workflow:tools:test-task-generate` - Generates test task JSON with code-developer invocation
-## Success Criteria
-- ✅ Valid TEST_ANALYSIS_RESULTS.md generated
-- ✅ All missing tests documented with requirements
-- ✅ Test scenarios cover happy path, errors, edge cases
-- ✅ Dependencies and mocks identified
-- ✅ Test generation strategy is actionable
-- ✅ Execution time < 20 minutes
-- ✅ Output follows existing test conventions
+### Success Criteria
+- Valid TEST_ANALYSIS_RESULTS.md generated
+- All missing tests documented with actionable requirements
+- Test scenarios cover happy path, errors, edge cases, integration
+- Dependencies and mocks clearly identified
+- Test generation strategy is practical
+- Output follows existing test conventions