Phase 3: Test Concept Enhanced (test-concept-enhanced)

Analyze test requirements with Gemini using progressive L0-L3 test layers.

Objective

  • Use Gemini to analyze coverage gaps
  • Detect project type and apply appropriate test templates
  • Generate multi-layered test requirements (L0-L3)
  • Scan for AI code issues

Core Philosophy

  • Coverage-Driven: Focus on identified test gaps from context analysis
  • Pattern-Based: Learn from existing tests and project conventions
  • Gemini-Powered: Use Gemini for test requirement analysis and strategy design
  • Single-Round Analysis: Comprehensive test analysis in one execution
  • No Code Generation: Strategy and planning only, actual test generation happens in task execution

Core Responsibilities

  • Coordinate test analysis workflow using cli-execution-agent
  • Validate test-context-package.json prerequisites
  • Execute Gemini analysis via agent for test strategy generation
  • Validate agent outputs (gemini-test-analysis.md, TEST_ANALYSIS_RESULTS.md)

Execution

Step 1.3: Test Generation Analysis

Phase 1: Context Preparation

Command prepares session context and validates prerequisites.

  1. Session Validation

    • Load .workflow/active/{test_session_id}/workflow-session.json
    • Verify test session type is "test-gen"
    • Extract source session reference
  2. Context Package Validation

    • Read test-context-package.json
    • Validate required sections: metadata, source_context, test_coverage, test_framework
    • Extract coverage gaps and framework details
  3. Strategy Determination

    • Simple (1-3 files): Single Gemini analysis
    • Medium (4-6 files): Comprehensive analysis
    • Complex (>6 files): Modular analysis approach
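The three preparation steps above can be sketched as follows. This is a minimal illustration only; the JSON field names (`session_type`, `source_session`, `coverage_gaps`) are assumptions, not confirmed by this document:

```python
import json
from pathlib import Path

# Sections the context package must contain (from Context Package Validation above).
REQUIRED_SECTIONS = ("metadata", "source_context", "test_coverage", "test_framework")

def validate_session(session: dict) -> str:
    """Verify the session type is 'test-gen' and extract the source session reference."""
    if session.get("session_type") != "test-gen":
        raise ValueError("test session type must be 'test-gen'")
    return session["source_session"]

def validate_context_package(package: dict) -> None:
    """Check that all required sections are present in test-context-package.json."""
    missing = [s for s in REQUIRED_SECTIONS if s not in package]
    if missing:
        raise ValueError(f"context package missing sections: {missing}")

def choose_strategy(file_count: int) -> str:
    """Map the number of files with coverage gaps to an analysis strategy."""
    if file_count <= 3:
        return "single"         # Simple: single Gemini analysis
    if file_count <= 6:
        return "comprehensive"  # Medium: comprehensive analysis
    return "modular"            # Complex: modular analysis approach

def prepare_context(test_session_id: str) -> str:
    """Load session files from disk, validate them, and return the chosen strategy."""
    session_dir = Path(".workflow/active") / test_session_id
    session = json.loads((session_dir / "workflow-session.json").read_text())
    validate_session(session)
    package = json.loads((session_dir / "test-context-package.json").read_text())
    validate_context_package(package)
    gaps = package["test_coverage"].get("coverage_gaps", [])
    return choose_strategy(len(gaps))
```

The strategy thresholds mirror the Simple/Medium/Complex tiers above; only the file count drives the choice.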

Phase 2: Test Analysis Execution

Purpose: Analyze test coverage gaps and generate comprehensive test strategy.

Task(
  subagent_type="cli-execution-agent",
  run_in_background=false,
  description="Analyze test coverage gaps and generate test strategy",
  prompt=`
## TASK OBJECTIVE
Analyze test requirements and generate comprehensive test generation strategy using Gemini CLI

## EXECUTION CONTEXT
Session: {test_session_id}
Source Session: {source_session_id}
Working Dir: .workflow/active/{test_session_id}/.process
Template: ~/.ccw/workflows/cli-templates/prompts/test/test-concept-analysis.txt

## EXECUTION STEPS
1. Execute Gemini analysis:
   ccw cli -p "..." --tool gemini --mode write --rule test-test-concept-analysis --cd .workflow/active/{test_session_id}/.process

2. Generate TEST_ANALYSIS_RESULTS.md:
   Synthesize gemini-test-analysis.md into standardized format for task generation
   Include: coverage assessment, test framework, test requirements, generation strategy, implementation targets

## EXPECTED OUTPUTS
1. gemini-test-analysis.md - Raw Gemini analysis
2. TEST_ANALYSIS_RESULTS.md - Standardized test requirements document

## QUALITY VALIDATION
- Both output files exist and are complete
- All required sections present in TEST_ANALYSIS_RESULTS.md
- Test requirements are actionable and quantified
- Test scenarios cover happy path, errors, edge cases
- Dependencies and mocks clearly identified
`
)

Output Files:

  • .workflow/active/{test_session_id}/.process/gemini-test-analysis.md
  • .workflow/active/{test_session_id}/.process/TEST_ANALYSIS_RESULTS.md

Phase 3: Output Validation

  • Verify gemini-test-analysis.md exists and is complete
  • Validate TEST_ANALYSIS_RESULTS.md generated by agent
  • Check required sections present
  • Confirm test requirements are actionable

Input:

  • testSessionId from Phase 1
  • contextPath from Phase 2

Expected Behavior:

  • Use Gemini to analyze coverage gaps
  • Detect project type and apply appropriate test templates
  • Generate multi-layered test requirements (L0-L3)
  • Scan for AI code issues
  • Generate TEST_ANALYSIS_RESULTS.md

Output: .workflow/active/[testSessionId]/.process/TEST_ANALYSIS_RESULTS.md

Validation - TEST_ANALYSIS_RESULTS.md must include:

  • Project Type Detection (with confidence)
  • Coverage Assessment (current vs target)
  • Test Framework & Conventions
  • Multi-Layered Test Plan (L0-L3)
  • AI Issue Scan Results
  • Test Requirements by File (with layer annotations)
  • Quality Assurance Criteria
  • Success Criteria
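The section check above could be automated with a simple scan. A rough sketch; treating the list entries as literal heading strings in the generated markdown is an assumption:

```python
# Required section titles, taken from the validation list above.
REQUIRED_HEADINGS = (
    "Project Type Detection",
    "Coverage Assessment",
    "Test Framework & Conventions",
    "Multi-Layered Test Plan",
    "AI Issue Scan Results",
    "Test Requirements by File",
    "Quality Assurance Criteria",
    "Success Criteria",
)

def missing_sections(markdown_text: str) -> list[str]:
    """Return the required section titles not found in TEST_ANALYSIS_RESULTS.md."""
    return [h for h in REQUIRED_HEADINGS if h not in markdown_text]
```

An empty return value means the document passes this structural check; anything else lists the sections to regenerate.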

Error Handling

Validation Errors

| Error | Resolution |
| --- | --- |
| Missing context package | Run test-context-gather first |
| No coverage gaps | Skip test generation, proceed to execution |
| No test framework detected | Configure test framework |
| Invalid source session | Complete implementation first |
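These validation checks might be wired up as a single precondition scan over the context package, for example (a sketch; the package field names are assumptions):

```python
def check_preconditions(package: dict) -> list[str]:
    """Return resolution hints for any failed validation preconditions."""
    if not package:
        # No test-context-package.json was loaded at all.
        return ["Missing context package: run test-context-gather first"]
    issues = []
    if not package.get("test_coverage", {}).get("coverage_gaps"):
        issues.append("No coverage gaps: skip test generation, proceed to execution")
    if not package.get("test_framework"):
        issues.append("No test framework detected: configure test framework")
    return issues
```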

Execution Errors

| Error | Recovery |
| --- | --- |
| Gemini timeout | Reduce scope, analyze by module |
| Output incomplete | Retry with focused analysis |
| No output file | Check directory permissions |

Fallback Strategy: Generate a basic TEST_ANALYSIS_RESULTS.md from the context package if Gemini analysis fails
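One possible shape for that fallback document generator (a sketch only; field names such as `test_framework.name` and `coverage_gaps` are assumptions about the context package layout):

```python
def fallback_results(package: dict) -> str:
    """Build a minimal TEST_ANALYSIS_RESULTS.md from the context package
    when Gemini analysis fails. Produces only the sections recoverable
    without analysis; downstream phases should treat it as degraded output."""
    framework = package.get("test_framework", {}).get("name", "unknown")
    gaps = package.get("test_coverage", {}).get("coverage_gaps", [])
    lines = [
        "# TEST_ANALYSIS_RESULTS",
        "",
        "## Coverage Assessment",
        f"- Files with coverage gaps: {len(gaps)}",
        "",
        "## Test Framework & Conventions",
        f"- Framework: {framework}",
        "",
        "## Test Requirements by File",
    ]
    lines += [f"- {gap}: add happy-path, error, and edge-case tests" for gap in gaps]
    return "\n".join(lines)
```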

Output

  • File: .workflow/active/[testSessionId]/.process/TEST_ANALYSIS_RESULTS.md

Next Phase

Continue to Phase 4: Test Task Generate.