| name | description | argument-hint | examples |
|---|---|---|---|
| test-concept-enhanced | Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini | --session WFS-test-session-id --context path/to/test-context-package.json | |
# Test Concept Enhanced Command

## Overview
Workflow coordinator that delegates test analysis to cli-execution-agent. The agent executes Gemini to analyze test coverage gaps and implementation context and to generate a comprehensive test generation strategy.
## Core Philosophy
- Coverage-Driven: Focus on identified test gaps from context analysis
- Pattern-Based: Learn from existing tests and project conventions
- Gemini-Powered: Use Gemini for test requirement analysis and strategy design
- Single-Round Analysis: Comprehensive test analysis in one execution
- No Code Generation: Strategy and planning only; actual test generation happens during task execution
## Core Responsibilities
- Coordinate test analysis workflow using cli-execution-agent
- Validate test-context-package.json prerequisites
- Execute Gemini analysis via agent for test strategy generation
- Validate agent outputs (gemini-test-analysis.md, TEST_ANALYSIS_RESULTS.md)
## Execution Process

```
Input Parsing:
├─ Parse flags: --session, --context
└─ Validation: Both REQUIRED

Phase 1: Context Preparation (Command)
├─ Load workflow-session.json
├─ Verify test session type is "test-gen"
├─ Validate test-context-package.json
└─ Determine strategy (Simple: 1-3 files | Medium: 4-6 | Complex: >6)

Phase 2: Test Analysis Execution (Agent)
├─ Execute Gemini analysis via cli-execution-agent
└─ Generate TEST_ANALYSIS_RESULTS.md

Phase 3: Output Validation (Command)
├─ Verify gemini-test-analysis.md exists
├─ Validate TEST_ANALYSIS_RESULTS.md
└─ Confirm test requirements are actionable
```
## Execution Lifecycle

### Phase 1: Context Preparation (Command Responsibility)
Command prepares session context and validates prerequisites.
- **Session Validation**
  - Load `.workflow/active/{test_session_id}/workflow-session.json`
  - Verify test session type is "test-gen"
  - Extract source session reference
- **Context Package Validation**
  - Read `test-context-package.json`
  - Validate required sections: metadata, source_context, test_coverage, test_framework
  - Extract coverage gaps and framework details
- **Strategy Determination**
  - Simple (1-3 files): Single Gemini analysis
  - Medium (4-6 files): Comprehensive analysis
  - Complex (>6 files): Modular analysis approach

A minimal sketch of these preparation steps is shown below.
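The sketch below illustrates the preparation logic. The paths, session type, and file-count thresholds come from the list above; the JSON field names (`type`, `source_session`, `files_missing_tests`) and the helper name are assumptions for illustration, not the command's actual implementation.

```python
import json
from pathlib import Path


def prepare_test_analysis(test_session_id: str, context_path: str) -> dict:
    """Illustrative sketch of Phase 1: session validation, package validation, strategy choice."""
    session_dir = Path(".workflow/active") / test_session_id
    session = json.loads((session_dir / "workflow-session.json").read_text())

    # Session validation: only "test-gen" sessions may run test analysis.
    if session.get("type") != "test-gen":
        raise ValueError(f"Expected session type 'test-gen', got {session.get('type')!r}")
    source_session_id = session.get("source_session")  # assumed field: reference to the source session

    # Context package validation: all required sections must be present.
    package = json.loads(Path(context_path).read_text())
    required = ["metadata", "source_context", "test_coverage", "test_framework"]
    missing = [key for key in required if key not in package]
    if missing:
        raise ValueError(f"test-context-package.json missing sections: {missing}")

    # Strategy determination based on the number of files with coverage gaps.
    gap_files = package["test_coverage"].get("files_missing_tests", [])  # assumed field
    if len(gap_files) <= 3:
        strategy = "simple"    # single Gemini analysis
    elif len(gap_files) <= 6:
        strategy = "medium"    # comprehensive analysis
    else:
        strategy = "complex"   # modular analysis approach

    return {"source_session": source_session_id, "strategy": strategy, "gap_files": gap_files}
```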
### Phase 2: Test Analysis Execution (Agent Responsibility)
Purpose: Analyze test coverage gaps and generate comprehensive test strategy.
Agent Invocation:
```
Task(
  subagent_type="cli-execution-agent",
  run_in_background=false,
  description="Analyze test coverage gaps and generate test strategy",
  prompt=`
    ## TASK OBJECTIVE
    Analyze test requirements and generate comprehensive test generation strategy using Gemini CLI

    ## EXECUTION CONTEXT
    Session: {test_session_id}
    Source Session: {source_session_id}
    Working Dir: .workflow/active/{test_session_id}/.process
    Template: ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt

    ## EXECUTION STEPS
    1. Execute Gemini analysis:
       ccw cli exec "$(cat ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt)" --tool gemini --mode write --cd .workflow/active/{test_session_id}/.process
    2. Generate TEST_ANALYSIS_RESULTS.md:
       Synthesize gemini-test-analysis.md into standardized format for task generation
       Include: coverage assessment, test framework, test requirements, generation strategy, implementation targets

    ## EXPECTED OUTPUTS
    1. gemini-test-analysis.md - Raw Gemini analysis
    2. TEST_ANALYSIS_RESULTS.md - Standardized test requirements document

    ## QUALITY VALIDATION
    - Both output files exist and are complete
    - All required sections present in TEST_ANALYSIS_RESULTS.md
    - Test requirements are actionable and quantified
    - Test scenarios cover happy path, errors, edge cases
    - Dependencies and mocks clearly identified
  `
)
```
Output Files:
- `.workflow/active/{test_session_id}/.process/gemini-test-analysis.md`
- `.workflow/active/{test_session_id}/.process/TEST_ANALYSIS_RESULTS.md`
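Step 2 of the agent prompt asks the agent to synthesize `gemini-test-analysis.md` into a standardized results document. As a rough illustration only, the skeleton below uses the section names listed in the prompt; the helper and its output layout are assumptions, not the agent's actual behavior.

```python
from pathlib import Path

# Section names taken from the agent prompt's "Include" list.
SECTIONS = [
    "Coverage Assessment",
    "Test Framework",
    "Test Requirements",
    "Generation Strategy",
    "Implementation Targets",
]


def synthesize_results(process_dir: str) -> Path:
    """Illustrative only: lay out the standardized skeleton and append the raw Gemini analysis."""
    process = Path(process_dir)
    raw = (process / "gemini-test-analysis.md").read_text()

    body = ["# TEST_ANALYSIS_RESULTS", ""]
    for section in SECTIONS:
        # The real agent fills each section with synthesized content; this sketch only
        # reserves the headings so downstream validation has something to check.
        body += [f"## {section}", "", "<!-- synthesized from gemini-test-analysis.md -->", ""]
    body += ["## Appendix: Raw Gemini Analysis", "", raw]

    out = process / "TEST_ANALYSIS_RESULTS.md"
    out.write_text("\n".join(body))
    return out
```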
### Phase 3: Output Validation (Command Responsibility)
The command validates the agent outputs; a sketch of these checks follows the list below.
- Verify `gemini-test-analysis.md` exists and is complete
- Validate `TEST_ANALYSIS_RESULTS.md` was generated by the agent
- Check that all required sections are present
- Confirm test requirements are actionable
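A rough sketch of these checks under stated assumptions: the directory layout follows the output paths above, while the required-section names and the helper are placeholders rather than the command's actual validation code.

```python
from pathlib import Path

# Assumed minimal set of sections the command looks for in TEST_ANALYSIS_RESULTS.md.
REQUIRED_SECTIONS = ["Coverage Assessment", "Test Requirements", "Generation Strategy"]


def validate_outputs(test_session_id: str) -> list[str]:
    """Illustrative Phase 3 check: return a list of problems; empty means the outputs look usable."""
    process = Path(".workflow/active") / test_session_id / ".process"
    problems = []

    raw = process / "gemini-test-analysis.md"
    results = process / "TEST_ANALYSIS_RESULTS.md"

    if not raw.is_file() or raw.stat().st_size == 0:
        problems.append("gemini-test-analysis.md missing or empty")
    if not results.is_file():
        problems.append("TEST_ANALYSIS_RESULTS.md missing")
    else:
        text = results.read_text()
        for section in REQUIRED_SECTIONS:
            if section not in text:
                problems.append(f"TEST_ANALYSIS_RESULTS.md missing section: {section}")
        # "Actionable" is hard to verify mechanically; a cheap proxy is that the document
        # names at least one concrete implementation target.
        if "Implementation Targets" not in text:
            problems.append("no implementation targets listed")

    return problems
```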
## Error Handling

### Validation Errors
| Error | Resolution |
|---|---|
| Missing context package | Run test-context-gather first |
| No coverage gaps | Skip test generation, proceed to execution |
| No test framework detected | Configure test framework |
| Invalid source session | Complete implementation first |
### Execution Errors
| Error | Recovery |
|---|---|
| Gemini timeout | Reduce scope, analyze by module |
| Output incomplete | Retry with focused analysis |
| No output file | Check directory permissions |
**Fallback Strategy**: If Gemini fails, generate a basic TEST_ANALYSIS_RESULTS.md from the context package.
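A minimal sketch of that fallback, assuming the context package exposes the detected framework name and the list of files missing tests (the field names and helper are hypothetical):

```python
import json
from pathlib import Path


def fallback_results(context_path: str, process_dir: str) -> Path:
    """Illustrative fallback: derive a bare-bones TEST_ANALYSIS_RESULTS.md from the context package."""
    package = json.loads(Path(context_path).read_text())
    framework = package.get("test_framework", {}).get("name", "unknown")         # assumed field
    gap_files = package.get("test_coverage", {}).get("files_missing_tests", [])  # assumed field

    lines = [
        "# TEST_ANALYSIS_RESULTS (fallback, generated without Gemini analysis)",
        "",
        "## Test Framework",
        f"- {framework}",
        "",
        "## Test Requirements",
    ]
    for path in gap_files:
        lines.append(f"- {path}: cover happy path, error handling, and edge cases")

    out = Path(process_dir) / "TEST_ANALYSIS_RESULTS.md"
    out.write_text("\n".join(lines) + "\n")
    return out
```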
## Integration & Usage

### Command Chain
- Called By: `/workflow:test-gen` (Phase 4: Analysis)
- Requires: `test-context-package.json` from `/workflow:tools:test-context-gather`
- Followed By: `/workflow:tools:test-task-generate`
## Performance
- Focused analysis: Only analyze files with missing tests
- Pattern reuse: Study existing tests for quick extraction
- Timeout: 20-minute limit for analysis
## Success Criteria
- Valid TEST_ANALYSIS_RESULTS.md generated
- All missing tests documented with actionable requirements
- Test scenarios cover happy path, errors, edge cases, integration
- Dependencies and mocks clearly identified
- Test generation strategy is practical
- Output follows existing test conventions