Enhance workflows and commands for intelligent tools strategy

- Updated intelligent-tools-strategy.md to include `--skip-git-repo-check` for Codex write access and development commands.
- Improved context gathering and analysis processes in mcp-tool-strategy.md with additional examples and guidelines for file searching.
- Introduced new command concept-enhanced.md for enhanced intelligent analysis with parallel CLI execution and design blueprint generation.
- Added context-gather.md command for intelligent collection of project context based on task descriptions, generating standardized JSON context packages.
catlog22
2025-09-29 23:30:03 +08:00
parent 8b907ac80f
commit 7e4d370d45
16 changed files with 963 additions and 2106 deletions


@@ -1,330 +0,0 @@
---
name: run
description: Analyze context packages using intelligent tools and generate structured analysis reports
usage: /analysis:run --session <session_id> --context <context_package_path>
argument-hint: "--session WFS-session-id --context path/to/context-package.json"
examples:
- /analysis:run --session WFS-auth --context .workflow/WFS-auth/.process/context-package.json
- /analysis:run --session WFS-payment --context .workflow/WFS-payment/.process/context-package.json
---
# Analysis Run Command (/analysis:run)
## Overview
Intelligent analysis engine that processes standardized context packages, executes deep analysis using multiple AI tools, and generates structured analysis reports.
## Core Philosophy
- **Context-Driven**: Precise analysis based on comprehensive context
- **Intelligent Tool Selection**: Choose optimal analysis tools based on task characteristics
- **Structured Output**: Generate standardized analysis reports
- **Configurable Analysis**: Support different depth and focus levels
## Core Responsibilities
- **Context Package Parsing**: Read and validate context-package.json
- **Multi-Tool Orchestration**: Execute parallel analysis using Gemini/Codex
- **Perspective Synthesis**: Collect and organize different tool viewpoints
- **Consensus Analysis**: Identify agreements and conflicts between tools
- **Analysis Report Generation**: Output multi-perspective ANALYSIS_RESULTS.md (NOT implementation plan)
## Analysis Strategy Selection
### Tool Selection by Task Complexity
**Simple Tasks (≤3 modules)**:
- **Primary Tool**: Gemini (rapid understanding and pattern recognition)
- **Support Tool**: Code-index (structural analysis)
- **Execution Mode**: Single-round analysis, focus on existing patterns
**Medium Tasks (4-6 modules)**:
- **Primary Tool**: Gemini (comprehensive analysis and architecture design)
- **Support Tools**: Code-index + Exa (external best practices)
- **Execution Mode**: Two-round analysis, understanding + architecture design
**Complex Tasks (>6 modules)**:
- **Primary Tools**: Multi-tool collaboration (Gemini+Codex)
- **Analysis Strategy**: Layered analysis with progressive refinement
- **Execution Mode**: Three-round analysis, understand + design + validate
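The complexity tiers above can be sketched as a small selection helper; `select_tools` and the space-separated tool lists are illustrative assumptions, not part of the command interface:

```shell
# Hypothetical helper mapping module count to the tool tiers described above.
select_tools() {
  local module_count="$1"
  if [ "$module_count" -le 3 ]; then
    echo "gemini"                  # simple: single-round analysis
  elif [ "$module_count" -le 6 ]; then
    echo "gemini code-index exa"   # medium: two-round analysis
  else
    echo "gemini codex"            # complex: three-round analysis
  fi
}

select_tools 2   # simple task
select_tools 8   # complex task
```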
### Tool Preferences by Tech Stack
```json
{
"frontend": {
"primary": "gemini",
"secondary": "codex",
"focus": ["component_design", "state_management", "ui_patterns"]
},
"backend": {
"primary": "codex",
"secondary": "gemini",
"focus": ["api_design", "data_flow", "security", "performance"]
},
"fullstack": {
"primary": "gemini",
"secondary": "codex",
"focus": ["system_architecture", "integration", "data_consistency"]
}
}
```
## Execution Process
### Phase 1: Session & Context Package Analysis
1. **Session Validation**
```bash
# Validate session exists
if [ ! -d ".workflow/${session_id}" ]; then
echo "❌ Session ${session_id} not found"
exit 1
fi
# Load session metadata
session_metadata=".workflow/${session_id}/workflow-session.json"
```
2. **Package Validation**
```bash
# Validate context-package.json format
jq empty "$context_package_path" || { echo "❌ Invalid JSON format: $context_package_path"; exit 1; }
```
3. **Task Feature Extraction**
- Parse task description and keywords
- Load session task summaries for context
- Identify tech stack and complexity
- Determine analysis focus and objectives
4. **Resource Inventory**
- Count files by type
- Assess context size
- Define analysis scope
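As a sketch of the inventory step, assuming the context package keeps its files in an `.assets` array with `path`, `type`, and `priority` fields (the schema here is illustrative):

```shell
# Build a small sample package; the .assets schema is an assumption for illustration.
cat > /tmp/context-package.json <<'EOF'
{"assets":[
  {"path":"src/auth.ts","type":"source","priority":"high"},
  {"path":"src/auth.test.ts","type":"test","priority":"medium"},
  {"path":"README.md","type":"doc","priority":"low"}
]}
EOF

# Count files by type to define the analysis scope
jq -r '.assets | group_by(.type) | map("\(.[0].type): \(length)") | .[]' /tmp/context-package.json
# Total asset count as a rough context-size measure
jq '.assets | length' /tmp/context-package.json
```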
### Phase 2: Analysis Tool Selection & Preparation
1. **Intelligent Tool Selection**
```bash
# Select analysis tools based on complexity and tech stack
if [ "$complexity" = "simple" ]; then
analysis_tools=("gemini")
elif [ "$complexity" = "medium" ]; then
analysis_tools=("gemini")
else
analysis_tools=("gemini" "codex")
fi
```
2. **Prompt Template Selection**
- Choose analysis templates based on task type
- Customize prompts with context information
- Optimize token usage efficiency
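A minimal sketch of the template lookup, assuming a `task_type` variable and reusing the two template files referenced elsewhere in this command (`architecture.txt`, `pattern.txt`):

```shell
# Hypothetical template selection; the task_type values are assumptions.
task_type="architecture"
template_dir="$HOME/.claude/workflows/cli-templates/prompts/analysis"
case "$task_type" in
  pattern)      template="$template_dir/pattern.txt" ;;
  architecture) template="$template_dir/architecture.txt" ;;
  *)            template="$template_dir/architecture.txt" ;;  # fallback default
esac
echo "selected template: $template"
```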
### Phase 3: Multi-Tool Analysis Execution
1. **Gemini Analysis** (Comprehensive Understanding & Architecture Design)
```bash
cd "$project_root" && ~/.claude/scripts/gemini-wrapper -p "
PURPOSE: Deep understanding of project architecture, existing patterns, and design implementation architecture
TASK: Comprehensive analysis of ${task_description}
CONTEXT: $(jq -r '.assets[] | select(.priority=="high") | .path' "${context_package_path}" | head -15)
EXPECTED:
- Existing architecture pattern analysis
- Relevant code component identification
- Technical risk assessment
- Implementation complexity evaluation
- Detailed technical implementation architecture
- Code organization structure recommendations
- Interface design and data flow
- Specific implementation steps
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/architecture.txt) | Focus on existing patterns, integration points, and comprehensive design
"
```
2. **Codex Analysis** (Implementation Feasibility & Technical Validation)
```bash
# Only used for complex tasks
if [ "$complexity" = "complex" ]; then
codex -C "$project_root" --full-auto exec "
PURPOSE: Validate technical implementation feasibility
TASK: Assess implementation risks and technical challenges for ${task_description}
CONTEXT: Project codebase + Gemini analysis results
EXPECTED: Technical feasibility report and risk assessment
RULES: Focus on technical risks, performance impact, and maintenance costs
" -s danger-full-access
fi
```
### Phase 4: Multi-Perspective Analysis Integration
1. **Tool-Specific Result Organization**
- Organize Gemini analysis (comprehensive understanding, patterns & architecture design)
- Organize Codex analysis (feasibility & validation)
2. **Perspective Synthesis**
- Identify consensus points across tools
- Document conflicting viewpoints
- Provide synthesis recommendations
3. **Analysis Report Generation**
- Generate multi-tool perspective ANALYSIS_RESULTS.md
- Preserve individual tool insights
- Provide consolidated recommendations (NOT detailed implementation plan)
## Analysis Results Format
Generated ANALYSIS_RESULTS.md format (Multi-Tool Perspective Analysis):
```markdown
# Multi-Tool Analysis Results
## Task Overview
- **Description**: {task_description}
- **Context Package**: {context_package_path}
- **Analysis Tools**: {tools_used}
- **Analysis Timestamp**: {timestamp}
## Tool-Specific Analysis
### 🧠 Gemini Analysis (Comprehensive Understanding & Architecture Design)
**Focus**: Existing codebase understanding, pattern identification, and technical architecture design
#### Current Architecture Assessment
- **Existing Patterns**: {identified_patterns}
- **Code Structure**: {current_structure}
- **Integration Points**: {integration_analysis}
- **Technical Debt**: {debt_assessment}
#### Compatibility Analysis
- **Framework Compatibility**: {framework_analysis}
- **Dependency Impact**: {dependency_analysis}
- **Migration Considerations**: {migration_notes}
#### Proposed Architecture
- **System Design**: {architectural_design}
- **Component Structure**: {component_design}
- **Data Flow**: {data_flow_design}
- **Interface Design**: {api_interface_design}
#### Implementation Strategy
- **Code Organization**: {code_structure_plan}
- **Module Dependencies**: {module_dependencies}
- **Testing Strategy**: {testing_approach}
### 🔧 Codex Analysis (Implementation & Technical Validation)
**Focus**: Implementation feasibility and technical validation
#### Feasibility Assessment
- **Technical Risks**: {implementation_risks}
- **Performance Impact**: {performance_analysis}
- **Resource Requirements**: {resource_assessment}
- **Maintenance Complexity**: {maintenance_analysis}
#### Implementation Recommendations
- **Tool Selection**: {recommended_tools}
- **Development Approach**: {development_strategy}
- **Quality Assurance**: {qa_recommendations}
## Synthesis & Consensus
### Consolidated Recommendations
- **Architecture Approach**: {consensus_architecture}
- **Implementation Priority**: {priority_recommendations}
- **Risk Mitigation**: {risk_mitigation_strategy}
### Tool Agreement Analysis
- **Consensus Points**: {agreed_recommendations}
- **Conflicting Views**: {conflicting_opinions}
- **Resolution Strategy**: {conflict_resolution}
### Task Decomposition Suggestions
1. **Primary Tasks**: {major_task_suggestions}
2. **Task Dependencies**: {dependency_mapping}
3. **Complexity Assessment**: {complexity_evaluation}
## Analysis Quality Metrics
- **Context Coverage**: {coverage_percentage}
- **Tool Consensus**: {consensus_level}
- **Analysis Depth**: {depth_assessment}
- **Confidence Score**: {overall_confidence}
```
## Error Handling & Fallbacks
### Common Error Handling
1. **Context Package Not Found**
```bash
if [ ! -f "$context_package_path" ]; then
echo "❌ Context package not found: $context_package_path"
echo "→ Run /context:gather first to generate context package"
exit 1
fi
```
2. **Tool Unavailability Handling**
```bash
# Use Codex when Gemini is unavailable
if ! command -v ~/.claude/scripts/gemini-wrapper >/dev/null 2>&1; then
echo "⚠️ Gemini not available, falling back to Codex"
analysis_tools=("codex")
fi
```
3. **Large Context Handling**
```bash
# Batch processing for large contexts
context_size=$(jq '.assets | length' "$context_package_path")
if [ "$context_size" -gt 50 ]; then
echo "⚠️ Large context detected, using batch analysis"
# Batch analysis logic
fi
```
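The batch logic left as a placeholder above could look like this sketch, which slices the `.assets` array into fixed-size chunks (the file names and the 20-asset batch size are assumptions):

```shell
# Generate a sample package with 45 assets to demonstrate slicing.
jq -n '{assets: [range(45) | {path: ("file\(.).ts")}]}' > /tmp/context-package.json

batch_size=20
total=$(jq '.assets | length' /tmp/context-package.json)
batches=$(( (total + batch_size - 1) / batch_size ))  # ceiling division

for i in $(seq 0 $((batches - 1))); do
  start=$((i * batch_size))
  # Each slice becomes its own small context file for a separate analysis pass
  jq -c ".assets[$start:$((start + batch_size))]" /tmp/context-package.json \
    > "/tmp/context-batch-$i.json"
done
echo "split $total assets into $batches batches"
```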
## Performance Optimization
### Analysis Optimization Strategies
- **Parallel Analysis**: Execute multiple tools in parallel to reduce total time
- **Context Sharding**: Analyze large projects by module shards
- **Caching Mechanism**: Reuse analysis results for similar contexts
- **Incremental Analysis**: Perform incremental analysis based on changes
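The parallel-analysis idea can be sketched with background jobs; `run_gemini` and `run_codex` are stand-ins (simulated here with `sleep`) for the real wrapper invocations:

```shell
# Stand-in functions; real calls would invoke gemini-wrapper and codex.
run_gemini() { sleep 1; echo "gemini done" > /tmp/gemini.out; }
run_codex()  { sleep 1; echo "codex done" > /tmp/codex.out; }

run_gemini &   # launch both tools concurrently
run_codex &
wait           # total wall time tracks the slowest tool, not the sum
cat /tmp/gemini.out /tmp/codex.out
```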
### Resource Management
```bash
# Set analysis timeout
timeout 600s analysis_command || {
echo "⚠️ Analysis timeout, generating partial results"
# Generate partial results
}
# Memory usage monitoring (resident set size in KB)
memory_usage=$(ps -o rss= -p $$ | tr -d ' ')
if [ "$memory_usage" -gt "$memory_limit" ]; then
echo "⚠️ High memory usage detected, optimizing..."
fi
```
## Integration Points
### Input Interface
- **Required**: `--session` parameter specifying session ID (e.g., WFS-auth)
- **Required**: `--context` parameter specifying context package path
- **Optional**: `--depth` specify analysis depth (quick|full|deep)
- **Optional**: `--focus` specify analysis focus areas
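One way to parse this interface in a wrapper script (a sketch; the flag handling and defaults are assumptions):

```shell
session=""; context=""; depth="full"; focus=""

parse_args() {
  while [ $# -gt 0 ]; do
    case "$1" in
      --session) session="$2"; shift 2 ;;
      --context) context="$2"; shift 2 ;;
      --depth)   depth="$2";   shift 2 ;;
      --focus)   focus="$2";   shift 2 ;;
      *) echo "unknown option: $1" >&2; return 1 ;;
    esac
  done
}

parse_args --session WFS-auth --context pkg.json --depth quick
echo "session=$session context=$context depth=$depth"
```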
### Output Interface
- **Primary**: ANALYSIS_RESULTS.md file
- **Location**: .workflow/{session_id}/.process/ANALYSIS_RESULTS.md
- **Secondary**: analysis-summary.json (machine-readable format)
## Quality Assurance
### Analysis Quality Checks
- **Completeness Check**: Ensure all required analysis sections are completed
- **Consistency Check**: Verify consistency of multi-tool analysis results
- **Feasibility Validation**: Ensure recommended implementation plans are feasible
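The completeness check can be sketched as a heading scan over the generated report (the required section list is taken from the format above):

```shell
# Build a minimal report for illustration
report="/tmp/ANALYSIS_RESULTS.md"
printf '%s\n' "# Multi-Tool Analysis Results" "## Task Overview" \
  "## Tool-Specific Analysis" "## Synthesis & Consensus" > "$report"

missing=0
for section in "## Task Overview" "## Tool-Specific Analysis" "## Synthesis & Consensus"; do
  grep -qF "$section" "$report" || { echo "missing section: $section"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "completeness check passed"
```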
### Success Criteria
- Generate complete ANALYSIS_RESULTS.md file
- Include specific feasible task decomposition recommendations
- Analysis results highly relevant to context
- Execution time within reasonable range (<10 minutes)
## Related Commands
- `/context:gather` - Generate context packages required by this command
- `/workflow:plan` - Call this command for analysis
- `/task:create` - Create specific tasks based on analysis results


@@ -1,369 +0,0 @@
---
name: gather
description: Intelligently collect project context based on task description and package into standardized JSON
usage: /context:gather --session <session_id> "<task_description>"
argument-hint: "--session WFS-session-id \"task description\""
examples:
- /context:gather --session WFS-user-auth "Implement user authentication system"
- /context:gather --session WFS-payment "Refactor payment module API"
- /context:gather --session WFS-bugfix "Fix login validation error"
---
# Context Gather Command (/context:gather)
## Overview
Enhanced intelligent context collector that leverages the Gemini and Codex CLI tools to perform deep analysis, generate improvement suggestions and design blueprints, and produce comprehensive analysis documents. All outputs are systematically organized in the current session's WFS `.process` directory.
## Core Philosophy
- **AI-Driven Analysis**: Use Gemini for architectural analysis and design blueprints
- **Implementation Focus**: Use Codex for practical implementation planning and quality assessment
- **Structured Output**: Generate organized analysis documents in WFS session `.process` folders
- **Synthesis Approach**: Combine all analysis results into a final comprehensive document
## Core Responsibilities
- **Initial Context Collection**: Use MCP tools to gather relevant codebase information
- **Gemini Analysis**: Generate deep architectural analysis and improvement suggestions
- **Design Blueprint Creation**: Create comprehensive design blueprints using Gemini
- **Codex Implementation Planning**: Generate practical implementation roadmaps with Codex
- **Quality Assessment**: Perform code quality analysis and optimization recommendations
- **Synthesis Documentation**: Combine all analysis into comprehensive final document
- **Process Organization**: Structure all outputs in WFS session `.process` directories
## Enhanced CLI Execution Process
### Phase 1: Task Analysis & Context Collection
1. **Initial Context Gathering**
```bash
# Get project structure
~/.claude/scripts/get_modules_by_depth.sh
# Use MCP tools for precise search
mcp__code-index__find_files(pattern="*{keywords}*")
mcp__code-index__search_code_advanced(pattern="{keywords}", file_pattern="*.{ts,js,py,go}")
# Get external context if needed
mcp__exa__get_code_context_exa(query="{task_type} {tech_stack} examples", tokensNum="dynamic")
```
2. **Session Directory Setup**
```bash
# Ensure session .process directory exists
mkdir -p ".workflow/${session_id}/.process"
```
### Phase 2: Gemini Analysis & Planning
1. **Comprehensive Analysis with Gemini**
```bash
cd . && ~/.claude/scripts/gemini-wrapper --approval-mode yolo -p "
PURPOSE: Deep analysis of task requirements and current codebase state
TASK: Analyze task '${task_description}' and generate improvement suggestions
CONTEXT: @{CLAUDE.md,README.md,**/*${keywords}*} Current session: ${session_id}
EXPECTED: Detailed analysis with improvement suggestions saved to .workflow/${session_id}/.process/
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt) | Generate specific improvement recommendations and save to improvement-suggestions.md
" > ".workflow/${session_id}/.process/gemini-analysis.md"
```
2. **Design Blueprint Generation**
```bash
cd . && ~/.claude/scripts/gemini-wrapper --approval-mode yolo -p "
PURPOSE: Create comprehensive design blueprint for task implementation
TASK: Design architectural blueprint for '${task_description}'
CONTEXT: @{.workflow/${session_id}/.process/gemini-analysis.md,CLAUDE.md} Previous analysis results
EXPECTED: Complete design blueprint with component diagrams and implementation strategy
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/architecture.txt) | Create detailed design blueprint and save to design-blueprint.md
" > ".workflow/${session_id}/.process/design-blueprint.md"
```
### Phase 3: Codex Implementation Planning
1. **Implementation Strategy with Codex**
```bash
codex --full-auto exec "
PURPOSE: Generate detailed implementation plan based on analysis and design
TASK: Create implementation roadmap for '${task_description}'
CONTEXT: @{.workflow/${session_id}/.process/gemini-analysis.md,.workflow/${session_id}/.process/design-blueprint.md}
EXPECTED: Step-by-step implementation plan with code examples
RULES: Generate practical implementation plan and save to .workflow/${session_id}/.process/implementation-plan.md
" -s danger-full-access
```
2. **Code Quality Assessment**
```bash
codex --full-auto exec "
PURPOSE: Assess current code quality and identify optimization opportunities
TASK: Evaluate codebase quality related to '${task_description}'
CONTEXT: @{**/*${keywords}*,.workflow/${session_id}/.process/design-blueprint.md}
EXPECTED: Quality assessment report with optimization recommendations
RULES: Focus on maintainability, performance, and security. Save to .workflow/${session_id}/.process/quality-assessment.md
" -s danger-full-access
```
### Phase 4: Synthesis & Final Analysis
1. **Comprehensive Analysis Document Generation**
```bash
cd . && ~/.claude/scripts/gemini-wrapper --approval-mode yolo -p "
PURPOSE: Synthesize all analysis results into comprehensive final document
TASK: Combine all generated analysis documents into unified comprehensive analysis
CONTEXT: @{.workflow/${session_id}/.process/gemini-analysis.md,.workflow/${session_id}/.process/design-blueprint.md,.workflow/${session_id}/.process/implementation-plan.md,.workflow/${session_id}/.process/quality-assessment.md}
EXPECTED: Single comprehensive analysis document with executive summary, detailed findings, and actionable recommendations
RULES: Create synthesis of all previous analysis. Structure: Executive Summary, Key Findings, Design Recommendations, Implementation Strategy, Quality Improvements, Next Steps
" > ".workflow/${session_id}/.process/comprehensive-analysis.md"
```
2. **Context Package Generation**
```bash
# Generate the final context-package.json based on all analysis
cat > ".workflow/${session_id}/.process/context-package.json" << EOF
{
"metadata": {
"task_description": "${task_description}",
"timestamp": "$(date -Iseconds)",
"session_id": "${session_id}",
"analysis_files": [
"gemini-analysis.md",
"design-blueprint.md",
"implementation-plan.md",
"quality-assessment.md",
"comprehensive-analysis.md"
]
},
"generated_analysis": {
"improvement_suggestions": ".workflow/${session_id}/.process/gemini-analysis.md",
"design_blueprint": ".workflow/${session_id}/.process/design-blueprint.md",
"implementation_plan": ".workflow/${session_id}/.process/implementation-plan.md",
"quality_assessment": ".workflow/${session_id}/.process/quality-assessment.md",
"comprehensive_analysis": ".workflow/${session_id}/.process/comprehensive-analysis.md"
}
}
EOF
```
## Enhanced Context Package Format
Generated context package format with analysis documents:
```json
{
"metadata": {
"task_description": "Implement user authentication system",
"timestamp": "2025-09-29T10:30:00Z",
"session_id": "WFS-user-auth",
"execution_method": "gemini_codex_analysis",
"analysis_files": [
"gemini-analysis.md",
"design-blueprint.md",
"implementation-plan.md",
"quality-assessment.md",
"comprehensive-analysis.md"
]
},
"generated_analysis": {
"improvement_suggestions": {
"file": ".workflow/WFS-user-auth/.process/gemini-analysis.md",
"description": "Gemini-generated analysis with improvement recommendations",
"tool": "gemini-wrapper"
},
"design_blueprint": {
"file": ".workflow/WFS-user-auth/.process/design-blueprint.md",
"description": "Architectural design blueprint with component diagrams",
"tool": "gemini-wrapper"
},
"implementation_plan": {
"file": ".workflow/WFS-user-auth/.process/implementation-plan.md",
"description": "Step-by-step implementation roadmap with code examples",
"tool": "codex"
},
"quality_assessment": {
"file": ".workflow/WFS-user-auth/.process/quality-assessment.md",
"description": "Code quality analysis and optimization recommendations",
"tool": "codex"
},
"comprehensive_analysis": {
"file": ".workflow/WFS-user-auth/.process/comprehensive-analysis.md",
"description": "Synthesized final analysis document combining all findings",
"tool": "gemini-wrapper"
}
},
"process_workflow": {
"phase_1": "Context collection with MCP tools",
"phase_2": "Gemini analysis and design blueprint generation",
"phase_3": "Codex implementation planning and quality assessment",
"phase_4": "Synthesis into comprehensive analysis document"
},
"output_location": ".workflow/${session_id}/.process/",
"final_deliverable": "comprehensive-analysis.md"
}
```
## Integration with MCP Tools
### Code Index Integration
```bash
# Set project path
mcp__code-index__set_project_path(path="{current_project_path}")
# Refresh index to ensure latest
mcp__code-index__refresh_index()
# Search relevant files
mcp__code-index__find_files(pattern="*{keyword}*")
# Search code content
mcp__code-index__search_code_advanced(
pattern="{keyword_patterns}",
file_pattern="*.{ts,js,py,go,md}",
context_lines=3
)
```
### Exa Integration
```bash
# Get external code context
mcp__exa__get_code_context_exa(
query="{task_type} {tech_stack} implementation examples",
tokensNum="dynamic"
)
# Get best practices
mcp__exa__get_code_context_exa(
query="{task_domain} best practices patterns",
tokensNum="dynamic"
)
```
## Session ID Integration
### Session ID Usage
- **Required Parameter**: `--session WFS-session-id`
- **Session Context Loading**: Load existing session state and task summaries
- **Session Continuity**: Maintain context across pipeline phases
### Session State Management
```bash
# Validate session exists
if [ ! -d ".workflow/${session_id}" ]; then
echo "❌ Session ${session_id} not found"
exit 1
fi
# Load session metadata
session_metadata=".workflow/${session_id}/workflow-session.json"
```
## Output Location
Context package output location:
```
.workflow/{session_id}/.process/context-package.json
```
## Error Handling
### Common Error Handling
1. **No Active Session**: Create temporary session directory
2. **MCP Tools Unavailable**: Fallback to traditional bash commands
3. **Permission Errors**: Prompt user to check file permissions
4. **Large Project Optimization**: Limit file count, prioritize high-relevance files
### Graceful Degradation Strategy
```bash
# Fallback when MCP unavailable
if ! command -v mcp__code-index__find_files >/dev/null 2>&1; then
# Use find command
find . -name "*{keyword}*" -type f
fi
# Use ripgrep instead of MCP search
rg "{keywords}" --type-add 'source:*.{ts,js,py,go}' -t source
```
## Performance Optimization
### Large Project Optimization Strategy
- **File Count Limit**: Maximum 50 files per type
- **Size Filtering**: Skip oversized files (>10MB)
- **Depth Limit**: Maximum search depth of 3 levels
- **Caching Strategy**: Cache project structure analysis results
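Those limits translate into a `find` pipeline along these lines (a sketch against a throwaway sample tree; the paths are illustrative):

```shell
# Sample tree: three matching files within depth 3, one below the depth limit
mkdir -p /tmp/proj-demo/a/b/c/d && cd /tmp/proj-demo
for i in 1 2 3; do echo x > "a/f$i.ts"; done
echo x > a/b/c/d/deep.ts   # depth 5: should be excluded

# Depth <= 3, skip files over 10MB, cap at 50 results
find . -maxdepth 3 -type f -name '*.ts' -size -10M | head -50 | sort
```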
### Parallel Processing
- Documentation collection and code search in parallel
- MCP tool calls and traditional commands in parallel
- Reduce I/O wait time
## Practical Execution Summary
### Quick Start CLI Commands
```bash
# Example usage for authentication task
session_id="WFS-user-auth"
task_description="Implement user authentication system"
# 1. Setup and context collection
mkdir -p ".workflow/${session_id}/.process"
~/.claude/scripts/get_modules_by_depth.sh
# 2. Gemini analysis and design
cd . && ~/.claude/scripts/gemini-wrapper --approval-mode yolo -p "
PURPOSE: Deep analysis of task requirements and current codebase state
TASK: Analyze task '${task_description}' and generate improvement suggestions
CONTEXT: @{CLAUDE.md,README.md,**/*auth*} Current session: ${session_id}
EXPECTED: Detailed analysis with improvement suggestions
RULES: Generate specific improvement recommendations and save results
" > ".workflow/${session_id}/.process/gemini-analysis.md"
cd . && ~/.claude/scripts/gemini-wrapper --approval-mode yolo -p "
PURPOSE: Create comprehensive design blueprint
TASK: Design architectural blueprint for '${task_description}'
CONTEXT: @{.workflow/${session_id}/.process/gemini-analysis.md,CLAUDE.md}
EXPECTED: Complete design blueprint with component diagrams
RULES: Create detailed design blueprint
" > ".workflow/${session_id}/.process/design-blueprint.md"
# 3. Codex implementation and quality
codex --full-auto exec "
PURPOSE: Generate implementation plan based on analysis
TASK: Create implementation roadmap for '${task_description}'
CONTEXT: @{.workflow/${session_id}/.process/gemini-analysis.md,.workflow/${session_id}/.process/design-blueprint.md}
EXPECTED: Step-by-step implementation plan with code examples
RULES: Generate practical implementation plan and save to .workflow/${session_id}/.process/implementation-plan.md
" -s danger-full-access
codex --full-auto exec "
PURPOSE: Assess code quality and optimization opportunities
TASK: Evaluate codebase quality related to '${task_description}'
CONTEXT: @{**/*auth*,.workflow/${session_id}/.process/design-blueprint.md}
EXPECTED: Quality assessment with optimization recommendations
RULES: Save to .workflow/${session_id}/.process/quality-assessment.md
" -s danger-full-access
# 4. Final synthesis
cd . && ~/.claude/scripts/gemini-wrapper --approval-mode yolo -p "
PURPOSE: Synthesize all analysis results into comprehensive document
TASK: Combine all analysis documents into unified comprehensive analysis
CONTEXT: @{.workflow/${session_id}/.process/gemini-analysis.md,.workflow/${session_id}/.process/design-blueprint.md,.workflow/${session_id}/.process/implementation-plan.md,.workflow/${session_id}/.process/quality-assessment.md}
EXPECTED: Comprehensive analysis with executive summary and recommendations
RULES: Structure: Executive Summary, Key Findings, Design Recommendations, Implementation Strategy, Quality Improvements, Next Steps
" > ".workflow/${session_id}/.process/comprehensive-analysis.md"
```
### Expected Output Structure
```
.workflow/${session_id}/.process/
├── gemini-analysis.md # Improvement suggestions
├── design-blueprint.md # Architectural design
├── implementation-plan.md # Implementation roadmap
├── quality-assessment.md # Code quality analysis
├── comprehensive-analysis.md # Final synthesis document
└── context-package.json # Metadata package
```
## Success Criteria
- Generate comprehensive analysis documents in WFS `.process` directory
- Produce actionable improvement suggestions and design blueprints
- Create practical implementation plans with Codex
- Synthesize all findings into unified comprehensive analysis
- Complete execution within reasonable timeframe (5-10 minutes)
## Related Commands
- `/analysis:run` - Consumes output of this command for further analysis
- `/workflow:plan` - Integrates with this command for context gathering
- `/workflow:status` - Can display analysis generation status


@@ -1,188 +0,0 @@
---
name: auto
description: Auto-select and execute appropriate template based on user input analysis
usage: /gemini:mode:auto "description of task or problem"
argument-hint: "description of what you want to analyze or plan"
examples:
- /gemini:mode:auto "authentication system keeps crashing during login"
- /gemini:mode:auto "design a real-time notification architecture"
- /gemini:mode:auto "database connection errors in production"
- /gemini:mode:auto "plan user dashboard with analytics features"
allowed-tools: Bash(ls:*), Bash(gemini:*)
model: sonnet
---
# Auto Template Selection (/gemini:mode:auto)
## Overview
Automatically analyzes user input to select the most appropriate template and execute Gemini CLI with optimal context.
**Directory Analysis Rule**: Detects directory-specific intent in the request and automatically navigates to the target directory when the analysis scope is limited to it.
**--cd Parameter Rule**: When `--cd` parameter is provided, always execute `cd "[path]" && gemini --all-files -p "prompt"` to ensure analysis occurs in the specified directory context.
**Process**: List Templates → Analyze Input → Select Template → Execute with Context
## Usage
### Auto-Detection Examples
```bash
# Bug-related keywords → selects bug-fix.md
/gemini:mode:auto "React component not rendering after state update"
# Planning keywords → selects plan.md
/gemini:mode:auto "design microservices architecture for user management"
# Error/crash keywords → selects bug-fix.md
/gemini:mode:auto "API timeout errors in production environment"
# Architecture/design keywords → selects plan.md
/gemini:mode:auto "implement real-time chat system architecture"
# With directory context
/gemini:mode:auto "authentication issues" --cd "src/auth"
```
## Template Selection Logic
### Dynamic Template Discovery
**Templates auto-discovered from**: `~/.claude/prompt-templates/`
Templates are dynamically read from the directory, including their metadata (name, description, keywords) from the YAML frontmatter.
### Template Metadata Parsing
Each template contains YAML frontmatter with:
```yaml
---
name: template-name
description: Template purpose description
category: template-category
keywords: [keyword1, keyword2, keyword3]
---
```
**Auto-selection based on:**
- **Template keywords**: Matches user input against template-defined keywords
- **Template name**: Direct name matching (e.g., "bug-fix" matches bug-related queries)
- **Template description**: Semantic matching against description text
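A parsing sketch for this frontmatter, using awk to isolate the block between the two `---` markers (the sample template is fabricated for illustration):

```shell
cat > /tmp/bug-fix.md <<'EOF'
---
name: bug-fix
description: Locate bugs and propose fixes
category: debugging
keywords: [bug, crash, error]
---
# Template body...
EOF

# Print only the frontmatter (lines between the first two --- markers)
awk '/^---$/{n++; next} n==1' /tmp/bug-fix.md
# Pull out the keywords line for scoring
keywords=$(awk '/^---$/{n++; next} n==1 && /^keywords:/' /tmp/bug-fix.md | sed 's/^keywords: *//')
echo "parsed keywords: $keywords"
```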
## Command Execution
### Step 1: Template Discovery
```bash
# Dynamically discover all templates and extract YAML frontmatter
cd ~/.claude/prompt-templates && echo "Discovering templates..."
for template_file in *.md; do
  echo "=== $template_file ==="
  head -6 "$template_file" 2>/dev/null || echo "Error reading $template_file"
  echo
done
```
### Step 2: Dynamic Template Analysis & Selection
```pseudo
FUNCTION select_template(user_input):
templates = list_directory("~/.claude/prompt-templates/")
template_metadata = {}
# Parse all templates for metadata
FOR each template_file in templates:
content = read_file(template_file)
yaml_front = extract_yaml_frontmatter(content)
template_metadata[template_file] = {
"name": yaml_front.name,
"description": yaml_front.description,
"keywords": yaml_front.keywords || [],
"category": yaml_front.category || "general"
}
input_lower = user_input.toLowerCase()
best_match = null
highest_score = 0
# Score each template against user input
FOR each template, metadata in template_metadata:
score = 0
# Keyword matching (highest weight)
FOR each keyword in metadata.keywords:
IF input_lower.contains(keyword.toLowerCase()):
score += 3
# Template name matching
IF input_lower.contains(metadata.name.toLowerCase()):
score += 2
# Description semantic matching
FOR each word in metadata.description.split():
IF input_lower.contains(word.toLowerCase()) AND word.length > 3:
score += 1
IF score > highest_score:
highest_score = score
best_match = template
# Default to first template if no matches
RETURN best_match || templates[0]
END FUNCTION
```
### Step 3: Execute with Dynamically Selected Template
```bash
# Basic execution with selected template
gemini --all-files -p "$(cat ~/.claude/prompt-templates/[selected_template])
User Input: [user_input]"
# With --cd parameter
cd "[specified_directory]" && gemini --all-files -p "$(cat ~/.claude/prompt-templates/[selected_template])
User Input: [user_input]"
```
**Template selection is completely dynamic** - any new templates added to the directory will be automatically discovered and available for selection based on their YAML frontmatter.
### Manual Template Override
```bash
# Force specific template
/gemini:mode:auto "user authentication" --template bug-fix.md
/gemini:mode:auto "fix login issues" --template plan.md
```
### Dynamic Template Listing
```bash
# List all dynamically discovered templates
/gemini:mode:auto --list-templates
# Output:
# Dynamically discovered templates in ~/.claude/prompt-templates/:
# - bug-fix.md (locates bugs and suggests fixes) [Keywords: planning, bug, fix proposal]
# - plan.md (template for software architecture planning and technical implementation analysis) [Keywords: planning, architecture, implementation plan, technical design, fix proposal]
# - [any-new-template].md (Auto-discovered description) [Keywords: auto-parsed]
```
**Complete template discovery** - new templates are automatically detected and their metadata parsed from YAML frontmatter.
## Auto-Selection Examples
### Dynamic Selection Examples
```bash
# Selection based on template keywords and metadata
"login system crashes on startup" → Matches template with keywords: [bug, fix proposal]
"design user dashboard with analytics" → Matches template with keywords: [planning, architecture, technical design]
"database timeout errors in production" → Matches template with keywords: [bug, fix proposal]
"implement real-time notification system" → Matches template with keywords: [planning, implementation plan, technical design]
# Any new templates added will be automatically matched
"[user input]" → Dynamically matches against all template keywords and descriptions
```
## Session Integration
Analysis results are saved to:
`.workflow/WFS-[topic]/.chat/auto-[template]-[timestamp].md`
**Session includes:**
- Original user input
- Template selection reasoning
- Template used
- Complete analysis results
This command streamlines template usage by automatically detecting user intent and selecting the optimal template for analysis.

@@ -1,140 +0,0 @@
---
name: plan-precise
description: Precise path planning analysis for complex projects
usage: /gemini:mode:plan-precise "planning topic"
examples:
- /gemini:mode:plan-precise "design authentication system"
- /gemini:mode:plan-precise "refactor database layer architecture"
---
### 🚀 Command Overview: `/gemini:mode:plan-precise`
Precise path-based planning analysis using user-specified directories instead of --all-files.
### 📝 Execution Template
```pseudo
# Precise path planning with user-specified scope
PLANNING_TOPIC = user_argument
PATHS_FILE = "./planning-paths.txt"
# Step 1: Check paths file exists
IF not file_exists(PATHS_FILE):
Write(PATHS_FILE, template_content)
echo "📝 Created planning-paths.txt in project root"
echo "Please edit file and add paths to analyze"
# USER_INPUT: User edits planning-paths.txt and presses Enter
wait_for_user_input()
ELSE:
echo "📁 Using existing planning-paths.txt"
echo "Current paths preview:"
Bash(grep -v '^#' "$PATHS_FILE" | grep -v '^$' | head -5)
# USER_INPUT: User confirms y/n
user_confirm = prompt("Continue with these paths? (y/n): ")
IF user_confirm != "y":
echo "Please edit planning-paths.txt and retry"
exit
# Step 2: Read and validate paths
paths_ref = Bash(.claude/scripts/read-paths.sh "$PATHS_FILE")
IF paths_ref is empty:
echo "❌ No valid paths found in planning-paths.txt"
echo "Please add at least one path and retry"
exit
echo "🎯 Analysis paths: $paths_ref"
echo "📋 Planning topic: $PLANNING_TOPIC"
# BASH_EXECUTION_STOPS → MODEL_ANALYSIS_BEGINS
```
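The file-bootstrap portion of the pseudocode above can be written in plain bash. The real template contents are not shown in this commit, so the heredoc below is illustrative:

```shell
PATHS_FILE="./planning-paths.txt"
if [ ! -f "$PATHS_FILE" ]; then
  # Seed a commented template the user can edit
  cat > "$PATHS_FILE" <<'EOF'
# Comments start with #
# Add one path or glob per line, e.g.:
# src/auth/**/*
EOF
  echo "📝 Created planning-paths.txt in project root"
else
  echo "📁 Using existing planning-paths.txt"
  echo "Current paths preview:"
  grep -v '^#' "$PATHS_FILE" | grep -v '^$' | head -5
fi
```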
### 🧠 Model Analysis Phase
After bash script prepares paths, model takes control to:
1. **Present Configuration**: Show user the detected paths and analysis scope
2. **Request Confirmation**: Wait for explicit user approval
3. **Execute Analysis**: Run gemini with precise path references
### 📋 Execution Flow
```pseudo
# Step 1: Present plan to user
PRESENT_PLAN:
📋 Precise Path Planning Configuration:
Topic: design authentication system
Paths: src/auth/**/* src/middleware/auth* tests/auth/**/* config/auth.json
Gemini Reference: $(.claude/scripts/read-paths.sh ./planning-paths.txt)
⚠️ Continue with analysis? (y/n)
# Step 2: MANDATORY user confirmation
IF user_confirms():
# Step 3: Execute gemini analysis
Bash(gemini -p "$(.claude/scripts/read-paths.sh ./planning-paths.txt) @{CLAUDE.md} $(cat ~/.claude/prompt-templates/plan.md)
Planning Topic: $PLANNING_TOPIC")
ELSE:
abort_execution()
echo "Edit planning-paths.txt and retry"
```
### ✨ Features
- **Root Level Config**: `./planning-paths.txt` in project root (no subdirectories)
- **Simple Workflow**: Check file → Present plan → Confirm → Execute
- **Path Focused**: Only analyzes user-specified paths, not entire project
- **No Complexity**: No validation, suggestions, or result saving - just the core function
- **Template Creation**: Auto-creates template file if missing
### 📚 Usage Examples
```bash
# Create analysis for authentication system
/gemini:mode:plan-precise "design authentication system"
# System creates planning-paths.txt (if needed)
# User edits: src/auth/**/* tests/auth/**/* config/auth.json
# System confirms paths and executes analysis
```
### 🔍 Complete Execution Example
```bash
# 1. Command execution
$ /gemini:mode:plan-precise "design authentication system"
# 2. System output
📋 Precise Path Planning Configuration:
Topic: design authentication system
Paths: src/auth/**/* src/middleware/auth* tests/auth/**/* config/auth.json
Gemini Reference: @{src/auth/**/*,src/middleware/auth*,tests/auth/**/*,config/auth.json}
⚠️ Continue with analysis? (y/n)
# 3. User confirms
$ y
# 4. Actual gemini command executed
$ gemini -p "$(.claude/scripts/read-paths.sh ./planning-paths.txt) @{CLAUDE.md} $(cat ~/.claude/prompt-templates/plan.md)
Planning Topic: design authentication system"
```
### 🔧 Path File Format
Simple text file in project root: `./planning-paths.txt`
```
# Comments start with #
src/auth/**/*
src/middleware/auth*
tests/auth/**/*
config/auth.json
docs/auth/*.md
```
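`.claude/scripts/read-paths.sh` itself is not included in this diff; a plausible minimal sketch that turns the file above into a gemini `@{...}` reference would be:

```shell
# Plausible sketch of .claude/scripts/read-paths.sh (the actual script is not
# shown in this commit): drop comments and blank lines, then join the
# remaining paths as @{p1,p2,...}
read_paths() {
  local paths
  paths=$(grep -v '^#' "$1" | grep -v '^[[:space:]]*$' | paste -sd ',' -)
  [ -n "$paths" ] && echo "@{$paths}"
}
```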

@@ -1,188 +0,0 @@
---
name: auto
description: Auto-select and execute appropriate template based on user input analysis
usage: /qwen:mode:auto "description of task or problem"
argument-hint: "description of what you want to analyze or plan"
examples:
- /qwen:mode:auto "authentication system keeps crashing during login"
- /qwen:mode:auto "design a real-time notification architecture"
- /qwen:mode:auto "database connection errors in production"
- /qwen:mode:auto "plan user dashboard with analytics features"
allowed-tools: Bash(ls:*), Bash(qwen:*)
model: sonnet
---
# Auto Template Selection (/qwen:mode:auto)
## Overview
Automatically analyzes user input to select the most appropriate template and execute qwen CLI with optimal context.
**Directory Analysis Rule**: Intelligently detects directory context intent and automatically navigates to the target directory when the analysis scope is directory-specific.
**--cd Parameter Rule**: When `--cd` parameter is provided, always execute `cd "[path]" && qwen --all-files -p "prompt"` to ensure analysis occurs in the specified directory context.
**Process**: List Templates → Analyze Input → Select Template → Execute with Context
## Usage
### Auto-Detection Examples
```bash
# Bug-related keywords → selects bug-fix.md
/qwen:mode:auto "React component not rendering after state update"
# Planning keywords → selects plan.md
/qwen:mode:auto "design microservices architecture for user management"
# Error/crash keywords → selects bug-fix.md
/qwen:mode:auto "API timeout errors in production environment"
# Architecture/design keywords → selects plan.md
/qwen:mode:auto "implement real-time chat system architecture"
# With directory context
/qwen:mode:auto "authentication issues" --cd "src/auth"
```
## Template Selection Logic
### Dynamic Template Discovery
**Templates auto-discovered from**: `~/.claude/prompt-templates/`
Templates are dynamically read from the directory, including their metadata (name, description, keywords) from the YAML frontmatter.
### Template Metadata Parsing
Each template contains YAML frontmatter with:
```yaml
---
name: template-name
description: Template purpose description
category: template-category
keywords: [keyword1, keyword2, keyword3]
---
```
**Auto-selection based on:**
- **Template keywords**: Matches user input against template-defined keywords
- **Template name**: Direct name matching (e.g., "bug-fix" matches bug-related queries)
- **Template description**: Semantic matching against description text
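Extracting those fields from the frontmatter can be done with a couple of `sed` calls (field names here are assumed to match the frontmatter block above):

```shell
# frontmatter_field <template.md> <field>
# Prints the value of a top-level frontmatter field, e.g. "name" or "category".
frontmatter_field() {
  # Restrict the search to the first ---...--- block, then strip the field label
  sed -n '/^---$/,/^---$/p' "$1" | sed -n "s/^$2: *//p"
}
```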
## Command Execution
### Step 1: Template Discovery
```bash
# Dynamically discover all templates and extract YAML frontmatter
cd "~/.claude/prompt-templates" && echo "Discovering templates..." && for template_file in *.md; do echo "=== $template_file ==="; head -6 "$template_file" 2>/dev/null || echo "Error reading $template_file"; echo; done
```
### Step 2: Dynamic Template Analysis & Selection
```pseudo
FUNCTION select_template(user_input):
templates = list_directory("~/.claude/prompt-templates/")
template_metadata = {}
# Parse all templates for metadata
FOR each template_file in templates:
content = read_file(template_file)
yaml_front = extract_yaml_frontmatter(content)
template_metadata[template_file] = {
"name": yaml_front.name,
"description": yaml_front.description,
"keywords": yaml_front.keywords || [],
"category": yaml_front.category || "general"
}
input_lower = user_input.toLowerCase()
best_match = null
highest_score = 0
# Score each template against user input
FOR each template, metadata in template_metadata:
score = 0
# Keyword matching (highest weight)
FOR each keyword in metadata.keywords:
IF input_lower.contains(keyword.toLowerCase()):
score += 3
# Template name matching
IF input_lower.contains(metadata.name.toLowerCase()):
score += 2
# Description semantic matching
FOR each word in metadata.description.split():
IF input_lower.contains(word.toLowerCase()) AND word.length > 3:
score += 1
IF score > highest_score:
highest_score = score
best_match = template
# Default to first template if no matches
RETURN best_match || templates[0]
END FUNCTION
```
### Step 3: Execute with Dynamically Selected Template
```bash
# Basic execution with selected template
qwen --all-files -p "$(cat ~/.claude/prompt-templates/[selected_template])
User Input: [user_input]"
# With --cd parameter
cd "[specified_directory]" && qwen --all-files -p "$(cat ~/.claude/prompt-templates/[selected_template])
User Input: [user_input]"
```
**Template selection is completely dynamic** - any new templates added to the directory will be automatically discovered and available for selection based on their YAML frontmatter.
### Manual Template Override
```bash
# Force specific template
/qwen:mode:auto "user authentication" --template bug-fix.md
/qwen:mode:auto "fix login issues" --template plan.md
```
### Dynamic Template Listing
```bash
# List all dynamically discovered templates
/qwen:mode:auto --list-templates
# Output:
# Dynamically discovered templates in ~/.claude/prompt-templates/:
# - bug-fix.md (Locates bugs and suggests fixes) [Keywords: planning, bug, fix proposal]
# - plan.md (Software architecture planning and technical implementation-plan analysis template) [Keywords: planning, architecture, implementation plan, technical design, fix proposal]
# - [any-new-template].md (Auto-discovered description) [Keywords: auto-parsed]
```
**Complete template discovery** - new templates are automatically detected and their metadata parsed from YAML frontmatter.
## Auto-Selection Examples
### Dynamic Selection Examples
```bash
# Selection based on template keywords and metadata
"login system crashes on startup" → Matches template with keywords: [bug, fix proposal]
"design user dashboard with analytics" → Matches template with keywords: [planning, architecture, technical design]
"database timeout errors in production" → Matches template with keywords: [bug, fix proposal]
"implement real-time notification system" → Matches template with keywords: [planning, implementation plan, technical design]
# Any new templates added will be automatically matched
"[user input]" → Dynamically matches against all template keywords and descriptions
```
## Session Integration
Analysis results are saved to:
`.workflow/WFS-[topic]/.chat/auto-[template]-[timestamp].md`
**Session includes:**
- Original user input
- Template selection reasoning
- Template used
- Complete analysis results
This command streamlines template usage by automatically detecting user intent and selecting the optimal template for analysis.

@@ -1,140 +0,0 @@
---
name: plan-precise
description: Precise path planning analysis for complex projects
usage: /qwen:mode:plan-precise "planning topic"
examples:
- /qwen:mode:plan-precise "design authentication system"
- /qwen:mode:plan-precise "refactor database layer architecture"
---
### 🚀 Command Overview: `/qwen:mode:plan-precise`
Precise path-based planning analysis using user-specified directories instead of --all-files.
### 📝 Execution Template
```pseudo
# Precise path planning with user-specified scope
PLANNING_TOPIC = user_argument
PATHS_FILE = "./planning-paths.txt"
# Step 1: Check paths file exists
IF not file_exists(PATHS_FILE):
Write(PATHS_FILE, template_content)
echo "📝 Created planning-paths.txt in project root"
echo "Please edit file and add paths to analyze"
# USER_INPUT: User edits planning-paths.txt and presses Enter
wait_for_user_input()
ELSE:
echo "📁 Using existing planning-paths.txt"
echo "Current paths preview:"
Bash(grep -v '^#' "$PATHS_FILE" | grep -v '^$' | head -5)
# USER_INPUT: User confirms y/n
user_confirm = prompt("Continue with these paths? (y/n): ")
IF user_confirm != "y":
echo "Please edit planning-paths.txt and retry"
exit
# Step 2: Read and validate paths
paths_ref = Bash(.claude/scripts/read-paths.sh "$PATHS_FILE")
IF paths_ref is empty:
echo "❌ No valid paths found in planning-paths.txt"
echo "Please add at least one path and retry"
exit
echo "🎯 Analysis paths: $paths_ref"
echo "📋 Planning topic: $PLANNING_TOPIC"
# BASH_EXECUTION_STOPS → MODEL_ANALYSIS_BEGINS
```
### 🧠 Model Analysis Phase
After bash script prepares paths, model takes control to:
1. **Present Configuration**: Show user the detected paths and analysis scope
2. **Request Confirmation**: Wait for explicit user approval
3. **Execute Analysis**: Run qwen with precise path references
### 📋 Execution Flow
```pseudo
# Step 1: Present plan to user
PRESENT_PLAN:
📋 Precise Path Planning Configuration:
Topic: design authentication system
Paths: src/auth/**/* src/middleware/auth* tests/auth/**/* config/auth.json
qwen Reference: $(.claude/scripts/read-paths.sh ./planning-paths.txt)
⚠️ Continue with analysis? (y/n)
# Step 2: MANDATORY user confirmation
IF user_confirms():
# Step 3: Execute qwen analysis
Bash(qwen -p "$(.claude/scripts/read-paths.sh ./planning-paths.txt) @{CLAUDE.md} $(cat ~/.claude/prompt-templates/plan.md)
Planning Topic: $PLANNING_TOPIC")
ELSE:
abort_execution()
echo "Edit planning-paths.txt and retry"
```
### ✨ Features
- **Root Level Config**: `./planning-paths.txt` in project root (no subdirectories)
- **Simple Workflow**: Check file → Present plan → Confirm → Execute
- **Path Focused**: Only analyzes user-specified paths, not entire project
- **No Complexity**: No validation, suggestions, or result saving - just the core function
- **Template Creation**: Auto-creates template file if missing
### 📚 Usage Examples
```bash
# Create analysis for authentication system
/qwen:mode:plan-precise "design authentication system"
# System creates planning-paths.txt (if needed)
# User edits: src/auth/**/* tests/auth/**/* config/auth.json
# System confirms paths and executes analysis
```
### 🔍 Complete Execution Example
```bash
# 1. Command execution
$ /qwen:mode:plan-precise "design authentication system"
# 2. System output
📋 Precise Path Planning Configuration:
Topic: design authentication system
Paths: src/auth/**/* src/middleware/auth* tests/auth/**/* config/auth.json
qwen Reference: @{src/auth/**/*,src/middleware/auth*,tests/auth/**/*,config/auth.json}
⚠️ Continue with analysis? (y/n)
# 3. User confirms
$ y
# 4. Actual qwen command executed
$ qwen -p "$(.claude/scripts/read-paths.sh ./planning-paths.txt) @{CLAUDE.md} $(cat ~/.claude/prompt-templates/plan.md)
Planning Topic: design authentication system"
```
### 🔧 Path File Format
Simple text file in project root: `./planning-paths.txt`
```
# Comments start with #
src/auth/**/*
src/middleware/auth*
tests/auth/**/*
config/auth.json
docs/auth/*.md
```

@@ -80,7 +80,7 @@ allowed-tools: TodoWrite(*), Read(*), Write(*), Bash(*), Glob(*)
```
.workflow/WFS-[topic]/.brainstorming/
├── topic-framework.md # ★ STRUCTURED FRAMEWORK DOCUMENT
└── session.json # Framework metadata and role assignments
└── workflow-session.json # Framework metadata and role assignments
```
**Topic Framework Template**:

@@ -24,14 +24,14 @@ allowed-tools: SlashCommand(*), Task(*), TodoWrite(*), Read(*), Write(*), Bash(*
- **Multi-role**: Complex topics automatically select 2-3 complementary roles
- **Default**: `product-manager` if no clear match
**Template Loading**: `bash($(cat ~/.claude/workflows/cli-templates/planning-roles/<role-name>.md))`
**Template Loading**: `bash($(cat "~/.claude/workflows/cli-templates/planning-roles/<role-name>.md"))`
**Template Source**: `.claude/workflows/cli-templates/planning-roles/`
**Available Roles**: business-analyst, data-architect, feature-planner, innovation-lead, product-manager, security-expert, system-architect, test-strategist, ui-designer, user-researcher
**Example**:
```bash
bash($(cat ~/.claude/workflows/cli-templates/planning-roles/system-architect.md))
bash($(cat ~/.claude/workflows/cli-templates/planning-roles/ui-designer.md))
bash($(cat "~/.claude/workflows/cli-templates/planning-roles/system-architect.md"))
bash($(cat "~/.claude/workflows/cli-templates/planning-roles/ui-designer.md"))
```
## Core Workflow
@@ -78,7 +78,7 @@ Auto command coordinates independent specialized commands:
### Simplified Processing Standards
**Core Principles**:
1. **Minimal preprocessing** - Only session.json and basic role selection
1. **Minimal preprocessing** - Only workflow-session.json and basic role selection
2. **Agent autonomy** - Agents handle their own context and validation
3. **Parallel execution** - Multiple agents can work simultaneously
4. **Post-processing synthesis** - Integration happens after agent completion
@@ -139,12 +139,12 @@ Task(subagent_type="conceptual-planning-agent",
2. **load_role_template**
- Action: Load {role-name} planning template
- Command: bash($(cat ~/.claude/workflows/cli-templates/planning-roles/{role}.md))
- Command: bash($(cat "~/.claude/workflows/cli-templates/planning-roles/{role}.md"))
- Output: role_template
3. **load_session_metadata**
- Action: Load session metadata and topic description
- Command: bash(cat .workflow/WFS-{topic}/.brainstorming/session.json 2>/dev/null || echo '{}')
- Command: bash(cat .workflow/WFS-{topic}/.brainstorming/workflow-session.json 2>/dev/null || echo '{}')
- Output: session_metadata
### Implementation Context
@@ -157,11 +157,11 @@ Task(subagent_type="conceptual-planning-agent",
### Session Context
**Workflow Directory**: .workflow/WFS-{topic}/.brainstorming/
**Output Directory**: .workflow/WFS-{topic}/.brainstorming/{role}/
**Session JSON**: .workflow/WFS-{topic}/.brainstorming/session.json
**Session JSON**: .workflow/WFS-{topic}/.brainstorming/workflow-session.json
### Dependencies & Context
**Topic**: {user-provided-topic}
**Role Template**: ~/.claude/workflows/cli-templates/planning-roles/{role}.md
**Role Template**: "~/.claude/workflows/cli-templates/planning-roles/{role}.md"
**User Requirements**: To be gathered through interactive questioning
## Completion Requirements
@@ -172,7 +172,7 @@ Task(subagent_type="conceptual-planning-agent",
5. Create single comprehensive deliverable in OUTPUT_LOCATION:
- analysis.md (structured analysis addressing all topic framework points with role-specific insights)
6. Include framework reference: @../topic-framework.md in analysis.md
7. Update session.json with completion status",
7. Update workflow-session.json with completion status",
description="Execute {role-name} brainstorming analysis")
```
@@ -216,7 +216,7 @@ TodoWrite({
activeForm: "Executing artifacts command for topic framework"
},
{
content: "Select roles and create session.json with framework references",
content: "Select roles and create workflow-session.json with framework references",
status: "pending",
activeForm: "Selecting roles and creating session metadata"
},
@@ -247,7 +247,7 @@ TodoWrite({
activeForm: "Initializing brainstorming session"
},
{
content: "Select roles for topic analysis and create session.json",
content: "Select roles for topic analysis and create workflow-session.json",
status: "in_progress", // Mark current task as in_progress
activeForm: "Selecting roles and creating session metadata"
},

@@ -18,6 +18,7 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag
## Core Rules
**Complete entire workflow autonomously without user interruption, using TodoWrite for comprehensive progress tracking.**
**Execute all discovered pending tasks sequentially until workflow completion or blocking dependency.**
**Auto-complete session when all tasks finished: Call `/workflow:session:complete` upon workflow completion.**
## Core Responsibilities
- **Session Discovery**: Identify and select active workflow sessions
@@ -27,6 +28,7 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag
- **Flow Control Execution**: Execute pre-analysis steps and context accumulation
- **Status Synchronization**: Update task JSON files and workflow state
- **Autonomous Completion**: Continue execution until all tasks complete or reach blocking state
- **Session Auto-Complete**: Call `/workflow:session:complete` when all workflow tasks finished
## Execution Philosophy
- **Discovery-first**: Auto-discover existing plans and tasks
@@ -41,9 +43,10 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag
### Flow Control Rules
1. **Auto-trigger**: When `task.flow_control.pre_analysis` array exists in task JSON, agents execute these steps
2. **Sequential Processing**: Agents execute steps in order, accumulating context
3. **Variable Passing**: Agents use `[variable_name]` syntax to reference step outputs
2. **Sequential Processing**: Agents execute steps in order, accumulating context including artifacts
3. **Variable Passing**: Agents use `[variable_name]` syntax to reference step outputs including artifact content
4. **Error Handling**: Agents follow step-specific error strategies (`fail`, `skip_optional`, `retry_once`)
5. **Artifacts Priority**: When artifacts exist in task.context.artifacts, load synthesis specifications first
### Execution Pattern
```
@@ -53,10 +56,11 @@ Step 3: implement_solution [pattern_analysis] [dependency_context] → implement
```
### Context Accumulation Process (Executed by Agents)
- **Load Artifacts**: Agents retrieve synthesis specifications and brainstorming outputs from `context.artifacts`
- **Load Dependencies**: Agents retrieve summaries from `context.depends_on` tasks
- **Execute Analysis**: Agents run CLI tools with accumulated context
- **Execute Analysis**: Agents run CLI tools with accumulated context including artifacts
- **Prepare Implementation**: Agents build comprehensive context for implementation
- **Continue Implementation**: Agents use all accumulated context for task execution
- **Continue Implementation**: Agents use all accumulated context including artifacts for task execution
## Execution Lifecycle
@@ -221,30 +225,44 @@ TodoWrite({
**Comprehensive context preparation** for autonomous agent execution:
#### Context Sources (Priority Order)
1. **Complete Task JSON**: Full task definition including all fields
2. **Flow Control Context**: Accumulated outputs from pre_analysis steps
3. **Dependency Summaries**: Previous task completion summaries
4. **Session Context**: Workflow paths and session metadata
5. **Inherited Context**: Parent task context and shared variables
1. **Complete Task JSON**: Full task definition including all fields and artifacts
2. **Artifacts Context**: Brainstorming outputs and synthesis specifications from task.context.artifacts
3. **Flow Control Context**: Accumulated outputs from pre_analysis steps (including artifact loading)
4. **Dependency Summaries**: Previous task completion summaries
5. **Session Context**: Workflow paths and session metadata
6. **Inherited Context**: Parent task context and shared variables
#### Context Assembly Process
```
1. Load Task JSON → Base context
2. Execute Flow Control → Accumulated context
3. Load Dependencies → Dependency context
4. Prepare Session Paths → Session context
5. Combine All → Complete agent context
1. Load Task JSON → Base context (including artifacts array)
2. Load Artifacts → Synthesis specifications and brainstorming outputs
3. Execute Flow Control → Accumulated context (with artifact loading steps)
4. Load Dependencies → Dependency context
5. Prepare Session Paths → Session context
6. Combine All → Complete agent context with artifact integration
```
#### Agent Context Package Structure
```json
{
"task": { /* Complete task JSON */ },
"task": { /* Complete task JSON with artifacts array */ },
"artifacts": {
"synthesis_specification": { "path": ".workflow/WFS-session/.brainstorming/synthesis-specification.md", "priority": "highest" },
"topic_framework": { "path": ".workflow/WFS-session/.brainstorming/topic-framework.md", "priority": "medium" },
"role_analyses": [ /* Individual role analysis files */ ],
"available_artifacts": [ /* All detected brainstorming artifacts */ ]
},
"flow_context": {
"step_outputs": { "pattern_analysis": "...", "dependency_context": "..." }
"step_outputs": {
"synthesis_specification": "...",
"individual_artifacts": "...",
"pattern_analysis": "...",
"dependency_context": "..."
}
},
"session": {
"workflow_dir": ".workflow/WFS-session/",
"brainstorming_dir": ".workflow/WFS-session/.brainstorming/",
"todo_list_path": ".workflow/WFS-session/TODO_LIST.md",
"summaries_dir": ".workflow/WFS-session/.summaries/",
"task_json_path": ".workflow/WFS-session/.task/IMPL-1.1.json"
@@ -255,10 +273,11 @@ TodoWrite({
```
#### Context Validation Rules
- **Task JSON Complete**: All 5 fields present and valid
- **Flow Control Ready**: All pre_analysis steps completed if present
- **Task JSON Complete**: All 5 fields present and valid, including artifacts array in context
- **Artifacts Available**: Synthesis specifications and brainstorming outputs accessible
- **Flow Control Ready**: All pre_analysis steps completed including artifact loading steps
- **Dependencies Loaded**: All depends_on summaries available
- **Session Paths Valid**: All workflow paths exist and accessible
- **Session Paths Valid**: All workflow paths exist and accessible, including .brainstorming directory
- **Agent Assignment**: Valid agent type specified in meta.agent
### 4. Agent Execution Pattern
@@ -287,11 +306,22 @@ Task(subagent_type="{meta.agent}",
## STEP 3: Flow Control Execution (if flow_control.pre_analysis exists)
**AGENT RESPONSIBILITY**: Execute pre_analysis steps sequentially from loaded JSON:
**PRIORITY: Artifact Loading Steps First**
1. **Load Synthesis Specification** (if present): Priority artifact loading for consolidated design
2. **Load Individual Artifacts** (fallback): Load role-specific brainstorming outputs if synthesis unavailable
3. **Execute Remaining Steps**: Continue with other pre_analysis steps
For each step in flow_control.pre_analysis array:
1. Execute step.command with variable substitution
1. Execute step.command/commands with variable substitution (support both single command and commands array)
2. Store output to step.output_to variable
3. Handle errors per step.on_error strategy
4. Pass accumulated variables to next step
3. Handle errors per step.on_error strategy (skip_optional, fail, retry_once)
4. Pass accumulated variables to next step including artifact context
**Special Artifact Loading Commands**:
- Use `bash(ls path 2>/dev/null || echo 'file not found')` for artifact existence checks
- Use `Read(path)` for loading artifact content
- Use `find` commands for discovering multiple artifact files
- Reference artifacts in subsequent steps using output variables: [synthesis_specification], [individual_artifacts]
## STEP 4: Implementation Context (From JSON context field)
**Requirements**: Use context.requirements array from JSON
@@ -299,8 +329,9 @@ Task(subagent_type="{meta.agent}",
**Acceptance Criteria**: Use context.acceptance array from JSON
**Dependencies**: Use context.depends_on array from JSON
**Parent Context**: Use context.inherited object from JSON
**Artifacts**: Use context.artifacts array from JSON (synthesis specifications, brainstorming outputs)
**Target Files**: Use flow_control.target_files array from JSON
**Implementation Approach**: Use flow_control.implementation_approach object from JSON
**Implementation Approach**: Use flow_control.implementation_approach object from JSON (with artifact integration)
## STEP 5: Session Context (Provided by workflow:execute)
**Workflow Directory**: {session.workflow_dir}
@@ -361,10 +392,36 @@ Task(subagent_type="{meta.agent}",
"focus_paths": ["src/path1", "src/path2"],
"acceptance": ["criteria1", "criteria2"],
"depends_on": ["IMPL-1.1"],
"inherited": { "from": "parent", "context": ["info"] }
"inherited": { "from": "parent", "context": ["info"] },
"artifacts": [
{
"type": "synthesis_specification",
"source": "brainstorm_synthesis",
"path": ".workflow/WFS-[session]/.brainstorming/synthesis-specification.md",
"priority": "highest",
"contains": "complete_integrated_specification"
},
{
"type": "individual_role_analysis",
"source": "brainstorm_roles",
"path": ".workflow/WFS-[session]/.brainstorming/[role]/analysis.md",
"priority": "low",
"contains": "role_specific_analysis_fallback"
}
]
},
"flow_control": {
"pre_analysis": [
{
"step": "load_synthesis_specification",
"action": "Load consolidated synthesis specification from brainstorming",
"commands": [
"bash(ls .workflow/WFS-[session]/.brainstorming/synthesis-specification.md 2>/dev/null || echo 'synthesis specification not found')",
"Read(.workflow/WFS-[session]/.brainstorming/synthesis-specification.md)"
],
"output_to": "synthesis_specification",
"on_error": "skip_optional"
},
{
"step": "step_name",
"command": "bash_command",
@@ -372,7 +429,10 @@ Task(subagent_type="{meta.agent}",
"on_error": "skip_optional|fail|retry_once"
}
],
"implementation_approach": { "task_description": "...", "modification_points": ["..."] },
"implementation_approach": {
"task_description": "Implement following consolidated synthesis specification...",
"modification_points": ["Apply synthesis specification requirements..."]
},
"target_files": ["file:function:lines"]
}
}
@@ -505,5 +565,5 @@ fi
### Integration
- **Planning**: `/workflow:plan``/workflow:execute``/workflow:review`
- **Recovery**: `/workflow:session:tatus --validate``/workflow:execute`
- **Recovery**: `/workflow:status --validate``/workflow:execute`

@@ -65,7 +65,7 @@ Creates implementation plans by orchestrating intelligent context gathering and
### Session ID Transmission Guidelines ⚠️ CRITICAL
- **Format**: `WFS-[topic-slug]` from active session markers
- **Usage**: `/context:gather --session WFS-[id]` and `/analysis:run --session WFS-[id]`
- **Usage**: `/workflow:tools:context-gather --session WFS-[id]` and `/workflow:tools:plan-enchanced --session WFS-[id]`
- **Rule**: ALL modular commands MUST receive current session ID for context continuity
### Brainstorming Artifacts Integration ⚠️ NEW FEATURE
@@ -84,13 +84,13 @@ Creates implementation plans by orchestrating intelligent context gathering and
4. **Context Preparation**: Load session state and prepare for planning
### Phase 2: Context Gathering
1. **Context Collection**: Execute `/context:gather` with task description and session ID
1. **Context Collection**: Execute `/workflow:tools:context-gather` with task description and session ID
2. **Asset Discovery**: Gather relevant documentation, code, and configuration files
3. **Context Packaging**: Generate standardized context-package.json
4. **Validation**: Ensure context package contains sufficient information
### Phase 3: Intelligent Analysis
1. **Analysis Execution**: Run `/analysis:run` with context package and session ID
1. **Analysis Execution**: Run `/workflow:tools:plan-enchanced` with context package and session ID
2. **Tool Selection**: Automatically select optimal analysis tools (Gemini/Qwen/Codex)
3. **Result Generation**: Produce structured ANALYSIS_RESULTS.md
4. **Validation**: Verify analysis completeness and task recommendations
@@ -112,6 +112,9 @@ Creates implementation plans by orchestrating intelligent context gathering and
4. **Continuous Tracking**: Maintain TodoWrite throughout entire planning workflow
### TodoWrite Tool Usage
**Core Rule**: Monitor slash command completion before proceeding to next step
```javascript
// Initialize planning workflow tracking
TodoWrite({

View File

@@ -11,309 +11,135 @@ examples:
# Workflow Test Generation Command
## Overview
Automatically generates comprehensive test workflows based on completed implementation tasks. **Creates dedicated test session with full test coverage planning**, including unit tests, integration tests, and validation workflows that mirror the implementation structure.
Analyzes completed implementation sessions, generates comprehensive test requirements, then calls `/workflow:plan` to create the test workflow.
## Core Rules
**Analyze completed implementation workflows to generate comprehensive test coverage workflows.**
**Create dedicated test session with systematic test task decomposition following implementation patterns.**
## Core Responsibilities
- **Implementation Analysis**: Analyze completed tasks and their deliverables
- **Test Coverage Planning**: Generate comprehensive test strategies for all implementations
- **Test Workflow Creation**: Create structured test session following workflow architecture
- **Task Decomposition**: Break down test requirements into executable test tasks
- **Dependency Mapping**: Establish test dependencies based on implementation relationships
- **Agent Assignment**: Assign appropriate test agents for different test types
## Execution Philosophy
- **Coverage-driven**: Ensure all implemented features have corresponding tests
- **Implementation-aware**: Tests reflect actual implementation patterns and dependencies
- **Systematic approach**: Follow established workflow patterns for test planning
- **Agent-optimized**: Assign specialized agents for different test types
- **Continuous validation**: Include ongoing test execution and maintenance tasks
## Test Generation Lifecycle
### Phase 1: Implementation Discovery
1. **Session Analysis**: Identify active or recently completed implementation session
2. **Task Analysis**: Parse completed IMPL-* tasks and their deliverables
3. **Code Analysis**: Examine implemented files and functionality
4. **Pattern Recognition**: Identify testing requirements from implementation patterns
### Phase 2: Test Strategy Planning
1. **Coverage Mapping**: Map implementation components to test requirements
2. **Test Type Classification**: Categorize tests (unit, integration, e2e, performance)
3. **Dependency Analysis**: Establish test execution dependencies
4. **Tool Selection**: Choose appropriate testing frameworks and tools
### Phase 3: Test Workflow Creation
1. **Session Creation**: Create dedicated test session `WFS-test-[base-session]`
2. **Plan Generation**: Create TEST_PLAN.md with comprehensive test strategy
3. **Task Decomposition**: Generate TEST-* task definitions following workflow patterns
4. **Agent Assignment**: Assign specialized test agents for execution
### Phase 4: Test Session Setup
1. **Structure Creation**: Establish test workflow directory structure
2. **Context Preparation**: Link test tasks to implementation context
3. **Flow Control Setup**: Configure test execution flow and dependencies
4. **Documentation Generation**: Create test documentation and tracking files
## Test Discovery & Analysis Process
### Implementation Analysis
```
├── Load completed implementation session
├── Analyze IMPL_PLAN.md and completed tasks
├── Scan .summaries/ for implementation deliverables
├── Examine target_files from task definitions
├── Identify implemented features and components
├── Map code coverage requirements
└── Generate test coverage matrix
```
### Test Pattern Recognition
```
Implementation Pattern → Test Pattern
├── API endpoints → API testing + contract testing
├── Database models → Data validation + migration testing
├── UI components → Component testing + user workflow testing
├── Business logic → Unit testing + integration testing
├── Authentication → Security testing + access control testing
├── Configuration → Environment testing + deployment testing
└── Performance critical → Load testing + performance testing
```
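As a hedged illustration of the mapping above, the lookup can be expressed as a single jq table query; the category keys below are illustrative labels, not part of the shipped command:

```shell
# Hypothetical sketch: resolve an implementation category to its test pattern
# via a jq lookup table. Keys are illustrative labels mirroring the table above.
pattern=$(jq -rn --arg kind "api_endpoint" '{
  api_endpoint:   "API testing + contract testing",
  database_model: "Data validation + migration testing",
  ui_component:   "Component testing + user workflow testing",
  business_logic: "Unit testing + integration testing"
} | .[$kind]')
echo "$pattern"
```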
## Test Workflow Structure
### Generated Test Session Structure
```
.workflow/WFS-test-[base-session]/
├── TEST_PLAN.md # Comprehensive test planning document
├── TODO_LIST.md # Test execution progress tracking
├── .process/
│ ├── TEST_ANALYSIS.md # Test coverage analysis results
│ └── COVERAGE_MATRIX.md # Implementation-to-test mapping
├── .task/
│ ├── TEST-001.json # Unit test tasks
│ ├── TEST-002.json # Integration test tasks
│ ├── TEST-003.json # E2E test tasks
│ └── TEST-004.json # Performance test tasks
├── .summaries/ # Test execution summaries
└── .context/
├── impl-context.md # Implementation context reference
└── test-fixtures.md # Test data and fixture planning
```
## Test Task Types & Agent Assignment
### Task Categories
1. **Unit Tests** (`TEST-U-*`)
- **Agent**: `code-review-test-agent`
- **Scope**: Individual function/method testing
- **Dependencies**: Implementation files
2. **Integration Tests** (`TEST-I-*`)
- **Agent**: `code-review-test-agent`
- **Scope**: Component interaction testing
- **Dependencies**: Unit tests completion
3. **End-to-End Tests** (`TEST-E-*`)
- **Agent**: `general-purpose`
- **Scope**: User workflow and system testing
- **Dependencies**: Integration tests completion
4. **Performance Tests** (`TEST-P-*`)
- **Agent**: `code-developer`
- **Scope**: Load, stress, and performance validation
- **Dependencies**: E2E tests completion
5. **Security Tests** (`TEST-S-*`)
- **Agent**: `code-review-test-agent`
- **Scope**: Security validation and vulnerability testing
- **Dependencies**: Implementation completion
6. **Documentation Tests** (`TEST-D-*`)
- **Agent**: `doc-generator`
- **Scope**: Documentation validation and example testing
- **Dependencies**: Feature tests completion
## Test Task JSON Schema
Each test task follows the 5-field workflow architecture with test-specific extensions:
### Basic Test Task Structure
```json
{
"id": "TEST-U-001",
"title": "Unit tests for authentication service",
"status": "pending",
"meta": {
"type": "unit-test",
"agent": "code-review-test-agent",
"test_framework": "jest",
"coverage_target": "90%",
"impl_reference": "IMPL-001"
},
"context": {
"requirements": "Test all authentication service functions with edge cases",
"focus_paths": ["src/auth/", "tests/unit/auth/"],
"acceptance": [
"All auth service functions tested",
"Edge cases covered",
"90% code coverage achieved",
"Tests pass in CI/CD pipeline"
],
"depends_on": [],
"impl_context": "IMPL-001-summary.md",
"test_data": "auth-test-fixtures.json"
},
"flow_control": {
"pre_analysis": [
{
"step": "load_impl_context",
"action": "Load implementation context and deliverables",
"command": "bash(cat .workflow/WFS-[base-session]/.summaries/IMPL-001-summary.md)",
"output_to": "impl_context"
},
{
"step": "analyze_test_coverage",
"action": "Analyze existing test coverage and gaps",
"command": "bash(find src/auth/ -name '*.js' -o -name '*.ts' | head -20)",
"output_to": "coverage_analysis"
}
],
"implementation_approach": "test-driven",
"target_files": [
"tests/unit/auth/auth-service.test.js",
"tests/unit/auth/auth-utils.test.js",
"tests/fixtures/auth-test-data.json"
]
}
}
```
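A quick sanity check (a sketch, assuming jq is available) can confirm that a generated task file carries the required 5-field structure before execution; the fixture path and content below are illustrative:

```shell
# Hypothetical sketch: validate the required fields of a generated test task.
# The fixture mirrors the schema above; the /tmp path is illustrative.
cat > /tmp/TEST-U-001.json << 'EOF'
{
  "id": "TEST-U-001",
  "title": "Unit tests for authentication service",
  "status": "pending",
  "meta": { "type": "unit-test", "agent": "code-review-test-agent" },
  "context": { "acceptance": ["All auth service functions tested"], "depends_on": [] }
}
EOF
jq -e '.id and .status and .meta.agent and (.context.acceptance | length > 0)' \
  /tmp/TEST-U-001.json > /dev/null && echo "task schema OK"
```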
## Test Context Management
### Implementation Context Integration
Test tasks automatically inherit context from corresponding implementation tasks:
```json
"context": {
"impl_reference": "IMPL-001",
"impl_summary": ".workflow/WFS-[base-session]/.summaries/IMPL-001-summary.md",
"impl_files": ["src/auth/service.js", "src/auth/middleware.js"],
"test_requirements": "derived from implementation acceptance criteria",
"coverage_requirements": "90% line coverage, 80% branch coverage"
}
```
### Flow Control for Test Execution
```json
"flow_control": {
"pre_analysis": [
{
"step": "load_impl_deliverables",
"action": "Load implementation files and analyze test requirements",
"command": "~/.claude/scripts/gemini-wrapper -p \"PURPOSE: Analyze implementation for test requirements TASK: Review [impl_files] and identify test cases CONTEXT: @{[impl_files]} EXPECTED: Comprehensive test case list RULES: Focus on edge cases and integration points\""
},
{
"step": "setup_test_environment",
"action": "Prepare test environment and fixtures",
"command": "codex --full-auto exec \"Setup test environment for [test_framework] with fixtures for [feature_name]\" -s danger-full-access"
}
]
}
```
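Before running a task, the `pre_analysis` steps can be listed for review with one jq command; a minimal sketch with an illustrative fixture:

```shell
# Hypothetical sketch: print each pre_analysis step and its command so the
# execution plan can be reviewed first. Fixture content is illustrative.
cat > /tmp/flow-control.json << 'EOF'
{
  "flow_control": {
    "pre_analysis": [
      { "step": "load_impl_deliverables", "command": "cat IMPL-001-summary.md" },
      { "step": "setup_test_environment", "command": "npm test -- --listTests" }
    ]
  }
}
EOF
jq -r '.flow_control.pre_analysis[] | "\(.step): \(.command)"' /tmp/flow-control.json
```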
## Session Management & Integration
### Test Session Creation Process
1. **Base Session Discovery**: Identify implementation session to test
2. **Test Session Creation**: Create `WFS-test-[base-session]` directory structure
3. **Context Linking**: Establish references to implementation context
4. **Active Marker**: Create `.active-test-[base-session]` marker for session management
### Integration with Execute Command
Test workflows integrate seamlessly with existing execute infrastructure:
- Use same TodoWrite progress tracking
- Follow same agent orchestration patterns
- Support same flow control mechanisms
- Maintain same session isolation and management
## Usage Examples
### Generate Tests for Completed Implementation
## Usage
```bash
# After completing an implementation workflow
/workflow:execute # Complete implementation tasks
# Generate comprehensive test workflow
/workflow:test-gen # Auto-detects active session
# Execute test workflow
/workflow:execute # Runs test tasks
/workflow:test-gen # Auto-detect active session
/workflow:test-gen WFS-session-id # Analyze specific session
```
### Generate Tests for Specific Session
## Dynamic Session ID Resolution
The `${SESSION_ID}` variable is dynamically resolved based on:
1. **Command argument**: If session-id provided as argument, use it directly
2. **Auto-detection**: If no argument, detect from active session markers
3. **Format**: Always in format `WFS-session-name`
```bash
# Generate tests for specific implementation session
/workflow:test-gen WFS-user-auth-system
# Check test workflow status
/workflow:status --session=WFS-test-user-auth-system
# Execute specific test category
/task:execute TEST-U-001 # Run unit tests
# Example resolution logic:
# If argument provided: SESSION_ID = "WFS-user-auth"
# If no argument: SESSION_ID = $(find .workflow/ -name '.active-*' | head -1 | sed 's/.*active-//')
```
### Multi-Phase Test Generation
## Implementation Flow
### Step 1: Identify Target Session
```bash
# Generate and execute tests in phases
/workflow:test-gen WFS-api-implementation
/task:execute TEST-U-* # Unit tests first
/task:execute TEST-I-* # Integration tests
/task:execute TEST-E-* # E2E tests last
# Auto-detect active session (if no session-id provided)
find .workflow/ -name '.active-*' | head -1 | sed 's/.*active-//'
# Use provided session-id or detected session-id
# SESSION_ID = provided argument OR detected active session
```
## Error Handling & Recovery
### Step 2: Get Session Start Time
```bash
cat .workflow/${SESSION_ID}/workflow-session.json | jq -r .created_at
```
### Implementation Analysis Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| No completed implementations | No IMPL-* tasks found | Complete implementation tasks first |
| Missing implementation context | Corrupted summaries | Regenerate summaries from task results |
| Invalid implementation files | File references broken | Update file paths and re-analyze |
### Step 3: Git Change Analysis (using session start time)
```bash
git log --since="$(cat .workflow/${SESSION_ID}/workflow-session.json | jq -r .created_at)" --name-only --pretty=format: | sort -u | grep -v '^$'
```
### Test Generation Errors
| Error | Cause | Recovery Strategy |
|-------|-------|------------------|
| Test framework not detected | No testing setup found | Prompt for test framework selection |
| Insufficient implementation context | Missing implementation details | Request additional implementation documentation |
| Test session collision | Test session already exists | Merge or create versioned test session |
### Step 4: Filter Code Files
```bash
git log --since="$(cat .workflow/${SESSION_ID}/workflow-session.json | jq -r .created_at)" --name-only --pretty=format: | sort -u | grep -E '\.(js|ts|jsx|tsx|py|java|go|rs)$'
```
## Key Benefits
### Step 5: Load Session Context
```bash
cat .workflow/${SESSION_ID}/.summaries/IMPL-*-summary.md 2>/dev/null
```
### Comprehensive Coverage
- **Implementation-driven**: Tests generated based on actual implementation patterns
- **Multi-layered**: Unit, integration, E2E, and specialized testing
- **Dependency-aware**: Test execution follows logical dependency chains
- **Agent-optimized**: Specialized agents for different test types
### Step 6: Extract Focus Paths
```bash
find .workflow/${SESSION_ID}/.task/ -name '*.json' -exec jq -r '.context.focus_paths[]?' {} \;
```
### Workflow Integration
- **Seamless execution**: Uses existing workflow infrastructure
- **Progress tracking**: Full TodoWrite integration for test progress
- **Context preservation**: Maintains links to implementation context
- **Session management**: Independent test sessions with proper isolation
### Step 7: Gemini Analysis and Planning Document Generation
```bash
cd project-root && ~/.claude/scripts/gemini-wrapper -p "
PURPOSE: Analyze implementation and generate comprehensive test planning document
TASK: Review changed files and implementation context to create detailed test planning document
CONTEXT: Changed files: [changed_files], Implementation summaries: [impl_summaries], Focus paths: [focus_paths]
EXPECTED: Complete test planning document including:
- Test strategy analysis
- Critical test scenarios identification
- Edge cases and error conditions
- Test priority matrix
- Resource requirements
- Implementation approach recommendations
- Specific test cases with acceptance criteria
RULES: Generate structured markdown document suitable for workflow planning. Focus on actionable test requirements based on actual implementation changes.
" > .workflow/${SESSION_ID}/.process/GEMINI_TEST_PLAN.md
```
### Maintenance & Evolution
- **Updateable**: Test workflows can evolve with implementation changes
- **Traceable**: Clear mapping from implementation to test requirements
- **Extensible**: Support for new test types and frameworks
- **Documentable**: Comprehensive test documentation and coverage reports
### Step 8: Generate Combined Test Requirements Document
```bash
mkdir -p .workflow/${SESSION_ID}/.process
```
## Integration Points
- **Planning**: Integrates with `/workflow:plan` for test planning
- **Execution**: Uses `/workflow:execute` for test task execution
- **Status**: Works with `/workflow:status` for test progress tracking
- **Documentation**: Coordinates with `/workflow:docs` for test documentation
- **Review**: Supports `/workflow:review` for test validation and coverage analysis
```bash
cat > .workflow/${SESSION_ID}/.process/TEST_REQUIREMENTS.md << EOF
# Test Requirements Summary for ${SESSION_ID}
## Analysis Data Sources
- Git change analysis results
- Implementation summaries and context
- Gemini-generated test planning document
## Reference Documents
- Detailed test plan: GEMINI_TEST_PLAN.md
- Implementation context: IMPL-*-summary.md files
## Integration Note
This document combines analysis data with Gemini-generated planning document for comprehensive test workflow generation.
EOF
```
### Step 9: Call Workflow Plan with Gemini Planning Document
```bash
/workflow:plan .workflow/${SESSION_ID}/.process/GEMINI_TEST_PLAN.md
```
## Simple Bash Commands
### Basic Operations
- **Find active session**: `find .workflow/ -name '.active-*'`
- **Get git changes**: `git log --since='date' --name-only`
- **Filter code files**: `grep -E '\.(js|ts|py)$'`
- **Load summaries**: `cat .workflow/WFS-*/.summaries/*.md`
- **Extract JSON data**: `jq -r '.context.focus_paths[]'`
- **Create directory**: `mkdir -p .workflow/session/.process`
- **Write file**: `cat > file << 'EOF'`
### Gemini CLI Integration
- **Planning command**: `~/.claude/scripts/gemini-wrapper -p "prompt" > GEMINI_TEST_PLAN.md`
- **Context loading**: Include changed files and implementation context
- **Document generation**: Creates comprehensive test planning document
- **Direct handoff**: Pass Gemini planning document to workflow:plan
## No Complex Logic
- No variables or functions
- No conditional statements
- No loops or complex pipes
- Direct bash commands only
- Gemini CLI for intelligent analysis
## Related Commands
- `/workflow:plan` - Called to generate test workflow
- `/workflow:execute` - Executes generated test tasks
- `/workflow:status` - Shows test workflow progress

View File

@@ -0,0 +1,421 @@
---
name: concept-enhanced
description: Enhanced intelligent analysis with parallel CLI execution and design blueprint generation
usage: /workflow:tools:concept-enhanced --session <session_id> --context <context_package_path>
argument-hint: "--session WFS-session-id --context path/to/context-package.json"
examples:
- /workflow:tools:concept-enhanced --session WFS-auth --context .workflow/WFS-auth/.process/context-package.json
- /workflow:tools:concept-enhanced --session WFS-payment --context .workflow/WFS-payment/.process/context-package.json
---
# Enhanced Planning Command (/workflow:tools:concept-enhanced)
## Overview
Advanced intelligent planning engine with parallel CLI execution that processes standardized context packages, generates enhanced suggestions and design blueprints, and produces comprehensive analysis results with implementation strategies.
## Core Philosophy
- **Context-Driven**: Precise analysis based on comprehensive context
- **Intelligent Tool Selection**: Choose optimal analysis tools based on task characteristics
- **Parallel Execution**: Execute multiple CLI tools simultaneously for efficiency
- **Enhanced Suggestions**: Generate actionable recommendations and design blueprints
- **Write-Enabled**: Tools have full write permissions for implementation suggestions
- **Structured Output**: Generate standardized analysis reports with implementation roadmaps
## Core Responsibilities
- **Context Package Parsing**: Read and validate context-package.json
- **Parallel CLI Orchestration**: Execute multiple analysis tools simultaneously with write permissions
- **Enhanced Suggestions Generation**: Create actionable recommendations and design blueprints
- **Design Blueprint Creation**: Generate comprehensive technical implementation designs
- **Perspective Synthesis**: Collect and organize different tool viewpoints
- **Consensus Analysis**: Identify agreements and conflicts between tools
- **Summary Report Generation**: Output comprehensive ANALYSIS_RESULTS.md with implementation strategies
## Analysis Strategy Selection
### Tool Selection by Task Complexity
**Simple Tasks (≤3 modules)**:
- **Primary Tool**: Gemini (rapid understanding and pattern recognition)
- **Support Tool**: Code-index (structural analysis)
- **Execution Mode**: Single-round analysis, focus on existing patterns
**Medium Tasks (4-6 modules)**:
- **Primary Tool**: Gemini (comprehensive single-round analysis and architecture design)
- **Support Tools**: Code-index + Exa (external best practices)
- **Execution Mode**: Single comprehensive analysis covering understanding + architecture design
**Complex Tasks (>6 modules)**:
- **Primary Tools**: Gemini (single comprehensive analysis) + Codex (implementation validation)
- **Analysis Strategy**: Gemini handles understanding + architecture in one round, Codex validates implementation
- **Execution Mode**: Parallel execution - Gemini comprehensive analysis + Codex validation
### Tool Preferences by Tech Stack
```json
{
"frontend": {
"primary": "gemini",
"secondary": "codex",
"focus": ["component_design", "state_management", "ui_patterns"]
},
"backend": {
"primary": "codex",
"secondary": "gemini",
"focus": ["api_design", "data_flow", "security", "performance"]
},
"fullstack": {
"primary": "gemini",
"secondary": "codex",
"focus": ["system_architecture", "integration", "data_consistency"]
}
}
```
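A minimal sketch of these selection rules, assuming the module count and tech stack are already known (the thresholds and tool names mirror the tables above; the helper function itself is illustrative, not part of the shipped command):

```shell
# Hypothetical sketch of the tool selection strategy above.
# Thresholds: <=3 simple, 4-6 medium, >6 complex (backend prefers codex).
select_tools() {
  local modules=$1 stack=$2
  if [ "$modules" -le 3 ]; then
    echo "primary=gemini support=code-index mode=single-round"
  elif [ "$modules" -le 6 ]; then
    echo "primary=gemini support=code-index,exa mode=single-comprehensive"
  else
    case "$stack" in
      backend) echo "primary=codex secondary=gemini mode=parallel-validation" ;;
      *)       echo "primary=gemini secondary=codex mode=parallel-validation" ;;
    esac
  fi
}
select_tools 8 backend
```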
## Execution Lifecycle
### Phase 1: Validation & Preparation
1. **Session Validation**
- Verify session directory exists: `.workflow/{session_id}/`
- Load session metadata from `workflow-session.json`
- Validate session state and task context
2. **Context Package Validation**
- Verify context package exists at specified path
- Validate JSON format and structure
- Assess context package size and complexity
3. **Task Analysis & Classification**
- Parse task description and extract keywords
- Identify technical domain and complexity level
- Determine required analysis depth and scope
- Load existing session context and task summaries
4. **Tool Selection Strategy**
- **Simple/Medium Tasks**: Single Gemini comprehensive analysis
- **Complex Tasks**: Gemini comprehensive + Codex validation
- Load appropriate prompt templates and configurations
### Phase 2: Analysis Preparation
1. **Workspace Setup**
- Create analysis output directory: `.workflow/{session_id}/.process/`
- Initialize log files and monitoring structures
- Set process limits and resource management
2. **Context Optimization**
- Filter high-priority assets from context package
- Organize project structure and dependencies
- Prepare template references and rule configurations
3. **Execution Environment**
- Configure CLI tools with write permissions
- Set timeout parameters and monitoring intervals
- Prepare error handling and recovery mechanisms
### Phase 3: Parallel Analysis Execution
1. **Gemini Comprehensive Analysis & Documentation**
- **Tool Configuration**: `gemini-wrapper --approval-mode yolo` for write permissions
- **Purpose**: Enhanced comprehensive analysis with actionable suggestions, design blueprints, and documentation generation
- **Expected Outputs**:
- Current State Analysis: Architecture patterns, code quality, technical debt, performance bottlenecks
- Enhanced Suggestions: Implementation blueprints, code organization, API specifications, security guidelines
- Implementation Roadmap: Phase-by-phase plans, CI/CD blueprints, testing strategies
- Actionable Examples: Code templates, configuration scripts, integration patterns
- Documentation Generation: Technical documentation, API docs, user guides, and README files
- **Prompt Template**:
```
PURPOSE: Generate comprehensive analysis and documentation for {task_description}
TASK: Analyze codebase, create implementation blueprints, and generate supporting documentation
CONTEXT: {context_package_assets}
EXPECTED:
1. Complete technical analysis with implementation strategy
2. Generated documentation files (README.md, API.md, GUIDE.md as needed)
3. Code examples and configuration templates
4. Implementation roadmap with phase-by-phase plans
RULES: Create both analysis blueprints AND user-facing documentation. Generate actual documentation files, not just analysis of documentation needs.
```
- **Output Location**: `.workflow/{session_id}/.process/gemini-enhanced-analysis.md` + generated docs
2. **Codex Implementation Validation** (Complex Tasks Only)
- **Tool Configuration**: `codex --skip-git-repo-check --full-auto exec -s danger-full-access` for implementation validation
- **Purpose**: Technical feasibility validation with write-enabled blueprint generation
- **Expected Outputs**:
- Feasibility Assessment: Complexity analysis, resource requirements, technology compatibility
- Implementation Validation: Quality recommendations, security assessments, testing frameworks
- Implementation Guides: Step-by-step procedures, configuration management, monitoring setup
- **Output Location**: `.workflow/{session_id}/.process/codex-validation-analysis.md`
3. **Parallel Execution Management**
- Launch both tools simultaneously for complex tasks
- Monitor execution progress with timeout controls
- Handle process completion and error scenarios
- Maintain execution logs for debugging and recovery
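The Gemini invocation from step 1 can be assembled as follows (a sketch: the session ID, task description, and context path are illustrative placeholders, and the command is printed for review rather than executed):

```shell
# Hypothetical sketch: assemble the Gemini analysis command from the prompt
# template above. SESSION_ID and the task text are illustrative placeholders.
SESSION_ID="WFS-auth"
OUT=".workflow/${SESSION_ID}/.process/gemini-enhanced-analysis.md"
PROMPT="PURPOSE: Generate comprehensive analysis and documentation for user authentication
TASK: Analyze codebase, create implementation blueprints, and generate supporting documentation
CONTEXT: @{.workflow/${SESSION_ID}/.process/context-package.json}
EXPECTED: technical analysis, generated docs, code examples, phased roadmap
RULES: Create both analysis blueprints AND user-facing documentation."
CMD="~/.claude/scripts/gemini-wrapper --approval-mode yolo -p \"\$PROMPT\" > ${OUT}"
echo "$CMD"   # review the assembled command before launching it
```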
### Phase 4: Results Collection & Validation
1. **Output Validation & Collection**
- **Gemini Results**: Validate `gemini-enhanced-analysis.md` exists and contains complete analysis
- **Codex Results**: For complex tasks, validate `codex-validation-analysis.md` with implementation guidance
- **Fallback Processing**: Use execution logs if primary outputs are incomplete
- **Status Classification**: Mark each tool as completed, partial, failed, or skipped
2. **Quality Assessment**
- Verify analysis completeness against expected output structure
- Assess actionability of recommendations and blueprints
- Validate presence of concrete implementation examples
- Check coverage of technical requirements and constraints
3. **Analysis Integration Strategy**
- **Simple/Medium Tasks**: Direct integration of Gemini comprehensive analysis
- **Complex Tasks**: Synthesis of Gemini understanding with Codex validation
- **Conflict Resolution**: Identify and resolve conflicting recommendations
- **Priority Matrix**: Organize recommendations by implementation priority
### Phase 5: ANALYSIS_RESULTS.md Generation
1. **Structured Report Assembly**
- **Executive Summary**: Task overview, timestamp, tools used, key findings
- **Analysis Results**: Complete Gemini analysis with optional Codex validation
- **Synthesis & Recommendations**: Consolidated implementation strategy with risk mitigation
- **Implementation Roadmap**: Phase-by-phase development plan with timelines
- **Quality Assurance**: Testing frameworks, monitoring strategies, success criteria
2. **Supplementary Outputs**
- **Machine-Readable Summary**: `analysis-summary.json` with structured metadata
- **Implementation Guidelines**: Phase-specific deliverables and checkpoints
- **Next Steps Matrix**: Immediate, short-term, and long-term action items
3. **Final Report Generation**
- **Primary Output**: `ANALYSIS_RESULTS.md` with comprehensive analysis and implementation blueprint
- **Secondary Output**: `analysis-summary.json` with machine-readable metadata and status
- **Report Structure**: Executive summary, analysis results, synthesis recommendations, implementation roadmap, quality assurance strategy
- **Action Items**: Immediate, short-term, and long-term next steps with success criteria
4. **Summary Report Finalization**
- Generate executive summary with key findings
- Create implementation priority matrix
- Provide next steps and action items
- Generate quality metrics and confidence scores
## Analysis Results Format
Generated ANALYSIS_RESULTS.md format (Multi-Tool Perspective Analysis):
```markdown
# Multi-Tool Analysis Results
## Task Overview
- **Description**: {task_description}
- **Context Package**: {context_package_path}
- **Analysis Tools**: {tools_used}
- **Analysis Timestamp**: {timestamp}
## Tool-Specific Analysis
### 🧠 Gemini Analysis (Enhanced Understanding & Architecture Blueprint)
**Focus**: Existing codebase understanding, enhanced suggestions, and comprehensive technical architecture design with actionable improvements
**Write Permissions**: Enabled for suggestion generation and blueprint creation
#### Current Architecture Assessment
- **Existing Patterns**: {identified_patterns}
- **Code Structure**: {current_structure}
- **Integration Points**: {integration_analysis}
- **Technical Debt**: {debt_assessment}
#### Compatibility Analysis
- **Framework Compatibility**: {framework_analysis}
- **Dependency Impact**: {dependency_analysis}
- **Migration Considerations**: {migration_notes}
#### Proposed Architecture
- **System Design**: {architectural_design}
- **Component Structure**: {component_design}
- **Data Flow**: {data_flow_design}
- **Interface Design**: {api_interface_design}
#### Implementation Strategy
- **Code Organization**: {code_structure_plan}
- **Module Dependencies**: {module_dependencies}
- **Testing Strategy**: {testing_approach}
### 🔧 Codex Analysis (Implementation Blueprints & Enhanced Validation)
**Focus**: Implementation feasibility, write-enabled blueprints, and comprehensive technical validation with concrete examples
**Write Permissions**: Full access for implementation suggestion generation
#### Feasibility Assessment
- **Technical Risks**: {implementation_risks}
- **Performance Impact**: {performance_analysis}
- **Resource Requirements**: {resource_assessment}
- **Maintenance Complexity**: {maintenance_analysis}
#### Enhanced Implementation Blueprints
- **Tool Selection**: {recommended_tools_with_examples}
- **Development Approach**: {detailed_development_strategy}
- **Quality Assurance**: {comprehensive_qa_recommendations}
- **Code Examples**: {implementation_code_samples}
- **Testing Blueprints**: {automated_testing_strategies}
- **Deployment Guidelines**: {deployment_implementation_guide}
## Synthesis & Consensus
### Enhanced Consolidated Recommendations
- **Architecture Approach**: {consensus_architecture_with_blueprints}
- **Implementation Priority**: {detailed_priority_matrix}
- **Risk Mitigation**: {comprehensive_risk_mitigation_strategy}
- **Performance Optimization**: {optimization_recommendations}
- **Security Enhancements**: {security_implementation_guide}
## Implementation Roadmap
### Phase 1: Foundation Setup
- **Infrastructure**: {infrastructure_setup_blueprint}
- **Development Environment**: {dev_env_configuration}
- **Initial Architecture**: {foundational_architecture}
### Phase 2: Core Implementation
- **Priority Features**: {core_feature_implementation}
- **Testing Framework**: {testing_implementation_strategy}
- **Quality Gates**: {quality_assurance_checkpoints}
### Phase 3: Enhancement & Optimization
- **Performance Tuning**: {performance_enhancement_plan}
- **Security Hardening**: {security_implementation_steps}
- **Scalability Improvements**: {scalability_enhancement_strategy}
## Enhanced Quality Assurance Strategy
### Automated Testing Blueprint
- **Unit Testing**: {unit_test_implementation}
- **Integration Testing**: {integration_test_strategy}
- **Performance Testing**: {performance_test_blueprint}
- **Security Testing**: {security_test_framework}
### Continuous Quality Monitoring
- **Code Quality Metrics**: {quality_monitoring_setup}
- **Performance Monitoring**: {performance_tracking_implementation}
- **Security Monitoring**: {security_monitoring_blueprint}
### Tool Agreement Analysis
- **Consensus Points**: {agreed_recommendations}
- **Conflicting Views**: {conflicting_opinions}
- **Resolution Strategy**: {conflict_resolution}
### Task Decomposition Suggestions
1. **Primary Tasks**: {major_task_suggestions}
2. **Task Dependencies**: {dependency_mapping}
3. **Complexity Assessment**: {complexity_evaluation}
## Enhanced Analysis Quality Metrics
- **Context Coverage**: {coverage_percentage}
- **Multi-Tool Consensus**: {three_tool_consensus_level}
- **Analysis Depth**: {comprehensive_depth_assessment}
- **Implementation Feasibility**: {feasibility_confidence_score}
- **Blueprint Completeness**: {blueprint_coverage_score}
- **Actionability Score**: {suggestion_actionability_rating}
- **Overall Confidence**: {enhanced_confidence_score}
## Implementation Success Indicators
- **Technical Feasibility**: {technical_implementation_confidence}
- **Resource Adequacy**: {resource_requirement_assessment}
- **Timeline Realism**: {timeline_feasibility_score}
- **Risk Management**: {risk_mitigation_effectiveness}
## Next Steps & Action Items
1. **Immediate Actions**: {immediate_implementation_steps}
2. **Short-term Goals**: {short_term_objectives}
3. **Long-term Strategy**: {long_term_implementation_plan}
4. **Success Metrics**: {implementation_success_criteria}
```
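The companion `analysis-summary.json` can be emitted with a single jq call; this is a sketch, and the field names are assumptions since the report format above does not pin down the JSON schema:

```shell
# Hypothetical sketch: build the machine-readable analysis-summary.json that
# accompanies ANALYSIS_RESULTS.md. Field names are illustrative assumptions.
jq -n \
  --arg session "WFS-auth" \
  --arg status "completed" \
  --argjson tools '["gemini", "codex"]' \
  '{
    session_id: $session,
    status: $status,
    tools_used: $tools,
    outputs: ["ANALYSIS_RESULTS.md", "gemini-enhanced-analysis.md"],
    generated_at: (now | todate)
  }' > /tmp/analysis-summary.json
jq -r '.tools_used | join(",")' /tmp/analysis-summary.json
```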
## Error Handling & Fallbacks
### Error Handling & Recovery Strategies
1. **Pre-execution Validation**
- **Session Verification**: Ensure session directory and metadata exist
- **Context Package Validation**: Verify JSON format and content structure
- **Tool Availability**: Confirm CLI tools are accessible and configured
- **Prerequisite Checks**: Validate all required dependencies and permissions
2. **Execution Monitoring & Timeout Management**
- **Progress Monitoring**: Track analysis execution with regular status checks
- **Timeout Controls**: 30-minute execution limit with graceful termination
- **Process Management**: Handle parallel tool execution and resource limits
- **Status Tracking**: Maintain real-time execution state and completion status
3. **Partial Results Recovery**
- **Fallback Strategy**: Generate analysis results even with incomplete outputs
- **Log Integration**: Use execution logs when primary outputs are unavailable
- **Recovery Mode**: Create partial analysis reports with available data
- **Guidance Generation**: Provide next steps and retry recommendations
4. **Resource Management**
- **Disk Space Monitoring**: Check available storage and cleanup temporary files
- **Process Limits**: Set CPU and memory constraints for analysis execution
- **Performance Optimization**: Manage resource utilization and system load
- **Cleanup Procedures**: Remove outdated logs and temporary files
5. **Comprehensive Error Recovery**
- **Error Detection**: Automatic error identification and classification
- **Recovery Workflows**: Structured approach to handling different failure modes
- **Status Reporting**: Clear communication of issues and resolution attempts
- **Graceful Degradation**: Provide useful outputs even with partial failures
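The validation and recovery steps above can be sketched as a small shell guard. The session layout and file names follow this document's conventions, but the function itself is a hypothetical sketch rather than the command's actual implementation; full JSON validation is replaced here by a lightweight well-formedness probe to avoid extra dependencies.

```bash
# Hypothetical pre-execution guard; paths follow the .workflow/{session}/.process
# layout described in this document.
validate_prerequisites() {
  local session_dir="$1" context_package="$2"

  # Session verification: directory and metadata must exist
  [ -d "$session_dir" ] || { echo "missing session dir"; return 1; }
  [ -f "$session_dir/workflow-session.json" ] || { echo "missing metadata"; return 1; }

  # Context package validation: present and plausibly JSON
  [ -f "$context_package" ] || { echo "missing context package"; return 1; }
  [ "$(head -c 1 "$context_package")" = "{" ] || { echo "invalid JSON"; return 1; }

  echo "prerequisites ok"
}

# Demonstration against a throwaway fixture
tmp=$(mktemp -d)
mkdir -p "$tmp/WFS-demo/.process"
echo '{"session_id":"WFS-demo"}' > "$tmp/WFS-demo/workflow-session.json"
echo '{"metadata":{}}' > "$tmp/WFS-demo/.process/context-package.json"
validate_prerequisites "$tmp/WFS-demo" "$tmp/WFS-demo/.process/context-package.json"  # → prerequisites ok
rm -rf "$tmp"
```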
## Performance Optimization
### Analysis Optimization Strategies
- **Parallel Analysis**: Execute multiple tools in parallel to reduce total time
- **Context Sharding**: Analyze large projects by module shards
- **Caching Mechanism**: Reuse analysis results for similar contexts
- **Incremental Analysis**: Perform incremental analysis based on changes
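The parallel-analysis strategy reduces wall-clock time with a simple fan-out/fan-in pattern. In this sketch the real Gemini/Codex invocations are replaced by placeholder commands; the structure — background jobs bounded by `timeout`, then a failure-tolerant `wait` — is what carries over.

```bash
# Fan-out/fan-in sketch; placeholder commands stand in for the actual CLI tools.
run_parallel_analysis() {
  local out_dir="$1"
  mkdir -p "$out_dir"

  # Fan out: each tool writes its own report under a shared time budget
  timeout 600s sh -c 'echo "gemini findings"' > "$out_dir/gemini-analysis.md" &
  local gemini_pid=$!
  timeout 600s sh -c 'echo "codex findings"' > "$out_dir/codex-analysis.md" &
  local codex_pid=$!

  # Fan in: wait for both, tolerating partial failure so recovery can proceed
  wait "$gemini_pid" || echo "⚠️ gemini analysis incomplete"
  wait "$codex_pid" || echo "⚠️ codex analysis incomplete"
}
```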
### Resource Management
```bash
# Set analysis timeout
timeout 600s analysis_command || {
echo "⚠️ Analysis timeout, generating partial results"
# Generate partial results
}
# Memory usage monitoring (resident set size in KB for the current process)
memory_limit=524288  # assumed limit: 512 MB expressed in KB
memory_usage=$(ps -o rss= -p $$ | tr -d ' ')
if [ "$memory_usage" -gt "$memory_limit" ]; then
  echo "⚠️ High memory usage detected, optimizing..."
fi
```
## Integration Points
### Input Interface
- **Required**: `--session` parameter specifying session ID (e.g., WFS-auth)
- **Required**: `--context` parameter specifying context package path
- **Optional**: `--depth` specify analysis depth (quick|full|deep)
- **Optional**: `--focus` specify analysis focus areas
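A minimal parser for this interface might look like the following sketch; the flag handling and the `full` depth default are assumptions rather than the command's actual implementation.

```bash
# Hypothetical argument parsing for /analysis:run
parse_analysis_args() {
  session="" context="" depth="full" focus=""
  while [ $# -gt 0 ]; do
    case "$1" in
      --session) session="$2"; shift 2 ;;
      --context) context="$2"; shift 2 ;;
      --depth)   depth="$2";   shift 2 ;;
      --focus)   focus="$2";   shift 2 ;;
      *) echo "unknown option: $1" >&2; return 1 ;;
    esac
  done
  [ -n "$session" ] && [ -n "$context" ] || { echo "--session and --context are required" >&2; return 1; }
  echo "session=$session depth=$depth"
}

parse_analysis_args --session WFS-auth --context .workflow/WFS-auth/.process/context-package.json
# → session=WFS-auth depth=full
```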
### Output Interface
- **Primary**: Enhanced ANALYSIS_RESULTS.md file with implementation blueprints
- **Location**: .workflow/{session_id}/.process/ANALYSIS_RESULTS.md
- **Secondary**: analysis-summary.json (machine-readable format)
- **Tertiary**: implementation-roadmap.md (detailed implementation guide)
- **Quaternary**: blueprint-templates/ directory (code templates and examples)
## Quality Assurance
### Analysis Quality Checks
- **Completeness Check**: Ensure all required analysis sections are completed
- **Consistency Check**: Verify consistency of multi-tool analysis results
- **Feasibility Validation**: Ensure recommended implementation plans are feasible
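The completeness check can be sketched as a probe for required report sections; the section list below is illustrative, drawn from the report template in this document.

```bash
# Section probe; the required-section list is an assumption based on the template.
check_report_completeness() {
  local report="$1" missing=0
  for section in "Tool Agreement Analysis" "Task Decomposition Suggestions" \
                 "Next Steps & Action Items"; do
    grep -q "$section" "$report" || { echo "missing section: $section"; missing=1; }
  done
  [ "$missing" -eq 0 ] && echo "report complete"
}
```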
### Success Criteria
- ✅ **Enhanced Output Generation**: Comprehensive ANALYSIS_RESULTS.md with implementation blueprints and machine-readable summary
- ✅ **Write-Enabled CLI Tools**: Full write permissions for Gemini (--approval-mode yolo) and Codex (--skip-git-repo-check -s danger-full-access)
- ✅ **Enhanced Suggestions**: Concrete implementation examples, configuration templates, and step-by-step procedures
- ✅ **Design Blueprints**: Detailed technical architecture with component diagrams and API specifications
- ✅ **Parallel Execution**: Efficient concurrent tool execution with proper monitoring and timeout handling
- ✅ **Robust Error Handling**: Comprehensive validation, timeout management, and partial result recovery
- ✅ **Actionable Results**: Implementation roadmap with phase-by-phase development strategy and success criteria
- ✅ **Quality Assurance**: Automated testing frameworks, CI/CD blueprints, and monitoring strategies
- ✅ **Performance Optimization**: Execution completes within 30 minutes with resource management
## Related Commands
- `/context:gather` - Generate context packages required by this command
- `/workflow:plan` - Call this command for analysis
- `/task:create` - Create specific tasks based on analysis results


@@ -1,407 +0,0 @@
---
name: concept-eval
description: Evaluate concept planning before implementation with intelligent tool analysis
usage: /workflow:concept-eval [--tool gemini|codex|both] <input>
argument-hint: [--tool gemini|codex|both] "concept description"|file.md|ISS-001
examples:
- /workflow:concept-eval "Build microservices architecture"
- /workflow:concept-eval --tool gemini requirements.md
- /workflow:concept-eval --tool both ISS-001
allowed-tools: Task(*), TodoWrite(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*)
---
# Workflow Concept Evaluation Command
## Overview
Pre-planning evaluation command that assesses concept feasibility, identifies potential issues, and provides optimization recommendations before formal planning begins. **Works before `/workflow:plan`** to catch conceptual problems early and improve initial design quality.
## Core Responsibilities
- **Concept Analysis**: Evaluate design concepts for architectural soundness
- **Feasibility Assessment**: Technical and resource feasibility evaluation
- **Risk Identification**: Early identification of potential implementation risks
- **Optimization Suggestions**: Generate actionable improvement recommendations
- **Context Integration**: Leverage existing codebase patterns and documentation
- **Tool Selection**: Use gemini for strategic analysis, codex for technical assessment
## Usage
```bash
/workflow:concept-eval [--tool gemini|codex|both] <input>
```
## Parameters
- **--tool**: Specify evaluation tool (default: both)
- `gemini`: Strategic and architectural evaluation
- `codex`: Technical feasibility and implementation assessment
- `both`: Comprehensive dual-perspective analysis
- **input**: Concept description, file path, or issue reference
## Input Detection
- **Files**: `.md/.txt/.json/.yaml/.yml` → Reads content and extracts concept requirements
- **Issues**: `ISS-*`, `ISSUE-*`, `*-request-*` → Loads issue data and requirement specifications
- **Text**: Everything else → Parses natural language concept descriptions
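The detection rules above reduce to a single `case` dispatch; the glob patterns here are illustrative rather than the command's exact matching logic.

```bash
# Input-type detection sketch mirroring the rules above
detect_input_type() {
  case "$1" in
    *.md|*.txt|*.json|*.yaml|*.yml) echo "file" ;;
    ISS-*|ISSUE-*|*-request-*)      echo "issue" ;;
    *)                              echo "text" ;;
  esac
}

detect_input_type "requirements.md"                   # → file
detect_input_type "ISS-001"                           # → issue
detect_input_type "Build microservices architecture"  # → text
```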
## Core Workflow
### Evaluation Process
The command performs comprehensive concept evaluation through:
**0. Context Preparation** ⚠️ FIRST STEP
- **MCP Tools Integration**: Use Code Index for codebase exploration, Exa for external context
- **Documentation loading**: Automatic context gathering based on concept scope
- **Always check**: `CLAUDE.md`, `README.md` - Project context and conventions
- **For architecture concepts**: `.workflow/docs/architecture/`, existing system patterns
- **For specific modules**: `.workflow/docs/modules/[relevant-module]/` documentation
- **For API concepts**: `.workflow/docs/api/` specifications
- **Claude Code Memory Integration**: Access conversation history and previous work context
- **Session Memory**: Current session analysis and decisions
- **Project Memory**: Previous implementations and lessons learned
- **Pattern Memory**: Successful approaches and anti-patterns identified
- **Context Continuity**: Reference previous concept evaluations and outcomes
- **Context-driven selection**: Only load documentation relevant to the concept scope
- **Pattern analysis**: Identify existing implementation patterns and conventions
**1. Input Processing & Context Gathering**
- Parse input to extract concept requirements and scope
- Automatic tool assignment based on evaluation needs:
- **Strategic evaluation** (gemini): Architectural soundness, design patterns, business alignment
- **Technical assessment** (codex): Implementation complexity, technical feasibility, resource requirements
- **Comprehensive analysis** (both): Combined strategic and technical evaluation
- Load relevant project documentation and existing patterns
**2. Concept Analysis** ⚠️ CRITICAL EVALUATION PHASE
- **Conceptual integrity**: Evaluate design coherence and completeness
- **Architectural soundness**: Assess alignment with existing system architecture
- **Technical feasibility**: Analyze implementation complexity and resource requirements
- **Risk assessment**: Identify potential technical and business risks
- **Dependency analysis**: Map required dependencies and integration points
**3. Evaluation Execution**
Based on tool selection, execute appropriate analysis:
**Gemini Strategic Analysis**:
```bash
~/.claude/scripts/gemini-wrapper -p "
PURPOSE: Strategic evaluation of concept design and architecture
TASK: Analyze concept for architectural soundness, design patterns, and strategic alignment
CONTEXT: @{CLAUDE.md,README.md,.workflow/docs/**/*} Concept requirements and existing patterns | Previous conversation context and Claude Code session memory for continuity and pattern recognition
EXPECTED: Strategic assessment with architectural recommendations informed by session history
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/concept-eval.txt) | Focus on strategic soundness and design quality | Reference previous evaluations and lessons learned
"
```
**Codex Technical Assessment**:
```bash
codex --full-auto exec "
PURPOSE: Technical feasibility assessment of concept implementation
TASK: Evaluate implementation complexity, technical risks, and resource requirements
CONTEXT: @{CLAUDE.md,README.md,src/**/*} Concept requirements and existing codebase | Current session work context and previous technical decisions
EXPECTED: Technical assessment with implementation recommendations building on session memory
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/concept-eval.txt) | Focus on technical feasibility and implementation complexity | Consider previous technical approaches and outcomes
" -s danger-full-access
```
**Combined Analysis** (when --tool both):
Execute both analyses in parallel, then synthesize results for comprehensive evaluation.
**4. Optimization Recommendations**
- **Design improvements**: Architectural and design optimization suggestions
- **Risk mitigation**: Strategies to address identified risks
- **Implementation approach**: Recommended technical approaches and patterns
- **Resource optimization**: Efficient resource utilization strategies
- **Integration suggestions**: Optimal integration with existing systems
## Implementation Standards
### Evaluation Criteria ⚠️ CRITICAL
Concept evaluation focuses on these key dimensions:
**Strategic Evaluation (Gemini)**:
1. **Architectural Soundness**: Design coherence and system integration
2. **Business Alignment**: Concept alignment with business objectives
3. **Scalability Considerations**: Long-term growth and expansion potential
4. **Design Patterns**: Appropriate use of established design patterns
5. **Risk Assessment**: Strategic and business risk identification
**Technical Assessment (Codex)**:
1. **Implementation Complexity**: Technical difficulty and effort estimation
2. **Technical Feasibility**: Availability of required technologies and skills
3. **Resource Requirements**: Development time, infrastructure, and team resources
4. **Integration Challenges**: Technical integration complexity and risks
5. **Performance Implications**: System performance and scalability impact
### Evaluation Context Loading ⚠️ CRITICAL
Context preparation ensures comprehensive evaluation:
```json
// Context loading strategy for concept evaluation
"context_preparation": {
"required_docs": [
"CLAUDE.md",
"README.md"
],
"conditional_docs": {
"architecture_concepts": [
".workflow/docs/architecture/",
"docs/system-design.md"
],
"api_concepts": [
".workflow/docs/api/",
"api-documentation.md"
],
"module_concepts": [
".workflow/docs/modules/[relevant-module]/",
"src/[module]/**/*.md"
]
},
"pattern_analysis": {
"existing_implementations": "src/**/*",
"configuration_patterns": "config/",
"test_patterns": "test/**/*"
},
"claude_code_memory": {
"session_context": "Current session conversation history and decisions",
"project_memory": "Previous implementations and lessons learned across sessions",
"pattern_memory": "Successful approaches and anti-patterns identified",
"evaluation_history": "Previous concept evaluations and their outcomes",
"technical_decisions": "Past technical choices and their rationale",
"architectural_evolution": "System architecture changes and migration patterns"
}
}
```
### Analysis Output Structure
**Evaluation Categories**:
```markdown
## Concept Evaluation Summary
### ✅ Strengths Identified
- [ ] **Design Quality**: Well-defined architectural approach
- [ ] **Technical Approach**: Appropriate technology selection
- [ ] **Integration**: Good fit with existing systems
### ⚠️ Areas for Improvement
- [ ] **Complexity**: Reduce implementation complexity in module X
- [ ] **Dependencies**: Simplify dependency management approach
- [ ] **Scalability**: Address potential performance bottlenecks
### ❌ Critical Issues
- [ ] **Architecture**: Conflicts with existing system design
- [ ] **Resources**: Insufficient resources for proposed timeline
- [ ] **Risk**: High technical risk in component Y
### 🎯 Optimization Recommendations
- [ ] **Alternative Approach**: Consider microservices instead of monolithic design
- [ ] **Technology Stack**: Use existing React patterns instead of Vue
- [ ] **Implementation Strategy**: Phase implementation to reduce risk
```
## Document Generation & Output
**Evaluation Workflow**: Input Processing → Context Loading → Analysis Execution → Report Generation → Recommendations
**Always Created**:
- **CONCEPT_EVALUATION.md**: Complete evaluation results and recommendations
- **evaluation-session.json**: Evaluation metadata and tool configuration
- **OPTIMIZATION_SUGGESTIONS.md**: Actionable improvement recommendations
**Auto-Created (for comprehensive analysis)**:
- **strategic-analysis.md**: Gemini strategic evaluation results
- **technical-assessment.md**: Codex technical feasibility analysis
- **risk-assessment-matrix.md**: Comprehensive risk evaluation
- **implementation-roadmap.md**: Recommended implementation approach
**Document Structure**:
```
.workflow/WFS-[topic]/.evaluation/
├── evaluation-session.json # Evaluation session metadata
├── CONCEPT_EVALUATION.md # Complete evaluation results
├── OPTIMIZATION_SUGGESTIONS.md # Actionable recommendations
├── strategic-analysis.md # Gemini strategic evaluation
├── technical-assessment.md # Codex technical assessment
├── risk-assessment-matrix.md # Risk evaluation matrix
└── implementation-roadmap.md # Recommended approach
```
### Evaluation Implementation
**Session-Aware Evaluation**:
```bash
# Check for existing sessions and context
active_sessions=$(find .workflow/ -name ".active-*" 2>/dev/null)
if [ -n "$active_sessions" ]; then
echo "Found active sessions: $active_sessions"
echo "Concept evaluation will consider existing session context"
fi
# Create evaluation session directory
evaluation_session="CE-$(date +%Y%m%d_%H%M%S)"
mkdir -p ".workflow/.evaluation/$evaluation_session"
# Store evaluation metadata
cat > ".workflow/.evaluation/$evaluation_session/evaluation-session.json" << EOF
{
"session_id": "$evaluation_session",
"timestamp": "$(date -Iseconds)",
"concept_input": "$input_description",
"tool_selection": "$tool_choice",
"context_loaded": [
"CLAUDE.md",
"README.md"
],
"evaluation_scope": "$evaluation_scope"
}
EOF
```
**Tool Execution Pattern**:
```bash
# Execute based on tool selection
case "$tool_choice" in
"gemini")
echo "Performing strategic concept evaluation with Gemini..."
~/.claude/scripts/gemini-wrapper -p "$gemini_prompt" > ".workflow/.evaluation/$evaluation_session/strategic-analysis.md"
;;
"codex")
echo "Performing technical assessment with Codex..."
codex --full-auto exec "$codex_prompt" -s danger-full-access > ".workflow/.evaluation/$evaluation_session/technical-assessment.md"
;;
"both"|*)
echo "Performing comprehensive evaluation with both tools..."
~/.claude/scripts/gemini-wrapper -p "$gemini_prompt" > ".workflow/.evaluation/$evaluation_session/strategic-analysis.md" &
codex --full-auto exec "$codex_prompt" -s danger-full-access > ".workflow/.evaluation/$evaluation_session/technical-assessment.md" &
wait # Wait for both analyses to complete
# Synthesize results
~/.claude/scripts/gemini-wrapper -p "
PURPOSE: Synthesize strategic and technical concept evaluations
TASK: Combine analyses and generate integrated recommendations
CONTEXT: @{.workflow/.evaluation/$evaluation_session/strategic-analysis.md,.workflow/.evaluation/$evaluation_session/technical-assessment.md}
EXPECTED: Integrated evaluation with prioritized recommendations
RULES: Focus on actionable insights and clear next steps
" > ".workflow/.evaluation/$evaluation_session/CONCEPT_EVALUATION.md"
;;
esac
```
## Integration with Workflow Commands
### Workflow Position
**Pre-Planning Phase**: Use before formal planning to optimize concept quality
```
concept-eval → plan → plan-verify → execute
```
### Usage Scenarios
**Early Concept Validation**:
```bash
# Validate initial concept before detailed planning
/workflow:concept-eval "Build real-time notification system using WebSockets"
```
**Architecture Review**:
```bash
# Strategic architecture evaluation
/workflow:concept-eval --tool gemini architecture-proposal.md
```
**Technical Feasibility Check**:
```bash
# Technical implementation assessment
/workflow:concept-eval --tool codex "Implement ML-based recommendation engine"
```
**Comprehensive Analysis**:
```bash
# Full strategic and technical evaluation
/workflow:concept-eval --tool both ISS-042
```
### Integration Benefits
- **Early Risk Detection**: Identify issues before detailed planning
- **Quality Improvement**: Optimize concepts before implementation planning
- **Resource Efficiency**: Avoid detailed planning of infeasible concepts
- **Decision Support**: Data-driven concept selection and refinement
- **Team Alignment**: Clear evaluation criteria and recommendations
## Error Handling & Edge Cases
### Input Validation
```bash
# Validate input format and accessibility
if [[ -z "$input" ]]; then
echo "Error: Concept input required"
echo "Usage: /workflow:concept-eval [--tool gemini|codex|both] <input>"
exit 1
fi
# Check file accessibility for file inputs
if [[ "$input" =~ \.(md|txt|json|yaml|yml)$ ]] && [[ ! -f "$input" ]]; then
echo "Error: File not found: $input"
echo "Please provide a valid file path or concept description"
exit 1
fi
```
### Tool Availability
```bash
# Check tool availability
if [[ "$tool_choice" == "gemini" ]] || [[ "$tool_choice" == "both" ]]; then
if ! command -v ~/.claude/scripts/gemini-wrapper &> /dev/null; then
echo "Warning: Gemini wrapper not available, using codex only"
tool_choice="codex"
fi
fi
if [[ "$tool_choice" == "codex" ]] || [[ "$tool_choice" == "both" ]]; then
if ! command -v codex &> /dev/null; then
echo "Warning: Codex not available, using gemini only"
tool_choice="gemini"
fi
fi
```
### Recovery Strategies
```bash
# Fallback to manual evaluation if tools fail
if [[ "$evaluation_failed" == "true" ]]; then
echo "Automated evaluation failed, generating manual evaluation template..."
cat > ".workflow/.evaluation/$evaluation_session/manual-evaluation-template.md" << EOF
# Manual Concept Evaluation
## Concept Description
$input_description
## Evaluation Checklist
- [ ] **Architectural Soundness**: Does the concept align with existing architecture?
- [ ] **Technical Feasibility**: Are required technologies available and mature?
- [ ] **Resource Requirements**: Are time and team resources realistic?
- [ ] **Integration Complexity**: How complex is integration with existing systems?
- [ ] **Risk Assessment**: What are the main technical and business risks?
## Recommendations
[Provide manual evaluation and recommendations]
EOF
fi
```
## Quality Standards
### Evaluation Excellence
- **Comprehensive Analysis**: Consider all aspects of concept feasibility
- **Context-Rich Assessment**: Leverage full project context and existing patterns
- **Actionable Recommendations**: Provide specific, implementable suggestions
- **Risk-Aware Evaluation**: Identify and assess potential implementation risks
### User Experience Excellence
- **Clear Results**: Present evaluation results in actionable format
- **Focused Recommendations**: Prioritize most critical optimization suggestions
- **Integration Guidance**: Provide clear next steps for concept refinement
- **Tool Transparency**: Clear indication of which tools were used and why
### Output Quality
- **Structured Reports**: Consistent, well-organized evaluation documentation
- **Evidence-Based**: All recommendations backed by analysis and reasoning
- **Prioritized Actions**: Clear indication of critical vs. optional improvements
- **Implementation Ready**: Evaluation results directly usable for planning phase


@@ -0,0 +1,301 @@
---
name: gather
description: Intelligently collect project context based on task description and package into standardized JSON
usage: /workflow:tools:context-gather --session <session_id> "<task_description>"
argument-hint: "--session WFS-session-id \"task description\""
examples:
- /workflow:tools:context-gather --session WFS-user-auth "Implement user authentication system"
- /workflow:tools:context-gather --session WFS-payment "Refactor payment module API"
- /workflow:tools:context-gather --session WFS-bugfix "Fix login validation error"
---
# Context Gather Command (/workflow:tools:context-gather)
## Overview
Intelligent context collector that gathers relevant information from project codebase, documentation, and dependencies based on task descriptions, generating standardized context packages.
## Core Philosophy
- **Intelligent Collection**: Auto-identify relevant resources based on keyword analysis
- **Comprehensive Coverage**: Collect code, documentation, configurations, and dependencies
- **Standardized Output**: Generate unified format context-package.json
- **Efficient Execution**: Optimize collection strategies to avoid irrelevant information
## Core Responsibilities
- **Keyword Extraction**: Extract core keywords from task descriptions
- **Smart Documentation Loading**: Load relevant project documentation based on keywords
- **Code Structure Analysis**: Analyze project structure to locate relevant code files
- **Dependency Discovery**: Identify tech stack and dependency relationships
- **MCP Tools Integration**: Leverage code-index tools for enhanced collection
- **Context Packaging**: Generate standardized JSON context packages
## Execution Process
### Phase 1: Task Analysis
1. **Keyword Extraction**
- Parse task description to extract core keywords
- Identify technical domain (auth, API, frontend, backend, etc.)
- Determine complexity level (simple, medium, complex)
2. **Scope Determination**
- Define collection scope based on keywords
- Identify potentially involved modules and components
- Set file type filters
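A rough keyword-extraction pass can be sketched with standard text tools: lowercase the description, split on non-alphanumerics, and drop a small stop-word list. The stop words are an assumption; real extraction would weigh domain terms more carefully.

```bash
# Keyword-extraction sketch; the stop-word list is illustrative only
extract_keywords() {
  echo "$1" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '\n' \
    | grep -vE '^(a|an|the|for|to|and|of|in|on|system)$' \
    | sort -u | tr '\n' ' ' | sed 's/ $//'
}

extract_keywords "Implement user authentication system"
# → authentication implement user
```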
### Phase 2: Project Structure Exploration
1. **Architecture Analysis**
- Use `~/.claude/scripts/get_modules_by_depth.sh` for comprehensive project structure
- Analyze project layout and module organization
- Identify key directories and components
2. **Code File Location**
- Use MCP tools for precise search: `mcp__code-index__find_files()` and `mcp__code-index__search_code_advanced()`
- Search for relevant source code files based on keywords
- Locate implementation files, interfaces, and modules
3. **Documentation Collection**
- Load CLAUDE.md and README.md
- Load relevant documentation from .workflow/docs/ based on keywords
- Collect configuration files (package.json, requirements.txt, etc.)
### Phase 3: Intelligent Filtering & Association
1. **Relevance Scoring**
- Score based on keyword match degree
- Score based on file path relevance
- Score based on code content relevance
2. **Dependency Analysis**
- Analyze import/require statements
- Identify inter-module dependencies
- Determine core and optional dependencies
### Phase 4: Context Packaging
1. **Standardized Output**
- Generate context-package.json
- Organize resources by type and importance
- Add relevance descriptions and usage recommendations
## Context Package Format
Generated context package format:
```json
{
"metadata": {
"task_description": "Implement user authentication system",
"timestamp": "2025-09-29T10:30:00Z",
"keywords": ["user", "authentication", "JWT", "login"],
"complexity": "medium",
"tech_stack": ["typescript", "node.js", "express"],
"session_id": "WFS-user-auth"
},
"assets": [
{
"type": "documentation",
"path": "CLAUDE.md",
"relevance": "Project development standards and conventions",
"priority": "high"
},
{
"type": "documentation",
"path": ".workflow/docs/architecture/security.md",
"relevance": "Security architecture design guidance",
"priority": "high"
},
{
"type": "source_code",
"path": "src/auth/AuthService.ts",
"relevance": "Existing authentication service implementation",
"priority": "high"
},
{
"type": "source_code",
"path": "src/models/User.ts",
"relevance": "User data model definition",
"priority": "medium"
},
{
"type": "config",
"path": "package.json",
"relevance": "Project dependencies and tech stack",
"priority": "medium"
},
{
"type": "test",
"path": "tests/auth/*.test.ts",
"relevance": "Authentication related test cases",
"priority": "medium"
}
],
"tech_stack": {
"frameworks": ["express", "typescript"],
"libraries": ["jsonwebtoken", "bcrypt"],
"testing": ["jest", "supertest"]
},
"statistics": {
"total_files": 15,
"source_files": 8,
"docs_files": 4,
"config_files": 2,
"test_files": 1
}
}
```
## MCP Tools Integration
### Code Index Integration
```bash
# Set project path
mcp__code-index__set_project_path(path="{current_project_path}")
# Refresh index to ensure latest
mcp__code-index__refresh_index()
# Search relevant files
mcp__code-index__find_files(pattern="*{keyword}*")
# Search code content
mcp__code-index__search_code_advanced(
pattern="{keyword_patterns}",
file_pattern="*.{ts,js,py,go,md}",
context_lines=3
)
```
## Session ID Integration
### Session ID Usage
- **Required Parameter**: `--session WFS-session-id`
- **Session Context Loading**: Load existing session state and task summaries
- **Session Continuity**: Maintain context across pipeline phases
### Session State Management
```bash
# Validate session exists
if [ ! -d ".workflow/${session_id}" ]; then
echo "❌ Session ${session_id} not found"
exit 1
fi
# Load session metadata
session_metadata=".workflow/${session_id}/workflow-session.json"
```
## Output Location
Context package output location:
```
.workflow/{session_id}/.process/context-package.json
```
## Error Handling
### Common Error Handling
1. **No Active Session**: Create temporary session directory
2. **MCP Tools Unavailable**: Fallback to traditional bash commands
3. **Permission Errors**: Prompt user to check file permissions
4. **Large Project Optimization**: Limit file count, prioritize high-relevance files
### Graceful Degradation Strategy
```bash
# Fallback when MCP unavailable
if ! command -v mcp__code-index__find_files &> /dev/null; then
# Use find command for file discovery
find . -name "*{keyword}*" -type f -not -path "*/node_modules/*" -not -path "*/.git/*"
# Alternative pattern matching
find . -type f \( -name "*.ts" -o -name "*.js" -o -name "*.py" -o -name "*.go" \) -exec grep -l "{keyword}" {} \;
fi
# Use ripgrep instead of MCP search
rg "{keywords}" --type-add 'source:*.{ts,js,py,go}' -t source --max-count 30
# Content-based search with context
rg -A 3 -B 3 "{keywords}" --type-add 'source:*.{ts,js,py,go}' -t source
# Quick relevance check
grep -r --include="*.{ts,js,py,go}" -l "{keywords}" . | head -15
# Test files discovery
find . -name "*test*" -o -name "*spec*" | grep -E "\.(ts|js|py|go)$" | head -10
# Import/dependency analysis
rg "^(import|from|require|#include)" --type-add 'source:*.{ts,js,py,go}' -t source | head -20
```
## Performance Optimization
### Large Project Optimization Strategy
- **File Count Limit**: Maximum 50 files per type
- **Size Filtering**: Skip oversized files (>10MB)
- **Depth Limit**: Maximum search depth of 3 levels
- **Caching Strategy**: Cache project structure analysis results
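The limits above combine naturally into one bounded discovery helper; the thresholds are the documented defaults, treated here as tunable assumptions.

```bash
# Bounded file discovery applying the documented limits
bounded_find() {
  local root="$1" pattern="$2"
  find "$root" -maxdepth 3 -type f -name "$pattern" \
       -size -10M \
       -not -path "*/node_modules/*" -not -path "*/.git/*" \
    | head -50
}
```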
### Parallel Processing
- Documentation collection and code search in parallel
- MCP tool calls and traditional commands in parallel
- Reduce I/O wait time
## Essential Bash Commands (Max 10)
### 1. Project Structure Analysis
```bash
~/.claude/scripts/get_modules_by_depth.sh
```
### 2. File Discovery by Keywords
```bash
find . -name "*{keyword}*" -type f -not -path "*/node_modules/*" -not -path "*/.git/*"
```
### 3. Content Search in Code Files
```bash
rg "{keyword}" --type-add 'source:*.{ts,js,py,go}' -t source --max-count 20
```
### 4. Configuration Files Discovery
```bash
find . -maxdepth 3 \( -name "*.json" -o -name "requirements.txt" -o -name "Cargo.toml" \) -not -path "*/node_modules/*"
```
### 5. Documentation Files Collection
```bash
find . -name "*.md" -o -name "README*" -o -name "CLAUDE.md" | grep -v node_modules | head -10
```
### 6. Test Files Location
```bash
find . \( -name "*test*" -o -name "*spec*" \) -type f | grep -E "\.(js|ts|py|go)$" | head -10
```
### 7. Function/Class Definitions Search
```bash
rg "^(function|def|func|class|interface)" --type-add 'source:*.{ts,js,py,go}' -t source -n --max-count 15
```
### 8. Import/Dependency Analysis
```bash
rg "^(import|from|require|#include)" --type-add 'source:*.{ts,js,py,go}' -t source | head -15
```
### 9. Workflow Session Information
```bash
find .workflow/ \( -name "*.json" -path "*/${session_id}/*" \) -o -name "workflow-session.json" | head -5
```
### 10. Context-Aware Content Search
```bash
rg -A 2 -B 2 "{keywords}" --type-add 'source:*.{ts,js,py,go}' -t source --max-count 10
```
## Success Criteria
- Generates a valid context-package.json file
- Contains sufficient relevant information for subsequent analysis
- Execution completes within 30 seconds
- File relevance accuracy exceeds 80%
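The first criterion can be spot-checked with a shallow probe of the generated package; field names follow the package format above, and the checks are deliberately minimal (a sketch, not a schema validator).

```bash
# Shallow success-criteria probe for context-package.json
validate_context_package() {
  local pkg="$1"
  [ -f "$pkg" ] || { echo "package missing"; return 1; }
  grep -q '"metadata"' "$pkg" || { echo "metadata missing"; return 1; }
  grep -q '"assets"'   "$pkg" || { echo "assets missing"; return 1; }
  echo "package valid"
}
```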
## Related Commands
- `/analysis:run` - Consumes output of this command for analysis
- `/workflow:plan` - Calls this command to gather context
- `/workflow:status` - Can display context collection status


@@ -34,7 +34,7 @@ type: strategic-guideline
### Permission Framework
- **Gemini/Qwen Write Access**: Use `--approval-mode yolo` when tools need to create/modify files
- **Codex Write Access**: Always use `-s danger-full-access` for development and file operations
- **Codex Write Access**: Always use `-s danger-full-access` and `--skip-git-repo-check` for development and file operations
- **Auto-approval Protocol**: Enable automatic tool approvals for autonomous workflow execution
## 🎯 Universal Command Template
@@ -60,7 +60,7 @@ RULES: [template reference and constraints]
"
# Codex Development
codex -C [directory] --full-auto exec "
codex -C [directory] --skip-git-repo-check --full-auto exec "
PURPOSE: [clear development goal]
TASK: [specific development task]
CONTEXT: [file references and memory context]
@@ -86,12 +86,12 @@ Tools execute in current working directory:
### Rules Field Format
```bash
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/[category]/[template].txt) | [constraints]
RULES: $(cat "$HOME/.claude/workflows/cli-templates/prompts/[category]/[template].txt") | [constraints]
```
**Examples**:
- Single template: `$(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt) | Focus on security`
- Multiple templates: `$(cat template1.txt) $(cat template2.txt) | Enterprise standards`
- Single template: `$(cat "$HOME/.claude/workflows/cli-templates/prompts/analysis/pattern.txt") | Focus on security`
- Multiple templates: `$(cat "template1.txt") $(cat "template2.txt") | Enterprise standards`
- No template: `Focus on security patterns, include dependency analysis`
- File patterns: `@{src/**/*.ts,CLAUDE.md} - Stay within scope`
@@ -156,7 +156,7 @@ PURPOSE: Understand codebase architecture
TASK: Analyze project structure and identify patterns
CONTEXT: @{src/**/*.ts,CLAUDE.md} Previous analysis of auth system
EXPECTED: Architecture overview and integration points
-RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/architecture.txt) | Focus on integration points
+RULES: $(cat "~/.claude/workflows/cli-templates/prompts/analysis/architecture.txt") | Focus on integration points
"
# Project Analysis (in different directory)
@@ -165,7 +165,7 @@ PURPOSE: Compare authentication patterns
TASK: Analyze auth implementation in related project
CONTEXT: @{src/auth/**/*} Current project context from session memory
EXPECTED: Pattern comparison and recommendations
-RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt) | Focus on architectural differences
+RULES: $(cat "~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt") | Focus on architectural differences
"
# Architecture Design (with Qwen)
@@ -174,16 +174,16 @@ PURPOSE: Design authentication system architecture
TASK: Create modular JWT-based auth system design
CONTEXT: @{src/auth/**/*} Existing patterns and requirements
EXPECTED: Complete architecture with code scaffolding
-RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/architecture.txt) | Focus on modularity and security
+RULES: $(cat "~/.claude/workflows/cli-templates/prompts/analysis/architecture.txt") | Focus on modularity and security
"
# Feature Development (in target directory)
-codex -C path/to/project --full-auto exec "
+codex -C path/to/project --skip-git-repo-check --full-auto exec "
PURPOSE: Implement user authentication
TASK: Create JWT-based authentication system
CONTEXT: @{src/auth/**/*} Database schema from session memory
EXPECTED: Complete auth module with tests
-RULES: $(cat ~/.claude/workflows/cli-templates/prompts/development/feature.txt) | Follow security best practices
+RULES: $(cat "~/.claude/workflows/cli-templates/prompts/development/feature.txt") | Follow security best practices
" -s danger-full-access
# Code Review Preparation
@@ -192,7 +192,7 @@ PURPOSE: Prepare comprehensive code review
TASK: Analyze code changes and identify potential issues
CONTEXT: @{**/*.modified} Recent changes discussed in last session
EXPECTED: Review checklist and improvement suggestions
-RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/quality.txt) | Focus on maintainability
+RULES: $(cat "~/.claude/workflows/cli-templates/prompts/analysis/quality.txt") | Focus on maintainability
"
```
@@ -220,10 +220,10 @@ For every development task:
- **Best For**: System design, code scaffolding, architectural planning
### Codex
-- **Command**: `codex --full-auto exec`
+- **Command**: `codex --skip-git-repo-check --full-auto exec`
- **Strengths**: Autonomous development, mathematical reasoning
- **Best For**: Implementation, testing, automation
-- **Required**: `-s danger-full-access` for development
+- **Required**: `-s danger-full-access` and `--skip-git-repo-check` for development
### File Patterns
- All files: `@{**/*}`
@@ -257,7 +257,7 @@ cd src/auth && ~/.claude/scripts/gemini-wrapper -p "analyze auth patterns"
cd src/auth && ~/.claude/scripts/qwen-wrapper -p "design auth architecture"
# Focused implementation (Codex)
-codex -C src/auth --full-auto exec "analyze auth implementation"
+codex -C src/auth --skip-git-repo-check --full-auto exec "analyze auth implementation"
# Multi-scope (stay in root)
~/.claude/scripts/gemini-wrapper -p "CONTEXT: @{src/auth/**/*,src/api/**/*}"
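The `cd`-based variants above change the shell's directory for everything that follows. Wrapping the `cd` in parentheses confines it to a subshell, which is handy when alternating focused and multi-scope calls. In this sketch, `basename "$(pwd)"` stands in for the real wrapper invocations:

```shell
# Parentheses run cd in a subshell, so the caller's directory is untouched.
d="${TMPDIR:-/tmp}/scope-demo"; mkdir -p "$d/src/auth"
cd "$d"
( cd src/auth && basename "$(pwd)" )   # the tool would run inside src/auth
basename "$(pwd)"                      # still in the project root
```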

View File

@@ -59,6 +59,12 @@ mcp__code-index__refresh_index() # refresh after git operations
- **Locate files**: `find_files(pattern="src/**/*.tsx")`
- **Refresh index**: `refresh_index()` (after git operations)
+**File search test results**:
+- ✅ `find_files(pattern="*.md")` - matches all Markdown files
+- ✅ `find_files(pattern="*complete*")` - wildcard match on file names
+- ❌ `find_files(pattern="complete.md")` - exact-name match may fail
+- 📝 Prefer wildcard patterns for more reliable results
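The same pitfall is easy to reproduce with plain shell globs, which makes the root cause visible: patterns are matched against paths, so an unanchored exact name only matches files at the top level. This is illustrative only; the MCP index may use slightly different matching semantics.

```shell
# Bare globs do not recurse, so exact names miss files in subdirectories.
d="${TMPDIR:-/tmp}/glob-demo"; rm -rf "$d"; mkdir -p "$d/docs"
touch "$d/notes.md" "$d/docs/complete.md"
cd "$d"
echo *.md         # non-recursive: only notes.md
echo docs/*.md    # scoped to a directory: docs/complete.md
echo *complete*   # no top-level match: the literal pattern echoes back
```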
## 📊 Tool Selection Matrix
| Task | MCP Tool | Use Case | Integration |
@@ -106,6 +112,8 @@ codex -C src/async --full-auto exec "Apply modern async patterns" -s danger-full
- **Refresh after git ops** - Keep index synchronized
- **Pattern specificity** - Use precise regex patterns for better results
- **File patterns** - Combine with glob patterns for targeted search
+- **Glob pattern matching** - Use `*.md`, `*complete*` patterns for file discovery
+- **Exact vs wildcard** - Exact names may fail, use wildcards for better results
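Pattern specificity can be demonstrated with plain `grep`, whose regex matching the index search resembles. The file contents below are made up for the sketch:

```shell
# Anchored, specific patterns cut noise compared with broad ones.
d="${TMPDIR:-/tmp}/regex-demo"; mkdir -p "$d"
printf 'async function login() {}\nfunction logout() {}\n' > "$d/auth.ts"
grep -c 'function' "$d/auth.ts"         # broad: counts both lines
grep -c '^async function' "$d/auth.ts"  # anchored: counts only the async one
```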
### Exa Code Context
- **Use "dynamic" tokens** for efficiency