Enhance workflows and commands for intelligent tools strategy

- Updated intelligent-tools-strategy.md to include `--skip-git-repo-check` for Codex write access and development commands.
- Improved context gathering and analysis processes in mcp-tool-strategy.md with additional examples and guidelines for file searching.
- Introduced new command concept-enhanced.md for enhanced intelligent analysis with parallel CLI execution and design blueprint generation.
- Added context-gather.md command for intelligent collection of project context based on task descriptions, generating standardized JSON context packages.
catlog22
2025-09-29 23:30:03 +08:00
parent 8b907ac80f
commit 7e4d370d45
16 changed files with 963 additions and 2106 deletions

View File

@@ -80,7 +80,7 @@ allowed-tools: TodoWrite(*), Read(*), Write(*), Bash(*), Glob(*)
```
.workflow/WFS-[topic]/.brainstorming/
├── topic-framework.md # ★ STRUCTURED FRAMEWORK DOCUMENT
└── session.json # Framework metadata and role assignments
└── workflow-session.json # Framework metadata and role assignments
```
**Topic Framework Template**:

View File

@@ -24,14 +24,14 @@ allowed-tools: SlashCommand(*), Task(*), TodoWrite(*), Read(*), Write(*), Bash(*
- **Multi-role**: Complex topics automatically select 2-3 complementary roles
- **Default**: `product-manager` if no clear match
**Template Loading**: `bash($(cat ~/.claude/workflows/cli-templates/planning-roles/<role-name>.md))`
**Template Loading**: `bash($(cat "~/.claude/workflows/cli-templates/planning-roles/<role-name>.md"))`
**Template Source**: `.claude/workflows/cli-templates/planning-roles/`
**Available Roles**: business-analyst, data-architect, feature-planner, innovation-lead, product-manager, security-expert, system-architect, test-strategist, ui-designer, user-researcher
**Example**:
```bash
bash($(cat ~/.claude/workflows/cli-templates/planning-roles/system-architect.md))
bash($(cat ~/.claude/workflows/cli-templates/planning-roles/ui-designer.md))
bash($(cat "~/.claude/workflows/cli-templates/planning-roles/system-architect.md"))
bash($(cat "~/.claude/workflows/cli-templates/planning-roles/ui-designer.md"))
```
## Core Workflow
@@ -78,7 +78,7 @@ Auto command coordinates independent specialized commands:
### Simplified Processing Standards
**Core Principles**:
1. **Minimal preprocessing** - Only session.json and basic role selection
1. **Minimal preprocessing** - Only workflow-session.json and basic role selection
2. **Agent autonomy** - Agents handle their own context and validation
3. **Parallel execution** - Multiple agents can work simultaneously
4. **Post-processing synthesis** - Integration happens after agent completion
@@ -139,12 +139,12 @@ Task(subagent_type="conceptual-planning-agent",
2. **load_role_template**
- Action: Load {role-name} planning template
- Command: bash($(cat ~/.claude/workflows/cli-templates/planning-roles/{role}.md))
- Command: bash($(cat "~/.claude/workflows/cli-templates/planning-roles/{role}.md"))
- Output: role_template
3. **load_session_metadata**
- Action: Load session metadata and topic description
- Command: bash(cat .workflow/WFS-{topic}/.brainstorming/session.json 2>/dev/null || echo '{}')
- Command: bash(cat .workflow/WFS-{topic}/.brainstorming/workflow-session.json 2>/dev/null || echo '{}')
- Output: session_metadata
### Implementation Context
@@ -157,11 +157,11 @@ Task(subagent_type="conceptual-planning-agent",
### Session Context
**Workflow Directory**: .workflow/WFS-{topic}/.brainstorming/
**Output Directory**: .workflow/WFS-{topic}/.brainstorming/{role}/
**Session JSON**: .workflow/WFS-{topic}/.brainstorming/session.json
**Session JSON**: .workflow/WFS-{topic}/.brainstorming/workflow-session.json
### Dependencies & Context
**Topic**: {user-provided-topic}
**Role Template**: ~/.claude/workflows/cli-templates/planning-roles/{role}.md
**Role Template**: "~/.claude/workflows/cli-templates/planning-roles/{role}.md"
**User Requirements**: To be gathered through interactive questioning
## Completion Requirements
@@ -172,7 +172,7 @@ Task(subagent_type="conceptual-planning-agent",
5. Create single comprehensive deliverable in OUTPUT_LOCATION:
- analysis.md (structured analysis addressing all topic framework points with role-specific insights)
6. Include framework reference: @../topic-framework.md in analysis.md
7. Update session.json with completion status",
7. Update workflow-session.json with completion status",
description="Execute {role-name} brainstorming analysis")
```
@@ -216,7 +216,7 @@ TodoWrite({
activeForm: "Executing artifacts command for topic framework"
},
{
content: "Select roles and create session.json with framework references",
content: "Select roles and create workflow-session.json with framework references",
status: "pending",
activeForm: "Selecting roles and creating session metadata"
},
@@ -247,7 +247,7 @@ TodoWrite({
activeForm: "Initializing brainstorming session"
},
{
content: "Select roles for topic analysis and create session.json",
content: "Select roles for topic analysis and create workflow-session.json",
status: "in_progress", // Mark current task as in_progress
activeForm: "Selecting roles and creating session metadata"
},

View File

@@ -18,6 +18,7 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag
## Core Rules
**Complete entire workflow autonomously without user interruption, using TodoWrite for comprehensive progress tracking.**
**Execute all discovered pending tasks sequentially until workflow completion or blocking dependency.**
**Auto-complete session when all tasks finished: Call `/workflow:session:complete` upon workflow completion.**
## Core Responsibilities
- **Session Discovery**: Identify and select active workflow sessions
@@ -27,6 +28,7 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag
- **Flow Control Execution**: Execute pre-analysis steps and context accumulation
- **Status Synchronization**: Update task JSON files and workflow state
- **Autonomous Completion**: Continue execution until all tasks complete or reach blocking state
- **Session Auto-Complete**: Call `/workflow:session:complete` when all workflow tasks finished
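A rough sketch of how the auto-complete check might be performed, assuming each task JSON exposes a top-level `status` field as in the task schemas shown later; the session path is illustrative:
```bash
# Count tasks that are not yet completed; if none remain, the session can be closed.
pending=$(find .workflow/WFS-session/.task -name '*.json' \
  -exec jq -r '.status' {} \; | grep -cv 'completed')
if [ "$pending" -eq 0 ]; then
  echo "All tasks finished - invoke /workflow:session:complete"
fi
```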
## Execution Philosophy
- **Discovery-first**: Auto-discover existing plans and tasks
@@ -41,9 +43,10 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag
### Flow Control Rules
1. **Auto-trigger**: When `task.flow_control.pre_analysis` array exists in task JSON, agents execute these steps
2. **Sequential Processing**: Agents execute steps in order, accumulating context
3. **Variable Passing**: Agents use `[variable_name]` syntax to reference step outputs
2. **Sequential Processing**: Agents execute steps in order, accumulating context including artifacts
3. **Variable Passing**: Agents use `[variable_name]` syntax to reference step outputs including artifact content
4. **Error Handling**: Agents follow step-specific error strategies (`fail`, `skip_optional`, `retry_once`)
5. **Artifacts Priority**: When artifacts exist in task.context.artifacts, load synthesis specifications first
### Execution Pattern
```
@@ -53,10 +56,11 @@ Step 3: implement_solution [pattern_analysis] [dependency_context] → implement
```
### Context Accumulation Process (Executed by Agents)
- **Load Artifacts**: Agents retrieve synthesis specifications and brainstorming outputs from `context.artifacts`
- **Load Dependencies**: Agents retrieve summaries from `context.depends_on` tasks
- **Execute Analysis**: Agents run CLI tools with accumulated context
- **Execute Analysis**: Agents run CLI tools with accumulated context including artifacts
- **Prepare Implementation**: Agents build comprehensive context for implementation
- **Continue Implementation**: Agents use all accumulated context for task execution
- **Continue Implementation**: Agents use all accumulated context including artifacts for task execution
## Execution Lifecycle
@@ -221,30 +225,44 @@ TodoWrite({
**Comprehensive context preparation** for autonomous agent execution:
#### Context Sources (Priority Order)
1. **Complete Task JSON**: Full task definition including all fields
2. **Flow Control Context**: Accumulated outputs from pre_analysis steps
3. **Dependency Summaries**: Previous task completion summaries
4. **Session Context**: Workflow paths and session metadata
5. **Inherited Context**: Parent task context and shared variables
1. **Complete Task JSON**: Full task definition including all fields and artifacts
2. **Artifacts Context**: Brainstorming outputs and synthesis specifications from task.context.artifacts
3. **Flow Control Context**: Accumulated outputs from pre_analysis steps (including artifact loading)
4. **Dependency Summaries**: Previous task completion summaries
5. **Session Context**: Workflow paths and session metadata
6. **Inherited Context**: Parent task context and shared variables
#### Context Assembly Process
```
1. Load Task JSON → Base context
2. Execute Flow Control → Accumulated context
3. Load Dependencies → Dependency context
4. Prepare Session Paths → Session context
5. Combine All → Complete agent context
1. Load Task JSON → Base context (including artifacts array)
2. Load Artifacts → Synthesis specifications and brainstorming outputs
3. Execute Flow Control → Accumulated context (with artifact loading steps)
4. Load Dependencies → Dependency context
5. Prepare Session Paths → Session context
6. Combine All → Complete agent context with artifact integration
```
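A minimal sketch of this assembly under the session layout used throughout this document; the session name is a placeholder:
```bash
# 1-2. Load task JSON and the synthesis specification (highest-priority artifact)
task_json=$(cat .workflow/WFS-session/.task/IMPL-1.1.json)
spec=.workflow/WFS-session/.brainstorming/synthesis-specification.md
[ -f "$spec" ] && synthesis=$(cat "$spec") || synthesis=""
# 4. Load dependency summaries listed in context.depends_on
deps=$(echo "$task_json" | jq -r '.context.depends_on[]?' \
  | xargs -I{} cat .workflow/WFS-session/.summaries/{}-summary.md 2>/dev/null)
# 3, 5-6. Flow-control outputs, session paths, and the pieces above are then
# combined into the agent context package shown below.
```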
#### Agent Context Package Structure
```json
{
"task": { /* Complete task JSON */ },
"task": { /* Complete task JSON with artifacts array */ },
"artifacts": {
"synthesis_specification": { "path": ".workflow/WFS-session/.brainstorming/synthesis-specification.md", "priority": "highest" },
"topic_framework": { "path": ".workflow/WFS-session/.brainstorming/topic-framework.md", "priority": "medium" },
"role_analyses": [ /* Individual role analysis files */ ],
"available_artifacts": [ /* All detected brainstorming artifacts */ ]
},
"flow_context": {
"step_outputs": { "pattern_analysis": "...", "dependency_context": "..." }
"step_outputs": {
"synthesis_specification": "...",
"individual_artifacts": "...",
"pattern_analysis": "...",
"dependency_context": "..."
}
},
"session": {
"workflow_dir": ".workflow/WFS-session/",
"brainstorming_dir": ".workflow/WFS-session/.brainstorming/",
"todo_list_path": ".workflow/WFS-session/TODO_LIST.md",
"summaries_dir": ".workflow/WFS-session/.summaries/",
"task_json_path": ".workflow/WFS-session/.task/IMPL-1.1.json"
@@ -255,10 +273,11 @@ TodoWrite({
```
#### Context Validation Rules
- **Task JSON Complete**: All 5 fields present and valid
- **Flow Control Ready**: All pre_analysis steps completed if present
- **Task JSON Complete**: All 5 fields present and valid, including artifacts array in context
- **Artifacts Available**: Synthesis specifications and brainstorming outputs accessible
- **Flow Control Ready**: All pre_analysis steps completed including artifact loading steps
- **Dependencies Loaded**: All depends_on summaries available
- **Session Paths Valid**: All workflow paths exist and accessible
- **Session Paths Valid**: All workflow paths exist and accessible, including .brainstorming directory
- **Agent Assignment**: Valid agent type specified in meta.agent
### 4. Agent Execution Pattern
@@ -287,11 +306,22 @@ Task(subagent_type="{meta.agent}",
## STEP 3: Flow Control Execution (if flow_control.pre_analysis exists)
**AGENT RESPONSIBILITY**: Execute pre_analysis steps sequentially from loaded JSON:
**PRIORITY: Artifact Loading Steps First**
1. **Load Synthesis Specification** (if present): Priority artifact loading for consolidated design
2. **Load Individual Artifacts** (fallback): Load role-specific brainstorming outputs if synthesis unavailable
3. **Execute Remaining Steps**: Continue with other pre_analysis steps
For each step in flow_control.pre_analysis array:
1. Execute step.command with variable substitution
1. Execute step.command/commands with variable substitution (supports both a single command and a commands array)
2. Store output to step.output_to variable
3. Handle errors per step.on_error strategy
4. Pass accumulated variables to next step
3. Handle errors per step.on_error strategy (skip_optional, fail, retry_once)
4. Pass accumulated variables to next step including artifact context
**Special Artifact Loading Commands**:
- Use `bash(ls path 2>/dev/null || echo 'file not found')` for artifact existence checks
- Use `Read(path)` for loading artifact content
- Use `find` commands for discovering multiple artifact files
- Reference artifacts in subsequent steps using output variables: [synthesis_specification], [individual_artifacts]
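For example, a hedged sketch of the artifact-loading pattern (session name and paths follow the examples used in this document):
```bash
# Existence check for the consolidated synthesis specification
spec=.workflow/WFS-session/.brainstorming/synthesis-specification.md
ls "$spec" 2>/dev/null || echo 'synthesis specification not found'
# Fallback discovery of individual role analyses when no synthesis exists
find .workflow/WFS-session/.brainstorming -name 'analysis.md' 2>/dev/null
# Loaded content is stored to output variables such as [synthesis_specification]
# and [individual_artifacts], then referenced by later pre_analysis steps.
```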
## STEP 4: Implementation Context (From JSON context field)
**Requirements**: Use context.requirements array from JSON
@@ -299,8 +329,9 @@ Task(subagent_type="{meta.agent}",
**Acceptance Criteria**: Use context.acceptance array from JSON
**Dependencies**: Use context.depends_on array from JSON
**Parent Context**: Use context.inherited object from JSON
**Artifacts**: Use context.artifacts array from JSON (synthesis specifications, brainstorming outputs)
**Target Files**: Use flow_control.target_files array from JSON
**Implementation Approach**: Use flow_control.implementation_approach object from JSON
**Implementation Approach**: Use flow_control.implementation_approach object from JSON (with artifact integration)
## STEP 5: Session Context (Provided by workflow:execute)
**Workflow Directory**: {session.workflow_dir}
@@ -361,10 +392,36 @@ Task(subagent_type="{meta.agent}",
"focus_paths": ["src/path1", "src/path2"],
"acceptance": ["criteria1", "criteria2"],
"depends_on": ["IMPL-1.1"],
"inherited": { "from": "parent", "context": ["info"] }
"inherited": { "from": "parent", "context": ["info"] },
"artifacts": [
{
"type": "synthesis_specification",
"source": "brainstorm_synthesis",
"path": ".workflow/WFS-[session]/.brainstorming/synthesis-specification.md",
"priority": "highest",
"contains": "complete_integrated_specification"
},
{
"type": "individual_role_analysis",
"source": "brainstorm_roles",
"path": ".workflow/WFS-[session]/.brainstorming/[role]/analysis.md",
"priority": "low",
"contains": "role_specific_analysis_fallback"
}
]
},
"flow_control": {
"pre_analysis": [
{
"step": "load_synthesis_specification",
"action": "Load consolidated synthesis specification from brainstorming",
"commands": [
"bash(ls .workflow/WFS-[session]/.brainstorming/synthesis-specification.md 2>/dev/null || echo 'synthesis specification not found')",
"Read(.workflow/WFS-[session]/.brainstorming/synthesis-specification.md)"
],
"output_to": "synthesis_specification",
"on_error": "skip_optional"
},
{
"step": "step_name",
"command": "bash_command",
@@ -372,7 +429,10 @@ Task(subagent_type="{meta.agent}",
"on_error": "skip_optional|fail|retry_once"
}
],
"implementation_approach": { "task_description": "...", "modification_points": ["..."] },
"implementation_approach": {
"task_description": "Implement following consolidated synthesis specification...",
"modification_points": ["Apply synthesis specification requirements..."]
},
"target_files": ["file:function:lines"]
}
}
@@ -505,5 +565,5 @@ fi
### Integration
- **Planning**: `/workflow:plan` → `/workflow:execute` → `/workflow:review`
- **Recovery**: `/workflow:session:tatus --validate` → `/workflow:execute`
- **Recovery**: `/workflow:status --validate` → `/workflow:execute`

View File

@@ -65,7 +65,7 @@ Creates implementation plans by orchestrating intelligent context gathering and
### Session ID Transmission Guidelines ⚠️ CRITICAL
- **Format**: `WFS-[topic-slug]` from active session markers
- **Usage**: `/context:gather --session WFS-[id]` and `/analysis:run --session WFS-[id]`
- **Usage**: `/workflow:tools:context-gather --session WFS-[id]` and `/workflow:tools:plan-enchanced --session WFS-[id]`
- **Rule**: ALL modular commands MUST receive current session ID for context continuity
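A minimal sketch of the resolution and hand-off, assuming the active-marker convention used by the other workflow commands in this document:
```bash
# Resolve the current session ID from the active session marker
SESSION_ID=$(find .workflow/ -name '.active-*' | head -1 | sed 's/.*active-//')
# The same ID is then passed to both modular commands:
#   /workflow:tools:context-gather --session ${SESSION_ID}
#   /workflow:tools:plan-enchanced --session ${SESSION_ID}
```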
### Brainstorming Artifacts Integration ⚠️ NEW FEATURE
@@ -84,13 +84,13 @@ Creates implementation plans by orchestrating intelligent context gathering and
4. **Context Preparation**: Load session state and prepare for planning
### Phase 2: Context Gathering
1. **Context Collection**: Execute `/context:gather` with task description and session ID
1. **Context Collection**: Execute `/workflow:tools:context-gather` with task description and session ID
2. **Asset Discovery**: Gather relevant documentation, code, and configuration files
3. **Context Packaging**: Generate standardized context-package.json
4. **Validation**: Ensure context package contains sufficient information
### Phase 3: Intelligent Analysis
1. **Analysis Execution**: Run `/analysis:run` with context package and session ID
1. **Analysis Execution**: Run `/workflow:tools:plan-enchanced` with context package and session ID
2. **Tool Selection**: Automatically select optimal analysis tools (Gemini/Qwen/Codex)
3. **Result Generation**: Produce structured ANALYSIS_RESULTS.md
4. **Validation**: Verify analysis completeness and task recommendations
@@ -112,6 +112,9 @@ Creates implementation plans by orchestrating intelligent context gathering and
4. **Continuous Tracking**: Maintain TodoWrite throughout entire planning workflow
### TodoWrite Tool Usage
**Core Rule**: Monitor slash command completion before proceeding to next step
```javascript
// Initialize planning workflow tracking
TodoWrite({

View File

@@ -11,309 +11,135 @@ examples:
# Workflow Test Generation Command
## Overview
Automatically generates comprehensive test workflows based on completed implementation tasks. **Creates dedicated test session with full test coverage planning**, including unit tests, integration tests, and validation workflows that mirror the implementation structure.
Analyzes completed implementation sessions and generates comprehensive test requirements, then calls workflow:plan to create test workflow.
## Core Rules
**Analyze completed implementation workflows to generate comprehensive test coverage workflows.**
**Create dedicated test session with systematic test task decomposition following implementation patterns.**
## Core Responsibilities
- **Implementation Analysis**: Analyze completed tasks and their deliverables
- **Test Coverage Planning**: Generate comprehensive test strategies for all implementations
- **Test Workflow Creation**: Create structured test session following workflow architecture
- **Task Decomposition**: Break down test requirements into executable test tasks
- **Dependency Mapping**: Establish test dependencies based on implementation relationships
- **Agent Assignment**: Assign appropriate test agents for different test types
## Execution Philosophy
- **Coverage-driven**: Ensure all implemented features have corresponding tests
- **Implementation-aware**: Tests reflect actual implementation patterns and dependencies
- **Systematic approach**: Follow established workflow patterns for test planning
- **Agent-optimized**: Assign specialized agents for different test types
- **Continuous validation**: Include ongoing test execution and maintenance tasks
## Test Generation Lifecycle
### Phase 1: Implementation Discovery
1. **Session Analysis**: Identify active or recently completed implementation session
2. **Task Analysis**: Parse completed IMPL-* tasks and their deliverables
3. **Code Analysis**: Examine implemented files and functionality
4. **Pattern Recognition**: Identify testing requirements from implementation patterns
### Phase 2: Test Strategy Planning
1. **Coverage Mapping**: Map implementation components to test requirements
2. **Test Type Classification**: Categorize tests (unit, integration, e2e, performance)
3. **Dependency Analysis**: Establish test execution dependencies
4. **Tool Selection**: Choose appropriate testing frameworks and tools
### Phase 3: Test Workflow Creation
1. **Session Creation**: Create dedicated test session `WFS-test-[base-session]`
2. **Plan Generation**: Create TEST_PLAN.md with comprehensive test strategy
3. **Task Decomposition**: Generate TEST-* task definitions following workflow patterns
4. **Agent Assignment**: Assign specialized test agents for execution
### Phase 4: Test Session Setup
1. **Structure Creation**: Establish test workflow directory structure
2. **Context Preparation**: Link test tasks to implementation context
3. **Flow Control Setup**: Configure test execution flow and dependencies
4. **Documentation Generation**: Create test documentation and tracking files
## Test Discovery & Analysis Process
### Implementation Analysis
```
├── Load completed implementation session
├── Analyze IMPL_PLAN.md and completed tasks
├── Scan .summaries/ for implementation deliverables
├── Examine target_files from task definitions
├── Identify implemented features and components
├── Map code coverage requirements
└── Generate test coverage matrix
```
### Test Pattern Recognition
```
Implementation Pattern → Test Pattern
├── API endpoints → API testing + contract testing
├── Database models → Data validation + migration testing
├── UI components → Component testing + user workflow testing
├── Business logic → Unit testing + integration testing
├── Authentication → Security testing + access control testing
├── Configuration → Environment testing + deployment testing
└── Performance critical → Load testing + performance testing
```
## Test Workflow Structure
### Generated Test Session Structure
```
.workflow/WFS-test-[base-session]/
├── TEST_PLAN.md # Comprehensive test planning document
├── TODO_LIST.md # Test execution progress tracking
├── .process/
│ ├── TEST_ANALYSIS.md # Test coverage analysis results
│ └── COVERAGE_MATRIX.md # Implementation-to-test mapping
├── .task/
│ ├── TEST-001.json # Unit test tasks
│ ├── TEST-002.json # Integration test tasks
│ ├── TEST-003.json # E2E test tasks
│ └── TEST-004.json # Performance test tasks
├── .summaries/ # Test execution summaries
└── .context/
├── impl-context.md # Implementation context reference
└── test-fixtures.md # Test data and fixture planning
```
## Test Task Types & Agent Assignment
### Task Categories
1. **Unit Tests** (`TEST-U-*`)
- **Agent**: `code-review-test-agent`
- **Scope**: Individual function/method testing
- **Dependencies**: Implementation files
2. **Integration Tests** (`TEST-I-*`)
- **Agent**: `code-review-test-agent`
- **Scope**: Component interaction testing
- **Dependencies**: Unit tests completion
3. **End-to-End Tests** (`TEST-E-*`)
- **Agent**: `general-purpose`
- **Scope**: User workflow and system testing
- **Dependencies**: Integration tests completion
4. **Performance Tests** (`TEST-P-*`)
- **Agent**: `code-developer`
- **Scope**: Load, stress, and performance validation
- **Dependencies**: E2E tests completion
5. **Security Tests** (`TEST-S-*`)
- **Agent**: `code-review-test-agent`
- **Scope**: Security validation and vulnerability testing
- **Dependencies**: Implementation completion
6. **Documentation Tests** (`TEST-D-*`)
- **Agent**: `doc-generator`
- **Scope**: Documentation validation and example testing
- **Dependencies**: Feature tests completion
## Test Task JSON Schema
Each test task follows the 5-field workflow architecture with test-specific extensions:
### Basic Test Task Structure
```json
{
"id": "TEST-U-001",
"title": "Unit tests for authentication service",
"status": "pending",
"meta": {
"type": "unit-test",
"agent": "code-review-test-agent",
"test_framework": "jest",
"coverage_target": "90%",
"impl_reference": "IMPL-001"
},
"context": {
"requirements": "Test all authentication service functions with edge cases",
"focus_paths": ["src/auth/", "tests/unit/auth/"],
"acceptance": [
"All auth service functions tested",
"Edge cases covered",
"90% code coverage achieved",
"Tests pass in CI/CD pipeline"
],
"depends_on": [],
"impl_context": "IMPL-001-summary.md",
"test_data": "auth-test-fixtures.json"
},
"flow_control": {
"pre_analysis": [
{
"step": "load_impl_context",
"action": "Load implementation context and deliverables",
"command": "bash(cat .workflow/WFS-[base-session]/.summaries/IMPL-001-summary.md)",
"output_to": "impl_context"
},
{
"step": "analyze_test_coverage",
"action": "Analyze existing test coverage and gaps",
"command": "bash(find src/auth/ -name '*.js' -o -name '*.ts' | head -20)",
"output_to": "coverage_analysis"
}
],
"implementation_approach": "test-driven",
"target_files": [
"tests/unit/auth/auth-service.test.js",
"tests/unit/auth/auth-utils.test.js",
"tests/fixtures/auth-test-data.json"
]
}
}
```
## Test Context Management
### Implementation Context Integration
Test tasks automatically inherit context from corresponding implementation tasks:
```json
"context": {
"impl_reference": "IMPL-001",
"impl_summary": ".workflow/WFS-[base-session]/.summaries/IMPL-001-summary.md",
"impl_files": ["src/auth/service.js", "src/auth/middleware.js"],
"test_requirements": "derived from implementation acceptance criteria",
"coverage_requirements": "90% line coverage, 80% branch coverage"
}
```
### Flow Control for Test Execution
```json
"flow_control": {
"pre_analysis": [
{
"step": "load_impl_deliverables",
"action": "Load implementation files and analyze test requirements",
"command": "~/.claude/scripts/gemini-wrapper -p \"PURPOSE: Analyze implementation for test requirements TASK: Review [impl_files] and identify test cases CONTEXT: @{[impl_files]} EXPECTED: Comprehensive test case list RULES: Focus on edge cases and integration points\""
},
{
"step": "setup_test_environment",
"action": "Prepare test environment and fixtures",
"command": "codex --full-auto exec \"Setup test environment for [test_framework] with fixtures for [feature_name]\" -s danger-full-access"
}
]
}
```
## Session Management & Integration
### Test Session Creation Process
1. **Base Session Discovery**: Identify implementation session to test
2. **Test Session Creation**: Create `WFS-test-[base-session]` directory structure
3. **Context Linking**: Establish references to implementation context
4. **Active Marker**: Create `.active-test-[base-session]` marker for session management
### Integration with Execute Command
Test workflows integrate seamlessly with existing execute infrastructure:
- Use same TodoWrite progress tracking
- Follow same agent orchestration patterns
- Support same flow control mechanisms
- Maintain same session isolation and management
## Usage Examples
### Generate Tests for Completed Implementation
## Usage
```bash
# After completing an implementation workflow
/workflow:execute # Complete implementation tasks
# Generate comprehensive test workflow
/workflow:test-gen # Auto-detects active session
# Execute test workflow
/workflow:execute # Runs test tasks
/workflow:test-gen # Auto-detect active session
/workflow:test-gen WFS-session-id # Analyze specific session
```
### Generate Tests for Specific Session
## Dynamic Session ID Resolution
The `${SESSION_ID}` variable is dynamically resolved based on:
1. **Command argument**: If session-id provided as argument, use it directly
2. **Auto-detection**: If no argument, detect from active session markers
3. **Format**: Always in format `WFS-session-name`
```bash
# Generate tests for specific implementation session
/workflow:test-gen WFS-user-auth-system
# Check test workflow status
/workflow:status --session=WFS-test-user-auth-system
# Execute specific test category
/task:execute TEST-U-001 # Run unit tests
# Example resolution logic:
# If argument provided: SESSION_ID = "WFS-user-auth"
# If no argument: SESSION_ID = $(find .workflow/ -name '.active-*' | head -1 | sed 's/.*active-//')
```
### Multi-Phase Test Generation
## Implementation Flow
### Step 1: Identify Target Session
```bash
# Generate and execute tests in phases
/workflow:test-gen WFS-api-implementation
/task:execute TEST-U-* # Unit tests first
/task:execute TEST-I-* # Integration tests
/task:execute TEST-E-* # E2E tests last
# Auto-detect active session (if no session-id provided)
find .workflow/ -name '.active-*' | head -1 | sed 's/.*active-//'
# Use provided session-id or detected session-id
# SESSION_ID = provided argument OR detected active session
```
## Error Handling & Recovery
### Step 2: Get Session Start Time
```bash
cat .workflow/WFS-${SESSION_ID}/workflow-session.json | jq -r .created_at
```
### Implementation Analysis Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| No completed implementations | No IMPL-* tasks found | Complete implementation tasks first |
| Missing implementation context | Corrupted summaries | Regenerate summaries from task results |
| Invalid implementation files | File references broken | Update file paths and re-analyze |
### Step 3: Git Change Analysis (using session start time)
```bash
git log --since="$(cat .workflow/WFS-${SESSION_ID}/workflow-session.json | jq -r .created_at)" --name-only --pretty=format: | sort -u | grep -v '^$'
```
### Test Generation Errors
| Error | Cause | Recovery Strategy |
|-------|-------|------------------|
| Test framework not detected | No testing setup found | Prompt for test framework selection |
| Insufficient implementation context | Missing implementation details | Request additional implementation documentation |
| Test session collision | Test session already exists | Merge or create versioned test session |
### Step 4: Filter Code Files
```bash
git log --since="$(cat .workflow/WFS-${SESSION_ID}/workflow-session.json | jq -r .created_at)" --name-only --pretty=format: | sort -u | grep -E '\.(js|ts|jsx|tsx|py|java|go|rs)$'
```
## Key Benefits
### Step 5: Load Session Context
```bash
cat .workflow/WFS-${SESSION_ID}/.summaries/IMPL-*-summary.md 2>/dev/null
```
### Comprehensive Coverage
- **Implementation-driven**: Tests generated based on actual implementation patterns
- **Multi-layered**: Unit, integration, E2E, and specialized testing
- **Dependency-aware**: Test execution follows logical dependency chains
- **Agent-optimized**: Specialized agents for different test types
### Step 6: Extract Focus Paths
```bash
find .workflow/WFS-${SESSION_ID}/.task/ -name '*.json' -exec jq -r '.context.focus_paths[]?' {} \;
```
### Workflow Integration
- **Seamless execution**: Uses existing workflow infrastructure
- **Progress tracking**: Full TodoWrite integration for test progress
- **Context preservation**: Maintains links to implementation context
- **Session management**: Independent test sessions with proper isolation
### Step 7: Gemini Analysis and Planning Document Generation
```bash
cd project-root && ~/.claude/scripts/gemini-wrapper -p "
PURPOSE: Analyze implementation and generate comprehensive test planning document
TASK: Review changed files and implementation context to create detailed test planning document
CONTEXT: Changed files: [changed_files], Implementation summaries: [impl_summaries], Focus paths: [focus_paths]
EXPECTED: Complete test planning document including:
- Test strategy analysis
- Critical test scenarios identification
- Edge cases and error conditions
- Test priority matrix
- Resource requirements
- Implementation approach recommendations
- Specific test cases with acceptance criteria
RULES: Generate structured markdown document suitable for workflow planning. Focus on actionable test requirements based on actual implementation changes.
" > .workflow/WFS-${SESSION_ID}/.process/GEMINI_TEST_PLAN.md
```
### Maintenance & Evolution
- **Updateable**: Test workflows can evolve with implementation changes
- **Traceable**: Clear mapping from implementation to test requirements
- **Extensible**: Support for new test types and frameworks
- **Documentable**: Comprehensive test documentation and coverage reports
### Step 8: Generate Combined Test Requirements Document
```bash
mkdir -p .workflow/WFS-${SESSION_ID}/.process
```
## Integration Points
- **Planning**: Integrates with `/workflow:plan` for test planning
- **Execution**: Uses `/workflow:execute` for test task execution
- **Status**: Works with `/workflow:status` for test progress tracking
- **Documentation**: Coordinates with `/workflow:docs` for test documentation
- **Review**: Supports `/workflow:review` for test validation and coverage analysis
```bash
cat > .workflow/WFS-${SESSION_ID}/.process/TEST_REQUIREMENTS.md << EOF  # unquoted delimiter so ${SESSION_ID} expands
# Test Requirements Summary for WFS-${SESSION_ID}
## Analysis Data Sources
- Git change analysis results
- Implementation summaries and context
- Gemini-generated test planning document
## Reference Documents
- Detailed test plan: GEMINI_TEST_PLAN.md
- Implementation context: IMPL-*-summary.md files
## Integration Note
This document combines analysis data with Gemini-generated planning document for comprehensive test workflow generation.
EOF
```
### Step 9: Call Workflow Plan with Gemini Planning Document
```bash
/workflow:plan .workflow/WFS-${SESSION_ID}/.process/GEMINI_TEST_PLAN.md
```
## Simple Bash Commands
### Basic Operations
- **Find active session**: `find .workflow/ -name '.active-*'`
- **Get git changes**: `git log --since='date' --name-only`
- **Filter code files**: `grep -E '\.(js|ts|py)$'`
- **Load summaries**: `cat .workflow/WFS-*/.summaries/*.md`
- **Extract JSON data**: `jq -r '.context.focus_paths[]'`
- **Create directory**: `mkdir -p .workflow/session/.process`
- **Write file**: `cat > file << 'EOF'`
### Gemini CLI Integration
- **Planning command**: `~/.claude/scripts/gemini-wrapper -p "prompt" > GEMINI_TEST_PLAN.md`
- **Context loading**: Include changed files and implementation context
- **Document generation**: Creates comprehensive test planning document
- **Direct handoff**: Pass Gemini planning document to workflow:plan
## No Complex Logic
- No shell functions; only the `${SESSION_ID}` placeholder variable
- No conditional statements
- No loops or complex pipes
- Direct bash commands only
- Gemini CLI for intelligent analysis
## Related Commands
- `/workflow:plan` - Called to generate test workflow
- `/workflow:execute` - Executes generated test tasks
- `/workflow:status` - Shows test workflow progress

View File

@@ -0,0 +1,421 @@
---
name: plan-enchanced
description: Enhanced intelligent analysis with parallel CLI execution and design blueprint generation
usage: /workflow:tools:concept-enhanced --session <session_id> --context <context_package_path>
argument-hint: "--session WFS-session-id --context path/to/context-package.json"
examples:
- /workflow:tools:concept-enhanced --session WFS-auth --context .workflow/WFS-auth/.process/context-package.json
- /workflow:tools:concept-enhanced --session WFS-payment --context .workflow/WFS-payment/.process/context-package.json
---
# Enhanced Planning Command (/workflow:tools:concept-enhanced)
## Overview
Advanced intelligent planning engine with parallel CLI execution that processes standardized context packages, generates enhanced suggestions and design blueprints, and produces comprehensive analysis results with implementation strategies.
## Core Philosophy
- **Context-Driven**: Precise analysis based on comprehensive context
- **Intelligent Tool Selection**: Choose optimal analysis tools based on task characteristics
- **Parallel Execution**: Execute multiple CLI tools simultaneously for efficiency
- **Enhanced Suggestions**: Generate actionable recommendations and design blueprints
- **Write-Enabled**: Tools have full write permissions for implementation suggestions
- **Structured Output**: Generate standardized analysis reports with implementation roadmaps
## Core Responsibilities
- **Context Package Parsing**: Read and validate context-package.json
- **Parallel CLI Orchestration**: Execute multiple analysis tools simultaneously with write permissions
- **Enhanced Suggestions Generation**: Create actionable recommendations and design blueprints
- **Design Blueprint Creation**: Generate comprehensive technical implementation designs
- **Perspective Synthesis**: Collect and organize different tool viewpoints
- **Consensus Analysis**: Identify agreements and conflicts between tools
- **Summary Report Generation**: Output comprehensive ANALYSIS_RESULTS.md with implementation strategies
## Analysis Strategy Selection
### Tool Selection by Task Complexity
**Simple Tasks (≤3 modules)**:
- **Primary Tool**: Gemini (rapid understanding and pattern recognition)
- **Support Tool**: Code-index (structural analysis)
- **Execution Mode**: Single-round analysis, focus on existing patterns
**Medium Tasks (4-6 modules)**:
- **Primary Tool**: Gemini (comprehensive single-round analysis and architecture design)
- **Support Tools**: Code-index + Exa (external best practices)
- **Execution Mode**: Single comprehensive analysis covering understanding + architecture design
**Complex Tasks (>6 modules)**:
- **Primary Tools**: Gemini (single comprehensive analysis) + Codex (implementation validation)
- **Analysis Strategy**: Gemini handles understanding + architecture in one round, Codex validates implementation
- **Execution Mode**: Parallel execution - Gemini comprehensive analysis + Codex validation
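The tiers above map mechanically to an execution strategy; a sketch, with `module_count` standing in for however module scope is measured:
```bash
# Illustrative strategy selection from estimated module count
if [ "$module_count" -le 3 ]; then
  strategy="gemini-single-round"            # simple: patterns + code-index
elif [ "$module_count" -le 6 ]; then
  strategy="gemini-comprehensive"           # medium: + code-index + exa
else
  strategy="gemini-comprehensive+codex"     # complex: parallel Codex validation
fi
echo "Selected strategy: $strategy"
```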
### Tool Preferences by Tech Stack
```json
{
"frontend": {
"primary": "gemini",
"secondary": "codex",
"focus": ["component_design", "state_management", "ui_patterns"]
},
"backend": {
"primary": "codex",
"secondary": "gemini",
"focus": ["api_design", "data_flow", "security", "performance"]
},
"fullstack": {
"primary": "gemini",
"secondary": "codex",
"focus": ["system_architecture", "integration", "data_consistency"]
}
}
```
## Execution Lifecycle
### Phase 1: Validation & Preparation
1. **Session Validation**
- Verify session directory exists: `.workflow/{session_id}/`
- Load session metadata from `workflow-session.json`
- Validate session state and task context
2. **Context Package Validation**
- Verify context package exists at specified path
- Validate JSON format and structure
- Assess context package size and complexity
3. **Task Analysis & Classification**
- Parse task description and extract keywords
- Identify technical domain and complexity level
- Determine required analysis depth and scope
- Load existing session context and task summaries
4. **Tool Selection Strategy**
- **Simple/Medium Tasks**: Single Gemini comprehensive analysis
- **Complex Tasks**: Gemini comprehensive + Codex validation
- Load appropriate prompt templates and configurations
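A minimal sketch of the Phase 1 checks, with `session_id` and `context_path` standing in for the `--session` and `--context` arguments:
```bash
session_dir=".workflow/${session_id}"
[ -d "$session_dir" ] || { echo "session not found: $session_id"; exit 1; }
# Session metadata and context package must both parse as JSON
jq -e . "$session_dir/workflow-session.json" >/dev/null || exit 1
jq -e . "$context_path" >/dev/null || { echo "invalid context package"; exit 1; }
wc -c < "$context_path"   # rough size signal for complexity assessment
```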
### Phase 2: Analysis Preparation
1. **Workspace Setup**
- Create analysis output directory: `.workflow/{session_id}/.process/`
- Initialize log files and monitoring structures
- Set process limits and resource management
2. **Context Optimization**
- Filter high-priority assets from context package
- Organize project structure and dependencies
- Prepare template references and rule configurations
3. **Execution Environment**
- Configure CLI tools with write permissions
- Set timeout parameters and monitoring intervals
- Prepare error handling and recovery mechanisms
### Phase 3: Parallel Analysis Execution
1. **Gemini Comprehensive Analysis & Documentation**
- **Tool Configuration**: `gemini-wrapper --approval-mode yolo` for write permissions
- **Purpose**: Enhanced comprehensive analysis with actionable suggestions, design blueprints, and documentation generation
- **Expected Outputs**:
- Current State Analysis: Architecture patterns, code quality, technical debt, performance bottlenecks
- Enhanced Suggestions: Implementation blueprints, code organization, API specifications, security guidelines
- Implementation Roadmap: Phase-by-phase plans, CI/CD blueprints, testing strategies
- Actionable Examples: Code templates, configuration scripts, integration patterns
- Documentation Generation: Technical documentation, API docs, user guides, and README files
- **Prompt Template**:
```
PURPOSE: Generate comprehensive analysis and documentation for {task_description}
TASK: Analyze codebase, create implementation blueprints, and generate supporting documentation
CONTEXT: {context_package_assets}
EXPECTED:
1. Complete technical analysis with implementation strategy
2. Generated documentation files (README.md, API.md, GUIDE.md as needed)
3. Code examples and configuration templates
4. Implementation roadmap with phase-by-phase plans
RULES: Create both analysis blueprints AND user-facing documentation. Generate actual documentation files, not just analysis of documentation needs.
```
- **Output Location**: `.workflow/{session_id}/.process/gemini-enhanced-analysis.md` + generated docs
2. **Codex Implementation Validation** (Complex Tasks Only)
- **Tool Configuration**: `codex --skip-git-repo-check --full-auto exec -s danger-full-access` for implementation validation
- **Purpose**: Technical feasibility validation with write-enabled blueprint generation
- **Expected Outputs**:
- Feasibility Assessment: Complexity analysis, resource requirements, technology compatibility
- Implementation Validation: Quality recommendations, security assessments, testing frameworks
- Implementation Guides: Step-by-step procedures, configuration management, monitoring setup
- **Output Location**: `.workflow/{session_id}/.process/codex-validation-analysis.md`
3. **Parallel Execution Management**
- Launch both tools simultaneously for complex tasks
- Monitor execution progress with timeout controls
- Handle process completion and error scenarios
- Maintain execution logs for debugging and recovery
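A hedged sketch of the parallel launch for complex tasks, using the tool configurations and output paths quoted above; the prompt variables are placeholders:
```bash
out=".workflow/${session_id}/.process"
# Gemini comprehensive analysis (write-enabled)
~/.claude/scripts/gemini-wrapper --approval-mode yolo -p "$gemini_prompt" \
  > "$out/gemini-enhanced-analysis.md" 2> "$out/gemini.log" &
# Codex implementation validation (complex tasks only)
codex --skip-git-repo-check --full-auto exec "$codex_prompt" -s danger-full-access \
  > "$out/codex-validation-analysis.md" 2> "$out/codex.log" &
wait   # both must finish (or hit the 30-minute limit) before Phase 4 collection
```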
### Phase 4: Results Collection & Validation
1. **Output Validation & Collection**
- **Gemini Results**: Validate `gemini-enhanced-analysis.md` exists and contains complete analysis
- **Codex Results**: For complex tasks, validate `codex-validation-analysis.md` with implementation guidance
- **Fallback Processing**: Use execution logs if primary outputs are incomplete
- **Status Classification**: Mark each tool as completed, partial, failed, or skipped
2. **Quality Assessment**
- Verify analysis completeness against expected output structure
- Assess actionability of recommendations and blueprints
- Validate presence of concrete implementation examples
- Check coverage of technical requirements and constraints
3. **Analysis Integration Strategy**
- **Simple/Medium Tasks**: Direct integration of Gemini comprehensive analysis
- **Complex Tasks**: Synthesis of Gemini understanding with Codex validation
- **Conflict Resolution**: Identify and resolve conflicting recommendations
- **Priority Matrix**: Organize recommendations by implementation priority
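A rough sketch of the Phase 4 output-validation pass, using the file names defined in Phase 3:
```bash
for f in gemini-enhanced-analysis.md codex-validation-analysis.md; do
  if [ -s ".workflow/${session_id}/.process/$f" ]; then
    echo "$f: completed"
  else
    echo "$f: missing or empty - fall back to execution logs"   # partial/failed/skipped
  fi
done
```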
### Phase 5: ANALYSIS_RESULTS.md Generation
1. **Structured Report Assembly**
- **Executive Summary**: Task overview, timestamp, tools used, key findings
- **Analysis Results**: Complete Gemini analysis with optional Codex validation
- **Synthesis & Recommendations**: Consolidated implementation strategy with risk mitigation
- **Implementation Roadmap**: Phase-by-phase development plan with timelines
- **Quality Assurance**: Testing frameworks, monitoring strategies, success criteria
2. **Supplementary Outputs**
- **Machine-Readable Summary**: `analysis-summary.json` with structured metadata
- **Implementation Guidelines**: Phase-specific deliverables and checkpoints
- **Next Steps Matrix**: Immediate, short-term, and long-term action items
3. **Final Report Generation**
- **Primary Output**: `ANALYSIS_RESULTS.md` with comprehensive analysis and implementation blueprint
- **Secondary Output**: `analysis-summary.json` with machine-readable metadata and status
- **Report Structure**: Executive summary, analysis results, synthesis recommendations, implementation roadmap, quality assurance strategy
- **Action Items**: Immediate, short-term, and long-term next steps with success criteria
4. **Summary Report Finalization**
- Generate executive summary with key findings
- Create implementation priority matrix
- Provide next steps and action items
- Generate quality metrics and confidence scores
## Analysis Results Format
Generated ANALYSIS_RESULTS.md format (Multi-Tool Perspective Analysis):
```markdown
# Multi-Tool Analysis Results
## Task Overview
- **Description**: {task_description}
- **Context Package**: {context_package_path}
- **Analysis Tools**: {tools_used}
- **Analysis Timestamp**: {timestamp}
## Tool-Specific Analysis
### 🧠 Gemini Analysis (Enhanced Understanding & Architecture Blueprint)
**Focus**: Existing codebase understanding, enhanced suggestions, and comprehensive technical architecture design with actionable improvements
**Write Permissions**: Enabled for suggestion generation and blueprint creation
#### Current Architecture Assessment
- **Existing Patterns**: {identified_patterns}
- **Code Structure**: {current_structure}
- **Integration Points**: {integration_analysis}
- **Technical Debt**: {debt_assessment}
#### Compatibility Analysis
- **Framework Compatibility**: {framework_analysis}
- **Dependency Impact**: {dependency_analysis}
- **Migration Considerations**: {migration_notes}
#### Proposed Architecture
- **System Design**: {architectural_design}
- **Component Structure**: {component_design}
- **Data Flow**: {data_flow_design}
- **Interface Design**: {api_interface_design}
#### Implementation Strategy
- **Code Organization**: {code_structure_plan}
- **Module Dependencies**: {module_dependencies}
- **Testing Strategy**: {testing_approach}
### 🔧 Codex Analysis (Implementation Blueprints & Enhanced Validation)
**Focus**: Implementation feasibility, write-enabled blueprints, and comprehensive technical validation with concrete examples
**Write Permissions**: Full access for implementation suggestion generation
#### Feasibility Assessment
- **Technical Risks**: {implementation_risks}
- **Performance Impact**: {performance_analysis}
- **Resource Requirements**: {resource_assessment}
- **Maintenance Complexity**: {maintenance_analysis}
#### Enhanced Implementation Blueprints
- **Tool Selection**: {recommended_tools_with_examples}
- **Development Approach**: {detailed_development_strategy}
- **Quality Assurance**: {comprehensive_qa_recommendations}
- **Code Examples**: {implementation_code_samples}
- **Testing Blueprints**: {automated_testing_strategies}
- **Deployment Guidelines**: {deployment_implementation_guide}
## Synthesis & Consensus
### Enhanced Consolidated Recommendations
- **Architecture Approach**: {consensus_architecture_with_blueprints}
- **Implementation Priority**: {detailed_priority_matrix}
- **Risk Mitigation**: {comprehensive_risk_mitigation_strategy}
- **Performance Optimization**: {optimization_recommendations}
- **Security Enhancements**: {security_implementation_guide}
## Implementation Roadmap
### Phase 1: Foundation Setup
- **Infrastructure**: {infrastructure_setup_blueprint}
- **Development Environment**: {dev_env_configuration}
- **Initial Architecture**: {foundational_architecture}
### Phase 2: Core Implementation
- **Priority Features**: {core_feature_implementation}
- **Testing Framework**: {testing_implementation_strategy}
- **Quality Gates**: {quality_assurance_checkpoints}
### Phase 3: Enhancement & Optimization
- **Performance Tuning**: {performance_enhancement_plan}
- **Security Hardening**: {security_implementation_steps}
- **Scalability Improvements**: {scalability_enhancement_strategy}
## Enhanced Quality Assurance Strategy
### Automated Testing Blueprint
- **Unit Testing**: {unit_test_implementation}
- **Integration Testing**: {integration_test_strategy}
- **Performance Testing**: {performance_test_blueprint}
- **Security Testing**: {security_test_framework}
### Continuous Quality Monitoring
- **Code Quality Metrics**: {quality_monitoring_setup}
- **Performance Monitoring**: {performance_tracking_implementation}
- **Security Monitoring**: {security_monitoring_blueprint}
### Tool Agreement Analysis
- **Consensus Points**: {agreed_recommendations}
- **Conflicting Views**: {conflicting_opinions}
- **Resolution Strategy**: {conflict_resolution}
### Task Decomposition Suggestions
1. **Primary Tasks**: {major_task_suggestions}
2. **Task Dependencies**: {dependency_mapping}
3. **Complexity Assessment**: {complexity_evaluation}
## Enhanced Analysis Quality Metrics
- **Context Coverage**: {coverage_percentage}
- **Multi-Tool Consensus**: {three_tool_consensus_level}
- **Analysis Depth**: {comprehensive_depth_assessment}
- **Implementation Feasibility**: {feasibility_confidence_score}
- **Blueprint Completeness**: {blueprint_coverage_score}
- **Actionability Score**: {suggestion_actionability_rating}
- **Overall Confidence**: {enhanced_confidence_score}
## Implementation Success Indicators
- **Technical Feasibility**: {technical_implementation_confidence}
- **Resource Adequacy**: {resource_requirement_assessment}
- **Timeline Realism**: {timeline_feasibility_score}
- **Risk Management**: {risk_mitigation_effectiveness}
## Next Steps & Action Items
1. **Immediate Actions**: {immediate_implementation_steps}
2. **Short-term Goals**: {short_term_objectives}
3. **Long-term Strategy**: {long_term_implementation_plan}
4. **Success Metrics**: {implementation_success_criteria}
```
## Error Handling & Fallbacks
### Error Handling & Recovery Strategies
1. **Pre-execution Validation**
- **Session Verification**: Ensure session directory and metadata exist
- **Context Package Validation**: Verify JSON format and content structure
- **Tool Availability**: Confirm CLI tools are accessible and configured
- **Prerequisite Checks**: Validate all required dependencies and permissions
2. **Execution Monitoring & Timeout Management**
- **Progress Monitoring**: Track analysis execution with regular status checks
- **Timeout Controls**: 30-minute execution limit with graceful termination
- **Process Management**: Handle parallel tool execution and resource limits
- **Status Tracking**: Maintain real-time execution state and completion status
3. **Partial Results Recovery**
- **Fallback Strategy**: Generate analysis results even with incomplete outputs
- **Log Integration**: Use execution logs when primary outputs are unavailable
- **Recovery Mode**: Create partial analysis reports with available data
- **Guidance Generation**: Provide next steps and retry recommendations
4. **Resource Management**
- **Disk Space Monitoring**: Check available storage and cleanup temporary files
- **Process Limits**: Set CPU and memory constraints for analysis execution
- **Performance Optimization**: Manage resource utilization and system load
- **Cleanup Procedures**: Remove outdated logs and temporary files
5. **Comprehensive Error Recovery**
- **Error Detection**: Automatic error identification and classification
- **Recovery Workflows**: Structured approach to handling different failure modes
- **Status Reporting**: Clear communication of issues and resolution attempts
- **Graceful Degradation**: Provide useful outputs even with partial failures
## Performance Optimization
### Analysis Optimization Strategies
- **Parallel Analysis**: Execute multiple tools in parallel to reduce total time
- **Context Sharding**: Analyze large projects by module shards
- **Caching Mechanism**: Reuse analysis results for similar contexts
- **Incremental Analysis**: Perform incremental analysis based on changes
### Resource Management
```bash
# Set analysis timeout
timeout 600s analysis_command || {
  echo "⚠️ Analysis timeout, generating partial results"
  # Generate partial results
}
# Memory usage monitoring (resident set size in KB; memory_limit must be set)
memory_usage=$(ps -o rss= -p $$)
if [ "$memory_usage" -gt "$memory_limit" ]; then
  echo "⚠️ High memory usage detected, optimizing..."
fi
```
## Integration Points
### Input Interface
- **Required**: `--session` parameter specifying session ID (e.g., WFS-auth)
- **Required**: `--context` parameter specifying context package path
- **Optional**: `--depth` specify analysis depth (quick|full|deep)
- **Optional**: `--focus` specify analysis focus areas
### Output Interface
- **Primary**: Enhanced ANALYSIS_RESULTS.md file with implementation blueprints
- **Location**: .workflow/{session_id}/.process/ANALYSIS_RESULTS.md
- **Secondary**: analysis-summary.json (machine-readable format)
- **Tertiary**: implementation-roadmap.md (detailed implementation guide)
- **Quaternary**: blueprint-templates/ directory (code templates and examples)
## Quality Assurance
### Analysis Quality Checks
- **Completeness Check**: Ensure all required analysis sections are completed
- **Consistency Check**: Verify consistency of multi-tool analysis results
- **Feasibility Validation**: Ensure recommended implementation plans are feasible
### Success Criteria
- ✅ **Enhanced Output Generation**: Comprehensive ANALYSIS_RESULTS.md with implementation blueprints and machine-readable summary
- ✅ **Write-Enabled CLI Tools**: Full write permissions for Gemini (--approval-mode yolo) and Codex (--skip-git-repo-check -s danger-full-access)
- ✅ **Enhanced Suggestions**: Concrete implementation examples, configuration templates, and step-by-step procedures
- ✅ **Design Blueprints**: Detailed technical architecture with component diagrams and API specifications
- ✅ **Parallel Execution**: Efficient concurrent tool execution with proper monitoring and timeout handling
- ✅ **Robust Error Handling**: Comprehensive validation, timeout management, and partial result recovery
- ✅ **Actionable Results**: Implementation roadmap with phase-by-phase development strategy and success criteria
- ✅ **Quality Assurance**: Automated testing frameworks, CI/CD blueprints, and monitoring strategies
- ✅ **Performance Optimization**: Execution completes within 30 minutes with resource management
## Related Commands
- `/context:gather` - Generate context packages required by this command
- `/workflow:plan` - Call this command for analysis
- `/task:create` - Create specific tasks based on analysis results

View File

@@ -1,407 +0,0 @@
---
name: concept-eval
description: Evaluate concept planning before implementation with intelligent tool analysis
usage: /workflow:concept-eval [--tool gemini|codex|both] <input>
argument-hint: [--tool gemini|codex|both] "concept description"|file.md|ISS-001
examples:
- /workflow:concept-eval "Build microservices architecture"
- /workflow:concept-eval --tool gemini requirements.md
- /workflow:concept-eval --tool both ISS-001
allowed-tools: Task(*), TodoWrite(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*)
---
# Workflow Concept Evaluation Command
## Overview
Pre-planning evaluation command that assesses concept feasibility, identifies potential issues, and provides optimization recommendations before formal planning begins. **Works before `/workflow:plan`** to catch conceptual problems early and improve initial design quality.
## Core Responsibilities
- **Concept Analysis**: Evaluate design concepts for architectural soundness
- **Feasibility Assessment**: Technical and resource feasibility evaluation
- **Risk Identification**: Early identification of potential implementation risks
- **Optimization Suggestions**: Generate actionable improvement recommendations
- **Context Integration**: Leverage existing codebase patterns and documentation
- **Tool Selection**: Use gemini for strategic analysis, codex for technical assessment
## Usage
```bash
/workflow:concept-eval [--tool gemini|codex|both] <input>
```
## Parameters
- **--tool**: Specify evaluation tool (default: both)
- `gemini`: Strategic and architectural evaluation
- `codex`: Technical feasibility and implementation assessment
- `both`: Comprehensive dual-perspective analysis
- **input**: Concept description, file path, or issue reference
## Input Detection
- **Files**: `.md/.txt/.json/.yaml/.yml` → Reads content and extracts concept requirements
- **Issues**: `ISS-*`, `ISSUE-*`, `*-request-*` → Loads issue data and requirement specifications
- **Text**: Everything else → Parses natural language concept descriptions
## Core Workflow
### Evaluation Process
The command performs comprehensive concept evaluation through:
**0. Context Preparation** ⚠️ FIRST STEP
- **MCP Tools Integration**: Use Code Index for codebase exploration, Exa for external context
- **Documentation loading**: Automatic context gathering based on concept scope
- **Always check**: `CLAUDE.md`, `README.md` - Project context and conventions
- **For architecture concepts**: `.workflow/docs/architecture/`, existing system patterns
- **For specific modules**: `.workflow/docs/modules/[relevant-module]/` documentation
- **For API concepts**: `.workflow/docs/api/` specifications
- **Claude Code Memory Integration**: Access conversation history and previous work context
- **Session Memory**: Current session analysis and decisions
- **Project Memory**: Previous implementations and lessons learned
- **Pattern Memory**: Successful approaches and anti-patterns identified
- **Context Continuity**: Reference previous concept evaluations and outcomes
- **Context-driven selection**: Only load documentation relevant to the concept scope
- **Pattern analysis**: Identify existing implementation patterns and conventions
**1. Input Processing & Context Gathering**
- Parse input to extract concept requirements and scope
- Automatic tool assignment based on evaluation needs:
- **Strategic evaluation** (gemini): Architectural soundness, design patterns, business alignment
- **Technical assessment** (codex): Implementation complexity, technical feasibility, resource requirements
- **Comprehensive analysis** (both): Combined strategic and technical evaluation
- Load relevant project documentation and existing patterns
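The automatic tool assignment could be as simple as a keyword heuristic; the sketch below is illustrative, and the specific patterns are assumptions rather than a fixed mapping:
```bash
# Illustrative keyword heuristic for default tool assignment (patterns are assumptions)
case "$input_description" in
    *architecture*|*design*|*strategy*) tool_choice="${tool_choice:-gemini}" ;;
    *implement*|*refactor*|*performance*) tool_choice="${tool_choice:-codex}" ;;
    *) tool_choice="${tool_choice:-both}" ;;
esac
```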
**2. Concept Analysis** ⚠️ CRITICAL EVALUATION PHASE
- **Conceptual integrity**: Evaluate design coherence and completeness
- **Architectural soundness**: Assess alignment with existing system architecture
- **Technical feasibility**: Analyze implementation complexity and resource requirements
- **Risk assessment**: Identify potential technical and business risks
- **Dependency analysis**: Map required dependencies and integration points
**3. Evaluation Execution**
Based on tool selection, execute appropriate analysis:
**Gemini Strategic Analysis**:
```bash
~/.claude/scripts/gemini-wrapper -p "
PURPOSE: Strategic evaluation of concept design and architecture
TASK: Analyze concept for architectural soundness, design patterns, and strategic alignment
CONTEXT: @{CLAUDE.md,README.md,.workflow/docs/**/*} Concept requirements and existing patterns | Previous conversation context and Claude Code session memory for continuity and pattern recognition
EXPECTED: Strategic assessment with architectural recommendations informed by session history
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/concept-eval.txt) | Focus on strategic soundness and design quality | Reference previous evaluations and lessons learned
"
```
**Codex Technical Assessment**:
```bash
codex --full-auto exec "
PURPOSE: Technical feasibility assessment of concept implementation
TASK: Evaluate implementation complexity, technical risks, and resource requirements
CONTEXT: @{CLAUDE.md,README.md,src/**/*} Concept requirements and existing codebase | Current session work context and previous technical decisions
EXPECTED: Technical assessment with implementation recommendations building on session memory
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/concept-eval.txt) | Focus on technical feasibility and implementation complexity | Consider previous technical approaches and outcomes
" -s danger-full-access
```
**Combined Analysis** (when --tool both):
Execute both analyses in parallel, then synthesize results for comprehensive evaluation.
**4. Optimization Recommendations**
- **Design improvements**: Architectural and design optimization suggestions
- **Risk mitigation**: Strategies to address identified risks
- **Implementation approach**: Recommended technical approaches and patterns
- **Resource optimization**: Efficient resource utilization strategies
- **Integration suggestions**: Optimal integration with existing systems
## Implementation Standards
### Evaluation Criteria ⚠️ CRITICAL
Concept evaluation focuses on these key dimensions:
**Strategic Evaluation (Gemini)**:
1. **Architectural Soundness**: Design coherence and system integration
2. **Business Alignment**: Concept alignment with business objectives
3. **Scalability Considerations**: Long-term growth and expansion potential
4. **Design Patterns**: Appropriate use of established design patterns
5. **Risk Assessment**: Strategic and business risk identification
**Technical Assessment (Codex)**:
1. **Implementation Complexity**: Technical difficulty and effort estimation
2. **Technical Feasibility**: Availability of required technologies and skills
3. **Resource Requirements**: Development time, infrastructure, and team resources
4. **Integration Challenges**: Technical integration complexity and risks
5. **Performance Implications**: System performance and scalability impact
### Evaluation Context Loading ⚠️ CRITICAL
Context preparation ensures comprehensive evaluation:
```json
{
  "context_preparation": {
"required_docs": [
"CLAUDE.md",
"README.md"
],
"conditional_docs": {
"architecture_concepts": [
".workflow/docs/architecture/",
"docs/system-design.md"
],
"api_concepts": [
".workflow/docs/api/",
"api-documentation.md"
],
"module_concepts": [
".workflow/docs/modules/[relevant-module]/",
"src/[module]/**/*.md"
]
},
"pattern_analysis": {
"existing_implementations": "src/**/*",
"configuration_patterns": "config/",
"test_patterns": "test/**/*"
},
"claude_code_memory": {
"session_context": "Current session conversation history and decisions",
"project_memory": "Previous implementations and lessons learned across sessions",
"pattern_memory": "Successful approaches and anti-patterns identified",
"evaluation_history": "Previous concept evaluations and their outcomes",
"technical_decisions": "Past technical choices and their rationale",
"architectural_evolution": "System architecture changes and migration patterns"
}
  }
}
```
### Analysis Output Structure
**Evaluation Categories**:
```markdown
## Concept Evaluation Summary
### ✅ Strengths Identified
- [ ] **Design Quality**: Well-defined architectural approach
- [ ] **Technical Approach**: Appropriate technology selection
- [ ] **Integration**: Good fit with existing systems
### ⚠️ Areas for Improvement
- [ ] **Complexity**: Reduce implementation complexity in module X
- [ ] **Dependencies**: Simplify dependency management approach
- [ ] **Scalability**: Address potential performance bottlenecks
### ❌ Critical Issues
- [ ] **Architecture**: Conflicts with existing system design
- [ ] **Resources**: Insufficient resources for proposed timeline
- [ ] **Risk**: High technical risk in component Y
### 🎯 Optimization Recommendations
- [ ] **Alternative Approach**: Consider microservices instead of monolithic design
- [ ] **Technology Stack**: Use existing React patterns instead of Vue
- [ ] **Implementation Strategy**: Phase implementation to reduce risk
```
## Document Generation & Output
**Evaluation Workflow**: Input Processing → Context Loading → Analysis Execution → Report Generation → Recommendations
**Always Created**:
- **CONCEPT_EVALUATION.md**: Complete evaluation results and recommendations
- **evaluation-session.json**: Evaluation metadata and tool configuration
- **OPTIMIZATION_SUGGESTIONS.md**: Actionable improvement recommendations
**Auto-Created (for comprehensive analysis)**:
- **strategic-analysis.md**: Gemini strategic evaluation results
- **technical-assessment.md**: Codex technical feasibility analysis
- **risk-assessment-matrix.md**: Comprehensive risk evaluation
- **implementation-roadmap.md**: Recommended implementation approach
**Document Structure**:
```
.workflow/WFS-[topic]/.evaluation/
├── evaluation-session.json # Evaluation session metadata
├── CONCEPT_EVALUATION.md # Complete evaluation results
├── OPTIMIZATION_SUGGESTIONS.md # Actionable recommendations
├── strategic-analysis.md # Gemini strategic evaluation
├── technical-assessment.md # Codex technical assessment
├── risk-assessment-matrix.md # Risk evaluation matrix
└── implementation-roadmap.md # Recommended approach
```
### Evaluation Implementation
**Session-Aware Evaluation**:
```bash
# Check for existing sessions and context
active_sessions=$(find .workflow/ -name ".active-*" 2>/dev/null)
if [ -n "$active_sessions" ]; then
echo "Found active sessions: $active_sessions"
echo "Concept evaluation will consider existing session context"
fi
# Create evaluation session directory
evaluation_session="CE-$(date +%Y%m%d_%H%M%S)"
mkdir -p ".workflow/.evaluation/$evaluation_session"
# Store evaluation metadata
cat > ".workflow/.evaluation/$evaluation_session/evaluation-session.json" << EOF
{
"session_id": "$evaluation_session",
"timestamp": "$(date -Iseconds)",
"concept_input": "$input_description",
"tool_selection": "$tool_choice",
"context_loaded": [
"CLAUDE.md",
"README.md"
],
"evaluation_scope": "$evaluation_scope"
}
EOF
```
**Tool Execution Pattern**:
```bash
# Execute based on tool selection
case "$tool_choice" in
"gemini")
echo "Performing strategic concept evaluation with Gemini..."
~/.claude/scripts/gemini-wrapper -p "$gemini_prompt" > ".workflow/.evaluation/$evaluation_session/strategic-analysis.md"
;;
"codex")
echo "Performing technical assessment with Codex..."
codex --full-auto exec "$codex_prompt" -s danger-full-access > ".workflow/.evaluation/$evaluation_session/technical-assessment.md"
;;
"both"|*)
echo "Performing comprehensive evaluation with both tools..."
~/.claude/scripts/gemini-wrapper -p "$gemini_prompt" > ".workflow/.evaluation/$evaluation_session/strategic-analysis.md" &
codex --full-auto exec "$codex_prompt" -s danger-full-access > ".workflow/.evaluation/$evaluation_session/technical-assessment.md" &
wait # Wait for both analyses to complete
# Synthesize results
~/.claude/scripts/gemini-wrapper -p "
PURPOSE: Synthesize strategic and technical concept evaluations
TASK: Combine analyses and generate integrated recommendations
CONTEXT: @{.workflow/.evaluation/$evaluation_session/strategic-analysis.md,.workflow/.evaluation/$evaluation_session/technical-assessment.md}
EXPECTED: Integrated evaluation with prioritized recommendations
RULES: Focus on actionable insights and clear next steps
" > ".workflow/.evaluation/$evaluation_session/CONCEPT_EVALUATION.md"
;;
esac
```
## Integration with Workflow Commands
### Workflow Position
**Pre-Planning Phase**: Use before formal planning to optimize concept quality
```
concept-eval → plan → plan-verify → execute
```
### Usage Scenarios
**Early Concept Validation**:
```bash
# Validate initial concept before detailed planning
/workflow:concept-eval "Build real-time notification system using WebSockets"
```
**Architecture Review**:
```bash
# Strategic architecture evaluation
/workflow:concept-eval --tool gemini architecture-proposal.md
```
**Technical Feasibility Check**:
```bash
# Technical implementation assessment
/workflow:concept-eval --tool codex "Implement ML-based recommendation engine"
```
**Comprehensive Analysis**:
```bash
# Full strategic and technical evaluation
/workflow:concept-eval --tool both ISS-042
```
### Integration Benefits
- **Early Risk Detection**: Identify issues before detailed planning
- **Quality Improvement**: Optimize concepts before implementation planning
- **Resource Efficiency**: Avoid detailed planning of infeasible concepts
- **Decision Support**: Data-driven concept selection and refinement
- **Team Alignment**: Clear evaluation criteria and recommendations
## Error Handling & Edge Cases
### Input Validation
```bash
# Validate input format and accessibility
if [[ -z "$input" ]]; then
echo "Error: Concept input required"
echo "Usage: /workflow:concept-eval [--tool gemini|codex|both] <input>"
exit 1
fi
# Check file accessibility for file inputs
if [[ "$input" =~ \.(md|txt|json|yaml|yml)$ ]] && [[ ! -f "$input" ]]; then
echo "Error: File not found: $input"
echo "Please provide a valid file path or concept description"
exit 1
fi
```
### Tool Availability
```bash
# Check tool availability and guard against neither tool being present
gemini_available=true
codex_available=true
[[ -x "$HOME/.claude/scripts/gemini-wrapper" ]] || gemini_available=false
command -v codex &> /dev/null || codex_available=false

if [[ "$gemini_available" == "false" && "$codex_available" == "false" ]]; then
    echo "Error: Neither gemini-wrapper nor codex is available"
    exit 1
fi
if [[ "$tool_choice" != "codex" && "$gemini_available" == "false" ]]; then
    echo "Warning: Gemini wrapper not available, using codex only"
    tool_choice="codex"
fi
if [[ "$tool_choice" != "gemini" && "$codex_available" == "false" ]]; then
    echo "Warning: Codex not available, using gemini only"
    tool_choice="gemini"
fi
```
### Recovery Strategies
```bash
# Fallback to manual evaluation if tools fail
if [[ "$evaluation_failed" == "true" ]]; then
echo "Automated evaluation failed, generating manual evaluation template..."
cat > ".workflow/.evaluation/$evaluation_session/manual-evaluation-template.md" << EOF
# Manual Concept Evaluation
## Concept Description
$input_description
## Evaluation Checklist
- [ ] **Architectural Soundness**: Does the concept align with existing architecture?
- [ ] **Technical Feasibility**: Are required technologies available and mature?
- [ ] **Resource Requirements**: Are time and team resources realistic?
- [ ] **Integration Complexity**: How complex is integration with existing systems?
- [ ] **Risk Assessment**: What are the main technical and business risks?
## Recommendations
[Provide manual evaluation and recommendations]
EOF
fi
```
## Quality Standards
### Evaluation Excellence
- **Comprehensive Analysis**: Consider all aspects of concept feasibility
- **Context-Rich Assessment**: Leverage full project context and existing patterns
- **Actionable Recommendations**: Provide specific, implementable suggestions
- **Risk-Aware Evaluation**: Identify and assess potential implementation risks
### User Experience Excellence
- **Clear Results**: Present evaluation results in actionable format
- **Focused Recommendations**: Prioritize most critical optimization suggestions
- **Integration Guidance**: Provide clear next steps for concept refinement
- **Tool Transparency**: Clear indication of which tools were used and why
### Output Quality
- **Structured Reports**: Consistent, well-organized evaluation documentation
- **Evidence-Based**: All recommendations backed by analysis and reasoning
- **Prioritized Actions**: Clear indication of critical vs. optional improvements
- **Implementation Ready**: Evaluation results directly usable for planning phase


@@ -0,0 +1,301 @@
---
name: gather
description: Intelligently collect project context based on task description and package into standardized JSON
usage: /workflow:tools:context-gather --session <session_id> "<task_description>"
argument-hint: "--session WFS-session-id \"task description\""
examples:
- /workflow:tools:context-gather --session WFS-user-auth "Implement user authentication system"
- /workflow:tools:context-gather --session WFS-payment "Refactor payment module API"
- /workflow:tools:context-gather --session WFS-bugfix "Fix login validation error"
---
# Context Gather Command (/workflow:tools:context-gather)
## Overview
An intelligent context collector that gathers relevant information from the project codebase, documentation, and dependencies based on the task description, and packages it into a standardized context package.
## Core Philosophy
- **Intelligent Collection**: Auto-identify relevant resources based on keyword analysis
- **Comprehensive Coverage**: Collect code, documentation, configurations, and dependencies
- **Standardized Output**: Generate unified format context-package.json
- **Efficient Execution**: Optimize collection strategies to avoid irrelevant information
## Core Responsibilities
- **Keyword Extraction**: Extract core keywords from task descriptions
- **Smart Documentation Loading**: Load relevant project documentation based on keywords
- **Code Structure Analysis**: Analyze project structure to locate relevant code files
- **Dependency Discovery**: Identify tech stack and dependency relationships
- **MCP Tools Integration**: Leverage code-index tools for enhanced collection
- **Context Packaging**: Generate standardized JSON context packages
## Execution Process
### Phase 1: Task Analysis
1. **Keyword Extraction**
- Parse task description to extract core keywords
- Identify technical domain (auth, API, frontend, backend, etc.)
- Determine complexity level (simple, medium, complex)
2. **Scope Determination**
- Define collection scope based on keywords
- Identify potentially involved modules and components
- Set file type filters
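A minimal keyword-extraction sketch for this phase, assuming a plain-text task description (the stop-word list and length threshold are illustrative):
```bash
# Lowercase the description, split into words, drop stop words and very short tokens
task_description="Implement user authentication system"   # example input
keywords=$(echo "$task_description" \
    | tr '[:upper:]' '[:lower:]' \
    | tr -cs 'a-z0-9' '\n' \
    | grep -Ev '^(the|a|an|and|or|for|with|to|of|in|on)$' \
    | awk 'length >= 4' \
    | sort -u)
echo "$keywords"   # authentication implement system user (one per line)
```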
### Phase 2: Project Structure Exploration
1. **Architecture Analysis**
- Use `~/.claude/scripts/get_modules_by_depth.sh` for comprehensive project structure
- Analyze project layout and module organization
- Identify key directories and components
2. **Code File Location**
- Use MCP tools for precise search: `mcp__code-index__find_files()` and `mcp__code-index__search_code_advanced()`
- Search for relevant source code files based on keywords
- Locate implementation files, interfaces, and modules
3. **Documentation Collection**
- Load CLAUDE.md and README.md
- Load relevant documentation from .workflow/docs/ based on keywords
- Collect configuration files (package.json, requirements.txt, etc.)
### Phase 3: Intelligent Filtering & Association
1. **Relevance Scoring**
- Score based on keyword match degree
- Score based on file path relevance
- Score based on code content relevance
2. **Dependency Analysis**
- Analyze import/require statements
- Identify inter-module dependencies
- Determine core and optional dependencies
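One way the relevance scoring might be implemented is a simple weighted heuristic; the weights below and the `keywords`/`candidate_files` variables are assumptions for illustration:
```bash
# Score one candidate file: +3 per keyword in its path, +1 per content hit (capped at 5 per keyword)
score_file() {
    local file="$1" score=0 hits
    for kw in $keywords; do
        [[ "$file" == *"$kw"* ]] && score=$((score + 3))
        hits=$(grep -oi "$kw" "$file" 2>/dev/null | wc -l)
        (( hits > 5 )) && hits=5
        score=$((score + hits))
    done
    echo "$score $file"
}
# Rank candidates and keep the top matches
for f in $candidate_files; do score_file "$f"; done | sort -rn | head -20
```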
### Phase 4: Context Packaging
1. **Standardized Output**
- Generate context-package.json
- Organize resources by type and importance
- Add relevance descriptions and usage recommendations
## Context Package Format
Generated context package format:
```json
{
"metadata": {
"task_description": "Implement user authentication system",
"timestamp": "2025-09-29T10:30:00Z",
"keywords": ["user", "authentication", "JWT", "login"],
"complexity": "medium",
"tech_stack": ["typescript", "node.js", "express"],
"session_id": "WFS-user-auth"
},
"assets": [
{
"type": "documentation",
"path": "CLAUDE.md",
"relevance": "Project development standards and conventions",
"priority": "high"
},
{
"type": "documentation",
"path": ".workflow/docs/architecture/security.md",
"relevance": "Security architecture design guidance",
"priority": "high"
},
{
"type": "source_code",
"path": "src/auth/AuthService.ts",
"relevance": "Existing authentication service implementation",
"priority": "high"
},
{
"type": "source_code",
"path": "src/models/User.ts",
"relevance": "User data model definition",
"priority": "medium"
},
{
"type": "config",
"path": "package.json",
"relevance": "Project dependencies and tech stack",
"priority": "medium"
},
{
"type": "test",
"path": "tests/auth/*.test.ts",
"relevance": "Authentication related test cases",
"priority": "medium"
}
],
"tech_stack": {
"frameworks": ["express", "typescript"],
"libraries": ["jsonwebtoken", "bcrypt"],
"testing": ["jest", "supertest"]
},
"statistics": {
"total_files": 15,
"source_files": 8,
"docs_files": 4,
"config_files": 2,
"test_files": 1
}
}
```
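Downstream commands can filter this package with standard JSON tooling; a sketch assuming `jq` is available:
```bash
# List the high-priority assets a downstream analysis step would load first
jq -r '.assets[] | select(.priority == "high") | "\(.type)\t\(.path)"' \
    ".workflow/${session_id}/.process/context-package.json"
```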
## MCP Tools Integration
### Code Index Integration
```bash
# Set project path
mcp__code-index__set_project_path(path="{current_project_path}")
# Refresh index to ensure latest
mcp__code-index__refresh_index()
# Search relevant files
mcp__code-index__find_files(pattern="*{keyword}*")
# Search code content
mcp__code-index__search_code_advanced(
pattern="{keyword_patterns}",
file_pattern="*.{ts,js,py,go,md}",
context_lines=3
)
```
## Session ID Integration
### Session ID Usage
- **Required Parameter**: `--session WFS-session-id`
- **Session Context Loading**: Load existing session state and task summaries
- **Session Continuity**: Maintain context across pipeline phases
### Session State Management
```bash
# Validate session exists
if [ ! -d ".workflow/${session_id}" ]; then
echo "❌ Session ${session_id} not found"
exit 1
fi
# Locate session metadata for downstream phases
session_metadata=".workflow/${session_id}/workflow-session.json"
```
## Output Location
Context package output location:
```
.workflow/{session_id}/.process/context-package.json
```
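A minimal sketch of how the package might be initialized at this location, assuming `jq` is available and the surrounding variables are already set:
```bash
# Create the .process directory and write a skeleton package (metadata fields are a subset of the format above)
package_path=".workflow/${session_id}/.process/context-package.json"
mkdir -p "$(dirname "$package_path")"
jq -n --arg task "$task_description" --arg ts "$(date -Iseconds)" --arg session "$session_id" \
   '{metadata: {task_description: $task, timestamp: $ts, session_id: $session}, assets: []}' \
   > "$package_path"
```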
## Error Handling
### Common Error Handling
1. **No Active Session**: Create temporary session directory
2. **MCP Tools Unavailable**: Fallback to traditional bash commands
3. **Permission Errors**: Prompt user to check file permissions
4. **Large Project Optimization**: Limit file count, prioritize high-relevance files
### Graceful Degradation Strategy
```bash
# Fallback when MCP unavailable
if ! command -v mcp__code-index__find_files &> /dev/null; then
# Use find command for file discovery
find . -name "*{keyword}*" -type f -not -path "*/node_modules/*" -not -path "*/.git/*"
# Alternative pattern matching
find . -type f \( -name "*.ts" -o -name "*.js" -o -name "*.py" -o -name "*.go" \) -exec grep -l "{keyword}" {} \;
fi
# Use ripgrep instead of MCP search
rg "{keywords}" --type-add 'source:*.{ts,js,py,go}' -t source --max-count 30
# Content-based search with context
rg -A 3 -B 3 "{keywords}" --type-add 'source:*.{ts,js,py,go}' -t source
# Quick relevance check
grep -r --include="*.{ts,js,py,go}" -l "{keywords}" . | head -15
# Test files discovery
find . -name "*test*" -o -name "*spec*" | grep -E "\.(ts|js|py|go)$" | head -10
# Import/dependency analysis
rg "^(import|from|require|#include)" --type-add 'source:*.{ts,js,py,go}' -t source | head -20
```
## Performance Optimization
### Large Project Optimization Strategy
- **File Count Limit**: Maximum 50 files per type
- **Size Filtering**: Skip oversized files (>10MB)
- **Depth Limit**: Maximum search depth of 3 levels
- **Caching Strategy**: Cache project structure analysis results
### Parallel Processing
- Documentation collection and code search in parallel
- MCP tool calls and traditional commands in parallel
- Reduce I/O wait time
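A sketch combining the limits above with parallel collection, assuming GNU `find` (for the `-size -10M` test); paths and variable names are illustrative:
```bash
# Depth-, size-, and count-limited discovery; code and documentation passes run in parallel
find . -maxdepth 3 -type f -size -10M -not -path "*/node_modules/*" \
    -name "*${keyword}*" | head -50 > /tmp/code-candidates.txt &
find . -maxdepth 3 -type f -name "*.md" -not -path "*/node_modules/*" \
    | head -50 > /tmp/doc-candidates.txt &
wait  # both passes finish before packaging begins
```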
## Essential Bash Commands (Max 10)
### 1. Project Structure Analysis
```bash
~/.claude/scripts/get_modules_by_depth.sh
```
### 2. File Discovery by Keywords
```bash
find . -name "*{keyword}*" -type f -not -path "*/node_modules/*" -not -path "*/.git/*"
```
### 3. Content Search in Code Files
```bash
rg "{keyword}" --type-add 'source:*.{ts,js,py,go}' -t source --max-count 20
```
### 4. Configuration Files Discovery
```bash
find . -maxdepth 3 \( -name "*.json" -o -name "requirements.txt" -o -name "Cargo.toml" \) -not -path "*/node_modules/*"
```
### 5. Documentation Files Collection
```bash
find . -name "*.md" -o -name "README*" -o -name "CLAUDE.md" | grep -v node_modules | head -10
```
### 6. Test Files Location
```bash
find . \( -name "*test*" -o -name "*spec*" \) -type f | grep -E "\.(js|ts|py|go)$" | head -10
```
### 7. Function/Class Definitions Search
```bash
rg "^(function|def|func|class|interface)" --type-add 'source:*.{ts,js,py,go}' -t source -n --max-count 15
```
### 8. Import/Dependency Analysis
```bash
rg "^(import|from|require|#include)" --type-add 'source:*.{ts,js,py,go}' -t source | head -15
```
### 9. Workflow Session Information
```bash
find .workflow/ -name "*.json" -path "*/${session_id}/*" -o -name "workflow-session.json" | head -5
```
### 10. Context-Aware Content Search
```bash
rg -A 2 -B 2 "{keywords}" --type-add 'source:*.{ts,js,py,go}' -t source --max-count 10
```
## Success Criteria
- Generates a valid context-package.json file
- Contains sufficient relevant information for subsequent analysis
- Completes execution within 30 seconds
- Achieves >80% file relevance accuracy
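These criteria can be spot-checked after generation; a minimal sketch assuming `jq` is available:
```bash
# Spot-check the generated package before downstream analysis consumes it
package=".workflow/${session_id}/.process/context-package.json"
jq empty "$package" || { echo "❌ context-package.json is not valid JSON"; exit 1; }
asset_count=$(jq '.assets | length' "$package")
(( asset_count > 0 )) || echo "⚠️ No assets collected - analysis may lack context"
```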
## Related Commands
- `/analysis:run` - Consumes output of this command for analysis
- `/workflow:plan` - Calls this command to gather context
- `/workflow:status` - Can display context collection status