Mirror of https://github.com/catlog22/Claude-Code-Workflow.git (synced 2026-02-13 02:41:50 +08:00)

feat(agents): add cli-explore-agent and enhance workflow documentation

Add new cli-explore-agent for code structure analysis and dependency mapping:
- Dual-source strategy (Bash + Gemini CLI) for comprehensive code exploration
- Three analysis modes: quick-scan, deep-scan, dependency-map
- Language-agnostic support (TypeScript, Python, Go, Java, Rust)

Enhance lite-plan workflow documentation:
- Clarify agent call prompts with structured return formats
- Add expected return structures for cli-explore-agent and cli-planning-agent
- Simplify AskUserQuestion usage with clearer examples
- Document data flow between workflow phases

Add code-map-memory command:
- Generate Mermaid code flow diagrams from feature keywords
- Create SKILL packages for code understanding
- Auto-continue workflow with phase skipping

Improve UI design system:
- Add theme colors guide to ui-design-agent
- Enhance code import workflow documentation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

.claude/commands/memory/code-map-memory.md (new file, 764 lines)
@@ -0,0 +1,764 @@
---
name: code-map-memory
description: "3-phase orchestrator: parse feature keyword → cli-explore-agent analyzes (Deep Scan dual-source) → orchestrator generates Mermaid docs + SKILL package (skips Phase 2 if it exists)"
argument-hint: "\"feature-keyword\" [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
---

# Code Flow Mapping Generator

## Overview

**Pure Orchestrator with Agent Delegation**: Prepares context paths and delegates code flow analysis to specialized cli-explore-agent. Orchestrator transforms agent's JSON analysis into Mermaid documentation.

**Auto-Continue Workflow**: Runs fully autonomously once triggered. Each phase completes and automatically triggers the next phase.

**Execution Paths**:
- **Full Path**: All 3 phases (no existing codemap OR `--regenerate` specified)
- **Skip Path**: Phase 1 → Phase 3 (existing codemap found AND no `--regenerate` flag)
- **Phase 3 Always Executes**: SKILL index is always generated or updated

**Agent Responsibility** (cli-explore-agent):
- Deep code flow analysis using dual-source strategy (Bash + Gemini CLI)
- Returns structured JSON with architecture, functions, data flow, conditionals, patterns
- NO file writing - analysis only

**Orchestrator Responsibility**:
- Provides feature keyword and analysis scope to agent
- Transforms agent's JSON into Mermaid-enriched markdown documentation
- Writes all files (5 docs + metadata.json + SKILL.md)

## Core Rules

1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
2. **Feature-Specific SKILL**: Each feature creates independent `.claude/skills/codemap-{feature}/` package
3. **Specialized Agent**: Phase 2a uses cli-explore-agent for professional code analysis (Deep Scan mode)
4. **Orchestrator Documentation**: Phase 2b transforms agent JSON into Mermaid markdown files
5. **Auto-Continue**: After completing each phase, update TodoWrite and immediately execute next phase
6. **No User Prompts**: Never ask user questions or wait for input between phases
7. **Track Progress**: Update TodoWrite after EVERY phase completion before starting next phase
8. **Multi-Level Detail**: Generate 4 levels: architecture → function → data → conditional

---

## 3-Phase Execution

### Phase 1: Parse Feature Keyword & Check Existing

**Goal**: Normalize feature keyword, check existing codemap, prepare for analysis

**Step 1: Parse Feature Keyword**
```bash
# Get feature keyword from argument
FEATURE_KEYWORD="$1"

# Normalize: lowercase, spaces to hyphens
normalized_feature=$(echo "$FEATURE_KEYWORD" | tr '[:upper:]' '[:lower:]' | tr ' ' '-' | tr '_' '-')

# Example: "User Authentication" → "user-authentication"
# Example: "支付处理" → "支付处理" (keep non-ASCII)
|
||||
```
|
||||
|
||||
**Step 2: Set Tool Preference**
|
||||
```bash
|
||||
# Default to gemini unless --tool specified
|
||||
TOOL="${tool_flag:-gemini}"
|
||||
```
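
The skip decision in Step 4 reads `regenerate_flag`, and Step 2 reads `tool_flag`, but neither is parsed explicitly. A minimal parsing sketch (hypothetical variable names chosen to match those references, assuming the flags follow the positional keyword):

```bash
# Hypothetical sketch: parse optional flags after the positional feature keyword
regenerate_flag=false
tool_flag=""
shift  # drop "$1" (the feature keyword), already captured above
while [ $# -gt 0 ]; do
  case "$1" in
    --regenerate) regenerate_flag=true ;;
    --tool) tool_flag="$2"; shift ;;
  esac
  shift
done
TOOL="${tool_flag:-gemini}"
```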

**Step 3: Check Existing Codemap**
```bash
# Define codemap directory
CODEMAP_DIR=".claude/skills/codemap-${normalized_feature}"

# Check if codemap exists
bash(test -d "$CODEMAP_DIR" && echo "exists" || echo "not_exists")

# Count existing files
bash(find "$CODEMAP_DIR" -name "*.md" 2>/dev/null | wc -l || echo 0)
```

**Step 4: Skip Decision**
```javascript
if (existing_files > 0 && !regenerate_flag) {
  SKIP_GENERATION = true
  message = "Codemap already exists, skipping Phase 2. Use --regenerate to force regeneration."
} else if (regenerate_flag) {
  bash(rm -rf "$CODEMAP_DIR")
  SKIP_GENERATION = false
  message = "Regenerating codemap from scratch."
} else {
  SKIP_GENERATION = false
  message = "No existing codemap found, generating new code flow analysis."
}
```

**Output Variables**:
- `FEATURE_KEYWORD`: Original feature keyword
- `normalized_feature`: Normalized feature name for directory
- `CODEMAP_DIR`: `.claude/skills/codemap-{feature}`
- `TOOL`: CLI tool to use (gemini or qwen)
- `SKIP_GENERATION`: Boolean - whether to skip Phase 2

**TodoWrite**:
- If skipping: Mark phase 1 completed, phase 2 completed, phase 3 in_progress
- If not skipping: Mark phase 1 completed, phase 2 in_progress

---

### Phase 2: Code Flow Analysis & Documentation Generation

**Skip Condition**: Skipped if `SKIP_GENERATION = true`

**Goal**: Use cli-explore-agent for professional code analysis, then orchestrator generates Mermaid documentation

**Architecture**: Phase 2a (Agent Analysis) → Phase 2b (Orchestrator Documentation)

---

#### Phase 2a: cli-explore-agent Analysis

**Purpose**: Leverage specialized cli-explore-agent for deep code flow analysis

**Agent Task Specification**:

```
Task(
  subagent_type: "cli-explore-agent",
  description: "Analyze code flow: {FEATURE_KEYWORD}",
  prompt: "
    Perform Deep Scan analysis for feature: {FEATURE_KEYWORD}

    **Analysis Mode**: deep-scan (Dual-source: Bash structural scan + Gemini semantic analysis)

    **Analysis Objectives**:
    1. **Module Architecture**: Identify high-level module organization, interactions, and entry points
    2. **Function Call Chains**: Trace execution paths, call sequences, and parameter flows
    3. **Data Transformations**: Map data structure changes and transformation stages
    4. **Conditional Paths**: Document decision trees, branches, and error handling strategies
    5. **Design Patterns**: Discover architectural patterns and extract design intent

    **Scope**:
    - Feature: {FEATURE_KEYWORD}
    - CLI Tool: {TOOL} (gemini-2.5-pro or qwen coder-model)
    - File Discovery: MCP Code Index (preferred) + rg fallback
    - Target: 5-15 most relevant files

    **Expected Output Format**:
    Return comprehensive analysis as structured JSON:
    {
      \"feature\": \"{FEATURE_KEYWORD}\",
      \"analysis_metadata\": {
        \"tool_used\": \"gemini|qwen\",
        \"timestamp\": \"ISO_TIMESTAMP\",
        \"analysis_mode\": \"deep-scan\"
      },
      \"files_analyzed\": [
        {\"file\": \"path/to/file.ts\", \"relevance\": \"high|medium|low\", \"role\": \"brief description\"}
      ],
      \"architecture\": {
        \"overview\": \"High-level description\",
        \"modules\": [
          {\"name\": \"ModuleName\", \"file\": \"file:line\", \"responsibility\": \"description\", \"dependencies\": [...]}
        ],
        \"interactions\": [
          {\"from\": \"ModuleA\", \"to\": \"ModuleB\", \"type\": \"import|call|data-flow\", \"description\": \"...\"}
        ],
        \"entry_points\": [
          {\"function\": \"main\", \"file\": \"file:line\", \"description\": \"...\"}
        ]
      },
      \"function_calls\": {
        \"call_chains\": [
          {
            \"chain_id\": 1,
            \"description\": \"User authentication flow\",
            \"sequence\": [
              {\"function\": \"login\", \"file\": \"file:line\", \"calls\": [\"validateCredentials\", \"createSession\"]}
            ]
          }
        ],
        \"sequences\": [
          {\"from\": \"Client\", \"to\": \"AuthService\", \"method\": \"login(username, password)\", \"returns\": \"Session\"}
        ]
      },
      \"data_flow\": {
        \"structures\": [
          {\"name\": \"UserData\", \"stage\": \"input\", \"shape\": {\"username\": \"string\", \"password\": \"string\"}}
        ],
        \"transformations\": [
          {\"from\": \"RawInput\", \"to\": \"ValidatedData\", \"transformer\": \"validateUser\", \"file\": \"file:line\"}
        ]
      },
      \"conditional_logic\": {
        \"branches\": [
          {\"condition\": \"isAuthenticated\", \"file\": \"file:line\", \"true_path\": \"...\", \"false_path\": \"...\"}
        ],
        \"error_handling\": [
          {\"error_type\": \"AuthenticationError\", \"handler\": \"handleAuthError\", \"file\": \"file:line\", \"recovery\": \"retry|fail\"}
        ]
      },
      \"design_patterns\": [
        {\"pattern\": \"Repository Pattern\", \"location\": \"src/repositories\", \"description\": \"...\"}
      ],
      \"recommendations\": [
        \"Consider extracting authentication logic into separate module\",
        \"Add error recovery for network failures\"
      ]
    }

    **Critical Requirements**:
    - Use Deep Scan mode: Bash (Phase 1 - precise locations) + Gemini CLI (Phase 2 - semantic understanding) + Synthesis (Phase 3 - merge with attribution)
    - Focus exclusively on {FEATURE_KEYWORD} feature flow
    - Include file:line references for ALL findings
    - Extract design intent from code structure and comments
    - NO FILE WRITING - return JSON analysis only
    - Handle tool failures gracefully (Gemini → Qwen fallback, MCP → rg fallback)
  "
)
```

**Agent Output**: JSON analysis result with architecture, functions, data flow, conditionals, and patterns

---

#### Phase 2b: Orchestrator Documentation Generation

**Purpose**: Transform cli-explore-agent JSON into Mermaid-enriched documentation

**Input**: Agent's JSON analysis result

**Process**:

1. **Parse Agent Analysis**:
```javascript
const analysis = JSON.parse(agentResult)
const { feature, files_analyzed, architecture, function_calls, data_flow, conditional_logic, design_patterns } = analysis
```

2. **Generate Mermaid Diagrams from Structured Data**:

**a) architecture-flow.md** (~3K tokens):
```javascript
// Convert architecture.modules + architecture.interactions → Mermaid graph TD
const architectureMermaid = `
graph TD
${architecture.modules.map(m => `  ${m.name}[${m.name}]`).join('\n')}
${architecture.interactions.map(i => `  ${i.from} -->|${i.type}| ${i.to}`).join('\n')}
`

Write({
  file_path: `${CODEMAP_DIR}/architecture-flow.md`,
  content: `---
feature: ${feature}
level: architecture
detail: high-level module interactions
---
# Architecture Flow: ${feature}

## Overview
${architecture.overview}

## Module Architecture
${architecture.modules.map(m => `### ${m.name}\n- **File**: ${m.file}\n- **Role**: ${m.responsibility}\n- **Dependencies**: ${m.dependencies.join(', ')}`).join('\n\n')}

## Flow Diagram
\`\`\`mermaid
${architectureMermaid}
\`\`\`

## Key Interactions
${architecture.interactions.map(i => `- **${i.from} → ${i.to}**: ${i.description}`).join('\n')}

## Entry Points
${architecture.entry_points.map(e => `- **${e.function}** (${e.file}): ${e.description}`).join('\n')}
`
})
```

**b) function-calls.md** (~5K tokens):
```javascript
// Convert function_calls.sequences → Mermaid sequenceDiagram
const sequenceMermaid = `
sequenceDiagram
${function_calls.sequences.map(s => `  ${s.from}->>${s.to}: ${s.method}`).join('\n')}
`

Write({
  file_path: `${CODEMAP_DIR}/function-calls.md`,
  content: `---
feature: ${feature}
level: function
detail: function-level call sequences
---
# Function Call Chains: ${feature}

## Call Sequence Diagram
\`\`\`mermaid
${sequenceMermaid}
\`\`\`

## Detailed Call Chains
${function_calls.call_chains.map(chain => `
### Chain ${chain.chain_id}: ${chain.description}
${chain.sequence.map(fn => `- **${fn.function}** (${fn.file})\n  - Calls: ${fn.calls.join(', ')}`).join('\n')}
`).join('\n')}

## Parameters & Returns
${function_calls.sequences.map(s => `- **${s.method}** → Returns: ${s.returns || 'void'}`).join('\n')}
`
})
```

**c) data-flow.md** (~4K tokens):
```javascript
// Convert data_flow.transformations → Mermaid flowchart LR
const dataFlowMermaid = `
flowchart LR
${data_flow.transformations.map((t, i) => `  Stage${i}[${t.from}] -->|${t.transformer}| Stage${i+1}[${t.to}]`).join('\n')}
`

Write({
  file_path: `${CODEMAP_DIR}/data-flow.md`,
  content: `---
feature: ${feature}
level: data
detail: data structure transformations
---
# Data Flow: ${feature}

## Data Transformation Diagram
\`\`\`mermaid
${dataFlowMermaid}
\`\`\`

## Data Structures
${data_flow.structures.map(s => `### ${s.name} (${s.stage})\n\`\`\`json\n${JSON.stringify(s.shape, null, 2)}\n\`\`\``).join('\n\n')}

## Transformations
${data_flow.transformations.map(t => `- **${t.from} → ${t.to}** via \`${t.transformer}\` (${t.file})`).join('\n')}
`
})
```

**d) conditional-paths.md** (~4K tokens):
```javascript
// Convert conditional_logic.branches → Mermaid flowchart TD
const conditionalMermaid = `
flowchart TD
  Start[Entry Point]
${conditional_logic.branches.map((b, i) => `
  Start --> Check${i}{${b.condition}}
  Check${i} -->|Yes| Path${i}A[${b.true_path}]
  Check${i} -->|No| Path${i}B[${b.false_path}]
`).join('\n')}
`

Write({
  file_path: `${CODEMAP_DIR}/conditional-paths.md`,
  content: `---
feature: ${feature}
level: conditional
detail: decision trees and error paths
---
# Conditional Paths: ${feature}

## Decision Tree
\`\`\`mermaid
${conditionalMermaid}
\`\`\`

## Branch Conditions
${conditional_logic.branches.map(b => `- **${b.condition}** (${b.file})\n  - True: ${b.true_path}\n  - False: ${b.false_path}`).join('\n')}

## Error Handling
${conditional_logic.error_handling.map(e => `- **${e.error_type}**: Handler \`${e.handler}\` (${e.file}) - Recovery: ${e.recovery}`).join('\n')}
`
})
```

**e) complete-flow.md** (~8K tokens):
```javascript
// Integrate all Mermaid diagrams
Write({
  file_path: `${CODEMAP_DIR}/complete-flow.md`,
  content: `---
feature: ${feature}
level: complete
detail: integrated multi-level view
---
# Complete Flow: ${feature}

## Integrated Flow Diagram
\`\`\`mermaid
graph TB
  subgraph Architecture
${architecture.modules.map(m => `    ${m.name}[${m.name}]`).join('\n')}
  end

  subgraph "Function Calls"
${function_calls.call_chains[0]?.sequence.map(fn => `    ${fn.function}`).join('\n') || ''}
  end

  subgraph "Data Flow"
${data_flow.structures.map(s => `    ${s.name}[${s.name}]`).join('\n')}
  end
\`\`\`

## Complete Trace
[Comprehensive end-to-end documentation combining all analysis layers]

## Design Patterns Identified
${design_patterns.map(p => `- **${p.pattern}** in ${p.location}: ${p.description}`).join('\n')}

## Recommendations
${analysis.recommendations.map(r => `- ${r}`).join('\n')}

## Cross-References
- [Architecture Flow](./architecture-flow.md) - High-level module structure
- [Function Calls](./function-calls.md) - Detailed call chains
- [Data Flow](./data-flow.md) - Data transformation stages
- [Conditional Paths](./conditional-paths.md) - Decision trees and error handling
`
})
```

3. **Write metadata.json**:
```javascript
Write({
  file_path: `${CODEMAP_DIR}/metadata.json`,
  content: JSON.stringify({
    feature: feature,
    normalized_name: normalized_feature,
    generated_at: new Date().toISOString(),
    tool_used: analysis.analysis_metadata.tool_used,
    files_analyzed: files_analyzed.map(f => f.file),
    analysis_summary: {
      total_files: files_analyzed.length,
      modules_traced: architecture.modules.length,
      functions_traced: function_calls.call_chains.reduce((sum, c) => sum + c.sequence.length, 0),
      patterns_discovered: design_patterns.length
    }
  }, null, 2)
})
```

4. **Report Phase 2 Completion**:
```
Phase 2 Complete: Code flow analysis and documentation generated

- Agent Analysis: cli-explore-agent with {TOOL}
- Files Analyzed: {count}
- Documentation Generated: 5 markdown files + metadata.json
- Location: {CODEMAP_DIR}
```

**Completion Criteria**:
- cli-explore-agent task completed successfully with JSON result
- 5 documentation files written with valid Mermaid diagrams
- metadata.json written with analysis summary
- All files properly formatted and cross-referenced

**TodoWrite**: Mark phase 2 completed, phase 3 in_progress

---

### Phase 3: Generate SKILL.md Index

**Note**: This phase **ALWAYS executes** - generates or updates the SKILL index.

**Goal**: Read generated flow documentation and create SKILL.md index with progressive loading

**Steps**:

1. **Verify Generated Files**:
```bash
bash(find "{CODEMAP_DIR}" -name "*.md" -type f | sort)
```
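
A slightly more explicit check (file names taken from this command's own output list) can flag missing documents before indexing; a minimal sketch:

```bash
# Sketch: report any expected flow document that is missing from the codemap package
for doc in architecture-flow function-calls data-flow conditional-paths complete-flow; do
  [ -f "$CODEMAP_DIR/$doc.md" ] || echo "Missing: $CODEMAP_DIR/$doc.md (consider --regenerate)"
done
```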

2. **Read metadata.json**:
```javascript
Read({ file_path: `${CODEMAP_DIR}/metadata.json` })
// Extract: feature, normalized_name, files_analyzed, analysis_summary
```

3. **Read File Headers** (optional, first 30 lines):
```javascript
Read({ file_path: `${CODEMAP_DIR}/architecture-flow.md`, limit: 30 })
Read({ file_path: `${CODEMAP_DIR}/function-calls.md`, limit: 30 })
// Extract overview and diagram counts
```

4. **Generate SKILL.md Index**:

Template structure:
```yaml
---
name: codemap-{normalized_feature}
description: Code flow mapping for {FEATURE_KEYWORD} feature (located at {project_path}). Load this SKILL when analyzing, tracing, or understanding {FEATURE_KEYWORD} execution flow, especially when no relevant context exists in memory.
version: 1.0.0
generated_at: {ISO_TIMESTAMP}
---
# Code Flow Map: {FEATURE_KEYWORD}

## Feature: `{FEATURE_KEYWORD}`

**Analysis Date**: {DATE}
**Tool Used**: {TOOL}
**Files Analyzed**: {COUNT}

## Progressive Loading

### Level 0: Quick Overview (~2K tokens)
- [Architecture Flow](./architecture-flow.md) - High-level module interactions

### Level 1: Core Flows (~10K tokens)
- [Architecture Flow](./architecture-flow.md) - Module architecture
- [Function Calls](./function-calls.md) - Function call chains

### Level 2: Complete Analysis (~20K tokens)
- [Architecture Flow](./architecture-flow.md)
- [Function Calls](./function-calls.md)
- [Data Flow](./data-flow.md) - Data transformations

### Level 3: Deep Dive (~30K tokens)
- [Architecture Flow](./architecture-flow.md)
- [Function Calls](./function-calls.md)
- [Data Flow](./data-flow.md)
- [Conditional Paths](./conditional-paths.md) - Branches and error handling
- [Complete Flow](./complete-flow.md) - Integrated comprehensive view

## Usage

Load this SKILL package when:
- Analyzing {FEATURE_KEYWORD} implementation
- Tracing execution flow for debugging
- Understanding code dependencies
- Planning refactoring or enhancements

## Analysis Summary

- **Modules Traced**: {modules_traced}
- **Functions Traced**: {functions_traced}
- **Files Analyzed**: {total_files}

## Mermaid Diagrams Included

- Architecture flow diagram (graph TD)
- Function call sequence diagram (sequenceDiagram)
- Data transformation flowchart (flowchart LR)
- Conditional decision tree (flowchart TD)
- Complete integrated diagram (graph TB)
```

5. **Write SKILL.md**:
```javascript
Write({
  file_path: `{CODEMAP_DIR}/SKILL.md`,
  content: generatedIndexMarkdown
})
```

**Completion Criteria**:
- SKILL.md index written
- All documentation files verified
- Progressive loading levels (0-3) properly structured
- Mermaid diagram references included

**TodoWrite**: Mark phase 3 completed

**Final Report**:
```
Code Flow Mapping Complete

Feature: {FEATURE_KEYWORD}
Location: .claude/skills/codemap-{normalized_feature}/

Files Generated:
- SKILL.md (index)
- architecture-flow.md (with Mermaid diagram)
- function-calls.md (with Mermaid sequence diagram)
- data-flow.md (with Mermaid flowchart)
- conditional-paths.md (with Mermaid decision tree)
- complete-flow.md (with integrated Mermaid diagram)
- metadata.json

Analysis:
- Files analyzed: {count}
- Modules traced: {count}
- Functions traced: {count}

Usage: Skill(command: "codemap-{normalized_feature}")
```

---

## Implementation Details

### TodoWrite Patterns

**Initialization** (Before Phase 1):
```javascript
TodoWrite({todos: [
  {"content": "Parse feature keyword and check existing", "status": "in_progress", "activeForm": "Parsing feature keyword"},
  {"content": "Agent analyzes code flow and generates files", "status": "pending", "activeForm": "Analyzing code flow"},
  {"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL index"}
]})
```

**Full Path** (SKIP_GENERATION = false):
```javascript
// After Phase 1
TodoWrite({todos: [
  {"content": "Parse feature keyword and check existing", "status": "completed", ...},
  {"content": "Agent analyzes code flow and generates files", "status": "in_progress", ...},
  {"content": "Generate SKILL.md index", "status": "pending", ...}
]})

// After Phase 2
TodoWrite({todos: [
  {"content": "Parse feature keyword and check existing", "status": "completed", ...},
  {"content": "Agent analyzes code flow and generates files", "status": "completed", ...},
  {"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})

// After Phase 3
TodoWrite({todos: [
  {"content": "Parse feature keyword and check existing", "status": "completed", ...},
  {"content": "Agent analyzes code flow and generates files", "status": "completed", ...},
  {"content": "Generate SKILL.md index", "status": "completed", ...}
]})
```

**Skip Path** (SKIP_GENERATION = true):
```javascript
// After Phase 1 (skip Phase 2)
TodoWrite({todos: [
  {"content": "Parse feature keyword and check existing", "status": "completed", ...},
  {"content": "Agent analyzes code flow and generates files", "status": "completed", ...}, // Skipped
  {"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
```

### Execution Flow

**Full Path**:
```
User → TodoWrite Init → Phase 1 (parse) → Phase 2 (agent analyzes) → Phase 3 (write index) → Report
```

**Skip Path**:
```
User → TodoWrite Init → Phase 1 (detect existing) → Phase 3 (update index) → Report
```

### Error Handling

**Phase 1 Errors**:
- Empty feature keyword: Report the error and ask the user to provide a feature description
- Invalid characters: Normalize and continue

**Phase 2 Errors (Agent)**:
- Agent task fails: Retry once, report if it fails again
- No files discovered: Warn the user and ask for a more specific feature keyword
- CLI failures: Agent handles internally with retries
- Invalid Mermaid syntax: Orchestrator validates generated diagrams before writing

**Phase 3 Errors**:
- Write failures: Report which files failed
- Missing files: Note in SKILL.md, suggest regeneration

---

## Parameters

```bash
/memory:code-map-memory "feature-keyword" [--regenerate] [--tool <gemini|qwen>]
```

**Arguments**:
- **"feature-keyword"**: Feature or flow to analyze (required)
  - Examples: `"user authentication"`, `"payment processing"`, `"数据导入流程"` (data import flow)
  - Can be English, Chinese, or mixed
  - Spaces and underscores normalized to hyphens
- **--regenerate**: Force regenerate existing codemap (deletes and recreates)
- **--tool**: CLI tool for analysis (default: gemini)
  - `gemini`: Comprehensive flow analysis with gemini-2.5-pro
  - `qwen`: Alternative with coder-model

---

## Examples

**Generated File Structure** (for all examples):
```
.claude/skills/codemap-{feature}/
├── SKILL.md                 # Index (Phase 3)
├── architecture-flow.md     # Agent (Phase 2) - High-level flow
├── function-calls.md        # Agent (Phase 2) - Function chains
├── data-flow.md             # Agent (Phase 2) - Data transformations
├── conditional-paths.md     # Agent (Phase 2) - Branches & errors
├── complete-flow.md         # Agent (Phase 2) - Integrated view
└── metadata.json            # Agent (Phase 2)
```

### Example 1: User Authentication Flow

```bash
/memory:code-map-memory "user authentication"
```

**Workflow**:
1. Phase 1: Normalizes to "user-authentication", checks existing codemap
2. Phase 2: Agent discovers auth-related files, executes CLI analysis, generates 5 flow docs with Mermaid
3. Phase 3: Generates SKILL.md index with progressive loading

**Output**: `.claude/skills/codemap-user-authentication/` with 6 files + metadata

### Example 3: Regenerate with Qwen

```bash
/memory:code-map-memory "payment processing" --regenerate --tool qwen
```

**Workflow**:
1. Phase 1: Deletes existing codemap due to --regenerate
2. Phase 2: Agent uses qwen with coder-model for fresh analysis
3. Phase 3: Generates updated SKILL.md

---

## Benefits

- **Per-Feature SKILL**: Independent packages for each analyzed feature
- **Specialized Agent**: cli-explore-agent with Deep Scan mode (Bash + Gemini dual-source)
- **Professional Analysis**: Pre-defined workflow for code exploration and structure analysis
- **Clear Separation**: Agent analyzes (JSON) → Orchestrator documents (Mermaid markdown)
- **Multi-Level Detail**: 4 levels (architecture → function → data → conditional)
- **Visual Flow**: Embedded Mermaid diagrams for all flow types
- **Progressive Loading**: Token-efficient context loading (2K → 30K)
- **Auto-Continue**: Fully autonomous 3-phase execution
- **Smart Skip**: Detects existing codemap, 10x faster index updates
- **CLI Integration**: Gemini/Qwen for deep semantic understanding

## Architecture

```
code-map-memory (orchestrator)
├─ Phase 1: Parse & Check (bash commands, skip decision)
├─ Phase 2: Code Analysis & Documentation (skippable)
│  ├─ Phase 2a: cli-explore-agent Analysis
│  │  └─ Deep Scan: Bash structural + Gemini semantic → JSON
│  └─ Phase 2b: Orchestrator Documentation
│     └─ Transform JSON → 5 Mermaid markdown files + metadata.json
└─ Phase 3: Write SKILL.md (index generation, always runs)

Benefits:
✅ Specialized agent: cli-explore-agent with dual-source strategy (Bash + Gemini)
✅ Professional analysis: Pre-defined Deep Scan workflow
✅ Clear separation: Agent analyzes (JSON) → Orchestrator documents (Mermaid)
✅ Smart skip logic: 10x faster when codemap exists
✅ Multi-level detail: Architecture → Functions → Data → Conditionals

Output: .claude/skills/codemap-{feature}/
```

@@ -176,22 +176,48 @@ Execution Complete

prompt=`
Task: ${task_description}

Analyze:
- Relevant files and modules
- Current implementation patterns
- Dependencies and integration points
- Architecture constraints
Analyze and return the following information in structured format:
1. Project Structure: Overall architecture and module organization
2. Relevant Files: List of files that will be affected by this task (with paths)
3. Current Implementation Patterns: Existing code patterns, conventions, and styles
4. Dependencies: External dependencies and internal module dependencies
5. Integration Points: Where this task connects with existing code
6. Architecture Constraints: Technical limitations or requirements
7. Clarification Needs: Ambiguities or missing information requiring user input

Time Limit: 60 seconds
Output: Findings summary + clarification needs

Output Format: Return a JSON-like structured object with the above fields populated.
Include specific file paths, pattern examples, and clear questions for clarifications.
`
)
```

**Expected Return Structure**:
```javascript
explorationContext = {
  project_structure: "Description of overall architecture",
  relevant_files: ["src/auth/service.ts", "src/middleware/auth.ts", ...],
  patterns: "Description of existing patterns (e.g., 'Uses dependency injection pattern', 'React hooks convention')",
  dependencies: "List of dependencies and integration points",
  integration_points: "Where this connects with existing code",
  constraints: "Technical constraints (e.g., 'Must use existing auth library', 'No breaking changes')",
  clarification_needs: [
    {
      question: "Which authentication method to use?",
      context: "Found both JWT and Session patterns",
      options: ["JWT tokens", "Session-based", "Hybrid approach"]
    },
    // ... more clarification questions
  ]
}
```

**Output Processing**:
- Store exploration findings in `explorationContext`
- Identify clarification needs (ambiguities, missing info, assumptions)
- Set `needsClarification` flag if questions exist
- Extract `clarification_needs` array from exploration results
- Set `needsClarification = (clarification_needs.length > 0)`
- Use clarification_needs to generate Phase 2 questions

**Progress Tracking**:
- Mark Phase 1 as completed

@@ -207,44 +233,30 @@ Execution Complete

**Skip Condition**: Only run if Phase 1 set `needsClarification = true`

**Operations**:
- Review exploration findings for ambiguities
- Generate clarification questions based on:
  - Missing requirements
  - Ambiguous specifications
  - Multiple implementation options
  - Unclear dependencies or constraints
  - Assumptions that need confirmation
- Review `explorationContext.clarification_needs` from Phase 1
- Generate AskUserQuestion based on exploration findings
- Focus on ambiguities that affect implementation approach

**AskUserQuestion Format**:
**AskUserQuestion Call** (simplified reference):
```javascript
// Use clarification_needs from exploration to build questions
AskUserQuestion({
  questions: [
    {
      question: "Based on code exploration, I need clarification on: ...",
      header: "Clarify Requirements",
      multiSelect: false,
      options: [
        // Dynamic options based on exploration findings
        // Example: "Which authentication method?" -> Options: JWT, OAuth2, Session
      ]
    }
  ]
  questions: explorationContext.clarification_needs.map(need => ({
    question: `${need.context}\n\n${need.question}`,
    header: "Clarification",
    multiSelect: false,
    options: need.options.map(opt => ({
      label: opt,
      description: `Use ${opt} approach`
    }))
  }))
})
```

**Example Clarification Scenarios**:

| Exploration Finding | Clarification Question | Options |
|---------------------|------------------------|---------|
| "Found 2 auth patterns: JWT and Session" | "Which authentication approach to use?" | JWT / Session-based / Hybrid |
| "API uses both REST and GraphQL" | "Which API style for new endpoints?" | REST / GraphQL / Both |
| "No existing test framework found" | "Which test framework to set up?" | Jest / Vitest / Mocha |
| "Multiple state management libraries" | "Which state manager to use?" | Redux / Zustand / Context |

**Output Processing**:
- Collect user responses
- Update task context with clarifications
- Store in `clarificationContext` variable
- Collect user responses and store in `clarificationContext`
- Format: `{ question_id: selected_answer, ... }`
- This context will be passed to Phase 3 planning

**Progress Tracking**:
- Mark Phase 2 as completed

@@ -314,21 +326,39 @@ Task(

Task: ${task_description}

Exploration Context:
${explorationContext}
${JSON.stringify(explorationContext, null, 2)}

Clarifications:
${clarificationContext || "None"}
User Clarifications:
${JSON.stringify(clarificationContext, null, 2) || "None provided"}

Complexity: ${complexity}
Complexity Level: ${complexity}

Generate detailed task breakdown with:
- Clear task dependencies
- Specific file modifications
- Test requirements
- Rollback considerations (if High complexity)
- Risk assessment
Generate a detailed implementation plan with the following components:

Output: Structured task list (5-10 tasks)
1. Summary: 2-3 sentence overview of the implementation
2. Approach: High-level implementation strategy
3. Task Breakdown: 5-10 specific, actionable tasks
   - Each task should specify:
     * What to do
     * Which files to modify/create
     * Dependencies on other tasks (if any)
4. Task Dependencies: Explicit ordering requirements (e.g., "Task 2 depends on Task 1")
5. Risks: Potential issues and mitigation strategies (for Medium/High complexity)
6. Estimated Time: Total implementation time estimate
7. Recommended Execution: "Direct" (agent) or "CLI" (autonomous tool)

Output Format: Return a structured object with these fields:
{
  summary: string,
  approach: string,
  tasks: string[],
  dependencies: string[] (optional),
  risks: string[] (optional),
  estimated_time: string,
  recommended_execution: "Direct" | "CLI"
}

Ensure tasks are specific, with file paths and clear acceptance criteria.
`
)

@@ -336,6 +366,32 @@ Task(

planObject = agent_output.parse()
```

**Expected Return Structure**:
```javascript
planObject = {
  summary: "Implement JWT-based authentication system with middleware integration",
  approach: "Create auth service layer, implement JWT utilities, add middleware, update routes",
  tasks: [
    "Create authentication service in src/auth/service.ts with login/logout/verify methods",
    "Implement JWT token utilities in src/auth/jwt.ts (generate, verify, refresh)",
    "Add authentication middleware to src/middleware/auth.ts",
    "Update API routes in src/routes/*.ts to use auth middleware",
    "Add integration tests for auth flow in tests/auth.test.ts"
  ],
  dependencies: [
    "Task 3 depends on Task 2 (middleware needs JWT utilities)",
    "Task 4 depends on Task 3 (routes need middleware)",
    "Task 5 depends on Tasks 1-4 (tests need complete implementation)"
  ],
  risks: [
    "Token refresh timing may conflict with existing session logic - test thoroughly",
    "Breaking change if existing auth is in use - plan migration strategy"
  ],
  estimated_time: "30-45 minutes",
  recommended_execution: "CLI" // Based on clear requirements and straightforward implementation
}
```

**Output Structure**:
```javascript
planObject = {

@@ -370,120 +426,54 @@ planObject = {

**Operations**:
- Display plan summary with full task breakdown
- Two-dimensional user input: Task confirmation + Execution method selection
- Collect two-dimensional user input: Task confirmation + Execution method selection
- Support modification flow if user requests changes

**AskUserQuestion Format** (Two questions):

**Question 1: Task Confirmation**

Display plan to user and ask for confirmation:
- Show: summary, approach, task breakdown, dependencies, risks, complexity, estimated time
- Options: "Confirm" / "Modify" / "Cancel"
- If Modify: Collect feedback via "Other" option, re-run Phase 3 with modifications
- If Cancel: Exit workflow
- If Confirm: Proceed to Question 2

**Question 2: Execution Method Selection** (Only if task confirmed)

Ask user to select execution method:
- Show recommendation from `planObject.recommended_execution`
- Options:
  - "Direct - Execute with Agent" (@code-developer)
  - "CLI - Gemini" (gemini-2.5-pro)
  - "CLI - Codex" (gpt-5)
  - "CLI - Qwen" (coder-model)
- Store selection for Phase 5 execution

**Simplified AskUserQuestion Reference**:
```javascript
// Question 1: Task Confirmation
AskUserQuestion({
  questions: [{
    question: `
Implementation Plan:

Summary: ${planObject.summary}

Approach: ${planObject.approach}

Task Breakdown (${planObject.tasks.length} tasks):
${planObject.tasks.map((t, i) => `  ${i+1}. ${t}`).join('\n')}

${planObject.dependencies ? `\nDependencies:\n${planObject.dependencies.join('\n')}` : ''}
${planObject.risks ? `\nRisks:\n${planObject.risks.join('\n')}` : ''}

Complexity: ${planObject.complexity}
Estimated Time: ${planObject.estimated_time}

Do you confirm this implementation plan?`,
    header: "Confirm Tasks",
    multiSelect: false,
    question: `[Display plan with all details]\n\nDo you confirm this plan?`,
    header: "Confirm Plan",
    options: [
      {
        label: "Confirm - Proceed to execution",
        description: "Tasks look good, ready to execute"
      },
      {
        label: "Modify - Adjust plan",
        description: "Need to adjust tasks or approach"
      },
      {
        label: "Cancel - Abort",
        description: "Don't execute, abort this planning session"
      }
      { label: "Confirm", description: "Proceed to execution" },
      { label: "Modify", description: "Adjust plan" },
      { label: "Cancel", description: "Abort" }
    ]
  }]
})
```

**If Confirm**: Proceed to Question 2
**If Modify**:
```javascript
// Question 2: Execution Method (if confirmed)
AskUserQuestion({
  questions: [{
    question: "What would you like to modify about the plan?",
    header: "Plan Modifications",
    multiSelect: false,
    options: [
      {
        label: "Add specific requirements",
        description: "Provide additional requirements or constraints"
      },
      {
        label: "Remove/simplify tasks",
        description: "Some tasks are unnecessary or too detailed"
      },
      {
        label: "Change approach",
        description: "Different implementation strategy needed"
      },
      {
        label: "Clarify ambiguities",
        description: "Tasks are unclear or ambiguous"
      }
    ]
  }]
})
// After modification input, re-run Phase 3 with user feedback
```

**Question 2: Execution Method Selection** (Only if confirmed)
```javascript
AskUserQuestion({
  questions: [{
    question: `
Select execution method:

${planObject.recommended_execution === "Direct" ? "[Recommended] " : ""}Direct Execution (Agent):
- Current Claude agent executes tasks with full context
- Interactive progress tracking
- Best for: Complex logic, iterative development

${planObject.recommended_execution === "CLI" ? "[Recommended] " : ""}CLI Execution:
- External CLI tool (Gemini/Codex/Qwen) executes tasks
- Autonomous execution, faster for straightforward tasks
- Best for: Clear requirements, bulk operations

Choose execution method:`,
    question: `Select execution method:\n[Show recommendation and tool descriptions]`,
    header: "Execution Method",
    multiSelect: false,
    options: [
      {
        label: "Direct - Execute with Agent",
        description: "Use @code-developer agent (interactive, recommended for ${planObject.complexity})"
      },
      {
        label: "CLI - Gemini",
        description: "Fast semantic analysis (gemini-2.5-pro)"
      },
      {
        label: "CLI - Codex",
        description: "Autonomous development (gpt-5)"
      },
      {
        label: "CLI - Qwen",
        description: "Code analysis specialist (coder-model)"
      }
      { label: "Direct - Agent", description: "Interactive execution" },
      { label: "CLI - Gemini", description: "gemini-2.5-pro" },
      { label: "CLI - Codex", description: "gpt-5" },
      { label: "CLI - Qwen", description: "coder-model" }
    ]
  }]
})

@@ -154,6 +154,23 @@ Task(subagent_type="ui-design-agent",

## Code Import Extraction Strategy

**Step 0: Fast Conflict Detection** (Use Bash/Grep for quick global scan)
- Quick scan: \`rg --color=never -n "^\\s*--primary:|^\\s*--secondary:|^\\s*--accent:" --type css ${source}\` to find core color definitions with line numbers
- Semantic search: \`rg --color=never -B3 -A1 "^\\s*--primary:" --type css ${source}\` to capture surrounding context and comments
- Core token scan: Search for --primary, --secondary, --accent, --background patterns to detect all theme-critical definitions
- Pattern: rg → Extract values → Compare → If different → Read full context with comments → Record conflict
- Alternative (if many files): Execute CLI analysis for comprehensive report:
\`\`\`bash
cd ${source} && gemini -p \"
PURPOSE: Detect color token conflicts across all CSS/SCSS/JS files
TASK: • Scan all files for color definitions • Identify conflicting values • Extract semantic comments
MODE: analysis
CONTEXT: @**/*.css @**/*.scss @**/*.js @**/*.ts
EXPECTED: JSON report listing conflicts with file:line, values, semantic context
RULES: Focus on core tokens | Report ALL variants | analysis=READ-ONLY
\"
\`\`\`
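- Value-comparison sketch (illustrative only, following the rg → Extract → Compare pattern above; the token name and temp path are examples):
\`\`\`bash
# List every --primary definition, then show the distinct values; more than one distinct value = conflict
rg --color=never -n --no-heading "^\\s*--primary:" --type css ${source} | tee /tmp/primary-defs.txt
cut -d: -f3- /tmp/primary-defs.txt | sed 's/^[[:space:]]*//' | sort -u
# If the unique list has more than one line, revisit /tmp/primary-defs.txt for each variant's file:line context
\`\`\`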

**Step 1: Load file list**
- Read(${intermediates_dir}/discovered-files.json)
- Extract: file_types.css.files, file_types.js.files, file_types.html.files

@@ -252,6 +269,23 @@ Task(subagent_type="ui-design-agent",

## Code Import Extraction Strategy

**Step 0: Fast Animation Discovery** (Use Bash/Grep for quick pattern detection)
- Quick scan: \`rg --color=never -n "@keyframes|animation:|transition:" --type css ${source}\` to find animation definitions with line numbers
- Framework detection: \`rg --color=never "framer-motion|gsap|@react-spring|react-spring" --type js --type ts ${source}\` to detect animation frameworks
- Pattern categorization: \`rg --color=never -B2 -A5 "@keyframes" --type css ${source}\` to extract keyframe animations with context
- Pattern: rg → Identify animation types → Map framework usage → Prioritize extraction targets
- Alternative (if complex framework mix): Execute CLI analysis for comprehensive report:
\`\`\`bash
cd ${source} && gemini -p \"
PURPOSE: Detect animation frameworks and patterns
TASK: • Identify frameworks • Map animation patterns • Categorize by complexity
MODE: analysis
CONTEXT: @**/*.css @**/*.scss @**/*.js @**/*.ts
EXPECTED: JSON report listing frameworks, animation types, file locations
RULES: Focus on framework consistency | Map all animations | analysis=READ-ONLY
\"
\`\`\`

**Step 1: Load file list**
- Read(${intermediates_dir}/discovered-files.json)
- Extract: file_types.css.files, file_types.js.files, file_types.html.files

@@ -314,6 +348,23 @@ Task(subagent_type="ui-design-agent",

## Code Import Extraction Strategy

**Step 0: Fast Component Discovery** (Use Bash/Grep for quick component scan)
- Layout pattern scan: \`rg --color=never -n "display:\\s*(grid|flex)|grid-template" --type css ${source}\` to find layout systems
- Component class scan: \`rg --color=never "class.*=.*\\"[^\"]*\\b(btn|button|card|input|modal|dialog|dropdown)" --type html --type js --type ts ${source}\` to identify UI components
- Universal component heuristic: Components appearing in 3+ files = universal, <3 files = specialized
- Pattern: rg → Count occurrences → Classify by frequency → Prioritize universal components
- Alternative (if large codebase): Execute CLI analysis for comprehensive categorization:
\`\`\`bash
cd ${source} && gemini -p \"
PURPOSE: Classify components as universal vs specialized
TASK: • Identify UI components • Classify reusability • Map layout systems
MODE: analysis
CONTEXT: @**/*.css @**/*.scss @**/*.js @**/*.ts @**/*.html
EXPECTED: JSON report categorizing components, layout patterns, naming conventions
RULES: Focus on component reusability | Identify layout systems | analysis=READ-ONLY
\"
\`\`\`
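- Frequency-classification sketch (illustrative only, applying the 3+ file heuristic above; the class-name list is an example):
\`\`\`bash
# Count how many files mention each candidate component class, then classify by the 3+ file heuristic
for token in btn button card input modal dialog dropdown; do
  count=$(rg --color=never -l "\\b$token\\b" --type html --type js --type ts ${source} | wc -l)
  if [ "$count" -ge 3 ]; then kind="universal"; else kind="specialized"; fi
  echo "$token: $count files ($kind)"
done
\`\`\`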

**Step 1: Load file list**
- Read(${intermediates_dir}/discovered-files.json)
- Extract: file_types.css.files, file_types.js.files, file_types.html.files
- Extract: file_types.css.files, file_types.js.files, file_types.html.files
|
||||
|
||||
Reference in New Issue
Block a user