Mirror of https://github.com/catlog22/Claude-Code-Workflow.git (synced 2026-02-06 01:54:11 +08:00)

Compare commits: v5.8.0...claude/add (77 commits)
Commits (SHA1):
c34a6042c0, 383da9ebb7, fc965c87d7, 50a36ded97, c5a0f635f4, ca9653c2e6, 751d251433, 51b1eb5da6, 275ed051c6, fa7f37695e,
5e69748016, f1fff34a9d, 8ae3da8f61, 62ffc5c645, 758321b829, 85d7fd9340, fbd41a0851, 2a63ab5e0a, 46527c5b9a, b9e893245b,
d96a8a06a0, 957473aa71, c56bf68d87, 9627b42c03, 292dc113e3, c3818fdb79, 9f322e0f34, 89a61acb71, 9b07310d68, 487b359266,
bc5ddb3670, 45a082d963, 19ebb2dc82, d9fcdad949, 2aacc34c24, 4dafec7054, b4e09213e4, 3f7db2fdbc, 7bcf7f24a3, 0a6c90c345,
4a0eef03a2, 9cb9b2213b, 0e21c0dba7, 8e4e751655, 6ebb1801d1, 0380cbb7b8, 85ef755c12, a5effb9784, 1d766ed4ad, fe0d30256c,
1c416b538d, 81362c14de, fa6257ecae, ccb4490ed4, 58206f1996, 564bcb72ea, 965a80b54e, 8f55bf2ece, a721c50ba3, 4a5c8490b1,
2f0ca988f4, a45f5e9dc2, b8dc3018d4, 9d4c9ef212, d7ffd6ee32, 02ee426af0, e76e5bbf5c, 763c51cb28, c7542d95c8, 02bf6e296c,
f839a3afb8, 79714edc9a, f9c33bd0ba, e4a29c0b2e, ca18043b14, 871a02c1f8, 3747a7b083
@@ -102,7 +102,7 @@ Phase 2: Document Generation (Autonomous Output)
|
||||
1. **Extract Tasks**: Parse `analysis_results.tasks` array
|
||||
2. **Map Artifacts**: Use `artifacts_inventory` to add artifact references to task.context
|
||||
3. **Assess Complexity**: Use `analysis_results.complexity` for document structure decision
|
||||
4. **Session Paths**: Use `session_id` to construct output paths (.workflow/{session_id}/)
|
||||
4. **Session Paths**: Use `session_id` to construct output paths (.workflow/active/{session_id}/)
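A minimal sketch of the steps above (the object shapes and lookup by `task.id` are assumptions, not the workflow's actual schema):

```javascript
// Illustrative only: attach artifacts, pick document structure, and build session-scoped paths
const tasks = analysis_results.tasks.map(task => ({
  ...task,
  context: {
    ...task.context,
    artifacts: artifacts_inventory[task.id] || []   // Step 2: artifact references per task
  }
}))

const useSingleDocument = analysis_results.complexity === "Low"   // Step 3: structure decision
const sessionRoot = `.workflow/active/${session_id}`              // Step 4: session-scoped output root
const outputs = {
  implPlan: `${sessionRoot}/IMPL_PLAN.md`,
  todoList: `${sessionRoot}/TODO_LIST.md`
}
```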
|
||||
|
||||
### MCP Integration Guidelines
|
||||
|
||||
@@ -244,14 +244,14 @@ Generate individual `.task/IMPL-*.json` files with:
|
||||
- Low priority: role_analyses
|
||||
|
||||
### 3. Implementation Plan Creation
|
||||
Generate `IMPL_PLAN.md` at `.workflow/{session_id}/IMPL_PLAN.md`:
|
||||
Generate `IMPL_PLAN.md` at `.workflow/active/{session_id}/IMPL_PLAN.md`:
|
||||
|
||||
**Structure**:
|
||||
```markdown
|
||||
---
|
||||
identifier: {session_id}
|
||||
source: "User requirements"
|
||||
analysis: .workflow/{session_id}/.process/ANALYSIS_RESULTS.md
|
||||
analysis: .workflow/active/{session_id}/.process/ANALYSIS_RESULTS.md
|
||||
---
|
||||
|
||||
# Implementation Plan: {Project Title}
|
||||
@@ -280,7 +280,7 @@ analysis: .workflow/{session_id}/.process/ANALYSIS_RESULTS.md
|
||||
```
|
||||
|
||||
### 4. TODO List Generation
|
||||
Generate `TODO_LIST.md` at `.workflow/{session_id}/TODO_LIST.md`:
|
||||
Generate `TODO_LIST.md` at `.workflow/active/{session_id}/TODO_LIST.md`:
|
||||
|
||||
**Structure**:
|
||||
```markdown
|
||||
|
||||
@@ -162,15 +162,15 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)
|
||||
|
||||
**Gemini/Qwen (Write)**:
|
||||
```bash
|
||||
cd {dir} && gemini -p "..." -m gemini-2.5-flash --approval-mode yolo
|
||||
cd {dir} && gemini -p "..." --approval-mode yolo
|
||||
```
|
||||
|
||||
**Codex (Auto)**:
|
||||
```bash
|
||||
codex -C {dir} --full-auto exec "..." -m gpt-5 --skip-git-repo-check -s danger-full-access
|
||||
codex -C {dir} --full-auto exec "..." --skip-git-repo-check -s danger-full-access
|
||||
|
||||
# Resume: Add 'resume --last' after prompt
|
||||
codex --full-auto exec "..." resume --last -m gpt-5 --skip-git-repo-check -s danger-full-access
|
||||
codex --full-auto exec "..." resume --last --skip-git-repo-check -s danger-full-access
|
||||
```
|
||||
|
||||
**Cross-Directory** (Gemini/Qwen):
|
||||
@@ -190,11 +190,11 @@ cd src/auth && gemini -p "CONTEXT: @**/* @../shared/**/*" --include-directories
|
||||
|
||||
**Session Detection**:
|
||||
```bash
|
||||
find .workflow/ -name '.active-*' -type f
|
||||
find .workflow/active/ -name 'WFS-*' -type d
|
||||
```
|
||||
|
||||
**Output Paths**:
|
||||
- **With session**: `.workflow/WFS-{id}/.chat/{agent}-{timestamp}.md`
|
||||
- **With session**: `.workflow/active/WFS-{id}/.chat/{agent}-{timestamp}.md`
|
||||
- **No session**: `.workflow/.scratchpad/{agent}-{description}-{timestamp}.md`
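A minimal sketch (helper name and timestamp format are assumptions) of how the two output paths above might be chosen:

```javascript
// Illustrative only: pick the log path depending on whether an active WFS-* session exists
const fs = require("fs")
const path = require("path")

function resolveLogPath(agent, description) {
  const activeRoot = ".workflow/active"
  const timestamp = new Date().toISOString().replace(/[:.]/g, "-")
  const session = fs.existsSync(activeRoot)
    ? fs.readdirSync(activeRoot).find(entry => entry.startsWith("WFS-"))
    : undefined
  return session
    ? path.join(activeRoot, session, ".chat", `${agent}-${timestamp}.md`)
    : path.join(".workflow/.scratchpad", `${agent}-${description}-${timestamp}.md`)
}
```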
|
||||
|
||||
**Log Structure**:
|
||||
|
||||
@@ -22,7 +22,7 @@ description: |
|
||||
- Progressive disclosure: Quick overview → detailed analysis → dependency deep-dive
|
||||
- Context-aware filtering based on task requirements
|
||||
|
||||
color: blue
|
||||
color: yellow
|
||||
---
|
||||
|
||||
You are a specialized **CLI Exploration Agent** that executes read-only code analysis tasks autonomously to discover module structures, map dependencies, and understand architectural patterns.
|
||||
@@ -513,37 +513,19 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-
|
||||
- Use Gemini semantic analysis as tiebreaker
|
||||
- Document uncertainty in report with attribution
|
||||
|
||||
## Integration with Other Agents
|
||||
## Available Tools & Services
|
||||
|
||||
### As Service Provider (Called by Others)
|
||||
|
||||
**Planning Agents** (`action-planning-agent`, `conceptual-planning-agent`):
|
||||
- **Use Case**: Pre-planning reconnaissance to understand existing code
|
||||
- **Input**: Task description + focus areas
|
||||
- **Output**: Structural overview + dependency analysis
|
||||
- **Flow**: Planning agent → CLI explore agent (quick-scan) → Context for planning
|
||||
|
||||
**Execution Agents** (`code-developer`, `cli-execution-agent`):
|
||||
- **Use Case**: Refactoring impact analysis before code modifications
|
||||
- **Input**: Target files/functions to modify
|
||||
- **Output**: Dependency map + risk assessment
|
||||
- **Flow**: Execution agent → CLI explore agent (dependency-map) → Safe modification strategy
|
||||
|
||||
**UI Design Agent** (`ui-design-agent`):
|
||||
- **Use Case**: Discover existing UI components and design tokens
|
||||
- **Input**: Component directory + file patterns
|
||||
- **Output**: Component inventory + styling patterns
|
||||
- **Flow**: UI agent delegates structure analysis to CLI explore agent
|
||||
|
||||
### As Consumer (Calls Others)
|
||||
This agent can leverage the following tools to enhance analysis:
|
||||
|
||||
**Context Search Agent** (`context-search-agent`):
|
||||
- **Use Case**: Get project-wide context before analysis
|
||||
- **Flow**: CLI explore agent → Context search agent → Enhanced analysis with full context
|
||||
- **When to use**: Need comprehensive project understanding beyond file structure
|
||||
- **Integration**: Call context-search-agent first, then use results to guide exploration
|
||||
|
||||
**MCP Tools**:
|
||||
**MCP Tools** (Code Index):
|
||||
- **Use Case**: Enhanced file discovery and search capabilities
|
||||
- **Flow**: CLI explore agent → Code Index MCP → Faster pattern discovery
|
||||
- **When to use**: Large codebases requiring fast pattern discovery
|
||||
- **Integration**: Prefer Code Index MCP when available, fallback to rg/bash tools
|
||||
|
||||
## Key Reminders
|
||||
|
||||
@@ -636,52 +618,3 @@ rg "^import .*;" --type java -n
|
||||
# Find test files
|
||||
find . -name "*Test.java" -o -name "*Tests.java"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Performance Optimization
|
||||
|
||||
### Caching Strategy (Optional)
|
||||
|
||||
**Project Structure Cache**:
|
||||
- Cache `get_modules_by_depth.sh` output for 1 hour
|
||||
- Invalidate on file system changes (watch .git/index)
|
||||
|
||||
**Pattern Match Cache**:
|
||||
- Cache rg results for common patterns (class/function definitions)
|
||||
- Invalidate on file modifications
|
||||
|
||||
**Gemini Analysis Cache**:
|
||||
- Cache semantic analysis results for unchanged files
|
||||
- Key: file_path + content_hash
|
||||
- TTL: 24 hours
|
||||
|
||||
### Parallel Execution
|
||||
|
||||
**Quick-Scan Mode**:
|
||||
- Run rg searches in parallel (classes, functions, imports)
|
||||
- Merge results after completion
|
||||
|
||||
**Deep-Scan Mode**:
|
||||
- Execute Bash scan (Phase 1) and Gemini setup concurrently
|
||||
- Wait for Phase 1 completion before Phase 2 (Gemini needs context)
|
||||
|
||||
**Dependency-Map Mode**:
|
||||
- Discover imports and exports in parallel
|
||||
- Build graph after all discoveries complete
|
||||
|
||||
### Resource Limits
|
||||
|
||||
**File Count Limits**:
|
||||
- Quick-scan: Unlimited (filtered by relevance)
|
||||
- Deep-scan: Max 100 files for Gemini analysis
|
||||
- Dependency-map: Max 500 modules for graph construction
|
||||
|
||||
**Timeout Limits**:
|
||||
- Quick-scan: 30 seconds (bash-only, fast)
|
||||
- Deep-scan: 5 minutes (includes Gemini CLI)
|
||||
- Dependency-map: 10 minutes (graph construction + analysis)
|
||||
|
||||
**Memory Limits**:
|
||||
- Cap rg output at roughly 10MB (e.g., limit matches per file with `--max-count`)
|
||||
- Stream large outputs instead of loading into memory
|
||||
|
||||
.claude/agents/cli-lite-planning-agent.md (Normal file, 724 lines added)
@@ -0,0 +1,724 @@
|
||||
---
|
||||
name: cli-lite-planning-agent
|
||||
description: |
|
||||
Specialized agent for executing CLI planning tools (Gemini/Qwen) to generate detailed implementation plans with actionable task breakdowns. Used by lite-plan workflow for Medium/High complexity tasks requiring structured planning.
|
||||
|
||||
Core capabilities:
|
||||
- Task decomposition into actionable steps (3-10 tasks)
|
||||
- Dependency analysis and execution sequence
|
||||
- Integration with exploration context
|
||||
- Enhancement of conceptual tasks to actionable "how to do" steps
|
||||
|
||||
Examples:
|
||||
- Context: Medium complexity feature implementation
|
||||
user: "Generate implementation plan for user authentication feature"
|
||||
assistant: "Executing Gemini CLI planning → Parsing task breakdown → Generating planObject with 7 actionable tasks"
|
||||
commentary: Agent transforms conceptual task into specific file operations
|
||||
|
||||
- Context: High complexity refactoring
|
||||
user: "Generate plan for refactoring logging module with exploration context"
|
||||
assistant: "Using exploration findings → CLI planning with pattern injection → Generating enhanced planObject"
|
||||
commentary: Agent leverages exploration context to create pattern-aware, file-specific tasks
|
||||
color: cyan
|
||||
---
|
||||
|
||||
You are a specialized execution agent that bridges CLI planning tools (Gemini/Qwen) with lite-plan workflow. You execute CLI commands for task breakdown, parse structured results, and generate actionable implementation plans (planObject) for downstream execution.
|
||||
|
||||
## Execution Process
|
||||
|
||||
### Input Processing
|
||||
|
||||
**What you receive (Context Package)**:
|
||||
```javascript
|
||||
{
|
||||
"task_description": "User's original task description",
|
||||
"explorationContext": {
|
||||
"project_structure": "Overall architecture description",
|
||||
"relevant_files": ["file1.ts", "file2.ts", "..."],
|
||||
"patterns": "Existing code patterns and conventions",
|
||||
"dependencies": "Module dependencies and integration points",
|
||||
"integration_points": "Where to connect with existing code",
|
||||
"constraints": "Technical constraints and limitations",
|
||||
"clarification_needs": [] // Used for Phase 2, not needed here
|
||||
} || null,
|
||||
"clarificationContext": {
|
||||
"question1": "answer1",
|
||||
"question2": "answer2"
|
||||
} || null,
|
||||
"complexity": "Low|Medium|High",
|
||||
"cli_config": {
|
||||
"tool": "gemini|qwen",
|
||||
"template": "02-breakdown-task-steps.txt",
|
||||
"timeout": 3600000, // 60 minutes for planning
|
||||
"fallback": "qwen"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Context Enrichment Strategy**:
|
||||
```javascript
|
||||
// Merge task description with exploration findings
|
||||
const enrichedContext = {
|
||||
task_description: task_description,
|
||||
relevant_files: explorationContext?.relevant_files || [],
|
||||
patterns: explorationContext?.patterns || "No patterns identified",
|
||||
dependencies: explorationContext?.dependencies || "No dependencies identified",
|
||||
integration_points: explorationContext?.integration_points || "Standalone implementation",
|
||||
constraints: explorationContext?.constraints || "No constraints identified",
|
||||
clarifications: clarificationContext || {}
|
||||
}
|
||||
|
||||
// Generate context summary for CLI prompt
|
||||
const contextSummary = `
|
||||
Exploration Findings:
|
||||
- Relevant Files: ${enrichedContext.relevant_files.join(', ')}
|
||||
- Patterns: ${enrichedContext.patterns}
|
||||
- Dependencies: ${enrichedContext.dependencies}
|
||||
- Integration: ${enrichedContext.integration_points}
|
||||
- Constraints: ${enrichedContext.constraints}
|
||||
|
||||
User Clarifications:
|
||||
${Object.entries(enrichedContext.clarifications).map(([q, a]) => `- ${q}: ${a}`).join('\n')}
|
||||
`
|
||||
```
|
||||
|
||||
### Execution Flow (Three-Phase)
|
||||
|
||||
```
|
||||
Phase 1: Context Preparation & CLI Execution
|
||||
1. Validate context package and extract task context
|
||||
2. Merge task description with exploration and clarification context
|
||||
3. Construct CLI command with planning template
|
||||
4. Execute Gemini/Qwen CLI tool with timeout (60 minutes)
|
||||
5. Handle errors and fallback to alternative tool if needed
|
||||
6. Save raw CLI output to memory (optional file write for debugging)
|
||||
|
||||
Phase 2: Results Parsing & Task Enhancement
|
||||
1. Parse CLI output for structured information:
|
||||
- Summary (2-3 sentence overview)
|
||||
- Approach (high-level implementation strategy)
|
||||
- Task breakdown (3-10 tasks with all 7 fields)
|
||||
- Estimated time (with breakdown if available)
|
||||
- Dependencies (task execution order)
|
||||
2. Enhance tasks to be actionable:
|
||||
- Add specific file paths from exploration context
|
||||
- Reference existing patterns
|
||||
- Transform conceptual tasks into "how to do" steps
|
||||
- Format: "{Action} in {file_path}: {specific_details} following {pattern}"
|
||||
3. Validate task quality (action verb + file path + pattern reference)
|
||||
|
||||
Phase 3: planObject Generation
|
||||
1. Build planObject structure from parsed and enhanced results
|
||||
2. Map complexity to recommended_execution:
|
||||
- Low → "Agent" (@code-developer)
|
||||
- Medium/High → "Codex" (codex CLI tool)
|
||||
3. Return planObject (in-memory, no file writes)
|
||||
4. Return success status to orchestrator (lite-plan)
|
||||
```
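A compact sketch of the three phases above; `validateAndEnhanceTasks` is defined later in this file, while the other helper names are assumptions used only for illustration:

```javascript
// Illustrative flow only; executeCLIPlanning, parseCliOutput, and buildPlanObject are assumed names
async function runLitePlanning(contextPackage) {
  const cliOutput = await executeCLIPlanning(contextPackage)        // Phase 1: CLI execution with fallback
  const parsed = parseCliOutput(cliOutput)                          // Phase 2: structured parsing
  const tasks = validateAndEnhanceTasks(parsed.raw_tasks, contextPackage.explorationContext)
  return buildPlanObject(parsed, tasks, contextPackage.complexity)  // Phase 3: in-memory planObject
}
```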
|
||||
|
||||
## Core Functions
|
||||
|
||||
### 1. CLI Planning Execution
|
||||
|
||||
**Template-Based Command Construction**:
|
||||
```bash
|
||||
cd {project_root} && {cli_tool} -p "
|
||||
PURPOSE: Generate detailed implementation plan for {complexity} complexity task with structured actionable task breakdown
|
||||
TASK:
|
||||
• Analyze task requirements: {task_description}
|
||||
• Break down into 3-10 structured task objects with complete implementation guidance
|
||||
• For each task, provide:
|
||||
- Title and target file
|
||||
- Action type (Create|Update|Implement|Refactor|Add|Delete)
|
||||
- Description (what to implement)
|
||||
- Implementation steps (how to do it, 3-7 specific steps)
|
||||
- Reference (which patterns/files to follow, with specific examples)
|
||||
- Acceptance criteria (verification checklist)
|
||||
• Identify dependencies and execution sequence
|
||||
• Provide realistic time estimates with breakdown
|
||||
MODE: analysis
|
||||
CONTEXT: @**/* | Memory: {exploration_context_summary}
|
||||
EXPECTED: Structured plan with the following format:
|
||||
|
||||
## Implementation Summary
|
||||
[2-3 sentence overview]
|
||||
|
||||
## High-Level Approach
|
||||
[Strategy with pattern references]
|
||||
|
||||
## Task Breakdown
|
||||
|
||||
### Task 1: [Title]
|
||||
**File**: [file/path.ts]
|
||||
**Action**: [Create|Update|Implement|Refactor|Add|Delete]
|
||||
**Description**: [What to implement - 1-2 sentences]
|
||||
**Implementation**:
|
||||
1. [Specific step 1 - how to do it]
|
||||
2. [Specific step 2 - concrete action]
|
||||
3. [Specific step 3 - implementation detail]
|
||||
4. [Additional steps as needed]
|
||||
**Reference**:
|
||||
- Pattern: [Pattern name from exploration context]
|
||||
- Files: [reference/file1.ts], [reference/file2.ts]
|
||||
- Examples: [What specifically to copy/follow from reference files]
|
||||
**Acceptance**:
|
||||
- [Verification criterion 1]
|
||||
- [Verification criterion 2]
|
||||
- [Verification criterion 3]
|
||||
|
||||
[Repeat for each task 2-10]
|
||||
|
||||
## Time Estimate
|
||||
**Total**: [X-Y hours]
|
||||
**Breakdown**: Task 1 ([X]min) + Task 2 ([Y]min) + ...
|
||||
|
||||
## Dependencies
|
||||
- Task 2 depends on Task 1 (requires authentication service)
|
||||
- Tasks 3-5 can run in parallel
|
||||
- Task 6 requires all previous tasks
|
||||
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/02-breakdown-task-steps.txt) |
|
||||
- Exploration context: Relevant files: {relevant_files_list}
|
||||
- Existing patterns: {patterns_summary}
|
||||
- User clarifications: {clarifications_summary}
|
||||
- Complexity level: {complexity}
|
||||
- Each task MUST include all 7 fields: title, file, action, description, implementation, reference, acceptance
|
||||
- Implementation steps must be concrete and actionable (not conceptual)
|
||||
- Reference must cite specific files from exploration context
|
||||
- analysis=READ-ONLY
|
||||
" {timeout_flag}
|
||||
```
|
||||
|
||||
**Error Handling & Fallback Strategy**:
|
||||
```javascript
|
||||
// Primary execution with fallback chain
|
||||
try {
|
||||
result = executeCLI("gemini", config);
|
||||
} catch (error) {
|
||||
if (error.code === 429 || error.code === 404) {
|
||||
console.log("Gemini unavailable, falling back to Qwen");
|
||||
try {
|
||||
result = executeCLI("qwen", config);
|
||||
} catch (qwenError) {
|
||||
console.error("Both Gemini and Qwen failed");
|
||||
// Return degraded mode with basic plan
|
||||
return {
|
||||
status: "degraded",
|
||||
message: "CLI planning failed, using fallback strategy",
|
||||
planObject: generateBasicPlan(task_description, explorationContext)
|
||||
};
|
||||
}
|
||||
} else {
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
// Fallback plan generation when all CLI tools fail
|
||||
function generateBasicPlan(taskDesc, exploration) {
|
||||
const relevantFiles = exploration?.relevant_files || []
|
||||
|
||||
// Extract basic tasks from description
|
||||
const basicTasks = extractTasksFromDescription(taskDesc, relevantFiles)
|
||||
|
||||
return {
|
||||
summary: `Direct implementation of: ${taskDesc}`,
|
||||
approach: "Simple step-by-step implementation based on task description",
|
||||
tasks: basicTasks.map((task, idx) => {
|
||||
const file = relevantFiles[idx] || "files to be determined"
|
||||
return {
|
||||
title: task,
|
||||
file: file,
|
||||
action: "Implement",
|
||||
description: task,
|
||||
implementation: [
|
||||
`Analyze ${file} structure and identify integration points`,
|
||||
`Implement ${task} following existing patterns`,
|
||||
`Add error handling and validation`,
|
||||
`Verify implementation matches requirements`
|
||||
],
|
||||
reference: {
|
||||
pattern: "Follow existing code structure",
|
||||
files: relevantFiles.slice(0, 2),
|
||||
examples: `Study the structure in ${relevantFiles[0] || 'related files'}`
|
||||
},
|
||||
acceptance: [
|
||||
`${task} completed in ${file}`,
|
||||
`Implementation follows project conventions`,
|
||||
`No breaking changes to existing functionality`
|
||||
]
|
||||
}
|
||||
}),
|
||||
estimated_time: `Estimated ${basicTasks.length * 30} minutes (${basicTasks.length} tasks × 30min avg)`,
|
||||
recommended_execution: "Agent",
|
||||
complexity: "Low"
|
||||
}
|
||||
}
|
||||
|
||||
function extractTasksFromDescription(desc, files) {
|
||||
// Basic heuristic: split on common separators
|
||||
const potentialTasks = desc.split(/[,;]|\band\b/)
|
||||
.map(s => s.trim())
|
||||
.filter(s => s.length > 10)
|
||||
|
||||
if (potentialTasks.length >= 3) {
|
||||
return potentialTasks.slice(0, 10)
|
||||
}
|
||||
|
||||
// Fallback: create generic tasks
|
||||
return [
|
||||
`Analyze requirements and identify implementation approach`,
|
||||
`Implement core functionality in ${files[0] || 'main file'}`,
|
||||
`Add error handling and validation`,
|
||||
`Create unit tests for new functionality`,
|
||||
`Update documentation`
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### 2. Output Parsing & Enhancement
|
||||
|
||||
**Structured Task Parsing**:
|
||||
```javascript
|
||||
// Parse CLI output for structured tasks
|
||||
function extractStructuredTasks(cliOutput) {
|
||||
const tasks = []
|
||||
const taskPattern = /### Task \d+: (.+?)\n\*\*File\*\*: (.+?)\n\*\*Action\*\*: (.+?)\n\*\*Description\*\*: (.+?)\n\*\*Implementation\*\*:\n((?:\d+\. .+?\n)+)\*\*Reference\*\*:\n((?:- .+?\n)+)\*\*Acceptance\*\*:\n((?:- .+?\n)+)/g
|
||||
|
||||
let match
|
||||
while ((match = taskPattern.exec(cliOutput)) !== null) {
|
||||
// Parse implementation steps
|
||||
const implementation = match[5].trim()
|
||||
.split('\n')
|
||||
.map(s => s.replace(/^\d+\. /, ''))
|
||||
.filter(s => s.length > 0)
|
||||
|
||||
// Parse reference fields
|
||||
const referenceText = match[6].trim()
|
||||
const patternMatch = /- Pattern: (.+)/m.exec(referenceText)
|
||||
const filesMatch = /- Files: (.+)/m.exec(referenceText)
|
||||
const examplesMatch = /- Examples: (.+)/m.exec(referenceText)
|
||||
|
||||
const reference = {
|
||||
pattern: patternMatch ? patternMatch[1].trim() : "No pattern specified",
|
||||
files: filesMatch ? filesMatch[1].split(',').map(f => f.trim()) : [],
|
||||
examples: examplesMatch ? examplesMatch[1].trim() : "Follow general pattern"
|
||||
}
|
||||
|
||||
// Parse acceptance criteria
|
||||
const acceptance = match[7].trim()
|
||||
.split('\n')
|
||||
.map(s => s.replace(/^- /, ''))
|
||||
.filter(s => s.length > 0)
|
||||
|
||||
tasks.push({
|
||||
title: match[1].trim(),
|
||||
file: match[2].trim(),
|
||||
action: match[3].trim(),
|
||||
description: match[4].trim(),
|
||||
implementation: implementation,
|
||||
reference: reference,
|
||||
acceptance: acceptance
|
||||
})
|
||||
}
|
||||
|
||||
return tasks
|
||||
}
|
||||
|
||||
const parsedResults = {
|
||||
summary: extractSection("Implementation Summary"),
|
||||
approach: extractSection("High-Level Approach"),
|
||||
raw_tasks: extractStructuredTasks(cliOutput),
|
||||
time_estimate: extractSection("Time Estimate"),
|
||||
dependencies: extractSection("Dependencies")
|
||||
}
|
||||
```
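The `extractSection` helper used above is not spelled out in this file; a minimal sketch (an assumption, with `cliOutput` taken from the surrounding scope as in the calls above) could be:

```javascript
// Illustrative only: return the body of a "## {name}" heading from the CLI markdown output
function extractSection(name) {
  const pattern = new RegExp(`^##\\s+${name}\\s*\\n([\\s\\S]*?)(?=\\n##\\s|$)`, "m")
  const match = pattern.exec(cliOutput)
  return match ? match[1].trim() : ""
}
```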
|
||||
|
||||
**Validation & Enhancement**:
|
||||
```javascript
|
||||
// Validate and enhance tasks if CLI output is incomplete
|
||||
function validateAndEnhanceTasks(rawTasks, explorationContext) {
|
||||
return rawTasks.map(taskObj => {
|
||||
// Validate required fields
|
||||
const validated = {
|
||||
title: taskObj.title || "Unnamed task",
|
||||
file: taskObj.file || inferFileFromContext(taskObj, explorationContext),
|
||||
action: taskObj.action || inferAction(taskObj.title),
|
||||
description: taskObj.description || taskObj.title,
|
||||
implementation: taskObj.implementation?.length > 0
|
||||
? taskObj.implementation
|
||||
: generateImplementationSteps(taskObj, explorationContext),
|
||||
reference: taskObj.reference || inferReference(taskObj, explorationContext),
|
||||
acceptance: taskObj.acceptance?.length > 0
|
||||
? taskObj.acceptance
|
||||
: generateAcceptanceCriteria(taskObj)
|
||||
}
|
||||
|
||||
return validated
|
||||
})
|
||||
}
|
||||
|
||||
// Helper functions for inference
|
||||
function inferFileFromContext(taskObj, explorationContext) {
|
||||
const relevantFiles = explorationContext?.relevant_files || []
|
||||
const titleLower = taskObj.title.toLowerCase()
|
||||
const matchedFile = relevantFiles.find(f =>
|
||||
titleLower.includes(f.split('/').pop().split('.')[0].toLowerCase())
|
||||
)
|
||||
return matchedFile || "file-to-be-determined.ts"
|
||||
}
|
||||
|
||||
function inferAction(title) {
|
||||
if (/create|add new|implement/i.test(title)) return "Create"
|
||||
if (/update|modify|change/i.test(title)) return "Update"
|
||||
if (/refactor/i.test(title)) return "Refactor"
|
||||
if (/delete|remove/i.test(title)) return "Delete"
|
||||
return "Implement"
|
||||
}
|
||||
|
||||
function generateImplementationSteps(taskObj, explorationContext) {
|
||||
const patterns = explorationContext?.patterns || ""
|
||||
return [
|
||||
`Analyze ${taskObj.file} structure and identify integration points`,
|
||||
`Implement ${taskObj.title} following ${patterns || 'existing patterns'}`,
|
||||
`Add error handling and validation`,
|
||||
`Update related components if needed`,
|
||||
`Verify implementation matches requirements`
|
||||
]
|
||||
}
|
||||
|
||||
function inferReference(taskObj, explorationContext) {
|
||||
const patterns = explorationContext?.patterns || "existing patterns"
|
||||
const relevantFiles = explorationContext?.relevant_files || []
|
||||
|
||||
return {
|
||||
pattern: patterns.split('.')[0] || "Follow existing code structure",
|
||||
files: relevantFiles.slice(0, 2),
|
||||
examples: `Study the structure and methods in ${relevantFiles[0] || 'related files'}`
|
||||
}
|
||||
}
|
||||
|
||||
function generateAcceptanceCriteria(taskObj) {
|
||||
return [
|
||||
`${taskObj.title} completed in ${taskObj.file}`,
|
||||
`Implementation follows project conventions`,
|
||||
`No breaking changes to existing functionality`,
|
||||
`Code passes linting and type checks`
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### 3. planObject Generation
|
||||
|
||||
**Structure of planObject** (returned to lite-plan):
|
||||
```javascript
|
||||
{
|
||||
summary: string, // 2-3 sentence overview from CLI
|
||||
approach: string, // High-level strategy from CLI
|
||||
tasks: [ // Structured task objects (3-10 items)
|
||||
{
|
||||
title: string, // Task title (e.g., "Create AuthService")
|
||||
file: string, // Target file path
|
||||
action: string, // Action type: Create|Update|Implement|Refactor|Add|Delete
|
||||
description: string, // What to implement (1-2 sentences)
|
||||
implementation: string[], // Step-by-step how to do it (3-7 steps)
|
||||
reference: { // What to reference
|
||||
pattern: string, // Pattern name (e.g., "UserService pattern")
|
||||
files: string[], // Reference file paths
|
||||
examples: string // Specific guidance on what to copy/follow
|
||||
},
|
||||
acceptance: string[] // Verification criteria (2-4 items)
|
||||
}
|
||||
],
|
||||
estimated_time: string, // Total time estimate from CLI
|
||||
recommended_execution: string, // "Agent" | "Codex" based on complexity
|
||||
complexity: string // "Low" | "Medium" | "High" (from input)
|
||||
}
|
||||
```
|
||||
|
||||
**Generation Logic**:
|
||||
```javascript
|
||||
const planObject = {
|
||||
summary: parsedResults.summary || `Implementation plan for: ${task_description.slice(0, 100)}`,
|
||||
|
||||
approach: parsedResults.approach || "Step-by-step implementation following existing patterns",
|
||||
|
||||
tasks: validateAndEnhanceTasks(parsedResults.raw_tasks, explorationContext),
|
||||
|
||||
estimated_time: parsedResults.time_estimate || estimateTimeFromTaskCount(parsedResults.raw_tasks.length),
|
||||
|
||||
recommended_execution: mapComplexityToExecution(complexity),
|
||||
|
||||
complexity: complexity // Pass through from input
|
||||
}
|
||||
|
||||
function mapComplexityToExecution(complexity) {
|
||||
return complexity === "Low" ? "Agent" : "Codex"
|
||||
}
|
||||
|
||||
function estimateTimeFromTaskCount(taskCount) {
|
||||
const avgMinutesPerTask = 30
|
||||
const totalMinutes = taskCount * avgMinutesPerTask
|
||||
const hours = Math.floor(totalMinutes / 60)
|
||||
const minutes = totalMinutes % 60
|
||||
|
||||
if (hours === 0) {
|
||||
return `${minutes} minutes (${taskCount} tasks × ${avgMinutesPerTask}min avg)`
|
||||
}
|
||||
return `${hours}h ${minutes}m (${taskCount} tasks × ${avgMinutesPerTask}min avg)`
|
||||
}
|
||||
```
|
||||
|
||||
## Quality Standards
|
||||
|
||||
### CLI Execution Standards
|
||||
- **Timeout Management**: Use dynamic timeout (3600000ms = 60min for planning)
|
||||
- **Fallback Chain**: Gemini → Qwen → degraded mode (if both fail)
|
||||
- **Error Context**: Include full error details in failure reports
|
||||
- **Output Preservation**: Optionally save raw CLI output for debugging
|
||||
|
||||
### Task Object Standards
|
||||
|
||||
**Completeness** - Each task must have all 7 required fields:
|
||||
- **title**: Clear, concise task name
|
||||
- **file**: Exact file path (from exploration.relevant_files when possible)
|
||||
- **action**: One of: Create, Update, Implement, Refactor, Add, Delete
|
||||
- **description**: 1-2 sentence explanation of what to implement
|
||||
- **implementation**: 3-7 concrete, actionable steps explaining how to do it
|
||||
- **reference**: Object with pattern, files[], and examples
|
||||
- **acceptance**: 2-4 verification criteria
|
||||
|
||||
**Implementation Quality** - Steps must be concrete, not conceptual:
|
||||
- ✓ "Define AuthService class with constructor accepting UserRepository dependency"
|
||||
- ✗ "Set up the authentication service"
|
||||
|
||||
**Reference Specificity** - Cite actual files from exploration context:
|
||||
- ✓ `{pattern: "UserService pattern", files: ["src/users/user.service.ts"], examples: "Follow constructor injection and async method patterns"}`
|
||||
- ✗ `{pattern: "service pattern", files: [], examples: "follow patterns"}`
|
||||
|
||||
**Acceptance Measurability** - Criteria must be verifiable:
|
||||
- ✓ "AuthService class created with login(), logout(), validateToken() methods"
|
||||
- ✗ "Service works correctly"
|
||||
|
||||
### Task Validation
|
||||
|
||||
**Validation Function**:
|
||||
```javascript
|
||||
function validateTaskObject(task) {
|
||||
const errors = []
|
||||
|
||||
// Validate required fields
|
||||
if (!task.title || task.title.trim().length === 0) {
|
||||
errors.push("Missing title")
|
||||
}
|
||||
if (!task.file || task.file.trim().length === 0) {
|
||||
errors.push("Missing file path")
|
||||
}
|
||||
if (!task.action || !['Create', 'Update', 'Implement', 'Refactor', 'Add', 'Delete'].includes(task.action)) {
|
||||
errors.push(`Invalid action: ${task.action}`)
|
||||
}
|
||||
if (!task.description || task.description.trim().length === 0) {
|
||||
errors.push("Missing description")
|
||||
}
|
||||
if (!task.implementation || task.implementation.length < 3) {
|
||||
errors.push("Implementation must have at least 3 steps")
|
||||
}
|
||||
if (!task.reference || !task.reference.pattern) {
|
||||
errors.push("Missing pattern reference")
|
||||
}
|
||||
if (!task.acceptance || task.acceptance.length < 2) {
|
||||
errors.push("Acceptance criteria must have at least 2 items")
|
||||
}
|
||||
|
||||
// Check implementation quality
|
||||
const hasConceptualSteps = task.implementation?.some(step =>
|
||||
/^(handle|manage|deal with|set up|work on)/i.test(step)
|
||||
)
|
||||
if (hasConceptualSteps) {
|
||||
errors.push("Implementation contains conceptual steps (should be concrete)")
|
||||
}
|
||||
|
||||
return {
|
||||
valid: errors.length === 0,
|
||||
errors: errors
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Good vs Bad Examples**:
|
||||
```javascript
|
||||
// ❌ BAD (Incomplete, vague)
|
||||
{
|
||||
title: "Add authentication",
|
||||
file: "auth.ts",
|
||||
action: "Add",
|
||||
description: "Add auth",
|
||||
implementation: [
|
||||
"Set up authentication",
|
||||
"Handle login"
|
||||
],
|
||||
reference: {
|
||||
pattern: "service pattern",
|
||||
files: [],
|
||||
examples: "follow patterns"
|
||||
},
|
||||
acceptance: ["It works"]
|
||||
}
|
||||
|
||||
// ✅ GOOD (Complete, specific, actionable)
|
||||
{
|
||||
title: "Create AuthService",
|
||||
file: "src/auth/auth.service.ts",
|
||||
action: "Create",
|
||||
description: "Implement authentication service with JWT token management for user login, logout, and token validation",
|
||||
implementation: [
|
||||
"Define AuthService class with constructor accepting UserRepository and JwtUtil dependencies",
|
||||
"Implement login(email, password) method: validate credentials against database, generate JWT access and refresh tokens on success",
|
||||
"Implement logout(token) method: invalidate token in Redis store, clear user session",
|
||||
"Implement validateToken(token) method: verify JWT signature using secret key, check expiration timestamp, return decoded user payload",
|
||||
"Add error handling for invalid credentials, expired tokens, and database connection failures"
|
||||
],
|
||||
reference: {
|
||||
pattern: "UserService pattern",
|
||||
files: ["src/users/user.service.ts", "src/utils/jwt.util.ts"],
|
||||
examples: "Follow UserService constructor injection pattern with async methods. Use JwtUtil.generateToken() and JwtUtil.verifyToken() for token operations"
|
||||
},
|
||||
acceptance: [
|
||||
"AuthService class created with login(), logout(), validateToken() methods",
|
||||
"Methods follow UserService async/await pattern with try-catch error handling",
|
||||
"JWT token generation uses JwtUtil with 1h access token and 7d refresh token expiry",
|
||||
"All methods return typed responses (success/error objects)"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## Key Reminders
|
||||
|
||||
**ALWAYS:**
|
||||
- **Validate context package**: Ensure task_description present before CLI execution
|
||||
- **Handle CLI errors gracefully**: Use fallback chain (Gemini → Qwen → degraded mode)
|
||||
- **Parse CLI output structurally**: Extract all 7 task fields (title, file, action, description, implementation, reference, acceptance)
|
||||
- **Validate task objects**: Each task must have all required fields with quality content
|
||||
- **Generate complete planObject**: All fields populated with structured task objects
|
||||
- **Return in-memory result**: No file writes unless debugging
|
||||
- **Preserve exploration context**: Use relevant_files and patterns in task references
|
||||
- **Ensure implementation concreteness**: Steps must be actionable, not conceptual
|
||||
- **Cite specific references**: Reference actual files from exploration context
|
||||
|
||||
**NEVER:**
|
||||
- Execute implementation directly (return plan, let lite-execute handle execution)
|
||||
- Skip CLI planning (always run CLI even for simple tasks, unless degraded mode)
|
||||
- Return vague task objects (validate all required fields)
|
||||
- Use conceptual implementation steps ("set up", "handle", "manage")
|
||||
- Modify files directly (planning only, no implementation)
|
||||
- Exceed timeout limits (use configured timeout value)
|
||||
- Return tasks with empty reference files (cite actual exploration files)
|
||||
- Skip task validation (all task objects must pass quality checks)
|
||||
|
||||
## Configuration & Examples
|
||||
|
||||
### CLI Tool Configuration
|
||||
|
||||
**Gemini Configuration**:
|
||||
```javascript
|
||||
{
|
||||
"tool": "gemini",
|
||||
"model": "gemini-2.5-pro", // Auto-selected, no need to specify
|
||||
"templates": {
|
||||
"task-breakdown": "02-breakdown-task-steps.txt",
|
||||
"architecture-planning": "01-plan-architecture-design.txt",
|
||||
"component-design": "02-design-component-spec.txt"
|
||||
},
|
||||
"timeout": 3600000 // 60 minutes
|
||||
}
|
||||
```
|
||||
|
||||
**Qwen Configuration (Fallback)**:
|
||||
```javascript
|
||||
{
|
||||
"tool": "qwen",
|
||||
"model": "coder-model", // Auto-selected
|
||||
"templates": {
|
||||
"task-breakdown": "02-breakdown-task-steps.txt",
|
||||
"architecture-planning": "01-plan-architecture-design.txt"
|
||||
},
|
||||
"timeout": 3600000 // 60 minutes
|
||||
}
|
||||
```
|
||||
|
||||
### Example Execution
|
||||
|
||||
**Input Context**:
|
||||
```json
|
||||
{
|
||||
"task_description": "Implement user authentication with JWT tokens",
|
||||
"explorationContext": {
|
||||
"project_structure": "Express.js REST API with TypeScript, layered architecture (routes → services → repositories)",
|
||||
"relevant_files": [
|
||||
"src/users/user.service.ts",
|
||||
"src/users/user.repository.ts",
|
||||
"src/middleware/cors.middleware.ts",
|
||||
"src/routes/api.ts"
|
||||
],
|
||||
"patterns": "Service-Repository pattern used throughout. Services in src/{module}/{module}.service.ts, Repositories in src/{module}/{module}.repository.ts. Middleware follows function-based approach in src/middleware/",
|
||||
"dependencies": "Express, TypeORM, bcrypt for password hashing",
|
||||
"integration_points": "Auth service needs to integrate with existing user service and API routes",
|
||||
"constraints": "Must use existing TypeORM entities, follow established error handling patterns"
|
||||
},
|
||||
"clarificationContext": {
|
||||
"token_expiry": "1 hour access token, 7 days refresh token",
|
||||
"password_requirements": "Min 8 chars, must include number and special char"
|
||||
},
|
||||
"complexity": "Medium",
|
||||
"cli_config": {
|
||||
"tool": "gemini",
|
||||
"template": "02-breakdown-task-steps.txt",
|
||||
"timeout": 3600000
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Execution Summary**:
|
||||
1. **Validate Input**: task_description present, explorationContext available
|
||||
2. **Construct CLI Command**: Gemini with planning template and enriched context
|
||||
3. **Execute CLI**: Gemini runs and returns structured plan (timeout: 60min)
|
||||
4. **Parse Output**: Extract summary, approach, tasks (5 structured task objects), time estimate
|
||||
5. **Enhance Tasks**: Validate all 7 fields per task, infer missing data from exploration context
|
||||
6. **Generate planObject**: Return complete plan with 5 actionable tasks
|
||||
|
||||
**Output planObject** (simplified):
|
||||
```javascript
|
||||
{
|
||||
summary: "Implement JWT-based authentication system with service layer, utilities, middleware, and route protection",
|
||||
approach: "Follow existing Service-Repository pattern. Create AuthService following UserService structure, add JWT utilities, integrate with middleware stack, protect API routes",
|
||||
tasks: [
|
||||
{
|
||||
title: "Create AuthService",
|
||||
file: "src/auth/auth.service.ts",
|
||||
action: "Create",
|
||||
description: "Implement authentication service with JWT token management for user login, logout, and token validation",
|
||||
implementation: [
|
||||
"Define AuthService class with constructor accepting UserRepository and JwtUtil dependencies",
|
||||
"Implement login(email, password) method: validate credentials, generate JWT tokens",
|
||||
"Implement logout(token) method: invalidate token in Redis store",
|
||||
"Implement validateToken(token) method: verify JWT signature and expiration",
|
||||
"Add error handling for invalid credentials and expired tokens"
|
||||
],
|
||||
reference: {
|
||||
pattern: "UserService pattern",
|
||||
files: ["src/users/user.service.ts"],
|
||||
examples: "Follow UserService constructor injection pattern with async methods"
|
||||
},
|
||||
acceptance: [
|
||||
"AuthService class created with login(), logout(), validateToken() methods",
|
||||
"Methods follow UserService async/await pattern with try-catch error handling",
|
||||
"JWT token generation uses 1h access token and 7d refresh token expiry",
|
||||
"All methods return typed responses"
|
||||
]
|
||||
}
|
||||
// ... 4 more tasks (JWT utilities, auth middleware, route protection, tests)
|
||||
],
|
||||
estimated_time: "3-4 hours (1h service + 30m utils + 1h middleware + 30m routes + 1h tests)",
|
||||
recommended_execution: "Codex",
|
||||
complexity: "Medium"
|
||||
}
|
||||
```
|
||||
@@ -10,7 +10,7 @@ description: |
|
||||
commentary: Agent encapsulates CLI execution + result parsing + task generation
|
||||
|
||||
- Context: Coverage gap analysis
|
||||
user: "Analyze coverage gaps and generate补充test task"
|
||||
user: "Analyze coverage gaps and generate supplement test task"
|
||||
assistant: "Executing CLI analysis for uncovered code paths → Generating test supplement task"
|
||||
commentary: Agent handles both analysis and task JSON generation autonomously
|
||||
color: purple
|
||||
@@ -18,12 +18,11 @@ color: purple
|
||||
|
||||
You are a specialized execution agent that bridges CLI analysis tools with task generation. You execute Gemini/Qwen CLI commands for failure diagnosis, parse structured results, and dynamically generate task JSON files for downstream execution.
|
||||
|
||||
## Core Responsibilities
|
||||
|
||||
1. **Execute CLI Analysis**: Run Gemini/Qwen with appropriate templates and context
|
||||
2. **Parse CLI Results**: Extract structured information (fix strategies, root causes, modification points)
|
||||
3. **Generate Task JSONs**: Create IMPL-fix-N.json or IMPL-supplement-N.json dynamically
|
||||
4. **Save Analysis Reports**: Store detailed CLI output as iteration-N-analysis.md
|
||||
**Core capabilities:**
|
||||
- Execute CLI analysis with appropriate templates and context
|
||||
- Parse structured results (fix strategies, root causes, modification points)
|
||||
- Generate task JSONs dynamically (IMPL-fix-N.json, IMPL-supplement-N.json)
|
||||
- Save detailed analysis reports (iteration-N-analysis.md)
|
||||
|
||||
## Execution Process
|
||||
|
||||
@@ -43,7 +42,7 @@ You are a specialized execution agent that bridges CLI analysis tools with task
|
||||
"file": "tests/test_auth.py",
|
||||
"line": 45,
|
||||
"criticality": "high",
|
||||
"test_type": "integration" // ← NEW: L0: static, L1: unit, L2: integration, L3: e2e
|
||||
"test_type": "integration" // L0: static, L1: unit, L2: integration, L3: e2e
|
||||
}
|
||||
],
|
||||
"error_messages": ["error1", "error2"],
|
||||
@@ -61,7 +60,7 @@ You are a specialized execution agent that bridges CLI analysis tools with task
|
||||
"tool": "gemini|qwen",
|
||||
"model": "gemini-3-pro-preview-11-2025|qwen-coder-model",
|
||||
"template": "01-diagnose-bug-root-cause.txt",
|
||||
"timeout": 2400000,
|
||||
"timeout": 2400000, // 40 minutes for analysis
|
||||
"fallback": "qwen"
|
||||
},
|
||||
"task_config": {
|
||||
@@ -79,16 +78,16 @@ You are a specialized execution agent that bridges CLI analysis tools with task
|
||||
Phase 1: CLI Analysis Execution
|
||||
1. Validate context package and extract failure context
|
||||
2. Construct CLI command with appropriate template
|
||||
3. Execute Gemini/Qwen CLI tool
|
||||
3. Execute Gemini/Qwen CLI tool with layer-specific guidance
|
||||
4. Handle errors and fallback to alternative tool if needed
|
||||
5. Save raw CLI output to .process/iteration-N-cli-output.txt
|
||||
|
||||
Phase 2: Results Parsing & Strategy Extraction
|
||||
1. Parse CLI output for structured information:
|
||||
- Root cause analysis
|
||||
- Root cause analysis (RCA)
|
||||
- Fix strategy and approach
|
||||
- Modification points (files, functions, line numbers)
|
||||
- Expected outcome
|
||||
- Expected outcome and verification steps
|
||||
2. Extract quantified requirements:
|
||||
- Number of files to modify
|
||||
- Specific functions to fix (with line numbers)
|
||||
@@ -96,18 +95,18 @@ Phase 2: Results Parsing & Strategy Extraction
|
||||
3. Generate structured analysis report (iteration-N-analysis.md)
|
||||
|
||||
Phase 3: Task JSON Generation
|
||||
1. Load task JSON template (defined below)
|
||||
1. Load task JSON template
|
||||
2. Populate template with parsed CLI results
|
||||
3. Add iteration context and previous attempts
|
||||
4. Write task JSON to .workflow/{session}/.task/IMPL-fix-N.json
|
||||
4. Write task JSON to .workflow/session/{session}/.task/IMPL-fix-N.json
|
||||
5. Return success status and task ID to orchestrator
|
||||
```
|
||||
|
||||
## Core Functions
|
||||
|
||||
### 1. CLI Command Construction
|
||||
### 1. CLI Analysis Execution
|
||||
|
||||
**Template-Based Approach with Test Layer Awareness**:
|
||||
**Template-Based Command Construction with Test Layer Awareness**:
|
||||
```bash
|
||||
cd {project_root} && {cli_tool} -p "
|
||||
PURPOSE: Analyze {test_type} test failures and generate fix strategy for iteration {iteration}
|
||||
@@ -136,7 +135,7 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/{template}) |
|
||||
- Consider previous iteration failures
|
||||
- Validate fix doesn't introduce new vulnerabilities
|
||||
- analysis=READ-ONLY
|
||||
" -m {model} {timeout_flag}
|
||||
" {timeout_flag}
|
||||
```
|
||||
|
||||
**Layer-Specific Guidance Injection**:
|
||||
@@ -151,8 +150,9 @@ const layerGuidance = {
|
||||
const guidance = layerGuidance[test_type] || "Analyze holistically, avoid quick patches";
|
||||
```
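The hunk above cuts off the body of `layerGuidance`; purely as an illustration (these keys and values are placeholders, not the file's actual guidance strings), the map could look like:

```javascript
// Placeholder values for illustration only
const layerGuidance = {
  static: "Focus on lint/type errors; fix declarations and imports rather than runtime behavior",
  unit: "Isolate the function under test; check inputs, outputs, and mocked dependencies",
  integration: "Analyze full call stack and data flow across components and interface contracts",
  e2e: "Trace the user flow end to end; check environment, fixtures, and service wiring"
};
```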
|
||||
|
||||
**Error Handling & Fallback**:
|
||||
**Error Handling & Fallback Strategy**:
|
||||
```javascript
|
||||
// Primary execution with fallback chain
|
||||
try {
|
||||
result = executeCLI("gemini", config);
|
||||
} catch (error) {
|
||||
@@ -173,16 +173,18 @@ try {
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
// Fallback strategy when all CLI tools fail
|
||||
function generateBasicFixStrategy(failure_context) {
|
||||
// Generate basic fix task based on error pattern matching
|
||||
// Use previous successful fix patterns from fix-history.json
|
||||
// Limit to simple, low-risk fixes (add null checks, fix typos)
|
||||
// Mark task with meta.analysis_quality: "degraded" flag
|
||||
// Orchestrator will treat degraded analysis with caution
|
||||
}
|
||||
```
|
||||
|
||||
**Fallback Strategy (When All CLI Tools Fail)**:
|
||||
- Generate a basic fix task based on error-pattern matching
|
||||
- Use previous successful fix patterns from fix-history.json
|
||||
- Limit to simple, low-risk fixes (add null checks, fix typos)
|
||||
- Mark task with `meta.analysis_quality: "degraded"` flag
|
||||
- Orchestrator will treat degraded analysis with caution (may skip iteration)
|
||||
|
||||
### 2. CLI Output Parsing
|
||||
### 2. Output Parsing & Task Generation
|
||||
|
||||
**Expected CLI Output Structure** (from bug diagnosis template):
|
||||
```markdown
|
||||
@@ -220,18 +222,34 @@ try {
|
||||
```javascript
|
||||
const parsedResults = {
|
||||
root_causes: extractSection("根本原因分析"), // "Root Cause Analysis" section
|
||||
modification_points: extractModificationPoints(),
|
||||
modification_points: extractModificationPoints(), // Returns: ["file:function:lines", ...]
|
||||
fix_strategy: {
|
||||
approach: extractSection("详细修复建议"), // "Detailed Fix Recommendations" section
|
||||
files: extractFilesList(),
|
||||
expected_outcome: extractSection("验证建议") // "Verification Recommendations" section
|
||||
}
|
||||
};
|
||||
|
||||
// Extract structured modification points
|
||||
function extractModificationPoints() {
|
||||
const points = [];
|
||||
const filePattern = /- (.+?\.(?:ts|js|py)) \(lines (\d+-\d+)\): (.+)/g;
|
||||
|
||||
let match;
|
||||
while ((match = filePattern.exec(cliOutput)) !== null) {
|
||||
points.push({
|
||||
file: match[1],
|
||||
lines: match[2],
|
||||
function: match[3],
|
||||
formatted: `${match[1]}:${match[3]}:${match[2]}`
|
||||
});
|
||||
}
|
||||
|
||||
return points;
|
||||
}
|
||||
```
|
||||
|
||||
### 3. Task JSON Generation (Template Definition)
|
||||
|
||||
**Task JSON Template for IMPL-fix-N** (Simplified):
|
||||
**Task JSON Generation** (Simplified Template):
|
||||
```json
|
||||
{
|
||||
"id": "IMPL-fix-{iteration}",
|
||||
@@ -284,9 +302,7 @@ const parsedResults = {
|
||||
{
|
||||
"step": "load_analysis_context",
|
||||
"action": "Load CLI analysis report for full failure context if needed",
|
||||
"commands": [
|
||||
"Read({meta.analysis_report})"
|
||||
],
|
||||
"commands": ["Read({meta.analysis_report})"],
|
||||
"output_to": "full_failure_analysis",
|
||||
"note": "Analysis report contains: failed_tests, error_messages, pass_rate, root causes, previous_attempts"
|
||||
}
|
||||
@@ -334,19 +350,17 @@ const parsedResults = {
|
||||
|
||||
**Template Variables Replacement**:
|
||||
- `{iteration}`: From context.iteration
|
||||
- `{test_type}`: Dominant test type from failed_tests (e.g., "integration", "unit")
|
||||
- `{test_type}`: Dominant test type from failed_tests
|
||||
- `{dominant_test_type}`: Most common test_type in failed_tests array
|
||||
- `{layer_specific_approach}`: Guidance based on test layer from layerGuidance map
|
||||
- `{layer_specific_approach}`: Guidance from layerGuidance map
|
||||
- `{fix_summary}`: First 50 chars of fix_strategy.approach
|
||||
- `{failed_tests.length}`: Count of failures
|
||||
- `{modification_points.length}`: Count of modification points
|
||||
- `{modification_points}`: Array of file:function:lines from parsed CLI output
|
||||
- `{modification_points}`: Array of file:function:lines
|
||||
- `{timestamp}`: ISO 8601 timestamp
|
||||
- `{parent_task_id}`: ID of the parent test task (e.g., "IMPL-002")
|
||||
- `{file1}`, `{file2}`, etc.: Specific file paths from modification_points
|
||||
- `{specific_change_1}`, etc.: Change descriptions for each modification point
|
||||
- `{parent_task_id}`: ID of parent test task
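A minimal sketch (an assumed helper, not part of the agent spec) of how these placeholders might be substituted into the task JSON template:

```javascript
// Illustrative only: replace {placeholder} tokens with values; unknown tokens are left untouched
function fillTemplate(templateText, vars) {
  return templateText.replace(/\{([\w.]+)\}/g, (token, key) =>
    key in vars ? String(vars[key]) : token
  )
}

const taskJsonText = fillTemplate(templateText, {   // templateText: the loaded IMPL-fix template
  iteration: 1,
  test_type: "integration",
  fix_summary: parsedResults.fix_strategy.approach.slice(0, 50),
  timestamp: new Date().toISOString(),
  parent_task_id: "IMPL-002"
})
```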
|
||||
|
||||
### 4. Analysis Report Generation
|
||||
### 3. Analysis Report Generation
|
||||
|
||||
**Structure of iteration-N-analysis.md**:
|
||||
```markdown
|
||||
@@ -373,6 +387,7 @@ pass_rate: {pass_rate}%
|
||||
- **Error**: {test.error}
|
||||
- **File**: {test.file}:{test.line}
|
||||
- **Criticality**: {test.criticality}
|
||||
- **Test Type**: {test.test_type}
|
||||
{endforeach}
|
||||
|
||||
## Root Cause Analysis
|
||||
@@ -403,15 +418,16 @@ See: `.process/iteration-{iteration}-cli-output.txt`
|
||||
|
||||
### CLI Execution Standards
|
||||
- **Timeout Management**: Use dynamic timeout (2400000ms = 40min for analysis)
|
||||
- **Fallback Chain**: Gemini → Qwen (if Gemini fails with 429/404)
|
||||
- **Fallback Chain**: Gemini → Qwen → degraded mode (if both fail)
|
||||
- **Error Context**: Include full error details in failure reports
|
||||
- **Output Preservation**: Save raw CLI output for debugging
|
||||
- **Output Preservation**: Save raw CLI output to .process/ for debugging
|
||||
|
||||
### Task JSON Standards
|
||||
- **Quantification**: All requirements must include counts and explicit lists
|
||||
- **Specificity**: Modification points must have file:function:line format
|
||||
- **Measurability**: Acceptance criteria must include verification commands
|
||||
- **Traceability**: Link to analysis reports and CLI output files
|
||||
- **Minimal Redundancy**: Use references (analysis_report) instead of embedding full context
|
||||
|
||||
### Analysis Report Standards
|
||||
- **Structured Format**: Use consistent markdown sections
|
||||
@@ -430,19 +446,23 @@ See: `.process/iteration-{iteration}-cli-output.txt`
|
||||
- **Link files properly**: Use relative paths from session root
|
||||
- **Preserve CLI output**: Save raw output to .process/ for debugging
|
||||
- **Generate measurable acceptance criteria**: Include verification commands
|
||||
- **Apply layer-specific guidance**: Use test_type to customize analysis approach
|
||||
|
||||
**NEVER:**
|
||||
- Execute tests directly (orchestrator manages test execution)
|
||||
- Skip CLI analysis (always run CLI even for simple failures)
|
||||
- Modify files directly (generate task JSON for @test-fix-agent to execute)
|
||||
- **Embed redundant data in task JSON** (use analysis_report reference instead)
|
||||
- **Copy input context verbatim to output** (creates data duplication)
|
||||
- Embed redundant data in task JSON (use analysis_report reference instead)
|
||||
- Copy input context verbatim to output (creates data duplication)
|
||||
- Generate vague modification points (always specify file:function:lines)
|
||||
- Exceed timeout limits (use configured timeout value)
|
||||
- Ignore test layer context (L0/L1/L2/L3 determines diagnosis approach)
|
||||
|
||||
## CLI Tool Configuration
|
||||
## Configuration & Examples
|
||||
|
||||
### Gemini Configuration
|
||||
### CLI Tool Configuration
|
||||
|
||||
**Gemini Configuration**:
|
||||
```javascript
|
||||
{
|
||||
"tool": "gemini",
|
||||
@@ -452,11 +472,12 @@ See: `.process/iteration-{iteration}-cli-output.txt`
|
||||
"test-failure": "01-diagnose-bug-root-cause.txt",
|
||||
"coverage-gap": "02-analyze-code-patterns.txt",
|
||||
"regression": "01-trace-code-execution.txt"
|
||||
}
|
||||
},
|
||||
"timeout": 2400000 // 40 minutes
|
||||
}
|
||||
```
|
||||
|
||||
### Qwen Configuration (Fallback)
|
||||
**Qwen Configuration (Fallback)**:
|
||||
```javascript
|
||||
{
|
||||
"tool": "qwen",
|
||||
@@ -464,47 +485,12 @@ See: `.process/iteration-{iteration}-cli-output.txt`
|
||||
"templates": {
|
||||
"test-failure": "01-diagnose-bug-root-cause.txt",
|
||||
"coverage-gap": "02-analyze-code-patterns.txt"
|
||||
}
|
||||
},
|
||||
"timeout": 2400000 // 40 minutes
|
||||
}
|
||||
```
|
||||
|
||||
## Integration with test-cycle-execute
|
||||
|
||||
**Orchestrator Call Pattern**:
|
||||
```javascript
|
||||
// When pass_rate < 95%
|
||||
Task(
|
||||
subagent_type="cli-planning-agent",
|
||||
description=`Analyze test failures and generate fix task (iteration ${iteration})`,
|
||||
prompt=`
|
||||
## Context Package
|
||||
${JSON.stringify(contextPackage, null, 2)}
|
||||
|
||||
## Your Task
|
||||
1. Execute CLI analysis using ${cli_config.tool}
|
||||
2. Parse CLI output and extract fix strategy
|
||||
3. Generate IMPL-fix-${iteration}.json with structured task definition
|
||||
4. Save analysis report to .process/iteration-${iteration}-analysis.md
|
||||
5. Report success and task ID back to orchestrator
|
||||
`
|
||||
)
|
||||
```
|
||||
|
||||
**Agent Response**:
|
||||
```javascript
|
||||
{
|
||||
"status": "success",
|
||||
"task_id": "IMPL-fix-{iteration}",
|
||||
"task_path": ".workflow/{session}/.task/IMPL-fix-{iteration}.json",
|
||||
"analysis_report": ".process/iteration-{iteration}-analysis.md",
|
||||
"cli_output": ".process/iteration-{iteration}-cli-output.txt",
|
||||
"summary": "{fix_strategy.approach first 100 chars}",
|
||||
"modification_points_count": {count},
|
||||
"estimated_complexity": "low|medium|high"
|
||||
}
|
||||
```
|
||||
|
||||
## Example Execution
|
||||
### Example Execution
|
||||
|
||||
**Input Context**:
|
||||
```json
|
||||
@@ -530,24 +516,45 @@ Task(
|
||||
"cli_config": {
|
||||
"tool": "gemini",
|
||||
"template": "01-diagnose-bug-root-cause.txt"
|
||||
},
|
||||
"task_config": {
|
||||
"agent": "@test-fix-agent",
|
||||
"type": "test-fix-iteration",
|
||||
"max_iterations": 5
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Execution Steps**:
|
||||
1. Detect test_type: "integration" → Apply integration-specific diagnosis
|
||||
2. Execute: `gemini -p "PURPOSE: Analyze integration test failure... [layer-specific context]"`
|
||||
- CLI prompt includes: "Examine component interactions, data flow, interface contracts"
|
||||
- Guidance: "Analyze full call stack and data flow across components"
|
||||
3. Parse: Extract RCA, 修复建议, 验证建议 sections
|
||||
4. Generate: IMPL-fix-1.json (SIMPLIFIED) with:
|
||||
**Execution Summary**:
|
||||
1. **Detect test_type**: "integration" → Apply integration-specific diagnosis
|
||||
2. **Execute CLI**:
|
||||
```bash
|
||||
gemini -p "PURPOSE: Analyze integration test failure...
|
||||
TASK: Examine component interactions, data flow, interface contracts...
|
||||
RULES: Analyze full call stack and data flow across components"
|
||||
```
|
||||
3. **Parse Output**: Extract the RCA, fix recommendation (修复建议), and verification recommendation (验证建议) sections
|
||||
4. **Generate Task JSON** (IMPL-fix-1.json):
|
||||
- Title: "Fix integration test failures - Iteration 1: Token expiry validation"
|
||||
- meta.analysis_report: ".process/iteration-1-analysis.md" (Reference, not embedded data)
|
||||
- meta.analysis_report: ".process/iteration-1-analysis.md" (reference)
|
||||
- meta.test_layer: "integration"
|
||||
- Requirements: "Fix 1 integration test failures by applying the provided fix strategy"
|
||||
- fix_strategy.modification_points: ["src/auth/auth.service.ts:validateToken:45-60", "src/middleware/auth.middleware.ts:checkExpiry:120-135"]
|
||||
- Requirements: "Fix 1 integration test failures by applying provided fix strategy"
|
||||
- fix_strategy.modification_points:
|
||||
- "src/auth/auth.service.ts:validateToken:45-60"
|
||||
- "src/middleware/auth.middleware.ts:checkExpiry:120-135"
|
||||
- fix_strategy.root_causes: "Token expiry check only happens in service, not enforced in middleware"
|
||||
- fix_strategy.quality_assurance: {avoids_symptom_fix: true, addresses_root_cause: true}
|
||||
- **NO failure_context object** - full context available via analysis_report reference
|
||||
5. Save: iteration-1-analysis.md with full CLI output, layer context, failed_tests details, previous_attempts
|
||||
6. Return: task_id="IMPL-fix-1", test_layer="integration", status="success"
|
||||
5. **Save Analysis Report**: iteration-1-analysis.md with full CLI output, layer context, failed_tests details
|
||||
6. **Return**:
|
||||
```javascript
|
||||
{
|
||||
status: "success",
|
||||
task_id: "IMPL-fix-1",
|
||||
task_path: ".workflow/WFS-test-session-001/.task/IMPL-fix-1.json",
|
||||
analysis_report: ".process/iteration-1-analysis.md",
|
||||
cli_output: ".process/iteration-1-cli-output.txt",
|
||||
summary: "Token expiry check only happens in service, not enforced in middleware",
|
||||
modification_points_count: 2,
|
||||
estimated_complexity: "medium"
|
||||
}
|
||||
```
|
||||
|
||||
@@ -238,7 +238,7 @@ const context = {
|
||||
|
||||
**3.5 Brainstorm Artifacts Integration**
|
||||
|
||||
If `.workflow/{session}/.brainstorming/` exists, read and include content:
|
||||
If `.workflow/session/{session}/.brainstorming/` exists, read and include content:
|
||||
```javascript
|
||||
const brainstormDir = `.workflow/${session}/.brainstorming`;
|
||||
if (dir_exists(brainstormDir)) {
|
||||
@@ -274,7 +274,7 @@ Calculate risk level based on:
|
||||
|
||||
**3.7 Context Packaging & Output**
|
||||
|
||||
**Output**: `.workflow/{session-id}/.process/context-package.json`
|
||||
**Output**: `.workflow/active/{session-id}/.process/context-package.json`
|
||||
|
||||
**Note**: Task JSONs reference via `context_package_path` field (not in `artifacts`)
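As an illustration of that note, a consumer could resolve the reference roughly like this (the task path and the `metadata` field are assumptions, not defined by this document):

```bash
# Sketch: follow a task's context_package_path reference and read the package.
task=".workflow/active/WFS-example/.task/IMPL-001.json"   # hypothetical task file
pkg=$(jq -r '.context_package_path // empty' "$task")
[ -n "$pkg" ] && jq '.metadata' "$pkg"                     # assumes a metadata block exists
```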
@@ -422,7 +422,7 @@ Calculate risk level based on:
|
||||
## Quality Validation
|
||||
|
||||
Before completion, verify:
|
||||
- [ ] context-package.json in `.workflow/{session}/.process/`
|
||||
- [ ] context-package.json in `.workflow/session/{session}/.process/`
|
||||
- [ ] Valid JSON with all required fields
|
||||
- [ ] Metadata complete (description, keywords, complexity)
|
||||
- [ ] Project context documented (patterns, conventions, tech stack)
|
||||
@@ -432,25 +432,6 @@ Before completion verify:
|
||||
- [ ] File relevance >80%
|
||||
- [ ] No sensitive data exposed
|
||||
|
||||
## Performance Limits
|
||||
|
||||
**File Counts**:
|
||||
- Max 30 high-priority (score >0.8)
|
||||
- Max 20 medium-priority (score 0.5-0.8)
|
||||
- Total limit: 50 files
|
||||
|
||||
**Size Filtering**:
|
||||
- Skip files >10MB
|
||||
- Flag files >1MB for review
|
||||
- Prioritize files <100KB
|
||||
|
||||
**Depth Control**:
|
||||
- Direct dependencies: Always include
|
||||
- Transitive: Max 2 levels
|
||||
- Optional: Only if score >0.7
|
||||
|
||||
**Tool Priority**: Code-Index > ripgrep > find > grep
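A rough shell sketch of the size filter above, using the thresholds from this section (paths and labels are illustrative):

```bash
# Sketch: skip files >10MB, flag files >1MB for review, prefer files <100KB.
find src -type f ! -size +10M 2>/dev/null | while read -r f; do
  bytes=$(wc -c < "$f")
  if   [ "$bytes" -gt 1048576 ]; then echo "FLAG  $f"   # >1MB: review before inclusion
  elif [ "$bytes" -lt 102400 ];  then echo "OK    $f"   # <100KB: preferred
  else                                echo "KEEP  $f"
  fi
done
```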
## Output Report
|
||||
|
||||
```
|
||||
@@ -475,7 +456,7 @@ Conflict Detection:
|
||||
- Affected: {modules}
|
||||
- Mitigation: {strategy}
|
||||
|
||||
Output: .workflow/{session}/.process/context-package.json
|
||||
Output: .workflow/session/{session}/.process/context-package.json
|
||||
(Referenced in task JSONs via top-level `context_package_path` field)
|
||||
```
|
||||
|
||||
|
||||
@@ -304,7 +304,7 @@ if (!validation.all_passed()) {
|
||||
## Output Location
|
||||
|
||||
```
|
||||
.workflow/{test_session_id}/.process/test-context-package.json
|
||||
.workflow/active/{test_session_id}/.process/test-context-package.json
|
||||
```
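A quick existence and validity check before downstream agents consume the package (the session id is a placeholder):

```bash
# Sketch: verify the test context package was written and parses as JSON.
pkg=".workflow/active/WFS-test-session-001/.process/test-context-package.json"
test -f "$pkg" && jq empty "$pkg" && echo "package OK" || echo "package missing or invalid"
```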
## Helper Functions Reference
|
||||
|
||||
@@ -75,7 +75,7 @@ Task(
|
||||
4. Execution & Output:
|
||||
- Execute CLI tool with assembled context
|
||||
- Generate comprehensive analysis report
|
||||
- Save to .workflow/WFS-[id]/.chat/analyze-[timestamp].md (or .scratchpad/)
|
||||
- Save to .workflow/active/WFS-[id]/.chat/analyze-[timestamp].md (or .scratchpad/)
|
||||
`
|
||||
)
|
||||
```
|
||||
@@ -84,4 +84,4 @@ Task(
|
||||
|
||||
- **Read-only**: Analyzes code, does NOT modify files
|
||||
- **Auto-pattern**: Detects file patterns from keywords (auth→auth files, component→components, API→api/routes, test→test files)
|
||||
- **Output**: `.workflow/WFS-[id]/.chat/analyze-[timestamp].md` (or `.scratchpad/` if no session)
|
||||
- **Output**: `.workflow/active/WFS-[id]/.chat/analyze-[timestamp].md` (or `.scratchpad/` if no session)
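The timestamped filename in that path can be produced with something like the following sketch (the exact format string is an assumption based on the examples elsewhere in this document):

```bash
# Sketch: build the analysis output path for the active session.
session="WFS-auth-system"   # example session id
out=".workflow/active/${session}/.chat/analyze-$(date +%Y%m%d-%H%M%S).md"
mkdir -p "$(dirname "$out")" && echo "writing report to $out"
```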
@@ -70,7 +70,7 @@ Task(
|
||||
3. Execution & Output:
|
||||
- Execute CLI tool with assembled context
|
||||
- Validate answer completeness
|
||||
- Save to .workflow/WFS-[id]/.chat/chat-[timestamp].md (or .scratchpad/)
|
||||
- Save to .workflow/active/WFS-[id]/.chat/chat-[timestamp].md (or .scratchpad/)
|
||||
`
|
||||
)
|
||||
```
|
||||
@@ -79,4 +79,4 @@ Task(
|
||||
|
||||
- **Read-only**: Provides answers, does NOT modify code
|
||||
- **Context**: `@CLAUDE.md` + inferred or all files (`@**/*`)
|
||||
- **Output**: `.workflow/WFS-[id]/.chat/chat-[timestamp].md` (or `.scratchpad/` if no session)
|
||||
- **Output**: `.workflow/active/WFS-[id]/.chat/chat-[timestamp].md` (or `.scratchpad/` if no session)
|
||||
|
||||
@@ -468,9 +468,9 @@ Summary: [file references, changes, next steps]
|
||||
|
||||
**Execution Log Destination**:
|
||||
- **IF** active workflow session exists:
|
||||
- Execution log: `.workflow/WFS-[id]/.chat/codex-execute-[timestamp].md`
|
||||
- Task summaries: `.workflow/WFS-[id]/.summaries/[TASK-ID]-summary.md` (if task ID)
|
||||
- Task updates: `.workflow/WFS-[id]/.task/[TASK-ID].json` status updates
|
||||
- Execution log: `.workflow/active/WFS-[id]/.chat/codex-execute-[timestamp].md`
|
||||
- Task summaries: `.workflow/active/WFS-[id]/.summaries/[TASK-ID]-summary.md` (if task ID)
|
||||
- Task updates: `.workflow/active/WFS-[id]/.task/[TASK-ID].json` status updates
|
||||
- TodoWrite tracking: Embedded in execution log
|
||||
- **ELSE** (no active session):
|
||||
- **Recommended**: Create workflow session first (`/workflow:session:start`)
|
||||
@@ -478,7 +478,7 @@ Summary: [file references, changes, next steps]
|
||||
|
||||
**Output Files** (during execution):
|
||||
```
|
||||
.workflow/WFS-[session-id]/
|
||||
.workflow/active/WFS-[session-id]/
|
||||
├── .chat/
|
||||
│ └── codex-execute-20250105-143022.md # Full execution log with task flow
|
||||
├── .summaries/
|
||||
@@ -492,9 +492,9 @@ Summary: [file references, changes, next steps]
|
||||
|
||||
**Examples**:
|
||||
- During session `WFS-auth-system`, executing multi-stage auth implementation:
|
||||
- Log: `.workflow/WFS-auth-system/.chat/codex-execute-20250105-143022.md`
|
||||
- Summaries: `.workflow/WFS-auth-system/.summaries/IMPL-001.{1,2,3}-summary.md`
|
||||
- Task status: `.workflow/WFS-auth-system/.task/IMPL-001.json` (status: completed)
|
||||
- Log: `.workflow/active/WFS-auth-system/.chat/codex-execute-20250105-143022.md`
|
||||
- Summaries: `.workflow/active/WFS-auth-system/.summaries/IMPL-001.{1,2,3}-summary.md`
|
||||
- Task status: `.workflow/active/WFS-auth-system/.task/IMPL-001.json` (status: completed)
|
||||
- No session, ad-hoc multi-stage task:
|
||||
- Log: `.workflow/.scratchpad/codex-execute-auth-refactor-20250105-143045.md`
|
||||
|
||||
|
||||
@@ -167,9 +167,9 @@ TodoWrite({
|
||||
## Output Routing
|
||||
|
||||
- **Primary Log**: Entire multi-round discussion logged to single file:
|
||||
- `.workflow/WFS-[id]/.chat/discuss-plan-[topic]-[timestamp].md`
|
||||
- `.workflow/active/WFS-[id]/.chat/discuss-plan-[topic]-[timestamp].md`
|
||||
- **Final Plan**: Clean final version saved upon conclusion:
|
||||
- `.workflow/WFS-[id]/.summaries/plan-[topic].md`
|
||||
- `.workflow/active/WFS-[id]/.summaries/plan-[topic].md`
|
||||
- **Scratchpad**: If no session active:
|
||||
- `.workflow/.scratchpad/discuss-plan-[topic]-[timestamp].md`
|
||||
|
||||
|
||||
@@ -76,8 +76,8 @@ Use `resume --last` when current task extends/relates to previous execution. See
|
||||
|
||||
## Workflow Integration
|
||||
|
||||
**Session Management**: Auto-detects `.workflow/.active-*` marker
|
||||
- Active session: Save to `.workflow/WFS-[id]/.chat/execute-[timestamp].md`
|
||||
**Session Management**: Auto-detects active session from `.workflow/active/` directory
|
||||
- Active session: Save to `.workflow/active/WFS-[id]/.chat/execute-[timestamp].md`
|
||||
- No session: Create new session or save to scratchpad
|
||||
|
||||
**Task Integration**: Load from `.task/[TASK-ID].json`, update status, generate summary
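Both mechanisms can be sketched in a few lines of shell; the task id and JSON fields below are placeholders:

```bash
# Sketch: detect the active session, then load a task definition from it.
session_dir=$(ls -d .workflow/active/WFS-* 2>/dev/null | head -1)   # assumes a single active session
if [ -n "$session_dir" ]; then
  jq '{id: .id, status: .status}' "$session_dir/.task/IMPL-001.json"   # hypothetical task id and fields
else
  echo "no active session - create one or fall back to .workflow/.scratchpad/"
fi
```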
@@ -127,8 +127,8 @@ Task(
|
||||
- Ensure working implementation with proper error handling
|
||||
|
||||
5. Output & Documentation:
|
||||
- Save execution log: .workflow/WFS-[id]/.chat/execute-[timestamp].md
|
||||
${task_id ? '- Generate task summary: .workflow/WFS-[id]/.summaries/' + task_id + '-summary.md' : ''}
|
||||
- Save execution log: .workflow/active/WFS-[id]/.chat/execute-[timestamp].md
|
||||
${task_id ? '- Generate task summary: .workflow/active/WFS-[id]/.summaries/' + task_id + '-summary.md' : ''}
|
||||
${task_id ? '- Update task status in .task/' + task_id + '.json' : ''}
|
||||
- Document all code changes made
|
||||
|
||||
@@ -137,7 +137,7 @@ Task(
|
||||
)
|
||||
```
|
||||
|
||||
**Output**: `.workflow/WFS-[id]/.chat/execute-[timestamp].md` + `.summaries/[TASK-ID]-summary.md` (or `.scratchpad/` if no session)
|
||||
**Output**: `.workflow/active/WFS-[id]/.chat/execute-[timestamp].md` + `.workflow/active/WFS-[id]/.summaries/[TASK-ID]-summary.md` (or `.scratchpad/` if no session)
|
||||
|
||||
## Examples
|
||||
|
||||
|
||||
@@ -84,7 +84,7 @@ Task(
|
||||
- Root cause diagnosis with evidence
|
||||
- Fix suggestions and recommendations
|
||||
- Prevention strategies
|
||||
- Save to .workflow/WFS-[id]/.chat/bug-diagnosis-[timestamp].md (or .scratchpad/)
|
||||
- Save to .workflow/active/WFS-[id]/.chat/bug-diagnosis-[timestamp].md (or .scratchpad/)
|
||||
`
|
||||
)
|
||||
```
|
||||
@@ -93,4 +93,4 @@ Task(
|
||||
|
||||
- **Read-only**: Diagnoses bugs, does NOT modify code
|
||||
- **Template**: `~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt`
|
||||
- **Output**: `.workflow/WFS-[id]/.chat/bug-diagnosis-[timestamp].md` (or `.scratchpad/` if no session)
|
||||
- **Output**: `.workflow/active/WFS-[id]/.chat/bug-diagnosis-[timestamp].md` (or `.scratchpad/` if no session)
|
||||
|
||||
@@ -86,7 +86,7 @@ Task(
|
||||
- Execution trace documentation
|
||||
- Call flow analysis with diagrams
|
||||
- Performance and optimization insights
|
||||
- Save to .workflow/WFS-[id]/.chat/code-analysis-[timestamp].md (or .scratchpad/)
|
||||
- Save to .workflow/active/WFS-[id]/.chat/code-analysis-[timestamp].md (or .scratchpad/)
|
||||
`
|
||||
)
|
||||
```
|
||||
@@ -95,4 +95,4 @@ Task(
|
||||
|
||||
- **Read-only**: Analyzes code, does NOT modify files
|
||||
- **Template**: `~/.claude/workflows/cli-templates/prompts/analysis/01-trace-code-execution.txt`
|
||||
- **Output**: `.workflow/WFS-[id]/.chat/code-analysis-[timestamp].md` (or `.scratchpad/` if no session)
|
||||
- **Output**: `.workflow/active/WFS-[id]/.chat/code-analysis-[timestamp].md` (or `.scratchpad/` if no session)
|
||||
|
||||
@@ -81,7 +81,7 @@ Task(
|
||||
- Strategic modification plan
|
||||
- Impact analysis and risk assessment
|
||||
- Implementation roadmap
|
||||
- Save to .workflow/WFS-[id]/.chat/plan-[timestamp].md (or .scratchpad/)
|
||||
- Save to .workflow/active/WFS-[id]/.chat/plan-[timestamp].md (or .scratchpad/)
|
||||
`
|
||||
)
|
||||
```
|
||||
@@ -90,4 +90,4 @@ Task(
|
||||
|
||||
- **Read-only**: Creates modification plans, does NOT generate code
|
||||
- **Template**: `~/.claude/workflows/cli-templates/prompts/planning/01-plan-architecture-design.txt`
|
||||
- **Output**: `.workflow/WFS-[id]/.chat/plan-[timestamp].md` (or `.scratchpad/` if no session)
|
||||
- **Output**: `.workflow/active/WFS-[id]/.chat/plan-[timestamp].md` (or `.scratchpad/` if no session)
|
||||
|
||||
@@ -1,37 +1,22 @@
|
||||
---
|
||||
name: enhance-prompt
|
||||
description: Enhanced prompt transformation using session memory and codebase analysis with --enhance flag detection
|
||||
description: Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection
|
||||
argument-hint: "user input to enhance"
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
Systematically enhances user prompts by combining session memory context with codebase patterns, translating ambiguous requests into actionable specifications.
|
||||
Systematically enhances user prompts by leveraging session memory context and intent analysis, translating ambiguous requests into actionable specifications.
|
||||
|
||||
## Core Protocol
|
||||
|
||||
**Enhancement Pipeline:**
|
||||
`Intent Translation` → `Context Integration` → `Gemini Analysis (if needed)` → `Structured Output`
|
||||
`Intent Translation` → `Context Integration` → `Structured Output`
|
||||
|
||||
**Context Sources:**
|
||||
- Session memory (conversation history, previous analysis)
|
||||
- Codebase patterns (via Gemini when triggered)
|
||||
- Implicit technical requirements
|
||||
|
||||
## Gemini Trigger Logic
|
||||
|
||||
```pseudo
|
||||
FUNCTION should_use_gemini(user_prompt):
|
||||
critical_keywords = ["refactor", "migrate", "redesign", "auth", "payment", "security"]
|
||||
|
||||
RETURN (
|
||||
prompt_affects_multiple_modules(user_prompt, threshold=3) OR
|
||||
any_keyword_in_prompt(critical_keywords, user_prompt)
|
||||
)
|
||||
END
|
||||
```
|
||||
|
||||
**Gemini Integration:** ~/.claude/workflows/intelligent-tools-strategy.md
|
||||
- User intent patterns
|
||||
|
||||
## Enhancement Rules
|
||||
|
||||
@@ -47,22 +32,18 @@ END
|
||||
|
||||
### Context Integration Strategy
|
||||
|
||||
**Session Memory First:**
|
||||
**Session Memory:**
|
||||
- Reference recent conversation context
|
||||
- Reuse previously identified patterns
|
||||
- Build on established understanding
|
||||
|
||||
**Codebase Analysis (via Gemini):**
|
||||
- Only when complexity requires it
|
||||
- Focus on integration points
|
||||
- Identify existing patterns
|
||||
- Infer technical requirements from discussion
|
||||
|
||||
**Example:**
|
||||
```bash
|
||||
# User: "add login"
|
||||
# Session Memory: Previous auth discussion, JWT mentioned
|
||||
# Inferred: JWT-based auth, integrate with existing session management
|
||||
# Gemini (if multi-module): Analyze AuthService patterns, middleware structure
|
||||
# Action: Implement JWT authentication with session persistence
|
||||
```
|
||||
|
||||
## Output Structure
|
||||
@@ -76,7 +57,7 @@ ATTENTION: [Critical constraints]
|
||||
|
||||
### Output Examples
|
||||
|
||||
**Simple (no Gemini):**
|
||||
**Example 1:**
|
||||
```bash
|
||||
# Input: "fix login button"
|
||||
INTENT: Debug non-functional login button
|
||||
@@ -85,28 +66,28 @@ ACTION: Check event binding → verify state updates → test auth flow
|
||||
ATTENTION: Preserve existing OAuth integration
|
||||
```
|
||||
|
||||
**Complex (with Gemini):**
|
||||
**Example 2:**
|
||||
```bash
|
||||
# Input: "refactor payment code"
|
||||
INTENT: Restructure payment module for maintainability
|
||||
CONTEXT: Session memory - PCI compliance requirements
|
||||
Gemini - PaymentService → StripeAdapter pattern identified
|
||||
ACTION: Extract reusable validators → isolate payment gateway logic
|
||||
CONTEXT: Session memory - PCI compliance requirements, Stripe integration patterns
|
||||
ACTION: Extract reusable validators → isolate payment gateway logic → maintain adapter pattern
|
||||
ATTENTION: Zero behavior change, maintain PCI compliance, full test coverage
|
||||
```
|
||||
|
||||
## Automatic Triggers
|
||||
## Enhancement Triggers
|
||||
|
||||
- Ambiguous language: "fix", "improve", "clean up"
|
||||
- Multi-module impact (>3 modules)
|
||||
- Vague requests requiring clarification
|
||||
- Complex technical requirements
|
||||
- Architecture changes
|
||||
- Critical systems: auth, payment, security
|
||||
- Complex refactoring
|
||||
- Multi-step refactoring
|
||||
|
||||
## Key Principles
|
||||
|
||||
1. **Memory First**: Leverage session context before analysis
|
||||
2. **Minimal Gemini**: Only when complexity demands it
|
||||
3. **Context Reuse**: Build on previous understanding
|
||||
4. **Clear Output**: Structured, actionable specifications
|
||||
1. **Session Memory First**: Leverage conversation context and established understanding
|
||||
2. **Context Reuse**: Build on previous discussions and decisions
|
||||
3. **Clear Output**: Structured, actionable specifications
|
||||
4. **Intent Clarification**: Transform vague requests into specific technical goals
|
||||
5. **Avoid Duplication**: Reference existing context, don't repeat
|
||||
@@ -63,10 +63,10 @@ Lightweight planner that analyzes project structure, decomposes documentation wo
|
||||
bash(pwd && basename "$(pwd)" && git rev-parse --show-toplevel 2>/dev/null || pwd && date +%Y%m%d-%H%M%S)
|
||||
|
||||
# Create session directories (replace timestamp)
|
||||
bash(mkdir -p .workflow/WFS-docs-{timestamp}/.{task,process,summaries} && touch .workflow/.active-WFS-docs-{timestamp})
|
||||
bash(mkdir -p .workflow/active/WFS-docs-{timestamp}/.{task,process,summaries})
|
||||
|
||||
# Create workflow-session.json (replace values)
|
||||
bash(echo '{"session_id":"WFS-docs-{timestamp}","project":"{project} documentation","status":"planning","timestamp":"2024-01-20T14:30:22+08:00","path":".","target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' | jq '.' > .workflow/WFS-docs-{timestamp}/workflow-session.json)
|
||||
bash(echo '{"session_id":"WFS-docs-{timestamp}","project":"{project} documentation","status":"planning","timestamp":"2024-01-20T14:30:22+08:00","path":".","target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' | jq '.' > .workflow/active/WFS-docs-{timestamp}/workflow-session.json)
|
||||
```
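A quick sanity check that Phase 1 produced a usable session (purely illustrative; field names follow the JSON written above):

```bash
# Sketch: confirm the session skeleton and metadata exist before Phase 2.
session=$(ls -d .workflow/active/WFS-docs-* 2>/dev/null | head -1)
test -d "$session/.task" && jq -r '.session_id + " -> " + .status' "$session/workflow-session.json"
```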
### Phase 2: Analyze Structure
|
||||
@@ -89,7 +89,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
|
||||
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null | xargs cat 2>/dev/null; fi)
|
||||
```
|
||||
|
||||
**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/phase2-analysis.json` with structure:
|
||||
**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/doc-planning-data.json` with structure:
|
||||
|
||||
```json
|
||||
{
|
||||
@@ -118,7 +118,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
|
||||
|
||||
**Then** use **Edit tool** to update `workflow-session.json` adding analysis field.
|
||||
|
||||
**Output**: Single `phase2-analysis.json` with all analysis data (no temp files or Python scripts).
|
||||
**Output**: Single `doc-planning-data.json` with all analysis data (no temp files or Python scripts).
|
||||
|
||||
**Auto-skipped**: Tests (`**/test/**`, `**/*.test.*`), Build (`**/node_modules/**`, `**/dist/**`), Config (root-level files), Vendor directories.
|
||||
|
||||
@@ -127,8 +127,8 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
|
||||
**Commands**:
|
||||
|
||||
```bash
|
||||
# Count existing docs from phase2-analysis.json
|
||||
bash(cat .workflow/WFS-docs-{timestamp}/.process/phase2-analysis.json | jq '.existing_docs.file_list | length')
|
||||
# Count existing docs from doc-planning-data.json
|
||||
bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq '.existing_docs.file_list | length')
|
||||
```
|
||||
|
||||
**Data Processing**: Use the count result, then use **Edit tool** to update `workflow-session.json` (a shell equivalent is sketched below):
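The `existing_docs_count` field below is illustrative; the actual schema for this update is not specified here.

```bash
# Sketch: record the existing-doc count in workflow-session.json.
session=".workflow/active/WFS-docs-20240120-143022"   # example session directory
count=$(jq '.existing_docs.file_list | length' "$session/.process/doc-planning-data.json")
jq --argjson n "$count" '. + {existing_docs_count: $n}' "$session/workflow-session.json" \
  > tmp.json && mv tmp.json "$session/workflow-session.json"
```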
@@ -182,11 +182,11 @@ Large Projects (single dir >10 docs):
|
||||
**Commands**:
|
||||
|
||||
```bash
|
||||
# 1. Get top-level directories from phase2-analysis.json
|
||||
bash(cat .workflow/WFS-docs-{timestamp}/.process/phase2-analysis.json | jq -r '.top_level_dirs[]')
|
||||
# 1. Get top-level directories from doc-planning-data.json
|
||||
bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq -r '.top_level_dirs[]')
|
||||
|
||||
# 2. Get mode from workflow-session.json
|
||||
bash(cat .workflow/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')
|
||||
bash(cat .workflow/active/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')
|
||||
|
||||
# 3. Check for HTTP API
|
||||
bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo "NO_API")
|
||||
@@ -201,7 +201,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
|
||||
- If total ≤10 docs: create group
|
||||
- If total >10 docs: split to 1 dir/group or subdivide
|
||||
- If single dir >10 docs: split by subdirectories
|
||||
3. Use **Edit tool** to update `phase2-analysis.json` adding groups field:
|
||||
3. Use **Edit tool** to update `doc-planning-data.json` adding groups field:
|
||||
```json
|
||||
"groups": {
|
||||
"count": 3,
|
||||
@@ -215,7 +215,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
|
||||
|
||||
**Task ID Calculation**:
|
||||
```bash
|
||||
group_count=$(jq '.groups.count' .workflow/WFS-docs-{timestamp}/.process/phase2-analysis.json)
|
||||
group_count=$(jq '.groups.count' .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json)
|
||||
readme_id=$((group_count + 1)) # Next ID after groups
|
||||
arch_id=$((group_count + 2))
|
||||
api_id=$((group_count + 3))
|
||||
@@ -237,7 +237,7 @@ api_id=$((group_count + 3))
|
||||
|
||||
**Generation Process**:
|
||||
1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
|
||||
2. Read group assignments from phase2-analysis.json
|
||||
2. Read group assignments from doc-planning-data.json
|
||||
3. Generate Level 1 tasks (IMPL-001 to IMPL-N, one per group)
|
||||
4. Generate Level 2+ tasks if mode=full (README, ARCHITECTURE, HTTP API)
|
||||
|
||||
@@ -262,14 +262,14 @@ api_id=$((group_count + 3))
|
||||
},
|
||||
"context": {
|
||||
"requirements": [
|
||||
"Process directories from group ${group_number} in phase2-analysis.json",
|
||||
"Process directories from group ${group_number} in doc-planning-data.json",
|
||||
"Generate docs to .workflow/docs/${project_name}/ (mirrored structure)",
|
||||
"Code folders: API.md + README.md; Navigation folders: README.md only",
|
||||
"Use pre-analyzed data from Phase 2 (no redundant analysis)"
|
||||
],
|
||||
"focus_paths": ["${group_dirs_from_json}"],
|
||||
"precomputed_data": {
|
||||
"phase2_analysis": "${session_dir}/.process/phase2-analysis.json"
|
||||
"phase2_analysis": "${session_dir}/.process/doc-planning-data.json"
|
||||
}
|
||||
},
|
||||
"flow_control": {
|
||||
@@ -278,8 +278,8 @@ api_id=$((group_count + 3))
|
||||
"step": "load_precomputed_data",
|
||||
"action": "Load Phase 2 analysis and extract group directories",
|
||||
"commands": [
|
||||
"bash(cat ${session_dir}/.process/phase2-analysis.json)",
|
||||
"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/phase2-analysis.json)"
|
||||
"bash(cat ${session_dir}/.process/doc-planning-data.json)",
|
||||
"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/doc-planning-data.json)"
|
||||
],
|
||||
"output_to": "phase2_context",
|
||||
"note": "Single JSON file contains all Phase 2 analysis results"
|
||||
@@ -324,7 +324,7 @@ api_id=$((group_count + 3))
|
||||
{
|
||||
"step": 2,
|
||||
"title": "Batch generate documentation via CLI",
|
||||
"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/phase2-analysis.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
|
||||
"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/doc-planning-data.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
|
||||
"depends_on": [1],
|
||||
"output": "generated_docs"
|
||||
}
|
||||
@@ -458,14 +458,13 @@ api_id=$((group_count + 3))
|
||||
**Unified Structure** (single JSON replaces multiple text files):
|
||||
|
||||
```
|
||||
.workflow/
|
||||
├── .active-WFS-docs-{timestamp}
|
||||
.workflow/active/
|
||||
└── WFS-docs-{timestamp}/
|
||||
├── workflow-session.json # Session metadata
|
||||
├── IMPL_PLAN.md
|
||||
├── TODO_LIST.md
|
||||
├── .process/
|
||||
│ └── phase2-analysis.json # All Phase 2 analysis data (replaces 7+ files)
|
||||
│ └── doc-planning-data.json # All Phase 2 analysis data (replaces 7+ files)
|
||||
└── .task/
|
||||
├── IMPL-001.json # Small: all modules | Large: group 1
|
||||
├── IMPL-00N.json # (Large only: groups 2-N)
|
||||
@@ -474,7 +473,7 @@ api_id=$((group_count + 3))
|
||||
└── IMPL-{N+3}.json # HTTP API (optional)
|
||||
```
|
||||
|
||||
**phase2-analysis.json Structure**:
|
||||
**doc-planning-data.json Structure**:
|
||||
```json
|
||||
{
|
||||
"metadata": {
|
||||
|
||||
@@ -21,12 +21,14 @@ auto-continue: true
|
||||
**Key Features**:
|
||||
- Extracts primary design references (colors, typography, spacing, etc.)
|
||||
- Provides dynamic adjustment guidelines for design tokens
|
||||
- Includes prerequisites and tooling requirements (browsers, PostCSS, dark mode)
|
||||
- Progressive loading structure for efficient token usage
|
||||
- Complete implementation examples with React components
|
||||
- Interactive preview showcase
|
||||
|
||||
---
|
||||
|
||||
## Usage
|
||||
## Quick Reference
|
||||
|
||||
### Command Syntax
|
||||
|
||||
@@ -51,20 +53,77 @@ package-name Style reference package name (required)
|
||||
/memory:style-skill-memory
|
||||
```
|
||||
|
||||
### Key Variables
|
||||
|
||||
**Input Variables**:
|
||||
- `PACKAGE_NAME`: Style reference package name
|
||||
- `PACKAGE_DIR`: `.workflow/reference_style/${package_name}`
|
||||
- `SKILL_DIR`: `.claude/skills/style-${package_name}`
|
||||
- `REGENERATE`: `true` if --regenerate flag, `false` otherwise
|
||||
|
||||
**Data Sources** (Phase 2):
|
||||
- `DESIGN_TOKENS_DATA`: Complete design-tokens.json content (from Read)
|
||||
- `LAYOUT_TEMPLATES_DATA`: Complete layout-templates.json content (from Read)
|
||||
- `ANIMATION_TOKENS_DATA`: Complete animation-tokens.json content (from Read, if exists)
|
||||
|
||||
**Metadata** (Phase 2):
|
||||
- `COMPONENT_COUNT`: Total components
|
||||
- `UNIVERSAL_COUNT`: Universal components count
|
||||
- `SPECIALIZED_COUNT`: Specialized components count
|
||||
- `UNIVERSAL_COMPONENTS`: Universal component names (first 5)
|
||||
- `HAS_ANIMATIONS`: Whether animation-tokens.json exists
|
||||
|
||||
**Analysis Output** (`DESIGN_ANALYSIS` - Phase 2):
|
||||
- `has_colors`: Colors exist
|
||||
- `color_semantic`: Has semantic naming (primary/secondary/accent)
|
||||
- `uses_oklch`: Uses modern color spaces (oklch, lab, etc.)
|
||||
- `has_dark_mode`: Has separate light/dark mode color tokens
|
||||
- `spacing_pattern`: Pattern type ("linear", "geometric", "custom")
|
||||
- `spacing_scale`: Actual scale values (e.g., [4, 8, 16, 32, 64])
|
||||
- `has_typography`: Typography system exists
|
||||
- `typography_hierarchy`: Has size scale for hierarchy
|
||||
- `uses_calc`: Uses calc() expressions in token values
|
||||
- `has_radius`: Border radius exists
|
||||
- `radius_style`: Style characteristic ("sharp" <4px, "moderate" 4-8px, "rounded" >8px)
|
||||
- `has_shadows`: Shadow system exists
|
||||
- `shadow_pattern`: Elevation naming pattern
|
||||
- `has_animations`: Animation tokens exist
|
||||
- `animation_range`: Duration range (fast to slow)
|
||||
- `easing_variety`: Types of easing functions
|
||||
|
||||
### Common Errors
|
||||
|
||||
| Error | Cause | Resolution |
|
||||
|-------|-------|------------|
|
||||
| Package not found | Invalid package name or doesn't exist | Run `/workflow:ui-design:codify-style` first |
|
||||
| SKILL already exists | SKILL.md already generated | Use `--regenerate` flag |
|
||||
| Missing layout-templates.json | Incomplete package | Verify package integrity, re-run codify-style |
|
||||
| Invalid JSON format | Corrupted package files | Regenerate package with codify-style |
|
||||
|
||||
---
|
||||
|
||||
## Execution Process
|
||||
|
||||
### Phase 1: Validate Package
|
||||
|
||||
**Purpose**: Check if style reference package exists
|
||||
|
||||
**TodoWrite** (First Action):
|
||||
```json
|
||||
[
|
||||
{"content": "Validate style reference package", "status": "in_progress", "activeForm": "Validating package"},
|
||||
{"content": "Read package data and extract design references", "status": "pending", "activeForm": "Reading package data"},
|
||||
{"content": "Generate SKILL.md with progressive loading", "status": "pending", "activeForm": "Generating SKILL.md"}
|
||||
{
|
||||
"content": "Validate package exists and check SKILL status",
|
||||
"activeForm": "Validating package and SKILL status",
|
||||
"status": "in_progress"
|
||||
},
|
||||
{
|
||||
"content": "Read package data and analyze design system",
|
||||
"activeForm": "Reading package data and analyzing design system",
|
||||
"status": "pending"
|
||||
},
|
||||
{
|
||||
"content": "Generate SKILL.md with design principles and token values",
|
||||
"activeForm": "Generating SKILL.md with design principles and token values",
|
||||
"status": "pending"
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
@@ -75,8 +134,6 @@ package-name Style reference package name (required)
|
||||
bash(echo "${package_name}" || basename "$(pwd)" | sed 's/^style-//')
|
||||
```
|
||||
|
||||
Store result as `package_name`
|
||||
|
||||
**Step 2: Validate Package Exists**
|
||||
|
||||
```bash
|
||||
@@ -113,152 +170,90 @@ if (regenerate_flag && skill_exists) {
|
||||
}
|
||||
```
|
||||
|
||||
**Summary Variables**:
|
||||
- `PACKAGE_NAME`: Style reference package name
|
||||
- `PACKAGE_DIR`: `.workflow/reference_style/${package_name}`
|
||||
- `SKILL_DIR`: `.claude/skills/style-${package_name}`
|
||||
- `REGENERATE`: `true` if --regenerate flag, `false` otherwise
|
||||
|
||||
**TodoWrite Update**:
|
||||
```json
|
||||
[
|
||||
{"content": "Validate style reference package", "status": "completed", "activeForm": "Validating package"},
|
||||
{"content": "Read package data and extract design references", "status": "in_progress", "activeForm": "Reading package data"}
|
||||
]
|
||||
```
|
||||
**TodoWrite Update**: Mark "Validate" as completed, "Read package data" as in_progress
|
||||
|
||||
---
|
||||
|
||||
### Phase 2: Read Package Data & Extract Design References
|
||||
### Phase 2: Read Package Data & Analyze Design System
|
||||
|
||||
**Purpose**: Extract package information and primary design references for SKILL description generation
|
||||
|
||||
**Step 1: Count Components**
|
||||
**Step 1: Read All JSON Files**
|
||||
|
||||
```bash
|
||||
bash(jq '.layout_templates | length' .workflow/reference_style/${package_name}/layout-templates.json 2>/dev/null || echo 0)
|
||||
```
|
||||
# Read layout templates
|
||||
Read(file_path=".workflow/reference_style/${package_name}/layout-templates.json")
|
||||
|
||||
Store result as `component_count`
|
||||
|
||||
**Step 2: Extract Component Types and Classification**
|
||||
|
||||
```bash
|
||||
# Extract component names from layout templates
|
||||
bash(jq -r '.layout_templates | keys[]' .workflow/reference_style/${package_name}/layout-templates.json 2>/dev/null | head -10)
|
||||
|
||||
# Count universal vs specialized components
|
||||
bash(jq '[.layout_templates[] | select(.component_type == "universal")] | length' .workflow/reference_style/${package_name}/layout-templates.json 2>/dev/null || echo 0)
|
||||
bash(jq '[.layout_templates[] | select(.component_type == "specialized")] | length' .workflow/reference_style/${package_name}/layout-templates.json 2>/dev/null || echo 0)
|
||||
|
||||
# Extract universal component names only
|
||||
bash(jq -r '.layout_templates | to_entries | map(select(.value.component_type == "universal")) | .[].key' .workflow/reference_style/${package_name}/layout-templates.json 2>/dev/null | head -10)
|
||||
```
|
||||
|
||||
Store as:
|
||||
- `COMPONENT_TYPES`: List of available component types (all)
|
||||
- `UNIVERSAL_COUNT`: Number of universal (reusable) components
|
||||
- `SPECIALIZED_COUNT`: Number of specialized (project-specific) components
|
||||
- `UNIVERSAL_COMPONENTS`: List of universal component names
|
||||
|
||||
**Step 3: Read Design Tokens**
|
||||
|
||||
```bash
|
||||
# Read design tokens
|
||||
Read(file_path=".workflow/reference_style/${package_name}/design-tokens.json")
|
||||
|
||||
# Read animation tokens (if exists)
|
||||
bash(test -f .workflow/reference_style/${package_name}/animation-tokens.json && echo "exists" || echo "missing")
|
||||
Read(file_path=".workflow/reference_style/${package_name}/animation-tokens.json") # if exists
|
||||
```
|
||||
|
||||
**Extract Primary Design References**:
|
||||
|
||||
**Colors** (top 3-5 most important):
|
||||
```bash
|
||||
bash(jq -r '.colors | to_entries | .[0:5] | .[] | "\(.key): \(.value)"' .workflow/reference_style/${package_name}/design-tokens.json 2>/dev/null | head -5)
|
||||
```
|
||||
|
||||
**Typography** (heading and body fonts):
|
||||
```bash
|
||||
bash(jq -r '.typography | to_entries | select(.key | contains("family")) | .[] | "\(.key): \(.value)"' .workflow/reference_style/${package_name}/design-tokens.json 2>/dev/null)
|
||||
```
|
||||
|
||||
**Spacing Scale** (base spacing values):
|
||||
```bash
|
||||
bash(jq -r '.spacing | to_entries | .[0:5] | .[] | "\(.key): \(.value)"' .workflow/reference_style/${package_name}/design-tokens.json 2>/dev/null)
|
||||
```
|
||||
|
||||
**Border Radius** (base radius values):
|
||||
```bash
|
||||
bash(jq -r '.border_radius | to_entries | .[] | "\(.key): \(.value)"' .workflow/reference_style/${package_name}/design-tokens.json 2>/dev/null)
|
||||
```
|
||||
|
||||
**Shadows** (elevation levels):
|
||||
```bash
|
||||
bash(jq -r '.shadows | to_entries | .[0:3] | .[] | "\(.key): \(.value)"' .workflow/reference_style/${package_name}/design-tokens.json 2>/dev/null)
|
||||
```
|
||||
|
||||
Store extracted references as:
|
||||
- `PRIMARY_COLORS`: List of primary color tokens
|
||||
- `TYPOGRAPHY_FONTS`: Font family tokens
|
||||
- `SPACING_SCALE`: Base spacing values
|
||||
- `BORDER_RADIUS`: Radius values
|
||||
- `SHADOWS`: Shadow definitions
|
||||
|
||||
**Step 4: Read Animation Tokens (if available)**
|
||||
**Step 2: Extract Metadata for Description**
|
||||
|
||||
```bash
|
||||
# Check if animation tokens exist
|
||||
bash(test -f .workflow/reference_style/${package_name}/animation-tokens.json && echo "available" || echo "not_available")
|
||||
# Count components and classify by type
|
||||
bash(jq '.layout_templates | length' layout-templates.json)
|
||||
bash(jq '[.layout_templates[] | select(.component_type == "universal")] | length' layout-templates.json)
|
||||
bash(jq '[.layout_templates[] | select(.component_type == "specialized")] | length' layout-templates.json)
|
||||
bash(jq -r '.layout_templates | to_entries[] | select(.value.component_type == "universal") | .key' layout-templates.json | head -5)
|
||||
```
|
||||
|
||||
If available, extract:
|
||||
```bash
|
||||
Read(file_path=".workflow/reference_style/${package_name}/animation-tokens.json")
|
||||
Store results in metadata variables (see [Key Variables](#key-variables))
|
||||
|
||||
# Extract primary animation values
|
||||
bash(jq -r '.duration | to_entries | .[] | "\(.key): \(.value)"' .workflow/reference_style/${package_name}/animation-tokens.json 2>/dev/null)
|
||||
bash(jq -r '.easing | to_entries | .[0:3] | .[] | "\(.key): \(.value)"' .workflow/reference_style/${package_name}/animation-tokens.json 2>/dev/null)
|
||||
```
|
||||
**Step 3: Analyze Design System for Dynamic Principles**
|
||||
|
||||
Store as:
|
||||
- `ANIMATION_DURATIONS`: Animation duration tokens
|
||||
- `EASING_FUNCTIONS`: Easing function tokens
|
||||
|
||||
**Step 5: Count Files**
|
||||
Analyze design-tokens.json to extract characteristics and patterns:
|
||||
|
||||
```bash
|
||||
bash(cd .workflow/reference_style/${package_name} && ls -1 *.json *.html *.css 2>/dev/null | wc -l)
|
||||
# Color system characteristics
|
||||
bash(jq '.colors | keys' design-tokens.json)
|
||||
bash(jq '.colors | to_entries[0:2] | map(.value)' design-tokens.json)
|
||||
# Check for modern color spaces
|
||||
bash(jq '.colors | to_entries[] | .value | test("oklch|lab|lch")' design-tokens.json)
|
||||
# Check for dark mode variants
|
||||
bash(jq '.colors | keys | map(select(contains("dark") or contains("light")))' design-tokens.json)
|
||||
# → Store: has_colors, color_semantic, uses_oklch, has_dark_mode
|
||||
|
||||
# Spacing pattern detection
|
||||
bash(jq '.spacing | to_entries | map(.value) | map(gsub("[^0-9.]"; "") | tonumber)' design-tokens.json)
|
||||
# Analyze pattern: linear (4-8-12-16) vs geometric (4-8-16-32) vs custom
|
||||
# → Store: spacing_pattern, spacing_scale
|
||||
|
||||
# Typography characteristics
|
||||
bash(jq '.typography | keys | map(select(contains("family") or contains("weight")))' design-tokens.json)
|
||||
bash(jq '.typography | to_entries | map(select(.key | contains("size"))) | .[].value' design-tokens.json)
|
||||
# Check for calc() usage
|
||||
bash(jq '. | tostring | test("calc\\(")' design-tokens.json)
|
||||
# → Store: has_typography, typography_hierarchy, uses_calc
|
||||
|
||||
# Border radius style
|
||||
bash(jq '.border_radius | to_entries | map(.value)' design-tokens.json)
|
||||
# Check range: small (sharp <4px) vs moderate (4-8px) vs large (rounded >8px)
|
||||
# → Store: has_radius, radius_style
|
||||
|
||||
# Shadow characteristics
|
||||
bash(jq '.shadows | keys' design-tokens.json)
|
||||
bash(jq '.shadows | to_entries[0].value' design-tokens.json)
|
||||
# → Store: has_shadows, shadow_pattern
|
||||
|
||||
# Animations (if available)
|
||||
bash(jq '.duration | to_entries | map(.value)' animation-tokens.json)
|
||||
bash(jq '.easing | keys' animation-tokens.json)
|
||||
# → Store: has_animations, animation_range, easing_variety
|
||||
```
|
||||
|
||||
Store result as `file_count`
|
||||
Store analysis results in `DESIGN_ANALYSIS` (see [Key Variables](#key-variables))
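The spacing-pattern classification referenced above can be approximated as follows (a sketch assuming numeric pixel values and ignoring rounding tolerance):

```bash
# Sketch: classify the spacing scale as linear, geometric, or custom.
vals=$(jq '[.spacing | to_entries[].value | gsub("[^0-9.]"; "") | tonumber] | sort' design-tokens.json)
is_linear=$(echo "$vals"    | jq '[range(1; length) as $i | .[$i] - .[$i-1]] | (unique | length) == 1')
is_geometric=$(echo "$vals" | jq '[range(1; length) as $i | .[$i] / .[$i-1]] | (unique | length) == 1')
if   [ "$is_linear" = "true" ];    then echo "spacing_pattern=linear"
elif [ "$is_geometric" = "true" ]; then echo "spacing_pattern=geometric"
else                                    echo "spacing_pattern=custom"; fi
```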
**Summary Data Collected**:
|
||||
- `COMPONENT_COUNT`: Number of components in layout templates
|
||||
- `UNIVERSAL_COUNT`: Number of universal (reusable) components
|
||||
- `SPECIALIZED_COUNT`: Number of specialized (project-specific) components
|
||||
- `COMPONENT_TYPES`: List of component types (first 10)
|
||||
- `UNIVERSAL_COMPONENTS`: List of universal component names (first 10)
|
||||
- `FILE_COUNT`: Total files in package
|
||||
- `HAS_ANIMATIONS`: Whether animation tokens are available
|
||||
- `PRIMARY_COLORS`: Primary color tokens with values
|
||||
- `TYPOGRAPHY_FONTS`: Font family tokens
|
||||
- `SPACING_SCALE`: Base spacing scale
|
||||
- `BORDER_RADIUS`: Border radius values
|
||||
- `SHADOWS`: Shadow definitions
|
||||
- `ANIMATION_DURATIONS`: Animation durations (if available)
|
||||
- `EASING_FUNCTIONS`: Easing functions (if available)
|
||||
**Note**: Analysis focuses on characteristics and patterns, not counts. Include technical feature detection (oklch, calc, dark mode) for Prerequisites section.
|
||||
|
||||
**TodoWrite Update**:
|
||||
```json
|
||||
[
|
||||
{"content": "Read package data and extract design references", "status": "completed", "activeForm": "Reading package data"},
|
||||
{"content": "Generate SKILL.md with progressive loading", "status": "in_progress", "activeForm": "Generating SKILL.md"}
|
||||
]
|
||||
```
|
||||
**TodoWrite Update**: Mark "Read package data" as completed, "Generate SKILL.md" as in_progress
|
||||
|
||||
---
|
||||
|
||||
### Phase 3: Generate SKILL.md
|
||||
|
||||
**Purpose**: Create SKILL memory index with progressive loading structure and design references
|
||||
|
||||
**Step 1: Create SKILL Directory**
|
||||
|
||||
```bash
|
||||
@@ -272,336 +267,57 @@ bash(mkdir -p .claude/skills/style-${package_name})
|
||||
{package_name} project-independent design system with {universal_count} universal layout templates and interactive preview (located at .workflow/reference_style/{package_name}). Load when working with reusable UI components, design tokens, layout patterns, or implementing visual consistency. Excludes {specialized_count} project-specific components.
|
||||
```
|
||||
|
||||
**Key Elements**:
|
||||
- **Universal Count**: Emphasize available reusable layout templates
|
||||
- **Project Independence**: Clearly state project-independent nature
|
||||
- **Specialized Exclusion**: Mention excluded project-specific components
|
||||
- **Path Reference**: Precise package location
|
||||
- **Trigger Keywords**: reusable UI components, design tokens, layout patterns, visual consistency
|
||||
- **Action Coverage**: working with, analyzing, implementing
|
||||
**Step 3: Load and Process SKILL.md Template**
|
||||
|
||||
**Example**:
|
||||
```
|
||||
main-app-style-v1 project-independent design system with 5 universal layout templates and interactive preview (located at .workflow/reference_style/main-app-style-v1). Load when working with reusable UI components, design tokens, layout patterns, or implementing visual consistency. Excludes 3 project-specific components.
|
||||
```
|
||||
|
||||
**Step 3: Write SKILL.md**
|
||||
|
||||
Use Write tool to generate SKILL.md with the following complete content:
|
||||
|
||||
```markdown
|
||||
---
|
||||
name: style-{package_name}
|
||||
description: {intelligent description from Step 2}
|
||||
---
|
||||
|
||||
# {Package Name} Style SKILL Package
|
||||
|
||||
## Documentation: `../../../.workflow/reference_style/{package_name}/`
|
||||
|
||||
## Package Overview
|
||||
|
||||
**Project-independent style reference package** extracted from codebase with reusable design patterns, tokens, and interactive preview.
|
||||
|
||||
**Package Details**:
|
||||
- Package: {package_name}
|
||||
- Layout Templates: {component_count} total
|
||||
- **Universal Components**: {universal_count} (reusable, project-independent)
|
||||
- **Specialized Components**: {specialized_count} (project-specific, excluded from reference)
|
||||
- Universal Component Types: {comma-separated list of UNIVERSAL_COMPONENTS}
|
||||
- Files: {file_count}
|
||||
- Animation Tokens: {has_animations ? "✓ Available" : "Not available"}
|
||||
|
||||
**⚠️ IMPORTANT - Project Independence**:
|
||||
This SKILL package represents a **pure style system** independent of any specific project implementation:
|
||||
- **Universal components** are generic, reusable patterns (buttons, inputs, cards, navigation)
|
||||
- **Specialized components** are project-specific implementations (excluded from this reference)
|
||||
- All design tokens and layout patterns are extracted for **reference purposes only**
|
||||
- Adapt and customize these references based on your project's specific requirements
|
||||
|
||||
---
|
||||
|
||||
## ⚡ Primary Design References
|
||||
|
||||
**IMPORTANT**: These are **reference values** extracted from the codebase. They should be **dynamically adjusted** based on your specific design needs, not treated as fixed constraints.
|
||||
|
||||
### 🎨 Colors
|
||||
|
||||
{FOR each color in PRIMARY_COLORS:
|
||||
- **{color.key}**: `{color.value}`
|
||||
}
|
||||
|
||||
**Usage Guidelines**:
|
||||
- These colors establish the foundation of the design system
|
||||
- Adjust saturation, lightness, or hue based on:
|
||||
- Brand requirements and accessibility needs
|
||||
- Context (light/dark mode, high-contrast themes)
|
||||
- User feedback and A/B testing results
|
||||
- Use color theory principles to maintain harmony when modifying
|
||||
|
||||
### 📝 Typography
|
||||
|
||||
{FOR each font in TYPOGRAPHY_FONTS:
|
||||
- **{font.key}**: `{font.value}`
|
||||
}
|
||||
|
||||
**Usage Guidelines**:
|
||||
- Font families can be substituted based on:
|
||||
- Brand identity and design language
|
||||
- Performance requirements (web fonts vs. system fonts)
|
||||
- Accessibility and readability considerations
|
||||
- Platform-specific availability
|
||||
- Maintain hierarchy and scale relationships when changing fonts
|
||||
|
||||
### 📏 Spacing Scale
|
||||
|
||||
{FOR each spacing in SPACING_SCALE:
|
||||
- **{spacing.key}**: `{spacing.value}`
|
||||
}
|
||||
|
||||
**Usage Guidelines**:
|
||||
- Spacing values form a consistent rhythm system
|
||||
- Adjust scale based on:
|
||||
- Target device (mobile vs. desktop vs. tablet)
|
||||
- Content density requirements
|
||||
- Component-specific needs (compact vs. comfortable layouts)
|
||||
- Maintain proportional relationships when scaling
|
||||
|
||||
### 🔲 Border Radius
|
||||
|
||||
{FOR each radius in BORDER_RADIUS:
|
||||
- **{radius.key}**: `{radius.value}`
|
||||
}
|
||||
|
||||
**Usage Guidelines**:
|
||||
- Border radius affects visual softness and modernity
|
||||
- Adjust based on:
|
||||
- Design aesthetic (sharp vs. rounded vs. pill-shaped)
|
||||
- Component type (buttons, cards, inputs have different needs)
|
||||
- Platform conventions (iOS vs. Android vs. Web)
|
||||
|
||||
### 🌫️ Shadows
|
||||
|
||||
{FOR each shadow in SHADOWS:
|
||||
- **{shadow.key}**: `{shadow.value}`
|
||||
}
|
||||
|
||||
**Usage Guidelines**:
|
||||
- Shadows create elevation and depth perception
|
||||
- Adjust based on:
|
||||
- Material design depth levels
|
||||
- Light/dark mode contexts
|
||||
- Performance considerations (complex shadows impact rendering)
|
||||
- Visual hierarchy needs
|
||||
|
||||
{IF HAS_ANIMATIONS:
|
||||
### ⏱️ Animation & Timing
|
||||
|
||||
**Durations**:
|
||||
{FOR each duration in ANIMATION_DURATIONS:
|
||||
- **{duration.key}**: `{duration.value}`
|
||||
}
|
||||
|
||||
**Easing Functions**:
|
||||
{FOR each easing in EASING_FUNCTIONS:
|
||||
- **{easing.key}**: `{easing.value}`
|
||||
}
|
||||
|
||||
**Usage Guidelines**:
|
||||
- Animation timing affects perceived responsiveness and polish
|
||||
- Adjust based on:
|
||||
- User expectations and platform conventions
|
||||
- Accessibility preferences (reduced motion)
|
||||
- Animation type (micro-interactions vs. page transitions)
|
||||
- Performance constraints (mobile vs. desktop)
|
||||
}
|
||||
|
||||
---
|
||||
|
||||
## 🎯 Design Adaptation Strategies
|
||||
|
||||
### When to Adjust Design References
|
||||
|
||||
**Brand Alignment**:
|
||||
- Modify colors to match brand identity and guidelines
|
||||
- Adjust typography to reflect brand personality
|
||||
- Tune spacing and radius to align with brand aesthetic
|
||||
|
||||
**Accessibility Requirements**:
|
||||
- Increase color contrast ratios for WCAG compliance
|
||||
- Adjust font sizes and spacing for readability
|
||||
- Modify animation durations for reduced-motion preferences
|
||||
|
||||
**Platform Optimization**:
|
||||
- Adapt spacing for mobile touch targets (min 44x44px)
|
||||
- Adjust shadows and radius for platform conventions
|
||||
- Optimize animation performance for target devices
|
||||
|
||||
**Context-Specific Needs**:
|
||||
- Dark mode: Adjust colors, shadows, and contrasts
|
||||
- High-density displays: Fine-tune spacing and sizing
|
||||
- Responsive design: Scale tokens across breakpoints
|
||||
|
||||
### How to Apply Adjustments
|
||||
|
||||
1. **Identify Need**: Determine which tokens need adjustment based on your specific requirements
|
||||
2. **Maintain Relationships**: Preserve proportional relationships between related tokens
|
||||
3. **Test Thoroughly**: Validate changes across components and use cases
|
||||
4. **Document Changes**: Track modifications and rationale for team alignment
|
||||
5. **Iterate**: Refine based on user feedback and testing results
|
||||
|
||||
---
|
||||
|
||||
## Progressive Loading
|
||||
|
||||
### Level 0: Design Tokens (~5K tokens)
|
||||
|
||||
Essential design token system for consistent styling.
|
||||
|
||||
**Files**:
|
||||
- [Design Tokens](../../../.workflow/reference_style/{package_name}/design-tokens.json) - Colors, typography, spacing, shadows, borders
|
||||
|
||||
**Use when**: Quick token reference, applying consistent styles, color/typography queries
|
||||
|
||||
---
|
||||
|
||||
### Level 1: Universal Layout Templates (~12K tokens)
|
||||
|
||||
**Project-independent** component layout patterns for reusable UI elements.
|
||||
|
||||
**Files**:
|
||||
- Level 0 files
|
||||
- [Layout Templates](../../../.workflow/reference_style/{package_name}/layout-templates.json) - Component structures with HTML/CSS patterns
|
||||
|
||||
**⚠️ Reference Strategy**:
|
||||
- **Only reference components with `component_type: "universal"`** - these are reusable, project-independent patterns
|
||||
- **Ignore components with `component_type: "specialized"`** - these are project-specific implementations
|
||||
- Universal components include: buttons, inputs, forms, cards, navigation, modals, etc.
|
||||
- Use universal patterns as **reference templates** to adapt for your specific project needs
|
||||
|
||||
**Use when**: Building components, understanding component architecture, implementing layouts
|
||||
|
||||
---
|
||||
|
||||
### Level 2: Complete System (~20K tokens)
|
||||
|
||||
Full design system with animations and interactive preview.
|
||||
|
||||
**Files**:
|
||||
- All Level 1 files
|
||||
- [Animation Tokens](../../../.workflow/reference_style/{package_name}/animation-tokens.json) - Animation durations, easing, transitions _(if available)_
|
||||
- [Preview HTML](../../../.workflow/reference_style/{package_name}/preview.html) - Interactive showcase (reference only)
|
||||
- [Preview CSS](../../../.workflow/reference_style/{package_name}/preview.css) - Showcase styling (reference only)
|
||||
|
||||
**Use when**: Comprehensive analysis, animation development, complete design system understanding
|
||||
|
||||
---
|
||||
|
||||
## Interactive Preview
|
||||
|
||||
**Location**: `.workflow/reference_style/{package_name}/preview.html`
|
||||
|
||||
**View in Browser**:
|
||||
**⚠️ CRITICAL - Execute First**:
|
||||
```bash
|
||||
cd .workflow/reference_style/{package_name}
|
||||
python -m http.server 8080
|
||||
# Open http://localhost:8080/preview.html
|
||||
bash(cat ~/.claude/workflows/cli-templates/memory/style-skill-memory/skill-md-template.md)
|
||||
```
|
||||
|
||||
**Features**:
|
||||
- Color palette swatches with values
|
||||
- Typography scale and combinations
|
||||
- All components with variants and states
|
||||
- Spacing, radius, shadow visual examples
|
||||
- Interactive state demonstrations
|
||||
- Usage code snippets
|
||||
**Template Processing**:
|
||||
1. **Replace variables**: Substitute all `{variable}` placeholders with actual values from Phase 2
|
||||
2. **Generate dynamic sections**:
|
||||
- **Prerequisites & Tooling**: Generate based on `DESIGN_ANALYSIS` technical features (oklch, calc, dark mode)
|
||||
- **Design Principles**: Generate based on `DESIGN_ANALYSIS` characteristics
|
||||
- **Complete Implementation Example**: Include React component example with token adaptation
|
||||
- **Design Token Values**: Iterate `DESIGN_TOKENS_DATA`, `ANIMATION_TOKENS_DATA` and display all key-value pairs with DEFAULT annotations
|
||||
3. **Write to file**: Use Write tool to save to `.claude/skills/style-{package_name}/SKILL.md`
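For the simple `{variable}` substitutions in step 1, a shell equivalent might look like the sketch below; in practice the Write tool performs this, and the dynamic sections in step 2 still require model-generated content rather than plain substitution:

```bash
# Sketch: substitute flat template placeholders with values gathered in Phase 2.
template=~/.claude/workflows/cli-templates/memory/style-skill-memory/skill-md-template.md
sed -e "s/{package_name}/${PACKAGE_NAME}/g" \
    -e "s/{component_count}/${COMPONENT_COUNT}/g" \
    -e "s/{universal_count}/${UNIVERSAL_COUNT}/g" \
    -e "s/{specialized_count}/${SPECIALIZED_COUNT}/g" \
    "$template" > ".claude/skills/style-${PACKAGE_NAME}/SKILL.md"
```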
---
|
||||
**Variable Replacement Map**:
|
||||
- `{package_name}` → PACKAGE_NAME
|
||||
- `{intelligent_description}` → Generated description from Step 2
|
||||
- `{component_count}` → COMPONENT_COUNT
|
||||
- `{universal_count}` → UNIVERSAL_COUNT
|
||||
- `{specialized_count}` → SPECIALIZED_COUNT
|
||||
- `{universal_components_list}` → UNIVERSAL_COMPONENTS (comma-separated)
|
||||
- `{has_animations}` → HAS_ANIMATIONS
|
||||
|
||||
## Usage Guidelines
|
||||
**Dynamic Content Generation**:
|
||||
|
||||
### Loading Levels
|
||||
See template file for complete structure. Key dynamic sections:
|
||||
|
||||
**Level 0** (5K): Design tokens only
|
||||
```
|
||||
Load Level 0 for design token reference
|
||||
```
|
||||
1. **Prerequisites & Tooling** (based on DESIGN_ANALYSIS technical features):
|
||||
- IF uses_oklch → Include PostCSS plugin requirement (`postcss-oklab-function`)
|
||||
- IF uses_calc → Include preprocessor requirement for calc() expressions
|
||||
- IF has_dark_mode → Include dark mode implementation mechanism (class or media query)
|
||||
- ALWAYS include browser support, jq installation, and local server setup
|
||||
|
||||
**Level 1** (12K): Tokens + layout templates
|
||||
```
|
||||
Load Level 1 for layout templates and design tokens
|
||||
```
|
||||
2. **Design Principles** (based on DESIGN_ANALYSIS):
|
||||
- IF has_colors → Include "Color System" principle with semantic pattern
|
||||
- IF spacing_pattern detected → Include "Spatial Rhythm" with unified scale description (actual token values)
|
||||
- IF has_typography_hierarchy → Include "Typographic System" with scale examples
|
||||
- IF has_radius → Include "Shape Language" with style characteristic
|
||||
- IF has_shadows → Include "Depth & Elevation" with elevation pattern
|
||||
- IF has_animations → Include "Motion & Timing" with duration range
|
||||
- ALWAYS include "Accessibility First" principle
|
||||
|
||||
**Level 2** (20K): Complete system with animations and preview
|
||||
```
|
||||
Load Level 2 for complete design system with preview reference
|
||||
```
|
||||
|
||||
### Common Use Cases
|
||||
|
||||
**Implementing UI Components**:
|
||||
- Load Level 1 for universal layout templates
|
||||
- **Only reference components with `component_type: "universal"`** in layout-templates.json
|
||||
- Apply design tokens from design-tokens.json
|
||||
- Adapt patterns to your project's specific requirements
|
||||
|
||||
**Ensuring Style Consistency**:
|
||||
- Load Level 0 for design tokens
|
||||
- Use design-tokens.json for colors, typography, spacing
|
||||
- Check preview.html for visual reference (universal components only)
|
||||
|
||||
**Analyzing Component Patterns**:
|
||||
- Load Level 2 for complete analysis
|
||||
- Review layout-templates.json for component architecture
|
||||
- **Filter for `component_type: "universal"` to exclude project-specific implementations**
|
||||
- Check preview.html for implementation examples
|
||||
|
||||
**Animation Development**:
|
||||
- Load Level 2 for animation tokens (if available)
|
||||
- Reference animation-tokens.json for durations and easing
|
||||
- Apply consistent timing and transitions
|
||||
|
||||
**⚠️ Critical Usage Rule**:
|
||||
This is a **project-independent style reference system**. When working with layout-templates.json:
|
||||
- **USE**: Components marked `component_type: "universal"` as reusable reference patterns
|
||||
- **IGNORE**: Components marked `component_type: "specialized"` (project-specific implementations)
|
||||
- **ADAPT**: All patterns should be customized for your specific project needs

---

## Package Structure

```
.workflow/reference_style/{package_name}/
├── layout-templates.json      # Layout templates from codebase
├── design-tokens.json         # Design token system
├── animation-tokens.json      # Animation tokens (optional)
├── preview.html               # Interactive showcase
└── preview.css                # Showcase styling
```

---

## Regeneration

To update this SKILL memory after package changes:

```bash
/memory:style-skill-memory {package_name} --regenerate
```

---

## Related Commands

**Generate Package**:
```bash
/workflow:ui-design:codify-style --source ./src --package-name {package_name}
```

**Update Package**:
Re-run codify-style with same package name to update extraction.
```
3. **Design Token Values** (iterate from read data):
- Colors: Iterate `DESIGN_TOKENS_DATA.colors`
- Typography: Iterate `DESIGN_TOKENS_DATA.typography`
- Spacing: Iterate `DESIGN_TOKENS_DATA.spacing`
- Border Radius: Iterate `DESIGN_TOKENS_DATA.border_radius` with calc() explanations
- Shadows: Iterate `DESIGN_TOKENS_DATA.shadows` with DEFAULT token annotations
- Animations (if available): Iterate `ANIMATION_TOKENS_DATA.duration` and `ANIMATION_TOKENS_DATA.easing`
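A sketch of that iteration as bash/jq, assuming the top-level keys of design-tokens.json match the categories above (`colors`, `typography`, `spacing`, `border_radius`, `shadows`); nested token objects print as JSON:

```bash
tokens=".workflow/reference_style/${package_name}/design-tokens.json"
# Emit one "key: value" line per token in each category.
for category in colors typography spacing border_radius shadows; do
  echo "## ${category}"
  jq -r --arg c "$category" '.[$c] // {} | to_entries[] | "\(.key): \(.value)"' "$tokens"
done
```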

**Step 4: Verify SKILL.md Created**

@@ -609,115 +325,27 @@ Re-run codify-style with same package name to update extraction.
bash(test -f .claude/skills/style-${package_name}/SKILL.md && echo "success" || echo "failed")
```

**TodoWrite Update**:
```json
[
{"content": "Validate style reference package", "status": "completed", "activeForm": "Validating package"},
{"content": "Read package data and extract design references", "status": "completed", "activeForm": "Reading package data"},
{"content": "Generate SKILL.md with progressive loading", "status": "completed", "activeForm": "Generating SKILL.md"}
]
```
|
||||
|
||||
**Final Action**: Report completion summary to user
|
||||
**TodoWrite Update**: Mark all todos as completed
|
||||
|
||||
---
|
||||
|
||||
## Completion Message
|
||||
### Completion Message
|
||||
|
||||
Display extracted primary design references to user:
|
||||
Display a simple completion message with key information:
|
||||
|
||||
```
|
||||
✅ SKILL memory generated successfully!
|
||||
✅ SKILL memory generated for style package: {package_name}
|
||||
|
||||
Package: {package_name}
|
||||
SKILL Location: .claude/skills/style-{package_name}/SKILL.md
|
||||
📁 Location: .claude/skills/style-{package_name}/SKILL.md
|
||||
|
||||
📦 Package Details:
|
||||
- Layout Templates: {component_count} total
|
||||
- Universal (reusable): {universal_count}
|
||||
- Specialized (project-specific): {specialized_count}
|
||||
- Universal Component Types: {show first 5 UNIVERSAL_COMPONENTS, then "+ X more"}
|
||||
- Files: {file_count}
|
||||
- Animation Tokens: {has_animations ? "✓ Available" : "Not available"}
|
||||
📊 Package Summary:
|
||||
- {component_count} components ({universal_count} universal, {specialized_count} specialized)
|
||||
- Design tokens: colors, typography, spacing, shadows{animations_note}
|
||||
|
||||
🎨 Primary Design References Extracted:
|
||||
{IF PRIMARY_COLORS exists:
|
||||
Colors:
|
||||
{show first 3 PRIMARY_COLORS with key: value}
|
||||
{if more than 3: + X more colors}
|
||||
}
|
||||
|
||||
{IF TYPOGRAPHY_FONTS exists:
|
||||
Typography:
|
||||
{show all TYPOGRAPHY_FONTS}
|
||||
}
|
||||
|
||||
{IF SPACING_SCALE exists:
|
||||
Spacing Scale:
|
||||
{show first 3 SPACING_SCALE items}
|
||||
{if more than 3: + X more spacing tokens}
|
||||
}
|
||||
|
||||
{IF BORDER_RADIUS exists:
|
||||
Border Radius:
|
||||
{show all BORDER_RADIUS}
|
||||
}
|
||||
|
||||
{IF HAS_ANIMATIONS:
|
||||
Animation:
|
||||
Durations: {count ANIMATION_DURATIONS} tokens
|
||||
Easing: {count EASING_FUNCTIONS} functions
|
||||
}
|
||||
|
||||
⚡ Progressive Loading Levels:
|
||||
- Level 0: Design Tokens (~5K tokens)
|
||||
- Level 1: Tokens + Layout Templates (~12K tokens)
|
||||
- Level 2: Complete System (~20K tokens)
|
||||
|
||||
💡 Usage:
|
||||
Load design system context when working with:
|
||||
- UI component implementation
|
||||
- Layout pattern analysis
|
||||
- Design token application
|
||||
- Style consistency validation
|
||||
|
||||
⚠️ IMPORTANT - Project Independence:
|
||||
This is a **project-independent style reference system**:
|
||||
- Only use universal components (component_type: "universal") as reference patterns
|
||||
- Ignore specialized components (component_type: "specialized") - they are project-specific
|
||||
- The extracted design references are REFERENCE VALUES, not fixed constraints
|
||||
- Dynamically adjust colors, spacing, typography, and other tokens based on:
|
||||
- Brand requirements and accessibility needs
|
||||
- Platform-specific conventions and optimizations
|
||||
- Context (light/dark mode, responsive breakpoints)
|
||||
- User feedback and testing results
|
||||
|
||||
See SKILL.md for detailed adjustment guidelines and component filtering instructions.
|
||||
|
||||
🎯 Preview:
|
||||
Open interactive showcase:
|
||||
file://{absolute_path}/.workflow/reference_style/{package_name}/preview.html
|
||||
|
||||
📋 Next Steps:
|
||||
1. Load appropriate level based on your task context
|
||||
2. Review Primary Design References section for key design tokens
|
||||
3. Apply design tokens with dynamic adjustments as needed
|
||||
4. Reference layout-templates.json for component structures
|
||||
5. Use Design Adaptation Strategies when modifying tokens
|
||||
💡 Usage: /memory:load-skill-memory style-{package_name} "your task description"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Error Handling

### Common Errors

| Error | Cause | Resolution |
|-------|-------|------------|
| Package not found | Invalid package name or package doesn't exist | Run codify-style first to create package |
| SKILL already exists | SKILL.md already generated | Use --regenerate to force regeneration |
| Missing layout-templates.json | Incomplete package | Verify package integrity, re-run codify-style |
| Invalid JSON format | Corrupted package files | Regenerate package with codify-style |

Variables: `{package_name}`, `{component_count}`, `{universal_count}`, `{specialized_count}`, `{animations_note}` (set to ", animations" when animation tokens exist)
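A hedged pre-flight sketch mapping to the rows of the table above; paths follow the documented package layout, and the `--regenerate` handling is simplified:

```bash
pkg_dir=".workflow/reference_style/${package_name}"
skill=".claude/skills/style-${package_name}/SKILL.md"
[ -d "$pkg_dir" ] || { echo "Package not found: run /workflow:ui-design:codify-style first"; exit 1; }
[ -f "$pkg_dir/layout-templates.json" ] || { echo "Missing layout-templates.json: re-run codify-style"; exit 1; }
jq empty "$pkg_dir"/*.json 2>/dev/null || { echo "Invalid JSON format: regenerate the package"; exit 1; }
if [ -f "$skill" ] && [ "${regenerate:-false}" != "true" ]; then
  echo "SKILL already exists: use --regenerate to overwrite"; exit 0
fi
```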

---

@@ -727,144 +355,42 @@ Open interactive showcase:

1. **Check Before Generate**: Verify package exists before attempting SKILL generation
2. **Respect Existing SKILL**: Don't overwrite unless --regenerate flag provided
3. **Extract Primary References**: Always extract and display key design values (colors, typography, spacing, border radius, shadows, animations)
4. **Include Adjustment Guidance**: Provide clear guidelines on when and how to dynamically adjust design tokens
5. **Progressive Loading**: Always include all 3 levels (0-2) with clear token estimates
6. **Intelligent Description**: Extract component count and key features from metadata
3. **Load Templates via cat**: Use `cat ~/.claude/workflows/cli-templates/memory/style-skill-memory/{template}` to load templates
4. **Variable Substitution**: Replace all `{variable}` placeholders with actual values
5. **Technical Feature Detection**: Analyze tokens for modern features (oklch, calc, dark mode) and generate appropriate Prerequisites section
6. **Dynamic Content Generation**: Generate sections based on DESIGN_ANALYSIS characteristics
7. **Unified Spacing Scale**: Use actual token values as primary scale reference, avoid contradictory pattern descriptions
8. **Direct Iteration**: Iterate data structures (DESIGN_TOKENS_DATA, etc.) for token values
9. **Annotate Special Tokens**: Add comments for DEFAULT tokens and calc() expressions
10. **Embed jq Commands**: Include bash/jq commands in SKILL.md for dynamic loading
11. **Progressive Loading**: Include all 3 levels (0-2) with specific jq commands
12. **Complete Examples**: Include end-to-end implementation examples (React components)
13. **Intelligent Description**: Extract component count and key features from metadata
14. **Emphasize Flexibility**: Strongly warn against rigid copying - values are references for creative adaptation
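To illustrate items 10-11, the loading commands embedded in SKILL.md might look like the sketch below; the selected fields are assumptions, not the actual template content:

```bash
PKG=".workflow/reference_style/{package_name}"
# Level 0 - design tokens only (~5K tokens)
jq '{colors, typography, spacing, border_radius, shadows}' "$PKG/design-tokens.json"
# Level 1 - add universal layout templates (~12K tokens)
jq '[.components[]? | select(.component_type == "universal")]' "$PKG/layout-templates.json"
# Level 2 - add animation tokens when present (~20K tokens)
[ -f "$PKG/animation-tokens.json" ] && jq '{duration, easing}' "$PKG/animation-tokens.json"
```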

### SKILL Description Format
### Template Files Location

**Template**:
```
{package_name} project-independent design system with {universal_count} universal layout templates and interactive preview (located at .workflow/reference_style/{package_name}). Load when working with reusable UI components, design tokens, layout patterns, or implementing visual consistency. Excludes {specialized_count} project-specific components.
```

**Required Elements**:
- Package name
- Universal layout template count (emphasize reusability)
- Project independence statement
- Specialized component exclusion notice
- Location (full path)
- Trigger keywords (reusable UI components, design tokens, layout patterns, visual consistency)
- Action verbs (working with, analyzing, implementing)

### Primary Design References Extraction

**Required Data Extraction** (from design-tokens.json):
- Colors: Primary, secondary, accent colors (top 3-5)
- Typography: Font families for headings and body text
- Spacing Scale: Base spacing values (xs, sm, md, lg, xl)
- Border Radius: All radius tokens
- Shadows: Shadow definitions (top 3 elevation levels)

**Component Classification Extraction** (from layout-templates.json):
- Universal Count: Number of components with `component_type: "universal"`
- Specialized Count: Number of components with `component_type: "specialized"`
- Universal Component Names: List of universal component names (first 10)

**Optional Data Extraction** (from animation-tokens.json if available):
- Animation Durations: All duration tokens
- Easing Functions: Top 3 easing functions

**Extraction Format**:
Use `jq` to extract tokens from JSON files. Each token should include key and value.
For component classification, filter by `component_type` field.
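A sketch of that extraction, assuming flat token objects and a top-level `components` array; variable names follow the conventions used elsewhere in this document (PRIMARY_COLORS, UNIVERSAL_COUNT, ...):

```bash
PKG=".workflow/reference_style/${package_name}"
# Top 5 colors and all typography entries, as "key: value" lines.
PRIMARY_COLORS=$(jq -r '.colors | to_entries | .[:5][] | "\(.key): \(.value)"' "$PKG/design-tokens.json")
TYPOGRAPHY_FONTS=$(jq -r '.typography | to_entries[] | "\(.key): \(.value)"' "$PKG/design-tokens.json")
# Component classification counts by component_type.
UNIVERSAL_COUNT=$(jq '[.components[]? | select(.component_type == "universal")] | length' "$PKG/layout-templates.json")
SPECIALIZED_COUNT=$(jq '[.components[]? | select(.component_type == "specialized")] | length' "$PKG/layout-templates.json")
echo "Universal: ${UNIVERSAL_COUNT}, Specialized: ${SPECIALIZED_COUNT}"
```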

### Dynamic Adjustment Guidelines

**Include in SKILL.md**:
1. **Usage Guidelines per Category**: Specific guidance for each token category
2. **Adjustment Strategies**: When to adjust design references
3. **Practical Examples**: Context-specific adaptation scenarios
4. **Best Practices**: How to maintain design system coherence while adjusting

### Progressive Loading Structure

**Level 0** (~5K tokens):
- design-tokens.json

**Level 1** (~12K tokens):
- Level 0 files
- layout-templates.json

**Level 2** (~20K tokens):
- Level 1 files
- animation-tokens.json (if exists)
- preview.html
- preview.css
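A small helper sketch showing how each level maps onto those files; file names follow the package structure above, and the function itself is illustrative:

```bash
load_level() {
  local pkg=".workflow/reference_style/${package_name}"
  case "$1" in
    0) cat "$pkg/design-tokens.json" ;;
    1) cat "$pkg/design-tokens.json" "$pkg/layout-templates.json" ;;
    2) cat "$pkg/design-tokens.json" "$pkg/layout-templates.json" \
           "$pkg/animation-tokens.json" "$pkg/preview.html" "$pkg/preview.css" 2>/dev/null ;;
    *) echo "usage: load_level 0|1|2" >&2; return 1 ;;
  esac
}
load_level 1   # tokens + layout templates (~12K tokens)
```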

---

## Benefits

- **Project Independence**: Clear separation between universal (reusable) and specialized (project-specific) components
- **Component Filtering**: Automatic classification helps identify which patterns are truly reusable
- **Fast Context Loading**: Progressive levels for efficient token usage
- **Primary Design References**: Extracted key design values (colors, typography, spacing, etc.) displayed prominently
- **Dynamic Adjustment Guidance**: Clear instructions on when and how to adjust design tokens
- **Intelligent Triggering**: Keywords optimize SKILL activation
- **Complete Reference**: All package files accessible through SKILL
- **Easy Regeneration**: Simple --regenerate flag for updates
- **Clear Structure**: Organized levels by use case with component type filtering
- **Practical Usage Guidelines**: Context-specific adjustment strategies and component selection criteria

---

## Architecture
|
||||
|
||||
```
|
||||
style-skill-memory
|
||||
├─ Phase 1: Validate
|
||||
│ ├─ Parse package name from argument or auto-detect
|
||||
│ ├─ Check package exists in .workflow/reference_style/
|
||||
│ └─ Check if SKILL already exists (skip if exists and no --regenerate)
|
||||
│
|
||||
├─ Phase 2: Read Package Data & Extract Primary References
|
||||
│ ├─ Count components from layout-templates.json
|
||||
│ ├─ Extract component types list
|
||||
│ ├─ Extract primary colors from design-tokens.json (top 3-5)
|
||||
│ ├─ Extract typography (font families)
|
||||
│ ├─ Extract spacing scale (base values)
|
||||
│ ├─ Extract border radius tokens
|
||||
│ ├─ Extract shadow definitions (top 3)
|
||||
│ ├─ Extract animation tokens (if available)
|
||||
│ └─ Count total files in package
|
||||
│
|
||||
└─ Phase 3: Generate SKILL.md
|
||||
├─ Create SKILL directory
|
||||
├─ Generate intelligent description with keywords
|
||||
├─ Write SKILL.md with complete structure:
|
||||
│ ├─ Package Overview
|
||||
│ ├─ Primary Design References
|
||||
│ │ ├─ Colors with usage guidelines
|
||||
│ │ ├─ Typography with usage guidelines
|
||||
│ │ ├─ Spacing with usage guidelines
|
||||
│ │ ├─ Border Radius with usage guidelines
|
||||
│ │ ├─ Shadows with usage guidelines
|
||||
│ │ └─ Animation & Timing (if available)
|
||||
│ ├─ Design Adaptation Strategies
|
||||
│ │ ├─ When to adjust design references
|
||||
│ │ └─ How to apply adjustments
|
||||
│ ├─ Progressive Loading (3 levels)
|
||||
│ ├─ Interactive Preview
|
||||
│ ├─ Usage Guidelines
|
||||
│ ├─ Package Structure
|
||||
│ ├─ Regeneration
|
||||
│ └─ Related Commands
|
||||
├─ Verify SKILL.md created successfully
|
||||
└─ Display completion message with extracted design references
|
||||
Phase 1: Validate
|
||||
├─ Parse package_name
|
||||
├─ Check PACKAGE_DIR exists
|
||||
└─ Check SKILL_DIR exists (skip if exists and no --regenerate)
|
||||
|
||||
Data Flow:
|
||||
design-tokens.json → jq extraction → PRIMARY_COLORS, TYPOGRAPHY_FONTS,
|
||||
SPACING_SCALE, BORDER_RADIUS, SHADOWS
|
||||
animation-tokens.json → jq extraction → ANIMATION_DURATIONS, EASING_FUNCTIONS
|
||||
layout-templates.json → jq extraction → COMPONENT_COUNT, UNIVERSAL_COUNT,
|
||||
SPECIALIZED_COUNT, UNIVERSAL_COMPONENTS
|
||||
→ component_type filtering → Universal vs Specialized classification
|
||||
Phase 2: Read & Analyze
|
||||
├─ Read design-tokens.json → DESIGN_TOKENS_DATA
|
||||
├─ Read layout-templates.json → LAYOUT_TEMPLATES_DATA
|
||||
├─ Read animation-tokens.json → ANIMATION_TOKENS_DATA (if exists)
|
||||
├─ Extract Metadata → COMPONENT_COUNT, UNIVERSAL_COUNT, etc.
|
||||
└─ Analyze Design System → DESIGN_ANALYSIS (characteristics)
|
||||
|
||||
Extracted data → SKILL.md generation → Primary Design References section
|
||||
→ Component Classification section
|
||||
→ Dynamic Adjustment Guidelines
|
||||
→ Project Independence warnings
|
||||
→ Completion message display
|
||||
Phase 3: Generate
├─ Create SKILL directory
├─ Generate intelligent description
├─ Load SKILL.md template (cat command)
├─ Replace variables and generate dynamic content
├─ Write SKILL.md
├─ Verify creation
├─ Load completion message template (cat command)
└─ Display completion message
```
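A hedged sketch of the Phase 3 steps above; the template filename and the substituted variables are assumptions (the real template lives under `~/.claude/workflows/cli-templates/memory/style-skill-memory/`):

```bash
skill_dir=".claude/skills/style-${package_name}"
template=~/.claude/workflows/cli-templates/memory/style-skill-memory/SKILL.md   # hypothetical filename
mkdir -p "$skill_dir"
# Replace {variable} placeholders with the values extracted in Phase 2.
sed -e "s/{package_name}/${package_name}/g" \
    -e "s/{universal_count}/${universal_count}/g" \
    -e "s/{specialized_count}/${specialized_count}/g" \
    "$template" > "$skill_dir/SKILL.md"
test -f "$skill_dir/SKILL.md" && echo "success" || echo "failed"
```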

@@ -128,8 +128,8 @@ Generate a complete tech stack SKILL package with Exa research.
1. **Extract Tech Stack Information**:

IF MODE == 'session':
- Read `.workflow/{SESSION_ID}/workflow-session.json`
- Read `.workflow/{SESSION_ID}/.process/context-package.json`
- Read `.workflow/active/{session_id}/workflow-session.json`
- Read `.workflow/active/{session_id}/.process/context-package.json`
- Extract tech_stack: {language, frameworks, libraries}
- Build tech stack name: \"{language}-{framework1}-{framework2}\"
- Example: \"typescript-react-nextjs\"
|
||||
|
||||
@@ -199,10 +199,10 @@ This is the simplified data structure loaded to provide context for task executi
|
||||
"session": "WFS-user-auth",
|
||||
"phase": "IMPLEMENT",
|
||||
"session_context": {
|
||||
"workflow_directory": ".workflow/WFS-user-auth/",
|
||||
"todo_list_location": ".workflow/WFS-user-auth/TODO_LIST.md",
|
||||
"summaries_directory": ".workflow/WFS-user-auth/.summaries/",
|
||||
"task_json_location": ".workflow/WFS-user-auth/.task/"
|
||||
"workflow_directory": ".workflow/active/WFS-user-auth/",
|
||||
"todo_list_location": ".workflow/active/WFS-user-auth/TODO_LIST.md",
|
||||
"summaries_directory": ".workflow/active/WFS-user-auth/.summaries/",
|
||||
"task_json_location": ".workflow/active/WFS-user-auth/.task/"
|
||||
}
|
||||
},
|
||||
"execution": {
|
||||
|
||||
@@ -7,6 +7,14 @@ allowed-tools: Read(*), Write(*), Edit(*), TodoWrite(*), Glob(*), Bash(*)
|
||||
|
||||
# Task Replan Command (/task:replan)
|
||||
|
||||
> **⚠️ DEPRECATION NOTICE**: This command is maintained for backward compatibility. For new workflows, use `/workflow:replan` which provides:
|
||||
> - Session-level replanning with comprehensive artifact updates
|
||||
> - Interactive boundary clarification
|
||||
> - Updates to IMPL_PLAN.md, TODO_LIST.md, and session metadata
|
||||
> - Better integration with workflow sessions
|
||||
>
|
||||
> **Migration**: Replace `/task:replan IMPL-1 "changes"` with `/workflow:replan IMPL-1 "changes"`
|
||||
|
||||
## Overview
|
||||
Replans individual tasks or batch processes multiple tasks with change tracking and backup management.
|
||||
|
||||
@@ -14,9 +22,6 @@ Replans individual tasks or batch processes multiple tasks with change tracking
|
||||
- **Single Task Mode**: Replan one task with specific changes
|
||||
- **Batch Mode**: Process multiple tasks from action-plan verification report
|
||||
|
||||
## Core Principles
|
||||
**Task System:** @~/.claude/workflows/task-core.md
|
||||
|
||||
## Key Features
|
||||
- **Single/Batch Operations**: Single task or multiple tasks from verification report
|
||||
- **Multiple Input Sources**: Text, files, or verification report
|
||||
@@ -274,7 +279,7 @@ Backup saved to .task/backup/IMPL-2-v1.0.json
|
||||
|
||||
### Batch Mode - From Verification Report
|
||||
```bash
|
||||
/task:replan --batch .workflow/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md
|
||||
/task:replan --batch .workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md
|
||||
|
||||
Parsing verification report...
|
||||
Found 4 tasks requiring replanning:
|
||||
|
||||
@@ -32,7 +32,7 @@ Identify inconsistencies, duplications, ambiguities, and underspecified items be
|
||||
IF --session parameter provided:
|
||||
session_id = provided session
|
||||
ELSE:
|
||||
CHECK: .workflow/.active-* marker files
|
||||
CHECK: find .workflow/active/ -name "WFS-*" -type d
|
||||
IF active_session EXISTS:
|
||||
session_id = get_active_session()
|
||||
ELSE:
|
||||
@@ -40,7 +40,7 @@ ELSE:
|
||||
EXIT
|
||||
|
||||
# Derive absolute paths
|
||||
session_dir = .workflow/WFS-{session}
|
||||
session_dir = .workflow/active/WFS-{session}
|
||||
brainstorm_dir = session_dir/.brainstorming
|
||||
task_dir = session_dir/.task
|
||||
|
||||
@@ -333,7 +333,7 @@ Output a Markdown report (no file writes) with the following structure:
|
||||
|
||||
#### TodoWrite-Based Remediation Workflow
|
||||
|
||||
**Report Location**: `.workflow/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md`
|
||||
**Report Location**: `.workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md`
|
||||
|
||||
**Recommended Workflow**:
|
||||
1. **Create TodoWrite Task List**: Extract all findings from report
|
||||
@@ -361,7 +361,7 @@ Priority Order:
|
||||
|
||||
**Save Analysis Report**:
|
||||
```bash
|
||||
report_path = ".workflow/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md"
|
||||
report_path = ".workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md"
|
||||
Write(report_path, full_report_content)
|
||||
```
|
||||
|
||||
@@ -404,12 +404,12 @@ TodoWrite([
|
||||
**File Modification Workflow**:
|
||||
```bash
|
||||
# For task JSON modifications:
|
||||
1. Read(.workflow/WFS-{session}/.task/IMPL-X.Y.json)
|
||||
1. Read(.workflow/active/WFS-{session}/.task/IMPL-X.Y.json)
|
||||
2. Edit() to apply fixes
|
||||
3. Mark todo as completed
|
||||
|
||||
# For IMPL_PLAN modifications:
|
||||
1. Read(.workflow/WFS-{session}/IMPL_PLAN.md)
|
||||
1. Read(.workflow/active/WFS-{session}/IMPL_PLAN.md)
|
||||
2. Edit() to apply strategic changes
|
||||
3. Mark todo as completed
|
||||
```
|
||||
|
||||
@@ -46,10 +46,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
### Phase 1: Session & Framework Detection
|
||||
```bash
|
||||
# Check active session and framework
|
||||
CHECK: .workflow/.active-* marker files
|
||||
CHECK: find .workflow/active/ -name "WFS-*" -type d
|
||||
IF active_session EXISTS:
|
||||
session_id = get_active_session()
|
||||
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
|
||||
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
|
||||
|
||||
CHECK: brainstorm_dir/guidance-specification.md
|
||||
IF EXISTS:
|
||||
@@ -162,7 +162,7 @@ IF update_mode = "incremental":
|
||||
|
||||
### Output Files
|
||||
```
|
||||
.workflow/WFS-[topic]/.brainstorming/
|
||||
.workflow/active/WFS-[topic]/.brainstorming/
|
||||
├── guidance-specification.md # Input: Framework (if exists)
|
||||
└── api-designer/
|
||||
└── analysis.md # ★ OUTPUT: Framework-based analysis
|
||||
@@ -181,7 +181,7 @@ IF update_mode = "incremental":
|
||||
Session detection and selection:
|
||||
```bash
|
||||
# Check for active sessions
|
||||
active_sessions=$(find .workflow -name ".active-*" 2>/dev/null)
|
||||
active_sessions=$(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
|
||||
if [ multiple_sessions ]; then
|
||||
prompt_user_to_select_session()
|
||||
else
|
||||
@@ -280,7 +280,7 @@ TodoWrite tracking for two-step process:
|
||||
|
||||
### Output Location
|
||||
```
|
||||
.workflow/WFS-{topic-slug}/.brainstorming/api-designer/
|
||||
.workflow/active/WFS-{topic-slug}/.brainstorming/api-designer/
|
||||
├── analysis.md # Primary API design analysis
|
||||
├── api-specification.md # Detailed endpoint specifications (OpenAPI/Swagger)
|
||||
├── data-contracts.md # Request/response schemas and validation rules
|
||||
@@ -531,7 +531,7 @@ Upon completion, update `workflow-session.json`:
|
||||
"api_designer": {
|
||||
"status": "completed",
|
||||
"completed_at": "timestamp",
|
||||
"output_directory": ".workflow/WFS-{topic}/.brainstorming/api-designer/",
|
||||
"output_directory": ".workflow/active/WFS-{topic}/.brainstorming/api-designer/",
|
||||
"key_insights": ["endpoint_design", "versioning_strategy", "data_contracts"]
|
||||
}
|
||||
}
|
||||
|
||||
@@ -10,7 +10,7 @@ allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*)
|
||||
Six-phase workflow: **Automatic project context collection** → Extract topic challenges → Select roles → Generate task-specific questions → Detect conflicts → Generate confirmed guidance (declarative statements only).
|
||||
|
||||
**Input**: `"GOAL: [objective] SCOPE: [boundaries] CONTEXT: [background]" [--count N]`
|
||||
**Output**: `.workflow/WFS-{topic}/.brainstorming/guidance-specification.md` (CONFIRMED/SELECTED format)
|
||||
**Output**: `.workflow/active/WFS-{topic}/.brainstorming/guidance-specification.md` (CONFIRMED/SELECTED format)
|
||||
**Core Principle**: Questions dynamically generated from project context + topic keywords/challenges, NOT from generic templates
|
||||
|
||||
**Parameters**:
|
||||
@@ -32,7 +32,7 @@ Six-phase workflow: **Automatic project context collection** → Extract topic c
|
||||
**Standalone Mode**:
|
||||
```json
|
||||
[
|
||||
{"content": "Initialize session (.workflow/.active-* check, parse --count parameter)", "status": "pending", "activeForm": "Initializing"},
|
||||
{"content": "Initialize session (.workflow/active/ session check, parse --count parameter)", "status": "pending", "activeForm": "Initializing"},
|
||||
{"content": "Phase 0: Automatic project context collection (call context-gather)", "status": "pending", "activeForm": "Phase 0 context collection"},
|
||||
{"content": "Phase 1: Extract challenges, output 2-4 task-specific questions, wait for user input", "status": "pending", "activeForm": "Phase 1 topic analysis"},
|
||||
{"content": "Phase 2: Recommend count+2 roles, output role selection, wait for user input", "status": "pending", "activeForm": "Phase 2 role selection"},
|
||||
@@ -133,7 +133,7 @@ b) {role-name} ({中文名})
|
||||
## Execution Phases

### Session Management
- Check `.workflow/.active-*` markers first
- Check `.workflow/active/` for existing sessions
- Multiple sessions → Prompt selection | Single → Use it | None → Create `WFS-[topic-slug]`
- Parse `--count N` parameter from user input (default: 3 if not specified)
- Store decisions in `workflow-session.json` including count parameter
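For reference, the detection rule above could be sketched as follows; the session-selection prompt is omitted and the slug derivation from `$topic` is an assumption:

```bash
sessions=$(find .workflow/active/ -maxdepth 1 -type d -name 'WFS-*' 2>/dev/null)
count=$(printf '%s\n' "$sessions" | grep -c 'WFS-' || true)
if [ "$count" -gt 1 ]; then
  echo "Multiple sessions found - prompt user to select:"; printf '%s\n' "$sessions"
elif [ "$count" -eq 1 ]; then
  session_id=$(basename "$sessions")
else
  # No session yet: create WFS-[topic-slug] from the user-provided topic.
  session_id="WFS-$(printf '%s' "$topic" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')"
  mkdir -p ".workflow/active/${session_id}"
fi
```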
|
||||
@@ -145,7 +145,7 @@ b) {role-name} ({中文名})
|
||||
**Detection Mechanism** (execute first):
|
||||
```javascript
|
||||
// Check if context-package already exists
|
||||
const contextPackagePath = `.workflow/WFS-{session-id}/.process/context-package.json`;
|
||||
const contextPackagePath = `.workflow/active/WFS-{session-id}/.process/context-package.json`;
|
||||
|
||||
if (file_exists(contextPackagePath)) {
|
||||
// Validate package
|
||||
@@ -229,7 +229,7 @@ Report completion with statistics.
|
||||
|
||||
**Steps**:
|
||||
1. **Load Phase 0 context** (if available):
|
||||
- Read `.workflow/WFS-{session-id}/.process/context-package.json`
|
||||
- Read `.workflow/active/WFS-{session-id}/.process/context-package.json`
|
||||
- Extract: tech_stack, existing modules, conflict_risk, relevant files
|
||||
|
||||
2. **Deep topic analysis** (context-aware):
|
||||
@@ -449,7 +449,7 @@ FOR each selected role:
|
||||
|
||||
## Output Document Template
|
||||
|
||||
**File**: `.workflow/WFS-{topic}/.brainstorming/guidance-specification.md`
|
||||
**File**: `.workflow/active/WFS-{topic}/.brainstorming/guidance-specification.md`
|
||||
|
||||
```markdown
|
||||
# [Project] - Confirmed Guidance Specification
|
||||
@@ -596,8 +596,7 @@ ELSE:
|
||||
## File Structure
|
||||
|
||||
```
|
||||
.workflow/WFS-[topic]/
|
||||
├── .active-brainstorming
|
||||
.workflow/active/WFS-[topic]/
|
||||
├── workflow-session.json # Session metadata ONLY
|
||||
└── .brainstorming/
|
||||
└── guidance-specification.md # Full guidance content
|
||||
|
||||
@@ -85,7 +85,7 @@ This workflow runs **fully autonomously** once triggered. Phase 1 (artifacts) ha
|
||||
**Validation**:
|
||||
- guidance-specification.md created with confirmed decisions
|
||||
- workflow-session.json contains selected_roles[] (metadata only, no content duplication)
|
||||
- Session directory `.workflow/WFS-{topic}/.brainstorming/` exists
|
||||
- Session directory `.workflow/active/WFS-{topic}/.brainstorming/` exists
|
||||
|
||||
**TodoWrite Update (Phase 1 SlashCommand invoked - tasks attached)**:
|
||||
```json
|
||||
@@ -132,13 +132,13 @@ Execute {role-name} analysis for existing topic framework
|
||||
|
||||
## Context Loading
|
||||
ASSIGNED_ROLE: {role-name}
|
||||
OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/{role}/
|
||||
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/{role}/
|
||||
TOPIC: {user-provided-topic}
|
||||
|
||||
## Flow Control Steps
|
||||
1. **load_topic_framework**
|
||||
- Action: Load structured topic discussion framework
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Output: topic_framework_content
|
||||
|
||||
2. **load_role_template**
|
||||
@@ -148,7 +148,7 @@ TOPIC: {user-provided-topic}
|
||||
|
||||
3. **load_session_metadata**
|
||||
- Action: Load session metadata and original user intent
|
||||
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
|
||||
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
|
||||
- Output: session_context (contains original user prompt as PRIMARY reference)
|
||||
|
||||
4. **load_style_skill** (ONLY for ui-designer role when style_skill_package exists)
|
||||
@@ -194,7 +194,7 @@ TOPIC: {user-provided-topic}
|
||||
- guidance-specification.md path
|
||||
|
||||
**Validation**:
|
||||
- Each role creates `.workflow/WFS-{topic}/.brainstorming/{role}/analysis.md` (primary file)
|
||||
- Each role creates `.workflow/active/WFS-{topic}/.brainstorming/{role}/analysis.md` (primary file)
|
||||
- If content is large (>800 lines), may split to `analysis-1.md`, `analysis-2.md` (max 3 files total)
|
||||
- **File naming pattern**: ALL files MUST start with `analysis` prefix (use `analysis*.md` for globbing)
|
||||
- **FORBIDDEN naming**: No `recommendations.md`, `recommendations-*.md`, or any non-`analysis` prefixed files
|
||||
@@ -245,7 +245,7 @@ TOPIC: {user-provided-topic}
|
||||
**Input**: `sessionId` from Phase 1
|
||||
|
||||
**Validation**:
|
||||
- `.workflow/WFS-{topic}/.brainstorming/synthesis-specification.md` exists
|
||||
- `.workflow/active/WFS-{topic}/.brainstorming/synthesis-specification.md` exists
|
||||
- Synthesis references all role analyses
|
||||
|
||||
**TodoWrite Update (Phase 3 SlashCommand invoked - tasks attached)**:
|
||||
@@ -280,7 +280,7 @@ TOPIC: {user-provided-topic}
|
||||
```
|
||||
Brainstorming complete for session: {sessionId}
|
||||
Roles analyzed: {count}
|
||||
Synthesis: .workflow/WFS-{topic}/.brainstorming/synthesis-specification.md
|
||||
Synthesis: .workflow/active/WFS-{topic}/.brainstorming/synthesis-specification.md
|
||||
|
||||
✅ Next Steps:
|
||||
1. /workflow:concept-clarify --session {sessionId} # Optional refinement
|
||||
@@ -392,31 +392,31 @@ CONTEXT_VARS:
|
||||
|
||||
## Session Management
|
||||
|
||||
**⚡ FIRST ACTION**: Check for `.workflow/.active-*` markers before Phase 1
|
||||
**⚡ FIRST ACTION**: Check `.workflow/active/` for existing sessions before Phase 1
|
||||
|
||||
**Multiple Sessions Support**:
|
||||
- Different Claude instances can have different active brainstorming sessions
|
||||
- If multiple active sessions found, prompt user to select
|
||||
- If single active session found, use it
|
||||
- If no active session exists, create `WFS-[topic-slug]`
|
||||
- Different Claude instances can have different brainstorming sessions
|
||||
- If multiple sessions found, prompt user to select
|
||||
- If single session found, use it
|
||||
- If no session exists, create `WFS-[topic-slug]`
|
||||
|
||||
**Session Continuity**:
|
||||
- MUST use selected active session for all phases
|
||||
- MUST use selected session for all phases
|
||||
- Each role's context stored in session directory
|
||||
- Session isolation: Each session maintains independent state
|
||||
|
||||
## Output Structure
|
||||
|
||||
**Phase 1 Output**:
|
||||
- `.workflow/WFS-{topic}/.brainstorming/guidance-specification.md` (framework content)
|
||||
- `.workflow/WFS-{topic}/workflow-session.json` (metadata: selected_roles[], topic, timestamps, style_skill_package)
|
||||
- `.workflow/active/WFS-{topic}/.brainstorming/guidance-specification.md` (framework content)
|
||||
- `.workflow/active/WFS-{topic}/workflow-session.json` (metadata: selected_roles[], topic, timestamps, style_skill_package)
|
||||
|
||||
**Phase 2 Output**:
|
||||
- `.workflow/WFS-{topic}/.brainstorming/{role}/analysis.md` (one per role)
|
||||
- `.workflow/active/WFS-{topic}/.brainstorming/{role}/analysis.md` (one per role)
|
||||
- `.superdesign/design_iterations/` (ui-designer artifacts, if --style-skill provided)
|
||||
|
||||
**Phase 3 Output**:
|
||||
- `.workflow/WFS-{topic}/.brainstorming/synthesis-specification.md` (integrated analysis)
|
||||
- `.workflow/active/WFS-{topic}/.brainstorming/synthesis-specification.md` (integrated analysis)
|
||||
|
||||
**⚠️ Storage Separation**: Guidance content in .md files, metadata in .json (no duplication)
|
||||
**⚠️ Style References**: When --style-skill provided, workflow-session.json stores style_skill_package name, ui-designer loads from `.claude/skills/style-{package-name}/`
|
||||
@@ -446,8 +446,7 @@ CONTEXT_VARS:
|
||||
|
||||
**File Structure**:
|
||||
```
|
||||
.workflow/WFS-[topic]/
|
||||
├── .active-brainstorming
|
||||
.workflow/active/WFS-[topic]/
|
||||
├── workflow-session.json # Session metadata ONLY
|
||||
└── .brainstorming/
|
||||
├── guidance-specification.md # Framework (Phase 1)
|
||||
|
||||
@@ -47,10 +47,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
### Phase 1: Session & Framework Detection
|
||||
```bash
|
||||
# Check active session and framework
|
||||
CHECK: .workflow/.active-* marker files
|
||||
CHECK: find .workflow/active/ -name "WFS-*" -type d
|
||||
IF active_session EXISTS:
|
||||
session_id = get_active_session()
|
||||
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
|
||||
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
|
||||
|
||||
CHECK: brainstorm_dir/guidance-specification.md
|
||||
IF EXISTS:
|
||||
@@ -87,13 +87,13 @@ Execute data-architect analysis for existing topic framework
|
||||
|
||||
## Context Loading
|
||||
ASSIGNED_ROLE: data-architect
|
||||
OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/data-architect/
|
||||
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/data-architect/
|
||||
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
|
||||
## Flow Control Steps
|
||||
1. **load_topic_framework**
|
||||
- Action: Load structured topic discussion framework
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Output: topic_framework_content
|
||||
|
||||
2. **load_role_template**
|
||||
@@ -103,7 +103,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
|
||||
3. **load_session_metadata**
|
||||
- Action: Load session metadata and existing context
|
||||
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
|
||||
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
|
||||
- Output: session_context
|
||||
|
||||
## Analysis Requirements
|
||||
@@ -163,7 +163,7 @@ TodoWrite({
|
||||
|
||||
### Framework-Based Analysis
|
||||
```
|
||||
.workflow/WFS-{session}/.brainstorming/data-architect/
|
||||
.workflow/active/WFS-{session}/.brainstorming/data-architect/
|
||||
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
|
||||
```
|
||||
|
||||
@@ -208,7 +208,7 @@ TodoWrite({
|
||||
"data_architect": {
|
||||
"status": "completed",
|
||||
"framework_addressed": true,
|
||||
"output_location": ".workflow/WFS-{session}/.brainstorming/data-architect/analysis.md",
|
||||
"output_location": ".workflow/active/WFS-{session}/.brainstorming/data-architect/analysis.md",
|
||||
"framework_reference": "@../guidance-specification.md"
|
||||
}
|
||||
}
|
||||
|
||||
@@ -27,10 +27,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
### Phase 1: Session & Framework Detection
|
||||
```bash
|
||||
# Check active session and framework
|
||||
CHECK: .workflow/.active-* marker files
|
||||
CHECK: find .workflow/active/ -name "WFS-*" -type d
|
||||
IF active_session EXISTS:
|
||||
session_id = get_active_session()
|
||||
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
|
||||
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
|
||||
|
||||
CHECK: brainstorm_dir/guidance-specification.md
|
||||
IF EXISTS:
|
||||
@@ -67,13 +67,13 @@ Execute product-manager analysis for existing topic framework
|
||||
|
||||
## Context Loading
|
||||
ASSIGNED_ROLE: product-manager
|
||||
OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/product-manager/
|
||||
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/product-manager/
|
||||
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
|
||||
## Flow Control Steps
|
||||
1. **load_topic_framework**
|
||||
- Action: Load structured topic discussion framework
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Output: topic_framework_content
|
||||
|
||||
2. **load_role_template**
|
||||
@@ -83,7 +83,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
|
||||
3. **load_session_metadata**
|
||||
- Action: Load session metadata and existing context
|
||||
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
|
||||
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
|
||||
- Output: session_context
|
||||
|
||||
## Analysis Requirements
|
||||
@@ -143,7 +143,7 @@ TodoWrite({
|
||||
|
||||
### Framework-Based Analysis
|
||||
```
|
||||
.workflow/WFS-{session}/.brainstorming/product-manager/
|
||||
.workflow/active/WFS-{session}/.brainstorming/product-manager/
|
||||
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
|
||||
```
|
||||
|
||||
@@ -188,7 +188,7 @@ TodoWrite({
|
||||
"product_manager": {
|
||||
"status": "completed",
|
||||
"framework_addressed": true,
|
||||
"output_location": ".workflow/WFS-{session}/.brainstorming/product-manager/analysis.md",
|
||||
"output_location": ".workflow/active/WFS-{session}/.brainstorming/product-manager/analysis.md",
|
||||
"framework_reference": "@../guidance-specification.md"
|
||||
}
|
||||
}
|
||||
|
||||
@@ -27,10 +27,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
### Phase 1: Session & Framework Detection
|
||||
```bash
|
||||
# Check active session and framework
|
||||
CHECK: .workflow/.active-* marker files
|
||||
CHECK: find .workflow/active/ -name "WFS-*" -type d
|
||||
IF active_session EXISTS:
|
||||
session_id = get_active_session()
|
||||
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
|
||||
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
|
||||
|
||||
CHECK: brainstorm_dir/guidance-specification.md
|
||||
IF EXISTS:
|
||||
@@ -67,13 +67,13 @@ Execute product-owner analysis for existing topic framework
|
||||
|
||||
## Context Loading
|
||||
ASSIGNED_ROLE: product-owner
|
||||
OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/product-owner/
|
||||
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/product-owner/
|
||||
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
|
||||
## Flow Control Steps
|
||||
1. **load_topic_framework**
|
||||
- Action: Load structured topic discussion framework
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Output: topic_framework_content
|
||||
|
||||
2. **load_role_template**
|
||||
@@ -83,7 +83,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
|
||||
3. **load_session_metadata**
|
||||
- Action: Load session metadata and existing context
|
||||
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
|
||||
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
|
||||
- Output: session_context
|
||||
|
||||
## Analysis Requirements
|
||||
@@ -143,7 +143,7 @@ TodoWrite({
|
||||
|
||||
### Framework-Based Analysis
|
||||
```
|
||||
.workflow/WFS-{session}/.brainstorming/product-owner/
|
||||
.workflow/active/WFS-{session}/.brainstorming/product-owner/
|
||||
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
|
||||
```
|
||||
|
||||
@@ -188,7 +188,7 @@ TodoWrite({
|
||||
"product_owner": {
|
||||
"status": "completed",
|
||||
"framework_addressed": true,
|
||||
"output_location": ".workflow/WFS-{session}/.brainstorming/product-owner/analysis.md",
|
||||
"output_location": ".workflow/active/WFS-{session}/.brainstorming/product-owner/analysis.md",
|
||||
"framework_reference": "@../guidance-specification.md"
|
||||
}
|
||||
}
|
||||
|
||||
@@ -27,10 +27,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
### Phase 1: Session & Framework Detection
|
||||
```bash
|
||||
# Check active session and framework
|
||||
CHECK: .workflow/.active-* marker files
|
||||
CHECK: find .workflow/active/ -name "WFS-*" -type d
|
||||
IF active_session EXISTS:
|
||||
session_id = get_active_session()
|
||||
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
|
||||
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
|
||||
|
||||
CHECK: brainstorm_dir/guidance-specification.md
|
||||
IF EXISTS:
|
||||
@@ -67,13 +67,13 @@ Execute scrum-master analysis for existing topic framework
|
||||
|
||||
## Context Loading
|
||||
ASSIGNED_ROLE: scrum-master
|
||||
OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/scrum-master/
|
||||
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/scrum-master/
|
||||
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
|
||||
## Flow Control Steps
|
||||
1. **load_topic_framework**
|
||||
- Action: Load structured topic discussion framework
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Output: topic_framework_content
|
||||
|
||||
2. **load_role_template**
|
||||
@@ -83,7 +83,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
|
||||
3. **load_session_metadata**
|
||||
- Action: Load session metadata and existing context
|
||||
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
|
||||
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
|
||||
- Output: session_context
|
||||
|
||||
## Analysis Requirements
|
||||
@@ -143,7 +143,7 @@ TodoWrite({
|
||||
|
||||
### Framework-Based Analysis
|
||||
```
|
||||
.workflow/WFS-{session}/.brainstorming/scrum-master/
|
||||
.workflow/active/WFS-{session}/.brainstorming/scrum-master/
|
||||
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
|
||||
```
|
||||
|
||||
@@ -188,7 +188,7 @@ TodoWrite({
|
||||
"scrum_master": {
|
||||
"status": "completed",
|
||||
"framework_addressed": true,
|
||||
"output_location": ".workflow/WFS-{session}/.brainstorming/scrum-master/analysis.md",
|
||||
"output_location": ".workflow/active/WFS-{session}/.brainstorming/scrum-master/analysis.md",
|
||||
"framework_reference": "@../guidance-specification.md"
|
||||
}
|
||||
}
|
||||
|
||||
@@ -27,10 +27,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
### Phase 1: Session & Framework Detection
|
||||
```bash
|
||||
# Check active session and framework
|
||||
CHECK: .workflow/.active-* marker files
|
||||
CHECK: find .workflow/active/ -name "WFS-*" -type d
|
||||
IF active_session EXISTS:
|
||||
session_id = get_active_session()
|
||||
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
|
||||
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
|
||||
|
||||
CHECK: brainstorm_dir/guidance-specification.md
|
||||
IF EXISTS:
|
||||
@@ -67,13 +67,13 @@ Execute subject-matter-expert analysis for existing topic framework
|
||||
|
||||
## Context Loading
|
||||
ASSIGNED_ROLE: subject-matter-expert
|
||||
OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/subject-matter-expert/
|
||||
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/subject-matter-expert/
|
||||
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
|
||||
## Flow Control Steps
|
||||
1. **load_topic_framework**
|
||||
- Action: Load structured topic discussion framework
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Output: topic_framework_content
|
||||
|
||||
2. **load_role_template**
|
||||
@@ -83,7 +83,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
|
||||
3. **load_session_metadata**
|
||||
- Action: Load session metadata and existing context
|
||||
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
|
||||
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
|
||||
- Output: session_context
|
||||
|
||||
## Analysis Requirements
|
||||
@@ -143,7 +143,7 @@ TodoWrite({
|
||||
|
||||
### Framework-Based Analysis
|
||||
```
|
||||
.workflow/WFS-{session}/.brainstorming/subject-matter-expert/
|
||||
.workflow/active/WFS-{session}/.brainstorming/subject-matter-expert/
|
||||
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
|
||||
```
|
||||
|
||||
@@ -188,7 +188,7 @@ TodoWrite({
|
||||
"subject_matter_expert": {
|
||||
"status": "completed",
|
||||
"framework_addressed": true,
|
||||
"output_location": ".workflow/WFS-{session}/.brainstorming/subject-matter-expert/analysis.md",
|
||||
"output_location": ".workflow/active/WFS-{session}/.brainstorming/subject-matter-expert/analysis.md",
|
||||
"framework_reference": "@../guidance-specification.md"
|
||||
}
|
||||
}
|
||||
|
||||
@@ -48,7 +48,7 @@ Three-phase workflow to eliminate ambiguities and enhance conceptual depth in ro
|
||||
|
||||
### Phase 1: Discovery & Validation
|
||||
|
||||
1. **Detect Session**: Use `--session` parameter or `.workflow/.active-*` marker
|
||||
1. **Detect Session**: Use `--session` parameter or find `.workflow/active/WFS-*` directories
|
||||
2. **Validate Files**:
|
||||
- `guidance-specification.md` (optional, warn if missing)
|
||||
- `*/analysis*.md` (required, error if empty)
|
||||
@@ -59,7 +59,7 @@ Three-phase workflow to eliminate ambiguities and enhance conceptual depth in ro
|
||||
**Main flow prepares file paths for Agent**:
|
||||
|
||||
1. **Discover Analysis Files**:
|
||||
- Glob(.workflow/WFS-{session}/.brainstorming/*/analysis*.md)
|
||||
- Glob(.workflow/active/WFS-{session}/.brainstorming/*/analysis*.md)
|
||||
- Supports: analysis.md, analysis-1.md, analysis-2.md, analysis-3.md
|
||||
- Validate: At least one file exists (error if empty)
|
||||
|
||||
@@ -69,7 +69,7 @@ Three-phase workflow to eliminate ambiguities and enhance conceptual depth in ro
|
||||
|
||||
3. **Pass to Agent** (Phase 3):
|
||||
- `session_id`
|
||||
- `brainstorm_dir`: .workflow/WFS-{session}/.brainstorming/
|
||||
- `brainstorm_dir`: .workflow/active/WFS-{session}/.brainstorming/
|
||||
- `role_analysis_paths`: ["product-manager/analysis.md", "system-architect/analysis-1.md", ...]
|
||||
- `participating_roles`: ["product-manager", "system-architect", ...]
|
||||
|
||||
@@ -361,7 +361,7 @@ Updated {role2}/analysis.md with Clarifications section + enhanced content
|
||||
|
||||
## Output
|
||||
|
||||
**Location**: `.workflow/WFS-{session}/.brainstorming/[role]/analysis*.md` (in-place updates)
|
||||
**Location**: `.workflow/active/WFS-{session}/.brainstorming/[role]/analysis*.md` (in-place updates)
|
||||
|
||||
**Updated Structure**:
|
||||
```markdown
|
||||
|
||||
@@ -46,10 +46,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
### Phase 1: Session & Framework Detection
|
||||
```bash
|
||||
# Check active session and framework
|
||||
CHECK: .workflow/.active-* marker files
|
||||
CHECK: find .workflow/active/ -name "WFS-*" -type d
|
||||
IF active_session EXISTS:
|
||||
session_id = get_active_session()
|
||||
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
|
||||
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
|
||||
|
||||
CHECK: brainstorm_dir/guidance-specification.md
|
||||
IF EXISTS:
|
||||
@@ -162,7 +162,7 @@ IF update_mode = "incremental":
|
||||
|
||||
### Output Files
|
||||
```
|
||||
.workflow/WFS-[topic]/.brainstorming/
|
||||
.workflow/active/WFS-[topic]/.brainstorming/
|
||||
├── guidance-specification.md # Input: Framework (if exists)
|
||||
└── system-architect/
|
||||
└── analysis.md # ★ OUTPUT: Framework-based analysis
|
||||
@@ -186,8 +186,8 @@ IF update_mode = "incremental":
|
||||
### ⚠️ Session Management - FIRST STEP
|
||||
Session detection and selection:
|
||||
```bash
|
||||
# Check for active sessions
|
||||
active_sessions=$(find .workflow -name ".active-*" 2>/dev/null)
|
||||
# Check for existing sessions
|
||||
existing_sessions=$(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
|
||||
if [ multiple_sessions ]; then
|
||||
prompt_user_to_select_session()
|
||||
else
|
||||
@@ -279,7 +279,7 @@ TodoWrite tracking for two-step process:
|
||||
|
||||
### Output Location
|
||||
```
|
||||
.workflow/WFS-{topic-slug}/.brainstorming/system-architect/
|
||||
.workflow/active/WFS-{topic-slug}/.brainstorming/system-architect/
|
||||
├── analysis.md # Primary architecture analysis
|
||||
├── architecture-design.md # Detailed system design and diagrams
|
||||
├── technology-stack.md # Technology stack recommendations and justifications
|
||||
@@ -340,7 +340,7 @@ Upon completion, update `workflow-session.json`:
|
||||
"system_architect": {
|
||||
"status": "completed",
|
||||
"completed_at": "timestamp",
|
||||
"output_directory": ".workflow/WFS-{topic}/.brainstorming/system-architect/",
|
||||
"output_directory": ".workflow/active/WFS-{topic}/.brainstorming/system-architect/",
|
||||
"key_insights": ["scalability_bottleneck", "architecture_pattern", "technology_recommendation"]
|
||||
}
|
||||
}
|
||||
|
||||
@@ -48,10 +48,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
### Phase 1: Session & Framework Detection
|
||||
```bash
|
||||
# Check active session and framework
|
||||
CHECK: .workflow/.active-* marker files
|
||||
CHECK: find .workflow/active/ -name "WFS-*" -type d
|
||||
IF active_session EXISTS:
|
||||
session_id = get_active_session()
|
||||
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
|
||||
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
|
||||
|
||||
CHECK: brainstorm_dir/guidance-specification.md
|
||||
IF EXISTS:
|
||||
@@ -88,13 +88,13 @@ Execute ui-designer analysis for existing topic framework
|
||||
|
||||
## Context Loading
|
||||
ASSIGNED_ROLE: ui-designer
|
||||
OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/ui-designer/
|
||||
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/ui-designer/
|
||||
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
|
||||
## Flow Control Steps
|
||||
1. **load_topic_framework**
|
||||
- Action: Load structured topic discussion framework
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Output: topic_framework_content
|
||||
|
||||
2. **load_role_template**
|
||||
@@ -104,7 +104,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
|
||||
3. **load_session_metadata**
|
||||
- Action: Load session metadata and existing context
|
||||
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
|
||||
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
|
||||
- Output: session_context
|
||||
|
||||
## Analysis Requirements
|
||||
@@ -164,7 +164,7 @@ TodoWrite({
|
||||
|
||||
### Framework-Based Analysis
|
||||
```
|
||||
.workflow/WFS-{session}/.brainstorming/ui-designer/
|
||||
.workflow/active/WFS-{session}/.brainstorming/ui-designer/
|
||||
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
|
||||
```
|
||||
|
||||
@@ -209,7 +209,7 @@ TodoWrite({
|
||||
"ui_designer": {
|
||||
"status": "completed",
|
||||
"framework_addressed": true,
|
||||
"output_location": ".workflow/WFS-{session}/.brainstorming/ui-designer/analysis.md",
|
||||
"output_location": ".workflow/active/WFS-{session}/.brainstorming/ui-designer/analysis.md",
|
||||
"framework_reference": "@../guidance-specification.md"
|
||||
}
|
||||
}
|
||||
|
||||
@@ -48,10 +48,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
### Phase 1: Session & Framework Detection
|
||||
```bash
|
||||
# Check active session and framework
|
||||
CHECK: .workflow/.active-* marker files
|
||||
CHECK: find .workflow/active/ -name "WFS-*" -type d
|
||||
IF active_session EXISTS:
|
||||
session_id = get_active_session()
|
||||
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
|
||||
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
|
||||
|
||||
CHECK: brainstorm_dir/guidance-specification.md
|
||||
IF EXISTS:
|
||||
@@ -88,13 +88,13 @@ Execute ux-expert analysis for existing topic framework
|
||||
|
||||
## Context Loading
|
||||
ASSIGNED_ROLE: ux-expert
|
||||
OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/ux-expert/
|
||||
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/ux-expert/
|
||||
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
|
||||
## Flow Control Steps
|
||||
1. **load_topic_framework**
|
||||
- Action: Load structured topic discussion framework
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Output: topic_framework_content
|
||||
|
||||
2. **load_role_template**
|
||||
@@ -104,7 +104,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
|
||||
|
||||
3. **load_session_metadata**
|
||||
- Action: Load session metadata and existing context
|
||||
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
|
||||
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
|
||||
- Output: session_context
|
||||
|
||||
## Analysis Requirements
|
||||
@@ -164,7 +164,7 @@ TodoWrite({
|
||||
|
||||
### Framework-Based Analysis
|
||||
```
|
||||
.workflow/WFS-{session}/.brainstorming/ux-expert/
|
||||
.workflow/active/WFS-{session}/.brainstorming/ux-expert/
|
||||
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
|
||||
```
|
||||
|
||||
@@ -209,7 +209,7 @@ TodoWrite({
|
||||
"ux_expert": {
|
||||
"status": "completed",
|
||||
"framework_addressed": true,
|
||||
"output_location": ".workflow/WFS-{session}/.brainstorming/ux-expert/analysis.md",
|
||||
"output_location": ".workflow/active/WFS-{session}/.brainstorming/ux-expert/analysis.md",
|
||||
"framework_reference": "@../guidance-specification.md"
|
||||
}
|
||||
}
|
||||
|
||||
@@ -7,7 +7,7 @@ argument-hint: "[--resume-session=\"session-id\"]"
|
||||
# Workflow Execute Command
|
||||
|
||||
## Overview
|
||||
Orchestrates autonomous workflow execution through systematic task discovery, agent coordination, and progress tracking. **Executes entire workflow without user interruption**, providing complete context to agents and ensuring proper flow control execution with comprehensive TodoWrite tracking.
|
||||
Orchestrates autonomous workflow execution through systematic task discovery, agent coordination, and progress tracking. **Executes entire workflow without user interruption** (except initial session selection if multiple active sessions exist), providing complete context to agents and ensuring proper flow control execution with comprehensive TodoWrite tracking.
|
||||
|
||||
**Resume Mode**: When called with `--resume-session` flag, skips discovery phase and directly enters TodoWrite generation and agent execution for the specified session.
|
||||
|
||||
@@ -22,116 +22,119 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag
|
||||
| **Memory** | All tasks | 1-2 tasks | **90% less** |
|
||||
| **Scalability** | 10-20 tasks | 100+ tasks | **5-10x** |
|
||||
|
||||
**Loading Strategy**:
|
||||
- **TODO_LIST.md**: Read in Phase 2 (task metadata, status, dependencies)
|
||||
- **IMPL_PLAN.md**: Read existence in Phase 2, parse execution strategy when needed
|
||||
- **Task JSONs**: Complete lazy loading (read only during execution)
|
||||
|
||||
## Core Rules
**Complete entire workflow autonomously without user interruption, using TodoWrite for comprehensive progress tracking.**
**Execute all discovered pending tasks sequentially until workflow completion or blocking dependency.**
**Execute all discovered pending tasks until workflow completion or blocking dependency.**
**Auto-complete session when all tasks finished: Call `/workflow:session:complete` upon workflow completion.**

## Core Responsibilities
- **Session Discovery**: Identify and select active workflow sessions
- **Task Dependency Resolution**: Analyze task relationships and execution order
- **Execution Strategy Parsing**: Extract execution model from IMPL_PLAN.md
- **TodoWrite Progress Tracking**: Maintain real-time execution status throughout entire workflow
- **Agent Orchestration**: Coordinate specialized agents with complete context
- **Flow Control Execution**: Execute pre-analysis steps and context accumulation
- **Status Synchronization**: Update task JSON files and workflow state
- **Autonomous Completion**: Continue execution until all tasks complete or reach blocking state
- **Session Auto-Complete**: Call `/workflow:session:complete` when all workflow tasks finished

## Execution Philosophy
- **IMPL_PLAN-driven**: Follow execution strategy from IMPL_PLAN.md Section 4
- **Discovery-first**: Auto-discover existing plans and tasks
- **Status-aware**: Execute only ready tasks with resolved dependencies
- **Context-rich**: Provide complete task JSON and accumulated context to agents
- **Progress tracking**: **Continuous TodoWrite updates throughout entire workflow execution**
- **Flow control**: Sequential step execution with variable passing
- **Autonomous completion**: **Execute all tasks without user interruption until workflow complete**

## Flow Control Execution
**[FLOW_CONTROL]** marker indicates task JSON contains `flow_control.pre_analysis` steps for context preparation.

### Orchestrator Responsibility
- Pass complete task JSON to agent (including `flow_control` block)
- Provide session paths for artifact access
- Monitor agent completion

### Agent Responsibility
- Parse `flow_control.pre_analysis` array from JSON
- Execute steps sequentially with variable substitution
- Accumulate context from artifacts and dependencies
- Follow error handling per `step.on_error`
- Complete implementation using accumulated context

**Orchestrator does NOT execute flow control steps - Agent interprets and executes them from JSON.**
- **Progress tracking**: Continuous TodoWrite updates throughout entire workflow execution
- **Autonomous completion**: Execute all tasks without user interruption until workflow complete

## Execution Lifecycle

### Resume Mode Detection
**Special Flag Processing**: When `--resume-session="session-id"` is provided:
1. **Skip Discovery Phase**: Use provided session ID directly
2. **Load Specified Session**: Read session state from `.workflow/{session-id}/`
3. **Direct TodoWrite Generation**: Skip to Phase 3 (Planning) immediately
4. **Accelerated Execution**: Enter agent coordination without validation delays
### Phase 1: Discovery
**Applies to**: Normal mode only (skipped in resume mode)

### Phase 1: Discovery (Normal Mode Only)
1. **Check Active Sessions**: Find `.workflow/.active-*` markers
**Process**:
1. **Check Active Sessions**: Find sessions in `.workflow/active/` directory
2. **Select Session**: If multiple found, prompt user selection
3. **Load Session Metadata**: Read `workflow-session.json` ONLY (minimal context)
4. **DO NOT read task JSONs yet** - defer until execution phase

**Note**: In resume mode, this phase is completely skipped.
**Resume Mode**: This phase is completely skipped when `--resume-session="session-id"` flag is provided.

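A hedged sketch of this discovery phase; `listDirs` is an illustrative helper, and `AskUserQuestion` follows the format shown later in this document:

```javascript
// Phase 1 sketch (normal mode): discover sessions under .workflow/active/
const sessions = listDirs('.workflow/active/');   // assumed helper, e.g. ["WFS-auth-refactor", "WFS-docs-update"]

if (sessions.length === 0) {
  // No active session → suggest /workflow:plan "project"
} else if (sessions.length > 1) {
  // Multiple active sessions → the only user interaction in the whole run
  AskUserQuestion({
    questions: [{
      question: "Select session:", header: "Session", multiSelect: false,
      options: sessions.map(s => ({ label: s, description: "Active workflow session" }))
    }]
  });
}

// Minimal context: session metadata only, no task JSONs yet
const session = Read(`.workflow/active/${sessionId}/workflow-session.json`);
```
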
### Phase 2: Planning Document Analysis
**Applies to**: Normal mode only (skipped in resume mode)

### Phase 2: Planning Document Analysis (Normal Mode Only)
**Optimized to avoid reading all task JSONs upfront**

1. **Read IMPL_PLAN.md**: Understand overall strategy, task breakdown summary, dependencies
**Process**:
1. **Read IMPL_PLAN.md**: Check existence, understand overall strategy
2. **Read TODO_LIST.md**: Get current task statuses and execution progress
3. **Extract Task Metadata**: Parse task IDs, titles, and dependency relationships from TODO_LIST.md
4. **Build Execution Queue**: Determine ready tasks based on TODO_LIST.md status and dependencies

**Key Optimization**: Use IMPL_PLAN.md and TODO_LIST.md as primary sources instead of reading all task JSONs
**Key Optimization**: Use IMPL_PLAN.md (existence check only) and TODO_LIST.md as primary sources instead of reading all task JSONs

**Note**: In resume mode, this phase is also skipped as session analysis was already completed by `/workflow:status`.
**Resume Mode**: This phase is skipped when `--resume-session` flag is provided (session already known).

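For reference, a TODO_LIST.md entry in the shape assumed by the parser shown later in this file (the task titles here are illustrative):

```
- [ ] **IMPL-001**: Design auth schema → [📋](./.task/IMPL-001.json)
- [x] **IMPL-002**: Build Auth API → [📋](./.task/IMPL-002.json)
```
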
### Phase 3: TodoWrite Generation (Resume Mode Entry Point)
**This is where resume mode directly enters after skipping Phases 1 & 2**
### Phase 3: TodoWrite Generation
**Applies to**: Both normal and resume modes (resume mode entry point)

**Process**:
1. **Create TodoWrite List**: Generate task list from TODO_LIST.md (not from task JSONs)
   - Parse TODO_LIST.md to extract all tasks with current statuses
   - Identify first pending task with met dependencies
   - Generate comprehensive TodoWrite covering entire workflow
2. **Mark Initial Status**: Set first ready task as `in_progress` in TodoWrite
2. **Mark Initial Status**: Set first ready task(s) as `in_progress` in TodoWrite
   - **Sequential execution**: Mark ONE task as `in_progress`
   - **Parallel batch**: Mark ALL tasks in current batch as `in_progress`
3. **Prepare Session Context**: Inject workflow paths for agent use (using provided session-id)
4. **Validate Prerequisites**: Ensure IMPL_PLAN.md and TODO_LIST.md exist and are valid

**Resume Mode Behavior**:
- Load existing TODO_LIST.md directly from `.workflow/{session-id}/`
- Load existing TODO_LIST.md directly from `.workflow/active/{session-id}/`
- Extract current progress from TODO_LIST.md
- Generate TodoWrite from TODO_LIST.md state
- Proceed immediately to agent execution (Phase 4)

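A minimal sketch of the resume-mode path, reusing `parseTodoListTasks` from later in this file and the TodoWrite item shape shown in the examples below:

```javascript
// Resume mode sketch: rebuild TodoWrite directly from TODO_LIST.md state
const todoContent = Read(`.workflow/active/${sessionId}/TODO_LIST.md`);
const tasks = parseTodoListTasks(todoContent);

TodoWrite({
  todos: tasks.map(t => ({
    content: `Execute ${t.id}: ${t.title}`,
    status: t.status === 'completed' ? 'completed' : 'pending',
    activeForm: `Executing ${t.id}: ${t.title}`
  }))
});
// Then mark the first ready pending task as in_progress and enter Phase 4
```
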
### Phase 4: Execution (Lazy Task Loading)
**Key Optimization**: Read task JSON **only when needed** for execution
### Phase 4: Execution Strategy Selection & Task Execution
**Applies to**: Both normal and resume modes

1. **Identify Next Task**: From TodoWrite, get the next `in_progress` task ID
2. **Load Task JSON on Demand**: Read `.task/{task-id}.json` for current task ONLY
3. **Validate Task Structure**: Ensure all 5 required fields exist (id, title, status, meta, context, flow_control)
4. **Pass Task with Flow Control**: Include complete task JSON with `pre_analysis` steps for agent execution
5. **Launch Agent**: Invoke specialized agent with complete context including flow control steps
6. **Monitor Progress**: Track agent execution and handle errors without user interruption
7. **Collect Results**: Gather implementation results and outputs
8. **Update TODO_LIST.md**: Mark current task as completed in TODO_LIST.md
9. **Continue Workflow**: Identify next pending task from TODO_LIST.md and repeat from step 1
**Step 4A: Parse Execution Strategy from IMPL_PLAN.md**

Read IMPL_PLAN.md Section 4 to extract:
- **Execution Model**: Sequential | Parallel | Phased | TDD Cycles
- **Parallelization Opportunities**: Which tasks can run in parallel
- **Serialization Requirements**: Which tasks must run sequentially
- **Critical Path**: Priority execution order

If IMPL_PLAN.md lacks execution strategy, use intelligent fallback (analyze task structure).

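A hedged sketch of Step 4A; the exact field label in IMPL_PLAN.md Section 4 is an assumption about the plan template:

```javascript
// Sketch: extract the execution model from IMPL_PLAN.md Section 4
const plan = Read(`.workflow/active/${sessionId}/IMPL_PLAN.md`);
const modelMatch = plan.match(/Execution Model.*?:\s*(Sequential|Parallel|Phased|TDD)/i); // label assumed
const executionModel = modelMatch ? modelMatch[1] : null;

if (!executionModel) {
  // No explicit strategy in IMPL_PLAN.md → intelligent fallback (analyze task structure)
}
```
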
**Step 4B: Execute Tasks with Lazy Loading**

**Key Optimization**: Read task JSON **only when needed** for execution

**Execution Loop Pattern**:
```
while (TODO_LIST.md has pending tasks) {
  next_task_id = getTodoWriteInProgressTask()
  task_json = Read(.workflow/{session}/.task/{next_task_id}.json)         // Lazy load
  task_json = Read(.workflow/active/{session}/.task/{next_task_id}.json)  // Lazy load
  executeTaskWithAgent(task_json)
  updateTodoListMarkCompleted(next_task_id)
  advanceTodoWriteToNextTask()
}
```

**Execution Process per Task**:
1. **Identify Next Task**: From TodoWrite, get the next `in_progress` task ID
2. **Load Task JSON on Demand**: Read `.task/{task-id}.json` for current task ONLY
3. **Validate Task Structure**: Ensure all 5 required fields exist (id, title, status, meta, context, flow_control)
4. **Launch Agent**: Invoke specialized agent with complete context including flow control steps
5. **Monitor Progress**: Track agent execution and handle errors without user interruption
6. **Collect Results**: Gather implementation results and outputs
7. **Update TODO_LIST.md**: Mark current task as completed in TODO_LIST.md
8. **Continue Workflow**: Identify next pending task from TODO_LIST.md and repeat

**Benefits**:
- Reduces initial context loading by ~90%
- Only reads task JSON when actually executing
@@ -139,6 +142,9 @@ while (TODO_LIST.md has pending tasks) {
- Faster startup time for workflow execution

### Phase 5: Completion
**Applies to**: Both normal and resume modes

**Process**:
1. **Update Task Status**: Mark completed tasks in JSON files
2. **Generate Summary**: Create task summary in `.summaries/`
3. **Update TodoWrite**: Mark current task complete, advance to next
@@ -146,34 +152,52 @@ while (TODO_LIST.md has pending tasks) {
5. **Check Workflow Complete**: Verify all tasks are completed
6. **Auto-Complete Session**: Call `/workflow:session:complete` when all tasks finished

## Task Discovery & Queue Building
## Execution Strategy (IMPL_PLAN-Driven)

### Session Discovery Process (Normal Mode - Optimized)
```
├── Check for .active-* markers in .workflow/
├── If multiple active sessions found → Prompt user to select
├── Locate selected session's workflow folder
├── Load session metadata: workflow-session.json (minimal context)
├── Read IMPL_PLAN.md (strategy overview and task summary)
├── Read TODO_LIST.md (current task statuses and dependencies)
├── Parse TODO_LIST.md to extract task metadata (NO JSON loading)
├── Build execution queue from TODO_LIST.md
└── Generate TodoWrite from TODO_LIST.md state
```
### Strategy Priority

**Key Change**: Task JSONs are NOT loaded during discovery - they are loaded lazily during execution
**IMPL_PLAN-Driven Execution (Recommended)**:
1. **Read IMPL_PLAN.md execution strategy** (Section 4: Implementation Strategy)
2. **Follow explicit guidance**:
   - Execution Model (Sequential/Parallel/Phased/TDD)
   - Parallelization Opportunities (which tasks can run in parallel)
   - Serialization Requirements (which tasks must run sequentially)
   - Critical Path (priority execution order)
3. **Use TODO_LIST.md for status tracking** only
4. **IMPL_PLAN decides "HOW"**, execute.md implements it

### Resume Mode Process (--resume-session flag - Optimized)
```
├── Use provided session-id directly (skip discovery)
├── Validate .workflow/{session-id}/ directory exists
├── Read TODO_LIST.md for current progress
├── Parse TODO_LIST.md to extract task IDs and statuses
├── Generate TodoWrite from TODO_LIST.md (prioritize in-progress/pending tasks)
└── Enter Phase 4 (Execution) with lazy task JSON loading
```
**Intelligent Fallback (When IMPL_PLAN lacks execution details)**:
1. **Analyze task structure**:
   - Check `meta.execution_group` in task JSONs
   - Analyze `depends_on` relationships
   - Understand task complexity and risk
2. **Apply smart defaults**:
   - No dependencies + same execution_group → Parallel
   - Has dependencies → Sequential (wait for deps)
   - Critical/high-risk tasks → Sequential
3. **Conservative approach**: When uncertain, prefer sequential execution

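A sketch of these smart defaults; `depends_on` and `execution_group` appear elsewhere in this file, while the risk/critical field names are assumptions:

```javascript
// Fallback sketch: derive an execution decision from task structure alone
function decideExecution(task) {
  const hasDeps = (task.context?.depends_on || []).length > 0;
  const risky = task.meta?.risk === 'high' || task.meta?.critical === true; // field names assumed
  if (hasDeps || risky) return 'sequential';                 // wait for deps / play it safe
  return task.meta?.execution_group ? 'parallel' : 'sequential'; // conservative default
}
```
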
**Key Change**: Completely skip IMPL_PLAN.md and task JSON loading - use TODO_LIST.md only
### Execution Models

#### 1. Sequential Execution
**When**: IMPL_PLAN specifies "Sequential" OR no clear parallelization guidance
**Pattern**: Execute tasks one by one in TODO_LIST order
**TodoWrite**: ONE task marked as `in_progress` at a time

#### 2. Parallel Execution
**When**: IMPL_PLAN specifies "Parallel" with clear parallelization opportunities
**Pattern**: Execute independent task groups concurrently
**TodoWrite**: MULTIPLE tasks (in same batch) marked as `in_progress` simultaneously

#### 3. Phased Execution
**When**: IMPL_PLAN specifies "Phased" with phase breakdown
**Pattern**: Execute tasks in phases, respect phase boundaries
**TodoWrite**: Within each phase, follow Sequential or Parallel rules

#### 4. Intelligent Fallback
**When**: IMPL_PLAN lacks execution strategy details
**Pattern**: Analyze task structure and apply smart defaults
**TodoWrite**: Follow Sequential or Parallel rules based on analysis

### Task Status Logic
```
@@ -182,155 +206,36 @@ completed → skip
blocked → skip until dependencies clear
```

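A small sketch applying this status logic when building the execution queue (task shape follows `buildDependencyGraphFromTodoList` below):

```javascript
// Select ready tasks: pending, with all dependencies completed
function readyTasks(tasks) {
  const done = new Set(tasks.filter(t => t.status === 'completed').map(t => t.id));
  return tasks.filter(t =>
    t.status === 'pending' &&
    (t.dependencies || []).every(dep => done.has(dep))  // blocked → skip until dependencies clear
  );
}
```
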
## Batch Execution with Dependency Graph

### Parallel Execution Algorithm
**Core principle**: Execute independent tasks concurrently in batches based on dependency graph.

#### Algorithm Steps (Optimized with Lazy Loading)
```javascript
async function executeBatchWorkflow(sessionId) {
  // 1. Build dependency graph from TODO_LIST.md (NOT task JSONs)
  const graph = buildDependencyGraphFromTodoList(`.workflow/${sessionId}/TODO_LIST.md`);

  // 2. Process batches until graph is empty
  while (!graph.isEmpty()) {
    // 3. Identify current batch (tasks with in-degree = 0)
    const batch = graph.getNodesWithInDegreeZero();

    // 4. Load task JSONs ONLY for current batch (lazy loading)
    const batchTaskJsons = batch.map(taskId =>
      Read(`.workflow/${sessionId}/.task/${taskId}.json`)
    );

    // 5. Check for parallel execution opportunities
    const parallelGroups = groupByExecutionGroup(batchTaskJsons);

    // 6. Execute batch concurrently
    await Promise.all(
      parallelGroups.map(group => executeBatch(group))
    );

    // 7. Update graph: remove completed tasks and their edges
    graph.removeNodes(batch);

    // 8. Update TODO_LIST.md and TodoWrite to reflect completed batch
    updateTodoListAfterBatch(batch);
    updateTodoWriteAfterBatch(batch);
  }

  // 9. All tasks complete - auto-complete session
  SlashCommand("/workflow:session:complete");
}

function buildDependencyGraphFromTodoList(todoListPath) {
  const todoContent = Read(todoListPath);
  const tasks = parseTodoListTasks(todoContent);
  const graph = new DirectedGraph();

  tasks.forEach(task => {
    graph.addNode(task.id, { id: task.id, title: task.title, status: task.status });
    task.dependencies?.forEach(depId => graph.addEdge(depId, task.id));
  });

  return graph;
}

function parseTodoListTasks(todoContent) {
  // Parse: - [ ] **IMPL-001**: Task title → [📋](./.task/IMPL-001.json)
  const taskPattern = /- \[([ x])\] \*\*([A-Z]+-\d+(?:\.\d+)?)\*\*: (.+?) →/g;
  const tasks = [];
  let match;

  while ((match = taskPattern.exec(todoContent)) !== null) {
    tasks.push({
      status: match[1] === 'x' ? 'completed' : 'pending',
      id: match[2],
      title: match[3]
    });
  }

  return tasks;
}

function groupByExecutionGroup(tasks) {
  const groups = {};

  tasks.forEach(task => {
    const groupId = task.meta.execution_group || task.id;
    if (!groups[groupId]) groups[groupId] = [];
    groups[groupId].push(task);
  });

  return Object.values(groups);
}

async function executeBatch(tasks) {
  // Execute all tasks in batch concurrently
  return Promise.all(
    tasks.map(task => executeTask(task))
  );
}
```

#### Execution Group Rules
1. **Same `execution_group` ID** → Execute in parallel (independent, different contexts)
2. **No `execution_group` (null)** → Execute sequentially (has dependencies)
3. **Different `execution_group` IDs** → Execute in parallel (independent batches)
4. **Same `context_signature`** → Should have been merged (warning if not)

#### Parallel Execution Example
```
Batch 1 (no dependencies):
- IMPL-1.1 (execution_group: "parallel-auth-api") → Agent 1
- IMPL-1.2 (execution_group: "parallel-ui-comp") → Agent 2
- IMPL-1.3 (execution_group: "parallel-db-schema") → Agent 3

Wait for Batch 1 completion...

Batch 2 (depends on Batch 1):
- IMPL-2.1 (execution_group: null, depends_on: [IMPL-1.1, IMPL-1.2]) → Agent 1

Wait for Batch 2 completion...

Batch 3 (independent of Batch 2):
- IMPL-3.1 (execution_group: "parallel-tests-1") → Agent 1
- IMPL-3.2 (execution_group: "parallel-tests-2") → Agent 2
```

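A sketch for Rule 4; the field is assumed to live at `task.meta.context_signature`, following the other `meta.*` fields used above:

```javascript
// Warn when two tasks in the same batch share a context_signature
// (they should have been merged upstream during planning)
function warnUnmergedTasks(batchTaskJsons) {
  const seen = {};
  for (const task of batchTaskJsons) {
    const sig = task.meta?.context_signature;
    if (!sig) continue;
    if (seen[sig]) {
      console.warn(`Tasks ${seen[sig]} and ${task.id} share context_signature "${sig}" - expected merge`);
    } else {
      seen[sig] = task.id;
    }
  }
}
```
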
## TodoWrite Coordination
**Comprehensive workflow tracking** with immediate status updates throughout entire execution without user interruption:

#### TodoWrite Workflow Rules
1. **Initial Creation**: Generate TodoWrite from discovered pending tasks for entire workflow
   - **Normal Mode**: Create from discovery results
   - **Resume Mode**: Create from existing session state and current progress
2. **Parallel Task Support**:
   - **Single-task execution**: Mark ONLY ONE task as `in_progress` at a time
   - **Batch execution**: Mark ALL tasks in current batch as `in_progress` simultaneously
   - **Execution group indicator**: Show `[execution_group: group-id]` for parallel tasks
3. **Immediate Updates**: Update status after each task/batch completion without user interruption
4. **Status Synchronization**: Sync with JSON task files after updates
5. **Continuous Tracking**: Maintain TodoWrite throughout entire workflow execution until completion
### TodoWrite Rules (Unified)

#### Resume Mode TodoWrite Generation
**Special behavior when `--resume-session` flag is present**:
- Load existing session progress from `.workflow/{session-id}/TODO_LIST.md`
- Identify currently in-progress or next pending task
- Generate TodoWrite starting from interruption point
- Preserve completed task history in TodoWrite display
- Focus on remaining pending tasks for execution
**Rule 1: Initial Creation**
- **Normal Mode**: Generate TodoWrite from discovered pending tasks for entire workflow
- **Resume Mode**: Generate from existing session state and current progress

#### TodoWrite Tool Usage
**Use Claude Code's built-in TodoWrite tool** to track workflow progress in real-time:
**Rule 2: In-Progress Task Count (Execution-Model-Dependent)**
- **Sequential execution**: Mark ONLY ONE task as `in_progress` at a time
- **Parallel batch execution**: Mark ALL tasks in current batch as `in_progress` simultaneously
- **Execution group indicator**: Show `[execution_group: group-id]` for parallel tasks

**Rule 3: Status Updates**
- **Immediate Updates**: Update status after each task/batch completion without user interruption
- **Status Synchronization**: Sync with JSON task files after updates
- **Continuous Tracking**: Maintain TodoWrite throughout entire workflow execution until completion

**Rule 4: Workflow Completion Check**
- When all tasks marked `completed`, auto-call `/workflow:session:complete`

### TodoWrite Tool Usage

**Example 1: Sequential Execution**
```javascript
// Example 1: Sequential execution (traditional)
TodoWrite({
  todos: [
    {
      content: "Execute IMPL-1.1: Design auth schema [code-developer] [FLOW_CONTROL]",
      status: "in_progress", // Single task in progress
      status: "in_progress", // ONE task in progress
      activeForm: "Executing IMPL-1.1: Design auth schema"
    },
    {
@@ -340,8 +245,10 @@ TodoWrite({
    }
  ]
});
```

// Example 2: Batch execution (parallel tasks with execution_group)
**Example 2: Parallel Batch Execution**
```javascript
TodoWrite({
  todos: [
    {
@@ -366,44 +273,9 @@ TodoWrite({
    }
  ]
});

// Example 3: After batch completion
TodoWrite({
  todos: [
    {
      content: "Execute IMPL-1.1: Build Auth API [code-developer] [execution_group: parallel-auth-api]",
      status: "completed", // Batch completed
      activeForm: "Executing IMPL-1.1: Build Auth API"
    },
    {
      content: "Execute IMPL-1.2: Build User UI [code-developer] [execution_group: parallel-ui-comp]",
      status: "completed", // Batch completed
      activeForm: "Executing IMPL-1.2: Build User UI"
    },
    {
      content: "Execute IMPL-1.3: Setup Database [code-developer] [execution_group: parallel-db-schema]",
      status: "completed", // Batch completed
      activeForm: "Executing IMPL-1.3: Setup Database"
    },
    {
      content: "Execute IMPL-2.1: Integration Tests [test-fix-agent]",
      status: "in_progress", // Next batch started
      activeForm: "Executing IMPL-2.1: Integration Tests"
    }
  ]
});
```

**TodoWrite Integration Rules**:
- **Continuous Workflow Tracking**: Use TodoWrite tool throughout entire workflow execution
- **Real-time Updates**: Immediate progress tracking without user interruption
- **Single Active Task**: Only ONE task marked as `in_progress` at any time
- **Immediate Completion**: Mark tasks `completed` immediately after finishing
- **Status Sync**: Sync TodoWrite status with JSON task files after each update
- **Full Execution**: Continue TodoWrite tracking until all workflow tasks complete
- **Workflow Completion Check**: When all tasks marked `completed`, auto-call `/workflow:session:complete`

#### TODO_LIST.md Update Timing
### TODO_LIST.md Update Timing
**Single source of truth for task status** - enables lazy loading by providing task metadata without reading JSONs

- **Before Agent Launch**: Mark task as `in_progress`
@@ -411,18 +283,17 @@ TodoWrite({
- **On Error**: Keep as `in_progress`, add error note
- **Workflow Complete**: Call `/workflow:session:complete`

### 3. Agent Context Management
**Comprehensive context preparation** for autonomous agent execution:
## Agent Context Management

#### Context Sources (Priority Order)
### Context Sources (Priority Order)
1. **Complete Task JSON**: Full task definition including all fields and artifacts
2. **Artifacts Context**: Brainstorming outputs and role analysess from task.context.artifacts
2. **Artifacts Context**: Brainstorming outputs and role analyses from task.context.artifacts
3. **Flow Control Context**: Accumulated outputs from pre_analysis steps (including artifact loading)
4. **Dependency Summaries**: Previous task completion summaries
5. **Session Context**: Workflow paths and session metadata
6. **Inherited Context**: Parent task context and shared variables

#### Context Assembly Process
### Context Assembly Process
```
1. Load Task JSON → Base context (including artifacts array)
2. Load Artifacts → Synthesis specifications and brainstorming outputs
@@ -432,7 +303,7 @@ TodoWrite({
6. Combine All → Complete agent context with artifact integration
```

#### Agent Context Package Structure
### Agent Context Package Structure
```json
{
  "task": { /* Complete task JSON with artifacts array */ },
@@ -451,18 +322,18 @@ TodoWrite({
  }
  },
  "session": {
    "workflow_dir": ".workflow/WFS-session/",
    "context_package_path": ".workflow/WFS-session/.process/context-package.json",
    "todo_list_path": ".workflow/WFS-session/TODO_LIST.md",
    "summaries_dir": ".workflow/WFS-session/.summaries/",
    "task_json_path": ".workflow/WFS-session/.task/IMPL-1.1.json"
    "workflow_dir": ".workflow/active/WFS-session/",
    "context_package_path": ".workflow/active/WFS-session/.process/context-package.json",
    "todo_list_path": ".workflow/active/WFS-session/TODO_LIST.md",
    "summaries_dir": ".workflow/active/WFS-session/.summaries/",
    "task_json_path": ".workflow/active/WFS-session/.task/IMPL-1.1.json"
  },
  "dependencies": [ /* Task summaries from depends_on */ ],
  "inherited": { /* Parent task context */ }
}
```

#### Context Validation Rules
### Context Validation Rules
- **Task JSON Complete**: All 5 fields present and valid, including artifacts array in context
- **Artifacts Available**: All artifacts loaded from context-package.json
- **Flow Control Ready**: All pre_analysis steps completed including artifact loading steps
@@ -470,10 +341,26 @@ TodoWrite({
- **Session Paths Valid**: All workflow paths exist and accessible (verified via context-package.json)
- **Agent Assignment**: Valid agent type specified in meta.agent

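A sketch of the validation gate implied by these rules, using the field names listed above (the error messages are illustrative):

```javascript
// Validate the assembled context package before launching the agent
function validateAgentContext(ctx) {
  const requiredFields = ['id', 'title', 'status', 'meta', 'context', 'flow_control'];
  const missing = requiredFields.filter(f => !(f in ctx.task));
  if (missing.length) throw new Error(`Task JSON incomplete: missing ${missing.join(', ')}`);

  if (!Array.isArray(ctx.task.context.artifacts)) throw new Error('artifacts array missing from task.context');
  if (!ctx.session?.workflow_dir) throw new Error('Session paths missing');
  if (!ctx.task.meta.agent) {
    // infer agent from meta.type per the Agent Assignment Rules below
  }
  return true;
}
```
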
### 4. Agent Execution Pattern
**Structured agent invocation** with complete context and clear instructions:
## Agent Execution Pattern

#### Agent Prompt Template
### Flow Control Execution
**[FLOW_CONTROL]** marker indicates task JSON contains `flow_control.pre_analysis` steps for context preparation.

**Orchestrator Responsibility**:
- Pass complete task JSON to agent (including `flow_control` block)
- Provide session paths for artifact access
- Monitor agent completion

**Agent Responsibility**:
- Parse `flow_control.pre_analysis` array from JSON
- Execute steps sequentially with variable substitution
- Accumulate context from artifacts and dependencies
- Follow error handling per `step.on_error`
- Complete implementation using accumulated context

**Orchestrator does NOT execute flow control steps - Agent interprets and executes them from JSON.**

### Agent Prompt Template
```bash
Task(subagent_type="{meta.agent}",
     prompt="**EXECUTE TASK FROM JSON**
@@ -512,7 +399,7 @@ Task(subagent_type="{meta.agent}",
     description="Execute task: {task.id}")
```

#### Agent JSON Loading Specification
### Agent JSON Loading Specification
**MANDATORY AGENT PROTOCOL**: All agents must follow this exact loading sequence:

1. **JSON Loading**: First action must be `cat {session.task_json_path}`
@@ -565,7 +452,7 @@ Task(subagent_type="{meta.agent}",
  "step": "load_synthesis_specification",
  "action": "Load synthesis specification from context-package.json",
  "commands": [
    "Read(.workflow/WFS-[session]/.process/context-package.json)",
    "Read(.workflow/active/WFS-[session]/.process/context-package.json)",
    "Extract(brainstorm_artifacts.synthesis_output.path)",
    "Read(extracted path)"
  ],
@@ -606,7 +493,7 @@ Task(subagent_type="{meta.agent}",
}
```

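A hedged sketch of how the agent side might walk `flow_control.pre_analysis`; the step shape loosely follows the example above, and `substituteVariables`/`run` are assumed helpers:

```javascript
// Agent-side interpretation of pre_analysis steps from the task JSON
let accumulated = {};
for (const step of taskJson.flow_control?.pre_analysis || []) {
  try {
    for (const command of step.commands || []) {
      // substitute accumulated variables (e.g. an earlier step's output) before running
      const resolved = substituteVariables(command, accumulated); // assumed helper
      accumulated[step.step] = run(resolved);                     // Read(...), Extract(...), etc.
    }
  } catch (err) {
    if (step.on_error === 'skip') continue; // optional step: skip and keep going
    throw err;                              // critical step: fail the task
  }
}
```
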
#### Execution Flow
### Execution Flow
1. **Load Task JSON**: Agent reads and validates complete JSON structure
2. **Execute Flow Control**: Agent runs pre_analysis steps if present
3. **Prepare Implementation**: Agent uses implementation_approach from JSON
@@ -614,7 +501,7 @@ Task(subagent_type="{meta.agent}",
5. **Update Status**: Agent marks JSON status as completed
6. **Generate Summary**: Agent creates completion summary

#### Agent Assignment Rules
### Agent Assignment Rules
```
meta.agent specified → Use specified agent
meta.agent missing → Infer from meta.type:
@@ -625,15 +512,9 @@ meta.agent missing → Infer from meta.type:
  - "docs" → @doc-generator
```

#### Error Handling During Execution
- **Agent Failure**: Retry once with adjusted context
- **Flow Control Error**: Skip optional steps, fail on critical
- **Context Missing**: Reload from JSON files and retry
- **Timeout**: Mark as blocked, continue with next task

## Workflow File Structure Reference
```
.workflow/WFS-[topic-slug]/
.workflow/active/WFS-[topic-slug]/
├── workflow-session.json          # Session state and metadata
├── IMPL_PLAN.md                   # Planning document and requirements
├── TODO_LIST.md                   # Progress tracking (auto-updated)
@@ -644,78 +525,26 @@ meta.agent missing → Infer from meta.type:
│   ├── IMPL-1-summary.md          # Task completion details
│   └── IMPL-1.1-summary.md        # Subtask completion details
└── .process/                      # Planning artifacts
    ├── context-package.json       # Smart context package
    └── ANALYSIS_RESULTS.md        # Planning analysis results
```

## Error Handling & Recovery

### Discovery Phase Errors
| Error | Cause | Resolution | Command |
|-------|-------|------------|---------|
| No active session | No `.active-*` markers found | Create or resume session | `/workflow:plan "project"` |
| Multiple sessions | Multiple `.active-*` markers | Select specific session | Manual choice prompt |
| Corrupted session | Invalid JSON files | Recreate session structure | `/workflow:session:status --validate` |
| Missing task files | Broken task references | Regenerate tasks | `/task:create` or repair |
### Common Errors & Recovery

### Execution Phase Errors
| Error | Cause | Recovery Strategy | Max Attempts |
|-------|-------|------------------|--------------|
| Error Type | Cause | Recovery Strategy | Max Attempts |
|-----------|-------|------------------|--------------|
| **Discovery Errors** | | | |
| No active session | No sessions in `.workflow/active/` | Create or resume session: `/workflow:plan "project"` | N/A |
| Multiple sessions | Multiple sessions in `.workflow/active/` | Prompt user selection | N/A |
| Corrupted session | Invalid JSON files | Recreate session structure or validate files | N/A |
| **Execution Errors** | | | |
| Agent failure | Agent crash/timeout | Retry with simplified context | 2 |
| Flow control error | Command failure | Skip optional, fail critical | 1 per step |
| Context loading error | Missing dependencies | Reload from JSON, use defaults | 3 |
| JSON file corruption | File system issues | Restore from backup/recreate | 1 |

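A sketch of the retry policy in the table above (the two-attempt limit for agent failure); `simplifyContext` is an assumed helper:

```javascript
// Retry wrapper implementing the "Agent failure" row: max 2 attempts, then mark blocked
async function executeWithRecovery(task) {
  for (let attempt = 1; attempt <= 2; attempt++) {
    try {
      return await executeTaskWithAgent(task);
    } catch (err) {
      if (attempt === 2) { task.status = 'blocked'; return; } // give up, continue with next task
      task = simplifyContext(task);                           // retry with simplified context (assumed helper)
    }
  }
}
```
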
### Recovery Procedures

#### Session Recovery
```bash
# Check session integrity
find .workflow -name ".active-*" | while read marker; do
  session=$(basename "$marker" | sed 's/^\.active-//')
  if [ ! -d ".workflow/$session" ]; then
    echo "Removing orphaned marker: $marker"
    rm "$marker"
  fi
done

# Recreate corrupted session files
if [ ! -f ".workflow/$session/workflow-session.json" ]; then
  echo '{"session_id":"'$session'","status":"active"}' > ".workflow/$session/workflow-session.json"
fi
```

#### Task Recovery
```bash
# Validate task JSON integrity
for task_file in .workflow/$session/.task/*.json; do
  if ! jq empty "$task_file" 2>/dev/null; then
    echo "Corrupted task file: $task_file"
    # Backup and regenerate or restore from backup
  fi
done

# Fix missing dependencies
missing_deps=$(jq -r '.context.depends_on[]?' .workflow/$session/.task/*.json | sort -u)
for dep in $missing_deps; do
  if [ ! -f ".workflow/$session/.task/$dep.json" ]; then
    echo "Missing dependency: $dep - creating placeholder"
  fi
done
```

#### Context Recovery
```bash
# Reload context from available sources
if [ -f ".workflow/$session/.process/ANALYSIS_RESULTS.md" ]; then
  echo "Reloading planning context..."
fi

# Restore from documentation if available
if [ -d ".workflow/docs/" ]; then
  echo "Using documentation context as fallback..."
fi
```

### Error Prevention
- **Pre-flight Checks**: Validate session integrity before execution
- **Backup Strategy**: Create task snapshots before major operations
@@ -723,16 +552,28 @@ fi
- **Dependency Validation**: Check all depends_on references exist
- **Context Verification**: Ensure all required context is available

## Usage Examples
### Recovery Procedures

### Basic Usage
**Session Recovery**:
```bash
/workflow:execute              # Execute all pending tasks autonomously
/workflow:session:status       # Check progress
/task:execute IMPL-1.2         # Execute specific task
# Check session integrity
find .workflow/active/ -name "WFS-*" -type d | while read session_dir; do
  session=$(basename "$session_dir")
  [ ! -f "$session_dir/workflow-session.json" ] && \
    echo '{"session_id":"'$session'","status":"active"}' > "$session_dir/workflow-session.json"
done
```

### Integration
- **Planning**: `/workflow:plan` → `/workflow:execute` → `/workflow:review`
- **Recovery**: `/workflow:status --validate` → `/workflow:execute`
**Task Recovery**:
```bash
# Validate task JSON integrity
for task_file in .workflow/active/$session/.task/*.json; do
  jq empty "$task_file" 2>/dev/null || echo "Corrupted: $task_file"
done

# Fix missing dependencies
missing_deps=$(jq -r '.context.depends_on[]?' .workflow/active/$session/.task/*.json | sort -u)
for dep in $missing_deps; do
  [ ! -f ".workflow/active/$session/.task/$dep.json" ] && echo "Missing dependency: $dep"
done
```

564  .claude/commands/workflow/init.md  Normal file
@@ -0,0 +1,564 @@
|
||||
---
|
||||
name: init
|
||||
description: Initialize project-level state with intelligent project analysis using cli-explore-agent
|
||||
argument-hint: "[--regenerate]"
|
||||
examples:
|
||||
- /workflow:init
|
||||
- /workflow:init --regenerate
|
||||
---
|
||||
|
||||
# Workflow Init Command (/workflow:init)
|
||||
|
||||
## Overview
|
||||
Initializes `.workflow/project.json` with comprehensive project understanding by leveraging **cli-explore-agent** for intelligent analysis and **memory discovery** for SKILL package indexing.
|
||||
|
||||
**Key Features**:
|
||||
- **Intelligent Project Analysis**: Uses cli-explore-agent's Deep Scan mode
|
||||
- **Technology Stack Detection**: Identifies languages, frameworks, build tools
|
||||
- **Architecture Overview**: Discovers patterns, layers, key components
|
||||
- **Memory Discovery**: Scans and indexes available SKILL packages
|
||||
- **Smart Recommendations**: Suggests memory commands based on project state
|
||||
- **One-time Initialization**: Skips if project.json exists (unless --regenerate)
|
||||
|
||||
## Usage
|
||||
```bash
|
||||
/workflow:init # Initialize project state (skip if exists)
|
||||
/workflow:init --regenerate # Force regeneration of project.json
|
||||
```
|
||||
|
||||
## Implementation Flow
|
||||
|
||||
### Step 1: Check Existing State
|
||||
|
||||
```bash
|
||||
# Check if project.json already exists
|
||||
bash(test -f .workflow/project.json && echo "EXISTS" || echo "NOT_FOUND")
|
||||
```
|
||||
|
||||
**If EXISTS and no --regenerate flag**:
|
||||
```
|
||||
Project already initialized at .workflow/project.json
|
||||
Use /workflow:init --regenerate to rebuild project analysis
|
||||
Use /workflow:status --project to view current state
|
||||
```
|
||||
|
||||
**If NOT_FOUND or --regenerate flag**: Proceed to initialization
|
||||
|
||||
### Step 2: Project Discovery
|
||||
|
||||
```bash
|
||||
# Get project name and root
|
||||
bash(basename "$(git rev-parse --show-toplevel 2>/dev/null || pwd)")
|
||||
bash(git rev-parse --show-toplevel 2>/dev/null || pwd)
|
||||
|
||||
# Create .workflow directory
|
||||
bash(mkdir -p .workflow)
|
||||
```
|
||||
|
||||
### Step 3: Intelligent Project Analysis
|
||||
|
||||
**Invoke cli-explore-agent** with Deep Scan mode for comprehensive understanding:
|
||||
|
||||
```javascript
|
||||
Task(
|
||||
subagent_type="cli-explore-agent",
|
||||
description="Deep project analysis",
|
||||
prompt=`
|
||||
Analyze project structure and technology stack for workflow initialization.
|
||||
|
||||
## Analysis Objective
|
||||
Perform Deep Scan analysis to build comprehensive project understanding for .workflow/project.json initialization.
|
||||
|
||||
## Required Analysis
|
||||
|
||||
### 1. Technology Stack Detection
|
||||
- **Primary Languages**: Identify all programming languages with file counts
|
||||
- **Frameworks**: Detect web frameworks (React, Vue, Express, Django, etc.)
|
||||
- **Build Tools**: Identify build systems (npm, cargo, maven, gradle, etc.)
|
||||
- **Test Frameworks**: Find testing tools (jest, pytest, go test, etc.)
|
||||
|
||||
### 2. Project Architecture
|
||||
- **Architecture Style**: Identify patterns (MVC, microservices, monorepo, etc.)
|
||||
- **Layer Structure**: Discover architectural layers (presentation, business, data)
|
||||
- **Design Patterns**: Find common patterns (singleton, factory, repository, etc.)
|
||||
- **Key Components**: List 5-10 core modules/components with brief descriptions
|
||||
|
||||
### 3. Project Metrics
|
||||
- **Total Files**: Count source code files
|
||||
- **Lines of Code**: Estimate total LOC
|
||||
- **Module Count**: Number of top-level modules/packages
|
||||
- **Complexity**: Overall complexity rating (low/medium/high)
|
||||
|
||||
### 4. Entry Points
|
||||
- **Main Entry**: Identify primary application entry point(s)
|
||||
- **CLI Commands**: Discover available commands/scripts
|
||||
- **API Endpoints**: Find HTTP/REST/GraphQL endpoints (if applicable)
|
||||
|
||||
## Execution Mode
|
||||
Use **Deep Scan** with Dual-Source Strategy:
|
||||
- Phase 1: Bash structural scan (fast pattern discovery)
|
||||
- Phase 2: Gemini semantic analysis (design intent, patterns)
|
||||
- Phase 3: Synthesis (merge findings with attribution)
|
||||
|
||||
## Analysis Scope
|
||||
- Root directory: ${projectRoot}
|
||||
- Exclude: node_modules, dist, build, .git, vendor, __pycache__
|
||||
- Focus: Source code directories (src, lib, pkg, app, etc.)
|
||||
|
||||
## Output Format
|
||||
Return JSON structure for programmatic processing:
|
||||
|
||||
\`\`\`json
|
||||
{
|
||||
"technology_stack": {
|
||||
"languages": [
|
||||
{"name": "TypeScript", "file_count": 150, "primary": true},
|
||||
{"name": "Python", "file_count": 30, "primary": false}
|
||||
],
|
||||
"frameworks": ["React", "Express", "TypeORM"],
|
||||
"build_tools": ["npm", "webpack"],
|
||||
"test_frameworks": ["Jest", "Supertest"]
|
||||
},
|
||||
"architecture": {
|
||||
"style": "Layered MVC with Repository Pattern",
|
||||
"layers": ["presentation", "business-logic", "data-access"],
|
||||
"patterns": ["MVC", "Repository Pattern", "Dependency Injection"],
|
||||
"key_components": [
|
||||
{
|
||||
"name": "Authentication Module",
|
||||
"path": "src/auth",
|
||||
"description": "JWT-based authentication with OAuth2 support",
|
||||
"importance": "high"
|
||||
},
|
||||
{
|
||||
"name": "User Management",
|
||||
"path": "src/users",
|
||||
"description": "User CRUD operations and profile management",
|
||||
"importance": "high"
|
||||
}
|
||||
]
|
||||
},
|
||||
"metrics": {
|
||||
"total_files": 180,
|
||||
"lines_of_code": 15000,
|
||||
"module_count": 12,
|
||||
"complexity": "medium"
|
||||
},
|
||||
"entry_points": {
|
||||
"main": "src/index.ts",
|
||||
"cli_commands": ["npm start", "npm test", "npm run build"],
|
||||
"api_endpoints": ["/api/auth", "/api/users", "/api/posts"]
|
||||
},
|
||||
"analysis_metadata": {
|
||||
"timestamp": "2025-01-18T10:30:00Z",
|
||||
"mode": "deep-scan",
|
||||
"source": "cli-explore-agent"
|
||||
}
|
||||
}
|
||||
\`\`\`
|
||||
|
||||
## Quality Requirements
|
||||
- ✅ All technology stack items verified (no guessing)
|
||||
- ✅ Key components include file paths for navigation
|
||||
- ✅ Architecture style based on actual code patterns, not assumptions
|
||||
- ✅ Metrics calculated from actual file counts/lines
|
||||
- ✅ Entry points verified as executable
|
||||
`
|
||||
)
|
||||
```
|
||||
|
||||
**Agent Output**: JSON structure with comprehensive project analysis
|
||||
|
||||
### Step 4: Build project.json from Analysis
|
||||
|
||||
**Data Processing**:
|
||||
```javascript
|
||||
// Parse agent analysis output
|
||||
const analysis = JSON.parse(agentOutput);
|
||||
|
||||
// Build complete project.json structure
|
||||
const projectMeta = {
|
||||
// Basic metadata
|
||||
project_name: projectName,
|
||||
initialized_at: new Date().toISOString(),
|
||||
|
||||
// Project overview (from cli-explore-agent)
|
||||
overview: {
|
||||
description: generateDescription(analysis), // e.g., "TypeScript web application with React frontend"
|
||||
technology_stack: analysis.technology_stack,
|
||||
architecture: {
|
||||
style: analysis.architecture.style,
|
||||
layers: analysis.architecture.layers,
|
||||
patterns: analysis.architecture.patterns
|
||||
},
|
||||
key_components: analysis.architecture.key_components,
|
||||
entry_points: analysis.entry_points,
|
||||
metrics: analysis.metrics
|
||||
},
|
||||
|
||||
// Feature registry (initially empty, populated by complete)
|
||||
features: [],
|
||||
|
||||
// Statistics
|
||||
statistics: {
|
||||
total_features: 0,
|
||||
total_sessions: 0,
|
||||
last_updated: new Date().toISOString()
|
||||
},
|
||||
|
||||
// Analysis metadata
|
||||
_metadata: {
|
||||
initialized_by: "cli-explore-agent",
|
||||
analysis_timestamp: analysis.analysis_metadata.timestamp,
|
||||
analysis_mode: analysis.analysis_metadata.mode
|
||||
}
|
||||
};
|
||||
|
||||
// Helper: Generate project description
|
||||
function generateDescription(analysis) {
|
||||
const primaryLang = analysis.technology_stack.languages.find(l => l.primary);
|
||||
const frameworks = analysis.technology_stack.frameworks.slice(0, 2).join(', ');
|
||||
|
||||
return `${primaryLang.name} project using ${frameworks}`;
|
||||
}
|
||||
|
||||
// Write to .workflow/project.json
|
||||
Write('.workflow/project.json', JSON.stringify(projectMeta, null, 2));
|
||||
```
|
||||
|
||||
### Step 5: Output Summary
|
||||
|
||||
```
|
||||
✓ Project initialized successfully
|
||||
|
||||
## Project Overview
|
||||
Name: ${projectName}
|
||||
Description: ${overview.description}
|
||||
|
||||
### Technology Stack
|
||||
Languages: ${languages.map(l => l.name).join(', ')}
|
||||
Frameworks: ${frameworks.join(', ')}
|
||||
|
||||
### Architecture
|
||||
Style: ${architecture.style}
|
||||
Components: ${key_components.length} core modules identified
|
||||
|
||||
### Project Metrics
|
||||
Files: ${metrics.total_files}
|
||||
LOC: ${metrics.lines_of_code}
|
||||
Complexity: ${metrics.complexity}
|
||||
|
||||
### Memory Resources
|
||||
SKILL Packages: ${memory_resources.skills.length}
|
||||
Documentation: ${memory_resources.documentation.length} project(s)
|
||||
Module Docs: ${memory_resources.module_docs.length} file(s)
|
||||
Gaps: ${memory_resources.gaps.join(', ') || 'none'}
|
||||
|
||||
## Quick Start
|
||||
• /workflow:plan "feature description" - Start new workflow
|
||||
• /workflow:status --project - View project state
|
||||
|
||||
---
|
||||
Project state saved to: .workflow/project.json
|
||||
Memory index updated: ${memory_resources.last_scanned}
|
||||
```
|
||||
|
||||
## Extended project.json Schema
|
||||
|
||||
### Complete Structure
|
||||
|
||||
```json
|
||||
{
|
||||
"project_name": "claude_dms3",
|
||||
"initialized_at": "2025-01-18T10:00:00Z",
|
||||
|
||||
"overview": {
|
||||
"description": "TypeScript workflow automation system with AI agent orchestration",
|
||||
"technology_stack": {
|
||||
"languages": [
|
||||
{"name": "TypeScript", "file_count": 150, "primary": true},
|
||||
{"name": "Bash", "file_count": 30, "primary": false}
|
||||
],
|
||||
"frameworks": ["Node.js"],
|
||||
"build_tools": ["npm"],
|
||||
"test_frameworks": ["Jest"]
|
||||
},
|
||||
"architecture": {
|
||||
"style": "Agent-based workflow orchestration with modular command system",
|
||||
"layers": ["command-layer", "agent-orchestration", "cli-integration"],
|
||||
"patterns": ["Command Pattern", "Agent Pattern", "Template Method"]
|
||||
},
|
||||
"key_components": [
|
||||
{
|
||||
"name": "Workflow Planning",
|
||||
"path": ".claude/commands/workflow",
|
||||
"description": "Multi-phase planning workflow with brainstorming and task generation",
|
||||
"importance": "high"
|
||||
},
|
||||
{
|
||||
"name": "Agent System",
|
||||
"path": ".claude/agents",
|
||||
"description": "Specialized agents for code development, testing, documentation",
|
||||
"importance": "high"
|
||||
},
|
||||
{
|
||||
"name": "CLI Tool Integration",
|
||||
"path": ".claude/scripts",
|
||||
"description": "Gemini, Qwen, Codex wrapper scripts for AI-powered analysis",
|
||||
"importance": "medium"
|
||||
}
|
||||
],
|
||||
"entry_points": {
|
||||
"main": ".claude/commands/workflow/plan.md",
|
||||
"cli_commands": ["/workflow:plan", "/workflow:execute", "/memory:docs"],
|
||||
"api_endpoints": []
|
||||
},
|
||||
"metrics": {
|
||||
"total_files": 180,
|
||||
"lines_of_code": 15000,
|
||||
"module_count": 12,
|
||||
"complexity": "medium"
|
||||
}
|
||||
},
|
||||
|
||||
"features": [],
|
||||
|
||||
"statistics": {
|
||||
"total_features": 0,
|
||||
"total_sessions": 0,
|
||||
"last_updated": "2025-01-18T10:00:00Z"
|
||||
},
|
||||
|
||||
"memory_resources": {
|
||||
"skills": [
|
||||
{"name": "claude_dms3", "type": "project_docs", "path": ".claude/skills/claude_dms3"},
|
||||
{"name": "workflow-progress", "type": "workflow_progress", "path": ".claude/skills/workflow-progress"}
|
||||
],
|
||||
"documentation": [
|
||||
{
|
||||
"name": "claude_dms3",
|
||||
"path": ".workflow/docs/claude_dms3",
|
||||
"has_readme": true,
|
||||
"has_architecture": true
|
||||
}
|
||||
],
|
||||
"module_docs": [
|
||||
".claude/commands/workflow/CLAUDE.md",
|
||||
".claude/agents/CLAUDE.md"
|
||||
],
|
||||
"gaps": ["tech_stack"],
|
||||
"last_scanned": "2025-01-18T10:05:00Z"
|
||||
},
|
||||
|
||||
"_metadata": {
|
||||
"initialized_by": "cli-explore-agent",
|
||||
"analysis_timestamp": "2025-01-18T10:00:00Z",
|
||||
"analysis_mode": "deep-scan",
|
||||
"memory_scan_timestamp": "2025-01-18T10:05:00Z"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Phase 5: Discover Memory Resources
|
||||
|
||||
**Goal**: Scan and index available SKILL packages (memory command products) using agent delegation
|
||||
|
||||
**Invoke general-purpose agent** to discover and catalog all memory products:
|
||||
|
||||
```javascript
|
||||
Task(
|
||||
subagent_type="general-purpose",
|
||||
description="Discover memory resources",
|
||||
prompt=`
|
||||
Discover and index all memory command products: SKILL packages, documentation, and CLAUDE.md files.
|
||||
|
||||
## Discovery Scope
|
||||
1. **SKILL Packages** (.claude/skills/) - Generated by /memory:skill-memory, /memory:tech-research, etc.
|
||||
2. **Documentation** (.workflow/docs/) - Generated by /memory:docs
|
||||
3. **Module Docs** (**/CLAUDE.md) - Generated by /memory:update-full, /memory:update-related
|
||||
|
||||
## Discovery Tasks
|
||||
|
||||
### 1. Scan SKILL Packages
|
||||
- List all directories in .claude/skills/
|
||||
- For each: extract name, classify type, record path
|
||||
- Types: workflow-progress | codemap-* | style-* | tech_stacks | project_docs
|
||||
|
||||
### 2. Scan Documentation
|
||||
- List directories in .workflow/docs/
|
||||
- For each project: name, path, check README.md, ARCHITECTURE.md existence
|
||||
|
||||
### 3. Scan CLAUDE.md Files
|
||||
- Find all **/CLAUDE.md (exclude: node_modules, .git, dist, build)
|
||||
- Return path list only
|
||||
|
||||
### 4. Identify Gaps
|
||||
- No project SKILL? → "project_skill"
|
||||
- No documentation? → "documentation"
|
||||
- Missing tech stack SKILL? → "tech_stack"
|
||||
- No workflow-progress? → "workflow_history"
|
||||
- <10% modules have CLAUDE.md? → "module_docs_low_coverage"
|
||||
|
||||
### 5. Return JSON:
|
||||
{
|
||||
"skills": [
|
||||
{"name": "claude_dms3", "type": "project_docs", "path": ".claude/skills/claude_dms3"},
|
||||
{"name": "workflow-progress", "type": "workflow_progress", "path": ".claude/skills/workflow-progress"}
|
||||
],
|
||||
"documentation": [
|
||||
{
|
||||
"name": "my_project",
|
||||
"path": ".workflow/docs/my_project",
|
||||
"has_readme": true,
|
||||
"has_architecture": true
|
||||
}
|
||||
],
|
||||
"module_docs": [
|
||||
"src/core/CLAUDE.md",
|
||||
"lib/utils/CLAUDE.md"
|
||||
],
|
||||
"gaps": ["tech_stack", "module_docs_low_coverage"]
|
||||
}
|
||||
|
||||
## Context
|
||||
- Project tech stack: ${JSON.stringify(analysis.technology_stack)}
|
||||
- Check .workflow/.archives for session history
|
||||
- If directories missing, return empty state with recommendations
|
||||
`
|
||||
)
|
||||
```
|
||||
|
||||
**Agent Output**: JSON structure with skills, documentation, module_docs, and gaps
|
||||
|
||||
**Update project.json**:
|
||||
```javascript
|
||||
const memoryDiscovery = JSON.parse(agentOutput);
|
||||
|
||||
projectMeta.memory_resources = {
|
||||
...memoryDiscovery,
|
||||
last_scanned: new Date().toISOString()
|
||||
};
|
||||
|
||||
Write('.workflow/project.json', JSON.stringify(projectMeta, null, 2));
|
||||
```
|
||||
|
||||
**Output Summary**:
|
||||
```
|
||||
Memory Resources Indexed:
|
||||
- SKILL Packages: ${skills.length}
|
||||
- Documentation: ${documentation.length} project(s)
|
||||
- Module Docs: ${module_docs.length} file(s)
|
||||
- Gaps: ${gaps.join(', ') || 'none'}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Regeneration Behavior
|
||||
|
||||
When using `--regenerate` flag:
|
||||
|
||||
1. **Backup existing file**:
|
||||
```bash
|
||||
bash(cp .workflow/project.json .workflow/project.json.backup)
|
||||
```
|
||||
|
||||
2. **Preserve features array**:
|
||||
```javascript
|
||||
const existingMeta = JSON.parse(Read('.workflow/project.json'));
|
||||
const preservedFeatures = existingMeta.features || [];
|
||||
const preservedStats = existingMeta.statistics || {};
|
||||
```
|
||||
|
||||
3. **Re-run cli-explore-agent analysis**
|
||||
|
||||
4. **Re-run memory discovery (Phase 5)**
|
||||
|
||||
5. **Merge preserved data with new analysis**:
|
||||
```javascript
|
||||
const newProjectMeta = {
|
||||
...analysisResults,
|
||||
features: preservedFeatures, // Keep existing features
|
||||
statistics: preservedStats // Keep statistics
|
||||
};
|
||||
```
|
||||
|
||||
6. **Output**:
|
||||
```
|
||||
✓ Project analysis regenerated
|
||||
Backup saved: .workflow/project.json.backup
|
||||
|
||||
Updated:
|
||||
- Technology stack analysis
|
||||
- Architecture overview
|
||||
- Key components discovery
|
||||
- Memory resources index
|
||||
|
||||
Preserved:
|
||||
- ${preservedFeatures.length} existing features
|
||||
- Session statistics
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
### Agent Failure
|
||||
```
|
||||
If cli-explore-agent fails:
|
||||
1. Fall back to basic initialization
|
||||
2. Use get_modules_by_depth.sh for structure
|
||||
3. Create minimal project.json with placeholder overview
|
||||
4. Log warning: "Project initialized with basic analysis. Run /workflow:init --regenerate for full analysis"
|
||||
```
|
||||
|
||||
### Missing Tools
|
||||
```
|
||||
If Gemini CLI unavailable:
|
||||
1. Agent uses Qwen fallback
|
||||
2. If both fail, use bash-only analysis
|
||||
3. Mark in _metadata: "analysis_mode": "bash-fallback"
|
||||
```
|
||||
|
||||
### Invalid Project Root
|
||||
```
|
||||
If not in git repo and empty directory:
|
||||
1. Warn user: "Empty project detected"
|
||||
2. Create minimal project.json
|
||||
3. Suggest: "Add code files and run /workflow:init --regenerate"
|
||||
```
|
||||
|
||||
### Memory Discovery Failures
|
||||
|
||||
**Missing Directories**:
|
||||
```
|
||||
If .claude/skills, .workflow/docs, or CLAUDE.md files not found:
|
||||
1. Return empty state for that category
|
||||
2. Mark in gaps.missing array
|
||||
3. Continue initialization
|
||||
```
|
||||
|
||||
**Metadata Read Failures**:
|
||||
```
|
||||
If SKILL.md files are unreadable:
|
||||
1. Include SKILL with basic info: name (from directory), type (inferred), path
|
||||
2. Log warning: "SKILL package {name} has invalid metadata"
|
||||
3. Continue with other SKILLs
|
||||
```
|
||||
|
||||
**Coverage Check Failures**:
|
||||
```
|
||||
If unable to determine module doc coverage:
|
||||
1. Skip adding "module_docs_low_coverage" to gaps
|
||||
2. Continue with other gap checks
|
||||
```
|
||||
|
||||
**Default Empty State**:
|
||||
```json
|
||||
{
|
||||
"memory_resources": {
|
||||
"skills": [],
|
||||
"documentation": [],
|
||||
"module_docs": [],
|
||||
"gaps": ["project_skill", "documentation", "tech_stack", "workflow_history", "module_docs"],
|
||||
"last_scanned": "ISO_TIMESTAMP"
|
||||
}
|
||||
}
|
||||
```
|
||||
568  .claude/commands/workflow/lite-execute.md  Normal file
@@ -0,0 +1,568 @@
|
||||
---
|
||||
name: lite-execute
|
||||
description: Execute tasks based on in-memory plan, prompt description, or file content
|
||||
argument-hint: "[--in-memory] [\"task description\"|file-path]"
|
||||
allowed-tools: TodoWrite(*), Task(*), Bash(*)
|
||||
---
|
||||
|
||||
# Workflow Lite-Execute Command (/workflow:lite-execute)
|
||||
|
||||
## Overview
|
||||
|
||||
Flexible task execution command supporting three input modes: in-memory plan (from lite-plan), direct prompt description, or file content. Handles execution orchestration, progress tracking, and optional code review.
|
||||
|
||||
**Core capabilities:**
|
||||
- Multi-mode input (in-memory plan, prompt description, or file path)
|
||||
- Execution orchestration (Agent or Codex) with full context
|
||||
- Live progress tracking via TodoWrite at execution call level
|
||||
- Optional code review with selected tool (Gemini, Agent, or custom)
|
||||
- Context continuity across multiple executions
|
||||
- Intelligent format detection (Enhanced Task JSON vs plain text)
|
||||
|
||||
## Usage
|
||||
|
||||
### Command Syntax
|
||||
```bash
|
||||
/workflow:lite-execute [FLAGS] <INPUT>
|
||||
|
||||
# Flags
|
||||
--in-memory Use plan from memory (called by lite-plan)
|
||||
|
||||
# Arguments
|
||||
<input> Task description string, or path to file (required)
|
||||
```
|
||||
|
||||
## Input Modes
|
||||
|
||||
### Mode 1: In-Memory Plan
|
||||
|
||||
**Trigger**: Called by lite-plan after Phase 4 approval with `--in-memory` flag
|
||||
|
||||
**Input Source**: `executionContext` global variable set by lite-plan
|
||||
|
||||
**Content**: Complete execution context (see Data Structures section)
|
||||
|
||||
**Behavior**:
|
||||
- Skip execution method selection (already set by lite-plan)
|
||||
- Directly proceed to execution with full context
|
||||
- All planning artifacts available (exploration, clarifications, plan)
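A minimal guard for this mode, assuming `executionContext` is the global variable set by lite-plan (sketch only):

```javascript
// Mode 1: consume the in-memory plan handed over by lite-plan
if (typeof executionContext === 'undefined' || executionContext === null) {
  throw new Error("No execution context found. Only available when called by lite-plan.")
}

const { planObject, explorationContext, clarificationContext,
        executionMethod, codeReviewTool, originalUserInput } = executionContext
// Execution method and review tool were already chosen in lite-plan, so no AskUserQuestion here
```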
|
||||
|
||||
### Mode 2: Prompt Description
|
||||
|
||||
**Trigger**: User calls with task description string
|
||||
|
||||
**Input**: Simple task description (e.g., "Add unit tests for auth module")
|
||||
|
||||
**Behavior**:
|
||||
- Store prompt as `originalUserInput`
|
||||
- Create simple execution plan from prompt
|
||||
- AskUserQuestion: Select execution method (Agent/Codex/Auto)
|
||||
- AskUserQuestion: Select code review tool (Skip/Gemini/Agent/Other)
|
||||
- Proceed to execution with `originalUserInput` included
|
||||
|
||||
**User Interaction**:
|
||||
```javascript
|
||||
AskUserQuestion({
|
||||
questions: [
|
||||
{
|
||||
question: "Select execution method:",
|
||||
header: "Execution",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: "Agent", description: "@code-developer agent" },
|
||||
{ label: "Codex", description: "codex CLI tool" },
|
||||
{ label: "Auto", description: "Auto-select based on complexity" }
|
||||
]
|
||||
},
|
||||
{
|
||||
question: "Enable code review after execution?",
|
||||
header: "Code Review",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: "Skip", description: "No review" },
|
||||
{ label: "Gemini Review", description: "Gemini CLI tool" },
|
||||
{ label: "Agent Review", description: "Current agent review" }
|
||||
]
|
||||
}
|
||||
]
|
||||
})
|
||||
```
|
||||
|
||||
### Mode 3: File Content
|
||||
|
||||
**Trigger**: User calls with file path
|
||||
|
||||
**Input**: Path to file containing task description or Enhanced Task JSON
|
||||
|
||||
**Step 1: Read and Detect Format**
|
||||
|
||||
```javascript
|
||||
fileContent = Read(filePath)
|
||||
|
||||
// Attempt JSON parsing
|
||||
try {
|
||||
jsonData = JSON.parse(fileContent)
|
||||
|
||||
// Check if Enhanced Task JSON from lite-plan
|
||||
if (jsonData.meta?.workflow === "lite-plan") {
|
||||
// Extract plan data
|
||||
planObject = {
|
||||
summary: jsonData.context.plan.summary,
|
||||
approach: jsonData.context.plan.approach,
|
||||
tasks: jsonData.context.plan.tasks,
|
||||
estimated_time: jsonData.meta.estimated_time,
|
||||
recommended_execution: jsonData.meta.recommended_execution,
|
||||
complexity: jsonData.meta.complexity
|
||||
}
|
||||
explorationContext = jsonData.context.exploration || null
|
||||
clarificationContext = jsonData.context.clarifications || null
|
||||
originalUserInput = jsonData.title
|
||||
|
||||
isEnhancedTaskJson = true
|
||||
} else {
|
||||
// Valid JSON but not Enhanced Task JSON - treat as plain text
|
||||
originalUserInput = fileContent
|
||||
isEnhancedTaskJson = false
|
||||
}
|
||||
} catch {
|
||||
// Not valid JSON - treat as plain text prompt
|
||||
originalUserInput = fileContent
|
||||
isEnhancedTaskJson = false
|
||||
}
|
||||
```
|
||||
|
||||
**Step 2: Create Execution Plan**
|
||||
|
||||
If `isEnhancedTaskJson === true`:
|
||||
- Use extracted `planObject` directly
|
||||
- Skip planning, use lite-plan's existing plan
|
||||
- User still selects execution method and code review
|
||||
|
||||
If `isEnhancedTaskJson === false`:
|
||||
- Treat file content as prompt (same behavior as Mode 2)
|
||||
- Create simple execution plan from content
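For this plain-text branch, a minimal sketch of what the simple execution plan could look like; the field values are illustrative, not mandated by lite-plan:

```javascript
// Wrap a plain-text prompt into the same planObject shape used elsewhere in this document
function createSimplePlan(promptText) {
  const firstLine = promptText.split('\n')[0]
  return {
    summary: firstLine.slice(0, 120),
    approach: "Direct implementation of the user request",
    tasks: [{ title: firstLine, description: promptText }],
    estimated_time: "unknown",
    recommended_execution: "Auto",
    complexity: "Low"
  }
}

planObject = createSimplePlan(originalUserInput)
```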
|
||||
|
||||
**Step 3: User Interaction**
|
||||
|
||||
- AskUserQuestion: Select execution method (Agent/Codex/Auto)
|
||||
- AskUserQuestion: Select code review tool
|
||||
- Proceed to execution with full context
|
||||
|
||||
## Execution Process
|
||||
|
||||
### Workflow Overview
|
||||
|
||||
```
|
||||
Input Processing → Mode Detection
|
||||
|
|
||||
v
|
||||
[Mode 1] --in-memory: Load executionContext → Skip selection
|
||||
[Mode 2] Prompt: Create plan → User selects method + review
|
||||
[Mode 3] File: Detect format → Extract plan OR treat as prompt → User selects
|
||||
|
|
||||
v
|
||||
Execution & Progress Tracking
|
||||
├─ Step 1: Initialize execution tracking
|
||||
├─ Step 2: Create TodoWrite execution list
|
||||
├─ Step 3: Launch execution (Agent or Codex)
|
||||
├─ Step 4: Track execution progress
|
||||
└─ Step 5: Code review (optional)
|
||||
|
|
||||
v
|
||||
Execution Complete
|
||||
```
|
||||
|
||||
## Detailed Execution Steps
|
||||
|
||||
### Step 1: Initialize Execution Tracking
|
||||
|
||||
**Operations**:
|
||||
- Initialize result tracking for multi-execution scenarios
|
||||
- Set up `previousExecutionResults` array for context continuity
|
||||
|
||||
```javascript
|
||||
// Initialize result tracking
|
||||
previousExecutionResults = []
|
||||
```
|
||||
|
||||
### Step 2: Create TodoWrite Execution List
|
||||
|
||||
**Operations**:
|
||||
- Create execution tracking from task list
|
||||
- Typically single execution call for all tasks
|
||||
- Split into multiple calls if task list very large (>10 tasks)
|
||||
|
||||
**Execution Call Creation**:
|
||||
```javascript
|
||||
function createExecutionCalls(tasks) {
|
||||
const taskTitles = tasks.map(t => t.title || t)
|
||||
|
||||
// Single call for ≤10 tasks (most common)
|
||||
if (tasks.length <= 10) {
|
||||
return [{
|
||||
method: executionMethod === "Codex" ? "Codex" : "Agent",
|
||||
taskSummary: taskTitles.length <= 3
|
||||
? taskTitles.join(', ')
|
||||
: `${taskTitles.slice(0, 2).join(', ')}, and ${taskTitles.length - 2} more`,
|
||||
tasks: tasks
|
||||
}]
|
||||
}
|
||||
|
||||
// Split into multiple calls for >10 tasks
|
||||
const callSize = 5
|
||||
const calls = []
|
||||
for (let i = 0; i < tasks.length; i += callSize) {
|
||||
const batchTasks = tasks.slice(i, i + callSize)
|
||||
const batchTitles = batchTasks.map(t => t.title || t)
|
||||
calls.push({
|
||||
method: executionMethod === "Codex" ? "Codex" : "Agent",
|
||||
taskSummary: `Tasks ${i + 1}-${Math.min(i + callSize, tasks.length)}: ${batchTitles[0]}...`,
|
||||
tasks: batchTasks
|
||||
})
|
||||
}
|
||||
return calls
|
||||
}
|
||||
|
||||
// Create execution calls with IDs
|
||||
executionCalls = createExecutionCalls(planObject.tasks).map((call, index) => ({
|
||||
...call,
|
||||
id: `[${call.method}-${index+1}]`
|
||||
}))
|
||||
|
||||
// Create TodoWrite list
|
||||
TodoWrite({
|
||||
todos: executionCalls.map(call => ({
|
||||
content: `${call.id} (${call.taskSummary})`,
|
||||
status: "pending",
|
||||
activeForm: `Executing ${call.id} (${call.taskSummary})`
|
||||
}))
|
||||
})
|
||||
```
|
||||
|
||||
**Example Execution Lists**:
|
||||
```
|
||||
Single call (typical):
|
||||
[ ] [Agent-1] (Create AuthService, Add JWT utilities, Implement middleware)
|
||||
|
||||
Few tasks:
|
||||
[ ] [Codex-1] (Create AuthService, Add JWT utilities, and 3 more)
|
||||
|
||||
Large task sets (>10):
|
||||
[ ] [Agent-1] (Tasks 1-5: Create AuthService, Add JWT utilities, ...)
|
||||
[ ] [Agent-2] (Tasks 6-10: Create tests, Update docs, ...)
|
||||
```
|
||||
|
||||
### Step 3: Launch Execution
|
||||
|
||||
**IMPORTANT**: CLI execution MUST run in foreground (no background execution)
|
||||
|
||||
**Execution Loop**:
|
||||
```javascript
|
||||
for (currentIndex = 0; currentIndex < executionCalls.length; currentIndex++) {
|
||||
const currentCall = executionCalls[currentIndex]
|
||||
|
||||
// Update TodoWrite: mark current call in_progress
|
||||
// Launch execution with previousExecutionResults context
|
||||
// After completion: collect result, add to previousExecutionResults
|
||||
// Update TodoWrite: mark current call completed
|
||||
}
|
||||
```
|
||||
|
||||
**Option A: Agent Execution**
|
||||
|
||||
When to use:
|
||||
- `executionMethod = "Agent"`
|
||||
- `executionMethod = "Auto" AND complexity = "Low"`
|
||||
|
||||
Agent call format:
|
||||
```javascript
|
||||
function formatTaskForAgent(task, index) {
|
||||
return `
|
||||
### Task ${index + 1}: ${task.title}
|
||||
**File**: ${task.file}
|
||||
**Action**: ${task.action}
|
||||
**Description**: ${task.description}
|
||||
|
||||
**Implementation Steps**:
|
||||
${task.implementation.map((step, i) => `${i + 1}. ${step}`).join('\n')}
|
||||
|
||||
**Reference**:
|
||||
- Pattern: ${task.reference.pattern}
|
||||
- Example Files: ${task.reference.files.join(', ')}
|
||||
- Guidance: ${task.reference.examples}
|
||||
|
||||
**Acceptance Criteria**:
|
||||
${task.acceptance.map((criterion, i) => `${i + 1}. ${criterion}`).join('\n')}
|
||||
`
|
||||
}
|
||||
|
||||
Task(
|
||||
subagent_type="code-developer",
|
||||
description="Implement planned tasks",
|
||||
prompt=`
|
||||
${originalUserInput ? `## Original User Request\n${originalUserInput}\n\n` : ''}
|
||||
|
||||
## Implementation Plan
|
||||
|
||||
**Summary**: ${planObject.summary}
|
||||
**Approach**: ${planObject.approach}
|
||||
|
||||
## Task Breakdown (${planObject.tasks.length} tasks)
|
||||
${planObject.tasks.map((task, i) => formatTaskForAgent(task, i)).join('\n')}
|
||||
|
||||
${previousExecutionResults.length > 0 ? `\n## Previous Execution Results\n${previousExecutionResults.map(result => `
|
||||
[${result.executionId}] ${result.status}
|
||||
Tasks: ${result.tasksSummary}
|
||||
Completion: ${result.completionSummary}
|
||||
Outputs: ${result.keyOutputs || 'See git diff'}
|
||||
${result.notes ? `Notes: ${result.notes}` : ''}
|
||||
`).join('\n---\n')}` : ''}
|
||||
|
||||
## Code Context
|
||||
${explorationContext || "No exploration performed"}
|
||||
|
||||
${clarificationContext ? `\n## Clarifications\n${JSON.stringify(clarificationContext, null, 2)}` : ''}
|
||||
|
||||
## Instructions
|
||||
- Reference original request to ensure alignment
|
||||
- Review previous results to understand completed work
|
||||
- Build on previous work, avoid duplication
|
||||
- Test functionality as you implement
|
||||
- Complete all assigned tasks
|
||||
`
|
||||
)
|
||||
```
|
||||
|
||||
**Result Collection**: After completion, collect result following `executionResult` structure (see Data Structures section)
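A minimal sketch of that collection step, assuming the orchestrator summarizes the agent output itself; the field contents are illustrative:

```javascript
// Build an executionResult entry and keep it for the next call's context
const executionResult = {
  executionId: currentCall.id,                 // e.g., "[Agent-1]"
  status: "completed",                         // or "partial" / "failed", judged from agent output
  tasksSummary: currentCall.taskSummary,
  completionSummary: "All assigned tasks implemented and smoke-tested",
  keyOutputs: "src/auth/AuthService.ts, src/auth/jwt.ts",   // illustrative file list
  notes: "JWT secret read from env; later calls should reuse it"
}

previousExecutionResults.push(executionResult)
```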
|
||||
|
||||
**Option B: CLI Execution (Codex)**
|
||||
|
||||
When to use:
|
||||
- `executionMethod = "Codex"`
|
||||
- `executionMethod = "Auto" AND complexity = "Medium" or "High"`
|
||||
|
||||
Command format:
|
||||
```bash
|
||||
function formatTaskForCodex(task, index) {
|
||||
return `
|
||||
${index + 1}. ${task.title} (${task.file})
|
||||
Action: ${task.action}
|
||||
What: ${task.description}
|
||||
How:
|
||||
${task.implementation.map((step, i) => ` ${i + 1}. ${step}`).join('\n')}
|
||||
Reference: ${task.reference.pattern} (see ${task.reference.files.join(', ')})
|
||||
Guidance: ${task.reference.examples}
|
||||
Verify:
|
||||
${task.acceptance.map((criterion, i) => ` - ${criterion}`).join('\n')}
|
||||
`
|
||||
}
|
||||
|
||||
codex --full-auto exec "
|
||||
${originalUserInput ? `## Original User Request\n${originalUserInput}\n\n` : ''}
|
||||
|
||||
## Implementation Plan
|
||||
|
||||
TASK: ${planObject.summary}
|
||||
APPROACH: ${planObject.approach}
|
||||
|
||||
### Task Breakdown (${planObject.tasks.length} tasks)
|
||||
${planObject.tasks.map((task, i) => formatTaskForCodex(task, i)).join('\n')}
|
||||
|
||||
${previousExecutionResults.length > 0 ? `\n### Previous Execution Results\n${previousExecutionResults.map(result => `
|
||||
[${result.executionId}] ${result.status}
|
||||
Tasks: ${result.tasksSummary}
|
||||
Status: ${result.completionSummary}
|
||||
Outputs: ${result.keyOutputs || 'See git diff'}
|
||||
${result.notes ? `Notes: ${result.notes}` : ''}
|
||||
`).join('\n---\n')}
|
||||
|
||||
IMPORTANT: Review previous results. Build on completed work. Avoid duplication.
|
||||
` : ''}
|
||||
|
||||
### Code Context from Exploration
|
||||
${explorationContext ? `
|
||||
Project Structure: ${explorationContext.project_structure || 'Standard structure'}
|
||||
Relevant Files: ${explorationContext.relevant_files?.join(', ') || 'TBD'}
|
||||
Current Patterns: ${explorationContext.patterns || 'Follow existing conventions'}
|
||||
Integration Points: ${explorationContext.dependencies || 'None specified'}
|
||||
Constraints: ${explorationContext.constraints || 'None'}
|
||||
` : 'No prior exploration - analyze codebase as needed'}
|
||||
|
||||
${clarificationContext ? `\n### User Clarifications\n${Object.entries(clarificationContext).map(([q, a]) => `${q}: ${a}`).join('\n')}` : ''}
|
||||
|
||||
## Execution Instructions
|
||||
- Reference original request to ensure alignment
|
||||
- Review previous results for context continuity
|
||||
- Build on previous work, don't duplicate completed tasks
|
||||
- Complete all assigned tasks in single execution
|
||||
- Test functionality as you implement
|
||||
|
||||
Complexity: ${planObject.complexity}
|
||||
" --skip-git-repo-check -s danger-full-access
|
||||
```
|
||||
|
||||
**Execution with tracking**:
|
||||
```javascript
|
||||
// Launch CLI in foreground (NOT background)
|
||||
bash_result = Bash(
|
||||
command=cli_command,
|
||||
timeout=600000 // 10 minutes
|
||||
)
|
||||
|
||||
// Update TodoWrite when execution completes
|
||||
```
|
||||
|
||||
**Result Collection**: After completion, analyze output and collect result following `executionResult` structure
|
||||
|
||||
### Step 4: Track Execution Progress
|
||||
|
||||
**Real-time TodoWrite Updates** at execution call level:
|
||||
|
||||
```javascript
|
||||
// When call starts
|
||||
TodoWrite({
|
||||
todos: [
|
||||
{ content: "[Agent-1] (Implement auth + Create JWT utils)", status: "in_progress", activeForm: "..." },
|
||||
{ content: "[Agent-2] (Add middleware + Update routes)", status: "pending", activeForm: "..." }
|
||||
]
|
||||
})
|
||||
|
||||
// When call completes
|
||||
TodoWrite({
|
||||
todos: [
|
||||
{ content: "[Agent-1] (Implement auth + Create JWT utils)", status: "completed", activeForm: "..." },
|
||||
{ content: "[Agent-2] (Add middleware + Update routes)", status: "in_progress", activeForm: "..." }
|
||||
]
|
||||
})
|
||||
```
|
||||
|
||||
**User Visibility**:
|
||||
- User sees execution call progress (not individual task progress)
|
||||
- Current execution highlighted as "in_progress"
|
||||
- Completed executions marked with checkmark
|
||||
- Each execution shows task summary for context
|
||||
|
||||
### Step 5: Code Review (Optional)
|
||||
|
||||
**Skip Condition**: Only run if `codeReviewTool ≠ "Skip"`
|
||||
|
||||
**Operations**:
|
||||
- Agent Review: Current agent performs direct review
|
||||
- Gemini Review: Execute gemini CLI with review prompt
|
||||
- Custom tool: Execute specified CLI tool (qwen, codex, etc.)
|
||||
|
||||
**Command Formats**:
|
||||
|
||||
```bash
|
||||
# Agent Review: Direct agent review (no CLI)
|
||||
# Uses analysis prompt and TodoWrite tools directly
|
||||
|
||||
# Gemini Review:
|
||||
gemini -p "
|
||||
PURPOSE: Code review for implemented changes
|
||||
TASK: • Analyze quality • Identify issues • Suggest improvements
|
||||
MODE: analysis
|
||||
CONTEXT: @**/* | Memory: Review lite-execute changes
|
||||
EXPECTED: Quality assessment with recommendations
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-quality.txt) | Focus on recent changes | analysis=READ-ONLY
|
||||
"
|
||||
|
||||
# Qwen Review (custom tool via "Other"):
|
||||
qwen -p "
|
||||
PURPOSE: Code review for implemented changes
|
||||
TASK: • Analyze quality • Identify issues • Suggest improvements
|
||||
MODE: analysis
|
||||
CONTEXT: @**/* | Memory: Review lite-execute changes
|
||||
EXPECTED: Quality assessment with recommendations
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-quality.txt) | Focus on recent changes | analysis=READ-ONLY
|
||||
"
|
||||
|
||||
# Codex Review (custom tool via "Other"):
|
||||
codex --full-auto exec "Review recent code changes for quality, potential issues, and improvements" --skip-git-repo-check -s danger-full-access
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
### Execution Intelligence
|
||||
|
||||
1. **Context Continuity**: Each execution call receives previous results
|
||||
- Prevents duplication across multiple executions
|
||||
- Maintains coherent implementation flow
|
||||
- Builds on completed work
|
||||
|
||||
2. **Execution Call Tracking**: Progress at call level, not task level
|
||||
- Each call handles all or subset of tasks
|
||||
- Clear visibility of current execution
|
||||
- Simple progress updates
|
||||
|
||||
3. **Flexible Execution**: Multiple input modes supported
|
||||
- In-memory: Seamless lite-plan integration
|
||||
- Prompt: Quick standalone execution
|
||||
- File: Intelligent format detection
|
||||
- Enhanced Task JSON (lite-plan export): Full plan extraction
|
||||
- Plain text: Uses as prompt
|
||||
|
||||
### Task Management
|
||||
|
||||
1. **Live Progress Updates**: Real-time TodoWrite tracking
|
||||
- Execution calls created before execution starts
|
||||
- Updated as executions progress
|
||||
- Clear completion status
|
||||
|
||||
2. **Simple Execution**: Straightforward task handling
|
||||
- All tasks in single call (typical)
|
||||
- Split only for very large task sets (>10)
|
||||
- Agent/Codex determines optimal execution order
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Error | Cause | Resolution |
|
||||
|-------|-------|------------|
|
||||
| Missing executionContext | --in-memory without context | Error: "No execution context found. Only available when called by lite-plan." |
|
||||
| File not found | File path doesn't exist | Error: "File not found: {path}. Check file path." |
|
||||
| Empty file | File exists but no content | Error: "File is empty: {path}. Provide task description." |
|
||||
| Invalid Enhanced Task JSON | JSON missing required fields | Warning: "Missing required fields. Treating as plain text." |
|
||||
| Malformed JSON | JSON parsing fails | Treat as plain text (expected for non-JSON files) |
|
||||
| Execution failure | Agent/Codex crashes | Display error, save partial progress, suggest retry |
|
||||
| Codex unavailable | Codex not installed | Show installation instructions, offer Agent execution |
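A minimal sketch of the file-input checks behind the first few rows, assuming `Read` fails when the path does not exist:

```javascript
// Validate file input (Mode 3) before format detection
function loadFileInput(filePath) {
  let fileContent
  try {
    fileContent = Read(filePath)
  } catch {
    throw new Error(`File not found: ${filePath}. Check file path.`)
  }
  if (!fileContent || fileContent.trim() === "") {
    throw new Error(`File is empty: ${filePath}. Provide task description.`)
  }
  return fileContent
}
```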
|
||||
|
||||
## Data Structures
|
||||
|
||||
### executionContext (Input - Mode 1)
|
||||
|
||||
Passed from lite-plan via global variable:
|
||||
|
||||
```javascript
|
||||
{
|
||||
planObject: {
|
||||
summary: string,
|
||||
approach: string,
|
||||
tasks: [...],
|
||||
estimated_time: string,
|
||||
recommended_execution: string,
|
||||
complexity: string
|
||||
},
|
||||
explorationContext: {...} | null,
|
||||
clarificationContext: {...} | null,
|
||||
executionMethod: "Agent" | "Codex" | "Auto",
|
||||
codeReviewTool: "Skip" | "Gemini Review" | "Agent Review" | string,
|
||||
originalUserInput: string
|
||||
}
|
||||
```
|
||||
|
||||
### executionResult (Output)
|
||||
|
||||
Collected after each execution call completes:
|
||||
|
||||
```javascript
|
||||
{
|
||||
executionId: string, // e.g., "[Agent-1]", "[Codex-1]"
|
||||
status: "completed" | "partial" | "failed",
|
||||
tasksSummary: string, // Brief description of tasks handled
|
||||
completionSummary: string, // What was completed
|
||||
keyOutputs: string, // Files created/modified, key changes
|
||||
notes: string // Important context for next execution
|
||||
}
|
||||
```
|
||||
|
||||
Appended to `previousExecutionResults` array for context continuity in multi-execution scenarios.
|
||||
File diff suppressed because it is too large
@@ -70,7 +70,7 @@ CONTEXT: Existing user database schema, REST API endpoints
|
||||
|
||||
**Validation**:
|
||||
- Session ID successfully extracted
|
||||
- Session directory `.workflow/[sessionId]/` exists
|
||||
- Session directory `.workflow/active/[sessionId]/` exists
|
||||
|
||||
**TodoWrite**: Mark phase 1 completed, phase 2 in_progress
|
||||
|
||||
@@ -87,7 +87,7 @@ CONTEXT: Existing user database schema, REST API endpoints
|
||||
|
||||
**Parse Output**:
|
||||
- Extract: context-package.json path (store as `contextPath`)
|
||||
- Typical pattern: `.workflow/[sessionId]/.process/context-package.json`
|
||||
- Typical pattern: `.workflow/active/[sessionId]/.process/context-package.json`
|
||||
|
||||
**Validation**:
|
||||
- Context package path extracted
|
||||
@@ -141,7 +141,7 @@ CONTEXT: Existing user database schema, REST API endpoints
|
||||
- Verify: CONFLICT_RESOLUTION.md file path (if executed)
|
||||
|
||||
**Validation**:
|
||||
- File `.workflow/[sessionId]/.process/CONFLICT_RESOLUTION.md` exists (if executed)
|
||||
- File `.workflow/active/[sessionId]/.process/CONFLICT_RESOLUTION.md` exists (if executed)
|
||||
|
||||
**Skip Behavior**:
|
||||
- If conflict_risk is "none" or "low", skip directly to Phase 3.5
|
||||
@@ -232,45 +232,41 @@ SlashCommand(command="/workflow:tools:task-generate-agent --session [sessionId]
|
||||
**Input**: `sessionId` from Phase 1
|
||||
|
||||
**Validation**:
|
||||
- `.workflow/[sessionId]/IMPL_PLAN.md` exists
|
||||
- `.workflow/[sessionId]/.task/IMPL-*.json` exists (at least one)
|
||||
- `.workflow/[sessionId]/TODO_LIST.md` exists
|
||||
- `.workflow/active/[sessionId]/IMPL_PLAN.md` exists
|
||||
- `.workflow/active/[sessionId]/.task/IMPL-*.json` exists (at least one)
|
||||
- `.workflow/active/[sessionId]/TODO_LIST.md` exists
|
||||
|
||||
<!-- TodoWrite: When task-generate-agent invoked, INSERT 3 task-generate-agent tasks -->
|
||||
<!-- TodoWrite: When task-generate-agent invoked, ATTACH 1 agent task -->
|
||||
|
||||
**TodoWrite Update (Phase 4 SlashCommand invoked - tasks attached)**:
|
||||
**TodoWrite Update (Phase 4 SlashCommand invoked - agent task attached)**:
|
||||
```json
|
||||
[
|
||||
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
||||
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
|
||||
{"content": "Phase 4.1: Discovery - analyze requirements (task-generate-agent)", "status": "in_progress", "activeForm": "Analyzing requirements"},
|
||||
{"content": "Phase 4.2: Planning - design tasks (task-generate-agent)", "status": "pending", "activeForm": "Designing tasks"},
|
||||
{"content": "Phase 4.3: Output - generate JSONs (task-generate-agent)", "status": "pending", "activeForm": "Generating task JSONs"}
|
||||
{"content": "Execute task-generate-agent", "status": "in_progress", "activeForm": "Executing task-generate-agent"}
|
||||
]
|
||||
```
|
||||
|
||||
**Note**: SlashCommand invocation **attaches** task-generate-agent's 3 tasks. Orchestrator **executes** these tasks.
|
||||
**Note**: Single agent task attached. Agent autonomously completes discovery, planning, and output generation internally.
|
||||
|
||||
**Next Action**: Tasks attached → **Execute Phase 4.1-4.3** sequentially
|
||||
<!-- TodoWrite: After agent completes, mark task as completed -->
|
||||
|
||||
<!-- TodoWrite: After Phase 4 tasks complete, REMOVE Phase 4.1-4.3, restore to orchestrator view -->
|
||||
|
||||
**TodoWrite Update (Phase 4 completed - tasks collapsed)**:
|
||||
**TodoWrite Update (Phase 4 completed)**:
|
||||
```json
|
||||
[
|
||||
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
||||
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
|
||||
{"content": "Execute task generation", "status": "completed", "activeForm": "Executing task generation"}
|
||||
{"content": "Execute task-generate-agent", "status": "completed", "activeForm": "Executing task-generate-agent"}
|
||||
]
|
||||
```
|
||||
|
||||
**Note**: Phase 4 tasks completed and collapsed to summary.
|
||||
**Note**: Agent task completed. No collapse needed (single task).
|
||||
|
||||
**Return to User**:
|
||||
```
|
||||
Planning complete for session: [sessionId]
|
||||
Tasks generated: [count]
|
||||
Plan: .workflow/[sessionId]/IMPL_PLAN.md
|
||||
Plan: .workflow/active/[sessionId]/IMPL_PLAN.md
|
||||
|
||||
Recommended Next Steps:
|
||||
1. /workflow:action-plan-verify --session [sessionId] # Verify plan quality before execution
|
||||
@@ -288,31 +284,35 @@ Quality Gate: Consider running /workflow:action-plan-verify to catch issues earl
|
||||
|
||||
1. **Task Attachment** (when SlashCommand invoked):
|
||||
- Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
|
||||
- Example: `/workflow:tools:context-gather` attaches 3 sub-tasks (Phase 2.1, 2.2, 2.3)
|
||||
- **Phase 2, 3**: Multiple sub-tasks attached (e.g., Phase 2.1, 2.2, 2.3)
|
||||
- **Phase 4**: Single agent task attached (e.g., "Execute task-generate-agent")
|
||||
- First attached task marked as `in_progress`, others as `pending`
|
||||
- Orchestrator **executes** these attached tasks sequentially
|
||||
|
||||
2. **Task Collapse** (after sub-tasks complete):
|
||||
- Remove detailed sub-tasks from TodoWrite
|
||||
- **Applies to Phase 2, 3**: Remove detailed sub-tasks from TodoWrite
|
||||
- **Collapse** to high-level phase summary
|
||||
- Example: Phase 2.1-2.3 collapse to "Execute context gathering: completed"
|
||||
- **Phase 4**: No collapse needed (single task, just mark completed)
|
||||
- Maintains clean orchestrator-level view
|
||||
|
||||
3. **Continuous Execution**:
|
||||
- After collapse, automatically proceed to next pending phase
|
||||
- After completion, automatically proceed to next pending phase
|
||||
- No user intervention required between phases
|
||||
- TodoWrite dynamically reflects current execution state
|
||||
|
||||
**Lifecycle Summary**: Initial pending tasks → Phase invoked (tasks ATTACHED) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary) → Next phase begins → Repeat until all phases complete.
|
||||
**Lifecycle Summary**: Initial pending tasks → Phase invoked (tasks ATTACHED) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary for Phase 2/3, or marked completed for Phase 4) → Next phase begins → Repeat until all phases complete.
|
||||
|
||||
### Benefits
|
||||
|
||||
- ✓ Real-time visibility into sub-task execution
|
||||
- ✓ Clear mental model: SlashCommand = attach → execute → collapse
|
||||
- ✓ Clear mental model: SlashCommand = attach → execute → collapse (Phase 2/3) or complete (Phase 4)
|
||||
- ✓ Clean summary after completion
|
||||
- ✓ Easy to track workflow progress
|
||||
|
||||
**Note**: See individual Phase descriptions (Phase 2, 3, 4) for detailed TodoWrite Update examples with full JSON structures.
|
||||
**Note**: See individual Phase descriptions for detailed TodoWrite Update examples:
|
||||
- **Phase 2, 3**: Multiple sub-tasks with attach/collapse pattern
|
||||
- **Phase 4**: Single agent task (no collapse needed)
|
||||
|
||||
## Input Processing
|
||||
|
||||
@@ -425,20 +425,21 @@ Conditional Branch: Check conflict_risk
|
||||
└─ ELSE: Skip Phase 3, proceed to Phase 4
|
||||
↓
|
||||
Phase 4: Task Generation (SlashCommand invoked)
|
||||
→ ATTACH 3 tasks: ← ATTACHED
|
||||
- Phase 4.1: Discovery - analyze requirements
|
||||
- Phase 4.2: Planning - design tasks
|
||||
- Phase 4.3: Output - generate JSONs
|
||||
→ Execute Phase 4.1-4.3
|
||||
→ COLLAPSE tasks ← COLLAPSED
|
||||
→ ATTACH 1 agent task: ← ATTACHED
|
||||
- Execute task-generate-agent
|
||||
→ Agent autonomously completes internally:
|
||||
(discovery → planning → output)
|
||||
→ Outputs: IMPL_PLAN.md, IMPL-*.json, TODO_LIST.md
|
||||
↓
|
||||
Return summary to user
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
- **← ATTACHED**: Sub-tasks attached to TodoWrite when SlashCommand invoked
|
||||
- **← COLLAPSED**: Sub-tasks collapsed to summary after completion
|
||||
- **← ATTACHED**: Tasks attached to TodoWrite when SlashCommand invoked
|
||||
- Phase 2, 3: Multiple sub-tasks
|
||||
- Phase 4: Single agent task
|
||||
- **← COLLAPSED**: Sub-tasks collapsed to summary after completion (Phase 2, 3 only)
|
||||
- **Phase 4**: Single agent task, no collapse (just mark completed)
|
||||
- **Conditional Branch**: Phase 3 only executes if conflict_risk ≥ medium
|
||||
- **Continuous Flow**: No user intervention between phases
|
||||
|
||||
|
||||
.claude/commands/workflow/replan.md (new file, 470 lines)
@@ -0,0 +1,470 @@
|
||||
---
|
||||
name: replan
|
||||
description: Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning
|
||||
argument-hint: "[--session session-id] [task-id] \"requirements\"|file.md [--interactive]"
|
||||
allowed-tools: Read(*), Write(*), Edit(*), TodoWrite(*), Glob(*), Bash(*)
|
||||
---
|
||||
|
||||
# Workflow Replan Command
|
||||
|
||||
## Overview
|
||||
Intelligently replans workflow sessions or individual tasks with interactive boundary clarification and comprehensive artifact updates.
|
||||
|
||||
**Core Capabilities**:
|
||||
- **Session Replan**: Updates multiple artifacts (IMPL_PLAN.md, TODO_LIST.md, task JSONs)
|
||||
- **Task Replan**: Focused updates within session context
|
||||
- **Interactive Clarification**: Guided questioning to define modification boundaries
|
||||
- **Impact Analysis**: Automatic detection of affected files and dependencies
|
||||
- **Backup Management**: Preserves previous versions with restore capability
|
||||
|
||||
## Operation Modes
|
||||
|
||||
### Session Replan Mode
|
||||
|
||||
```bash
|
||||
# Auto-detect active session
|
||||
/workflow:replan "添加双因素认证支持"
|
||||
|
||||
# Explicit session
|
||||
/workflow:replan --session WFS-oauth "添加双因素认证支持"
|
||||
|
||||
# File-based input
|
||||
/workflow:replan --session WFS-oauth requirements-update.md
|
||||
|
||||
# Interactive mode
|
||||
/workflow:replan --interactive
|
||||
```
|
||||
|
||||
### Task Replan Mode
|
||||
|
||||
```bash
|
||||
# Direct task update
|
||||
/workflow:replan IMPL-1 "修改为使用 OAuth2.0 标准"
|
||||
|
||||
# Task with explicit session
|
||||
/workflow:replan --session WFS-oauth IMPL-2 "增加单元测试覆盖率到 90%"
|
||||
|
||||
# Interactive mode
|
||||
/workflow:replan IMPL-1 --interactive
|
||||
```
|
||||
|
||||
## Execution Lifecycle
|
||||
|
||||
### Phase 1: Mode Detection & Session Discovery
|
||||
|
||||
**Process**:
|
||||
1. **Detect Operation Mode**:
|
||||
- Check if task ID provided (IMPL-N or IMPL-N.M format) → Task mode
|
||||
- Otherwise → Session mode
|
||||
|
||||
2. **Discover/Validate Session**:
|
||||
- Use `--session` flag if provided
|
||||
- Otherwise auto-detect from `.workflow/active/`
|
||||
- Validate session exists
|
||||
|
||||
3. **Load Session Context**:
|
||||
- Read `workflow-session.json`
|
||||
- List existing tasks
|
||||
- Read `IMPL_PLAN.md` and `TODO_LIST.md`
|
||||
|
||||
**Output**: Session validated, context loaded, mode determined
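A minimal sketch of this phase, reusing the document's tool-call conventions; `firstArgument` and `sessionFlag` stand in for the parsed command arguments and are assumptions:

```javascript
// Phase 1 sketch: decide between task mode and session mode, then resolve the session
const taskIdPattern = /^IMPL-\d+(\.\d+)?$/            // IMPL-N or IMPL-N.M
const mode = taskIdPattern.test(firstArgument) ? "task" : "session"

// Explicit --session flag wins; otherwise auto-detect under .workflow/active/
const sessionId = sessionFlag
  || Bash(command="find .workflow/active/ -name 'WFS-*' -type d | head -1 | xargs basename").trim()

if (!sessionId) {
  throw new Error("No active session found. Run /workflow:session:start to create a session")
}

const session = JSON.parse(Read(`.workflow/active/${sessionId}/workflow-session.json`))
```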
|
||||
|
||||
---
|
||||
|
||||
### Phase 2: Interactive Requirement Clarification
|
||||
|
||||
**Purpose**: Define modification scope through guided questioning
|
||||
|
||||
#### Session Mode Questions
|
||||
|
||||
**Q1: Modification Scope**
|
||||
```javascript
|
||||
Options:
|
||||
- 仅更新任务细节 (tasks_only)
|
||||
- 修改规划方案 (plan_update)
|
||||
- 重构任务结构 (task_restructure)
|
||||
- 全面重规划 (comprehensive)
|
||||
```
|
||||
|
||||
**Q2: Affected Modules** (if scope >= plan_update)
|
||||
```javascript
|
||||
Options: Dynamically generated from existing tasks' focus_paths
|
||||
- 认证模块 (src/auth)
|
||||
- 用户管理 (src/user)
|
||||
- 全部模块
|
||||
```
|
||||
|
||||
**Q3: Task Changes** (if scope >= task_restructure)
|
||||
```javascript
|
||||
Options:
|
||||
- 添加新任务
|
||||
- 删除现有任务
|
||||
- 合并任务
|
||||
- 拆分任务
|
||||
- 仅更新内容
|
||||
```
|
||||
|
||||
**Q4: Dependency Changes**
|
||||
```javascript
|
||||
Options:
|
||||
- 是,需要重新梳理依赖
|
||||
- 否,保持现有依赖
|
||||
```
|
||||
|
||||
#### Task Mode Questions
|
||||
|
||||
**Q1: Update Type**
|
||||
```javascript
|
||||
Options:
|
||||
- 需求和验收标准 (requirements & acceptance)
|
||||
- 实现方案 (implementation_approach)
|
||||
- 文件范围 (focus_paths)
|
||||
- 依赖关系 (depends_on)
|
||||
- 全部更新
|
||||
```
|
||||
|
||||
**Q2: Ripple Effect**
|
||||
```javascript
|
||||
Options:
|
||||
- 是,需要同步更新依赖任务
|
||||
- 否,仅影响当前任务
|
||||
- 不确定,请帮我分析
|
||||
```
|
||||
|
||||
**Output**: User selections stored, modification boundaries defined
|
||||
|
||||
---
|
||||
|
||||
### Phase 3: Impact Analysis & Planning
|
||||
|
||||
**Step 3.1: Analyze Required Changes**
|
||||
|
||||
Determine affected files based on clarification:
|
||||
|
||||
```typescript
|
||||
interface ImpactAnalysis {
|
||||
affected_files: {
|
||||
impl_plan: boolean;
|
||||
todo_list: boolean;
|
||||
session_meta: boolean;
|
||||
tasks: string[];
|
||||
};
|
||||
|
||||
operations: {
|
||||
type: 'create' | 'update' | 'delete' | 'merge' | 'split';
|
||||
target: string;
|
||||
reason: string;
|
||||
}[];
|
||||
|
||||
backup_strategy: {
|
||||
timestamp: string;
|
||||
files: string[];
|
||||
};
|
||||
}
|
||||
```
|
||||
|
||||
**Step 3.2: Generate Modification Plan**
|
||||
|
||||
```markdown
|
||||
## 修改计划
|
||||
|
||||
### 影响范围
|
||||
- [ ] IMPL_PLAN.md: 更新技术方案第 3 节
|
||||
- [ ] TODO_LIST.md: 添加 2 个新任务,删除 1 个废弃任务
|
||||
- [ ] IMPL-001.json: 更新实现方案
|
||||
- [ ] workflow-session.json: 更新任务计数
|
||||
|
||||
### 变更操作
|
||||
1. **创建**: IMPL-004.json (双因素认证实现)
|
||||
2. **更新**: IMPL-001.json (添加 2FA 准备工作)
|
||||
3. **删除**: IMPL-003.json (已被新方案替代)
|
||||
```
|
||||
|
||||
**Step 3.3: User Confirmation**
|
||||
|
||||
```javascript
|
||||
Options:
|
||||
- 确认执行: 开始应用所有修改
|
||||
- 调整计划: 重新回答问题调整范围
|
||||
- 取消操作: 放弃本次重规划
|
||||
```
|
||||
|
||||
**Output**: Modification plan confirmed or adjusted
|
||||
|
||||
---
|
||||
|
||||
### Phase 4: Backup Creation
|
||||
|
||||
**Process**:
|
||||
|
||||
1. **Create Backup Directory**:
|
||||
```bash
|
||||
timestamp=$(date -u +"%Y-%m-%dT%H-%M-%S")
|
||||
backup_dir=".workflow/active/$SESSION_ID/.process/backup/replan-$timestamp"
|
||||
mkdir -p "$backup_dir"
|
||||
```
|
||||
|
||||
2. **Backup All Affected Files**:
|
||||
- IMPL_PLAN.md
|
||||
- TODO_LIST.md
|
||||
- workflow-session.json
|
||||
- Affected task JSONs
|
||||
|
||||
3. **Create Backup Manifest**:
|
||||
```markdown
|
||||
# Replan Backup Manifest
|
||||
|
||||
**Timestamp**: {timestamp}
|
||||
**Reason**: {replan_reason}
|
||||
**Scope**: {modification_scope}
|
||||
|
||||
## Restoration Command
|
||||
cp {backup_dir}/* .workflow/active/{session}/
|
||||
```
|
||||
|
||||
**Output**: All files safely backed up with manifest
|
||||
|
||||
---
|
||||
|
||||
### Phase 5: Apply Modifications
|
||||
|
||||
**Step 5.1: Update IMPL_PLAN.md** (if needed)
|
||||
|
||||
Use Edit tool to modify specific sections:
|
||||
- Update affected technical sections
|
||||
- Update modification date
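A hedged example of such an edit, assuming the plan carries a "Last Updated" line; the old/new strings are placeholders:

```javascript
// Targeted edit of IMPL_PLAN.md rather than a full rewrite
Edit({
  file_path: `.workflow/active/${SESSION_ID}/IMPL_PLAN.md`,
  old_string: "Last Updated: 2025-01-10",
  new_string: `Last Updated: ${new Date().toISOString().slice(0, 10)}`
})
```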
|
||||
|
||||
**Step 5.2: Update TODO_LIST.md** (if needed)
|
||||
|
||||
- Add new tasks with `[ ]` checkbox
|
||||
- Mark deleted tasks as `[x] ~~task~~ (已废弃)`
|
||||
- Update modified task descriptions
|
||||
|
||||
**Step 5.3: Update Task JSONs**
|
||||
|
||||
For each affected task:
|
||||
```typescript
|
||||
const updated_task = {
|
||||
...task,
|
||||
context: {
|
||||
...task.context,
|
||||
requirements: [...updated_requirements],
|
||||
acceptance: [...updated_acceptance]
|
||||
},
|
||||
flow_control: {
|
||||
...task.flow_control,
|
||||
implementation_approach: [...updated_steps]
|
||||
}
|
||||
};
|
||||
|
||||
Write({
|
||||
file_path: `.workflow/active/${SESSION_ID}/.task/${task_id}.json`,
|
||||
content: JSON.stringify(updated_task, null, 2)
|
||||
});
|
||||
```
|
||||
|
||||
**Step 5.4: Create New Tasks** (if needed)
|
||||
|
||||
Generate complete task JSON with all required fields:
|
||||
- id, title, status
|
||||
- meta (type, agent)
|
||||
- context (requirements, focus_paths, acceptance)
|
||||
- flow_control (pre_analysis, implementation_approach, target_files)
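A skeletal example of a newly created task JSON covering the required fields above; every value is a placeholder:

```javascript
const newTask = {
  id: "IMPL-004",
  title: "Implement two-factor authentication",
  status: "pending",
  meta: { type: "feature", agent: "code-developer" },
  context: {
    requirements: ["Support TOTP-based second factor"],
    focus_paths: ["src/auth"],
    acceptance: ["2FA can be enabled and verified for a test user"]
  },
  flow_control: {
    pre_analysis: ["Review existing login flow"],
    implementation_approach: ["Add TOTP secret generation", "Verify code during login"],
    target_files: ["src/auth/twoFactor.ts"]
  }
}

Write({
  file_path: `.workflow/active/${SESSION_ID}/.task/${newTask.id}.json`,
  content: JSON.stringify(newTask, null, 2)
})
```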
|
||||
|
||||
**Step 5.5: Delete Obsolete Tasks** (if needed)
|
||||
|
||||
Move to backup instead of hard delete:
|
||||
```bash
|
||||
mv ".workflow/active/$SESSION_ID/.task/{task-id}.json" "$backup_dir/"
|
||||
```
|
||||
|
||||
**Step 5.6: Update Session Metadata**
|
||||
|
||||
Update workflow-session.json:
|
||||
- progress.current_tasks
|
||||
- progress.last_replan
|
||||
- replan_history array
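A minimal sketch of that update; `remainingTaskIds`, `modificationScope`, and `backupDir` come from earlier phases, and the extra `replan_history` fields are assumptions:

```javascript
const sessionPath = `.workflow/active/${SESSION_ID}/workflow-session.json`
const session = JSON.parse(Read(sessionPath))

session.progress.current_tasks = remainingTaskIds.length
session.progress.last_replan = new Date().toISOString()
session.replan_history = session.replan_history || []
session.replan_history.push({
  timestamp: session.progress.last_replan,
  scope: modificationScope,        // from Phase 2 clarification
  backup: backupDir                // from Phase 4
})

Write({ file_path: sessionPath, content: JSON.stringify(session, null, 2) })
```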
|
||||
|
||||
**Output**: All modifications applied, artifacts updated
|
||||
|
||||
---
|
||||
|
||||
### Phase 6: Verification & Summary
|
||||
|
||||
**Step 6.1: Verify Consistency**
|
||||
|
||||
1. Validate all task JSONs are valid JSON
|
||||
2. Check task count within limits (max 10)
|
||||
3. Verify dependency graph is acyclic
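A minimal sketch of these three checks; `taskFiles` is assumed to come from a Glob over `.task/IMPL-*.json`, and cycle detection is a plain DFS over `depends_on`:

```javascript
// 1. Every task JSON must parse; 2. at most 10 tasks; 3. depends_on graph must be acyclic
const tasks = taskFiles.map(f => JSON.parse(Read(f)))      // throws on invalid JSON

if (tasks.length > 10) {
  throw new Error(`Replan would create ${tasks.length} tasks (limit: 10)`)
}

const byId = Object.fromEntries(tasks.map(t => [t.id, t]))
const visiting = new Set(), done = new Set()

function visit(id) {
  if (done.has(id)) return
  if (visiting.has(id)) throw new Error("Circular dependency detected")
  visiting.add(id)
  for (const dep of (byId[id]?.depends_on || [])) visit(dep)
  visiting.delete(id)
  done.add(id)
}

tasks.forEach(t => visit(t.id))
```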
|
||||
|
||||
**Step 6.2: Generate Change Summary**
|
||||
|
||||
```markdown
|
||||
## 重规划完成
|
||||
|
||||
### 会话信息
|
||||
- **Session**: {session-id}
|
||||
- **时间**: {timestamp}
|
||||
- **备份**: {backup-path}
|
||||
|
||||
### 变更摘要
|
||||
**范围**: {scope}
|
||||
**原因**: {reason}
|
||||
|
||||
### 修改的文件
|
||||
- ✓ IMPL_PLAN.md: {changes}
|
||||
- ✓ TODO_LIST.md: {changes}
|
||||
- ✓ Task JSONs: {count} files updated
|
||||
|
||||
### 任务变更
|
||||
- **新增**: {task-ids}
|
||||
- **删除**: {task-ids}
|
||||
- **更新**: {task-ids}
|
||||
|
||||
### 回滚方法
|
||||
cp {backup-path}/* .workflow/active/{session}/
|
||||
```
|
||||
|
||||
**Output**: Summary displayed, replan complete
|
||||
|
||||
---
|
||||
|
||||
## TodoWrite Progress Tracking
|
||||
|
||||
### Session Mode Progress
|
||||
|
||||
```json
|
||||
[
|
||||
{"content": "检测模式和发现会话", "status": "completed", "activeForm": "检测模式和发现会话"},
|
||||
{"content": "交互式需求明确", "status": "completed", "activeForm": "交互式需求明确"},
|
||||
{"content": "影响分析和计划生成", "status": "completed", "activeForm": "影响分析和计划生成"},
|
||||
{"content": "创建备份", "status": "completed", "activeForm": "创建备份"},
|
||||
{"content": "更新会话产出文件", "status": "completed", "activeForm": "更新会话产出文件"},
|
||||
{"content": "验证一致性", "status": "completed", "activeForm": "验证一致性"}
|
||||
]
|
||||
```
|
||||
|
||||
### Task Mode Progress
|
||||
|
||||
```json
|
||||
[
|
||||
{"content": "检测会话和加载任务", "status": "completed", "activeForm": "检测会话和加载任务"},
|
||||
{"content": "交互式更新确认", "status": "completed", "activeForm": "交互式更新确认"},
|
||||
{"content": "应用任务修改", "status": "completed", "activeForm": "应用任务修改"}
|
||||
]
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
### Session Errors
|
||||
|
||||
```bash
|
||||
# No active session found
|
||||
ERROR: No active session found
|
||||
Run /workflow:session:start to create a session
|
||||
|
||||
# Session not found
|
||||
ERROR: Session WFS-invalid not found
|
||||
Available sessions: [list]
|
||||
|
||||
# No changes specified
|
||||
WARNING: No modifications specified
|
||||
Use --interactive mode or provide requirements
|
||||
```
|
||||
|
||||
### Task Errors
|
||||
|
||||
```bash
|
||||
# Task not found
|
||||
ERROR: Task IMPL-999 not found in session
|
||||
Available tasks: [list]
|
||||
|
||||
# Task completed
|
||||
WARNING: Task IMPL-001 is completed
|
||||
Consider creating new task for additional work
|
||||
|
||||
# Circular dependency
|
||||
ERROR: Circular dependency detected
|
||||
Resolve dependency conflicts before proceeding
|
||||
```
|
||||
|
||||
### Validation Errors
|
||||
|
||||
```bash
|
||||
# Task limit exceeded
|
||||
ERROR: Replan would create 12 tasks (limit: 10)
|
||||
Consider: combining tasks, splitting sessions, or removing tasks
|
||||
|
||||
# Invalid JSON
|
||||
ERROR: Generated invalid JSON
|
||||
Backup preserved, rolling back changes
|
||||
```
|
||||
|
||||
## File Structure
|
||||
|
||||
```
|
||||
.workflow/active/WFS-session-name/
|
||||
├── workflow-session.json
|
||||
├── IMPL_PLAN.md
|
||||
├── TODO_LIST.md
|
||||
├── .task/
|
||||
│ ├── IMPL-001.json
|
||||
│ ├── IMPL-002.json
|
||||
│ └── IMPL-003.json
|
||||
└── .process/
|
||||
├── context-package.json
|
||||
└── backup/
|
||||
└── replan-{timestamp}/
|
||||
├── MANIFEST.md
|
||||
├── IMPL_PLAN.md
|
||||
├── TODO_LIST.md
|
||||
├── workflow-session.json
|
||||
└── IMPL-*.json
|
||||
```
|
||||
|
||||
## Examples
|
||||
|
||||
### Session Replan - Add Feature
|
||||
|
||||
```bash
|
||||
/workflow:replan "添加双因素认证支持"
|
||||
|
||||
# Interactive clarification
|
||||
Q: 修改范围?
|
||||
A: 全面重规划
|
||||
|
||||
Q: 受影响模块?
|
||||
A: 认证模块, API接口
|
||||
|
||||
Q: 任务变更?
|
||||
A: 添加新任务, 更新内容
|
||||
|
||||
# Execution
|
||||
✓ 创建备份
|
||||
✓ 更新 IMPL_PLAN.md
|
||||
✓ 更新 TODO_LIST.md
|
||||
✓ 创建 IMPL-004.json
|
||||
✓ 更新 IMPL-001.json, IMPL-002.json
|
||||
|
||||
重规划完成! 新增 1 任务,更新 2 任务
|
||||
```
|
||||
|
||||
### Task Replan - Update Requirements
|
||||
|
||||
```bash
|
||||
/workflow:replan IMPL-001 "支持 OAuth2.0 标准"
|
||||
|
||||
# Interactive clarification
|
||||
Q: 更新部分?
|
||||
A: 需求和验收标准, 实现方案
|
||||
|
||||
Q: 影响其他任务?
|
||||
A: 是,需要同步更新依赖任务
|
||||
|
||||
# Execution
|
||||
✓ 创建备份
|
||||
✓ 更新 IMPL-001.json
|
||||
✓ 更新 IMPL-002.json (依赖任务)
|
||||
|
||||
任务重规划完成! 更新 2 个任务
|
||||
```
|
||||
@@ -1,105 +0,0 @@
|
||||
---
|
||||
name: resume
|
||||
description: Resume paused workflow session with automatic progress analysis, pending task identification, and conflict detection
|
||||
argument-hint: "session-id for workflow session to resume"
|
||||
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
|
||||
---
|
||||
|
||||
# Sequential Workflow Resume Command
|
||||
|
||||
## Usage
|
||||
```bash
|
||||
/workflow:resume "<session-id>"
|
||||
```
|
||||
|
||||
## Purpose
|
||||
**Sequential command coordination for workflow resumption**: first analyze current session status, then continue execution with special resume context. This command orchestrates intelligent session resumption through a two-step process.
|
||||
|
||||
## Command Coordination Workflow
|
||||
|
||||
### Phase 1: Status Analysis
|
||||
1. **Call status command**: Execute `/workflow:status` to analyze current session state
|
||||
2. **Verify session information**: Check session ID, progress, and current task status
|
||||
3. **Identify resume point**: Determine where workflow was interrupted
|
||||
|
||||
### Phase 2: Resume Execution
|
||||
1. **Call execute with resume flag**: Execute `/workflow:execute --resume-session="{session-id}"`
|
||||
2. **Pass session context**: Provide analyzed session information to execute command
|
||||
3. **Direct agent execution**: Skip discovery phase, directly enter TodoWrite and agent execution
|
||||
|
||||
## Implementation Protocol
|
||||
|
||||
### Sequential Command Execution
|
||||
```bash
|
||||
# Phase 1: Analyze current session status
|
||||
SlashCommand(command="/workflow:status")
|
||||
|
||||
# Phase 2: Resume execution with special flag
|
||||
SlashCommand(command="/workflow:execute --resume-session=\"{session-id}\"")
|
||||
```
|
||||
|
||||
### Progress Tracking
|
||||
```javascript
|
||||
TodoWrite({
|
||||
todos: [
|
||||
{
|
||||
content: "Analyze current session status and progress",
|
||||
status: "in_progress",
|
||||
activeForm: "Analyzing session status"
|
||||
},
|
||||
{
|
||||
content: "Resume workflow execution with session context",
|
||||
status: "pending",
|
||||
activeForm: "Resuming workflow execution"
|
||||
}
|
||||
]
|
||||
});
|
||||
```
|
||||
|
||||
## Resume Information Flow
|
||||
|
||||
### Status Analysis Results
|
||||
The `/workflow:status` command provides:
|
||||
- **Session ID**: Current active session identifier
|
||||
- **Current Progress**: Completed, in-progress, and pending tasks
|
||||
- **Interruption Point**: Last executed task and next pending task
|
||||
- **Session State**: Overall workflow status
|
||||
|
||||
### Execute Command Context
|
||||
The special `--resume-session` flag tells `/workflow:execute`:
|
||||
- **Skip Discovery**: Don't search for sessions, use provided session ID
|
||||
- **Direct Execution**: Go straight to TodoWrite generation and agent launching
|
||||
- **Context Restoration**: Use existing session state and summaries
|
||||
- **Resume Point**: Continue from identified interruption point
|
||||
|
||||
## Error Handling
|
||||
|
||||
### Session Validation Failures
|
||||
- **Session not found**: Report missing session, suggest available sessions
|
||||
- **Session inactive**: Recommend activating session first
|
||||
- **Status command fails**: Retry once, then report analysis failure
|
||||
|
||||
### Execute Resumption Failures
|
||||
- **No pending tasks**: Report workflow completion status
|
||||
- **Execute command fails**: Report resumption failure, suggest manual intervention
|
||||
|
||||
## Success Criteria
|
||||
1. **Status analysis complete**: Session state properly analyzed and reported
|
||||
2. **Execute command launched**: Resume execution started with proper context
|
||||
3. **Agent coordination**: TodoWrite and agent execution initiated successfully
|
||||
4. **Context preservation**: Session state and progress properly maintained
|
||||
|
||||
## Related Commands
|
||||
|
||||
**Prerequisite Commands**:
|
||||
- `/workflow:plan` or `/workflow:execute` - Workflow must be in progress or paused
|
||||
|
||||
**Called by This Command** (2 phases):
|
||||
- `/workflow:status` - Phase 1: Analyze current session status and identify resume point
|
||||
- `/workflow:execute` - Phase 2: Resume execution with `--resume-session` flag
|
||||
|
||||
**Follow-up Commands**:
|
||||
- None - Workflow continues automatically via `/workflow:execute`
|
||||
|
||||
---
|
||||
*Sequential command coordination for workflow session resumption*
|
||||
@@ -39,17 +39,17 @@ argument-hint: "[--type=security|architecture|action-items|quality] [optional: s
|
||||
if [ -n "$SESSION_ARG" ]; then
|
||||
sessionId="$SESSION_ARG"
|
||||
else
|
||||
sessionId=$(find .workflow/ -name '.active-*' | head -1 | sed 's/.*active-//')
|
||||
sessionId=$(find .workflow/active/ -name "WFS-*" -type d | head -1 | xargs basename)
|
||||
fi
|
||||
|
||||
# Step 2: Validation
|
||||
if [ ! -d ".workflow/${sessionId}" ]; then
|
||||
if [ ! -d ".workflow/active/${sessionId}" ]; then
|
||||
echo "Session ${sessionId} not found"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Check for completed tasks
|
||||
if [ ! -d ".workflow/${sessionId}/.summaries" ] || [ -z "$(find .workflow/${sessionId}/.summaries/ -name "IMPL-*.md" -type f 2>/dev/null)" ]; then
|
||||
if [ ! -d ".workflow/active/${sessionId}/.summaries" ] || [ -z "$(find .workflow/active/${sessionId}/.summaries/ -name "IMPL-*.md" -type f 2>/dev/null)" ]; then
|
||||
echo "No completed implementation found. Complete implementation first"
|
||||
exit 1
|
||||
fi
|
||||
@@ -80,13 +80,13 @@ After bash validation, the model takes control to:
|
||||
1. **Load Context**: Read completed task summaries and changed files
|
||||
```bash
|
||||
# Load implementation summaries
|
||||
cat .workflow/${sessionId}/.summaries/IMPL-*.md
|
||||
cat .workflow/active/${sessionId}/.summaries/IMPL-*.md
|
||||
|
||||
# Load test results (if available)
|
||||
cat .workflow/${sessionId}/.summaries/TEST-FIX-*.md 2>/dev/null
|
||||
cat .workflow/active/${sessionId}/.summaries/TEST-FIX-*.md 2>/dev/null
|
||||
|
||||
# Get changed files
|
||||
git log --since="$(cat .workflow/${sessionId}/workflow-session.json | jq -r .created_at)" --name-only --pretty=format: | sort -u
|
||||
git log --since="$(cat .workflow/active/${sessionId}/workflow-session.json | jq -r .created_at)" --name-only --pretty=format: | sort -u
|
||||
```
|
||||
|
||||
2. **Perform Specialized Review**: Based on `review_type`
|
||||
@@ -99,7 +99,7 @@ After bash validation, the model takes control to:
|
||||
```
|
||||
- Use Gemini for security analysis:
|
||||
```bash
|
||||
cd .workflow/${sessionId} && gemini -p "
|
||||
cd .workflow/active/${sessionId} && gemini -p "
|
||||
PURPOSE: Security audit of completed implementation
|
||||
TASK: Review code for security vulnerabilities, insecure patterns, auth/authz issues
|
||||
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
|
||||
@@ -111,7 +111,7 @@ After bash validation, the model takes control to:
|
||||
**Architecture Review** (`--type=architecture`):
|
||||
- Use Qwen for architecture analysis:
|
||||
```bash
|
||||
cd .workflow/${sessionId} && qwen -p "
|
||||
cd .workflow/active/${sessionId} && qwen -p "
|
||||
PURPOSE: Architecture compliance review
|
||||
TASK: Evaluate adherence to architectural patterns, identify technical debt, review design decisions
|
||||
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
|
||||
@@ -123,7 +123,7 @@ After bash validation, the model takes control to:
|
||||
**Quality Review** (`--type=quality`):
|
||||
- Use Gemini for code quality:
|
||||
```bash
|
||||
cd .workflow/${sessionId} && gemini -p "
|
||||
cd .workflow/active/${sessionId} && gemini -p "
|
||||
PURPOSE: Code quality and best practices review
|
||||
TASK: Assess code readability, maintainability, adherence to best practices
|
||||
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
|
||||
@@ -136,14 +136,14 @@ After bash validation, the model takes control to:
|
||||
- Verify all requirements and acceptance criteria met:
|
||||
```bash
|
||||
# Load task requirements and acceptance criteria
|
||||
find .workflow/${sessionId}/.task -name "IMPL-*.json" -exec jq -r '
|
||||
find .workflow/active/${sessionId}/.task -name "IMPL-*.json" -exec jq -r '
|
||||
"Task: " + .id + "\n" +
|
||||
"Requirements: " + (.context.requirements | join(", ")) + "\n" +
|
||||
"Acceptance: " + (.context.acceptance | join(", "))
|
||||
' {} \;
|
||||
|
||||
# Check implementation summaries against requirements
|
||||
cd .workflow/${sessionId} && gemini -p "
|
||||
cd .workflow/active/${sessionId} && gemini -p "
|
||||
PURPOSE: Verify all requirements and acceptance criteria are met
|
||||
TASK: Cross-check implementation summaries against original requirements
|
||||
CONTEXT: @.task/IMPL-*.json,.summaries/IMPL-*.md,../.. @../../CLAUDE.md
|
||||
@@ -195,7 +195,7 @@ After bash validation, the model takes control to:
|
||||
4. **Output Files**:
|
||||
```bash
|
||||
# Save review report
|
||||
Write(.workflow/${sessionId}/REVIEW-${review_type}.md)
|
||||
Write(.workflow/active/${sessionId}/REVIEW-${review_type}.md)
|
||||
|
||||
# Update session metadata
|
||||
# (optional) Update workflow-session.json with review status
|
||||
|
||||
@@ -19,129 +19,472 @@ Mark the currently active workflow session as complete, analyze it for lessons l
|
||||
|
||||
## Implementation Flow
|
||||
|
||||
### Phase 1: Prepare for Archival (Minimal Manual Operations)
|
||||
### Phase 1: Pre-Archival Preparation (Transactional Setup)
|
||||
|
||||
**Purpose**: Find active session, move to archive location, pass control to agent. Minimal operations.
|
||||
**Purpose**: Find active session, create archiving marker to prevent concurrent operations. Session remains in active location for agent processing.
|
||||
|
||||
#### Step 1.1: Find Active Session and Get Name
|
||||
```bash
|
||||
# Find active marker
|
||||
bash(find .workflow/ -name ".active-*" -type f | head -1)
|
||||
# Find active session directory
|
||||
bash(find .workflow/active/ -name "WFS-*" -type d | head -1)
|
||||
|
||||
# Extract session name from marker path
|
||||
bash(basename .workflow/.active-WFS-session-name | sed 's/^\.active-//')
|
||||
# Extract session name from directory path
|
||||
bash(basename .workflow/active/WFS-session-name)
|
||||
```
|
||||
**Output**: Session name `WFS-session-name`
|
||||
|
||||
#### Step 1.2: Move Session to Archive
|
||||
#### Step 1.2: Check for Existing Archiving Marker (Resume Detection)
|
||||
```bash
|
||||
# Create archive directory if needed
|
||||
bash(mkdir -p .workflow/.archives/)
|
||||
|
||||
# Move session to archive location
|
||||
bash(mv .workflow/WFS-session-name .workflow/.archives/WFS-session-name)
|
||||
# Check if session is already being archived
|
||||
bash(test -f .workflow/active/WFS-session-name/.archiving && echo "RESUMING" || echo "NEW")
|
||||
```
|
||||
**Result**: Session now at `.workflow/.archives/WFS-session-name/`
|
||||
|
||||
### Phase 2: Agent-Orchestrated Completion (All Data Processing)
|
||||
**If RESUMING**:
|
||||
- Previous archival attempt was interrupted
|
||||
- Skip to Phase 2 to resume agent analysis
|
||||
|
||||
**Purpose**: Agent analyzes archived session, generates metadata, updates manifest, and removes active marker.
|
||||
**If NEW**:
|
||||
- Continue to Step 1.3
|
||||
|
||||
#### Step 1.3: Create Archiving Marker
|
||||
```bash
|
||||
# Mark session as "archiving in progress"
|
||||
bash(touch .workflow/active/WFS-session-name/.archiving)
|
||||
```
|
||||
**Purpose**:
|
||||
- Prevents concurrent operations on this session
|
||||
- Enables recovery if archival fails
|
||||
- Session remains in `.workflow/active/` for agent analysis
|
||||
|
||||
**Result**: Session still at `.workflow/active/WFS-session-name/` with `.archiving` marker
|
||||
|
||||
### Phase 2: Agent Analysis (In-Place Processing)
|
||||
|
||||
**Purpose**: Agent analyzes session WHILE STILL IN ACTIVE LOCATION. Generates metadata but does NOT move files or update manifest.
|
||||
|
||||
#### Agent Invocation
|
||||
|
||||
Invoke `universal-executor` agent to complete the archival process.
|
||||
Invoke `universal-executor` agent to analyze session and prepare archive metadata.
|
||||
|
||||
**Agent Task**:
|
||||
```
|
||||
Task(
|
||||
subagent_type="universal-executor",
|
||||
description="Complete session archival",
|
||||
description="Analyze session for archival",
|
||||
prompt=`
|
||||
Complete workflow session archival. Session already moved to archive location.
|
||||
Analyze workflow session for archival preparation. Session is STILL in active location.
|
||||
|
||||
## Context
|
||||
- Session: .workflow/.archives/WFS-session-name/
|
||||
- Active marker: .workflow/.active-WFS-session-name
|
||||
- Session: .workflow/active/WFS-session-name/
|
||||
- Status: Marked as archiving (.archiving marker present)
|
||||
- Location: Active sessions directory (NOT archived yet)
|
||||
|
||||
## Tasks
|
||||
|
||||
1. **Extract session data** from workflow-session.json (session_id, description/topic, started_at/timestamp, completed_at, status)
|
||||
1. **Extract session data** from workflow-session.json
|
||||
- session_id, description/topic, started_at, completed_at, status
|
||||
- If status != "completed", update it with timestamp
|
||||
|
||||
2. **Count files**: tasks (.task/*.json) and summaries (.summaries/*.md)
|
||||
|
||||
3. **Generate lessons**: Use gemini with ~/.claude/workflows/cli-templates/prompts/archive/analysis-simple.txt (fallback: analyze files directly)
|
||||
3. **Generate lessons**: Use gemini with ~/.claude/workflows/cli-templates/prompts/archive/analysis-simple.txt
|
||||
- Return: {successes, challenges, watch_patterns}
|
||||
|
||||
4. **Build archive entry**:
|
||||
- Calculate: duration_hours, success_rate, tags (3-5 keywords)
|
||||
- Construct complete JSON with session_id, description, archived_at, archive_path, metrics, tags, lessons
|
||||
- Construct complete JSON with session_id, description, archived_at, metrics, tags, lessons
|
||||
- Include archive_path: ".workflow/archives/WFS-session-name" (future location)
|
||||
|
||||
5. **Update manifest**: Initialize .workflow/.archives/manifest.json if needed, append entry
|
||||
5. **Extract feature metadata** (for Phase 4):
|
||||
- Parse IMPL_PLAN.md for title (first # heading)
|
||||
- Extract description (first paragraph, max 200 chars)
|
||||
- Generate feature tags (3-5 keywords from content)
|
||||
|
||||
6. **Remove active marker**
|
||||
6. **Return result**: Complete metadata package for atomic commit
|
||||
{
|
||||
"status": "success",
|
||||
"session_id": "WFS-session-name",
|
||||
"archive_entry": {
|
||||
"session_id": "...",
|
||||
"description": "...",
|
||||
"archived_at": "...",
|
||||
"archive_path": ".workflow/archives/WFS-session-name",
|
||||
"metrics": {...},
|
||||
"tags": [...],
|
||||
"lessons": {...}
|
||||
},
|
||||
"feature_metadata": {
|
||||
"title": "...",
|
||||
"description": "...",
|
||||
"tags": [...]
|
||||
}
|
||||
}
|
||||
|
||||
7. **Return result**: {"status": "success", "session_id": "...", "archived_at": "...", "metrics": {...}, "lessons_summary": {...}}
|
||||
## Important Constraints
|
||||
- DO NOT move or delete any files
|
||||
- DO NOT update manifest.json yet
|
||||
- Session remains in .workflow/active/ during analysis
|
||||
- Return complete metadata package for orchestrator to commit atomically
|
||||
|
||||
## Error Handling
|
||||
- On failure: return {"status": "error", "task": "...", "message": "..."}
|
||||
- Do NOT remove marker if failed
|
||||
- Do NOT modify any files on error
|
||||
`
|
||||
)
|
||||
```
|
||||
|
||||
**Expected Output**:
|
||||
- Agent returns JSON result confirming successful archival
|
||||
- Display completion summary to user based on agent response
|
||||
- Agent returns complete metadata package
|
||||
- Session remains in `.workflow/active/` with `.archiving` marker
|
||||
- No files moved or manifests updated yet
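
Before Phase 3 starts, the orchestrator can sanity-check the returned package. A minimal guard, assuming the raw agent JSON has been captured to a temporary file (the path is illustrative, not part of the workflow spec):

```bash
# Abort archival unless the agent reported success and returned an archive_entry object
jq -e '.status == "success" and (.archive_entry | type == "object")' /tmp/agent-result.json > /dev/null \
  || { echo "Agent analysis failed or incomplete, aborting archival"; exit 1; }
```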
|
||||
|
||||
### Phase 3: Atomic Commit (Transactional File Operations)
|
||||
|
||||
**Purpose**: Atomically commit all changes. Only execute if Phase 2 succeeds.
|
||||
|
||||
#### Step 3.1: Create Archive Directory
|
||||
```bash
|
||||
bash(mkdir -p .workflow/archives/)
|
||||
```
|
||||
|
||||
#### Step 3.2: Move Session to Archive
|
||||
```bash
|
||||
bash(mv .workflow/active/WFS-session-name .workflow/archives/WFS-session-name)
|
||||
```
|
||||
**Result**: Session now at `.workflow/archives/WFS-session-name/`
|
||||
|
||||
#### Step 3.3: Update Manifest
|
||||
```bash
|
||||
# Read current manifest (or create empty array if not exists)
|
||||
bash(test -f .workflow/archives/manifest.json && cat .workflow/archives/manifest.json || echo "[]")
|
||||
```
|
||||
|
||||
**JSON Update Logic**:
|
||||
```javascript
|
||||
// Read agent result from Phase 2
|
||||
const agentResult = JSON.parse(agentOutput);
|
||||
const archiveEntry = agentResult.archive_entry;
|
||||
|
||||
// Read existing manifest
|
||||
let manifest = [];
|
||||
try {
|
||||
const manifestContent = Read('.workflow/archives/manifest.json');
|
||||
manifest = JSON.parse(manifestContent);
|
||||
} catch {
|
||||
manifest = []; // Initialize if not exists
|
||||
}
|
||||
|
||||
// Append new entry
|
||||
manifest.push(archiveEntry);
|
||||
|
||||
// Write back
|
||||
Write('.workflow/archives/manifest.json', JSON.stringify(manifest, null, 2));
|
||||
```
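
If the orchestrator stays in the shell instead, an equivalent jq-based append might look like this. It is only a sketch and assumes the Phase 2 agent result was saved to a temporary file (path illustrative):

```bash
# Initialize the manifest if missing, then append archive_entry from the agent result
test -f .workflow/archives/manifest.json || echo "[]" > .workflow/archives/manifest.json
jq --slurpfile agent /tmp/agent-result.json '. + [$agent[0].archive_entry]' \
  .workflow/archives/manifest.json > .workflow/archives/manifest.tmp \
  && mv .workflow/archives/manifest.tmp .workflow/archives/manifest.json
```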
|
||||
|
||||
#### Step 3.4: Remove Archiving Marker
|
||||
```bash
|
||||
bash(rm .workflow/archives/WFS-session-name/.archiving)
|
||||
```
|
||||
**Result**: Clean archived session without temporary markers
|
||||
|
||||
**Output Confirmation**:
|
||||
```
|
||||
✓ Session "${sessionId}" archived successfully
|
||||
Location: .workflow/archives/WFS-session-name/
|
||||
Lessons: ${archiveEntry.lessons.successes.length} successes, ${archiveEntry.lessons.challenges.length} challenges
|
||||
Manifest: Updated with ${manifest.length} total sessions
|
||||
```
|
||||
|
||||
### Phase 4: Update Project Feature Registry
|
||||
|
||||
**Purpose**: Record completed session as a project feature in `.workflow/project.json`.
|
||||
|
||||
**Execution**: Uses feature metadata from Phase 2 agent result to update project registry.
|
||||
|
||||
#### Step 4.1: Check Project State Exists
|
||||
```bash
|
||||
bash(test -f .workflow/project.json && echo "EXISTS" || echo "SKIP")
|
||||
```
|
||||
|
||||
**If SKIP**: Output warning and skip Phase 4
|
||||
```
|
||||
WARNING: No project.json found. Run /workflow:session:start to initialize.
|
||||
```
|
||||
|
||||
#### Step 4.2: Extract Feature Information from Agent Result
|
||||
|
||||
**Data Processing** (Uses Phase 2 agent output):
|
||||
```javascript
|
||||
// Extract feature metadata from agent result
|
||||
const agentResult = JSON.parse(agentOutput);
|
||||
const featureMeta = agentResult.feature_metadata;
|
||||
|
||||
// Data already prepared by agent:
|
||||
const title = featureMeta.title;
|
||||
const description = featureMeta.description;
|
||||
const tags = featureMeta.tags;
|
||||
|
||||
// Create feature ID (lowercase slug)
|
||||
const featureId = title.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 50);
|
||||
```
|
||||
|
||||
#### Step 4.3: Update project.json
|
||||
|
||||
```bash
|
||||
# Read current project state
|
||||
bash(cat .workflow/project.json)
|
||||
```
|
||||
|
||||
**JSON Update Logic**:
|
||||
```javascript
|
||||
// Read existing project.json (created by /workflow:init)
|
||||
// Note: overview field is managed by /workflow:init, not modified here
|
||||
const projectMeta = JSON.parse(Read('.workflow/project.json'));
|
||||
const currentTimestamp = new Date().toISOString();
|
||||
const currentDate = currentTimestamp.split('T')[0]; // YYYY-MM-DD
|
||||
|
||||
// Tags: prefer the agent-provided tags from Step 4.2; extractTags() below is only a
// fallback when the agent result omits them (planContent = IMPL_PLAN.md content)
const tags = featureMeta.tags?.length ? featureMeta.tags : extractTags(planContent); // e.g., ["auth", "security"]
|
||||
|
||||
// Build feature object with complete metadata
|
||||
const newFeature = {
|
||||
id: featureId,
|
||||
title: title,
|
||||
description: description,
|
||||
status: "completed",
|
||||
tags: tags,
|
||||
timeline: {
|
||||
created_at: currentTimestamp,
|
||||
implemented_at: currentDate,
|
||||
updated_at: currentTimestamp
|
||||
},
|
||||
traceability: {
|
||||
session_id: sessionId,
|
||||
archive_path: archivePath, // e.g., ".workflow/archives/WFS-auth-system"
|
||||
commit_hash: getLatestCommitHash() || "" // Optional: git rev-parse HEAD
|
||||
},
|
||||
docs: [], // Placeholder for future doc links
|
||||
relations: [] // Placeholder for feature dependencies
|
||||
};
|
||||
|
||||
// Add new feature to array
|
||||
projectMeta.features.push(newFeature);
|
||||
|
||||
// Update statistics
|
||||
projectMeta.statistics.total_features = projectMeta.features.length;
|
||||
projectMeta.statistics.total_sessions += 1;
|
||||
projectMeta.statistics.last_updated = currentTimestamp;
|
||||
|
||||
// Write back
|
||||
Write('.workflow/project.json', JSON.stringify(projectMeta, null, 2));
|
||||
```
|
||||
|
||||
**Helper Functions**:
|
||||
```javascript
|
||||
// Extract tags from IMPL_PLAN.md content
|
||||
function extractTags(planContent) {
|
||||
const tags = [];
|
||||
|
||||
// Look for common keywords
|
||||
const keywords = {
|
||||
'auth': /authentication|login|oauth|jwt/i,
|
||||
'security': /security|encrypt|hash|token/i,
|
||||
'api': /api|endpoint|rest|graphql/i,
|
||||
'ui': /component|page|interface|frontend/i,
|
||||
'database': /database|schema|migration|sql/i,
|
||||
'test': /test|testing|spec|coverage/i
|
||||
};
|
||||
|
||||
for (const [tag, pattern] of Object.entries(keywords)) {
|
||||
if (pattern.test(planContent)) {
|
||||
tags.push(tag);
|
||||
}
|
||||
}
|
||||
|
||||
return tags.slice(0, 5); // Max 5 tags
|
||||
}
|
||||
|
||||
// Get latest git commit hash (optional)
|
||||
function getLatestCommitHash() {
|
||||
try {
|
||||
const result = Bash({
|
||||
command: "git rev-parse --short HEAD 2>/dev/null",
|
||||
description: "Get latest commit hash"
|
||||
});
|
||||
return result.trim();
|
||||
} catch {
|
||||
return "";
|
||||
}
|
||||
}
|
||||
```
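
For reference, the same registry update could also be done from the shell with jq. This is a sketch only; it assumes the feature object built in Step 4.2 has been written to a temporary file (path illustrative):

```bash
# Append the new feature and refresh statistics in one pass
jq --slurpfile feat /tmp/new-feature.json --arg now "$(date -u +%Y-%m-%dT%H:%M:%SZ)" '
  .features += $feat
  | .statistics.total_features = (.features | length)
  | .statistics.total_sessions += 1
  | .statistics.last_updated = $now
' .workflow/project.json > .workflow/project.tmp && mv .workflow/project.tmp .workflow/project.json
```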
|
||||
|
||||
#### Step 4.4: Output Confirmation
|
||||
|
||||
```
|
||||
✓ Feature "${title}" added to project registry
|
||||
ID: ${featureId}
|
||||
Session: ${sessionId}
|
||||
Location: .workflow/project.json
|
||||
```
|
||||
|
||||
**Error Handling**:
|
||||
- If project.json malformed: Output error, skip update
|
||||
- If feature_metadata missing from agent result: Skip Phase 4
|
||||
- If extraction fails: Use minimal defaults
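
A cheap way to implement the malformed-JSON check is to let jq parse the file before attempting any update (sketch):

```bash
# Guard: skip the registry update when project.json cannot be parsed
jq empty .workflow/project.json 2>/dev/null \
  || echo "ERROR: project.json malformed, skipping feature registry update"
```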
|
||||
|
||||
**Phase 4 Total Commands**: 1 bash read + JSON manipulation
|
||||
|
||||
## Error Recovery
|
||||
|
||||
### If Agent Fails (Phase 2)
|
||||
|
||||
**Symptoms**:
|
||||
- Agent returns `{"status": "error", ...}`
|
||||
- Agent crashes or times out
|
||||
- Analysis incomplete
|
||||
|
||||
**Recovery Steps**:
|
||||
```bash
|
||||
# Session still in .workflow/active/WFS-session-name
|
||||
# Remove archiving marker
|
||||
bash(rm .workflow/active/WFS-session-name/.archiving)
|
||||
```
|
||||
|
||||
**User Notification**:
|
||||
```
|
||||
ERROR: Session archival failed during analysis phase
|
||||
Reason: [error message from agent]
|
||||
Session remains active in: .workflow/active/WFS-session-name
|
||||
|
||||
Recovery:
|
||||
1. Fix any issues identified in error message
|
||||
2. Retry: /workflow:session:complete
|
||||
|
||||
Session state: SAFE (no changes committed)
|
||||
```
|
||||
|
||||
### If Move Fails (Phase 3)
|
||||
|
||||
**Symptoms**:
|
||||
- `mv` command fails
|
||||
- Permission denied
|
||||
- Disk full
|
||||
|
||||
**Recovery Steps**:
|
||||
```bash
|
||||
# Archiving marker still present
|
||||
# Session still in .workflow/active/ (move failed)
|
||||
# No manifest updated yet
|
||||
```
|
||||
|
||||
**User Notification**:
|
||||
```
|
||||
ERROR: Session archival failed during move operation
|
||||
Reason: [mv error message]
|
||||
Session remains in: .workflow/active/WFS-session-name
|
||||
|
||||
Recovery:
|
||||
1. Fix filesystem issues (permissions, disk space)
|
||||
2. Retry: /workflow:session:complete
|
||||
- System will detect .archiving marker
|
||||
- Will resume from Phase 2 (agent analysis)
|
||||
|
||||
Session state: SAFE (analysis complete, ready to retry move)
|
||||
```
|
||||
|
||||
### If Manifest Update Fails (Phase 3)
|
||||
|
||||
**Symptoms**:
|
||||
- JSON parsing error
|
||||
- Write permission denied
|
||||
- Session moved but manifest not updated
|
||||
|
||||
**Recovery Steps**:
|
||||
```bash
|
||||
# Session moved to .workflow/archives/WFS-session-name
|
||||
# Manifest NOT updated
|
||||
# Archiving marker still present in archived location
|
||||
```
|
||||
|
||||
**User Notification**:
|
||||
```
|
||||
ERROR: Session archived but manifest update failed
|
||||
Reason: [error message]
|
||||
Session location: .workflow/archives/WFS-session-name
|
||||
|
||||
Recovery:
|
||||
1. Fix manifest.json issues (syntax, permissions)
|
||||
2. Manual manifest update:
|
||||
- Add archive entry from agent output
|
||||
- Remove .archiving marker: rm .workflow/archives/WFS-session-name/.archiving
|
||||
|
||||
Session state: PARTIALLY COMPLETE (session archived, manifest needs update)
|
||||
```
|
||||
|
||||
## Workflow Execution Strategy
|
||||
|
||||
### Two-Phase Approach (Optimized)
|
||||
### Transactional Four-Phase Approach
|
||||
|
||||
**Phase 1: Minimal Manual Setup** (2 simple operations)
|
||||
**Phase 1: Pre-Archival Preparation** (Marker creation)
|
||||
- Find active session and extract name
|
||||
- Move session to archive location
|
||||
- **No data extraction** - agent handles all data processing
|
||||
- **No counting** - agent does this from archive location
|
||||
- **Total**: 2 bash commands (find + move)
|
||||
- Check for existing `.archiving` marker (resume detection)
|
||||
- Create `.archiving` marker if new
|
||||
- **No data processing** - just state tracking
|
||||
- **Total**: 2-3 bash commands (find + marker check/create)
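
A possible shape for this marker handling, assuming only the session directory name is needed at this point:

```bash
# Phase 1 sketch: detect an interrupted archival or mark a new one
session=$(find .workflow/active/ -maxdepth 1 -name "WFS-*" -type d 2>/dev/null | head -1)
if [ -f "$session/.archiving" ]; then
  echo "Resuming interrupted archival for $(basename "$session")"
else
  touch "$session/.archiving"
fi
```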
|
||||
|
||||
**Phase 2: Agent-Driven Completion** (1 agent invocation)
|
||||
- Extract all session data from archived location
|
||||
**Phase 2: Agent Analysis** (Read-only data processing)
|
||||
- Extract all session data from active location
|
||||
- Count tasks and summaries
|
||||
- Generate lessons learned analysis
|
||||
- Build complete archive metadata
|
||||
- Update manifest
|
||||
- Remove active marker
|
||||
- Return success/error result
|
||||
- Extract feature metadata from IMPL_PLAN.md
|
||||
- Build complete archive + feature metadata package
|
||||
- **No file modifications** - pure analysis
|
||||
- **Total**: 1 agent invocation
|
||||
|
||||
## Quick Commands
|
||||
**Phase 3: Atomic Commit** (Transactional file operations)
|
||||
- Create archive directory
|
||||
- Move session to archive location
|
||||
- Update manifest.json with archive entry
|
||||
- Remove `.archiving` marker
|
||||
- **All-or-nothing**: Either all succeed or session remains in safe state
|
||||
- **Total**: 4 bash commands + JSON manipulation
|
||||
|
||||
```bash
|
||||
# Phase 1: Find and move
|
||||
bash(find .workflow/ -name ".active-*" -type f | head -1)
|
||||
bash(basename .workflow/.active-WFS-session-name | sed 's/^\.active-//')
|
||||
bash(mkdir -p .workflow/.archives/)
|
||||
bash(mv .workflow/WFS-session-name .workflow/.archives/WFS-session-name)
|
||||
**Phase 4: Project Registry Update** (Optional feature tracking)
|
||||
- Check project.json exists
|
||||
- Use feature metadata from Phase 2 agent result
|
||||
- Build feature object with complete traceability
|
||||
- Update project statistics
|
||||
- **Independent**: Can fail without affecting archival
|
||||
- **Total**: 1 bash read + JSON manipulation
|
||||
|
||||
# Phase 2: Agent completes archival
|
||||
Task(subagent_type="universal-executor", description="Complete session archival", prompt=`...`)
|
||||
```
|
||||
### Transactional Guarantees
|
||||
|
||||
## Archive Query Commands
|
||||
**State Consistency**:
|
||||
- Session NEVER in inconsistent state
|
||||
- `.archiving` marker enables safe resume
|
||||
- Agent failure leaves session in recoverable state
|
||||
- Move/manifest operations grouped in Phase 3
|
||||
|
||||
After archival, you can query the manifest:
|
||||
**Failure Isolation**:
|
||||
- Phase 1 failure: No changes made
|
||||
- Phase 2 failure: Session still active, can retry
|
||||
- Phase 3 failure: Clear error state, manual recovery documented
|
||||
- Phase 4 failure: Does not affect archival success
|
||||
|
||||
```bash
|
||||
# List all archived sessions
|
||||
jq '.archives[].session_id' .workflow/.archives/manifest.json
|
||||
**Resume Capability**:
|
||||
- Detect interrupted archival via `.archiving` marker
|
||||
- Resume from Phase 2 (skip marker creation)
|
||||
- Idempotent operations (safe to retry)
|
||||
|
||||
# Find sessions by keyword
|
||||
jq '.archives[] | select(.description | test("auth"; "i"))' .workflow/.archives/manifest.json
|
||||
### Benefits Over Previous Design
|
||||
|
||||
# Get specific session details
|
||||
jq '.archives[] | select(.session_id == "WFS-user-auth")' .workflow/.archives/manifest.json
|
||||
|
||||
# List all watch patterns across sessions
|
||||
jq '.archives[].lessons.watch_patterns[]' .workflow/.archives/manifest.json
|
||||
```
|
||||
**Old Design Weakness**:
|
||||
- Move first → agent second
|
||||
- Agent failure → session moved but metadata incomplete
|
||||
- Inconsistent state requires manual cleanup
|
||||
|
||||
**New Design Strengths**:
|
||||
- Agent first → move second
|
||||
- Agent failure → session still active, safe to retry
|
||||
- Transactional commit → all-or-nothing file operations
|
||||
- Marker-based state → resume capability
|
||||
|
||||
@@ -19,35 +19,35 @@ Display all workflow sessions with their current status, progress, and metadata.
|
||||
|
||||
### Step 1: Find All Sessions
|
||||
```bash
|
||||
ls .workflow/WFS-* 2>/dev/null
|
||||
ls .workflow/active/WFS-* 2>/dev/null
|
||||
```
|
||||
|
||||
### Step 2: Check Active Session
|
||||
```bash
|
||||
ls .workflow/.active-* 2>/dev/null | head -1
|
||||
find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | head -1
|
||||
```
|
||||
|
||||
### Step 3: Read Session Metadata
|
||||
```bash
|
||||
jq -r '.session_id, .status, .project' .workflow/WFS-session/workflow-session.json
|
||||
jq -r '.session_id, .status, .project' .workflow/active/WFS-session/workflow-session.json
|
||||
```
|
||||
|
||||
### Step 4: Count Task Progress
|
||||
```bash
|
||||
find .workflow/WFS-session/.task/ -name "*.json" -type f 2>/dev/null | wc -l
|
||||
find .workflow/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null | wc -l
|
||||
find .workflow/active/WFS-session/.task/ -name "*.json" -type f 2>/dev/null | wc -l
|
||||
find .workflow/active/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null | wc -l
|
||||
```
|
||||
|
||||
### Step 5: Get Creation Time
|
||||
```bash
|
||||
jq -r '.created_at // "unknown"' .workflow/WFS-session/workflow-session.json
|
||||
jq -r '.created_at // "unknown"' .workflow/active/WFS-session/workflow-session.json
|
||||
```
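
Taken together, these steps amount to a small loop over the active sessions. A sketch using the same paths and fields as above:

```bash
# List each active session with status and task progress
for dir in .workflow/active/WFS-*/; do
  meta="${dir}workflow-session.json"
  [ -f "$meta" ] || continue
  id=$(jq -r '.session_id' "$meta")
  status=$(jq -r '.status' "$meta")
  total=$(find "${dir}.task" -name "*.json" -type f 2>/dev/null | wc -l)
  done_count=$(find "${dir}.summaries" -name "*.md" -type f 2>/dev/null | wc -l)
  echo "$id  $status  $done_count/$total tasks"
done
```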
|
||||
|
||||
## Simple Bash Commands
|
||||
|
||||
### Basic Operations
|
||||
- **List sessions**: `find .workflow/ -maxdepth 1 -type d -name "WFS-*"`
|
||||
- **Find active**: `find .workflow/ -name ".active-*" -type f`
|
||||
- **List sessions**: `find .workflow/active/ -name "WFS-*" -type d`
|
||||
- **Find active**: `find .workflow/active/ -name "WFS-*" -type d`
|
||||
- **Read session data**: `jq -r '.session_id, .status' session.json`
|
||||
- **Count tasks**: `find .task/ -name "*.json" -type f | wc -l`
|
||||
- **Count completed**: `find .summaries/ -name "*.md" -type f 2>/dev/null | wc -l`
|
||||
@@ -89,11 +89,8 @@ Total: 3 sessions (1 active, 1 paused, 1 completed)
|
||||
### Quick Commands
|
||||
```bash
|
||||
# Count all sessions
|
||||
ls .workflow/WFS-* | wc -l
|
||||
|
||||
# Show only active
|
||||
ls .workflow/.active-* | basename | sed 's/^\.active-//'
|
||||
ls -d .workflow/active/WFS-* | wc -l
|
||||
|
||||
# Show recent sessions
|
||||
ls -t .workflow/WFS-*/workflow-session.json | head -3
|
||||
ls -t .workflow/active/WFS-*/workflow-session.json | head -3
|
||||
```
|
||||
@@ -17,45 +17,39 @@ Resume the most recently paused workflow session, restoring all context and stat
|
||||
|
||||
### Step 1: Find Paused Sessions
|
||||
```bash
|
||||
ls .workflow/WFS-* 2>/dev/null
|
||||
ls .workflow/active/WFS-* 2>/dev/null
|
||||
```
|
||||
|
||||
### Step 2: Check Session Status
|
||||
```bash
|
||||
jq -r '.status' .workflow/WFS-session/workflow-session.json
|
||||
jq -r '.status' .workflow/active/WFS-session/workflow-session.json
|
||||
```
|
||||
|
||||
### Step 3: Find Most Recent Paused
|
||||
```bash
|
||||
ls -t .workflow/WFS-*/workflow-session.json | head -1
|
||||
ls -t .workflow/active/WFS-*/workflow-session.json | head -1
|
||||
```
|
||||
|
||||
### Step 4: Update Session Status
|
||||
```bash
|
||||
jq '.status = "active"' .workflow/WFS-session/workflow-session.json > temp.json
|
||||
mv temp.json .workflow/WFS-session/workflow-session.json
|
||||
jq '.status = "active"' .workflow/active/WFS-session/workflow-session.json > temp.json
|
||||
mv temp.json .workflow/active/WFS-session/workflow-session.json
|
||||
```
|
||||
|
||||
### Step 5: Add Resume Timestamp
|
||||
```bash
|
||||
jq '.resumed_at = "'$(date -u +%Y-%m-%dT%H:%M:%SZ)'"' .workflow/WFS-session/workflow-session.json > temp.json
|
||||
mv temp.json .workflow/WFS-session/workflow-session.json
|
||||
```
|
||||
|
||||
### Step 6: Create Active Marker
|
||||
```bash
|
||||
touch .workflow/.active-WFS-session-name
|
||||
jq '.resumed_at = "'$(date -u +%Y-%m-%dT%H:%M:%SZ)'"' .workflow/active/WFS-session/workflow-session.json > temp.json
|
||||
mv temp.json .workflow/active/WFS-session/workflow-session.json
|
||||
```
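
Steps 3 through 5 can also be combined into a single jq pass. A sketch, assuming the most recently touched session file is the one to resume:

```bash
# Resume sketch: set status and resumed_at in one update
session=$(ls -t .workflow/active/WFS-*/workflow-session.json | head -1)
jq --arg now "$(date -u +%Y-%m-%dT%H:%M:%SZ)" '.status = "active" | .resumed_at = $now' \
  "$session" > temp.json && mv temp.json "$session"
```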
|
||||
|
||||
## Simple Bash Commands
|
||||
|
||||
### Basic Operations
|
||||
- **List sessions**: `ls .workflow/WFS-*`
|
||||
- **List sessions**: `ls .workflow/active/WFS-*`
|
||||
- **Check status**: `jq -r '.status' session.json`
|
||||
- **Find recent**: `ls -t .workflow/*/workflow-session.json | head -1`
|
||||
- **Find recent**: `ls -t .workflow/active/*/workflow-session.json | head -1`
|
||||
- **Update status**: `jq '.status = "active"' session.json > temp.json`
|
||||
- **Add timestamp**: `jq '.resumed_at = "'$(date -u +%Y-%m-%dT%H:%M:%SZ)'"'`
|
||||
- **Create marker**: `touch .workflow/.active-session`
|
||||
|
||||
### Resume Result
|
||||
```
|
||||
|
||||
@@ -13,6 +13,35 @@ examples:
|
||||
## Overview
|
||||
Manages workflow sessions with three operation modes: discovery (manual), auto (intelligent), and force-new.
|
||||
|
||||
**Dual Responsibility**:
|
||||
1. **Project-level initialization** (first-time only): Creates `.workflow/project.json` for feature registry
|
||||
2. **Session-level initialization** (always): Creates session directory structure
|
||||
|
||||
## Step 0: Initialize Project State (First-time Only)
|
||||
|
||||
**Executed before all modes** - Ensures project-level state file exists by calling `/workflow:init`.
|
||||
|
||||
### Check and Initialize
|
||||
```bash
|
||||
# Check if project state exists
|
||||
bash(test -f .workflow/project.json && echo "EXISTS" || echo "NOT_FOUND")
|
||||
```
|
||||
|
||||
**If NOT_FOUND**, delegate to `/workflow:init`:
|
||||
```javascript
|
||||
// Call workflow:init for intelligent project analysis
|
||||
SlashCommand({command: "/workflow:init"});
|
||||
|
||||
// Wait for init completion
|
||||
// project.json will be created with comprehensive project overview
|
||||
```
|
||||
|
||||
**Output**:
|
||||
- If EXISTS: `PROJECT_STATE: initialized`
|
||||
- If NOT_FOUND: Calls `/workflow:init` → creates `.workflow/project.json` with full project analysis
|
||||
|
||||
**Note**: `/workflow:init` uses cli-explore-agent to build comprehensive project understanding (technology stack, architecture, key components). This step runs once per project. Subsequent executions skip initialization.
|
||||
|
||||
## Mode 1: Discovery Mode (Default)
|
||||
|
||||
### Usage
|
||||
@@ -20,19 +49,14 @@ Manages workflow sessions with three operation modes: discovery (manual), auto (
|
||||
/workflow:session:start
|
||||
```
|
||||
|
||||
### Step 1: Check Active Sessions
|
||||
### Step 1: List Active Sessions
|
||||
```bash
|
||||
bash(ls .workflow/.active-* 2>/dev/null)
|
||||
bash(ls -1 .workflow/active/ 2>/dev/null | head -5)
|
||||
```
|
||||
|
||||
### Step 2: List All Sessions
|
||||
### Step 2: Display Session Metadata
|
||||
```bash
|
||||
bash(ls -1 .workflow/WFS-* 2>/dev/null | head -5)
|
||||
```
|
||||
|
||||
### Step 3: Display Session Metadata
|
||||
```bash
|
||||
bash(cat .workflow/WFS-promptmaster-platform/workflow-session.json)
|
||||
bash(cat .workflow/active/WFS-promptmaster-platform/workflow-session.json)
|
||||
```
|
||||
|
||||
### Step 4: User Decision
|
||||
@@ -49,7 +73,7 @@ Present session information and wait for user to select or create session.
|
||||
|
||||
### Step 1: Check Active Sessions Count
|
||||
```bash
|
||||
bash(ls .workflow/.active-* 2>/dev/null | wc -l)
|
||||
bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | wc -l)
|
||||
```
|
||||
|
||||
### Step 2a: No Active Sessions → Create New
|
||||
@@ -58,15 +82,12 @@ bash(ls .workflow/.active-* 2>/dev/null | wc -l)
|
||||
bash(echo "implement OAuth2 auth" | sed 's/[^a-zA-Z0-9]/-/g' | tr '[:upper:]' '[:lower:]' | cut -c1-50)
|
||||
|
||||
# Create directory structure
|
||||
bash(mkdir -p .workflow/WFS-implement-oauth2-auth/.process)
|
||||
bash(mkdir -p .workflow/WFS-implement-oauth2-auth/.task)
|
||||
bash(mkdir -p .workflow/WFS-implement-oauth2-auth/.summaries)
|
||||
bash(mkdir -p .workflow/active/WFS-implement-oauth2-auth/.process)
|
||||
bash(mkdir -p .workflow/active/WFS-implement-oauth2-auth/.task)
|
||||
bash(mkdir -p .workflow/active/WFS-implement-oauth2-auth/.summaries)
|
||||
|
||||
# Create metadata
|
||||
bash(echo '{"session_id":"WFS-implement-oauth2-auth","project":"implement OAuth2 auth","status":"planning"}' > .workflow/WFS-implement-oauth2-auth/workflow-session.json)
|
||||
|
||||
# Mark as active
|
||||
bash(touch .workflow/.active-WFS-implement-oauth2-auth)
|
||||
bash(echo '{"session_id":"WFS-implement-oauth2-auth","project":"implement OAuth2 auth","status":"planning"}' > .workflow/active/WFS-implement-oauth2-auth/workflow-session.json)
|
||||
```
|
||||
|
||||
**Output**: `SESSION_ID: WFS-implement-oauth2-auth`
|
||||
@@ -74,10 +95,10 @@ bash(touch .workflow/.active-WFS-implement-oauth2-auth)
|
||||
### Step 2b: Single Active Session → Check Relevance
|
||||
```bash
|
||||
# Extract session ID
|
||||
bash(ls .workflow/.active-* 2>/dev/null | head -1 | xargs basename | sed 's/^\.active-//')
|
||||
bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | head -1 | xargs basename)
|
||||
|
||||
# Read project name from metadata
|
||||
bash(cat .workflow/WFS-promptmaster-platform/workflow-session.json | grep -o '"project":"[^"]*"' | cut -d'"' -f4)
|
||||
bash(cat .workflow/active/WFS-promptmaster-platform/workflow-session.json | grep -o '"project":"[^"]*"' | cut -d'"' -f4)
|
||||
|
||||
# Check keyword match (manual comparison)
|
||||
# If task contains project keywords → Reuse session
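
# Hypothetical sketch (illustrative names only): reuse the session when any word
# from the new task appears in the stored project name
task_desc="implement OAuth2 refresh tokens"
project=$(cat .workflow/active/WFS-promptmaster-platform/workflow-session.json | grep -o '"project":"[^"]*"' | cut -d'"' -f4)
pattern=$(echo "$task_desc" | tr '[:upper:]' '[:lower:]' | tr ' ' '|')
echo "$project" | tr '[:upper:]' '[:lower:]' | grep -qE "$pattern" && echo "REUSE_SESSION" || echo "CREATE_NEW"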
|
||||
@@ -90,7 +111,7 @@ bash(cat .workflow/WFS-promptmaster-platform/workflow-session.json | grep -o '"p
|
||||
### Step 2c: Multiple Active Sessions → Use First
|
||||
```bash
|
||||
# Get first active session
|
||||
bash(ls .workflow/.active-* 2>/dev/null | head -1 | xargs basename | sed 's/^\.active-//')
|
||||
bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | head -1 | xargs basename)
|
||||
|
||||
# Output warning and session ID
|
||||
# WARNING: Multiple active sessions detected
|
||||
@@ -110,25 +131,19 @@ bash(ls .workflow/.active-* 2>/dev/null | head -1 | xargs basename | sed 's/^\.a
|
||||
bash(echo "fix login bug" | sed 's/[^a-zA-Z0-9]/-/g' | tr '[:upper:]' '[:lower:]' | cut -c1-50)
|
||||
|
||||
# Check if exists, add counter if needed
|
||||
bash(ls .workflow/WFS-fix-login-bug 2>/dev/null && echo "WFS-fix-login-bug-2" || echo "WFS-fix-login-bug")
|
||||
bash(ls .workflow/active/WFS-fix-login-bug 2>/dev/null && echo "WFS-fix-login-bug-2" || echo "WFS-fix-login-bug")
|
||||
```
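
If several sessions with the same slug already exist, a small loop generalizes the counter check above (sketch):

```bash
# Increment the numeric suffix until the directory name is free
base="WFS-fix-login-bug"; id="$base"; n=2
while [ -d ".workflow/active/$id" ]; do id="$base-$n"; n=$((n+1)); done
echo "SESSION_ID: $id"
```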
|
||||
|
||||
### Step 2: Create Session Structure
|
||||
```bash
|
||||
bash(mkdir -p .workflow/WFS-fix-login-bug/.process)
|
||||
bash(mkdir -p .workflow/WFS-fix-login-bug/.task)
|
||||
bash(mkdir -p .workflow/WFS-fix-login-bug/.summaries)
|
||||
bash(mkdir -p .workflow/active/WFS-fix-login-bug/.process)
|
||||
bash(mkdir -p .workflow/active/WFS-fix-login-bug/.task)
|
||||
bash(mkdir -p .workflow/active/WFS-fix-login-bug/.summaries)
|
||||
```
|
||||
|
||||
### Step 3: Create Metadata
|
||||
```bash
|
||||
bash(echo '{"session_id":"WFS-fix-login-bug","project":"fix login bug","status":"planning"}' > .workflow/WFS-fix-login-bug/workflow-session.json)
|
||||
```
|
||||
|
||||
### Step 4: Mark Active and Clean Old Markers
|
||||
```bash
|
||||
bash(rm .workflow/.active-* 2>/dev/null)
|
||||
bash(touch .workflow/.active-WFS-fix-login-bug)
|
||||
bash(echo '{"session_id":"WFS-fix-login-bug","project":"fix login bug","status":"planning"}' > .workflow/active/WFS-fix-login-bug/workflow-session.json)
|
||||
```
|
||||
|
||||
**Output**: `SESSION_ID: WFS-fix-login-bug`
|
||||
@@ -173,41 +188,6 @@ SlashCommand(command="/workflow:session:start")
|
||||
SlashCommand(command="/workflow:session:start --new \"experimental feature\"")
|
||||
```
|
||||
|
||||
## Simple Bash Commands
|
||||
|
||||
### Basic Operations
|
||||
```bash
|
||||
# Check active sessions
|
||||
bash(ls .workflow/.active-*)
|
||||
|
||||
# List all sessions
|
||||
bash(ls .workflow/WFS-*)
|
||||
|
||||
# Read session metadata
|
||||
bash(cat .workflow/WFS-[session-id]/workflow-session.json)
|
||||
|
||||
# Create session directories
|
||||
bash(mkdir -p .workflow/WFS-[session-id]/.process)
|
||||
bash(mkdir -p .workflow/WFS-[session-id]/.task)
|
||||
bash(mkdir -p .workflow/WFS-[session-id]/.summaries)
|
||||
|
||||
# Mark session as active
|
||||
bash(touch .workflow/.active-WFS-[session-id])
|
||||
|
||||
# Clean active markers
|
||||
bash(rm .workflow/.active-*)
|
||||
```
|
||||
|
||||
### Generate Session Slug
|
||||
```bash
|
||||
bash(echo "Task Description" | sed 's/[^a-zA-Z0-9]/-/g' | tr '[:upper:]' '[:lower:]' | cut -c1-50)
|
||||
```
|
||||
|
||||
### Create Metadata JSON
|
||||
```bash
|
||||
bash(echo '{"session_id":"WFS-test","project":"test project","status":"planning"}' > .workflow/WFS-test/workflow-session.json)
|
||||
```
|
||||
|
||||
## Session ID Format
|
||||
- Pattern: `WFS-[lowercase-slug]`
|
||||
- Characters: `a-z`, `0-9`, `-` only
|
||||
|
||||
@@ -1,47 +1,183 @@
|
||||
---
|
||||
name: workflow:status
|
||||
description: Generate on-demand task status views from JSON task data with optional task-id filtering for detailed view
|
||||
argument-hint: "[optional: task-id]"
|
||||
description: Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view
|
||||
argument-hint: "[optional: --project|task-id|--validate]"
|
||||
---
|
||||
|
||||
# Workflow Status Command (/workflow:status)
|
||||
|
||||
## Overview
|
||||
Generates on-demand views from JSON task data. No synchronization needed - all views are calculated from the current state of JSON files.
|
||||
Generates on-demand views from project and session data. Supports two modes:
|
||||
1. **Project Overview** (`--project`): Shows completed features and project statistics
|
||||
2. **Workflow Tasks** (default): Shows current session task progress
|
||||
|
||||
No synchronization needed - all views are calculated from current JSON state.
|
||||
|
||||
## Usage
|
||||
```bash
|
||||
/workflow:status # Show current workflow overview
|
||||
/workflow:status # Show current workflow session overview
|
||||
/workflow:status --project # Show project-level feature registry
|
||||
/workflow:status impl-1 # Show specific task details
|
||||
/workflow:status --validate # Validate workflow integrity
|
||||
```
|
||||
|
||||
## Implementation Flow
|
||||
|
||||
### Mode Selection
|
||||
|
||||
**Check for --project flag**:
|
||||
- If `--project` flag present → Execute **Project Overview Mode**
|
||||
- Otherwise → Execute **Workflow Session Mode** (default)
|
||||
|
||||
## Project Overview Mode
|
||||
|
||||
### Step 1: Check Project State
|
||||
```bash
|
||||
bash(test -f .workflow/project.json && echo "EXISTS" || echo "NOT_FOUND")
|
||||
```
|
||||
|
||||
**If NOT_FOUND**:
|
||||
```
|
||||
No project state found.
|
||||
Run /workflow:session:start to initialize project.
|
||||
```
|
||||
|
||||
### Step 2: Read Project Data
|
||||
```bash
|
||||
bash(cat .workflow/project.json)
|
||||
```
|
||||
|
||||
### Step 3: Parse and Display
|
||||
|
||||
**Data Processing**:
|
||||
```javascript
|
||||
const projectData = JSON.parse(Read('.workflow/project.json'));
|
||||
const features = projectData.features || [];
|
||||
const stats = projectData.statistics || {};
|
||||
const overview = projectData.overview || null;
|
||||
|
||||
// Sort features by implementation date (newest first)
|
||||
const sortedFeatures = features.sort((a, b) =>
|
||||
  new Date(b.timeline?.implemented_at || b.implemented_at) - new Date(a.timeline?.implemented_at || a.implemented_at)
|
||||
);
|
||||
```
|
||||
|
||||
**Output Format** (with extended overview):
|
||||
```
|
||||
## Project: ${projectData.project_name}
|
||||
Initialized: ${projectData.initialized_at}
|
||||
|
||||
${overview ? `
|
||||
### Overview
|
||||
${overview.description}
|
||||
|
||||
**Technology Stack**:
|
||||
${overview.technology_stack.languages.map(l => `- ${l.name}${l.primary ? ' (primary)' : ''}: ${l.file_count} files`).join('\n')}
|
||||
Frameworks: ${overview.technology_stack.frameworks.join(', ')}
|
||||
|
||||
**Architecture**:
|
||||
Style: ${overview.architecture.style}
|
||||
Patterns: ${overview.architecture.patterns.join(', ')}
|
||||
|
||||
**Key Components** (${overview.key_components.length}):
|
||||
${overview.key_components.map(c => `- ${c.name} (${c.path})\n ${c.description}`).join('\n')}
|
||||
|
||||
**Metrics**:
|
||||
- Files: ${overview.metrics.total_files}
|
||||
- Lines of Code: ${overview.metrics.lines_of_code}
|
||||
- Complexity: ${overview.metrics.complexity}
|
||||
|
||||
---
|
||||
` : ''}
|
||||
|
||||
### Completed Features (${stats.total_features})
|
||||
|
||||
${sortedFeatures.map(f => `
|
||||
- ${f.title} (${f.timeline?.implemented_at || f.implemented_at})
|
||||
${f.description}
|
||||
Tags: ${f.tags?.join(', ') || 'none'}
|
||||
Session: ${f.traceability?.session_id || f.session_id}
|
||||
Archive: ${f.traceability?.archive_path || 'unknown'}
|
||||
${f.traceability?.commit_hash ? `Commit: ${f.traceability.commit_hash}` : ''}
|
||||
`).join('\n')}
|
||||
|
||||
### Project Statistics
|
||||
- Total Features: ${stats.total_features}
|
||||
- Total Sessions: ${stats.total_sessions}
|
||||
- Last Updated: ${stats.last_updated}
|
||||
|
||||
### Quick Access
|
||||
- View session details: /workflow:status
|
||||
- Archive query: jq '.[] | select(.session_id == "SESSION_ID")' .workflow/archives/manifest.json
|
||||
- Documentation: .workflow/docs/${projectData.project_name}/
|
||||
|
||||
### Query Commands
|
||||
# Find by tag
|
||||
cat .workflow/project.json | jq '.features[] | select(.tags[] == "auth")'
|
||||
|
||||
# View archive
|
||||
cat ${feature.traceability.archive_path}/IMPL_PLAN.md
|
||||
|
||||
# List all tags
|
||||
cat .workflow/project.json | jq -r '.features[].tags[]' | sort -u
|
||||
```
|
||||
|
||||
**Empty State**:
|
||||
```
|
||||
## Project: ${projectData.project_name}
|
||||
Initialized: ${projectData.initialized_at}
|
||||
|
||||
No features completed yet.
|
||||
|
||||
Complete your first workflow session to add features:
|
||||
1. /workflow:plan "feature description"
|
||||
2. /workflow:execute
|
||||
3. /workflow:session:complete
|
||||
```
|
||||
|
||||
### Step 4: Show Recent Sessions (Optional)
|
||||
|
||||
```bash
|
||||
# List 5 most recent archived sessions
|
||||
bash(ls -1dt .workflow/archives/WFS-* 2>/dev/null | head -5 | xargs -I {} basename {})
|
||||
```
|
||||
|
||||
**Output**:
|
||||
```
|
||||
### Recent Sessions
|
||||
- WFS-auth-system (archived)
|
||||
- WFS-payment-flow (archived)
|
||||
- WFS-user-dashboard (archived)
|
||||
|
||||
Use /workflow:session:complete to archive current session.
|
||||
```
|
||||
|
||||
## Workflow Session Mode (Default)
|
||||
|
||||
### Step 1: Find Active Session
|
||||
```bash
|
||||
find .workflow/ -name ".active-*" -type f 2>/dev/null | head -1
|
||||
find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | head -1
|
||||
```
|
||||
|
||||
### Step 2: Load Session Data
|
||||
```bash
|
||||
cat .workflow/WFS-session/workflow-session.json
|
||||
cat .workflow/active/WFS-session/workflow-session.json
|
||||
```
|
||||
|
||||
### Step 3: Scan Task Files
|
||||
```bash
|
||||
find .workflow/WFS-session/.task/ -name "*.json" -type f 2>/dev/null
|
||||
find .workflow/active/WFS-session/.task/ -name "*.json" -type f 2>/dev/null
|
||||
```
|
||||
|
||||
### Step 4: Generate Task Status
|
||||
```bash
|
||||
cat .workflow/WFS-session/.task/impl-1.json | jq -r '.status'
|
||||
cat .workflow/active/WFS-session/.task/impl-1.json | jq -r '.status'
|
||||
```
|
||||
|
||||
### Step 5: Count Task Progress
|
||||
```bash
|
||||
find .workflow/WFS-session/.task/ -name "*.json" -type f | wc -l
|
||||
find .workflow/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null | wc -l
|
||||
find .workflow/active/WFS-session/.task/ -name "*.json" -type f | wc -l
|
||||
find .workflow/active/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null | wc -l
|
||||
```
|
||||
|
||||
### Step 6: Display Overview
|
||||
@@ -56,64 +192,4 @@ find .workflow/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null | wc -l
|
||||
|
||||
## Completed Tasks
|
||||
- [COMPLETED] impl-0: Setup completed
|
||||
```
|
||||
|
||||
## Simple Bash Commands
|
||||
|
||||
### Basic Operations
|
||||
- **Find active session**: `find .workflow/ -name ".active-*" -type f`
|
||||
- **Read session info**: `cat .workflow/session/workflow-session.json`
|
||||
- **List tasks**: `find .workflow/session/.task/ -name "*.json" -type f`
|
||||
- **Check task status**: `cat task.json | jq -r '.status'`
|
||||
- **Count completed**: `find .summaries/ -name "*.md" -type f | wc -l`
|
||||
|
||||
### Task Status Check
|
||||
- **pending**: Not started yet
|
||||
- **active**: Currently in progress
|
||||
- **completed**: Finished with summary
|
||||
- **blocked**: Waiting for dependencies
|
||||
|
||||
### Validation Commands
|
||||
```bash
|
||||
# Check session exists
|
||||
test -f .workflow/.active-* && echo "Session active"
|
||||
|
||||
# Validate task files
|
||||
for f in .workflow/session/.task/*.json; do jq empty "$f" && echo "Valid: $f"; done
|
||||
|
||||
# Check summaries match
|
||||
find .task/ -name "*.json" -type f | wc -l
|
||||
find .summaries/ -name "*.md" -type f 2>/dev/null | wc -l
|
||||
```
|
||||
|
||||
## Simple Output Format
|
||||
|
||||
### Default Overview
|
||||
```
|
||||
Session: WFS-user-auth
|
||||
Status: ACTIVE
|
||||
Progress: 5/12 tasks
|
||||
|
||||
Current: impl-3 (Building API endpoints)
|
||||
Next: impl-4 (Adding authentication)
|
||||
Completed: impl-1, impl-2
|
||||
```
|
||||
|
||||
### Task Details
|
||||
```
|
||||
Task: impl-1
|
||||
Title: Build authentication module
|
||||
Status: completed
|
||||
Agent: code-developer
|
||||
Created: 2025-09-15
|
||||
Completed: 2025-09-15
|
||||
Summary: .summaries/impl-1-summary.md
|
||||
```
|
||||
|
||||
### Validation Results
|
||||
```
|
||||
Session file valid
|
||||
8 task files found
|
||||
3 summaries found
|
||||
5 tasks pending completion
|
||||
```
|
||||
@@ -70,7 +70,7 @@ TEST_FOCUS: [Test scenarios]
|
||||
|
||||
**Parse Output**:
|
||||
- Extract: context-package.json path (store as `contextPath`)
|
||||
- Typical pattern: `.workflow/[sessionId]/.process/context-package.json`
|
||||
- Typical pattern: `.workflow/active/[sessionId]/.process/context-package.json`
|
||||
|
||||
**Validation**:
|
||||
- Context package path extracted
|
||||
@@ -91,7 +91,7 @@ TEST_FOCUS: [Test scenarios]
|
||||
- Related components and integration points
|
||||
- Test framework detection
|
||||
|
||||
**Parse**: Extract testContextPath (`.workflow/[sessionId]/.process/test-context-package.json`)
|
||||
**Parse**: Extract testContextPath (`.workflow/active/[sessionId]/.process/test-context-package.json`)
|
||||
|
||||
**Benefits**:
|
||||
- Makes TDD aware of existing environment
|
||||
@@ -153,7 +153,7 @@ TEST_FOCUS: [Test scenarios]
|
||||
- Verify: CONFLICT_RESOLUTION.md file path (if executed)
|
||||
|
||||
**Validation**:
|
||||
- File `.workflow/[sessionId]/.process/CONFLICT_RESOLUTION.md` exists (if executed)
|
||||
- File `.workflow/active/[sessionId]/.process/CONFLICT_RESOLUTION.md` exists (if executed)
|
||||
|
||||
**Skip Behavior**:
|
||||
- If conflict_risk is "none" or "low", skip directly to Phase 5
|
||||
@@ -296,9 +296,9 @@ Structure:
|
||||
[...]
|
||||
|
||||
Plans generated:
|
||||
- Unified Implementation Plan: .workflow/[sessionId]/IMPL_PLAN.md
|
||||
- Unified Implementation Plan: .workflow/active/[sessionId]/IMPL_PLAN.md
|
||||
(includes TDD Implementation Tasks section with workflow_type: "tdd")
|
||||
- Task List: .workflow/[sessionId]/TODO_LIST.md
|
||||
- Task List: .workflow/active/[sessionId]/TODO_LIST.md
|
||||
(with internal TDD phase indicators)
|
||||
|
||||
TDD Configuration:
|
||||
@@ -417,110 +417,6 @@ Convert user input to TDD-structured format:
|
||||
- **Command failure**: Keep phase in_progress, report error
|
||||
- **TDD validation failure**: Report incomplete chains or wrong dependencies
|
||||
|
||||
## TDD Workflow Enhancements
|
||||
|
||||
### Overview
|
||||
The TDD workflow has been significantly enhanced by integrating best practices from both traditional `plan --agent` and `test-gen` workflows, creating a hybrid approach that bridges the gap between idealized TDD and real-world development complexity.
|
||||
|
||||
### Key Improvements
|
||||
|
||||
#### 1. Test Coverage Analysis (Phase 3)
|
||||
**Adopted from test-gen workflow**
|
||||
|
||||
Before planning TDD tasks, the workflow now analyzes the existing codebase:
|
||||
- Detects existing test patterns and conventions
|
||||
- Identifies current test coverage
|
||||
- Discovers related components and integration points
|
||||
- Detects test framework automatically
|
||||
|
||||
**Benefits**:
|
||||
- Context-aware TDD planning
|
||||
- Avoids duplicate test creation
|
||||
- Enables integration with existing tests
|
||||
- No longer assumes greenfield scenarios
|
||||
|
||||
#### 2. Iterative Green Phase with Test-Fix Cycle
|
||||
**Adopted from test-gen workflow**
|
||||
|
||||
IMPL (Green phase) tasks now include automatic test-fix cycle for resilient implementation:
|
||||
|
||||
**Enhanced IMPL Task Flow**:
|
||||
```
|
||||
1. Write minimal implementation code
|
||||
2. Execute test suite
|
||||
3. IF tests pass → Complete task
|
||||
4. IF tests fail → Enter fix cycle:
|
||||
a. Gemini diagnoses with bug-fix template
|
||||
b. Apply fix (manual or Codex)
|
||||
c. Retest
|
||||
d. Repeat (max 3 iterations)
|
||||
5. IF max iterations → Auto-revert changes 🔄
|
||||
```
|
||||
|
||||
**Benefits**:
|
||||
- Faster feedback within Green phase
|
||||
- Autonomous recovery from implementation errors
|
||||
- Systematic debugging with Gemini
|
||||
- Safe rollback prevents broken state
|
||||
|
||||
#### 3. Agent-Driven Planning
|
||||
**From plan --agent workflow**
|
||||
|
||||
Supports action-planning-agent for more autonomous TDD planning with:
|
||||
- MCP tool integration (code-index, exa)
|
||||
- Memory-first principles
|
||||
- Brainstorming artifact integration
|
||||
- Task merging over decomposition
|
||||
|
||||
### Workflow Comparison
|
||||
|
||||
| Aspect | Previous | Current (Optimized) |
|
||||
|--------|----------|---------------------|
|
||||
| **Phases** | 6 (with test coverage) | 7 (added concept verification) |
|
||||
| **Context** | Greenfield assumption | Existing codebase aware |
|
||||
| **Task Structure** | 1 feature = 3 tasks (TEST/IMPL/REFACTOR) | 1 feature = 1 task (internal TDD cycle) |
|
||||
| **Task Count** | 5 features = 15 tasks | 5 features = 5 tasks (70% reduction) |
|
||||
| **Green Phase** | Single implementation | Iterative with fix cycle |
|
||||
| **Failure Handling** | Manual intervention | Auto-diagnose + fix + revert |
|
||||
| **Test Analysis** | None | Deep coverage analysis |
|
||||
| **Feedback Loop** | Post-execution | During Green phase |
|
||||
| **Task Management** | High overhead (15 tasks) | Low overhead (5 tasks) |
|
||||
| **Execution Efficiency** | Frequent context switching | Continuous context per feature |
|
||||
|
||||
### Migration Notes
|
||||
|
||||
**Backward Compatibility**: Fully compatible
|
||||
- Existing TDD workflows continue to work
|
||||
- New features are additive, not breaking
|
||||
- Phase 3 can be skipped if test-context-gather not available
|
||||
|
||||
**Session Structure**:
|
||||
```
|
||||
.workflow/WFS-xxx/
|
||||
├── IMPL_PLAN.md (unified plan with TDD Implementation Tasks section)
|
||||
├── TODO_LIST.md (with internal TDD phase indicators)
|
||||
├── .process/
|
||||
│ ├── context-package.json
|
||||
│ ├── test-context-package.json
|
||||
│ ├── ANALYSIS_RESULTS.md (enhanced with TDD breakdown)
|
||||
│ └── green-fix-iteration-*.md (fix logs from Green phase cycles)
|
||||
└── .task/
|
||||
├── IMPL-1.json (Complete TDD task: Red-Green-Refactor internally)
|
||||
├── IMPL-2.json (Complete TDD task)
|
||||
├── IMPL-3.json (Complex feature container, if needed)
|
||||
├── IMPL-3.1.json (Complex feature subtask, if needed)
|
||||
└── IMPL-3.2.json (Complex feature subtask, if needed)
|
||||
```
|
||||
|
||||
**File Count Comparison**:
|
||||
- **Old structure**: 5 features = 15 task files (TEST/IMPL/REFACTOR × 5)
|
||||
- **New structure**: 5 features = 5 task files (IMPL-N × 5)
|
||||
- **Complex features**: Add container + subtasks only when necessary
|
||||
|
||||
**Configuration Options** (in IMPL tasks):
|
||||
- `meta.max_iterations`: Fix attempts (default: 3)
|
||||
- `meta.use_codex`: Auto-fix mode (default: false)
|
||||
|
||||
## Related Commands
|
||||
|
||||
**Prerequisite Commands**:
|
||||
|
||||
@@ -28,7 +28,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(gemini:*)
|
||||
sessionId = argument
|
||||
|
||||
# Else auto-detect active session
|
||||
find .workflow/ -name '.active-*' | head -1 | sed 's/.*active-//'
|
||||
find .workflow/active/ -name "WFS-*" -type d | head -1 | sed 's/.*\///'
|
||||
```
|
||||
|
||||
**Extract**: sessionId
|
||||
@@ -44,18 +44,18 @@ find .workflow/ -name '.active-*' | head -1 | sed 's/.*active-//'
|
||||
|
||||
```bash
|
||||
# Load all task JSONs
|
||||
find .workflow/{sessionId}/.task/ -name '*.json'
|
||||
find .workflow/active/{sessionId}/.task/ -name '*.json'
|
||||
|
||||
# Extract task IDs
|
||||
find .workflow/{sessionId}/.task/ -name '*.json' -exec jq -r '.id' {} \;
|
||||
find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.id' {} \;
|
||||
|
||||
# Check dependencies
|
||||
find .workflow/{sessionId}/.task/ -name 'IMPL-*.json' -exec jq -r '.context.depends_on[]?' {} \;
|
||||
find .workflow/{sessionId}/.task/ -name 'REFACTOR-*.json' -exec jq -r '.context.depends_on[]?' {} \;
|
||||
find .workflow/active/{sessionId}/.task/ -name 'IMPL-*.json' -exec jq -r '.context.depends_on[]?' {} \;
|
||||
find .workflow/active/{sessionId}/.task/ -name 'REFACTOR-*.json' -exec jq -r '.context.depends_on[]?' {} \;
|
||||
|
||||
# Check meta fields
|
||||
find .workflow/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.tdd_phase' {} \;
|
||||
find .workflow/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent' {} \;
|
||||
find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.tdd_phase' {} \;
|
||||
find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent' {} \;
|
||||
```
|
||||
|
||||
**Validation**:
|
||||
@@ -82,9 +82,9 @@ find .workflow/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent' {} \;
|
||||
- Compliance score
|
||||
|
||||
**Validation**:
|
||||
- `.workflow/{sessionId}/.process/test-results.json` exists
|
||||
- `.workflow/{sessionId}/.process/coverage-report.json` exists
|
||||
- `.workflow/{sessionId}/.process/tdd-cycle-report.md` exists
|
||||
- `.workflow/active/{sessionId}/.process/test-results.json` exists
|
||||
- `.workflow/active/{sessionId}/.process/coverage-report.json` exists
|
||||
- `.workflow/active/{sessionId}/.process/tdd-cycle-report.md` exists
|
||||
|
||||
**TodoWrite**: Mark phase 3 completed, phase 4 in_progress
|
||||
|
||||
@@ -97,7 +97,7 @@ find .workflow/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent' {} \;
|
||||
cd project-root && gemini -p "
|
||||
PURPOSE: Generate TDD compliance report
|
||||
TASK: Analyze TDD workflow execution and generate quality report
|
||||
CONTEXT: @{.workflow/{sessionId}/.task/*.json,.workflow/{sessionId}/.summaries/*,.workflow/{sessionId}/.process/tdd-cycle-report.md}
|
||||
CONTEXT: @{.workflow/active/{sessionId}/.task/*.json,.workflow/active/{sessionId}/.summaries/*,.workflow/active/{sessionId}/.process/tdd-cycle-report.md}
|
||||
EXPECTED:
|
||||
- TDD compliance score (0-100)
|
||||
- Chain completeness verification
|
||||
@@ -106,7 +106,7 @@ EXPECTED:
|
||||
- Red-Green-Refactor cycle validation
|
||||
- Best practices adherence assessment
|
||||
RULES: Focus on TDD best practices and workflow adherence. Be specific about violations and improvements.
|
||||
" > .workflow/{sessionId}/TDD_COMPLIANCE_REPORT.md
|
||||
" > .workflow/active/{sessionId}/TDD_COMPLIANCE_REPORT.md
|
||||
```
|
||||
|
||||
**Output**: TDD_COMPLIANCE_REPORT.md
|
||||
@@ -134,7 +134,7 @@ Function Coverage: {percentage}%
|
||||
|
||||
## Compliance Score: {score}/100
|
||||
|
||||
Detailed report: .workflow/{sessionId}/TDD_COMPLIANCE_REPORT.md
|
||||
Detailed report: .workflow/active/{sessionId}/TDD_COMPLIANCE_REPORT.md
|
||||
|
||||
Recommendations:
|
||||
- Complete missing REFACTOR-3.1 task
|
||||
@@ -168,7 +168,7 @@ TodoWrite({todos: [
|
||||
|
||||
### Chain Validation Algorithm
|
||||
```
|
||||
1. Load all task JSONs from .workflow/{sessionId}/.task/
|
||||
1. Load all task JSONs from .workflow/active/{sessionId}/.task/
|
||||
2. Extract task IDs and group by feature number
|
||||
3. For each feature:
|
||||
- Check TEST-N.M exists
|
||||
@@ -202,7 +202,7 @@ Final Score: Max(0, Base Score - Deductions)
|
||||
|
||||
## Output Files
|
||||
```
|
||||
.workflow/{session-id}/
|
||||
.workflow/active/{session-id}/
|
||||
├── TDD_COMPLIANCE_REPORT.md # Comprehensive compliance report ⭐
|
||||
└── .process/
|
||||
├── test-results.json # From tdd-coverage-analysis
|
||||
@@ -215,8 +215,8 @@ Final Score: Max(0, Base Score - Deductions)
|
||||
### Session Discovery Errors
|
||||
| Error | Cause | Resolution |
|
||||
|-------|-------|------------|
|
||||
| No active session | No .active-* file | Provide session-id explicitly |
|
||||
| Multiple active sessions | Multiple .active-* files | Provide session-id explicitly |
|
||||
| No active session | No WFS-* directories | Provide session-id explicitly |
|
||||
| Multiple active sessions | Multiple WFS-* directories | Provide session-id explicitly |
|
||||
| Session not found | Invalid session-id | Check available sessions |
|
||||
|
||||
### Validation Errors
|
||||
|
||||
@@ -209,13 +209,13 @@ Iteration N (managed by test-cycle-execute orchestrator):
|
||||
"pre_analysis": [
|
||||
{
|
||||
"step": "load_failure_context",
|
||||
"command": "Read(.workflow/{session}/.process/iteration-{N-1}-failures.json)",
|
||||
"command": "Read(.workflow/session/{session}/.process/iteration-{N-1}-failures.json)",
|
||||
"output_to": "previous_failures",
|
||||
"on_error": "skip_optional"
|
||||
},
|
||||
{
|
||||
"step": "load_fix_strategy",
|
||||
"command": "Read(.workflow/{session}/.process/iteration-{N}-strategy.md)",
|
||||
"command": "Read(.workflow/session/{session}/.process/iteration-{N}-strategy.md)",
|
||||
"output_to": "fix_strategy",
|
||||
"on_error": "fail"
|
||||
}
|
||||
@@ -543,7 +543,7 @@ This package is passed to agents via the Task tool's prompt context.
|
||||
"coverage_target": 80
|
||||
},
|
||||
"session": {
|
||||
"workflow_dir": ".workflow/WFS-test-{session}/",
|
||||
"workflow_dir": ".workflow/active/WFS-test-{session}/",
|
||||
"iteration_state_file": ".process/iteration-state.json",
|
||||
"test_results_file": ".process/test-results.json",
|
||||
"fix_history_file": ".process/fix-history.json"
|
||||
@@ -555,7 +555,7 @@ This package is passed to agents via the Task tool's prompt context.
|
||||
|
||||
### Test-Fix Session Files
|
||||
```
|
||||
.workflow/WFS-test-{session}/
|
||||
.workflow/active/WFS-test-{session}/
|
||||
├── workflow-session.json # Session metadata with workflow_type
|
||||
├── IMPL_PLAN.md # Test plan
|
||||
├── TODO_LIST.md # Progress tracking
|
||||
@@ -704,7 +704,7 @@ Task(subagent_type="{meta.agent}",
|
||||
#### Resume from Interruption
|
||||
```bash
|
||||
# Load iteration state
|
||||
iteration_state=$(cat .workflow/{session}/.process/iteration-state.json)
|
||||
iteration_state=$(cat .workflow/active/{session}/.process/iteration-state.json)
|
||||
current_iteration=$(jq -r '.current_iteration' <<< "$iteration_state")
|
||||
|
||||
# Determine resume point
|
||||
@@ -723,7 +723,7 @@ fi
|
||||
git revert HEAD
|
||||
|
||||
# Remove failed fix task
|
||||
rm .workflow/{session}/.task/IMPL-fix-{N}.json
|
||||
rm .workflow/active/{session}/.task/IMPL-fix-{N}.json
|
||||
|
||||
# Restore iteration state
|
||||
jq '.current_iteration -= 1' iteration-state.json > temp.json
|
||||
|
||||
@@ -513,7 +513,7 @@ If quality gate fails:
|
||||
|
||||
### Output Files Structure
|
||||
|
||||
Created in `.workflow/WFS-test-[session]/`:
|
||||
Created in `.workflow/active/WFS-test-[session]/`:
|
||||
|
||||
```
|
||||
WFS-test-[session]/
|
||||
@@ -579,7 +579,7 @@ Test-Fix-Gen Workflow Orchestrator (Dual-Mode Support)
|
||||
└─ Command ends, control returns to user
|
||||
|
||||
Artifacts Created:
|
||||
├── .workflow/WFS-test-[session]/
|
||||
├── .workflow/active/WFS-test-[session]/
|
||||
│ ├── workflow-session.json
|
||||
│ ├── IMPL_PLAN.md
|
||||
│ ├── TODO_LIST.md
|
||||
|
||||
@@ -397,7 +397,7 @@ Test-Gen Workflow Orchestrator
|
||||
└─ Command ends, control returns to user
|
||||
|
||||
Artifacts Created:
|
||||
├── .workflow/WFS-test-[session]/
|
||||
├── .workflow/active/WFS-test-[session]/
|
||||
│ ├── workflow-session.json
|
||||
│ ├── IMPL_PLAN.md
|
||||
│ ├── TODO_LIST.md
|
||||
@@ -444,7 +444,7 @@ See `/workflow:tools:test-task-generate` for complete task JSON schemas.
|
||||
|
||||
## Output Files
|
||||
|
||||
Created in `.workflow/WFS-test-[session]/`:
|
||||
Created in `.workflow/active/WFS-test-[session]/`:
|
||||
- `workflow-session.json` - Session metadata
|
||||
- `.process/test-context-package.json` - Coverage analysis
|
||||
- `.process/TEST_ANALYSIS_RESULTS.md` - Test requirements
|
||||
|
||||
@@ -3,14 +3,19 @@ name: conflict-resolution
|
||||
description: Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen
|
||||
argument-hint: "--session WFS-session-id --context path/to/context-package.json"
|
||||
examples:
|
||||
- /workflow:tools:conflict-resolution --session WFS-auth --context .workflow/WFS-auth/.process/context-package.json
|
||||
- /workflow:tools:conflict-resolution --session WFS-payment --context .workflow/WFS-payment/.process/context-package.json
|
||||
- /workflow:tools:conflict-resolution --session WFS-auth --context .workflow/active/WFS-auth/.process/context-package.json
|
||||
- /workflow:tools:conflict-resolution --session WFS-payment --context .workflow/active/WFS-payment/.process/context-package.json
|
||||
---
|
||||
|
||||
# Conflict Resolution Command
|
||||
|
||||
## Purpose
|
||||
Analyzes conflicts between implementation plans and existing codebase, generating multiple resolution strategies.
|
||||
Analyzes conflicts between implementation plans and existing codebase, **including module scenario uniqueness detection**, generating multiple resolution strategies with **iterative clarification until boundaries are clear**.
|
||||
|
||||
**Key Enhancements**:
|
||||
- **Scenario Uniqueness Detection**: Agent searches all existing modules to identify functional overlaps
|
||||
- **Iterative Clarification Loop**: Unlimited questions per conflict until scenario boundaries are uniquely defined (max 10 rounds)
|
||||
- **Dynamic Re-analysis**: Agent updates strategies based on user clarifications
|
||||
|
||||
**Scope**: Detection and strategy generation only - NO code modification or task creation.
|
||||
|
||||
@@ -21,11 +26,14 @@ Analyzes conflicts between implementation plans and existing codebase, generatin
|
||||
| Responsibility | Description |
|
||||
|---------------|-------------|
|
||||
| **Detect Conflicts** | Analyze plan vs existing code inconsistencies |
|
||||
| **Scenario Uniqueness** | **NEW**: Search and compare new modules with existing modules for functional overlaps |
|
||||
| **Generate Strategies** | Provide 2-4 resolution options per conflict |
|
||||
| **Iterative Clarification** | **NEW**: Ask unlimited questions until scenario boundaries are clear and unique |
|
||||
| **Agent Re-analysis** | **NEW**: Dynamically update strategies based on user clarifications |
|
||||
| **CLI Analysis** | Use Gemini/Qwen (Claude fallback) |
|
||||
| **User Decision** | Present options, never auto-apply |
|
||||
| **User Decision** | Present options ONE BY ONE, never auto-apply |
|
||||
| **Direct Text Output** | Output questions via text directly, NEVER use bash echo/printf |
|
||||
| **Single Output** | `CONFLICT_RESOLUTION.md` with findings |
|
||||
| **Structured Data** | JSON output for programmatic processing, NO file generation |
|
||||
|
||||
## Conflict Categories
|
||||
|
||||
@@ -49,6 +57,13 @@ Analyzes conflicts between implementation plans and existing codebase, generatin
|
||||
- Setup conflicts
|
||||
- Breaking updates
|
||||
|
||||
### 5. Module Scenario Overlap
|
||||
- **NEW**: Functional overlap between new and existing modules
|
||||
- Scenario boundary ambiguity
|
||||
- Duplicate responsibility detection
|
||||
- Module merge/split decisions
|
||||
- **Requires iterative clarification until uniqueness confirmed**
|
||||
|
||||
## Execution Flow
|
||||
|
||||
### Phase 1: Validation
|
||||
@@ -73,23 +88,31 @@ Task(subagent_type="cli-execution-agent", prompt=`
|
||||
|
||||
### 1. Load Context
|
||||
- Read existing files from conflict_detection.existing_files
|
||||
- Load plan from .workflow/{session_id}/.process/context-package.json
|
||||
- Load plan from .workflow/active/{session_id}/.process/context-package.json
|
||||
- Extract role analyses and requirements
|
||||
|
||||
### 2. Execute CLI Analysis
|
||||
### 2. Execute CLI Analysis (Enhanced with Scenario Uniqueness Detection)
|
||||
|
||||
Primary (Gemini):
|
||||
cd {project_root} && gemini -p "
|
||||
PURPOSE: Detect conflicts between plan and codebase
|
||||
PURPOSE: Detect conflicts between plan and codebase, including module scenario overlaps
|
||||
TASK:
|
||||
• Compare architectures
|
||||
• Identify breaking API changes
|
||||
• Detect data model incompatibilities
|
||||
• Assess dependency conflicts
|
||||
• **NEW: Analyze module scenario uniqueness**
|
||||
- Extract new module functionality from plan
|
||||
- Search all existing modules with similar functionality
|
||||
- Compare scenario coverage and identify overlaps
|
||||
- Generate clarification questions for boundary definition
|
||||
MODE: analysis
|
||||
CONTEXT: @{existing_files} @.workflow/{session_id}/**/*
|
||||
EXPECTED: Conflict list with severity ratings
|
||||
RULES: Focus on breaking changes and migration needs
|
||||
CONTEXT: @**/*.ts @**/*.js @**/*.tsx @**/*.jsx @.workflow/active/{session_id}/**/*
|
||||
EXPECTED: Conflict list with severity ratings, including ModuleOverlap conflicts with:
|
||||
- Existing module list with scenarios
|
||||
- Overlap analysis matrix
|
||||
- Targeted clarification questions
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on breaking changes, migration needs, and functional overlaps | analysis=READ-ONLY
|
||||
"
|
||||
|
||||
Fallback: Qwen (same prompt) → Claude (manual analysis)
|
||||
@@ -98,9 +121,11 @@ Task(subagent_type="cli-execution-agent", prompt=`
|
||||
|
||||
Template per conflict:
|
||||
- Severity: Critical/High/Medium
|
||||
- Category: Architecture/API/Data/Dependency
|
||||
- Category: Architecture/API/Data/Dependency/ModuleOverlap
|
||||
- Affected files + impact
|
||||
- **For ModuleOverlap**: Include overlap_analysis with existing modules and scenarios
|
||||
- Options with pros/cons, effort, risk
|
||||
- **For ModuleOverlap strategies**: Add clarification_needed questions for boundary definition
|
||||
- Recommended strategy + rationale
|
||||
|
||||
### 4. Return Structured Conflict Data
|
||||
@@ -116,10 +141,10 @@ Task(subagent_type="cli-execution-agent", prompt=`
|
||||
"id": "CON-001",
|
||||
"brief": "一行中文冲突摘要",
|
||||
"severity": "Critical|High|Medium",
|
||||
"category": "Architecture|API|Data|Dependency",
|
||||
"category": "Architecture|API|Data|Dependency|ModuleOverlap",
|
||||
"affected_files": [
|
||||
".workflow/{session}/.brainstorm/guidance-specification.md",
|
||||
".workflow/{session}/.brainstorm/system-architect/analysis.md"
|
||||
".workflow/active/{session}/.brainstorm/guidance-specification.md",
|
||||
".workflow/active/{session}/.brainstorm/system-architect/analysis.md"
|
||||
],
|
||||
"description": "详细描述冲突 - 什么不兼容",
|
||||
"impact": {
|
||||
@@ -128,6 +153,23 @@ Task(subagent_type="cli-execution-agent", prompt=`
|
||||
"migration_required": true|false,
|
||||
"estimated_effort": "人天估计"
|
||||
},
|
||||
"overlap_analysis": {
|
||||
"// NOTE": "仅当 category=ModuleOverlap 时需要此字段",
|
||||
"new_module": {
|
||||
"name": "新模块名称",
|
||||
"scenarios": ["场景1", "场景2", "场景3"],
|
||||
"responsibilities": "职责描述"
|
||||
},
|
||||
"existing_modules": [
|
||||
{
|
||||
"file": "src/existing/module.ts",
|
||||
"name": "现有模块名称",
|
||||
"scenarios": ["场景A", "场景B"],
|
||||
"overlap_scenarios": ["重叠场景1", "重叠场景2"],
|
||||
"responsibilities": "现有模块职责"
|
||||
}
|
||||
]
|
||||
},
|
||||
"strategies": [
|
||||
{
|
||||
"name": "策略名称(中文)",
|
||||
@@ -137,9 +179,15 @@ Task(subagent_type="cli-execution-agent", prompt=`
|
||||
"effort": "时间估计",
|
||||
"pros": ["优点1", "优点2"],
|
||||
"cons": ["缺点1", "缺点2"],
|
||||
"clarification_needed": [
|
||||
"// NOTE: 仅当需要用户进一步澄清时需要此字段(尤其是 ModuleOverlap)",
|
||||
"新模块的核心职责边界是什么?",
|
||||
"如何与现有模块 X 协作?",
|
||||
"哪些场景应该由新模块处理?"
|
||||
],
|
||||
"modifications": [
|
||||
{
|
||||
"file": ".workflow/{session}/.brainstorm/guidance-specification.md",
|
||||
"file": ".workflow/active/{session}/.brainstorm/guidance-specification.md",
|
||||
"section": "## 2. System Architect Decisions",
|
||||
"change_type": "update",
|
||||
"old_content": "原始内容片段(用于定位)",
|
||||
@@ -147,7 +195,7 @@ Task(subagent_type="cli-execution-agent", prompt=`
|
||||
"rationale": "为什么这样改"
|
||||
},
|
||||
{
|
||||
"file": ".workflow/{session}/.brainstorm/system-architect/analysis.md",
|
||||
"file": ".workflow/active/{session}/.brainstorm/system-architect/analysis.md",
|
||||
"section": "## Design Decisions",
|
||||
"change_type": "update",
|
||||
"old_content": "原始内容片段",
|
||||
@@ -204,155 +252,251 @@ Task(subagent_type="cli-execution-agent", prompt=`
|
||||
`)
|
||||
```
|
||||
|
||||
**Agent Internal Flow**:
|
||||
**Agent Internal Flow** (Enhanced):
|
||||
```
|
||||
1. Load context package
|
||||
2. Check conflict_risk (exit if none/low)
|
||||
3. Read existing files + plan artifacts
|
||||
4. Run CLI analysis (Gemini→Qwen→Claude)
|
||||
5. Parse conflict findings
|
||||
6. Generate 2-4 strategies per conflict with modifications
|
||||
4. Run CLI analysis (Gemini→Qwen→Claude) with enhanced tasks:
|
||||
- Standard conflict detection (Architecture/API/Data/Dependency)
|
||||
- **NEW: Module scenario uniqueness detection**
|
||||
* Extract new module functionality from plan
|
||||
* Search all existing modules with similar keywords/functionality
|
||||
* Compare scenario coverage and responsibilities
|
||||
* Identify functional overlaps and boundary ambiguities
|
||||
* Generate ModuleOverlap conflicts with overlap_analysis
|
||||
5. Parse conflict findings (including ModuleOverlap category)
|
||||
6. Generate 2-4 strategies per conflict:
|
||||
- Include modifications for each strategy
|
||||
- **For ModuleOverlap**: Add clarification_needed questions for boundary definition
|
||||
7. Return JSON to stdout (NOT file write)
|
||||
8. Return execution log path
|
||||
```
|
||||
|
||||
### Phase 3: User Confirmation via Text Interaction
|
||||
### Phase 3: Iterative User Interaction with Clarification Loop
|
||||
|
||||
**Command parses agent JSON output and presents conflicts to user via text**:
|
||||
**Execution Flow**:
|
||||
```
|
||||
FOR each conflict (逐个处理,无数量限制):
|
||||
clarified = false
|
||||
round = 0
|
||||
userClarifications = []
|
||||
|
||||
WHILE (!clarified && round < 10):
|
||||
round++
|
||||
|
||||
// 1. Display conflict (包含所有关键字段)
|
||||
- category, id, brief, severity, description
|
||||
- IF ModuleOverlap: 展示 overlap_analysis
|
||||
* new_module: {name, scenarios, responsibilities}
|
||||
* existing_modules[]: {file, name, scenarios, overlap_scenarios, responsibilities}
|
||||
|
||||
// 2. Display strategies (2-4个策略 + 自定义选项)
|
||||
- FOR each strategy: {name, approach, complexity, risk, effort, pros, cons}
|
||||
* IF clarification_needed: 展示待澄清问题列表
|
||||
- 自定义选项: {suggestions: modification_suggestions[]}
|
||||
|
||||
// 3. User selects strategy
|
||||
userChoice = readInput()
|
||||
|
||||
IF userChoice == "自定义":
|
||||
customConflicts.push({id, brief, category, suggestions, overlap_analysis})
|
||||
clarified = true
|
||||
BREAK
|
||||
|
||||
selectedStrategy = strategies[userChoice]
|
||||
|
||||
// 4. Clarification loop
|
||||
IF selectedStrategy.clarification_needed.length > 0:
|
||||
// 收集澄清答案
|
||||
FOR each question:
|
||||
answer = readInput()
|
||||
userClarifications.push({question, answer})
|
||||
|
||||
// Agent 重新分析
|
||||
reanalysisResult = Task(cli-execution-agent, prompt={
|
||||
冲突信息: {id, brief, category, 策略}
|
||||
用户澄清: userClarifications[]
|
||||
场景分析: overlap_analysis (if ModuleOverlap)
|
||||
|
||||
输出: {
|
||||
uniqueness_confirmed: bool,
|
||||
rationale: string,
|
||||
updated_strategy: {name, approach, complexity, risk, effort, modifications[]},
|
||||
remaining_questions: [] (如果仍有歧义)
|
||||
}
|
||||
})
|
||||
|
||||
IF reanalysisResult.uniqueness_confirmed:
|
||||
selectedStrategy = updated_strategy
|
||||
selectedStrategy.clarifications = userClarifications
|
||||
clarified = true
|
||||
ELSE:
|
||||
// 更新澄清问题,继续下一轮
|
||||
selectedStrategy.clarification_needed = remaining_questions
|
||||
ELSE:
|
||||
clarified = true
|
||||
|
||||
resolvedConflicts.push({conflict, strategy: selectedStrategy})
|
||||
END WHILE
|
||||
END FOR
|
||||
|
||||
// Build output
|
||||
selectedStrategies = resolvedConflicts.map(r => ({
|
||||
conflict_id, strategy, clarifications[]
|
||||
}))
|
||||
```
|
||||
|
||||
**Key Data Structures**:
|
||||
|
||||
```javascript
|
||||
// 1. Parse agent JSON output
|
||||
const conflictData = JSON.parse(agentOutput);
|
||||
const conflicts = conflictData.conflicts; // No 4-conflict limit
|
||||
|
||||
// 2. Format conflicts as text output (max 10 per round)
|
||||
const batchSize = 10;
|
||||
const batches = chunkArray(conflicts, batchSize);
|
||||
|
||||
for (const [batchIdx, batch] of batches.entries()) {
|
||||
const totalBatches = batches.length;
|
||||
|
||||
// Output batch header
|
||||
console.log(`===== 冲突解决 (第 ${batchIdx + 1}/${totalBatches} 轮) =====\n`);
|
||||
|
||||
// Output each conflict in batch
|
||||
batch.forEach((conflict, idx) => {
|
||||
const questionNum = batchIdx * batchSize + idx + 1;
|
||||
console.log(`【问题${questionNum} - ${conflict.category}】${conflict.id}: ${conflict.brief}`);
|
||||
|
||||
conflict.strategies.forEach((strategy, sIdx) => {
|
||||
const optionLetter = String.fromCharCode(97 + sIdx); // a, b, c, ...
|
||||
console.log(`${optionLetter}) ${strategy.name}`);
|
||||
console.log(` 说明:${strategy.approach}`);
|
||||
console.log(` 复杂度: ${strategy.complexity} | 风险: ${strategy.risk} | 工作量: ${strategy.effort}`);
|
||||
});
|
||||
|
||||
// Add custom option
|
||||
const customLetter = String.fromCharCode(97 + conflict.strategies.length);
|
||||
console.log(`${customLetter}) 自定义修改`);
|
||||
console.log(` 说明:根据修改建议自行处理,不应用预设策略`);
|
||||
|
||||
// Show modification suggestions
|
||||
if (conflict.modification_suggestions && conflict.modification_suggestions.length > 0) {
|
||||
console.log(` 修改建议:`);
|
||||
conflict.modification_suggestions.forEach(suggestion => {
|
||||
console.log(` - ${suggestion}`);
|
||||
});
|
||||
}
|
||||
console.log();
|
||||
});
|
||||
|
||||
console.log(`请回答 (格式: 1a 2b 3c...):`);
|
||||
|
||||
// Wait for user input
|
||||
const userInput = await readUserInput();
|
||||
|
||||
// Parse answers
|
||||
const answers = parseUserAnswers(userInput, batch);
|
||||
// Custom conflict tracking
|
||||
customConflicts[] = {
|
||||
id, brief, category,
|
||||
suggestions: modification_suggestions[],
|
||||
overlap_analysis: { new_module{}, existing_modules[] } // ModuleOverlap only
|
||||
}
|
||||
|
||||
// 3. Build selected strategies (exclude custom selections)
|
||||
const selectedStrategies = answers.filter(a => !a.isCustom).map(a => a.strategy);
|
||||
const customConflicts = answers.filter(a => a.isCustom).map(a => ({
|
||||
id: a.conflict.id,
|
||||
brief: a.conflict.brief,
|
||||
suggestions: a.conflict.modification_suggestions
|
||||
}));
|
||||
// Agent re-analysis prompt output
|
||||
{
|
||||
uniqueness_confirmed: bool,
|
||||
rationale: string,
|
||||
updated_strategy: {
|
||||
name, approach, complexity, risk, effort,
|
||||
modifications: [{file, section, change_type, old_content, new_content, rationale}]
|
||||
},
|
||||
remaining_questions: string[]
|
||||
}
|
||||
```
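The block above calls `chunkArray` and `parseUserAnswers` without defining them anywhere in this command. A minimal sketch of what they could look like, assuming the `1a 2b`-style answer format shown above; the function names, the answer grammar, and the per-round numbering are assumptions, not part of the agent contract:

```javascript
// Hypothetical helpers referenced above but not defined in this command.
// chunkArray splits the conflict list into rounds of `size` items.
function chunkArray(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// parseUserAnswers interprets answers such as "1a 2b" against the current batch.
// Assumption: question numbers restart at 1 for each round; if the global numbering
// (batchIdx * batchSize + idx + 1) is kept, subtract the batch offset first.
function parseUserAnswers(userInput, batch) {
  return userInput
    .trim()
    .split(/\s+/)
    .map(token => {
      const match = token.match(/^(\d+)([a-z])$/);
      if (!match) return null;
      const conflict = batch[Number(match[1]) - 1];
      if (!conflict) return null;
      const optionIdx = match[2].charCodeAt(0) - 97; // a=0, b=1, ...
      const isCustom = optionIdx === conflict.strategies.length; // last option = 自定义修改
      return {
        conflict,
        isCustom,
        strategy: isCustom ? null : conflict.strategies[optionIdx] || null
      };
    })
    .filter(Boolean);
}
```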
|
||||
|
||||
**Text Output Example**:
|
||||
**Text Output Example** (展示关键字段):
|
||||
|
||||
```markdown
|
||||
===== 冲突解决 (第 1/1 轮) =====
|
||||
============================================================
|
||||
冲突 1/3 - 第 1 轮
|
||||
============================================================
|
||||
【ModuleOverlap】CON-001: 新增用户认证服务与现有模块功能重叠
|
||||
严重程度: High | 描述: 计划中的 UserAuthService 与现有 AuthManager 场景重叠
|
||||
|
||||
【问题1 - Architecture】CON-001: 现有认证系统与计划不兼容
|
||||
a) 渐进式迁移
|
||||
说明:保留现有系统,逐步迁移到新方案
|
||||
复杂度: Medium | 风险: Low | 工作量: 3-5天
|
||||
b) 完全重写
|
||||
说明:废弃旧系统,从零实现新认证
|
||||
复杂度: High | 风险: Medium | 工作量: 7-10天
|
||||
c) 自定义修改
|
||||
说明:根据修改建议自行处理,不应用预设策略
|
||||
修改建议:
|
||||
- 评估现有认证系统的兼容性,考虑是否可以通过适配器模式桥接
|
||||
- 检查JWT token格式和验证逻辑是否需要调整
|
||||
- 确保用户会话管理与新架构保持一致
|
||||
--- 场景重叠分析 ---
|
||||
新模块: UserAuthService | 场景: 登录, Token验证, 权限, MFA
|
||||
现有模块: AuthManager (src/auth/AuthManager.ts) | 重叠: 登录, Token验证
|
||||
|
||||
【问题2 - Data】CON-002: 数据库 schema 冲突
|
||||
a) 添加迁移脚本
|
||||
说明:创建数据库迁移脚本处理 schema 变更
|
||||
复杂度: Low | 风险: Low | 工作量: 1-2天
|
||||
b) 自定义修改
|
||||
说明:根据修改建议自行处理,不应用预设策略
|
||||
修改建议:
|
||||
- 检查现有表结构是否支持新增字段,避免破坏性变更
|
||||
- 考虑使用数据库版本控制工具(如Flyway或Liquibase)
|
||||
- 准备数据迁移和回滚策略
|
||||
--- 解决策略 ---
|
||||
1) 合并 (Low复杂度 | Low风险 | 2-3天)
|
||||
⚠️ 需澄清: AuthManager是否能承担MFA?
|
||||
|
||||
请回答 (格式: 1a 2b):
|
||||
2) 拆分边界 (Medium复杂度 | Medium风险 | 4-5天)
|
||||
⚠️ 需澄清: 基础/高级认证边界? Token验证归谁?
|
||||
|
||||
3) 自定义修改
|
||||
建议: 评估扩展性; 策略模式分离; 定义接口边界
|
||||
|
||||
请选择 (1-3): > 2
|
||||
|
||||
--- 澄清问答 (第1轮) ---
|
||||
Q: 基础/高级认证边界?
|
||||
A: 基础=密码登录+token验证, 高级=MFA+OAuth+SSO
|
||||
|
||||
Q: Token验证归谁?
|
||||
A: 统一由 AuthManager 负责
|
||||
|
||||
🔄 重新分析...
|
||||
✅ 唯一性已确认 | 理由: 边界清晰 - AuthManager(基础+token), UserAuthService(MFA+OAuth+SSO)
|
||||
|
||||
============================================================
|
||||
冲突 2/3 - 第 1 轮 [下一个冲突]
|
||||
============================================================
|
||||
```
|
||||
|
||||
**User Input Examples**:
|
||||
- `1a 2a` → Conflict 1: 渐进式迁移, Conflict 2: 添加迁移脚本
|
||||
- `1b 2b` → Conflict 1: 完全重写, Conflict 2: 自定义修改
|
||||
- `1c 2c` → Both choose custom modification (user handles manually with suggestions)
|
||||
**Loop Characteristics**: Conflicts handled one at a time | Rounds repeat as needed (capped at 10) | Dynamic question generation | Agent re-analysis decides uniqueness | ModuleOverlap scenario boundary clarification
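A condensed sketch of this per-conflict loop, assuming placeholder helpers: `displayConflict` and `askUser` wrap the text interaction, and `reanalyzeWithAgent` wraps the cli-execution-agent re-analysis call. None of these names are real APIs; they only illustrate the control flow described above:

```javascript
// Sketch of the iterative clarification loop for a single conflict.
async function resolveConflict(conflict) {
  const clarifications = [];
  for (let round = 1; round <= 10; round++) {          // safety limit: 10 rounds
    displayConflict(conflict);                          // brief, severity, overlap_analysis, strategies
    const choice = await askUser("请选择策略 (或自定义):"); // assume an index or "custom"
    if (choice === "custom") {
      return { conflict, custom: true, clarifications };
    }
    const strategy = conflict.strategies[choice];
    const questions = strategy.clarification_needed || [];
    if (questions.length === 0) {
      return { conflict, strategy, clarifications };
    }
    for (const question of questions) {
      clarifications.push({ question, answer: await askUser(question) });
    }
    const result = await reanalyzeWithAgent(conflict, strategy, clarifications);
    if (result.uniqueness_confirmed) {
      return {
        conflict,
        strategy: { ...result.updated_strategy, clarifications },
        clarifications
      };
    }
    strategy.clarification_needed = result.remaining_questions; // ask again next round
  }
  return { conflict, custom: true, clarifications }; // limit reached → manual handling
}
```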
|
||||
|
||||
### Phase 4: Apply Modifications
|
||||
|
||||
```javascript
|
||||
// 1. Extract modifications from selected strategies
|
||||
// 1. Extract modifications from resolved strategies
|
||||
const modifications = [];
|
||||
selectedStrategies.forEach(strategy => {
|
||||
if (strategy !== "skip") {
|
||||
modifications.push(...strategy.modifications);
|
||||
selectedStrategies.forEach(item => {
|
||||
if (item.strategy && item.strategy.modifications) {
|
||||
modifications.push(...item.strategy.modifications.map(mod => ({
|
||||
...mod,
|
||||
conflict_id: item.conflict_id,
|
||||
clarifications: item.clarifications
|
||||
})));
|
||||
}
|
||||
});
|
||||
|
||||
console.log(`\n正在应用 ${modifications.length} 个修改...`);
|
||||
|
||||
// 2. Apply each modification using Edit tool
|
||||
modifications.forEach(mod => {
|
||||
if (mod.change_type === "update") {
|
||||
Edit({
|
||||
file_path: mod.file,
|
||||
old_string: mod.old_content,
|
||||
new_string: mod.new_content
|
||||
});
|
||||
const appliedModifications = [];
|
||||
const failedModifications = [];
|
||||
|
||||
modifications.forEach((mod, idx) => {
|
||||
try {
|
||||
console.log(`[${idx + 1}/${modifications.length}] 修改 ${mod.file}...`);
|
||||
|
||||
if (mod.change_type === "update") {
|
||||
Edit({
|
||||
file_path: mod.file,
|
||||
old_string: mod.old_content,
|
||||
new_string: mod.new_content
|
||||
});
|
||||
} else if (mod.change_type === "add") {
|
||||
// Handle addition - append or insert based on section
|
||||
const fileContent = Read(mod.file);
|
||||
const updated = insertContentAfterSection(fileContent, mod.section, mod.new_content);
|
||||
Write(mod.file, updated);
|
||||
} else if (mod.change_type === "remove") {
|
||||
Edit({
|
||||
file_path: mod.file,
|
||||
old_string: mod.old_content,
|
||||
new_string: ""
|
||||
});
|
||||
}
|
||||
|
||||
appliedModifications.push(mod);
|
||||
console.log(` ✓ 成功`);
|
||||
} catch (error) {
|
||||
console.log(` ✗ 失败: ${error.message}`);
|
||||
failedModifications.push({ ...mod, error: error.message });
|
||||
}
|
||||
// Handle "add" and "remove" similarly
|
||||
});
|
||||
|
||||
// 3. Update context-package.json
|
||||
// 3. Update context-package.json with resolution details
|
||||
const contextPackage = JSON.parse(Read(contextPath));
|
||||
contextPackage.conflict_detection.conflict_risk = "resolved";
|
||||
contextPackage.conflict_detection.resolved_conflicts = conflicts.map(c => c.id);
|
||||
contextPackage.conflict_detection.resolved_conflicts = selectedStrategies.map(s => ({
|
||||
conflict_id: s.conflict_id,
|
||||
strategy_name: s.strategy.name,
|
||||
clarifications: s.clarifications
|
||||
}));
|
||||
contextPackage.conflict_detection.custom_conflicts = customConflicts.map(c => c.id);
|
||||
contextPackage.conflict_detection.resolved_at = new Date().toISOString();
|
||||
Write(contextPath, JSON.stringify(contextPackage, null, 2));
|
||||
|
||||
// 4. Output custom conflict summary (if any)
|
||||
// 4. Output custom conflict summary with overlap analysis (if any)
|
||||
if (customConflicts.length > 0) {
|
||||
console.log("\n===== 需要自定义处理的冲突 =====\n");
|
||||
console.log(`\n${'='.repeat(60)}`);
|
||||
console.log(`需要自定义处理的冲突 (${customConflicts.length})`);
|
||||
console.log(`${'='.repeat(60)}\n`);
|
||||
|
||||
customConflicts.forEach(conflict => {
|
||||
console.log(`【${conflict.id}】${conflict.brief}`);
|
||||
console.log("修改建议:");
|
||||
console.log(`【${conflict.category}】${conflict.id}: ${conflict.brief}`);
|
||||
|
||||
// Show overlap analysis for ModuleOverlap conflicts
|
||||
if (conflict.category === 'ModuleOverlap' && conflict.overlap_analysis) {
|
||||
console.log(`\n场景重叠信息:`);
|
||||
console.log(` 新模块: ${conflict.overlap_analysis.new_module.name}`);
|
||||
console.log(` 场景: ${conflict.overlap_analysis.new_module.scenarios.join(', ')}`);
|
||||
console.log(`\n 与以下模块重叠:`);
|
||||
conflict.overlap_analysis.existing_modules.forEach(mod => {
|
||||
console.log(` - ${mod.name} (${mod.file})`);
|
||||
console.log(` 重叠场景: ${mod.overlap_scenarios.join(', ')}`);
|
||||
});
|
||||
}
|
||||
|
||||
console.log(`\n修改建议:`);
|
||||
conflict.suggestions.forEach(suggestion => {
|
||||
console.log(` - ${suggestion}`);
|
||||
});
|
||||
@@ -360,25 +504,43 @@ if (customConflicts.length > 0) {
|
||||
});
|
||||
}
|
||||
|
||||
// 5. Return summary
|
||||
// 5. Output failure summary (if any)
|
||||
if (failedModifications.length > 0) {
|
||||
console.log(`\n⚠️ 部分修改失败 (${failedModifications.length}):`);
|
||||
failedModifications.forEach(mod => {
|
||||
console.log(` - ${mod.file}: ${mod.error}`);
|
||||
});
|
||||
}
|
||||
|
||||
// 6. Return summary
|
||||
return {
|
||||
resolved: modifications.length,
|
||||
custom: customConflicts.length,
|
||||
modified_files: [...new Set(modifications.map(m => m.file))],
|
||||
custom_conflicts: customConflicts
|
||||
total_conflicts: conflicts.length,
|
||||
resolved_with_strategy: selectedStrategies.length,
|
||||
custom_handling: customConflicts.length,
|
||||
modifications_applied: appliedModifications.length,
|
||||
modifications_failed: failedModifications.length,
|
||||
modified_files: [...new Set(appliedModifications.map(m => m.file))],
|
||||
custom_conflicts: customConflicts,
|
||||
clarification_records: selectedStrategies.filter(s => s.clarifications.length > 0)
|
||||
};
|
||||
```
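The `add` branch above relies on an `insertContentAfterSection` helper that is not defined in this document. One way it could work, assuming Markdown headings delimit sections; both the helper name and this behavior are assumptions:

```javascript
// Hypothetical implementation of insertContentAfterSection: place new content at the
// end of the named section, before the next heading of equal or higher level.
function insertContentAfterSection(fileContent, sectionHeading, newContent) {
  const lines = fileContent.split("\n");
  const start = lines.findIndex(line => line.trim() === sectionHeading.trim());
  if (start === -1) {
    // Section not found: append heading and content at the end so nothing is lost.
    return `${fileContent}\n\n${sectionHeading}\n\n${newContent}\n`;
  }
  const level = (sectionHeading.match(/^#+/) || ["##"])[0].length;
  let end = lines.length;
  for (let i = start + 1; i < lines.length; i++) {
    const heading = lines[i].match(/^(#+)\s/);
    if (heading && heading[1].length <= level) {
      end = i;
      break;
    }
  }
  lines.splice(end, 0, "", newContent.trimEnd(), "");
  return lines.join("\n");
}
```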
|
||||
|
||||
**Validation**:
|
||||
```
|
||||
✓ Agent returns valid JSON structure
|
||||
✓ Text output displays all conflicts (max 10 per round)
|
||||
✓ User selections captured correctly
|
||||
✓ Agent returns valid JSON structure with ModuleOverlap conflicts
|
||||
✓ Conflicts processed ONE BY ONE (not in batches)
|
||||
✓ ModuleOverlap conflicts include overlap_analysis field
|
||||
✓ Strategies with clarification_needed display questions
|
||||
✓ User selections captured correctly per conflict
|
||||
✓ Clarification loop continues until uniqueness confirmed
|
||||
✓ Agent re-analysis returns uniqueness_confirmed and updated_strategy
|
||||
✓ Maximum 10 rounds per conflict safety limit enforced
|
||||
✓ Edit tool successfully applies modifications
|
||||
✓ guidance-specification.md updated
|
||||
✓ Role analyses (*.md) updated
|
||||
✓ context-package.json marked as resolved
|
||||
✓ Agent log saved to .workflow/{session_id}/.chat/
|
||||
✓ context-package.json marked as resolved with clarification records
|
||||
✓ Custom conflicts display overlap_analysis for manual handling
|
||||
✓ Agent log saved to .workflow/active/{session_id}/.chat/
|
||||
```
|
||||
|
||||
## Output Format: Agent JSON Response
|
||||
@@ -435,31 +597,45 @@ If Edit tool fails mid-application:
|
||||
|
||||
**Output**:
|
||||
- Modified files:
|
||||
- `.workflow/{session_id}/.brainstorm/guidance-specification.md`
|
||||
- `.workflow/{session_id}/.brainstorm/{role}/analysis.md`
|
||||
- `.workflow/{session_id}/.process/context-package.json` (conflict_risk → resolved)
|
||||
- `.workflow/active/{session_id}/.brainstorm/guidance-specification.md`
|
||||
- `.workflow/active/{session_id}/.brainstorm/{role}/analysis.md`
|
||||
- `.workflow/active/{session_id}/.process/context-package.json` (conflict_risk → resolved)
|
||||
- NO report file generation
|
||||
|
||||
**User Interaction**:
|
||||
- Text-based strategy selection (max 10 conflicts per round)
|
||||
- **Iterative conflict processing**: One conflict at a time, not in batches
|
||||
- Each conflict: 2-4 strategy options + "自定义修改" option (with suggestions)
|
||||
- **Clarification loop**: As many questions per conflict as needed until uniqueness is confirmed (capped at 10 rounds)
|
||||
- **ModuleOverlap conflicts**: Display overlap_analysis with existing modules
|
||||
- **Agent re-analysis**: Dynamic strategy updates based on user clarifications
|
||||
|
||||
### Success Criteria
|
||||
```
|
||||
✓ CLI analysis returns valid JSON structure
|
||||
✓ Conflicts presented in batches (max 10 per round)
|
||||
✓ CLI analysis returns valid JSON structure with ModuleOverlap category
|
||||
✓ Agent performs scenario uniqueness detection (searches existing modules)
|
||||
✓ Conflicts processed ONE BY ONE with iterative clarification
|
||||
✓ Min 2 strategies per conflict with modifications
|
||||
✓ ModuleOverlap conflicts include overlap_analysis with existing modules
|
||||
✓ Strategies requiring clarification include clarification_needed questions
|
||||
✓ Each conflict includes 2-5 modification_suggestions
|
||||
✓ Text output displays all conflicts correctly with suggestions
|
||||
✓ User selections captured and processed
|
||||
✓ Text output displays conflict with overlap analysis (if ModuleOverlap)
|
||||
✓ User selections captured per conflict
|
||||
✓ Clarification loop continues until uniqueness confirmed (up to 10 rounds per conflict)
|
||||
✓ Agent re-analysis with user clarifications updates strategy
|
||||
✓ Uniqueness confirmation based on clear scenario boundaries
|
||||
✓ Edit tool applies modifications successfully
|
||||
✓ Custom conflicts displayed with suggestions for manual handling
|
||||
✓ Custom conflicts displayed with overlap_analysis for manual handling
|
||||
✓ guidance-specification.md updated with resolved conflicts
|
||||
✓ Role analyses (*.md) updated with resolved conflicts
|
||||
✓ context-package.json marked as "resolved"
|
||||
✓ context-package.json marked as "resolved" with clarification records
|
||||
✓ No CONFLICT_RESOLUTION.md file generated
|
||||
✓ Modification summary includes custom conflict count
|
||||
✓ Agent log saved to .workflow/{session_id}/.chat/
|
||||
✓ Modification summary includes:
|
||||
- Total conflicts
|
||||
- Resolved with strategy (count)
|
||||
- Custom handling (count)
|
||||
- Clarification records
|
||||
- Overlap analysis for custom ModuleOverlap conflicts
|
||||
✓ Agent log saved to .workflow/active/{session_id}/.chat/
|
||||
✓ Error handling robust (validate/retry/degrade)
|
||||
```
|
||||
|
||||
|
||||
@@ -22,7 +22,7 @@ Orchestrator command that invokes `context-search-agent` to gather comprehensive
|
||||
- **Agent Delegation**: Delegate all discovery to `context-search-agent` for autonomous execution
|
||||
- **Detection-First**: Check for existing context-package before executing
|
||||
- **Plan Mode**: Full comprehensive analysis (vs lightweight brainstorm mode)
|
||||
- **Standardized Output**: Generate `.workflow/{session}/.process/context-package.json`
|
||||
- **Standardized Output**: Generate `.workflow/active/{session}/.process/context-package.json`
|
||||
|
||||
## Execution Flow
|
||||
|
||||
@@ -71,9 +71,10 @@ You are executing as context-search-agent (.claude/agents/context-search-agent.m
|
||||
Execute complete context-search-agent workflow for implementation planning:
|
||||
|
||||
### Phase 1: Initialization & Pre-Analysis
|
||||
1. **Detection**: Check for existing context-package (early exit if valid)
|
||||
2. **Foundation**: Initialize code-index, get project structure, load docs
|
||||
3. **Analysis**: Extract keywords, determine scope, classify complexity
|
||||
1. **Project State Loading**: Read and parse `.workflow/project.json`. Use its `overview` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components. If the file doesn't exist, proceed with fresh analysis (see the sketch after this list).
|
||||
2. **Detection**: Check for existing context-package (early exit if valid)
|
||||
3. **Foundation**: Initialize code-index, get project structure, load docs
|
||||
4. **Analysis**: Extract keywords, determine scope, classify complexity based on task description and project state
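A minimal sketch of step 1, assuming `project.json` exposes an `overview` object with the fields referenced in Phase 3 below; the exact schema is an assumption:

```javascript
// Sketch: read .workflow/project.json (if present) and seed project_context from its overview.
const fs = require("fs");

function loadProjectContext(projectJsonPath = ".workflow/project.json") {
  if (!fs.existsSync(projectJsonPath)) {
    return null; // no project state → fall back to fresh analysis
  }
  const overview = JSON.parse(fs.readFileSync(projectJsonPath, "utf8")).overview || {};
  return {
    technology_stack: overview.technology_stack || [],
    architecture: overview.architecture || {},
    key_components: overview.key_components || [],
    entry_points: overview.entry_points || []
  };
}
```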
|
||||
|
||||
### Phase 2: Multi-Source Context Discovery
|
||||
Execute all 4 discovery tracks:
|
||||
@@ -84,16 +85,17 @@ Execute all 4 discovery tracks:
|
||||
|
||||
### Phase 3: Synthesis, Assessment & Packaging
|
||||
1. Apply relevance scoring and build dependency graph
|
||||
2. Synthesize 4-source data (archive > docs > code > web)
|
||||
3. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
|
||||
4. Perform conflict detection with risk assessment
|
||||
5. **Inject historical conflicts** from archive analysis into conflict_detection
|
||||
6. Generate and validate context-package.json
|
||||
2. **Synthesize 4-source data**: Merge findings from all sources (archive > docs > code > web). **Prioritize the context from `project.json`** for architecture and tech stack unless code analysis reveals it's outdated.
|
||||
3. **Populate `project_context`**: Directly use the `overview` from `project.json` to fill the `project_context` section of the output `context-package.json`. Include technology_stack, architecture, key_components, and entry_points.
|
||||
4. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
|
||||
5. Perform conflict detection with risk assessment
|
||||
6. **Inject historical conflicts** from archive analysis into conflict_detection
|
||||
7. Generate and validate context-package.json
|
||||
|
||||
## Output Requirements
|
||||
Complete context-package.json with:
|
||||
- **metadata**: task_description, keywords, complexity, tech_stack, session_id
|
||||
- **project_context**: architecture_patterns, coding_conventions, tech_stack
|
||||
- **project_context**: architecture_patterns, coding_conventions, tech_stack (sourced from `project.json` overview)
|
||||
- **assets**: {documentation[], source_code[], config[], tests[]} with relevance scores
|
||||
- **dependencies**: {internal[], external[]} with dependency graph
|
||||
- **brainstorm_artifacts**: {guidance_specification, role_analyses[], synthesis_output} with content
|
||||
@@ -139,7 +141,7 @@ Refer to `context-search-agent.md` Phase 3.7 for complete `context-package.json`
|
||||
|
||||
**Key Sections**:
|
||||
- **metadata**: Session info, keywords, complexity, tech stack
|
||||
- **project_context**: Architecture patterns, conventions, tech stack
|
||||
- **project_context**: Architecture patterns, conventions, tech stack (populated from `project.json` overview)
|
||||
- **assets**: Categorized files with relevance scores (documentation, source_code, config, tests)
|
||||
- **dependencies**: Internal and external dependency graphs
|
||||
- **brainstorm_artifacts**: Brainstorm documents with full content (if exists)
|
||||
@@ -154,7 +156,7 @@ The context-search-agent MUST perform historical archive analysis as Track 1 in
|
||||
**Step 1: Check for Archive Manifest**
|
||||
```bash
|
||||
# Check if archive manifest exists
|
||||
if [[ -f .workflow/.archives/manifest.json ]]; then
|
||||
if [[ -f .workflow/archives/manifest.json ]]; then
|
||||
# Manifest available for querying
|
||||
fi
|
||||
```
|
||||
@@ -233,7 +235,7 @@ if (historicalConflicts.length > 0 && currentRisk === "low") {
|
||||
### Archive Query Algorithm
|
||||
|
||||
```markdown
|
||||
1. IF .workflow/.archives/manifest.json does NOT exist → Skip Track 1, continue to Track 2
|
||||
1. IF .workflow/archives/manifest.json does NOT exist → Skip Track 1, continue to Track 2
|
||||
2. IF manifest exists:
|
||||
a. Load manifest.json
|
||||
b. Extract keywords from task_description (nouns, verbs, technical terms)
|
||||
@@ -250,33 +252,10 @@ if (historicalConflicts.length > 0 && currentRisk === "low") {
|
||||
3. Continue to Track 2 (reference documentation)
|
||||
```
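A rough illustration of steps 2a–2c, assuming `manifest.json` holds a `sessions` array whose entries carry `keywords` and `conflicts` fields; the real manifest schema may differ:

```javascript
// Sketch: match task keywords against archived sessions and collect their recorded conflicts.
const fs = require("fs");

function queryArchives(taskDescription, manifestPath = ".workflow/archives/manifest.json") {
  if (!fs.existsSync(manifestPath)) {
    return { matches: [], historicalConflicts: [] }; // Track 1 skipped
  }
  const manifest = JSON.parse(fs.readFileSync(manifestPath, "utf8"));
  // Naive keyword extraction: lowercase words longer than three characters.
  const keywords = new Set(
    taskDescription.toLowerCase().split(/\W+/).filter(word => word.length > 3)
  );
  const matches = (manifest.sessions || []).filter(session =>
    (session.keywords || []).some(k => keywords.has(String(k).toLowerCase()))
  );
  const historicalConflicts = matches.flatMap(session => session.conflicts || []);
  return { matches, historicalConflicts };
}
```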
|
||||
|
||||
## Usage Examples
|
||||
|
||||
### Basic Usage
|
||||
```bash
|
||||
/workflow:tools:context-gather --session WFS-auth-feature "Implement JWT authentication with refresh tokens"
|
||||
```
|
||||
## Success Criteria
|
||||
|
||||
- ✅ Valid context-package.json generated in `.workflow/{session}/.process/`
|
||||
- ✅ Contains >80% relevant files based on task keywords
|
||||
- ✅ Execution completes within 2 minutes
|
||||
- ✅ All required schema fields present and valid
|
||||
- ✅ Conflict risk accurately assessed
|
||||
- ✅ Agent reports completion with statistics
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Error | Cause | Resolution |
|
||||
|-------|-------|------------|
|
||||
| Package validation failed | Invalid session_id in existing package | Re-run agent to regenerate |
|
||||
| Agent execution timeout | Large codebase or slow MCP | Increase timeout, check code-index status |
|
||||
| Missing required fields | Agent incomplete execution | Check agent logs, verify schema compliance |
|
||||
| File count exceeds limit | Too many relevant files | Agent should auto-prioritize top 50 by relevance |
|
||||
|
||||
## Notes
|
||||
|
||||
- **Detection-first**: Always check for existing package before invoking agent
|
||||
- **Project.json integration**: Agent reads `.workflow/project.json` as primary source for project context, avoiding redundant analysis
|
||||
- **Agent autonomy**: Agent handles all discovery logic per `.claude/agents/context-search-agent.md`
|
||||
- **No redundancy**: This command is a thin orchestrator; all discovery logic lives in the agent
|
||||
- **Plan-specific**: Use this for implementation planning; brainstorm mode uses direct agent call
|
||||
|
||||
@@ -36,7 +36,7 @@ Autonomous task JSON and IMPL_PLAN.md generation using action-planning-agent wit
|
||||
// Path selected by command based on --cli-execute flag, agent reads it
|
||||
"session_metadata": {
|
||||
// If in memory: use cached content
|
||||
// Else: Load from .workflow/{session-id}/workflow-session.json
|
||||
// Else: Load from .workflow/active/{session-id}/workflow-session.json
|
||||
},
|
||||
"brainstorm_artifacts": {
|
||||
// Loaded from context-package.json → brainstorm_artifacts section
|
||||
@@ -50,10 +50,10 @@ Autonomous task JSON and IMPL_PLAN.md generation using action-planning-agent wit
|
||||
"synthesis_output": {"path": "...", "exists": true},
|
||||
"conflict_resolution": {"path": "...", "exists": true} // if conflict_risk >= medium
|
||||
},
"context_package_path": ".workflow/{session-id}/.process/context-package.json",
"context_package_path": ".workflow/active/{session-id}/.process/context-package.json",
|
||||
"context_package": {
|
||||
// If in memory: use cached content
|
||||
// Else: Load from .workflow/{session-id}/.process/context-package.json
|
||||
// Else: Load from .workflow/active/{session-id}/.process/context-package.json
|
||||
},
|
||||
"mcp_capabilities": {
|
||||
"code_index": true,
|
||||
@@ -67,14 +67,14 @@ Autonomous task JSON and IMPL_PLAN.md generation using action-planning-agent wit
|
||||
1. **Load Session Context** (if not in memory)
|
||||
```javascript
|
||||
if (!memory.has("workflow-session.json")) {
|
||||
Read(.workflow/{session-id}/workflow-session.json)
|
||||
Read(.workflow/active/{session-id}/workflow-session.json)
|
||||
}
|
||||
```
|
||||
|
||||
2. **Load Context Package** (if not in memory)
|
||||
```javascript
|
||||
if (!memory.has("context-package.json")) {
|
||||
Read(.workflow/{session-id}/.process/context-package.json)
|
||||
Read(.workflow/active/{session-id}/.process/context-package.json)
|
||||
}
|
||||
```
|
||||
|
||||
@@ -176,18 +176,18 @@ Refer to: @.claude/agents/action-planning-agent.md for:
|
||||
### Required Outputs Summary
|
||||
|
||||
#### 1. Task JSON Files (.task/IMPL-*.json)
|
||||
- **Location**: `.workflow/{session-id}/.task/`
|
||||
- **Location**: `.workflow/active/{session-id}/.task/`
|
||||
- **Template**: Read from `{template_path}` (pre-selected by command based on `--cli-execute` flag)
|
||||
- **Schema**: 5-field structure (id, title, status, meta, context, flow_control) with artifacts integration
|
||||
- **Details**: See action-planning-agent.md § Task JSON Generation
|
||||
|
||||
#### 2. IMPL_PLAN.md
|
||||
- **Location**: `.workflow/{session-id}/IMPL_PLAN.md`
|
||||
- **Location**: `.workflow/active/{session-id}/IMPL_PLAN.md`
|
||||
- **Template**: `~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt`
|
||||
- **Details**: See action-planning-agent.md § Implementation Plan Creation
|
||||
|
||||
#### 3. TODO_LIST.md
|
||||
- **Location**: `.workflow/{session-id}/TODO_LIST.md`
|
||||
- **Location**: `.workflow/active/{session-id}/TODO_LIST.md`
|
||||
- **Format**: Hierarchical task list with status indicators (▸, [ ], [x]) and JSON links
|
||||
- **Details**: See action-planning-agent.md § TODO List Generation
|
||||
|
||||
@@ -231,13 +231,13 @@ const agentContext = {
|
||||
// Use memory if available, else load
|
||||
session_metadata: memory.has("workflow-session.json")
|
||||
? memory.get("workflow-session.json")
|
||||
: Read(.workflow/WFS-[id]/workflow-session.json),
|
||||
: Read(.workflow/active/WFS-[id]/workflow-session.json),
|
||||
|
||||
context_package_path: ".workflow/WFS-[id]/.process/context-package.json",
|
||||
context_package_path: ".workflow/active/WFS-[id]/.process/context-package.json",
|
||||
|
||||
context_package: memory.has("context-package.json")
|
||||
? memory.get("context-package.json")
|
||||
: Read(".workflow/WFS-[id]/.process/context-package.json"),
|
||||
: Read(".workflow/active/WFS-[id]/.process/context-package.json"),
|
||||
|
||||
// Extract brainstorm artifacts from context package
|
||||
brainstorm_artifacts: extractBrainstormArtifacts(context_package),
|
||||
|
||||
@@ -74,7 +74,7 @@ Autonomous TDD task JSON and IMPL_PLAN.md generation using action-planning-agent
|
||||
"workflow_type": "tdd",
|
||||
"session_metadata": {
|
||||
// If in memory: use cached content
|
||||
// Else: Load from .workflow/{session-id}/workflow-session.json
|
||||
// Else: Load from .workflow/active/{session-id}/workflow-session.json
|
||||
},
|
||||
"brainstorm_artifacts": {
|
||||
// Loaded from context-package.json → brainstorm_artifacts section
|
||||
@@ -88,12 +88,12 @@ Autonomous TDD task JSON and IMPL_PLAN.md generation using action-planning-agent
|
||||
"synthesis_output": {"path": "...", "exists": true},
|
||||
"conflict_resolution": {"path": "...", "exists": true} // if conflict_risk >= medium
|
||||
},
"context_package_path": ".workflow/{session-id}/.process/context-package.json",
"context_package_path": ".workflow/active/{session-id}/.process/context-package.json",
|
||||
"context_package": {
|
||||
// If in memory: use cached content
|
||||
// Else: Load from .workflow/{session-id}/.process/context-package.json
|
||||
// Else: Load from .workflow/active/{session-id}/.process/context-package.json
|
||||
},
"test_context_package_path": ".workflow/{session-id}/.process/test-context-package.json",
"test_context_package_path": ".workflow/active/{session-id}/.process/test-context-package.json",
|
||||
"test_context_package": {
|
||||
// Existing test patterns and coverage analysis
|
||||
},
|
||||
@@ -109,21 +109,21 @@ Autonomous TDD task JSON and IMPL_PLAN.md generation using action-planning-agent
|
||||
1. **Load Session Context** (if not in memory)
|
||||
```javascript
|
||||
if (!memory.has("workflow-session.json")) {
|
||||
Read(.workflow/{session-id}/workflow-session.json)
|
||||
Read(.workflow/active/{session-id}/workflow-session.json)
|
||||
}
|
||||
```
|
||||
|
||||
2. **Load Context Package** (if not in memory)
|
||||
```javascript
|
||||
if (!memory.has("context-package.json")) {
|
||||
Read(.workflow/{session-id}/.process/context-package.json)
|
||||
Read(.workflow/active/{session-id}/.process/context-package.json)
|
||||
}
|
||||
```
|
||||
|
||||
3. **Load Test Context Package** (if not in memory)
|
||||
```javascript
|
||||
if (!memory.has("test-context-package.json")) {
|
||||
Read(.workflow/{session-id}/.process/test-context-package.json)
|
||||
Read(.workflow/active/{session-id}/.process/test-context-package.json)
|
||||
}
|
||||
```
|
||||
|
||||
@@ -245,7 +245,7 @@ Refer to: @.claude/agents/action-planning-agent.md for:
|
||||
#### Required Outputs Summary
|
||||
|
||||
##### 1. TDD Task JSON Files (.task/IMPL-*.json)
|
||||
- **Location**: `.workflow/{session-id}/.task/`
|
||||
- **Location**: `.workflow/active/{session-id}/.task/`
|
||||
- **Template**: Read from `{template_path}` (pre-selected by command based on `--cli-execute` flag)
|
||||
- **Schema**: 5-field structure with TDD-specific metadata
|
||||
- `meta.tdd_workflow`: true (REQUIRED)
|
||||
@@ -259,14 +259,14 @@ Refer to: @.claude/agents/action-planning-agent.md for:
|
||||
- **Details**: See action-planning-agent.md § TDD Task JSON Generation
|
||||
|
||||
##### 2. IMPL_PLAN.md (TDD Variant)
|
||||
- **Location**: `.workflow/{session-id}/IMPL_PLAN.md`
|
||||
- **Location**: `.workflow/active/{session-id}/IMPL_PLAN.md`
|
||||
- **Template**: `~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt`
|
||||
- **TDD-Specific Frontmatter**: workflow_type="tdd", tdd_workflow=true, feature_count, task_breakdown
|
||||
- **TDD Implementation Tasks Section**: Feature-by-feature with internal Red-Green-Refactor cycles
|
||||
- **Details**: See action-planning-agent.md § TDD Implementation Plan Creation
|
||||
|
||||
##### 3. TODO_LIST.md
|
||||
- **Location**: `.workflow/{session-id}/TODO_LIST.md`
|
||||
- **Location**: `.workflow/active/{session-id}/TODO_LIST.md`
|
||||
- **Format**: Hierarchical task list with internal TDD phase indicators (Red → Green → Refactor)
|
||||
- **Status**: ▸ (container), [ ] (pending), [x] (completed)
|
||||
- **Details**: See action-planning-agent.md § TODO List Generation
|
||||
@@ -338,19 +338,19 @@ const agentContext = {
|
||||
// Use memory if available, else load
|
||||
session_metadata: memory.has("workflow-session.json")
|
||||
? memory.get("workflow-session.json")
|
||||
: Read(.workflow/WFS-[id]/workflow-session.json),
|
||||
: Read(.workflow/active/WFS-[id]/workflow-session.json),
|
||||
|
||||
context_package_path: ".workflow/WFS-[id]/.process/context-package.json",
|
||||
context_package_path: ".workflow/active/WFS-[id]/.process/context-package.json",
|
||||
|
||||
context_package: memory.has("context-package.json")
|
||||
? memory.get("context-package.json")
|
||||
: Read(".workflow/WFS-[id]/.process/context-package.json"),
|
||||
: Read(".workflow/active/WFS-[id]/.process/context-package.json"),
|
||||
|
||||
test_context_package_path: ".workflow/WFS-[id]/.process/test-context-package.json",
|
||||
test_context_package_path: ".workflow/active/WFS-[id]/.process/test-context-package.json",
|
||||
|
||||
test_context_package: memory.has("test-context-package.json")
|
||||
? memory.get("test-context-package.json")
|
||||
: Read(".workflow/WFS-[id]/.process/test-context-package.json"),
|
||||
: Read(".workflow/active/WFS-[id]/.process/test-context-package.json"),
|
||||
|
||||
// Extract brainstorm artifacts from context package
|
||||
brainstorm_artifacts: extractBrainstormArtifacts(context_package),
|
||||
@@ -384,7 +384,7 @@ This section provides quick reference for TDD task JSON structure. For complete
|
||||
|
||||
## Output Files Structure
|
||||
```
|
||||
.workflow/{session-id}/
|
||||
.workflow/active/{session-id}/
|
||||
├── IMPL_PLAN.md # Unified plan with TDD Implementation Tasks section
|
||||
├── TODO_LIST.md # Progress tracking with internal TDD phase indicators
|
||||
├── .task/
|
||||
|
||||
@@ -77,8 +77,8 @@ The command follows a streamlined, three-step process to convert analysis into e
|
||||
|
||||
### Step 1: Input & Discovery
|
||||
The process begins by gathering all necessary inputs. It follows a **Memory-First Rule**, skipping file reads if documents are already in the conversation memory.
|
||||
1. **Session Validation**: Loads and validates the session from `.workflow/{session_id}/workflow-session.json`.
|
||||
2. **Context Package Loading** (primary source): Reads `.workflow/{session_id}/.process/context-package.json` for smart context and artifact catalog.
|
||||
1. **Session Validation**: Loads and validates the session from `.workflow/active/{session_id}/workflow-session.json`.
|
||||
2. **Context Package Loading** (primary source): Reads `.workflow/active/{session_id}/.process/context-package.json` for smart context and artifact catalog.
|
||||
3. **Brainstorm Artifacts Extraction**: Extracts role analysis paths from `context-package.json` → `brainstorm_artifacts.role_analyses[]` (supports `analysis*.md` automatically).
|
||||
4. **Document Loading**: Reads role analyses, guidance specification, synthesis output, and conflict resolution (if exists) using paths from context package.
|
||||
|
||||
@@ -224,7 +224,7 @@ Each task JSON embeds all necessary context, artifacts, and execution steps usin
|
||||
- `id`: Task identifier (format: `IMPL-N` or `IMPL-N.M` for subtasks)
|
||||
- `title`: Descriptive task name
|
||||
- `status`: Task state (`pending|active|completed|blocked|container`)
|
||||
- `context_package_path`: Path to context package (`.workflow/WFS-[session]/.process/context-package.json`)
|
||||
- `context_package_path`: Path to context package (`.workflow/active/WFS-[session]/.process/context-package.json`)
|
||||
- `meta`: Task metadata
|
||||
- `context`: Task-specific context and requirements
|
||||
- `flow_control`: Execution steps and workflow
|
||||
@@ -269,7 +269,7 @@ Each task JSON embeds all necessary context, artifacts, and execution steps usin
|
||||
"id": "IMPL-1",
|
||||
"title": "Implement feature X with Y components",
|
||||
"status": "pending",
|
||||
"context_package_path": ".workflow/WFS-session/.process/context-package.json",
|
||||
"context_package_path": ".workflow/active/WFS-session/.process/context-package.json",
|
||||
"meta": {
|
||||
"type": "feature",
|
||||
"agent": "@code-developer",
|
||||
@@ -291,7 +291,7 @@ Each task JSON embeds all necessary context, artifacts, and execution steps usin
|
||||
"depends_on": [],
|
||||
"artifacts": [
|
||||
{
|
||||
"path": ".workflow/WFS-session/.brainstorming/system-architect/analysis.md",
|
||||
"path": ".workflow/active/WFS-session/.brainstorming/system-architect/analysis.md",
|
||||
"priority": "highest",
|
||||
"usage": "Architecture decisions and API specifications"
|
||||
}
|
||||
@@ -353,10 +353,10 @@ This document provides a high-level overview of the entire implementation plan.
|
||||
---
|
||||
identifier: WFS-{session-id}
|
||||
source: "User requirements" | "File: path" | "Issue: ISS-001"
|
||||
role_analyses: .workflow/{session-id}/.brainstorming/[role]/analysis*.md
|
||||
artifacts: .workflow/{session-id}/.brainstorming/
|
||||
context_package: .workflow/{session-id}/.process/context-package.json # CCW smart context
|
||||
guidance_specification: .workflow/{session-id}/.brainstorming/guidance-specification.md # Finalized decisions with resolved conflicts
|
||||
role_analyses: .workflow/active/{session-id}/.brainstorming/[role]/analysis*.md
|
||||
artifacts: .workflow/active/{session-id}/.brainstorming/
|
||||
context_package: .workflow/active/{session-id}/.process/context-package.json # CCW smart context
|
||||
guidance_specification: .workflow/active/{session-id}/.brainstorming/guidance-specification.md # Finalized decisions with resolved conflicts
|
||||
workflow_type: "standard | tdd | design" # Indicates execution model
|
||||
verification_history: # CCW quality gates
|
||||
synthesis_clarify: "passed | skipped | pending" # Brainstorm phase clarification
|
||||
@@ -606,7 +606,7 @@ A simple Markdown file for tracking the status of each task.
|
||||
### 6.4. Output Files Diagram
|
||||
The command organizes outputs into a standard directory structure.
|
||||
```
|
||||
.workflow/{session-id}/
|
||||
.workflow/active/{session-id}/
|
||||
├── IMPL_PLAN.md # Implementation plan
|
||||
├── TODO_LIST.md # Progress tracking
|
||||
├── .task/
|
||||
|
||||
@@ -21,7 +21,7 @@ Analyze test coverage and verify Red-Green-Refactor cycle execution for TDD work
|
||||
|
||||
### Phase 1: Extract Test Tasks
|
||||
```bash
|
||||
find .workflow/{session_id}/.task/ -name 'TEST-*.json' -exec jq -r '.context.focus_paths[]' {} \;
|
||||
find .workflow/active/{session_id}/.task/ -name 'TEST-*.json' -exec jq -r '.context.focus_paths[]' {} \;
|
||||
```
|
||||
|
||||
**Output**: List of test directories/files from all TEST tasks
|
||||
@@ -29,20 +29,20 @@ find .workflow/{session_id}/.task/ -name 'TEST-*.json' -exec jq -r '.context.foc
|
||||
### Phase 2: Run Test Suite
|
||||
```bash
|
||||
# Node.js/JavaScript
|
||||
npm test -- --coverage --json > .workflow/{session_id}/.process/test-results.json
|
||||
npm test -- --coverage --json > .workflow/active/{session_id}/.process/test-results.json
|
||||
|
||||
# Python
|
||||
pytest --cov --json-report > .workflow/{session_id}/.process/test-results.json
|
||||
pytest --cov --json-report > .workflow/active/{session_id}/.process/test-results.json
|
||||
|
||||
# Other frameworks (detect from project)
|
||||
[test_command] --coverage --json-output .workflow/{session_id}/.process/test-results.json
|
||||
[test_command] --coverage --json-output .workflow/active/{session_id}/.process/test-results.json
|
||||
```
|
||||
|
||||
**Output**: test-results.json with coverage data
|
||||
|
||||
### Phase 3: Parse Coverage Data
|
||||
```bash
|
||||
jq '.coverage' .workflow/{session_id}/.process/test-results.json > .workflow/{session_id}/.process/coverage-report.json
|
||||
jq '.coverage' .workflow/active/{session_id}/.process/test-results.json > .workflow/active/{session_id}/.process/coverage-report.json
|
||||
```
|
||||
|
||||
**Extract**:
|
||||
@@ -58,7 +58,7 @@ For each TDD chain (TEST-N.M -> IMPL-N.M -> REFACTOR-N.M):
|
||||
**1. Red Phase Verification**
|
||||
```bash
|
||||
# Check TEST task summary
|
||||
cat .workflow/{session_id}/.summaries/TEST-N.M-summary.md
|
||||
cat .workflow/active/{session_id}/.summaries/TEST-N.M-summary.md
|
||||
```
|
||||
|
||||
Verify:
|
||||
@@ -69,7 +69,7 @@ Verify:
|
||||
**2. Green Phase Verification**
|
||||
```bash
|
||||
# Check IMPL task summary
|
||||
cat .workflow/{session_id}/.summaries/IMPL-N.M-summary.md
|
||||
cat .workflow/active/{session_id}/.summaries/IMPL-N.M-summary.md
|
||||
```
|
||||
|
||||
Verify:
|
||||
@@ -80,7 +80,7 @@ Verify:
|
||||
**3. Refactor Phase Verification**
|
||||
```bash
|
||||
# Check REFACTOR task summary
|
||||
cat .workflow/{session_id}/.summaries/REFACTOR-N.M-summary.md
|
||||
cat .workflow/active/{session_id}/.summaries/REFACTOR-N.M-summary.md
|
||||
```
|
||||
|
||||
Verify:
|
||||
@@ -90,7 +90,7 @@ Verify:
|
||||
|
||||
### Phase 5: Generate Analysis Report
|
||||
|
||||
Create `.workflow/{session_id}/.process/tdd-cycle-report.md`:
|
||||
Create `.workflow/active/{session_id}/.process/tdd-cycle-report.md`:
|
||||
|
||||
```markdown
|
||||
# TDD Cycle Analysis - {Session ID}
|
||||
@@ -146,7 +146,7 @@ Create `.workflow/{session_id}/.process/tdd-cycle-report.md`:
|
||||
|
||||
## Output Files
|
||||
```
|
||||
.workflow/{session-id}/
|
||||
.workflow/active/{session-id}/
|
||||
└── .process/
|
||||
├── test-results.json # Raw test execution results
|
||||
├── coverage-report.json # Parsed coverage data
|
||||
@@ -272,6 +272,6 @@ Function Coverage: 91%
|
||||
|
||||
Overall Compliance: 93/100
|
||||
|
||||
Detailed report: .workflow/WFS-auth/.process/tdd-cycle-report.md
|
||||
Detailed report: .workflow/active/WFS-auth/.process/tdd-cycle-report.md
|
||||
```
|
||||
|
||||
|
||||
@@ -3,7 +3,7 @@ name: test-concept-enhanced
|
||||
description: Analyze test requirements and generate test generation strategy using Gemini with test-context package
|
||||
argument-hint: "--session WFS-test-session-id --context path/to/test-context-package.json"
|
||||
examples:
|
||||
- /workflow:tools:test-concept-enhanced --session WFS-test-auth --context .workflow/WFS-test-auth/.process/test-context-package.json
|
||||
- /workflow:tools:test-concept-enhanced --session WFS-test-auth --context .workflow/active/WFS-test-auth/.process/test-context-package.json
|
||||
---
|
||||
|
||||
# Test Concept Enhanced Command
|
||||
@@ -30,7 +30,7 @@ Specialized analysis tool for test generation workflows that uses Gemini to anal
|
||||
### Phase 1: Validation & Preparation
|
||||
|
||||
1. **Session Validation**
|
||||
- Load `.workflow/{test_session_id}/workflow-session.json`
|
||||
- Load `.workflow/active/{test_session_id}/workflow-session.json`
|
||||
- Verify test session type is "test-gen"
|
||||
- Extract source session reference
|
||||
|
||||
@@ -48,11 +48,11 @@ Specialized analysis tool for test generation workflows that uses Gemini to anal
|
||||
|
||||
**Tool Configuration**:
|
||||
```bash
|
||||
cd .workflow/{test_session_id}/.process && gemini -p "
|
||||
cd .workflow/active/{test_session_id}/.process && gemini -p "
|
||||
PURPOSE: Analyze test coverage gaps and design comprehensive test generation strategy
|
||||
TASK: Study implementation context, existing tests, and generate test requirements for missing coverage
|
||||
MODE: analysis
|
||||
CONTEXT: @{.workflow/{test_session_id}/.process/test-context-package.json}
|
||||
CONTEXT: @{.workflow/active/{test_session_id}/.process/test-context-package.json}
|
||||
|
||||
**MANDATORY FIRST STEP**: Read and analyze test-context-package.json to understand:
|
||||
- Test coverage gaps from test_coverage.missing_tests[]
|
||||
@@ -226,13 +226,13 @@ RULES:
|
||||
- Prioritize critical business logic tests
|
||||
- Specify clear test scenarios and coverage targets
|
||||
- Identify all dependencies requiring mocks
|
||||
- **MUST write output to .workflow/{test_session_id}/.process/gemini-test-analysis.md**
|
||||
- **MUST write output to .workflow/active/{test_session_id}/.process/gemini-test-analysis.md**
|
||||
- Do NOT generate actual test code or implementation
|
||||
- Output ONLY test analysis and generation strategy
|
||||
" --approval-mode yolo
|
||||
```
|
||||
|
||||
**Output Location**: `.workflow/{test_session_id}/.process/gemini-test-analysis.md`
|
||||
**Output Location**: `.workflow/active/{test_session_id}/.process/gemini-test-analysis.md`
|
||||
|
||||
### Phase 3: Results Synthesis
|
||||
|
||||
@@ -408,7 +408,7 @@ Synthesize Gemini analysis into standardized format:
|
||||
- **Coverage Tools**: {coverage_tool_if_detected}
|
||||
```
|
||||
|
||||
**Output Location**: `.workflow/{test_session_id}/.process/TEST_ANALYSIS_RESULTS.md`
|
||||
**Output Location**: `.workflow/active/{test_session_id}/.process/TEST_ANALYSIS_RESULTS.md`
|
||||
|
||||
## Error Handling
|
||||
|
||||
|
||||
@@ -22,7 +22,7 @@ Orchestrator command that invokes `test-context-search-agent` to gather comprehe
|
||||
- **Detection-First**: Check for existing test-context-package before executing
|
||||
- **Coverage-First**: Analyze existing test coverage before planning new tests
|
||||
- **Source Context Loading**: Import implementation summaries from source session
|
||||
- **Standardized Output**: Generate `.workflow/{test_session_id}/.process/test-context-package.json`
|
||||
- **Standardized Output**: Generate `.workflow/active/{test_session_id}/.process/test-context-package.json`
|
||||
|
||||
## Execution Flow
|
||||
|
||||
@@ -164,7 +164,7 @@ Refer to `test-context-search-agent.md` Phase 3.2 for complete `test-context-pac
|
||||
|
||||
## Success Criteria
|
||||
|
||||
- ✅ Valid test-context-package.json generated in `.workflow/{test_session_id}/.process/`
|
||||
- ✅ Valid test-context-package.json generated in `.workflow/active/{test_session_id}/.process/`
|
||||
- ✅ Source session context loaded successfully
|
||||
- ✅ Test coverage gaps identified (>90% accuracy)
|
||||
- ✅ Test framework detected and documented
|
||||
|
||||
@@ -54,14 +54,14 @@ Autonomous test-fix task JSON generation using action-planning-agent with two-ph
|
||||
"use_codex": true | false, // Determined by --use-codex flag
|
||||
"session_metadata": {
|
||||
// If in memory: use cached content
|
||||
// Else: Load from .workflow/{test-session-id}/workflow-session.json
|
||||
// Else: Load from .workflow/active/{test-session-id}/workflow-session.json
|
||||
},
|
||||
"test_analysis_results_path": ".workflow/{test-session-id}/.process/TEST_ANALYSIS_RESULTS.md",
|
||||
"test_analysis_results_path": ".workflow/active/{test-session-id}/.process/TEST_ANALYSIS_RESULTS.md",
|
||||
"test_analysis_results": {
|
||||
// If in memory: use cached content
|
||||
// Else: Load from TEST_ANALYSIS_RESULTS.md
|
||||
},
|
||||
"test_context_package_path": ".workflow/{test-session-id}/.process/test-context-package.json",
|
||||
"test_context_package_path": ".workflow/active/{test-session-id}/.process/test-context-package.json",
|
||||
"test_context_package": {
|
||||
// Existing test patterns and coverage analysis
|
||||
},
|
||||
@@ -81,28 +81,28 @@ Autonomous test-fix task JSON generation using action-planning-agent with two-ph
|
||||
1. **Load Test Session Context** (if not in memory)
|
||||
```javascript
|
||||
if (!memory.has("workflow-session.json")) {
|
||||
Read(.workflow/{test-session-id}/workflow-session.json)
|
||||
Read(.workflow/active/{test-session-id}/workflow-session.json)
|
||||
}
|
||||
```
|
||||
|
||||
2. **Load TEST_ANALYSIS_RESULTS.md** (if not in memory, REQUIRED)
|
||||
```javascript
|
||||
if (!memory.has("TEST_ANALYSIS_RESULTS.md")) {
|
||||
Read(.workflow/{test-session-id}/.process/TEST_ANALYSIS_RESULTS.md)
|
||||
Read(.workflow/active/{test-session-id}/.process/TEST_ANALYSIS_RESULTS.md)
|
||||
}
|
||||
```
|
||||
|
||||
3. **Load Test Context Package** (if not in memory)
|
||||
```javascript
|
||||
if (!memory.has("test-context-package.json")) {
|
||||
Read(.workflow/{test-session-id}/.process/test-context-package.json)
|
||||
Read(.workflow/active/{test-session-id}/.process/test-context-package.json)
|
||||
}
|
||||
```
|
||||
|
||||
4. **Load Source Session Summaries** (if source_session_id exists)
|
||||
```javascript
|
||||
if (sessionMetadata.source_session_id) {
|
||||
const summaryFiles = Bash("find .workflow/{source-session-id}/.summaries/ -name 'IMPL-*-summary.md'")
|
||||
const summaryFiles = Bash("find .workflow/active/{source-session-id}/.summaries/ -name 'IMPL-*-summary.md'")
|
||||
summaryFiles.forEach(file => Read(file))
|
||||
}
|
||||
```
|
||||
@@ -205,7 +205,7 @@ Refer to: @.claude/agents/action-planning-agent.md for:
|
||||
#### Required Outputs Summary
|
||||
|
||||
##### 1. Test Task JSON Files (.task/IMPL-*.json)
|
||||
- **Location**: `.workflow/{test-session-id}/.task/`
|
||||
- **Location**: `.workflow/active/{test-session-id}/.task/`
|
||||
- **Template**: Read from `{template_path}` (pre-selected by command based on `--cli-execute` flag)
|
||||
- **Schema**: 5-field structure with test-specific metadata
|
||||
- IMPL-001: `meta.type: "test-gen"`, `meta.agent: "@code-developer"`
|
||||
@@ -214,14 +214,14 @@ Refer to: @.claude/agents/action-planning-agent.md for:
|
||||
- **Details**: See action-planning-agent.md § Test Task JSON Generation
|
||||
|
||||
##### 2. IMPL_PLAN.md (Test Variant)
|
||||
- **Location**: `.workflow/{test-session-id}/IMPL_PLAN.md`
|
||||
- **Location**: `.workflow/active/{test-session-id}/IMPL_PLAN.md`
|
||||
- **Template**: `~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt`
|
||||
- **Test-Specific Frontmatter**: workflow_type="test_session", test_framework, source_session_id
|
||||
- **Test-Fix-Retest Cycle Section**: Iterative fix cycle with Gemini diagnosis
|
||||
- **Details**: See action-planning-agent.md § Test Implementation Plan Creation
|
||||
|
||||
##### 3. TODO_LIST.md
|
||||
- **Location**: `.workflow/{test-session-id}/TODO_LIST.md`
|
||||
- **Location**: `.workflow/active/{test-session-id}/TODO_LIST.md`
|
||||
- **Format**: Task list with test generation and execution phases
|
||||
- **Status**: [ ] (pending), [x] (completed)
|
||||
- **Details**: See action-planning-agent.md § TODO List Generation
|
||||
@@ -273,19 +273,19 @@ const agentContext = {
|
||||
// Use memory if available, else load
|
||||
session_metadata: memory.has("workflow-session.json")
|
||||
? memory.get("workflow-session.json")
|
||||
: Read(.workflow/WFS-test-[id]/workflow-session.json),
|
||||
: Read(.workflow/active/WFS-test-[id]/workflow-session.json),
|
||||
|
||||
test_analysis_results_path: ".workflow/WFS-test-[id]/.process/TEST_ANALYSIS_RESULTS.md",
|
||||
test_analysis_results_path: ".workflow/active/WFS-test-[id]/.process/TEST_ANALYSIS_RESULTS.md",
|
||||
|
||||
test_analysis_results: memory.has("TEST_ANALYSIS_RESULTS.md")
|
||||
? memory.get("TEST_ANALYSIS_RESULTS.md")
|
||||
: Read(".workflow/WFS-test-[id]/.process/TEST_ANALYSIS_RESULTS.md"),
|
||||
: Read(".workflow/active/WFS-test-[id]/.process/TEST_ANALYSIS_RESULTS.md"),
|
||||
|
||||
test_context_package_path: ".workflow/WFS-test-[id]/.process/test-context-package.json",
|
||||
test_context_package_path: ".workflow/active/WFS-test-[id]/.process/test-context-package.json",
|
||||
|
||||
test_context_package: memory.has("test-context-package.json")
|
||||
? memory.get("test-context-package.json")
|
||||
: Read(".workflow/WFS-test-[id]/.process/test-context-package.json"),
|
||||
: Read(".workflow/active/WFS-test-[id]/.process/test-context-package.json"),
|
||||
|
||||
// Load source session summaries if exists
|
||||
source_session_id: session_metadata.source_session_id || null,
|
||||
@@ -312,7 +312,7 @@ This section provides quick reference for test task JSON structure. For complete
|
||||
|
||||
## Output Files Structure
|
||||
```
|
||||
.workflow/WFS-test-[session]/
|
||||
.workflow/active/WFS-test-[session]/
|
||||
├── workflow-session.json # Test session metadata
|
||||
├── IMPL_PLAN.md # Test validation plan
|
||||
├── TODO_LIST.md # Progress tracking
|
||||
|
||||
@@ -67,7 +67,7 @@ if [ -n "$DESIGN_ID" ]; then
|
||||
relative_path=$(find .workflow -name "${DESIGN_ID}" -type d -print -quit)
|
||||
elif [ -n "$SESSION_ID" ]; then
|
||||
# Latest in session
|
||||
relative_path=$(find .workflow/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
|
||||
relative_path=$(find .workflow/active/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
|
||||
else
|
||||
# Latest globally
|
||||
relative_path=$(find .workflow -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
|
||||
|
||||
@@ -1,380 +0,0 @@
|
||||
---
|
||||
name: capture
|
||||
description: Batch screenshot capture for UI design workflows using Chrome DevTools/Playwright MCP or local fallback with URL mapping
|
||||
argument-hint: --url-map "target:url,..." [--design-id <id>] [--session <id>]
|
||||
allowed-tools: TodoWrite(*), Read(*), Write(*), Bash(*), Glob(*), ListMcpResourcesTool(*), mcp__chrome-devtools__*, mcp__playwright__*
|
||||
---
|
||||
|
||||
# Batch Screenshot Capture (/workflow:ui-design:capture)
|
||||
|
||||
## Overview
|
||||
Batch screenshot tool with MCP-first strategy and multi-tier fallback. Processes multiple URLs in parallel.
|
||||
|
||||
**Strategy**: MCP → Playwright → Chrome → Manual
|
||||
**Output**: Flat structure `screenshots/{target}.png`
|
||||
|
||||
## Phase 1: Initialize & Parse
|
||||
|
||||
### Step 1: Determine Base Path & Generate Design ID
|
||||
```bash
|
||||
# Priority: --design-id > session (latest) > standalone (create new)
|
||||
if [ -n "$DESIGN_ID" ]; then
|
||||
# Use provided design ID
|
||||
relative_path=$(find .workflow -name "${DESIGN_ID}" -type d -print -quit)
|
||||
if [ -z "$relative_path" ]; then
|
||||
echo "ERROR: Design run not found: $DESIGN_ID"
|
||||
echo "HINT: Run '/workflow:ui-design:list' to see available design runs"
|
||||
exit 1
|
||||
fi
|
||||
elif [ -n "$SESSION_ID" ]; then
|
||||
# Find latest in session or create new
|
||||
relative_path=$(find .workflow/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
|
||||
if [ -z "$relative_path" ]; then
|
||||
design_id="design-run-$(date +%Y%m%d)-$RANDOM"
|
||||
relative_path=".workflow/WFS-$SESSION_ID/${design_id}"
|
||||
fi
|
||||
else
|
||||
# Create new standalone design run
|
||||
design_id="design-run-$(date +%Y%m%d)-$RANDOM"
|
||||
relative_path=".workflow/${design_id}"
|
||||
fi
|
||||
|
||||
# Create directory and convert to absolute path
|
||||
bash(mkdir -p "$relative_path"/screenshots)
|
||||
base_path=$(cd "$relative_path" && pwd)
|
||||
|
||||
# Extract and display design_id
|
||||
design_id=$(basename "$base_path")
|
||||
echo "✓ Design ID: $design_id"
|
||||
echo "✓ Base path: $base_path"
|
||||
```
|
||||
|
||||
### Step 2: Parse URL Map
|
||||
```javascript
|
||||
// Input: "home:https://linear.app, pricing:https://linear.app/pricing"
|
||||
url_entries = []
|
||||
|
||||
FOR pair IN split(params["--url-map"], ","):
|
||||
parts = pair.split(":", 1)
|
||||
|
||||
IF len(parts) != 2:
|
||||
ERROR: "Invalid format: {pair}. Expected: 'target:url'"
|
||||
EXIT 1
|
||||
|
||||
target = parts[0].strip().lower().replace(" ", "-")
|
||||
url = parts[1].strip()
|
||||
|
||||
// Validate target name
|
||||
IF NOT regex_match(target, r"^[a-z0-9][a-z0-9_-]*$"):
|
||||
ERROR: "Invalid target: {target}"
|
||||
EXIT 1
|
||||
|
||||
// Add https:// if missing
|
||||
IF NOT url.startswith("http"):
|
||||
url = f"https://{url}"
|
||||
|
||||
url_entries.append({target, url})
|
||||
```
|
||||
|
||||
**Output**: `base_path`, `url_entries[]`
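
A minimal bash sketch of this parsing step, assuming the raw flag value sits in `URL_MAP` and using an illustrative `url-entries.txt` intermediate file (not part of the command spec):

```bash
# Assumes: URL_MAP="home:https://linear.app, pricing:https://linear.app/pricing", $base_path exists
: > "$base_path/url-entries.txt"
IFS=',' read -ra pairs <<< "$URL_MAP"
for pair in "${pairs[@]}"; do
  pair="$(echo "$pair" | xargs)"          # trim surrounding whitespace
  if [[ "$pair" != *:* ]]; then
    echo "ERROR: Invalid format: $pair. Expected: 'target:url'" >&2; exit 1
  fi
  target="${pair%%:*}"                    # text before the first ':'
  url="${pair#*:}"                        # everything after the first ':'
  target="$(echo "$target" | tr 'A-Z' 'a-z' | tr ' ' '-')"
  if ! [[ "$target" =~ ^[a-z0-9][a-z0-9_-]*$ ]]; then
    echo "ERROR: Invalid target: $target" >&2; exit 1
  fi
  [[ "$url" == http* ]] || url="https://$url"
  printf '%s|%s\n' "$target" "$url" >> "$base_path/url-entries.txt"
done
```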
|
||||
|
||||
### Step 3: Initialize Todos
|
||||
```javascript
|
||||
TodoWrite({todos: [
|
||||
{content: "Parse url-map", status: "completed", activeForm: "Parsing"},
|
||||
{content: "Detect MCP tools", status: "in_progress", activeForm: "Detecting"},
|
||||
{content: "Capture screenshots", status: "pending", activeForm: "Capturing"},
|
||||
{content: "Verify results", status: "pending", activeForm: "Verifying"}
|
||||
]})
|
||||
```
|
||||
|
||||
## Phase 2: Detect Screenshot Tools
|
||||
|
||||
### Step 1: Check MCP Availability
|
||||
```javascript
|
||||
// List available MCP servers
|
||||
all_resources = ListMcpResourcesTool()
|
||||
available_servers = unique([r.server for r in all_resources])
|
||||
|
||||
// Check Chrome DevTools MCP
|
||||
chrome_devtools = "chrome-devtools" IN available_servers
|
||||
chrome_screenshot = check_tool_exists("mcp__chrome-devtools__take_screenshot")
|
||||
|
||||
// Check Playwright MCP
|
||||
playwright_mcp = "playwright" IN available_servers
|
||||
playwright_screenshot = check_tool_exists("mcp__playwright__screenshot")
|
||||
|
||||
// Determine primary tool
|
||||
IF chrome_devtools AND chrome_screenshot:
|
||||
tool = "chrome-devtools"
|
||||
ELSE IF playwright_mcp AND playwright_screenshot:
|
||||
tool = "playwright"
|
||||
ELSE:
|
||||
tool = null
|
||||
```
|
||||
|
||||
**Output**: `tool` (chrome-devtools | playwright | null)
|
||||
|
||||
### Step 2: Check Local Fallback
|
||||
```bash
|
||||
# Only if MCP unavailable
|
||||
bash(which playwright 2>/dev/null || echo "")
|
||||
bash({ which google-chrome || which chrome || which chromium; } 2>/dev/null || echo "")
|
||||
```
|
||||
|
||||
**Output**: `local_tools[]`
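
A minimal bash sketch of the local-tool probe; the candidate binary names are common defaults, adjust for your environment:

```bash
# Prefer the Playwright CLI, otherwise fall back to a Chrome/Chromium binary
local_tool=""
if command -v playwright >/dev/null 2>&1; then
  local_tool="playwright"
else
  for candidate in google-chrome chrome chromium chromium-browser; do
    if command -v "$candidate" >/dev/null 2>&1; then
      local_tool="$candidate"
      break
    fi
  done
fi
echo "Local fallback tool: ${local_tool:-none}"
```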
|
||||
|
||||
## Phase 3: Capture Screenshots
|
||||
|
||||
### Screenshot Format Options
|
||||
|
||||
**PNG Format** (default, lossless):
|
||||
- **Pros**: Lossless quality, best for detailed UI screenshots
|
||||
- **Cons**: Larger file sizes (typically 200-500 KB per screenshot)
|
||||
- **Parameters**: `format: "png"` (no quality parameter)
|
||||
- **Use case**: High-fidelity UI replication, design system extraction
|
||||
|
||||
**WebP Format** (optional, lossy/lossless):
|
||||
- **Pros**: Smaller file sizes with good quality (50-70% smaller than PNG)
|
||||
- **Cons**: Requires quality parameter, slight quality loss at high compression
|
||||
- **Parameters**: `format: "webp", quality: 90` (80-100 recommended)
|
||||
- **Use case**: Batch captures, network-constrained environments
|
||||
|
||||
**JPEG Format** (optional, lossy):
|
||||
- **Pros**: Smallest file sizes
|
||||
- **Cons**: Lossy compression, not recommended for UI screenshots
|
||||
- **Parameters**: `format: "jpeg", quality: 90`
|
||||
- **Use case**: Photo-heavy pages, not recommended for UI design
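
A minimal bash guard illustrating the format rule above (`FORMAT`/`QUALITY` are illustrative variable names, not command flags):

```bash
FORMAT="${FORMAT:-png}"; QUALITY="${QUALITY:-}"
# PNG is lossless: silently drop a quality value instead of passing it to take_screenshot
if [ "$FORMAT" = "png" ] && [ -n "$QUALITY" ]; then
  echo "WARNING: PNG has no quality parameter; ignoring quality=$QUALITY" >&2
  QUALITY=""
fi
case "$FORMAT" in png|webp|jpeg) : ;; *) echo "ERROR: Unsupported format: $FORMAT" >&2; exit 1 ;; esac
```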
|
||||
|
||||
### Step 1: MCP Capture (If Available)
|
||||
```javascript
|
||||
IF tool == "chrome-devtools":
|
||||
// Get or create page
|
||||
pages = mcp__chrome-devtools__list_pages()
|
||||
|
||||
IF pages.length == 0:
|
||||
mcp__chrome-devtools__new_page({url: url_entries[0].url})
|
||||
page_idx = 0
|
||||
ELSE:
|
||||
page_idx = 0
|
||||
|
||||
mcp__chrome-devtools__select_page({pageIdx: page_idx})
|
||||
|
||||
// Capture each URL
|
||||
FOR entry IN url_entries:
|
||||
mcp__chrome-devtools__navigate_page({url: entry.url, timeout: 30000})
|
||||
bash(sleep 2)
|
||||
|
||||
// PNG format doesn't support quality parameter
|
||||
// Use PNG for lossless quality (larger files)
|
||||
mcp__chrome-devtools__take_screenshot({
|
||||
fullPage: true,
|
||||
format: "png",
|
||||
filePath: f"{base_path}/screenshots/{entry.target}.png"
|
||||
})
|
||||
|
||||
// Alternative: Use WebP with quality for smaller files
|
||||
// mcp__chrome-devtools__take_screenshot({
|
||||
// fullPage: true,
|
||||
// format: "webp",
|
||||
// quality: 90,
|
||||
// filePath: f"{base_path}/screenshots/{entry.target}.webp"
|
||||
// })
|
||||
|
||||
ELSE IF tool == "playwright":
|
||||
FOR entry IN url_entries:
|
||||
mcp__playwright__screenshot({
|
||||
url: entry.url,
|
||||
output_path: f"{base_path}/screenshots/{entry.target}.png",
|
||||
full_page: true,
|
||||
timeout: 30000
|
||||
})
|
||||
```
|
||||
|
||||
### Step 2: Local Fallback (If MCP Failed)
|
||||
```bash
|
||||
# Try Playwright CLI
|
||||
bash(playwright screenshot "$url" "$output_file" --full-page --timeout 30000)
|
||||
|
||||
# Try Chrome headless
|
||||
bash($chrome --headless --screenshot="$output_file" --window-size=1920,1080 "$url")
|
||||
```
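
The two fallback commands above can be looped over the requested targets. A minimal bash sketch, assuming the `target|url` list from the parsing sketch and `$chrome` resolved during tool detection:

```bash
while IFS='|' read -r target url; do
  out="$base_path/screenshots/$target.png"
  # Tier 1: Playwright CLI
  if command -v playwright >/dev/null 2>&1; then
    playwright screenshot "$url" "$out" --full-page --timeout 30000 && continue
  fi
  # Tier 2: Chrome headless
  if [ -n "$chrome" ]; then
    "$chrome" --headless --screenshot="$out" --window-size=1920,1080 "$url" && continue
  fi
  echo "FAILED: $target ($url) - queue for manual capture" >&2
done < "$base_path/url-entries.txt"
```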
|
||||
|
||||
### Step 3: Manual Mode (If All Failed)
|
||||
```
|
||||
⚠️ Manual Screenshot Required
|
||||
|
||||
Failed URLs:
|
||||
home: https://linear.app
|
||||
Save to: .workflow/design-run-20250110/screenshots/home.png
|
||||
|
||||
Steps:
|
||||
1. Visit URL in browser
|
||||
2. Take full-page screenshot
|
||||
3. Save to path above
|
||||
4. Type 'ready' to continue
|
||||
|
||||
Options: ready | skip | abort
|
||||
```
|
||||
|
||||
## Phase 4: Verification
|
||||
|
||||
### Step 1: Scan Captured Files
|
||||
```bash
|
||||
bash(ls -1 $base_path/screenshots/*.{png,jpg,jpeg,webp} 2>/dev/null)
|
||||
bash(du -h $base_path/screenshots/*.png 2>/dev/null)
|
||||
```
|
||||
|
||||
### Step 2: Generate Metadata
|
||||
```javascript
|
||||
captured_files = Glob(f"{base_path}/screenshots/*.{{png,jpg,jpeg,webp}}")
|
||||
captured_targets = [basename_no_ext(f) for f in captured_files]
|
||||
|
||||
metadata = {
|
||||
"timestamp": current_timestamp(),
|
||||
"total_requested": len(url_entries),
|
||||
"total_captured": len(captured_targets),
|
||||
"screenshots": []
|
||||
}
|
||||
|
||||
FOR entry IN url_entries:
|
||||
is_captured = entry.target IN captured_targets
|
||||
|
||||
metadata.screenshots.append({
|
||||
"target": entry.target,
|
||||
"url": entry.url,
|
||||
"captured": is_captured,
|
||||
"path": f"{base_path}/screenshots/{entry.target}.png" IF is_captured ELSE null,
|
||||
"size_kb": file_size_kb IF is_captured ELSE null
|
||||
})
|
||||
|
||||
Write(f"{base_path}/screenshots/capture-metadata.json", JSON.stringify(metadata))
|
||||
```
|
||||
|
||||
**Output**: `capture-metadata.json`
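
A minimal bash sketch of the same metadata assembly using `jq` (assumed installed) and GNU `date`/`stat`, driven by the `target|url` list from the parsing sketch:

```bash
meta="$base_path/screenshots/capture-metadata.json"
printf '{"timestamp":"%s","screenshots":[]}\n' "$(date -Iseconds)" > "$meta"
while IFS='|' read -r target url; do
  shot="$base_path/screenshots/$target.png"
  if [ -s "$shot" ]; then
    size_kb=$(( $(stat -c%s "$shot") / 1024 ))      # GNU stat; use stat -f%z on macOS
    entry=$(jq -n --arg t "$target" --arg u "$url" --arg p "$shot" --argjson s "$size_kb" \
      '{target:$t, url:$u, captured:true, path:$p, size_kb:$s}')
  else
    entry=$(jq -n --arg t "$target" --arg u "$url" \
      '{target:$t, url:$u, captured:false, path:null, size_kb:null}')
  fi
  jq --argjson e "$entry" '.screenshots += [$e]' "$meta" > "$meta.tmp" && mv "$meta.tmp" "$meta"
done < "$base_path/url-entries.txt"
# Derive the totals from the collected entries
jq '.total_requested = (.screenshots | length)
  | .total_captured  = ([.screenshots[] | select(.captured)] | length)' \
  "$meta" > "$meta.tmp" && mv "$meta.tmp" "$meta"
```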
|
||||
|
||||
## Completion
|
||||
|
||||
### Todo Update
|
||||
```javascript
|
||||
TodoWrite({todos: [
|
||||
{content: "Parse url-map", status: "completed", activeForm: "Parsing"},
|
||||
{content: "Detect MCP tools", status: "completed", activeForm: "Detecting"},
|
||||
{content: "Capture screenshots", status: "completed", activeForm: "Capturing"},
|
||||
{content: "Verify results", status: "completed", activeForm: "Verifying"}
|
||||
]})
|
||||
```
|
||||
|
||||
### Output Message
|
||||
```
|
||||
✅ Batch screenshot capture complete!
|
||||
|
||||
Summary:
|
||||
- Requested: {total_requested}
|
||||
- Captured: {total_captured}
|
||||
- Success rate: {success_rate}%
|
||||
- Method: {tool || "Local fallback"}
|
||||
|
||||
Output:
|
||||
{base_path}/screenshots/
|
||||
├── home.png (245.3 KB)
|
||||
├── pricing.png (198.7 KB)
|
||||
└── capture-metadata.json
|
||||
|
||||
Next: /workflow:ui-design:extract --images "screenshots/*.png"
|
||||
```
|
||||
|
||||
## Simple Bash Commands
|
||||
|
||||
### Path Operations
|
||||
```bash
|
||||
# Find design directory
|
||||
bash(find .workflow -type d -name "design-run-*" | head -1)
|
||||
|
||||
# Create screenshot directory
|
||||
bash(mkdir -p $BASE_PATH/screenshots)
|
||||
```
|
||||
|
||||
### Tool Detection
|
||||
```bash
|
||||
# Check MCP
|
||||
all_resources = ListMcpResourcesTool()
|
||||
|
||||
# Check local tools
|
||||
bash(which playwright 2>/dev/null)
|
||||
bash(which google-chrome 2>/dev/null)
|
||||
```
|
||||
|
||||
### Verification
|
||||
```bash
|
||||
# List captures
|
||||
bash(ls -1 $base_path/screenshots/*.png 2>/dev/null)
|
||||
|
||||
# File sizes
|
||||
bash(du -h $base_path/screenshots/*.png)
|
||||
```
|
||||
|
||||
## Output Structure
|
||||
|
||||
```
|
||||
{base_path}/
|
||||
└── screenshots/
|
||||
├── home.png
|
||||
├── pricing.png
|
||||
├── about.png
|
||||
└── capture-metadata.json
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
### Common Errors
|
||||
```
|
||||
ERROR: Invalid url-map format
|
||||
→ Use: "target:url, target2:url2"
|
||||
|
||||
ERROR: png screenshots do not support 'quality'
|
||||
→ PNG format is lossless, no quality parameter needed
|
||||
→ Remove quality parameter OR switch to webp/jpeg format
|
||||
|
||||
ERROR: MCP unavailable
|
||||
→ Using local fallback
|
||||
|
||||
ERROR: All tools failed
|
||||
→ Manual mode activated
|
||||
```
|
||||
|
||||
### Format-Specific Errors
|
||||
```
|
||||
❌ Wrong: format: "png", quality: 90
|
||||
✅ Right: format: "png"
|
||||
|
||||
✅ Or use: format: "webp", quality: 90
|
||||
✅ Or use: format: "jpeg", quality: 90
|
||||
```
|
||||
|
||||
### Recovery
|
||||
- **Partial success**: Keep successful captures
|
||||
- **Retry**: Re-run with failed targets only
|
||||
- **Manual**: Follow interactive guidance
|
||||
|
||||
## Quality Checklist
|
||||
|
||||
- [ ] All requested URLs processed
|
||||
- [ ] File sizes > 1 KB (valid images; see the check after this list)
|
||||
- [ ] Metadata JSON generated
|
||||
- [ ] No missing targets (or documented)
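
A minimal bash check for the file-size item, assuming PNG output in the flat screenshots directory:

```bash
# Files under 1 KB usually indicate a blank or failed render
find "$base_path/screenshots" -type f -name '*.png' -size -1024c -print | while read -r f; do
  echo "WARNING: Possibly invalid capture (< 1 KB): $f" >&2
done
```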
|
||||
|
||||
## Key Features
|
||||
|
||||
- **MCP-first**: Prioritize managed tools
|
||||
- **Multi-tier fallback**: 4 tiers (MCP → Playwright CLI → Chrome headless → Manual)
|
||||
- **Batch processing**: Parallel capture
|
||||
- **Error tolerance**: Partial failures handled
|
||||
- **Structured output**: Flat, predictable
|
||||
|
||||
## Integration
|
||||
|
||||
**Input**: `--url-map` (multiple target:url pairs)
|
||||
**Output**: `screenshots/*.png` + `capture-metadata.json`
|
||||
**Called by**: `/workflow:ui-design:imitate-auto`, `/workflow:ui-design:explore-auto`
|
||||
**Next**: `/workflow:ui-design:extract` or `/workflow:ui-design:explore-layers`
|
||||
@@ -1,11 +1,11 @@
|
||||
---
|
||||
name: update
|
||||
description: Update brainstorming artifacts with finalized design system references from selected prototypes
|
||||
name: design-sync
|
||||
description: Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption
|
||||
argument-hint: --session <session_id> [--selected-prototypes "<list>"]
|
||||
allowed-tools: Read(*), Write(*), Edit(*), TodoWrite(*), Glob(*), Bash(*)
|
||||
---
|
||||
|
||||
# Design Update Command
|
||||
# Design Sync Command
|
||||
|
||||
## Overview
|
||||
|
||||
@@ -25,10 +25,10 @@ Synchronize finalized design system references to brainstorming artifacts, prepa
|
||||
|
||||
```bash
|
||||
# Validate session
|
||||
CHECK: .workflow/.active-* marker files; VALIDATE: session_id matches active session
|
||||
CHECK: find .workflow/active/ -name "WFS-*" -type d; VALIDATE: session_id matches active session
|
||||
|
||||
# Verify design artifacts in latest design run
|
||||
latest_design = find_latest_path_matching(".workflow/WFS-{session}/design-run-*")
|
||||
latest_design = find_latest_path_matching(".workflow/active/WFS-{session}/design-run-*")
|
||||
|
||||
# Detect design system structure
|
||||
IF exists({latest_design}/style-extraction/style-1/design-tokens.json):
|
||||
@@ -51,7 +51,7 @@ REPORT: "Found {count} design artifacts, {prototype_count} prototypes"
|
||||
|
||||
```bash
|
||||
# Check if role analysis documents contains current design run reference
|
||||
synthesis_spec_path = ".workflow/WFS-{session}/.brainstorming/role analysis documents"
|
||||
synthesis_spec_path = ".workflow/active/WFS-{session}/.brainstorming/role analysis documents"
|
||||
current_design_run = basename(latest_design) # e.g., "design-run-20250109-143022"
|
||||
|
||||
IF exists(synthesis_spec_path):
|
||||
@@ -68,8 +68,8 @@ IF exists(synthesis_spec_path):
|
||||
|
||||
```bash
|
||||
# Load target brainstorming artifacts (files to be updated)
|
||||
Read(.workflow/WFS-{session}/.brainstorming/role analysis documents)
|
||||
IF exists(.workflow/WFS-{session}/.brainstorming/ui-designer/analysis.md): Read(analysis.md)
|
||||
Read(.workflow/active/WFS-{session}/.brainstorming/role analysis documents)
|
||||
IF exists(.workflow/active/WFS-{session}/.brainstorming/ui-designer/analysis.md): Read(analysis.md)
|
||||
|
||||
# Optional: Read prototype notes for descriptions (minimal context)
|
||||
FOR each selected_prototype IN selected_list:
|
||||
@@ -113,7 +113,7 @@ Update `.brainstorming/role analysis documents` with design system references.
|
||||
**Implementation**:
|
||||
```bash
|
||||
# Option 1: Edit existing section
|
||||
Edit(file_path=".workflow/WFS-{session}/.brainstorming/role analysis documents",
|
||||
Edit(file_path=".workflow/active/WFS-{session}/.brainstorming/role analysis documents",
|
||||
old_string="## UI/UX Guidelines\n[existing content]",
|
||||
new_string="## UI/UX Guidelines\n\n[new design reference content]")
|
||||
|
||||
@@ -128,15 +128,15 @@ IF section not found:
|
||||
|
||||
```bash
|
||||
# Always update ui-designer
|
||||
ui_designer_files = Glob(".workflow/WFS-{session}/.brainstorming/ui-designer/analysis*.md")
|
||||
ui_designer_files = Glob(".workflow/active/WFS-{session}/.brainstorming/ui-designer/analysis*.md")
|
||||
|
||||
# Conditionally update other roles
|
||||
has_animations = exists({latest_design}/animation-extraction/animation-tokens.json)
|
||||
has_layouts = exists({latest_design}/layout-extraction/layout-templates.json)
|
||||
|
||||
IF has_animations: ux_expert_files = Glob(".workflow/WFS-{session}/.brainstorming/ux-expert/analysis*.md")
|
||||
IF has_layouts: architect_files = Glob(".workflow/WFS-{session}/.brainstorming/system-architect/analysis*.md")
|
||||
IF selected_list: pm_files = Glob(".workflow/WFS-{session}/.brainstorming/product-manager/analysis*.md")
|
||||
IF has_animations: ux_expert_files = Glob(".workflow/active/WFS-{session}/.brainstorming/ux-expert/analysis*.md")
|
||||
IF has_layouts: architect_files = Glob(".workflow/active/WFS-{session}/.brainstorming/system-architect/analysis*.md")
|
||||
IF selected_list: pm_files = Glob(".workflow/active/WFS-{session}/.brainstorming/product-manager/analysis*.md")
|
||||
```
|
||||
|
||||
**Content Templates**:
|
||||
@@ -223,7 +223,7 @@ For complete token definitions and usage examples, see:
|
||||
|
||||
**Implementation**:
|
||||
```bash
|
||||
Write(file_path=".workflow/WFS-{session}/.brainstorming/ui-designer/design-system-reference.md",
|
||||
Write(file_path=".workflow/active/WFS-{session}/.brainstorming/ui-designer/design-system-reference.md",
|
||||
content="[generated content with @ references]")
|
||||
```
|
||||
|
||||
@@ -259,7 +259,7 @@ Next: /workflow:plan [--agent] "<task description>"
|
||||
|
||||
**Updated Files**:
|
||||
```
|
||||
.workflow/WFS-{session}/.brainstorming/
|
||||
.workflow/active/WFS-{session}/.brainstorming/
|
||||
├── role analysis documents # Updated with UI/UX Guidelines section
|
||||
├── ui-designer/
|
||||
│ ├── analysis*.md # Updated with design system references
|
||||
@@ -349,15 +349,3 @@ After update, verify:
|
||||
- **Next Phase**: `/workflow:plan` discovers and utilizes design system through @ references
|
||||
- **Auto Integration**: Automatically triggered by `/workflow:ui-design:auto` workflow
|
||||
|
||||
## Why Main Claude Execution?
|
||||
|
||||
This command is executed directly by main Claude (not delegated to an Agent) because:
|
||||
|
||||
1. **Simple Reference Generation**: Only generating file paths, not complex synthesis
|
||||
2. **Context Preservation**: Main Claude has full session and conversation context
|
||||
3. **Minimal Transformation**: Primarily updating references, not analyzing content
|
||||
4. **Path Resolution**: Requires precise relative path calculation
|
||||
5. **Edit Operations**: Better error recovery for Edit conflicts
|
||||
6. **Synthesis Pattern**: Follows same direct-execution pattern as other reference updates
|
||||
|
||||
This ensures reliable, lightweight integration without Agent handoff overhead.
|
||||
@@ -24,8 +24,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Glob(*), Write(*
|
||||
- **IF should_extract_animation**: **Attach tasks → Execute → Collapse** → Auto-continues to Phase 9
|
||||
- **ELSE**: Skip (use code import) → Auto-continues to Phase 9
|
||||
5. Phase 9 (layout-extract) → **Attach tasks → Execute → Collapse** → Auto-continues to Phase 10
|
||||
6. **Phase 10 (ui-assembly)** → **Attach tasks → Execute → Collapse** → Auto-continues to Phase 11
|
||||
7. **Phase 11 (preview-generation)** → **Execute script → Generate preview files** → Reports completion
|
||||
6. **Phase 10 (ui-assembly)** → **Attach tasks → Execute → Collapse** → Workflow complete
|
||||
|
||||
**Phase Transition Mechanism**:
|
||||
- **Phase 5 (User Interaction)**: User confirms targets → IMMEDIATELY triggers Phase 7
|
||||
@@ -33,10 +32,9 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Glob(*), Write(*
|
||||
- **Task Execution**: Orchestrator **EXECUTES** these attached tasks itself
|
||||
- **Task Collapse**: After tasks complete, collapse them into phase summary
|
||||
- **Phase Transition**: Automatically execute next phase after collapsing
|
||||
- **Phase 11 (Script Execution)**: Execute preview generation script
|
||||
- No additional user interaction after Phase 5 confirmation
|
||||
|
||||
**Auto-Continue Mechanism**: TodoWrite tracks phase status with dynamic task attachment/collapse. After executing all attached tasks, you MUST immediately collapse them, restore phase summary, and execute the next phase. No user intervention required. The workflow is NOT complete until reaching Phase 11 (preview generation).
|
||||
**Auto-Continue Mechanism**: TodoWrite tracks phase status with dynamic task attachment/collapse. After executing all attached tasks, you MUST immediately collapse them, restore phase summary, and execute the next phase. No user intervention required. The workflow is NOT complete until Phase 10 (UI assembly) finishes.
|
||||
|
||||
**Task Attachment Model**: SlashCommand invocation is NOT delegation - it's task expansion. The orchestrator executes these attached tasks itself, not waiting for external completion.
|
||||
|
||||
@@ -50,7 +48,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Glob(*), Write(*
|
||||
4. **Default to All**: When selecting variants/prototypes, use ALL generated items
|
||||
5. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
|
||||
6. **⚠️ CRITICAL: Task Attachment Model** - SlashCommand invocation **ATTACHES** tasks to current workflow. Orchestrator **EXECUTES** these attached tasks itself, not waiting for external completion. This is NOT delegation - it's task expansion.
|
||||
7. **⚠️ CRITICAL: DO NOT STOP** - This is a continuous multi-phase workflow. After executing all attached tasks, you MUST immediately collapse them and execute the next phase. Workflow is NOT complete until Phase 11 (preview generation).
|
||||
7. **⚠️ CRITICAL: DO NOT STOP** - This is a continuous multi-phase workflow. After executing all attached tasks, you MUST immediately collapse them and execute the next phase. Workflow is NOT complete until Phase 10 (UI assembly) finishes.
|
||||
|
||||
## Parameter Requirements
|
||||
|
||||
@@ -127,145 +125,83 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Glob(*), Write(*
|
||||
**Integrated vs. Standalone**:
|
||||
- `--session` flag determines session integration or standalone execution
|
||||
|
||||
## 11-Phase Execution
|
||||
## 10-Phase Execution
|
||||
|
||||
### Phase 1: Parameter Parsing & Input Detection
|
||||
|
||||
**Unified Principle**: Detect → Classify → Store (avoid string concatenation and escaping)
|
||||
|
||||
**Step 1: Parameter Normalization**
|
||||
```bash
|
||||
# Step 0: Parse and normalize parameters
|
||||
images_input = null
|
||||
prompt_text = null
|
||||
|
||||
# Handle legacy parameters with deprecation warning
|
||||
# Legacy parameters (deprecated)
|
||||
IF --images OR --prompt:
|
||||
WARN: "⚠️ DEPRECATION: --images and --prompt are deprecated. Use --input instead."
|
||||
WARN: " Example: --input \"design-refs/*\" or --input \"modern dashboard\""
|
||||
images_input = --images
|
||||
prompt_text = --prompt
|
||||
WARN: "⚠️ --images/--prompt deprecated. Use --input"
|
||||
images_input = --images; prompt_text = --prompt
|
||||
|
||||
# Parse unified --input parameter
|
||||
IF --input:
|
||||
# Split by | separator for multiple inputs
|
||||
input_parts = split(--input, "|")
|
||||
|
||||
FOR part IN input_parts:
|
||||
part = trim(part)
|
||||
|
||||
# Detection logic
|
||||
IF contains(part, "*") OR glob_matches_files(part):
|
||||
# Glob pattern detected → images
|
||||
images_input = part
|
||||
ELSE IF file_or_directory_exists(part):
|
||||
# File/directory path → will be handled in code detection
|
||||
IF NOT prompt_text:
|
||||
prompt_text = part
|
||||
ELSE:
|
||||
prompt_text = prompt_text + " " + part
|
||||
ELSE:
|
||||
# Pure text → prompt
|
||||
IF NOT prompt_text:
|
||||
prompt_text = part
|
||||
ELSE:
|
||||
prompt_text = prompt_text + " " + part
|
||||
|
||||
# Step 1: Detect design source from parsed inputs
|
||||
code_files_detected = false
|
||||
code_base_path = null
|
||||
has_visual_input = false
|
||||
|
||||
IF prompt_text:
|
||||
# Extract potential file paths from prompt
|
||||
potential_paths = extract_paths_from_text(prompt_text)
|
||||
FOR path IN potential_paths:
|
||||
IF file_or_directory_exists(path):
|
||||
code_files_detected = true
|
||||
code_base_path = path
|
||||
BREAK
|
||||
|
||||
IF images_input:
|
||||
# Check if images parameter points to existing files
|
||||
IF glob_matches_files(images_input):
|
||||
has_visual_input = true
|
||||
|
||||
# Step 2: Determine design source strategy
|
||||
design_source = "unknown"
|
||||
IF code_files_detected AND has_visual_input:
|
||||
design_source = "hybrid" # Both code and visual
|
||||
ELSE IF code_files_detected:
|
||||
design_source = "code_only" # Only code files
|
||||
ELSE IF has_visual_input OR --prompt:
|
||||
design_source = "visual_only" # Only visual/prompt
|
||||
ELSE:
|
||||
ERROR: "No design source provided (code files, images, or prompt required)"
|
||||
EXIT 1
|
||||
|
||||
STORE: design_source, code_base_path, has_visual_input
|
||||
# Unified --input (split by "|")
|
||||
ELSE IF --input:
|
||||
FOR part IN split(--input, "|"):
|
||||
IF "*" IN part OR glob_exists(part): images_input = part
|
||||
ELSE: prompt_text += part   # plain text and file/dir paths both feed the prompt; paths are re-detected in Step 2
|
||||
```
|
||||
|
||||
**Step 2: Design Source Detection**
|
||||
```bash
|
||||
code_base_path = extract_first_valid_path(prompt_text)
|
||||
has_visual_input = (images_input AND glob_exists(images_input))
|
||||
|
||||
design_source = classify_source(code_base_path, has_visual_input):
|
||||
• code + visual → "hybrid"
|
||||
• code only → "code_only"
|
||||
• visual/prompt → "visual_only"
|
||||
• none → ERROR
|
||||
```
|
||||
|
||||
**Stored Variables**: `design_source`, `code_base_path`, `has_visual_input`, `images_input`, `prompt_text`
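
A minimal bash sketch of the classification; the helper and variable names are taken from the pseudocode above and are assumptions, not a fixed API:

```bash
classify_source() {
  local code_path="$1" has_visual="$2"
  if   [ -n "$code_path" ] && [ "$has_visual" = "true" ]; then echo "hybrid"
  elif [ -n "$code_path" ]; then echo "code_only"
  elif [ "$has_visual" = "true" ] || [ -n "$prompt_text" ]; then echo "visual_only"
  else
    echo "ERROR: No design source provided (code files, images, or prompt required)" >&2
    return 1
  fi
}
design_source=$(classify_source "$code_base_path" "$has_visual_input") || exit 1
```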
|
||||
|
||||
---
|
||||
|
||||
### Phase 2: Intelligent Prompt Parsing
|
||||
|
||||
**Unified Principle**: explicit > inferred > default
|
||||
|
||||
```bash
|
||||
# Parse variant counts from prompt or use explicit/default values
|
||||
IF prompt_text AND (NOT --style-variants OR NOT --layout-variants):
|
||||
style_variants = regex_extract(prompt_text, r"(\d+)\s*style") OR --style-variants OR 3
|
||||
layout_variants = regex_extract(prompt_text, r"(\d+)\s*layout") OR --layout-variants OR 3
|
||||
ELSE:
|
||||
style_variants = --style-variants OR 3
|
||||
layout_variants = --layout-variants OR 3
|
||||
# Variant counts (priority chain)
|
||||
style_variants = --style-variants OR extract_number(prompt_text, "style") OR 3
|
||||
layout_variants = --layout-variants OR extract_number(prompt_text, "layout") OR 3
|
||||
|
||||
VALIDATE: 1 <= style_variants <= 5, 1 <= layout_variants <= 5
|
||||
|
||||
# Interactive mode (always enabled)
|
||||
interactive_mode = true # Always use interactive mode
|
||||
VALIDATE: 1 ≤ variants ≤ 5
|
||||
```
|
||||
|
||||
**Stored Variables**: `style_variants`, `layout_variants`
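
A minimal bash sketch of the priority chain, assuming `STYLE_VARIANTS`/`LAYOUT_VARIANTS` hold the explicit flag values when provided:

```bash
# Pull "N style(s)" / "N layout(s)" from the prompt, else fall back to the default of 3
extract_count() {
  local text="$1" keyword="$2" fallback="$3" n
  n=$(echo "$text" | grep -oiE "[0-9]+[[:space:]]*${keyword}" | grep -oE '[0-9]+' | head -1)
  echo "${n:-$fallback}"
}
style_variants="${STYLE_VARIANTS:-$(extract_count "$prompt_text" "style" 3)}"
layout_variants="${LAYOUT_VARIANTS:-$(extract_count "$prompt_text" "layout" 3)}"
for v in "$style_variants" "$layout_variants"; do
  if [ "$v" -lt 1 ] || [ "$v" -gt 5 ]; then
    echo "ERROR: Variant counts must be between 1 and 5 (got $v)" >&2; exit 1
  fi
done
```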
|
||||
|
||||
---
|
||||
|
||||
### Phase 3: Device Type Inference
|
||||
|
||||
**Unified Principle**: explicit > prompt keywords > target_type > default
|
||||
|
||||
```bash
|
||||
# Device type inference
|
||||
device_type = "auto"
|
||||
# Device type (priority chain)
|
||||
device_type = --device-type (if != "auto")
|
||||
OR detect_keywords(prompt_text, ["mobile", "desktop", "tablet", "responsive"])
|
||||
OR infer_from_target(target_type) # component→desktop, page→responsive
|
||||
OR "responsive"
|
||||
|
||||
# Step 1: Explicit parameter (highest priority)
|
||||
IF --device-type AND --device-type != "auto":
|
||||
device_type = --device-type
|
||||
device_source = "explicit"
|
||||
ELSE:
|
||||
# Step 2: Prompt analysis
|
||||
IF prompt_text:
|
||||
device_keywords = {
|
||||
"desktop": ["desktop", "web", "laptop", "widescreen", "large screen"],
|
||||
"mobile": ["mobile", "phone", "smartphone", "ios", "android"],
|
||||
"tablet": ["tablet", "ipad", "medium screen"],
|
||||
"responsive": ["responsive", "adaptive", "multi-device", "cross-platform"]
|
||||
}
|
||||
detected_device = detect_device_from_prompt(prompt_text, device_keywords)
|
||||
IF detected_device:
|
||||
device_type = detected_device
|
||||
device_source = "prompt_inference"
|
||||
|
||||
# Step 3: Target type inference
|
||||
IF device_type == "auto":
|
||||
# Components are typically desktop-first, pages can vary
|
||||
device_type = target_type == "component" ? "desktop" : "responsive"
|
||||
device_source = "target_type_inference"
|
||||
|
||||
STORE: device_type, device_source
|
||||
device_source = track_detection_source()
|
||||
```
|
||||
|
||||
**Device Type Presets**:
|
||||
- **Desktop**: 1920×1080px - Mouse-driven, spacious layouts
|
||||
- **Mobile**: 375×812px - Touch-friendly, compact layouts
|
||||
- **Tablet**: 768×1024px - Hybrid touch/mouse layouts
|
||||
- **Responsive**: 1920×1080px base with mobile-first breakpoints
|
||||
**Detection Keywords**: mobile, phone, smartphone → mobile | desktop, web, laptop → desktop | tablet, ipad → tablet | responsive, adaptive → responsive
|
||||
|
||||
**Detection Keywords**:
|
||||
- Prompt contains "mobile", "phone", "smartphone" → mobile
|
||||
- Prompt contains "tablet", "ipad" → tablet
|
||||
- Prompt contains "desktop", "web", "laptop" → desktop
|
||||
- Prompt contains "responsive", "adaptive" → responsive
|
||||
- Otherwise: Inferred from target type (components→desktop, pages→responsive)
|
||||
**Device Presets**: Desktop (1920×1080) | Mobile (375×812) | Tablet (768×1024) | Responsive (1920×1080 + breakpoints)
|
||||
|
||||
**Stored Variables**: `device_type`, `device_source`
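
A minimal bash sketch of this inference chain; the keyword matching is a rough heuristic, not the exact matcher used by the command:

```bash
detect_device_type() {
  local explicit="$1" prompt="$2" target_type="$3"
  if [ -n "$explicit" ] && [ "$explicit" != "auto" ]; then
    echo "$explicit"; return
  fi
  local lower
  lower=$(echo "$prompt" | tr 'A-Z' 'a-z')
  case "$lower" in
    *mobile*|*phone*|*smartphone*|*ios*|*android*) echo "mobile"; return ;;
    *tablet*|*ipad*)                               echo "tablet"; return ;;
    *responsive*|*adaptive*)                       echo "responsive"; return ;;
    *desktop*|*laptop*|*widescreen*)               echo "desktop"; return ;;
  esac
  # Fall back to target type: components default to desktop, pages to responsive
  if [ "$target_type" = "component" ]; then echo "desktop"; else echo "responsive"; fi
}
device_type=$(detect_device_type "$DEVICE_TYPE_FLAG" "$prompt_text" "$target_type")
```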
|
||||
|
||||
### Phase 4: Run Initialization & Directory Setup
|
||||
```bash
|
||||
design_id = "design-run-$(date +%Y%m%d)-$RANDOM"
|
||||
relative_base_path = --session ? ".workflow/WFS-{session}/${design_id}" : ".workflow/${design_id}"
|
||||
relative_base_path = --session ? ".workflow/active/WFS-{session}/${design_id}" : ".workflow/${design_id}"
|
||||
|
||||
# Create directory and convert to absolute path
|
||||
Bash(mkdir -p "${relative_base_path}/style-extraction")
|
||||
@@ -466,13 +402,13 @@ IF design_source IN ["code_only", "hybrid"]:
|
||||
|
||||
# Animation reuse confirmation (code import with complete animations)
|
||||
IF design_source == "code_only" AND animation_complete:
|
||||
REPORT: "✅ 检测到完整的动画系统(来自代码导入)"
|
||||
REPORT: "✅ Complete animation system detected (from code import)"
|
||||
REPORT: " Duration scales: {duration_count} | Easing functions: {easing_count}"
|
||||
REPORT: ""
|
||||
REPORT: "Options:"
|
||||
REPORT: " • 'reuse' (默认) - 复用已有动画系统"
|
||||
REPORT: " • 'regenerate' - 重新生成动画系统(交互式)"
|
||||
REPORT: " • 'cancel' - 取消工作流"
|
||||
REPORT: " • 'reuse' (default) - Reuse existing animation system"
|
||||
REPORT: " • 'regenerate' - Regenerate animation system (interactive)"
|
||||
REPORT: " • 'cancel' - Cancel workflow"
|
||||
user_response = WAIT_FOR_USER_INPUT()
|
||||
MATCH user_response:
|
||||
"reuse" → skip_animation_extraction = true
|
||||
@@ -580,63 +516,22 @@ REPORT: " → Assembly tasks: {total} combinations"
|
||||
SlashCommand(command)
|
||||
|
||||
# After executing all attached tasks, collapse them into phase summary
|
||||
# When phase finishes, IMMEDIATELY execute Phase 11 (auto-continue)
|
||||
# Output:
|
||||
# Workflow complete - generate command handles preview file generation (compare.html, PREVIEW.md)
|
||||
# Output (generated by generate command):
|
||||
# - {target}-style-{s}-layout-{l}.html (assembled prototypes)
|
||||
# - {target}-style-{s}-layout-{l}.css
|
||||
# Note: compare.html and PREVIEW.md will be generated in Phase 11
|
||||
```
|
||||
|
||||
### Phase 11: Generate Preview Files
|
||||
```bash
|
||||
REPORT: "🚀 Phase 11: Generate Preview Files"
|
||||
|
||||
# Update TodoWrite to reflect preview generation phase
|
||||
TodoWrite({todos: [
|
||||
{"content": "Execute style extraction", "status": "completed", "activeForm": "Executing style extraction"},
|
||||
{"content": "Execute animation extraction", "status": "completed", "activeForm": "Executing animation extraction"},
|
||||
{"content": "Execute layout extraction", "status": "completed", "activeForm": "Executing layout extraction"},
|
||||
{"content": "Execute UI assembly", "status": "completed", "activeForm": "Executing UI assembly"},
|
||||
{"content": "Generate preview files", "status": "in_progress", "activeForm": "Generating preview files"}
|
||||
]})
|
||||
|
||||
# Execute preview generation script
|
||||
Bash(~/.claude/scripts/ui-generate-preview.sh "${base_path}/prototypes")
|
||||
|
||||
# Verify output files
|
||||
IF NOT exists("${base_path}/prototypes/compare.html"):
|
||||
ERROR: "Preview generation failed: compare.html not found"
|
||||
EXIT 1
|
||||
|
||||
IF NOT exists("${base_path}/prototypes/PREVIEW.md"):
|
||||
ERROR: "Preview generation failed: PREVIEW.md not found"
|
||||
EXIT 1
|
||||
|
||||
# Mark preview generation as complete
|
||||
TodoWrite({todos: [
|
||||
{"content": "Execute style extraction", "status": "completed", "activeForm": "Executing style extraction"},
|
||||
{"content": "Execute animation extraction", "status": "completed", "activeForm": "Executing animation extraction"},
|
||||
{"content": "Execute layout extraction", "status": "completed", "activeForm": "Executing layout extraction"},
|
||||
{"content": "Execute UI assembly", "status": "completed", "activeForm": "Executing UI assembly"},
|
||||
{"content": "Generate preview files", "status": "completed", "activeForm": "Generating preview files"}
|
||||
]})
|
||||
|
||||
REPORT: "✅ Preview files generated successfully"
|
||||
REPORT: " → compare.html (interactive matrix view)"
|
||||
REPORT: " → PREVIEW.md (usage instructions)"
|
||||
|
||||
# Workflow complete, display final report
|
||||
# - compare.html (interactive matrix view)
|
||||
# - PREVIEW.md (usage instructions)
|
||||
```
|
||||
|
||||
## TodoWrite Pattern
|
||||
```javascript
|
||||
// Initialize IMMEDIATELY after Phase 5 user confirmation to track multi-phase execution (5 orchestrator-level tasks)
|
||||
// Initialize IMMEDIATELY after Phase 5 user confirmation to track multi-phase execution (4 orchestrator-level tasks)
|
||||
TodoWrite({todos: [
|
||||
{"content": "Execute style extraction", "status": "in_progress", "activeForm": "Executing style extraction"},
|
||||
{"content": "Execute animation extraction", "status": "pending", "activeForm": "Executing animation extraction"},
|
||||
{"content": "Execute layout extraction", "status": "pending", "activeForm": "Executing layout extraction"},
|
||||
{"content": "Execute UI assembly", "status": "pending", "activeForm": "Executing UI assembly"},
|
||||
{"content": "Generate preview files", "status": "pending", "activeForm": "Generating preview files"}
|
||||
{"content": "Execute UI assembly", "status": "pending", "activeForm": "Executing UI assembly"}
|
||||
]})
|
||||
|
||||
// ⚠️ CRITICAL: Dynamic TodoWrite task attachment strategy:
|
||||
@@ -651,18 +546,13 @@ TodoWrite({todos: [
|
||||
// 4. After all attached tasks complete, COLLAPSE them into phase summary
|
||||
// 5. Update next phase to in_progress
|
||||
// 6. IMMEDIATELY execute next phase (auto-continue)
|
||||
//
|
||||
// Phase 11 Script Execution Pattern:
|
||||
// 1. Mark "Generate preview files" as in_progress
|
||||
// 2. Execute preview generation script via Bash tool
|
||||
// 3. Verify output files (compare.html, PREVIEW.md)
|
||||
// 4. Mark "Generate preview files" as completed
|
||||
// 7. After Phase 10 completes, workflow finishes (generate command handles preview files)
|
||||
//
|
||||
// Benefits:
|
||||
// ✓ Real-time visibility into sub-command task progress
|
||||
// ✓ Clean orchestrator-level summary after each phase
|
||||
// ✓ Clear mental model: SlashCommand = attach tasks, not delegate work
|
||||
// ✓ Script execution for preview generation (no delegation)
|
||||
// ✓ Generate command handles preview generation autonomously
|
||||
// ✓ Dynamic attachment/collapse maintains clarity
|
||||
```
|
||||
|
||||
@@ -682,7 +572,7 @@ Phase 9: {n×l} layout templates (layout-extract with multi-select)
|
||||
Phase 10: UI Assembly (generate)
|
||||
- Pure assembly: layout templates + design tokens
|
||||
- {s}×{l}×{n} = {total} final prototypes
|
||||
Phase 11: Preview files generated (compare.html, PREVIEW.md)
|
||||
- Preview files: compare.html, PREVIEW.md (auto-generated by generate command)
|
||||
|
||||
Assembly Process:
|
||||
✅ Separation of Concerns: Layout (structure) + Style (tokens) kept separate
|
||||
|
||||
@@ -1,611 +0,0 @@
|
||||
---
|
||||
name: explore-layers
|
||||
description: Interactive deep UI capture with depth-controlled layer exploration using Chrome DevTools MCP
|
||||
argument-hint: --url <url> --depth <1-5> [--design-id <id>] [--session <id>]
|
||||
allowed-tools: TodoWrite(*), Read(*), Write(*), Bash(*), Glob(*), mcp__chrome-devtools__*
|
||||
---
|
||||
|
||||
# Interactive Layer Exploration (/workflow:ui-design:explore-layers)
|
||||
|
||||
## Overview
|
||||
Single-URL depth-controlled interactive capture. Progressively explores UI layers from pages to Shadow DOM.
|
||||
|
||||
**Depth Levels**:
|
||||
- `1` = Page (full-page screenshot)
|
||||
- `2` = Elements (key components)
|
||||
- `3` = Interactions (modals, dropdowns)
|
||||
- `4` = Embedded (iframes, widgets)
|
||||
- `5` = Shadow DOM (web components)
|
||||
|
||||
**Requirements**: Chrome DevTools MCP
|
||||
|
||||
## Phase 1: Setup & Validation
|
||||
|
||||
### Step 1: Parse Parameters
|
||||
```javascript
|
||||
url = params["--url"]
|
||||
depth = int(params["--depth"])
|
||||
|
||||
// Validate URL
|
||||
IF NOT url.startswith("http"):
|
||||
url = f"https://{url}"
|
||||
|
||||
// Validate depth
|
||||
IF depth NOT IN [1, 2, 3, 4, 5]:
|
||||
ERROR: "Invalid depth: {depth}. Use 1-5"
|
||||
EXIT 1
|
||||
```
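
A minimal bash equivalent of this validation, assuming `$URL` and `$DEPTH` carry the raw `--url` / `--depth` values:

```bash
# Normalize the URL scheme, then require a depth of 1-5
case "$URL" in http://*|https://*) : ;; *) URL="https://$URL" ;; esac
if ! [[ "$DEPTH" =~ ^[1-5]$ ]]; then
  echo "ERROR: Invalid depth: $DEPTH. Use 1-5" >&2
  exit 1
fi
```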
|
||||
|
||||
### Step 2: Determine Base Path
|
||||
```bash
|
||||
# Priority: --design-id > --session > create new
|
||||
if [ -n "$DESIGN_ID" ]; then
|
||||
# Exact match by design ID
|
||||
relative_path=$(find .workflow -name "${DESIGN_ID}" -type d -print -quit)
|
||||
if [ -z "$relative_path" ]; then
|
||||
echo "ERROR: Design run not found: $DESIGN_ID"
|
||||
echo "HINT: Run '/workflow:ui-design:list' to see available design runs"
|
||||
exit 1
|
||||
fi
|
||||
elif [ -n "$SESSION_ID" ]; then
|
||||
# Find latest in session or create new
|
||||
relative_path=$(find .workflow/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
|
||||
if [ -z "$relative_path" ]; then
|
||||
design_id="design-run-$(date +%Y%m%d)-$RANDOM"
|
||||
relative_path=".workflow/WFS-$SESSION_ID/${design_id}"
|
||||
fi
|
||||
else
|
||||
# Create new standalone design run
|
||||
design_id="design-run-$(date +%Y%m%d)-$RANDOM"
|
||||
relative_path=".workflow/${design_id}"
|
||||
fi
|
||||
|
||||
# Create directory structure and convert to absolute path
|
||||
bash(mkdir -p "$relative_path")
|
||||
base_path=$(cd "$relative_path" && pwd)
|
||||
|
||||
# Extract and display design_id
|
||||
design_id=$(basename "$base_path")
|
||||
echo "✓ Design ID: $design_id"
|
||||
echo "✓ Base path: $base_path"
|
||||
|
||||
# Create depth directories
|
||||
bash(for i in $(seq 1 $depth); do mkdir -p "$base_path"/screenshots/depth-$i; done)
|
||||
```
|
||||
|
||||
**Output**: `url`, `depth`, `base_path`
|
||||
|
||||
### Step 3: Validate MCP Availability
|
||||
```javascript
|
||||
all_resources = ListMcpResourcesTool()
|
||||
chrome_devtools = "chrome-devtools" IN [r.server for r in all_resources]
|
||||
|
||||
IF NOT chrome_devtools:
|
||||
ERROR: "explore-layers requires Chrome DevTools MCP"
|
||||
ERROR: "Install: npm i -g @modelcontextprotocol/server-chrome-devtools"
|
||||
EXIT 1
|
||||
```
|
||||
|
||||
### Step 4: Initialize Todos
|
||||
```javascript
|
||||
todos = [
|
||||
{content: "Setup and validation", status: "completed", activeForm: "Setting up"}
|
||||
]
|
||||
|
||||
FOR level IN range(1, depth + 1):
|
||||
todos.append({
|
||||
content: f"Depth {level}: {DEPTH_NAMES[level]}",
|
||||
status: "pending",
|
||||
activeForm: f"Capturing depth {level}"
|
||||
})
|
||||
|
||||
todos.append({content: "Generate layer map", status: "pending", activeForm: "Mapping"})
|
||||
|
||||
TodoWrite({todos})
|
||||
```
|
||||
|
||||
## Phase 2: Navigate & Load Page
|
||||
|
||||
### Step 1: Get or Create Browser Page
|
||||
```javascript
|
||||
pages = mcp__chrome-devtools__list_pages()
|
||||
|
||||
IF pages.length == 0:
|
||||
mcp__chrome-devtools__new_page({url: url, timeout: 30000})
|
||||
page_idx = 0
|
||||
ELSE:
|
||||
page_idx = 0
|
||||
mcp__chrome-devtools__select_page({pageIdx: page_idx})
|
||||
mcp__chrome-devtools__navigate_page({url: url, timeout: 30000})
|
||||
|
||||
bash(sleep 3) // Wait for page load
|
||||
```
|
||||
|
||||
**Output**: `page_idx`
|
||||
|
||||
## Phase 3: Depth 1 - Page Level
|
||||
|
||||
### Step 1: Capture Full Page
|
||||
```javascript
|
||||
TodoWrite(mark_in_progress: "Depth 1: Page")
|
||||
|
||||
output_file = f"{base_path}/screenshots/depth-1/full-page.png"
|
||||
|
||||
mcp__chrome-devtools__take_screenshot({
|
||||
fullPage: true,
|
||||
format: "png",
|
||||
quality: 90,
|
||||
filePath: output_file
|
||||
})
|
||||
|
||||
layer_map = {
|
||||
"url": url,
|
||||
"depth": depth,
|
||||
"layers": {
|
||||
"depth-1": {
|
||||
"type": "page",
|
||||
"captures": [{
|
||||
"name": "full-page",
|
||||
"path": output_file,
|
||||
"size_kb": file_size_kb(output_file)
|
||||
}]
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
TodoWrite(mark_completed: "Depth 1: Page")
|
||||
```
|
||||
|
||||
**Output**: `depth-1/full-page.png`
|
||||
|
||||
## Phase 4: Depth 2 - Element Level (If depth >= 2)
|
||||
|
||||
### Step 1: Analyze Page Structure
|
||||
```javascript
|
||||
IF depth < 2: SKIP
|
||||
|
||||
TodoWrite(mark_in_progress: "Depth 2: Elements")
|
||||
|
||||
snapshot = mcp__chrome-devtools__take_snapshot()
|
||||
|
||||
// Filter key elements
|
||||
key_types = ["nav", "header", "footer", "aside", "button", "form", "article"]
|
||||
key_elements = [
|
||||
el for el in snapshot.interactiveElements
|
||||
if el.type IN key_types OR el.role IN ["navigation", "banner", "main"]
|
||||
][:10] // Limit to top 10
|
||||
```
|
||||
|
||||
### Step 2: Capture Element Screenshots
|
||||
```javascript
|
||||
depth_2_captures = []
|
||||
|
||||
FOR idx, element IN enumerate(key_elements):
|
||||
element_name = sanitize(element.text[:20] or element.type) or f"element-{idx}"
|
||||
output_file = f"{base_path}/screenshots/depth-2/{element_name}.png"
|
||||
|
||||
TRY:
|
||||
mcp__chrome-devtools__take_screenshot({
|
||||
uid: element.uid,
|
||||
format: "png",
|
||||
quality: 85,
|
||||
filePath: output_file
|
||||
})
|
||||
|
||||
depth_2_captures.append({
|
||||
"name": element_name,
|
||||
"type": element.type,
|
||||
"path": output_file,
|
||||
"size_kb": file_size_kb(output_file)
|
||||
})
|
||||
CATCH error:
|
||||
REPORT: f"Skip {element_name}: {error}"
|
||||
|
||||
layer_map.layers["depth-2"] = {
|
||||
"type": "elements",
|
||||
"captures": depth_2_captures
|
||||
}
|
||||
|
||||
TodoWrite(mark_completed: "Depth 2: Elements")
|
||||
```
|
||||
|
||||
**Output**: `depth-2/{element}.png` × N
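
The element names above rely on a `sanitize()` helper that is not defined in this file; one possible bash interpretation:

```bash
# Lowercase, collapse anything outside [a-z0-9_-] to '-', trim leading/trailing dashes
sanitize() {
  echo "$1" | tr 'A-Z' 'a-z' | tr -c 'a-z0-9_-' '-' | sed 's/--*/-/g; s/^-//; s/-$//'
}
# Example: sanitize "Main Navigation!"  ->  "main-navigation"
```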
|
||||
|
||||
## Phase 5: Depth 3 - Interaction Level (If depth >= 3)
|
||||
|
||||
### Step 1: Analyze Interactive Triggers
|
||||
```javascript
|
||||
IF depth < 3: SKIP
|
||||
|
||||
TodoWrite(mark_in_progress: "Depth 3: Interactions")
|
||||
|
||||
// Detect structure
|
||||
structure = mcp__chrome-devtools__evaluate_script({
|
||||
function: `() => ({
|
||||
modals: document.querySelectorAll('[role="dialog"], .modal').length,
|
||||
dropdowns: document.querySelectorAll('[role="menu"], .dropdown').length,
|
||||
tooltips: document.querySelectorAll('[role="tooltip"], [title]').length
|
||||
})`
|
||||
})
|
||||
|
||||
// Identify triggers
|
||||
triggers = []
|
||||
FOR element IN snapshot.interactiveElements:
|
||||
IF element.attributes CONTAINS ("data-toggle", "aria-haspopup"):
|
||||
triggers.append({
|
||||
uid: element.uid,
|
||||
type: "modal" IF "modal" IN element.classes ELSE "dropdown",
|
||||
trigger: "click",
|
||||
text: element.text
|
||||
})
|
||||
ELSE IF element.attributes CONTAINS ("title", "data-tooltip"):
|
||||
triggers.append({
|
||||
uid: element.uid,
|
||||
type: "tooltip",
|
||||
trigger: "hover",
|
||||
text: element.text
|
||||
})
|
||||
|
||||
triggers = triggers[:10] // Limit
|
||||
```
|
||||
|
||||
### Step 2: Trigger Interactions & Capture
|
||||
```javascript
|
||||
depth_3_captures = []
|
||||
|
||||
FOR idx, trigger IN enumerate(triggers):
|
||||
layer_name = f"{trigger.type}-{sanitize(trigger.text[:15]) or idx}"
|
||||
output_file = f"{base_path}/screenshots/depth-3/{layer_name}.png"
|
||||
|
||||
TRY:
|
||||
// Trigger interaction
|
||||
IF trigger.trigger == "click":
|
||||
mcp__chrome-devtools__click({uid: trigger.uid})
|
||||
ELSE:
|
||||
mcp__chrome-devtools__hover({uid: trigger.uid})
|
||||
|
||||
bash(sleep 1)
|
||||
|
||||
// Capture
|
||||
mcp__chrome-devtools__take_screenshot({
|
||||
fullPage: false, // Viewport only
|
||||
format: "png",
|
||||
quality: 90,
|
||||
filePath: output_file
|
||||
})
|
||||
|
||||
depth_3_captures.append({
|
||||
"name": layer_name,
|
||||
"type": trigger.type,
|
||||
"trigger_method": trigger.trigger,
|
||||
"path": output_file,
|
||||
"size_kb": file_size_kb(output_file)
|
||||
})
|
||||
|
||||
// Dismiss (ESC key)
|
||||
mcp__chrome-devtools__evaluate_script({
|
||||
function: `() => {
|
||||
document.dispatchEvent(new KeyboardEvent('keydown', {key: 'Escape'}));
|
||||
}`
|
||||
})
|
||||
bash(sleep 0.5)
|
||||
|
||||
CATCH error:
|
||||
REPORT: f"Skip {layer_name}: {error}"
|
||||
|
||||
layer_map.layers["depth-3"] = {
|
||||
"type": "interactions",
|
||||
"triggers": structure,
|
||||
"captures": depth_3_captures
|
||||
}
|
||||
|
||||
TodoWrite(mark_completed: "Depth 3: Interactions")
|
||||
```
|
||||
|
||||
**Output**: `depth-3/{interaction}.png` × N
|
||||
|
||||
## Phase 6: Depth 4 - Embedded Level (If depth >= 4)
|
||||
|
||||
### Step 1: Detect Iframes
|
||||
```javascript
|
||||
IF depth < 4: SKIP
|
||||
|
||||
TodoWrite(mark_in_progress: "Depth 4: Embedded")
|
||||
|
||||
iframes = mcp__chrome-devtools__evaluate_script({
|
||||
function: `() => {
|
||||
return Array.from(document.querySelectorAll('iframe')).map(iframe => ({
|
||||
src: iframe.src,
|
||||
id: iframe.id || 'iframe',
|
||||
title: iframe.title || 'untitled'
|
||||
})).filter(i => i.src && i.src.startsWith('http'));
|
||||
}`
|
||||
})
|
||||
```
|
||||
|
||||
### Step 2: Capture Iframe Content
|
||||
```javascript
|
||||
depth_4_captures = []
|
||||
|
||||
FOR idx, iframe IN enumerate(iframes):
|
||||
iframe_name = f"iframe-{sanitize(iframe.title or iframe.id)}-{idx}"
|
||||
output_file = f"{base_path}/screenshots/depth-4/{iframe_name}.png"
|
||||
|
||||
TRY:
|
||||
// Navigate to iframe URL in new tab
|
||||
mcp__chrome-devtools__new_page({url: iframe.src, timeout: 30000})
|
||||
bash(sleep 2)
|
||||
|
||||
mcp__chrome-devtools__take_screenshot({
|
||||
fullPage: true,
|
||||
format: "png",
|
||||
quality: 90,
|
||||
filePath: output_file
|
||||
})
|
||||
|
||||
depth_4_captures.append({
|
||||
"name": iframe_name,
|
||||
"url": iframe.src,
|
||||
"path": output_file,
|
||||
"size_kb": file_size_kb(output_file)
|
||||
})
|
||||
|
||||
// Close iframe tab
|
||||
current_pages = mcp__chrome-devtools__list_pages()
|
||||
mcp__chrome-devtools__close_page({pageIdx: current_pages.length - 1})
|
||||
|
||||
CATCH error:
|
||||
REPORT: f"Skip {iframe_name}: {error}"
|
||||
|
||||
layer_map.layers["depth-4"] = {
|
||||
"type": "embedded",
|
||||
"captures": depth_4_captures
|
||||
}
|
||||
|
||||
TodoWrite(mark_completed: "Depth 4: Embedded")
|
||||
```
|
||||
|
||||
**Output**: `depth-4/iframe-*.png` × N
|
||||
|
||||
## Phase 7: Depth 5 - Shadow DOM (If depth = 5)
|
||||
|
||||
### Step 1: Detect Shadow Roots
|
||||
```javascript
|
||||
IF depth < 5: SKIP
|
||||
|
||||
TodoWrite(mark_in_progress: "Depth 5: Shadow DOM")
|
||||
|
||||
shadow_elements = mcp__chrome-devtools__evaluate_script({
|
||||
function: `() => {
|
||||
const elements = Array.from(document.querySelectorAll('*'));
|
||||
return elements
|
||||
.filter(el => el.shadowRoot)
|
||||
.map((el, idx) => ({
|
||||
tag: el.tagName.toLowerCase(),
|
||||
id: el.id || \`shadow-\${idx}\`,
|
||||
innerHTML: el.shadowRoot.innerHTML.substring(0, 100)
|
||||
}));
|
||||
}`
|
||||
})
|
||||
```
|
||||
|
||||
### Step 2: Capture Shadow DOM Components
|
||||
```javascript
|
||||
depth_5_captures = []
|
||||
|
||||
FOR idx, shadow IN enumerate(shadow_elements):
|
||||
shadow_name = f"shadow-{sanitize(shadow.id)}"
|
||||
output_file = f"{base_path}/screenshots/depth-5/{shadow_name}.png"
|
||||
|
||||
TRY:
|
||||
// Inject highlight script
|
||||
mcp__chrome-devtools__evaluate_script({
|
||||
function: `() => {
|
||||
const el = document.querySelector('${shadow.tag}${shadow.id ? "#" + shadow.id : ""}');
|
||||
if (el) {
|
||||
el.scrollIntoView({behavior: 'smooth', block: 'center'});
|
||||
el.style.outline = '3px solid red';
|
||||
}
|
||||
}`
|
||||
})
|
||||
|
||||
bash(sleep 0.5)
|
||||
|
||||
// Full-page screenshot (component highlighted)
|
||||
mcp__chrome-devtools__take_screenshot({
|
||||
fullPage: false,
|
||||
format: "png",
|
||||
quality: 90,
|
||||
filePath: output_file
|
||||
})
|
||||
|
||||
depth_5_captures.append({
|
||||
"name": shadow_name,
|
||||
"tag": shadow.tag,
|
||||
"path": output_file,
|
||||
"size_kb": file_size_kb(output_file)
|
||||
})
|
||||
|
||||
CATCH error:
|
||||
REPORT: f"Skip {shadow_name}: {error}"
|
||||
|
||||
layer_map.layers["depth-5"] = {
|
||||
"type": "shadow-dom",
|
||||
"captures": depth_5_captures
|
||||
}
|
||||
|
||||
TodoWrite(mark_completed: "Depth 5: Shadow DOM")
|
||||
```
|
||||
|
||||
**Output**: `depth-5/shadow-*.png` × N
|
||||
|
||||
## Phase 8: Generate Layer Map
|
||||
|
||||
### Step 1: Compile Metadata
|
||||
```javascript
|
||||
TodoWrite(mark_in_progress: "Generate layer map")
|
||||
|
||||
// Calculate totals
|
||||
total_captures = sum(len(layer.captures) for layer in layer_map.layers.values())
|
||||
total_size_kb = sum(
|
||||
sum(c.size_kb for c in layer.captures)
|
||||
for layer in layer_map.layers.values()
|
||||
)
|
||||
|
||||
layer_map["summary"] = {
|
||||
"timestamp": current_timestamp(),
|
||||
"total_depth": depth,
|
||||
"total_captures": total_captures,
|
||||
"total_size_kb": total_size_kb
|
||||
}
|
||||
|
||||
Write(f"{base_path}/screenshots/layer-map.json", JSON.stringify(layer_map, indent=2))
|
||||
|
||||
TodoWrite(mark_completed: "Generate layer map")
|
||||
```
|
||||
|
||||
**Output**: `layer-map.json`
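
A minimal `jq` sketch that recomputes the same summary fields directly from `layer-map.json` (assumes `jq` and GNU `date` are available):

```bash
map="$base_path/screenshots/layer-map.json"
jq --arg ts "$(date -Iseconds)" '
  .summary = {
    timestamp: $ts,
    total_depth: .depth,
    total_captures: ([.layers[].captures[]] | length),
    total_size_kb: ([.layers[].captures[].size_kb] | add // 0)
  }' "$map" > "$map.tmp" && mv "$map.tmp" "$map"
```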
|
||||
|
||||
## Completion
|
||||
|
||||
### Todo Update
|
||||
```javascript
|
||||
all_todos_completed = true
|
||||
TodoWrite({todos: all_completed_todos})
|
||||
```
|
||||
|
||||
### Output Message
|
||||
```
|
||||
✅ Interactive layer exploration complete!
|
||||
|
||||
Configuration:
|
||||
- URL: {url}
|
||||
- Max depth: {depth}
|
||||
- Layers explored: {len(layer_map.layers)}
|
||||
|
||||
Capture Summary:
|
||||
Depth 1 (Page): {depth_1_count} screenshot(s)
|
||||
Depth 2 (Elements): {depth_2_count} screenshot(s)
|
||||
Depth 3 (Interactions): {depth_3_count} screenshot(s)
|
||||
Depth 4 (Embedded): {depth_4_count} screenshot(s)
|
||||
Depth 5 (Shadow DOM): {depth_5_count} screenshot(s)
|
||||
|
||||
Total: {total_captures} captures ({total_size_kb:.1f} KB)
|
||||
|
||||
Output Structure:
|
||||
{base_path}/screenshots/
|
||||
├── depth-1/
|
||||
│ └── full-page.png
|
||||
├── depth-2/
|
||||
│ ├── navbar.png
|
||||
│ └── footer.png
|
||||
├── depth-3/
|
||||
│ ├── modal-login.png
|
||||
│ └── dropdown-menu.png
|
||||
├── depth-4/
|
||||
│ └── iframe-analytics.png
|
||||
├── depth-5/
|
||||
│ └── shadow-button.png
|
||||
└── layer-map.json
|
||||
|
||||
Next: /workflow:ui-design:extract --images "screenshots/**/*.png"
|
||||
```
|
||||
|
||||
## Simple Bash Commands
|
||||
|
||||
### Directory Setup
|
||||
```bash
|
||||
# Create depth directories
|
||||
bash(for i in $(seq 1 $depth); do mkdir -p $BASE_PATH/screenshots/depth-$i; done)
|
||||
```
|
||||
|
||||
### Validation
|
||||
```bash
|
||||
# Check MCP
|
||||
all_resources = ListMcpResourcesTool()
|
||||
|
||||
# Count captures per depth
|
||||
bash(ls $base_path/screenshots/depth-{1..5}/*.png 2>/dev/null | wc -l)
|
||||
```
|
||||
|
||||
### File Operations
|
||||
```bash
|
||||
# List all captures
|
||||
bash(find $base_path/screenshots -name "*.png" -type f)
|
||||
|
||||
# Total size
|
||||
bash(du -sh $base_path/screenshots)
|
||||
```
|
||||
|
||||
## Output Structure
|
||||
|
||||
```
|
||||
{base_path}/screenshots/
|
||||
├── depth-1/
|
||||
│ └── full-page.png
|
||||
├── depth-2/
|
||||
│ ├── {element}.png
|
||||
│ └── ...
|
||||
├── depth-3/
|
||||
│ ├── {interaction}.png
|
||||
│ └── ...
|
||||
├── depth-4/
|
||||
│ ├── iframe-*.png
|
||||
│ └── ...
|
||||
├── depth-5/
|
||||
│ ├── shadow-*.png
|
||||
│ └── ...
|
||||
└── layer-map.json
|
||||
```
|
||||
|
||||
## Depth Level Details
|
||||
|
||||
| Depth | Name | Captures | Time | Use Case |
|
||||
|-------|------|----------|------|----------|
|
||||
| 1 | Page | Full page | 30s | Quick preview |
|
||||
| 2 | Elements | Key components | 1-2min | Component library |
|
||||
| 3 | Interactions | Modals, dropdowns | 2-4min | UI flows |
|
||||
| 4 | Embedded | Iframes | 3-6min | Complete context |
|
||||
| 5 | Shadow DOM | Web components | 4-8min | Full coverage |
|
||||
|
||||
## Error Handling
|
||||
|
||||
### Common Errors
|
||||
```
|
||||
ERROR: Chrome DevTools MCP required
|
||||
→ Install: npm i -g @modelcontextprotocol/server-chrome-devtools
|
||||
|
||||
ERROR: Invalid depth
|
||||
→ Use: 1-5
|
||||
|
||||
ERROR: Interaction trigger failed
|
||||
→ Some modals may be skipped, check layer-map.json
|
||||
```
|
||||
|
||||
### Recovery
|
||||
- **Partial success**: Lower depth captures preserved
|
||||
- **Trigger failures**: Interaction layer may be incomplete
|
||||
- **Iframe restrictions**: Cross-origin iframes skipped
|
||||
|
||||
## Quality Checklist

- [ ] All depths up to specified level captured
- [ ] layer-map.json generated with metadata
- [ ] File sizes valid (> 500 bytes)
- [ ] Interaction triggers executed
- [ ] Shadow DOM elements highlighted

## Key Features

- **Depth-controlled**: Progressive capture across levels 1-5
- **Interactive triggers**: Click/hover for hidden layers
- **Iframe support**: Embedded content captured
- **Shadow DOM**: Web component internals
- **Structured output**: Organized by depth

## Integration

**Input**: Single URL + depth level (1-5)
**Output**: Hierarchical screenshots + layer-map.json
**Complements**: `/workflow:ui-design:capture` (multi-URL batch)
**Next**: `/workflow:ui-design:extract` for design analysis
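
Put together, a typical end-to-end sequence is a single deep capture followed by extraction (the URL and session id below are illustrative):

```
/workflow:ui-design:explore-layers --url https://app.example.com --depth 3 --session ui-redesign
/workflow:ui-design:extract --images "screenshots/**/*.png"
```
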
@@ -31,7 +31,7 @@ if [ -n "$DESIGN_ID" ]; then
relative_path=$(find .workflow -name "${DESIGN_ID}" -type d -print -quit)
elif [ -n "$SESSION_ID" ]; then
# Latest in session
relative_path=$(find .workflow/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
relative_path=$(find .workflow/active/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
else
# Latest globally
relative_path=$(find .workflow -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)

@@ -67,7 +67,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Write(*), Bash(*)

**Optional Parameters**:
- `--session <id>`: Workflow session ID
- Integrate into existing session (`.workflow/WFS-{session}/`)
- Integrate into existing session (`.workflow/active/WFS-{session}/`)
- Enable automatic design system integration (Phase 4)
- If not provided: standalone mode (`.workflow/`)

@@ -98,7 +98,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Write(*), Bash(*)
**Session Integration**:
- `--session` flag determines session integration or standalone execution
- Integrated: Design system automatically added to session artifacts
- Standalone: Output in `.workflow/{run_id}/`
- Standalone: Output in `.workflow/active/{run_id}/`

## 5-Phase Execution

@@ -184,11 +184,11 @@ design_id = "design-run-$(date +%Y%m%d)-$RANDOM"

IF --session:
session_id = {provided_session}
relative_base_path = ".workflow/WFS-{session_id}/{design_id}"
relative_base_path = ".workflow/active/WFS-{session_id}/{design_id}"
session_mode = "integrated"
ELSE:
session_id = null
relative_base_path = ".workflow/{design_id}"
relative_base_path = ".workflow/active/{design_id}"
session_mode = "standalone"

# Create base directory and convert to absolute path

@@ -61,7 +61,7 @@ if [ -n "$DESIGN_ID" ]; then
fi
elif [ -n "$SESSION_ID" ]; then
# Latest in session
relative_path=$(find .workflow/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
relative_path=$(find .workflow/active/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
if [ -z "$relative_path" ]; then
echo "ERROR: No design run found in session: $SESSION_ID"
echo "HINT: Create a design run first or provide --design-id"

@@ -84,7 +84,7 @@ if [ -n "$DESIGN_ID" ]; then
relative_path=$(find .workflow -name "${DESIGN_ID}" -type d -print -quit)
elif [ -n "$SESSION_ID" ]; then
# Latest in session
relative_path=$(find .workflow/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
relative_path=$(find .workflow/active/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
else
# Latest globally
relative_path=$(find .workflow -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)

@@ -1,174 +0,0 @@
---
name: list
description: List all available design runs with metadata (session, created time, prototype count)
argument-hint: [--session <id>]
allowed-tools: Bash(*), Read(*)
---

# List Design Runs (/workflow:ui-design:list)

## Overview
List all available UI design runs across sessions or within a specific session. Displays design IDs with metadata for easy reference.

**Output**: Formatted list with design-id, session, created time, and prototype count

## Implementation

### Step 1: Determine Search Scope
```bash
# Priority: --session > all sessions
search_path=$(if [ -n "$SESSION_ID" ]; then
  echo ".workflow/WFS-$SESSION_ID"
else
  echo ".workflow"
fi)
```

### Step 2: Find and Display Design Runs
```bash
echo "Available design runs:"
echo ""

# Find all design-run directories
found_count=0
while IFS= read -r line; do
  timestamp=$(echo "$line" | cut -d' ' -f1)
  path=$(echo "$line" | cut -d' ' -f2-)

  # Extract design_id from path
  design_id=$(basename "$path")

  # Extract session from path
  session_id=$(echo "$path" | grep -oP 'WFS-\K[^/]+' || echo "standalone")

  # Format created date
  created_at=$(date -d "@${timestamp%.*}" '+%Y-%m-%d %H:%M' 2>/dev/null || echo "unknown")

  # Count prototypes
  prototype_count=$(find "$path/prototypes" -name "*.html" 2>/dev/null | wc -l)

  # Display formatted output
  echo " - $design_id"
  echo " Session: $session_id"
  echo " Created: $created_at"
  echo " Prototypes: $prototype_count"
  echo ""

  found_count=$((found_count + 1))
done < <(find "$search_path" -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr)

# Summary
if [ $found_count -eq 0 ]; then
  echo " No design runs found."
  echo ""
  if [ -n "$SESSION_ID" ]; then
    echo "💡 HINT: Try running '/workflow:ui-design:explore-auto' to create a design run"
  else
    echo "💡 HINT: Try running '/workflow:ui-design:explore-auto --session <id>' to create a design run"
  fi
else
  echo "Total: $found_count design run(s)"
  echo ""
  echo "💡 USE: /workflow:ui-design:generate --design-id \"<id>\""
  echo " OR: /workflow:ui-design:generate --session \"<session>\""
fi
```

### Step 3: Execute List Command
```bash
Bash(
  description: "List all UI design runs with metadata",
  command: "
    search_path=\"${search_path}\"
    SESSION_ID=\"${SESSION_ID:-}\"

    echo 'Available design runs:'
    echo ''

    found_count=0
    while IFS= read -r line; do
      timestamp=\$(echo \"\$line\" | cut -d' ' -f1)
      path=\$(echo \"\$line\" | cut -d' ' -f2-)

      design_id=\$(basename \"\$path\")
      session_id=\$(echo \"\$path\" | grep -oP 'WFS-\\K[^/]+' || echo 'standalone')
      created_at=\$(date -d \"@\${timestamp%.*}\" '+%Y-%m-%d %H:%M' 2>/dev/null || echo 'unknown')
      prototype_count=\$(find \"\$path/prototypes\" -name '*.html' 2>/dev/null | wc -l)

      echo \" - \$design_id\"
      echo \" Session: \$session_id\"
      echo \" Created: \$created_at\"
      echo \" Prototypes: \$prototype_count\"
      echo ''

      found_count=\$((found_count + 1))
    done < <(find \"\$search_path\" -name 'design-run-*' -type d -printf '%T@ %p\\n' 2>/dev/null | sort -nr)

    if [ \$found_count -eq 0 ]; then
      echo ' No design runs found.'
      echo ''
      if [ -n \"\$SESSION_ID\" ]; then
        echo '💡 HINT: Try running \\'/workflow:ui-design:explore-auto\\' to create a design run'
      else
        echo '💡 HINT: Try running \\'/workflow:ui-design:explore-auto --session <id>\\' to create a design run'
      fi
    else
      echo \"Total: \$found_count design run(s)\"
      echo ''
      echo '💡 USE: /workflow:ui-design:generate --design-id \"<id>\"'
      echo ' OR: /workflow:ui-design:generate --session \"<session>\"'
    fi
  "
)
```

## Example Output

### With Session Filter
```
$ /workflow:ui-design:list --session ui-redesign

Available design runs:

 - design-run-20250109-143052
   Session: ui-redesign
   Created: 2025-01-09 14:30
   Prototypes: 12

 - design-run-20250109-101534
   Session: ui-redesign
   Created: 2025-01-09 10:15
   Prototypes: 6

Total: 2 design run(s)

💡 USE: /workflow:ui-design:generate --design-id "<id>"
   OR: /workflow:ui-design:generate --session "<session>"
```

### All Sessions
```
$ /workflow:ui-design:list

Available design runs:

 - design-run-20250109-143052
   Session: ui-redesign
   Created: 2025-01-09 14:30
   Prototypes: 12

 - design-run-20250108-092314
   Session: landing-page
   Created: 2025-01-08 09:23
   Prototypes: 3

 - design-run-20250107-155623
   Session: standalone
   Created: 2025-01-07 15:56
   Prototypes: 8

Total: 3 design run(s)

💡 USE: /workflow:ui-design:generate --design-id "<id>"
   OR: /workflow:ui-design:generate --session "<session>"
```

@@ -62,7 +62,7 @@ if [ -n "$DESIGN_ID" ]; then
relative_path=$(find .workflow -name "${DESIGN_ID}" -type d -print -quit)
elif [ -n "$SESSION_ID" ]; then
# Latest in session
relative_path=$(find .workflow/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
relative_path=$(find .workflow/active/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
else
# Latest globally
relative_path=$(find .workflow -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)

@@ -1,21 +0,0 @@
---
name: Workflow Coordination
description: Core coordination principles for multi-agent development workflows
---
## Core Agent Coordination Principles

### Planning First Principle

**Purpose**: Thorough upfront planning reduces risk, improves quality, and prevents costly rework.

### TodoWrite Coordination Rules

1. **TodoWrite FIRST**: Always create TodoWrite entries *before* complex task execution begins.
2. **Context Before Implementation**: Context gathering must complete before implementation tasks begin.
3. **Agent Coordination**: Each agent is responsible for updating the status of its assigned todo.
4. **Progress Visibility**: Provides clear workflow state visibility to stakeholders.

6. **Checkpoint Safety**: State is saved automatically after each agent completes its work.
7. **Interrupt/Resume**: The system must support full state preservation and restoration.

@@ -78,7 +78,7 @@ cd src/auth && qwen -p "
Trace the complete execution flow of a user login,
from the API entry point to the database queries,
and list all called functions and their dependencies
" -m coder-model
"
```

**Tool output**: Qwen understands the requirement and automatically traces the execution path

@@ -278,7 +278,7 @@ codex -C src/auth --full-auto exec "
**Method 1: Semantic CLI tool invocation** (recommended, flexible)

- **User input**: `Use gemini to analyze this project's architecture design, identifying the main modules, dependencies, and architectural patterns`
- **Claude Code generates and runs**: `cd project-root && gemini -p "..." -m gemini-3-pro-preview-11-2025`
- **Claude Code generates and runs**: `cd project-root && gemini -p "..."`

**Method 2: Slash command**
- **User input**: `/cli:analyze --tool gemini "Analyze the project architecture"`

@@ -291,7 +291,7 @@ codex -C src/auth --full-auto exec "

**Method 1: Semantic CLI tool invocation**
- **User input**: `Have codex implement the user authentication feature: registration (email + password + verification), login (JWT token), token refresh; tech stack Node.js + Express`
- **Claude Code generates and runs**: `codex -C src/auth --full-auto exec "..." -m gpt-5 --skip-git-repo-check -s danger-full-access`
- **Claude Code generates and runs**: `codex -C src/auth --full-auto exec "..." --skip-git-repo-check -s danger-full-access`

**Method 2: Slash command** (workflow-oriented)
- **User input**: `/workflow:plan --agent "Implement the user authentication feature"` → `/workflow:execute`

@@ -304,9 +304,9 @@ codex -C src/auth --full-auto exec "

**Method 1: Semantic CLI tool invocation**
- **User input**: `Use gemini to diagnose the login timeout issue, analyzing the request flow, performance bottlenecks, and database query efficiency`
- **Claude Code generates and runs**: `cd src/auth && gemini -p "..." -m gemini-3-pro-preview-11-2025`
- **Claude Code generates and runs**: `cd src/auth && gemini -p "..."`
- **User input**: `Have codex fix the login timeout based on the analysis above, optimizing queries and adding caching`
- **Claude Code generates and runs**: `codex -C src/auth --full-auto exec "..." -m gpt-5 --skip-git-repo-check -s danger-full-access`
- **Claude Code generates and runs**: `codex -C src/auth --full-auto exec "..." --skip-git-repo-check -s danger-full-access`

**Method 2: Slash command**
- **User input**: `/cli:mode:bug-diagnosis --tool gemini "Diagnose the login timeout"` → `/cli:execute --tool codex "Fix the login timeout"`

@@ -319,7 +319,7 @@ codex -C src/auth --full-auto exec "

**Method 1: Semantic CLI tool invocation**
- **User input**: `Use gemini to generate technical documentation for the API module, including endpoint descriptions, data models, and usage examples`
- **Claude Code generates and runs**: `cd src/api && gemini -p "..." -m gemini-3-pro-preview-11-2025 --approval-mode yolo`
- **Claude Code generates and runs**: `cd src/api && gemini -p "..." --approval-mode yolo`

**Method 2: Slash command**
- **User input**: `/memory:docs src/api --tool gemini --mode full`

@@ -762,7 +762,7 @@ async function executeCLIAnalysis(prompt) {

// Execute gemini with analysis prompt using --include-directories
// This allows gemini to access reference docs while maintaining correct file context
const command = `gemini -p "${escapePrompt(prompt)}" -m gemini-3-pro-preview-11-2025 --include-directories ${referencePath}`;
const command = `gemini -p "${escapePrompt(prompt)}" --include-directories ${referencePath}`;

try {
const result = await execBash(command, { timeout: 120000 }); // 2 min timeout

@@ -770,7 +770,7 @@ async function executeCLIAnalysis(prompt) {
} catch (error) {
// Fallback to qwen if gemini fails
console.warn('Gemini failed, falling back to qwen');
const fallbackCmd = `qwen -p "${escapePrompt(prompt)}" -m coder-model --include-directories ${referencePath}`;
const fallbackCmd = `qwen -p "${escapePrompt(prompt)}" --include-directories ${referencePath}`;
const result = await execBash(fallbackCmd, { timeout: 120000 });
return parseAnalysisResult(result.stdout);
}

@@ -101,7 +101,7 @@
|
||||
{
|
||||
"name": "enhance-prompt",
|
||||
"command": "/enhance-prompt",
|
||||
"description": "Enhanced prompt transformation using session memory and codebase analysis with --enhance flag detection",
|
||||
"description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection",
|
||||
"arguments": "user input to enhance",
|
||||
"category": "general",
|
||||
"subcategory": null,
|
||||
@@ -428,11 +428,33 @@
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/execute.md"
|
||||
},
|
||||
{
|
||||
"name": "init",
|
||||
"command": "/workflow:init",
|
||||
"description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
|
||||
"arguments": "[--regenerate]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/init.md"
|
||||
},
|
||||
{
|
||||
"name": "lite-execute",
|
||||
"command": "/workflow:lite-execute",
|
||||
"description": "Execute tasks based on in-memory plan, prompt description, or file content",
|
||||
"arguments": "[--in-memory] [\\\"task description\\\"|file-path]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/lite-execute.md"
|
||||
},
|
||||
{
|
||||
"name": "lite-plan",
|
||||
"command": "/workflow:lite-plan",
|
||||
"description": "Lightweight interactive planning and execution workflow with in-memory planning, code exploration, and immediate execution after user confirmation",
|
||||
"arguments": "[--tool claude|gemini|qwen|codex] [--quick] \\\"task description\\\"|file.md",
|
||||
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
|
||||
"arguments": "[-e|--explore] \\\"task description\\\"|file.md",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
@@ -450,17 +472,6 @@
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/plan.md"
|
||||
},
|
||||
{
|
||||
"name": "resume",
|
||||
"command": "/workflow:resume",
|
||||
"description": "Resume paused workflow session with automatic progress analysis, pending task identification, and conflict detection",
|
||||
"arguments": "session-id for workflow session to resume",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/resume.md"
|
||||
},
|
||||
{
|
||||
"name": "review",
|
||||
"command": "/workflow:review",
|
||||
@@ -519,8 +530,8 @@
|
||||
{
|
||||
"name": "workflow:status",
|
||||
"command": "/workflow:status",
|
||||
"description": "Generate on-demand task status views from JSON task data with optional task-id filtering for detailed view",
|
||||
"arguments": "[optional: task-id]",
|
||||
"description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
|
||||
"arguments": "[optional: --project|task-id|--validate]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "session-management",
|
||||
@@ -692,17 +703,6 @@
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/animation-extract.md"
|
||||
},
|
||||
{
|
||||
"name": "capture",
|
||||
"command": "/workflow:ui-design:capture",
|
||||
"description": "Batch screenshot capture for UI design workflows using MCP puppeteer or local fallback with URL mapping",
|
||||
"arguments": "--url-map \"target:url,...\" [--design-id <id>] [--session <id>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/capture.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow:ui-design:codify-style",
|
||||
"command": "/workflow:ui-design:codify-style",
|
||||
@@ -714,6 +714,17 @@
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/codify-style.md"
|
||||
},
|
||||
{
|
||||
"name": "design-sync",
|
||||
"command": "/workflow:ui-design:design-sync",
|
||||
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
|
||||
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/design-sync.md"
|
||||
},
|
||||
{
|
||||
"name": "explore-auto",
|
||||
"command": "/workflow:ui-design:explore-auto",
|
||||
@@ -725,17 +736,6 @@
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/explore-auto.md"
|
||||
},
|
||||
{
|
||||
"name": "explore-layers",
|
||||
"command": "/workflow:ui-design:explore-layers",
|
||||
"description": "Interactive deep UI capture with depth-controlled layer exploration using MCP puppeteer",
|
||||
"arguments": "--url <url> --depth <1-5> [--design-id <id>] [--session <id>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/explore-layers.md"
|
||||
},
|
||||
{
|
||||
"name": "generate",
|
||||
"command": "/workflow:ui-design:generate",
|
||||
@@ -780,17 +780,6 @@
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/layout-extract.md"
|
||||
},
|
||||
{
|
||||
"name": "list",
|
||||
"command": "/workflow:ui-design:list",
|
||||
"description": "List all available design runs with metadata (session, created time, prototype count)",
|
||||
"arguments": "[--session <id>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Beginner",
|
||||
"file_path": "workflow/ui-design/list.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow:ui-design:reference-page-generator",
|
||||
"command": "/workflow:ui-design:reference-page-generator",
|
||||
@@ -812,16 +801,5 @@
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/style-extract.md"
|
||||
},
|
||||
{
|
||||
"name": "update",
|
||||
"command": "/workflow:ui-design:update",
|
||||
"description": "Update brainstorming artifacts with finalized design system references from selected prototypes",
|
||||
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/update.md"
|
||||
}
|
||||
]
|
||||
@@ -109,7 +109,7 @@
|
||||
{
|
||||
"name": "enhance-prompt",
|
||||
"command": "/enhance-prompt",
|
||||
"description": "Enhanced prompt transformation using session memory and codebase analysis with --enhance flag detection",
|
||||
"description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection",
|
||||
"arguments": "user input to enhance",
|
||||
"category": "general",
|
||||
"subcategory": null,
|
||||
@@ -316,11 +316,33 @@
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/execute.md"
|
||||
},
|
||||
{
|
||||
"name": "init",
|
||||
"command": "/workflow:init",
|
||||
"description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
|
||||
"arguments": "[--regenerate]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/init.md"
|
||||
},
|
||||
{
|
||||
"name": "lite-execute",
|
||||
"command": "/workflow:lite-execute",
|
||||
"description": "Execute tasks based on in-memory plan, prompt description, or file content",
|
||||
"arguments": "[--in-memory] [\\\"task description\\\"|file-path]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/lite-execute.md"
|
||||
},
|
||||
{
|
||||
"name": "lite-plan",
|
||||
"command": "/workflow:lite-plan",
|
||||
"description": "Lightweight interactive planning and execution workflow with in-memory planning, code exploration, and immediate execution after user confirmation",
|
||||
"arguments": "[--tool claude|gemini|qwen|codex] [--quick] \\\"task description\\\"|file.md",
|
||||
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
|
||||
"arguments": "[-e|--explore] \\\"task description\\\"|file.md",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
@@ -338,17 +360,6 @@
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/plan.md"
|
||||
},
|
||||
{
|
||||
"name": "resume",
|
||||
"command": "/workflow:resume",
|
||||
"description": "Resume paused workflow session with automatic progress analysis, pending task identification, and conflict detection",
|
||||
"arguments": "session-id for workflow session to resume",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/resume.md"
|
||||
},
|
||||
{
|
||||
"name": "review",
|
||||
"command": "/workflow:review",
|
||||
@@ -363,8 +374,8 @@
|
||||
{
|
||||
"name": "workflow:status",
|
||||
"command": "/workflow:status",
|
||||
"description": "Generate on-demand task status views from JSON task data with optional task-id filtering for detailed view",
|
||||
"arguments": "[optional: task-id]",
|
||||
"description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
|
||||
"arguments": "[optional: --project|task-id|--validate]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "session-management",
|
||||
@@ -720,17 +731,6 @@
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/animation-extract.md"
|
||||
},
|
||||
{
|
||||
"name": "capture",
|
||||
"command": "/workflow:ui-design:capture",
|
||||
"description": "Batch screenshot capture for UI design workflows using MCP puppeteer or local fallback with URL mapping",
|
||||
"arguments": "--url-map \"target:url,...\" [--design-id <id>] [--session <id>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/capture.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow:ui-design:codify-style",
|
||||
"command": "/workflow:ui-design:codify-style",
|
||||
@@ -742,6 +742,17 @@
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/codify-style.md"
|
||||
},
|
||||
{
|
||||
"name": "design-sync",
|
||||
"command": "/workflow:ui-design:design-sync",
|
||||
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
|
||||
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/design-sync.md"
|
||||
},
|
||||
{
|
||||
"name": "explore-auto",
|
||||
"command": "/workflow:ui-design:explore-auto",
|
||||
@@ -753,17 +764,6 @@
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/explore-auto.md"
|
||||
},
|
||||
{
|
||||
"name": "explore-layers",
|
||||
"command": "/workflow:ui-design:explore-layers",
|
||||
"description": "Interactive deep UI capture with depth-controlled layer exploration using MCP puppeteer",
|
||||
"arguments": "--url <url> --depth <1-5> [--design-id <id>] [--session <id>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/explore-layers.md"
|
||||
},
|
||||
{
|
||||
"name": "generate",
|
||||
"command": "/workflow:ui-design:generate",
|
||||
@@ -808,17 +808,6 @@
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/layout-extract.md"
|
||||
},
|
||||
{
|
||||
"name": "list",
|
||||
"command": "/workflow:ui-design:list",
|
||||
"description": "List all available design runs with metadata (session, created time, prototype count)",
|
||||
"arguments": "[--session <id>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Beginner",
|
||||
"file_path": "workflow/ui-design/list.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow:ui-design:reference-page-generator",
|
||||
"command": "/workflow:ui-design:reference-page-generator",
|
||||
@@ -840,17 +829,6 @@
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/style-extract.md"
|
||||
},
|
||||
{
|
||||
"name": "update",
|
||||
"command": "/workflow:ui-design:update",
|
||||
"description": "Update brainstorming artifacts with finalized design system references from selected prototypes",
|
||||
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/update.md"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
@@ -71,7 +71,7 @@
|
||||
{
|
||||
"name": "enhance-prompt",
|
||||
"command": "/enhance-prompt",
|
||||
"description": "Enhanced prompt transformation using session memory and codebase analysis with --enhance flag detection",
|
||||
"description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection",
|
||||
"arguments": "user input to enhance",
|
||||
"category": "general",
|
||||
"subcategory": null,
|
||||
@@ -244,6 +244,17 @@
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/brainstorm/ux-expert.md"
|
||||
},
|
||||
{
|
||||
"name": "init",
|
||||
"command": "/workflow:init",
|
||||
"description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
|
||||
"arguments": "[--regenerate]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/init.md"
|
||||
},
|
||||
{
|
||||
"name": "list",
|
||||
"command": "/workflow:session:list",
|
||||
@@ -299,17 +310,6 @@
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/animation-extract.md"
|
||||
},
|
||||
{
|
||||
"name": "capture",
|
||||
"command": "/workflow:ui-design:capture",
|
||||
"description": "Batch screenshot capture for UI design workflows using MCP puppeteer or local fallback with URL mapping",
|
||||
"arguments": "--url-map \"target:url,...\" [--design-id <id>] [--session <id>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/capture.md"
|
||||
},
|
||||
{
|
||||
"name": "explore-auto",
|
||||
"command": "/workflow:ui-design:explore-auto",
|
||||
@@ -321,17 +321,6 @@
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/explore-auto.md"
|
||||
},
|
||||
{
|
||||
"name": "explore-layers",
|
||||
"command": "/workflow:ui-design:explore-layers",
|
||||
"description": "Interactive deep UI capture with depth-controlled layer exploration using MCP puppeteer",
|
||||
"arguments": "--url <url> --depth <1-5> [--design-id <id>] [--session <id>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/explore-layers.md"
|
||||
},
|
||||
{
|
||||
"name": "imitate-auto",
|
||||
"command": "/workflow:ui-design:imitate-auto",
|
||||
@@ -354,17 +343,6 @@
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/layout-extract.md"
|
||||
},
|
||||
{
|
||||
"name": "list",
|
||||
"command": "/workflow:ui-design:list",
|
||||
"description": "List all available design runs with metadata (session, created time, prototype count)",
|
||||
"arguments": "[--session <id>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Beginner",
|
||||
"file_path": "workflow/ui-design/list.md"
|
||||
},
|
||||
{
|
||||
"name": "style-extract",
|
||||
"command": "/workflow:ui-design:style-extract",
|
||||
@@ -375,17 +353,6 @@
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/style-extract.md"
|
||||
},
|
||||
{
|
||||
"name": "update",
|
||||
"command": "/workflow:ui-design:update",
|
||||
"description": "Update brainstorming artifacts with finalized design system references from selected prototypes",
|
||||
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/update.md"
|
||||
}
|
||||
],
|
||||
"implementation": [
|
||||
@@ -444,6 +411,17 @@
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/execute.md"
|
||||
},
|
||||
{
|
||||
"name": "lite-execute",
|
||||
"command": "/workflow:lite-execute",
|
||||
"description": "Execute tasks based on in-memory plan, prompt description, or file content",
|
||||
"arguments": "[--in-memory] [\\\"task description\\\"|file-path]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/lite-execute.md"
|
||||
},
|
||||
{
|
||||
"name": "test-cycle-execute",
|
||||
"command": "/workflow:test-cycle-execute",
|
||||
@@ -592,8 +570,8 @@
|
||||
{
|
||||
"name": "lite-plan",
|
||||
"command": "/workflow:lite-plan",
|
||||
"description": "Lightweight interactive planning and execution workflow with in-memory planning, code exploration, and immediate execution after user confirmation",
|
||||
"arguments": "[--tool claude|gemini|qwen|codex] [--quick] \\\"task description\\\"|file.md",
|
||||
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
|
||||
"arguments": "[-e|--explore] \\\"task description\\\"|file.md",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
@@ -633,6 +611,17 @@
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/codify-style.md"
|
||||
},
|
||||
{
|
||||
"name": "design-sync",
|
||||
"command": "/workflow:ui-design:design-sync",
|
||||
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
|
||||
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/ui-design/design-sync.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow:ui-design:import-from-code",
|
||||
"command": "/workflow:ui-design:import-from-code",
|
||||
@@ -725,17 +714,6 @@
|
||||
}
|
||||
],
|
||||
"session-management": [
|
||||
{
|
||||
"name": "resume",
|
||||
"command": "/workflow:resume",
|
||||
"description": "Resume paused workflow session with automatic progress analysis, pending task identification, and conflict detection",
|
||||
"arguments": "session-id for workflow session to resume",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/resume.md"
|
||||
},
|
||||
{
|
||||
"name": "complete",
|
||||
"command": "/workflow:session:complete",
|
||||
@@ -761,8 +739,8 @@
|
||||
{
|
||||
"name": "workflow:status",
|
||||
"command": "/workflow:status",
|
||||
"description": "Generate on-demand task status views from JSON task data with optional task-id filtering for detailed view",
|
||||
"arguments": "[optional: task-id]",
|
||||
"description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
|
||||
"arguments": "[optional: --project|task-id|--validate]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "session-management",
|
||||
|
||||
@@ -24,8 +24,8 @@
|
||||
{
|
||||
"name": "workflow:status",
|
||||
"command": "/workflow:status",
|
||||
"description": "Generate on-demand task status views from JSON task data with optional task-id filtering for detailed view",
|
||||
"arguments": "[optional: task-id]",
|
||||
"description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
|
||||
"arguments": "[optional: --project|task-id|--validate]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "session-management",
|
||||
@@ -109,17 +109,6 @@
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/action-plan-verify.md"
|
||||
},
|
||||
{
|
||||
"name": "resume",
|
||||
"command": "/workflow:resume",
|
||||
"description": "Resume paused workflow session with automatic progress analysis, pending task identification, and conflict detection",
|
||||
"arguments": "session-id for workflow session to resume",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"file_path": "workflow/resume.md"
|
||||
},
|
||||
{
|
||||
"name": "review",
|
||||
"command": "/workflow:review",
|
||||
@@ -145,7 +134,7 @@
|
||||
{
|
||||
"name": "enhance-prompt",
|
||||
"command": "/enhance-prompt",
|
||||
"description": "Enhanced prompt transformation using session memory and codebase analysis with --enhance flag detection",
|
||||
"description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection",
|
||||
"arguments": "user input to enhance",
|
||||
"category": "general",
|
||||
"subcategory": null,
|
||||
|
||||
@@ -102,7 +102,7 @@ Phase 2: Document Generation (Autonomous Output)
|
||||
1. **Extract Tasks**: Parse `analysis_results.tasks` array
|
||||
2. **Map Artifacts**: Use `artifacts_inventory` to add artifact references to task.context
|
||||
3. **Assess Complexity**: Use `analysis_results.complexity` for document structure decision
|
||||
4. **Session Paths**: Use `session_id` to construct output paths (.workflow/{session_id}/)
|
||||
4. **Session Paths**: Use `session_id` to construct output paths (.workflow/active/{session_id}/)
|
||||
|
||||
### MCP Integration Guidelines
|
||||
|
||||
@@ -244,14 +244,14 @@ Generate individual `.task/IMPL-*.json` files with:
|
||||
- Low priority: role_analyses
|
||||
|
||||
### 3. Implementation Plan Creation
|
||||
Generate `IMPL_PLAN.md` at `.workflow/{session_id}/IMPL_PLAN.md`:
|
||||
Generate `IMPL_PLAN.md` at `.workflow/active/{session_id}/IMPL_PLAN.md`:
|
||||
|
||||
**Structure**:
|
||||
```markdown
|
||||
---
|
||||
identifier: {session_id}
|
||||
source: "User requirements"
|
||||
analysis: .workflow/{session_id}/.process/ANALYSIS_RESULTS.md
|
||||
analysis: .workflow/active/{session_id}/.process/ANALYSIS_RESULTS.md
|
||||
---
|
||||
|
||||
# Implementation Plan: {Project Title}
|
||||
@@ -280,7 +280,7 @@ analysis: .workflow/{session_id}/.process/ANALYSIS_RESULTS.md
|
||||
```
|
||||
|
||||
### 4. TODO List Generation
|
||||
Generate `TODO_LIST.md` at `.workflow/{session_id}/TODO_LIST.md`:
|
||||
Generate `TODO_LIST.md` at `.workflow/active/{session_id}/TODO_LIST.md`:
|
||||
|
||||
**Structure**:
|
||||
```markdown
|
||||
|
||||
@@ -162,15 +162,15 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)
|
||||
|
||||
**Gemini/Qwen (Write)**:
|
||||
```bash
|
||||
cd {dir} && gemini -p "..." -m gemini-2.5-flash --approval-mode yolo
|
||||
cd {dir} && gemini -p "..." --approval-mode yolo
|
||||
```
|
||||
|
||||
**Codex (Auto)**:
|
||||
```bash
|
||||
codex -C {dir} --full-auto exec "..." -m gpt-5 --skip-git-repo-check -s danger-full-access
|
||||
codex -C {dir} --full-auto exec "..." --skip-git-repo-check -s danger-full-access
|
||||
|
||||
# Resume: Add 'resume --last' after prompt
|
||||
codex --full-auto exec "..." resume --last -m gpt-5 --skip-git-repo-check -s danger-full-access
|
||||
codex --full-auto exec "..." resume --last --skip-git-repo-check -s danger-full-access
|
||||
```
|
||||
|
||||
**Cross-Directory** (Gemini/Qwen):
|
||||
@@ -190,11 +190,11 @@ cd src/auth && gemini -p "CONTEXT: @**/* @../shared/**/*" --include-directories
|
||||
|
||||
**Session Detection**:
|
||||
```bash
|
||||
find .workflow/ -name '.active-*' -type f
|
||||
find .workflow/active/ -name 'WFS-*' -type d
|
||||
```
|
||||
|
||||
**Output Paths**:
|
||||
- **With session**: `.workflow/WFS-{id}/.chat/{agent}-{timestamp}.md`
|
||||
- **With session**: `.workflow/active/WFS-{id}/.chat/{agent}-{timestamp}.md`
|
||||
- **No session**: `.workflow/.scratchpad/{agent}-{description}-{timestamp}.md`
|
||||
|
||||
**Log Structure**:
|
||||
|
||||
@@ -22,7 +22,7 @@ description: |
|
||||
- Progressive disclosure: Quick overview → detailed analysis → dependency deep-dive
|
||||
- Context-aware filtering based on task requirements
|
||||
|
||||
color: blue
|
||||
color: yellow
|
||||
---
|
||||
|
||||
You are a specialized **CLI Exploration Agent** that executes read-only code analysis tasks autonomously to discover module structures, map dependencies, and understand architectural patterns.
|
||||
@@ -513,37 +513,19 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-
|
||||
- Use Gemini semantic analysis as tiebreaker
|
||||
- Document uncertainty in report with attribution
|
||||
|
||||
## Integration with Other Agents
|
||||
## Available Tools & Services
|
||||
|
||||
### As Service Provider (Called by Others)
|
||||
|
||||
**Planning Agents** (`action-planning-agent`, `conceptual-planning-agent`):
|
||||
- **Use Case**: Pre-planning reconnaissance to understand existing code
|
||||
- **Input**: Task description + focus areas
|
||||
- **Output**: Structural overview + dependency analysis
|
||||
- **Flow**: Planning agent → CLI explore agent (quick-scan) → Context for planning
|
||||
|
||||
**Execution Agents** (`code-developer`, `cli-execution-agent`):
|
||||
- **Use Case**: Refactoring impact analysis before code modifications
|
||||
- **Input**: Target files/functions to modify
|
||||
- **Output**: Dependency map + risk assessment
|
||||
- **Flow**: Execution agent → CLI explore agent (dependency-map) → Safe modification strategy
|
||||
|
||||
**UI Design Agent** (`ui-design-agent`):
|
||||
- **Use Case**: Discover existing UI components and design tokens
|
||||
- **Input**: Component directory + file patterns
|
||||
- **Output**: Component inventory + styling patterns
|
||||
- **Flow**: UI agent delegates structure analysis to CLI explore agent
|
||||
|
||||
### As Consumer (Calls Others)
|
||||
This agent can leverage the following tools to enhance analysis:
|
||||
|
||||
**Context Search Agent** (`context-search-agent`):
|
||||
- **Use Case**: Get project-wide context before analysis
|
||||
- **Flow**: CLI explore agent → Context search agent → Enhanced analysis with full context
|
||||
- **When to use**: Need comprehensive project understanding beyond file structure
|
||||
- **Integration**: Call context-search-agent first, then use results to guide exploration
|
||||
|
||||
**MCP Tools**:
|
||||
**MCP Tools** (Code Index):
|
||||
- **Use Case**: Enhanced file discovery and search capabilities
|
||||
- **Flow**: CLI explore agent → Code Index MCP → Faster pattern discovery
|
||||
- **When to use**: Large codebases requiring fast pattern discovery
|
||||
- **Integration**: Prefer Code Index MCP when available, fallback to rg/bash tools
|
||||
|
||||
## Key Reminders
|
||||
|
||||
@@ -636,52 +618,3 @@ rg "^import .*;" --type java -n
|
||||
# Find test files
|
||||
find . -name "*Test.java" -o -name "*Tests.java"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Performance Optimization
|
||||
|
||||
### Caching Strategy (Optional)
|
||||
|
||||
**Project Structure Cache**:
|
||||
- Cache `get_modules_by_depth.sh` output for 1 hour
|
||||
- Invalidate on file system changes (watch .git/index)
|
||||
|
||||
**Pattern Match Cache**:
|
||||
- Cache rg results for common patterns (class/function definitions)
|
||||
- Invalidate on file modifications
|
||||
|
||||
**Gemini Analysis Cache**:
|
||||
- Cache semantic analysis results for unchanged files
|
||||
- Key: file_path + content_hash
|
||||
- TTL: 24 hours
|
||||
|
||||
### Parallel Execution
|
||||
|
||||
**Quick-Scan Mode**:
|
||||
- Run rg searches in parallel (classes, functions, imports)
|
||||
- Merge results after completion
|
||||
|
||||
**Deep-Scan Mode**:
|
||||
- Execute Bash scan (Phase 1) and Gemini setup concurrently
|
||||
- Wait for Phase 1 completion before Phase 2 (Gemini needs context)
|
||||
|
||||
**Dependency-Map Mode**:
|
||||
- Discover imports and exports in parallel
|
||||
- Build graph after all discoveries complete
|
||||
|
||||
### Resource Limits
|
||||
|
||||
**File Count Limits**:
|
||||
- Quick-scan: Unlimited (filtered by relevance)
|
||||
- Deep-scan: Max 100 files for Gemini analysis
|
||||
- Dependency-map: Max 500 modules for graph construction
|
||||
|
||||
**Timeout Limits**:
|
||||
- Quick-scan: 30 seconds (bash-only, fast)
|
||||
- Deep-scan: 5 minutes (includes Gemini CLI)
|
||||
- Dependency-map: 10 minutes (graph construction + analysis)
|
||||
|
||||
**Memory Limits**:
|
||||
- Limit rg output to 10MB (use --max-count)
|
||||
- Stream large outputs instead of loading into memory
|
||||
|
||||
@@ -0,0 +1,724 @@
|
||||
---
|
||||
name: cli-lite-planning-agent
|
||||
description: |
|
||||
Specialized agent for executing CLI planning tools (Gemini/Qwen) to generate detailed implementation plans with actionable task breakdowns. Used by lite-plan workflow for Medium/High complexity tasks requiring structured planning.
|
||||
|
||||
Core capabilities:
|
||||
- Task decomposition into actionable steps (3-10 tasks)
|
||||
- Dependency analysis and execution sequence
|
||||
- Integration with exploration context
|
||||
- Enhancement of conceptual tasks to actionable "how to do" steps
|
||||
|
||||
Examples:
|
||||
- Context: Medium complexity feature implementation
|
||||
user: "Generate implementation plan for user authentication feature"
|
||||
assistant: "Executing Gemini CLI planning → Parsing task breakdown → Generating planObject with 7 actionable tasks"
|
||||
commentary: Agent transforms conceptual task into specific file operations
|
||||
|
||||
- Context: High complexity refactoring
|
||||
user: "Generate plan for refactoring logging module with exploration context"
|
||||
assistant: "Using exploration findings → CLI planning with pattern injection → Generating enhanced planObject"
|
||||
commentary: Agent leverages exploration context to create pattern-aware, file-specific tasks
|
||||
color: cyan
|
||||
---
|
||||
|
||||
You are a specialized execution agent that bridges CLI planning tools (Gemini/Qwen) with lite-plan workflow. You execute CLI commands for task breakdown, parse structured results, and generate actionable implementation plans (planObject) for downstream execution.
|
||||
|
||||
## Execution Process
|
||||
|
||||
### Input Processing
|
||||
|
||||
**What you receive (Context Package)**:
|
||||
```javascript
|
||||
{
|
||||
"task_description": "User's original task description",
|
||||
"explorationContext": {
|
||||
"project_structure": "Overall architecture description",
|
||||
"relevant_files": ["file1.ts", "file2.ts", "..."],
|
||||
"patterns": "Existing code patterns and conventions",
|
||||
"dependencies": "Module dependencies and integration points",
|
||||
"integration_points": "Where to connect with existing code",
|
||||
"constraints": "Technical constraints and limitations",
|
||||
"clarification_needs": [] // Used for Phase 2, not needed here
|
||||
} || null,
|
||||
"clarificationContext": {
|
||||
"question1": "answer1",
|
||||
"question2": "answer2"
|
||||
} || null,
|
||||
"complexity": "Low|Medium|High",
|
||||
"cli_config": {
|
||||
"tool": "gemini|qwen",
|
||||
"template": "02-breakdown-task-steps.txt",
|
||||
"timeout": 3600000, // 60 minutes for planning
|
||||
"fallback": "qwen"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Context Enrichment Strategy**:
|
||||
```javascript
|
||||
// Merge task description with exploration findings
|
||||
const enrichedContext = {
|
||||
task_description: task_description,
|
||||
relevant_files: explorationContext?.relevant_files || [],
|
||||
patterns: explorationContext?.patterns || "No patterns identified",
|
||||
dependencies: explorationContext?.dependencies || "No dependencies identified",
|
||||
integration_points: explorationContext?.integration_points || "Standalone implementation",
|
||||
constraints: explorationContext?.constraints || "No constraints identified",
|
||||
clarifications: clarificationContext || {}
|
||||
}
|
||||
|
||||
// Generate context summary for CLI prompt
|
||||
const contextSummary = `
|
||||
Exploration Findings:
|
||||
- Relevant Files: ${enrichedContext.relevant_files.join(', ')}
|
||||
- Patterns: ${enrichedContext.patterns}
|
||||
- Dependencies: ${enrichedContext.dependencies}
|
||||
- Integration: ${enrichedContext.integration_points}
|
||||
- Constraints: ${enrichedContext.constraints}
|
||||
|
||||
User Clarifications:
|
||||
${Object.entries(enrichedContext.clarifications).map(([q, a]) => `- ${q}: ${a}`).join('\n')}
|
||||
`
|
||||
```
|
||||
|
||||
### Execution Flow (Three-Phase)
|
||||
|
||||
```
|
||||
Phase 1: Context Preparation & CLI Execution
|
||||
1. Validate context package and extract task context
|
||||
2. Merge task description with exploration and clarification context
|
||||
3. Construct CLI command with planning template
|
||||
4. Execute Gemini/Qwen CLI tool with timeout (60 minutes)
|
||||
5. Handle errors and fallback to alternative tool if needed
|
||||
6. Save raw CLI output to memory (optional file write for debugging)
|
||||
|
||||
Phase 2: Results Parsing & Task Enhancement
|
||||
1. Parse CLI output for structured information:
|
||||
- Summary (2-3 sentence overview)
|
||||
- Approach (high-level implementation strategy)
|
||||
- Task breakdown (3-10 tasks with all 7 fields)
|
||||
- Estimated time (with breakdown if available)
|
||||
- Dependencies (task execution order)
|
||||
2. Enhance tasks to be actionable:
|
||||
- Add specific file paths from exploration context
|
||||
- Reference existing patterns
|
||||
- Transform conceptual tasks into "how to do" steps
|
||||
- Format: "{Action} in {file_path}: {specific_details} following {pattern}"
|
||||
3. Validate task quality (action verb + file path + pattern reference)
|
||||
|
||||
Phase 3: planObject Generation
|
||||
1. Build planObject structure from parsed and enhanced results
|
||||
2. Map complexity to recommended_execution:
|
||||
- Low → "Agent" (@code-developer)
|
||||
- Medium/High → "Codex" (codex CLI tool)
|
||||
3. Return planObject (in-memory, no file writes)
|
||||
4. Return success status to orchestrator (lite-plan)
|
||||
```
|
||||
|
||||
## Core Functions
|
||||
|
||||
### 1. CLI Planning Execution
|
||||
|
||||
**Template-Based Command Construction**:
|
||||
```bash
|
||||
cd {project_root} && {cli_tool} -p "
|
||||
PURPOSE: Generate detailed implementation plan for {complexity} complexity task with structured actionable task breakdown
|
||||
TASK:
|
||||
• Analyze task requirements: {task_description}
|
||||
• Break down into 3-10 structured task objects with complete implementation guidance
|
||||
• For each task, provide:
|
||||
- Title and target file
|
||||
- Action type (Create|Update|Implement|Refactor|Add|Delete)
|
||||
- Description (what to implement)
|
||||
- Implementation steps (how to do it, 3-7 specific steps)
|
||||
- Reference (which patterns/files to follow, with specific examples)
|
||||
- Acceptance criteria (verification checklist)
|
||||
• Identify dependencies and execution sequence
|
||||
• Provide realistic time estimates with breakdown
|
||||
MODE: analysis
|
||||
CONTEXT: @**/* | Memory: {exploration_context_summary}
|
||||
EXPECTED: Structured plan with the following format:
|
||||
|
||||
## Implementation Summary
|
||||
[2-3 sentence overview]
|
||||
|
||||
## High-Level Approach
|
||||
[Strategy with pattern references]
|
||||
|
||||
## Task Breakdown
|
||||
|
||||
### Task 1: [Title]
|
||||
**File**: [file/path.ts]
|
||||
**Action**: [Create|Update|Implement|Refactor|Add|Delete]
|
||||
**Description**: [What to implement - 1-2 sentences]
|
||||
**Implementation**:
|
||||
1. [Specific step 1 - how to do it]
|
||||
2. [Specific step 2 - concrete action]
|
||||
3. [Specific step 3 - implementation detail]
|
||||
4. [Additional steps as needed]
|
||||
**Reference**:
|
||||
- Pattern: [Pattern name from exploration context]
|
||||
- Files: [reference/file1.ts], [reference/file2.ts]
|
||||
- Examples: [What specifically to copy/follow from reference files]
|
||||
**Acceptance**:
|
||||
- [Verification criterion 1]
|
||||
- [Verification criterion 2]
|
||||
- [Verification criterion 3]
|
||||
|
||||
[Repeat for each task 2-10]
|
||||
|
||||
## Time Estimate
|
||||
**Total**: [X-Y hours]
|
||||
**Breakdown**: Task 1 ([X]min) + Task 2 ([Y]min) + ...
|
||||
|
||||
## Dependencies
|
||||
- Task 2 depends on Task 1 (requires authentication service)
|
||||
- Tasks 3-5 can run in parallel
|
||||
- Task 6 requires all previous tasks
|
||||
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/02-breakdown-task-steps.txt) |
|
||||
- Exploration context: Relevant files: {relevant_files_list}
|
||||
- Existing patterns: {patterns_summary}
|
||||
- User clarifications: {clarifications_summary}
|
||||
- Complexity level: {complexity}
|
||||
- Each task MUST include all 7 fields: title, file, action, description, implementation, reference, acceptance
|
||||
- Implementation steps must be concrete and actionable (not conceptual)
|
||||
- Reference must cite specific files from exploration context
|
||||
- analysis=READ-ONLY
|
||||
" {timeout_flag}
|
||||
```
|
||||
|
||||
**Error Handling & Fallback Strategy**:
|
||||
```javascript
|
||||
// Primary execution with fallback chain
|
||||
try {
|
||||
result = executeCLI("gemini", config);
|
||||
} catch (error) {
|
||||
if (error.code === 429 || error.code === 404) {
|
||||
console.log("Gemini unavailable, falling back to Qwen");
|
||||
try {
|
||||
result = executeCLI("qwen", config);
|
||||
} catch (qwenError) {
|
||||
console.error("Both Gemini and Qwen failed");
|
||||
// Return degraded mode with basic plan
|
||||
return {
|
||||
status: "degraded",
|
||||
message: "CLI planning failed, using fallback strategy",
|
||||
planObject: generateBasicPlan(task_description, explorationContext)
|
||||
};
|
||||
}
|
||||
} else {
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
// Fallback plan generation when all CLI tools fail
|
||||
function generateBasicPlan(taskDesc, exploration) {
|
||||
const relevantFiles = exploration?.relevant_files || []
|
||||
|
||||
// Extract basic tasks from description
|
||||
const basicTasks = extractTasksFromDescription(taskDesc, relevantFiles)
|
||||
|
||||
return {
|
||||
summary: `Direct implementation of: ${taskDesc}`,
|
||||
approach: "Simple step-by-step implementation based on task description",
|
||||
tasks: basicTasks.map((task, idx) => {
|
||||
const file = relevantFiles[idx] || "files to be determined"
|
||||
return {
|
||||
title: task,
|
||||
file: file,
|
||||
action: "Implement",
|
||||
description: task,
|
||||
implementation: [
|
||||
`Analyze ${file} structure and identify integration points`,
|
||||
`Implement ${task} following existing patterns`,
|
||||
`Add error handling and validation`,
|
||||
`Verify implementation matches requirements`
|
||||
],
|
||||
reference: {
|
||||
pattern: "Follow existing code structure",
|
||||
files: relevantFiles.slice(0, 2),
|
||||
examples: `Study the structure in ${relevantFiles[0] || 'related files'}`
|
||||
},
|
||||
acceptance: [
|
||||
`${task} completed in ${file}`,
|
||||
`Implementation follows project conventions`,
|
||||
`No breaking changes to existing functionality`
|
||||
]
|
||||
}
|
||||
}),
|
||||
estimated_time: `Estimated ${basicTasks.length * 30} minutes (${basicTasks.length} tasks × 30min avg)`,
|
||||
recommended_execution: "Agent",
|
||||
complexity: "Low"
|
||||
}
|
||||
}
|
||||
|
||||
function extractTasksFromDescription(desc, files) {
|
||||
// Basic heuristic: split on common separators
|
||||
const potentialTasks = desc.split(/[,;]|\band\b/)
|
||||
.map(s => s.trim())
|
||||
.filter(s => s.length > 10)
|
||||
|
||||
if (potentialTasks.length >= 3) {
|
||||
return potentialTasks.slice(0, 10)
|
||||
}
|
||||
|
||||
// Fallback: create generic tasks
|
||||
return [
|
||||
`Analyze requirements and identify implementation approach`,
|
||||
`Implement core functionality in ${files[0] || 'main file'}`,
|
||||
`Add error handling and validation`,
|
||||
`Create unit tests for new functionality`,
|
||||
`Update documentation`
|
||||
]
|
||||
}
|
||||
```
|
||||
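The fallback chain above calls an `executeCLI` helper that is not defined in this document. A minimal sketch of what it might look like, assuming Node's `child_process` and that CLI flags beyond `-p` are assembled elsewhere (the helper name, config fields, and error-code normalization are assumptions):

```javascript
const { execFileSync } = require("child_process");

// Runs the CLI tool with the assembled prompt and maps rate-limit / not-found
// failures to the numeric codes (429/404) that the fallback chain above checks.
function executeCLI(tool, config) {
  try {
    return execFileSync(tool, ["-p", config.prompt], {
      cwd: config.projectRoot || process.cwd(),
      timeout: config.timeout || 3600000,   // 60 min default for planning
      encoding: "utf8",
      maxBuffer: 64 * 1024 * 1024           // structured plans can be large
    });
  } catch (err) {
    const stderr = (err.stderr || "").toString();
    if (/429|rate limit/i.test(stderr)) err.code = 429;  // quota exhausted
    if (/404|not found/i.test(stderr)) err.code = 404;   // model unavailable
    throw err;
  }
}
```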
|
||||
### 2. Output Parsing & Enhancement
|
||||
|
||||
**Structured Task Parsing**:
|
||||
```javascript
|
||||
// Parse CLI output for structured tasks
|
||||
function extractStructuredTasks(cliOutput) {
|
||||
const tasks = []
|
||||
const taskPattern = /### Task \d+: (.+?)\n\*\*File\*\*: (.+?)\n\*\*Action\*\*: (.+?)\n\*\*Description\*\*: (.+?)\n\*\*Implementation\*\*:\n((?:\d+\. .+?\n)+)\*\*Reference\*\*:\n((?:- .+?\n)+)\*\*Acceptance\*\*:\n((?:- .+?\n)+)/g
|
||||
|
||||
let match
|
||||
while ((match = taskPattern.exec(cliOutput)) !== null) {
|
||||
// Parse implementation steps
|
||||
const implementation = match[5].trim()
|
||||
.split('\n')
|
||||
.map(s => s.replace(/^\d+\. /, ''))
|
||||
.filter(s => s.length > 0)
|
||||
|
||||
// Parse reference fields
|
||||
const referenceText = match[6].trim()
|
||||
const patternMatch = /- Pattern: (.+)/m.exec(referenceText)
|
||||
const filesMatch = /- Files: (.+)/m.exec(referenceText)
|
||||
const examplesMatch = /- Examples: (.+)/m.exec(referenceText)
|
||||
|
||||
const reference = {
|
||||
pattern: patternMatch ? patternMatch[1].trim() : "No pattern specified",
|
||||
files: filesMatch ? filesMatch[1].split(',').map(f => f.trim()) : [],
|
||||
examples: examplesMatch ? examplesMatch[1].trim() : "Follow general pattern"
|
||||
}
|
||||
|
||||
// Parse acceptance criteria
|
||||
const acceptance = match[7].trim()
|
||||
.split('\n')
|
||||
.map(s => s.replace(/^- /, ''))
|
||||
.filter(s => s.length > 0)
|
||||
|
||||
tasks.push({
|
||||
title: match[1].trim(),
|
||||
file: match[2].trim(),
|
||||
action: match[3].trim(),
|
||||
description: match[4].trim(),
|
||||
implementation: implementation,
|
||||
reference: reference,
|
||||
acceptance: acceptance
|
||||
})
|
||||
}
|
||||
|
||||
return tasks
|
||||
}
|
||||
|
||||
const parsedResults = {
|
||||
summary: extractSection("Implementation Summary"),
|
||||
approach: extractSection("High-Level Approach"),
|
||||
raw_tasks: extractStructuredTasks(cliOutput),
|
||||
time_estimate: extractSection("Time Estimate"),
|
||||
dependencies: extractSection("Dependencies")
|
||||
}
|
||||
```
|
||||
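The regex above assumes the CLI emits each task in a fixed markdown shape, and `extractSection` is referenced but not defined here. A sketch of both under those assumptions (the sample task layout and the heading-matching logic are illustrative, not the template's guaranteed output):

```javascript
// Task layout the taskPattern regex expects (illustrative):
//
//   ### Task 1: Create AuthService
//   **File**: src/auth/auth.service.ts
//   **Action**: Create
//   **Description**: Implement authentication service
//   **Implementation**:
//   1. Define AuthService class
//   2. Implement login(email, password)
//   3. Add error handling
//   **Reference**:
//   - Pattern: UserService pattern
//   - Files: src/users/user.service.ts
//   - Examples: Follow constructor injection
//   **Acceptance**:
//   - AuthService created with login()
//   - No breaking changes

// Possible extractSection helper used by parsedResults: returns the text under a
// "## <heading>" (or deeper) section up to the next heading.
function extractSection(heading, output = cliOutput) {
  const pattern = new RegExp(`#{2,}\\s*${heading}\\s*\\n([\\s\\S]*?)(?=\\n#{2,}\\s|$)`);
  const match = pattern.exec(output);
  return match ? match[1].trim() : "";
}
```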
|
||||
**Validation & Enhancement**:
|
||||
```javascript
|
||||
// Validate and enhance tasks if CLI output is incomplete
|
||||
function validateAndEnhanceTasks(rawTasks, explorationContext) {
|
||||
return rawTasks.map(taskObj => {
|
||||
// Validate required fields
|
||||
const validated = {
|
||||
title: taskObj.title || "Unnamed task",
|
||||
file: taskObj.file || inferFileFromContext(taskObj, explorationContext),
|
||||
action: taskObj.action || inferAction(taskObj.title),
|
||||
description: taskObj.description || taskObj.title,
|
||||
implementation: taskObj.implementation?.length > 0
|
||||
? taskObj.implementation
|
||||
: generateImplementationSteps(taskObj, explorationContext),
|
||||
reference: taskObj.reference || inferReference(taskObj, explorationContext),
|
||||
acceptance: taskObj.acceptance?.length > 0
|
||||
? taskObj.acceptance
|
||||
: generateAcceptanceCriteria(taskObj)
|
||||
}
|
||||
|
||||
return validated
|
||||
})
|
||||
}
|
||||
|
||||
// Helper functions for inference
|
||||
function inferFileFromContext(taskObj, explorationContext) {
|
||||
const relevantFiles = explorationContext?.relevant_files || []
|
||||
const titleLower = taskObj.title.toLowerCase()
|
||||
const matchedFile = relevantFiles.find(f =>
|
||||
titleLower.includes(f.split('/').pop().split('.')[0].toLowerCase())
|
||||
)
|
||||
return matchedFile || "file-to-be-determined.ts"
|
||||
}
|
||||
|
||||
function inferAction(title) {
|
||||
if (/create|add new|implement/i.test(title)) return "Create"
|
||||
if (/update|modify|change/i.test(title)) return "Update"
|
||||
if (/refactor/i.test(title)) return "Refactor"
|
||||
if (/delete|remove/i.test(title)) return "Delete"
|
||||
return "Implement"
|
||||
}
|
||||
|
||||
function generateImplementationSteps(taskObj, explorationContext) {
|
||||
const patterns = explorationContext?.patterns || ""
|
||||
return [
|
||||
`Analyze ${taskObj.file} structure and identify integration points`,
|
||||
`Implement ${taskObj.title} following ${patterns || 'existing patterns'}`,
|
||||
`Add error handling and validation`,
|
||||
`Update related components if needed`,
|
||||
`Verify implementation matches requirements`
|
||||
]
|
||||
}
|
||||
|
||||
function inferReference(taskObj, explorationContext) {
|
||||
const patterns = explorationContext?.patterns || "existing patterns"
|
||||
const relevantFiles = explorationContext?.relevant_files || []
|
||||
|
||||
return {
|
||||
pattern: patterns.split('.')[0] || "Follow existing code structure",
|
||||
files: relevantFiles.slice(0, 2),
|
||||
examples: `Study the structure and methods in ${relevantFiles[0] || 'related files'}`
|
||||
}
|
||||
}
|
||||
|
||||
function generateAcceptanceCriteria(taskObj) {
|
||||
return [
|
||||
`${taskObj.title} completed in ${taskObj.file}`,
|
||||
`Implementation follows project conventions`,
|
||||
`No breaking changes to existing functionality`,
|
||||
`Code passes linting and type checks`
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### 3. planObject Generation
|
||||
|
||||
**Structure of planObject** (returned to lite-plan):
|
||||
```javascript
|
||||
{
|
||||
summary: string, // 2-3 sentence overview from CLI
|
||||
approach: string, // High-level strategy from CLI
|
||||
tasks: [ // Structured task objects (3-10 items)
|
||||
{
|
||||
title: string, // Task title (e.g., "Create AuthService")
|
||||
file: string, // Target file path
|
||||
action: string, // Action type: Create|Update|Implement|Refactor|Add|Delete
|
||||
description: string, // What to implement (1-2 sentences)
|
||||
implementation: string[], // Step-by-step how to do it (3-7 steps)
|
||||
reference: { // What to reference
|
||||
pattern: string, // Pattern name (e.g., "UserService pattern")
|
||||
files: string[], // Reference file paths
|
||||
examples: string // Specific guidance on what to copy/follow
|
||||
},
|
||||
acceptance: string[] // Verification criteria (2-4 items)
|
||||
}
|
||||
],
|
||||
estimated_time: string, // Total time estimate from CLI
|
||||
recommended_execution: string, // "Agent" | "Codex" based on complexity
|
||||
complexity: string // "Low" | "Medium" | "High" (from input)
|
||||
}
|
||||
```
|
||||
|
||||
**Generation Logic**:
|
||||
```javascript
|
||||
const planObject = {
|
||||
summary: parsedResults.summary || `Implementation plan for: ${task_description.slice(0, 100)}`,
|
||||
|
||||
approach: parsedResults.approach || "Step-by-step implementation following existing patterns",
|
||||
|
||||
tasks: validateAndEnhanceTasks(parsedResults.raw_tasks, explorationContext),
|
||||
|
||||
estimated_time: parsedResults.time_estimate || estimateTimeFromTaskCount(parsedResults.raw_tasks.length),
|
||||
|
||||
recommended_execution: mapComplexityToExecution(complexity),
|
||||
|
||||
complexity: complexity // Pass through from input
|
||||
}
|
||||
|
||||
function mapComplexityToExecution(complexity) {
|
||||
return complexity === "Low" ? "Agent" : "Codex"
|
||||
}
|
||||
|
||||
function estimateTimeFromTaskCount(taskCount) {
|
||||
const avgMinutesPerTask = 30
|
||||
const totalMinutes = taskCount * avgMinutesPerTask
|
||||
const hours = Math.floor(totalMinutes / 60)
|
||||
const minutes = totalMinutes % 60
|
||||
|
||||
if (hours === 0) {
|
||||
return `${minutes} minutes (${taskCount} tasks × ${avgMinutesPerTask}min avg)`
|
||||
}
|
||||
return `${hours}h ${minutes}m (${taskCount} tasks × ${avgMinutesPerTask}min avg)`
|
||||
}
|
||||
```
|
||||
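Worked example of the two helpers above (the values follow directly from the functions):

```javascript
mapComplexityToExecution("Medium");  // → "Codex" (only "Low" maps to "Agent")
estimateTimeFromTaskCount(5);        // → "2h 30m (5 tasks × 30min avg)"
```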
|
||||
## Quality Standards
|
||||
|
||||
### CLI Execution Standards
|
||||
- **Timeout Management**: Use dynamic timeout (3600000ms = 60min for planning)
|
||||
- **Fallback Chain**: Gemini → Qwen → degraded mode (if both fail)
|
||||
- **Error Context**: Include full error details in failure reports
|
||||
- **Output Preservation**: Optionally save raw CLI output for debugging
|
||||
|
||||
### Task Object Standards
|
||||
|
||||
**Completeness** - Each task must have all 7 required fields:
|
||||
- **title**: Clear, concise task name
|
||||
- **file**: Exact file path (from exploration.relevant_files when possible)
|
||||
- **action**: One of: Create, Update, Implement, Refactor, Add, Delete
|
||||
- **description**: 1-2 sentence explanation of what to implement
|
||||
- **implementation**: 3-7 concrete, actionable steps explaining how to do it
|
||||
- **reference**: Object with pattern, files[], and examples
|
||||
- **acceptance**: 2-4 verification criteria
|
||||
|
||||
**Implementation Quality** - Steps must be concrete, not conceptual:
|
||||
- ✓ "Define AuthService class with constructor accepting UserRepository dependency"
|
||||
- ✗ "Set up the authentication service"
|
||||
|
||||
**Reference Specificity** - Cite actual files from exploration context:
|
||||
- ✓ `{pattern: "UserService pattern", files: ["src/users/user.service.ts"], examples: "Follow constructor injection and async method patterns"}`
|
||||
- ✗ `{pattern: "service pattern", files: [], examples: "follow patterns"}`
|
||||
|
||||
**Acceptance Measurability** - Criteria must be verifiable:
|
||||
- ✓ "AuthService class created with login(), logout(), validateToken() methods"
|
||||
- ✗ "Service works correctly"
|
||||
|
||||
### Task Validation
|
||||
|
||||
**Validation Function**:
|
||||
```javascript
|
||||
function validateTaskObject(task) {
|
||||
const errors = []
|
||||
|
||||
// Validate required fields
|
||||
if (!task.title || task.title.trim().length === 0) {
|
||||
errors.push("Missing title")
|
||||
}
|
||||
if (!task.file || task.file.trim().length === 0) {
|
||||
errors.push("Missing file path")
|
||||
}
|
||||
if (!task.action || !['Create', 'Update', 'Implement', 'Refactor', 'Add', 'Delete'].includes(task.action)) {
|
||||
errors.push(`Invalid action: ${task.action}`)
|
||||
}
|
||||
if (!task.description || task.description.trim().length === 0) {
|
||||
errors.push("Missing description")
|
||||
}
|
||||
if (!task.implementation || task.implementation.length < 3) {
|
||||
errors.push("Implementation must have at least 3 steps")
|
||||
}
|
||||
if (!task.reference || !task.reference.pattern) {
|
||||
errors.push("Missing pattern reference")
|
||||
}
|
||||
if (!task.acceptance || task.acceptance.length < 2) {
|
||||
errors.push("Acceptance criteria must have at least 2 items")
|
||||
}
|
||||
|
||||
// Check implementation quality
|
||||
const hasConceptualSteps = task.implementation?.some(step =>
|
||||
/^(handle|manage|deal with|set up|work on)/i.test(step)
|
||||
)
|
||||
if (hasConceptualSteps) {
|
||||
errors.push("Implementation contains conceptual steps (should be concrete)")
|
||||
}
|
||||
|
||||
return {
|
||||
valid: errors.length === 0,
|
||||
errors: errors
|
||||
}
|
||||
}
|
||||
```
|
||||
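One way to wire this validator into planObject generation — an illustrative sketch, assuming the orchestrator only needs a summary of failures rather than a hard stop:

```javascript
// Validate every generated task and surface failures before returning planObject.
const validationReport = planObject.tasks.map((task, idx) => ({
  index: idx,
  title: task.title,
  ...validateTaskObject(task)
}));

const invalid = validationReport.filter(r => !r.valid);
if (invalid.length > 0) {
  console.warn(`${invalid.length} task(s) failed validation:`);
  invalid.forEach(r => console.warn(`- [${r.index}] ${r.title}: ${r.errors.join("; ")}`));
  // Option: route failing tasks back through validateAndEnhanceTasks() or retry CLI planning.
}
```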
|
||||
**Good vs Bad Examples**:
|
||||
```javascript
|
||||
// ❌ BAD (Incomplete, vague)
|
||||
{
|
||||
title: "Add authentication",
|
||||
file: "auth.ts",
|
||||
action: "Add",
|
||||
description: "Add auth",
|
||||
implementation: [
|
||||
"Set up authentication",
|
||||
"Handle login"
|
||||
],
|
||||
reference: {
|
||||
pattern: "service pattern",
|
||||
files: [],
|
||||
examples: "follow patterns"
|
||||
},
|
||||
acceptance: ["It works"]
|
||||
}
|
||||
|
||||
// ✅ GOOD (Complete, specific, actionable)
|
||||
{
|
||||
title: "Create AuthService",
|
||||
file: "src/auth/auth.service.ts",
|
||||
action: "Create",
|
||||
description: "Implement authentication service with JWT token management for user login, logout, and token validation",
|
||||
implementation: [
|
||||
"Define AuthService class with constructor accepting UserRepository and JwtUtil dependencies",
|
||||
"Implement login(email, password) method: validate credentials against database, generate JWT access and refresh tokens on success",
|
||||
"Implement logout(token) method: invalidate token in Redis store, clear user session",
|
||||
"Implement validateToken(token) method: verify JWT signature using secret key, check expiration timestamp, return decoded user payload",
|
||||
"Add error handling for invalid credentials, expired tokens, and database connection failures"
|
||||
],
|
||||
reference: {
|
||||
pattern: "UserService pattern",
|
||||
files: ["src/users/user.service.ts", "src/utils/jwt.util.ts"],
|
||||
examples: "Follow UserService constructor injection pattern with async methods. Use JwtUtil.generateToken() and JwtUtil.verifyToken() for token operations"
|
||||
},
|
||||
acceptance: [
|
||||
"AuthService class created with login(), logout(), validateToken() methods",
|
||||
"Methods follow UserService async/await pattern with try-catch error handling",
|
||||
"JWT token generation uses JwtUtil with 1h access token and 7d refresh token expiry",
|
||||
"All methods return typed responses (success/error objects)"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## Key Reminders
|
||||
|
||||
**ALWAYS:**
|
||||
- **Validate context package**: Ensure task_description present before CLI execution
|
||||
- **Handle CLI errors gracefully**: Use fallback chain (Gemini → Qwen → degraded mode)
|
||||
- **Parse CLI output structurally**: Extract all 7 task fields (title, file, action, description, implementation, reference, acceptance)
|
||||
- **Validate task objects**: Each task must have all required fields with quality content
|
||||
- **Generate complete planObject**: All fields populated with structured task objects
|
||||
- **Return in-memory result**: No file writes unless debugging
|
||||
- **Preserve exploration context**: Use relevant_files and patterns in task references
|
||||
- **Ensure implementation concreteness**: Steps must be actionable, not conceptual
|
||||
- **Cite specific references**: Reference actual files from exploration context
|
||||
|
||||
**NEVER:**
|
||||
- Execute implementation directly (return plan, let lite-execute handle execution)
|
||||
- Skip CLI planning (always run CLI even for simple tasks, unless degraded mode)
|
||||
- Return vague task objects (validate all required fields)
|
||||
- Use conceptual implementation steps ("set up", "handle", "manage")
|
||||
- Modify files directly (planning only, no implementation)
|
||||
- Exceed timeout limits (use configured timeout value)
|
||||
- Return tasks with empty reference files (cite actual exploration files)
|
||||
- Skip task validation (all task objects must pass quality checks)
|
||||
|
||||
## Configuration & Examples
|
||||
|
||||
### CLI Tool Configuration
|
||||
|
||||
**Gemini Configuration**:
|
||||
```javascript
|
||||
{
|
||||
"tool": "gemini",
|
||||
"model": "gemini-2.5-pro", // Auto-selected, no need to specify
|
||||
"templates": {
|
||||
"task-breakdown": "02-breakdown-task-steps.txt",
|
||||
"architecture-planning": "01-plan-architecture-design.txt",
|
||||
"component-design": "02-design-component-spec.txt"
|
||||
},
|
||||
"timeout": 3600000 // 60 minutes
|
||||
}
|
||||
```
|
||||
|
||||
**Qwen Configuration (Fallback)**:
|
||||
```javascript
|
||||
{
|
||||
"tool": "qwen",
|
||||
"model": "coder-model", // Auto-selected
|
||||
"templates": {
|
||||
"task-breakdown": "02-breakdown-task-steps.txt",
|
||||
"architecture-planning": "01-plan-architecture-design.txt"
|
||||
},
|
||||
"timeout": 3600000 // 60 minutes
|
||||
}
|
||||
```
|
||||
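A small helper sketch for resolving a template name from these configs to its prompt file. The `planning/` subdirectory matches the task-breakdown command shown earlier; other template kinds may live in different subdirectories of `cli-templates/prompts/`, so treat the path as an assumption:

```javascript
const path = require("path");
const os = require("os");

// Resolve a template key (e.g. "task-breakdown") to its prompt file path.
function resolveTemplate(cliConfig, kind) {
  const file = cliConfig.templates[kind];
  if (!file) throw new Error(`No template registered for "${kind}"`);
  return path.join(os.homedir(), ".claude", "workflows", "cli-templates", "prompts", "planning", file);
}

// resolveTemplate(geminiConfig, "task-breakdown")
// → ~/.claude/workflows/cli-templates/prompts/planning/02-breakdown-task-steps.txt
```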
|
||||
### Example Execution
|
||||
|
||||
**Input Context**:
|
||||
```json
|
||||
{
|
||||
"task_description": "Implement user authentication with JWT tokens",
|
||||
"explorationContext": {
|
||||
"project_structure": "Express.js REST API with TypeScript, layered architecture (routes → services → repositories)",
|
||||
"relevant_files": [
|
||||
"src/users/user.service.ts",
|
||||
"src/users/user.repository.ts",
|
||||
"src/middleware/cors.middleware.ts",
|
||||
"src/routes/api.ts"
|
||||
],
|
||||
"patterns": "Service-Repository pattern used throughout. Services in src/{module}/{module}.service.ts, Repositories in src/{module}/{module}.repository.ts. Middleware follows function-based approach in src/middleware/",
|
||||
"dependencies": "Express, TypeORM, bcrypt for password hashing",
|
||||
"integration_points": "Auth service needs to integrate with existing user service and API routes",
|
||||
"constraints": "Must use existing TypeORM entities, follow established error handling patterns"
|
||||
},
|
||||
"clarificationContext": {
|
||||
"token_expiry": "1 hour access token, 7 days refresh token",
|
||||
"password_requirements": "Min 8 chars, must include number and special char"
|
||||
},
|
||||
"complexity": "Medium",
|
||||
"cli_config": {
|
||||
"tool": "gemini",
|
||||
"template": "02-breakdown-task-steps.txt",
|
||||
"timeout": 3600000
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Execution Summary**:
|
||||
1. **Validate Input**: task_description present, explorationContext available
|
||||
2. **Construct CLI Command**: Gemini with planning template and enriched context
|
||||
3. **Execute CLI**: Gemini runs and returns structured plan (timeout: 60min)
|
||||
4. **Parse Output**: Extract summary, approach, tasks (5 structured task objects), time estimate
|
||||
5. **Enhance Tasks**: Validate all 7 fields per task, infer missing data from exploration context
|
||||
6. **Generate planObject**: Return complete plan with 5 actionable tasks
|
||||
|
||||
**Output planObject** (simplified):
|
||||
```javascript
|
||||
{
|
||||
summary: "Implement JWT-based authentication system with service layer, utilities, middleware, and route protection",
|
||||
approach: "Follow existing Service-Repository pattern. Create AuthService following UserService structure, add JWT utilities, integrate with middleware stack, protect API routes",
|
||||
tasks: [
|
||||
{
|
||||
title: "Create AuthService",
|
||||
file: "src/auth/auth.service.ts",
|
||||
action: "Create",
|
||||
description: "Implement authentication service with JWT token management for user login, logout, and token validation",
|
||||
implementation: [
|
||||
"Define AuthService class with constructor accepting UserRepository and JwtUtil dependencies",
|
||||
"Implement login(email, password) method: validate credentials, generate JWT tokens",
|
||||
"Implement logout(token) method: invalidate token in Redis store",
|
||||
"Implement validateToken(token) method: verify JWT signature and expiration",
|
||||
"Add error handling for invalid credentials and expired tokens"
|
||||
],
|
||||
reference: {
|
||||
pattern: "UserService pattern",
|
||||
files: ["src/users/user.service.ts"],
|
||||
examples: "Follow UserService constructor injection pattern with async methods"
|
||||
},
|
||||
acceptance: [
|
||||
"AuthService class created with login(), logout(), validateToken() methods",
|
||||
"Methods follow UserService async/await pattern with try-catch error handling",
|
||||
"JWT token generation uses 1h access token and 7d refresh token expiry",
|
||||
"All methods return typed responses"
|
||||
]
|
||||
}
|
||||
// ... 4 more tasks (JWT utilities, auth middleware, route protection, tests)
|
||||
],
|
||||
estimated_time: "3-4 hours (1h service + 30m utils + 1h middleware + 30m routes + 1h tests)",
|
||||
recommended_execution: "Codex",
|
||||
complexity: "Medium"
|
||||
}
|
||||
```
|
||||
@@ -10,7 +10,7 @@ description: |
|
||||
commentary: Agent encapsulates CLI execution + result parsing + task generation
|
||||
|
||||
- Context: Coverage gap analysis
|
||||
user: "Analyze coverage gaps and generate补充test task"
|
||||
user: "Analyze coverage gaps and generate supplement test task"
|
||||
assistant: "Executing CLI analysis for uncovered code paths → Generating test supplement task"
|
||||
commentary: Agent handles both analysis and task JSON generation autonomously
|
||||
color: purple
|
||||
@@ -18,12 +18,11 @@ color: purple
|
||||
|
||||
You are a specialized execution agent that bridges CLI analysis tools with task generation. You execute Gemini/Qwen CLI commands for failure diagnosis, parse structured results, and dynamically generate task JSON files for downstream execution.
|
||||
|
||||
## Core Responsibilities
|
||||
|
||||
1. **Execute CLI Analysis**: Run Gemini/Qwen with appropriate templates and context
|
||||
2. **Parse CLI Results**: Extract structured information (fix strategies, root causes, modification points)
|
||||
3. **Generate Task JSONs**: Create IMPL-fix-N.json or IMPL-supplement-N.json dynamically
|
||||
4. **Save Analysis Reports**: Store detailed CLI output as iteration-N-analysis.md
|
||||
**Core capabilities:**
|
||||
- Execute CLI analysis with appropriate templates and context
|
||||
- Parse structured results (fix strategies, root causes, modification points)
|
||||
- Generate task JSONs dynamically (IMPL-fix-N.json, IMPL-supplement-N.json)
|
||||
- Save detailed analysis reports (iteration-N-analysis.md)
|
||||
|
||||
## Execution Process
|
||||
|
||||
@@ -43,7 +42,7 @@ You are a specialized execution agent that bridges CLI analysis tools with task
|
||||
"file": "tests/test_auth.py",
|
||||
"line": 45,
|
||||
"criticality": "high",
|
||||
"test_type": "integration" // ← NEW: L0: static, L1: unit, L2: integration, L3: e2e
|
||||
"test_type": "integration" // L0: static, L1: unit, L2: integration, L3: e2e
|
||||
}
|
||||
],
|
||||
"error_messages": ["error1", "error2"],
|
||||
@@ -61,7 +60,7 @@ You are a specialized execution agent that bridges CLI analysis tools with task
|
||||
"tool": "gemini|qwen",
|
||||
"model": "gemini-3-pro-preview-11-2025|qwen-coder-model",
|
||||
"template": "01-diagnose-bug-root-cause.txt",
|
||||
"timeout": 2400000,
|
||||
"timeout": 2400000, // 40 minutes for analysis
|
||||
"fallback": "qwen"
|
||||
},
|
||||
"task_config": {
|
||||
@@ -79,16 +78,16 @@ You are a specialized execution agent that bridges CLI analysis tools with task
|
||||
Phase 1: CLI Analysis Execution
|
||||
1. Validate context package and extract failure context
|
||||
2. Construct CLI command with appropriate template
|
||||
3. Execute Gemini/Qwen CLI tool
|
||||
3. Execute Gemini/Qwen CLI tool with layer-specific guidance
|
||||
4. Handle errors and fallback to alternative tool if needed
|
||||
5. Save raw CLI output to .process/iteration-N-cli-output.txt
|
||||
|
||||
Phase 2: Results Parsing & Strategy Extraction
|
||||
1. Parse CLI output for structured information:
|
||||
- Root cause analysis
|
||||
- Root cause analysis (RCA)
|
||||
- Fix strategy and approach
|
||||
- Modification points (files, functions, line numbers)
|
||||
- Expected outcome
|
||||
- Expected outcome and verification steps
|
||||
2. Extract quantified requirements:
|
||||
- Number of files to modify
|
||||
- Specific functions to fix (with line numbers)
|
||||
@@ -96,18 +95,18 @@ Phase 2: Results Parsing & Strategy Extraction
|
||||
3. Generate structured analysis report (iteration-N-analysis.md)
|
||||
|
||||
Phase 3: Task JSON Generation
|
||||
1. Load task JSON template (defined below)
|
||||
1. Load task JSON template
|
||||
2. Populate template with parsed CLI results
|
||||
3. Add iteration context and previous attempts
|
||||
4. Write task JSON to .workflow/{session}/.task/IMPL-fix-N.json
|
||||
4. Write task JSON to .workflow/session/{session}/.task/IMPL-fix-N.json
|
||||
5. Return success status and task ID to orchestrator
|
||||
```
|
||||
|
||||
## Core Functions
|
||||
|
||||
### 1. CLI Command Construction
|
||||
### 1. CLI Analysis Execution
|
||||
|
||||
**Template-Based Approach with Test Layer Awareness**:
|
||||
**Template-Based Command Construction with Test Layer Awareness**:
|
||||
```bash
|
||||
cd {project_root} && {cli_tool} -p "
|
||||
PURPOSE: Analyze {test_type} test failures and generate fix strategy for iteration {iteration}
|
||||
@@ -136,7 +135,7 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/{template}) |
|
||||
- Consider previous iteration failures
|
||||
- Validate fix doesn't introduce new vulnerabilities
|
||||
- analysis=READ-ONLY
|
||||
" -m {model} {timeout_flag}
|
||||
" {timeout_flag}
|
||||
```
|
||||
|
||||
**Layer-Specific Guidance Injection**:
|
||||
@@ -151,8 +150,9 @@ const layerGuidance = {
|
||||
const guidance = layerGuidance[test_type] || "Analyze holistically, avoid quick patches";
|
||||
```
|
||||
|
||||
**Error Handling & Fallback**:
|
||||
**Error Handling & Fallback Strategy**:
|
||||
```javascript
|
||||
// Primary execution with fallback chain
|
||||
try {
|
||||
result = executeCLI("gemini", config);
|
||||
} catch (error) {
|
||||
@@ -173,16 +173,18 @@ try {
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
// Fallback strategy when all CLI tools fail
|
||||
function generateBasicFixStrategy(failure_context) {
|
||||
// Generate basic fix task based on error pattern matching
|
||||
// Use previous successful fix patterns from fix-history.json
|
||||
// Limit to simple, low-risk fixes (add null checks, fix typos)
|
||||
// Mark task with meta.analysis_quality: "degraded" flag
|
||||
// Orchestrator will treat degraded analysis with caution
|
||||
}
|
||||
```
|
||||
|
||||
**Fallback Strategy (When All CLI Tools Fail)**:
|
||||
- Generate basic fix task based on error pattern matching
|
||||
- Use previous successful fix patterns from fix-history.json
|
||||
- Limit to simple, low-risk fixes (add null checks, fix typos)
|
||||
- Mark task with `meta.analysis_quality: "degraded"` flag
|
||||
- Orchestrator will treat degraded analysis with caution (may skip iteration)
|
||||
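A possible shape for the degraded-mode generator described above — a sketch only; the fix-history file path, its entry fields (`error_signature`, `summary`), and the use of file:line in place of full modification points are assumptions:

```javascript
const fs = require("fs");

function generateBasicFixStrategy(failureContext) {
  let history = [];
  try {
    history = JSON.parse(fs.readFileSync(".process/fix-history.json", "utf8"));
  } catch (_) { /* no history yet — fall through to conservative defaults */ }

  // Reuse a previously successful low-risk fix if its signature matches a current error.
  const known = history.find(h =>
    (failureContext.error_messages || []).some(msg => msg.includes(h.error_signature))
  );

  return {
    approach: known
      ? `Reapply known low-risk fix: ${known.summary}`
      : "Apply conservative fix (null checks, typo corrections) at the failing call sites",
    // Function names are unknown without CLI analysis, so only file:line is recorded.
    modification_points: (failureContext.failed_tests || []).map(t => `${t.file}:${t.line}`),
    meta: { analysis_quality: "degraded" }
  };
}
```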
|
||||
### 2. CLI Output Parsing
|
||||
### 2. Output Parsing & Task Generation
|
||||
|
||||
**Expected CLI Output Structure** (from bug diagnosis template):
|
||||
```markdown
|
||||
@@ -220,18 +222,34 @@ try {
|
||||
```javascript
|
||||
const parsedResults = {
|
||||
root_causes: extractSection("根本原因分析"),
|
||||
modification_points: extractModificationPoints(),
|
||||
modification_points: extractModificationPoints(), // Returns: ["file:function:lines", ...]
|
||||
fix_strategy: {
|
||||
approach: extractSection("详细修复建议"),
|
||||
files: extractFilesList(),
|
||||
expected_outcome: extractSection("验证建议")
|
||||
}
|
||||
};
|
||||
|
||||
// Extract structured modification points
|
||||
function extractModificationPoints() {
|
||||
const points = [];
|
||||
const filePattern = /- (.+?\.(?:ts|js|py)) \(lines (\d+-\d+)\): (.+)/g;
|
||||
|
||||
let match;
|
||||
while ((match = filePattern.exec(cliOutput)) !== null) {
|
||||
points.push({
|
||||
file: match[1],
|
||||
lines: match[2],
|
||||
function: match[3],
|
||||
formatted: `${match[1]}:${match[3]}:${match[2]}`
|
||||
});
|
||||
}
|
||||
|
||||
return points;
|
||||
}
|
||||
```
|
||||
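A quick round-trip showing the modification-point line format the regex expects and the object it yields (sample values only):

```javascript
// Suppose cliOutput contains this line:
//   - src/auth/auth.service.ts (lines 45-60): validateToken
// extractModificationPoints() then yields:
//   {
//     file: "src/auth/auth.service.ts",
//     lines: "45-60",
//     function: "validateToken",
//     formatted: "src/auth/auth.service.ts:validateToken:45-60"  // → fix_strategy.modification_points
//   }
```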
|
||||
### 3. Task JSON Generation (Template Definition)
|
||||
|
||||
**Task JSON Template for IMPL-fix-N** (Simplified):
|
||||
**Task JSON Generation** (Simplified Template):
|
||||
```json
|
||||
{
|
||||
"id": "IMPL-fix-{iteration}",
|
||||
@@ -284,9 +302,7 @@ const parsedResults = {
|
||||
{
|
||||
"step": "load_analysis_context",
|
||||
"action": "Load CLI analysis report for full failure context if needed",
|
||||
"commands": [
|
||||
"Read({meta.analysis_report})"
|
||||
],
|
||||
"commands": ["Read({meta.analysis_report})"],
|
||||
"output_to": "full_failure_analysis",
|
||||
"note": "Analysis report contains: failed_tests, error_messages, pass_rate, root causes, previous_attempts"
|
||||
}
|
||||
@@ -334,19 +350,17 @@ const parsedResults = {
|
||||
|
||||
**Template Variables Replacement**:
|
||||
- `{iteration}`: From context.iteration
|
||||
- `{test_type}`: Dominant test type from failed_tests (e.g., "integration", "unit")
|
||||
- `{test_type}`: Dominant test type from failed_tests
|
||||
- `{dominant_test_type}`: Most common test_type in failed_tests array
|
||||
- `{layer_specific_approach}`: Guidance based on test layer from layerGuidance map
|
||||
- `{layer_specific_approach}`: Guidance from layerGuidance map
|
||||
- `{fix_summary}`: First 50 chars of fix_strategy.approach
|
||||
- `{failed_tests.length}`: Count of failures
|
||||
- `{modification_points.length}`: Count of modification points
|
||||
- `{modification_points}`: Array of file:function:lines from parsed CLI output
|
||||
- `{modification_points}`: Array of file:function:lines
|
||||
- `{timestamp}`: ISO 8601 timestamp
|
||||
- `{parent_task_id}`: ID of the parent test task (e.g., "IMPL-002")
|
||||
- `{file1}`, `{file2}`, etc.: Specific file paths from modification_points
|
||||
- `{specific_change_1}`, etc.: Change descriptions for each modification point
|
||||
- `{parent_task_id}`: ID of parent test task
|
||||
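If the placeholders are substituted mechanically, the logic might look like the sketch below (the helper name is hypothetical, and values are assumed to need no JSON escaping):

```javascript
// Replace {placeholder} tokens in the task JSON template with concrete values.
// Handles dotted keys such as {failed_tests.length}; unknown keys are left untouched.
function fillTemplate(templateJson, vars) {
  const filled = JSON.stringify(templateJson).replace(/\{([\w.]+)\}/g, (token, key) =>
    key in vars ? String(vars[key]) : token
  );
  return JSON.parse(filled);
}

const vars = {
  iteration: 1,
  test_type: "integration",
  fix_summary: parsedResults.fix_strategy.approach.slice(0, 50),
  "failed_tests.length": 1,
  "modification_points.length": 2,
  timestamp: new Date().toISOString(),
  parent_task_id: "IMPL-002"
};
```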
|
||||
### 4. Analysis Report Generation
|
||||
### 3. Analysis Report Generation
|
||||
|
||||
**Structure of iteration-N-analysis.md**:
|
||||
```markdown
|
||||
@@ -373,6 +387,7 @@ pass_rate: {pass_rate}%
|
||||
- **Error**: {test.error}
|
||||
- **File**: {test.file}:{test.line}
|
||||
- **Criticality**: {test.criticality}
|
||||
- **Test Type**: {test.test_type}
|
||||
{endforeach}
|
||||
|
||||
## Root Cause Analysis
|
||||
@@ -403,15 +418,16 @@ See: `.process/iteration-{iteration}-cli-output.txt`
|
||||
|
||||
### CLI Execution Standards
|
||||
- **Timeout Management**: Use dynamic timeout (2400000ms = 40min for analysis)
|
||||
- **Fallback Chain**: Gemini → Qwen (if Gemini fails with 429/404)
|
||||
- **Fallback Chain**: Gemini → Qwen → degraded mode (if both fail)
|
||||
- **Error Context**: Include full error details in failure reports
|
||||
- **Output Preservation**: Save raw CLI output for debugging
|
||||
- **Output Preservation**: Save raw CLI output to .process/ for debugging
|
||||
|
||||
### Task JSON Standards
|
||||
- **Quantification**: All requirements must include counts and explicit lists
|
||||
- **Specificity**: Modification points must have file:function:line format
|
||||
- **Measurability**: Acceptance criteria must include verification commands
|
||||
- **Traceability**: Link to analysis reports and CLI output files
|
||||
- **Minimal Redundancy**: Use references (analysis_report) instead of embedding full context
|
||||
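For instance, a requirement/acceptance pair that satisfies these standards might look like the following (paths, counts, and the verification command are illustrative):

```javascript
const requirements =
  "Fix 2 integration test failures across 2 files (auth.service.ts, auth.middleware.ts)";

const acceptance_criteria = [
  "Run the project's test command (e.g. `pytest tests/test_auth.py`) — the previously failing tests pass",
  "Changes limited to validateToken (lines 45-60) and checkExpiry (lines 120-135)",
  "Analysis report .process/iteration-1-analysis.md linked in meta.analysis_report"
];
```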
|
||||
### Analysis Report Standards
|
||||
- **Structured Format**: Use consistent markdown sections
|
||||
@@ -430,19 +446,23 @@ See: `.process/iteration-{iteration}-cli-output.txt`
|
||||
- **Link files properly**: Use relative paths from session root
|
||||
- **Preserve CLI output**: Save raw output to .process/ for debugging
|
||||
- **Generate measurable acceptance criteria**: Include verification commands
|
||||
- **Apply layer-specific guidance**: Use test_type to customize analysis approach
|
||||
|
||||
**NEVER:**
|
||||
- Execute tests directly (orchestrator manages test execution)
|
||||
- Skip CLI analysis (always run CLI even for simple failures)
|
||||
- Modify files directly (generate task JSON for @test-fix-agent to execute)
|
||||
- **Embed redundant data in task JSON** (use analysis_report reference instead)
|
||||
- **Copy input context verbatim to output** (creates data duplication)
|
||||
- Embed redundant data in task JSON (use analysis_report reference instead)
|
||||
- Copy input context verbatim to output (creates data duplication)
|
||||
- Generate vague modification points (always specify file:function:lines)
|
||||
- Exceed timeout limits (use configured timeout value)
|
||||
- Ignore test layer context (L0/L1/L2/L3 determines diagnosis approach)
|
||||
|
||||
## CLI Tool Configuration
|
||||
## Configuration & Examples
|
||||
|
||||
### Gemini Configuration
|
||||
### CLI Tool Configuration
|
||||
|
||||
**Gemini Configuration**:
|
||||
```javascript
|
||||
{
|
||||
"tool": "gemini",
|
||||
@@ -452,11 +472,12 @@ See: `.process/iteration-{iteration}-cli-output.txt`
|
||||
"test-failure": "01-diagnose-bug-root-cause.txt",
|
||||
"coverage-gap": "02-analyze-code-patterns.txt",
|
||||
"regression": "01-trace-code-execution.txt"
|
||||
}
|
||||
},
|
||||
"timeout": 2400000 // 40 minutes
|
||||
}
|
||||
```
|
||||
|
||||
### Qwen Configuration (Fallback)
|
||||
**Qwen Configuration (Fallback)**:
|
||||
```javascript
|
||||
{
|
||||
"tool": "qwen",
|
||||
@@ -464,47 +485,12 @@ See: `.process/iteration-{iteration}-cli-output.txt`
|
||||
"templates": {
|
||||
"test-failure": "01-diagnose-bug-root-cause.txt",
|
||||
"coverage-gap": "02-analyze-code-patterns.txt"
|
||||
}
|
||||
},
|
||||
"timeout": 2400000 // 40 minutes
|
||||
}
|
||||
```
|
||||
|
||||
## Integration with test-cycle-execute
|
||||
|
||||
**Orchestrator Call Pattern**:
|
||||
```javascript
|
||||
// When pass_rate < 95%
|
||||
Task(
|
||||
subagent_type="cli-planning-agent",
|
||||
description=`Analyze test failures and generate fix task (iteration ${iteration})`,
|
||||
prompt=`
|
||||
## Context Package
|
||||
${JSON.stringify(contextPackage, null, 2)}
|
||||
|
||||
## Your Task
|
||||
1. Execute CLI analysis using ${cli_config.tool}
|
||||
2. Parse CLI output and extract fix strategy
|
||||
3. Generate IMPL-fix-${iteration}.json with structured task definition
|
||||
4. Save analysis report to .process/iteration-${iteration}-analysis.md
|
||||
5. Report success and task ID back to orchestrator
|
||||
`
|
||||
)
|
||||
```
|
||||
|
||||
**Agent Response**:
|
||||
```javascript
|
||||
{
|
||||
"status": "success",
|
||||
"task_id": "IMPL-fix-{iteration}",
|
||||
"task_path": ".workflow/{session}/.task/IMPL-fix-{iteration}.json",
|
||||
"analysis_report": ".process/iteration-{iteration}-analysis.md",
|
||||
"cli_output": ".process/iteration-{iteration}-cli-output.txt",
|
||||
"summary": "{fix_strategy.approach first 100 chars}",
|
||||
"modification_points_count": {count},
|
||||
"estimated_complexity": "low|medium|high"
|
||||
}
|
||||
```
|
||||
|
||||
## Example Execution
|
||||
### Example Execution
|
||||
|
||||
**Input Context**:
|
||||
```json
|
||||
@@ -530,24 +516,45 @@ Task(
|
||||
"cli_config": {
|
||||
"tool": "gemini",
|
||||
"template": "01-diagnose-bug-root-cause.txt"
|
||||
},
|
||||
"task_config": {
|
||||
"agent": "@test-fix-agent",
|
||||
"type": "test-fix-iteration",
|
||||
"max_iterations": 5
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Execution Steps**:
|
||||
1. Detect test_type: "integration" → Apply integration-specific diagnosis
|
||||
2. Execute: `gemini -p "PURPOSE: Analyze integration test failure... [layer-specific context]"`
|
||||
- CLI prompt includes: "Examine component interactions, data flow, interface contracts"
|
||||
- Guidance: "Analyze full call stack and data flow across components"
|
||||
3. Parse: Extract RCA, 修复建议, 验证建议 sections
|
||||
4. Generate: IMPL-fix-1.json (SIMPLIFIED) with:
|
||||
**Execution Summary**:
|
||||
1. **Detect test_type**: "integration" → Apply integration-specific diagnosis
|
||||
2. **Execute CLI**:
|
||||
```bash
|
||||
gemini -p "PURPOSE: Analyze integration test failure...
|
||||
TASK: Examine component interactions, data flow, interface contracts...
|
||||
RULES: Analyze full call stack and data flow across components"
|
||||
```
|
||||
3. **Parse Output**: Extract RCA, 修复建议 (fix recommendations), and 验证建议 (verification recommendations) sections
|
||||
4. **Generate Task JSON** (IMPL-fix-1.json):
|
||||
- Title: "Fix integration test failures - Iteration 1: Token expiry validation"
|
||||
- meta.analysis_report: ".process/iteration-1-analysis.md" (Reference, not embedded data)
|
||||
- meta.analysis_report: ".process/iteration-1-analysis.md" (reference)
|
||||
- meta.test_layer: "integration"
|
||||
- Requirements: "Fix 1 integration test failures by applying the provided fix strategy"
|
||||
- fix_strategy.modification_points: ["src/auth/auth.service.ts:validateToken:45-60", "src/middleware/auth.middleware.ts:checkExpiry:120-135"]
|
||||
- Requirements: "Fix 1 integration test failures by applying provided fix strategy"
|
||||
- fix_strategy.modification_points:
|
||||
- "src/auth/auth.service.ts:validateToken:45-60"
|
||||
- "src/middleware/auth.middleware.ts:checkExpiry:120-135"
|
||||
- fix_strategy.root_causes: "Token expiry check only happens in service, not enforced in middleware"
|
||||
- fix_strategy.quality_assurance: {avoids_symptom_fix: true, addresses_root_cause: true}
|
||||
- **NO failure_context object** - full context available via analysis_report reference
|
||||
5. Save: iteration-1-analysis.md with full CLI output, layer context, failed_tests details, previous_attempts
|
||||
6. Return: task_id="IMPL-fix-1", test_layer="integration", status="success"
|
||||
5. **Save Analysis Report**: iteration-1-analysis.md with full CLI output, layer context, failed_tests details
|
||||
6. **Return**:
|
||||
```javascript
|
||||
{
|
||||
status: "success",
|
||||
task_id: "IMPL-fix-1",
|
||||
task_path: ".workflow/WFS-test-session-001/.task/IMPL-fix-1.json",
|
||||
analysis_report: ".process/iteration-1-analysis.md",
|
||||
cli_output: ".process/iteration-1-cli-output.txt",
|
||||
summary: "Token expiry check only happens in service, not enforced in middleware",
|
||||
modification_points_count: 2,
|
||||
estimated_complexity: "medium"
|
||||
}
|
||||
```
|
||||
|
||||
@@ -238,7 +238,7 @@ const context = {
|
||||
|
||||
**3.5 Brainstorm Artifacts Integration**
|
||||
|
||||
If `.workflow/{session}/.brainstorming/` exists, read and include content:
|
||||
If `.workflow/session/{session}/.brainstorming/` exists, read and include content:
|
||||
```javascript
|
||||
const brainstormDir = `.workflow/${session}/.brainstorming`;
|
||||
if (dir_exists(brainstormDir)) {
|
||||
@@ -274,7 +274,7 @@ Calculate risk level based on:
|
||||
|
||||
**3.7 Context Packaging & Output**
|
||||
|
||||
**Output**: `.workflow/{session-id}/.process/context-package.json`
|
||||
**Output**: `.workflow/active/{session-id}/.process/context-package.json`
|
||||
|
||||
**Note**: Task JSONs reference via `context_package_path` field (not in `artifacts`)
|
||||
|
||||
@@ -422,7 +422,7 @@ Calculate risk level based on:
|
||||
## Quality Validation
|
||||
|
||||
Before completion verify:
|
||||
- [ ] context-package.json in `.workflow/{session}/.process/`
|
||||
- [ ] context-package.json in `.workflow/session/{session}/.process/`
|
||||
- [ ] Valid JSON with all required fields
|
||||
- [ ] Metadata complete (description, keywords, complexity)
|
||||
- [ ] Project context documented (patterns, conventions, tech stack)
|
||||
@@ -432,25 +432,6 @@ Before completion verify:
|
||||
- [ ] File relevance >80%
|
||||
- [ ] No sensitive data exposed
|
||||
|
||||
## Performance Limits
|
||||
|
||||
**File Counts**:
|
||||
- Max 30 high-priority (score >0.8)
|
||||
- Max 20 medium-priority (score 0.5-0.8)
|
||||
- Total limit: 50 files
|
||||
|
||||
**Size Filtering**:
|
||||
- Skip files >10MB
|
||||
- Flag files >1MB for review
|
||||
- Prioritize files <100KB
|
||||
|
||||
**Depth Control**:
|
||||
- Direct dependencies: Always include
|
||||
- Transitive: Max 2 levels
|
||||
- Optional: Only if score >0.7
|
||||
|
||||
**Tool Priority**: Code-Index > ripgrep > find > grep
|
||||
|
||||
## Output Report
|
||||
|
||||
```
|
||||
@@ -475,7 +456,7 @@ Conflict Detection:
|
||||
- Affected: {modules}
|
||||
- Mitigation: {strategy}
|
||||
|
||||
Output: .workflow/{session}/.process/context-package.json
|
||||
Output: .workflow/session/{session}/.process/context-package.json
|
||||
(Referenced in task JSONs via top-level `context_package_path` field)
|
||||
```
|
||||
|
||||
|
||||
@@ -304,7 +304,7 @@ if (!validation.all_passed()) {
|
||||
## Output Location
|
||||
|
||||
```
|
||||
.workflow/{test_session_id}/.process/test-context-package.json
|
||||
.workflow/active/{test_session_id}/.process/test-context-package.json
|
||||
```
|
||||
|
||||
## Helper Functions Reference
|
||||
|
||||
@@ -76,8 +76,8 @@ Use `resume --last` when current task extends/relates to previous execution. See
|
||||
|
||||
## Workflow Integration
|
||||
|
||||
**Session Management**: Auto-detects `.workflow/.active-*` marker
|
||||
- Active session: Save to `.workflow/WFS-[id]/.chat/execute-[timestamp].md`
|
||||
**Session Management**: Auto-detects active session from `.workflow/active/` directory
|
||||
- Active session: Save to `.workflow/active/WFS-[id]/.chat/execute-[timestamp].md`
|
||||
- No session: Create new session or save to scratchpad
|
||||
|
||||
**Task Integration**: Load from `.task/[TASK-ID].json`, update status, generate summary
|
||||
|
||||
@@ -1,37 +1,22 @@
|
||||
---
|
||||
name: enhance-prompt
|
||||
description: Enhanced prompt transformation using session memory and codebase analysis with --enhance flag detection
|
||||
description: Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection
|
||||
argument-hint: "user input to enhance"
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
Systematically enhances user prompts by combining session memory context with codebase patterns, translating ambiguous requests into actionable specifications.
|
||||
Systematically enhances user prompts by leveraging session memory context and intent analysis, translating ambiguous requests into actionable specifications.
|
||||
|
||||
## Core Protocol
|
||||
|
||||
**Enhancement Pipeline:**
|
||||
`Intent Translation` → `Context Integration` → `Gemini Analysis (if needed)` → `Structured Output`
|
||||
`Intent Translation` → `Context Integration` → `Structured Output`
|
||||
|
||||
**Context Sources:**
|
||||
- Session memory (conversation history, previous analysis)
|
||||
- Codebase patterns (via Gemini when triggered)
|
||||
- Implicit technical requirements
|
||||
|
||||
## Gemini Trigger Logic
|
||||
|
||||
```pseudo
|
||||
FUNCTION should_use_gemini(user_prompt):
|
||||
critical_keywords = ["refactor", "migrate", "redesign", "auth", "payment", "security"]
|
||||
|
||||
RETURN (
|
||||
prompt_affects_multiple_modules(user_prompt, threshold=3) OR
|
||||
any_keyword_in_prompt(critical_keywords, user_prompt)
|
||||
)
|
||||
END
|
||||
```
|
||||
|
||||
**Gemini Integration:** ~/.claude/workflows/intelligent-tools-strategy.md
|
||||
- User intent patterns
|
||||
|
||||
## Enhancement Rules
|
||||
|
||||
@@ -47,22 +32,18 @@ END
|
||||
|
||||
### Context Integration Strategy
|
||||
|
||||
**Session Memory First:**
|
||||
**Session Memory:**
|
||||
- Reference recent conversation context
|
||||
- Reuse previously identified patterns
|
||||
- Build on established understanding
|
||||
|
||||
**Codebase Analysis (via Gemini):**
|
||||
- Only when complexity requires it
|
||||
- Focus on integration points
|
||||
- Identify existing patterns
|
||||
- Infer technical requirements from discussion
|
||||
|
||||
**Example:**
|
||||
```bash
|
||||
# User: "add login"
|
||||
# Session Memory: Previous auth discussion, JWT mentioned
|
||||
# Inferred: JWT-based auth, integrate with existing session management
|
||||
# Gemini (if multi-module): Analyze AuthService patterns, middleware structure
|
||||
# Action: Implement JWT authentication with session persistence
|
||||
```
|
||||
|
||||
## Output Structure
|
||||
@@ -76,7 +57,7 @@ ATTENTION: [Critical constraints]
|
||||
|
||||
### Output Examples
|
||||
|
||||
**Simple (no Gemini):**
|
||||
**Example 1:**
|
||||
```bash
|
||||
# Input: "fix login button"
|
||||
INTENT: Debug non-functional login button
|
||||
@@ -85,28 +66,28 @@ ACTION: Check event binding → verify state updates → test auth flow
|
||||
ATTENTION: Preserve existing OAuth integration
|
||||
```
|
||||
|
||||
**Complex (with Gemini):**
|
||||
**Example 2:**
|
||||
```bash
|
||||
# Input: "refactor payment code"
|
||||
INTENT: Restructure payment module for maintainability
|
||||
CONTEXT: Session memory - PCI compliance requirements
|
||||
Gemini - PaymentService → StripeAdapter pattern identified
|
||||
ACTION: Extract reusable validators → isolate payment gateway logic
|
||||
CONTEXT: Session memory - PCI compliance requirements, Stripe integration patterns
|
||||
ACTION: Extract reusable validators → isolate payment gateway logic → maintain adapter pattern
|
||||
ATTENTION: Zero behavior change, maintain PCI compliance, full test coverage
|
||||
```
|
||||
|
||||
## Automatic Triggers
|
||||
## Enhancement Triggers
|
||||
|
||||
- Ambiguous language: "fix", "improve", "clean up"
|
||||
- Multi-module impact (>3 modules)
|
||||
- Vague requests requiring clarification
|
||||
- Complex technical requirements
|
||||
- Architecture changes
|
||||
- Critical systems: auth, payment, security
|
||||
- Complex refactoring
|
||||
- Multi-step refactoring
|
||||
|
||||
## Key Principles
|
||||
|
||||
1. **Memory First**: Leverage session context before analysis
|
||||
2. **Minimal Gemini**: Only when complexity demands it
|
||||
3. **Context Reuse**: Build on previous understanding
|
||||
4. **Clear Output**: Structured, actionable specifications
|
||||
1. **Session Memory First**: Leverage conversation context and established understanding
|
||||
2. **Context Reuse**: Build on previous discussions and decisions
|
||||
3. **Clear Output**: Structured, actionable specifications
|
||||
4. **Intent Clarification**: Transform vague requests into specific technical goals
|
||||
5. **Avoid Duplication**: Reference existing context, don't repeat
|
||||
@@ -63,10 +63,10 @@ Lightweight planner that analyzes project structure, decomposes documentation wo
|
||||
bash(pwd && basename "$(pwd)" && git rev-parse --show-toplevel 2>/dev/null || pwd && date +%Y%m%d-%H%M%S)
|
||||
|
||||
# Create session directories (replace timestamp)
|
||||
bash(mkdir -p .workflow/WFS-docs-{timestamp}/.{task,process,summaries} && touch .workflow/.active-WFS-docs-{timestamp})
|
||||
bash(mkdir -p .workflow/active/WFS-docs-{timestamp}/.{task,process,summaries})
|
||||
|
||||
# Create workflow-session.json (replace values)
|
||||
bash(echo '{"session_id":"WFS-docs-{timestamp}","project":"{project} documentation","status":"planning","timestamp":"2024-01-20T14:30:22+08:00","path":".","target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' | jq '.' > .workflow/WFS-docs-{timestamp}/workflow-session.json)
|
||||
bash(echo '{"session_id":"WFS-docs-{timestamp}","project":"{project} documentation","status":"planning","timestamp":"2024-01-20T14:30:22+08:00","path":".","target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' | jq '.' > .workflow/active/WFS-docs-{timestamp}/workflow-session.json)
|
||||
```
|
||||
|
||||
### Phase 2: Analyze Structure
|
||||
@@ -89,7 +89,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
|
||||
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null | xargs cat 2>/dev/null; fi)
|
||||
```
|
||||
|
||||
**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/phase2-analysis.json` with structure:
|
||||
**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/doc-planning-data.json` with structure:
|
||||
|
||||
```json
|
||||
{
|
||||
@@ -118,7 +118,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
|
||||
|
||||
**Then** use **Edit tool** to update `workflow-session.json` adding analysis field.
|
||||
|
||||
**Output**: Single `phase2-analysis.json` with all analysis data (no temp files or Python scripts).
|
||||
**Output**: Single `doc-planning-data.json` with all analysis data (no temp files or Python scripts).
|
||||
|
||||
**Auto-skipped**: Tests (`**/test/**`, `**/*.test.*`), Build (`**/node_modules/**`, `**/dist/**`), Config (root-level files), Vendor directories.
|
||||
|
||||
@@ -127,8 +127,8 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
|
||||
**Commands**:
|
||||
|
||||
```bash
|
||||
# Count existing docs from phase2-analysis.json
|
||||
bash(cat .workflow/WFS-docs-{timestamp}/.process/phase2-analysis.json | jq '.existing_docs.file_list | length')
|
||||
# Count existing docs from doc-planning-data.json
|
||||
bash(cat .workflow/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq '.existing_docs.file_list | length')
|
||||
```
|
||||
|
||||
**Data Processing**: Use count result, then use **Edit tool** to update `workflow-session.json`:
|
||||
@@ -182,8 +182,8 @@ Large Projects (single dir >10 docs):
|
||||
**Commands**:
|
||||
|
||||
```bash
|
||||
# 1. Get top-level directories from phase2-analysis.json
|
||||
bash(cat .workflow/WFS-docs-{timestamp}/.process/phase2-analysis.json | jq -r '.top_level_dirs[]')
|
||||
# 1. Get top-level directories from doc-planning-data.json
|
||||
bash(cat .workflow/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq -r '.top_level_dirs[]')
|
||||
|
||||
# 2. Get mode from workflow-session.json
|
||||
bash(cat .workflow/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')
|
||||
@@ -201,7 +201,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
|
||||
- If total ≤10 docs: create group
|
||||
- If total >10 docs: split to 1 dir/group or subdivide
|
||||
- If single dir >10 docs: split by subdirectories
|
||||
3. Use **Edit tool** to update `phase2-analysis.json` adding groups field:
|
||||
3. Use **Edit tool** to update `doc-planning-data.json` adding groups field:
|
||||
```json
|
||||
"groups": {
|
||||
"count": 3,
|
||||
@@ -215,7 +215,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
|
||||
|
||||
**Task ID Calculation**:
|
||||
```bash
|
||||
group_count=$(jq '.groups.count' .workflow/WFS-docs-{timestamp}/.process/phase2-analysis.json)
|
||||
group_count=$(jq '.groups.count' .workflow/WFS-docs-{timestamp}/.process/doc-planning-data.json)
|
||||
readme_id=$((group_count + 1)) # Next ID after groups
|
||||
arch_id=$((group_count + 2))
|
||||
api_id=$((group_count + 3))
|
||||
@@ -237,7 +237,7 @@ api_id=$((group_count + 3))
|
||||
|
||||
**Generation Process**:
|
||||
1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
|
||||
2. Read group assignments from phase2-analysis.json
|
||||
2. Read group assignments from doc-planning-data.json
|
||||
3. Generate Level 1 tasks (IMPL-001 to IMPL-N, one per group)
|
||||
4. Generate Level 2+ tasks if mode=full (README, ARCHITECTURE, HTTP API)
|
||||
|
||||
@@ -262,14 +262,14 @@ api_id=$((group_count + 3))
|
||||
},
|
||||
"context": {
|
||||
"requirements": [
|
||||
"Process directories from group ${group_number} in phase2-analysis.json",
|
||||
"Process directories from group ${group_number} in doc-planning-data.json",
|
||||
"Generate docs to .workflow/docs/${project_name}/ (mirrored structure)",
|
||||
"Code folders: API.md + README.md; Navigation folders: README.md only",
|
||||
"Use pre-analyzed data from Phase 2 (no redundant analysis)"
|
||||
],
|
||||
"focus_paths": ["${group_dirs_from_json}"],
|
||||
"precomputed_data": {
|
||||
"phase2_analysis": "${session_dir}/.process/phase2-analysis.json"
|
||||
"phase2_analysis": "${session_dir}/.process/doc-planning-data.json"
|
||||
}
|
||||
},
|
||||
"flow_control": {
|
||||
@@ -278,8 +278,8 @@ api_id=$((group_count + 3))
|
||||
"step": "load_precomputed_data",
|
||||
"action": "Load Phase 2 analysis and extract group directories",
|
||||
"commands": [
|
||||
"bash(cat ${session_dir}/.process/phase2-analysis.json)",
|
||||
"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/phase2-analysis.json)"
|
||||
"bash(cat ${session_dir}/.process/doc-planning-data.json)",
|
||||
"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/doc-planning-data.json)"
|
||||
],
|
||||
"output_to": "phase2_context",
|
||||
"note": "Single JSON file contains all Phase 2 analysis results"
|
||||
@@ -324,7 +324,7 @@ api_id=$((group_count + 3))
|
||||
{
|
||||
"step": 2,
|
||||
"title": "Batch generate documentation via CLI",
|
||||
"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/phase2-analysis.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
|
||||
"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/doc-planning-data.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
|
||||
"depends_on": [1],
|
||||
"output": "generated_docs"
|
||||
}
|
||||
@@ -458,14 +458,13 @@ api_id=$((group_count + 3))
|
||||
**Unified Structure** (single JSON replaces multiple text files):
|
||||
|
||||
```
|
||||
.workflow/
|
||||
├── .active-WFS-docs-{timestamp}
|
||||
.workflow/active/
|
||||
└── WFS-docs-{timestamp}/
|
||||
├── workflow-session.json # Session metadata
|
||||
├── IMPL_PLAN.md
|
||||
├── TODO_LIST.md
|
||||
├── .process/
|
||||
│ └── phase2-analysis.json # All Phase 2 analysis data (replaces 7+ files)
|
||||
│ └── doc-planning-data.json # All Phase 2 analysis data (replaces 7+ files)
|
||||
└── .task/
|
||||
├── IMPL-001.json # Small: all modules | Large: group 1
|
||||
├── IMPL-00N.json # (Large only: groups 2-N)
|
||||
@@ -474,7 +473,7 @@ api_id=$((group_count + 3))
|
||||
└── IMPL-{N+3}.json # HTTP API (optional)
|
||||
```
|
||||
|
||||
**phase2-analysis.json Structure**:
|
||||
**doc-planning-data.json Structure**:
|
||||
```json
|
||||
{
|
||||
"metadata": {
|
||||
|
||||
@@ -21,12 +21,14 @@ auto-continue: true
|
||||
**Key Features**:
|
||||
- Extracts primary design references (colors, typography, spacing, etc.)
|
||||
- Provides dynamic adjustment guidelines for design tokens
|
||||
- Includes prerequisites and tooling requirements (browsers, PostCSS, dark mode)
|
||||
- Progressive loading structure for efficient token usage
|
||||
- Complete implementation examples with React components
|
||||
- Interactive preview showcase
|
||||
|
||||
---
|
||||
|
||||
## Usage
|
||||
## Quick Reference
|
||||
|
||||
### Command Syntax
|
||||
|
||||
@@ -51,20 +53,77 @@ package-name Style reference package name (required)
|
||||
/memory:style-skill-memory
|
||||
```
|
||||
|
||||
### Key Variables
|
||||
|
||||
**Input Variables**:
|
||||
- `PACKAGE_NAME`: Style reference package name
|
||||
- `PACKAGE_DIR`: `.workflow/reference_style/${package_name}`
|
||||
- `SKILL_DIR`: `.claude/skills/style-${package_name}`
|
||||
- `REGENERATE`: `true` if --regenerate flag, `false` otherwise
|
||||
|
||||
**Data Sources** (Phase 2):
|
||||
- `DESIGN_TOKENS_DATA`: Complete design-tokens.json content (from Read)
|
||||
- `LAYOUT_TEMPLATES_DATA`: Complete layout-templates.json content (from Read)
|
||||
- `ANIMATION_TOKENS_DATA`: Complete animation-tokens.json content (from Read, if exists)
|
||||
|
||||
**Metadata** (Phase 2):
|
||||
- `COMPONENT_COUNT`: Total components
|
||||
- `UNIVERSAL_COUNT`: Universal components count
|
||||
- `SPECIALIZED_COUNT`: Specialized components count
|
||||
- `UNIVERSAL_COMPONENTS`: Universal component names (first 5)
|
||||
- `HAS_ANIMATIONS`: Whether animation-tokens.json exists
|
||||
|
||||
**Analysis Output** (`DESIGN_ANALYSIS` - Phase 2):
|
||||
- `has_colors`: Colors exist
|
||||
- `color_semantic`: Has semantic naming (primary/secondary/accent)
|
||||
- `uses_oklch`: Uses modern color spaces (oklch, lab, etc.)
|
||||
- `has_dark_mode`: Has separate light/dark mode color tokens
|
||||
- `spacing_pattern`: Pattern type ("linear", "geometric", "custom")
|
||||
- `spacing_scale`: Actual scale values (e.g., [4, 8, 16, 32, 64])
|
||||
- `has_typography`: Typography system exists
|
||||
- `typography_hierarchy`: Has size scale for hierarchy
|
||||
- `uses_calc`: Uses calc() expressions in token values
|
||||
- `has_radius`: Border radius exists
|
||||
- `radius_style`: Style characteristic ("sharp" <4px, "moderate" 4-8px, "rounded" >8px)
|
||||
- `has_shadows`: Shadow system exists
|
||||
- `shadow_pattern`: Elevation naming pattern
|
||||
- `has_animations`: Animation tokens exist
|
||||
- `animation_range`: Duration range (fast to slow)
|
||||
- `easing_variety`: Types of easing functions
|
||||
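As a sketch of how a few of these fields could be derived from `design-tokens.json` (the threshold values follow the descriptions above; the token file's field names and structure are assumptions):

```javascript
// Derive part of DESIGN_ANALYSIS from parsed design tokens.
function analyzeDesignTokens(tokens) {
  const colors = tokens.colors || {};
  const radii = Object.values(tokens.radius || {})
    .map(parseFloat)
    .filter(Number.isFinite);
  const maxRadius = radii.length ? Math.max(...radii) : 0;

  return {
    has_colors: Object.keys(colors).length > 0,
    color_semantic: ["primary", "secondary", "accent"].some(k => k in colors),
    uses_oklch: JSON.stringify(colors).includes("oklch("),
    has_radius: radii.length > 0,
    radius_style: maxRadius < 4 ? "sharp" : maxRadius <= 8 ? "moderate" : "rounded"
  };
}
```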
|
||||
### Common Errors
|
||||
|
||||
| Error | Cause | Resolution |
|
||||
|-------|-------|------------|
|
||||
| Package not found | Invalid package name or doesn't exist | Run `/workflow:ui-design:codify-style` first |
|
||||
| SKILL already exists | SKILL.md already generated | Use `--regenerate` flag |
|
||||
| Missing layout-templates.json | Incomplete package | Verify package integrity, re-run codify-style |
|
||||
| Invalid JSON format | Corrupted package files | Regenerate package with codify-style |
|
||||
|
||||
---
|
||||
|
||||

## Execution Process

### Phase 1: Validate Package

**Purpose**: Check if style reference package exists

**TodoWrite** (First Action):
```json
[
  {"content": "Validate style reference package", "status": "in_progress", "activeForm": "Validating package"},
  {"content": "Read package data and extract design references", "status": "pending", "activeForm": "Reading package data"},
  {"content": "Generate SKILL.md with progressive loading", "status": "pending", "activeForm": "Generating SKILL.md"}
  {
    "content": "Validate package exists and check SKILL status",
    "activeForm": "Validating package and SKILL status",
    "status": "in_progress"
  },
  {
    "content": "Read package data and analyze design system",
    "activeForm": "Reading package data and analyzing design system",
    "status": "pending"
  },
  {
    "content": "Generate SKILL.md with design principles and token values",
    "activeForm": "Generating SKILL.md with design principles and token values",
    "status": "pending"
  }
]
```
@@ -75,8 +134,6 @@ package-name Style reference package name (required)
bash(echo "${package_name:-$(basename "$(pwd)" | sed 's/^style-//')}")
```

Store result as `package_name`

**Step 2: Validate Package Exists**

```bash
@@ -113,152 +170,90 @@ if (regenerate_flag && skill_exists) {
}
```

**Summary Variables**:
- `PACKAGE_NAME`: Style reference package name
- `PACKAGE_DIR`: `.workflow/reference_style/${package_name}`
- `SKILL_DIR`: `.claude/skills/style-${package_name}`
- `REGENERATE`: `true` if --regenerate flag, `false` otherwise

**TodoWrite Update**:
```json
[
  {"content": "Validate style reference package", "status": "completed", "activeForm": "Validating package"},
  {"content": "Read package data and extract design references", "status": "in_progress", "activeForm": "Reading package data"}
]
```
**TodoWrite Update**: Mark "Validate" as completed, "Read package data" as in_progress

---

### Phase 2: Read Package Data & Extract Design References
### Phase 2: Read Package Data & Analyze Design System

**Purpose**: Extract package information and primary design references for SKILL description generation

**Step 1: Count Components**
**Step 1: Read All JSON Files**

```bash
bash(jq '.layout_templates | length' .workflow/reference_style/${package_name}/layout-templates.json 2>/dev/null || echo 0)
```
# Read layout templates
Read(file_path=".workflow/reference_style/${package_name}/layout-templates.json")

Store result as `component_count`
**Step 2: Extract Component Types and Classification**
|
||||
|
||||
```bash
|
||||
# Extract component names from layout templates
|
||||
bash(jq -r '.layout_templates | keys[]' .workflow/reference_style/${package_name}/layout-templates.json 2>/dev/null | head -10)
|
||||
|
||||
# Count universal vs specialized components
|
||||
bash(jq '[.layout_templates[] | select(.component_type == "universal")] | length' .workflow/reference_style/${package_name}/layout-templates.json 2>/dev/null || echo 0)
|
||||
bash(jq '[.layout_templates[] | select(.component_type == "specialized")] | length' .workflow/reference_style/${package_name}/layout-templates.json 2>/dev/null || echo 0)
|
||||
|
||||
# Extract universal component names only
|
||||
bash(jq -r '.layout_templates | to_entries | map(select(.value.component_type == "universal")) | .[].key' .workflow/reference_style/${package_name}/layout-templates.json 2>/dev/null | head -10)
|
||||
```
|
||||
|
||||
Store as:
|
||||
- `COMPONENT_TYPES`: List of available component types (all)
|
||||
- `UNIVERSAL_COUNT`: Number of universal (reusable) components
|
||||
- `SPECIALIZED_COUNT`: Number of specialized (project-specific) components
|
||||
- `UNIVERSAL_COMPONENTS`: List of universal component names
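
The three queries above can also be captured into shell variables in a single pass; a minimal sketch, assuming `jq` is installed and the package path is valid:

```bash
# Sketch: capture component classification in one pass (mirrors the documented jq filters).
pkg=".workflow/reference_style/${package_name}/layout-templates.json"

component_count=$(jq '.layout_templates | length' "$pkg")
universal_count=$(jq '[.layout_templates[] | select(.component_type == "universal")] | length' "$pkg")
specialized_count=$(jq '[.layout_templates[] | select(.component_type == "specialized")] | length' "$pkg")
universal_components=$(jq -r '.layout_templates | to_entries
  | map(select(.value.component_type == "universal")) | .[].key' "$pkg" | head -10)

echo "COMPONENT_COUNT=$component_count UNIVERSAL_COUNT=$universal_count SPECIALIZED_COUNT=$specialized_count"
```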
|
||||
|
||||
**Step 3: Read Design Tokens**
|
||||
|
||||
```bash
|
||||
# Read design tokens
|
||||
Read(file_path=".workflow/reference_style/${package_name}/design-tokens.json")
|
||||
|
||||
# Read animation tokens (if exists)
|
||||
bash(test -f .workflow/reference_style/${package_name}/animation-tokens.json && echo "exists" || echo "missing")
|
||||
Read(file_path=".workflow/reference_style/${package_name}/animation-tokens.json") # if exists
|
||||
```
|
||||
|
||||
**Extract Primary Design References**:
|
||||
|
||||
**Colors** (top 3-5 most important):
|
||||
```bash
|
||||
bash(jq -r '.colors | to_entries | .[0:5] | .[] | "\(.key): \(.value)"' .workflow/reference_style/${package_name}/design-tokens.json 2>/dev/null | head -5)
|
||||
```
|
||||
|
||||
**Typography** (heading and body fonts):
|
||||
```bash
|
||||
bash(jq -r '.typography | to_entries | map(select(.key | contains("family"))) | .[] | "\(.key): \(.value)"' .workflow/reference_style/${package_name}/design-tokens.json 2>/dev/null)
|
||||
```
|
||||
|
||||
**Spacing Scale** (base spacing values):
|
||||
```bash
|
||||
bash(jq -r '.spacing | to_entries | .[0:5] | .[] | "\(.key): \(.value)"' .workflow/reference_style/${package_name}/design-tokens.json 2>/dev/null)
|
||||
```
|
||||
|
||||
**Border Radius** (base radius values):
|
||||
```bash
|
||||
bash(jq -r '.border_radius | to_entries | .[] | "\(.key): \(.value)"' .workflow/reference_style/${package_name}/design-tokens.json 2>/dev/null)
|
||||
```
|
||||
|
||||
**Shadows** (elevation levels):
|
||||
```bash
|
||||
bash(jq -r '.shadows | to_entries | .[0:3] | .[] | "\(.key): \(.value)"' .workflow/reference_style/${package_name}/design-tokens.json 2>/dev/null)
|
||||
```
|
||||
|
||||
Store extracted references as:
|
||||
- `PRIMARY_COLORS`: List of primary color tokens
|
||||
- `TYPOGRAPHY_FONTS`: Font family tokens
|
||||
- `SPACING_SCALE`: Base spacing values
|
||||
- `BORDER_RADIUS`: Radius values
|
||||
- `SHADOWS`: Shadow definitions
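
If a single consolidated extraction is preferred, the five queries above could be folded into one `jq` invocation; this is a sketch that assumes design-tokens.json uses the top-level keys shown above:

```bash
# Sketch: one jq pass producing a compact reference summary from design-tokens.json.
tokens=".workflow/reference_style/${package_name}/design-tokens.json"

jq '{
  primary_colors:   (.colors        // {} | to_entries | .[0:5]),
  typography_fonts: (.typography    // {} | to_entries | map(select(.key | contains("family")))),
  spacing_scale:    (.spacing       // {} | to_entries | .[0:5]),
  border_radius:    (.border_radius // {} | to_entries),
  shadows:          (.shadows       // {} | to_entries | .[0:3])
}' "$tokens"
```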
|
||||
|
||||
**Step 4: Read Animation Tokens (if available)**
|
||||
**Step 2: Extract Metadata for Description**
|
||||
|
||||
```bash
|
||||
# Check if animation tokens exist
|
||||
bash(test -f .workflow/reference_style/${package_name}/animation-tokens.json && echo "available" || echo "not_available")
|
||||
# Count components and classify by type
|
||||
bash(jq '.layout_templates | length' layout-templates.json)
|
||||
bash(jq '[.layout_templates[] | select(.component_type == "universal")] | length' layout-templates.json)
|
||||
bash(jq '[.layout_templates[] | select(.component_type == "specialized")] | length' layout-templates.json)
|
||||
bash(jq -r '.layout_templates | to_entries[] | select(.value.component_type == "universal") | .key' layout-templates.json | head -5)
|
||||
```
|
||||
|
||||
If available, extract:
|
||||
```bash
|
||||
Read(file_path=".workflow/reference_style/${package_name}/animation-tokens.json")
|
||||
Store results in metadata variables (see [Key Variables](#key-variables))
|
||||
|
||||
# Extract primary animation values
|
||||
bash(jq -r '.duration | to_entries | .[] | "\(.key): \(.value)"' .workflow/reference_style/${package_name}/animation-tokens.json 2>/dev/null)
|
||||
bash(jq -r '.easing | to_entries | .[0:3] | .[] | "\(.key): \(.value)"' .workflow/reference_style/${package_name}/animation-tokens.json 2>/dev/null)
|
||||
```
|
||||
**Step 3: Analyze Design System for Dynamic Principles**
|
||||
|
||||
Store as:
|
||||
- `ANIMATION_DURATIONS`: Animation duration tokens
|
||||
- `EASING_FUNCTIONS`: Easing function tokens
|
||||
|
||||
**Step 5: Count Files**
|
||||
Analyze design-tokens.json to extract characteristics and patterns:
|
||||
|
||||
```bash
|
||||
bash(cd .workflow/reference_style/${package_name} && ls -1 *.json *.html *.css 2>/dev/null | wc -l)
|
||||
# Color system characteristics
|
||||
bash(jq '.colors | keys' design-tokens.json)
|
||||
bash(jq '.colors | to_entries[0:2] | map(.value)' design-tokens.json)
|
||||
# Check for modern color spaces
|
||||
bash(jq '.colors | to_entries[] | .value | test("oklch|lab|lch")' design-tokens.json)
|
||||
# Check for dark mode variants
|
||||
bash(jq '.colors | keys | map(select(contains("dark") or contains("light")))' design-tokens.json)
|
||||
# → Store: has_colors, color_semantic, uses_oklch, has_dark_mode
|
||||
|
||||
# Spacing pattern detection
|
||||
bash(jq '.spacing | to_entries | map(.value) | map(gsub("[^0-9.]"; "") | tonumber)' design-tokens.json)
|
||||
# Analyze pattern: linear (4-8-12-16) vs geometric (4-8-16-32) vs custom
|
||||
# → Store: spacing_pattern, spacing_scale
|
||||
|
||||
# Typography characteristics
|
||||
bash(jq '.typography | keys | map(select(contains("family") or contains("weight")))' design-tokens.json)
|
||||
bash(jq '.typography | to_entries | map(select(.key | contains("size"))) | .[].value' design-tokens.json)
|
||||
# Check for calc() usage
|
||||
bash(jq '. | tostring | test("calc\\(")' design-tokens.json)
|
||||
# → Store: has_typography, typography_hierarchy, uses_calc
|
||||
|
||||
# Border radius style
|
||||
bash(jq '.border_radius | to_entries | map(.value)' design-tokens.json)
|
||||
# Check range: small (sharp <4px) vs moderate (4-8px) vs large (rounded >8px)
|
||||
# → Store: has_radius, radius_style
|
||||
|
||||
# Shadow characteristics
|
||||
bash(jq '.shadows | keys' design-tokens.json)
|
||||
bash(jq '.shadows | to_entries[0].value' design-tokens.json)
|
||||
# → Store: has_shadows, shadow_pattern
|
||||
|
||||
# Animations (if available)
|
||||
bash(jq '.duration | to_entries | map(.value)' animation-tokens.json)
|
||||
bash(jq '.easing | keys' animation-tokens.json)
|
||||
# → Store: has_animations, animation_range, easing_variety
|
||||
```
|
||||
|
||||
Store result as `file_count`
|
||||
Store analysis results in `DESIGN_ANALYSIS` (see [Key Variables](#key-variables))
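
The "linear vs geometric vs custom" judgment is left informal above; a rough sketch of how it could be scripted, assuming spacing values become numeric once units are stripped:

```bash
# Sketch: classify the spacing scale as linear, geometric, or custom.
tokens=".workflow/reference_style/${package_name}/design-tokens.json"
scale=$(jq -c '[.spacing // {} | to_entries[].value | gsub("[^0-9.]"; "") | tonumber]' "$tokens")

spacing_pattern=$(echo "$scale" | jq -r '
  if length < 3 then "custom"
  elif ([range(1; length - 1) as $i | (.[$i+1] - .[$i]) == (.[$i] - .[$i-1])] | all) then "linear"
  elif ([range(1; length - 1) as $i | .[$i-1] != 0 and .[$i] != 0
         and (.[$i+1] / .[$i]) == (.[$i] / .[$i-1])] | all) then "geometric"
  else "custom" end')

echo "spacing_pattern=$spacing_pattern spacing_scale=$scale"
```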
|
||||
|
||||
**Summary Data Collected**:
|
||||
- `COMPONENT_COUNT`: Number of components in layout templates
|
||||
- `UNIVERSAL_COUNT`: Number of universal (reusable) components
|
||||
- `SPECIALIZED_COUNT`: Number of specialized (project-specific) components
|
||||
- `COMPONENT_TYPES`: List of component types (first 10)
|
||||
- `UNIVERSAL_COMPONENTS`: List of universal component names (first 10)
|
||||
- `FILE_COUNT`: Total files in package
|
||||
- `HAS_ANIMATIONS`: Whether animation tokens are available
|
||||
- `PRIMARY_COLORS`: Primary color tokens with values
|
||||
- `TYPOGRAPHY_FONTS`: Font family tokens
|
||||
- `SPACING_SCALE`: Base spacing scale
|
||||
- `BORDER_RADIUS`: Border radius values
|
||||
- `SHADOWS`: Shadow definitions
|
||||
- `ANIMATION_DURATIONS`: Animation durations (if available)
|
||||
- `EASING_FUNCTIONS`: Easing functions (if available)
|
||||
**Note**: Analysis focuses on characteristics and patterns, not counts. Include technical feature detection (oklch, calc, dark mode) for Prerequisites section.
|
||||
|
||||
**TodoWrite Update**:
|
||||
```json
|
||||
[
|
||||
{"content": "Read package data and extract design references", "status": "completed", "activeForm": "Reading package data"},
|
||||
{"content": "Generate SKILL.md with progressive loading", "status": "in_progress", "activeForm": "Generating SKILL.md"}
|
||||
]
|
||||
```
|
||||
**TodoWrite Update**: Mark "Read package data" as completed, "Generate SKILL.md" as in_progress
|
||||
|
||||
---
|
||||
|
||||
### Phase 3: Generate SKILL.md
|
||||
|
||||
**Purpose**: Create SKILL memory index with progressive loading structure and design references
|
||||
|
||||
**Step 1: Create SKILL Directory**
|
||||
|
||||
```bash
|
||||
@@ -272,336 +267,57 @@ bash(mkdir -p .claude/skills/style-${package_name})
|
||||
{package_name} project-independent design system with {universal_count} universal layout templates and interactive preview (located at .workflow/reference_style/{package_name}). Load when working with reusable UI components, design tokens, layout patterns, or implementing visual consistency. Excludes {specialized_count} project-specific components.
|
||||
```
|
||||
|
||||
**Key Elements**:
|
||||
- **Universal Count**: Emphasize available reusable layout templates
|
||||
- **Project Independence**: Clearly state project-independent nature
|
||||
- **Specialized Exclusion**: Mention excluded project-specific components
|
||||
- **Path Reference**: Precise package location
|
||||
- **Trigger Keywords**: reusable UI components, design tokens, layout patterns, visual consistency
|
||||
- **Action Coverage**: working with, analyzing, implementing
|
||||
**Step 3: Load and Process SKILL.md Template**
|
||||
|
||||
**Example**:
|
||||
```
|
||||
main-app-style-v1 project-independent design system with 5 universal layout templates and interactive preview (located at .workflow/reference_style/main-app-style-v1). Load when working with reusable UI components, design tokens, layout patterns, or implementing visual consistency. Excludes 3 project-specific components.
|
||||
```
|
||||
|
||||
**Step 3: Write SKILL.md**
|
||||
|
||||
Use Write tool to generate SKILL.md with the following complete content:
|
||||
|
||||
```markdown
|
||||
---
|
||||
name: style-{package_name}
|
||||
description: {intelligent description from Step 2}
|
||||
---
|
||||
|
||||
# {Package Name} Style SKILL Package
|
||||
|
||||
## Documentation: `../../../.workflow/reference_style/{package_name}/`
|
||||
|
||||
## Package Overview
|
||||
|
||||
**Project-independent style reference package** extracted from codebase with reusable design patterns, tokens, and interactive preview.
|
||||
|
||||
**Package Details**:
|
||||
- Package: {package_name}
|
||||
- Layout Templates: {component_count} total
|
||||
- **Universal Components**: {universal_count} (reusable, project-independent)
|
||||
- **Specialized Components**: {specialized_count} (project-specific, excluded from reference)
|
||||
- Universal Component Types: {comma-separated list of UNIVERSAL_COMPONENTS}
|
||||
- Files: {file_count}
|
||||
- Animation Tokens: {has_animations ? "✓ Available" : "Not available"}
|
||||
|
||||
**⚠️ IMPORTANT - Project Independence**:
|
||||
This SKILL package represents a **pure style system** independent of any specific project implementation:
|
||||
- **Universal components** are generic, reusable patterns (buttons, inputs, cards, navigation)
|
||||
- **Specialized components** are project-specific implementations (excluded from this reference)
|
||||
- All design tokens and layout patterns are extracted for **reference purposes only**
|
||||
- Adapt and customize these references based on your project's specific requirements
|
||||
|
||||
---
|
||||
|
||||
## ⚡ Primary Design References
|
||||
|
||||
**IMPORTANT**: These are **reference values** extracted from the codebase. They should be **dynamically adjusted** based on your specific design needs, not treated as fixed constraints.
|
||||
|
||||
### 🎨 Colors
|
||||
|
||||
{FOR each color in PRIMARY_COLORS:
|
||||
- **{color.key}**: `{color.value}`
|
||||
}
|
||||
|
||||
**Usage Guidelines**:
|
||||
- These colors establish the foundation of the design system
|
||||
- Adjust saturation, lightness, or hue based on:
|
||||
- Brand requirements and accessibility needs
|
||||
- Context (light/dark mode, high-contrast themes)
|
||||
- User feedback and A/B testing results
|
||||
- Use color theory principles to maintain harmony when modifying
|
||||
|
||||
### 📝 Typography
|
||||
|
||||
{FOR each font in TYPOGRAPHY_FONTS:
|
||||
- **{font.key}**: `{font.value}`
|
||||
}
|
||||
|
||||
**Usage Guidelines**:
|
||||
- Font families can be substituted based on:
|
||||
- Brand identity and design language
|
||||
- Performance requirements (web fonts vs. system fonts)
|
||||
- Accessibility and readability considerations
|
||||
- Platform-specific availability
|
||||
- Maintain hierarchy and scale relationships when changing fonts
|
||||
|
||||
### 📏 Spacing Scale
|
||||
|
||||
{FOR each spacing in SPACING_SCALE:
|
||||
- **{spacing.key}**: `{spacing.value}`
|
||||
}
|
||||
|
||||
**Usage Guidelines**:
|
||||
- Spacing values form a consistent rhythm system
|
||||
- Adjust scale based on:
|
||||
- Target device (mobile vs. desktop vs. tablet)
|
||||
- Content density requirements
|
||||
- Component-specific needs (compact vs. comfortable layouts)
|
||||
- Maintain proportional relationships when scaling
|
||||
|
||||
### 🔲 Border Radius
|
||||
|
||||
{FOR each radius in BORDER_RADIUS:
|
||||
- **{radius.key}**: `{radius.value}`
|
||||
}
|
||||
|
||||
**Usage Guidelines**:
|
||||
- Border radius affects visual softness and modernity
|
||||
- Adjust based on:
|
||||
- Design aesthetic (sharp vs. rounded vs. pill-shaped)
|
||||
- Component type (buttons, cards, inputs have different needs)
|
||||
- Platform conventions (iOS vs. Android vs. Web)
|
||||
|
||||
### 🌫️ Shadows
|
||||
|
||||
{FOR each shadow in SHADOWS:
|
||||
- **{shadow.key}**: `{shadow.value}`
|
||||
}
|
||||
|
||||
**Usage Guidelines**:
|
||||
- Shadows create elevation and depth perception
|
||||
- Adjust based on:
|
||||
- Material design depth levels
|
||||
- Light/dark mode contexts
|
||||
- Performance considerations (complex shadows impact rendering)
|
||||
- Visual hierarchy needs
|
||||
|
||||
{IF HAS_ANIMATIONS:
|
||||
### ⏱️ Animation & Timing
|
||||
|
||||
**Durations**:
|
||||
{FOR each duration in ANIMATION_DURATIONS:
|
||||
- **{duration.key}**: `{duration.value}`
|
||||
}
|
||||
|
||||
**Easing Functions**:
|
||||
{FOR each easing in EASING_FUNCTIONS:
|
||||
- **{easing.key}**: `{easing.value}`
|
||||
}
|
||||
|
||||
**Usage Guidelines**:
|
||||
- Animation timing affects perceived responsiveness and polish
|
||||
- Adjust based on:
|
||||
- User expectations and platform conventions
|
||||
- Accessibility preferences (reduced motion)
|
||||
- Animation type (micro-interactions vs. page transitions)
|
||||
- Performance constraints (mobile vs. desktop)
|
||||
}
|
||||
|
||||
---
|
||||
|
||||
## 🎯 Design Adaptation Strategies
|
||||
|
||||
### When to Adjust Design References
|
||||
|
||||
**Brand Alignment**:
|
||||
- Modify colors to match brand identity and guidelines
|
||||
- Adjust typography to reflect brand personality
|
||||
- Tune spacing and radius to align with brand aesthetic
|
||||
|
||||
**Accessibility Requirements**:
|
||||
- Increase color contrast ratios for WCAG compliance
|
||||
- Adjust font sizes and spacing for readability
|
||||
- Modify animation durations for reduced-motion preferences
|
||||
|
||||
**Platform Optimization**:
|
||||
- Adapt spacing for mobile touch targets (min 44x44px)
|
||||
- Adjust shadows and radius for platform conventions
|
||||
- Optimize animation performance for target devices
|
||||
|
||||
**Context-Specific Needs**:
|
||||
- Dark mode: Adjust colors, shadows, and contrasts
|
||||
- High-density displays: Fine-tune spacing and sizing
|
||||
- Responsive design: Scale tokens across breakpoints
|
||||
|
||||
### How to Apply Adjustments
|
||||
|
||||
1. **Identify Need**: Determine which tokens need adjustment based on your specific requirements
|
||||
2. **Maintain Relationships**: Preserve proportional relationships between related tokens
|
||||
3. **Test Thoroughly**: Validate changes across components and use cases
|
||||
4. **Document Changes**: Track modifications and rationale for team alignment
|
||||
5. **Iterate**: Refine based on user feedback and testing results
|
||||
|
||||
---
|
||||
|
||||
## Progressive Loading
|
||||
|
||||
### Level 0: Design Tokens (~5K tokens)
|
||||
|
||||
Essential design token system for consistent styling.
|
||||
|
||||
**Files**:
|
||||
- [Design Tokens](../../../.workflow/reference_style/{package_name}/design-tokens.json) - Colors, typography, spacing, shadows, borders
|
||||
|
||||
**Use when**: Quick token reference, applying consistent styles, color/typography queries
|
||||
|
||||
---
|
||||
|
||||
### Level 1: Universal Layout Templates (~12K tokens)
|
||||
|
||||
**Project-independent** component layout patterns for reusable UI elements.
|
||||
|
||||
**Files**:
|
||||
- Level 0 files
|
||||
- [Layout Templates](../../../.workflow/reference_style/{package_name}/layout-templates.json) - Component structures with HTML/CSS patterns
|
||||
|
||||
**⚠️ Reference Strategy**:
|
||||
- **Only reference components with `component_type: "universal"`** - these are reusable, project-independent patterns
|
||||
- **Ignore components with `component_type: "specialized"`** - these are project-specific implementations
|
||||
- Universal components include: buttons, inputs, forms, cards, navigation, modals, etc.
|
||||
- Use universal patterns as **reference templates** to adapt for your specific project needs
|
||||
|
||||
**Use when**: Building components, understanding component architecture, implementing layouts
|
||||
|
||||
---
|
||||
|
||||
### Level 2: Complete System (~20K tokens)
|
||||
|
||||
Full design system with animations and interactive preview.
|
||||
|
||||
**Files**:
|
||||
- All Level 1 files
|
||||
- [Animation Tokens](../../../.workflow/reference_style/{package_name}/animation-tokens.json) - Animation durations, easing, transitions _(if available)_
|
||||
- [Preview HTML](../../../.workflow/reference_style/{package_name}/preview.html) - Interactive showcase (reference only)
|
||||
- [Preview CSS](../../../.workflow/reference_style/{package_name}/preview.css) - Showcase styling (reference only)
|
||||
|
||||
**Use when**: Comprehensive analysis, animation development, complete design system understanding
|
||||
|
||||
---
|
||||
|
||||
## Interactive Preview
|
||||
|
||||
**Location**: `.workflow/reference_style/{package_name}/preview.html`
|
||||
|
||||
**View in Browser**:
|
||||
**⚠️ CRITICAL - Execute First**:
|
||||
```bash
|
||||
cd .workflow/reference_style/{package_name}
|
||||
python -m http.server 8080
|
||||
# Open http://localhost:8080/preview.html
|
||||
bash(cat ~/.claude/workflows/cli-templates/memory/style-skill-memory/skill-md-template.md)
|
||||
```
|
||||
|
||||
**Features**:
|
||||
- Color palette swatches with values
|
||||
- Typography scale and combinations
|
||||
- All components with variants and states
|
||||
- Spacing, radius, shadow visual examples
|
||||
- Interactive state demonstrations
|
||||
- Usage code snippets
|
||||
**Template Processing**:
|
||||
1. **Replace variables**: Substitute all `{variable}` placeholders with actual values from Phase 2
|
||||
2. **Generate dynamic sections**:
|
||||
- **Prerequisites & Tooling**: Generate based on `DESIGN_ANALYSIS` technical features (oklch, calc, dark mode)
|
||||
- **Design Principles**: Generate based on `DESIGN_ANALYSIS` characteristics
|
||||
- **Complete Implementation Example**: Include React component example with token adaptation
|
||||
- **Design Token Values**: Iterate `DESIGN_TOKENS_DATA`, `ANIMATION_TOKENS_DATA` and display all key-value pairs with DEFAULT annotations
|
||||
3. **Write to file**: Use Write tool to save to `.claude/skills/style-{package_name}/SKILL.md`
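
As a sketch of step 1, the flat `{variable}` placeholders could be substituted with a `sed` pass before the dynamic sections are generated; the template path is the one loaded via `cat` above, and single-line variable values are assumed:

```bash
# Sketch: substitute {variable} placeholders in the SKILL.md template.
template=~/.claude/workflows/cli-templates/memory/style-skill-memory/skill-md-template.md
out=".claude/skills/style-${package_name}/SKILL.md"
mkdir -p "$(dirname "$out")"

sed -e "s|{package_name}|${package_name}|g" \
    -e "s|{component_count}|${component_count}|g" \
    -e "s|{universal_count}|${universal_count}|g" \
    -e "s|{specialized_count}|${specialized_count}|g" \
    -e "s|{has_animations}|${has_animations}|g" \
    "$template" > "$out"
# Dynamic sections (Prerequisites, Design Principles, token tables) are then filled in afterward.
```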
|
||||
|
||||
---
|
||||
**Variable Replacement Map**:
|
||||
- `{package_name}` → PACKAGE_NAME
|
||||
- `{intelligent_description}` → Generated description from Step 2
|
||||
- `{component_count}` → COMPONENT_COUNT
|
||||
- `{universal_count}` → UNIVERSAL_COUNT
|
||||
- `{specialized_count}` → SPECIALIZED_COUNT
|
||||
- `{universal_components_list}` → UNIVERSAL_COMPONENTS (comma-separated)
|
||||
- `{has_animations}` → HAS_ANIMATIONS
|
||||
|
||||
## Usage Guidelines
|
||||
**Dynamic Content Generation**:
|
||||
|
||||
### Loading Levels
|
||||
See template file for complete structure. Key dynamic sections:
|
||||
|
||||
**Level 0** (5K): Design tokens only
|
||||
```
|
||||
Load Level 0 for design token reference
|
||||
```
|
||||
1. **Prerequisites & Tooling** (based on DESIGN_ANALYSIS technical features):
|
||||
- IF uses_oklch → Include PostCSS plugin requirement (`postcss-oklab-function`)
|
||||
- IF uses_calc → Include preprocessor requirement for calc() expressions
|
||||
- IF has_dark_mode → Include dark mode implementation mechanism (class or media query)
|
||||
- ALWAYS include browser support, jq installation, and local server setup
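
A minimal sketch of how the Prerequisites section could be emitted from those flags; the flag variable names are assumptions, `postcss-oklab-function` is the plugin named above, and in practice the assembled text would be written with the Write tool rather than appended in the shell:

```bash
# Sketch: emit Prerequisites & Tooling from DESIGN_ANALYSIS flags ("true"/"false" strings assumed).
{
  echo "## Prerequisites & Tooling"
  echo "- Modern evergreen browser; jq installed; local static server (e.g. python -m http.server)"
  [ "$uses_oklch" = "true" ]    && echo "- PostCSS plugin for oklch() (e.g. postcss-oklab-function)"
  [ "$uses_calc" = "true" ]     && echo "- Preprocessor or PostCSS pass that preserves calc() expressions"
  [ "$has_dark_mode" = "true" ] && echo "- Dark mode switching mechanism (class toggle or prefers-color-scheme)"
} >> ".claude/skills/style-${package_name}/SKILL.md"
```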
|
||||
|
||||
**Level 1** (12K): Tokens + layout templates
|
||||
```
|
||||
Load Level 1 for layout templates and design tokens
|
||||
```
|
||||
2. **Design Principles** (based on DESIGN_ANALYSIS):
|
||||
- IF has_colors → Include "Color System" principle with semantic pattern
|
||||
- IF spacing_pattern detected → Include "Spatial Rhythm" with unified scale description (actual token values)
|
||||
- IF has_typography_hierarchy → Include "Typographic System" with scale examples
|
||||
- IF has_radius → Include "Shape Language" with style characteristic
|
||||
- IF has_shadows → Include "Depth & Elevation" with elevation pattern
|
||||
- IF has_animations → Include "Motion & Timing" with duration range
|
||||
- ALWAYS include "Accessibility First" principle
|
||||
|
||||
**Level 2** (20K): Complete system with animations and preview
|
||||
```
|
||||
Load Level 2 for complete design system with preview reference
|
||||
```
|
||||
|
||||
### Common Use Cases
|
||||
|
||||
**Implementing UI Components**:
|
||||
- Load Level 1 for universal layout templates
|
||||
- **Only reference components with `component_type: "universal"`** in layout-templates.json
|
||||
- Apply design tokens from design-tokens.json
|
||||
- Adapt patterns to your project's specific requirements
|
||||
|
||||
**Ensuring Style Consistency**:
|
||||
- Load Level 0 for design tokens
|
||||
- Use design-tokens.json for colors, typography, spacing
|
||||
- Check preview.html for visual reference (universal components only)
|
||||
|
||||
**Analyzing Component Patterns**:
|
||||
- Load Level 2 for complete analysis
|
||||
- Review layout-templates.json for component architecture
|
||||
- **Filter for `component_type: "universal"` to exclude project-specific implementations**
|
||||
- Check preview.html for implementation examples
|
||||
|
||||
**Animation Development**:
|
||||
- Load Level 2 for animation tokens (if available)
|
||||
- Reference animation-tokens.json for durations and easing
|
||||
- Apply consistent timing and transitions
|
||||
|
||||
**⚠️ Critical Usage Rule**:
|
||||
This is a **project-independent style reference system**. When working with layout-templates.json:
|
||||
- **USE**: Components marked `component_type: "universal"` as reusable reference patterns
|
||||
- **IGNORE**: Components marked `component_type: "specialized"` (project-specific implementations)
|
||||
- **ADAPT**: All patterns should be customized for your specific project needs
|
||||
|
||||
---
|
||||
|
||||
## Package Structure
|
||||
|
||||
```
|
||||
.workflow/reference_style/{package_name}/
|
||||
├── layout-templates.json # Layout templates from codebase
|
||||
├── design-tokens.json # Design token system
|
||||
├── animation-tokens.json # Animation tokens (optional)
|
||||
├── preview.html # Interactive showcase
|
||||
└── preview.css # Showcase styling
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Regeneration
|
||||
|
||||
To update this SKILL memory after package changes:
|
||||
|
||||
```bash
|
||||
/memory:style-skill-memory {package_name} --regenerate
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Related Commands
|
||||
|
||||
**Generate Package**:
|
||||
```bash
|
||||
/workflow:ui-design:codify-style --source ./src --package-name {package_name}
|
||||
```
|
||||
|
||||
**Update Package**:
|
||||
Re-run codify-style with same package name to update extraction.
|
||||
```
|
||||
3. **Design Token Values** (iterate from read data):
|
||||
- Colors: Iterate `DESIGN_TOKENS_DATA.colors`
|
||||
- Typography: Iterate `DESIGN_TOKENS_DATA.typography`
|
||||
- Spacing: Iterate `DESIGN_TOKENS_DATA.spacing`
|
||||
- Border Radius: Iterate `DESIGN_TOKENS_DATA.border_radius` with calc() explanations
|
||||
- Shadows: Iterate `DESIGN_TOKENS_DATA.shadows` with DEFAULT token annotations
|
||||
- Animations (if available): Iterate `ANIMATION_TOKENS_DATA.duration` and `ANIMATION_TOKENS_DATA.easing`
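
One possible way to render those token tables, assuming the top-level keys of design-tokens.json match the categories listed (a sketch, not the prescribed implementation):

```bash
# Sketch: render one "Design Token Values" table per category.
tokens=".workflow/reference_style/${package_name}/design-tokens.json"

for category in colors typography spacing border_radius shadows; do
  echo "### ${category}"
  echo "| Token | Value |"
  echo "|-------|-------|"
  jq -r --arg c "$category" '.[$c] // {} | to_entries[] | "| \(.key) | `\(.value)` |"' "$tokens"
  echo
done
```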
|
||||
|
||||
**Step 4: Verify SKILL.md Created**
|
||||
|
||||
@@ -609,115 +325,27 @@ Re-run codify-style with same package name to update extraction.
|
||||
bash(test -f .claude/skills/style-${package_name}/SKILL.md && echo "success" || echo "failed")
|
||||
```
|
||||
|
||||
**TodoWrite Update**:
|
||||
```json
|
||||
[
|
||||
{"content": "Validate style reference package", "status": "completed", "activeForm": "Validating package"},
|
||||
{"content": "Read package data and extract design references", "status": "completed", "activeForm": "Reading package data"},
|
||||
{"content": "Generate SKILL.md with progressive loading", "status": "completed", "activeForm": "Generating SKILL.md"}
|
||||
]
|
||||
```
|
||||
|
||||
**Final Action**: Report completion summary to user
|
||||
**TodoWrite Update**: Mark all todos as completed
|
||||
|
||||
---
|
||||
|
||||
## Completion Message
|
||||
### Completion Message
|
||||
|
||||
Display extracted primary design references to user:
|
||||
Display a simple completion message with key information:
|
||||
|
||||
```
|
||||
✅ SKILL memory generated successfully!
|
||||
✅ SKILL memory generated for style package: {package_name}
|
||||
|
||||
Package: {package_name}
|
||||
SKILL Location: .claude/skills/style-{package_name}/SKILL.md
|
||||
📁 Location: .claude/skills/style-{package_name}/SKILL.md
|
||||
|
||||
📦 Package Details:
|
||||
- Layout Templates: {component_count} total
|
||||
- Universal (reusable): {universal_count}
|
||||
- Specialized (project-specific): {specialized_count}
|
||||
- Universal Component Types: {show first 5 UNIVERSAL_COMPONENTS, then "+ X more"}
|
||||
- Files: {file_count}
|
||||
- Animation Tokens: {has_animations ? "✓ Available" : "Not available"}
|
||||
📊 Package Summary:
|
||||
- {component_count} components ({universal_count} universal, {specialized_count} specialized)
|
||||
- Design tokens: colors, typography, spacing, shadows{animations_note}
|
||||
|
||||
🎨 Primary Design References Extracted:
|
||||
{IF PRIMARY_COLORS exists:
|
||||
Colors:
|
||||
{show first 3 PRIMARY_COLORS with key: value}
|
||||
{if more than 3: + X more colors}
|
||||
}
|
||||
|
||||
{IF TYPOGRAPHY_FONTS exists:
|
||||
Typography:
|
||||
{show all TYPOGRAPHY_FONTS}
|
||||
}
|
||||
|
||||
{IF SPACING_SCALE exists:
|
||||
Spacing Scale:
|
||||
{show first 3 SPACING_SCALE items}
|
||||
{if more than 3: + X more spacing tokens}
|
||||
}
|
||||
|
||||
{IF BORDER_RADIUS exists:
|
||||
Border Radius:
|
||||
{show all BORDER_RADIUS}
|
||||
}
|
||||
|
||||
{IF HAS_ANIMATIONS:
|
||||
Animation:
|
||||
Durations: {count ANIMATION_DURATIONS} tokens
|
||||
Easing: {count EASING_FUNCTIONS} functions
|
||||
}
|
||||
|
||||
⚡ Progressive Loading Levels:
|
||||
- Level 0: Design Tokens (~5K tokens)
|
||||
- Level 1: Tokens + Layout Templates (~12K tokens)
|
||||
- Level 2: Complete System (~20K tokens)
|
||||
|
||||
💡 Usage:
|
||||
Load design system context when working with:
|
||||
- UI component implementation
|
||||
- Layout pattern analysis
|
||||
- Design token application
|
||||
- Style consistency validation
|
||||
|
||||
⚠️ IMPORTANT - Project Independence:
|
||||
This is a **project-independent style reference system**:
|
||||
- Only use universal components (component_type: "universal") as reference patterns
|
||||
- Ignore specialized components (component_type: "specialized") - they are project-specific
|
||||
- The extracted design references are REFERENCE VALUES, not fixed constraints
|
||||
- Dynamically adjust colors, spacing, typography, and other tokens based on:
|
||||
- Brand requirements and accessibility needs
|
||||
- Platform-specific conventions and optimizations
|
||||
- Context (light/dark mode, responsive breakpoints)
|
||||
- User feedback and testing results
|
||||
|
||||
See SKILL.md for detailed adjustment guidelines and component filtering instructions.
|
||||
|
||||
🎯 Preview:
|
||||
Open interactive showcase:
|
||||
file://{absolute_path}/.workflow/reference_style/{package_name}/preview.html
|
||||
|
||||
📋 Next Steps:
|
||||
1. Load appropriate level based on your task context
|
||||
2. Review Primary Design References section for key design tokens
|
||||
3. Apply design tokens with dynamic adjustments as needed
|
||||
4. Reference layout-templates.json for component structures
|
||||
5. Use Design Adaptation Strategies when modifying tokens
|
||||
💡 Usage: /memory:load-skill-memory style-{package_name} "your task description"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Error Handling
|
||||
|
||||
### Common Errors
|
||||
|
||||
| Error | Cause | Resolution |
|
||||
|-------|-------|------------|
|
||||
| Package not found | Invalid package name or package doesn't exist | Run codify-style first to create package |
|
||||
| SKILL already exists | SKILL.md already generated | Use --regenerate to force regeneration |
|
||||
| Missing layout-templates.json | Incomplete package | Verify package integrity, re-run codify-style |
|
||||
| Invalid JSON format | Corrupted package files | Regenerate package with codify-style |
|
||||
Variables: `{package_name}`, `{component_count}`, `{universal_count}`, `{specialized_count}`, `{animations_note}` (", animations" if exists)
|
||||
|
||||
---
|
||||
|
||||
@@ -727,144 +355,42 @@ Open interactive showcase:
|
||||
|
||||
1. **Check Before Generate**: Verify package exists before attempting SKILL generation
|
||||
2. **Respect Existing SKILL**: Don't overwrite unless --regenerate flag provided
|
||||
3. **Extract Primary References**: Always extract and display key design values (colors, typography, spacing, border radius, shadows, animations)
|
||||
4. **Include Adjustment Guidance**: Provide clear guidelines on when and how to dynamically adjust design tokens
|
||||
5. **Progressive Loading**: Always include all 3 levels (0-2) with clear token estimates
|
||||
6. **Intelligent Description**: Extract component count and key features from metadata
|
||||
3. **Load Templates via cat**: Use `cat ~/.claude/workflows/cli-templates/memory/style-skill-memory/{template}` to load templates
|
||||
4. **Variable Substitution**: Replace all `{variable}` placeholders with actual values
|
||||
5. **Technical Feature Detection**: Analyze tokens for modern features (oklch, calc, dark mode) and generate appropriate Prerequisites section
|
||||
6. **Dynamic Content Generation**: Generate sections based on DESIGN_ANALYSIS characteristics
|
||||
7. **Unified Spacing Scale**: Use actual token values as primary scale reference, avoid contradictory pattern descriptions
|
||||
8. **Direct Iteration**: Iterate data structures (DESIGN_TOKENS_DATA, etc.) for token values
|
||||
9. **Annotate Special Tokens**: Add comments for DEFAULT tokens and calc() expressions
|
||||
10. **Embed jq Commands**: Include bash/jq commands in SKILL.md for dynamic loading
|
||||
11. **Progressive Loading**: Include all 3 levels (0-2) with specific jq commands
|
||||
12. **Complete Examples**: Include end-to-end implementation examples (React components)
|
||||
13. **Intelligent Description**: Extract component count and key features from metadata
|
||||
14. **Emphasize Flexibility**: Strongly warn against rigid copying - values are references for creative adaptation
|
||||
|
||||
### SKILL Description Format
|
||||
### Template Files Location
|
||||
|
||||
**Template**:
|
||||
```
|
||||
{package_name} project-independent design system with {universal_count} universal layout templates and interactive preview (located at .workflow/reference_style/{package_name}). Load when working with reusable UI components, design tokens, layout patterns, or implementing visual consistency. Excludes {specialized_count} project-specific components.
|
||||
```
|
||||
|
||||
**Required Elements**:
|
||||
- Package name
|
||||
- Universal layout template count (emphasize reusability)
|
||||
- Project independence statement
|
||||
- Specialized component exclusion notice
|
||||
- Location (full path)
|
||||
- Trigger keywords (reusable UI components, design tokens, layout patterns, visual consistency)
|
||||
- Action verbs (working with, analyzing, implementing)
|
||||
|
||||
### Primary Design References Extraction
|
||||
|
||||
**Required Data Extraction** (from design-tokens.json):
|
||||
- Colors: Primary, secondary, accent colors (top 3-5)
|
||||
- Typography: Font families for headings and body text
|
||||
- Spacing Scale: Base spacing values (xs, sm, md, lg, xl)
|
||||
- Border Radius: All radius tokens
|
||||
- Shadows: Shadow definitions (top 3 elevation levels)
|
||||
|
||||
**Component Classification Extraction** (from layout-templates.json):
|
||||
- Universal Count: Number of components with `component_type: "universal"`
|
||||
- Specialized Count: Number of components with `component_type: "specialized"`
|
||||
- Universal Component Names: List of universal component names (first 10)
|
||||
|
||||
**Optional Data Extraction** (from animation-tokens.json if available):
|
||||
- Animation Durations: All duration tokens
|
||||
- Easing Functions: Top 3 easing functions
|
||||
|
||||
**Extraction Format**:
|
||||
Use `jq` to extract tokens from JSON files. Each token should include key and value.
|
||||
For component classification, filter by `component_type` field.
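
For example, a single `jq` filter keeps only the universal components:

```bash
# Example: drop specialized entries, keep component_type == "universal".
jq '.layout_templates | with_entries(select(.value.component_type == "universal"))' \
  .workflow/reference_style/${package_name}/layout-templates.json
```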
|
||||
|
||||
### Dynamic Adjustment Guidelines
|
||||
|
||||
**Include in SKILL.md**:
|
||||
1. **Usage Guidelines per Category**: Specific guidance for each token category
|
||||
2. **Adjustment Strategies**: When to adjust design references
|
||||
3. **Practical Examples**: Context-specific adaptation scenarios
|
||||
4. **Best Practices**: How to maintain design system coherence while adjusting
|
||||
|
||||
### Progressive Loading Structure
|
||||
|
||||
**Level 0** (~5K tokens):
|
||||
- design-tokens.json
|
||||
|
||||
**Level 1** (~12K tokens):
|
||||
- Level 0 files
|
||||
- layout-templates.json
|
||||
|
||||
**Level 2** (~20K tokens):
|
||||
- Level 1 files
|
||||
- animation-tokens.json (if exists)
|
||||
- preview.html
|
||||
- preview.css
|
||||
|
||||
---
|
||||
|
||||
## Benefits
|
||||
|
||||
- **Project Independence**: Clear separation between universal (reusable) and specialized (project-specific) components
|
||||
- **Component Filtering**: Automatic classification helps identify which patterns are truly reusable
|
||||
- **Fast Context Loading**: Progressive levels for efficient token usage
|
||||
- **Primary Design References**: Extracted key design values (colors, typography, spacing, etc.) displayed prominently
|
||||
- **Dynamic Adjustment Guidance**: Clear instructions on when and how to adjust design tokens
|
||||
- **Intelligent Triggering**: Keywords optimize SKILL activation
|
||||
- **Complete Reference**: All package files accessible through SKILL
|
||||
- **Easy Regeneration**: Simple --regenerate flag for updates
|
||||
- **Clear Structure**: Organized levels by use case with component type filtering
|
||||
- **Practical Usage Guidelines**: Context-specific adjustment strategies and component selection criteria
|
||||
|
||||
---
|
||||
|
||||
## Architecture
|
||||
|
||||
```
|
||||
style-skill-memory
|
||||
├─ Phase 1: Validate
|
||||
│ ├─ Parse package name from argument or auto-detect
|
||||
│ ├─ Check package exists in .workflow/reference_style/
|
||||
│ └─ Check if SKILL already exists (skip if exists and no --regenerate)
|
||||
│
|
||||
├─ Phase 2: Read Package Data & Extract Primary References
|
||||
│ ├─ Count components from layout-templates.json
|
||||
│ ├─ Extract component types list
|
||||
│ ├─ Extract primary colors from design-tokens.json (top 3-5)
|
||||
│ ├─ Extract typography (font families)
|
||||
│ ├─ Extract spacing scale (base values)
|
||||
│ ├─ Extract border radius tokens
|
||||
│ ├─ Extract shadow definitions (top 3)
|
||||
│ ├─ Extract animation tokens (if available)
|
||||
│ └─ Count total files in package
|
||||
│
|
||||
└─ Phase 3: Generate SKILL.md
|
||||
├─ Create SKILL directory
|
||||
├─ Generate intelligent description with keywords
|
||||
├─ Write SKILL.md with complete structure:
|
||||
│ ├─ Package Overview
|
||||
│ ├─ Primary Design References
|
||||
│ │ ├─ Colors with usage guidelines
|
||||
│ │ ├─ Typography with usage guidelines
|
||||
│ │ ├─ Spacing with usage guidelines
|
||||
│ │ ├─ Border Radius with usage guidelines
|
||||
│ │ ├─ Shadows with usage guidelines
|
||||
│ │ └─ Animation & Timing (if available)
|
||||
│ ├─ Design Adaptation Strategies
|
||||
│ │ ├─ When to adjust design references
|
||||
│ │ └─ How to apply adjustments
|
||||
│ ├─ Progressive Loading (3 levels)
|
||||
│ ├─ Interactive Preview
|
||||
│ ├─ Usage Guidelines
|
||||
│ ├─ Package Structure
|
||||
│ ├─ Regeneration
|
||||
│ └─ Related Commands
|
||||
├─ Verify SKILL.md created successfully
|
||||
└─ Display completion message with extracted design references
|
||||
Phase 1: Validate
|
||||
├─ Parse package_name
|
||||
├─ Check PACKAGE_DIR exists
|
||||
└─ Check SKILL_DIR exists (skip if exists and no --regenerate)
|
||||
|
||||
Data Flow:
|
||||
design-tokens.json → jq extraction → PRIMARY_COLORS, TYPOGRAPHY_FONTS,
|
||||
SPACING_SCALE, BORDER_RADIUS, SHADOWS
|
||||
animation-tokens.json → jq extraction → ANIMATION_DURATIONS, EASING_FUNCTIONS
|
||||
layout-templates.json → jq extraction → COMPONENT_COUNT, UNIVERSAL_COUNT,
|
||||
SPECIALIZED_COUNT, UNIVERSAL_COMPONENTS
|
||||
→ component_type filtering → Universal vs Specialized classification
|
||||
Phase 2: Read & Analyze
|
||||
├─ Read design-tokens.json → DESIGN_TOKENS_DATA
|
||||
├─ Read layout-templates.json → LAYOUT_TEMPLATES_DATA
|
||||
├─ Read animation-tokens.json → ANIMATION_TOKENS_DATA (if exists)
|
||||
├─ Extract Metadata → COMPONENT_COUNT, UNIVERSAL_COUNT, etc.
|
||||
└─ Analyze Design System → DESIGN_ANALYSIS (characteristics)
|
||||
|
||||
Extracted data → SKILL.md generation → Primary Design References section
|
||||
→ Component Classification section
|
||||
→ Dynamic Adjustment Guidelines
|
||||
→ Project Independence warnings
|
||||
→ Completion message display
|
||||
Phase 3: Generate
|
||||
├─ Create SKILL directory
|
||||
├─ Generate intelligent description
|
||||
├─ Load SKILL.md template (cat command)
|
||||
├─ Replace variables and generate dynamic content
|
||||
├─ Write SKILL.md
|
||||
├─ Verify creation
|
||||
├─ Load completion message template (cat command)
|
||||
└─ Display completion message
|
||||
```
|
||||
|
||||
@@ -128,8 +128,8 @@ Generate a complete tech stack SKILL package with Exa research.
1. **Extract Tech Stack Information**:

   IF MODE == 'session':
     - Read `.workflow/{SESSION_ID}/workflow-session.json`
     - Read `.workflow/{SESSION_ID}/.process/context-package.json`
     - Read `.workflow/active/{session_id}/workflow-session.json`
     - Read `.workflow/active/{session_id}/.process/context-package.json`
     - Extract tech_stack: {language, frameworks, libraries}
     - Build tech stack name: \"{language}-{framework1}-{framework2}\"
     - Example: \"typescript-react-nextjs\"
|
||||
|
||||

@@ -7,6 +7,14 @@ allowed-tools: Read(*), Write(*), Edit(*), TodoWrite(*), Glob(*), Bash(*)

# Task Replan Command (/task:replan)

> **⚠️ DEPRECATION NOTICE**: This command is maintained for backward compatibility. For new workflows, use `/workflow:replan` which provides:
> - Session-level replanning with comprehensive artifact updates
> - Interactive boundary clarification
> - Updates to IMPL_PLAN.md, TODO_LIST.md, and session metadata
> - Better integration with workflow sessions
>
> **Migration**: Replace `/task:replan IMPL-1 "changes"` with `/workflow:replan IMPL-1 "changes"`

## Overview
Replans individual tasks or batch processes multiple tasks with change tracking and backup management.

@@ -14,9 +22,6 @@ Replans individual tasks or batch processes multiple tasks with change tracking
- **Single Task Mode**: Replan one task with specific changes
- **Batch Mode**: Process multiple tasks from action-plan verification report

## Core Principles
**Task System:** @~/.claude/workflows/task-core.md

## Key Features
- **Single/Batch Operations**: Single task or multiple tasks from verification report
- **Multiple Input Sources**: Text, files, or verification report
@@ -274,7 +279,7 @@ Backup saved to .task/backup/IMPL-2-v1.0.json
|
||||
|
||||
### Batch Mode - From Verification Report
|
||||
```bash
|
||||
/task:replan --batch .workflow/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md
|
||||
/task:replan --batch .workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md
|
||||
|
||||
Parsing verification report...
|
||||
Found 4 tasks requiring replanning:
|
||||
|
||||
@@ -32,7 +32,7 @@ Identify inconsistencies, duplications, ambiguities, and underspecified items be
|
||||
IF --session parameter provided:
|
||||
session_id = provided session
|
||||
ELSE:
|
||||
CHECK: .workflow/.active-* marker files
|
||||
CHECK: find .workflow/active/ -name "WFS-*" -type d
|
||||
IF active_session EXISTS:
|
||||
session_id = get_active_session()
|
||||
ELSE:
|
||||
@@ -40,7 +40,7 @@ ELSE:
|
||||
EXIT
|
||||
|
||||
# Derive absolute paths
|
||||
session_dir = .workflow/WFS-{session}
|
||||
session_dir = .workflow/active/WFS-{session}
|
||||
brainstorm_dir = session_dir/.brainstorming
|
||||
task_dir = session_dir/.task
|
||||
|
||||
@@ -333,7 +333,7 @@ Output a Markdown report (no file writes) with the following structure:
|
||||
|
||||
#### TodoWrite-Based Remediation Workflow
|
||||
|
||||
**Report Location**: `.workflow/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md`
|
||||
**Report Location**: `.workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md`
|
||||
|
||||
**Recommended Workflow**:
|
||||
1. **Create TodoWrite Task List**: Extract all findings from report
|
||||
@@ -361,7 +361,7 @@ Priority Order:
|
||||
|
||||
**Save Analysis Report**:
|
||||
```bash
|
||||
report_path = ".workflow/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md"
|
||||
report_path = ".workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md"
|
||||
Write(report_path, full_report_content)
|
||||
```
|
||||
|
||||
@@ -404,12 +404,12 @@ TodoWrite([
|
||||
**File Modification Workflow**:
|
||||
```bash
|
||||
# For task JSON modifications:
|
||||
1. Read(.workflow/WFS-{session}/.task/IMPL-X.Y.json)
|
||||
1. Read(.workflow/active/WFS-{session}/.task/IMPL-X.Y.json)
|
||||
2. Edit() to apply fixes
|
||||
3. Mark todo as completed
|
||||
|
||||
# For IMPL_PLAN modifications:
|
||||
1. Read(.workflow/WFS-{session}/IMPL_PLAN.md)
|
||||
1. Read(.workflow/active/WFS-{session}/IMPL_PLAN.md)
|
||||
2. Edit() to apply strategic changes
|
||||
3. Mark todo as completed
|
||||
```
|
||||
|
||||
@@ -46,10 +46,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
|
||||
### Phase 1: Session & Framework Detection
|
||||
```bash
|
||||
# Check active session and framework
|
||||
CHECK: .workflow/.active-* marker files
|
||||
CHECK: find .workflow/active/ -name "WFS-*" -type d
|
||||
IF active_session EXISTS:
|
||||
session_id = get_active_session()
|
||||
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
|
||||
brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/
|
||||
|
||||
CHECK: brainstorm_dir/guidance-specification.md
|
||||
IF EXISTS:
|
||||
@@ -162,7 +162,7 @@ IF update_mode = "incremental":
|
||||
|
||||
### Output Files
|
||||
```
|
||||
.workflow/WFS-[topic]/.brainstorming/
|
||||
.workflow/active/WFS-[topic]/.brainstorming/
|
||||
├── guidance-specification.md # Input: Framework (if exists)
|
||||
└── api-designer/
|
||||
└── analysis.md # ★ OUTPUT: Framework-based analysis
|
||||
@@ -181,7 +181,7 @@ IF update_mode = "incremental":
|
||||
Session detection and selection:
|
||||
```bash
|
||||
# Check for active sessions
|
||||
active_sessions=$(find .workflow -name ".active-*" 2>/dev/null)
|
||||
active_sessions=$(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
|
||||
if [ multiple_sessions ]; then
|
||||
prompt_user_to_select_session()
|
||||
else
|
||||
@@ -280,7 +280,7 @@ TodoWrite tracking for two-step process:
|
||||
|
||||
### Output Location
|
||||
```
|
||||
.workflow/WFS-{topic-slug}/.brainstorming/api-designer/
|
||||
.workflow/active/WFS-{topic-slug}/.brainstorming/api-designer/
|
||||
├── analysis.md # Primary API design analysis
|
||||
├── api-specification.md # Detailed endpoint specifications (OpenAPI/Swagger)
|
||||
├── data-contracts.md # Request/response schemas and validation rules
|
||||
@@ -531,7 +531,7 @@ Upon completion, update `workflow-session.json`:
|
||||
"api_designer": {
|
||||
"status": "completed",
|
||||
"completed_at": "timestamp",
|
||||
"output_directory": ".workflow/WFS-{topic}/.brainstorming/api-designer/",
|
||||
"output_directory": ".workflow/active/WFS-{topic}/.brainstorming/api-designer/",
|
||||
"key_insights": ["endpoint_design", "versioning_strategy", "data_contracts"]
|
||||
}
|
||||
}
|
||||
|
||||
@@ -10,7 +10,7 @@ allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*)
|
||||
Six-phase workflow: **Automatic project context collection** → Extract topic challenges → Select roles → Generate task-specific questions → Detect conflicts → Generate confirmed guidance (declarative statements only).
|
||||
|
||||
**Input**: `"GOAL: [objective] SCOPE: [boundaries] CONTEXT: [background]" [--count N]`
|
||||
**Output**: `.workflow/WFS-{topic}/.brainstorming/guidance-specification.md` (CONFIRMED/SELECTED format)
|
||||
**Output**: `.workflow/active/WFS-{topic}/.brainstorming/guidance-specification.md` (CONFIRMED/SELECTED format)
|
||||
**Core Principle**: Questions dynamically generated from project context + topic keywords/challenges, NOT from generic templates
|
||||
|
||||
**Parameters**:
|
||||
@@ -32,7 +32,7 @@ Six-phase workflow: **Automatic project context collection** → Extract topic c
|
||||
**Standalone Mode**:
|
||||
```json
|
||||
[
|
||||
{"content": "Initialize session (.workflow/.active-* check, parse --count parameter)", "status": "pending", "activeForm": "Initializing"},
|
||||
{"content": "Initialize session (.workflow/active/ session check, parse --count parameter)", "status": "pending", "activeForm": "Initializing"},
|
||||
{"content": "Phase 0: Automatic project context collection (call context-gather)", "status": "pending", "activeForm": "Phase 0 context collection"},
|
||||
{"content": "Phase 1: Extract challenges, output 2-4 task-specific questions, wait for user input", "status": "pending", "activeForm": "Phase 1 topic analysis"},
|
||||
{"content": "Phase 2: Recommend count+2 roles, output role selection, wait for user input", "status": "pending", "activeForm": "Phase 2 role selection"},
|
||||
@@ -133,7 +133,7 @@ b) {role-name} ({中文名})
|
||||
## Execution Phases
|
||||
|
||||
### Session Management
|
||||
- Check `.workflow/.active-*` markers first
|
||||
- Check `.workflow/active/` for existing sessions
|
||||
- Multiple sessions → Prompt selection | Single → Use it | None → Create `WFS-[topic-slug]`
|
||||
- Parse `--count N` parameter from user input (default: 3 if not specified)
|
||||
- Store decisions in `workflow-session.json` including count parameter
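
A minimal sketch of the `--count` parsing, assuming the raw command arguments are available in a `user_input` variable (an illustrative name, not part of the command spec):

```bash
# Sketch: parse an optional "--count N" from the user input; default to 3.
count=3
if [[ "$user_input" =~ --count[[:space:]]+([0-9]+) ]]; then
  count="${BASH_REMATCH[1]}"
fi
echo "Using role count: $count"
```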
|
||||
@@ -145,7 +145,7 @@ b) {role-name} ({中文名})
|
||||
**Detection Mechanism** (execute first):
|
||||
```javascript
|
||||
// Check if context-package already exists
|
||||
const contextPackagePath = `.workflow/WFS-{session-id}/.process/context-package.json`;
|
||||
const contextPackagePath = `.workflow/active/WFS-{session-id}/.process/context-package.json`;
|
||||
|
||||
if (file_exists(contextPackagePath)) {
|
||||
// Validate package
|
||||
@@ -229,7 +229,7 @@ Report completion with statistics.
|
||||
|
||||
**Steps**:
|
||||
1. **Load Phase 0 context** (if available):
|
||||
- Read `.workflow/WFS-{session-id}/.process/context-package.json`
|
||||
- Read `.workflow/active/WFS-{session-id}/.process/context-package.json`
|
||||
- Extract: tech_stack, existing modules, conflict_risk, relevant files
|
||||
|
||||
2. **Deep topic analysis** (context-aware):
|
||||
@@ -449,7 +449,7 @@ FOR each selected role:
|
||||
|
||||
## Output Document Template
|
||||
|
||||
**File**: `.workflow/WFS-{topic}/.brainstorming/guidance-specification.md`
|
||||
**File**: `.workflow/active/WFS-{topic}/.brainstorming/guidance-specification.md`
|
||||
|
||||
```markdown
|
||||
# [Project] - Confirmed Guidance Specification
|
||||
@@ -596,8 +596,7 @@ ELSE:
|
||||
## File Structure
|
||||
|
||||
```
|
||||
.workflow/WFS-[topic]/
|
||||
├── .active-brainstorming
|
||||
.workflow/active/WFS-[topic]/
|
||||
├── workflow-session.json # Session metadata ONLY
|
||||
└── .brainstorming/
|
||||
└── guidance-specification.md # Full guidance content
|
||||
|
||||
@@ -85,7 +85,7 @@ This workflow runs **fully autonomously** once triggered. Phase 1 (artifacts) ha
|
||||
**Validation**:
|
||||
- guidance-specification.md created with confirmed decisions
|
||||
- workflow-session.json contains selected_roles[] (metadata only, no content duplication)
|
||||
- Session directory `.workflow/WFS-{topic}/.brainstorming/` exists
|
||||
- Session directory `.workflow/active/WFS-{topic}/.brainstorming/` exists
|
||||
|
||||
**TodoWrite Update (Phase 1 SlashCommand invoked - tasks attached)**:
|
||||
```json
|
||||
@@ -132,13 +132,13 @@ Execute {role-name} analysis for existing topic framework
|
||||
|
||||
## Context Loading
|
||||
ASSIGNED_ROLE: {role-name}
|
||||
OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/{role}/
|
||||
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/{role}/
|
||||
TOPIC: {user-provided-topic}
|
||||
|
||||
## Flow Control Steps
|
||||
1. **load_topic_framework**
|
||||
- Action: Load structured topic discussion framework
|
||||
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
|
||||
- Output: topic_framework_content
|
||||
|
||||
2. **load_role_template**
|
||||
@@ -148,7 +148,7 @@ TOPIC: {user-provided-topic}
|
||||
|
||||
3. **load_session_metadata**
|
||||
- Action: Load session metadata and original user intent
|
||||
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
|
||||
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
|
||||
- Output: session_context (contains original user prompt as PRIMARY reference)
|
||||
|
||||
4. **load_style_skill** (ONLY for ui-designer role when style_skill_package exists)
|
||||
@@ -194,7 +194,7 @@ TOPIC: {user-provided-topic}
|
||||
- guidance-specification.md path
|
||||
|
||||
**Validation**:
|
||||
- Each role creates `.workflow/WFS-{topic}/.brainstorming/{role}/analysis.md` (primary file)
|
||||
- Each role creates `.workflow/active/WFS-{topic}/.brainstorming/{role}/analysis.md` (primary file)
|
||||
- If content is large (>800 lines), may split to `analysis-1.md`, `analysis-2.md` (max 3 files total)
|
||||
- **File naming pattern**: ALL files MUST start with `analysis` prefix (use `analysis*.md` for globbing)
|
||||
- **FORBIDDEN naming**: No `recommendations.md`, `recommendations-*.md`, or any non-`analysis` prefixed files
|
||||
@@ -245,7 +245,7 @@ TOPIC: {user-provided-topic}
|
||||
**Input**: `sessionId` from Phase 1
|
||||
|
||||
**Validation**:
|
||||
- `.workflow/WFS-{topic}/.brainstorming/synthesis-specification.md` exists
|
||||
- `.workflow/active/WFS-{topic}/.brainstorming/synthesis-specification.md` exists
|
||||
- Synthesis references all role analyses
|
||||
|
||||
**TodoWrite Update (Phase 3 SlashCommand invoked - tasks attached)**:
@@ -280,7 +280,7 @@ TOPIC: {user-provided-topic}
```
Brainstorming complete for session: {sessionId}
Roles analyzed: {count}
Synthesis: .workflow/WFS-{topic}/.brainstorming/synthesis-specification.md
Synthesis: .workflow/active/WFS-{topic}/.brainstorming/synthesis-specification.md

✅ Next Steps:
1. /workflow:concept-clarify --session {sessionId} # Optional refinement
@@ -392,31 +392,31 @@ CONTEXT_VARS:

## Session Management

**⚡ FIRST ACTION**: Check for `.workflow/.active-*` markers before Phase 1
**⚡ FIRST ACTION**: Check `.workflow/active/` for existing sessions before Phase 1

**Multiple Sessions Support**:
- Different Claude instances can have different active brainstorming sessions
- If multiple active sessions found, prompt user to select
- If single active session found, use it
- If no active session exists, create `WFS-[topic-slug]`
- Different Claude instances can have different brainstorming sessions
- If multiple sessions found, prompt user to select
- If single session found, use it
- If no session exists, create `WFS-[topic-slug]`

**Session Continuity**:
- MUST use selected active session for all phases
- MUST use selected session for all phases
- Each role's context stored in session directory
- Session isolation: Each session maintains independent state

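For illustration, a hedged shell sketch of the session-detection behaviour described under Multiple Sessions Support: enumerate `.workflow/active/`, reuse a single session, ask when several exist, and otherwise create a new one. The prompting and creation steps are placeholders, not the workflow's actual commands.

```bash
# Sketch only: detect existing brainstorming sessions under .workflow/active/.
mapfile -t sessions < <(find .workflow/active -maxdepth 1 -type d -name 'WFS-*' 2>/dev/null)

case "${#sessions[@]}" in
  0) echo "No session found - create .workflow/active/WFS-<topic-slug>/" ;;
  1) echo "Using session: ${sessions[0]}" ;;
  *) echo "Multiple sessions found - ask the user to pick one:"
     printf '  %s\n' "${sessions[@]}" ;;
esac
```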
## Output Structure

**Phase 1 Output**:
- `.workflow/WFS-{topic}/.brainstorming/guidance-specification.md` (framework content)
- `.workflow/WFS-{topic}/workflow-session.json` (metadata: selected_roles[], topic, timestamps, style_skill_package)
- `.workflow/active/WFS-{topic}/.brainstorming/guidance-specification.md` (framework content)
- `.workflow/active/WFS-{topic}/workflow-session.json` (metadata: selected_roles[], topic, timestamps, style_skill_package)

**Phase 2 Output**:
- `.workflow/WFS-{topic}/.brainstorming/{role}/analysis.md` (one per role)
- `.workflow/active/WFS-{topic}/.brainstorming/{role}/analysis.md` (one per role)
- `.superdesign/design_iterations/` (ui-designer artifacts, if --style-skill provided)

**Phase 3 Output**:
- `.workflow/WFS-{topic}/.brainstorming/synthesis-specification.md` (integrated analysis)
- `.workflow/active/WFS-{topic}/.brainstorming/synthesis-specification.md` (integrated analysis)

**⚠️ Storage Separation**: Guidance content in .md files, metadata in .json (no duplication)
**⚠️ Style References**: When --style-skill provided, workflow-session.json stores style_skill_package name, ui-designer loads from `.claude/skills/style-{package-name}/`
@@ -446,8 +446,7 @@ CONTEXT_VARS:

**File Structure**:
```
.workflow/WFS-[topic]/
├── .active-brainstorming
.workflow/active/WFS-[topic]/
├── workflow-session.json # Session metadata ONLY
└── .brainstorming/
    ├── guidance-specification.md # Framework (Phase 1)

@@ -47,10 +47,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: .workflow/.active-* marker files
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
    session_id = get_active_session()
    brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
    brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/

CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
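To illustrate the framework check sketched in the pseudocode above, a hedged shell version that resolves the brainstorming directory and falls back to standalone mode when no guidance-specification.md is present. The variable names are illustrative.

```bash
# Sketch only: decide between framework_based and standalone analysis modes.
session="WFS-example-topic"
brainstorm_dir=".workflow/active/${session}/.brainstorming"

if [ -f "${brainstorm_dir}/guidance-specification.md" ]; then
    analysis_mode="framework_based"   # role prompts address the shared framework
else
    analysis_mode="standalone"        # no framework yet; role analyses stand alone
fi

echo "ANALYSIS_MODE=${analysis_mode}"
```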
@@ -87,13 +87,13 @@ Execute data-architect analysis for existing topic framework

## Context Loading
ASSIGNED_ROLE: data-architect
OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/data-architect/
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/data-architect/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}

## Flow Control Steps
1. **load_topic_framework**
   - Action: Load structured topic discussion framework
   - Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
   - Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
   - Output: topic_framework_content

2. **load_role_template**
@@ -103,7 +103,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}

3. **load_session_metadata**
   - Action: Load session metadata and existing context
   - Command: Read(.workflow/WFS-{session}/workflow-session.json)
   - Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
   - Output: session_context

## Analysis Requirements
@@ -163,7 +163,7 @@ TodoWrite({

### Framework-Based Analysis
```
.workflow/WFS-{session}/.brainstorming/data-architect/
.workflow/active/WFS-{session}/.brainstorming/data-architect/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```

@@ -208,7 +208,7 @@ TodoWrite({
  "data_architect": {
    "status": "completed",
    "framework_addressed": true,
    "output_location": ".workflow/WFS-{session}/.brainstorming/data-architect/analysis.md",
    "output_location": ".workflow/active/WFS-{session}/.brainstorming/data-architect/analysis.md",
    "framework_reference": "@../guidance-specification.md"
  }
}

@@ -27,10 +27,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: .workflow/.active-* marker files
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
    session_id = get_active_session()
    brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
    brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/

CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
@@ -67,13 +67,13 @@ Execute product-manager analysis for existing topic framework

## Context Loading
ASSIGNED_ROLE: product-manager
OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/product-manager/
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/product-manager/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}

## Flow Control Steps
1. **load_topic_framework**
   - Action: Load structured topic discussion framework
   - Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
   - Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
   - Output: topic_framework_content

2. **load_role_template**
@@ -83,7 +83,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}

3. **load_session_metadata**
   - Action: Load session metadata and existing context
   - Command: Read(.workflow/WFS-{session}/workflow-session.json)
   - Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
   - Output: session_context

## Analysis Requirements
@@ -143,7 +143,7 @@ TodoWrite({

### Framework-Based Analysis
```
.workflow/WFS-{session}/.brainstorming/product-manager/
.workflow/active/WFS-{session}/.brainstorming/product-manager/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```

@@ -188,7 +188,7 @@ TodoWrite({
  "product_manager": {
    "status": "completed",
    "framework_addressed": true,
    "output_location": ".workflow/WFS-{session}/.brainstorming/product-manager/analysis.md",
    "output_location": ".workflow/active/WFS-{session}/.brainstorming/product-manager/analysis.md",
    "framework_reference": "@../guidance-specification.md"
  }
}

@@ -27,10 +27,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: .workflow/.active-* marker files
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
    session_id = get_active_session()
    brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
    brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/

CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
@@ -67,13 +67,13 @@ Execute product-owner analysis for existing topic framework

## Context Loading
ASSIGNED_ROLE: product-owner
OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/product-owner/
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/product-owner/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}

## Flow Control Steps
1. **load_topic_framework**
   - Action: Load structured topic discussion framework
   - Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
   - Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
   - Output: topic_framework_content

2. **load_role_template**
@@ -83,7 +83,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}

3. **load_session_metadata**
   - Action: Load session metadata and existing context
   - Command: Read(.workflow/WFS-{session}/workflow-session.json)
   - Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
   - Output: session_context

## Analysis Requirements
@@ -143,7 +143,7 @@ TodoWrite({

### Framework-Based Analysis
```
.workflow/WFS-{session}/.brainstorming/product-owner/
.workflow/active/WFS-{session}/.brainstorming/product-owner/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```

@@ -188,7 +188,7 @@ TodoWrite({
  "product_owner": {
    "status": "completed",
    "framework_addressed": true,
    "output_location": ".workflow/WFS-{session}/.brainstorming/product-owner/analysis.md",
    "output_location": ".workflow/active/WFS-{session}/.brainstorming/product-owner/analysis.md",
    "framework_reference": "@../guidance-specification.md"
  }
}

@@ -27,10 +27,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
### Phase 1: Session & Framework Detection
```bash
# Check active session and framework
CHECK: .workflow/.active-* marker files
CHECK: find .workflow/active/ -name "WFS-*" -type d
IF active_session EXISTS:
    session_id = get_active_session()
    brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
    brainstorm_dir = .workflow/active/WFS-{session}/.brainstorming/

CHECK: brainstorm_dir/guidance-specification.md
IF EXISTS:
@@ -67,13 +67,13 @@ Execute scrum-master analysis for existing topic framework

## Context Loading
ASSIGNED_ROLE: scrum-master
OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/scrum-master/
OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/scrum-master/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}

## Flow Control Steps
1. **load_topic_framework**
   - Action: Load structured topic discussion framework
   - Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
   - Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
   - Output: topic_framework_content

2. **load_role_template**
@@ -83,7 +83,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}

3. **load_session_metadata**
   - Action: Load session metadata and existing context
   - Command: Read(.workflow/WFS-{session}/workflow-session.json)
   - Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
   - Output: session_context

## Analysis Requirements
@@ -143,7 +143,7 @@ TodoWrite({

### Framework-Based Analysis
```
.workflow/WFS-{session}/.brainstorming/scrum-master/
.workflow/active/WFS-{session}/.brainstorming/scrum-master/
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
```

@@ -188,7 +188,7 @@ TodoWrite({
  "scrum_master": {
    "status": "completed",
    "framework_addressed": true,
    "output_location": ".workflow/WFS-{session}/.brainstorming/scrum-master/analysis.md",
    "output_location": ".workflow/active/WFS-{session}/.brainstorming/scrum-master/analysis.md",
    "framework_reference": "@../guidance-specification.md"
  }
}
Some files were not shown because too many files have changed in this diff.