mirror of https://github.com/catlog22/Claude-Code-Workflow.git (synced 2026-03-28 20:01:17 +08:00)
refactor(workflow): remove all .active marker file references and sync documentation
Core Changes (10 files):
- commands: cli/execute.md, memory/docs.md, workflow/review.md, workflow/brainstorm/*.md
- agents: cli-execution-agent.md
- workflows: task-core.md, workflow-architecture.md

Transformations:
- Removed all .active-* marker file operations (touch/rm/find)
- Updated session discovery to directory-based (.workflow/sessions/)
- Updated directory structure examples to show sessions/ subdirectory
- Replaced marker-based state with location-based state

Reference Documentation (57 files):
- Auto-synced via analyze_commands.py script
- Includes all core file changes
- Updated command indexes (all-commands.json, by-category.json, etc.)

Migration complete: 100% of .active marker references removed.
Session state is now determined by directory location only.
@@ -190,11 +190,11 @@ cd src/auth && gemini -p "CONTEXT: @**/* @../shared/**/*" --include-directories
 
 **Session Detection**:
 ```bash
-find .workflow/ -name '.active-*' -type f
+find .workflow/sessions/ -name 'WFS-*' -type d
 ```
 
 **Output Paths**:
-- **With session**: `.workflow/WFS-{id}/.chat/{agent}-{timestamp}.md`
+- **With session**: `.workflow/sessions/WFS-{id}/.chat/{agent}-{timestamp}.md`
 - **No session**: `.workflow/.scratchpad/{agent}-{description}-{timestamp}.md`
 
 **Log Structure**:
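The marker-to-directory change in this hunk can be exercised directly. A minimal sketch, assuming the `.workflow/sessions/` layout shown above (the `WFS-demo` session name and the added `-mindepth`/`-maxdepth` bounds are illustrative, not part of the commit):

```shell
#!/bin/sh
# Sketch of directory-based session detection (assumes the
# .workflow/sessions/ layout from the diff; WFS-demo is a made-up name).
set -eu
cd "$(mktemp -d)"
mkdir -p .workflow/sessions/WFS-demo/.chat

# Old approach: find .workflow/ -name '.active-*' -type f
# New approach: a session exists iff its directory exists.
# -mindepth/-maxdepth restrict the search to top-level session dirs.
sessions=$(find .workflow/sessions/ -mindepth 1 -maxdepth 1 -name 'WFS-*' -type d)
echo "$sessions"
```

With no `.active-*` marker anywhere, the directory listing alone answers "is a session active?".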
@@ -76,8 +76,8 @@ Use `resume --last` when current task extends/relates to previous execution. See
 
 ## Workflow Integration
 
-**Session Management**: Auto-detects `.workflow/.active-*` marker
-- Active session: Save to `.workflow/WFS-[id]/.chat/execute-[timestamp].md`
+**Session Management**: Auto-detects active session from `.workflow/sessions/` directory
+- Active session: Save to `.workflow/sessions/WFS-[id]/.chat/execute-[timestamp].md`
 - No session: Create new session or save to scratchpad
 
 **Task Integration**: Load from `.task/[TASK-ID].json`, update status, generate summary
@@ -63,10 +63,10 @@ Lightweight planner that analyzes project structure, decomposes documentation wo
 bash(pwd && basename "$(pwd)" && git rev-parse --show-toplevel 2>/dev/null || pwd && date +%Y%m%d-%H%M%S)
 
 # Create session directories (replace timestamp)
-bash(mkdir -p .workflow/WFS-docs-{timestamp}/.{task,process,summaries} && touch .workflow/.active-WFS-docs-{timestamp})
+bash(mkdir -p .workflow/sessions/WFS-docs-{timestamp}/.{task,process,summaries})
 
 # Create workflow-session.json (replace values)
-bash(echo '{"session_id":"WFS-docs-{timestamp}","project":"{project} documentation","status":"planning","timestamp":"2024-01-20T14:30:22+08:00","path":".","target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' | jq '.' > .workflow/WFS-docs-{timestamp}/workflow-session.json)
+bash(echo '{"session_id":"WFS-docs-{timestamp}","project":"{project} documentation","status":"planning","timestamp":"2024-01-20T14:30:22+08:00","path":".","target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' | jq '.' > .workflow/sessions/WFS-docs-{timestamp}/workflow-session.json)
 ```
 
 ### Phase 2: Analyze Structure
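The bootstrap step above can be replayed with plain POSIX tools. A minimal sketch, assuming the new `touch`-free layout (the brace expansion is spelled out for `sh` portability, and `printf` stands in for the `jq`-built metadata, trimmed to a few fields):

```shell
#!/bin/sh
# Sketch of the new session bootstrap: no .active-* marker is created;
# the directory under .workflow/sessions/ IS the session state.
set -eu
cd "$(mktemp -d)"

ts=$(date +%Y%m%d-%H%M%S)
session=".workflow/sessions/WFS-docs-${ts}"

# Equivalent of mkdir -p ...{task,process,summaries} without bash braces.
mkdir -p "${session}/.task" "${session}/.process" "${session}/.summaries"

# Trimmed stand-in for the full workflow-session.json shown in the diff.
printf '{"session_id":"WFS-docs-%s","status":"planning","mode":"full"}\n' "$ts" \
    > "${session}/workflow-session.json"
```

Note what is absent: the old `touch .workflow/.active-WFS-docs-{timestamp}` has no replacement at all.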
@@ -458,8 +458,7 @@ api_id=$((group_count + 3))
 **Unified Structure** (single JSON replaces multiple text files):
 
 ```
-.workflow/
-├── .active-WFS-docs-{timestamp}
+.workflow/sessions/
 └── WFS-docs-{timestamp}/
     ├── workflow-session.json        # Session metadata
     ├── IMPL_PLAN.md
@@ -133,7 +133,7 @@ b) {role-name} ({中文名})
 ## Execution Phases
 
 ### Session Management
-- Check `.workflow/sessions/` for active sessions first
+- Check `.workflow/sessions/` for existing sessions
 - Multiple sessions → Prompt selection | Single → Use it | None → Create `WFS-[topic-slug]`
 - Parse `--count N` parameter from user input (default: 3 if not specified)
 - Store decisions in `workflow-session.json` including count parameter
@@ -597,7 +597,6 @@ ELSE:
 
 ```
 .workflow/sessions/WFS-[topic]/
-├── .active-brainstorming
 ├── workflow-session.json            # Session metadata ONLY
 └── .brainstorming/
     └── guidance-specification.md    # Full guidance content
@@ -392,16 +392,16 @@ CONTEXT_VARS:
 
 ## Session Management
 
-**⚡ FIRST ACTION**: Check `.workflow/sessions/` for active sessions before Phase 1
+**⚡ FIRST ACTION**: Check `.workflow/sessions/` for existing sessions before Phase 1
 
 **Multiple Sessions Support**:
-- Different Claude instances can have different active brainstorming sessions
-- If multiple active sessions found, prompt user to select
-- If single active session found, use it
-- If no active session exists, create `WFS-[topic-slug]`
+- Different Claude instances can have different brainstorming sessions
+- If multiple sessions found, prompt user to select
+- If single session found, use it
+- If no session exists, create `WFS-[topic-slug]`
 
 **Session Continuity**:
-- MUST use selected active session for all phases
+- MUST use selected session for all phases
 - Each role's context stored in session directory
 - Session isolation: Each session maintains independent state
 
@@ -447,7 +447,6 @@ CONTEXT_VARS:
 **File Structure**:
 ```
 .workflow/sessions/WFS-[topic]/
-├── .active-brainstorming
 ├── workflow-session.json            # Session metadata ONLY
 └── .brainstorming/
     ├── guidance-specification.md    # Framework (Phase 1)
@@ -186,8 +186,8 @@ IF update_mode = "incremental":
 ### ⚠️ Session Management - FIRST STEP
 Session detection and selection:
 ```bash
-# Check for active sessions
-active_sessions=$(find .workflow -name ".active-*" 2>/dev/null)
+# Check for existing sessions
+existing_sessions=$(find .workflow/sessions/ -name "WFS-*" -type d 2>/dev/null)
 if [ multiple_sessions ]; then
     prompt_user_to_select_session()
 else
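The pseudo-bash above leaves `multiple_sessions` and the prompt abstract. A minimal concrete sketch of the selection rule (multiple → prompt, single → use it, none → create): the seeded `WFS-auth-refactor` name is illustrative, and the interactive prompt is stubbed with an `echo`:

```shell
#!/bin/sh
# Sketch of the selection rule: multiple sessions -> prompt,
# single -> use it, none -> create WFS-[topic-slug].
set -eu
cd "$(mktemp -d)"
mkdir -p .workflow/sessions/WFS-auth-refactor   # illustrative seed

existing=$(find .workflow/sessions/ -mindepth 1 -maxdepth 1 -name 'WFS-*' -type d 2>/dev/null || true)
# Count non-empty result lines; grep -c still prints 0 when input is empty.
count=$(printf '%s' "$existing" | grep -c '^' || true)

if [ "$count" -gt 1 ]; then
    echo "multiple sessions found: would prompt user to select"
elif [ "$count" -eq 1 ]; then
    sessionId=$(basename "$existing")
else
    sessionId="WFS-new-topic"
    mkdir -p ".workflow/sessions/${sessionId}"
fi
echo "$sessionId"
```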
@@ -39,17 +39,17 @@ argument-hint: "[--type=security|architecture|action-items|quality] [optional: s
 if [ -n "$SESSION_ARG" ]; then
     sessionId="$SESSION_ARG"
 else
-    sessionId=$(find .workflow/ -name '.active-*' | head -1 | sed 's/.*active-//')
+    sessionId=$(find .workflow/sessions/ -name "WFS-*" -type d | head -1 | xargs basename)
 fi
 
 # Step 2: Validation
-if [ ! -d ".workflow/${sessionId}" ]; then
+if [ ! -d ".workflow/sessions/${sessionId}" ]; then
     echo "Session ${sessionId} not found"
     exit 1
 fi
 
 # Check for completed tasks
-if [ ! -d ".workflow/${sessionId}/.summaries" ] || [ -z "$(find .workflow/${sessionId}/.summaries/ -name "IMPL-*.md" -type f 2>/dev/null)" ]; then
+if [ ! -d ".workflow/sessions/${sessionId}/.summaries" ] || [ -z "$(find .workflow/sessions/${sessionId}/.summaries/ -name "IMPL-*.md" -type f 2>/dev/null)" ]; then
     echo "No completed implementation found. Complete implementation first"
     exit 1
 fi
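The resolution-plus-validation steps above can be replayed end to end. A minimal sketch that seeds one completed summary first (the `WFS-demo` session and `IMPL-001.md` file are illustrative):

```shell
#!/bin/sh
# Sketch of review preconditions: the session directory must exist and
# hold at least one IMPL-*.md summary before a review may start.
set -eu
cd "$(mktemp -d)"
mkdir -p .workflow/sessions/WFS-demo/.summaries
: > .workflow/sessions/WFS-demo/.summaries/IMPL-001.md   # fake completed task

# Resolve a session the same way the diff does: first WFS-* directory.
sessionId=$(find .workflow/sessions/ -mindepth 1 -maxdepth 1 -name 'WFS-*' -type d \
            | head -1 | xargs basename)

[ -d ".workflow/sessions/${sessionId}" ] || { echo "Session ${sessionId} not found"; exit 1; }

summaries=$(find ".workflow/sessions/${sessionId}/.summaries" -name 'IMPL-*.md' -type f)
[ -n "$summaries" ] || { echo "No completed implementation found"; exit 1; }

echo "session ${sessionId} ready for review"
```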
@@ -80,13 +80,13 @@ After bash validation, the model takes control to:
 1. **Load Context**: Read completed task summaries and changed files
    ```bash
    # Load implementation summaries
-   cat .workflow/${sessionId}/.summaries/IMPL-*.md
+   cat .workflow/sessions/${sessionId}/.summaries/IMPL-*.md
 
    # Load test results (if available)
-   cat .workflow/${sessionId}/.summaries/TEST-FIX-*.md 2>/dev/null
+   cat .workflow/sessions/${sessionId}/.summaries/TEST-FIX-*.md 2>/dev/null
 
    # Get changed files
-   git log --since="$(cat .workflow/${sessionId}/workflow-session.json | jq -r .created_at)" --name-only --pretty=format: | sort -u
+   git log --since="$(cat .workflow/sessions/${sessionId}/workflow-session.json | jq -r .created_at)" --name-only --pretty=format: | sort -u
    ```
 
 2. **Perform Specialized Review**: Based on `review_type`
@@ -99,7 +99,7 @@ After bash validation, the model takes control to:
    ```
 - Use Gemini for security analysis:
    ```bash
-   cd .workflow/${sessionId} && gemini -p "
+   cd .workflow/sessions/${sessionId} && gemini -p "
    PURPOSE: Security audit of completed implementation
    TASK: Review code for security vulnerabilities, insecure patterns, auth/authz issues
    CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
@@ -111,7 +111,7 @@ After bash validation, the model takes control to:
 **Architecture Review** (`--type=architecture`):
 - Use Qwen for architecture analysis:
    ```bash
-   cd .workflow/${sessionId} && qwen -p "
+   cd .workflow/sessions/${sessionId} && qwen -p "
    PURPOSE: Architecture compliance review
    TASK: Evaluate adherence to architectural patterns, identify technical debt, review design decisions
    CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
@@ -123,7 +123,7 @@ After bash validation, the model takes control to:
 **Quality Review** (`--type=quality`):
 - Use Gemini for code quality:
    ```bash
-   cd .workflow/${sessionId} && gemini -p "
+   cd .workflow/sessions/${sessionId} && gemini -p "
    PURPOSE: Code quality and best practices review
    TASK: Assess code readability, maintainability, adherence to best practices
    CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
@@ -136,14 +136,14 @@ After bash validation, the model takes control to:
 - Verify all requirements and acceptance criteria met:
    ```bash
    # Load task requirements and acceptance criteria
-   find .workflow/${sessionId}/.task -name "IMPL-*.json" -exec jq -r '
+   find .workflow/sessions/${sessionId}/.task -name "IMPL-*.json" -exec jq -r '
      "Task: " + .id + "\n" +
      "Requirements: " + (.context.requirements | join(", ")) + "\n" +
      "Acceptance: " + (.context.acceptance | join(", "))
    ' {} \;
 
    # Check implementation summaries against requirements
-   cd .workflow/${sessionId} && gemini -p "
+   cd .workflow/sessions/${sessionId} && gemini -p "
    PURPOSE: Verify all requirements and acceptance criteria are met
    TASK: Cross-check implementation summaries against original requirements
    CONTEXT: @.task/IMPL-*.json,.summaries/IMPL-*.md,../.. @../../CLAUDE.md
@@ -195,7 +195,7 @@ After bash validation, the model takes control to:
 4. **Output Files**:
    ```bash
    # Save review report
-   Write(.workflow/${sessionId}/REVIEW-${review_type}.md)
+   Write(.workflow/sessions/${sessionId}/REVIEW-${review_type}.md)
 
    # Update session metadata
    # (optional) Update workflow-session.json with review status
@@ -101,7 +101,7 @@
   {
     "name": "enhance-prompt",
     "command": "/enhance-prompt",
-    "description": "Enhanced prompt transformation using session memory and codebase analysis with --enhance flag detection",
+    "description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection",
     "arguments": "user input to enhance",
     "category": "general",
     "subcategory": null,
@@ -428,11 +428,33 @@
     "difficulty": "Intermediate",
     "file_path": "workflow/execute.md"
   },
+  {
+    "name": "init",
+    "command": "/workflow:init",
+    "description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
+    "arguments": "[--regenerate]",
+    "category": "workflow",
+    "subcategory": null,
+    "usage_scenario": "general",
+    "difficulty": "Intermediate",
+    "file_path": "workflow/init.md"
+  },
+  {
+    "name": "lite-execute",
+    "command": "/workflow:lite-execute",
+    "description": "Execute tasks based on in-memory plan, prompt description, or file content",
+    "arguments": "[--in-memory] [\"task description\"|file-path]",
+    "category": "workflow",
+    "subcategory": null,
+    "usage_scenario": "implementation",
+    "difficulty": "Intermediate",
+    "file_path": "workflow/lite-execute.md"
+  },
   {
     "name": "lite-plan",
     "command": "/workflow:lite-plan",
-    "description": "Lightweight interactive planning and execution workflow with in-memory planning, code exploration, and immediate execution after user confirmation",
-    "arguments": "[--tool claude|gemini|qwen|codex] [--quick] \"task description\"|file.md",
+    "description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
+    "arguments": "[-e|--explore] \"task description\"|file.md",
     "category": "workflow",
     "subcategory": null,
     "usage_scenario": "planning",
@@ -450,17 +472,6 @@
     "difficulty": "Intermediate",
     "file_path": "workflow/plan.md"
   },
-  {
-    "name": "resume",
-    "command": "/workflow:resume",
-    "description": "Resume paused workflow session with automatic progress analysis, pending task identification, and conflict detection",
-    "arguments": "session-id for workflow session to resume",
-    "category": "workflow",
-    "subcategory": null,
-    "usage_scenario": "session-management",
-    "difficulty": "Intermediate",
-    "file_path": "workflow/resume.md"
-  },
   {
     "name": "review",
     "command": "/workflow:review",
@@ -519,8 +530,8 @@
   {
     "name": "workflow:status",
     "command": "/workflow:status",
-    "description": "Generate on-demand task status views from JSON task data with optional task-id filtering for detailed view",
-    "arguments": "[optional: task-id]",
+    "description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
+    "arguments": "[optional: --project|task-id|--validate]",
     "category": "workflow",
     "subcategory": null,
     "usage_scenario": "session-management",
@@ -692,17 +703,6 @@
     "difficulty": "Intermediate",
     "file_path": "workflow/ui-design/animation-extract.md"
   },
-  {
-    "name": "capture",
-    "command": "/workflow:ui-design:capture",
-    "description": "Batch screenshot capture for UI design workflows using MCP puppeteer or local fallback with URL mapping",
-    "arguments": "--url-map \"target:url,...\" [--design-id <id>] [--session <id>]",
-    "category": "workflow",
-    "subcategory": "ui-design",
-    "usage_scenario": "general",
-    "difficulty": "Intermediate",
-    "file_path": "workflow/ui-design/capture.md"
-  },
   {
     "name": "workflow:ui-design:codify-style",
     "command": "/workflow:ui-design:codify-style",
@@ -714,6 +714,17 @@
     "difficulty": "Intermediate",
     "file_path": "workflow/ui-design/codify-style.md"
   },
+  {
+    "name": "design-sync",
+    "command": "/workflow:ui-design:design-sync",
+    "description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
+    "arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
+    "category": "workflow",
+    "subcategory": "ui-design",
+    "usage_scenario": "planning",
+    "difficulty": "Intermediate",
+    "file_path": "workflow/ui-design/design-sync.md"
+  },
   {
     "name": "explore-auto",
     "command": "/workflow:ui-design:explore-auto",
@@ -725,17 +736,6 @@
     "difficulty": "Intermediate",
     "file_path": "workflow/ui-design/explore-auto.md"
   },
-  {
-    "name": "explore-layers",
-    "command": "/workflow:ui-design:explore-layers",
-    "description": "Interactive deep UI capture with depth-controlled layer exploration using MCP puppeteer",
-    "arguments": "--url <url> --depth <1-5> [--design-id <id>] [--session <id>]",
-    "category": "workflow",
-    "subcategory": "ui-design",
-    "usage_scenario": "general",
-    "difficulty": "Intermediate",
-    "file_path": "workflow/ui-design/explore-layers.md"
-  },
   {
     "name": "generate",
     "command": "/workflow:ui-design:generate",
@@ -780,17 +780,6 @@
     "difficulty": "Intermediate",
     "file_path": "workflow/ui-design/layout-extract.md"
   },
-  {
-    "name": "list",
-    "command": "/workflow:ui-design:list",
-    "description": "List all available design runs with metadata (session, created time, prototype count)",
-    "arguments": "[--session <id>]",
-    "category": "workflow",
-    "subcategory": "ui-design",
-    "usage_scenario": "general",
-    "difficulty": "Beginner",
-    "file_path": "workflow/ui-design/list.md"
-  },
   {
     "name": "workflow:ui-design:reference-page-generator",
     "command": "/workflow:ui-design:reference-page-generator",
@@ -812,16 +801,5 @@
     "usage_scenario": "general",
     "difficulty": "Intermediate",
     "file_path": "workflow/ui-design/style-extract.md"
-  },
-  {
-    "name": "update",
-    "command": "/workflow:ui-design:update",
-    "description": "Update brainstorming artifacts with finalized design system references from selected prototypes",
-    "arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
-    "category": "workflow",
-    "subcategory": "ui-design",
-    "usage_scenario": "general",
-    "difficulty": "Intermediate",
-    "file_path": "workflow/ui-design/update.md"
   }
 ]
@@ -109,7 +109,7 @@
   {
     "name": "enhance-prompt",
    "command": "/enhance-prompt",
-    "description": "Enhanced prompt transformation using session memory and codebase analysis with --enhance flag detection",
+    "description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection",
     "arguments": "user input to enhance",
     "category": "general",
     "subcategory": null,
@@ -316,11 +316,33 @@
     "difficulty": "Intermediate",
     "file_path": "workflow/execute.md"
   },
+  {
+    "name": "init",
+    "command": "/workflow:init",
+    "description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
+    "arguments": "[--regenerate]",
+    "category": "workflow",
+    "subcategory": null,
+    "usage_scenario": "general",
+    "difficulty": "Intermediate",
+    "file_path": "workflow/init.md"
+  },
+  {
+    "name": "lite-execute",
+    "command": "/workflow:lite-execute",
+    "description": "Execute tasks based on in-memory plan, prompt description, or file content",
+    "arguments": "[--in-memory] [\"task description\"|file-path]",
+    "category": "workflow",
+    "subcategory": null,
+    "usage_scenario": "implementation",
+    "difficulty": "Intermediate",
+    "file_path": "workflow/lite-execute.md"
+  },
   {
     "name": "lite-plan",
     "command": "/workflow:lite-plan",
-    "description": "Lightweight interactive planning and execution workflow with in-memory planning, code exploration, and immediate execution after user confirmation",
-    "arguments": "[--tool claude|gemini|qwen|codex] [--quick] \"task description\"|file.md",
+    "description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
+    "arguments": "[-e|--explore] \"task description\"|file.md",
     "category": "workflow",
     "subcategory": null,
     "usage_scenario": "planning",
@@ -338,17 +360,6 @@
     "difficulty": "Intermediate",
     "file_path": "workflow/plan.md"
   },
-  {
-    "name": "resume",
-    "command": "/workflow:resume",
-    "description": "Resume paused workflow session with automatic progress analysis, pending task identification, and conflict detection",
-    "arguments": "session-id for workflow session to resume",
-    "category": "workflow",
-    "subcategory": null,
-    "usage_scenario": "session-management",
-    "difficulty": "Intermediate",
-    "file_path": "workflow/resume.md"
-  },
   {
     "name": "review",
     "command": "/workflow:review",
@@ -363,8 +374,8 @@
   {
     "name": "workflow:status",
     "command": "/workflow:status",
-    "description": "Generate on-demand task status views from JSON task data with optional task-id filtering for detailed view",
-    "arguments": "[optional: task-id]",
+    "description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
+    "arguments": "[optional: --project|task-id|--validate]",
     "category": "workflow",
     "subcategory": null,
     "usage_scenario": "session-management",
@@ -720,17 +731,6 @@
     "difficulty": "Intermediate",
     "file_path": "workflow/ui-design/animation-extract.md"
   },
-  {
-    "name": "capture",
-    "command": "/workflow:ui-design:capture",
-    "description": "Batch screenshot capture for UI design workflows using MCP puppeteer or local fallback with URL mapping",
-    "arguments": "--url-map \"target:url,...\" [--design-id <id>] [--session <id>]",
-    "category": "workflow",
-    "subcategory": "ui-design",
-    "usage_scenario": "general",
-    "difficulty": "Intermediate",
-    "file_path": "workflow/ui-design/capture.md"
-  },
   {
     "name": "workflow:ui-design:codify-style",
     "command": "/workflow:ui-design:codify-style",
@@ -742,6 +742,17 @@
     "difficulty": "Intermediate",
     "file_path": "workflow/ui-design/codify-style.md"
   },
+  {
+    "name": "design-sync",
+    "command": "/workflow:ui-design:design-sync",
+    "description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
+    "arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
+    "category": "workflow",
+    "subcategory": "ui-design",
+    "usage_scenario": "planning",
+    "difficulty": "Intermediate",
+    "file_path": "workflow/ui-design/design-sync.md"
+  },
   {
     "name": "explore-auto",
     "command": "/workflow:ui-design:explore-auto",
@@ -753,17 +764,6 @@
     "difficulty": "Intermediate",
     "file_path": "workflow/ui-design/explore-auto.md"
   },
-  {
-    "name": "explore-layers",
-    "command": "/workflow:ui-design:explore-layers",
-    "description": "Interactive deep UI capture with depth-controlled layer exploration using MCP puppeteer",
-    "arguments": "--url <url> --depth <1-5> [--design-id <id>] [--session <id>]",
-    "category": "workflow",
-    "subcategory": "ui-design",
-    "usage_scenario": "general",
-    "difficulty": "Intermediate",
-    "file_path": "workflow/ui-design/explore-layers.md"
-  },
   {
     "name": "generate",
     "command": "/workflow:ui-design:generate",
@@ -808,17 +808,6 @@
     "difficulty": "Intermediate",
     "file_path": "workflow/ui-design/layout-extract.md"
   },
-  {
-    "name": "list",
-    "command": "/workflow:ui-design:list",
-    "description": "List all available design runs with metadata (session, created time, prototype count)",
-    "arguments": "[--session <id>]",
-    "category": "workflow",
-    "subcategory": "ui-design",
-    "usage_scenario": "general",
-    "difficulty": "Beginner",
-    "file_path": "workflow/ui-design/list.md"
-  },
   {
     "name": "workflow:ui-design:reference-page-generator",
     "command": "/workflow:ui-design:reference-page-generator",
@@ -840,17 +829,6 @@
       "usage_scenario": "general",
       "difficulty": "Intermediate",
       "file_path": "workflow/ui-design/style-extract.md"
-    },
-    {
-      "name": "update",
-      "command": "/workflow:ui-design:update",
-      "description": "Update brainstorming artifacts with finalized design system references from selected prototypes",
-      "arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
-      "category": "workflow",
-      "subcategory": "ui-design",
-      "usage_scenario": "general",
-      "difficulty": "Intermediate",
-      "file_path": "workflow/ui-design/update.md"
     }
   ]
 }
@@ -71,7 +71,7 @@
     {
       "name": "enhance-prompt",
       "command": "/enhance-prompt",
-      "description": "Enhanced prompt transformation using session memory and codebase analysis with --enhance flag detection",
+      "description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection",
       "arguments": "user input to enhance",
       "category": "general",
       "subcategory": null,
@@ -244,6 +244,17 @@
       "difficulty": "Intermediate",
       "file_path": "workflow/brainstorm/ux-expert.md"
     },
+    {
+      "name": "init",
+      "command": "/workflow:init",
+      "description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
+      "arguments": "[--regenerate]",
+      "category": "workflow",
+      "subcategory": null,
+      "usage_scenario": "general",
+      "difficulty": "Intermediate",
+      "file_path": "workflow/init.md"
+    },
     {
       "name": "list",
       "command": "/workflow:session:list",
@@ -299,17 +310,6 @@
       "difficulty": "Intermediate",
       "file_path": "workflow/ui-design/animation-extract.md"
     },
-    {
-      "name": "capture",
-      "command": "/workflow:ui-design:capture",
-      "description": "Batch screenshot capture for UI design workflows using MCP puppeteer or local fallback with URL mapping",
-      "arguments": "--url-map \"target:url,...\" [--design-id <id>] [--session <id>]",
-      "category": "workflow",
-      "subcategory": "ui-design",
-      "usage_scenario": "general",
-      "difficulty": "Intermediate",
-      "file_path": "workflow/ui-design/capture.md"
-    },
     {
       "name": "explore-auto",
       "command": "/workflow:ui-design:explore-auto",
@@ -321,17 +321,6 @@
       "difficulty": "Intermediate",
       "file_path": "workflow/ui-design/explore-auto.md"
     },
-    {
-      "name": "explore-layers",
-      "command": "/workflow:ui-design:explore-layers",
-      "description": "Interactive deep UI capture with depth-controlled layer exploration using MCP puppeteer",
-      "arguments": "--url <url> --depth <1-5> [--design-id <id>] [--session <id>]",
-      "category": "workflow",
-      "subcategory": "ui-design",
-      "usage_scenario": "general",
-      "difficulty": "Intermediate",
-      "file_path": "workflow/ui-design/explore-layers.md"
-    },
     {
       "name": "imitate-auto",
       "command": "/workflow:ui-design:imitate-auto",
@@ -354,17 +343,6 @@
       "difficulty": "Intermediate",
       "file_path": "workflow/ui-design/layout-extract.md"
     },
-    {
-      "name": "list",
-      "command": "/workflow:ui-design:list",
-      "description": "List all available design runs with metadata (session, created time, prototype count)",
-      "arguments": "[--session <id>]",
-      "category": "workflow",
-      "subcategory": "ui-design",
-      "usage_scenario": "general",
-      "difficulty": "Beginner",
-      "file_path": "workflow/ui-design/list.md"
-    },
     {
       "name": "style-extract",
       "command": "/workflow:ui-design:style-extract",
@@ -375,17 +353,6 @@
       "usage_scenario": "general",
       "difficulty": "Intermediate",
       "file_path": "workflow/ui-design/style-extract.md"
-    },
-    {
-      "name": "update",
-      "command": "/workflow:ui-design:update",
-      "description": "Update brainstorming artifacts with finalized design system references from selected prototypes",
-      "arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
-      "category": "workflow",
-      "subcategory": "ui-design",
-      "usage_scenario": "general",
-      "difficulty": "Intermediate",
-      "file_path": "workflow/ui-design/update.md"
     }
   ],
   "implementation": [
@@ -444,6 +411,17 @@
       "difficulty": "Intermediate",
       "file_path": "workflow/execute.md"
     },
+    {
+      "name": "lite-execute",
+      "command": "/workflow:lite-execute",
+      "description": "Execute tasks based on in-memory plan, prompt description, or file content",
+      "arguments": "[--in-memory] [\\\"task description\\\"|file-path]",
+      "category": "workflow",
+      "subcategory": null,
+      "usage_scenario": "implementation",
+      "difficulty": "Intermediate",
+      "file_path": "workflow/lite-execute.md"
+    },
     {
       "name": "test-cycle-execute",
       "command": "/workflow:test-cycle-execute",
@@ -592,8 +570,8 @@
     {
       "name": "lite-plan",
       "command": "/workflow:lite-plan",
-      "description": "Lightweight interactive planning and execution workflow with in-memory planning, code exploration, and immediate execution after user confirmation",
-      "arguments": "[--tool claude|gemini|qwen|codex] [--quick] \\\"task description\\\"|file.md",
+      "description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
+      "arguments": "[-e|--explore] \\\"task description\\\"|file.md",
       "category": "workflow",
       "subcategory": null,
       "usage_scenario": "planning",
@@ -633,6 +611,17 @@
       "difficulty": "Intermediate",
       "file_path": "workflow/ui-design/codify-style.md"
     },
+    {
+      "name": "design-sync",
+      "command": "/workflow:ui-design:design-sync",
+      "description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
+      "arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
+      "category": "workflow",
+      "subcategory": "ui-design",
+      "usage_scenario": "planning",
+      "difficulty": "Intermediate",
+      "file_path": "workflow/ui-design/design-sync.md"
+    },
     {
       "name": "workflow:ui-design:import-from-code",
       "command": "/workflow:ui-design:import-from-code",
@@ -725,17 +714,6 @@
     }
   ],
   "session-management": [
-    {
-      "name": "resume",
-      "command": "/workflow:resume",
-      "description": "Resume paused workflow session with automatic progress analysis, pending task identification, and conflict detection",
-      "arguments": "session-id for workflow session to resume",
-      "category": "workflow",
-      "subcategory": null,
-      "usage_scenario": "session-management",
-      "difficulty": "Intermediate",
-      "file_path": "workflow/resume.md"
-    },
     {
       "name": "complete",
       "command": "/workflow:session:complete",
@@ -761,8 +739,8 @@
     {
      "name": "workflow:status",
       "command": "/workflow:status",
-      "description": "Generate on-demand task status views from JSON task data with optional task-id filtering for detailed view",
-      "arguments": "[optional: task-id]",
+      "description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
+      "arguments": "[optional: --project|task-id|--validate]",
       "category": "workflow",
       "subcategory": null,
       "usage_scenario": "session-management",
@@ -24,8 +24,8 @@
     {
       "name": "workflow:status",
       "command": "/workflow:status",
-      "description": "Generate on-demand task status views from JSON task data with optional task-id filtering for detailed view",
-      "arguments": "[optional: task-id]",
+      "description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
+      "arguments": "[optional: --project|task-id|--validate]",
       "category": "workflow",
       "subcategory": null,
       "usage_scenario": "session-management",
@@ -109,17 +109,6 @@
       "difficulty": "Intermediate",
       "file_path": "workflow/action-plan-verify.md"
     },
-    {
-      "name": "resume",
-      "command": "/workflow:resume",
-      "description": "Resume paused workflow session with automatic progress analysis, pending task identification, and conflict detection",
-      "arguments": "session-id for workflow session to resume",
-      "category": "workflow",
-      "subcategory": null,
-      "usage_scenario": "session-management",
-      "difficulty": "Intermediate",
-      "file_path": "workflow/resume.md"
-    },
     {
       "name": "review",
       "command": "/workflow:review",
@@ -145,7 +134,7 @@
     {
       "name": "enhance-prompt",
       "command": "/enhance-prompt",
-      "description": "Enhanced prompt transformation using session memory and codebase analysis with --enhance flag detection",
+      "description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection",
       "arguments": "user input to enhance",
       "category": "general",
       "subcategory": null,
@@ -190,11 +190,11 @@ cd src/auth && gemini -p "CONTEXT: @**/* @../shared/**/*" --include-directories
 
 **Session Detection**:
 ```bash
-find .workflow/ -name '.active-*' -type f
+find .workflow/sessions/ -name 'WFS-*' -type d
 ```
 
 **Output Paths**:
-- **With session**: `.workflow/WFS-{id}/.chat/{agent}-{timestamp}.md`
+- **With session**: `.workflow/sessions/WFS-{id}/.chat/{agent}-{timestamp}.md`
 - **No session**: `.workflow/.scratchpad/{agent}-{description}-{timestamp}.md`
 
 **Log Structure**:
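Editor's aside: the hunk above swaps marker files for location-based lookup. A minimal sketch of the new discovery flow (the session id `WFS-demo` and the temp root are made-up examples; the real workflow uses the repository root):

```shell
# Sketch of directory-based session discovery: a session exists iff a
# WFS-* directory exists under .workflow/sessions/ -- no .active-* markers.
set -e
root=$(mktemp -d)
mkdir -p "$root/.workflow/sessions/WFS-demo/.chat" "$root/.workflow/.scratchpad"

sessions=$(find "$root/.workflow/sessions/" -name 'WFS-*' -type d)

if [ -n "$sessions" ]; then
  # With a session: logs go under the session's .chat/ directory.
  out="$root/.workflow/sessions/WFS-demo/.chat/agent-$(date +%Y%m%d).md"
else
  # Without a session: fall back to the shared scratchpad.
  out="$root/.workflow/.scratchpad/agent-demo-$(date +%Y%m%d).md"
fi
touch "$out"
ls "$root/.workflow/sessions/WFS-demo/.chat"
rm -rf "$root"
```

Because state is derived from where directories live, "migrating" a session is just moving its directory; no marker file can go stale.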
@@ -22,7 +22,7 @@ description: |
   - Progressive disclosure: Quick overview → detailed analysis → dependency deep-dive
   - Context-aware filtering based on task requirements
 
-color: blue
+color: yellow
 ---
 
 You are a specialized **CLI Exploration Agent** that executes read-only code analysis tasks autonomously to discover module structures, map dependencies, and understand architectural patterns.
@@ -513,37 +513,19 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-
 - Use Gemini semantic analysis as tiebreaker
 - Document uncertainty in report with attribution
 
-## Integration with Other Agents
+## Available Tools & Services
 
-### As Service Provider (Called by Others)
+This agent can leverage the following tools to enhance analysis:
 
-**Planning Agents** (`action-planning-agent`, `conceptual-planning-agent`):
-- **Use Case**: Pre-planning reconnaissance to understand existing code
-- **Input**: Task description + focus areas
-- **Output**: Structural overview + dependency analysis
-- **Flow**: Planning agent → CLI explore agent (quick-scan) → Context for planning
-
-**Execution Agents** (`code-developer`, `cli-execution-agent`):
-- **Use Case**: Refactoring impact analysis before code modifications
-- **Input**: Target files/functions to modify
-- **Output**: Dependency map + risk assessment
-- **Flow**: Execution agent → CLI explore agent (dependency-map) → Safe modification strategy
-
-**UI Design Agent** (`ui-design-agent`):
-- **Use Case**: Discover existing UI components and design tokens
-- **Input**: Component directory + file patterns
-- **Output**: Component inventory + styling patterns
-- **Flow**: UI agent delegates structure analysis to CLI explore agent
-
-### As Consumer (Calls Others)
-
 **Context Search Agent** (`context-search-agent`):
 - **Use Case**: Get project-wide context before analysis
-- **Flow**: CLI explore agent → Context search agent → Enhanced analysis with full context
+- **When to use**: Need comprehensive project understanding beyond file structure
+- **Integration**: Call context-search-agent first, then use results to guide exploration
 
-**MCP Tools**:
+**MCP Tools** (Code Index):
 - **Use Case**: Enhanced file discovery and search capabilities
-- **Flow**: CLI explore agent → Code Index MCP → Faster pattern discovery
+- **When to use**: Large codebases requiring fast pattern discovery
+- **Integration**: Prefer Code Index MCP when available, fallback to rg/bash tools
 
 ## Key Reminders
 
@@ -636,52 +618,3 @@ rg "^import .*;" --type java -n
 # Find test files
 find . -name "*Test.java" -o -name "*Tests.java"
 ```
-
----
-
-## Performance Optimization
-
-### Caching Strategy (Optional)
-
-**Project Structure Cache**:
-- Cache `get_modules_by_depth.sh` output for 1 hour
-- Invalidate on file system changes (watch .git/index)
-
-**Pattern Match Cache**:
-- Cache rg results for common patterns (class/function definitions)
-- Invalidate on file modifications
-
-**Gemini Analysis Cache**:
-- Cache semantic analysis results for unchanged files
-- Key: file_path + content_hash
-- TTL: 24 hours
-
-### Parallel Execution
-
-**Quick-Scan Mode**:
-- Run rg searches in parallel (classes, functions, imports)
-- Merge results after completion
-
-**Deep-Scan Mode**:
-- Execute Bash scan (Phase 1) and Gemini setup concurrently
-- Wait for Phase 1 completion before Phase 2 (Gemini needs context)
-
-**Dependency-Map Mode**:
-- Discover imports and exports in parallel
-- Build graph after all discoveries complete
-
-### Resource Limits
-
-**File Count Limits**:
-- Quick-scan: Unlimited (filtered by relevance)
-- Deep-scan: Max 100 files for Gemini analysis
-- Dependency-map: Max 500 modules for graph construction
-
-**Timeout Limits**:
-- Quick-scan: 30 seconds (bash-only, fast)
-- Deep-scan: 5 minutes (includes Gemini CLI)
-- Dependency-map: 10 minutes (graph construction + analysis)
-
-**Memory Limits**:
-- Limit rg output to 10MB (use --max-count)
-- Stream large outputs instead of loading into memory
@@ -0,0 +1,724 @@
+---
+name: cli-lite-planning-agent
+description: |
+  Specialized agent for executing CLI planning tools (Gemini/Qwen) to generate detailed implementation plans with actionable task breakdowns. Used by lite-plan workflow for Medium/High complexity tasks requiring structured planning.
+
+  Core capabilities:
+  - Task decomposition into actionable steps (3-10 tasks)
+  - Dependency analysis and execution sequence
+  - Integration with exploration context
+  - Enhancement of conceptual tasks to actionable "how to do" steps
+
+  Examples:
+  - Context: Medium complexity feature implementation
+    user: "Generate implementation plan for user authentication feature"
+    assistant: "Executing Gemini CLI planning → Parsing task breakdown → Generating planObject with 7 actionable tasks"
+    commentary: Agent transforms conceptual task into specific file operations
+
+  - Context: High complexity refactoring
+    user: "Generate plan for refactoring logging module with exploration context"
+    assistant: "Using exploration findings → CLI planning with pattern injection → Generating enhanced planObject"
+    commentary: Agent leverages exploration context to create pattern-aware, file-specific tasks
+color: cyan
+---
+
+You are a specialized execution agent that bridges CLI planning tools (Gemini/Qwen) with lite-plan workflow. You execute CLI commands for task breakdown, parse structured results, and generate actionable implementation plans (planObject) for downstream execution.
+
+## Execution Process
+
+### Input Processing
+
+**What you receive (Context Package)**:
+```javascript
+{
+  "task_description": "User's original task description",
+  "explorationContext": {
+    "project_structure": "Overall architecture description",
+    "relevant_files": ["file1.ts", "file2.ts", "..."],
+    "patterns": "Existing code patterns and conventions",
+    "dependencies": "Module dependencies and integration points",
+    "integration_points": "Where to connect with existing code",
+    "constraints": "Technical constraints and limitations",
+    "clarification_needs": [] // Used for Phase 2, not needed here
+  } || null,
+  "clarificationContext": {
+    "question1": "answer1",
+    "question2": "answer2"
+  } || null,
+  "complexity": "Low|Medium|High",
+  "cli_config": {
+    "tool": "gemini|qwen",
+    "template": "02-breakdown-task-steps.txt",
+    "timeout": 3600000, // 60 minutes for planning
+    "fallback": "qwen"
+  }
+}
+```
+
+**Context Enrichment Strategy**:
+```javascript
+// Merge task description with exploration findings
+const enrichedContext = {
+  task_description: task_description,
+  relevant_files: explorationContext?.relevant_files || [],
+  patterns: explorationContext?.patterns || "No patterns identified",
+  dependencies: explorationContext?.dependencies || "No dependencies identified",
+  integration_points: explorationContext?.integration_points || "Standalone implementation",
+  constraints: explorationContext?.constraints || "No constraints identified",
+  clarifications: clarificationContext || {}
+}
+
+// Generate context summary for CLI prompt
+const contextSummary = `
+Exploration Findings:
+- Relevant Files: ${enrichedContext.relevant_files.join(', ')}
+- Patterns: ${enrichedContext.patterns}
+- Dependencies: ${enrichedContext.dependencies}
+- Integration: ${enrichedContext.integration_points}
+- Constraints: ${enrichedContext.constraints}
+
+User Clarifications:
+${Object.entries(enrichedContext.clarifications).map(([q, a]) => `- ${q}: ${a}`).join('\n')}
+`
+```
+
+### Execution Flow (Three-Phase)
+
+```
+Phase 1: Context Preparation & CLI Execution
+1. Validate context package and extract task context
+2. Merge task description with exploration and clarification context
+3. Construct CLI command with planning template
+4. Execute Gemini/Qwen CLI tool with timeout (60 minutes)
+5. Handle errors and fallback to alternative tool if needed
+6. Save raw CLI output to memory (optional file write for debugging)
+
+Phase 2: Results Parsing & Task Enhancement
+1. Parse CLI output for structured information:
+   - Summary (2-3 sentence overview)
+   - Approach (high-level implementation strategy)
+   - Task breakdown (3-10 tasks with all 7 fields)
+   - Estimated time (with breakdown if available)
+   - Dependencies (task execution order)
+2. Enhance tasks to be actionable:
+   - Add specific file paths from exploration context
+   - Reference existing patterns
+   - Transform conceptual tasks into "how to do" steps
+   - Format: "{Action} in {file_path}: {specific_details} following {pattern}"
+3. Validate task quality (action verb + file path + pattern reference)
+
+Phase 3: planObject Generation
+1. Build planObject structure from parsed and enhanced results
+2. Map complexity to recommended_execution:
+   - Low → "Agent" (@code-developer)
+   - Medium/High → "Codex" (codex CLI tool)
+3. Return planObject (in-memory, no file writes)
+4. Return success status to orchestrator (lite-plan)
+```
+
+## Core Functions
+
+### 1. CLI Planning Execution
+
+**Template-Based Command Construction**:
+```bash
+cd {project_root} && {cli_tool} -p "
+PURPOSE: Generate detailed implementation plan for {complexity} complexity task with structured actionable task breakdown
+TASK:
+• Analyze task requirements: {task_description}
+• Break down into 3-10 structured task objects with complete implementation guidance
+• For each task, provide:
+  - Title and target file
+  - Action type (Create|Update|Implement|Refactor|Add|Delete)
+  - Description (what to implement)
+  - Implementation steps (how to do it, 3-7 specific steps)
+  - Reference (which patterns/files to follow, with specific examples)
+  - Acceptance criteria (verification checklist)
+• Identify dependencies and execution sequence
+• Provide realistic time estimates with breakdown
+MODE: analysis
+CONTEXT: @**/* | Memory: {exploration_context_summary}
+EXPECTED: Structured plan with the following format:
+
+## Implementation Summary
+[2-3 sentence overview]
+
+## High-Level Approach
+[Strategy with pattern references]
+
+## Task Breakdown
+
+### Task 1: [Title]
+**File**: [file/path.ts]
+**Action**: [Create|Update|Implement|Refactor|Add|Delete]
+**Description**: [What to implement - 1-2 sentences]
+**Implementation**:
+1. [Specific step 1 - how to do it]
+2. [Specific step 2 - concrete action]
+3. [Specific step 3 - implementation detail]
+4. [Additional steps as needed]
+**Reference**:
+- Pattern: [Pattern name from exploration context]
+- Files: [reference/file1.ts], [reference/file2.ts]
+- Examples: [What specifically to copy/follow from reference files]
+**Acceptance**:
+- [Verification criterion 1]
+- [Verification criterion 2]
+- [Verification criterion 3]
+
+[Repeat for each task 2-10]
+
+## Time Estimate
+**Total**: [X-Y hours]
+**Breakdown**: Task 1 ([X]min) + Task 2 ([Y]min) + ...
+
+## Dependencies
+- Task 2 depends on Task 1 (requires authentication service)
+- Tasks 3-5 can run in parallel
+- Task 6 requires all previous tasks
+
+RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/02-breakdown-task-steps.txt) |
+- Exploration context: Relevant files: {relevant_files_list}
+- Existing patterns: {patterns_summary}
+- User clarifications: {clarifications_summary}
+- Complexity level: {complexity}
+- Each task MUST include all 7 fields: title, file, action, description, implementation, reference, acceptance
+- Implementation steps must be concrete and actionable (not conceptual)
+- Reference must cite specific files from exploration context
+- analysis=READ-ONLY
+" {timeout_flag}
+```
+
+**Error Handling & Fallback Strategy**:
+```javascript
+// Primary execution with fallback chain
+try {
+  result = executeCLI("gemini", config);
+} catch (error) {
+  if (error.code === 429 || error.code === 404) {
+    console.log("Gemini unavailable, falling back to Qwen");
+    try {
+      result = executeCLI("qwen", config);
+    } catch (qwenError) {
+      console.error("Both Gemini and Qwen failed");
+      // Return degraded mode with basic plan
+      return {
+        status: "degraded",
+        message: "CLI planning failed, using fallback strategy",
+        planObject: generateBasicPlan(task_description, explorationContext)
+      };
+    }
+  } else {
+    throw error;
+  }
+}
+
+// Fallback plan generation when all CLI tools fail
+function generateBasicPlan(taskDesc, exploration) {
+  const relevantFiles = exploration?.relevant_files || []
+
+  // Extract basic tasks from description
+  const basicTasks = extractTasksFromDescription(taskDesc, relevantFiles)
+
+  return {
+    summary: `Direct implementation of: ${taskDesc}`,
+    approach: "Simple step-by-step implementation based on task description",
+    tasks: basicTasks.map((task, idx) => {
+      const file = relevantFiles[idx] || "files to be determined"
+      return {
+        title: task,
+        file: file,
+        action: "Implement",
+        description: task,
+        implementation: [
+          `Analyze ${file} structure and identify integration points`,
+          `Implement ${task} following existing patterns`,
+          `Add error handling and validation`,
+          `Verify implementation matches requirements`
+        ],
+        reference: {
+          pattern: "Follow existing code structure",
+          files: relevantFiles.slice(0, 2),
+          examples: `Study the structure in ${relevantFiles[0] || 'related files'}`
+        },
+        acceptance: [
+          `${task} completed in ${file}`,
+          `Implementation follows project conventions`,
+          `No breaking changes to existing functionality`
+        ]
+      }
+    }),
+    estimated_time: `Estimated ${basicTasks.length * 30} minutes (${basicTasks.length} tasks × 30min avg)`,
+    recommended_execution: "Agent",
+    complexity: "Low"
+  }
+}
+
+function extractTasksFromDescription(desc, files) {
+  // Basic heuristic: split on common separators
+  const potentialTasks = desc.split(/[,;]|\band\b/)
+    .map(s => s.trim())
+    .filter(s => s.length > 10)
+
+  if (potentialTasks.length >= 3) {
+    return potentialTasks.slice(0, 10)
+  }
+
+  // Fallback: create generic tasks
+  return [
+    `Analyze requirements and identify implementation approach`,
+    `Implement core functionality in ${files[0] || 'main file'}`,
+    `Add error handling and validation`,
+    `Create unit tests for new functionality`,
+    `Update documentation`
+  ]
+}
```
|
|
||||||
|
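As a quick sanity check, the splitting heuristic can be exercised on its own. The function is repeated here so the snippet is standalone, and the sample task description is hypothetical:

```javascript
// Copy of the splitting heuristic above, for standalone testing.
function extractTasksFromDescription(desc, files) {
  const potentialTasks = desc.split(/[,;]|\band\b/)
    .map(s => s.trim())
    .filter(s => s.length > 10)

  if (potentialTasks.length >= 3) {
    return potentialTasks.slice(0, 10)
  }

  // Fallback: generic 5-step plan when fewer than 3 fragments survive
  return [
    `Analyze requirements and identify implementation approach`,
    `Implement core functionality in ${files[0] || 'main file'}`,
    `Add error handling and validation`,
    `Create unit tests for new functionality`,
    `Update documentation`
  ]
}

const tasks = extractTasksFromDescription(
  "add login endpoint, create session store and write integration tests",
  ["src/auth/login.ts"]
)
console.log(tasks)
// ["add login endpoint", "create session store", "write integration tests"]
```

A short description like "fix typo" yields no fragments longer than 10 characters and falls through to the generic five-task fallback, which keeps even a degraded plan at a predictable shape.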
### 2. Output Parsing & Enhancement

**Structured Task Parsing**:
```javascript
// Parse CLI output for structured tasks
function extractStructuredTasks(cliOutput) {
  const tasks = []
  const taskPattern = /### Task \d+: (.+?)\n\*\*File\*\*: (.+?)\n\*\*Action\*\*: (.+?)\n\*\*Description\*\*: (.+?)\n\*\*Implementation\*\*:\n((?:\d+\. .+?\n)+)\*\*Reference\*\*:\n((?:- .+?\n)+)\*\*Acceptance\*\*:\n((?:- .+?\n)+)/g

  let match
  while ((match = taskPattern.exec(cliOutput)) !== null) {
    // Parse implementation steps
    const implementation = match[5].trim()
      .split('\n')
      .map(s => s.replace(/^\d+\. /, ''))
      .filter(s => s.length > 0)

    // Parse reference fields
    const referenceText = match[6].trim()
    const patternMatch = /- Pattern: (.+)/m.exec(referenceText)
    const filesMatch = /- Files: (.+)/m.exec(referenceText)
    const examplesMatch = /- Examples: (.+)/m.exec(referenceText)

    const reference = {
      pattern: patternMatch ? patternMatch[1].trim() : "No pattern specified",
      files: filesMatch ? filesMatch[1].split(',').map(f => f.trim()) : [],
      examples: examplesMatch ? examplesMatch[1].trim() : "Follow general pattern"
    }

    // Parse acceptance criteria
    const acceptance = match[7].trim()
      .split('\n')
      .map(s => s.replace(/^- /, ''))
      .filter(s => s.length > 0)

    tasks.push({
      title: match[1].trim(),
      file: match[2].trim(),
      action: match[3].trim(),
      description: match[4].trim(),
      implementation: implementation,
      reference: reference,
      acceptance: acceptance
    })
  }

  return tasks
}

const parsedResults = {
  summary: extractSection("Implementation Summary"),
  approach: extractSection("High-Level Approach"),
  raw_tasks: extractStructuredTasks(cliOutput),
  time_estimate: extractSection("Time Estimate"),
  dependencies: extractSection("Dependencies")
}
```
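To see exactly what `taskPattern` expects, here is a minimal round trip against a hypothetical fragment of CLI output (the task content is invented for illustration):

```javascript
const taskPattern = /### Task \d+: (.+?)\n\*\*File\*\*: (.+?)\n\*\*Action\*\*: (.+?)\n\*\*Description\*\*: (.+?)\n\*\*Implementation\*\*:\n((?:\d+\. .+?\n)+)\*\*Reference\*\*:\n((?:- .+?\n)+)\*\*Acceptance\*\*:\n((?:- .+?\n)+)/g

// Hypothetical CLI output fragment in the expected markdown shape
const sample = [
  "### Task 1: Create AuthService",
  "**File**: src/auth/auth.service.ts",
  "**Action**: Create",
  "**Description**: JWT auth service",
  "**Implementation**:",
  "1. Define class",
  "2. Add login method",
  "3. Add token validation",
  "**Reference**:",
  "- Pattern: UserService pattern",
  "**Acceptance**:",
  "- Methods exist",
  "- Tests pass",
  ""
].join("\n")

const m = taskPattern.exec(sample)
console.log(m[1]) // "Create AuthService"
console.log(m[5].trim().split("\n").length) // 3 implementation steps
```

Because every field is anchored on a literal header line, a single missing block (for example, no `**Reference**:` section) makes the whole task silently unmatch, which is why a validation and enhancement pass is needed on top of the raw parse.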

**Validation & Enhancement**:
```javascript
// Validate and enhance tasks if CLI output is incomplete
function validateAndEnhanceTasks(rawTasks, explorationContext) {
  return rawTasks.map(taskObj => {
    // Validate required fields
    const validated = {
      title: taskObj.title || "Unnamed task",
      file: taskObj.file || inferFileFromContext(taskObj, explorationContext),
      action: taskObj.action || inferAction(taskObj.title),
      description: taskObj.description || taskObj.title,
      implementation: taskObj.implementation?.length > 0
        ? taskObj.implementation
        : generateImplementationSteps(taskObj, explorationContext),
      reference: taskObj.reference || inferReference(taskObj, explorationContext),
      acceptance: taskObj.acceptance?.length > 0
        ? taskObj.acceptance
        : generateAcceptanceCriteria(taskObj)
    }

    return validated
  })
}

// Helper functions for inference
function inferFileFromContext(taskObj, explorationContext) {
  const relevantFiles = explorationContext?.relevant_files || []
  const titleLower = taskObj.title.toLowerCase()
  const matchedFile = relevantFiles.find(f =>
    titleLower.includes(f.split('/').pop().split('.')[0].toLowerCase())
  )
  return matchedFile || "file-to-be-determined.ts"
}

function inferAction(title) {
  if (/create|add new|implement/i.test(title)) return "Create"
  if (/update|modify|change/i.test(title)) return "Update"
  if (/refactor/i.test(title)) return "Refactor"
  if (/delete|remove/i.test(title)) return "Delete"
  return "Implement"
}

function generateImplementationSteps(taskObj, explorationContext) {
  const patterns = explorationContext?.patterns || ""
  return [
    `Analyze ${taskObj.file} structure and identify integration points`,
    `Implement ${taskObj.title} following ${patterns || 'existing patterns'}`,
    `Add error handling and validation`,
    `Update related components if needed`,
    `Verify implementation matches requirements`
  ]
}

function inferReference(taskObj, explorationContext) {
  const patterns = explorationContext?.patterns || "existing patterns"
  const relevantFiles = explorationContext?.relevant_files || []

  return {
    pattern: patterns.split('.')[0] || "Follow existing code structure",
    files: relevantFiles.slice(0, 2),
    examples: `Study the structure and methods in ${relevantFiles[0] || 'related files'}`
  }
}

function generateAcceptanceCriteria(taskObj) {
  return [
    `${taskObj.title} completed in ${taskObj.file}`,
    `Implementation follows project conventions`,
    `No breaking changes to existing functionality`,
    `Code passes linting and type checks`
  ]
}
```

### 3. planObject Generation

**Structure of planObject** (returned to lite-plan):
```javascript
{
  summary: string,                 // 2-3 sentence overview from CLI
  approach: string,                // High-level strategy from CLI
  tasks: [                         // Structured task objects (3-10 items)
    {
      title: string,               // Task title (e.g., "Create AuthService")
      file: string,                // Target file path
      action: string,              // Action type: Create|Update|Implement|Refactor|Add|Delete
      description: string,         // What to implement (1-2 sentences)
      implementation: string[],    // Step-by-step how to do it (3-7 steps)
      reference: {                 // What to reference
        pattern: string,           // Pattern name (e.g., "UserService pattern")
        files: string[],           // Reference file paths
        examples: string           // Specific guidance on what to copy/follow
      },
      acceptance: string[]         // Verification criteria (2-4 items)
    }
  ],
  estimated_time: string,          // Total time estimate from CLI
  recommended_execution: string,   // "Agent" | "Codex" based on complexity
  complexity: string               // "Low" | "Medium" | "High" (from input)
}
```

**Generation Logic**:
```javascript
const planObject = {
  summary: parsedResults.summary || `Implementation plan for: ${task_description.slice(0, 100)}`,

  approach: parsedResults.approach || "Step-by-step implementation following existing patterns",

  tasks: validateAndEnhanceTasks(parsedResults.raw_tasks, explorationContext),

  estimated_time: parsedResults.time_estimate || estimateTimeFromTaskCount(parsedResults.raw_tasks.length),

  recommended_execution: mapComplexityToExecution(complexity),

  complexity: complexity // Pass through from input
}

function mapComplexityToExecution(complexity) {
  return complexity === "Low" ? "Agent" : "Codex"
}

function estimateTimeFromTaskCount(taskCount) {
  const avgMinutesPerTask = 30
  const totalMinutes = taskCount * avgMinutesPerTask
  const hours = Math.floor(totalMinutes / 60)
  const minutes = totalMinutes % 60

  if (hours === 0) {
    return `${minutes} minutes (${taskCount} tasks × ${avgMinutesPerTask}min avg)`
  }
  return `${hours}h ${minutes}m (${taskCount} tasks × ${avgMinutesPerTask}min avg)`
}
```
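For instance, the fallback time estimator (repeated here so the snippet runs on its own) produces strings like these:

```javascript
// Copy of the estimator above, for standalone testing.
function estimateTimeFromTaskCount(taskCount) {
  const avgMinutesPerTask = 30
  const totalMinutes = taskCount * avgMinutesPerTask
  const hours = Math.floor(totalMinutes / 60)
  const minutes = totalMinutes % 60

  if (hours === 0) {
    return `${minutes} minutes (${taskCount} tasks × ${avgMinutesPerTask}min avg)`
  }
  return `${hours}h ${minutes}m (${taskCount} tasks × ${avgMinutesPerTask}min avg)`
}

console.log(estimateTimeFromTaskCount(1)) // "30 minutes (1 tasks × 30min avg)"
console.log(estimateTimeFromTaskCount(5)) // "2h 30m (5 tasks × 30min avg)"
```

The estimate is only used when the CLI omits its own `Time Estimate` section, so a flat 30-minute average per task is an acceptable approximation.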

## Quality Standards

### CLI Execution Standards
- **Timeout Management**: Use dynamic timeout (3600000ms = 60min for planning)
- **Fallback Chain**: Gemini → Qwen → degraded mode (if both fail)
- **Error Context**: Include full error details in failure reports
- **Output Preservation**: Optionally save raw CLI output for debugging

### Task Object Standards

**Completeness** - Each task must have all 7 required fields:
- **title**: Clear, concise task name
- **file**: Exact file path (from exploration.relevant_files when possible)
- **action**: One of: Create, Update, Implement, Refactor, Add, Delete
- **description**: 1-2 sentence explanation of what to implement
- **implementation**: 3-7 concrete, actionable steps explaining how to do it
- **reference**: Object with pattern, files[], and examples
- **acceptance**: 2-4 verification criteria

**Implementation Quality** - Steps must be concrete, not conceptual:
- ✓ "Define AuthService class with constructor accepting UserRepository dependency"
- ✗ "Set up the authentication service"

**Reference Specificity** - Cite actual files from exploration context:
- ✓ `{pattern: "UserService pattern", files: ["src/users/user.service.ts"], examples: "Follow constructor injection and async method patterns"}`
- ✗ `{pattern: "service pattern", files: [], examples: "follow patterns"}`

**Acceptance Measurability** - Criteria must be verifiable:
- ✓ "AuthService class created with login(), logout(), validateToken() methods"
- ✗ "Service works correctly"

### Task Validation

**Validation Function**:
```javascript
function validateTaskObject(task) {
  const errors = []

  // Validate required fields
  if (!task.title || task.title.trim().length === 0) {
    errors.push("Missing title")
  }
  if (!task.file || task.file.trim().length === 0) {
    errors.push("Missing file path")
  }
  if (!task.action || !['Create', 'Update', 'Implement', 'Refactor', 'Add', 'Delete'].includes(task.action)) {
    errors.push(`Invalid action: ${task.action}`)
  }
  if (!task.description || task.description.trim().length === 0) {
    errors.push("Missing description")
  }
  if (!task.implementation || task.implementation.length < 3) {
    errors.push("Implementation must have at least 3 steps")
  }
  if (!task.reference || !task.reference.pattern) {
    errors.push("Missing pattern reference")
  }
  if (!task.acceptance || task.acceptance.length < 2) {
    errors.push("Acceptance criteria must have at least 2 items")
  }

  // Check implementation quality
  const hasConceptualSteps = task.implementation?.some(step =>
    /^(handle|manage|deal with|set up|work on)/i.test(step)
  )
  if (hasConceptualSteps) {
    errors.push("Implementation contains conceptual steps (should be concrete)")
  }

  return {
    valid: errors.length === 0,
    errors: errors
  }
}
```
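Running the validator on a deliberately weak task shows how several problems accumulate in one pass. The function is repeated (behavior unchanged) so this snippet is standalone:

```javascript
function validateTaskObject(task) {
  const errors = []

  if (!task.title || task.title.trim().length === 0) errors.push("Missing title")
  if (!task.file || task.file.trim().length === 0) errors.push("Missing file path")
  if (!task.action || !['Create', 'Update', 'Implement', 'Refactor', 'Add', 'Delete'].includes(task.action)) {
    errors.push(`Invalid action: ${task.action}`)
  }
  if (!task.description || task.description.trim().length === 0) errors.push("Missing description")
  if (!task.implementation || task.implementation.length < 3) {
    errors.push("Implementation must have at least 3 steps")
  }
  if (!task.reference || !task.reference.pattern) errors.push("Missing pattern reference")
  if (!task.acceptance || task.acceptance.length < 2) {
    errors.push("Acceptance criteria must have at least 2 items")
  }

  const hasConceptualSteps = task.implementation?.some(step =>
    /^(handle|manage|deal with|set up|work on)/i.test(step)
  )
  if (hasConceptualSteps) {
    errors.push("Implementation contains conceptual steps (should be concrete)")
  }

  return { valid: errors.length === 0, errors: errors }
}

const result = validateTaskObject({
  title: "Add authentication",
  file: "auth.ts",
  action: "Add",
  description: "Add auth",
  implementation: ["Set up authentication", "Handle login"],
  reference: { pattern: "service pattern", files: [], examples: "follow patterns" },
  acceptance: ["It works"]
})

console.log(result.valid)  // false
console.log(result.errors)
// ["Implementation must have at least 3 steps",
//  "Acceptance criteria must have at least 2 items",
//  "Implementation contains conceptual steps (should be concrete)"]
```

Note that the conceptual-step check only inspects the start of each step, so "Set up authentication" is flagged while "Carefully set up authentication" would slip through.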

**Good vs Bad Examples**:
```javascript
// ❌ BAD (Incomplete, vague)
{
  title: "Add authentication",
  file: "auth.ts",
  action: "Add",
  description: "Add auth",
  implementation: [
    "Set up authentication",
    "Handle login"
  ],
  reference: {
    pattern: "service pattern",
    files: [],
    examples: "follow patterns"
  },
  acceptance: ["It works"]
}

// ✅ GOOD (Complete, specific, actionable)
{
  title: "Create AuthService",
  file: "src/auth/auth.service.ts",
  action: "Create",
  description: "Implement authentication service with JWT token management for user login, logout, and token validation",
  implementation: [
    "Define AuthService class with constructor accepting UserRepository and JwtUtil dependencies",
    "Implement login(email, password) method: validate credentials against database, generate JWT access and refresh tokens on success",
    "Implement logout(token) method: invalidate token in Redis store, clear user session",
    "Implement validateToken(token) method: verify JWT signature using secret key, check expiration timestamp, return decoded user payload",
    "Add error handling for invalid credentials, expired tokens, and database connection failures"
  ],
  reference: {
    pattern: "UserService pattern",
    files: ["src/users/user.service.ts", "src/utils/jwt.util.ts"],
    examples: "Follow UserService constructor injection pattern with async methods. Use JwtUtil.generateToken() and JwtUtil.verifyToken() for token operations"
  },
  acceptance: [
    "AuthService class created with login(), logout(), validateToken() methods",
    "Methods follow UserService async/await pattern with try-catch error handling",
    "JWT token generation uses JwtUtil with 1h access token and 7d refresh token expiry",
    "All methods return typed responses (success/error objects)"
  ]
}
```

## Key Reminders

**ALWAYS:**
- **Validate context package**: Ensure task_description present before CLI execution
- **Handle CLI errors gracefully**: Use fallback chain (Gemini → Qwen → degraded mode)
- **Parse CLI output structurally**: Extract all 7 task fields (title, file, action, description, implementation, reference, acceptance)
- **Validate task objects**: Each task must have all required fields with quality content
- **Generate complete planObject**: All fields populated with structured task objects
- **Return in-memory result**: No file writes unless debugging
- **Preserve exploration context**: Use relevant_files and patterns in task references
- **Ensure implementation concreteness**: Steps must be actionable, not conceptual
- **Cite specific references**: Reference actual files from exploration context

**NEVER:**
- Execute implementation directly (return plan, let lite-execute handle execution)
- Skip CLI planning (always run CLI even for simple tasks, unless degraded mode)
- Return vague task objects (validate all required fields)
- Use conceptual implementation steps ("set up", "handle", "manage")
- Modify files directly (planning only, no implementation)
- Exceed timeout limits (use configured timeout value)
- Return tasks with empty reference files (cite actual exploration files)
- Skip task validation (all task objects must pass quality checks)

## Configuration & Examples

### CLI Tool Configuration

**Gemini Configuration**:
```javascript
{
  "tool": "gemini",
  "model": "gemini-2.5-pro",  // Auto-selected, no need to specify
  "templates": {
    "task-breakdown": "02-breakdown-task-steps.txt",
    "architecture-planning": "01-plan-architecture-design.txt",
    "component-design": "02-design-component-spec.txt"
  },
  "timeout": 3600000  // 60 minutes
}
```

**Qwen Configuration (Fallback)**:
```javascript
{
  "tool": "qwen",
  "model": "coder-model",  // Auto-selected
  "templates": {
    "task-breakdown": "02-breakdown-task-steps.txt",
    "architecture-planning": "01-plan-architecture-design.txt"
  },
  "timeout": 3600000  // 60 minutes
}
```

### Example Execution

**Input Context**:
```json
{
  "task_description": "Implement user authentication with JWT tokens",
  "explorationContext": {
    "project_structure": "Express.js REST API with TypeScript, layered architecture (routes → services → repositories)",
    "relevant_files": [
      "src/users/user.service.ts",
      "src/users/user.repository.ts",
      "src/middleware/cors.middleware.ts",
      "src/routes/api.ts"
    ],
    "patterns": "Service-Repository pattern used throughout. Services in src/{module}/{module}.service.ts, Repositories in src/{module}/{module}.repository.ts. Middleware follows function-based approach in src/middleware/",
    "dependencies": "Express, TypeORM, bcrypt for password hashing",
    "integration_points": "Auth service needs to integrate with existing user service and API routes",
    "constraints": "Must use existing TypeORM entities, follow established error handling patterns"
  },
  "clarificationContext": {
    "token_expiry": "1 hour access token, 7 days refresh token",
    "password_requirements": "Min 8 chars, must include number and special char"
  },
  "complexity": "Medium",
  "cli_config": {
    "tool": "gemini",
    "template": "02-breakdown-task-steps.txt",
    "timeout": 3600000
  }
}
```

**Execution Summary**:
1. **Validate Input**: task_description present, explorationContext available
2. **Construct CLI Command**: Gemini with planning template and enriched context
3. **Execute CLI**: Gemini runs and returns structured plan (timeout: 60min)
4. **Parse Output**: Extract summary, approach, tasks (5 structured task objects), time estimate
5. **Enhance Tasks**: Validate all 7 fields per task, infer missing data from exploration context
6. **Generate planObject**: Return complete plan with 5 actionable tasks

**Output planObject** (simplified):
```javascript
{
  summary: "Implement JWT-based authentication system with service layer, utilities, middleware, and route protection",
  approach: "Follow existing Service-Repository pattern. Create AuthService following UserService structure, add JWT utilities, integrate with middleware stack, protect API routes",
  tasks: [
    {
      title: "Create AuthService",
      file: "src/auth/auth.service.ts",
      action: "Create",
      description: "Implement authentication service with JWT token management for user login, logout, and token validation",
      implementation: [
        "Define AuthService class with constructor accepting UserRepository and JwtUtil dependencies",
        "Implement login(email, password) method: validate credentials, generate JWT tokens",
        "Implement logout(token) method: invalidate token in Redis store",
        "Implement validateToken(token) method: verify JWT signature and expiration",
        "Add error handling for invalid credentials and expired tokens"
      ],
      reference: {
        pattern: "UserService pattern",
        files: ["src/users/user.service.ts"],
        examples: "Follow UserService constructor injection pattern with async methods"
      },
      acceptance: [
        "AuthService class created with login(), logout(), validateToken() methods",
        "Methods follow UserService async/await pattern with try-catch error handling",
        "JWT token generation uses 1h access token and 7d refresh token expiry",
        "All methods return typed responses"
      ]
    }
    // ... 4 more tasks (JWT utilities, auth middleware, route protection, tests)
  ],
  estimated_time: "3-4 hours (1h service + 30m utils + 1h middleware + 30m routes + 1h tests)",
  recommended_execution: "Codex",
  complexity: "Medium"
}
```
|
||||||
@@ -10,7 +10,7 @@ description: |
|
|||||||
commentary: Agent encapsulates CLI execution + result parsing + task generation
|
commentary: Agent encapsulates CLI execution + result parsing + task generation
|
||||||
|
|
||||||
- Context: Coverage gap analysis
|
- Context: Coverage gap analysis
|
||||||
user: "Analyze coverage gaps and generate补充test task"
|
user: "Analyze coverage gaps and generate supplement test task"
|
||||||
assistant: "Executing CLI analysis for uncovered code paths → Generating test supplement task"
|
assistant: "Executing CLI analysis for uncovered code paths → Generating test supplement task"
|
||||||
commentary: Agent handles both analysis and task JSON generation autonomously
|
commentary: Agent handles both analysis and task JSON generation autonomously
|
||||||
color: purple
|
color: purple
|
||||||
@@ -18,12 +18,11 @@ color: purple
|
|||||||
|
|
||||||
You are a specialized execution agent that bridges CLI analysis tools with task generation. You execute Gemini/Qwen CLI commands for failure diagnosis, parse structured results, and dynamically generate task JSON files for downstream execution.
|
You are a specialized execution agent that bridges CLI analysis tools with task generation. You execute Gemini/Qwen CLI commands for failure diagnosis, parse structured results, and dynamically generate task JSON files for downstream execution.
|
||||||
|
|
||||||
## Core Responsibilities
|
**Core capabilities:**
|
||||||
|
- Execute CLI analysis with appropriate templates and context
|
||||||
1. **Execute CLI Analysis**: Run Gemini/Qwen with appropriate templates and context
|
- Parse structured results (fix strategies, root causes, modification points)
|
||||||
2. **Parse CLI Results**: Extract structured information (fix strategies, root causes, modification points)
|
- Generate task JSONs dynamically (IMPL-fix-N.json, IMPL-supplement-N.json)
|
||||||
3. **Generate Task JSONs**: Create IMPL-fix-N.json or IMPL-supplement-N.json dynamically
|
- Save detailed analysis reports (iteration-N-analysis.md)
|
||||||
4. **Save Analysis Reports**: Store detailed CLI output as iteration-N-analysis.md
|
|
||||||
|
|
||||||
## Execution Process
|
## Execution Process
|
||||||
|
|
||||||
@@ -43,7 +42,7 @@ You are a specialized execution agent that bridges CLI analysis tools with task
|
|||||||
"file": "tests/test_auth.py",
|
"file": "tests/test_auth.py",
|
||||||
"line": 45,
|
"line": 45,
|
||||||
"criticality": "high",
|
"criticality": "high",
|
||||||
"test_type": "integration" // ← NEW: L0: static, L1: unit, L2: integration, L3: e2e
|
"test_type": "integration" // L0: static, L1: unit, L2: integration, L3: e2e
|
||||||
}
|
}
|
||||||
],
|
],
|
||||||
"error_messages": ["error1", "error2"],
|
"error_messages": ["error1", "error2"],
|
||||||
@@ -61,7 +60,7 @@ You are a specialized execution agent that bridges CLI analysis tools with task
|
|||||||
"tool": "gemini|qwen",
|
"tool": "gemini|qwen",
|
||||||
"model": "gemini-3-pro-preview-11-2025|qwen-coder-model",
|
"model": "gemini-3-pro-preview-11-2025|qwen-coder-model",
|
||||||
"template": "01-diagnose-bug-root-cause.txt",
|
"template": "01-diagnose-bug-root-cause.txt",
|
||||||
"timeout": 2400000,
|
"timeout": 2400000, // 40 minutes for analysis
|
||||||
"fallback": "qwen"
|
"fallback": "qwen"
|
||||||
},
|
},
|
||||||
"task_config": {
|
"task_config": {
|
||||||
@@ -79,16 +78,16 @@ You are a specialized execution agent that bridges CLI analysis tools with task
|
|||||||
Phase 1: CLI Analysis Execution
|
Phase 1: CLI Analysis Execution
|
||||||
1. Validate context package and extract failure context
|
1. Validate context package and extract failure context
|
||||||
2. Construct CLI command with appropriate template
|
2. Construct CLI command with appropriate template
|
||||||
3. Execute Gemini/Qwen CLI tool
|
3. Execute Gemini/Qwen CLI tool with layer-specific guidance
|
||||||
4. Handle errors and fallback to alternative tool if needed
|
4. Handle errors and fallback to alternative tool if needed
|
||||||
5. Save raw CLI output to .process/iteration-N-cli-output.txt
|
5. Save raw CLI output to .process/iteration-N-cli-output.txt
|
||||||
|
|
||||||
Phase 2: Results Parsing & Strategy Extraction
|
Phase 2: Results Parsing & Strategy Extraction
|
||||||
1. Parse CLI output for structured information:
|
1. Parse CLI output for structured information:
|
||||||
- Root cause analysis
|
- Root cause analysis (RCA)
|
||||||
- Fix strategy and approach
|
- Fix strategy and approach
|
||||||
- Modification points (files, functions, line numbers)
|
- Modification points (files, functions, line numbers)
|
||||||
- Expected outcome
|
- Expected outcome and verification steps
|
||||||
2. Extract quantified requirements:
|
2. Extract quantified requirements:
|
||||||
- Number of files to modify
|
- Number of files to modify
|
||||||
- Specific functions to fix (with line numbers)
|
- Specific functions to fix (with line numbers)
|
||||||
@@ -96,7 +95,7 @@ Phase 2: Results Parsing & Strategy Extraction
|
|||||||
3. Generate structured analysis report (iteration-N-analysis.md)
|
3. Generate structured analysis report (iteration-N-analysis.md)
|
||||||
|
|
||||||
Phase 3: Task JSON Generation
|
Phase 3: Task JSON Generation
|
||||||
1. Load task JSON template (defined below)
|
1. Load task JSON template
|
||||||
2. Populate template with parsed CLI results
|
2. Populate template with parsed CLI results
|
||||||
3. Add iteration context and previous attempts
|
3. Add iteration context and previous attempts
|
||||||
4. Write task JSON to .workflow/{session}/.task/IMPL-fix-N.json
|
4. Write task JSON to .workflow/{session}/.task/IMPL-fix-N.json
|
||||||
@@ -105,9 +104,9 @@ Phase 3: Task JSON Generation
|
|||||||
|
|
||||||
## Core Functions
|
## Core Functions
|
||||||
|
|
||||||
### 1. CLI Command Construction
|
### 1. CLI Analysis Execution
|
||||||
|
|
||||||
**Template-Based Approach with Test Layer Awareness**:
|
**Template-Based Command Construction with Test Layer Awareness**:
|
||||||
```bash
|
```bash
|
||||||
cd {project_root} && {cli_tool} -p "
|
cd {project_root} && {cli_tool} -p "
|
||||||
PURPOSE: Analyze {test_type} test failures and generate fix strategy for iteration {iteration}
|
PURPOSE: Analyze {test_type} test failures and generate fix strategy for iteration {iteration}
|
||||||
@@ -151,8 +150,9 @@ const layerGuidance = {
|
|||||||
const guidance = layerGuidance[test_type] || "Analyze holistically, avoid quick patches";
|
const guidance = layerGuidance[test_type] || "Analyze holistically, avoid quick patches";
|
||||||
```
|
```
|
||||||
|
|
||||||
**Error Handling & Fallback**:
|
**Error Handling & Fallback Strategy**:
|
||||||
```javascript
|
```javascript
|
||||||
|
// Primary execution with fallback chain
|
||||||
try {
|
try {
|
||||||
result = executeCLI("gemini", config);
|
result = executeCLI("gemini", config);
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
@@ -173,16 +173,18 @@ try {
|
|||||||
throw error;
|
throw error;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Fallback strategy when all CLI tools fail
|
||||||
|
function generateBasicFixStrategy(failure_context) {
|
||||||
|
// Generate basic fix task based on error pattern matching
|
||||||
|
// Use previous successful fix patterns from fix-history.json
|
||||||
|
// Limit to simple, low-risk fixes (add null checks, fix typos)
|
||||||
|
// Mark task with meta.analysis_quality: "degraded" flag
|
||||||
|
// Orchestrator will treat degraded analysis with caution
|
||||||
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
**Fallback Strategy (When All CLI Tools Fail)**:
|
### 2. Output Parsing & Task Generation
|
||||||
- Generate basic fix task based on error patterns matching
|
|
||||||
- Use previous successful fix patterns from fix-history.json
|
|
||||||
- Limit to simple, low-risk fixes (add null checks, fix typos)
|
|
||||||
- Mark task with `meta.analysis_quality: "degraded"` flag
|
|
||||||
- Orchestrator will treat degraded analysis with caution (may skip iteration)
|
|
||||||
|
|
||||||
### 2. CLI Output Parsing
|
|
||||||
|
|
||||||
**Expected CLI Output Structure** (from bug diagnosis template):
|
**Expected CLI Output Structure** (from bug diagnosis template):
|
||||||
```markdown
|
```markdown
|
||||||
@@ -220,18 +222,34 @@ try {
|
|||||||
```javascript
|
```javascript
|
||||||
const parsedResults = {
|
const parsedResults = {
|
||||||
root_causes: extractSection("根本原因分析"),
|
root_causes: extractSection("根本原因分析"),
|
||||||
modification_points: extractModificationPoints(),
|
modification_points: extractModificationPoints(), // Returns: ["file:function:lines", ...]
|
||||||
fix_strategy: {
|
fix_strategy: {
|
||||||
approach: extractSection("详细修复建议"),
|
approach: extractSection("详细修复建议"),
|
||||||
files: extractFilesList(),
|
files: extractFilesList(),
|
||||||
expected_outcome: extractSection("验证建议")
|
expected_outcome: extractSection("验证建议")
|
||||||
}
|
}
|
||||||
};
|
};
|
||||||
|
|
||||||
|
// Extract structured modification points
|
||||||
|
function extractModificationPoints() {
|
||||||
|
const points = [];
|
||||||
|
const filePattern = /- (.+?\.(?:ts|js|py)) \(lines (\d+-\d+)\): (.+)/g;
|
||||||
|
|
||||||
|
let match;
|
||||||
|
while ((match = filePattern.exec(cliOutput)) !== null) {
|
||||||
|
points.push({
|
||||||
|
file: match[1],
|
||||||
|
lines: match[2],
|
||||||
|
function: match[3],
|
||||||
|
formatted: `${match[1]}:${match[3]}:${match[2]}`
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
return points;
|
||||||
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
### 3. Task JSON Generation (Template Definition)
|
**Task JSON Generation** (Simplified Template):
|
||||||
|
|
||||||
**Task JSON Template for IMPL-fix-N** (Simplified):
|
|
||||||
```json
|
```json
|
||||||
{
|
{
|
||||||
"id": "IMPL-fix-{iteration}",
|
"id": "IMPL-fix-{iteration}",
|
||||||
@@ -284,9 +302,7 @@ const parsedResults = {
|
|||||||
{
|
{
|
||||||
"step": "load_analysis_context",
|
"step": "load_analysis_context",
|
||||||
"action": "Load CLI analysis report for full failure context if needed",
|
"action": "Load CLI analysis report for full failure context if needed",
|
||||||
"commands": [
|
"commands": ["Read({meta.analysis_report})"],
|
||||||
"Read({meta.analysis_report})"
|
|
||||||
],
|
|
||||||
"output_to": "full_failure_analysis",
|
"output_to": "full_failure_analysis",
|
||||||
"note": "Analysis report contains: failed_tests, error_messages, pass_rate, root causes, previous_attempts"
|
"note": "Analysis report contains: failed_tests, error_messages, pass_rate, root causes, previous_attempts"
|
||||||
}
|
}
|
||||||
@@ -334,19 +350,17 @@ const parsedResults = {
|
|||||||
|
|
||||||
**Template Variables Replacement**:
|
**Template Variables Replacement**:
|
||||||
- `{iteration}`: From context.iteration
|
- `{iteration}`: From context.iteration
|
||||||
- `{test_type}`: Dominant test type from failed_tests (e.g., "integration", "unit")
|
- `{test_type}`: Dominant test type from failed_tests
|
||||||
- `{dominant_test_type}`: Most common test_type in failed_tests array
|
- `{dominant_test_type}`: Most common test_type in failed_tests array
|
||||||
- `{layer_specific_approach}`: Guidance based on test layer from layerGuidance map
|
- `{layer_specific_approach}`: Guidance from layerGuidance map
|
||||||
- `{fix_summary}`: First 50 chars of fix_strategy.approach
|
- `{fix_summary}`: First 50 chars of fix_strategy.approach
|
||||||
- `{failed_tests.length}`: Count of failures
|
- `{failed_tests.length}`: Count of failures
|
||||||
- `{modification_points.length}`: Count of modification points
|
- `{modification_points.length}`: Count of modification points
|
||||||
- `{modification_points}`: Array of file:function:lines from parsed CLI output
|
- `{modification_points}`: Array of file:function:lines
|
||||||
- `{timestamp}`: ISO 8601 timestamp
|
- `{timestamp}`: ISO 8601 timestamp
|
||||||
- `{parent_task_id}`: ID of the parent test task (e.g., "IMPL-002")
|
- `{parent_task_id}`: ID of parent test task
|
||||||
- `{file1}`, `{file2}`, etc.: Specific file paths from modification_points
|
|
||||||
- `{specific_change_1}`, etc.: Change descriptions for each modification point
|
|
||||||
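A minimal substitution pass over these placeholders can be sketched as below; the renderer and its regex are an illustrative assumption, not part of the agent specification:

```javascript
// Hypothetical renderer: replaces {key} placeholders with values from a context map,
// leaving unknown placeholders untouched so missing variables are easy to spot.
function renderTemplate(template, context) {
  return template.replace(/\{([\w.]+)\}/g, (placeholder, key) =>
    key in context ? String(context[key]) : placeholder
  );
}

const title = renderTemplate(
  'Fix {failed_tests.length} {test_type} test failures - Iteration {iteration}',
  { 'failed_tests.length': 2, test_type: 'integration', iteration: 1 }
);
// title → 'Fix 2 integration test failures - Iteration 1'
```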
|
|
||||||
### 4. Analysis Report Generation
|
### 3. Analysis Report Generation
|
||||||
|
|
||||||
**Structure of iteration-N-analysis.md**:
|
**Structure of iteration-N-analysis.md**:
|
||||||
```markdown
|
```markdown
|
||||||
@@ -373,6 +387,7 @@ pass_rate: {pass_rate}%
|
|||||||
- **Error**: {test.error}
|
- **Error**: {test.error}
|
||||||
- **File**: {test.file}:{test.line}
|
- **File**: {test.file}:{test.line}
|
||||||
- **Criticality**: {test.criticality}
|
- **Criticality**: {test.criticality}
|
||||||
|
- **Test Type**: {test.test_type}
|
||||||
{endforeach}
|
{endforeach}
|
||||||
|
|
||||||
## Root Cause Analysis
|
## Root Cause Analysis
|
||||||
@@ -403,15 +418,16 @@ See: `.process/iteration-{iteration}-cli-output.txt`
|
|||||||
|
|
||||||
### CLI Execution Standards
|
### CLI Execution Standards
|
||||||
- **Timeout Management**: Use dynamic timeout (2400000ms = 40min for analysis)
|
- **Timeout Management**: Use dynamic timeout (2400000ms = 40min for analysis)
|
||||||
- **Fallback Chain**: Gemini → Qwen (if Gemini fails with 429/404)
|
- **Fallback Chain**: Gemini → Qwen → degraded mode (if both fail)
|
||||||
- **Error Context**: Include full error details in failure reports
|
- **Error Context**: Include full error details in failure reports
|
||||||
- **Output Preservation**: Save raw CLI output for debugging
|
- **Output Preservation**: Save raw CLI output to .process/ for debugging
|
||||||
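The fallback chain above can be sketched as follows; `runCli` is a hypothetical stand-in for the actual CLI invocation, and only the 429/404-triggered degradation logic is illustrated:

```javascript
// Sketch: try Gemini, fall back to Qwen on 429/404, else enter degraded mode.
function analyzeWithFallback(prompt, runCli) {
  for (const tool of ['gemini', 'qwen']) {
    try {
      return { tool, output: runCli(tool, prompt) };
    } catch (err) {
      // Only rate-limit (429) and not-found (404) errors trigger fallback;
      // anything else propagates with its full error context.
      if (err.status !== 429 && err.status !== 404) throw err;
    }
  }
  return { tool: 'degraded', output: null }; // both tools failed
}
```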
|
|
||||||
### Task JSON Standards
|
### Task JSON Standards
|
||||||
- **Quantification**: All requirements must include counts and explicit lists
|
- **Quantification**: All requirements must include counts and explicit lists
|
||||||
- **Specificity**: Modification points must have file:function:line format
|
- **Specificity**: Modification points must have file:function:line format
|
||||||
- **Measurability**: Acceptance criteria must include verification commands
|
- **Measurability**: Acceptance criteria must include verification commands
|
||||||
- **Traceability**: Link to analysis reports and CLI output files
|
- **Traceability**: Link to analysis reports and CLI output files
|
||||||
|
- **Minimal Redundancy**: Use references (analysis_report) instead of embedding full context
|
||||||
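These standards translate into concrete task fields; a minimal fragment is sketched below (field names follow the IMPL-fix template above, the values are invented):

```javascript
// Illustrative task JSON fragment meeting the four standards
const taskFragment = {
  fix_strategy: {
    // Specificity: file:function:lines format
    modification_points: ['src/auth/auth.service.ts:validateToken:45-60'],
  },
  acceptance_criteria: [
    // Measurability: each criterion carries a verification command
    { criterion: 'auth integration tests pass', verify: 'npm test -- auth' },
  ],
  meta: {
    // Traceability + minimal redundancy: reference the report, do not embed it
    analysis_report: '.process/iteration-1-analysis.md',
  },
};
```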
|
|
||||||
### Analysis Report Standards
|
### Analysis Report Standards
|
||||||
- **Structured Format**: Use consistent markdown sections
|
- **Structured Format**: Use consistent markdown sections
|
||||||
@@ -430,19 +446,23 @@ See: `.process/iteration-{iteration}-cli-output.txt`
|
|||||||
- **Link files properly**: Use relative paths from session root
|
- **Link files properly**: Use relative paths from session root
|
||||||
- **Preserve CLI output**: Save raw output to .process/ for debugging
|
- **Preserve CLI output**: Save raw output to .process/ for debugging
|
||||||
- **Generate measurable acceptance criteria**: Include verification commands
|
- **Generate measurable acceptance criteria**: Include verification commands
|
||||||
|
- **Apply layer-specific guidance**: Use test_type to customize analysis approach
|
||||||
|
|
||||||
**NEVER:**
|
**NEVER:**
|
||||||
- Execute tests directly (orchestrator manages test execution)
|
- Execute tests directly (orchestrator manages test execution)
|
||||||
- Skip CLI analysis (always run CLI even for simple failures)
|
- Skip CLI analysis (always run CLI even for simple failures)
|
||||||
- Modify files directly (generate task JSON for @test-fix-agent to execute)
|
- Modify files directly (generate task JSON for @test-fix-agent to execute)
|
||||||
- **Embed redundant data in task JSON** (use analysis_report reference instead)
|
- Embed redundant data in task JSON (use analysis_report reference instead)
|
||||||
- **Copy input context verbatim to output** (creates data duplication)
|
- Copy input context verbatim to output (creates data duplication)
|
||||||
- Generate vague modification points (always specify file:function:lines)
|
- Generate vague modification points (always specify file:function:lines)
|
||||||
- Exceed timeout limits (use configured timeout value)
|
- Exceed timeout limits (use configured timeout value)
|
||||||
|
- Ignore test layer context (L0/L1/L2/L3 determines diagnosis approach)
|
||||||
|
|
||||||
## CLI Tool Configuration
|
## Configuration & Examples
|
||||||
|
|
||||||
### Gemini Configuration
|
### CLI Tool Configuration
|
||||||
|
|
||||||
|
**Gemini Configuration**:
|
||||||
```javascript
|
```javascript
|
||||||
{
|
{
|
||||||
"tool": "gemini",
|
"tool": "gemini",
|
||||||
@@ -452,11 +472,12 @@ See: `.process/iteration-{iteration}-cli-output.txt`
|
|||||||
"test-failure": "01-diagnose-bug-root-cause.txt",
|
"test-failure": "01-diagnose-bug-root-cause.txt",
|
||||||
"coverage-gap": "02-analyze-code-patterns.txt",
|
"coverage-gap": "02-analyze-code-patterns.txt",
|
||||||
"regression": "01-trace-code-execution.txt"
|
"regression": "01-trace-code-execution.txt"
|
||||||
}
|
},
|
||||||
|
"timeout": 2400000 // 40 minutes
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
### Qwen Configuration (Fallback)
|
**Qwen Configuration (Fallback)**:
|
||||||
```javascript
|
```javascript
|
||||||
{
|
{
|
||||||
"tool": "qwen",
|
"tool": "qwen",
|
||||||
@@ -464,47 +485,12 @@ See: `.process/iteration-{iteration}-cli-output.txt`
|
|||||||
"templates": {
|
"templates": {
|
||||||
"test-failure": "01-diagnose-bug-root-cause.txt",
|
"test-failure": "01-diagnose-bug-root-cause.txt",
|
||||||
"coverage-gap": "02-analyze-code-patterns.txt"
|
"coverage-gap": "02-analyze-code-patterns.txt"
|
||||||
}
|
},
|
||||||
|
"timeout": 2400000 // 40 minutes
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
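Given either configuration, resolving the prompt template reduces to a lookup keyed by failure type; the helper below is an assumption for illustration (the config shape matches the JSON above):

```javascript
const geminiConfig = {
  tool: 'gemini',
  templates: {
    'test-failure': '01-diagnose-bug-root-cause.txt',
    'coverage-gap': '02-analyze-code-patterns.txt',
    'regression': '01-trace-code-execution.txt',
  },
  timeout: 2400000, // 40 minutes
};

// Fall back to the generic test-failure template for unknown failure types
function selectTemplate(config, failureType) {
  return config.templates[failureType] ?? config.templates['test-failure'];
}
// selectTemplate(geminiConfig, 'regression') → '01-trace-code-execution.txt'
```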
|
|
||||||
## Integration with test-cycle-execute
|
### Example Execution
|
||||||
|
|
||||||
**Orchestrator Call Pattern**:
|
|
||||||
```javascript
|
|
||||||
// When pass_rate < 95%
|
|
||||||
Task(
|
|
||||||
subagent_type="cli-planning-agent",
|
|
||||||
description=`Analyze test failures and generate fix task (iteration ${iteration})`,
|
|
||||||
prompt=`
|
|
||||||
## Context Package
|
|
||||||
${JSON.stringify(contextPackage, null, 2)}
|
|
||||||
|
|
||||||
## Your Task
|
|
||||||
1. Execute CLI analysis using ${cli_config.tool}
|
|
||||||
2. Parse CLI output and extract fix strategy
|
|
||||||
3. Generate IMPL-fix-${iteration}.json with structured task definition
|
|
||||||
4. Save analysis report to .process/iteration-${iteration}-analysis.md
|
|
||||||
5. Report success and task ID back to orchestrator
|
|
||||||
`
|
|
||||||
)
|
|
||||||
```
|
|
||||||
|
|
||||||
**Agent Response**:
|
|
||||||
```javascript
|
|
||||||
{
|
|
||||||
"status": "success",
|
|
||||||
"task_id": "IMPL-fix-{iteration}",
|
|
||||||
"task_path": ".workflow/{session}/.task/IMPL-fix-{iteration}.json",
|
|
||||||
"analysis_report": ".process/iteration-{iteration}-analysis.md",
|
|
||||||
"cli_output": ".process/iteration-{iteration}-cli-output.txt",
|
|
||||||
"summary": "{fix_strategy.approach first 100 chars}",
|
|
||||||
"modification_points_count": {count},
|
|
||||||
"estimated_complexity": "low|medium|high"
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
## Example Execution
|
|
||||||
|
|
||||||
**Input Context**:
|
**Input Context**:
|
||||||
```json
|
```json
|
||||||
@@ -530,24 +516,45 @@ Task(
|
|||||||
"cli_config": {
|
"cli_config": {
|
||||||
"tool": "gemini",
|
"tool": "gemini",
|
||||||
"template": "01-diagnose-bug-root-cause.txt"
|
"template": "01-diagnose-bug-root-cause.txt"
|
||||||
|
},
|
||||||
|
"task_config": {
|
||||||
|
"agent": "@test-fix-agent",
|
||||||
|
"type": "test-fix-iteration",
|
||||||
|
"max_iterations": 5
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
**Execution Steps**:
|
**Execution Summary**:
|
||||||
1. Detect test_type: "integration" → Apply integration-specific diagnosis
|
1. **Detect test_type**: "integration" → Apply integration-specific diagnosis
|
||||||
2. Execute: `gemini -p "PURPOSE: Analyze integration test failure... [layer-specific context]"`
|
2. **Execute CLI**:
|
||||||
- CLI prompt includes: "Examine component interactions, data flow, interface contracts"
|
```bash
|
||||||
- Guidance: "Analyze full call stack and data flow across components"
|
gemini -p "PURPOSE: Analyze integration test failure...
|
||||||
3. Parse: Extract RCA, 修复建议 (fix recommendations), 验证建议 (verification recommendations) sections
|
TASK: Examine component interactions, data flow, interface contracts...
|
||||||
4. Generate: IMPL-fix-1.json (SIMPLIFIED) with:
|
RULES: Analyze full call stack and data flow across components"
|
||||||
|
```
|
||||||
|
3. **Parse Output**: Extract RCA, 修复建议 (fix recommendations), and 验证建议 (verification recommendations) sections
|
||||||
|
4. **Generate Task JSON** (IMPL-fix-1.json):
|
||||||
- Title: "Fix integration test failures - Iteration 1: Token expiry validation"
|
- Title: "Fix integration test failures - Iteration 1: Token expiry validation"
|
||||||
- meta.analysis_report: ".process/iteration-1-analysis.md" (Reference, not embedded data)
|
- meta.analysis_report: ".process/iteration-1-analysis.md" (reference)
|
||||||
- meta.test_layer: "integration"
|
- meta.test_layer: "integration"
|
||||||
- Requirements: "Fix 1 integration test failures by applying the provided fix strategy"
|
- Requirements: "Fix 1 integration test failure by applying the provided fix strategy"
|
||||||
- fix_strategy.modification_points: ["src/auth/auth.service.ts:validateToken:45-60", "src/middleware/auth.middleware.ts:checkExpiry:120-135"]
|
- fix_strategy.modification_points:
|
||||||
|
- "src/auth/auth.service.ts:validateToken:45-60"
|
||||||
|
- "src/middleware/auth.middleware.ts:checkExpiry:120-135"
|
||||||
- fix_strategy.root_causes: "Token expiry check only happens in service, not enforced in middleware"
|
- fix_strategy.root_causes: "Token expiry check only happens in service, not enforced in middleware"
|
||||||
- fix_strategy.quality_assurance: {avoids_symptom_fix: true, addresses_root_cause: true}
|
- fix_strategy.quality_assurance: {avoids_symptom_fix: true, addresses_root_cause: true}
|
||||||
- **NO failure_context object** - full context available via analysis_report reference
|
5. **Save Analysis Report**: iteration-1-analysis.md with full CLI output, layer context, failed_tests details
|
||||||
5. Save: iteration-1-analysis.md with full CLI output, layer context, failed_tests details, previous_attempts
|
6. **Return**:
|
||||||
6. Return: task_id="IMPL-fix-1", test_layer="integration", status="success"
|
```javascript
|
||||||
|
{
|
||||||
|
status: "success",
|
||||||
|
task_id: "IMPL-fix-1",
|
||||||
|
task_path: ".workflow/sessions/WFS-test-session-001/.task/IMPL-fix-1.json",
|
||||||
|
analysis_report: ".process/iteration-1-analysis.md",
|
||||||
|
cli_output: ".process/iteration-1-cli-output.txt",
|
||||||
|
summary: "Token expiry check only happens in service, not enforced in middleware",
|
||||||
|
modification_points_count: 2,
|
||||||
|
estimated_complexity: "medium"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|||||||
@@ -432,25 +432,6 @@ Before completion verify:
|
|||||||
- [ ] File relevance >80%
|
- [ ] File relevance >80%
|
||||||
- [ ] No sensitive data exposed
|
- [ ] No sensitive data exposed
|
||||||
|
|
||||||
## Performance Limits
|
|
||||||
|
|
||||||
**File Counts**:
|
|
||||||
- Max 30 high-priority (score >0.8)
|
|
||||||
- Max 20 medium-priority (score 0.5-0.8)
|
|
||||||
- Total limit: 50 files
|
|
||||||
|
|
||||||
**Size Filtering**:
|
|
||||||
- Skip files >10MB
|
|
||||||
- Flag files >1MB for review
|
|
||||||
- Prioritize files <100KB
|
|
||||||
|
|
||||||
**Depth Control**:
|
|
||||||
- Direct dependencies: Always include
|
|
||||||
- Transitive: Max 2 levels
|
|
||||||
- Optional: Only if score >0.7
|
|
||||||
|
|
||||||
**Tool Priority**: Code-Index > ripgrep > find > grep
|
|
||||||
|
|
||||||
## Output Report
|
## Output Report
|
||||||
|
|
||||||
```
|
```
|
||||||
|
|||||||
@@ -76,8 +76,8 @@ Use `resume --last` when current task extends/relates to previous execution. See
|
|||||||
|
|
||||||
## Workflow Integration
|
## Workflow Integration
|
||||||
|
|
||||||
**Session Management**: Auto-detects `.workflow/.active-*` marker
|
**Session Management**: Auto-detects active session from `.workflow/sessions/` directory
|
||||||
- Active session: Save to `.workflow/WFS-[id]/.chat/execute-[timestamp].md`
|
- Active session: Save to `.workflow/sessions/WFS-[id]/.chat/execute-[timestamp].md`
|
||||||
- No session: Create new session or save to scratchpad
|
- No session: Create new session or save to scratchpad
|
||||||
|
|
||||||
**Task Integration**: Load from `.task/[TASK-ID].json`, update status, generate summary
|
**Task Integration**: Load from `.task/[TASK-ID].json`, update status, generate summary
|
||||||
|
|||||||
@@ -1,37 +1,22 @@
|
|||||||
---
|
---
|
||||||
name: enhance-prompt
|
name: enhance-prompt
|
||||||
description: Enhanced prompt transformation using session memory and codebase analysis with --enhance flag detection
|
description: Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection
|
||||||
argument-hint: "user input to enhance"
|
argument-hint: "user input to enhance"
|
||||||
---
|
---
|
||||||
|
|
||||||
## Overview
|
## Overview
|
||||||
|
|
||||||
Systematically enhances user prompts by combining session memory context with codebase patterns, translating ambiguous requests into actionable specifications.
|
Systematically enhances user prompts by leveraging session memory context and intent analysis, translating ambiguous requests into actionable specifications.
|
||||||
|
|
||||||
## Core Protocol
|
## Core Protocol
|
||||||
|
|
||||||
**Enhancement Pipeline:**
|
**Enhancement Pipeline:**
|
||||||
`Intent Translation` → `Context Integration` → `Gemini Analysis (if needed)` → `Structured Output`
|
`Intent Translation` → `Context Integration` → `Structured Output`
|
||||||
|
|
||||||
**Context Sources:**
|
**Context Sources:**
|
||||||
- Session memory (conversation history, previous analysis)
|
- Session memory (conversation history, previous analysis)
|
||||||
- Codebase patterns (via Gemini when triggered)
|
|
||||||
- Implicit technical requirements
|
- Implicit technical requirements
|
||||||
|
- User intent patterns
|
||||||
## Gemini Trigger Logic
|
|
||||||
|
|
||||||
```pseudo
|
|
||||||
FUNCTION should_use_gemini(user_prompt):
|
|
||||||
critical_keywords = ["refactor", "migrate", "redesign", "auth", "payment", "security"]
|
|
||||||
|
|
||||||
RETURN (
|
|
||||||
prompt_affects_multiple_modules(user_prompt, threshold=3) OR
|
|
||||||
any_keyword_in_prompt(critical_keywords, user_prompt)
|
|
||||||
)
|
|
||||||
END
|
|
||||||
```
|
|
||||||
|
|
||||||
**Gemini Integration:** ~/.claude/workflows/intelligent-tools-strategy.md
|
|
||||||
|
|
||||||
## Enhancement Rules
|
## Enhancement Rules
|
||||||
|
|
||||||
@@ -47,22 +32,18 @@ END
|
|||||||
|
|
||||||
### Context Integration Strategy
|
### Context Integration Strategy
|
||||||
|
|
||||||
**Session Memory First:**
|
**Session Memory:**
|
||||||
- Reference recent conversation context
|
- Reference recent conversation context
|
||||||
- Reuse previously identified patterns
|
- Reuse previously identified patterns
|
||||||
- Build on established understanding
|
- Build on established understanding
|
||||||
|
- Infer technical requirements from discussion
|
||||||
**Codebase Analysis (via Gemini):**
|
|
||||||
- Only when complexity requires it
|
|
||||||
- Focus on integration points
|
|
||||||
- Identify existing patterns
|
|
||||||
|
|
||||||
**Example:**
|
**Example:**
|
||||||
```bash
|
```bash
|
||||||
# User: "add login"
|
# User: "add login"
|
||||||
# Session Memory: Previous auth discussion, JWT mentioned
|
# Session Memory: Previous auth discussion, JWT mentioned
|
||||||
# Inferred: JWT-based auth, integrate with existing session management
|
# Inferred: JWT-based auth, integrate with existing session management
|
||||||
# Gemini (if multi-module): Analyze AuthService patterns, middleware structure
|
# Action: Implement JWT authentication with session persistence
|
||||||
```
|
```
|
||||||
|
|
||||||
## Output Structure
|
## Output Structure
|
||||||
@@ -76,7 +57,7 @@ ATTENTION: [Critical constraints]
|
|||||||
|
|
||||||
### Output Examples
|
### Output Examples
|
||||||
|
|
||||||
**Simple (no Gemini):**
|
**Example 1:**
|
||||||
```bash
|
```bash
|
||||||
# Input: "fix login button"
|
# Input: "fix login button"
|
||||||
INTENT: Debug non-functional login button
|
INTENT: Debug non-functional login button
|
||||||
@@ -85,28 +66,28 @@ ACTION: Check event binding → verify state updates → test auth flow
|
|||||||
ATTENTION: Preserve existing OAuth integration
|
ATTENTION: Preserve existing OAuth integration
|
||||||
```
|
```
|
||||||
|
|
||||||
**Complex (with Gemini):**
|
**Example 2:**
|
||||||
```bash
|
```bash
|
||||||
# Input: "refactor payment code"
|
# Input: "refactor payment code"
|
||||||
INTENT: Restructure payment module for maintainability
|
INTENT: Restructure payment module for maintainability
|
||||||
CONTEXT: Session memory - PCI compliance requirements
|
CONTEXT: Session memory - PCI compliance requirements, Stripe integration patterns
|
||||||
Gemini - PaymentService → StripeAdapter pattern identified
|
ACTION: Extract reusable validators → isolate payment gateway logic → maintain adapter pattern
|
||||||
ACTION: Extract reusable validators → isolate payment gateway logic
|
|
||||||
ATTENTION: Zero behavior change, maintain PCI compliance, full test coverage
|
ATTENTION: Zero behavior change, maintain PCI compliance, full test coverage
|
||||||
```
|
```
|
||||||
|
|
||||||
## Automatic Triggers
|
## Enhancement Triggers
|
||||||
|
|
||||||
- Ambiguous language: "fix", "improve", "clean up"
|
- Ambiguous language: "fix", "improve", "clean up"
|
||||||
- Multi-module impact (>3 modules)
|
- Vague requests requiring clarification
|
||||||
|
- Complex technical requirements
|
||||||
- Architecture changes
|
- Architecture changes
|
||||||
- Critical systems: auth, payment, security
|
- Critical systems: auth, payment, security
|
||||||
- Complex refactoring
|
- Multi-step refactoring
|
||||||
|
|
||||||
## Key Principles
|
## Key Principles
|
||||||
|
|
||||||
1. **Memory First**: Leverage session context before analysis
|
1. **Session Memory First**: Leverage conversation context and established understanding
|
||||||
2. **Minimal Gemini**: Only when complexity demands it
|
2. **Context Reuse**: Build on previous discussions and decisions
|
||||||
3. **Context Reuse**: Build on previous understanding
|
3. **Clear Output**: Structured, actionable specifications
|
||||||
4. **Clear Output**: Structured, actionable specifications
|
4. **Intent Clarification**: Transform vague requests into specific technical goals
|
||||||
5. **Avoid Duplication**: Reference existing context, don't repeat
|
5. **Avoid Duplication**: Reference existing context, don't repeat
|
||||||
@@ -63,10 +63,10 @@ Lightweight planner that analyzes project structure, decomposes documentation wo
|
|||||||
bash(pwd && basename "$(pwd)" && git rev-parse --show-toplevel 2>/dev/null || pwd && date +%Y%m%d-%H%M%S)
|
bash(pwd && basename "$(pwd)" && git rev-parse --show-toplevel 2>/dev/null || pwd && date +%Y%m%d-%H%M%S)
|
||||||
|
|
||||||
# Create session directories (replace timestamp)
|
# Create session directories (replace timestamp)
|
||||||
bash(mkdir -p .workflow/WFS-docs-{timestamp}/.{task,process,summaries} && touch .workflow/.active-WFS-docs-{timestamp})
|
bash(mkdir -p .workflow/sessions/WFS-docs-{timestamp}/.{task,process,summaries})
|
||||||
|
|
||||||
# Create workflow-session.json (replace values)
|
# Create workflow-session.json (replace values)
|
||||||
bash(echo '{"session_id":"WFS-docs-{timestamp}","project":"{project} documentation","status":"planning","timestamp":"2024-01-20T14:30:22+08:00","path":".","target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' | jq '.' > .workflow/WFS-docs-{timestamp}/workflow-session.json)
|
bash(echo '{"session_id":"WFS-docs-{timestamp}","project":"{project} documentation","status":"planning","timestamp":"2024-01-20T14:30:22+08:00","path":".","target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' | jq '.' > .workflow/sessions/WFS-docs-{timestamp}/workflow-session.json)
|
||||||
```
|
```
|
||||||
|
|
||||||
### Phase 2: Analyze Structure
|
### Phase 2: Analyze Structure
|
||||||
@@ -458,8 +458,7 @@ api_id=$((group_count + 3))
|
|||||||
**Unified Structure** (single JSON replaces multiple text files):
|
**Unified Structure** (single JSON replaces multiple text files):
|
||||||
|
|
||||||
```
|
```
|
||||||
.workflow/
|
.workflow/sessions/
|
||||||
├── .active-WFS-docs-{timestamp}
|
|
||||||
└── WFS-docs-{timestamp}/
|
└── WFS-docs-{timestamp}/
|
||||||
├── workflow-session.json # Session metadata
|
├── workflow-session.json # Session metadata
|
||||||
├── IMPL_PLAN.md
|
├── IMPL_PLAN.md
|
||||||
|
|||||||
@@ -21,12 +21,14 @@ auto-continue: true
|
|||||||
**Key Features**:
|
**Key Features**:
|
||||||
- Extracts primary design references (colors, typography, spacing, etc.)
|
- Extracts primary design references (colors, typography, spacing, etc.)
|
||||||
- Provides dynamic adjustment guidelines for design tokens
|
- Provides dynamic adjustment guidelines for design tokens
|
||||||
|
- Includes prerequisites and tooling requirements (browsers, PostCSS, dark mode)
|
||||||
- Progressive loading structure for efficient token usage
|
- Progressive loading structure for efficient token usage
|
||||||
|
- Complete implementation examples with React components
|
||||||
- Interactive preview showcase
|
- Interactive preview showcase
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
## Usage
|
## Quick Reference
|
||||||
|
|
||||||
### Command Syntax
|
### Command Syntax
|
||||||
|
|
||||||
@@ -51,20 +53,77 @@ package-name Style reference package name (required)
|
|||||||
/memory:style-skill-memory
|
/memory:style-skill-memory
|
||||||
```
|
```
|
||||||
|
|
||||||
|
### Key Variables
|
||||||
|
|
||||||
|
**Input Variables**:
|
||||||
|
- `PACKAGE_NAME`: Style reference package name
|
||||||
|
- `PACKAGE_DIR`: `.workflow/reference_style/${package_name}`
|
||||||
|
- `SKILL_DIR`: `.claude/skills/style-${package_name}`
|
||||||
|
- `REGENERATE`: `true` if --regenerate flag, `false` otherwise
|
||||||
|
|
||||||
|
**Data Sources** (Phase 2):
|
||||||
|
- `DESIGN_TOKENS_DATA`: Complete design-tokens.json content (from Read)
|
||||||
|
- `LAYOUT_TEMPLATES_DATA`: Complete layout-templates.json content (from Read)
|
||||||
|
- `ANIMATION_TOKENS_DATA`: Complete animation-tokens.json content (from Read, if exists)
|
||||||
|
|
||||||
|
**Metadata** (Phase 2):
|
||||||
|
- `COMPONENT_COUNT`: Total components
|
||||||
|
- `UNIVERSAL_COUNT`: Universal components count
|
||||||
|
- `SPECIALIZED_COUNT`: Specialized components count
|
||||||
|
- `UNIVERSAL_COMPONENTS`: Universal component names (first 5)
|
||||||
|
- `HAS_ANIMATIONS`: Whether animation-tokens.json exists
|
||||||
|
|
||||||
|
**Analysis Output** (`DESIGN_ANALYSIS` - Phase 2):
|
||||||
|
- `has_colors`: Colors exist
|
||||||
|
- `color_semantic`: Has semantic naming (primary/secondary/accent)
|
||||||
|
- `uses_oklch`: Uses modern color spaces (oklch, lab, etc.)
|
||||||
|
- `has_dark_mode`: Has separate light/dark mode color tokens
|
||||||
|
- `spacing_pattern`: Pattern type ("linear", "geometric", "custom")
|
||||||
|
- `spacing_scale`: Actual scale values (e.g., [4, 8, 16, 32, 64])
|
||||||
|
- `has_typography`: Typography system exists
|
||||||
|
- `typography_hierarchy`: Has size scale for hierarchy
|
||||||
|
- `uses_calc`: Uses calc() expressions in token values
|
||||||
|
- `has_radius`: Border radius exists
|
||||||
|
- `radius_style`: Style characteristic ("sharp" <4px, "moderate" 4-8px, "rounded" >8px)
|
||||||
|
- `has_shadows`: Shadow system exists
|
||||||
|
- `shadow_pattern`: Elevation naming pattern
|
||||||
|
- `has_animations`: Animation tokens exist
|
||||||
|
- `animation_range`: Duration range (fast to slow)
|
||||||
|
- `easing_variety`: Types of easing functions
|
||||||
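The `spacing_pattern` and `radius_style` fields can be derived mechanically from token values; the heuristics below are illustrative assumptions, not the command's actual analysis code:

```javascript
// Classify a spacing scale: constant step → linear, constant ratio → geometric, else custom
function classifySpacing(scale) {
  if (scale.length < 2) return 'custom';
  const diffs = scale.slice(1).map((v, i) => v - scale[i]);
  if (diffs.every((d) => d === diffs[0])) return 'linear';
  const ratios = scale.slice(1).map((v, i) => v / scale[i]);
  if (ratios.every((r) => r === ratios[0])) return 'geometric';
  return 'custom';
}

// Bucket a base border radius (px) using the thresholds documented above
function classifyRadius(px) {
  if (px < 4) return 'sharp';
  if (px <= 8) return 'moderate';
  return 'rounded';
}
// classifySpacing([4, 8, 16, 32, 64]) → 'geometric'; classifyRadius(6) → 'moderate'
```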
|
|
||||||
|
### Common Errors
|
||||||
|
|
||||||
|
| Error | Cause | Resolution |
|
||||||
|
|-------|-------|------------|
|
||||||
|
| Package not found | Invalid package name or doesn't exist | Run `/workflow:ui-design:codify-style` first |
|
||||||
|
| SKILL already exists | SKILL.md already generated | Use `--regenerate` flag |
|
||||||
|
| Missing layout-templates.json | Incomplete package | Verify package integrity, re-run codify-style |
|
||||||
|
| Invalid JSON format | Corrupted package files | Regenerate package with codify-style |
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
## Execution Process
|
## Execution Process
|
||||||
|
|
||||||
### Phase 1: Validate Package
|
### Phase 1: Validate Package
|
||||||
|
|
||||||
**Purpose**: Check if style reference package exists
|
|
||||||
|
|
||||||
**TodoWrite** (First Action):
|
**TodoWrite** (First Action):
|
||||||
```json
|
```json
|
||||||
[
|
[
|
||||||
{"content": "Validate style reference package", "status": "in_progress", "activeForm": "Validating package"},
|
{
|
||||||
{"content": "Read package data and extract design references", "status": "pending", "activeForm": "Reading package data"},
|
"content": "Validate package exists and check SKILL status",
|
||||||
{"content": "Generate SKILL.md with progressive loading", "status": "pending", "activeForm": "Generating SKILL.md"}
|
"activeForm": "Validating package and SKILL status",
|
||||||
|
"status": "in_progress"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"content": "Read package data and analyze design system",
|
||||||
|
"activeForm": "Reading package data and analyzing design system",
|
||||||
|
"status": "pending"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"content": "Generate SKILL.md with design principles and token values",
|
||||||
|
"activeForm": "Generating SKILL.md with design principles and token values",
|
||||||
|
"status": "pending"
|
||||||
|
}
|
||||||
]
|
]
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -75,8 +134,6 @@ package-name Style reference package name (required)
|
|||||||
bash(echo "${package_name:-$(basename "$(pwd)" | sed 's/^style-//')}")
|
bash(echo "${package_name:-$(basename "$(pwd)" | sed 's/^style-//')}")
|
||||||
```
|
```
|
||||||
|
|
||||||
Store result as `package_name`
|
|
||||||
|
|
||||||
**Step 2: Validate Package Exists**
|
**Step 2: Validate Package Exists**
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
@@ -113,152 +170,90 @@ if (regenerate_flag && skill_exists) {
|
|||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
**Summary Variables**:
|
**TodoWrite Update**: Mark "Validate" as completed, "Read package data" as in_progress
|
||||||
- `PACKAGE_NAME`: Style reference package name
|
|
||||||
- `PACKAGE_DIR`: `.workflow/reference_style/${package_name}`
|
|
||||||
- `SKILL_DIR`: `.claude/skills/style-${package_name}`
|
|
||||||
- `REGENERATE`: `true` if --regenerate flag, `false` otherwise
|
|
||||||
|
|
||||||
**TodoWrite Update**:
|
|
||||||
```json
|
|
||||||
[
|
|
||||||
{"content": "Validate style reference package", "status": "completed", "activeForm": "Validating package"},
|
|
||||||
{"content": "Read package data and extract design references", "status": "in_progress", "activeForm": "Reading package data"}
|
|
||||||
]
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
### Phase 2: Read Package Data & Extract Design References
|
### Phase 2: Read Package Data & Analyze Design System
|
||||||
|
|
||||||
**Purpose**: Extract package information and primary design references for SKILL description generation
|
**Step 1: Read All JSON Files**
|
||||||
|
|
||||||
**Step 1: Count Components**
|
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
bash(jq '.layout_templates | length' .workflow/reference_style/${package_name}/layout-templates.json 2>/dev/null || echo 0)
|
# Read layout templates
|
||||||
```
|
Read(file_path=".workflow/reference_style/${package_name}/layout-templates.json")
|
||||||
|
|
||||||
Store result as `component_count`
|
# Read design tokens
|
||||||
|
|
||||||
**Step 2: Extract Component Types and Classification**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Extract component names from layout templates
|
|
||||||
bash(jq -r '.layout_templates | keys[]' .workflow/reference_style/${package_name}/layout-templates.json 2>/dev/null | head -10)
|
|
||||||
|
|
||||||
# Count universal vs specialized components
|
|
||||||
bash(jq '[.layout_templates[] | select(.component_type == "universal")] | length' .workflow/reference_style/${package_name}/layout-templates.json 2>/dev/null || echo 0)
|
|
||||||
bash(jq '[.layout_templates[] | select(.component_type == "specialized")] | length' .workflow/reference_style/${package_name}/layout-templates.json 2>/dev/null || echo 0)
|
|
||||||
|
|
||||||
# Extract universal component names only
|
|
||||||
bash(jq -r '.layout_templates | to_entries | map(select(.value.component_type == "universal")) | .[].key' .workflow/reference_style/${package_name}/layout-templates.json 2>/dev/null | head -10)
|
|
||||||
```
|
|
||||||
|
|
||||||
Store as:
|
|
||||||
- `COMPONENT_TYPES`: List of available component types (all)
|
|
||||||
- `UNIVERSAL_COUNT`: Number of universal (reusable) components
|
|
||||||
- `SPECIALIZED_COUNT`: Number of specialized (project-specific) components
|
|
||||||
- `UNIVERSAL_COMPONENTS`: List of universal component names
|
|
||||||
|
|
||||||
**Step 3: Read Design Tokens**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
Read(file_path=".workflow/reference_style/${package_name}/design-tokens.json")
|
Read(file_path=".workflow/reference_style/${package_name}/design-tokens.json")
|
||||||
|
|
||||||
|
# Read animation tokens (if exists)
|
||||||
|
bash(test -f .workflow/reference_style/${package_name}/animation-tokens.json && echo "exists" || echo "missing")
|
||||||
|
Read(file_path=".workflow/reference_style/${package_name}/animation-tokens.json") # if exists
|
||||||
```
|
```
|
||||||
|
|
||||||
**Extract Primary Design References**:
|
**Step 2: Extract Metadata for Description**
|
||||||

```bash
# Count components and classify by type
bash(jq '.layout_templates | length' layout-templates.json)
bash(jq '[.layout_templates[] | select(.component_type == "universal")] | length' layout-templates.json)
bash(jq '[.layout_templates[] | select(.component_type == "specialized")] | length' layout-templates.json)
bash(jq -r '.layout_templates | to_entries[] | select(.value.component_type == "universal") | .key' layout-templates.json | head -5)
```

Store results in metadata variables (see [Key Variables](#key-variables))
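For illustration, the classification queries can be exercised against a minimal, hypothetical `layout-templates.json` — the file name and `component_type` field match the package schema, but the component names below are invented:

```shell
# Hypothetical minimal layout-templates.json for illustration only
cat > /tmp/layout-templates.json <<'EOF'
{
  "layout_templates": {
    "button": {"component_type": "universal"},
    "card": {"component_type": "universal"},
    "checkout-wizard": {"component_type": "specialized"}
  }
}
EOF

# Total component count
TOTAL=$(jq '.layout_templates | length' /tmp/layout-templates.json)

# Universal vs specialized split
UNIVERSAL=$(jq '[.layout_templates[] | select(.component_type == "universal")] | length' /tmp/layout-templates.json)
SPECIALIZED=$(jq '[.layout_templates[] | select(.component_type == "specialized")] | length' /tmp/layout-templates.json)

# Names of universal components only (first 5)
NAMES=$(jq -r '.layout_templates | to_entries[] | select(.value.component_type == "universal") | .key' /tmp/layout-templates.json | head -5)

echo "$TOTAL $UNIVERSAL $SPECIALIZED"
echo "$NAMES"
```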

**Step 3: Analyze Design System for Dynamic Principles**

Analyze design-tokens.json to extract characteristics and patterns:

```bash
# Color system characteristics
bash(jq '.colors | keys' design-tokens.json)
bash(jq '.colors | to_entries[0:2] | map(.value)' design-tokens.json)
# Check for modern color spaces
bash(jq '.colors | to_entries[] | .value | test("oklch|lab|lch")' design-tokens.json)
# Check for dark mode variants
bash(jq '.colors | keys | map(select(contains("dark") or contains("light")))' design-tokens.json)
# → Store: has_colors, color_semantic, uses_oklch, has_dark_mode

# Spacing pattern detection
bash(jq '.spacing | to_entries | map(.value) | map(gsub("[^0-9.]"; "") | tonumber)' design-tokens.json)
# Analyze pattern: linear (4-8-12-16) vs geometric (4-8-16-32) vs custom
# → Store: spacing_pattern, spacing_scale

# Typography characteristics
bash(jq '.typography | keys | map(select(contains("family") or contains("weight")))' design-tokens.json)
bash(jq '.typography | to_entries | map(select(.key | contains("size"))) | .[].value' design-tokens.json)
# Check for calc() usage
bash(jq '. | tostring | test("calc\\(")' design-tokens.json)
# → Store: has_typography, typography_hierarchy, uses_calc

# Border radius style
bash(jq '.border_radius | to_entries | map(.value)' design-tokens.json)
# Check range: small (sharp <4px) vs moderate (4-8px) vs large (rounded >8px)
# → Store: has_radius, radius_style

# Shadow characteristics
bash(jq '.shadows | keys' design-tokens.json)
bash(jq '.shadows | to_entries[0].value' design-tokens.json)
# → Store: has_shadows, shadow_pattern

# Animations (if available)
bash(jq '.duration | to_entries | map(.value)' animation-tokens.json)
bash(jq '.easing | keys' animation-tokens.json)
# → Store: has_animations, animation_range, easing_variety
```

Store analysis results in `DESIGN_ANALYSIS` (see [Key Variables](#key-variables))
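A sketch of the feature-detection step against a hypothetical `design-tokens.json`. The per-value `test(...)` query emits one boolean per color; here it is wrapped in `[...] | any` to collapse the result to a single flag — an adaptation, not the literal command above:

```shell
# Hypothetical design-tokens.json; real packages live under .workflow/reference_style/
cat > /tmp/design-tokens.json <<'EOF'
{
  "colors": {
    "primary": "oklch(0.65 0.2 250)",
    "surface-dark": "#111827"
  },
  "spacing": {"sm": "4px", "md": "8px", "lg": "16px", "xl": "32px"},
  "border_radius": {"md": "calc(0.5rem - 2px)"}
}
EOF

# Any color using a modern color space? (true if any value matches)
USES_OKLCH=$(jq '[.colors | to_entries[] | .value | test("oklch|lab|lch")] | any' /tmp/design-tokens.json)

# Dark-mode variants present among color token names?
HAS_DARK=$(jq '.colors | keys | map(select(contains("dark") or contains("light"))) | length > 0' /tmp/design-tokens.json)

# calc() anywhere in the token file?
USES_CALC=$(jq '. | tostring | test("calc\\(")' /tmp/design-tokens.json)

echo "uses_oklch=$USES_OKLCH has_dark_mode=$HAS_DARK uses_calc=$USES_CALC"
```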

**Note**: Analysis focuses on characteristics and patterns, not counts. Include technical feature detection (oklch, calc, dark mode) for Prerequisites section.

**TodoWrite Update**: Mark "Read package data" as completed, "Generate SKILL.md" as in_progress

---

### Phase 3: Generate SKILL.md

**Step 1: Create SKILL Directory**

```bash
bash(mkdir -p .claude/skills/style-${package_name})
```

**Step 2: Generate Intelligent Description**

```
{package_name} project-independent design system with {universal_count} universal layout templates and interactive preview (located at .workflow/reference_style/{package_name}). Load when working with reusable UI components, design tokens, layout patterns, or implementing visual consistency. Excludes {specialized_count} project-specific components.
```
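The description template can be filled in with a plain `printf`. The values below are illustrative, not taken from a real package:

```shell
# Illustrative metadata values; real values come from Phase 2 extraction
package_name="main-app-style-v1"
universal_count=5
specialized_count=3

# Assemble the SKILL description from the template
DESCRIPTION=$(printf '%s project-independent design system with %s universal layout templates and interactive preview (located at .workflow/reference_style/%s). Load when working with reusable UI components, design tokens, layout patterns, or implementing visual consistency. Excludes %s project-specific components.' \
  "$package_name" "$universal_count" "$package_name" "$specialized_count")

echo "$DESCRIPTION"
```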

**Step 3: Load and Process SKILL.md Template**

**⚠️ CRITICAL - Execute First**:

```bash
bash(cat ~/.claude/workflows/cli-templates/memory/style-skill-memory/skill-md-template.md)
```

**Template Processing**:
1. **Replace variables**: Substitute all `{variable}` placeholders with actual values from Phase 2
2. **Generate dynamic sections**:
   - **Prerequisites & Tooling**: Generate based on `DESIGN_ANALYSIS` technical features (oklch, calc, dark mode)
   - **Design Principles**: Generate based on `DESIGN_ANALYSIS` characteristics
   - **Complete Implementation Example**: Include React component example with token adaptation
   - **Design Token Values**: Iterate `DESIGN_TOKENS_DATA`, `ANIMATION_TOKENS_DATA` and display all key-value pairs with DEFAULT annotations
3. **Write to file**: Use Write tool to save to `.claude/skills/style-{package_name}/SKILL.md`

**Variable Replacement Map**:
- `{package_name}` → PACKAGE_NAME
- `{intelligent_description}` → Generated description from Step 2
- `{component_count}` → COMPONENT_COUNT
- `{universal_count}` → UNIVERSAL_COUNT
- `{specialized_count}` → SPECIALIZED_COUNT
- `{universal_components_list}` → UNIVERSAL_COMPONENTS (comma-separated)
- `{has_animations}` → HAS_ANIMATIONS

**Dynamic Content Generation**:

See template file for complete structure. Key dynamic sections:
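A minimal sketch of the variable-replacement step using `sed`, assuming a hypothetical two-line template fragment (the real template is loaded via `cat` in Step 3; the values here are invented):

```shell
# Hypothetical template fragment standing in for skill-md-template.md
cat > /tmp/skill-fragment.md <<'EOF'
name: style-{package_name}
Components: {component_count} total ({universal_count} universal, {specialized_count} specialized)
EOF

# Values collected in Phase 2 (illustrative)
PACKAGE_NAME="main-app-style-v1"
COMPONENT_COUNT=8
UNIVERSAL_COUNT=5
SPECIALIZED_COUNT=3

# Substitute each {variable} placeholder with its value
RESULT=$(sed \
  -e "s/{package_name}/$PACKAGE_NAME/g" \
  -e "s/{component_count}/$COMPONENT_COUNT/g" \
  -e "s/{universal_count}/$UNIVERSAL_COUNT/g" \
  -e "s/{specialized_count}/$SPECIALIZED_COUNT/g" \
  /tmp/skill-fragment.md)

echo "$RESULT"
```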

1. **Prerequisites & Tooling** (based on DESIGN_ANALYSIS technical features):
   - IF uses_oklch → Include PostCSS plugin requirement (`postcss-oklab-function`)
   - IF uses_calc → Include preprocessor requirement for calc() expressions
   - IF has_dark_mode → Include dark mode implementation mechanism (class or media query)
   - ALWAYS include browser support, jq installation, and local server setup

2. **Design Principles** (based on DESIGN_ANALYSIS):
   - IF has_colors → Include "Color System" principle with semantic pattern
   - IF spacing_pattern detected → Include "Spatial Rhythm" with unified scale description (actual token values)
   - IF has_typography_hierarchy → Include "Typographic System" with scale examples
   - IF has_radius → Include "Shape Language" with style characteristic
   - IF has_shadows → Include "Depth & Elevation" with elevation pattern
   - IF has_animations → Include "Motion & Timing" with duration range
   - ALWAYS include "Accessibility First" principle

3. **Design Token Values** (iterate from read data):
   - Colors: Iterate `DESIGN_TOKENS_DATA.colors`
   - Typography: Iterate `DESIGN_TOKENS_DATA.typography`
   - Spacing: Iterate `DESIGN_TOKENS_DATA.spacing`
   - Border Radius: Iterate `DESIGN_TOKENS_DATA.border_radius` with calc() explanations
   - Shadows: Iterate `DESIGN_TOKENS_DATA.shadows` with DEFAULT token annotations
   - Animations (if available): Iterate `ANIMATION_TOKENS_DATA.duration` and `ANIMATION_TOKENS_DATA.easing`

**Step 4: Verify SKILL.md Created**

```bash
bash(test -f .claude/skills/style-${package_name}/SKILL.md && echo "success" || echo "failed")
```

**TodoWrite Update**: Mark all todos as completed

---

### Completion Message

Display a simple completion message with key information:

```
✅ SKILL memory generated for style package: {package_name}

📁 Location: .claude/skills/style-{package_name}/SKILL.md

📊 Package Summary:
- {component_count} components ({universal_count} universal, {specialized_count} specialized)
- Design tokens: colors, typography, spacing, shadows{animations_note}

💡 Usage: /memory:load-skill-memory style-{package_name} "your task description"
```

Variables: `{package_name}`, `{component_count}`, `{universal_count}`, `{specialized_count}`, `{animations_note}` (", animations" if exists)

---
@@ -727,144 +355,42 @@ Open interactive showcase:
 
 1. **Check Before Generate**: Verify package exists before attempting SKILL generation
 2. **Respect Existing SKILL**: Don't overwrite unless --regenerate flag provided
-3. **Extract Primary References**: Always extract and display key design values (colors, typography, spacing, border radius, shadows, animations)
-4. **Include Adjustment Guidance**: Provide clear guidelines on when and how to dynamically adjust design tokens
-5. **Progressive Loading**: Always include all 3 levels (0-2) with clear token estimates
-6. **Intelligent Description**: Extract component count and key features from metadata
+3. **Load Templates via cat**: Use `cat ~/.claude/workflows/cli-templates/memory/style-skill-memory/{template}` to load templates
+4. **Variable Substitution**: Replace all `{variable}` placeholders with actual values
+5. **Technical Feature Detection**: Analyze tokens for modern features (oklch, calc, dark mode) and generate appropriate Prerequisites section
+6. **Dynamic Content Generation**: Generate sections based on DESIGN_ANALYSIS characteristics
+7. **Unified Spacing Scale**: Use actual token values as primary scale reference, avoid contradictory pattern descriptions
+8. **Direct Iteration**: Iterate data structures (DESIGN_TOKENS_DATA, etc.) for token values
+9. **Annotate Special Tokens**: Add comments for DEFAULT tokens and calc() expressions
+10. **Embed jq Commands**: Include bash/jq commands in SKILL.md for dynamic loading
+11. **Progressive Loading**: Include all 3 levels (0-2) with specific jq commands
+12. **Complete Examples**: Include end-to-end implementation examples (React components)
+13. **Intelligent Description**: Extract component count and key features from metadata
+14. **Emphasize Flexibility**: Strongly warn against rigid copying - values are references for creative adaptation
 
-### SKILL Description Format
+### Template Files Location
 
-**Template**:
-```
-{package_name} project-independent design system with {universal_count} universal layout templates and interactive preview (located at .workflow/reference_style/{package_name}). Load when working with reusable UI components, design tokens, layout patterns, or implementing visual consistency. Excludes {specialized_count} project-specific components.
-```
-
-**Required Elements**:
-- Package name
-- Universal layout template count (emphasize reusability)
-- Project independence statement
-- Specialized component exclusion notice
-- Location (full path)
-- Trigger keywords (reusable UI components, design tokens, layout patterns, visual consistency)
-- Action verbs (working with, analyzing, implementing)
-
-### Primary Design References Extraction
-
-**Required Data Extraction** (from design-tokens.json):
-- Colors: Primary, secondary, accent colors (top 3-5)
-- Typography: Font families for headings and body text
-- Spacing Scale: Base spacing values (xs, sm, md, lg, xl)
-- Border Radius: All radius tokens
-- Shadows: Shadow definitions (top 3 elevation levels)
-
-**Component Classification Extraction** (from layout-templates.json):
-- Universal Count: Number of components with `component_type: "universal"`
-- Specialized Count: Number of components with `component_type: "specialized"`
-- Universal Component Names: List of universal component names (first 10)
-
-**Optional Data Extraction** (from animation-tokens.json if available):
-- Animation Durations: All duration tokens
-- Easing Functions: Top 3 easing functions
-
-**Extraction Format**:
-Use `jq` to extract tokens from JSON files. Each token should include key and value.
-For component classification, filter by `component_type` field.
-
-### Dynamic Adjustment Guidelines
-
-**Include in SKILL.md**:
-1. **Usage Guidelines per Category**: Specific guidance for each token category
-2. **Adjustment Strategies**: When to adjust design references
-3. **Practical Examples**: Context-specific adaptation scenarios
-4. **Best Practices**: How to maintain design system coherence while adjusting
-
-### Progressive Loading Structure
-
-**Level 0** (~5K tokens):
-- design-tokens.json
-
-**Level 1** (~12K tokens):
-- Level 0 files
-- layout-templates.json
-
-**Level 2** (~20K tokens):
-- Level 1 files
-- animation-tokens.json (if exists)
-- preview.html
-- preview.css
-
----
-
-## Benefits
-
-- **Project Independence**: Clear separation between universal (reusable) and specialized (project-specific) components
-- **Component Filtering**: Automatic classification helps identify which patterns are truly reusable
-- **Fast Context Loading**: Progressive levels for efficient token usage
-- **Primary Design References**: Extracted key design values (colors, typography, spacing, etc.) displayed prominently
-- **Dynamic Adjustment Guidance**: Clear instructions on when and how to adjust design tokens
-- **Intelligent Triggering**: Keywords optimize SKILL activation
-- **Complete Reference**: All package files accessible through SKILL
-- **Easy Regeneration**: Simple --regenerate flag for updates
-- **Clear Structure**: Organized levels by use case with component type filtering
-- **Practical Usage Guidelines**: Context-specific adjustment strategies and component selection criteria
-
----
-
-## Architecture
 
 ```
-style-skill-memory
-├─ Phase 1: Validate
-│  ├─ Parse package name from argument or auto-detect
-│  ├─ Check package exists in .workflow/reference_style/
-│  └─ Check if SKILL already exists (skip if exists and no --regenerate)
-│
-├─ Phase 2: Read Package Data & Extract Primary References
-│  ├─ Count components from layout-templates.json
-│  ├─ Extract component types list
-│  ├─ Extract primary colors from design-tokens.json (top 3-5)
-│  ├─ Extract typography (font families)
-│  ├─ Extract spacing scale (base values)
-│  ├─ Extract border radius tokens
-│  ├─ Extract shadow definitions (top 3)
-│  ├─ Extract animation tokens (if available)
-│  └─ Count total files in package
-│
-└─ Phase 3: Generate SKILL.md
-   ├─ Create SKILL directory
-   ├─ Generate intelligent description with keywords
-   ├─ Write SKILL.md with complete structure:
-   │  ├─ Package Overview
-   │  ├─ Primary Design References
-   │  │  ├─ Colors with usage guidelines
-   │  │  ├─ Typography with usage guidelines
-   │  │  ├─ Spacing with usage guidelines
-   │  │  ├─ Border Radius with usage guidelines
-   │  │  ├─ Shadows with usage guidelines
-   │  │  └─ Animation & Timing (if available)
-   │  ├─ Design Adaptation Strategies
-   │  │  ├─ When to adjust design references
-   │  │  └─ How to apply adjustments
-   │  ├─ Progressive Loading (3 levels)
-   │  ├─ Interactive Preview
-   │  ├─ Usage Guidelines
-   │  ├─ Package Structure
-   │  ├─ Regeneration
-   │  └─ Related Commands
-   ├─ Verify SKILL.md created successfully
-   └─ Display completion message with extracted design references
+Phase 1: Validate
+├─ Parse package_name
+├─ Check PACKAGE_DIR exists
+└─ Check SKILL_DIR exists (skip if exists and no --regenerate)
 
-Data Flow:
-design-tokens.json → jq extraction → PRIMARY_COLORS, TYPOGRAPHY_FONTS,
-                                     SPACING_SCALE, BORDER_RADIUS, SHADOWS
-animation-tokens.json → jq extraction → ANIMATION_DURATIONS, EASING_FUNCTIONS
-layout-templates.json → jq extraction → COMPONENT_COUNT, UNIVERSAL_COUNT,
-                                        SPECIALIZED_COUNT, UNIVERSAL_COMPONENTS
-                     → component_type filtering → Universal vs Specialized classification
+Phase 2: Read & Analyze
+├─ Read design-tokens.json → DESIGN_TOKENS_DATA
+├─ Read layout-templates.json → LAYOUT_TEMPLATES_DATA
+├─ Read animation-tokens.json → ANIMATION_TOKENS_DATA (if exists)
+├─ Extract Metadata → COMPONENT_COUNT, UNIVERSAL_COUNT, etc.
+└─ Analyze Design System → DESIGN_ANALYSIS (characteristics)
 
-Extracted data → SKILL.md generation → Primary Design References section
-                                     → Component Classification section
-                                     → Dynamic Adjustment Guidelines
-                                     → Project Independence warnings
-                                     → Completion message display
+Phase 3: Generate
+├─ Create SKILL directory
+├─ Generate intelligent description
+├─ Load SKILL.md template (cat command)
+├─ Replace variables and generate dynamic content
+├─ Write SKILL.md
+├─ Verify creation
+├─ Load completion message template (cat command)
+└─ Display completion message
 ```
@@ -32,7 +32,7 @@ Identify inconsistencies, duplications, ambiguities, and underspecified items be
 IF --session parameter provided:
   session_id = provided session
 ELSE:
-  CHECK: .workflow/.active-* marker files
+  CHECK: find .workflow/sessions/ -name "WFS-*" -type d
   IF active_session EXISTS:
     session_id = get_active_session()
   ELSE:
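
> Editor's aside (not part of the patch): the hunk above replaces marker-file lookup with plain directory discovery. A minimal shell sketch of that discovery logic, with hypothetical `WFS-demo-*` session names, might look like this:

```shell
#!/bin/sh
# Sketch: discover sessions by directory location alone (no .active-* markers).
# The session directories created here are hypothetical examples.
mkdir -p .workflow/sessions/WFS-demo-a .workflow/sessions/WFS-demo-b

sessions=$(find .workflow/sessions/ -name 'WFS-*' -type d | sort)
count=$(printf '%s\n' "$sessions" | grep -c 'WFS-')

if [ "$count" -eq 1 ]; then
  echo "Using single session: $sessions"
else
  echo "Found $count sessions; prompt user to select one"
fi
```

Because a session's existence is now its location, no cleanup of stale marker files is needed; deleting the directory deletes the state.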
@@ -40,7 +40,7 @@ ELSE:
   EXIT
 
 # Derive absolute paths
-session_dir = .workflow/WFS-{session}
+session_dir = .workflow/sessions/WFS-{session}
 brainstorm_dir = session_dir/.brainstorming
 task_dir = session_dir/.task
 
@@ -333,7 +333,7 @@ Output a Markdown report (no file writes) with the following structure:
 
 #### TodoWrite-Based Remediation Workflow
 
-**Report Location**: `.workflow/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md`
+**Report Location**: `.workflow/sessions/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md`
 
 **Recommended Workflow**:
 1. **Create TodoWrite Task List**: Extract all findings from report
@@ -361,7 +361,7 @@ Priority Order:
 
 **Save Analysis Report**:
 ```bash
-report_path = ".workflow/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md"
+report_path = ".workflow/sessions/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md"
 Write(report_path, full_report_content)
 ```
 
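
> Editor's aside (not part of the patch): as a hedged illustration of the updated report path, the write can be sketched in shell; the session id `demo` is a hypothetical example:

```shell
#!/bin/sh
# Sketch: persist the verification report under the session directory.
session="demo"
report_path=".workflow/sessions/WFS-${session}/.process/ACTION_PLAN_VERIFICATION.md"

# Ensure the .process subdirectory exists before writing.
mkdir -p "$(dirname "$report_path")"
printf '# Action Plan Verification\n' > "$report_path"
```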
@@ -404,12 +404,12 @@ TodoWrite([
 **File Modification Workflow**:
 ```bash
 # For task JSON modifications:
-1. Read(.workflow/WFS-{session}/.task/IMPL-X.Y.json)
+1. Read(.workflow/sessions/WFS-{session}/.task/IMPL-X.Y.json)
 2. Edit() to apply fixes
 3. Mark todo as completed
 
 # For IMPL_PLAN modifications:
-1. Read(.workflow/WFS-{session}/IMPL_PLAN.md)
+1. Read(.workflow/sessions/WFS-{session}/IMPL_PLAN.md)
 2. Edit() to apply strategic changes
 3. Mark todo as completed
 ```
@@ -46,10 +46,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
 ### Phase 1: Session & Framework Detection
 ```bash
 # Check active session and framework
-CHECK: .workflow/.active-* marker files
+CHECK: find .workflow/sessions/ -name "WFS-*" -type d
 IF active_session EXISTS:
   session_id = get_active_session()
-  brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
+  brainstorm_dir = .workflow/sessions/WFS-{session}/.brainstorming/
 
 CHECK: brainstorm_dir/guidance-specification.md
 IF EXISTS:
@@ -162,7 +162,7 @@ IF update_mode = "incremental":
 
 ### Output Files
 ```
-.workflow/WFS-[topic]/.brainstorming/
+.workflow/sessions/WFS-[topic]/.brainstorming/
 ├── guidance-specification.md   # Input: Framework (if exists)
 └── api-designer/
     └── analysis.md             # ★ OUTPUT: Framework-based analysis
@@ -181,7 +181,7 @@ IF update_mode = "incremental":
 Session detection and selection:
 ```bash
 # Check for active sessions
-active_sessions=$(find .workflow -name ".active-*" 2>/dev/null)
+active_sessions=$(find .workflow/sessions/ -name "WFS-*" -type d 2>/dev/null)
 if [ multiple_sessions ]; then
   prompt_user_to_select_session()
 else
@@ -280,7 +280,7 @@ TodoWrite tracking for two-step process:
 
 ### Output Location
 ```
-.workflow/WFS-{topic-slug}/.brainstorming/api-designer/
+.workflow/sessions/WFS-{topic-slug}/.brainstorming/api-designer/
 ├── analysis.md            # Primary API design analysis
 ├── api-specification.md   # Detailed endpoint specifications (OpenAPI/Swagger)
 ├── data-contracts.md      # Request/response schemas and validation rules
@@ -531,7 +531,7 @@ Upon completion, update `workflow-session.json`:
   "api_designer": {
     "status": "completed",
     "completed_at": "timestamp",
-    "output_directory": ".workflow/WFS-{topic}/.brainstorming/api-designer/",
+    "output_directory": ".workflow/sessions/WFS-{topic}/.brainstorming/api-designer/",
     "key_insights": ["endpoint_design", "versioning_strategy", "data_contracts"]
   }
 }
@@ -10,7 +10,7 @@ allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*)
 Six-phase workflow: **Automatic project context collection** → Extract topic challenges → Select roles → Generate task-specific questions → Detect conflicts → Generate confirmed guidance (declarative statements only).
 
 **Input**: `"GOAL: [objective] SCOPE: [boundaries] CONTEXT: [background]" [--count N]`
-**Output**: `.workflow/WFS-{topic}/.brainstorming/guidance-specification.md` (CONFIRMED/SELECTED format)
+**Output**: `.workflow/sessions/WFS-{topic}/.brainstorming/guidance-specification.md` (CONFIRMED/SELECTED format)
 **Core Principle**: Questions dynamically generated from project context + topic keywords/challenges, NOT from generic templates
 
 **Parameters**:
@@ -32,7 +32,7 @@ Six-phase workflow: **Automatic project context collection** → Extract topic c
 **Standalone Mode**:
 ```json
 [
-  {"content": "Initialize session (.workflow/.active-* check, parse --count parameter)", "status": "pending", "activeForm": "Initializing"},
+  {"content": "Initialize session (.workflow/sessions/ session check, parse --count parameter)", "status": "pending", "activeForm": "Initializing"},
   {"content": "Phase 0: Automatic project context collection (call context-gather)", "status": "pending", "activeForm": "Phase 0 context collection"},
   {"content": "Phase 1: Extract challenges, output 2-4 task-specific questions, wait for user input", "status": "pending", "activeForm": "Phase 1 topic analysis"},
   {"content": "Phase 2: Recommend count+2 roles, output role selection, wait for user input", "status": "pending", "activeForm": "Phase 2 role selection"},
@@ -133,7 +133,7 @@ b) {role-name} ({中文名})
 ## Execution Phases
 
 ### Session Management
-- Check `.workflow/.active-*` markers first
+- Check `.workflow/sessions/` for existing sessions
 - Multiple sessions → Prompt selection | Single → Use it | None → Create `WFS-[topic-slug]`
 - Parse `--count N` parameter from user input (default: 3 if not specified)
 - Store decisions in `workflow-session.json` including count parameter
@@ -145,7 +145,7 @@ b) {role-name} ({中文名})
 **Detection Mechanism** (execute first):
 ```javascript
 // Check if context-package already exists
-const contextPackagePath = `.workflow/WFS-{session-id}/.process/context-package.json`;
+const contextPackagePath = `.workflow/sessions/WFS-{session-id}/.process/context-package.json`;
 
 if (file_exists(contextPackagePath)) {
   // Validate package
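
> Editor's aside (not part of the patch): the same detection mechanism can be sketched in shell against the relocated path; the session id `demo` is a hypothetical example:

```shell
#!/bin/sh
# Sketch: reuse an existing context package if present, else collect context.
session_id="demo"
pkg=".workflow/sessions/WFS-${session_id}/.process/context-package.json"

if [ -f "$pkg" ]; then
  echo "Context package exists - validate and reuse"
else
  echo "No context package - run context collection"
fi
```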
@@ -229,7 +229,7 @@ Report completion with statistics.
 
 **Steps**:
 1. **Load Phase 0 context** (if available):
-   - Read `.workflow/WFS-{session-id}/.process/context-package.json`
+   - Read `.workflow/sessions/WFS-{session-id}/.process/context-package.json`
    - Extract: tech_stack, existing modules, conflict_risk, relevant files
 
 2. **Deep topic analysis** (context-aware):
@@ -449,7 +449,7 @@ FOR each selected role:
 
 ## Output Document Template
 
-**File**: `.workflow/WFS-{topic}/.brainstorming/guidance-specification.md`
+**File**: `.workflow/sessions/WFS-{topic}/.brainstorming/guidance-specification.md`
 
 ```markdown
 # [Project] - Confirmed Guidance Specification
@@ -596,8 +596,7 @@ ELSE:
 ## File Structure
 
 ```
-.workflow/WFS-[topic]/
-├── .active-brainstorming
+.workflow/sessions/WFS-[topic]/
 ├── workflow-session.json         # Session metadata ONLY
 └── .brainstorming/
     └── guidance-specification.md # Full guidance content
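
> Editor's aside (not part of the patch): with the `.active-brainstorming` marker dropped above, session creation reduces to making the directory. A hedged sketch, with a hypothetical topic slug:

```shell
#!/bin/sh
# Sketch: create a session skeleton; its presence under .workflow/sessions/
# is the only "active" state - no marker file is written.
topic="payment-flow"   # hypothetical topic slug
session_dir=".workflow/sessions/WFS-${topic}"

mkdir -p "${session_dir}/.brainstorming"
printf '{"topic":"%s"}\n' "$topic" > "${session_dir}/workflow-session.json"

# Discovery now finds the session by location alone.
find .workflow/sessions/ -name 'WFS-*' -type d
```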
@@ -85,7 +85,7 @@ This workflow runs **fully autonomously** once triggered. Phase 1 (artifacts) ha
 **Validation**:
 - guidance-specification.md created with confirmed decisions
 - workflow-session.json contains selected_roles[] (metadata only, no content duplication)
-- Session directory `.workflow/WFS-{topic}/.brainstorming/` exists
+- Session directory `.workflow/sessions/WFS-{topic}/.brainstorming/` exists
 
 **TodoWrite Update (Phase 1 SlashCommand invoked - tasks attached)**:
 ```json
@@ -132,13 +132,13 @@ Execute {role-name} analysis for existing topic framework
 
 ## Context Loading
 ASSIGNED_ROLE: {role-name}
-OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/{role}/
+OUTPUT_LOCATION: .workflow/sessions/WFS-{session}/.brainstorming/{role}/
 TOPIC: {user-provided-topic}
 
 ## Flow Control Steps
 1. **load_topic_framework**
    - Action: Load structured topic discussion framework
-   - Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
+   - Command: Read(.workflow/sessions/WFS-{session}/.brainstorming/guidance-specification.md)
    - Output: topic_framework_content
 
 2. **load_role_template**
@@ -148,7 +148,7 @@ TOPIC: {user-provided-topic}
 
 3. **load_session_metadata**
    - Action: Load session metadata and original user intent
-   - Command: Read(.workflow/WFS-{session}/workflow-session.json)
+   - Command: Read(.workflow/sessions/WFS-{session}/workflow-session.json)
    - Output: session_context (contains original user prompt as PRIMARY reference)
 
 4. **load_style_skill** (ONLY for ui-designer role when style_skill_package exists)
@@ -194,7 +194,7 @@ TOPIC: {user-provided-topic}
 - guidance-specification.md path
 
 **Validation**:
-- Each role creates `.workflow/WFS-{topic}/.brainstorming/{role}/analysis.md` (primary file)
+- Each role creates `.workflow/sessions/WFS-{topic}/.brainstorming/{role}/analysis.md` (primary file)
 - If content is large (>800 lines), may split to `analysis-1.md`, `analysis-2.md` (max 3 files total)
 - **File naming pattern**: ALL files MUST start with `analysis` prefix (use `analysis*.md` for globbing)
 - **FORBIDDEN naming**: No `recommendations.md`, `recommendations-*.md`, or any non-`analysis` prefixed files
@@ -245,7 +245,7 @@ TOPIC: {user-provided-topic}
 **Input**: `sessionId` from Phase 1
 
 **Validation**:
-- `.workflow/WFS-{topic}/.brainstorming/synthesis-specification.md` exists
+- `.workflow/sessions/WFS-{topic}/.brainstorming/synthesis-specification.md` exists
 - Synthesis references all role analyses
 
 **TodoWrite Update (Phase 3 SlashCommand invoked - tasks attached)**:
@@ -280,7 +280,7 @@ TOPIC: {user-provided-topic}
 ```
 Brainstorming complete for session: {sessionId}
 Roles analyzed: {count}
-Synthesis: .workflow/WFS-{topic}/.brainstorming/synthesis-specification.md
+Synthesis: .workflow/sessions/WFS-{topic}/.brainstorming/synthesis-specification.md
 
 ✅ Next Steps:
 1. /workflow:concept-clarify --session {sessionId}   # Optional refinement
@@ -392,31 +392,31 @@ CONTEXT_VARS:
 
 ## Session Management
 
-**⚡ FIRST ACTION**: Check for `.workflow/.active-*` markers before Phase 1
+**⚡ FIRST ACTION**: Check `.workflow/sessions/` for existing sessions before Phase 1
 
 **Multiple Sessions Support**:
-- Different Claude instances can have different active brainstorming sessions
-- If multiple active sessions found, prompt user to select
-- If single active session found, use it
-- If no active session exists, create `WFS-[topic-slug]`
+- Different Claude instances can have different brainstorming sessions
+- If multiple sessions found, prompt user to select
+- If single session found, use it
+- If no session exists, create `WFS-[topic-slug]`
 
 **Session Continuity**:
-- MUST use selected active session for all phases
+- MUST use selected session for all phases
 - Each role's context stored in session directory
 - Session isolation: Each session maintains independent state
 
 ## Output Structure
 
 **Phase 1 Output**:
-- `.workflow/WFS-{topic}/.brainstorming/guidance-specification.md` (framework content)
-- `.workflow/WFS-{topic}/workflow-session.json` (metadata: selected_roles[], topic, timestamps, style_skill_package)
+- `.workflow/sessions/WFS-{topic}/.brainstorming/guidance-specification.md` (framework content)
+- `.workflow/sessions/WFS-{topic}/workflow-session.json` (metadata: selected_roles[], topic, timestamps, style_skill_package)
 
 **Phase 2 Output**:
-- `.workflow/WFS-{topic}/.brainstorming/{role}/analysis.md` (one per role)
+- `.workflow/sessions/WFS-{topic}/.brainstorming/{role}/analysis.md` (one per role)
 - `.superdesign/design_iterations/` (ui-designer artifacts, if --style-skill provided)
 
 **Phase 3 Output**:
-- `.workflow/WFS-{topic}/.brainstorming/synthesis-specification.md` (integrated analysis)
+- `.workflow/sessions/WFS-{topic}/.brainstorming/synthesis-specification.md` (integrated analysis)
 
 **⚠️ Storage Separation**: Guidance content in .md files, metadata in .json (no duplication)
 **⚠️ Style References**: When --style-skill provided, workflow-session.json stores style_skill_package name, ui-designer loads from `.claude/skills/style-{package-name}/`
@@ -446,8 +446,7 @@ CONTEXT_VARS:
 
 **File Structure**:
 ```
-.workflow/WFS-[topic]/
-├── .active-brainstorming
+.workflow/sessions/WFS-[topic]/
 ├── workflow-session.json             # Session metadata ONLY
 └── .brainstorming/
     ├── guidance-specification.md     # Framework (Phase 1)
@@ -47,10 +47,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
 ### Phase 1: Session & Framework Detection
 ```bash
 # Check active session and framework
-CHECK: .workflow/.active-* marker files
+CHECK: find .workflow/sessions/ -name "WFS-*" -type d
 IF active_session EXISTS:
   session_id = get_active_session()
-  brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
+  brainstorm_dir = .workflow/sessions/WFS-{session}/.brainstorming/
 
 CHECK: brainstorm_dir/guidance-specification.md
 IF EXISTS:
@@ -87,13 +87,13 @@ Execute data-architect analysis for existing topic framework
 
 ## Context Loading
 ASSIGNED_ROLE: data-architect
-OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/data-architect/
+OUTPUT_LOCATION: .workflow/sessions/WFS-{session}/.brainstorming/data-architect/
 ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
 
 ## Flow Control Steps
 1. **load_topic_framework**
    - Action: Load structured topic discussion framework
-   - Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
+   - Command: Read(.workflow/sessions/WFS-{session}/.brainstorming/guidance-specification.md)
    - Output: topic_framework_content
 
 2. **load_role_template**
@@ -103,7 +103,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
 
 3. **load_session_metadata**
    - Action: Load session metadata and existing context
-   - Command: Read(.workflow/WFS-{session}/workflow-session.json)
+   - Command: Read(.workflow/sessions/WFS-{session}/workflow-session.json)
    - Output: session_context
 
 ## Analysis Requirements
@@ -163,7 +163,7 @@ TodoWrite({
|
|||||||
|
|
||||||
### Framework-Based Analysis
|
### Framework-Based Analysis
|
||||||
```
|
```
|
||||||
.workflow/WFS-{session}/.brainstorming/data-architect/
|
.workflow/sessions/WFS-{session}/.brainstorming/data-architect/
|
||||||
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
|
└── analysis.md # Structured analysis addressing guidance-specification.md discussion points
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -208,7 +208,7 @@ TodoWrite({
|
|||||||
"data_architect": {
|
"data_architect": {
|
||||||
"status": "completed",
|
"status": "completed",
|
||||||
"framework_addressed": true,
|
"framework_addressed": true,
|
||||||
"output_location": ".workflow/WFS-{session}/.brainstorming/data-architect/analysis.md",
|
"output_location": ".workflow/sessions/WFS-{session}/.brainstorming/data-architect/analysis.md",
|
||||||
"framework_reference": "@../guidance-specification.md"
|
"framework_reference": "@../guidance-specification.md"
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -27,10 +27,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
 ### Phase 1: Session & Framework Detection
 ```bash
 # Check active session and framework
-CHECK: .workflow/.active-* marker files
+CHECK: find .workflow/sessions/ -name "WFS-*" -type d
 IF active_session EXISTS:
     session_id = get_active_session()
-    brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
+    brainstorm_dir = .workflow/sessions/WFS-{session}/.brainstorming/

 CHECK: brainstorm_dir/guidance-specification.md
 IF EXISTS:
@@ -67,13 +67,13 @@ Execute product-manager analysis for existing topic framework

 ## Context Loading
 ASSIGNED_ROLE: product-manager
-OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/product-manager/
+OUTPUT_LOCATION: .workflow/sessions/WFS-{session}/.brainstorming/product-manager/
 ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}

 ## Flow Control Steps
 1. **load_topic_framework**
    - Action: Load structured topic discussion framework
-   - Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
+   - Command: Read(.workflow/sessions/WFS-{session}/.brainstorming/guidance-specification.md)
    - Output: topic_framework_content

 2. **load_role_template**
@@ -83,7 +83,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}

 3. **load_session_metadata**
    - Action: Load session metadata and existing context
-   - Command: Read(.workflow/WFS-{session}/workflow-session.json)
+   - Command: Read(.workflow/sessions/WFS-{session}/workflow-session.json)
    - Output: session_context

 ## Analysis Requirements
@@ -143,7 +143,7 @@ TodoWrite({

 ### Framework-Based Analysis
 ```
-.workflow/WFS-{session}/.brainstorming/product-manager/
+.workflow/sessions/WFS-{session}/.brainstorming/product-manager/
 └── analysis.md   # Structured analysis addressing guidance-specification.md discussion points
 ```

@@ -188,7 +188,7 @@ TodoWrite({
   "product_manager": {
     "status": "completed",
     "framework_addressed": true,
-    "output_location": ".workflow/WFS-{session}/.brainstorming/product-manager/analysis.md",
+    "output_location": ".workflow/sessions/WFS-{session}/.brainstorming/product-manager/analysis.md",
     "framework_reference": "@../guidance-specification.md"
   }
 }
@@ -27,10 +27,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
 ### Phase 1: Session & Framework Detection
 ```bash
 # Check active session and framework
-CHECK: .workflow/.active-* marker files
+CHECK: find .workflow/sessions/ -name "WFS-*" -type d
 IF active_session EXISTS:
     session_id = get_active_session()
-    brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
+    brainstorm_dir = .workflow/sessions/WFS-{session}/.brainstorming/

 CHECK: brainstorm_dir/guidance-specification.md
 IF EXISTS:
@@ -67,13 +67,13 @@ Execute product-owner analysis for existing topic framework

 ## Context Loading
 ASSIGNED_ROLE: product-owner
-OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/product-owner/
+OUTPUT_LOCATION: .workflow/sessions/WFS-{session}/.brainstorming/product-owner/
 ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}

 ## Flow Control Steps
 1. **load_topic_framework**
    - Action: Load structured topic discussion framework
-   - Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
+   - Command: Read(.workflow/sessions/WFS-{session}/.brainstorming/guidance-specification.md)
    - Output: topic_framework_content

 2. **load_role_template**
@@ -83,7 +83,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}

 3. **load_session_metadata**
    - Action: Load session metadata and existing context
-   - Command: Read(.workflow/WFS-{session}/workflow-session.json)
+   - Command: Read(.workflow/sessions/WFS-{session}/workflow-session.json)
    - Output: session_context

 ## Analysis Requirements
@@ -143,7 +143,7 @@ TodoWrite({

 ### Framework-Based Analysis
 ```
-.workflow/WFS-{session}/.brainstorming/product-owner/
+.workflow/sessions/WFS-{session}/.brainstorming/product-owner/
 └── analysis.md   # Structured analysis addressing guidance-specification.md discussion points
 ```

@@ -188,7 +188,7 @@ TodoWrite({
   "product_owner": {
     "status": "completed",
     "framework_addressed": true,
-    "output_location": ".workflow/WFS-{session}/.brainstorming/product-owner/analysis.md",
+    "output_location": ".workflow/sessions/WFS-{session}/.brainstorming/product-owner/analysis.md",
     "framework_reference": "@../guidance-specification.md"
   }
 }
@@ -27,10 +27,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
 ### Phase 1: Session & Framework Detection
 ```bash
 # Check active session and framework
-CHECK: .workflow/.active-* marker files
+CHECK: find .workflow/sessions/ -name "WFS-*" -type d
 IF active_session EXISTS:
     session_id = get_active_session()
-    brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
+    brainstorm_dir = .workflow/sessions/WFS-{session}/.brainstorming/

 CHECK: brainstorm_dir/guidance-specification.md
 IF EXISTS:
@@ -67,13 +67,13 @@ Execute scrum-master analysis for existing topic framework

 ## Context Loading
 ASSIGNED_ROLE: scrum-master
-OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/scrum-master/
+OUTPUT_LOCATION: .workflow/sessions/WFS-{session}/.brainstorming/scrum-master/
 ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}

 ## Flow Control Steps
 1. **load_topic_framework**
    - Action: Load structured topic discussion framework
-   - Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
+   - Command: Read(.workflow/sessions/WFS-{session}/.brainstorming/guidance-specification.md)
    - Output: topic_framework_content

 2. **load_role_template**
@@ -83,7 +83,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}

 3. **load_session_metadata**
    - Action: Load session metadata and existing context
-   - Command: Read(.workflow/WFS-{session}/workflow-session.json)
+   - Command: Read(.workflow/sessions/WFS-{session}/workflow-session.json)
    - Output: session_context

 ## Analysis Requirements
@@ -143,7 +143,7 @@ TodoWrite({

 ### Framework-Based Analysis
 ```
-.workflow/WFS-{session}/.brainstorming/scrum-master/
+.workflow/sessions/WFS-{session}/.brainstorming/scrum-master/
 └── analysis.md   # Structured analysis addressing guidance-specification.md discussion points
 ```

@@ -188,7 +188,7 @@ TodoWrite({
   "scrum_master": {
     "status": "completed",
     "framework_addressed": true,
-    "output_location": ".workflow/WFS-{session}/.brainstorming/scrum-master/analysis.md",
+    "output_location": ".workflow/sessions/WFS-{session}/.brainstorming/scrum-master/analysis.md",
     "framework_reference": "@../guidance-specification.md"
   }
 }
@@ -27,10 +27,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
 ### Phase 1: Session & Framework Detection
 ```bash
 # Check active session and framework
-CHECK: .workflow/.active-* marker files
+CHECK: find .workflow/sessions/ -name "WFS-*" -type d
 IF active_session EXISTS:
     session_id = get_active_session()
-    brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
+    brainstorm_dir = .workflow/sessions/WFS-{session}/.brainstorming/

 CHECK: brainstorm_dir/guidance-specification.md
 IF EXISTS:
@@ -67,13 +67,13 @@ Execute subject-matter-expert analysis for existing topic framework

 ## Context Loading
 ASSIGNED_ROLE: subject-matter-expert
-OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/subject-matter-expert/
+OUTPUT_LOCATION: .workflow/sessions/WFS-{session}/.brainstorming/subject-matter-expert/
 ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}

 ## Flow Control Steps
 1. **load_topic_framework**
    - Action: Load structured topic discussion framework
-   - Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
+   - Command: Read(.workflow/sessions/WFS-{session}/.brainstorming/guidance-specification.md)
    - Output: topic_framework_content

 2. **load_role_template**
@@ -83,7 +83,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}

 3. **load_session_metadata**
    - Action: Load session metadata and existing context
-   - Command: Read(.workflow/WFS-{session}/workflow-session.json)
+   - Command: Read(.workflow/sessions/WFS-{session}/workflow-session.json)
    - Output: session_context

 ## Analysis Requirements
@@ -143,7 +143,7 @@ TodoWrite({

 ### Framework-Based Analysis
 ```
-.workflow/WFS-{session}/.brainstorming/subject-matter-expert/
+.workflow/sessions/WFS-{session}/.brainstorming/subject-matter-expert/
 └── analysis.md   # Structured analysis addressing guidance-specification.md discussion points
 ```

@@ -188,7 +188,7 @@ TodoWrite({
   "subject_matter_expert": {
     "status": "completed",
     "framework_addressed": true,
-    "output_location": ".workflow/WFS-{session}/.brainstorming/subject-matter-expert/analysis.md",
+    "output_location": ".workflow/sessions/WFS-{session}/.brainstorming/subject-matter-expert/analysis.md",
     "framework_reference": "@../guidance-specification.md"
   }
 }
@@ -48,7 +48,7 @@ Three-phase workflow to eliminate ambiguities and enhance conceptual depth in ro

 ### Phase 1: Discovery & Validation

-1. **Detect Session**: Use `--session` parameter or `.workflow/.active-*` marker
+1. **Detect Session**: Use `--session` parameter or find `.workflow/sessions/WFS-*` directories
 2. **Validate Files**:
    - `guidance-specification.md` (optional, warn if missing)
    - `*/analysis*.md` (required, error if empty)
@@ -59,7 +59,7 @@ Three-phase workflow to eliminate ambiguities and enhance conceptual depth in ro
 **Main flow prepares file paths for Agent**:

 1. **Discover Analysis Files**:
-   - Glob(.workflow/WFS-{session}/.brainstorming/*/analysis*.md)
+   - Glob(.workflow/sessions/WFS-{session}/.brainstorming/*/analysis*.md)
    - Supports: analysis.md, analysis-1.md, analysis-2.md, analysis-3.md
    - Validate: At least one file exists (error if empty)

@@ -69,7 +69,7 @@ Three-phase workflow to eliminate ambiguities and enhance conceptual depth in ro

 3. **Pass to Agent** (Phase 3):
    - `session_id`
-   - `brainstorm_dir`: .workflow/WFS-{session}/.brainstorming/
+   - `brainstorm_dir`: .workflow/sessions/WFS-{session}/.brainstorming/
    - `role_analysis_paths`: ["product-manager/analysis.md", "system-architect/analysis-1.md", ...]
    - `participating_roles`: ["product-manager", "system-architect", ...]

@@ -361,7 +361,7 @@ Updated {role2}/analysis.md with Clarifications section + enhanced content

 ## Output

-**Location**: `.workflow/WFS-{session}/.brainstorming/[role]/analysis*.md` (in-place updates)
+**Location**: `.workflow/sessions/WFS-{session}/.brainstorming/[role]/analysis*.md` (in-place updates)

 **Updated Structure**:
 ```markdown
@@ -46,10 +46,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
 ### Phase 1: Session & Framework Detection
 ```bash
 # Check active session and framework
-CHECK: .workflow/.active-* marker files
+CHECK: find .workflow/sessions/ -name "WFS-*" -type d
 IF active_session EXISTS:
     session_id = get_active_session()
-    brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
+    brainstorm_dir = .workflow/sessions/WFS-{session}/.brainstorming/

 CHECK: brainstorm_dir/guidance-specification.md
 IF EXISTS:
@@ -162,7 +162,7 @@ IF update_mode = "incremental":

 ### Output Files
 ```
-.workflow/WFS-[topic]/.brainstorming/
+.workflow/sessions/WFS-[topic]/.brainstorming/
 ├── guidance-specification.md   # Input: Framework (if exists)
 └── system-architect/
     └── analysis.md             # ★ OUTPUT: Framework-based analysis
@@ -186,8 +186,8 @@ IF update_mode = "incremental":
 ### ⚠️ Session Management - FIRST STEP
 Session detection and selection:
 ```bash
-# Check for active sessions
-active_sessions=$(find .workflow -name ".active-*" 2>/dev/null)
+# Check for existing sessions
+existing_sessions=$(find .workflow/sessions/ -name "WFS-*" -type d 2>/dev/null)
 if [ multiple_sessions ]; then
     prompt_user_to_select_session()
 else
@@ -279,7 +279,7 @@ TodoWrite tracking for two-step process:

 ### Output Location
 ```
-.workflow/WFS-{topic-slug}/.brainstorming/system-architect/
+.workflow/sessions/WFS-{topic-slug}/.brainstorming/system-architect/
 ├── analysis.md               # Primary architecture analysis
 ├── architecture-design.md    # Detailed system design and diagrams
 ├── technology-stack.md       # Technology stack recommendations and justifications
@@ -340,7 +340,7 @@ Upon completion, update `workflow-session.json`:
   "system_architect": {
     "status": "completed",
     "completed_at": "timestamp",
-    "output_directory": ".workflow/WFS-{topic}/.brainstorming/system-architect/",
+    "output_directory": ".workflow/sessions/WFS-{topic}/.brainstorming/system-architect/",
     "key_insights": ["scalability_bottleneck", "architecture_pattern", "technology_recommendation"]
   }
 }
@@ -48,10 +48,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
 ### Phase 1: Session & Framework Detection
 ```bash
 # Check active session and framework
-CHECK: .workflow/.active-* marker files
+CHECK: find .workflow/sessions/ -name "WFS-*" -type d
 IF active_session EXISTS:
     session_id = get_active_session()
-    brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
+    brainstorm_dir = .workflow/sessions/WFS-{session}/.brainstorming/

 CHECK: brainstorm_dir/guidance-specification.md
 IF EXISTS:
@@ -88,13 +88,13 @@ Execute ui-designer analysis for existing topic framework

 ## Context Loading
 ASSIGNED_ROLE: ui-designer
-OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/ui-designer/
+OUTPUT_LOCATION: .workflow/sessions/WFS-{session}/.brainstorming/ui-designer/
 ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}

 ## Flow Control Steps
 1. **load_topic_framework**
    - Action: Load structured topic discussion framework
-   - Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
+   - Command: Read(.workflow/sessions/WFS-{session}/.brainstorming/guidance-specification.md)
    - Output: topic_framework_content

 2. **load_role_template**
@@ -104,7 +104,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}

 3. **load_session_metadata**
    - Action: Load session metadata and existing context
-   - Command: Read(.workflow/WFS-{session}/workflow-session.json)
+   - Command: Read(.workflow/sessions/WFS-{session}/workflow-session.json)
    - Output: session_context

 ## Analysis Requirements
@@ -164,7 +164,7 @@ TodoWrite({

 ### Framework-Based Analysis
 ```
-.workflow/WFS-{session}/.brainstorming/ui-designer/
+.workflow/sessions/WFS-{session}/.brainstorming/ui-designer/
 └── analysis.md   # Structured analysis addressing guidance-specification.md discussion points
 ```

@@ -209,7 +209,7 @@ TodoWrite({
   "ui_designer": {
     "status": "completed",
     "framework_addressed": true,
-    "output_location": ".workflow/WFS-{session}/.brainstorming/ui-designer/analysis.md",
+    "output_location": ".workflow/sessions/WFS-{session}/.brainstorming/ui-designer/analysis.md",
     "framework_reference": "@../guidance-specification.md"
   }
 }
@@ -48,10 +48,10 @@ allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
 ### Phase 1: Session & Framework Detection
 ```bash
 # Check active session and framework
-CHECK: .workflow/.active-* marker files
+CHECK: find .workflow/sessions/ -name "WFS-*" -type d
 IF active_session EXISTS:
     session_id = get_active_session()
-    brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
+    brainstorm_dir = .workflow/sessions/WFS-{session}/.brainstorming/

 CHECK: brainstorm_dir/guidance-specification.md
 IF EXISTS:
@@ -88,13 +88,13 @@ Execute ux-expert analysis for existing topic framework

 ## Context Loading
 ASSIGNED_ROLE: ux-expert
-OUTPUT_LOCATION: .workflow/WFS-{session}/.brainstorming/ux-expert/
+OUTPUT_LOCATION: .workflow/sessions/WFS-{session}/.brainstorming/ux-expert/
 ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}

 ## Flow Control Steps
 1. **load_topic_framework**
    - Action: Load structured topic discussion framework
-   - Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
+   - Command: Read(.workflow/sessions/WFS-{session}/.brainstorming/guidance-specification.md)
    - Output: topic_framework_content

 2. **load_role_template**
@@ -104,7 +104,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}

 3. **load_session_metadata**
    - Action: Load session metadata and existing context
-   - Command: Read(.workflow/WFS-{session}/workflow-session.json)
+   - Command: Read(.workflow/sessions/WFS-{session}/workflow-session.json)
    - Output: session_context

 ## Analysis Requirements
@@ -164,7 +164,7 @@ TodoWrite({

 ### Framework-Based Analysis
 ```
-.workflow/WFS-{session}/.brainstorming/ux-expert/
+.workflow/sessions/WFS-{session}/.brainstorming/ux-expert/
 └── analysis.md   # Structured analysis addressing guidance-specification.md discussion points
 ```

@@ -209,7 +209,7 @@ TodoWrite({
   "ux_expert": {
     "status": "completed",
     "framework_addressed": true,
-    "output_location": ".workflow/WFS-{session}/.brainstorming/ux-expert/analysis.md",
+    "output_location": ".workflow/sessions/WFS-{session}/.brainstorming/ux-expert/analysis.md",
     "framework_reference": "@../guidance-specification.md"
   }
 }
@@ -7,7 +7,7 @@ argument-hint: "[--resume-session=\"session-id\"]"
|
|||||||
# Workflow Execute Command
|
# Workflow Execute Command
|
||||||
|
|
||||||
## Overview
|
## Overview
|
||||||
Orchestrates autonomous workflow execution through systematic task discovery, agent coordination, and progress tracking. **Executes entire workflow without user interruption**, providing complete context to agents and ensuring proper flow control execution with comprehensive TodoWrite tracking.
|
Orchestrates autonomous workflow execution through systematic task discovery, agent coordination, and progress tracking. **Executes entire workflow without user interruption** (except initial session selection if multiple active sessions exist), providing complete context to agents and ensuring proper flow control execution with comprehensive TodoWrite tracking.
|
||||||
|
|
||||||
**Resume Mode**: When called with `--resume-session` flag, skips discovery phase and directly enters TodoWrite generation and agent execution for the specified session.
|
**Resume Mode**: When called with `--resume-session` flag, skips discovery phase and directly enters TodoWrite generation and agent execution for the specified session.
|
||||||
|
|
||||||
@@ -22,83 +22,72 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag
|
|||||||
| **Memory** | All tasks | 1-2 tasks | **90% less** |
|
| **Memory** | All tasks | 1-2 tasks | **90% less** |
|
||||||
| **Scalability** | 10-20 tasks | 100+ tasks | **5-10x** |
|
| **Scalability** | 10-20 tasks | 100+ tasks | **5-10x** |
|
||||||
|
|
||||||
|
**Loading Strategy**:
|
||||||
|
- **TODO_LIST.md**: Read in Phase 2 (task metadata, status, dependencies)
|
||||||
|
- **IMPL_PLAN.md**: Read existence in Phase 2, parse execution strategy when needed
|
||||||
|
- **Task JSONs**: Complete lazy loading (read only during execution)
|
||||||
|
|
||||||
## Core Rules
|
## Core Rules
|
||||||
**Complete entire workflow autonomously without user interruption, using TodoWrite for comprehensive progress tracking.**
|
**Complete entire workflow autonomously without user interruption, using TodoWrite for comprehensive progress tracking.**
|
||||||
**Execute all discovered pending tasks sequentially until workflow completion or blocking dependency.**
|
**Execute all discovered pending tasks until workflow completion or blocking dependency.**
|
||||||
**Auto-complete session when all tasks finished: Call `/workflow:session:complete` upon workflow completion.**
|
**Auto-complete session when all tasks finished: Call `/workflow:session:complete` upon workflow completion.**
|
||||||
|
|
||||||
## Core Responsibilities

- **Session Discovery**: Identify and select active workflow sessions
- **Execution Strategy Parsing**: Extract execution model from IMPL_PLAN.md
- **TodoWrite Progress Tracking**: Maintain real-time execution status throughout entire workflow
- **Agent Orchestration**: Coordinate specialized agents with complete context
- **Status Synchronization**: Update task JSON files and workflow state
- **Autonomous Completion**: Continue execution until all tasks complete or reach blocking state
- **Session Auto-Complete**: Call `/workflow:session:complete` when all workflow tasks finished

## Execution Philosophy

- **IMPL_PLAN-driven**: Follow execution strategy from IMPL_PLAN.md Section 4
- **Discovery-first**: Auto-discover existing plans and tasks
- **Status-aware**: Execute only ready tasks with resolved dependencies
- **Context-rich**: Provide complete task JSON and accumulated context to agents
- **Progress tracking**: Continuous TodoWrite updates throughout entire workflow execution
- **Autonomous completion**: Execute all tasks without user interruption until workflow complete

## Execution Lifecycle

### Phase 1: Discovery

**Applies to**: Normal mode only (skipped in resume mode)

**Process**:
1. **Check Active Sessions**: Find sessions in `.workflow/sessions/` directory
2. **Select Session**: If multiple found, prompt user selection
3. **Load Session Metadata**: Read `workflow-session.json` ONLY (minimal context)
4. **DO NOT read task JSONs yet** - defer until execution phase

**Resume Mode**: This phase is completely skipped when `--resume-session="session-id"` flag is provided.

### Phase 2: Planning Document Analysis

**Applies to**: Normal mode only (skipped in resume mode)

**Optimized to avoid reading all task JSONs upfront**

**Process**:
1. **Read IMPL_PLAN.md**: Check existence, understand overall strategy
2. **Read TODO_LIST.md**: Get current task statuses and execution progress
3. **Extract Task Metadata**: Parse task IDs, titles, and dependency relationships from TODO_LIST.md
4. **Build Execution Queue**: Determine ready tasks based on TODO_LIST.md status and dependencies

**Key Optimization**: Use IMPL_PLAN.md (existence check only) and TODO_LIST.md as primary sources instead of reading all task JSONs

**Resume Mode**: This phase is skipped when `--resume-session` flag is provided (session already known).

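The metadata-extraction step can be sketched as follows. The checkbox line format comes from this document's own TODO_LIST.md examples; the sample content and function name are invented for illustration.

```javascript
// Parse checkbox lines like:
//   - [ ] **IMPL-001**: Task title → [📋](./.task/IMPL-001.json)
// into { status, id, title } records without opening any task JSON.
function parseTodoListTasks(todoContent) {
  const taskPattern = /- \[([ x])\] \*\*([A-Z]+-\d+(?:\.\d+)?)\*\*: (.+?) →/g;
  const tasks = [];
  let match;
  while ((match = taskPattern.exec(todoContent)) !== null) {
    tasks.push({
      status: match[1] === 'x' ? 'completed' : 'pending',
      id: match[2],
      title: match[3],
    });
  }
  return tasks;
}

// Sample TODO_LIST.md content (invented):
const sample = [
  '- [x] **IMPL-001**: Design auth schema → [📋](./.task/IMPL-001.json)',
  '- [ ] **IMPL-002**: Build auth API → [📋](./.task/IMPL-002.json)',
].join('\n');
const tasks = parseTodoListTasks(sample);
```

This is what makes lazy loading possible: the execution queue is built from these records alone, and a task's JSON is only read when the task actually runs.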
### Phase 3: TodoWrite Generation

**Applies to**: Both normal and resume modes (resume mode entry point)

**Process**:
1. **Create TodoWrite List**: Generate task list from TODO_LIST.md (not from task JSONs)
   - Parse TODO_LIST.md to extract all tasks with current statuses
   - Identify first pending task with met dependencies
   - Generate comprehensive TodoWrite covering entire workflow
2. **Mark Initial Status**: Set first ready task(s) as `in_progress` in TodoWrite
   - **Sequential execution**: Mark ONE task as `in_progress`
   - **Parallel batch**: Mark ALL tasks in current batch as `in_progress`
3. **Prepare Session Context**: Inject workflow paths for agent use (using provided session-id)
4. **Validate Prerequisites**: Ensure IMPL_PLAN.md and TODO_LIST.md exist and are valid

@@ -108,18 +97,22 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag
- Generate TodoWrite from TODO_LIST.md state
- Proceed immediately to agent execution (Phase 4)

### Phase 4: Execution Strategy Selection & Task Execution

**Applies to**: Both normal and resume modes

**Step 4A: Parse Execution Strategy from IMPL_PLAN.md**

Read IMPL_PLAN.md Section 4 to extract:
- **Execution Model**: Sequential | Parallel | Phased | TDD Cycles
- **Parallelization Opportunities**: Which tasks can run in parallel
- **Serialization Requirements**: Which tasks must run sequentially
- **Critical Path**: Priority execution order

If IMPL_PLAN.md lacks execution strategy, use intelligent fallback (analyze task structure).

**Step 4B: Execute Tasks with Lazy Loading**

**Key Optimization**: Read task JSON **only when needed** for execution

**Execution Loop Pattern**:
```
@@ -132,6 +125,16 @@ while (TODO_LIST.md has pending tasks) {
}
```

**Execution Process per Task**:
1. **Identify Next Task**: From TodoWrite, get the next `in_progress` task ID
2. **Load Task JSON on Demand**: Read `.task/{task-id}.json` for current task ONLY
3. **Validate Task Structure**: Ensure all 5 required fields exist (id, title, status, meta, context, flow_control)
4. **Launch Agent**: Invoke specialized agent with complete context including flow control steps
5. **Monitor Progress**: Track agent execution and handle errors without user interruption
6. **Collect Results**: Gather implementation results and outputs
7. **Update TODO_LIST.md**: Mark current task as completed in TODO_LIST.md
8. **Continue Workflow**: Identify next pending task from TODO_LIST.md and repeat

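The "identify next task" step reduces to a pure readiness check: a task is executable when it is pending and every dependency has completed. A minimal sketch, with field names (`status`, `depends_on`) taken from this document's task JSON conventions and the sample queue invented:

```javascript
// Return pending tasks whose depends_on ids are all completed.
function readyTasks(tasks) {
  const done = new Set(
    tasks.filter((t) => t.status === 'completed').map((t) => t.id)
  );
  return tasks.filter(
    (t) => t.status === 'pending' && (t.depends_on || []).every((d) => done.has(d))
  );
}

const queue = [
  { id: 'IMPL-1.1', status: 'completed', depends_on: [] },
  { id: 'IMPL-1.2', status: 'pending', depends_on: [] },
  { id: 'IMPL-2.1', status: 'pending', depends_on: ['IMPL-1.1', 'IMPL-1.2'] },
];
const ready = readyTasks(queue);
```

Here `IMPL-2.1` stays blocked until `IMPL-1.2` completes, which is exactly the "blocked → skip until dependencies clear" behavior described below.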
**Benefits**:
- Reduces initial context loading by ~90%
- Only reads task JSON when actually executing
@@ -139,6 +142,9 @@ while (TODO_LIST.md has pending tasks) {
- Faster startup time for workflow execution

### Phase 5: Completion

**Applies to**: Both normal and resume modes

**Process**:
1. **Update Task Status**: Mark completed tasks in JSON files
2. **Generate Summary**: Create task summary in `.summaries/`
3. **Update TodoWrite**: Mark current task complete, advance to next
@@ -146,34 +152,52 @@ while (TODO_LIST.md has pending tasks) {
5. **Check Workflow Complete**: Verify all tasks are completed
6. **Auto-Complete Session**: Call `/workflow:session:complete` when all tasks finished

## Execution Strategy (IMPL_PLAN-Driven)

### Strategy Priority

**IMPL_PLAN-Driven Execution (Recommended)**:
1. **Read IMPL_PLAN.md execution strategy** (Section 4: Implementation Strategy)
2. **Follow explicit guidance**:
   - Execution Model (Sequential/Parallel/Phased/TDD)
   - Parallelization Opportunities (which tasks can run in parallel)
   - Serialization Requirements (which tasks must run sequentially)
   - Critical Path (priority execution order)
3. **Use TODO_LIST.md for status tracking** only
4. **IMPL_PLAN decides "HOW"**, execute.md implements it

**Intelligent Fallback (When IMPL_PLAN lacks execution details)**:
1. **Analyze task structure**:
   - Check `meta.execution_group` in task JSONs
   - Analyze `depends_on` relationships
   - Understand task complexity and risk
2. **Apply smart defaults**:
   - No dependencies + same execution_group → Parallel
   - Has dependencies → Sequential (wait for deps)
   - Critical/high-risk tasks → Sequential
3. **Conservative approach**: When uncertain, prefer sequential execution

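The smart defaults condense into a small decision function. A hedged sketch: `depends_on` and `execution_group` follow the task JSON fields referenced in this document, while the `risk` field and exact values are assumptions for illustration.

```javascript
// Apply the fallback defaults: critical/high risk or any dependency forces
// sequential; otherwise run in parallel only when another task in the same
// batch shares the execution_group. Default is conservative: sequential.
function chooseMode(task, batch) {
  if (task.risk === 'critical' || task.risk === 'high') return 'sequential';
  if ((task.depends_on || []).length > 0) return 'sequential';
  const sharesGroup = batch.some(
    (peer) =>
      peer.id !== task.id &&
      peer.execution_group &&
      peer.execution_group === task.execution_group
  );
  return sharesGroup ? 'parallel' : 'sequential';
}

const batch = [
  { id: 'IMPL-1.1', execution_group: 'parallel-auth-api', depends_on: [] },
  { id: 'IMPL-1.2', execution_group: 'parallel-auth-api', depends_on: [] },
  { id: 'IMPL-2.1', execution_group: null, depends_on: ['IMPL-1.1'] },
];
```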
### Execution Models

#### 1. Sequential Execution
**When**: IMPL_PLAN specifies "Sequential" OR no clear parallelization guidance
**Pattern**: Execute tasks one by one in TODO_LIST order
**TodoWrite**: ONE task marked as `in_progress` at a time

#### 2. Parallel Execution
**When**: IMPL_PLAN specifies "Parallel" with clear parallelization opportunities
**Pattern**: Execute independent task groups concurrently
**TodoWrite**: MULTIPLE tasks (in same batch) marked as `in_progress` simultaneously

#### 3. Phased Execution
**When**: IMPL_PLAN specifies "Phased" with phase breakdown
**Pattern**: Execute tasks in phases, respect phase boundaries
**TodoWrite**: Within each phase, follow Sequential or Parallel rules

#### 4. Intelligent Fallback
**When**: IMPL_PLAN lacks execution strategy details
**Pattern**: Analyze task structure and apply smart defaults
**TodoWrite**: Follow Sequential or Parallel rules based on analysis

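For the parallel model, grouping tasks into batches amounts to layered topological sorting over `depends_on`: each batch contains the tasks whose prerequisites all sit in earlier batches. A minimal sketch (task ids are invented; a dependency cycle is treated as an error):

```javascript
// Layered topological sort: returns an array of batches, where every task in
// a batch can run in parallel once all earlier batches have completed.
function buildBatches(tasks) {
  const ids = new Set(tasks.map((t) => t.id));
  const remaining = new Map(
    tasks.map((t) => [t.id, new Set((t.depends_on || []).filter((d) => ids.has(d)))])
  );
  const result = [];
  while (remaining.size > 0) {
    // Tasks with no unresolved dependencies form the next batch.
    const batch = [...remaining]
      .filter(([, deps]) => deps.size === 0)
      .map(([id]) => id);
    if (batch.length === 0) throw new Error('dependency cycle detected');
    result.push(batch);
    for (const id of batch) remaining.delete(id);
    for (const [, deps] of remaining) for (const id of batch) deps.delete(id);
  }
  return result;
}

const plan = buildBatches([
  { id: 'IMPL-1.1', depends_on: [] },
  { id: 'IMPL-1.2', depends_on: [] },
  { id: 'IMPL-2.1', depends_on: ['IMPL-1.1', 'IMPL-1.2'] },
  { id: 'IMPL-3.1', depends_on: ['IMPL-2.1'] },
]);
```

With this input, `IMPL-1.1` and `IMPL-1.2` form the first parallel batch and the remaining tasks follow sequentially.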
### Task Status Logic
```
@@ -182,155 +206,36 @@ completed → skip
blocked → skip until dependencies clear
```

## TodoWrite Coordination

### TodoWrite Rules (Unified)

**Rule 1: Initial Creation**
- **Normal Mode**: Generate TodoWrite from discovered pending tasks for entire workflow
- **Resume Mode**: Generate from existing session state and current progress

**Rule 2: In-Progress Task Count (Execution-Model-Dependent)**
- **Sequential execution**: Mark ONLY ONE task as `in_progress` at a time
- **Parallel batch execution**: Mark ALL tasks in current batch as `in_progress` simultaneously
- **Execution group indicator**: Show `[execution_group: group-id]` for parallel tasks

**Rule 3: Status Updates**
- **Immediate Updates**: Update status after each task/batch completion without user interruption
- **Status Synchronization**: Sync with JSON task files after updates
- **Continuous Tracking**: Maintain TodoWrite throughout entire workflow execution until completion

**Rule 4: Workflow Completion Check**
- When all tasks marked `completed`, auto-call `/workflow:session:complete`

### TodoWrite Tool Usage

**Example 1: Sequential Execution**
```javascript
TodoWrite({
  todos: [
    {
      content: "Execute IMPL-1.1: Design auth schema [code-developer] [FLOW_CONTROL]",
      status: "in_progress", // ONE task in progress
      activeForm: "Executing IMPL-1.1: Design auth schema"
    },
    {
@@ -340,8 +245,10 @@ TodoWrite({
    }
  ]
});
```

**Example 2: Parallel Batch Execution**
```javascript
TodoWrite({
  todos: [
    {
@@ -366,44 +273,9 @@ TodoWrite({
    }
  ]
});
```

### TODO_LIST.md Update Timing

**Single source of truth for task status** - enables lazy loading by providing task metadata without reading JSONs

- **Before Agent Launch**: Mark task as `in_progress`
@@ -411,18 +283,17 @@ TodoWrite({
- **On Error**: Keep as `in_progress`, add error note
- **Workflow Complete**: Call `/workflow:session:complete`

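The on-completion update is a single checkbox flip in TODO_LIST.md. A sketch, assuming the checkbox format shown in this document's examples (`markCompleted` is an invented helper name):

```javascript
// Flip "- [ ]" to "- [x]" on the line belonging to one task id;
// all other lines are left untouched.
function markCompleted(todoContent, taskId) {
  const escaped = taskId.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  const checkbox = new RegExp(`^(- )\\[ \\](?= \\*\\*${escaped}\\*\\*)`, 'm');
  return todoContent.replace(checkbox, '$1[x]');
}

const before = [
  '- [x] **IMPL-1.1**: Design auth schema → [📋](./.task/IMPL-1.1.json)',
  '- [ ] **IMPL-1.2**: Build auth API → [📋](./.task/IMPL-1.2.json)',
].join('\n');
const after = markCompleted(before, 'IMPL-1.2');
```

Writing the whole file back after this rewrite keeps TODO_LIST.md authoritative without touching any task JSON.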
## Agent Context Management

### Context Sources (Priority Order)
1. **Complete Task JSON**: Full task definition including all fields and artifacts
2. **Artifacts Context**: Brainstorming outputs and role analyses from task.context.artifacts
3. **Flow Control Context**: Accumulated outputs from pre_analysis steps (including artifact loading)
4. **Dependency Summaries**: Previous task completion summaries
5. **Session Context**: Workflow paths and session metadata
6. **Inherited Context**: Parent task context and shared variables

### Context Assembly Process
```
1. Load Task JSON → Base context (including artifacts array)
2. Load Artifacts → Synthesis specifications and brainstorming outputs
@@ -432,7 +303,7 @@ TodoWrite({
6. Combine All → Complete agent context with artifact integration
```

### Agent Context Package Structure
```json
{
  "task": { /* Complete task JSON with artifacts array */ },
@@ -451,18 +322,18 @@ TodoWrite({
    }
  },
  "session": {
    "workflow_dir": ".workflow/sessions/WFS-session/",
    "context_package_path": ".workflow/sessions/WFS-session/.process/context-package.json",
    "todo_list_path": ".workflow/sessions/WFS-session/TODO_LIST.md",
    "summaries_dir": ".workflow/sessions/WFS-session/.summaries/",
    "task_json_path": ".workflow/sessions/WFS-session/.task/IMPL-1.1.json"
  },
  "dependencies": [ /* Task summaries from depends_on */ ],
  "inherited": { /* Parent task context */ }
}
```

### Context Validation Rules
- **Task JSON Complete**: All 5 fields present and valid, including artifacts array in context
- **Artifacts Available**: All artifacts loaded from context-package.json
- **Flow Control Ready**: All pre_analysis steps completed including artifact loading steps
@@ -470,10 +341,26 @@ TodoWrite({
- **Session Paths Valid**: All workflow paths exist and accessible (verified via context-package.json)
- **Agent Assignment**: Valid agent type specified in meta.agent

## Agent Execution Pattern

### Flow Control Execution

**[FLOW_CONTROL]** marker indicates task JSON contains `flow_control.pre_analysis` steps for context preparation.

**Orchestrator Responsibility**:
- Pass complete task JSON to agent (including `flow_control` block)
- Provide session paths for artifact access
- Monitor agent completion

**Agent Responsibility**:
- Parse `flow_control.pre_analysis` array from JSON
- Execute steps sequentially with variable substitution
- Accumulate context from artifacts and dependencies
- Follow error handling per `step.on_error`
- Complete implementation using accumulated context

**Orchestrator does NOT execute flow control steps - Agent interprets and executes them from JSON.**

|
### Agent Prompt Template
|
||||||
```bash
|
```bash
|
||||||
Task(subagent_type="{meta.agent}",
|
Task(subagent_type="{meta.agent}",
|
||||||
prompt="**EXECUTE TASK FROM JSON**
|
prompt="**EXECUTE TASK FROM JSON**
|
||||||
@@ -512,7 +399,7 @@ Task(subagent_type="{meta.agent}",
|
|||||||
description="Execute task: {task.id}")
|
description="Execute task: {task.id}")
|
||||||
```
|
```
|
||||||
|
|
||||||
#### Agent JSON Loading Specification
|
### Agent JSON Loading Specification
|
||||||
**MANDATORY AGENT PROTOCOL**: All agents must follow this exact loading sequence:
|
**MANDATORY AGENT PROTOCOL**: All agents must follow this exact loading sequence:
|
||||||
|
|
||||||
1. **JSON Loading**: First action must be `cat {session.task_json_path}`
|
1. **JSON Loading**: First action must be `cat {session.task_json_path}`
|
||||||
@@ -565,7 +452,7 @@ Task(subagent_type="{meta.agent}",
|
|||||||
"step": "load_synthesis_specification",
|
"step": "load_synthesis_specification",
|
||||||
"action": "Load synthesis specification from context-package.json",
|
"action": "Load synthesis specification from context-package.json",
|
||||||
"commands": [
|
"commands": [
|
||||||
"Read(.workflow/WFS-[session]/.process/context-package.json)",
|
"Read(.workflow/sessions/WFS-[session]/.process/context-package.json)",
|
||||||
"Extract(brainstorm_artifacts.synthesis_output.path)",
|
"Extract(brainstorm_artifacts.synthesis_output.path)",
|
||||||
"Read(extracted path)"
|
"Read(extracted path)"
|
||||||
],
|
],
|
||||||
@@ -606,7 +493,7 @@ Task(subagent_type="{meta.agent}",
|
|||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
#### Execution Flow
|
### Execution Flow
|
||||||
1. **Load Task JSON**: Agent reads and validates complete JSON structure
|
1. **Load Task JSON**: Agent reads and validates complete JSON structure
|
||||||
2. **Execute Flow Control**: Agent runs pre_analysis steps if present
|
2. **Execute Flow Control**: Agent runs pre_analysis steps if present
|
||||||
3. **Prepare Implementation**: Agent uses implementation_approach from JSON
|
3. **Prepare Implementation**: Agent uses implementation_approach from JSON
|
||||||
@@ -614,7 +501,7 @@ Task(subagent_type="{meta.agent}",
|
|||||||
5. **Update Status**: Agent marks JSON status as completed
|
5. **Update Status**: Agent marks JSON status as completed
|
||||||
6. **Generate Summary**: Agent creates completion summary
|
6. **Generate Summary**: Agent creates completion summary
|
||||||
|
|
||||||
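The six-step execution flow above can be sketched end to end. This is an illustrative Python sketch under assumed field names (`meta.id`, `flow_control.pre_analysis`, `implementation_approach` as in the task JSON shown earlier), not the agents' actual implementation:

```python
import json

def execute_task(task_json_path: str) -> dict:
    """Illustrative sketch of the agent execution flow (steps 1-6)."""
    # 1. Load Task JSON
    with open(task_json_path) as f:
        task = json.load(f)
    # 2. Execute Flow Control: run pre_analysis steps if present
    context = []
    for step in task.get("flow_control", {}).get("pre_analysis", []):
        context.append(step["action"])  # placeholder for actually running step commands
    # 3-4. Prepare and execute implementation using accumulated context
    result = {"approach": task.get("implementation_approach"), "context": context}
    # 5. Update Status: mark the JSON as completed
    task["status"] = "completed"
    with open(task_json_path, "w") as f:
        json.dump(task, f, indent=2)
    # 6. Generate Summary
    result["summary"] = f"Task {task['meta']['id']} completed with {len(context)} context step(s)"
    return result
```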
-#### Agent Assignment Rules
+### Agent Assignment Rules
 ```
 meta.agent specified → Use specified agent
 meta.agent missing → Infer from meta.type:
@@ -625,15 +512,9 @@ meta.agent missing → Infer from meta.type:
   - "docs" → @doc-generator
 ```
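The fallback rules above can be sketched as a lookup with a default. Only the `"docs" → @doc-generator` mapping is visible in this hunk; the other type-to-agent entries below are hypothetical placeholders:

```python
# Illustrative agent inference; only the "docs" mapping comes from the rules above.
TYPE_TO_AGENT = {
    "docs": "@doc-generator",      # from the documented rules
    "feature": "@code-developer",  # hypothetical
    "test": "@test-runner",        # hypothetical
}

def resolve_agent(meta: dict) -> str:
    # meta.agent specified → use it; otherwise infer from meta.type
    return meta.get("agent") or TYPE_TO_AGENT.get(meta.get("type"), "@code-developer")
```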
 
-#### Error Handling During Execution
-- **Agent Failure**: Retry once with adjusted context
-- **Flow Control Error**: Skip optional steps, fail on critical
-- **Context Missing**: Reload from JSON files and retry
-- **Timeout**: Mark as blocked, continue with next task
 
 ## Workflow File Structure Reference
 ```
-.workflow/WFS-[topic-slug]/
+.workflow/sessions/WFS-[topic-slug]/
 ├── workflow-session.json          # Session state and metadata
 ├── IMPL_PLAN.md                   # Planning document and requirements
 ├── TODO_LIST.md                   # Progress tracking (auto-updated)
@@ -644,78 +525,26 @@ meta.agent missing → Infer from meta.type:
 │   ├── IMPL-1-summary.md          # Task completion details
 │   └── IMPL-1.1-summary.md        # Subtask completion details
 └── .process/                      # Planning artifacts
+    ├── context-package.json       # Smart context package
     └── ANALYSIS_RESULTS.md        # Planning analysis results
 ```
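The session layout above is purely location-based, so the standard paths can be derived from a session id alone. A minimal sketch (helper name and return shape are illustrative, not part of the workflow spec):

```python
from pathlib import Path

def session_paths(session_id: str, root: str = ".workflow") -> dict:
    """Derive the standard file layout for a session under .workflow/sessions/."""
    base = Path(root) / "sessions" / f"WFS-{session_id}"
    return {
        "session_file": base / "workflow-session.json",
        "plan": base / "IMPL_PLAN.md",
        "todo": base / "TODO_LIST.md",
        "summaries": base / ".summaries",
        "context_package": base / ".process" / "context-package.json",
    }
```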
 
 ## Error Handling & Recovery
 
-### Discovery Phase Errors
+### Common Errors & Recovery
-| Error | Cause | Resolution | Command |
-|-------|-------|------------|---------|
-| No active session | No `.active-*` markers found | Create or resume session | `/workflow:plan "project"` |
-| Multiple sessions | Multiple `.active-*` markers | Select specific session | Manual choice prompt |
-| Corrupted session | Invalid JSON files | Recreate session structure | `/workflow:session:status --validate` |
-| Missing task files | Broken task references | Regenerate tasks | `/task:create` or repair |
 
-### Execution Phase Errors
-| Error | Cause | Recovery Strategy | Max Attempts |
-|-------|-------|------------------|--------------|
+| Error Type | Cause | Recovery Strategy | Max Attempts |
+|-----------|-------|------------------|--------------|
+| **Discovery Errors** |
+| No active session | No sessions in `.workflow/sessions/` | Create or resume session: `/workflow:plan "project"` | N/A |
+| Multiple sessions | Multiple sessions in `.workflow/sessions/` | Prompt user selection | N/A |
+| Corrupted session | Invalid JSON files | Recreate session structure or validate files | N/A |
+| **Execution Errors** |
 | Agent failure | Agent crash/timeout | Retry with simplified context | 2 |
 | Flow control error | Command failure | Skip optional, fail critical | 1 per step |
 | Context loading error | Missing dependencies | Reload from JSON, use defaults | 3 |
 | JSON file corruption | File system issues | Restore from backup/recreate | 1 |
 
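The "Max Attempts" column above can be enforced with a small retry wrapper. An illustrative sketch (the error-type keys and the bare `Exception` catch are assumptions for the example):

```python
# Illustrative retry budget matching the Max Attempts column above.
MAX_ATTEMPTS = {
    "agent_failure": 2,
    "context_loading_error": 3,
    "json_file_corruption": 1,
}

def with_retries(error_type: str, operation):
    """Run operation up to the budgeted number of attempts, re-raising the last error."""
    attempts = MAX_ATTEMPTS.get(error_type, 1)
    last_error = None
    for _ in range(attempts):
        try:
            return operation()
        except Exception as exc:  # in practice, catch the specific failure type
            last_error = exc
    raise last_error
```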
-### Recovery Procedures
 
-#### Session Recovery
-```bash
-# Check session integrity
-find .workflow -name ".active-*" | while read marker; do
-  session=$(basename "$marker" | sed 's/^\.active-//')
-  if [ ! -d ".workflow/$session" ]; then
-    echo "Removing orphaned marker: $marker"
-    rm "$marker"
-  fi
-done
-
-# Recreate corrupted session files
-if [ ! -f ".workflow/$session/workflow-session.json" ]; then
-  echo '{"session_id":"'$session'","status":"active"}' > ".workflow/$session/workflow-session.json"
-fi
-```
-
-#### Task Recovery
-```bash
-# Validate task JSON integrity
-for task_file in .workflow/$session/.task/*.json; do
-  if ! jq empty "$task_file" 2>/dev/null; then
-    echo "Corrupted task file: $task_file"
-    # Backup and regenerate or restore from backup
-  fi
-done
-
-# Fix missing dependencies
-missing_deps=$(jq -r '.context.depends_on[]?' .workflow/$session/.task/*.json | sort -u)
-for dep in $missing_deps; do
-  if [ ! -f ".workflow/$session/.task/$dep.json" ]; then
-    echo "Missing dependency: $dep - creating placeholder"
-  fi
-done
-```
-
-#### Context Recovery
-```bash
-# Reload context from available sources
-if [ -f ".workflow/$session/.process/ANALYSIS_RESULTS.md" ]; then
-  echo "Reloading planning context..."
-fi
-
-# Restore from documentation if available
-if [ -d ".workflow/docs/" ]; then
-  echo "Using documentation context as fallback..."
-fi
-```
 
 ### Error Prevention
 - **Pre-flight Checks**: Validate session integrity before execution
 - **Backup Strategy**: Create task snapshots before major operations
@@ -723,16 +552,28 @@ fi
 - **Dependency Validation**: Check all depends_on references exist
 - **Context Verification**: Ensure all required context is available
 
-## Usage Examples
-
-### Basic Usage
-```bash
-/workflow:execute              # Execute all pending tasks autonomously
-/workflow:session:status       # Check progress
-/task:execute IMPL-1.2         # Execute specific task
-```
-
-### Integration
-- **Planning**: `/workflow:plan` → `/workflow:execute` → `/workflow:review`
-- **Recovery**: `/workflow:status --validate` → `/workflow:execute`
+### Recovery Procedures
+
+**Session Recovery**:
+```bash
+# Check session integrity
+find .workflow/sessions/ -name "WFS-*" -type d | while read session_dir; do
+  session=$(basename "$session_dir")
+  [ ! -f "$session_dir/workflow-session.json" ] && \
+    echo '{"session_id":"'$session'","status":"active"}' > "$session_dir/workflow-session.json"
+done
+```
+
+**Task Recovery**:
+```bash
+# Validate task JSON integrity
+for task_file in .workflow/sessions/$session/.task/*.json; do
+  jq empty "$task_file" 2>/dev/null || echo "Corrupted: $task_file"
+done
+
+# Fix missing dependencies
+missing_deps=$(jq -r '.context.depends_on[]?' .workflow/sessions/$session/.task/*.json | sort -u)
+for dep in $missing_deps; do
+  [ ! -f ".workflow/sessions/$session/.task/$dep.json" ] && echo "Missing dependency: $dep"
+done
+```
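The jq-based integrity check can also be expressed without jq. A minimal Python sketch of the same validation (the function name and sorted return are illustrative choices):

```python
import json
from pathlib import Path

def find_corrupted_tasks(session_dir: str) -> list:
    """Python equivalent of the jq task-JSON integrity check: list unparseable files."""
    corrupted = []
    for task_file in Path(session_dir, ".task").glob("*.json"):
        try:
            json.loads(task_file.read_text())
        except json.JSONDecodeError:
            corrupted.append(task_file.name)
    return sorted(corrupted)
```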

399  .claude/skills/command-guide/reference/commands/workflow/init.md  Normal file
@@ -0,0 +1,399 @@
---
name: init
description: Initialize project-level state with intelligent project analysis using cli-explore-agent
argument-hint: "[--regenerate]"
examples:
  - /workflow:init
  - /workflow:init --regenerate
---

# Workflow Init Command (/workflow:init)

## Overview
Initializes `.workflow/project.json` with comprehensive project understanding by leveraging **cli-explore-agent** for intelligent analysis.

**Key Features**:
- **Intelligent Project Analysis**: Uses cli-explore-agent's Deep Scan mode
- **Technology Stack Detection**: Identifies languages, frameworks, build tools
- **Architecture Overview**: Discovers patterns, layers, key components
- **One-time Initialization**: Skips if project.json exists (unless --regenerate)

## Usage
```bash
/workflow:init              # Initialize project state (skip if exists)
/workflow:init --regenerate # Force regeneration of project.json
```

## Implementation Flow

### Step 1: Check Existing State

```bash
# Check if project.json already exists
bash(test -f .workflow/project.json && echo "EXISTS" || echo "NOT_FOUND")
```

**If EXISTS and no --regenerate flag**:
```
Project already initialized at .workflow/project.json
Use /workflow:init --regenerate to rebuild project analysis
Use /workflow:status --project to view current state
```

**If NOT_FOUND or --regenerate flag**: Proceed to initialization

### Step 2: Project Discovery

```bash
# Get project name and root
bash(basename "$(git rev-parse --show-toplevel 2>/dev/null || pwd)")
bash(git rev-parse --show-toplevel 2>/dev/null || pwd)

# Create .workflow directory
bash(mkdir -p .workflow)
```

### Step 3: Intelligent Project Analysis

**Invoke cli-explore-agent** with Deep Scan mode for comprehensive understanding:

```javascript
Task(
  subagent_type="cli-explore-agent",
  description="Deep project analysis",
  prompt=`
  Analyze project structure and technology stack for workflow initialization.

  ## Analysis Objective
  Perform Deep Scan analysis to build comprehensive project understanding for .workflow/project.json initialization.

  ## Required Analysis

  ### 1. Technology Stack Detection
  - **Primary Languages**: Identify all programming languages with file counts
  - **Frameworks**: Detect web frameworks (React, Vue, Express, Django, etc.)
  - **Build Tools**: Identify build systems (npm, cargo, maven, gradle, etc.)
  - **Test Frameworks**: Find testing tools (jest, pytest, go test, etc.)

  ### 2. Project Architecture
  - **Architecture Style**: Identify patterns (MVC, microservices, monorepo, etc.)
  - **Layer Structure**: Discover architectural layers (presentation, business, data)
  - **Design Patterns**: Find common patterns (singleton, factory, repository, etc.)
  - **Key Components**: List 5-10 core modules/components with brief descriptions

  ### 3. Project Metrics
  - **Total Files**: Count source code files
  - **Lines of Code**: Estimate total LOC
  - **Module Count**: Number of top-level modules/packages
  - **Complexity**: Overall complexity rating (low/medium/high)

  ### 4. Entry Points
  - **Main Entry**: Identify primary application entry point(s)
  - **CLI Commands**: Discover available commands/scripts
  - **API Endpoints**: Find HTTP/REST/GraphQL endpoints (if applicable)

  ## Execution Mode
  Use **Deep Scan** with Dual-Source Strategy:
  - Phase 1: Bash structural scan (fast pattern discovery)
  - Phase 2: Gemini semantic analysis (design intent, patterns)
  - Phase 3: Synthesis (merge findings with attribution)

  ## Analysis Scope
  - Root directory: ${projectRoot}
  - Exclude: node_modules, dist, build, .git, vendor, __pycache__
  - Focus: Source code directories (src, lib, pkg, app, etc.)

  ## Output Format
  Return JSON structure for programmatic processing:

  \`\`\`json
  {
    "technology_stack": {
      "languages": [
        {"name": "TypeScript", "file_count": 150, "primary": true},
        {"name": "Python", "file_count": 30, "primary": false}
      ],
      "frameworks": ["React", "Express", "TypeORM"],
      "build_tools": ["npm", "webpack"],
      "test_frameworks": ["Jest", "Supertest"]
    },
    "architecture": {
      "style": "Layered MVC with Repository Pattern",
      "layers": ["presentation", "business-logic", "data-access"],
      "patterns": ["MVC", "Repository Pattern", "Dependency Injection"],
      "key_components": [
        {
          "name": "Authentication Module",
          "path": "src/auth",
          "description": "JWT-based authentication with OAuth2 support",
          "importance": "high"
        },
        {
          "name": "User Management",
          "path": "src/users",
          "description": "User CRUD operations and profile management",
          "importance": "high"
        }
      ]
    },
    "metrics": {
      "total_files": 180,
      "lines_of_code": 15000,
      "module_count": 12,
      "complexity": "medium"
    },
    "entry_points": {
      "main": "src/index.ts",
      "cli_commands": ["npm start", "npm test", "npm run build"],
      "api_endpoints": ["/api/auth", "/api/users", "/api/posts"]
    },
    "analysis_metadata": {
      "timestamp": "2025-01-18T10:30:00Z",
      "mode": "deep-scan",
      "source": "cli-explore-agent"
    }
  }
  \`\`\`

  ## Quality Requirements
  - ✅ All technology stack items verified (no guessing)
  - ✅ Key components include file paths for navigation
  - ✅ Architecture style based on actual code patterns, not assumptions
  - ✅ Metrics calculated from actual file counts/lines
  - ✅ Entry points verified as executable
  `
)
```

**Agent Output**: JSON structure with comprehensive project analysis

### Step 4: Build project.json from Analysis

**Data Processing**:
```javascript
// Parse agent analysis output
const analysis = JSON.parse(agentOutput);

// Build complete project.json structure
const projectMeta = {
  // Basic metadata
  project_name: projectName,
  initialized_at: new Date().toISOString(),

  // Project overview (from cli-explore-agent)
  overview: {
    description: generateDescription(analysis), // e.g., "TypeScript web application with React frontend"
    technology_stack: analysis.technology_stack,
    architecture: {
      style: analysis.architecture.style,
      layers: analysis.architecture.layers,
      patterns: analysis.architecture.patterns
    },
    key_components: analysis.architecture.key_components,
    entry_points: analysis.entry_points,
    metrics: analysis.metrics
  },

  // Feature registry (initially empty, populated by complete)
  features: [],

  // Statistics
  statistics: {
    total_features: 0,
    total_sessions: 0,
    last_updated: new Date().toISOString()
  },

  // Analysis metadata
  _metadata: {
    initialized_by: "cli-explore-agent",
    analysis_timestamp: analysis.analysis_metadata.timestamp,
    analysis_mode: analysis.analysis_metadata.mode
  }
};

// Helper: Generate project description
function generateDescription(analysis) {
  const primaryLang = analysis.technology_stack.languages.find(l => l.primary);
  const frameworks = analysis.technology_stack.frameworks.slice(0, 2).join(', ');

  return `${primaryLang.name} project using ${frameworks}`;
}

// Write to .workflow/project.json
Write('.workflow/project.json', JSON.stringify(projectMeta, null, 2));
```

### Step 5: Output Summary

```
✓ Project initialized successfully

## Project Overview
Name: ${projectName}
Description: ${overview.description}

### Technology Stack
Languages: ${languages.map(l => l.name).join(', ')}
Frameworks: ${frameworks.join(', ')}

### Architecture
Style: ${architecture.style}
Components: ${key_components.length} core modules identified

### Project Metrics
Files: ${metrics.total_files}
LOC: ${metrics.lines_of_code}
Complexity: ${metrics.complexity}

## Next Steps
1. Start a workflow: /workflow:plan "feature description"
2. View project state: /workflow:status --project
3. View details: cat .workflow/project.json

---
Project state saved to: .workflow/project.json
```

## Extended project.json Schema

### Complete Structure

```json
{
  "project_name": "claude_dms3",
  "initialized_at": "2025-01-18T10:00:00Z",

  "overview": {
    "description": "TypeScript workflow automation system with AI agent orchestration",
    "technology_stack": {
      "languages": [
        {"name": "TypeScript", "file_count": 150, "primary": true},
        {"name": "Bash", "file_count": 30, "primary": false}
      ],
      "frameworks": ["Node.js"],
      "build_tools": ["npm"],
      "test_frameworks": ["Jest"]
    },
    "architecture": {
      "style": "Agent-based workflow orchestration with modular command system",
      "layers": ["command-layer", "agent-orchestration", "cli-integration"],
      "patterns": ["Command Pattern", "Agent Pattern", "Template Method"]
    },
    "key_components": [
      {
        "name": "Workflow Planning",
        "path": ".claude/commands/workflow",
        "description": "Multi-phase planning workflow with brainstorming and task generation",
        "importance": "high"
      },
      {
        "name": "Agent System",
        "path": ".claude/agents",
        "description": "Specialized agents for code development, testing, documentation",
        "importance": "high"
      },
      {
        "name": "CLI Tool Integration",
        "path": ".claude/scripts",
        "description": "Gemini, Qwen, Codex wrapper scripts for AI-powered analysis",
        "importance": "medium"
      }
    ],
    "entry_points": {
      "main": ".claude/commands/workflow/plan.md",
      "cli_commands": ["/workflow:plan", "/workflow:execute", "/memory:docs"],
      "api_endpoints": []
    },
    "metrics": {
      "total_files": 180,
      "lines_of_code": 15000,
      "module_count": 12,
      "complexity": "medium"
    }
  },

  "features": [],

  "statistics": {
    "total_features": 0,
    "total_sessions": 0,
    "last_updated": "2025-01-18T10:00:00Z"
  },

  "_metadata": {
    "initialized_by": "cli-explore-agent",
    "analysis_timestamp": "2025-01-18T10:00:00Z",
    "analysis_mode": "deep-scan"
  }
}
```

## Regeneration Behavior

When using `--regenerate` flag:

1. **Backup existing file**:
   ```bash
   bash(cp .workflow/project.json .workflow/project.json.backup)
   ```

2. **Preserve features array**:
   ```javascript
   const existingMeta = JSON.parse(Read('.workflow/project.json'));
   const preservedFeatures = existingMeta.features || [];
   const preservedStats = existingMeta.statistics || {};
   ```

3. **Re-run cli-explore-agent analysis**

4. **Merge preserved data with new analysis**:
   ```javascript
   const newProjectMeta = {
     ...analysisResults,
     features: preservedFeatures,  // Keep existing features
     statistics: preservedStats    // Keep statistics
   };
   ```
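The merge in step 4 can be sketched language-neutrally: the fresh analysis wins everywhere except the preserved `features` and `statistics` keys. An illustrative Python sketch (function name is an assumption):

```python
def merge_regenerated(existing: dict, analysis: dict) -> dict:
    """Sketch of the regenerate merge: new analysis wins, features/statistics preserved."""
    merged = dict(analysis)
    merged["features"] = existing.get("features", [])
    merged["statistics"] = existing.get("statistics", {})
    return merged
```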

5. **Output**:
   ```
   ✓ Project analysis regenerated
   Backup saved: .workflow/project.json.backup

   Updated:
   - Technology stack analysis
   - Architecture overview
   - Key components discovery

   Preserved:
   - ${preservedFeatures.length} existing features
   - Session statistics
   ```

## Error Handling

### Agent Failure
```
If cli-explore-agent fails:
1. Fall back to basic initialization
2. Use get_modules_by_depth.sh for structure
3. Create minimal project.json with placeholder overview
4. Log warning: "Project initialized with basic analysis. Run /workflow:init --regenerate for full analysis"
```

### Missing Tools
```
If Gemini CLI unavailable:
1. Agent uses Qwen fallback
2. If both fail, use bash-only analysis
3. Mark in _metadata: "analysis_mode": "bash-fallback"
```

### Invalid Project Root
```
If not in git repo and empty directory:
1. Warn user: "Empty project detected"
2. Create minimal project.json
3. Suggest: "Add code files and run /workflow:init --regenerate"
```
@@ -0,0 +1,568 @@
---
name: lite-execute
description: Execute tasks based on in-memory plan, prompt description, or file content
argument-hint: "[--in-memory] [\"task description\"|file-path]"
allowed-tools: TodoWrite(*), Task(*), Bash(*)
---

# Workflow Lite-Execute Command (/workflow:lite-execute)

## Overview

Flexible task execution command supporting three input modes: in-memory plan (from lite-plan), direct prompt description, or file content. Handles execution orchestration, progress tracking, and optional code review.

**Core capabilities:**
- Multi-mode input (in-memory plan, prompt description, or file path)
- Execution orchestration (Agent or Codex) with full context
- Live progress tracking via TodoWrite at execution call level
- Optional code review with selected tool (Gemini, Agent, or custom)
- Context continuity across multiple executions
- Intelligent format detection (Enhanced Task JSON vs plain text)

## Usage

### Command Syntax
```bash
/workflow:lite-execute [FLAGS] <INPUT>

# Flags
--in-memory    Use plan from memory (called by lite-plan)

# Arguments
<input>        Task description string, or path to file (required)
```

## Input Modes

### Mode 1: In-Memory Plan

**Trigger**: Called by lite-plan after Phase 4 approval with `--in-memory` flag

**Input Source**: `executionContext` global variable set by lite-plan

**Content**: Complete execution context (see Data Structures section)

**Behavior**:
- Skip execution method selection (already set by lite-plan)
- Directly proceed to execution with full context
- All planning artifacts available (exploration, clarifications, plan)

### Mode 2: Prompt Description

**Trigger**: User calls with task description string

**Input**: Simple task description (e.g., "Add unit tests for auth module")

**Behavior**:
- Store prompt as `originalUserInput`
- Create simple execution plan from prompt
- AskUserQuestion: Select execution method (Agent/Codex/Auto)
- AskUserQuestion: Select code review tool (Skip/Gemini/Agent/Other)
- Proceed to execution with `originalUserInput` included

**User Interaction**:
```javascript
AskUserQuestion({
  questions: [
    {
      question: "Select execution method:",
      header: "Execution",
      multiSelect: false,
      options: [
        { label: "Agent", description: "@code-developer agent" },
        { label: "Codex", description: "codex CLI tool" },
        { label: "Auto", description: "Auto-select based on complexity" }
      ]
    },
    {
      question: "Enable code review after execution?",
      header: "Code Review",
      multiSelect: false,
      options: [
        { label: "Skip", description: "No review" },
        { label: "Gemini Review", description: "Gemini CLI tool" },
        { label: "Agent Review", description: "Current agent review" }
      ]
    }
  ]
})
```

### Mode 3: File Content

**Trigger**: User calls with file path

**Input**: Path to file containing task description or Enhanced Task JSON

**Step 1: Read and Detect Format**

```javascript
fileContent = Read(filePath)

// Attempt JSON parsing
try {
  jsonData = JSON.parse(fileContent)

  // Check if Enhanced Task JSON from lite-plan
  if (jsonData.meta?.workflow === "lite-plan") {
    // Extract plan data
    planObject = {
      summary: jsonData.context.plan.summary,
      approach: jsonData.context.plan.approach,
      tasks: jsonData.context.plan.tasks,
      estimated_time: jsonData.meta.estimated_time,
      recommended_execution: jsonData.meta.recommended_execution,
      complexity: jsonData.meta.complexity
    }
    explorationContext = jsonData.context.exploration || null
    clarificationContext = jsonData.context.clarifications || null
    originalUserInput = jsonData.title

    isEnhancedTaskJson = true
  } else {
    // Valid JSON but not Enhanced Task JSON - treat as plain text
    originalUserInput = fileContent
    isEnhancedTaskJson = false
  }
} catch {
  // Not valid JSON - treat as plain text prompt
  originalUserInput = fileContent
  isEnhancedTaskJson = false
}
```
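The detection logic above reduces to one decision: parse as JSON, then check `meta.workflow`. A compact Python sketch of the same branching (function name and return shape are illustrative):

```python
import json

def detect_input(file_content: str) -> dict:
    """Sketch of lite-execute format detection: Enhanced Task JSON vs plain text."""
    try:
        data = json.loads(file_content)
    except json.JSONDecodeError:
        # Not valid JSON - treat as plain text prompt
        return {"enhanced": False, "prompt": file_content}
    if isinstance(data, dict) and data.get("meta", {}).get("workflow") == "lite-plan":
        # Enhanced Task JSON from lite-plan - extract the embedded plan
        return {"enhanced": True, "plan": data.get("context", {}).get("plan")}
    # Valid JSON but not Enhanced Task JSON - treat as plain text
    return {"enhanced": False, "prompt": file_content}
```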
|
|
||||||
|
**Step 2: Create Execution Plan**
|
||||||
|
|
||||||
|
If `isEnhancedTaskJson === true`:
|
||||||
|
- Use extracted `planObject` directly
|
||||||
|
- Skip planning, use lite-plan's existing plan
|
||||||
|
- User still selects execution method and code review
|
||||||
|
|
||||||
|
If `isEnhancedTaskJson === false`:
|
||||||
|
- Treat file content as prompt (same behavior as Mode 2)
|
||||||
|
- Create simple execution plan from content

**Step 3: User Interaction**

- AskUserQuestion: Select execution method (Agent/Codex/Auto)
- AskUserQuestion: Select code review tool
- Proceed to execution with full context

## Execution Process

### Workflow Overview

```
Input Processing → Mode Detection
        |
        v
[Mode 1] --in-memory: Load executionContext → Skip selection
[Mode 2] Prompt: Create plan → User selects method + review
[Mode 3] File: Detect format → Extract plan OR treat as prompt → User selects
        |
        v
Execution & Progress Tracking
├─ Step 1: Initialize execution tracking
├─ Step 2: Create TodoWrite execution list
├─ Step 3: Launch execution (Agent or Codex)
├─ Step 4: Track execution progress
└─ Step 5: Code review (optional)
        |
        v
Execution Complete
```

## Detailed Execution Steps

### Step 1: Initialize Execution Tracking

**Operations**:
- Initialize result tracking for multi-execution scenarios
- Set up `previousExecutionResults` array for context continuity

```javascript
// Initialize result tracking
previousExecutionResults = []
```

### Step 2: Create TodoWrite Execution List

**Operations**:
- Create execution tracking from task list
- Typically single execution call for all tasks
- Split into multiple calls if task list very large (>10 tasks)

**Execution Call Creation**:
```javascript
function createExecutionCalls(tasks) {
  const taskTitles = tasks.map(t => t.title || t)

  // Single call for ≤10 tasks (most common)
  if (tasks.length <= 10) {
    return [{
      method: executionMethod === "Codex" ? "Codex" : "Agent",
      taskSummary: taskTitles.length <= 3
        ? taskTitles.join(', ')
        : `${taskTitles.slice(0, 2).join(', ')}, and ${taskTitles.length - 2} more`,
      tasks: tasks
    }]
  }

  // Split into multiple calls for >10 tasks
  const callSize = 5
  const calls = []
  for (let i = 0; i < tasks.length; i += callSize) {
    const batchTasks = tasks.slice(i, i + callSize)
    const batchTitles = batchTasks.map(t => t.title || t)
    calls.push({
      method: executionMethod === "Codex" ? "Codex" : "Agent",
      taskSummary: `Tasks ${i + 1}-${Math.min(i + callSize, tasks.length)}: ${batchTitles[0]}...`,
      tasks: batchTasks
    })
  }
  return calls
}

// Create execution calls with IDs
executionCalls = createExecutionCalls(planObject.tasks).map((call, index) => ({
  ...call,
  id: `[${call.method}-${index + 1}]`
}))

// Create TodoWrite list
TodoWrite({
  todos: executionCalls.map(call => ({
    content: `${call.id} (${call.taskSummary})`,
    status: "pending",
    activeForm: `Executing ${call.id} (${call.taskSummary})`
  }))
})
```

**Example Execution Lists**:
```
Single call (typical):
[ ] [Agent-1] (Create AuthService, Add JWT utilities, Implement middleware)

Few tasks:
[ ] [Codex-1] (Create AuthService, Add JWT utilities, and 3 more)

Large task sets (>10):
[ ] [Agent-1] (Tasks 1-5: Create AuthService, Add JWT utilities, ...)
[ ] [Agent-2] (Tasks 6-10: Create tests, Update docs, ...)
```
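The batching rule above can be checked with a self-contained sketch. This is a stripped-down reimplementation for illustration only: it drops the `method` field (which depends on the command's `executionMethod` state) and the task titles are placeholders.

```javascript
// Sketch of the batching rule: one call for <=10 tasks, otherwise batches of 5.
// `batchTasks` is an illustrative name; the real doc uses createExecutionCalls.
function batchTasks(titles, callSize = 5) {
  if (titles.length <= 10) {
    const taskSummary = titles.length <= 3
      ? titles.join(', ')
      : `${titles.slice(0, 2).join(', ')}, and ${titles.length - 2} more`
    return [{ taskSummary, tasks: titles }]
  }
  const calls = []
  for (let i = 0; i < titles.length; i += callSize) {
    const batch = titles.slice(i, i + callSize)
    calls.push({
      taskSummary: `Tasks ${i + 1}-${Math.min(i + callSize, titles.length)}: ${batch[0]}...`,
      tasks: batch
    })
  }
  return calls
}

// 4 tasks: single call, summary truncated to "first two, and N more"
const small = batchTasks(['A', 'B', 'C', 'D'])
// 12 tasks: split into batches of 5 (5 + 5 + 2)
const large = batchTasks(Array.from({ length: 12 }, (_, i) => `T${i + 1}`))
```

The 5-per-batch split keeps each Agent/Codex invocation's prompt small while the summary string keeps the TodoWrite entry readable.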

### Step 3: Launch Execution

**IMPORTANT**: CLI execution MUST run in foreground (no background execution)

**Execution Loop**:
```javascript
for (currentIndex = 0; currentIndex < executionCalls.length; currentIndex++) {
  const currentCall = executionCalls[currentIndex]

  // Update TodoWrite: mark current call in_progress
  // Launch execution with previousExecutionResults context
  // After completion: collect result, add to previousExecutionResults
  // Update TodoWrite: mark current call completed
}
```

**Option A: Agent Execution**

When to use:
- `executionMethod = "Agent"`
- `executionMethod = "Auto" AND complexity = "Low"`

Agent call format:
```javascript
function formatTaskForAgent(task, index) {
  return `
### Task ${index + 1}: ${task.title}
**File**: ${task.file}
**Action**: ${task.action}
**Description**: ${task.description}

**Implementation Steps**:
${task.implementation.map((step, i) => `${i + 1}. ${step}`).join('\n')}

**Reference**:
- Pattern: ${task.reference.pattern}
- Example Files: ${task.reference.files.join(', ')}
- Guidance: ${task.reference.examples}

**Acceptance Criteria**:
${task.acceptance.map((criterion, i) => `${i + 1}. ${criterion}`).join('\n')}
`
}

Task(
  subagent_type="code-developer",
  description="Implement planned tasks",
  prompt=`
${originalUserInput ? `## Original User Request\n${originalUserInput}\n\n` : ''}

## Implementation Plan

**Summary**: ${planObject.summary}
**Approach**: ${planObject.approach}

## Task Breakdown (${planObject.tasks.length} tasks)
${planObject.tasks.map((task, i) => formatTaskForAgent(task, i)).join('\n')}

${previousExecutionResults.length > 0 ? `\n## Previous Execution Results\n${previousExecutionResults.map(result => `
[${result.executionId}] ${result.status}
Tasks: ${result.tasksSummary}
Completion: ${result.completionSummary}
Outputs: ${result.keyOutputs || 'See git diff'}
${result.notes ? `Notes: ${result.notes}` : ''}
`).join('\n---\n')}` : ''}

## Code Context
${explorationContext || "No exploration performed"}

${clarificationContext ? `\n## Clarifications\n${JSON.stringify(clarificationContext, null, 2)}` : ''}

## Instructions
- Reference original request to ensure alignment
- Review previous results to understand completed work
- Build on previous work, avoid duplication
- Test functionality as you implement
- Complete all assigned tasks
`
)
```

**Result Collection**: After completion, collect result following `executionResult` structure (see Data Structures section)

**Option B: CLI Execution (Codex)**

When to use:
- `executionMethod = "Codex"`
- `executionMethod = "Auto" AND complexity = "Medium" or "High"`

Command format:
```bash
function formatTaskForCodex(task, index) {
  return `
${index + 1}. ${task.title} (${task.file})
   Action: ${task.action}
   What: ${task.description}
   How:
${task.implementation.map((step, i) => `   ${i + 1}. ${step}`).join('\n')}
   Reference: ${task.reference.pattern} (see ${task.reference.files.join(', ')})
   Guidance: ${task.reference.examples}
   Verify:
${task.acceptance.map((criterion, i) => `   - ${criterion}`).join('\n')}
`
}

codex --full-auto exec "
${originalUserInput ? `## Original User Request\n${originalUserInput}\n\n` : ''}

## Implementation Plan

TASK: ${planObject.summary}
APPROACH: ${planObject.approach}

### Task Breakdown (${planObject.tasks.length} tasks)
${planObject.tasks.map((task, i) => formatTaskForCodex(task, i)).join('\n')}

${previousExecutionResults.length > 0 ? `\n### Previous Execution Results\n${previousExecutionResults.map(result => `
[${result.executionId}] ${result.status}
Tasks: ${result.tasksSummary}
Status: ${result.completionSummary}
Outputs: ${result.keyOutputs || 'See git diff'}
${result.notes ? `Notes: ${result.notes}` : ''}
`).join('\n---\n')}

IMPORTANT: Review previous results. Build on completed work. Avoid duplication.
` : ''}

### Code Context from Exploration
${explorationContext ? `
Project Structure: ${explorationContext.project_structure || 'Standard structure'}
Relevant Files: ${explorationContext.relevant_files?.join(', ') || 'TBD'}
Current Patterns: ${explorationContext.patterns || 'Follow existing conventions'}
Integration Points: ${explorationContext.dependencies || 'None specified'}
Constraints: ${explorationContext.constraints || 'None'}
` : 'No prior exploration - analyze codebase as needed'}

${clarificationContext ? `\n### User Clarifications\n${Object.entries(clarificationContext).map(([q, a]) => `${q}: ${a}`).join('\n')}` : ''}

## Execution Instructions
- Reference original request to ensure alignment
- Review previous results for context continuity
- Build on previous work, don't duplicate completed tasks
- Complete all assigned tasks in single execution
- Test functionality as you implement

Complexity: ${planObject.complexity}
" --skip-git-repo-check -s danger-full-access
```

**Execution with tracking**:
```javascript
// Launch CLI in foreground (NOT background)
bash_result = Bash(
  command=cli_command,
  timeout=600000 // 10 minutes
)

// Update TodoWrite when execution completes
```

**Result Collection**: After completion, analyze output and collect result following `executionResult` structure

### Step 4: Track Execution Progress

**Real-time TodoWrite Updates** at execution call level:

```javascript
// When call starts
TodoWrite({
  todos: [
    { content: "[Agent-1] (Implement auth + Create JWT utils)", status: "in_progress", activeForm: "..." },
    { content: "[Agent-2] (Add middleware + Update routes)", status: "pending", activeForm: "..." }
  ]
})

// When call completes
TodoWrite({
  todos: [
    { content: "[Agent-1] (Implement auth + Create JWT utils)", status: "completed", activeForm: "..." },
    { content: "[Agent-2] (Add middleware + Update routes)", status: "in_progress", activeForm: "..." }
  ]
})
```

**User Visibility**:
- User sees execution call progress (not individual task progress)
- Current execution highlighted as "in_progress"
- Completed executions marked with checkmark
- Each execution shows task summary for context

### Step 5: Code Review (Optional)

**Skip Condition**: Only run if `codeReviewTool ≠ "Skip"`

**Operations**:
- Agent Review: Current agent performs direct review
- Gemini Review: Execute gemini CLI with review prompt
- Custom tool: Execute specified CLI tool (qwen, codex, etc.)

**Command Formats**:

```bash
# Agent Review: Direct agent review (no CLI)
# Uses analysis prompt and TodoWrite tools directly

# Gemini Review:
gemini -p "
PURPOSE: Code review for implemented changes
TASK: • Analyze quality • Identify issues • Suggest improvements
MODE: analysis
CONTEXT: @**/* | Memory: Review lite-execute changes
EXPECTED: Quality assessment with recommendations
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-quality.txt) | Focus on recent changes | analysis=READ-ONLY
"

# Qwen Review (custom tool via "Other"):
qwen -p "
PURPOSE: Code review for implemented changes
TASK: • Analyze quality • Identify issues • Suggest improvements
MODE: analysis
CONTEXT: @**/* | Memory: Review lite-execute changes
EXPECTED: Quality assessment with recommendations
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-quality.txt) | Focus on recent changes | analysis=READ-ONLY
"

# Codex Review (custom tool via "Other"):
codex --full-auto exec "Review recent code changes for quality, potential issues, and improvements" --skip-git-repo-check -s danger-full-access
```

## Best Practices

### Execution Intelligence

1. **Context Continuity**: Each execution call receives previous results
   - Prevents duplication across multiple executions
   - Maintains coherent implementation flow
   - Builds on completed work

2. **Execution Call Tracking**: Progress at call level, not task level
   - Each call handles all or a subset of tasks
   - Clear visibility of current execution
   - Simple progress updates

3. **Flexible Execution**: Multiple input modes supported
   - In-memory: Seamless lite-plan integration
   - Prompt: Quick standalone execution
   - File: Intelligent format detection
     - Enhanced Task JSON (lite-plan export): Full plan extraction
     - Plain text: Used as prompt

### Task Management

1. **Live Progress Updates**: Real-time TodoWrite tracking
   - Execution calls created before execution starts
   - Updated as executions progress
   - Clear completion status

2. **Simple Execution**: Straightforward task handling
   - All tasks in single call (typical)
   - Split only for very large task sets (>10)
   - Agent/Codex determines optimal execution order

## Error Handling

| Error | Cause | Resolution |
|-------|-------|------------|
| Missing executionContext | --in-memory without context | Error: "No execution context found. Only available when called by lite-plan." |
| File not found | File path doesn't exist | Error: "File not found: {path}. Check file path." |
| Empty file | File exists but no content | Error: "File is empty: {path}. Provide task description." |
| Invalid Enhanced Task JSON | JSON missing required fields | Warning: "Missing required fields. Treating as plain text." |
| Malformed JSON | JSON parsing fails | Treat as plain text (expected for non-JSON files) |
| Execution failure | Agent/Codex crashes | Display error, save partial progress, suggest retry |
| Codex unavailable | Codex not installed | Show installation instructions, offer Agent execution |

## Data Structures

### executionContext (Input - Mode 1)

Passed from lite-plan via global variable:

```javascript
{
  planObject: {
    summary: string,
    approach: string,
    tasks: [...],
    estimated_time: string,
    recommended_execution: string,
    complexity: string
  },
  explorationContext: {...} | null,
  clarificationContext: {...} | null,
  executionMethod: "Agent" | "Codex" | "Auto",
  codeReviewTool: "Skip" | "Gemini Review" | "Agent Review" | string,
  originalUserInput: string
}
```

### executionResult (Output)

Collected after each execution call completes:

```javascript
{
  executionId: string,        // e.g., "[Agent-1]", "[Codex-1]"
  status: "completed" | "partial" | "failed",
  tasksSummary: string,       // Brief description of tasks handled
  completionSummary: string,  // What was completed
  keyOutputs: string,         // Files created/modified, key changes
  notes: string               // Important context for next execution
}
```

Appended to `previousExecutionResults` array for context continuity in multi-execution scenarios.
File diff suppressed because it is too large
@@ -236,35 +236,31 @@ SlashCommand(command="/workflow:tools:task-generate-agent --session [sessionId]
 - `.workflow/[sessionId]/.task/IMPL-*.json` exists (at least one)
 - `.workflow/[sessionId]/TODO_LIST.md` exists
 
-<!-- TodoWrite: When task-generate-agent invoked, INSERT 3 task-generate-agent tasks -->
+<!-- TodoWrite: When task-generate-agent invoked, ATTACH 1 agent task -->
 
-**TodoWrite Update (Phase 4 SlashCommand invoked - tasks attached)**:
+**TodoWrite Update (Phase 4 SlashCommand invoked - agent task attached)**:
 ```json
 [
   {"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
   {"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
-  {"content": "Phase 4.1: Discovery - analyze requirements (task-generate-agent)", "status": "in_progress", "activeForm": "Analyzing requirements"},
-  {"content": "Phase 4.2: Planning - design tasks (task-generate-agent)", "status": "pending", "activeForm": "Designing tasks"},
-  {"content": "Phase 4.3: Output - generate JSONs (task-generate-agent)", "status": "pending", "activeForm": "Generating task JSONs"}
+  {"content": "Execute task-generate-agent", "status": "in_progress", "activeForm": "Executing task-generate-agent"}
 ]
 ```
 
-**Note**: SlashCommand invocation **attaches** task-generate-agent's 3 tasks. Orchestrator **executes** these tasks.
+**Note**: Single agent task attached. Agent autonomously completes discovery, planning, and output generation internally.
 
-**Next Action**: Tasks attached → **Execute Phase 4.1-4.3** sequentially
+<!-- TodoWrite: After agent completes, mark task as completed -->
 
-<!-- TodoWrite: After Phase 4 tasks complete, REMOVE Phase 4.1-4.3, restore to orchestrator view -->
+**TodoWrite Update (Phase 4 completed)**:
 
-**TodoWrite Update (Phase 4 completed - tasks collapsed)**:
 ```json
 [
   {"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
   {"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
-  {"content": "Execute task generation", "status": "completed", "activeForm": "Executing task generation"}
+  {"content": "Execute task-generate-agent", "status": "completed", "activeForm": "Executing task-generate-agent"}
 ]
 ```
 
-**Note**: Phase 4 tasks completed and collapsed to summary.
+**Note**: Agent task completed. No collapse needed (single task).
 
 **Return to User**:
 ```
@@ -288,31 +284,35 @@ Quality Gate: Consider running /workflow:action-plan-verify to catch issues earl
 
 1. **Task Attachment** (when SlashCommand invoked):
    - Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
-   - Example: `/workflow:tools:context-gather` attaches 3 sub-tasks (Phase 2.1, 2.2, 2.3)
+   - **Phase 2, 3**: Multiple sub-tasks attached (e.g., Phase 2.1, 2.2, 2.3)
+   - **Phase 4**: Single agent task attached (e.g., "Execute task-generate-agent")
    - First attached task marked as `in_progress`, others as `pending`
    - Orchestrator **executes** these attached tasks sequentially
 
 2. **Task Collapse** (after sub-tasks complete):
-   - Remove detailed sub-tasks from TodoWrite
+   - **Applies to Phase 2, 3**: Remove detailed sub-tasks from TodoWrite
    - **Collapse** to high-level phase summary
    - Example: Phase 2.1-2.3 collapse to "Execute context gathering: completed"
+   - **Phase 4**: No collapse needed (single task, just mark completed)
    - Maintains clean orchestrator-level view
 
 3. **Continuous Execution**:
-   - After collapse, automatically proceed to next pending phase
+   - After completion, automatically proceed to next pending phase
    - No user intervention required between phases
    - TodoWrite dynamically reflects current execution state
 
-**Lifecycle Summary**: Initial pending tasks → Phase invoked (tasks ATTACHED) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary) → Next phase begins → Repeat until all phases complete.
+**Lifecycle Summary**: Initial pending tasks → Phase invoked (tasks ATTACHED) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary for Phase 2/3, or marked completed for Phase 4) → Next phase begins → Repeat until all phases complete.
 
 ### Benefits
 
 - ✓ Real-time visibility into sub-task execution
-- ✓ Clear mental model: SlashCommand = attach → execute → collapse
+- ✓ Clear mental model: SlashCommand = attach → execute → collapse (Phase 2/3) or complete (Phase 4)
 - ✓ Clean summary after completion
 - ✓ Easy to track workflow progress
 
-**Note**: See individual Phase descriptions (Phase 2, 3, 4) for detailed TodoWrite Update examples with full JSON structures.
+**Note**: See individual Phase descriptions for detailed TodoWrite Update examples:
+- **Phase 2, 3**: Multiple sub-tasks with attach/collapse pattern
+- **Phase 4**: Single agent task (no collapse needed)
 
 ## Input Processing
 
@@ -425,20 +425,21 @@ Conditional Branch: Check conflict_risk
   └─ ELSE: Skip Phase 3, proceed to Phase 4
   ↓
 Phase 4: Task Generation (SlashCommand invoked)
-  → ATTACH 3 tasks:                        ← ATTACHED
-     - Phase 4.1: Discovery - analyze requirements
-     - Phase 4.2: Planning - design tasks
-     - Phase 4.3: Output - generate JSONs
-  → Execute Phase 4.1-4.3
-  → COLLAPSE tasks                         ← COLLAPSED
+  → ATTACH 1 agent task:                   ← ATTACHED
+     - Execute task-generate-agent
+  → Agent autonomously completes internally:
+     (discovery → planning → output)
   → Outputs: IMPL_PLAN.md, IMPL-*.json, TODO_LIST.md
   ↓
 Return summary to user
 ```
 
 **Key Points**:
-- **← ATTACHED**: Sub-tasks attached to TodoWrite when SlashCommand invoked
-- **← COLLAPSED**: Sub-tasks collapsed to summary after completion
+- **← ATTACHED**: Tasks attached to TodoWrite when SlashCommand invoked
+  - Phase 2, 3: Multiple sub-tasks
+  - Phase 4: Single agent task
+- **← COLLAPSED**: Sub-tasks collapsed to summary after completion (Phase 2, 3 only)
+- **Phase 4**: Single agent task, no collapse (just mark completed)
 - **Conditional Branch**: Phase 3 only executes if conflict_risk ≥ medium
 - **Continuous Flow**: No user intervention between phases
 
@@ -1,105 +0,0 @@
----
-name: resume
-description: Resume paused workflow session with automatic progress analysis, pending task identification, and conflict detection
-argument-hint: "session-id for workflow session to resume"
-allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
----
-
-# Sequential Workflow Resume Command
-
-## Usage
-```bash
-/workflow:resume "<session-id>"
-```
-
-## Purpose
-**Sequential command coordination for workflow resumption** by first analyzing current session status, then continuing execution with special resume context. This command orchestrates intelligent session resumption through two-step process.
-
-## Command Coordination Workflow
-
-### Phase 1: Status Analysis
-1. **Call status command**: Execute `/workflow:status` to analyze current session state
-2. **Verify session information**: Check session ID, progress, and current task status
-3. **Identify resume point**: Determine where workflow was interrupted
-
-### Phase 2: Resume Execution
-1. **Call execute with resume flag**: Execute `/workflow:execute --resume-session="{session-id}"`
-2. **Pass session context**: Provide analyzed session information to execute command
-3. **Direct agent execution**: Skip discovery phase, directly enter TodoWrite and agent execution
-
-## Implementation Protocol
-
-### Sequential Command Execution
-```bash
-# Phase 1: Analyze current session status
-SlashCommand(command="/workflow:status")
-
-# Phase 2: Resume execution with special flag
-SlashCommand(command="/workflow:execute --resume-session=\"{session-id}\"")
-```
-
-### Progress Tracking
-```javascript
-TodoWrite({
-  todos: [
-    {
-      content: "Analyze current session status and progress",
-      status: "in_progress",
-      activeForm: "Analyzing session status"
-    },
-    {
-      content: "Resume workflow execution with session context",
-      status: "pending",
-      activeForm: "Resuming workflow execution"
-    }
-  ]
-});
-```
-
-## Resume Information Flow
-
-### Status Analysis Results
-The `/workflow:status` command provides:
-- **Session ID**: Current active session identifier
-- **Current Progress**: Completed, in-progress, and pending tasks
-- **Interruption Point**: Last executed task and next pending task
-- **Session State**: Overall workflow status
-
-### Execute Command Context
-The special `--resume-session` flag tells `/workflow:execute`:
-- **Skip Discovery**: Don't search for sessions, use provided session ID
-- **Direct Execution**: Go straight to TodoWrite generation and agent launching
-- **Context Restoration**: Use existing session state and summaries
-- **Resume Point**: Continue from identified interruption point
-
-## Error Handling
-
-### Session Validation Failures
-- **Session not found**: Report missing session, suggest available sessions
-- **Session inactive**: Recommend activating session first
-- **Status command fails**: Retry once, then report analysis failure
-
-### Execute Resumption Failures
-- **No pending tasks**: Report workflow completion status
-- **Execute command fails**: Report resumption failure, suggest manual intervention
-
-## Success Criteria
-1. **Status analysis complete**: Session state properly analyzed and reported
-2. **Execute command launched**: Resume execution started with proper context
-3. **Agent coordination**: TodoWrite and agent execution initiated successfully
-4. **Context preservation**: Session state and progress properly maintained
-
-## Related Commands
-
-**Prerequisite Commands**:
-- `/workflow:plan` or `/workflow:execute` - Workflow must be in progress or paused
-
-**Called by This Command** (2 phases):
-- `/workflow:status` - Phase 1: Analyze current session status and identify resume point
-- `/workflow:execute` - Phase 2: Resume execution with `--resume-session` flag
-
-**Follow-up Commands**:
-- None - Workflow continues automatically via `/workflow:execute`
-
----
-*Sequential command coordination for workflow session resumption*
@@ -39,17 +39,17 @@ argument-hint: "[--type=security|architecture|action-items|quality] [optional: s
 if [ -n "$SESSION_ARG" ]; then
   sessionId="$SESSION_ARG"
 else
-  sessionId=$(find .workflow/ -name '.active-*' | head -1 | sed 's/.*active-//')
+  sessionId=$(find .workflow/sessions/ -name "WFS-*" -type d | head -1 | xargs basename)
 fi
 
 # Step 2: Validation
-if [ ! -d ".workflow/${sessionId}" ]; then
+if [ ! -d ".workflow/sessions/${sessionId}" ]; then
   echo "Session ${sessionId} not found"
   exit 1
 fi
 
 # Check for completed tasks
-if [ ! -d ".workflow/${sessionId}/.summaries" ] || [ -z "$(find .workflow/${sessionId}/.summaries/ -name "IMPL-*.md" -type f 2>/dev/null)" ]; then
+if [ ! -d ".workflow/sessions/${sessionId}/.summaries" ] || [ -z "$(find .workflow/sessions/${sessionId}/.summaries/ -name "IMPL-*.md" -type f 2>/dev/null)" ]; then
   echo "No completed implementation found. Complete implementation first"
   exit 1
 fi
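The directory-based detection introduced in this hunk can be exercised end to end. A minimal sketch with a throwaway fixture; the session name `WFS-demo` is hypothetical:

```shell
# Build a disposable fixture that mimics the new .workflow/sessions/ layout
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p .workflow/sessions/WFS-demo/.summaries
touch .workflow/sessions/WFS-demo/.summaries/IMPL-001.md

# Directory-based detection: the session id IS the directory name
sessionId=$(find .workflow/sessions/ -name "WFS-*" -type d | head -1 | xargs basename)

# Same validation gates as the updated review command
[ -d ".workflow/sessions/${sessionId}" ] || exit 1
find ".workflow/sessions/${sessionId}/.summaries/" -name "IMPL-*.md" -type f | head -1
```

Note there are no marker files to create or clean up; removing the session directory is the only state change.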
@@ -80,13 +80,13 @@ After bash validation, the model takes control to:
 1. **Load Context**: Read completed task summaries and changed files
    ```bash
    # Load implementation summaries
-   cat .workflow/${sessionId}/.summaries/IMPL-*.md
+   cat .workflow/sessions/${sessionId}/.summaries/IMPL-*.md
 
    # Load test results (if available)
-   cat .workflow/${sessionId}/.summaries/TEST-FIX-*.md 2>/dev/null
+   cat .workflow/sessions/${sessionId}/.summaries/TEST-FIX-*.md 2>/dev/null
 
    # Get changed files
-   git log --since="$(cat .workflow/${sessionId}/workflow-session.json | jq -r .created_at)" --name-only --pretty=format: | sort -u
+   git log --since="$(cat .workflow/sessions/${sessionId}/workflow-session.json | jq -r .created_at)" --name-only --pretty=format: | sort -u
    ```
 
 2. **Perform Specialized Review**: Based on `review_type`
@@ -99,7 +99,7 @@ After bash validation, the model takes control to:
   ```
 - Use Gemini for security analysis:
   ```bash
-  cd .workflow/${sessionId} && gemini -p "
+  cd .workflow/sessions/${sessionId} && gemini -p "
   PURPOSE: Security audit of completed implementation
   TASK: Review code for security vulnerabilities, insecure patterns, auth/authz issues
   CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
@@ -111,7 +111,7 @@ After bash validation, the model takes control to:
 **Architecture Review** (`--type=architecture`):
 - Use Qwen for architecture analysis:
   ```bash
-  cd .workflow/${sessionId} && qwen -p "
+  cd .workflow/sessions/${sessionId} && qwen -p "
   PURPOSE: Architecture compliance review
   TASK: Evaluate adherence to architectural patterns, identify technical debt, review design decisions
   CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
@@ -123,7 +123,7 @@ After bash validation, the model takes control to:
 **Quality Review** (`--type=quality`):
 - Use Gemini for code quality:
   ```bash
-  cd .workflow/${sessionId} && gemini -p "
+  cd .workflow/sessions/${sessionId} && gemini -p "
   PURPOSE: Code quality and best practices review
   TASK: Assess code readability, maintainability, adherence to best practices
   CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
@@ -136,14 +136,14 @@ After bash validation, the model takes control to:
 - Verify all requirements and acceptance criteria met:
   ```bash
   # Load task requirements and acceptance criteria
-  find .workflow/${sessionId}/.task -name "IMPL-*.json" -exec jq -r '
+  find .workflow/sessions/${sessionId}/.task -name "IMPL-*.json" -exec jq -r '
     "Task: " + .id + "\n" +
     "Requirements: " + (.context.requirements | join(", ")) + "\n" +
     "Acceptance: " + (.context.acceptance | join(", "))
   ' {} \;
 
   # Check implementation summaries against requirements
-  cd .workflow/${sessionId} && gemini -p "
+  cd .workflow/sessions/${sessionId} && gemini -p "
   PURPOSE: Verify all requirements and acceptance criteria are met
   TASK: Cross-check implementation summaries against original requirements
   CONTEXT: @.task/IMPL-*.json,.summaries/IMPL-*.md,../.. @../../CLAUDE.md
@@ -195,7 +195,7 @@ After bash validation, the model takes control to:
 4. **Output Files**:
    ```bash
    # Save review report
-   Write(.workflow/${sessionId}/REVIEW-${review_type}.md)
+   Write(.workflow/sessions/${sessionId}/REVIEW-${review_type}.md)
 
    # Update session metadata
    # (optional) Update workflow-session.json with review status
@@ -19,129 +19,472 @@ Mark the currently active workflow session as complete, analyze it for lessons l
 
 ## Implementation Flow
 
-### Phase 1: Prepare for Archival (Minimal Manual Operations)
+### Phase 1: Pre-Archival Preparation (Transactional Setup)
 
-**Purpose**: Find active session, move to archive location, pass control to agent. Minimal operations.
+**Purpose**: Find active session, create archiving marker to prevent concurrent operations. Session remains in active location for agent processing.
 
 #### Step 1.1: Find Active Session and Get Name
 ```bash
-# Find active marker
-bash(find .workflow/ -name ".active-*" -type f | head -1)
+# Find active session directory
+bash(find .workflow/sessions/ -name "WFS-*" -type d | head -1)
 
-# Extract session name from marker path
-bash(basename .workflow/.active-WFS-session-name | sed 's/^\.active-//')
+# Extract session name from directory path
+bash(basename .workflow/sessions/WFS-session-name)
 ```
 **Output**: Session name `WFS-session-name`
 
-#### Step 1.2: Move Session to Archive
+#### Step 1.2: Check for Existing Archiving Marker (Resume Detection)
 ```bash
-# Create archive directory if needed
-bash(mkdir -p .workflow/.archives/)
+# Check if session is already being archived
+bash(test -f .workflow/sessions/WFS-session-name/.archiving && echo "RESUMING" || echo "NEW")
-
-# Move session to archive location
-bash(mv .workflow/WFS-session-name .workflow/.archives/WFS-session-name)
 ```
-**Result**: Session now at `.workflow/.archives/WFS-session-name/`
 
-### Phase 2: Agent-Orchestrated Completion (All Data Processing)
+**If RESUMING**:
+- Previous archival attempt was interrupted
+- Skip to Phase 2 to resume agent analysis
 
-**Purpose**: Agent analyzes archived session, generates metadata, updates manifest, and removes active marker.
+**If NEW**:
+- Continue to Step 1.3
 
+#### Step 1.3: Create Archiving Marker
+```bash
+# Mark session as "archiving in progress"
+bash(touch .workflow/sessions/WFS-session-name/.archiving)
+```
+**Purpose**:
+- Prevents concurrent operations on this session
+- Enables recovery if archival fails
+- Session remains in `.workflow/sessions/` for agent analysis
+
+**Result**: Session still at `.workflow/sessions/WFS-session-name/` with `.archiving` marker
+
+### Phase 2: Agent Analysis (In-Place Processing)
+
+**Purpose**: Agent analyzes session WHILE STILL IN ACTIVE LOCATION. Generates metadata but does NOT move files or update manifest.
 
 #### Agent Invocation
 
-Invoke `universal-executor` agent to complete the archival process.
+Invoke `universal-executor` agent to analyze session and prepare archive metadata.
 
 **Agent Task**:
 ```
 Task(
   subagent_type="universal-executor",
-  description="Complete session archival",
+  description="Analyze session for archival",
   prompt=`
-Complete workflow session archival. Session already moved to archive location.
+Analyze workflow session for archival preparation. Session is STILL in active location.
 
 ## Context
-- Session: .workflow/.archives/WFS-session-name/
-- Active marker: .workflow/.active-WFS-session-name
+- Session: .workflow/sessions/WFS-session-name/
+- Status: Marked as archiving (.archiving marker present)
+- Location: Active sessions directory (NOT archived yet)
 
 ## Tasks
 
-1. **Extract session data** from workflow-session.json (session_id, description/topic, started_at/timestamp, completed_at, status)
+1. **Extract session data** from workflow-session.json
+   - session_id, description/topic, started_at, completed_at, status
    - If status != "completed", update it with timestamp
 
 2. **Count files**: tasks (.task/*.json) and summaries (.summaries/*.md)
 
-3. **Generate lessons**: Use gemini with ~/.claude/workflows/cli-templates/prompts/archive/analysis-simple.txt (fallback: analyze files directly)
+3. **Generate lessons**: Use gemini with ~/.claude/workflows/cli-templates/prompts/archive/analysis-simple.txt
   - Return: {successes, challenges, watch_patterns}
 
 4. **Build archive entry**:
   - Calculate: duration_hours, success_rate, tags (3-5 keywords)
-  - Construct complete JSON with session_id, description, archived_at, archive_path, metrics, tags, lessons
+  - Construct complete JSON with session_id, description, archived_at, metrics, tags, lessons
+  - Include archive_path: ".workflow/archives/WFS-session-name" (future location)
 
-5. **Update manifest**: Initialize .workflow/.archives/manifest.json if needed, append entry
+5. **Extract feature metadata** (for Phase 4):
+  - Parse IMPL_PLAN.md for title (first # heading)
+  - Extract description (first paragraph, max 200 chars)
+  - Generate feature tags (3-5 keywords from content)
 
-6. **Remove active marker**
+6. **Return result**: Complete metadata package for atomic commit
+   {
+     "status": "success",
+     "session_id": "WFS-session-name",
+     "archive_entry": {
+       "session_id": "...",
+       "description": "...",
+       "archived_at": "...",
+       "archive_path": ".workflow/archives/WFS-session-name",
+       "metrics": {...},
+       "tags": [...],
+       "lessons": {...}
+     },
+     "feature_metadata": {
+       "title": "...",
+       "description": "...",
+       "tags": [...]
+     }
+   }
 
-7. **Return result**: {"status": "success", "session_id": "...", "archived_at": "...", "metrics": {...}, "lessons_summary": {...}}
+## Important Constraints
+- DO NOT move or delete any files
+- DO NOT update manifest.json yet
+- Session remains in .workflow/sessions/ during analysis
+- Return complete metadata package for orchestrator to commit atomically
 
 ## Error Handling
 - On failure: return {"status": "error", "task": "...", "message": "..."}
-- Do NOT remove marker if failed
+- Do NOT modify any files on error
 `
 )
 ```
 
 **Expected Output**:
-- Agent returns JSON result confirming successful archival
-- Display completion summary to user based on agent response
+- Agent returns complete metadata package
+- Session remains in `.workflow/sessions/` with `.archiving` marker
+- No files moved or manifests updated yet
+
+### Phase 3: Atomic Commit (Transactional File Operations)
+
+**Purpose**: Atomically commit all changes. Only execute if Phase 2 succeeds.
+
+#### Step 3.1: Create Archive Directory
+```bash
+bash(mkdir -p .workflow/archives/)
+```
+
+#### Step 3.2: Move Session to Archive
+```bash
+bash(mv .workflow/sessions/WFS-session-name .workflow/archives/WFS-session-name)
+```
+**Result**: Session now at `.workflow/archives/WFS-session-name/`
+
+#### Step 3.3: Update Manifest
+```bash
+# Read current manifest (or create empty array if not exists)
+bash(test -f .workflow/archives/manifest.json && cat .workflow/archives/manifest.json || echo "[]")
+```
+
+**JSON Update Logic**:
+```javascript
+// Read agent result from Phase 2
+const agentResult = JSON.parse(agentOutput);
+const archiveEntry = agentResult.archive_entry;
+
+// Read existing manifest
+let manifest = [];
+try {
+  const manifestContent = Read('.workflow/archives/manifest.json');
+  manifest = JSON.parse(manifestContent);
+} catch {
+  manifest = []; // Initialize if not exists
+}
+
+// Append new entry
+manifest.push(archiveEntry);
+
+// Write back
+Write('.workflow/archives/manifest.json', JSON.stringify(manifest, null, 2));
+```
+
+#### Step 3.4: Remove Archiving Marker
+```bash
+bash(rm .workflow/archives/WFS-session-name/.archiving)
+```
+**Result**: Clean archived session without temporary markers
+
+**Output Confirmation**:
+```
+✓ Session "${sessionId}" archived successfully
+Location: .workflow/archives/WFS-session-name/
+Lessons: ${archiveEntry.lessons.successes.length} successes, ${archiveEntry.lessons.challenges.length} challenges
+Manifest: Updated with ${manifest.length} total sessions
+```
+
+### Phase 4: Update Project Feature Registry
+
+**Purpose**: Record completed session as a project feature in `.workflow/project.json`.
+
+**Execution**: Uses feature metadata from Phase 2 agent result to update project registry.
+
+#### Step 4.1: Check Project State Exists
+```bash
+bash(test -f .workflow/project.json && echo "EXISTS" || echo "SKIP")
+```
+
+**If SKIP**: Output warning and skip Phase 4
+```
+WARNING: No project.json found. Run /workflow:session:start to initialize.
+```
+
+#### Step 4.2: Extract Feature Information from Agent Result
+
+**Data Processing** (Uses Phase 2 agent output):
+```javascript
+// Extract feature metadata from agent result
+const agentResult = JSON.parse(agentOutput);
+const featureMeta = agentResult.feature_metadata;
+
+// Data already prepared by agent:
+const title = featureMeta.title;
+const description = featureMeta.description;
+const tags = featureMeta.tags;
+
+// Create feature ID (lowercase slug)
+const featureId = title.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 50);
+```
+
+#### Step 4.3: Update project.json
+
+```bash
+# Read current project state
+bash(cat .workflow/project.json)
+```
+
+**JSON Update Logic**:
+```javascript
+// Read existing project.json (created by /workflow:init)
+// Note: overview field is managed by /workflow:init, not modified here
+const projectMeta = JSON.parse(Read('.workflow/project.json'));
+const currentTimestamp = new Date().toISOString();
+const currentDate = currentTimestamp.split('T')[0]; // YYYY-MM-DD
+
+// Extract tags from IMPL_PLAN.md (simple keyword extraction)
+const tags = extractTags(planContent); // e.g., ["auth", "security"]
+
+// Build feature object with complete metadata
+const newFeature = {
+  id: featureId,
+  title: title,
+  description: description,
+  status: "completed",
+  tags: tags,
+  timeline: {
+    created_at: currentTimestamp,
+    implemented_at: currentDate,
+    updated_at: currentTimestamp
+  },
+  traceability: {
+    session_id: sessionId,
+    archive_path: archivePath, // e.g., ".workflow/archives/WFS-auth-system"
+    commit_hash: getLatestCommitHash() || "" // Optional: git rev-parse HEAD
+  },
+  docs: [], // Placeholder for future doc links
+  relations: [] // Placeholder for feature dependencies
+};
+
+// Add new feature to array
+projectMeta.features.push(newFeature);
+
+// Update statistics
+projectMeta.statistics.total_features = projectMeta.features.length;
+projectMeta.statistics.total_sessions += 1;
+projectMeta.statistics.last_updated = currentTimestamp;
+
+// Write back
+Write('.workflow/project.json', JSON.stringify(projectMeta, null, 2));
+```
+
+**Helper Functions**:
+```javascript
+// Extract tags from IMPL_PLAN.md content
+function extractTags(planContent) {
+  const tags = [];
+
+  // Look for common keywords
+  const keywords = {
+    'auth': /authentication|login|oauth|jwt/i,
+    'security': /security|encrypt|hash|token/i,
+    'api': /api|endpoint|rest|graphql/i,
+    'ui': /component|page|interface|frontend/i,
+    'database': /database|schema|migration|sql/i,
+    'test': /test|testing|spec|coverage/i
+  };
+
+  for (const [tag, pattern] of Object.entries(keywords)) {
+    if (pattern.test(planContent)) {
+      tags.push(tag);
+    }
+  }
+
+  return tags.slice(0, 5); // Max 5 tags
+}
+
+// Get latest git commit hash (optional)
+function getLatestCommitHash() {
+  try {
+    const result = Bash({
+      command: "git rev-parse --short HEAD 2>/dev/null",
+      description: "Get latest commit hash"
+    });
+    return result.trim();
+  } catch {
+    return "";
+  }
+}
+```
+
+#### Step 4.4: Output Confirmation
+
+```
+✓ Feature "${title}" added to project registry
+ID: ${featureId}
+Session: ${sessionId}
+Location: .workflow/project.json
+```
+
+**Error Handling**:
+- If project.json malformed: Output error, skip update
+- If feature_metadata missing from agent result: Skip Phase 4
+- If extraction fails: Use minimal defaults
+
+**Phase 4 Total Commands**: 1 bash read + JSON manipulation
+
+## Error Recovery
+
+### If Agent Fails (Phase 2)
+
+**Symptoms**:
+- Agent returns `{"status": "error", ...}`
+- Agent crashes or times out
+- Analysis incomplete
+
+**Recovery Steps**:
+```bash
+# Session still in .workflow/sessions/WFS-session-name
+# Remove archiving marker
+bash(rm .workflow/sessions/WFS-session-name/.archiving)
+```
+
+**User Notification**:
+```
+ERROR: Session archival failed during analysis phase
+Reason: [error message from agent]
+Session remains active in: .workflow/sessions/WFS-session-name
+
+Recovery:
+1. Fix any issues identified in error message
+2. Retry: /workflow:session:complete
+
+Session state: SAFE (no changes committed)
+```
+
+### If Move Fails (Phase 3)
+
+**Symptoms**:
+- `mv` command fails
+- Permission denied
+- Disk full
+
+**Recovery Steps**:
+```bash
+# Archiving marker still present
+# Session still in .workflow/sessions/ (move failed)
+# No manifest updated yet
+```
+
+**User Notification**:
+```
+ERROR: Session archival failed during move operation
+Reason: [mv error message]
+Session remains in: .workflow/sessions/WFS-session-name
+
+Recovery:
+1. Fix filesystem issues (permissions, disk space)
+2. Retry: /workflow:session:complete
+   - System will detect .archiving marker
+   - Will resume from Phase 2 (agent analysis)
+
+Session state: SAFE (analysis complete, ready to retry move)
+```
+
+### If Manifest Update Fails (Phase 3)
+
+**Symptoms**:
+- JSON parsing error
+- Write permission denied
+- Session moved but manifest not updated
+
+**Recovery Steps**:
+```bash
+# Session moved to .workflow/archives/WFS-session-name
+# Manifest NOT updated
+# Archiving marker still present in archived location
+```
+
+**User Notification**:
+```
+ERROR: Session archived but manifest update failed
+Reason: [error message]
+Session location: .workflow/archives/WFS-session-name
+
+Recovery:
+1. Fix manifest.json issues (syntax, permissions)
+2. Manual manifest update:
+   - Add archive entry from agent output
+   - Remove .archiving marker: rm .workflow/archives/WFS-session-name/.archiving
+
+Session state: PARTIALLY COMPLETE (session archived, manifest needs update)
+```
+
 ## Workflow Execution Strategy
 
-### Two-Phase Approach (Optimized)
+### Transactional Four-Phase Approach
 
-**Phase 1: Minimal Manual Setup** (2 simple operations)
+**Phase 1: Pre-Archival Preparation** (Marker creation)
 - Find active session and extract name
-- Move session to archive location
+- Check for existing `.archiving` marker (resume detection)
-- **No data extraction** - agent handles all data processing
+- Create `.archiving` marker if new
-- **No counting** - agent does this from archive location
+- **No data processing** - just state tracking
-- **Total**: 2 bash commands (find + move)
+- **Total**: 2-3 bash commands (find + marker check/create)
 
-**Phase 2: Agent-Driven Completion** (1 agent invocation)
+**Phase 2: Agent Analysis** (Read-only data processing)
-- Extract all session data from archived location
+- Extract all session data from active location
 - Count tasks and summaries
 - Generate lessons learned analysis
-- Build complete archive metadata
+- Extract feature metadata from IMPL_PLAN.md
-- Update manifest
+- Build complete archive + feature metadata package
-- Remove active marker
+- **No file modifications** - pure analysis
-- Return success/error result
+- **Total**: 1 agent invocation
 
-## Quick Commands
+**Phase 3: Atomic Commit** (Transactional file operations)
+- Create archive directory
+- Move session to archive location
+- Update manifest.json with archive entry
+- Remove `.archiving` marker
+- **All-or-nothing**: Either all succeed or session remains in safe state
+- **Total**: 4 bash commands + JSON manipulation
 
-```bash
+**Phase 4: Project Registry Update** (Optional feature tracking)
-# Phase 1: Find and move
+- Check project.json exists
-bash(find .workflow/ -name ".active-*" -type f | head -1)
+- Use feature metadata from Phase 2 agent result
-bash(basename .workflow/.active-WFS-session-name | sed 's/^\.active-//')
+- Build feature object with complete traceability
-bash(mkdir -p .workflow/.archives/)
+- Update project statistics
-bash(mv .workflow/WFS-session-name .workflow/.archives/WFS-session-name)
+- **Independent**: Can fail without affecting archival
+- **Total**: 1 bash read + JSON manipulation
 
-# Phase 2: Agent completes archival
+### Transactional Guarantees
-Task(subagent_type="universal-executor", description="Complete session archival", prompt=`...`)
-```
 
-## Archive Query Commands
+**State Consistency**:
+- Session NEVER in inconsistent state
+- `.archiving` marker enables safe resume
+- Agent failure leaves session in recoverable state
+- Move/manifest operations grouped in Phase 3
 
-After archival, you can query the manifest:
+**Failure Isolation**:
+- Phase 1 failure: No changes made
+- Phase 2 failure: Session still active, can retry
+- Phase 3 failure: Clear error state, manual recovery documented
+- Phase 4 failure: Does not affect archival success
 
-```bash
+**Resume Capability**:
-# List all archived sessions
+- Detect interrupted archival via `.archiving` marker
-jq '.archives[].session_id' .workflow/.archives/manifest.json
+- Resume from Phase 2 (skip marker creation)
+- Idempotent operations (safe to retry)
 
-# Find sessions by keyword
+### Benefits Over Previous Design
-jq '.archives[] | select(.description | test("auth"; "i"))' .workflow/.archives/manifest.json
 
-# Get specific session details
+**Old Design Weakness**:
-jq '.archives[] | select(.session_id == "WFS-user-auth")' .workflow/.archives/manifest.json
+- Move first → agent second
+- Agent failure → session moved but metadata incomplete
-# List all watch patterns across sessions
+- Inconsistent state requires manual cleanup
-jq '.archives[].lessons.watch_patterns[]' .workflow/.archives/manifest.json
-```
 
+**New Design Strengths**:
+- Agent first → move second
+- Agent failure → session still active, safe to retry
+- Transactional commit → all-or-nothing file operations
+- Marker-based state → resume capability
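The Step 3.3 manifest append described in the hunk above (shown there as JavaScript pseudocode) can be sketched in shell with `jq`. The session name and entry fields are illustrative, and writing through a temp file stands in for the all-or-nothing guarantee:

```shell
# Sketch of the Phase 3 manifest append; paths and entry values are hypothetical
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p .workflow/archives

# Initialize the manifest as an empty array if it does not exist yet
test -f .workflow/archives/manifest.json || echo "[]" > .workflow/archives/manifest.json

# Append the archive entry via a temp file so a failed write never corrupts the manifest
entry='{"session_id":"WFS-demo","archive_path":".workflow/archives/WFS-demo"}'
jq --argjson e "$entry" '. + [$e]' .workflow/archives/manifest.json > manifest.tmp
mv manifest.tmp .workflow/archives/manifest.json

count=$(jq 'length' .workflow/archives/manifest.json)
```

The `mv` at the end is atomic on the same filesystem, which is what keeps the manifest consistent if the `jq` step fails partway.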
@@ -19,35 +19,35 @@ Display all workflow sessions with their current status, progress, and metadata.
 
 ### Step 1: Find All Sessions
 ```bash
-ls .workflow/WFS-* 2>/dev/null
+ls .workflow/sessions/WFS-* 2>/dev/null
 ```
 
 ### Step 2: Check Active Session
 ```bash
-ls .workflow/.active-* 2>/dev/null | head -1
+find .workflow/sessions/ -name "WFS-*" -type d 2>/dev/null | head -1
 ```
 
 ### Step 3: Read Session Metadata
 ```bash
-jq -r '.session_id, .status, .project' .workflow/WFS-session/workflow-session.json
+jq -r '.session_id, .status, .project' .workflow/sessions/WFS-session/workflow-session.json
 ```
 
 ### Step 4: Count Task Progress
 ```bash
-find .workflow/WFS-session/.task/ -name "*.json" -type f 2>/dev/null | wc -l
+find .workflow/sessions/WFS-session/.task/ -name "*.json" -type f 2>/dev/null | wc -l
-find .workflow/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null | wc -l
+find .workflow/sessions/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null | wc -l
 ```
 
 ### Step 5: Get Creation Time
 ```bash
-jq -r '.created_at // "unknown"' .workflow/WFS-session/workflow-session.json
+jq -r '.created_at // "unknown"' .workflow/sessions/WFS-session/workflow-session.json
 ```
 
 ## Simple Bash Commands
 
 ### Basic Operations
-- **List sessions**: `find .workflow/ -maxdepth 1 -type d -name "WFS-*"`
+- **List sessions**: `find .workflow/sessions/ -name "WFS-*" -type d`
-- **Find active**: `find .workflow/ -name ".active-*" -type f`
+- **Find active**: `find .workflow/sessions/ -name "WFS-*" -type d`
 - **Read session data**: `jq -r '.session_id, .status' session.json`
 - **Count tasks**: `find .task/ -name "*.json" -type f | wc -l`
 - **Count completed**: `find .summaries/ -name "*.md" -type f 2>/dev/null | wc -l`
@@ -89,11 +89,8 @@ Total: 3 sessions (1 active, 1 paused, 1 completed)
|
|||||||
### Quick Commands
|
### Quick Commands
|
||||||
```bash
|
```bash
|
||||||
# Count all sessions
|
# Count all sessions
|
||||||
ls .workflow/WFS-* | wc -l
|
ls .workflow/sessions/WFS-* | wc -l
|
||||||
|
|
||||||
# Show only active
|
|
||||||
ls .workflow/.active-* | basename | sed 's/^\.active-//'
|
|
||||||
|
|
||||||
# Show recent sessions
|
# Show recent sessions
|
||||||
ls -t .workflow/WFS-*/workflow-session.json | head -3
|
ls -t .workflow/sessions/WFS-*/workflow-session.json | head -3
|
||||||
```
|
```
|
||||||
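Editorial aside: the task/summary counting pattern above can be exercised end-to-end. This is a minimal sketch against a throwaway tree under `mktemp -d`; all directory and file names are hypothetical, not part of the diff:

```shell
# Build a fake session tree mirroring the new sessions/ layout
tmp=$(mktemp -d)
mkdir -p "$tmp/.workflow/sessions/WFS-demo/.task" "$tmp/.workflow/sessions/WFS-demo/.summaries"
touch "$tmp/.workflow/sessions/WFS-demo/.task/impl-1.json" \
      "$tmp/.workflow/sessions/WFS-demo/.task/impl-2.json"
touch "$tmp/.workflow/sessions/WFS-demo/.summaries/impl-1-summary.md"

# Same counting commands as the document, with whitespace trimmed for portability
total=$(find "$tmp/.workflow/sessions/WFS-demo/.task/" -name "*.json" -type f | wc -l | tr -d ' ')
done_count=$(find "$tmp/.workflow/sessions/WFS-demo/.summaries/" -name "*.md" -type f | wc -l | tr -d ' ')
echo "Progress: $done_count/$total tasks"  # → Progress: 1/2 tasks
```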
@@ -17,45 +17,39 @@ Resume the most recently paused workflow session, restoring all context and stat
 
 ### Step 1: Find Paused Sessions
 ```bash
-ls .workflow/WFS-* 2>/dev/null
+ls .workflow/sessions/WFS-* 2>/dev/null
 ```
 
 ### Step 2: Check Session Status
 ```bash
-jq -r '.status' .workflow/WFS-session/workflow-session.json
+jq -r '.status' .workflow/sessions/WFS-session/workflow-session.json
 ```
 
 ### Step 3: Find Most Recent Paused
 ```bash
-ls -t .workflow/WFS-*/workflow-session.json | head -1
+ls -t .workflow/sessions/WFS-*/workflow-session.json | head -1
 ```
 
 ### Step 4: Update Session Status
 ```bash
-jq '.status = "active"' .workflow/WFS-session/workflow-session.json > temp.json
+jq '.status = "active"' .workflow/sessions/WFS-session/workflow-session.json > temp.json
-mv temp.json .workflow/WFS-session/workflow-session.json
+mv temp.json .workflow/sessions/WFS-session/workflow-session.json
 ```
 
 ### Step 5: Add Resume Timestamp
 ```bash
-jq '.resumed_at = "'$(date -u +%Y-%m-%dT%H:%M:%SZ)'"' .workflow/WFS-session/workflow-session.json > temp.json
+jq '.resumed_at = "'$(date -u +%Y-%m-%dT%H:%M:%SZ)'"' .workflow/sessions/WFS-session/workflow-session.json > temp.json
-mv temp.json .workflow/WFS-session/workflow-session.json
+mv temp.json .workflow/sessions/WFS-session/workflow-session.json
-```
-
-### Step 6: Create Active Marker
-```bash
-touch .workflow/.active-WFS-session-name
 ```
 
 ## Simple Bash Commands
 
 ### Basic Operations
-- **List sessions**: `ls .workflow/WFS-*`
+- **List sessions**: `ls .workflow/sessions/WFS-*`
 - **Check status**: `jq -r '.status' session.json`
-- **Find recent**: `ls -t .workflow/*/workflow-session.json | head -1`
+- **Find recent**: `ls -t .workflow/sessions/*/workflow-session.json | head -1`
 - **Update status**: `jq '.status = "active"' session.json > temp.json`
 - **Add timestamp**: `jq '.resumed_at = "'$(date -u +%Y-%m-%dT%H:%M:%SZ)'"'`
-- **Create marker**: `touch .workflow/.active-session`
 
 ### Resume Result
 ```
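Editorial aside: the write-to-temp-then-`mv` update used in Steps 4 and 5 keeps the metadata file consistent even if the update is interrupted mid-write. A minimal sketch of the same rename pattern, with `sed` standing in for `jq` (which may not be installed) and a hypothetical file:

```shell
# Create a stand-in session metadata file
f=$(mktemp)
echo '{"session_id":"WFS-demo","status":"paused"}' > "$f"

# Rewrite into a temp file, then atomically replace the original
sed 's/"status":"paused"/"status":"active"/' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
cat "$f"  # → {"session_id":"WFS-demo","status":"active"}
```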
@@ -13,6 +13,35 @@ examples:
 ## Overview
 Manages workflow sessions with three operation modes: discovery (manual), auto (intelligent), and force-new.
 
+**Dual Responsibility**:
+1. **Project-level initialization** (first-time only): Creates `.workflow/project.json` for feature registry
+2. **Session-level initialization** (always): Creates session directory structure
+
+## Step 0: Initialize Project State (First-time Only)
+
+**Executed before all modes** - Ensures project-level state file exists by calling `/workflow:init`.
+
+### Check and Initialize
+```bash
+# Check if project state exists
+bash(test -f .workflow/project.json && echo "EXISTS" || echo "NOT_FOUND")
+```
+
+**If NOT_FOUND**, delegate to `/workflow:init`:
+```javascript
+// Call workflow:init for intelligent project analysis
+SlashCommand({command: "/workflow:init"});
+
+// Wait for init completion
+// project.json will be created with comprehensive project overview
+```
+
+**Output**:
+- If EXISTS: `PROJECT_STATE: initialized`
+- If NOT_FOUND: Calls `/workflow:init` → creates `.workflow/project.json` with full project analysis
+
+**Note**: `/workflow:init` uses cli-explore-agent to build comprehensive project understanding (technology stack, architecture, key components). This step runs once per project. Subsequent executions skip initialization.
+
 ## Mode 1: Discovery Mode (Default)
 
 ### Usage
@@ -20,19 +49,14 @@ Manages workflow sessions with three operation modes: discovery (manual), auto (
 /workflow:session:start
 ```
 
-### Step 1: Check Active Sessions
+### Step 1: List Active Sessions
 ```bash
-bash(ls .workflow/.active-* 2>/dev/null)
+bash(ls -1 .workflow/sessions/ 2>/dev/null | head -5)
 ```
 
-### Step 2: List All Sessions
+### Step 2: Display Session Metadata
 ```bash
-bash(ls -1 .workflow/WFS-* 2>/dev/null | head -5)
+bash(cat .workflow/sessions/WFS-promptmaster-platform/workflow-session.json)
-```
-
-### Step 3: Display Session Metadata
-```bash
-bash(cat .workflow/WFS-promptmaster-platform/workflow-session.json)
 ```
 
 ### Step 4: User Decision

@@ -49,7 +73,7 @@ Present session information and wait for user to select or create session.
 
 ### Step 1: Check Active Sessions Count
 ```bash
-bash(ls .workflow/.active-* 2>/dev/null | wc -l)
+bash(find .workflow/sessions/ -name "WFS-*" -type d 2>/dev/null | wc -l)
 ```
 
 ### Step 2a: No Active Sessions → Create New

@@ -58,15 +82,12 @@ bash(ls .workflow/.active-* 2>/dev/null | wc -l)
 bash(echo "implement OAuth2 auth" | sed 's/[^a-zA-Z0-9]/-/g' | tr '[:upper:]' '[:lower:]' | cut -c1-50)
 
 # Create directory structure
-bash(mkdir -p .workflow/WFS-implement-oauth2-auth/.process)
+bash(mkdir -p .workflow/sessions/WFS-implement-oauth2-auth/.process)
-bash(mkdir -p .workflow/WFS-implement-oauth2-auth/.task)
+bash(mkdir -p .workflow/sessions/WFS-implement-oauth2-auth/.task)
-bash(mkdir -p .workflow/WFS-implement-oauth2-auth/.summaries)
+bash(mkdir -p .workflow/sessions/WFS-implement-oauth2-auth/.summaries)
 
 # Create metadata
-bash(echo '{"session_id":"WFS-implement-oauth2-auth","project":"implement OAuth2 auth","status":"planning"}' > .workflow/WFS-implement-oauth2-auth/workflow-session.json)
+bash(echo '{"session_id":"WFS-implement-oauth2-auth","project":"implement OAuth2 auth","status":"planning"}' > .workflow/sessions/WFS-implement-oauth2-auth/workflow-session.json)
 
-# Mark as active
-bash(touch .workflow/.active-WFS-implement-oauth2-auth)
 ```
 
 **Output**: `SESSION_ID: WFS-implement-oauth2-auth`

@@ -74,10 +95,10 @@ bash(touch .workflow/.active-WFS-implement-oauth2-auth)
 ### Step 2b: Single Active Session → Check Relevance
 ```bash
 # Extract session ID
-bash(ls .workflow/.active-* 2>/dev/null | head -1 | xargs basename | sed 's/^\.active-//')
+bash(find .workflow/sessions/ -name "WFS-*" -type d 2>/dev/null | head -1 | xargs basename)
 
 # Read project name from metadata
-bash(cat .workflow/WFS-promptmaster-platform/workflow-session.json | grep -o '"project":"[^"]*"' | cut -d'"' -f4)
+bash(cat .workflow/sessions/WFS-promptmaster-platform/workflow-session.json | grep -o '"project":"[^"]*"' | cut -d'"' -f4)
 
 # Check keyword match (manual comparison)
 # If task contains project keywords → Reuse session
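Editorial aside: the `grep -o '"project":"[^"]*"' | cut -d'"' -f4` extraction above can be checked in isolation. A minimal sketch with a hypothetical metadata file (field 4 of the quote-delimited match is the project value):

```shell
# Stand-in session metadata under a temp dir
tmp=$(mktemp -d)
mkdir -p "$tmp/.workflow/sessions/WFS-demo"
echo '{"session_id":"WFS-demo","project":"promptmaster platform","status":"planning"}' \
  > "$tmp/.workflow/sessions/WFS-demo/workflow-session.json"

# Extract the project name exactly as the command above does
project=$(grep -o '"project":"[^"]*"' "$tmp/.workflow/sessions/WFS-demo/workflow-session.json" | cut -d'"' -f4)
echo "$project"  # → promptmaster platform
```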
@@ -90,7 +111,7 @@ bash(cat .workflow/WFS-promptmaster-platform/workflow-session.json | grep -o '"p
 ### Step 2c: Multiple Active Sessions → Use First
 ```bash
 # Get first active session
-bash(ls .workflow/.active-* 2>/dev/null | head -1 | xargs basename | sed 's/^\.active-//')
+bash(find .workflow/sessions/ -name "WFS-*" -type d 2>/dev/null | head -1 | xargs basename)
 
 # Output warning and session ID
 # WARNING: Multiple active sessions detected
@@ -110,25 +131,19 @@ bash(ls .workflow/.active-* 2>/dev/null | head -1 | xargs basename | sed 's/^\.a
 bash(echo "fix login bug" | sed 's/[^a-zA-Z0-9]/-/g' | tr '[:upper:]' '[:lower:]' | cut -c1-50)
 
 # Check if exists, add counter if needed
-bash(ls .workflow/WFS-fix-login-bug 2>/dev/null && echo "WFS-fix-login-bug-2" || echo "WFS-fix-login-bug")
+bash(ls .workflow/sessions/WFS-fix-login-bug 2>/dev/null && echo "WFS-fix-login-bug-2" || echo "WFS-fix-login-bug")
 ```
 
 ### Step 2: Create Session Structure
 ```bash
-bash(mkdir -p .workflow/WFS-fix-login-bug/.process)
+bash(mkdir -p .workflow/sessions/WFS-fix-login-bug/.process)
-bash(mkdir -p .workflow/WFS-fix-login-bug/.task)
+bash(mkdir -p .workflow/sessions/WFS-fix-login-bug/.task)
-bash(mkdir -p .workflow/WFS-fix-login-bug/.summaries)
+bash(mkdir -p .workflow/sessions/WFS-fix-login-bug/.summaries)
 ```
 
 ### Step 3: Create Metadata
 ```bash
-bash(echo '{"session_id":"WFS-fix-login-bug","project":"fix login bug","status":"planning"}' > .workflow/WFS-fix-login-bug/workflow-session.json)
+bash(echo '{"session_id":"WFS-fix-login-bug","project":"fix login bug","status":"planning"}' > .workflow/sessions/WFS-fix-login-bug/workflow-session.json)
-```
-
-### Step 4: Mark Active and Clean Old Markers
-```bash
-bash(rm .workflow/.active-* 2>/dev/null)
-bash(touch .workflow/.active-WFS-fix-login-bug)
 ```
 
 **Output**: `SESSION_ID: WFS-fix-login-bug`
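Editorial aside: the "check if exists, add counter if needed" step above is a simple directory probe. A minimal sketch with a hypothetical pre-existing session directory:

```shell
# Simulate a collision: the base session directory already exists
tmp=$(mktemp -d)
mkdir -p "$tmp/.workflow/sessions/WFS-fix-login-bug"

base="WFS-fix-login-bug"
# Append a counter only when the base name is taken
if [ -d "$tmp/.workflow/sessions/$base" ]; then id="$base-2"; else id="$base"; fi
echo "$id"  # → WFS-fix-login-bug-2
```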
@@ -173,41 +188,6 @@ SlashCommand(command="/workflow:session:start")
 SlashCommand(command="/workflow:session:start --new \"experimental feature\"")
 ```
 
-## Simple Bash Commands
-
-### Basic Operations
-```bash
-# Check active sessions
-bash(ls .workflow/.active-*)
-
-# List all sessions
-bash(ls .workflow/WFS-*)
-
-# Read session metadata
-bash(cat .workflow/WFS-[session-id]/workflow-session.json)
-
-# Create session directories
-bash(mkdir -p .workflow/WFS-[session-id]/.process)
-bash(mkdir -p .workflow/WFS-[session-id]/.task)
-bash(mkdir -p .workflow/WFS-[session-id]/.summaries)
-
-# Mark session as active
-bash(touch .workflow/.active-WFS-[session-id])
-
-# Clean active markers
-bash(rm .workflow/.active-*)
-```
-
-### Generate Session Slug
-```bash
-bash(echo "Task Description" | sed 's/[^a-zA-Z0-9]/-/g' | tr '[:upper:]' '[:lower:]' | cut -c1-50)
-```
-
-### Create Metadata JSON
-```bash
-bash(echo '{"session_id":"WFS-test","project":"test project","status":"planning"}' > .workflow/WFS-test/workflow-session.json)
-```
-
 ## Session ID Format
 - Pattern: `WFS-[lowercase-slug]`
 - Characters: `a-z`, `0-9`, `-` only
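Editorial aside: the `sed | tr | cut` slug pipeline used throughout these commands can be verified directly. A minimal sketch (the task description is hypothetical):

```shell
# Non-alphanumerics become '-', everything is lowercased, and the slug is capped at 50 chars
slug=$(echo "Implement OAuth2 Auth" | sed 's/[^a-zA-Z0-9]/-/g' | tr '[:upper:]' '[:lower:]' | cut -c1-50)
echo "WFS-$slug"  # → WFS-implement-oauth2-auth
```

Note that trailing punctuation in the description (e.g. a `!`) would survive as a trailing `-`, which still satisfies the `WFS-[lowercase-slug]` character set above.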
@@ -1,47 +1,183 @@
 ---
 name: workflow:status
-description: Generate on-demand task status views from JSON task data with optional task-id filtering for detailed view
+description: Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view
-argument-hint: "[optional: task-id]"
+argument-hint: "[optional: --project|task-id|--validate]"
 ---
 
 # Workflow Status Command (/workflow:status)
 
 ## Overview
-Generates on-demand views from JSON task data. No synchronization needed - all views are calculated from the current state of JSON files.
+Generates on-demand views from project and session data. Supports two modes:
+1. **Project Overview** (`--project`): Shows completed features and project statistics
+2. **Workflow Tasks** (default): Shows current session task progress
+
+No synchronization needed - all views are calculated from current JSON state.
 
 ## Usage
 ```bash
-/workflow:status              # Show current workflow overview
+/workflow:status              # Show current workflow session overview
+/workflow:status --project    # Show project-level feature registry
 /workflow:status impl-1       # Show specific task details
 /workflow:status --validate   # Validate workflow integrity
 ```
 
 ## Implementation Flow
 
+### Mode Selection
+
+**Check for --project flag**:
+- If `--project` flag present → Execute **Project Overview Mode**
+- Otherwise → Execute **Workflow Session Mode** (default)
+
+## Project Overview Mode
+
+### Step 1: Check Project State
+```bash
+bash(test -f .workflow/project.json && echo "EXISTS" || echo "NOT_FOUND")
+```
+
+**If NOT_FOUND**:
+```
+No project state found.
+Run /workflow:session:start to initialize project.
+```
+
+### Step 2: Read Project Data
+```bash
+bash(cat .workflow/project.json)
+```
+
+### Step 3: Parse and Display
+
+**Data Processing**:
+```javascript
+const projectData = JSON.parse(Read('.workflow/project.json'));
+const features = projectData.features || [];
+const stats = projectData.statistics || {};
+const overview = projectData.overview || null;
+
+// Sort features by implementation date (newest first)
+const sortedFeatures = features.sort((a, b) =>
+  new Date(b.implemented_at) - new Date(a.implemented_at)
+);
+```
+
+**Output Format** (with extended overview):
+```
+## Project: ${projectData.project_name}
+Initialized: ${projectData.initialized_at}
+
+${overview ? `
+### Overview
+${overview.description}
+
+**Technology Stack**:
+${overview.technology_stack.languages.map(l => `- ${l.name}${l.primary ? ' (primary)' : ''}: ${l.file_count} files`).join('\n')}
+Frameworks: ${overview.technology_stack.frameworks.join(', ')}
+
+**Architecture**:
+Style: ${overview.architecture.style}
+Patterns: ${overview.architecture.patterns.join(', ')}
+
+**Key Components** (${overview.key_components.length}):
+${overview.key_components.map(c => `- ${c.name} (${c.path})\n  ${c.description}`).join('\n')}
+
+**Metrics**:
+- Files: ${overview.metrics.total_files}
+- Lines of Code: ${overview.metrics.lines_of_code}
+- Complexity: ${overview.metrics.complexity}
+
+---
+` : ''}
+
+### Completed Features (${stats.total_features})
+
+${sortedFeatures.map(f => `
+- ${f.title} (${f.timeline?.implemented_at || f.implemented_at})
+  ${f.description}
+  Tags: ${f.tags?.join(', ') || 'none'}
+  Session: ${f.traceability?.session_id || f.session_id}
+  Archive: ${f.traceability?.archive_path || 'unknown'}
+  ${f.traceability?.commit_hash ? `Commit: ${f.traceability.commit_hash}` : ''}
+`).join('\n')}
+
+### Project Statistics
+- Total Features: ${stats.total_features}
+- Total Sessions: ${stats.total_sessions}
+- Last Updated: ${stats.last_updated}
+
+### Quick Access
+- View session details: /workflow:status
+- Archive query: jq '.archives[] | select(.session_id == "SESSION_ID")' .workflow/archives/manifest.json
+- Documentation: .workflow/docs/${projectData.project_name}/
+
+### Query Commands
+# Find by tag
+cat .workflow/project.json | jq '.features[] | select(.tags[] == "auth")'
+
+# View archive
+cat ${feature.traceability.archive_path}/IMPL_PLAN.md
+
+# List all tags
+cat .workflow/project.json | jq -r '.features[].tags[]' | sort -u
+```
+
+**Empty State**:
+```
+## Project: ${projectData.project_name}
+Initialized: ${projectData.initialized_at}
+
+No features completed yet.
+
+Complete your first workflow session to add features:
+1. /workflow:plan "feature description"
+2. /workflow:execute
+3. /workflow:session:complete
+```
+
+### Step 4: Show Recent Sessions (Optional)
+
+```bash
+# List 5 most recent archived sessions
+bash(ls -1t .workflow/archives/WFS-* 2>/dev/null | head -5 | xargs -I {} basename {})
+```
+
+**Output**:
+```
+### Recent Sessions
+- WFS-auth-system (archived)
+- WFS-payment-flow (archived)
+- WFS-user-dashboard (archived)
+
+Use /workflow:session:complete to archive current session.
+```
+
+## Workflow Session Mode (Default)
+
 ### Step 1: Find Active Session
 ```bash
-find .workflow/ -name ".active-*" -type f 2>/dev/null | head -1
+find .workflow/sessions/ -name "WFS-*" -type d 2>/dev/null | head -1
 ```
 
 ### Step 2: Load Session Data
 ```bash
-cat .workflow/WFS-session/workflow-session.json
+cat .workflow/sessions/WFS-session/workflow-session.json
 ```
 
 ### Step 3: Scan Task Files
 ```bash
-find .workflow/WFS-session/.task/ -name "*.json" -type f 2>/dev/null
+find .workflow/sessions/WFS-session/.task/ -name "*.json" -type f 2>/dev/null
 ```
 
 ### Step 4: Generate Task Status
 ```bash
-cat .workflow/WFS-session/.task/impl-1.json | jq -r '.status'
+cat .workflow/sessions/WFS-session/.task/impl-1.json | jq -r '.status'
 ```
 
 ### Step 5: Count Task Progress
 ```bash
-find .workflow/WFS-session/.task/ -name "*.json" -type f | wc -l
+find .workflow/sessions/WFS-session/.task/ -name "*.json" -type f | wc -l
-find .workflow/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null | wc -l
+find .workflow/sessions/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null | wc -l
 ```
 
 ### Step 6: Display Overview
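Editorial aside: the directory-based session discovery introduced by this commit (`find .workflow/sessions/ -name "WFS-*" -type d | head -1`) can be sketched against a throwaway tree; a `sort` is added here only to make the result deterministic, since `find` order is not guaranteed. All session names are hypothetical:

```shell
# Two fake session directories under the new sessions/ layout
tmp=$(mktemp -d)
mkdir -p "$tmp/.workflow/sessions/WFS-auth" "$tmp/.workflow/sessions/WFS-billing"

# Location-based discovery: the session is whichever WFS-* directory exists
active=$(find "$tmp/.workflow/sessions/" -name "WFS-*" -type d | sort | head -1 | xargs basename)
echo "$active"  # → WFS-auth
```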
@@ -56,64 +192,4 @@ find .workflow/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null | wc -l
 
 ## Completed Tasks
 - [COMPLETED] impl-0: Setup completed
-```
-
-## Simple Bash Commands
-
-### Basic Operations
-- **Find active session**: `find .workflow/ -name ".active-*" -type f`
-- **Read session info**: `cat .workflow/session/workflow-session.json`
-- **List tasks**: `find .workflow/session/.task/ -name "*.json" -type f`
-- **Check task status**: `cat task.json | jq -r '.status'`
-- **Count completed**: `find .summaries/ -name "*.md" -type f | wc -l`
-
-### Task Status Check
-- **pending**: Not started yet
-- **active**: Currently in progress
-- **completed**: Finished with summary
-- **blocked**: Waiting for dependencies
-
-### Validation Commands
-```bash
-# Check session exists
-test -f .workflow/.active-* && echo "Session active"
-
-# Validate task files
-for f in .workflow/session/.task/*.json; do jq empty "$f" && echo "Valid: $f"; done
-
-# Check summaries match
-find .task/ -name "*.json" -type f | wc -l
-find .summaries/ -name "*.md" -type f 2>/dev/null | wc -l
-```
-
-## Simple Output Format
-
-### Default Overview
-```
-Session: WFS-user-auth
-Status: ACTIVE
-Progress: 5/12 tasks
-
-Current: impl-3 (Building API endpoints)
-Next: impl-4 (Adding authentication)
-Completed: impl-1, impl-2
-```
-
-### Task Details
-```
-Task: impl-1
-Title: Build authentication module
-Status: completed
-Agent: code-developer
-Created: 2025-09-15
-Completed: 2025-09-15
-Summary: .summaries/impl-1-summary.md
-```
-
-### Validation Results
-```
-Session file valid
-8 task files found
-3 summaries found
-5 tasks pending completion
 ```
@@ -496,7 +496,7 @@ Supports action-planning-agent for more autonomous TDD planning with:
 
 **Session Structure**:
 ```
-.workflow/WFS-xxx/
+.workflow/sessions/WFS-xxx/
 ├── IMPL_PLAN.md (unified plan with TDD Implementation Tasks section)
 ├── TODO_LIST.md (with internal TDD phase indicators)
 ├── .process/

@@ -28,7 +28,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(gemini:*)
 sessionId = argument
 
 # Else auto-detect active session
-find .workflow/ -name '.active-*' | head -1 | sed 's/.*active-//'
+find .workflow/sessions/ -name "WFS-*" -type d | head -1 | sed 's/.*\///'
 ```
 
 **Extract**: sessionId
@@ -44,18 +44,18 @@ find .workflow/ -name '.active-*' | head -1 | sed 's/.*active-//'
 
 ```bash
 # Load all task JSONs
-find .workflow/{sessionId}/.task/ -name '*.json'
+find .workflow/sessions/{sessionId}/.task/ -name '*.json'
 
 # Extract task IDs
-find .workflow/{sessionId}/.task/ -name '*.json' -exec jq -r '.id' {} \;
+find .workflow/sessions/{sessionId}/.task/ -name '*.json' -exec jq -r '.id' {} \;
 
 # Check dependencies
-find .workflow/{sessionId}/.task/ -name 'IMPL-*.json' -exec jq -r '.context.depends_on[]?' {} \;
+find .workflow/sessions/{sessionId}/.task/ -name 'IMPL-*.json' -exec jq -r '.context.depends_on[]?' {} \;
-find .workflow/{sessionId}/.task/ -name 'REFACTOR-*.json' -exec jq -r '.context.depends_on[]?' {} \;
+find .workflow/sessions/{sessionId}/.task/ -name 'REFACTOR-*.json' -exec jq -r '.context.depends_on[]?' {} \;
 
 # Check meta fields
-find .workflow/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.tdd_phase' {} \;
+find .workflow/sessions/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.tdd_phase' {} \;
-find .workflow/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent' {} \;
+find .workflow/sessions/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent' {} \;
 ```
 
 **Validation**:

@@ -82,9 +82,9 @@ find .workflow/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent' {} \;
 - Compliance score
 
 **Validation**:
-- `.workflow/{sessionId}/.process/test-results.json` exists
+- `.workflow/sessions/{sessionId}/.process/test-results.json` exists
-- `.workflow/{sessionId}/.process/coverage-report.json` exists
+- `.workflow/sessions/{sessionId}/.process/coverage-report.json` exists
-- `.workflow/{sessionId}/.process/tdd-cycle-report.md` exists
+- `.workflow/sessions/{sessionId}/.process/tdd-cycle-report.md` exists
 
 **TodoWrite**: Mark phase 3 completed, phase 4 in_progress
 
@@ -97,7 +97,7 @@ find .workflow/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent' {} \;
 cd project-root && gemini -p "
 PURPOSE: Generate TDD compliance report
 TASK: Analyze TDD workflow execution and generate quality report
-CONTEXT: @{.workflow/{sessionId}/.task/*.json,.workflow/{sessionId}/.summaries/*,.workflow/{sessionId}/.process/tdd-cycle-report.md}
+CONTEXT: @{.workflow/sessions/{sessionId}/.task/*.json,.workflow/sessions/{sessionId}/.summaries/*,.workflow/sessions/{sessionId}/.process/tdd-cycle-report.md}
 EXPECTED:
 - TDD compliance score (0-100)
 - Chain completeness verification

@@ -106,7 +106,7 @@ EXPECTED:
 - Red-Green-Refactor cycle validation
 - Best practices adherence assessment
 RULES: Focus on TDD best practices and workflow adherence. Be specific about violations and improvements.
-" > .workflow/{sessionId}/TDD_COMPLIANCE_REPORT.md
+" > .workflow/sessions/{sessionId}/TDD_COMPLIANCE_REPORT.md
 ```
 
 **Output**: TDD_COMPLIANCE_REPORT.md

@@ -134,7 +134,7 @@ Function Coverage: {percentage}%
 
 ## Compliance Score: {score}/100
 
-Detailed report: .workflow/{sessionId}/TDD_COMPLIANCE_REPORT.md
+Detailed report: .workflow/sessions/{sessionId}/TDD_COMPLIANCE_REPORT.md
 
 Recommendations:
 - Complete missing REFACTOR-3.1 task

@@ -168,7 +168,7 @@ TodoWrite({todos: [
 
 ### Chain Validation Algorithm
 ```
-1. Load all task JSONs from .workflow/{sessionId}/.task/
+1. Load all task JSONs from .workflow/sessions/{sessionId}/.task/
 2. Extract task IDs and group by feature number
 3. For each feature:
    - Check TEST-N.M exists
@@ -202,7 +202,7 @@ Final Score: Max(0, Base Score - Deductions)
 
 ## Output Files
 ```
-.workflow/{session-id}/
+.workflow/sessions/{session-id}/
 ├── TDD_COMPLIANCE_REPORT.md    # Comprehensive compliance report ⭐
 └── .process/
     ├── test-results.json       # From tdd-coverage-analysis

@@ -215,8 +215,8 @@ Final Score: Max(0, Base Score - Deductions)
 ### Session Discovery Errors
 | Error | Cause | Resolution |
 |-------|-------|------------|
-| No active session | No .active-* file | Provide session-id explicitly |
+| No active session | No WFS-* directories | Provide session-id explicitly |
-| Multiple active sessions | Multiple .active-* files | Provide session-id explicitly |
+| Multiple active sessions | Multiple WFS-* directories | Provide session-id explicitly |
 | Session not found | Invalid session-id | Check available sessions |
 
 ### Validation Errors
 
@@ -543,7 +543,7 @@ This package is passed to agents via the Task tool's prompt context.
     "coverage_target": 80
   },
   "session": {
-    "workflow_dir": ".workflow/WFS-test-{session}/",
+    "workflow_dir": ".workflow/sessions/WFS-test-{session}/",
     "iteration_state_file": ".process/iteration-state.json",
     "test_results_file": ".process/test-results.json",
     "fix_history_file": ".process/fix-history.json"
@@ -555,7 +555,7 @@ This package is passed to agents via the Task tool's prompt context.
 
 ### Test-Fix Session Files
 ```
-.workflow/WFS-test-{session}/
+.workflow/sessions/WFS-test-{session}/
 ├── workflow-session.json # Session metadata with workflow_type
 ├── IMPL_PLAN.md # Test plan
 ├── TODO_LIST.md # Progress tracking
@@ -513,7 +513,7 @@ If quality gate fails:
 
 ### Output Files Structure
 
-Created in `.workflow/WFS-test-[session]/`:
+Created in `.workflow/sessions/WFS-test-[session]/`:
 
 ```
 WFS-test-[session]/
@@ -579,7 +579,7 @@ Test-Fix-Gen Workflow Orchestrator (Dual-Mode Support)
 └─ Command ends, control returns to user
 
 Artifacts Created:
-├── .workflow/WFS-test-[session]/
+├── .workflow/sessions/WFS-test-[session]/
 │   ├── workflow-session.json
 │   ├── IMPL_PLAN.md
 │   ├── TODO_LIST.md
@@ -397,7 +397,7 @@ Test-Gen Workflow Orchestrator
 └─ Command ends, control returns to user
 
 Artifacts Created:
-├── .workflow/WFS-test-[session]/
+├── .workflow/sessions/WFS-test-[session]/
 │   ├── workflow-session.json
 │   ├── IMPL_PLAN.md
 │   ├── TODO_LIST.md
@@ -444,7 +444,7 @@ See `/workflow:tools:test-task-generate` for complete task JSON schemas.
 
 ## Output Files
 
-Created in `.workflow/WFS-test-[session]/`:
+Created in `.workflow/sessions/WFS-test-[session]/`:
 - `workflow-session.json` - Session metadata
 - `.process/test-context-package.json` - Coverage analysis
 - `.process/TEST_ANALYSIS_RESULTS.md` - Test requirements
@@ -3,8 +3,8 @@ name: conflict-resolution
 description: Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen
 argument-hint: "--session WFS-session-id --context path/to/context-package.json"
 examples:
-  - /workflow:tools:conflict-resolution --session WFS-auth --context .workflow/WFS-auth/.process/context-package.json
-  - /workflow:tools:conflict-resolution --session WFS-payment --context .workflow/WFS-payment/.process/context-package.json
+  - /workflow:tools:conflict-resolution --session WFS-auth --context .workflow/sessions/WFS-auth/.process/context-package.json
+  - /workflow:tools:conflict-resolution --session WFS-payment --context .workflow/sessions/WFS-payment/.process/context-package.json
 ---
 
 # Conflict Resolution Command
@@ -71,9 +71,10 @@ You are executing as context-search-agent (.claude/agents/context-search-agent.m
 Execute complete context-search-agent workflow for implementation planning:
 
 ### Phase 1: Initialization & Pre-Analysis
-1. **Detection**: Check for existing context-package (early exit if valid)
-2. **Foundation**: Initialize code-index, get project structure, load docs
-3. **Analysis**: Extract keywords, determine scope, classify complexity
+1. **Project State Loading**: Read and parse `.workflow/project.json`. Use its `overview` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components. If file doesn't exist, proceed with fresh analysis.
+2. **Detection**: Check for existing context-package (early exit if valid)
+3. **Foundation**: Initialize code-index, get project structure, load docs
+4. **Analysis**: Extract keywords, determine scope, classify complexity based on task description and project state
 
 ### Phase 2: Multi-Source Context Discovery
 Execute all 4 discovery tracks:
@@ -84,16 +85,17 @@ Execute all 4 discovery tracks:
 
 ### Phase 3: Synthesis, Assessment & Packaging
 1. Apply relevance scoring and build dependency graph
-2. Synthesize 4-source data (archive > docs > code > web)
-3. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
-4. Perform conflict detection with risk assessment
-5. **Inject historical conflicts** from archive analysis into conflict_detection
-6. Generate and validate context-package.json
+2. **Synthesize 4-source data**: Merge findings from all sources (archive > docs > code > web). **Prioritize the context from `project.json`** for architecture and tech stack unless code analysis reveals it's outdated.
+3. **Populate `project_context`**: Directly use the `overview` from `project.json` to fill the `project_context` section of the output `context-package.json`. Include technology_stack, architecture, key_components, and entry_points.
+4. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
+5. Perform conflict detection with risk assessment
+6. **Inject historical conflicts** from archive analysis into conflict_detection
+7. Generate and validate context-package.json
 
 ## Output Requirements
 Complete context-package.json with:
 - **metadata**: task_description, keywords, complexity, tech_stack, session_id
-- **project_context**: architecture_patterns, coding_conventions, tech_stack
+- **project_context**: architecture_patterns, coding_conventions, tech_stack (sourced from `project.json` overview)
 - **assets**: {documentation[], source_code[], config[], tests[]} with relevance scores
 - **dependencies**: {internal[], external[]} with dependency graph
 - **brainstorm_artifacts**: {guidance_specification, role_analyses[], synthesis_output} with content
@@ -139,7 +141,7 @@ Refer to `context-search-agent.md` Phase 3.7 for complete `context-package.json`
 
 **Key Sections**:
 - **metadata**: Session info, keywords, complexity, tech stack
-- **project_context**: Architecture patterns, conventions, tech stack
+- **project_context**: Architecture patterns, conventions, tech stack (populated from `project.json` overview)
 - **assets**: Categorized files with relevance scores (documentation, source_code, config, tests)
 - **dependencies**: Internal and external dependency graphs
 - **brainstorm_artifacts**: Brainstorm documents with full content (if exists)
@@ -154,7 +156,7 @@ The context-search-agent MUST perform historical archive analysis as Track 1 in
 **Step 1: Check for Archive Manifest**
 ```bash
 # Check if archive manifest exists
-if [[ -f .workflow/.archives/manifest.json ]]; then
+if [[ -f .workflow/archives/manifest.json ]]; then
   # Manifest available for querying
 fi
 ```
@@ -233,7 +235,7 @@ if (historicalConflicts.length > 0 && currentRisk === "low") {
 ### Archive Query Algorithm
 
 ```markdown
-1. IF .workflow/.archives/manifest.json does NOT exist → Skip Track 1, continue to Track 2
+1. IF .workflow/archives/manifest.json does NOT exist → Skip Track 1, continue to Track 2
 2. IF manifest exists:
    a. Load manifest.json
    b. Extract keywords from task_description (nouns, verbs, technical terms)
@@ -250,33 +252,10 @@ if (historicalConflicts.length > 0 && currentRisk === "low") {
 3. Continue to Track 2 (reference documentation)
 ```
 
-## Usage Examples
-
-### Basic Usage
-```bash
-/workflow:tools:context-gather --session WFS-auth-feature "Implement JWT authentication with refresh tokens"
-```
-## Success Criteria
-
-- ✅ Valid context-package.json generated in `.workflow/{session}/.process/`
-- ✅ Contains >80% relevant files based on task keywords
-- ✅ Execution completes within 2 minutes
-- ✅ All required schema fields present and valid
-- ✅ Conflict risk accurately assessed
-- ✅ Agent reports completion with statistics
-
-## Error Handling
-
-| Error | Cause | Resolution |
-|-------|-------|------------|
-| Package validation failed | Invalid session_id in existing package | Re-run agent to regenerate |
-| Agent execution timeout | Large codebase or slow MCP | Increase timeout, check code-index status |
-| Missing required fields | Agent incomplete execution | Check agent logs, verify schema compliance |
-| File count exceeds limit | Too many relevant files | Agent should auto-prioritize top 50 by relevance |
-
 ## Notes
 
 - **Detection-first**: Always check for existing package before invoking agent
+- **Project.json integration**: Agent reads `.workflow/project.json` as primary source for project context, avoiding redundant analysis
 - **Agent autonomy**: Agent handles all discovery logic per `.claude/agents/context-search-agent.md`
 - **No redundancy**: This command is a thin orchestrator, all logic in agent
 - **Plan-specific**: Use this for implementation planning; brainstorm mode uses direct agent call
@@ -231,13 +231,13 @@ const agentContext = {
   // Use memory if available, else load
   session_metadata: memory.has("workflow-session.json")
     ? memory.get("workflow-session.json")
-    : Read(.workflow/WFS-[id]/workflow-session.json),
+    : Read(.workflow/sessions/WFS-[id]/workflow-session.json),
 
-  context_package_path: ".workflow/WFS-[id]/.process/context-package.json",
+  context_package_path: ".workflow/sessions/WFS-[id]/.process/context-package.json",
 
   context_package: memory.has("context-package.json")
     ? memory.get("context-package.json")
-    : Read(".workflow/WFS-[id]/.process/context-package.json"),
+    : Read(".workflow/sessions/WFS-[id]/.process/context-package.json"),
 
   // Extract brainstorm artifacts from context package
   brainstorm_artifacts: extractBrainstormArtifacts(context_package),
@@ -338,19 +338,19 @@ const agentContext = {
   // Use memory if available, else load
   session_metadata: memory.has("workflow-session.json")
     ? memory.get("workflow-session.json")
-    : Read(.workflow/WFS-[id]/workflow-session.json),
+    : Read(.workflow/sessions/WFS-[id]/workflow-session.json),
 
-  context_package_path: ".workflow/WFS-[id]/.process/context-package.json",
+  context_package_path: ".workflow/sessions/WFS-[id]/.process/context-package.json",
 
   context_package: memory.has("context-package.json")
     ? memory.get("context-package.json")
-    : Read(".workflow/WFS-[id]/.process/context-package.json"),
+    : Read(".workflow/sessions/WFS-[id]/.process/context-package.json"),
 
-  test_context_package_path: ".workflow/WFS-[id]/.process/test-context-package.json",
+  test_context_package_path: ".workflow/sessions/WFS-[id]/.process/test-context-package.json",
 
   test_context_package: memory.has("test-context-package.json")
     ? memory.get("test-context-package.json")
-    : Read(".workflow/WFS-[id]/.process/test-context-package.json"),
+    : Read(".workflow/sessions/WFS-[id]/.process/test-context-package.json"),
 
   // Extract brainstorm artifacts from context package
   brainstorm_artifacts: extractBrainstormArtifacts(context_package),
@@ -224,7 +224,7 @@ Each task JSON embeds all necessary context, artifacts, and execution steps usin
 - `id`: Task identifier (format: `IMPL-N` or `IMPL-N.M` for subtasks)
 - `title`: Descriptive task name
 - `status`: Task state (`pending|active|completed|blocked|container`)
-- `context_package_path`: Path to context package (`.workflow/WFS-[session]/.process/context-package.json`)
+- `context_package_path`: Path to context package (`.workflow/sessions/WFS-[session]/.process/context-package.json`)
 - `meta`: Task metadata
 - `context`: Task-specific context and requirements
 - `flow_control`: Execution steps and workflow
@@ -269,7 +269,7 @@ Each task JSON embeds all necessary context, artifacts, and execution steps usin
   "id": "IMPL-1",
   "title": "Implement feature X with Y components",
   "status": "pending",
-  "context_package_path": ".workflow/WFS-session/.process/context-package.json",
+  "context_package_path": ".workflow/sessions/WFS-session/.process/context-package.json",
   "meta": {
     "type": "feature",
     "agent": "@code-developer",
@@ -291,7 +291,7 @@ Each task JSON embeds all necessary context, artifacts, and execution steps usin
   "depends_on": [],
   "artifacts": [
     {
-      "path": ".workflow/WFS-session/.brainstorming/system-architect/analysis.md",
+      "path": ".workflow/sessions/WFS-session/.brainstorming/system-architect/analysis.md",
       "priority": "highest",
       "usage": "Architecture decisions and API specifications"
     }
@@ -272,6 +272,6 @@ Function Coverage: 91%
 
 Overall Compliance: 93/100
 
-Detailed report: .workflow/WFS-auth/.process/tdd-cycle-report.md
+Detailed report: .workflow/sessions/WFS-auth/.process/tdd-cycle-report.md
 ```
 
@@ -3,7 +3,7 @@ name: test-concept-enhanced
 description: Analyze test requirements and generate test generation strategy using Gemini with test-context package
 argument-hint: "--session WFS-test-session-id --context path/to/test-context-package.json"
 examples:
-  - /workflow:tools:test-concept-enhanced --session WFS-test-auth --context .workflow/WFS-test-auth/.process/test-context-package.json
+  - /workflow:tools:test-concept-enhanced --session WFS-test-auth --context .workflow/sessions/WFS-test-auth/.process/test-context-package.json
 ---
 
 # Test Concept Enhanced Command
@@ -273,19 +273,19 @@ const agentContext = {
   // Use memory if available, else load
   session_metadata: memory.has("workflow-session.json")
     ? memory.get("workflow-session.json")
-    : Read(.workflow/WFS-test-[id]/workflow-session.json),
+    : Read(.workflow/sessions/WFS-test-[id]/workflow-session.json),
 
-  test_analysis_results_path: ".workflow/WFS-test-[id]/.process/TEST_ANALYSIS_RESULTS.md",
+  test_analysis_results_path: ".workflow/sessions/WFS-test-[id]/.process/TEST_ANALYSIS_RESULTS.md",
 
   test_analysis_results: memory.has("TEST_ANALYSIS_RESULTS.md")
     ? memory.get("TEST_ANALYSIS_RESULTS.md")
-    : Read(".workflow/WFS-test-[id]/.process/TEST_ANALYSIS_RESULTS.md"),
+    : Read(".workflow/sessions/WFS-test-[id]/.process/TEST_ANALYSIS_RESULTS.md"),
 
-  test_context_package_path: ".workflow/WFS-test-[id]/.process/test-context-package.json",
+  test_context_package_path: ".workflow/sessions/WFS-test-[id]/.process/test-context-package.json",
 
   test_context_package: memory.has("test-context-package.json")
     ? memory.get("test-context-package.json")
-    : Read(".workflow/WFS-test-[id]/.process/test-context-package.json"),
+    : Read(".workflow/sessions/WFS-test-[id]/.process/test-context-package.json"),
 
   // Load source session summaries if exists
   source_session_id: session_metadata.source_session_id || null,
@@ -312,7 +312,7 @@ This section provides quick reference for test task JSON structure. For complete
 
 ## Output Files Structure
 ```
-.workflow/WFS-test-[session]/
+.workflow/sessions/WFS-test-[session]/
 ├── workflow-session.json # Test session metadata
 ├── IMPL_PLAN.md # Test validation plan
 ├── TODO_LIST.md # Progress tracking
@@ -67,7 +67,7 @@ if [ -n "$DESIGN_ID" ]; then
   relative_path=$(find .workflow -name "${DESIGN_ID}" -type d -print -quit)
 elif [ -n "$SESSION_ID" ]; then
   # Latest in session
-  relative_path=$(find .workflow/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
+  relative_path=$(find .workflow/sessions/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
 else
   # Latest globally
   relative_path=$(find .workflow -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
@@ -1,380 +0,0 @@
|
|||||||
---
|
|
||||||
name: capture
|
|
||||||
description: Batch screenshot capture for UI design workflows using MCP puppeteer or local fallback with URL mapping
|
|
||||||
argument-hint: --url-map "target:url,..." [--design-id <id>] [--session <id>]
|
|
||||||
allowed-tools: TodoWrite(*), Read(*), Write(*), Bash(*), Glob(*), ListMcpResourcesTool(*), mcp__chrome-devtools__*, mcp__playwright__*
|
|
||||||
---
|
|
||||||
|
|
||||||
# Batch Screenshot Capture (/workflow:ui-design:capture)
|
|
||||||
|
|
||||||
## Overview
|
|
||||||
Batch screenshot tool with MCP-first strategy and multi-tier fallback. Processes multiple URLs in parallel.
|
|
||||||
|
|
||||||
**Strategy**: MCP → Playwright → Chrome → Manual
|
|
||||||
**Output**: Flat structure `screenshots/{target}.png`
|
|
||||||
|
|
||||||
## Phase 1: Initialize & Parse
|
|
||||||
|
|
||||||
### Step 1: Determine Base Path & Generate Design ID
|
|
||||||
```bash
|
|
||||||
# Priority: --design-id > session (latest) > standalone (create new)
|
|
||||||
if [ -n "$DESIGN_ID" ]; then
|
|
||||||
# Use provided design ID
|
|
||||||
relative_path=$(find .workflow -name "${DESIGN_ID}" -type d -print -quit)
|
|
||||||
if [ -z "$relative_path" ]; then
|
|
||||||
echo "ERROR: Design run not found: $DESIGN_ID"
|
|
||||||
echo "HINT: Run '/workflow:ui-design:list' to see available design runs"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
elif [ -n "$SESSION_ID" ]; then
|
|
||||||
# Find latest in session or create new
|
|
||||||
relative_path=$(find .workflow/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
|
|
||||||
if [ -z "$relative_path" ]; then
|
|
||||||
design_id="design-run-$(date +%Y%m%d)-$RANDOM"
|
|
||||||
relative_path=".workflow/WFS-$SESSION_ID/${design_id}"
|
|
||||||
fi
|
|
||||||
else
|
|
||||||
# Create new standalone design run
|
|
||||||
design_id="design-run-$(date +%Y%m%d)-$RANDOM"
|
|
||||||
relative_path=".workflow/${design_id}"
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Create directory and convert to absolute path
|
|
||||||
bash(mkdir -p "$relative_path"/screenshots)
|
|
||||||
base_path=$(cd "$relative_path" && pwd)
|
|
||||||
|
|
||||||
# Extract and display design_id
|
|
||||||
design_id=$(basename "$base_path")
|
|
||||||
echo "✓ Design ID: $design_id"
|
|
||||||
echo "✓ Base path: $base_path"
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 2: Parse URL Map
|
|
||||||
```javascript
|
|
||||||
// Input: "home:https://linear.app, pricing:https://linear.app/pricing"
|
|
||||||
url_entries = []
|
|
||||||
|
|
||||||
FOR pair IN split(params["--url-map"], ","):
|
|
||||||
parts = pair.split(":", 1)
|
|
||||||
|
|
||||||
IF len(parts) != 2:
|
|
||||||
ERROR: "Invalid format: {pair}. Expected: 'target:url'"
|
|
||||||
EXIT 1
|
|
||||||
|
|
||||||
target = parts[0].strip().lower().replace(" ", "-")
|
|
||||||
url = parts[1].strip()
|
|
||||||
|
|
||||||
// Validate target name
|
|
||||||
IF NOT regex_match(target, r"^[a-z0-9][a-z0-9_-]*$"):
|
|
||||||
ERROR: "Invalid target: {target}"
|
|
||||||
EXIT 1
|
|
||||||
|
|
||||||
// Add https:// if missing
|
|
||||||
IF NOT url.startswith("http"):
|
|
||||||
url = f"https://{url}"
|
|
||||||
|
|
||||||
url_entries.append({target, url})
|
|
||||||
```
|
|
||||||
|
|
||||||
**Output**: `base_path`, `url_entries[]`
|
|
||||||
|
|
||||||
### Step 3: Initialize Todos
|
|
||||||
```javascript
|
|
||||||
TodoWrite({todos: [
|
|
||||||
{content: "Parse url-map", status: "completed", activeForm: "Parsing"},
|
|
||||||
{content: "Detect MCP tools", status: "in_progress", activeForm: "Detecting"},
|
|
||||||
{content: "Capture screenshots", status: "pending", activeForm: "Capturing"},
|
|
||||||
{content: "Verify results", status: "pending", activeForm: "Verifying"}
|
|
||||||
]})
|
|
||||||
```
|
|
||||||
|
|
||||||
## Phase 2: Detect Screenshot Tools
|
|
||||||
|
|
||||||
### Step 1: Check MCP Availability
|
|
||||||
```javascript
|
|
||||||
// List available MCP servers
|
|
||||||
all_resources = ListMcpResourcesTool()
|
|
||||||
available_servers = unique([r.server for r in all_resources])
|
|
||||||
|
|
||||||
// Check Chrome DevTools MCP
|
|
||||||
chrome_devtools = "chrome-devtools" IN available_servers
|
|
||||||
chrome_screenshot = check_tool_exists("mcp__chrome-devtools__take_screenshot")
|
|
||||||
|
|
||||||
// Check Playwright MCP
|
|
||||||
playwright_mcp = "playwright" IN available_servers
|
|
||||||
playwright_screenshot = check_tool_exists("mcp__playwright__screenshot")
|
|
||||||
|
|
||||||
// Determine primary tool
|
|
||||||
IF chrome_devtools AND chrome_screenshot:
|
|
||||||
tool = "chrome-devtools"
|
|
||||||
ELSE IF playwright_mcp AND playwright_screenshot:
|
|
||||||
tool = "playwright"
|
|
||||||
ELSE:
|
|
||||||
tool = null
|
|
||||||
```
|
|
||||||
|
|
||||||
**Output**: `tool` (chrome-devtools | playwright | null)
|
|
||||||
|
|
||||||
### Step 2: Check Local Fallback
|
|
||||||
```bash
|
|
||||||
# Only if MCP unavailable
|
|
||||||
bash(which playwright 2>/dev/null || echo "")
|
|
||||||
bash(which google-chrome || which chrome || which chromium 2>/dev/null || echo "")
|
|
||||||
```
|
|
||||||
|
|
||||||
**Output**: `local_tools[]`
|
|
||||||
|
|
||||||
## Phase 3: Capture Screenshots
|
|
||||||
|
|
||||||
### Screenshot Format Options
|
|
||||||
|
|
||||||
**PNG Format** (default, lossless):
|
|
||||||
- **Pros**: Lossless quality, best for detailed UI screenshots
|
|
||||||
- **Cons**: Larger file sizes (typically 200-500 KB per screenshot)
|
|
||||||
- **Parameters**: `format: "png"` (no quality parameter)
|
|
||||||
- **Use case**: High-fidelity UI replication, design system extraction
|
|
||||||
|
|
||||||
**WebP Format** (optional, lossy/lossless):
|
|
||||||
- **Pros**: Smaller file sizes with good quality (50-70% smaller than PNG)
|
|
||||||
- **Cons**: Requires quality parameter, slight quality loss at high compression
|
|
||||||
- **Parameters**: `format: "webp", quality: 90` (80-100 recommended)
|
|
||||||
- **Use case**: Batch captures, network-constrained environments
|
|
||||||
|
|
||||||
**JPEG Format** (optional, lossy):
|
|
||||||
- **Pros**: Smallest file sizes
|
|
||||||
- **Cons**: Lossy compression, not recommended for UI screenshots
|
|
||||||
- **Parameters**: `format: "jpeg", quality: 90`
|
|
||||||
- **Use case**: Photo-heavy pages, not recommended for UI design
|
|
||||||
|
|
||||||
### Step 1: MCP Capture (If Available)
|
|
||||||
```javascript
|
|
||||||
IF tool == "chrome-devtools":
|
|
||||||
// Get or create page
|
|
||||||
pages = mcp__chrome-devtools__list_pages()
|
|
||||||
|
|
||||||
IF pages.length == 0:
|
|
||||||
mcp__chrome-devtools__new_page({url: url_entries[0].url})
|
|
||||||
page_idx = 0
|
|
||||||
ELSE:
|
|
||||||
page_idx = 0
|
|
||||||
|
|
||||||
mcp__chrome-devtools__select_page({pageIdx: page_idx})
|
|
||||||
|
|
||||||
// Capture each URL
|
|
||||||
FOR entry IN url_entries:
|
|
||||||
mcp__chrome-devtools__navigate_page({url: entry.url, timeout: 30000})
|
|
||||||
bash(sleep 2)
|
|
||||||
|
|
||||||
// PNG format doesn't support quality parameter
|
|
||||||
// Use PNG for lossless quality (larger files)
|
|
||||||
mcp__chrome-devtools__take_screenshot({
|
|
||||||
fullPage: true,
|
|
||||||
format: "png",
|
|
||||||
filePath: f"{base_path}/screenshots/{entry.target}.png"
|
|
||||||
})
|
|
||||||
|
|
||||||
// Alternative: Use WebP with quality for smaller files
|
|
||||||
// mcp__chrome-devtools__take_screenshot({
|
|
||||||
// fullPage: true,
|
|
||||||
// format: "webp",
|
|
||||||
// quality: 90,
|
|
||||||
// filePath: f"{base_path}/screenshots/{entry.target}.webp"
|
|
||||||
// })
|
|
||||||
|
|
||||||
ELSE IF tool == "playwright":
|
|
||||||
FOR entry IN url_entries:
|
|
||||||
mcp__playwright__screenshot({
|
|
||||||
url: entry.url,
|
|
||||||
output_path: f"{base_path}/screenshots/{entry.target}.png",
|
|
||||||
full_page: true,
|
|
||||||
timeout: 30000
|
|
||||||
})
|
|
||||||
```
### Step 2: Local Fallback (If MCP Failed)
```bash
# Try Playwright CLI
bash(playwright screenshot "$url" "$output_file" --full-page --timeout 30000)

# Try Chrome headless
bash($chrome --headless --screenshot="$output_file" --window-size=1920,1080 "$url")
```

### Step 3: Manual Mode (If All Failed)
```
⚠️ Manual Screenshot Required

Failed URLs:
  home: https://linear.app
  Save to: .workflow/design-run-20250110/screenshots/home.png

Steps:
1. Visit URL in browser
2. Take full-page screenshot
3. Save to path above
4. Type 'ready' to continue

Options: ready | skip | abort
```

## Phase 4: Verification

### Step 1: Scan Captured Files
```bash
bash(ls -1 $base_path/screenshots/*.{png,jpg,jpeg,webp} 2>/dev/null)
bash(du -h $base_path/screenshots/*.png 2>/dev/null)
```

### Step 2: Generate Metadata
```javascript
captured_files = Glob(f"{base_path}/screenshots/*.{{png,jpg,jpeg,webp}}")
captured_targets = [basename_no_ext(f) for f in captured_files]

metadata = {
  "timestamp": current_timestamp(),
  "total_requested": len(url_entries),
  "total_captured": len(captured_targets),
  "screenshots": []
}

FOR entry IN url_entries:
  is_captured = entry.target IN captured_targets

  metadata.screenshots.append({
    "target": entry.target,
    "url": entry.url,
    "captured": is_captured,
    "path": f"{base_path}/screenshots/{entry.target}.png" IF is_captured ELSE null,
    "size_kb": file_size_kb IF is_captured ELSE null
  })

Write(f"{base_path}/screenshots/capture-metadata.json", JSON.stringify(metadata))
```

**Output**: `capture-metadata.json`
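The metadata step above can be sketched as runnable Python for reference (the real command uses the workflow's `Glob`/`Write` tool calls; `build_metadata` and its dict shapes here are illustrative assumptions):

```python
from pathlib import Path


def build_metadata(base_path, url_entries):
    """Mark each requested target as captured iff a matching PNG exists."""
    shots = Path(base_path) / "screenshots"
    captured = {p.stem for p in shots.glob("*.png")}  # filename sans extension
    return {
        "total_requested": len(url_entries),
        "total_captured": sum(1 for e in url_entries if e["target"] in captured),
        "screenshots": [
            {
                "target": e["target"],
                "url": e["url"],
                "captured": e["target"] in captured,
                "path": str(shots / (e["target"] + ".png"))
                        if e["target"] in captured else None,
            }
            for e in url_entries
        ],
    }
```

Keying the capture check on the file stem is what lets partial failures surface as `"captured": false` entries instead of aborting the run.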
## Completion

### Todo Update
```javascript
TodoWrite({todos: [
  {content: "Parse url-map", status: "completed", activeForm: "Parsing"},
  {content: "Detect MCP tools", status: "completed", activeForm: "Detecting"},
  {content: "Capture screenshots", status: "completed", activeForm: "Capturing"},
  {content: "Verify results", status: "completed", activeForm: "Verifying"}
]})
```

### Output Message
```
✅ Batch screenshot capture complete!

Summary:
- Requested: {total_requested}
- Captured: {total_captured}
- Success rate: {success_rate}%
- Method: {tool || "Local fallback"}

Output:
{base_path}/screenshots/
├── home.png (245.3 KB)
├── pricing.png (198.7 KB)
└── capture-metadata.json

Next: /workflow:ui-design:extract --images "screenshots/*.png"
```

## Simple Bash Commands

### Path Operations
```bash
# Find design directory
bash(find .workflow -type d -name "design-run-*" | head -1)

# Create screenshot directory
bash(mkdir -p $BASE_PATH/screenshots)
```

### Tool Detection
```bash
# Check MCP
all_resources = ListMcpResourcesTool()

# Check local tools
bash(which playwright 2>/dev/null)
bash(which google-chrome 2>/dev/null)
```

### Verification
```bash
# List captures
bash(ls -1 $base_path/screenshots/*.png 2>/dev/null)

# File sizes
bash(du -h $base_path/screenshots/*.png)
```

## Output Structure

```
{base_path}/
└── screenshots/
    ├── home.png
    ├── pricing.png
    ├── about.png
    └── capture-metadata.json
```

## Error Handling

### Common Errors
```
ERROR: Invalid url-map format
→ Use: "target:url, target2:url2"

ERROR: png screenshots do not support 'quality'
→ PNG format is lossless, no quality parameter needed
→ Remove quality parameter OR switch to webp/jpeg format

ERROR: MCP unavailable
→ Using local fallback

ERROR: All tools failed
→ Manual mode activated
```
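The `"target:url, target2:url2"` format referenced above can be validated with a small parser. A sketch (splitting on the first `:` only, since URLs themselves contain colons; the helper name is illustrative):

```python
def parse_url_map(raw):
    """Parse "target:url, target2:url2" into a list of entry dicts."""
    entries = []
    for pair in raw.split(","):
        pair = pair.strip()
        if not pair:
            continue
        # partition on the FIRST colon so "https://..." stays intact
        target, sep, url = pair.partition(":")
        if not sep or not target.strip() or not url.strip():
            raise ValueError(f'Invalid url-map entry: "{pair}" (use target:url)')
        entries.append({"target": target.strip(), "url": url.strip()})
    return entries


entries = parse_url_map("home:https://linear.app, pricing:https://linear.app/pricing")
# entries[0] == {"target": "home", "url": "https://linear.app"}
```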
### Format-Specific Errors
```
❌ Wrong: format: "png", quality: 90
✅ Right: format: "png"

✅ Or use: format: "webp", quality: 90
✅ Or use: format: "jpeg", quality: 90
```

### Recovery
- **Partial success**: Keep successful captures
- **Retry**: Re-run with failed targets only
- **Manual**: Follow interactive guidance

## Quality Checklist

- [ ] All requested URLs processed
- [ ] File sizes > 1KB (valid images)
- [ ] Metadata JSON generated
- [ ] No missing targets (or documented)

## Key Features

- **MCP-first**: Prioritize managed tools
- **Multi-tier fallback**: MCP (chrome-devtools/playwright) → Local CLI → Manual
- **Batch processing**: Parallel capture
- **Error tolerance**: Partial failures handled
- **Structured output**: Flat, predictable

## Integration

**Input**: `--url-map` (multiple target:url pairs)
**Output**: `screenshots/*.png` + `capture-metadata.json`
**Called by**: `/workflow:ui-design:imitate-auto`, `/workflow:ui-design:explore-auto`
**Next**: `/workflow:ui-design:extract` or `/workflow:ui-design:explore-layers`
@@ -1,11 +1,11 @@
---
name: update
name: design-sync
description: Update brainstorming artifacts with finalized design system references from selected prototypes
description: Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption
argument-hint: --session <session_id> [--selected-prototypes "<list>"]
allowed-tools: Read(*), Write(*), Edit(*), TodoWrite(*), Glob(*), Bash(*)
---

# Design Update Command
# Design Sync Command

## Overview

@@ -25,10 +25,10 @@ Synchronize finalized design system references to brainstorming artifacts, prepa

```bash
# Validate session
CHECK: .workflow/.active-* marker files; VALIDATE: session_id matches active session
CHECK: find .workflow/sessions/ -name "WFS-*" -type d; VALIDATE: session_id matches active session

# Verify design artifacts in latest design run
latest_design = find_latest_path_matching(".workflow/WFS-{session}/design-run-*")
latest_design = find_latest_path_matching(".workflow/sessions/WFS-{session}/design-run-*")

# Detect design system structure
IF exists({latest_design}/style-extraction/style-1/design-tokens.json):
```

@@ -51,7 +51,7 @@ REPORT: "Found {count} design artifacts, {prototype_count} prototypes"

```bash
# Check if role analysis documents contains current design run reference
synthesis_spec_path = ".workflow/WFS-{session}/.brainstorming/role analysis documents"
synthesis_spec_path = ".workflow/sessions/WFS-{session}/.brainstorming/role analysis documents"
current_design_run = basename(latest_design)  # e.g., "design-run-20250109-143022"

IF exists(synthesis_spec_path):
```

@@ -68,8 +68,8 @@ IF exists(synthesis_spec_path):

```bash
# Load target brainstorming artifacts (files to be updated)
Read(.workflow/WFS-{session}/.brainstorming/role analysis documents)
Read(.workflow/sessions/WFS-{session}/.brainstorming/role analysis documents)
IF exists(.workflow/WFS-{session}/.brainstorming/ui-designer/analysis.md): Read(analysis.md)
IF exists(.workflow/sessions/WFS-{session}/.brainstorming/ui-designer/analysis.md): Read(analysis.md)

# Optional: Read prototype notes for descriptions (minimal context)
FOR each selected_prototype IN selected_list:
```

@@ -113,7 +113,7 @@ Update `.brainstorming/role analysis documents` with design system references.
**Implementation**:
```bash
# Option 1: Edit existing section
Edit(file_path=".workflow/WFS-{session}/.brainstorming/role analysis documents",
Edit(file_path=".workflow/sessions/WFS-{session}/.brainstorming/role analysis documents",
     old_string="## UI/UX Guidelines\n[existing content]",
     new_string="## UI/UX Guidelines\n\n[new design reference content]")
```

@@ -128,15 +128,15 @@ IF section not found:

```bash
# Always update ui-designer
ui_designer_files = Glob(".workflow/WFS-{session}/.brainstorming/ui-designer/analysis*.md")
ui_designer_files = Glob(".workflow/sessions/WFS-{session}/.brainstorming/ui-designer/analysis*.md")

# Conditionally update other roles
has_animations = exists({latest_design}/animation-extraction/animation-tokens.json)
has_layouts = exists({latest_design}/layout-extraction/layout-templates.json)

IF has_animations: ux_expert_files = Glob(".workflow/WFS-{session}/.brainstorming/ux-expert/analysis*.md")
IF has_animations: ux_expert_files = Glob(".workflow/sessions/WFS-{session}/.brainstorming/ux-expert/analysis*.md")
IF has_layouts: architect_files = Glob(".workflow/WFS-{session}/.brainstorming/system-architect/analysis*.md")
IF has_layouts: architect_files = Glob(".workflow/sessions/WFS-{session}/.brainstorming/system-architect/analysis*.md")
IF selected_list: pm_files = Glob(".workflow/WFS-{session}/.brainstorming/product-manager/analysis*.md")
IF selected_list: pm_files = Glob(".workflow/sessions/WFS-{session}/.brainstorming/product-manager/analysis*.md")
```

**Content Templates**:
@@ -223,7 +223,7 @@ For complete token definitions and usage examples, see:

**Implementation**:
```bash
Write(file_path=".workflow/WFS-{session}/.brainstorming/ui-designer/design-system-reference.md",
Write(file_path=".workflow/sessions/WFS-{session}/.brainstorming/ui-designer/design-system-reference.md",
      content="[generated content with @ references]")
```

@@ -259,7 +259,7 @@ Next: /workflow:plan [--agent] "<task description>"

**Updated Files**:
```
.workflow/WFS-{session}/.brainstorming/
.workflow/sessions/WFS-{session}/.brainstorming/
├── role analysis documents          # Updated with UI/UX Guidelines section
├── ui-designer/
│   ├── analysis*.md                 # Updated with design system references
```

@@ -349,15 +349,3 @@ After update, verify:
- **Next Phase**: `/workflow:plan` discovers and utilizes design system through @ references
- **Auto Integration**: Automatically triggered by `/workflow:ui-design:auto` workflow

## Why Main Claude Execution?

This command is executed directly by main Claude (not delegated to an Agent) because:

1. **Simple Reference Generation**: Only generating file paths, not complex synthesis
2. **Context Preservation**: Main Claude has full session and conversation context
3. **Minimal Transformation**: Primarily updating references, not analyzing content
4. **Path Resolution**: Requires precise relative path calculation
5. **Edit Operations**: Better error recovery for Edit conflicts
6. **Synthesis Pattern**: Follows same direct-execution pattern as other reference updates

This ensures reliable, lightweight integration without Agent handoff overhead.
@@ -265,7 +265,7 @@ STORE: device_type, device_source
### Phase 4: Run Initialization & Directory Setup
```bash
design_id = "design-run-$(date +%Y%m%d)-$RANDOM"
relative_base_path = --session ? ".workflow/WFS-{session}/${design_id}" : ".workflow/${design_id}"
relative_base_path = --session ? ".workflow/sessions/WFS-{session}/${design_id}" : ".workflow/${design_id}"

# Create directory and convert to absolute path
Bash(mkdir -p "${relative_base_path}/style-extraction")
```
@@ -1,611 +0,0 @@
---
name: explore-layers
description: Interactive deep UI capture with depth-controlled layer exploration using MCP puppeteer
argument-hint: --url <url> --depth <1-5> [--design-id <id>] [--session <id>]
allowed-tools: TodoWrite(*), Read(*), Write(*), Bash(*), Glob(*), mcp__chrome-devtools__*
---

# Interactive Layer Exploration (/workflow:ui-design:explore-layers)

## Overview
Single-URL depth-controlled interactive capture. Progressively explores UI layers from pages to Shadow DOM.

**Depth Levels**:
- `1` = Page (full-page screenshot)
- `2` = Elements (key components)
- `3` = Interactions (modals, dropdowns)
- `4` = Embedded (iframes, widgets)
- `5` = Shadow DOM (web components)

**Requirements**: Chrome DevTools MCP

## Phase 1: Setup & Validation

### Step 1: Parse Parameters
```javascript
url = params["--url"]
depth = int(params["--depth"])

// Validate URL
IF NOT url.startswith("http"):
  url = f"https://{url}"

// Validate depth
IF depth NOT IN [1, 2, 3, 4, 5]:
  ERROR: "Invalid depth: {depth}. Use 1-5"
  EXIT 1
```
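For reference, the parameter checks in the removed file above are plain normalization logic; a Python sketch of the same rules (names assumed for illustration):

```python
def normalize_params(url, depth):
    """Default the scheme to https and require depth in 1..5."""
    if not url.startswith("http"):
        url = f"https://{url}"  # same default-scheme rule as above
    depth = int(depth)
    if depth not in (1, 2, 3, 4, 5):
        raise ValueError(f"Invalid depth: {depth}. Use 1-5")
    return url, depth


url, depth = normalize_params("linear.app", "3")
# url == "https://linear.app", depth == 3
```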
### Step 2: Determine Base Path
```bash
# Priority: --design-id > --session > create new
if [ -n "$DESIGN_ID" ]; then
  # Exact match by design ID
  relative_path=$(find .workflow -name "${DESIGN_ID}" -type d -print -quit)
  if [ -z "$relative_path" ]; then
    echo "ERROR: Design run not found: $DESIGN_ID"
    echo "HINT: Run '/workflow:ui-design:list' to see available design runs"
    exit 1
  fi
elif [ -n "$SESSION_ID" ]; then
  # Find latest in session or create new
  relative_path=$(find .workflow/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
  if [ -z "$relative_path" ]; then
    design_id="design-run-$(date +%Y%m%d)-$RANDOM"
    relative_path=".workflow/WFS-$SESSION_ID/${design_id}"
  fi
else
  # Create new standalone design run
  design_id="design-run-$(date +%Y%m%d)-$RANDOM"
  relative_path=".workflow/${design_id}"
fi

# Create directory structure and convert to absolute path
bash(mkdir -p "$relative_path")
base_path=$(cd "$relative_path" && pwd)

# Extract and display design_id
design_id=$(basename "$base_path")
echo "✓ Design ID: $design_id"
echo "✓ Base path: $base_path"

# Create depth directories
bash(for i in $(seq 1 $depth); do mkdir -p "$base_path"/screenshots/depth-$i; done)
```

**Output**: `url`, `depth`, `base_path`
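The `--design-id > --session > create new` priority above is a pure path computation once directory lookups are stubbed out. A simplified sketch (the latest-run reuse inside a session is omitted, and `date`/`rand` stand in for the `$(date ...)`/`$RANDOM` components; all names here are illustrative):

```python
def resolve_base_path(design_id=None, session_id=None, existing=(),
                      date="20250110", rand="1234"):
    if design_id:  # highest priority: exact match among existing runs
        for path in existing:
            if path.rstrip("/").endswith(design_id):
                return path
        raise FileNotFoundError(f"Design run not found: {design_id}")
    new_run = f"design-run-{date}-{rand}"
    if session_id:  # session-scoped run (latest-run lookup omitted here)
        return f".workflow/WFS-{session_id}/{new_run}"
    return f".workflow/{new_run}"  # standalone run
```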
### Step 3: Validate MCP Availability
```javascript
all_resources = ListMcpResourcesTool()
chrome_devtools = "chrome-devtools" IN [r.server for r in all_resources]

IF NOT chrome_devtools:
  ERROR: "explore-layers requires Chrome DevTools MCP"
  ERROR: "Install: npm i -g @modelcontextprotocol/server-chrome-devtools"
  EXIT 1
```

### Step 4: Initialize Todos
```javascript
todos = [
  {content: "Setup and validation", status: "completed", activeForm: "Setting up"}
]

FOR level IN range(1, depth + 1):
  todos.append({
    content: f"Depth {level}: {DEPTH_NAMES[level]}",
    status: "pending",
    activeForm: f"Capturing depth {level}"
  })

todos.append({content: "Generate layer map", status: "pending", activeForm: "Mapping"})

TodoWrite({todos})
```

## Phase 2: Navigate & Load Page

### Step 1: Get or Create Browser Page
```javascript
pages = mcp__chrome-devtools__list_pages()

IF pages.length == 0:
  mcp__chrome-devtools__new_page({url: url, timeout: 30000})
  page_idx = 0
ELSE:
  page_idx = 0
  mcp__chrome-devtools__select_page({pageIdx: page_idx})
  mcp__chrome-devtools__navigate_page({url: url, timeout: 30000})

bash(sleep 3) // Wait for page load
```

**Output**: `page_idx`

## Phase 3: Depth 1 - Page Level

### Step 1: Capture Full Page
```javascript
TodoWrite(mark_in_progress: "Depth 1: Page")

output_file = f"{base_path}/screenshots/depth-1/full-page.png"

mcp__chrome-devtools__take_screenshot({
  fullPage: true,
  format: "png",
  quality: 90,
  filePath: output_file
})

layer_map = {
  "url": url,
  "depth": depth,
  "layers": {
    "depth-1": {
      "type": "page",
      "captures": [{
        "name": "full-page",
        "path": output_file,
        "size_kb": file_size_kb(output_file)
      }]
    }
  }
}

TodoWrite(mark_completed: "Depth 1: Page")
```

**Output**: `depth-1/full-page.png`

## Phase 4: Depth 2 - Element Level (If depth >= 2)

### Step 1: Analyze Page Structure
```javascript
IF depth < 2: SKIP

TodoWrite(mark_in_progress: "Depth 2: Elements")

snapshot = mcp__chrome-devtools__take_snapshot()

// Filter key elements
key_types = ["nav", "header", "footer", "aside", "button", "form", "article"]
key_elements = [
  el for el in snapshot.interactiveElements
  if el.type IN key_types OR el.role IN ["navigation", "banner", "main"]
][:10] // Limit to top 10
```

### Step 2: Capture Element Screenshots
```javascript
depth_2_captures = []

FOR idx, element IN enumerate(key_elements):
  element_name = sanitize(element.text[:20] or element.type) or f"element-{idx}"
  output_file = f"{base_path}/screenshots/depth-2/{element_name}.png"

  TRY:
    mcp__chrome-devtools__take_screenshot({
      uid: element.uid,
      format: "png",
      quality: 85,
      filePath: output_file
    })

    depth_2_captures.append({
      "name": element_name,
      "type": element.type,
      "path": output_file,
      "size_kb": file_size_kb(output_file)
    })
  CATCH error:
    REPORT: f"Skip {element_name}: {error}"

layer_map.layers["depth-2"] = {
  "type": "elements",
  "captures": depth_2_captures
}

TodoWrite(mark_completed: "Depth 2: Elements")
```

**Output**: `depth-2/{element}.png` × N
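The `sanitize()` helper used for file names above is never defined in the removed file; a plausible implementation, stated as an assumption rather than the original code:

```python
import re


def sanitize(text):
    """Lower-case, collapse non-alphanumeric runs to '-', trim edge dashes."""
    # hypothetical reconstruction of the undefined sanitize() helper
    return re.sub(r"[^a-z0-9]+", "-", (text or "").lower()).strip("-")


name = sanitize("Sign Up / Log In!")
# name == "sign-up-log-in"
```

Falling back to `f"element-{idx}"` when this returns an empty string, as the loop above does, keeps file names unique even for icon-only elements with no text.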
## Phase 5: Depth 3 - Interaction Level (If depth >= 3)

### Step 1: Analyze Interactive Triggers
```javascript
IF depth < 3: SKIP

TodoWrite(mark_in_progress: "Depth 3: Interactions")

// Detect structure
structure = mcp__chrome-devtools__evaluate_script({
  function: `() => ({
    modals: document.querySelectorAll('[role="dialog"], .modal').length,
    dropdowns: document.querySelectorAll('[role="menu"], .dropdown').length,
    tooltips: document.querySelectorAll('[role="tooltip"], [title]').length
  })`
})

// Identify triggers
triggers = []
FOR element IN snapshot.interactiveElements:
  IF element.attributes CONTAINS ("data-toggle", "aria-haspopup"):
    triggers.append({
      uid: element.uid,
      type: "modal" IF "modal" IN element.classes ELSE "dropdown",
      trigger: "click",
      text: element.text
    })
  ELSE IF element.attributes CONTAINS ("title", "data-tooltip"):
    triggers.append({
      uid: element.uid,
      type: "tooltip",
      trigger: "hover",
      text: element.text
    })

triggers = triggers[:10] // Limit
```

### Step 2: Trigger Interactions & Capture
```javascript
depth_3_captures = []

FOR idx, trigger IN enumerate(triggers):
  layer_name = f"{trigger.type}-{sanitize(trigger.text[:15]) or idx}"
  output_file = f"{base_path}/screenshots/depth-3/{layer_name}.png"

  TRY:
    // Trigger interaction
    IF trigger.trigger == "click":
      mcp__chrome-devtools__click({uid: trigger.uid})
    ELSE:
      mcp__chrome-devtools__hover({uid: trigger.uid})

    bash(sleep 1)

    // Capture
    mcp__chrome-devtools__take_screenshot({
      fullPage: false, // Viewport only
      format: "png",
      quality: 90,
      filePath: output_file
    })

    depth_3_captures.append({
      "name": layer_name,
      "type": trigger.type,
      "trigger_method": trigger.trigger,
      "path": output_file,
      "size_kb": file_size_kb(output_file)
    })

    // Dismiss (ESC key)
    mcp__chrome-devtools__evaluate_script({
      function: `() => {
        document.dispatchEvent(new KeyboardEvent('keydown', {key: 'Escape'}));
      }`
    })
    bash(sleep 0.5)

  CATCH error:
    REPORT: f"Skip {layer_name}: {error}"

layer_map.layers["depth-3"] = {
  "type": "interactions",
  "triggers": structure,
  "captures": depth_3_captures
}

TodoWrite(mark_completed: "Depth 3: Interactions")
```

**Output**: `depth-3/{interaction}.png` × N

## Phase 6: Depth 4 - Embedded Level (If depth >= 4)

### Step 1: Detect Iframes
```javascript
IF depth < 4: SKIP

TodoWrite(mark_in_progress: "Depth 4: Embedded")

iframes = mcp__chrome-devtools__evaluate_script({
  function: `() => {
    return Array.from(document.querySelectorAll('iframe')).map(iframe => ({
      src: iframe.src,
      id: iframe.id || 'iframe',
      title: iframe.title || 'untitled'
    })).filter(i => i.src && i.src.startsWith('http'));
  }`
})
```

### Step 2: Capture Iframe Content
```javascript
depth_4_captures = []

FOR idx, iframe IN enumerate(iframes):
  iframe_name = f"iframe-{sanitize(iframe.title or iframe.id)}-{idx}"
  output_file = f"{base_path}/screenshots/depth-4/{iframe_name}.png"

  TRY:
    // Navigate to iframe URL in new tab
    mcp__chrome-devtools__new_page({url: iframe.src, timeout: 30000})
    bash(sleep 2)

    mcp__chrome-devtools__take_screenshot({
      fullPage: true,
      format: "png",
      quality: 90,
      filePath: output_file
    })

    depth_4_captures.append({
      "name": iframe_name,
      "url": iframe.src,
      "path": output_file,
      "size_kb": file_size_kb(output_file)
    })

    // Close iframe tab
    current_pages = mcp__chrome-devtools__list_pages()
    mcp__chrome-devtools__close_page({pageIdx: current_pages.length - 1})

  CATCH error:
    REPORT: f"Skip {iframe_name}: {error}"

layer_map.layers["depth-4"] = {
  "type": "embedded",
  "captures": depth_4_captures
}

TodoWrite(mark_completed: "Depth 4: Embedded")
```

**Output**: `depth-4/iframe-*.png` × N

## Phase 7: Depth 5 - Shadow DOM (If depth = 5)

### Step 1: Detect Shadow Roots
```javascript
IF depth < 5: SKIP

TodoWrite(mark_in_progress: "Depth 5: Shadow DOM")

shadow_elements = mcp__chrome-devtools__evaluate_script({
  function: `() => {
    const elements = Array.from(document.querySelectorAll('*'));
    return elements
      .filter(el => el.shadowRoot)
      .map((el, idx) => ({
        tag: el.tagName.toLowerCase(),
        id: el.id || \`shadow-\${idx}\`,
        innerHTML: el.shadowRoot.innerHTML.substring(0, 100)
      }));
  }`
})
```

### Step 2: Capture Shadow DOM Components
```javascript
depth_5_captures = []

FOR idx, shadow IN enumerate(shadow_elements):
  shadow_name = f"shadow-{sanitize(shadow.id)}"
  output_file = f"{base_path}/screenshots/depth-5/{shadow_name}.png"

  TRY:
    // Inject highlight script
    mcp__chrome-devtools__evaluate_script({
      function: `() => {
        const el = document.querySelector('${shadow.tag}${shadow.id ? "#" + shadow.id : ""}');
        if (el) {
          el.scrollIntoView({behavior: 'smooth', block: 'center'});
          el.style.outline = '3px solid red';
        }
      }`
    })

    bash(sleep 0.5)

    // Full-page screenshot (component highlighted)
    mcp__chrome-devtools__take_screenshot({
      fullPage: false,
      format: "png",
      quality: 90,
      filePath: output_file
    })

    depth_5_captures.append({
      "name": shadow_name,
      "tag": shadow.tag,
      "path": output_file,
      "size_kb": file_size_kb(output_file)
    })

  CATCH error:
    REPORT: f"Skip {shadow_name}: {error}"

layer_map.layers["depth-5"] = {
  "type": "shadow-dom",
  "captures": depth_5_captures
}

TodoWrite(mark_completed: "Depth 5: Shadow DOM")
```

**Output**: `depth-5/shadow-*.png` × N

## Phase 8: Generate Layer Map

### Step 1: Compile Metadata
```javascript
TodoWrite(mark_in_progress: "Generate layer map")

// Calculate totals
total_captures = sum(len(layer.captures) for layer in layer_map.layers.values())
total_size_kb = sum(
  sum(c.size_kb for c in layer.captures)
  for layer in layer_map.layers.values()
)

layer_map["summary"] = {
  "timestamp": current_timestamp(),
  "total_depth": depth,
  "total_captures": total_captures,
  "total_size_kb": total_size_kb
}

Write(f"{base_path}/screenshots/layer-map.json", JSON.stringify(layer_map, indent=2))

TodoWrite(mark_completed: "Generate layer map")
```

**Output**: `layer-map.json`
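The summary totals above reduce to two nested sums over the layer map; a minimal sketch on plain dicts (the `summarize` name and dict shapes are illustrative):

```python
def summarize(layers):
    """Flatten all per-depth capture lists, then count and sum sizes."""
    captures = [c for layer in layers.values() for c in layer["captures"]]
    return {
        "total_captures": len(captures),
        "total_size_kb": sum(c["size_kb"] for c in captures),
    }


summary = summarize({
    "depth-1": {"captures": [{"size_kb": 245.3}]},
    "depth-2": {"captures": [{"size_kb": 80.0}, {"size_kb": 74.7}]},
})
# summary["total_captures"] == 3
```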
|
|
||||||
|
|
||||||
## Completion
|
|
||||||
|
|
||||||
### Todo Update
|
|
||||||
```javascript
|
|
||||||
all_todos_completed = true
|
|
||||||
TodoWrite({todos: all_completed_todos})
|
|
||||||
```
|
|
||||||
|
|
||||||
### Output Message
|
|
||||||
```
|
|
||||||
✅ Interactive layer exploration complete!
|
|
||||||
|
|
||||||
Configuration:
|
|
||||||
- URL: {url}
|
|
||||||
- Max depth: {depth}
|
|
||||||
- Layers explored: {len(layer_map.layers)}
|
|
||||||
|
|
||||||
Capture Summary:
|
|
||||||
Depth 1 (Page): {depth_1_count} screenshot(s)
|
|
||||||
Depth 2 (Elements): {depth_2_count} screenshot(s)
|
|
||||||
Depth 3 (Interactions): {depth_3_count} screenshot(s)
|
|
||||||
Depth 4 (Embedded): {depth_4_count} screenshot(s)
|
|
||||||
Depth 5 (Shadow DOM): {depth_5_count} screenshot(s)
|
|
||||||
|
|
||||||
Total: {total_captures} captures ({total_size_kb:.1f} KB)
|
|
||||||
|
|
||||||
Output Structure:
|
|
||||||
{base_path}/screenshots/
|
|
||||||
├── depth-1/
|
|
||||||
│ └── full-page.png
|
|
||||||
├── depth-2/
|
|
||||||
│ ├── navbar.png
|
|
||||||
│ └── footer.png
|
|
||||||
├── depth-3/
|
|
||||||
│ ├── modal-login.png
|
|
||||||
│ └── dropdown-menu.png
|
|
||||||
├── depth-4/
|
|
||||||
│ └── iframe-analytics.png
|
|
||||||
├── depth-5/
|
|
||||||
│ └── shadow-button.png
|
|
||||||
└── layer-map.json
|
|
||||||
|
|
||||||
Next: /workflow:ui-design:extract --images "screenshots/**/*.png"
|
|
||||||
```
|
|
||||||
|
|
||||||
## Simple Bash Commands
|
|
||||||
|
|
||||||
### Directory Setup
|
|
||||||
```bash
|
|
||||||
# Create depth directories
|
|
||||||
bash(for i in $(seq 1 $depth); do mkdir -p $BASE_PATH/screenshots/depth-$i; done)
|
|
||||||
```
|
|
||||||
|
|
||||||
### Validation
|
|
||||||
```bash
|
|
||||||
# Check MCP
|
|
||||||
all_resources = ListMcpResourcesTool()
|
|
||||||
|
|
||||||
# Count captures per depth
|
|
||||||
bash(ls $base_path/screenshots/depth-{1..5}/*.png 2>/dev/null | wc -l)
|
|
||||||
```
|
|
||||||
|
|
||||||
### File Operations
|
|
||||||
```bash
|
|
||||||
# List all captures
|
|
||||||
bash(find $base_path/screenshots -name "*.png" -type f)
|
|
||||||
|
|
||||||
# Total size
|
|
||||||
bash(du -sh $base_path/screenshots)
|
|
||||||
```
|
|
||||||
|
|
||||||
## Output Structure
|
|
||||||
|
|
||||||
```
|
|
||||||
{base_path}/screenshots/
|
|
||||||
├── depth-1/
|
|
||||||
│ └── full-page.png
|
|
||||||
├── depth-2/
|
|
||||||
│ ├── {element}.png
|
|
||||||
│ └── ...
|
|
||||||
├── depth-3/
|
|
||||||
│ ├── {interaction}.png
|
|
||||||
│ └── ...
|
|
||||||
├── depth-4/
|
|
||||||
│ ├── iframe-*.png
|
|
||||||
│ └── ...
|
|
||||||
├── depth-5/
|
|
||||||
│ ├── shadow-*.png
|
|
||||||
│ └── ...
|
|
||||||
└── layer-map.json
|
|
||||||
```
|
|
||||||
|
|
||||||
## Depth Level Details
|
|
||||||
|
|
||||||
| Depth | Name | Captures | Time | Use Case |
|
|
||||||
|-------|------|----------|------|----------|
|
|
||||||
| 1 | Page | Full page | 30s | Quick preview |
|
|
||||||
| 2 | Elements | Key components | 1-2min | Component library |
|
|
||||||
| 3 | Interactions | Modals, dropdowns | 2-4min | UI flows |
|
|
||||||
| 4 | Embedded | Iframes | 3-6min | Complete context |
|
|
||||||
| 5 | Shadow DOM | Web components | 4-8min | Full coverage |

## Error Handling

### Common Errors
```
ERROR: Chrome DevTools MCP required
→ Install: npm i -g @modelcontextprotocol/server-chrome-devtools

ERROR: Invalid depth
→ Use: 1-5

ERROR: Interaction trigger failed
→ Some modals may be skipped, check layer-map.json
```

### Recovery
- **Partial success**: Lower depth captures preserved
- **Trigger failures**: Interaction layer may be incomplete
- **Iframe restrictions**: Cross-origin iframes skipped
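
A quick way to assess a partial run is to count what each depth directory actually received; a minimal sketch, assuming the directory layout from this document (the `base_path` default is illustrative, not a documented setting):

```bash
# Report per-depth capture counts so a partially failed run can be assessed.
# base_path is an illustrative default, not a documented setting.
base_path="${base_path:-.workflow/design-run-demo}"
for d in 1 2 3 4 5; do
  count=$(find "$base_path/screenshots/depth-$d" -name '*.png' 2>/dev/null | wc -l)
  echo "depth-$d: $count capture(s)"
done
```

An empty higher depth with populated lower depths is consistent with the "partial success" behavior above.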

## Quality Checklist

- [ ] All depths up to specified level captured
- [ ] layer-map.json generated with metadata
- [ ] File sizes valid (> 500 bytes)
- [ ] Interaction triggers executed
- [ ] Shadow DOM elements highlighted
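
The file-size item can be checked mechanically with `find -size`; a minimal sketch using the 500-byte threshold from the checklist (`base_path` is illustrative):

```bash
# List captures smaller than 500 bytes -- likely blank or failed screenshots.
# -size -500c selects files smaller than 500 bytes ("c" = byte units).
base_path="${base_path:-.workflow/design-run-demo}"
find "$base_path/screenshots" -name '*.png' -size -500c 2>/dev/null
```

Any path this prints is a candidate for re-capture.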

## Key Features

- **Depth-controlled**: Progressive capture 1-5 levels
- **Interactive triggers**: Click/hover for hidden layers
- **Iframe support**: Embedded content captured
- **Shadow DOM**: Web component internals
- **Structured output**: Organized by depth

## Integration

**Input**: Single URL + depth level (1-5)
**Output**: Hierarchical screenshots + layer-map.json
**Complements**: `/workflow:ui-design:capture` (multi-URL batch)
**Next**: `/workflow:ui-design:extract` for design analysis
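
Before handing off to extract, it is worth confirming the run's primary output exists and parses; a hedged sketch (paths illustrative; `python3 -m json.tool` is used purely as a generic JSON validator, not something this workflow mandates):

```bash
# Sanity-check outputs before /workflow:ui-design:extract:
# layer-map.json must exist and be valid JSON.
base_path="${base_path:-.workflow/design-run-demo}"
map="$base_path/screenshots/layer-map.json"
if [ -f "$map" ] && python3 -m json.tool "$map" > /dev/null 2>&1; then
  echo "layer-map.json OK"
else
  echo "layer-map.json missing or invalid" >&2
fi
```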
@@ -31,7 +31,7 @@ if [ -n "$DESIGN_ID" ]; then
   relative_path=$(find .workflow -name "${DESIGN_ID}" -type d -print -quit)
 elif [ -n "$SESSION_ID" ]; then
   # Latest in session
-  relative_path=$(find .workflow/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
+  relative_path=$(find .workflow/sessions/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
 else
   # Latest globally
   relative_path=$(find .workflow -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
@@ -67,7 +67,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Write(*), Bash(*)
 **Optional Parameters**:
 - `--session <id>`: Workflow session ID
-  - Integrate into existing session (`.workflow/WFS-{session}/`)
+  - Integrate into existing session (`.workflow/sessions/WFS-{session}/`)
   - Enable automatic design system integration (Phase 4)
   - If not provided: standalone mode (`.workflow/`)

@@ -184,7 +184,7 @@ design_id = "design-run-$(date +%Y%m%d)-$RANDOM"
 IF --session:
   session_id = {provided_session}
-  relative_base_path = ".workflow/WFS-{session_id}/{design_id}"
+  relative_base_path = ".workflow/sessions/WFS-{session_id}/{design_id}"
   session_mode = "integrated"
 ELSE:
   session_id = null
@@ -61,7 +61,7 @@ if [ -n "$DESIGN_ID" ]; then
 fi
 elif [ -n "$SESSION_ID" ]; then
   # Latest in session
-  relative_path=$(find .workflow/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
+  relative_path=$(find .workflow/sessions/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
   if [ -z "$relative_path" ]; then
     echo "ERROR: No design run found in session: $SESSION_ID"
     echo "HINT: Create a design run first or provide --design-id"
@@ -84,7 +84,7 @@ if [ -n "$DESIGN_ID" ]; then
   relative_path=$(find .workflow -name "${DESIGN_ID}" -type d -print -quit)
 elif [ -n "$SESSION_ID" ]; then
   # Latest in session
-  relative_path=$(find .workflow/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
+  relative_path=$(find .workflow/sessions/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
 else
   # Latest globally
   relative_path=$(find .workflow -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
@@ -1,174 +0,0 @@
---
name: list
description: List all available design runs with metadata (session, created time, prototype count)
argument-hint: [--session <id>]
allowed-tools: Bash(*), Read(*)
---

# List Design Runs (/workflow:ui-design:list)

## Overview
List all available UI design runs across sessions or within a specific session. Displays design IDs with metadata for easy reference.

**Output**: Formatted list with design-id, session, created time, and prototype count

## Implementation

### Step 1: Determine Search Scope
```bash
# Priority: --session > all sessions
search_path=$(if [ -n "$SESSION_ID" ]; then
  echo ".workflow/WFS-$SESSION_ID"
else
  echo ".workflow"
fi)
```

### Step 2: Find and Display Design Runs
```bash
echo "Available design runs:"
echo ""

# Find all design-run directories
found_count=0
while IFS= read -r line; do
  timestamp=$(echo "$line" | cut -d' ' -f1)
  path=$(echo "$line" | cut -d' ' -f2-)

  # Extract design_id from path
  design_id=$(basename "$path")

  # Extract session from path
  session_id=$(echo "$path" | grep -oP 'WFS-\K[^/]+' || echo "standalone")

  # Format created date
  created_at=$(date -d "@${timestamp%.*}" '+%Y-%m-%d %H:%M' 2>/dev/null || echo "unknown")

  # Count prototypes
  prototype_count=$(find "$path/prototypes" -name "*.html" 2>/dev/null | wc -l)

  # Display formatted output
  echo "  - $design_id"
  echo "    Session: $session_id"
  echo "    Created: $created_at"
  echo "    Prototypes: $prototype_count"
  echo ""

  found_count=$((found_count + 1))
done < <(find "$search_path" -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr)

# Summary
if [ $found_count -eq 0 ]; then
  echo "  No design runs found."
  echo ""
  if [ -n "$SESSION_ID" ]; then
    echo "💡 HINT: Try running '/workflow:ui-design:explore-auto' to create a design run"
  else
    echo "💡 HINT: Try running '/workflow:ui-design:explore-auto --session <id>' to create a design run"
  fi
else
  echo "Total: $found_count design run(s)"
  echo ""
  echo "💡 USE: /workflow:ui-design:generate --design-id \"<id>\""
  echo "  OR: /workflow:ui-design:generate --session \"<session>\""
fi
```

### Step 3: Execute List Command
```bash
Bash(
  description: "List all UI design runs with metadata",
  command: "
    search_path=\"${search_path}\"
    SESSION_ID=\"${SESSION_ID:-}\"

    echo 'Available design runs:'
    echo ''

    found_count=0
    while IFS= read -r line; do
      timestamp=\$(echo \"\$line\" | cut -d' ' -f1)
      path=\$(echo \"\$line\" | cut -d' ' -f2-)

      design_id=\$(basename \"\$path\")
      session_id=\$(echo \"\$path\" | grep -oP 'WFS-\\K[^/]+' || echo 'standalone')
      created_at=\$(date -d \"@\${timestamp%.*}\" '+%Y-%m-%d %H:%M' 2>/dev/null || echo 'unknown')
      prototype_count=\$(find \"\$path/prototypes\" -name '*.html' 2>/dev/null | wc -l)

      echo \"  - \$design_id\"
      echo \"    Session: \$session_id\"
      echo \"    Created: \$created_at\"
      echo \"    Prototypes: \$prototype_count\"
      echo ''

      found_count=\$((found_count + 1))
    done < <(find \"\$search_path\" -name 'design-run-*' -type d -printf '%T@ %p\\n' 2>/dev/null | sort -nr)

    if [ \$found_count -eq 0 ]; then
      echo '  No design runs found.'
      echo ''
      if [ -n \"\$SESSION_ID\" ]; then
        echo '💡 HINT: Try running \\'/workflow:ui-design:explore-auto\\' to create a design run'
      else
        echo '💡 HINT: Try running \\'/workflow:ui-design:explore-auto --session <id>\\' to create a design run'
      fi
    else
      echo \"Total: \$found_count design run(s)\"
      echo ''
      echo '💡 USE: /workflow:ui-design:generate --design-id \"<id>\"'
      echo '  OR: /workflow:ui-design:generate --session \"<session>\"'
    fi
  "
)
```

## Example Output

### With Session Filter
```
$ /workflow:ui-design:list --session ui-redesign

Available design runs:

  - design-run-20250109-143052
    Session: ui-redesign
    Created: 2025-01-09 14:30
    Prototypes: 12

  - design-run-20250109-101534
    Session: ui-redesign
    Created: 2025-01-09 10:15
    Prototypes: 6

Total: 2 design run(s)

💡 USE: /workflow:ui-design:generate --design-id "<id>"
  OR: /workflow:ui-design:generate --session "<session>"
```

### All Sessions
```
$ /workflow:ui-design:list

Available design runs:

  - design-run-20250109-143052
    Session: ui-redesign
    Created: 2025-01-09 14:30
    Prototypes: 12

  - design-run-20250108-092314
    Session: landing-page
    Created: 2025-01-08 09:23
    Prototypes: 3

  - design-run-20250107-155623
    Session: standalone
    Created: 2025-01-07 15:56
    Prototypes: 8

Total: 3 design run(s)

💡 USE: /workflow:ui-design:generate --design-id "<id>"
  OR: /workflow:ui-design:generate --session "<session>"
```
@@ -62,7 +62,7 @@ if [ -n "$DESIGN_ID" ]; then
   relative_path=$(find .workflow -name "${DESIGN_ID}" -type d -print -quit)
 elif [ -n "$SESSION_ID" ]; then
   # Latest in session
-  relative_path=$(find .workflow/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
+  relative_path=$(find .workflow/sessions/WFS-$SESSION_ID -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
 else
   # Latest globally
   relative_path=$(find .workflow -name "design-run-*" -type d -printf "%T@ %p\n" 2>/dev/null | sort -nr | head -1 | cut -d' ' -f2)
@@ -133,8 +133,8 @@ All task files use this simplified 5-field schema:
 ### Active Session Detection
 ```bash
-# Check for active session marker
-active_session=$(ls .workflow/.active-* 2>/dev/null | head -1)
+# Check for active session in sessions directory
+active_session=$(find .workflow/sessions/ -name 'WFS-*' -type d 2>/dev/null | head -1)
 ```

 ### Workflow Context Inheritance
@@ -144,10 +144,10 @@ Tasks inherit from:
 3. `IMPL_PLAN.md` - Planning document

 ### File Locations
-- **Task JSON**: `.workflow/WFS-[topic]/.task/IMPL-*.json` (uppercase required)
-- **Session State**: `.workflow/WFS-[topic]/workflow-session.json`
-- **Planning Doc**: `.workflow/WFS-[topic]/IMPL_PLAN.md`
-- **Progress**: `.workflow/WFS-[topic]/TODO_LIST.md`
+- **Task JSON**: `.workflow/sessions/WFS-[topic]/.task/IMPL-*.json` (uppercase required)
+- **Session State**: `.workflow/sessions/WFS-[topic]/workflow-session.json`
+- **Planning Doc**: `.workflow/sessions/WFS-[topic]/IMPL_PLAN.md`
+- **Progress**: `.workflow/sessions/WFS-[topic]/TODO_LIST.md`

 ## Agent Mapping

@@ -24,48 +24,50 @@ This document defines the complete workflow system architecture using a **JSON-o
 ## Session Management

-### Active Session Marker System
-**Ultra-Simple Active Tracking**: `.workflow/.active-[session-name]`
+### Directory-Based Session Management
+**Simple Location-Based Tracking**: Sessions in `.workflow/sessions/` directory

 ```bash
 .workflow/
-├── WFS-oauth-integration/       # Session directory (paused)
-├── WFS-user-profile/            # Session directory (paused)
-├── WFS-bug-fix-123/             # Session directory (completed)
-└── .active-WFS-user-profile     # Marker file (indicates active session)
+├── sessions/
+│   ├── WFS-oauth-integration/   # Session directory (active or paused)
+│   ├── WFS-user-profile/        # Session directory (active or paused)
+│   └── WFS-bug-fix-123/         # Session directory (completed)
+└── archives/
+    └── WFS-old-feature/         # Archived session (completed)
 ```

-**Marker File Benefits**:
-- **Zero Parsing**: File existence check is atomic and instant
-- **Atomic Operations**: File creation/deletion is naturally atomic
-- **Visual Discovery**: `ls .workflow/.active-*` shows active session immediately
-- **Simple Switching**: Delete old marker + create new marker = session switch
+**Directory-Based Benefits**:
+- **Simple Discovery**: Session location determines state (sessions/ = active/paused, archives/ = completed)
+- **No Marker Files**: Location is the state
+- **Clean Structure**: Clear separation between active and completed sessions
+- **Easy Migration**: Move between sessions/ and archives/ to change state

 ### Session Operations

 #### Detect Active Session(s)
 ```bash
-active_sessions=$(find .workflow -name ".active-*" 2>/dev/null)
+active_sessions=$(find .workflow/sessions/ -name "WFS-*" -type d 2>/dev/null)
 count=$(echo "$active_sessions" | wc -l)

 if [ -z "$active_sessions" ]; then
   echo "No active session"
 elif [ "$count" -eq 1 ]; then
-  session_name=$(basename "$active_sessions" | sed 's/^\.active-//')
+  session_name=$(basename "$active_sessions")
   echo "Active session: $session_name"
 else
-  echo "Multiple active sessions found:"
-  echo "$active_sessions" | while read marker; do
-    session=$(basename "$marker" | sed 's/^\.active-//')
+  echo "Multiple sessions found:"
+  echo "$active_sessions" | while read session_dir; do
+    session=$(basename "$session_dir")
     echo "  - $session"
   done
   echo "Please specify which session to work with"
 fi
 ```

-#### Switch Session
+#### Archive Session
 ```bash
-find .workflow -name ".active-*" -delete && touch .workflow/.active-WFS-new-feature
+mv .workflow/sessions/WFS-feature .workflow/archives/WFS-feature
 ```

 ### Session State Tracking
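
The location-based scheme replaces marker bookkeeping entirely; a minimal end-to-end sketch with throwaway directories (names illustrative):

```bash
# Create a session, detect it by location, then archive it by moving it.
root=$(mktemp -d)
mkdir -p "$root/.workflow/sessions/WFS-demo" "$root/.workflow/archives"

# Detection: any WFS-* directory under sessions/ is active or paused
find "$root/.workflow/sessions" -maxdepth 1 -name 'WFS-*' -type d

# Archival: moving the directory is the entire state change
mv "$root/.workflow/sessions/WFS-demo" "$root/.workflow/archives/WFS-demo"
```

No marker files are touched at any point; state is read back by listing the two directories.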
@@ -707,41 +709,44 @@ All workflows use the same file structure definition regardless of complexity. *
 │   │   └── index.html           # Navigation page
 │   └── .run-metadata.json       # Run configuration
 │
-└── WFS-[topic-slug]/
-    ├── workflow-session.json    # Session metadata and state (REQUIRED)
-    ├── [.brainstorming/]        # Optional brainstorming phase (created when needed)
-    ├── [.chat/]                 # CLI interaction sessions (created when analysis is run)
-    │   ├── chat-*.md            # Saved chat sessions
-    │   └── analysis-*.md        # Analysis results
-    ├── [.process/]              # Planning analysis results (created by /workflow:plan)
-    │   └── ANALYSIS_RESULTS.md  # Analysis results and planning artifacts
-    ├── IMPL_PLAN.md             # Planning document (REQUIRED)
-    ├── TODO_LIST.md             # Progress tracking (REQUIRED)
-    ├── [.summaries/]            # Task completion summaries (created when tasks complete)
-    │   ├── IMPL-*-summary.md    # Main task summaries
-    │   └── IMPL-*.*-summary.md  # Subtask summaries
-    ├── [design-*/]              # UI design outputs (created by ui-design workflows)
-    │   ├── .intermediates/      # Intermediate analysis files
-    │   │   ├── style-analysis/  # Style analysis data
-    │   │   │   ├── computed-styles.json        # Extracted CSS values
-    │   │   │   └── design-space-analysis.json  # Design directions
-    │   │   └── layout-analysis/ # Layout analysis data
-    │   │       ├── dom-structure-{target}.json # DOM extraction
-    │   │       └── inspirations/               # Layout research
-    │   │           └── {target}-layout-ideas.txt
-    │   ├── style-extraction/    # Final design systems
-    │   │   ├── style-1/         # design-tokens.json, style-guide.md
-    │   │   └── style-N/
-    │   ├── layout-extraction/   # Layout templates
-    │   │   └── layout-templates.json
-    │   ├── prototypes/          # Generated HTML/CSS prototypes
-    │   │   ├── {target}-style-{s}-layout-{l}.html  # Final prototypes
-    │   │   ├── compare.html     # Interactive matrix view
-    │   │   └── index.html       # Navigation page
-    │   └── .run-metadata.json   # Run configuration
-    └── .task/                   # Task definitions (REQUIRED)
-        ├── IMPL-*.json          # Main task definitions
-        └── IMPL-*.*.json        # Subtask definitions (created dynamically)
+├── sessions/                    # Active/paused workflow sessions
+│   └── WFS-[topic-slug]/
+│       ├── workflow-session.json    # Session metadata and state (REQUIRED)
+│       ├── [.brainstorming/]        # Optional brainstorming phase (created when needed)
+│       ├── [.chat/]                 # CLI interaction sessions (created when analysis is run)
+│       │   ├── chat-*.md            # Saved chat sessions
+│       │   └── analysis-*.md        # Analysis results
+│       ├── [.process/]              # Planning analysis results (created by /workflow:plan)
+│       │   └── ANALYSIS_RESULTS.md  # Analysis results and planning artifacts
+│       ├── IMPL_PLAN.md             # Planning document (REQUIRED)
+│       ├── TODO_LIST.md             # Progress tracking (REQUIRED)
+│       ├── [.summaries/]            # Task completion summaries (created when tasks complete)
+│       │   ├── IMPL-*-summary.md    # Main task summaries
+│       │   └── IMPL-*.*-summary.md  # Subtask summaries
+│       ├── [design-*/]              # UI design outputs (created by ui-design workflows)
+│       │   ├── .intermediates/      # Intermediate analysis files
+│       │   │   ├── style-analysis/  # Style analysis data
+│       │   │   │   ├── computed-styles.json        # Extracted CSS values
+│       │   │   │   └── design-space-analysis.json  # Design directions
+│       │   │   └── layout-analysis/ # Layout analysis data
+│       │   │       ├── dom-structure-{target}.json # DOM extraction
+│       │   │       └── inspirations/               # Layout research
+│       │   │           └── {target}-layout-ideas.txt
+│       │   ├── style-extraction/    # Final design systems
+│       │   │   ├── style-1/         # design-tokens.json, style-guide.md
+│       │   │   └── style-N/
+│       │   ├── layout-extraction/   # Layout templates
+│       │   │   └── layout-templates.json
+│       │   ├── prototypes/          # Generated HTML/CSS prototypes
+│       │   │   ├── {target}-style-{s}-layout-{l}.html  # Final prototypes
+│       │   │   ├── compare.html     # Interactive matrix view
+│       │   │   └── index.html       # Navigation page
+│       │   └── .run-metadata.json   # Run configuration
+│       └── .task/                   # Task definitions (REQUIRED)
+│           ├── IMPL-*.json          # Main task definitions
+│           └── IMPL-*.*.json        # Subtask definitions (created dynamically)
+└── archives/                    # Completed workflow sessions
+    └── WFS-[completed-topic]/   # Archived session directories
 ```

 #### Creation Strategy
@@ -763,8 +768,8 @@ All workflows use the same file structure definition regardless of complexity. *
 4. **One-Off Queries**: Standalone questions or debugging without workflow context

 **Output Routing Logic**:
-- **IF** active session exists AND command is session-relevant:
-  - Save to `.workflow/WFS-[id]/.chat/[command]-[timestamp].md`
+- **IF** active session exists in `.workflow/sessions/` AND command is session-relevant:
+  - Save to `.workflow/sessions/WFS-[id]/.chat/[command]-[timestamp].md`
 - **ELSE** (no session OR one-off analysis):
   - Save to `.workflow/.scratchpad/[command]-[description]-[timestamp].md`
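
The routing rule can be sketched as a small helper; paths follow this document, while the `analyze` command name is an illustrative placeholder:

```bash
# Choose an output path based on whether a session directory exists.
# "analyze" stands in for any session-relevant command.
workflow_root="${workflow_root:-.workflow}"
session_dir=$(find "$workflow_root/sessions" -maxdepth 1 -name 'WFS-*' -type d 2>/dev/null | head -1)
ts=$(date +%Y%m%d-%H%M%S)
if [ -n "$session_dir" ]; then
  out="$session_dir/.chat/analyze-$ts.md"                # session-relevant output
else
  out="$workflow_root/.scratchpad/analyze-adhoc-$ts.md"  # one-off output
fi
echo "$out"
```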
@@ -834,10 +839,10 @@ All workflows use the same file structure definition regardless of complexity. *
 ### Session Management
 ```bash
 # Create minimal required structure
-mkdir -p .workflow/WFS-topic-slug/.task
-echo '{"session_id":"WFS-topic-slug",...}' > .workflow/WFS-topic-slug/workflow-session.json
-echo '# Implementation Plan' > .workflow/WFS-topic-slug/IMPL_PLAN.md
-echo '# Tasks' > .workflow/WFS-topic-slug/TODO_LIST.md
+mkdir -p .workflow/sessions/WFS-topic-slug/.task
+echo '{"session_id":"WFS-topic-slug",...}' > .workflow/sessions/WFS-topic-slug/workflow-session.json
+echo '# Implementation Plan' > .workflow/sessions/WFS-topic-slug/IMPL_PLAN.md
+echo '# Tasks' > .workflow/sessions/WFS-topic-slug/TODO_LIST.md
 ```

 ### Task Operations
@@ -861,23 +866,21 @@ mkdir -p .summaries  # When first task completes
 ### Session Consistency Checks & Recovery
 ```bash
-# Validate active session integrity
-active_marker=$(find .workflow -name ".active-*" | head -1)
-if [ -n "$active_marker" ]; then
-  session_name=$(basename "$active_marker" | sed 's/^\.active-//')
-  session_dir=".workflow/$session_name"
-  if [ ! -d "$session_dir" ]; then
-    echo "⚠️ Orphaned active marker, removing..."
-    rm "$active_marker"
-  fi
+# Validate session directory structure
+if [ -d ".workflow/sessions/" ]; then
+  for session_dir in .workflow/sessions/WFS-*; do
+    if [ ! -f "$session_dir/workflow-session.json" ]; then
+      echo "⚠️ Missing workflow-session.json in $session_dir"
+    fi
+  done
 fi
 ```

 **Recovery Strategies**:
-- **Missing Session Directory**: Remove orphaned active marker
-- **Multiple Active Markers**: Keep newest, remove others
-- **Corrupted Session File**: Recreate from template
-- **Broken Task Hierarchy**: Reconstruct parent-child relationships
+- **Missing Session File**: Recreate workflow-session.json from template
+- **Corrupted Session File**: Restore from template with basic metadata
+- **Broken Task Hierarchy**: Reconstruct parent-child relationships from task JSON files
+- **Orphaned Sessions**: Move incomplete sessions to archives/

 ## Complexity Classification