Mirror of https://github.com/catlog22/Claude-Code-Workflow.git
Synced 2026-02-06 01:54:11 +08:00

Compare commits: claude/add... → v5.9.1
37 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 3c28c61bea | |
| | b0b99a4217 | |
| | 4f533f6fd5 | |
| | 530c348e95 | |
| | a98b26b111 | |
| | 9f7e33cbde | |
| | a25464ce28 | |
| | 0a3f2a5b03 | |
| | 1929b7f72d | |
| | b8889d99c9 | |
| | a79a3221ce | |
| | 67c18d1b03 | |
| | 2301f263cd | |
| | 8d828e8762 | |
| | b573450821 | |
| | 229a9867e6 | |
| | 6fe31cc408 | |
| | 196951ff4f | |
| | 61c08e1585 | |
| | 07caf20e0d | |
| | 1e9ca574ed | |
| | d0ceb835b5 | |
| | fad32d7caf | |
| | 806b782b03 | |
| | a62bbd6a7f | |
| | 2a7d55264d | |
| | 837bee79c7 | |
| | d8ead86b67 | |
| | 8c2a7b6983 | |
| | f5ca033ee8 | |
| | 842ed624e8 | |
| | 4693527a8e | |
| | 5f0dab409b | |
| | c679253c30 | |
| | 38f2355573 | |
| | 2fb1015038 | |
| | d7bee9bdf2 | |
126
.claude/commands/cli/mode/document-analysis.md
Normal file
@@ -0,0 +1,126 @@
---
name: document-analysis
description: Read-only technical document/paper analysis using Gemini/Qwen/Codex with systematic comprehension template for insights extraction
argument-hint: "[--tool codex|gemini|qwen] [--enhance] [--cd path] document path or topic"
allowed-tools: SlashCommand(*), Bash(*), Task(*), Read(*)
---

# CLI Mode: Document Analysis (/cli:mode:document-analysis)

## Purpose

Systematic analysis of technical documents, research papers, API documentation, and technical specifications.

**Tool Selection**:
- **gemini** (default) - Best for document comprehension and structure analysis
- **qwen** - Fallback when Gemini unavailable
- **codex** - Alternative for complex technical documents

**Key Feature**: `--cd` flag for directory-scoped document discovery

## Parameters

- `--tool <gemini|qwen|codex>` - Tool selection (default: gemini)
- `--enhance` - Enhance analysis target with `/enhance-prompt`
- `--cd "path"` - Target directory for document search
- `<document-path-or-topic>` (Required) - File path or topic description

## Tool Usage

**Gemini** (Primary):
```bash
/cli:mode:document-analysis "README.md"
/cli:mode:document-analysis --tool gemini "analyze API documentation"
```

**Qwen** (Fallback):
```bash
/cli:mode:document-analysis --tool qwen "docs/architecture.md"
```

**Codex** (Alternative):
```bash
/cli:mode:document-analysis --tool codex "research paper in docs/"
```

## Execution Flow

Uses **cli-execution-agent** for automated document analysis:

```javascript
Task(
  subagent_type="cli-execution-agent",
  description="Systematic document comprehension and insights extraction",
  prompt=`
    Task: ${document_path_or_topic}
    Mode: document-analysis
    Tool: ${tool_flag || 'gemini'}
    Directory: ${cd_path || '.'}
    Enhance: ${enhance_flag}
    Template: ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-technical-document.txt

    Execute systematic document analysis:

    1. Document Discovery:
       - Locate target document(s) via path or topic keywords
       - Identify document type (README, API docs, research paper, spec, tutorial)
       - Detect document format (Markdown, PDF, plain text, reStructuredText)
       - Discover related documents (references, appendices, examples)
       - Use MCP/ripgrep for comprehensive file discovery

    2. Pre-Analysis Planning (Required):
       - Determine document structure (sections, hierarchy, flow)
       - Identify key components (abstract, methodology, implementation details)
       - Map dependencies and cross-references
       - Assess document scope and complexity
       - Plan analysis approach based on document type

    3. CLI Command Construction:
       - Tool: ${tool_flag || 'gemini'} (qwen fallback, codex for complex docs)
       - Directory: cd ${cd_path || '.'} &&
       - Context: @{document_paths} + @CLAUDE.md + related files
       - Mode: analysis (read-only)
       - Template: analysis/02-analyze-technical-document.txt

    4. Analysis Execution:
       - Apply 6-field template structure (PURPOSE, TASK, MODE, CONTEXT, EXPECTED, RULES)
       - Execute multi-phase analysis protocol with pre-planning
       - Perform self-critique before final output
       - Generate structured report with evidence-based insights

    5. Output Generation:
       - Comprehensive document analysis report
       - Structured insights with section references
       - Critical assessment with evidence
       - Actionable recommendations
       - Save to .workflow/active/WFS-[id]/.chat/doc-analysis-[timestamp].md (or .scratchpad/)
  `
)
```

## Core Rules

- **Read-only**: Analyzes documents, does NOT modify files
- **Evidence-based**: All claims must reference specific sections/pages
- **Pre-planning**: Requires analysis approach planning before execution
- **Precise language**: Direct, accurate wording - no persuasive embellishment
- **Output**: `.workflow/active/WFS-[id]/.chat/doc-analysis-[timestamp].md` (or `.scratchpad/` if no session)

## Document Types Supported

| Type | Focus Areas | Key Outputs |
|------|-------------|-------------|
| README | Purpose, setup, usage | Integration steps, quick-start guide |
| API Documentation | Endpoints, parameters, responses | API usage patterns, integration points |
| Research Paper | Methodology, findings, validity | Applicable techniques, implementation feasibility |
| Specification | Requirements, standards, constraints | Compliance checklist, implementation requirements |
| Tutorial | Learning path, examples, exercises | Key concepts, practical applications |
| Architecture Docs | System design, components, patterns | Design decisions, integration points, trade-offs |

## Best Practices

1. **Scope Definition**: Clearly define what aspects to analyze before starting
2. **Layered Reading**: Structure/Overview → Details → Critical Analysis → Synthesis
3. **Evidence Trail**: Track section references for all extracted information
4. **Gap Identification**: Note missing information or unclear sections explicitly
5. **Actionable Output**: Focus on insights that inform decisions or actions
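The tool-selection order above (gemini by default, qwen as fallback, codex for complex documents) can be sketched as a small helper. `pickDocTool` and its `available` list are illustrative names for this sketch, not part of the command itself:

```javascript
// Hypothetical sketch of the documented tool-selection order:
// an explicit --tool flag wins; otherwise gemini, then qwen, then codex.
function pickDocTool(toolFlag, available) {
  if (toolFlag) return toolFlag               // user override via --tool
  const order = ["gemini", "qwen", "codex"]   // documented preference order
  return order.find(t => available.includes(t)) || null
}

// Example: gemini unavailable → falls back to qwen
console.log(pickDocTool(null, ["qwen", "codex"])) // → "qwen"
```

The same helper returns `null` when no tool is available, leaving error handling to the caller.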
@@ -44,7 +44,11 @@ Lightweight planner that analyzes project structure, decomposes documentation wo

/memory:docs [path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]
```

- **path**: Source directory to analyze (default: current directory)
  - Specifies the source code directory to be documented
  - Documentation is generated in a separate `.workflow/docs/{project_name}/` directory at the workspace root, **not** within the source `path` itself
  - The source path's structure is mirrored within the project-specific documentation folder
  - Example: analyzing `src/modules` produces documentation at `.workflow/docs/{project_name}/src/modules/`
- **--mode**: Documentation generation mode (default: full)
  - `full`: Complete documentation (modules + README + ARCHITECTURE + EXAMPLES + HTTP API)
  - `partial`: Module documentation only (API.md + README.md)
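The path-mirroring rule above can be sketched as a one-line mapping; `docsPath` is a hypothetical helper for illustration, not part of the command:

```javascript
// Hypothetical sketch of the documented output-path rule:
// docs for a source path land under .workflow/docs/{project_name}/{path},
// mirroring the source structure instead of writing into the source tree.
function docsPath(projectName, sourcePath) {
  const clean = sourcePath.replace(/^\.\//, "").replace(/\/+$/, "")
  return `.workflow/docs/${projectName}/${clean}`
}

console.log(docsPath("my-project", "src/modules")) // → ".workflow/docs/my-project/src/modules"
```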
@@ -54,13 +54,64 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag

### Phase 1: Discovery
**Applies to**: Normal mode only (skipped in resume mode)

**Purpose**: Find and select active workflow session with user confirmation when multiple sessions exist

**Process**:

#### Step 1.1: Count Active Sessions
```bash
bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | wc -l)
```

#### Step 1.2: Handle Session Selection

**Case A: No Sessions** (count = 0)
```
ERROR: No active workflow sessions found
Run /workflow:plan "task description" to create a session
```

**Case B: Single Session** (count = 1)
```bash
bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | head -1 | xargs basename)
```
Auto-select and continue to Phase 2.

**Case C: Multiple Sessions** (count > 1)

List sessions with metadata and prompt user selection:
```bash
bash(for dir in .workflow/active/WFS-*/; do
  session=$(basename "$dir")
  project=$(jq -r '.project // "Unknown"' "$dir/workflow-session.json" 2>/dev/null)
  total=$(grep -c "^- \[" "$dir/TODO_LIST.md" 2>/dev/null || echo "0")
  completed=$(grep -c "^- \[x\]" "$dir/TODO_LIST.md" 2>/dev/null || echo "0")
  [ "$total" -gt 0 ] && progress=$((completed * 100 / total)) || progress=0
  echo "${session} | ${project} | ${completed}/${total} tasks (${progress}%)"
done)
```

Use AskUserQuestion to present formatted options:
```
Multiple active workflow sessions detected. Please select one:

1. WFS-auth-system | Authentication System | 3/5 tasks (60%)
2. WFS-payment-module | Payment Integration | 0/8 tasks (0%)

Enter number, full session ID, or partial match:
```

Parse user input (supports: number "1", full ID "WFS-auth-system", or partial "auth"), validate selection, and continue to Phase 2.
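The three accepted input forms (number, full ID, unique partial match) could be resolved along these lines; `resolveSession` is an illustrative sketch, not the command's actual implementation:

```javascript
// Hypothetical sketch: resolve user input against the listed session IDs.
// Accepts a 1-based number, an exact ID, or a unique partial match;
// ambiguous or unknown input yields null so the prompt can be repeated.
function resolveSession(input, sessionIds) {
  const n = Number(input)
  if (Number.isInteger(n) && n >= 1 && n <= sessionIds.length) return sessionIds[n - 1]
  if (sessionIds.includes(input)) return input          // full ID
  const partial = sessionIds.filter(id => id.includes(input))
  return partial.length === 1 ? partial[0] : null       // must be unambiguous
}

const ids = ["WFS-auth-system", "WFS-payment-module"]
console.log(resolveSession("1", ids))    // → "WFS-auth-system"
console.log(resolveSession("auth", ids)) // → "WFS-auth-system"
```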

#### Step 1.3: Load Session Metadata
```bash
bash(cat .workflow/active/${sessionId}/workflow-session.json)
```

**Output**: Store session metadata in memory
**DO NOT read task JSONs yet** - defer until execution phase (lazy loading)

**Resume Mode**: This entire phase is skipped when `--resume-session="session-id"` flag is provided.

### Phase 2: Planning Document Analysis
**Applies to**: Normal mode only (skipped in resume mode)

@@ -185,86 +185,104 @@ Execution Complete

previousExecutionResults = []
```

### Step 2: Task Grouping & Batch Creation

**Dependency Analysis & Grouping Algorithm**:
```javascript
// Infer dependencies: same file → sequential, keywords (use/integrate) → sequential
function inferDependencies(tasks) {
  return tasks.map((task, i) => {
    const deps = []
    const file = task.file || task.title.match(/in\s+([^\s:]+)/)?.[1]
    const keywords = (task.description || task.title).toLowerCase()
    for (let j = 0; j < i; j++) {
      const prevFile = tasks[j].file || tasks[j].title.match(/in\s+([^\s:]+)/)?.[1]
      if (file && prevFile === file) deps.push(j) // Same file
      else if (/use|integrate|call|import/.test(keywords)) deps.push(j) // Keyword dependency
    }
    return { ...task, taskIndex: i, dependencies: deps }
  })
}

// Group into batches: independent → parallel [P1,P2...], dependent → sequential [S1,S2...]
function createExecutionCalls(tasks, executionMethod) {
  const tasksWithDeps = inferDependencies(tasks)
  const maxBatch = executionMethod === "Codex" ? 4 : 7
  const calls = []
  const processed = new Set()

  // Parallel: independent tasks, different files, max batch size
  const parallelGroups = []
  tasksWithDeps.forEach(t => {
    if (t.dependencies.length === 0 && !processed.has(t.taskIndex)) {
      const group = [t]
      processed.add(t.taskIndex)
      tasksWithDeps.forEach(o => {
        if (!o.dependencies.length && !processed.has(o.taskIndex) &&
            group.length < maxBatch && t.file !== o.file) {
          group.push(o)
          processed.add(o.taskIndex)
        }
      })
      parallelGroups.push(group)
    }
  })

  // Sequential: dependent tasks, batch when deps satisfied
  const remaining = tasksWithDeps.filter(t => !processed.has(t.taskIndex))
  while (remaining.length > 0) {
    const batch = remaining.filter((t, i) =>
      i < maxBatch && t.dependencies.every(d => processed.has(d))
    )
    if (!batch.length) break
    batch.forEach(t => processed.add(t.taskIndex))
    calls.push({ executionType: "sequential", groupId: `S${calls.length + 1}`, tasks: batch })
    remaining.splice(0, remaining.length, ...remaining.filter(t => !processed.has(t.taskIndex)))
  }

  // Combine results
  return [
    ...parallelGroups.map((g, i) => ({
      method: executionMethod, executionType: "parallel", groupId: `P${i+1}`,
      taskSummary: g.map(t => t.title).join(' | '), tasks: g
    })),
    ...calls.map(c => ({ ...c, method: executionMethod, taskSummary: c.tasks.map(t => t.title).join(' → ') }))
  ]
}

executionCalls = createExecutionCalls(planObject.tasks, executionMethod).map(c => ({ ...c, id: `[${c.groupId}]` }))

// Create TodoWrite list
TodoWrite({
  todos: executionCalls.map(c => ({
    content: `${c.executionType === "parallel" ? "⚡" : "→"} ${c.id} (${c.tasks.length} tasks)`,
    status: "pending",
    activeForm: `Executing ${c.id}`
  }))
})
```

**Example Execution Lists**:
```
Single call (typical):
[ ] [Agent-1] (Create AuthService, Add JWT utilities, Implement middleware)

Few tasks:
[ ] [Codex-1] (Create AuthService, Add JWT utilities, and 3 more)

Large task sets (>10):
[ ] [Agent-1] (Tasks 1-5: Create AuthService, Add JWT utilities, ...)
[ ] [Agent-2] (Tasks 6-10: Create tests, Update docs, ...)
```
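The dependency inference above can be exercised on a concrete task list. This is a trimmed, self-contained copy of the same same-file/keyword heuristics, for illustration only:

```javascript
// Trimmed copy of the same-file / keyword heuristics from the algorithm above.
function inferDependencies(tasks) {
  return tasks.map((task, i) => {
    const deps = []
    const file = task.file || (task.title.match(/in\s+([^\s:]+)/) || [])[1]
    const keywords = (task.description || task.title).toLowerCase()
    for (let j = 0; j < i; j++) {
      const prevFile = tasks[j].file || (tasks[j].title.match(/in\s+([^\s:]+)/) || [])[1]
      if (file && prevFile === file) deps.push(j)                        // same file → sequential
      else if (/use|integrate|call|import/.test(keywords)) deps.push(j)  // keyword → sequential
    }
    return { ...task, taskIndex: i, dependencies: deps }
  })
}

const tasks = [
  { title: "Create AuthService", file: "src/auth.ts" },
  { title: "Add JWT utilities", file: "src/jwt.ts" },
  { title: "Use AuthService in middleware", file: "src/middleware.ts" },
]
const withDeps = inferDependencies(tasks)
console.log(withDeps.map(t => t.dependencies)) // third task depends on both earlier tasks
```

Here the first two tasks are independent (different files, no trigger keywords) and would land in a parallel batch, while the third ("Use ...") becomes sequential.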

### Step 3: Launch Execution

**IMPORTANT**: CLI execution MUST run in foreground (no background execution)

**Execution Flow**: Parallel batches concurrently → Sequential batches in order
```javascript
const parallel = executionCalls.filter(c => c.executionType === "parallel")
const sequential = executionCalls.filter(c => c.executionType === "sequential")

// Phase 1: Launch all parallel batches (single message with multiple tool calls)
if (parallel.length > 0) {
  TodoWrite({ todos: executionCalls.map(c => ({ status: c.executionType === "parallel" ? "in_progress" : "pending" })) })
  parallelResults = await Promise.all(parallel.map(c => executeBatch(c)))
  previousExecutionResults.push(...parallelResults)
  TodoWrite({ todos: executionCalls.map(c => ({ status: parallel.includes(c) ? "completed" : "pending" })) })
}

// Phase 2: Execute sequential batches one by one
for (const call of sequential) {
  TodoWrite({ todos: executionCalls.map(c => ({ status: c === call ? "in_progress" : "..." })) })
  result = await executeBatch(call)
  previousExecutionResults.push(result)
  TodoWrite({ todos: executionCalls.map(c => ({ status: c === call ? "completed" : "..." })) })
}
```
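The two-phase ordering above (all parallel batches awaited together, then sequential batches one at a time) can be demonstrated with stub batches; `executeBatch` here is a stand-in for the real executor:

```javascript
// Stub demonstration of the documented ordering: parallel batches complete
// before any sequential batch starts, and sequential batches run in order.
async function runBatches(executionCalls, executeBatch) {
  const results = []
  const parallel = executionCalls.filter(c => c.executionType === "parallel")
  const sequential = executionCalls.filter(c => c.executionType === "sequential")
  results.push(...await Promise.all(parallel.map(c => executeBatch(c))))
  for (const call of sequential) results.push(await executeBatch(call))
  return results
}

const calls = [
  { groupId: "P1", executionType: "parallel" },
  { groupId: "S1", executionType: "sequential" },
  { groupId: "P2", executionType: "parallel" },
]
runBatches(calls, async c => c.groupId).then(r => console.log(r)) // P1, P2 before S1
```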
@@ -323,12 +341,17 @@ ${result.notes ? `Notes: ${result.notes}` : ''}

${clarificationContext ? `\n## Clarifications\n${JSON.stringify(clarificationContext, null, 2)}` : ''}

## Instructions
- Reference original request to ensure alignment
- Review previous results to understand completed work
- Build on previous work, avoid duplication
- Test functionality as you implement
- Complete all assigned tasks
${executionContext?.session?.artifacts ? `\n## Planning Artifacts
Detailed planning context available in:
${executionContext.session.artifacts.exploration ? `- Exploration: ${executionContext.session.artifacts.exploration}` : ''}
- Plan: ${executionContext.session.artifacts.plan}
- Task: ${executionContext.session.artifacts.task}

Read these files for detailed architecture, patterns, and constraints.` : ''}

## Requirements
MUST complete ALL ${planObject.tasks.length} tasks listed above in this single execution.
Return only after all tasks are fully implemented and tested.
`
)
```
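A minimal sketch of how the conditional sections in the prompt template above render — sections appear only when their data is present. `renderArtifacts` and the paths are illustrative only:

```javascript
// Sketch of the `${cond ? `...` : ''}` pattern used in the prompt template:
// the Planning Artifacts section renders only when artifacts exist.
function renderArtifacts(session) {
  if (!session?.artifacts) return ""
  const a = session.artifacts
  return `\n## Planning Artifacts\n` +
    (a.exploration ? `- Exploration: ${a.exploration}\n` : "") +
    `- Plan: ${a.plan}\n- Task: ${a.task}`
}

// Hypothetical paths for illustration only
const session = { artifacts: { exploration: null, plan: "plan.json", task: "task.json" } }
console.log(renderArtifacts(session))
console.log(renderArtifacts(null)) // → "" (section omitted entirely)
```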
@@ -341,6 +364,11 @@ When to use:

- `executionMethod = "Codex"`
- `executionMethod = "Auto" AND complexity = "Medium" or "High"`

**Artifact Path Delegation**:
- Include artifact file paths in CLI prompt for enhanced context
- Codex can read artifact files for detailed planning information
- Example: Reference exploration.json for architecture patterns

Command format:
```bash
function formatTaskForCodex(task, index) {
@@ -390,12 +418,18 @@ Constraints: ${explorationContext.constraints || 'None'}

${clarificationContext ? `\n### User Clarifications\n${Object.entries(clarificationContext).map(([q, a]) => `${q}: ${a}`).join('\n')}` : ''}

## Execution Instructions
- Reference original request to ensure alignment
- Review previous results for context continuity
- Build on previous work, don't duplicate completed tasks
- Complete all assigned tasks in single execution
- Test functionality as you implement
${executionContext?.session?.artifacts ? `\n### Planning Artifact Files
Detailed planning context available in session folder:
${executionContext.session.artifacts.exploration ? `- Exploration: ${executionContext.session.artifacts.exploration}` : ''}
- Plan: ${executionContext.session.artifacts.plan}
- Task: ${executionContext.session.artifacts.task}

Read these files for complete architecture details, code patterns, and integration constraints.
` : ''}

## Requirements
MUST complete ALL ${planObject.tasks.length} tasks listed above in this single execution.
Return only after all tasks are fully implemented and tested.

Complexity: ${planObject.complexity}
" --skip-git-repo-check -s danger-full-access
@@ -414,105 +448,72 @@ bash_result = Bash(

**Result Collection**: After completion, analyze output and collect result following `executionResult` structure

### Step 4: Progress Tracking

Progress tracked at batch level (not individual task level). Icons: ⚡ (parallel, concurrent), → (sequential, one-by-one)

### Step 5: Code Review (Optional)

**Skip Condition**: Only run if `codeReviewTool ≠ "Skip"`

**Operations**:
- Agent Review: Current agent performs direct review (read task.json for acceptance criteria)
- Gemini Review: Execute gemini CLI with review prompt (task.json in CONTEXT)
- Custom tool: Execute specified CLI tool (qwen, codex, etc.) with task.json reference

**Unified Review Template** (All tools use same standard):

**Review Criteria**:
- **Acceptance Criteria**: Verify each criterion from task.json `context.acceptance`
- **Code Quality**: Analyze quality, identify issues, suggest improvements
- **Plan Alignment**: Validate implementation matches planned approach

**Shared Prompt Template** (used by all CLI tools):
```
PURPOSE: Code review for implemented changes against task.json acceptance criteria
TASK: • Verify task.json acceptance criteria fulfillment • Analyze code quality • Identify issues • Suggest improvements • Validate plan adherence
MODE: analysis
CONTEXT: @**/* @{task.json} @{plan.json} [@{exploration.json}] | Memory: Review lite-execute changes against task.json requirements
EXPECTED: Quality assessment with acceptance criteria verification, issue identification, and recommendations. Explicitly check each acceptance criterion from task.json.
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-quality.txt) | Focus on task.json acceptance criteria and plan adherence | analysis=READ-ONLY
```

**Tool-Specific Execution** (Apply shared prompt template above):

```bash
# Method 1: Agent Review (current agent)
# - Read task.json: ${executionContext.session.artifacts.task}
# - Apply unified review criteria (see Shared Prompt Template)
# - Report findings directly

# Method 2: Gemini Review (recommended)
gemini -p "[Shared Prompt Template with artifacts]"
# CONTEXT includes: @**/* @${task.json} @${plan.json} [@${exploration.json}]

# Method 3: Qwen Review (alternative)
qwen -p "[Shared Prompt Template with artifacts]"
# Same prompt as Gemini, different execution engine

# Method 4: Codex Review (autonomous)
codex --full-auto exec "[Verify task.json acceptance criteria at ${task.json}]" --skip-git-repo-check -s danger-full-access
```

**Implementation Note**: Replace `[Shared Prompt Template with artifacts]` placeholder with actual template content, substituting:
- `@{task.json}` → `@${executionContext.session.artifacts.task}`
- `@{plan.json}` → `@${executionContext.session.artifacts.plan}`
- `[@{exploration.json}]` → `@${executionContext.session.artifacts.exploration}` (if exists)

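The substitution described in the Implementation Note can be sketched as a simple replace pass; `fillTemplate` and the paths are illustrative, not part of the command:

```javascript
// Sketch of the documented placeholder substitution for the review prompt.
// The optional exploration placeholder is dropped entirely when absent.
function fillTemplate(template, artifacts) {
  let out = template
    .replace("@{task.json}", `@${artifacts.task}`)
    .replace("@{plan.json}", `@${artifacts.plan}`)
  out = artifacts.exploration
    ? out.replace("[@{exploration.json}]", `@${artifacts.exploration}`)
    : out.replace(" [@{exploration.json}]", "")
  return out
}

// Hypothetical artifact paths for illustration
const prompt = fillTemplate("CONTEXT: @**/* @{task.json} @{plan.json} [@{exploration.json}]",
  { task: "s/task.json", plan: "s/plan.json", exploration: null })
console.log(prompt) // → "CONTEXT: @**/* @s/task.json @s/plan.json"
```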
## Best Practices

### Execution Intelligence

1. **Context Continuity**: Each execution call receives previous results
   - Prevents duplication across multiple executions
   - Maintains coherent implementation flow
   - Builds on completed work

**Input Modes**: In-memory (lite-plan), prompt (standalone), file (JSON/text)
**Batch Limits**: Agent 7 tasks, CLI 4 tasks
**Execution**: Parallel batches use single Claude message with multiple tool calls (no concurrency limit)

## Error Handling
@@ -546,10 +547,26 @@ Passed from lite-plan via global variable:

  clarificationContext: {...} | null,
  executionMethod: "Agent" | "Codex" | "Auto",
  codeReviewTool: "Skip" | "Gemini Review" | "Agent Review" | string,
  originalUserInput: string,

  // Session artifacts location (saved by lite-plan)
  session: {
    id: string,      // Session identifier: {taskSlug}-{shortTimestamp}
    folder: string,  // Session folder path: .workflow/.lite-plan/{session-id}
    artifacts: {
      exploration: string | null,  // exploration.json path (if exploration performed)
      plan: string,                // plan.json path (always present)
      task: string                 // task.json path (always exported)
    }
  }
}
```

**Artifact Usage**:
- Artifact files contain detailed planning context
- Pass artifact paths to CLI tools and agents for enhanced context
- See execution options below for usage examples
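The session id and folder convention above can be sketched as follows; `makeSession` is an illustrative helper, and the exact slug/timestamp encoding is an assumption for this sketch:

```javascript
// Sketch: session id {taskSlug}-{shortTimestamp} and folder
// .workflow/.lite-plan/{session-id}, per the structure above.
// The slug rule and base-36 timestamp are assumptions for illustration.
function makeSession(taskDescription, now = new Date()) {
  const slug = taskDescription.toLowerCase()
    .replace(/[^a-z0-9]+/g, "-").replace(/^-|-$/g, "").slice(0, 30)
  const ts = Math.floor(now.getTime() / 1000).toString(36) // short timestamp
  const id = `${slug}-${ts}`
  return { id, folder: `.workflow/.lite-plan/${id}` }
}

const s = makeSession("Add JWT auth", new Date(0))
console.log(s.folder) // → ".workflow/.lite-plan/add-jwt-auth-0"
```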

### executionResult (Output)

Collected after each execution call completes:

652
.claude/commands/workflow/lite-fix.md
Normal file
@@ -0,0 +1,652 @@

---
name: lite-fix
description: Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents
argument-hint: "[--hotfix] \"bug description or issue reference\""
allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*), Read(*), Bash(*)
---

# Workflow Lite-Fix Command (/workflow:lite-fix)

## Overview

Fast-track bug fixing workflow optimized for quick diagnosis, targeted fixes, and streamlined verification. Automatically adjusts process complexity based on impact assessment.

**Core capabilities:**
- Rapid root cause diagnosis with intelligent code search
- Automatic severity assessment and adaptive workflow
- Fix strategy selection (immediate patch vs comprehensive refactor)
- Risk-aware verification (smoke tests to full suite)
- Optional hotfix mode for production incidents with branch management
- Automatic follow-up task generation for hotfixes

## Usage

### Command Syntax
```bash
/workflow:lite-fix [FLAGS] <BUG_DESCRIPTION>

# Flags
--hotfix, -h    Production hotfix mode (creates hotfix branch, auto follow-up)

# Arguments
<bug-description>    Bug description or issue reference (required)
```

### Modes

| Mode | Time Budget | Use Case | Workflow Characteristics |
|------|-------------|----------|--------------------------|
| **Default** | Auto-adapt (15min-4h) | All standard bugs | Intelligent severity assessment + adaptive process |
| **Hotfix** (`--hotfix`) | 15-30 min | Production outage | Minimal diagnosis + hotfix branch + auto follow-up |

### Examples

```bash
# Default mode: Automatically adjusts based on impact
/workflow:lite-fix "User avatar upload fails with 413 error"
/workflow:lite-fix "Shopping cart randomly loses items at checkout"

# Hotfix mode: Production incident
/workflow:lite-fix --hotfix "Payment gateway 5xx errors"
```
|
||||

## Execution Process

### Workflow Overview

```
Bug Input → Diagnosis (Phase 1) → Impact Assessment (Phase 2)
                                          ↓
                Severity Auto-Detection → Fix Planning (Phase 3)
                                          ↓
Verification Strategy (Phase 4) → User Confirmation (Phase 5) → Execution (Phase 6)
```

### Phase Summary

| Phase | Default Mode | Hotfix Mode |
|-------|--------------|-------------|
| 1. Diagnosis | Adaptive search depth | Minimal (known issue) |
| 2. Impact Assessment | Full risk scoring | Critical path only |
| 3. Fix Planning | Strategy options based on complexity | Single surgical fix |
| 4. Verification | Test level matches risk score | Smoke tests only |
| 5. User Confirmation | 3 dimensions | 2 dimensions |
| 6. Execution | Via lite-execute | Via lite-execute + monitoring |

---

## Detailed Phase Execution

### Phase 1: Diagnosis & Root Cause Analysis

**Goal**: Identify root cause and affected code paths

**Execution Strategy**:

**Default Mode** - Adaptive search:
- **High confidence keywords** (e.g., specific error messages): Direct grep search (5min)
- **Medium confidence**: cli-explore-agent with focused search (10-15min)
- **Low confidence** (vague symptoms): cli-explore-agent with broad search (20min)

```javascript
// Confidence-based strategy selection
if (has_specific_error_message || has_file_path_hint) {
  // Quick targeted search
  Bash(`grep -r '${error_message}' src/ --include='*.ts' -n | head -10`)
  Bash(`git log --oneline --since='1 week ago' -- '*affected*'`)
} else {
  // Deep exploration
  Task(subagent_type="cli-explore-agent", prompt=`
    Bug: ${bug_description}
    Execute diagnostic search:
    1. Search error patterns and similar issues
    2. Trace execution path in affected modules
    3. Check recent changes
    Return: Root cause hypothesis, affected paths, reproduction steps
  `)
}
```
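
The three confidence tiers above can be read as a small classifier that picks a search depth from the available signals. This is an illustrative sketch, not part of the command spec; `classifyConfidence` and the field names are hypothetical:

```javascript
// Hypothetical sketch of the Phase 1 confidence tiers: available
// diagnostic signals map to a search depth.
function classifyConfidence(bug) {
  // Specific error text or a file-path hint allows a direct grep
  if (bug.errorMessage || bug.filePathHint) return "high";
  // Concrete keywords still allow a focused agent search
  if (bug.keywords && bug.keywords.length > 0) return "medium";
  // Vague symptoms need a broad exploration pass
  return "low";
}

const searchPlan = {
  high: "direct grep (~5min)",
  medium: "cli-explore-agent, focused (10-15min)",
  low: "cli-explore-agent, broad (~20min)"
};

const bug = { errorMessage: "413 Payload Too Large", keywords: ["upload"] };
console.log(searchPlan[classifyConfidence(bug)]); // direct grep (~5min)
```

A bug report with a verbatim error message always lands in the fast path, which is why quoting the exact error in the command argument pays off.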

**Hotfix Mode** - Minimal search:
```bash
Read(suspected_file)  # User typically knows the file
git blame ${suspected_file}
```

**Output Structure**:
```javascript
{
  root_cause: {
    file: "src/auth/tokenValidator.ts",
    line_range: "45-52",
    issue: "Token expiration check uses wrong comparison",
    introduced_by: "commit abc123"
  },
  reproduction_steps: ["Login", "Wait 15min", "Access protected route"],
  affected_scope: {
    users: "All authenticated users",
    features: ["login", "API access"],
    data_risk: "none"
  }
}
```

**TodoWrite**: Mark Phase 1 completed, Phase 2 in_progress

---

### Phase 2: Impact Assessment & Severity Auto-Detection

**Goal**: Quantify blast radius and auto-determine severity

**Risk Score Calculation**:
```javascript
risk_score = (user_impact × 0.4) + (system_risk × 0.3) + (business_impact × 0.3)

// Auto-severity mapping
if (risk_score >= 8.0) severity = "critical"
else if (risk_score >= 5.0) severity = "high"
else if (risk_score >= 3.0) severity = "medium"
else severity = "low"

// Workflow adaptation
if (severity >= "high") {
  diagnosis_depth = "focused"
  test_strategy = "smoke_and_critical"
  review_optional = true
} else {
  diagnosis_depth = "comprehensive"
  test_strategy = "full_suite"
  review_optional = false
}
```

**Assessment Output**:
```javascript
{
  affected_users: {
    count: "5000 active users (100%)",
    severity: "high"
  },
  system_risk: {
    availability: "degraded_30%",
    cascading_failures: "possible_logout_storm"
  },
  business_impact: {
    revenue: "medium",
    reputation: "high",
    sla_breach: "yes"
  },
  risk_score: 7.1,
  severity: "high",
  workflow_adaptation: {
    test_strategy: "focused_integration",
    review_required: false,
    time_budget: "1_hour"
  }
}
```

**Hotfix Mode**: Skip detailed assessment, assume critical

**TodoWrite**: Mark Phase 2 completed, Phase 3 in_progress

---

### Phase 3: Fix Planning & Strategy Selection

**Goal**: Generate fix options with trade-off analysis

**Strategy Generation**:

**Default Mode** - Complexity-adaptive:
- **Low risk score (<5.0)**: Generate 2-3 strategy options for user selection
- **High risk score (≥5.0)**: Generate single best strategy for speed

```javascript
strategies = generateFixStrategies(root_cause, risk_score)

if (risk_score >= 5.0 || mode === "hotfix") {
  // Single best strategy
  return strategies[0]  // Fastest viable fix
} else {
  // Multiple options with trade-offs
  return strategies  // Let user choose
}
```

**Example Strategies**:
```javascript
// Low risk: Multiple options
[
  {
    strategy: "immediate_patch",
    description: "Fix comparison operator",
    estimated_time: "15 minutes",
    risk: "low",
    pros: ["Quick fix"],
    cons: ["Doesn't address underlying issue"]
  },
  {
    strategy: "comprehensive_fix",
    description: "Refactor token validation logic",
    estimated_time: "2 hours",
    risk: "medium",
    pros: ["Addresses root cause"],
    cons: ["Longer implementation"]
  }
]

// High risk or hotfix: Single option
{
  strategy: "surgical_fix",
  description: "Minimal change to fix comparison",
  files: ["src/auth/tokenValidator.ts:47"],
  estimated_time: "5 minutes",
  risk: "minimal"
}
```

**Complexity Assessment**:
```javascript
if (complexity === "high" && risk_score < 5.0) {
  suggestCommand("/workflow:plan --mode bugfix")
  return  // Escalate to full planning
}
```

**TodoWrite**: Mark Phase 3 completed, Phase 4 in_progress

---

### Phase 4: Verification Strategy

**Goal**: Define testing approach based on severity

**Adaptive Test Strategy**:

| Risk Score | Test Scope | Duration | Automation |
|------------|------------|----------|------------|
| **< 3.0** (Low) | Full test suite | 15-20 min | `npm test` |
| **3.0-5.0** (Medium) | Focused integration | 8-12 min | `npm test -- affected-module.test.ts` |
| **5.0-8.0** (High) | Smoke + critical | 5-8 min | `npm test -- critical.smoke.test.ts` |
| **≥ 8.0** (Critical) | Smoke only | 2-5 min | `npm test -- smoke.test.ts` |
| **Hotfix** | Production smoke | 2-3 min | `npm test -- production.smoke.test.ts` |
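
The table reads as a threshold lookup: higher risk means a tighter, faster test scope. A minimal sketch (the function name is illustrative; the command strings are the examples from the Automation column):

```javascript
// Maps a Phase 2 risk score to the verification strategy in the table
// above. Note the inversion: higher risk → smaller, faster test scope,
// because critical fixes must ship before a full suite could finish.
function testStrategy(riskScore, hotfix = false) {
  if (hotfix) return { scope: "production smoke", cmd: "npm test -- production.smoke.test.ts" };
  if (riskScore >= 8.0) return { scope: "smoke only", cmd: "npm test -- smoke.test.ts" };
  if (riskScore >= 5.0) return { scope: "smoke + critical", cmd: "npm test -- critical.smoke.test.ts" };
  if (riskScore >= 3.0) return { scope: "focused integration", cmd: "npm test -- affected-module.test.ts" };
  return { scope: "full suite", cmd: "npm test" };
}

console.log(testStrategy(7.1).scope); // smoke + critical
```

The deferred full-suite run is what the hotfix follow-up tasks in Phase 6 exist to guarantee.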

**Branch Strategy**:

**Default Mode**:
```javascript
{
  type: "feature_branch",
  base: "main",
  name: "fix/token-expiration-edge-case",
  merge_target: "main"
}
```

**Hotfix Mode**:
```javascript
{
  type: "hotfix_branch",
  base: "production_tag_v2.3.1",  // ⚠️ From production tag
  name: "hotfix/token-validation-fix",
  merge_target: ["main", "production"]  // Dual merge
}
```

**TodoWrite**: Mark Phase 4 completed, Phase 5 in_progress

---

### Phase 5: User Confirmation & Execution Selection

**Adaptive Confirmation Dimensions**:

**Default Mode** - 3 dimensions (adapted by risk score):

```javascript
dimensions = [
  {
    question: "Confirm fix approach?",
    options: ["Proceed", "Modify", "Escalate to /workflow:plan"]
  },
  {
    question: "Execution method:",
    options: ["Agent", "CLI Tool (Codex/Gemini)", "Manual (plan only)"]
  },
  {
    question: "Verification level:",
    options: adaptedByRiskScore()  // Auto-suggest based on Phase 2
  }
]

// If risk_score >= 5.0, auto-skip code review dimension
// If risk_score < 5.0, add optional code review dimension
if (risk_score < 5.0) {
  dimensions.push({
    question: "Post-fix review:",
    options: ["Gemini", "Skip"]
  })
}
```

**Hotfix Mode** - 2 dimensions (minimal):
```javascript
[
  {
    question: "Confirm hotfix deployment:",
    options: ["Deploy", "Stage First", "Abort"]
  },
  {
    question: "Post-deployment monitoring:",
    options: ["Real-time (15 min)", "Passive (alerts only)"]
  }
]
```

**TodoWrite**: Mark Phase 5 completed, Phase 6 in_progress

---

### Phase 6: Execution Dispatch & Follow-up

**Dispatch to lite-execute**:

```javascript
executionContext = {
  mode: "bugfix",
  severity: auto_detected_severity,  // From Phase 2
  planObject: plan,
  diagnosisContext: diagnosis,
  impactContext: impact_assessment,
  verificationStrategy: test_strategy,
  branchStrategy: branch_strategy,
  executionMethod: user_selection.execution_method
}

SlashCommand("/workflow:lite-execute --in-memory --mode bugfix")
```

**Hotfix Auto Follow-up**:

```javascript
if (mode === "hotfix") {
  follow_up_tasks = [
    {
      id: `FOLLOWUP-${taskId}-comprehensive`,
      title: "Replace hotfix with comprehensive fix",
      priority: "high",
      due_date: "within_3_days",
      description: "Refactor quick hotfix into proper solution with full test coverage"
    },
    {
      id: `FOLLOWUP-${taskId}-postmortem`,
      title: "Incident postmortem",
      priority: "medium",
      due_date: "within_1_week",
      sections: ["Timeline", "Root cause", "Prevention measures"]
    }
  ]

  Write(`.workflow/lite-fixes/${taskId}-followup.json`, follow_up_tasks)

  console.log(`
⚠️ Hotfix follow-up tasks generated:
- Comprehensive fix: ${follow_up_tasks[0].id} (due in 3 days)
- Postmortem: ${follow_up_tasks[1].id} (due in 1 week)
`)
}
```

**TodoWrite**: Mark Phase 6 completed

---

## Data Structures

### diagnosisContext
```javascript
{
  symptom: string,
  error_message: string | null,
  keywords: string[],
  confidence_level: "high" | "medium" | "low",  // Search confidence
  root_cause: {
    file: string,
    line_range: string,
    issue: string,
    introduced_by: string
  },
  reproduction_steps: string[],
  affected_scope: {...}
}
```

### impactContext
```javascript
{
  affected_users: { count: string, severity: string },
  system_risk: { availability: string, cascading_failures: string },
  business_impact: { revenue: string, reputation: string, sla_breach: string },
  risk_score: number,  // 0-10
  severity: "low" | "medium" | "high" | "critical",
  workflow_adaptation: {
    diagnosis_depth: string,
    test_strategy: string,
    review_optional: boolean,
    time_budget: string
  }
}
```

### fixPlan
```javascript
{
  strategy: string,
  summary: string,
  tasks: [{
    title: string,
    file: string,
    action: "Update" | "Create" | "Delete",
    implementation: string[],
    verification: string[]
  }],
  estimated_time: string,
  recommended_execution: "Agent" | "CLI" | "Manual"
}
```

---

## Best Practices

### When to Use Default Mode

**Use for all standard bugs:**
- Automatically adapts to severity (no manual mode selection needed)
- Risk score determines workflow complexity
- Handles 90% of bug fixing scenarios

**Typical scenarios:**
- UI bugs, logic errors, edge cases
- Performance issues (non-critical)
- Integration failures
- Data validation bugs

### When to Use Hotfix Mode

**Only use for production incidents:**
- Production is down or critically degraded
- Revenue/reputation at immediate risk
- SLA breach occurring
- Issue is well-understood (minimal diagnosis needed)

**Hotfix characteristics:**
- Creates hotfix branch from production tag
- Minimal diagnosis (assumes known issue)
- Smoke tests only
- Auto-generates follow-up tasks
- Requires incident tracking

### Branching Strategy

**Default Mode (feature branch)**:
```bash
# Standard feature branch workflow
git checkout -b fix/issue-description main
# ... implement fix
git checkout main && git merge fix/issue-description
```

**Hotfix Mode (dual merge)**:
```bash
# ✅ Correct: Branch from production tag
git checkout -b hotfix/fix-name v2.3.1

# Merge to both targets
git checkout main && git merge hotfix/fix-name
git checkout production && git merge hotfix/fix-name
git tag v2.3.2

# ❌ Wrong: Branch from main
git checkout -b hotfix/fix-name main  # Contains unreleased code!
```

---

## Error Handling

| Error | Cause | Resolution |
|-------|-------|------------|
| Root cause unclear | Vague symptoms | Extend diagnosis time or use /cli:mode:bug-diagnosis |
| Multiple potential causes | Complex interaction | Use /cli:discuss-plan for analysis |
| Fix too complex | High-risk refactor | Escalate to /workflow:plan --mode bugfix |
| High risk score but unsure | Uncertain severity | Default mode will adapt, proceed normally |

---

## Output Routing

**Lite-fix directory**:
```
.workflow/lite-fixes/
├── BUGFIX-2024-10-20T14-30-00.json           # Task JSON
├── BUGFIX-2024-10-20T14-30-00-followup.json  # Follow-up (hotfix only)
└── diagnosis-cache/                          # Cached diagnoses
    └── ${bug_hash}.json
```

**Session-based** (if active session):
```
.workflow/active/WFS-feature/
├── .bugfixes/
│   ├── BUGFIX-001.json
│   └── BUGFIX-001-followup.json
└── .summaries/
    └── BUGFIX-001-summary.md
```

---

## Advanced Features

### 1. Intelligent Diagnosis Caching

Reuse diagnosis for similar bugs:
```javascript
cache_key = hash(bug_keywords + recent_changes_hash)
if (cache_exists && cache_age < 7_days && similarity > 0.8) {
  diagnosis = load_from_cache()
  console.log("Using cached diagnosis (similar issue found)")
}
```

### 2. Auto-Severity Suggestion

Detect urgency from keywords:
```javascript
urgency_keywords = ["production", "down", "outage", "critical", "urgent"]
if (urgency_keywords.some(k => bug_description.includes(k)) && !mode_specified) {
  console.log("💡 Tip: Consider --hotfix flag for production issues")
}
```
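
The matching must test each keyword individually; passing the whole array to `String.prototype.includes` would never match. A self-contained, case-insensitive form of the check:

```javascript
// Each keyword is tested against the description; one hit is enough.
const URGENCY_KEYWORDS = ["production", "down", "outage", "critical", "urgent"];

function looksUrgent(bugDescription) {
  const text = bugDescription.toLowerCase();
  return URGENCY_KEYWORDS.some(keyword => text.includes(keyword));
}

if (looksUrgent("Payment gateway 5xx errors in production")) {
  console.log("💡 Tip: Consider --hotfix flag for production issues");
}
```

Lowercasing first matters in practice: incident reports often shout ("PRODUCTION DOWN"), and a case-sensitive check would miss them.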

### 3. Adaptive Workflow Intelligence

Real-time workflow adjustment:
```javascript
// During Phase 2, if risk score suddenly increases
if (new_risk_score > initial_estimate * 1.5) {
  console.log("⚠️ Severity increased, adjusting workflow...")
  test_strategy = "more_comprehensive"
  review_required = true
}
```

---

## Related Commands

**Diagnostic Commands**:
- `/cli:mode:bug-diagnosis` - Detailed root cause analysis (use before lite-fix if unclear)

**Fix Execution**:
- `/workflow:lite-execute --in-memory` - Execute fix plan (automatically called)

**Planning Commands**:
- `/workflow:plan --mode bugfix` - Complex bugs requiring comprehensive planning

**Review Commands**:
- `/workflow:review --type quality` - Post-fix quality review

---

## Comparison with Other Commands

| Command | Use Case | Modes | Adaptation | Output |
|---------|----------|-------|------------|--------|
| `/workflow:lite-fix` | Bug fixes | 2 (default + hotfix) | Auto-adaptive | In-memory + JSON |
| `/workflow:lite-plan` | New features | 1 + explore flag | Manual | In-memory + JSON |
| `/workflow:plan` | Complex features | Multiple | Manual | Persistent session |
| `/cli:mode:bug-diagnosis` | Analysis only | 1 | N/A | Report only |

---

## Quality Gates

**Before execution** (auto-checked):
- [ ] Root cause identified (>70% confidence for default, >90% for hotfix)
- [ ] Impact scope defined
- [ ] Fix strategy reviewed
- [ ] Verification plan matches risk level

**Hotfix-specific**:
- [ ] Production tag identified
- [ ] Rollback plan documented
- [ ] Follow-up tasks generated
- [ ] Monitoring configured

---

## When to Use lite-fix

✅ **Perfect for:**
- Any bug with clear symptoms
- Localized fixes (1-5 files)
- Known technology stack
- Time-sensitive but not catastrophic (default mode adapts)
- Production incidents (use --hotfix)

❌ **Not suitable for:**
- Root cause completely unclear → use `/cli:mode:bug-diagnosis` first
- Requires architectural changes → use `/workflow:plan`
- Complex legacy code without tests → use `/workflow:plan --legacy-refactor`
- Performance deep-dive → use `/workflow:plan --performance-optimization`
- Data migration → use `/workflow:plan --data-migration`

---

**Last Updated**: 2025-11-20
**Version**: 2.0.0
**Status**: Design Document (Simplified)

@@ -130,6 +130,13 @@ needsExploration = (

**Exploration Execution** (if needed):
```javascript
// Generate session identifiers for artifact storage
const taskSlug = task_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
const timestamp = new Date().toISOString().replace(/[:.]/g, '-')
const shortTimestamp = timestamp.substring(0, 19).replace('T', '-') // YYYY-MM-DD-HH-mm-ss
const sessionId = `${taskSlug}-${shortTimestamp}`
const sessionFolder = `.workflow/.lite-plan/${sessionId}`

Task(
  subagent_type="cli-explore-agent",
  description="Analyze codebase for task context",
@@ -149,9 +156,14 @@ Task(
  Output Format: JSON-like structured object
  `
)

// Save exploration results for CLI/agent access in lite-execute
const explorationFile = `${sessionFolder}/exploration.json`
Write(explorationFile, JSON.stringify(explorationContext, null, 2))
```

**Output**: `explorationContext` (see Data Structures section)
**Output**: `explorationContext` (in-memory, see Data Structures section)
**Artifact**: Saved to `{sessionFolder}/exploration.json` for CLI/agent use

**Progress Tracking**:
- Mark Phase 1 completed
@@ -228,6 +240,14 @@ Current Claude generates plan directly:
- Estimated Time: Total implementation time
- Recommended Execution: "Agent"

```javascript
// Save planning results to session folder (same as Option B)
const planFile = `${sessionFolder}/plan.json`
Write(planFile, JSON.stringify(planObject, null, 2))
```

**Artifact**: Saved to `{sessionFolder}/plan.json` for CLI/agent use

**Option B: Agent-Based Planning (Medium/High Complexity)**

Delegate to cli-lite-planning-agent:
@@ -270,9 +290,14 @@ Task(
  Format: "{Action} in {file_path}: {details} following {pattern}"
  `
)

// Save planning results to session folder
const planFile = `${sessionFolder}/plan.json`
Write(planFile, JSON.stringify(planObject, null, 2))
```

**Output**: `planObject` (see Data Structures section)
**Artifact**: Saved to `{sessionFolder}/plan.json` for CLI/agent use

**Progress Tracking**:
- Mark Phase 3 completed
@@ -315,7 +340,7 @@ ${i+1}. **${task.title}** (${task.file})

**Step 4.2: Collect User Confirmation**

Four questions via single AskUserQuestion call:
Three questions via single AskUserQuestion call:

```javascript
AskUserQuestion({
@@ -353,15 +378,6 @@ Confirm plan? (Multi-select: can supplement via "Other")`,
      { label: "Agent Review", description: "@code-reviewer agent" },
      { label: "Skip", description: "No review" }
    ]
  },
  {
    question: "Export plan to Enhanced Task JSON file?\n\nAllows reuse with lite-execute later.",
    header: "Export JSON",
    multiSelect: false,
    options: [
      { label: "Yes", description: "Export to JSON (recommended for complex tasks)" },
      { label: "No", description: "Keep in-memory only" }
    ]
  }
]
})
@@ -384,10 +400,6 @@ Code Review (after execution):
├─ Gemini Review → gemini CLI analysis
├─ Agent Review → Current Claude review
└─ Other → Custom tool (e.g., qwen, codex)

Export JSON:
├─ Yes → Export to .workflow/lite-plans/plan-{timestamp}.json
└─ No → In-memory only
```

**Progress Tracking**:
@@ -398,48 +410,48 @@ Export JSON:

### Phase 5: Dispatch to Execution

**Step 5.1: Export Enhanced Task JSON (Optional)**
**Step 5.1: Export Enhanced Task JSON**

Only execute if `userSelection.export_task_json === "Yes"`:
Always export Enhanced Task JSON to session folder:

```javascript
if (userSelection.export_task_json === "Yes") {
  const timestamp = new Date().toISOString().replace(/[:.]/g, '-')
  const taskId = `LP-${timestamp}`
  const filename = `.workflow/lite-plans/${taskId}.json`
const taskId = `LP-${shortTimestamp}`
const filename = `${sessionFolder}/task.json`

  const enhancedTaskJson = {
    id: taskId,
    title: original_task_description,
    status: "pending",
const enhancedTaskJson = {
  id: taskId,
  title: original_task_description,
  status: "pending",

    meta: {
      type: "planning",
      created_at: new Date().toISOString(),
      complexity: planObject.complexity,
      estimated_time: planObject.estimated_time,
      recommended_execution: planObject.recommended_execution,
      workflow: "lite-plan"
  meta: {
    type: "planning",
    created_at: new Date().toISOString(),
    complexity: planObject.complexity,
    estimated_time: planObject.estimated_time,
    recommended_execution: planObject.recommended_execution,
    workflow: "lite-plan",
    session_id: sessionId,
    session_folder: sessionFolder
  },

    context: {
      requirements: [original_task_description],
      plan: {
        summary: planObject.summary,
        approach: planObject.approach,
        tasks: planObject.tasks
      },

  context: {
    requirements: [original_task_description],
    plan: {
      summary: planObject.summary,
      approach: planObject.approach,
      tasks: planObject.tasks
    },
      exploration: explorationContext || null,
      clarifications: clarificationContext || null,
      focus_paths: explorationContext?.relevant_files || [],
      acceptance: planObject.tasks.flatMap(t => t.acceptance)
    }
    exploration: explorationContext || null,
    clarifications: clarificationContext || null,
    focus_paths: explorationContext?.relevant_files || [],
    acceptance: planObject.tasks.flatMap(t => t.acceptance)
  }

  Write(filename, JSON.stringify(enhancedTaskJson, null, 2))
  console.log(`Enhanced Task JSON exported to: ${filename}`)
  console.log(`Reuse with: /workflow:lite-execute ${filename}`)
}

Write(filename, JSON.stringify(enhancedTaskJson, null, 2))
console.log(`Enhanced Task JSON exported to: ${filename}`)
console.log(`Session folder: ${sessionFolder}`)
console.log(`Reuse with: /workflow:lite-execute ${filename}`)
```

**Step 5.2: Store Execution Context**
@@ -451,7 +463,18 @@ executionContext = {
  clarificationContext: clarificationContext || null,
  executionMethod: userSelection.execution_method,
  codeReviewTool: userSelection.code_review_tool,
  originalUserInput: original_task_description
  originalUserInput: original_task_description,

  // Session artifacts location
  session: {
    id: sessionId,
    folder: sessionFolder,
    artifacts: {
      exploration: explorationContext ? `${sessionFolder}/exploration.json` : null,
      plan: `${sessionFolder}/plan.json`,
      task: `${sessionFolder}/task.json` // Always exported
    }
  }
}
```

@@ -462,7 +485,11 @@ SlashCommand(command="/workflow:lite-execute --in-memory")
```

**Execution Handoff**:
- lite-execute reads `executionContext` variable
- lite-execute reads `executionContext` variable from memory
- `executionContext.session.artifacts` contains file paths to saved planning artifacts:
  - `exploration` - exploration.json (if exploration performed)
  - `plan` - plan.json (always exists)
  - `task` - task.json (if user selected export)
- All execution logic handled by lite-execute
- lite-plan completes after successful handoff

@@ -502,7 +529,7 @@ SlashCommand(command="/workflow:lite-execute --in-memory")
- Plan confirmation (multi-select with supplements)
- Execution method selection
- Code review tool selection (custom via "Other")
- JSON export option
- Enhanced Task JSON always exported to session folder
- Allows plan refinement without re-selecting execution method

### Task Management
@@ -519,11 +546,11 @@ SlashCommand(command="/workflow:lite-execute --in-memory")
- Medium: 5-7 tasks (detailed)
- High: 7-10 tasks (comprehensive)

3. **No File Artifacts During Planning**:
   - All planning stays in memory
   - Optional Enhanced Task JSON export (user choice)
   - Faster workflow, cleaner workspace
   - Plan context passed directly to execution
3. **Session Artifact Management**:
   - All planning artifacts saved to dedicated session folder
   - Enhanced Task JSON always exported for reusability
   - Plan context passed to execution via memory and files
   - Clean organization with session-based folder structure

### Planning Standards

@@ -550,6 +577,39 @@ SlashCommand(command="/workflow:lite-execute --in-memory")
| Phase 4 Confirmation Timeout | User no response > 5 minutes | Save context to temp var, display resume instructions, exit gracefully |
| Phase 4 Modification Loop | User requests modify > 3 times | Suggest breaking task into smaller pieces or using `/workflow:plan` |

## Session Folder Structure

Each lite-plan execution creates a dedicated session folder to organize all artifacts:

```
.workflow/.lite-plan/{task-slug}-{short-timestamp}/
├── exploration.json   # Exploration results (if exploration performed)
├── plan.json          # Planning results (always created)
└── task.json          # Enhanced Task JSON (always created)
```

**Folder Naming Convention**:
- `{task-slug}`: First 40 characters of task description, lowercased, non-alphanumeric replaced with `-`
- `{short-timestamp}`: YYYY-MM-DD-HH-mm-ss format
- Example: `.workflow/.lite-plan/implement-user-auth-jwt-2025-01-15-14-30-45/`
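
The naming convention matches the identifier code added in the exploration hunk above; a self-contained sketch of the same logic (the wrapper function name is illustrative):

```javascript
// Builds the session folder path described above: a 40-char slug from
// the task description plus a second-resolution UTC timestamp.
function sessionFolderFor(taskDescription, now = new Date()) {
  const taskSlug = taskDescription
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')   // runs of non-alphanumerics → single '-'
    .substring(0, 40);
  const shortTimestamp = now.toISOString()
    .replace(/[:.]/g, '-')          // make it filesystem-safe
    .substring(0, 19)               // keep YYYY-MM-DDTHH-mm-ss
    .replace('T', '-');             // YYYY-MM-DD-HH-mm-ss
  return `.workflow/.lite-plan/${taskSlug}-${shortTimestamp}`;
}

const folder = sessionFolderFor(
  "Implement user auth jwt",
  new Date("2025-01-15T14:30:45Z")
);
console.log(folder);
// → .workflow/.lite-plan/implement-user-auth-jwt-2025-01-15-14-30-45
```

Note the slug can end in `-` when the description ends with punctuation, so downstream code should not assume the separator before the timestamp is unique.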

**File Contents**:
- `exploration.json`: Complete explorationContext object (if exploration performed, see Data Structures)
- `plan.json`: Complete planObject (always created, see Data Structures)
- `task.json`: Enhanced Task JSON with all context (always created, see Data Structures)

**Access Patterns**:
- **lite-plan**: Creates folder and writes all artifacts during execution, passes paths via `executionContext.session.artifacts`
- **lite-execute**: Reads artifact paths from `executionContext.session.artifacts` (see lite-execute.md for usage details)
- **User**: Can inspect artifacts for debugging or reference
- **Reuse**: Pass `task.json` path to `/workflow:lite-execute {path}` for re-execution

**Benefits**:
- Clean separation between different task executions
- Easy to find and inspect artifacts for specific tasks
- Natural history/audit trail of planning sessions
- Supports concurrent lite-plan executions without conflicts

## Data Structures

### explorationContext
@@ -621,7 +681,18 @@ Context passed to lite-execute via --in-memory (Phase 5):
  clarificationContext: {...} | null,  // User responses from Phase 2
  executionMethod: "Agent" | "Codex" | "Auto",
  codeReviewTool: "Skip" | "Gemini Review" | "Agent Review" | string,
  originalUserInput: string  // User's original task description
  originalUserInput: string,  // User's original task description

  // Session artifacts location (for lite-execute to access saved files)
  session: {
    id: string,      // Session identifier: {taskSlug}-{shortTimestamp}
    folder: string,  // Session folder path: .workflow/.lite-plan/{session-id}
    artifacts: {
      exploration: string | null,  // exploration.json path (if exploration performed)
      plan: string,   // plan.json path (always present)
      task: string    // task.json path (always exported)
    }
  }
}
```

@@ -1,15 +1,16 @@
---
name: workflow:status
description: Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view
argument-hint: "[optional: --project|task-id|--validate]"
argument-hint: "[optional: --project|task-id|--validate|--dashboard]"
---

# Workflow Status Command (/workflow:status)

## Overview
Generates on-demand views from project and session data. Supports two modes:
Generates on-demand views from project and session data. Supports multiple modes:
1. **Project Overview** (`--project`): Shows completed features and project statistics
2. **Workflow Tasks** (default): Shows current session task progress
3. **HTML Dashboard** (`--dashboard`): Generates interactive HTML task board with active and archived sessions

No synchronization needed - all views are calculated from current JSON state.

@@ -19,6 +20,7 @@ No synchronization needed - all views are calculated from current JSON state.
/workflow:status --project    # Show project-level feature registry
/workflow:status impl-1       # Show specific task details
/workflow:status --validate   # Validate workflow integrity
/workflow:status --dashboard  # Generate HTML dashboard board
```

## Implementation Flow
@@ -192,4 +194,135 @@ find .workflow/active/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null |

## Completed Tasks
- [COMPLETED] impl-0: Setup completed
```

## Dashboard Mode (HTML Board)

### Step 1: Check for --dashboard flag
```bash
# If --dashboard flag present → Execute Dashboard Mode
```

### Step 2: Collect Workflow Data

**Collect Active Sessions**:
```bash
# Find all active sessions
find .workflow/active/ -name "WFS-*" -type d 2>/dev/null

# For each active session, read metadata and tasks
for session in $(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null); do
  cat "$session/workflow-session.json"
  find "$session/.task/" -name "*.json" -type f 2>/dev/null
done
```

**Collect Archived Sessions**:
```bash
# Find all archived sessions
find .workflow/archives/ -name "WFS-*" -type d 2>/dev/null

# Read manifest if exists
cat .workflow/archives/manifest.json 2>/dev/null

# For each archived session, read metadata
for archive in $(find .workflow/archives/ -name "WFS-*" -type d 2>/dev/null); do
|
||||
cat "$archive/workflow-session.json" 2>/dev/null
|
||||
# Count completed tasks
|
||||
find "$archive/.task/" -name "*.json" -type f 2>/dev/null | wc -l
|
||||
done
|
||||
```
|
||||
|
||||
### Step 3: Process and Structure Data
|
||||
|
||||
**Build data structure for dashboard**:
|
||||
```javascript
|
||||
const dashboardData = {
|
||||
activeSessions: [],
|
||||
archivedSessions: [],
|
||||
generatedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
// Process active sessions
|
||||
for each active_session in active_sessions:
|
||||
const sessionData = JSON.parse(Read(active_session/workflow-session.json));
|
||||
const tasks = [];
|
||||
|
||||
// Load all tasks for this session
|
||||
for each task_file in find(active_session/.task/*.json):
|
||||
const taskData = JSON.parse(Read(task_file));
|
||||
tasks.push({
|
||||
task_id: taskData.task_id,
|
||||
title: taskData.title,
|
||||
status: taskData.status,
|
||||
type: taskData.type
|
||||
});
|
||||
|
||||
dashboardData.activeSessions.push({
|
||||
session_id: sessionData.session_id,
|
||||
project: sessionData.project,
|
||||
status: sessionData.status,
|
||||
created_at: sessionData.created_at || sessionData.initialized_at,
|
||||
tasks: tasks
|
||||
});
|
||||
|
||||
// Process archived sessions
|
||||
for each archived_session in archived_sessions:
|
||||
const sessionData = JSON.parse(Read(archived_session/workflow-session.json));
|
||||
const taskCount = bash(find archived_session/.task/*.json | wc -l);
|
||||
|
||||
dashboardData.archivedSessions.push({
|
||||
session_id: sessionData.session_id,
|
||||
project: sessionData.project,
|
||||
archived_at: sessionData.completed_at || sessionData.archived_at,
|
||||
taskCount: parseInt(taskCount),
|
||||
archive_path: archived_session
|
||||
});
|
||||
```
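The block above is tool-calling pseudocode (`Read`, `bash`, and `for each` are workflow constructs, not JavaScript), but the data-shaping step itself is a pure function. A minimal sketch in plain JavaScript, using hypothetical fixture data and assuming only the JSON fields shown above:

```javascript
// Hypothetical helper: shape one parsed session file plus its parsed
// task files into the activeSessions entry described above.
function buildActiveSessionEntry(sessionData, taskDataList) {
  return {
    session_id: sessionData.session_id,
    project: sessionData.project,
    status: sessionData.status,
    // Same fallback as the pseudocode: older sessions may only
    // carry initialized_at.
    created_at: sessionData.created_at || sessionData.initialized_at,
    tasks: taskDataList.map(t => ({
      task_id: t.task_id,
      title: t.title,
      status: t.status,
      type: t.type
    }))
  };
}

const entry = buildActiveSessionEntry(
  { session_id: "WFS-001", project: "demo", status: "active",
    initialized_at: "2025-01-01T00:00:00Z" },
  [{ task_id: "impl-1", title: "Setup", status: "completed", type: "impl" }]
);
```

The archived-session entry follows the same shape, with a `taskCount` number in place of the `tasks` array.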

### Step 4: Generate HTML from Template

**Load template and inject data**:
```javascript
// Read the HTML template
const template = Read("~/.claude/templates/workflow-dashboard.html");

// Prepare data for injection
const dataJson = JSON.stringify(dashboardData, null, 2);

// Replace placeholder with actual data
const htmlContent = template.replace('{{WORKFLOW_DATA}}', dataJson);

// Ensure .workflow directory exists
bash(mkdir -p .workflow);
```
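One caveat worth noting about the `template.replace(...)` call above: with a string pattern, `String.prototype.replace` treats `$` sequences in the replacement as special patterns (`$&`, `$'`, etc.), and the injected JSON can legitimately contain them, for example in a task title. A literal split/join substitution sidesteps this; a small sketch, not part of the command spec:

```javascript
// Literal placeholder substitution: split/join never interprets
// $-patterns, unlike String.prototype.replace with a string pattern.
// Note it also replaces every occurrence of the placeholder.
function injectData(template, placeholder, dataJson) {
  return template.split(placeholder).join(dataJson);
}

const html = injectData(
  "<script>const data = {{WORKFLOW_DATA}};</script>",
  "{{WORKFLOW_DATA}}",
  JSON.stringify({ title: "pay $& bonus" })
);
```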

### Step 5: Write HTML File

```bash
# Write the generated HTML to .workflow/dashboard.html
Write({
  file_path: ".workflow/dashboard.html",
  content: htmlContent
})
```

### Step 6: Display Success Message

```markdown
Dashboard generated successfully!

Location: .workflow/dashboard.html

Open in browser:
file://$(pwd)/.workflow/dashboard.html

Features:
- 📊 Active sessions overview
- 📦 Archived sessions history
- 🔍 Search and filter
- 📈 Progress tracking
- 🎨 Dark/light theme

Refresh data: Re-run /workflow:status --dashboard
```

@@ -1,7 +1,7 @@
---
name: layout-extract
description: Extract structural layout information from reference images, URLs, or text prompts using Claude analysis with variant generation or refinement mode
argument-hint: [--design-id <id>] [--session <id>] [--images "<glob>"] [--urls "<list>"] [--prompt "<desc>"] [--targets "<list>"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]
description: Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode
argument-hint: [--design-id <id>] [--session <id>] [--images "<glob>"] [--prompt "<desc>"] [--targets "<list>"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]
allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*), Bash(*), AskUserQuestion(*), Task(ui-design-agent), mcp__exa__web_search_exa(*)
---

@@ -9,7 +9,7 @@ allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*), Bash(*), AskUserQuestio

## Overview

Extract structural layout information from reference images, URLs, or text prompts using AI analysis. Supports two modes:
Extract structural layout information from reference images or text prompts using AI analysis. Supports two modes:
1. **Exploration Mode** (default): Generate multiple contrasting layout variants
2. **Refinement Mode** (`--refine`): Refine a single existing layout through detailed adjustments

@@ -29,23 +29,7 @@ This command separates the "scaffolding" (HTML structure and CSS layout) from th

```bash
# Detect input source
# Priority: --urls + --images → hybrid | --urls → url | --images → image | --prompt → text

# Parse URLs if provided (format: "target:url,target:url,...")
IF --urls:
  url_list = []
  FOR pair IN split(--urls, ","):
    IF ":" IN pair:
      target, url = pair.split(":", 1)
      url_list.append({target: target.strip(), url: url.strip()})
    ELSE:
      # Single URL without target
      url_list.append({target: "page", url: pair.strip()})

  has_urls = true
ELSE:
  has_urls = false
  url_list = []
# Priority: --images → image | --prompt → text

# Detect refinement mode
refine_mode = --refine OR false
@@ -62,11 +46,9 @@ ELSE:
  REPORT: "🔍 Exploration mode: Will generate {variants_count} contrasting layout concepts per target"

# Resolve targets
# Priority: --targets → url_list targets → prompt analysis → default ["page"]
# Priority: --targets → prompt analysis → default ["page"]
IF --targets:
  targets = split(--targets, ",")
ELSE IF has_urls:
  targets = [url_info.target for url_info in url_list]
ELSE IF --prompt:
  # Extract targets from prompt using pattern matching
  # Looks for keywords: "page names", target descriptors (login, dashboard, etc.)
@@ -107,10 +89,6 @@ bash(echo "✓ Base path: $base_path")
bash(ls {images_pattern})  # Expand glob pattern
Read({image_path})         # Load each image

# For URL mode
# Parse URL list format: "target:url,target:url"
# Validate URLs are accessible

# For text mode
# Validate --prompt is non-empty

@@ -118,97 +96,6 @@ Read({image_path})  # Load each image
bash(mkdir -p {base_path}/layout-extraction)
```
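The `--urls` parsing pseudocode above translates almost line-for-line into JavaScript. One subtlety: a bare URL still contains ":" (in `https://`), so checking `IF ":" IN pair` alone would mis-split it into target `https`. A sketch (function name hypothetical) that guards for a scheme first:

```javascript
// Parse "target:url,target:url,..." into [{target, url}] entries.
// Only the first colon separates target from url, mirroring the
// split(":", 1)-style limit in the pseudocode above.
function parseUrlList(spec) {
  return spec.split(",").map(pair => {
    const trimmed = pair.trim();
    // A bare URL still contains ":" (https://...), so check for a
    // scheme before treating the first colon as the separator.
    if (/^https?:\/\//.test(trimmed)) return { target: "page", url: trimmed };
    const i = trimmed.indexOf(":");
    if (i === -1) return { target: "page", url: trimmed };
    return { target: trimmed.slice(0, i).trim(), url: trimmed.slice(i + 1).trim() };
  });
}

const list = parseUrlList("home:https://example.com, pricing:https://example.com/pricing");
```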

### Step 2.5: Extract DOM Structure (URL Mode - Auto-Trigger)
```bash
# AUTO-TRIGGER: If URLs are available (from --urls parameter), automatically extract real DOM structure
# This provides accurate layout data to supplement visual analysis

# Check if URLs provided via --urls parameter
IF --urls AND url_list:
  REPORT: "🔍 Auto-triggering URL mode: Extracting DOM structure"

  bash(mkdir -p {base_path}/.intermediates/layout-analysis)

  # For each URL in url_list:
  FOR url_info IN url_list:
    target = url_info.target
    url = url_info.url

    IF mcp_chrome_devtools_available:
      REPORT: "  Processing: {target} ({url})"

      # Read extraction script
      script_content = Read(~/.claude/scripts/extract-layout-structure.js)

      # Open page in Chrome DevTools
      mcp__chrome-devtools__navigate_page(url=url)

      # Execute layout extraction script
      result = mcp__chrome-devtools__evaluate_script(function=script_content)

      # Save DOM structure for this target (intermediate file)
      Write({base_path}/.intermediates/layout-analysis/dom-structure-{target}.json, result)

      REPORT: "  ✅ DOM structure extracted for '{target}'"
    ELSE:
      REPORT: "  ⚠️ Chrome DevTools MCP not available, falling back to visual analysis"
      BREAK

  dom_structure_available = mcp_chrome_devtools_available
ELSE:
  dom_structure_available = false
```

**Extraction Script Reference**: `~/.claude/scripts/extract-layout-structure.js`

**Usage**: Read the script file and use content directly in `mcp__chrome-devtools__evaluate_script()`

**Script returns**:
- `metadata`: Extraction timestamp, URL, method, version
- `patterns`: Layout pattern statistics (flexColumn, flexRow, grid counts)
- `structure`: Hierarchical DOM tree with layout properties
- `exploration`: (Optional) Progressive exploration results when standard selectors fail

**Benefits**:
- ✅ Real flex/grid configuration (justifyContent, alignItems, gap, etc.)
- ✅ Accurate element bounds (x, y, width, height)
- ✅ Structural hierarchy with depth control
- ✅ Layout pattern identification (flex-row, flex-column, grid-NCol)
- ✅ Progressive exploration: Auto-discovers missing selectors

**Progressive Exploration Strategy** (v2.2.0+):

When script finds <3 main containers, it automatically:
1. **Scans** all large visible containers (≥500×300px)
2. **Extracts** class patterns matching: `main|content|wrapper|container|page|layout|app`
3. **Suggests** new selectors to add to script
4. **Returns** exploration data in `result.exploration`:
```json
{
  "triggered": true,
  "discoveredCandidates": [{classes, bounds, display}],
  "suggestedSelectors": [".wrapper", ".page-index"],
  "recommendation": ".wrapper, .page-index, .app-container"
}
```

**Using Exploration Results**:
```javascript
// After extraction, check for suggestions
IF result.exploration?.triggered:
  REPORT: result.exploration.warning
  REPORT: "Suggested selectors: " + result.exploration.recommendation

// Update script by adding to commonClassSelectors array
// Then re-run extraction for better coverage
```
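Folding `suggestedSelectors` back into the script's `commonClassSelectors` is a small order-preserving dedupe. A sketch under the assumption that selectors are plain strings (helper name hypothetical):

```javascript
// Merge suggested selectors into an existing selector list,
// preserving the existing order and skipping duplicates.
function mergeSelectors(existing, suggested) {
  const seen = new Set(existing);
  const merged = [...existing];
  for (const sel of suggested) {
    if (!seen.has(sel)) {
      seen.add(sel);
      merged.push(sel);
    }
  }
  return merged;
}

const updated = mergeSelectors(
  ["main", ".content"],
  [".wrapper", ".content", ".page-index"]
);
```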

**Selector Update Workflow**:
1. Run extraction on unfamiliar site
2. Check `result.exploration.suggestedSelectors`
3. Add relevant selectors to script's `commonClassSelectors`
4. Re-run extraction → improved container detection

### Step 3: Memory Check
```bash
# 1. Check if inputs cached in session memory
@@ -711,13 +598,6 @@ Configuration:
- Device Type: {device_type}
- Targets: {targets.join(", ")}
- Total Templates: {total_tasks} ({targets.length} targets with multi-selection)
{IF has_urls AND dom_structure_available:
- 🔍 URL Mode: DOM structure extracted from {len(url_list)} URL(s)
- Accuracy: Real flex/grid properties from live pages
}
{IF has_urls AND NOT dom_structure_available:
- ⚠️ URL Mode: Chrome DevTools unavailable, used visual analysis fallback
}

User Selections:
{FOR each target in targets:
@@ -734,10 +614,7 @@ Generated Templates:

Intermediate Files:
- {base_path}/.intermediates/layout-analysis/
  ├── analysis-options.json (concept proposals + user selections embedded)
{IF dom_structure_available:
  ├── dom-structure-*.json ({len(url_list)} DOM extracts)
}
  └── analysis-options.json (concept proposals + user selections embedded)

Next: /workflow:ui-design:generate will combine these structural templates with design systems to produce final prototypes.
```
@@ -867,15 +744,11 @@ ERROR: MCP search failed

## Key Features

- **Auto-Trigger URL Mode** - Automatically extracts DOM structure when --urls provided (no manual flag needed)
- **Hybrid Extraction Strategy** - Combines real DOM structure data with AI visual analysis
- **Accurate Layout Properties** - Chrome DevTools extracts real flex/grid configurations, bounds, and hierarchy
- **Separation of Concerns** - Decouples layout (structure) from style (visuals)
- **Multi-Selection Workflow** - Generate N concepts → User selects multiple → Parallel template generation
- **Structural Exploration** - Enables A/B testing of different layouts through multi-selection
- **Token-Based Layout** - CSS uses `var()` placeholders for instant design system adaptation
- **Device-Specific** - Tailored structures for different screen sizes
- **Graceful Fallback** - Falls back to visual analysis if Chrome DevTools unavailable
- **Foundation for Assembly** - Provides structural blueprint for prototype generation
- **Agent-Powered** - Deep structural analysis with AI


@@ -1,8 +1,8 @@
---
name: style-extract
description: Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode
argument-hint: "[--design-id <id>] [--session <id>] [--images "<glob>"] [--urls "<list>"] [--prompt "<desc>"] [--variants <count>] [--interactive] [--refine]"
allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*), AskUserQuestion(*), mcp__chrome-devtools__navigate_page(*), mcp__chrome-devtools__evaluate_script(*)
argument-hint: "[--design-id <id>] [--session <id>] [--images "<glob>"] [--prompt "<desc>"] [--variants <count>] [--interactive] [--refine]"
allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*), AskUserQuestion(*)
---

# Style Extraction Command
@@ -24,23 +24,7 @@ Extract design style from reference images or text prompts using Claude's built-
### Step 1: Detect Input Mode, Extraction Mode & Base Path
```bash
# Detect input source
# Priority: --urls + --images + --prompt → hybrid-url | --urls + --images → url-image | --urls → url | --images + --prompt → hybrid | --images → image | --prompt → text

# Parse URLs if provided (format: "target:url,target:url,...")
IF --urls:
  url_list = []
  FOR pair IN split(--urls, ","):
    IF ":" IN pair:
      target, url = pair.split(":", 1)
      url_list.append({target: target.strip(), url: url.strip()})
    ELSE:
      # Single URL without target
      url_list.append({target: "page", url: pair.strip()})

  has_urls = true
  primary_url = url_list[0].url  # First URL as primary source
ELSE:
  has_urls = false
# Priority: --images + --prompt → hybrid | --images → image | --prompt → text

# Detect refinement mode
refine_mode = --refine OR false
@@ -79,64 +63,7 @@ base_path=$(cd "$relative_path" && pwd)
bash(echo "✓ Base path: $base_path")
```

### Step 2: Extract Computed Styles (URL Mode - Auto-Trigger)
```bash
# AUTO-TRIGGER: If URLs are available (from --urls parameter or capture metadata), automatically extract real CSS values
# This provides accurate design tokens to supplement visual analysis

# Priority 1: Check for --urls parameter
IF has_urls:
  url_to_extract = primary_url
  url_source = "--urls parameter"

# Priority 2: Check for URL metadata from capture phase
ELSE IF exists({base_path}/.metadata/capture-urls.json):
  capture_urls = Read({base_path}/.metadata/capture-urls.json)
  url_to_extract = capture_urls[0]  # Use first URL
  url_source = "capture metadata"
ELSE:
  url_to_extract = null

# Execute extraction if URL available
IF url_to_extract AND mcp_chrome_devtools_available:
  REPORT: "🔍 Auto-triggering URL mode: Extracting computed styles from {url_source}"
  REPORT: "  URL: {url_to_extract}"

  # Read extraction script
  script_content = Read(~/.claude/scripts/extract-computed-styles.js)

  # Open page in Chrome DevTools
  mcp__chrome-devtools__navigate_page(url=url_to_extract)

  # Execute extraction script directly
  result = mcp__chrome-devtools__evaluate_script(function=script_content)

  # Save computed styles to intermediates directory
  bash(mkdir -p {base_path}/.intermediates/style-analysis)
  Write({base_path}/.intermediates/style-analysis/computed-styles.json, result)

  computed_styles_available = true
  REPORT: "  ✅ Computed styles extracted and saved"
ELSE:
  computed_styles_available = false
  IF url_to_extract:
    REPORT: "⚠️ Chrome DevTools MCP not available, falling back to visual analysis"
```

**Extraction Script Reference**: `~/.claude/scripts/extract-computed-styles.js`

**Usage**: Read the script file and use content directly in `mcp__chrome-devtools__evaluate_script()`

**Script returns**:
- `metadata`: Extraction timestamp, URL, method
- `tokens`: Organized design tokens (colors, borderRadii, shadows, fontSizes, fontWeights, spacing)

**Benefits**:
- ✅ Pixel-perfect accuracy for border-radius, box-shadow, padding, etc.
- ✅ Eliminates guessing from visual analysis
- ✅ Provides ground truth for design tokens

### Step 3: Load Inputs
### Step 2: Load Inputs
```bash
# For image mode
bash(ls {images_pattern})  # Expand glob pattern
@@ -161,7 +88,7 @@ IF exists: SKIP to completion

---

**Phase 0 Output**: `input_mode`, `base_path`, `extraction_mode`, `variants_count`, `loaded_images[]` or `prompt_guidance`, `has_urls`, `url_list[]`, `computed_styles_available`
**Phase 0 Output**: `input_mode`, `base_path`, `extraction_mode`, `variants_count`, `loaded_images[]` or `prompt_guidance`

## Phase 1: Design Direction or Refinement Options Generation

@@ -571,9 +498,8 @@ FOR variant_index IN 1..actual_variants_count:
- Preview Border Radius: ${selected_direction.preview.border_radius_base}

## Input Analysis
- Input mode: {input_mode} (image/text/hybrid${has_urls ? "/url" : ""})
- Input mode: {input_mode} (image/text/hybrid)
- Visual references: {loaded_images OR prompt_guidance}
${computed_styles_available ? "- Computed styles: Use as ground truth (Read from .intermediates/style-analysis/computed-styles.json)" : ""}

## Generation Rules
- Develop the selected design direction into a complete design system
@@ -587,7 +513,7 @@ FOR variant_index IN 1..actual_variants_count:
  * innovation → token naming, experimental values
- Honor search_keywords for design inspiration
- Avoid anti_keywords patterns
- All colors in OKLCH format ${computed_styles_available ? "(convert from computed RGB)" : ""}
- All colors in OKLCH format
- WCAG AA compliance: 4.5:1 text contrast, 3:1 UI contrast

## Generate
@@ -656,16 +582,9 @@ TodoWrite({todos: [
Configuration:
- Session: {session_id}
- Extraction Mode: {extraction_mode} (imitate/explore)
- Input Mode: {input_mode} (image/text/hybrid{"/url" if has_urls else ""})
- Input Mode: {input_mode} (image/text/hybrid)
- Variants: {variants_count}
- Production-Ready: Complete design systems generated
{IF has_urls AND computed_styles_available:
- 🔍 URL Mode: Computed styles extracted from {len(url_list)} URL(s)
- Accuracy: Pixel-perfect design tokens from DOM
}
{IF has_urls AND NOT computed_styles_available:
- ⚠️ URL Mode: Chrome DevTools unavailable, used visual analysis fallback
}

{IF extraction_mode == "explore":
Design Direction Selection:
@@ -676,11 +595,6 @@ Design Direction Selection:
Generated Files:
{base_path}/style-extraction/
└── style-1/design-tokens.json

{IF computed_styles_available:
Intermediate Analysis:
{base_path}/.intermediates/style-analysis/computed-styles.json (extracted from {primary_url})
}
{IF extraction_mode == "explore":
{base_path}/.intermediates/style-analysis/analysis-options.json (design direction options + user selection)
}
@@ -811,15 +725,11 @@ ERROR: Claude JSON parsing error

## Key Features

- **Auto-Trigger URL Mode** - Automatically extracts computed styles when --urls provided (no manual flag needed)
- **Direct Design System Generation** - Complete design-tokens.json + style-guide.md in one step
- **Hybrid Extraction Strategy** - Combines computed CSS values (ground truth) with AI visual analysis
- **Pixel-Perfect Accuracy** - Chrome DevTools extracts exact border-radius, shadows, spacing values
- **AI-Driven Design Space Exploration** - 6D attribute space analysis for maximum contrast
- **Variant-Specific Directions** - Each variant has unique philosophy, keywords, anti-patterns
- **Maximum Contrast Guarantee** - Variants maximally distant in attribute space
- **Flexible Input** - Images, text, URLs, or hybrid mode
- **Graceful Fallback** - Falls back to pure visual inference if Chrome DevTools unavailable
- **Flexible Input** - Images, text, or hybrid mode
- **Production-Ready** - OKLCH colors, WCAG AA compliance, semantic naming
- **Agent-Driven** - Autonomous multi-file generation with ui-design-agent


@@ -11,7 +11,7 @@ The UI Design Workflow System is a comprehensive suite of 11 autonomous commands
These commands automate end-to-end processes by chaining specialized sub-commands.

- **`/workflow:ui-design:explore-auto`**: For creating *new* designs. Generates multiple style and layout variants from a prompt to explore design directions.
- **`/workflow:ui-design:imitate-auto`**: For *replicating* existing designs. High-fidelity cloning of target URLs into a reusable design system.
- **`/workflow:ui-design:imitate-auto`**: For *replicating* existing designs. Creates design systems from local reference files (images, code) or text prompts.

### 2. Core Extractors (Specialized Analysis)

@@ -98,31 +98,35 @@ Tools for combining components and integrating results.

### Workflow B: Design Replication (Imitation)

**Goal:** Create a design system and prototypes based on existing reference sites.
**Goal:** Create a design system and prototypes based on existing local references.

**Primary Command:** `imitate-auto`

**Steps:**

1. **Initiate**: User runs `/workflow:ui-design:imitate-auto --url-map "home:https://example.com, pricing:https://example.com/pricing"`
2. **Capture**: System screenshots all provided URLs.
3. **Extraction**: System extracts a unified design system (style, layout, animation) from the primary URL.
4. **Assembly**: System recreates all target pages using the extracted system.
1. **Initiate**: User runs `/workflow:ui-design:imitate-auto --input "design-refs/*.png"` with local reference files
2. **Input Detection**: System detects input type (images, code files, or text)
3. **Extraction**: System extracts a unified design system (style, layout, animation) from the references.
4. **Assembly**: System creates prototypes using the extracted system.

**Example:**

```bash
# Using reference images
/workflow:ui-design:imitate-auto \
  --url-map "landing:https://stripe.com, pricing:https://stripe.com/pricing, docs:https://stripe.com/docs" \
  --capture-mode batch \
  --input "design-refs/*.png" \
  --session WFS-002

# Or importing from existing code
/workflow:ui-design:imitate-auto \
  --input "./src/components" \
  --session WFS-002
```

**Output:**
- Screenshots of all URLs
- `design-tokens.json` (unified style system)
- `layout-templates.json` (page structures)
- 3 HTML prototypes matching the captured pages
- HTML prototypes based on the input references

---

@@ -204,10 +208,10 @@ For high-volume generation:
- Specify the *targets* (e.g., "dashboard, settings page")
- Include functional requirements (e.g., "responsive, mobile-first")

**For URL Mapping:**
- First URL is treated as primary source of truth
- Use descriptive keys in `--url-map`
- Ensure URLs are accessible (no authentication walls)
**For Local References:**
- Use high-quality reference images (PNG, JPG)
- Organize files in accessible directories
- For code imports, ensure files are properly structured (CSS, JS, HTML)

---

@@ -233,8 +237,8 @@ You can run UI design workflows within an existing workflow session:
**Example: Imitation + Custom Extraction**

```bash
# 1. Replicate existing design
/workflow:ui-design:imitate-auto --url-map "ref:https://example.com"
# 1. Import design from local references
/workflow:ui-design:imitate-auto --input "design-refs/*.png"

# 2. Extract additional layouts and generate prototypes
/workflow:ui-design:layout-extract --targets "new-page-1,new-page-2"
664  .claude/templates/workflow-dashboard.html  Normal file
@@ -0,0 +1,664 @@
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Workflow Dashboard - Task Board</title>
    <style>
        :root {
            --bg-primary: #f5f7fa;
            --bg-secondary: #ffffff;
            --bg-card: #ffffff;
            --text-primary: #1a202c;
            --text-secondary: #718096;
            --border-color: #e2e8f0;
            --accent-color: #4299e1;
            --success-color: #48bb78;
            --warning-color: #ed8936;
            --danger-color: #f56565;
            --shadow: 0 1px 3px 0 rgba(0, 0, 0, 0.1), 0 1px 2px 0 rgba(0, 0, 0, 0.06);
            --shadow-lg: 0 10px 15px -3px rgba(0, 0, 0, 0.1), 0 4px 6px -2px rgba(0, 0, 0, 0.05);
        }

        [data-theme="dark"] {
            --bg-primary: #1a202c;
            --bg-secondary: #2d3748;
            --bg-card: #2d3748;
            --text-primary: #f7fafc;
            --text-secondary: #a0aec0;
            --border-color: #4a5568;
            --shadow: 0 1px 3px 0 rgba(0, 0, 0, 0.3), 0 1px 2px 0 rgba(0, 0, 0, 0.2);
            --shadow-lg: 0 10px 15px -3px rgba(0, 0, 0, 0.3), 0 4px 6px -2px rgba(0, 0, 0, 0.2);
        }

        * {
            margin: 0;
            padding: 0;
            box-sizing: border-box;
        }

        body {
            font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, 'Helvetica Neue', Arial, sans-serif;
            background-color: var(--bg-primary);
            color: var(--text-primary);
            line-height: 1.6;
            transition: background-color 0.3s, color 0.3s;
        }

        .container {
            max-width: 1400px;
            margin: 0 auto;
            padding: 20px;
        }

        header {
            background-color: var(--bg-secondary);
            box-shadow: var(--shadow);
            padding: 20px;
            margin-bottom: 30px;
            border-radius: 8px;
        }

        h1 {
            font-size: 2rem;
            margin-bottom: 10px;
            color: var(--accent-color);
        }

        .header-controls {
            display: flex;
            gap: 15px;
            flex-wrap: wrap;
            align-items: center;
            margin-top: 15px;
        }

        .search-box {
            flex: 1;
            min-width: 250px;
            position: relative;
        }

        .search-box input {
            width: 100%;
            padding: 10px 15px;
            border: 1px solid var(--border-color);
            border-radius: 6px;
            background-color: var(--bg-primary);
            color: var(--text-primary);
            font-size: 0.95rem;
        }

        .filter-group {
            display: flex;
            gap: 10px;
            flex-wrap: wrap;
        }

        .btn {
            padding: 10px 20px;
            border: none;
            border-radius: 6px;
            cursor: pointer;
            font-size: 0.9rem;
            font-weight: 500;
            transition: all 0.2s;
            background-color: var(--bg-card);
            color: var(--text-primary);
            border: 1px solid var(--border-color);
        }

        .btn:hover {
            transform: translateY(-1px);
            box-shadow: var(--shadow);
        }

        .btn.active {
            background-color: var(--accent-color);
            color: white;
            border-color: var(--accent-color);
        }

        .stats-grid {
            display: grid;
            grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
            gap: 20px;
            margin-bottom: 30px;
        }

        .stat-card {
            background-color: var(--bg-card);
            padding: 20px;
            border-radius: 8px;
            box-shadow: var(--shadow);
            transition: transform 0.2s;
        }

        .stat-card:hover {
            transform: translateY(-2px);
            box-shadow: var(--shadow-lg);
        }

        .stat-value {
            font-size: 2rem;
            font-weight: bold;
            color: var(--accent-color);
        }

        .stat-label {
            color: var(--text-secondary);
            font-size: 0.9rem;
            margin-top: 5px;
        }

        .section {
            margin-bottom: 40px;
        }

        .section-header {
            display: flex;
            justify-content: space-between;
            align-items: center;
            margin-bottom: 20px;
        }

        .section-title {
            font-size: 1.5rem;
            font-weight: 600;
        }

        .sessions-grid {
            display: grid;
            grid-template-columns: repeat(auto-fill, minmax(350px, 1fr));
            gap: 20px;
        }

        .session-card {
            background-color: var(--bg-card);
            border-radius: 8px;
            box-shadow: var(--shadow);
            padding: 20px;
            transition: all 0.3s;
        }

        .session-card:hover {
            transform: translateY(-4px);
            box-shadow: var(--shadow-lg);
        }

        .session-header {
            display: flex;
            justify-content: space-between;
            align-items: start;
            margin-bottom: 15px;
        }

        .session-title {
            font-size: 1.2rem;
            font-weight: 600;
            color: var(--text-primary);
            margin-bottom: 5px;
        }

        .session-status {
            padding: 4px 12px;
            border-radius: 12px;
            font-size: 0.75rem;
            font-weight: 600;
            text-transform: uppercase;
        }

        .status-active {
            background-color: #c6f6d5;
            color: #22543d;
        }

        .status-archived {
            background-color: #e2e8f0;
            color: #4a5568;
        }

        [data-theme="dark"] .status-active {
            background-color: #22543d;
            color: #c6f6d5;
        }

        [data-theme="dark"] .status-archived {
            background-color: #4a5568;
            color: #e2e8f0;
        }

        .session-meta {
            display: flex;
            gap: 15px;
            font-size: 0.85rem;
            color: var(--text-secondary);
            margin-bottom: 15px;
        }

        .progress-bar {
            width: 100%;
            height: 8px;
            background-color: var(--bg-primary);
            border-radius: 4px;
            overflow: hidden;
            margin: 15px 0;
        }

        .progress-fill {
            height: 100%;
            background: linear-gradient(90deg, var(--accent-color), var(--success-color));
|
||||
transition: width 0.3s;
|
||||
}
|
||||
|
||||
.tasks-list {
|
||||
margin-top: 15px;
|
||||
}
|
||||
|
||||
.task-item {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
padding: 10px;
|
||||
margin-bottom: 8px;
|
||||
background-color: var(--bg-primary);
|
||||
border-radius: 6px;
|
||||
border-left: 3px solid var(--border-color);
|
||||
transition: all 0.2s;
|
||||
}
|
||||
|
||||
.task-item:hover {
|
||||
transform: translateX(4px);
|
||||
}
|
||||
|
||||
.task-item.completed {
|
||||
border-left-color: var(--success-color);
|
||||
opacity: 0.8;
|
||||
}
|
||||
|
||||
.task-item.in_progress {
|
||||
border-left-color: var(--warning-color);
|
||||
}
|
||||
|
||||
.task-item.pending {
|
||||
border-left-color: var(--text-secondary);
|
||||
}
|
||||
|
||||
.task-checkbox {
|
||||
width: 20px;
|
||||
height: 20px;
|
||||
border-radius: 50%;
|
||||
border: 2px solid var(--border-color);
|
||||
margin-right: 12px;
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
flex-shrink: 0;
|
||||
}
|
||||
|
||||
.task-item.completed .task-checkbox {
|
||||
background-color: var(--success-color);
|
||||
border-color: var(--success-color);
|
||||
}
|
||||
|
||||
.task-item.completed .task-checkbox::after {
|
||||
content: '✓';
|
||||
color: white;
|
||||
font-size: 0.8rem;
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
.task-item.in_progress .task-checkbox {
|
||||
border-color: var(--warning-color);
|
||||
background-color: var(--warning-color);
|
||||
}
|
||||
|
||||
.task-item.in_progress .task-checkbox::after {
|
||||
content: '⟳';
|
||||
color: white;
|
||||
font-size: 0.9rem;
|
||||
}
|
||||
|
||||
.task-title {
|
||||
flex: 1;
|
||||
font-size: 0.9rem;
|
||||
}
|
||||
|
||||
.task-id {
|
||||
font-size: 0.75rem;
|
||||
color: var(--text-secondary);
|
||||
font-family: monospace;
|
||||
margin-left: 10px;
|
||||
}
|
||||
|
||||
.empty-state {
|
||||
text-align: center;
|
||||
padding: 60px 20px;
|
||||
color: var(--text-secondary);
|
||||
}
|
||||
|
||||
.empty-state-icon {
|
||||
font-size: 4rem;
|
||||
margin-bottom: 20px;
|
||||
opacity: 0.5;
|
||||
}
|
||||
|
||||
.theme-toggle {
|
||||
position: fixed;
|
||||
bottom: 30px;
|
||||
right: 30px;
|
||||
width: 60px;
|
||||
height: 60px;
|
||||
border-radius: 50%;
|
||||
background-color: var(--accent-color);
|
||||
color: white;
|
||||
border: none;
|
||||
cursor: pointer;
|
||||
font-size: 1.5rem;
|
||||
box-shadow: var(--shadow-lg);
|
||||
transition: all 0.3s;
|
||||
z-index: 1000;
|
||||
}
|
||||
|
||||
.theme-toggle:hover {
|
||||
transform: scale(1.1);
|
||||
}
|
||||
|
||||
@media (max-width: 768px) {
|
||||
.sessions-grid {
|
||||
grid-template-columns: 1fr;
|
||||
}
|
||||
|
||||
.stats-grid {
|
||||
grid-template-columns: repeat(2, 1fr);
|
||||
}
|
||||
|
||||
h1 {
|
||||
font-size: 1.5rem;
|
||||
}
|
||||
|
||||
.header-controls {
|
||||
flex-direction: column;
|
||||
align-items: stretch;
|
||||
}
|
||||
|
||||
.search-box {
|
||||
width: 100%;
|
||||
}
|
||||
}
|
||||
|
||||
.badge {
|
||||
display: inline-block;
|
||||
padding: 2px 8px;
|
||||
border-radius: 4px;
|
||||
font-size: 0.75rem;
|
||||
font-weight: 500;
|
||||
margin-left: 8px;
|
||||
}
|
||||
|
||||
.badge-count {
|
||||
background-color: var(--accent-color);
|
||||
color: white;
|
||||
}
|
||||
|
||||
.session-footer {
|
||||
margin-top: 15px;
|
||||
padding-top: 15px;
|
||||
border-top: 1px solid var(--border-color);
|
||||
font-size: 0.85rem;
|
||||
color: var(--text-secondary);
|
||||
}
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
<div class="container">
|
||||
<header>
|
||||
<h1>🚀 Workflow Dashboard</h1>
|
||||
<p style="color: var(--text-secondary);">Task Board - Active and Archived Sessions</p>
|
||||
|
||||
<div class="header-controls">
|
||||
<div class="search-box">
|
||||
<input type="text" id="searchInput" placeholder="🔍 Search tasks or sessions..." />
|
||||
</div>
|
||||
|
||||
<div class="filter-group">
|
||||
<button class="btn active" data-filter="all">All</button>
|
||||
<button class="btn" data-filter="active">Active</button>
|
||||
<button class="btn" data-filter="archived">Archived</button>
|
||||
</div>
|
||||
</div>
|
||||
</header>
|
||||
|
||||
<div class="stats-grid">
|
||||
<div class="stat-card">
|
||||
<div class="stat-value" id="totalSessions">0</div>
|
||||
<div class="stat-label">Total Sessions</div>
|
||||
</div>
|
||||
<div class="stat-card">
|
||||
<div class="stat-value" id="activeSessions">0</div>
|
||||
<div class="stat-label">Active Sessions</div>
|
||||
</div>
|
||||
<div class="stat-card">
|
||||
<div class="stat-value" id="totalTasks">0</div>
|
||||
<div class="stat-label">Total Tasks</div>
|
||||
</div>
|
||||
<div class="stat-card">
|
||||
<div class="stat-value" id="completedTasks">0</div>
|
||||
<div class="stat-label">Completed Tasks</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="section" id="activeSectionContainer">
|
||||
<div class="section-header">
|
||||
<h2 class="section-title">📋 Active Sessions</h2>
|
||||
</div>
|
||||
<div class="sessions-grid" id="activeSessions"></div>
|
||||
</div>
|
||||
|
||||
<div class="section" id="archivedSectionContainer">
|
||||
<div class="section-header">
|
||||
<h2 class="section-title">📦 Archived Sessions</h2>
|
||||
</div>
|
||||
<div class="sessions-grid" id="archivedSessions"></div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<button class="theme-toggle" id="themeToggle">🌙</button>
|
||||
|
||||
<script>
|
||||
// Workflow data will be injected here
|
||||
const workflowData = {{WORKFLOW_DATA}};

        // Theme management
        function initTheme() {
            const savedTheme = localStorage.getItem('theme') || 'light';
            document.documentElement.setAttribute('data-theme', savedTheme);
            updateThemeIcon(savedTheme);
        }

        function toggleTheme() {
            const currentTheme = document.documentElement.getAttribute('data-theme');
            const newTheme = currentTheme === 'dark' ? 'light' : 'dark';
            document.documentElement.setAttribute('data-theme', newTheme);
            localStorage.setItem('theme', newTheme);
            updateThemeIcon(newTheme);
        }

        function updateThemeIcon(theme) {
            document.getElementById('themeToggle').textContent = theme === 'dark' ? '☀️' : '🌙';
        }

        // Statistics calculation
        function updateStatistics() {
            const stats = {
                totalSessions: workflowData.activeSessions.length + workflowData.archivedSessions.length,
                activeSessions: workflowData.activeSessions.length,
                totalTasks: 0,
                completedTasks: 0
            };

            workflowData.activeSessions.forEach(session => {
                stats.totalTasks += session.tasks.length;
                stats.completedTasks += session.tasks.filter(t => t.status === 'completed').length;
            });

            // Archived sessions only carry a taskCount, and all of their
            // tasks are treated as completed.
            workflowData.archivedSessions.forEach(session => {
                stats.totalTasks += session.taskCount || 0;
                stats.completedTasks += session.taskCount || 0;
            });

            document.getElementById('totalSessions').textContent = stats.totalSessions;
            document.getElementById('activeSessions').textContent = stats.activeSessions;
            document.getElementById('totalTasks').textContent = stats.totalTasks;
            document.getElementById('completedTasks').textContent = stats.completedTasks;
        }

        // Render session card
        function createSessionCard(session, isActive) {
            const card = document.createElement('div');
            card.className = 'session-card';
            card.dataset.sessionType = isActive ? 'active' : 'archived';

            const completedTasks = isActive
                ? session.tasks.filter(t => t.status === 'completed').length
                : (session.taskCount || 0);
            const totalTasks = isActive ? session.tasks.length : (session.taskCount || 0);
            const progress = totalTasks > 0 ? (completedTasks / totalTasks * 100) : 0;

            let tasksHtml = '';
            if (isActive && session.tasks.length > 0) {
                tasksHtml = `
                    <div class="tasks-list">
                        ${session.tasks.map(task => `
                            <div class="task-item ${task.status}">
                                <div class="task-checkbox"></div>
                                <div class="task-title">${task.title || 'Untitled Task'}</div>
                                <span class="task-id">${task.task_id || ''}</span>
                            </div>
                        `).join('')}
                    </div>
                `;
            }

            card.innerHTML = `
                <div class="session-header">
                    <div>
                        <h3 class="session-title">${session.session_id || 'Unknown Session'}</h3>
                        <div style="color: var(--text-secondary); font-size: 0.9rem; margin-top: 5px;">
                            ${session.project || ''}
                        </div>
                    </div>
                    <span class="session-status ${isActive ? 'status-active' : 'status-archived'}">
                        ${isActive ? 'Active' : 'Archived'}
                    </span>
                </div>

                <div class="session-meta">
                    <span>📅 ${session.created_at || session.archived_at || 'N/A'}</span>
                    <span>📊 ${completedTasks}/${totalTasks} tasks</span>
                </div>

                ${totalTasks > 0 ? `
                    <div class="progress-bar">
                        <div class="progress-fill" style="width: ${progress}%"></div>
                    </div>
                    <div style="text-align: center; font-size: 0.85rem; color: var(--text-secondary);">
                        ${Math.round(progress)}% Complete
                    </div>
                ` : ''}

                ${tasksHtml}

                ${!isActive && session.archive_path ? `
                    <div class="session-footer">
                        📁 Archive: ${session.archive_path}
                    </div>
                ` : ''}
            `;

            return card;
        }

        // Render all sessions
        function renderSessions(filter = 'all') {
            // The active grid's id ("activeSessions") collides with the stat
            // counter of the same name, and getElementById would return the
            // stat element. Select both grids through their section wrappers
            // instead.
            const activeContainer = document.querySelector('#activeSectionContainer .sessions-grid');
            const archivedContainer = document.querySelector('#archivedSectionContainer .sessions-grid');

            activeContainer.innerHTML = '';
            archivedContainer.innerHTML = '';

            if (filter === 'all' || filter === 'active') {
                if (workflowData.activeSessions.length === 0) {
                    activeContainer.innerHTML = `
                        <div class="empty-state">
                            <div class="empty-state-icon">📭</div>
                            <p>No active sessions</p>
                        </div>
                    `;
                } else {
                    workflowData.activeSessions.forEach(session => {
                        activeContainer.appendChild(createSessionCard(session, true));
                    });
                }
            }

            if (filter === 'all' || filter === 'archived') {
                if (workflowData.archivedSessions.length === 0) {
                    archivedContainer.innerHTML = `
                        <div class="empty-state">
                            <div class="empty-state-icon">📦</div>
                            <p>No archived sessions</p>
                        </div>
                    `;
                } else {
                    workflowData.archivedSessions.forEach(session => {
                        archivedContainer.appendChild(createSessionCard(session, false));
                    });
                }
            }

            // Show/hide sections
            document.getElementById('activeSectionContainer').style.display =
                (filter === 'all' || filter === 'active') ? 'block' : 'none';
            document.getElementById('archivedSectionContainer').style.display =
                (filter === 'all' || filter === 'archived') ? 'block' : 'none';
        }

        // Search functionality
        function setupSearch() {
            const searchInput = document.getElementById('searchInput');
            searchInput.addEventListener('input', (e) => {
                const query = e.target.value.toLowerCase();
                const cards = document.querySelectorAll('.session-card');

                cards.forEach(card => {
                    const text = card.textContent.toLowerCase();
                    card.style.display = text.includes(query) ? 'block' : 'none';
                });
            });
        }

        // Filter functionality
        function setupFilters() {
            const filterButtons = document.querySelectorAll('[data-filter]');
            filterButtons.forEach(btn => {
                btn.addEventListener('click', () => {
                    filterButtons.forEach(b => b.classList.remove('active'));
                    btn.classList.add('active');
                    renderSessions(btn.dataset.filter);
                });
            });
        }

        // Initialize
        document.addEventListener('DOMContentLoaded', () => {
            initTheme();
            updateStatistics();
            renderSessions();
            setupSearch();
            setupFilters();

            document.getElementById('themeToggle').addEventListener('click', toggleTheme);
        });
    </script>
</body>
</html>
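The `{{WORKFLOW_DATA}}` placeholder in the template above implies a generation step that substitutes real session data before the dashboard is opened. A minimal sketch of that step, assuming a Node.js generator; the `renderDashboard` helper and the sample data are illustrative, not part of this repository:

```javascript
// Hypothetical generator step: substitute session data into the dashboard
// template. JSON.stringify yields a valid JavaScript object literal, so the
// injected value parses correctly inside the template's <script> block.
function renderDashboard(templateHtml, data) {
  // The placeholder occurs exactly once, so a plain string replace suffices.
  return templateHtml.replace('{{WORKFLOW_DATA}}', JSON.stringify(data));
}

// Small demonstration against a stripped-down template string.
const template = '<script>const workflowData = {{WORKFLOW_DATA}};<\/script>';
const html = renderDashboard(template, {
  activeSessions: [{ session_id: 'WF-demo', tasks: [] }],
  archivedSessions: []
});
console.log(html.includes('"session_id":"WF-demo"')); // → true
```

In practice the generator would read the template from disk and write the finished HTML next to it; the demonstration only shows the substitution itself.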
@@ -5,27 +5,22 @@ description: Product backlog management, user story creation, and feature priori
|
||||
|
||||
# Product Owner Planning Template
|
||||
|
||||
You are a **Product Owner** specializing in product backlog management, user story creation, and feature prioritization.
|
||||
## Role & Scope
|
||||
|
||||
## Your Role & Responsibilities
|
||||
**Role**: Product Owner
|
||||
**Focus**: Product backlog management, user story definition, stakeholder alignment, value delivery
|
||||
**Excluded**: Team management, technical implementation, detailed system design
|
||||
|
||||
**Primary Focus**: Product backlog management, user story definition, stakeholder alignment, and value delivery
|
||||
|
||||
**Core Responsibilities**:
|
||||
- Product backlog creation and prioritization
|
||||
- User story writing with acceptance criteria
|
||||
- Stakeholder engagement and requirement gathering
|
||||
- Feature value assessment and ROI analysis
|
||||
- Release planning and roadmap management
|
||||
- Sprint goal definition and commitment
|
||||
- Acceptance testing and definition of done
|
||||
|
||||
**Does NOT Include**: Team management, technical implementation, detailed system design
|
||||
## Planning Process (Required)
|
||||
Before providing planning document, you MUST:
|
||||
1. Analyze product vision and stakeholder needs
|
||||
2. Define backlog structure and prioritization framework
|
||||
3. Create user stories with acceptance criteria
|
||||
4. Plan releases and define success metrics
|
||||
5. Present structured planning document
|
||||
|
||||
## Planning Document Structure
|
||||
|
||||
Generate a comprehensive Product Owner planning document with the following structure:
|
||||
|
||||
### 1. Product Vision & Strategy
|
||||
- **Product Vision**: Long-term product goals and target outcomes
|
||||
- **Value Proposition**: User value and business benefits
|
||||
|
||||
@@ -5,55 +5,52 @@ category: development
|
||||
keywords: [bug诊断, 故障分析, 修复方案]
|
||||
---
|
||||
|
||||
# AI Persona & Core Mission
|
||||
# Role & Output Requirements
|
||||
|
||||
You are a **资深软件工程师 & 故障诊断专家 (Senior Software Engineer & Fault Diagnosis Expert)**. Your mission is to meticulously analyze user-provided bug reports, logs, and code snippets to perform a forensic-level investigation. Your goal is to pinpoint the precise root cause of the bug and then propose a targeted, robust, and minimally invasive correction plan. **Critically, you will *not* write complete, ready-to-use code files. Your output is a diagnostic report and a clear, actionable correction suggestion, articulated in professional Chinese.** You are an expert at logical deduction, tracing execution flows, and anticipating the side effects of any proposed fix.
|
||||
**Role**: Software engineer specializing in bug diagnosis
|
||||
**Output Format**: Diagnostic report in Chinese following the specified structure
|
||||
**Constraints**: Do NOT write complete code files. Provide diagnostic analysis and targeted correction suggestions only.
|
||||
|
||||
## II. ROLE DEFINITION & CORE CAPABILITIES
|
||||
1. **Role**: Senior Software Engineer & Fault Diagnosis Expert.
|
||||
2. **Core Capabilities**:
|
||||
* **Symptom Interpretation**: Deconstructing bug reports, stack traces, logs, and user descriptions into concrete technical observations.
|
||||
* **Logical Deduction & Root Cause Analysis**: Masterfully applying deductive reasoning to trace symptoms back to their fundamental cause, moving from what is happening to why its happening.
|
||||
* **Code Traversal & Execution Flow Analysis**: Mentally (or schematically) tracing code paths, state changes, and data transformations to identify logical flaws.
|
||||
* **Hypothesis Formulation & Validation**: Formulating plausible hypotheses about the bugs origin and systematically validating or refuting them based on the provided evidence.
|
||||
* **Targeted Solution Design**: Proposing precise, effective, and low-risk code corrections rather than broad refactoring.
|
||||
* **Impact Analysis**: Foreseeing the potential ripple effects or unintended consequences of a proposed fix on other parts of the system.
|
||||
* **Clear Technical Communication (Chinese)**: Articulating complex diagnostic processes and correction plans in clear, unambiguous Chinese for a developer audience.
|
||||
## Core Capabilities
|
||||
- Interpret symptoms from bug reports, stack traces, and logs
|
||||
- Trace execution flow to identify root causes
|
||||
- Formulate and validate hypotheses about bug origins
|
||||
- Design targeted, low-risk corrections
|
||||
- Analyze impact on other system components
|
||||
|
||||
3. **Core Thinking Mode**:
|
||||
* **Detective-like & Methodical**: Start with the evidence (symptoms), follow the clues (code paths), identify the suspect (flawed logic), and prove the case (root cause).
|
||||
* **Hypothesis-Driven**: Actively form and state your working theories (My initial hypothesis is that the null pointer is originating from module X because...) before reaching a conclusion.
|
||||
* **From Effect to Cause**: Your primary thought process should be working backward from the observed failure to the initial error.
|
||||
* **Chain-of-Thought (CoT) Driven**: Explicitly articulate your entire diagnostic journey, from symptom analysis to root cause identification.
|
||||
## Analysis Process (Required)
|
||||
**Before providing your final diagnosis, you MUST:**
|
||||
1. Analyze symptoms and form initial hypothesis
|
||||
2. Trace code execution to identify root cause
|
||||
3. Design correction strategy
|
||||
4. Assess potential impacts and risks
|
||||
5. Present structured diagnostic report
|
||||
|
||||
## III. OBJECTIVES
|
||||
1. **Analyze Evidence**: Thoroughly examine all provided information (bug description, code, logs) to understand the failure conditions.
|
||||
2. **Pinpoint Root Cause**: Go beyond surface-level symptoms to identify the fundamental logical error, race condition, data corruption, or configuration issue.
|
||||
3. **Propose Precise Correction**: Formulate a clear and targeted suggestion for how to fix the bug.
|
||||
4. **Explain the Why**: Justify why the proposed correction effectively resolves the root cause.
|
||||
5. **Assess Risks & Side Effects**: Identify potential negative impacts of the fix and suggest verification steps.
|
||||
6. **Professional Chinese Output**: Produce a highly structured, professional diagnostic report and correction plan entirely in Chinese.
|
||||
7. **Show Your Work (CoT)**: Demonstrate your analytical process clearly in the 思考过程 section.
|
||||
## Objectives
|
||||
1. Identify root cause (not just symptoms)
|
||||
2. Propose targeted correction with justification
|
||||
3. Assess risks and side effects
|
||||
4. Provide verification steps
|
||||
|
||||
## IV. INPUT SPECIFICATIONS
|
||||
1. **Bug Description**: A description of the problem, including observed behavior vs. expected behavior.
|
||||
2. **Code Snippets/File Information**: Relevant source code where the bug is suspected to be.
|
||||
3. **Logs/Stack Traces (Highly Recommended)**: Error messages, logs, or stack traces associated with the bug.
|
||||
4. **Reproduction Steps (Optional)**: Steps to reproduce the bug.
|
||||
## Input
|
||||
- Bug description (observed vs. expected behavior)
|
||||
- Code snippets or file locations
|
||||
- Logs, stack traces, error messages
|
||||
- Reproduction steps (if available)
|
||||
|
||||
## V. RESPONSE STRUCTURE & CONTENT (Strictly Adhere - Output in Chinese)
|
||||
## Output Structure (Required)
|
||||
|
||||
Your response **MUST** be in Chinese and structured in Markdown as follows:
|
||||
Output in Chinese using this Markdown structure:
|
||||
|
||||
---
|
||||
|
||||
### 0. 诊断思维链 (Diagnostic Chain-of-Thought)
|
||||
* *(在此处,您必须结构化地展示您的诊断流程。)*
|
||||
* **1. 症状分析 (Symptom Analysis):** 我首先将用户的描述、日志和错误信息进行归纳,提炼出关键的异常行为和技术线索。
|
||||
* **2. 代码勘察与初步假设 (Code Exploration & Initial Hypothesis):** 基于症状,我将定位到最可疑的代码区域,并提出一个关于根本原因的初步假设。
|
||||
* **3. 逻辑推演与根本原因定位 (Logical Deduction & Root Cause Pinpointing):** 我将沿着代码执行路径进行深入推演,验证或修正我的假设,直至锁定导致错误的精确逻辑点。
|
||||
* **4. 修复方案设计 (Correction Strategy Design):** 在确定根本原因后,我将设计一个最直接、风险最低的修复方案。
|
||||
* **5. 影响评估与验证规划 (Impact Assessment & Verification Planning):** 我会评估修复方案可能带来的副作用,并构思如何验证修复的有效性及系统的稳定性。
|
||||
Present your analysis process in these steps:
|
||||
1. **症状分析**: Summarize error symptoms and technical clues
|
||||
2. **初步假设**: Identify suspicious code areas and form initial hypothesis
|
||||
3. **根本原因定位**: Trace execution path to pinpoint exact cause
|
||||
4. **修复方案设计**: Design targeted, low-risk correction
|
||||
5. **影响评估**: Assess side effects and plan verification
|
||||
|
||||
### **故障诊断与修复建议报告 (Bug Diagnosis & Correction Proposal)**
|
||||
|
||||
@@ -114,17 +111,17 @@ Your response **MUST** be in Chinese and structured in Markdown as follows:
|
||||
---
|
||||
*(对每个需要修改的文件重复上述格式)*
|
||||
|
||||
## VI. KEY DIRECTIVES & CONSTRAINTS
|
||||
1. **Language**: **All** descriptive parts MUST be in **Chinese**.
|
||||
2. **No Full Code Generation**: **Strictly refrain** from writing complete functions or files. Your correction suggestions should be concise, using single lines, `diff` format, or pseudo-code to illustrate the change. Your role is to guide the developer, not replace them.
|
||||
3. **Focus on RCA**: The quality of your Root Cause Analysis is paramount. It must be logical, convincing, and directly supported by the evidence.
|
||||
4. **State Assumptions**: If the provided information is insufficient to be 100% certain, clearly state your assumptions in the 诊断分析过程 section.
|
||||
## Key Requirements
|
||||
1. **Language**: All output in Chinese
|
||||
2. **No Code Generation**: Use diff format or pseudo-code only. Do not write complete functions or files
|
||||
3. **Focus on Root Cause**: Analysis must be logical and evidence-based
|
||||
4. **State Assumptions**: Clearly note any assumptions when information is incomplete
|
||||
|
||||
## VII. SELF-CORRECTION / REFLECTION
|
||||
* Before finalizing your response, review it to ensure:
|
||||
* The 诊断思维链 accurately reflects a logical debugging process.
|
||||
* The Root Cause Analysis is deep, clear, and compelling.
|
||||
* The proposed correction directly addresses the identified root cause.
|
||||
* The correction suggestion is minimal and precise (not large-scale refactoring).
|
||||
* The verification steps are actionable and cover both success and failure cases.
|
||||
* You have strictly avoided generating large blocks of code.
|
||||
## Self-Review Checklist
|
||||
Before providing final output, verify:
|
||||
- [ ] Diagnostic chain reflects logical debugging process
|
||||
- [ ] Root cause analysis is clear and evidence-based
|
||||
- [ ] Correction directly addresses root cause (not just symptoms)
|
||||
- [ ] Correction is minimal and targeted (not broad refactoring)
|
||||
- [ ] Verification steps are actionable
|
||||
- [ ] No complete code blocks generated
|
||||
|
||||
@@ -1,10 +1,17 @@
|
||||
Analyze implementation patterns and code structure.
|
||||
|
||||
## CORE CHECKLIST ⚡
|
||||
□ Analyze ALL files in CONTEXT (not just samples)
|
||||
□ Provide file:line references for every pattern identified
|
||||
□ Distinguish between good patterns and anti-patterns
|
||||
□ Apply RULES template requirements exactly as specified
|
||||
## Planning Required
|
||||
Before providing analysis, you MUST:
|
||||
1. Review all files in context (not just samples)
|
||||
2. Identify patterns with file:line references
|
||||
3. Distinguish good patterns from anti-patterns
|
||||
4. Apply template requirements
|
||||
|
||||
## Core Checklist
|
||||
- [ ] Analyze ALL files in CONTEXT
|
||||
- [ ] Provide file:line references for each pattern
|
||||
- [ ] Distinguish good patterns from anti-patterns
|
||||
- [ ] Apply RULES template requirements
|
||||
|
||||
## REQUIRED ANALYSIS
|
||||
1. Identify common code patterns and architectural decisions
|
||||
@@ -19,10 +26,12 @@ Analyze implementation patterns and code structure.
|
||||
- Clear recommendations for pattern improvements
|
||||
- Standards compliance assessment with priority levels
|
||||
|
||||
## VERIFICATION CHECKLIST ✓
|
||||
□ All CONTEXT files analyzed (not partial coverage)
|
||||
□ Every pattern backed by code reference (file:line)
|
||||
□ Anti-patterns clearly distinguished from good patterns
|
||||
□ Recommendations prioritized by impact
|
||||
## Verification Checklist
|
||||
Before finalizing output, verify:
|
||||
- [ ] All CONTEXT files analyzed
|
||||
- [ ] Every pattern has code reference (file:line)
|
||||
- [ ] Anti-patterns clearly distinguished
|
||||
- [ ] Recommendations prioritized by impact
|
||||
|
||||
Focus: Actionable insights with concrete implementation guidance.
|
||||
## Output Requirements
|
||||
Provide actionable insights with concrete implementation guidance.
|
||||
|
||||
@@ -0,0 +1,33 @@
|
||||
Analyze technical documents, research papers, and specifications systematically.
|
||||
|
||||
## CORE CHECKLIST ⚡
|
||||
□ Plan analysis approach before reading (document type, key questions, success criteria)
|
||||
□ Provide section/page references for all claims and findings
|
||||
□ Distinguish facts from interpretations explicitly
|
||||
□ Use precise, direct language - avoid persuasive wording
|
||||
□ Apply RULES template requirements exactly as specified
|
||||
|
||||
## REQUIRED ANALYSIS
|
||||
1. Document assessment: type, structure, audience, quality indicators
|
||||
2. Content extraction: concepts, specifications, implementation details, constraints
|
||||
3. Critical evaluation: strengths, gaps, ambiguities, clarity issues
|
||||
4. Self-critique: verify citations, completeness, actionable recommendations
|
||||
5. Synthesis: key takeaways, integration points, follow-up questions
|
||||
|
||||
## OUTPUT REQUIREMENTS
|
||||
- Structured analysis with mandatory section/page references
|
||||
- Evidence-based findings with specific location citations
|
||||
- Clear separation of facts vs. interpretations
|
||||
- Actionable recommendations tied to document content
|
||||
- Integration points with existing project patterns
|
||||
- Identified gaps and ambiguities with impact assessment
|
||||
|
||||
## VERIFICATION CHECKLIST ✓
|
||||
□ Pre-analysis plan documented (3-5 bullet points)
|
||||
□ All claims backed by section/page references
|
||||
□ Self-critique completed before final output
|
||||
□ Language is precise and direct (no persuasive adjectives)
|
||||
□ Recommendations are specific and actionable
|
||||
□ Output length proportional to document size
|
||||
|
||||
Focus: Evidence-based insights extraction with pre-planning and self-critique for technical documents.
|
||||
@@ -1,10 +1,17 @@
|
||||
Create comprehensive tests for the codebase.
|
||||
|
||||
## CORE CHECKLIST ⚡
|
||||
□ Analyze existing test coverage and identify gaps
|
||||
□ Follow project testing frameworks and conventions
|
||||
□ Include unit, integration, and end-to-end tests
|
||||
□ Ensure tests are reliable and deterministic
|
||||
## Planning Required
|
||||
Before creating tests, you MUST:
|
||||
1. Analyze existing test coverage and identify gaps
|
||||
2. Study testing frameworks and conventions used
|
||||
3. Plan test strategy covering unit, integration, and e2e
|
||||
4. Design test data management approach
|
||||
|
||||
## Core Checklist
|
||||
- [ ] Analyze coverage gaps
|
||||
- [ ] Follow testing frameworks and conventions
|
||||
- [ ] Include unit, integration, and e2e tests
|
||||
- [ ] Ensure tests are reliable and deterministic
|
||||
|
||||
## IMPLEMENTATION PHASES
|
||||
|
||||
@@ -51,11 +58,13 @@ Create comprehensive tests for the codebase.
|
||||
- Test coverage metrics and quality improvements
|
||||
- File:line references for tested code
|
||||
|
||||
## VERIFICATION CHECKLIST ✓
|
||||
□ Test coverage gaps identified and filled
|
||||
□ All test types included (unit + integration + e2e)
|
||||
□ Tests are reliable and deterministic (no flaky tests)
|
||||
□ Test data properly managed (isolation + cleanup)
|
||||
□ Testing conventions followed consistently
|
||||
## Verification Checklist
|
||||
Before finalizing, verify:
|
||||
- [ ] Coverage gaps filled
|
||||
- [ ] All test types included
|
||||
- [ ] Tests are reliable (no flaky tests)
|
||||
- [ ] Test data properly managed
|
||||
- [ ] Conventions followed
|
||||
|
||||
Focus: High-quality, reliable test suite with comprehensive coverage.
|
||||
## Focus
|
||||
High-quality, reliable test suite with comprehensive coverage.
|
||||
|
||||
@@ -1,10 +1,17 @@
Implement a new feature following project conventions and best practices.

## CORE CHECKLIST ⚡
□ Study existing code patterns BEFORE implementing
□ Follow established project conventions and architecture
□ Include comprehensive tests (unit + integration)
□ Provide file:line references for all changes
## Planning Required
Before implementing, you MUST:
1. Study existing code patterns and conventions
2. Review project architecture and design principles
3. Plan implementation with error handling and tests
4. Document integration points and dependencies

## Core Checklist
- [ ] Study existing code patterns first
- [ ] Follow project conventions and architecture
- [ ] Include comprehensive tests
- [ ] Provide file:line references

## IMPLEMENTATION PHASES

@@ -39,11 +46,13 @@ Implement a new feature following project conventions and best practices.
- Documentation of new dependencies or configurations
- Test coverage summary

## VERIFICATION CHECKLIST ✓
□ Implementation follows existing patterns (no divergence)
□ Complete test coverage (unit + integration)
□ Documentation updated (code comments + external docs)
□ Integration verified (no breaking changes)
□ Security and performance validated
## Verification Checklist
Before finalizing, verify:
- [ ] Follows existing patterns
- [ ] Complete test coverage
- [ ] Documentation updated
- [ ] No breaking changes
- [ ] Security and performance validated

Focus: Production-ready implementation with comprehensive testing and documentation.
## Focus
Production-ready implementation with comprehensive testing and documentation.

@@ -1,10 +1,17 @@
Generate comprehensive module documentation focused on understanding and usage.
Generate module documentation focused on understanding and usage.

## CORE CHECKLIST ⚡
□ Explain WHAT the module does, WHY it exists, and HOW to use it
□ Do NOT duplicate API signatures from API.md; refer to it instead
□ Provide practical, real-world usage examples
□ Clearly define the module's boundaries and dependencies
## Planning Required
Before providing documentation, you MUST:
1. Understand what the module does and why it exists
2. Review existing documentation to avoid duplication
3. Prepare practical usage examples
4. Identify module boundaries and dependencies

## Core Checklist
- [ ] Explain WHAT, WHY, and HOW
- [ ] Reference API.md instead of duplicating signatures
- [ ] Include practical usage examples
- [ ] Define module boundaries and dependencies

## DOCUMENTATION STRUCTURE

@@ -31,10 +38,12 @@ Generate comprehensive module documentation focused on understanding and usage.
### 7. Common Issues
- List common problems and their solutions.

## VERIFICATION CHECKLIST ✓
□ The module's purpose, scope, and boundaries are clearly defined
□ Core concepts are explained for better understanding
□ Usage examples are practical and demonstrate real-world scenarios
□ All dependencies and configuration options are documented
## Verification Checklist
Before finalizing output, verify:
- [ ] Module purpose, scope, and boundaries are clear
- [ ] Core concepts are explained
- [ ] Usage examples are practical and realistic
- [ ] Dependencies and configuration are documented

Focus: Explaining the module's purpose and usage, not just its API.
## Focus
Explain module purpose and usage, not just API details.
@@ -1,51 +1,51 @@
# 软件架构规划模板
# AI Persona & Core Mission

You are a **Distinguished Senior Software Architect and Strategic Technical Planner**. Your primary function is to conduct a meticulous and insightful analysis of provided code, project context, and user requirements to devise an exceptionally clear, comprehensive, actionable, and forward-thinking modification plan. **Critically, you will *not* write or generate any code yourself; your entire output will be a detailed modification plan articulated in precise, professional Chinese.** You are an expert in anticipating dependencies, potential impacts, and ensuring the proposed plan is robust, maintainable, and scalable.
## Role & Output Requirements

## II. ROLE DEFINITION & CORE CAPABILITIES
1. **Role**: Distinguished Senior Software Architect and Strategic Technical Planner.
2. **Core Capabilities**:
    * **Deep Code Comprehension**: Ability to rapidly understand complex existing codebases (structure, patterns, dependencies, data flow, control flow).
    * **Requirements Analysis & Distillation**: Skill in dissecting user requirements, identifying core needs, and translating them into technical planning objectives.
    * **Software Design Principles**: Strong grasp of SOLID, DRY, KISS, design patterns, and architectural best practices.
    * **Impact Analysis & Risk Assessment**: Expertise in identifying potential side effects, inter-module dependencies, and risks associated with proposed changes.
    * **Strategic Planning**: Ability to formulate logical, step-by-step modification plans that are efficient and minimize disruption.
    * **Clear Technical Communication (Chinese)**: Excellence in conveying complex technical plans and considerations in clear, unambiguous Chinese for a developer audience.
    * **Visual Logic Representation**: Ability to sketch out intended logic flows using concise diagrammatic notations.
3. **Core Thinking Mode**:
    * **Systematic & Holistic**: Approach analysis and planning with a comprehensive view of the system.
    * **Critical & Forward-Thinking**: Evaluate requirements critically and plan for future maintainability and scalability.
    * **Problem-Solver**: Focus on devising effective solutions through planning.
    * **Chain-of-Thought (CoT) Driven**: Explicitly articulate your reasoning process, especially when making design choices within the plan.
**Role**: Software architect specializing in technical planning
**Output Format**: Modification plan in Chinese following the specified structure
**Constraints**: Do NOT write or generate code. Provide planning and strategy only.

## III. OBJECTIVES
1. **Thoroughly Understand Context**: Analyze user-provided code, modification requirements, and project background to gain a deep understanding of the existing system and the goals of the modification.
2. **Meticulous Code Analysis for Planning**: Identify all relevant code sections, their current logic, and how they interrelate, quoting relevant snippets for context.
3. **Devise Actionable Modification Plan**: Create a detailed, step-by-step plan outlining *what* changes are needed, *where* they should occur, *why* they are necessary, and the *intended logic* of the new/modified code.
4. **Illustrate Intended Logic**: For each significant logical change proposed, visually represent the *intended* new or modified control flow and data flow using a concise call flow diagram.
5. **Contextualize for Implementation**: Provide all necessary contextual information (variables, data structures, dependencies, potential side effects) to enable a developer to implement the plan accurately.
6. **Professional Chinese Output**: Produce a highly structured, professional planning document entirely in Chinese, adhering to the specified Markdown format.
7. **Show Your Work (CoT)**: Before presenting the plan, outline your analytical framework, key considerations, and how you approached the planning task.
## Core Capabilities
- Understand complex codebases (structure, patterns, dependencies, data flow)
- Analyze requirements and translate to technical objectives
- Apply software design principles (SOLID, DRY, KISS, design patterns)
- Assess impacts, dependencies, and risks
- Create step-by-step modification plans

## IV. INPUT SPECIFICATIONS
1. **Code Snippets/File Information**: User-provided source code, file names, paths, or descriptions of relevant code sections.
2. **Modification Requirements**: Specific instructions or goals for what needs to be changed or achieved.
3. **Project Context (Optional)**: Any background information about the project or system.
## Planning Process (Required)
**Before providing your final plan, you MUST:**
1. Analyze requirements and identify technical objectives
2. Explore existing code structure and patterns
3. Identify modification points and formulate strategy
4. Assess dependencies and risks
5. Present structured modification plan

## V. RESPONSE STRUCTURE & CONTENT (Strictly Adhere - Output in Chinese)
## Objectives
1. Understand context (code, requirements, project background)
2. Analyze relevant code sections and their relationships
3. Create step-by-step modification plan (what, where, why, how)
4. Illustrate intended logic using call flow diagrams
5. Provide implementation context (variables, dependencies, side effects)

Your response **MUST** be in Chinese and structured in Markdown as follows:
## Input
- Code snippets or file locations
- Modification requirements and goals
- Project context (if available)

## Output Structure (Required)

Output in Chinese using this Markdown structure:

---

### 0. 思考过程与规划策略 (Thinking Process & Planning Strategy)
* *(在此处,您必须结构化地展示您的分析框架和规划流程。)*
    * **1. 需求解析 (Requirement Analysis):** 我首先将用户的原始需求进行拆解和澄清,确保完全理解其核心目标和边界条件。
    * **2. 现有代码结构勘探 (Existing Code Exploration):** 基于提供的代码片段,我将分析其当前的结构、逻辑流和关键数据对象,以建立修改的基线。
    * **3. 核心修改点识别与策略制定 (Identification of Core Modification Points & Strategy Formulation):** 我将识别出需要修改的关键代码位置,并为每个修改点制定高级别的技术策略(例如,是重构、新增还是调整)。
    * **4. 依赖与风险评估 (Dependency & Risk Assessment):** 我会评估提议的修改可能带来的模块间依赖关系变化,以及潜在的风险(如性能下降、兼容性问题、边界情况处理不当等)。
    * **5. 规划文档结构设计 (Plan Document Structuring):** 最后,我将依据上述分析,按照指定的格式组织并撰写这份详细的修改规划方案。
Present your planning process in these steps:
1. **需求解析**: Break down requirements and clarify core objectives
2. **代码结构勘探**: Analyze current code structure and logic flow
3. **核心修改点识别**: Identify modification points and formulate strategy
4. **依赖与风险评估**: Assess dependencies and risks
5. **规划文档组织**: Organize planning document

### **代码修改规划方案 (Code Modification Plan)**

@@ -93,25 +93,17 @@ Your response **MUST** be in Chinese and structured in Markdown as follows:
---
*(对每个需要修改的文件重复上述格式)*

## VI. STYLE & TONE (Chinese Output)
* **Professional & Authoritative**: Maintain a formal, expert tone befitting a Senior Architect.
* **Analytical & Insightful**: Demonstrate deep understanding and strategic thinking.
* **Precise & Unambiguous**: Use clear, exact technical Chinese terminology.
* **Structured & Actionable**: Ensure the plan is well-organized and provides clear guidance.
## Key Requirements
1. **Language**: All output in Chinese
2. **No Code Generation**: Do not write actual code. Provide descriptive modification plan only
3. **Focus**: Detail what and why. Use logic sketches to illustrate how
4. **Completeness**: State assumptions clearly when information is incomplete

## VII. KEY DIRECTIVES & CONSTRAINTS
1. **Language**: **All** descriptive parts of your plan **MUST** be in **Chinese**.
2. **No Code Generation**: **Strictly refrain** from writing, suggesting, or generating any actual code. Your output is *purely* a descriptive modification plan.
3. **Focus on What and Why, Illustrate How (Logic Sketch)**: Detail what needs to be done and why. The call flow sketch illustrates the *intended how* at a logical level, not implementation code.
4. **Completeness & Accuracy**: Ensure the plan is comprehensive. If information is insufficient, state assumptions clearly in the 思考过程 (Thinking Process) and 必要上下文 (Necessary Context).
5. **Professional Standard**: Your plan should meet the standards expected of a senior technical document, suitable for guiding development work.

## VIII. SELF-CORRECTION / REFLECTION
* Before finalizing your response, review it to ensure:
    * The 思考过程 (Thinking Process) clearly outlines your structured analytical approach.
    * All user requirements from 需求分析 have been addressed in the plan.
    * The modification plan is logical, actionable, and sufficiently detailed, with relevant original code snippets for context.
    * The 修改理由 (Reason for Modification) explicitly links back to the initial requirements.
    * All crucial context and risks are highlighted.
    * The entire output is in professional, clear Chinese and adheres to the specified Markdown structure.
    * You have strictly avoided generating any code.
## Self-Review Checklist
Before providing final output, verify:
- [ ] Thinking process outlines structured analytical approach
- [ ] All requirements addressed in the plan
- [ ] Plan is logical, actionable, and detailed
- [ ] Modification reasons link back to requirements
- [ ] Context and risks are highlighted
- [ ] No actual code generated

@@ -65,6 +65,7 @@ codex -C [dir] --full-auto exec "[prompt]" [--skip-git-repo-check -s danger-full
| Architecture Planning | Gemini → Qwen | analysis | `planning/01-plan-architecture-design.txt` |
| Code Pattern Analysis | Gemini → Qwen | analysis | `analysis/02-analyze-code-patterns.txt` |
| Architecture Review | Gemini → Qwen | analysis | `analysis/02-review-architecture.txt` |
| Document Analysis | Gemini → Qwen | analysis | `analysis/02-analyze-technical-document.txt` |
| Feature Implementation | Codex | auto | `development/02-implement-feature.txt` |
| Component Development | Codex | auto | `development/02-implement-component-ui.txt` |
| Test Generation | Codex | write | `development/02-generate-tests.txt` |

@@ -519,13 +520,14 @@ When no specific template matches your task requirements, use one of these unive
**Available Templates**:
```
prompts/
├── universal/                    # ← NEW: Universal fallback templates
├── universal/                    # ← Universal fallback templates
│   ├── 00-universal-rigorous-style.txt    # Precision & standards-driven
│   └── 00-universal-creative-style.txt    # Innovation & exploration-focused
├── analysis/
│   ├── 01-trace-code-execution.txt
│   ├── 01-diagnose-bug-root-cause.txt
│   ├── 02-analyze-code-patterns.txt
│   ├── 02-analyze-technical-document.txt
│   ├── 02-review-architecture.txt
│   ├── 02-review-code-quality.txt
│   ├── 03-analyze-performance.txt
@@ -556,6 +558,7 @@ prompts/
| Execution Tracing | Gemini (Qwen fallback) | `analysis/01-trace-code-execution.txt` |
| Bug Diagnosis | Gemini (Qwen fallback) | `analysis/01-diagnose-bug-root-cause.txt` |
| Code Pattern Analysis | Gemini (Qwen fallback) | `analysis/02-analyze-code-patterns.txt` |
| Document Analysis | Gemini (Qwen fallback) | `analysis/02-analyze-technical-document.txt` |
| Architecture Review | Gemini (Qwen fallback) | `analysis/02-review-architecture.txt` |
| Code Review | Gemini (Qwen fallback) | `analysis/02-review-code-quality.txt` |
| Performance Analysis | Gemini (Qwen fallback) | `analysis/03-analyze-performance.txt` |

@@ -29,6 +29,7 @@ For all CLI tool usage, command syntax, and integration guidelines:
- **Clear intent over clever code** - Be boring and obvious
- **Follow existing code style** - Match import patterns, naming conventions, and formatting of existing codebase
- **No unsolicited reports** - Task summaries can be performed internally, but NEVER generate additional reports, documentation files, or summary files without explicit user permission
- **Minimal documentation output** - Avoid unnecessary documentation. If required, save to .workflow/.scratchpad/

### Simplicity Means

@@ -1,278 +0,0 @@
# Command Documentation Audit Report

**Audit date**: 2025-11-20
**Audit scope**: 73 command documentation files
**Audit method**: automated scanning + manual content analysis

---

## Findings

### 1. Files Containing Version Information

#### [CRITICAL] version.md
**File path**: `/home/user/Claude-Code-Workflow/.claude/commands/version.md`

**Problem locations**:
- Lines 1-3: included in the YAML front matter
- Lines 96-102: examples contain full version numbers and release dates (e.g. "v3.2.2", "2025-10-03")
- Lines 127-130: development version number and date
- Lines 155-172: version comparison and upgrade recommendations

**Content excerpt**:
```
Latest Stable: v3.2.2
Release: v3.2.2: Independent Test-Gen Workflow with Cross-Session Context
Published: 2025-10-03T04:10:08Z

Latest Dev: a03415b
Message: feat: Add version tracking and upgrade check system
Date: 2025-10-03T04:46:44Z
```

**Severity**: ⚠️ High - the file is by nature a version-management command, yet it hard-codes concrete version numbers, release dates, and a full version history

---

### 2. Files Containing Extraneous Content

#### [HIGH] tdd-plan.md
**File path**: `/home/user/Claude-Code-Workflow/.claude/commands/workflow/tdd-plan.md`

**Problem location**: lines 420-523

**Excerpt**:
```markdown
## TDD Workflow Enhancements

### Overview
The TDD workflow has been significantly enhanced by integrating best practices
from both traditional `plan --agent` and `test-gen` workflows...

### Key Improvements

#### 1. Test Coverage Analysis (Phase 3)
**Adopted from test-gen workflow**

#### 2. Iterative Green Phase with Test-Fix Cycle
**Adopted from test-gen workflow**

#### 3. Agent-Driven Planning
**From plan --agent workflow**

### Workflow Comparison
| Aspect | Previous | Current (Optimized) |
| **Task Count** | 5 features = 15 tasks | 5 features = 5 tasks (70% reduction) |
| **Task Management** | High overhead (15 tasks) | Low overhead (5 tasks) |

### Migration Notes
**Backward Compatibility**: Fully compatible
- Existing TDD workflows continue to work
- New features are additive, not breaking
```

**Analysis**:
- Contains version-history language such as "enhancements", "improvements", "evolution"
- Contains a "Workflow Comparison" section contrasting "previous" and "current" versions
- Contains "Migration Notes" describing the upgrade path from older versions
- Roughly 100 lines (420-523) describe how the command was improved, not how to use it

**Severity**: ⚠️ Medium-high - about 18% of the file (100/543 lines) concerns version evolution rather than core functionality

---

### 3. Files Lacking Task Focus

#### [MEDIUM] tdd-plan.md (continued)
**Problem**: the file over-documents its integration with other commands (plan, test-gen)

**Relevant sections**:
- Lines 475-488: comparison with the "plan --agent" workflow
- Lines 427-441: features "adopted" from the test-gen workflow
- Lines 466-473: features "adopted" from the plan --agent workflow

**Analysis**: while these integration notes may be useful, over-emphasizing relationships with other commands dilutes the document's focus. Such content belongs in project-level or architecture documentation, not in an individual command document.

**Severity**: ⚠️ Medium - reduces focus, but not a serious problem

---

## Compliance Statistics

### Summary of Results

| Category | Count | Percentage |
|------|------|--------|
| **Fully compliant files** | 70 | 95.9% |
| **Files with version information** | 1 | 1.4% |
| **Files with extraneous content** | 1 | 1.4% |
| **Files lacking task focus** | 1* | 1.4% |
| **Total** | 73 | 100% |

*Note: tdd-plan.md appears in both the "extraneous content" and "lacking focus" categories

### Severity Distribution

| Severity | Files | Notes |
|---------|--------|------|
| CRITICAL | 0 | No issues requiring immediate blocking |
| HIGH | 1 | version.md - contains full version numbers and release information |
| MEDIUM | 1 | tdd-plan.md - contains excessive version-evolution notes and workflow comparisons |
| LOW | 0 | No other issues |

---

## Detailed Findings

### version.md - Full Analysis

**Nature of the issue**: the very purpose of the version.md command is to manage and report version information. Including version numbers, release dates, and changelogs is not only reasonable but required.

**From the audit's perspective**, per the user's audit criteria:
- ✓ "Contains version numbers, version history, changelog, etc." - **yes, explicitly**
- Example version numbers: v3.2.1, v3.2.2, 3.4.0-dev
- Release date: 2025-10-03T12:00:00Z
- Version history and upgrade paths

**Conclusion**: the file matches the "version information" audit category and should be flagged (even though this is a functional requirement)

---

### tdd-plan.md - Full Analysis

**Issue 1 - extraneous version-evolution information**:
```
## TDD Workflow Enhancements (line 420)
### Overview
The TDD workflow has been **significantly enhanced** by integrating best practices
from **both traditional `plan --agent` and `test-gen` workflows**

### Key Improvements
#### 1. Test Coverage Analysis (Phase 3)
**Adopted from test-gen workflow** (line 428)

#### 2. Iterative Green Phase with Test-Fix Cycle
**Adopted from test-gen workflow** (line 443)

#### 3. Agent-Driven Planning
**From plan --agent workflow** (line 467)
```

This content is entirely about the command's history and improvements, not about how to use it.

**Issue 2 - workflow comparison table**:
```
### Workflow Comparison (line 475)
| Aspect | Previous | Current (Optimized) |
| **Phases** | 6 | 7 |
| **Task Count** | 5 features = 15 tasks | 5 features = 5 tasks (70% reduction) |
```

It directly contrasts the "previous" and "current" implementations, which is version-history content.

**Issue 3 - migration notes**:
```
### Migration Notes (line 490)
**Backward Compatibility**: Fully compatible
- Existing TDD workflows continue to work
- New features are additive, not breaking
```

These are upgrade-path notes, not part of the command's core documentation.

**Statistics**:
- Total lines: 543
- Problem lines: ~103 (lines 420-523)
- Share: ~19%

**Conclusion**: tdd-plan.md violates two audit criteria at once:
1. Contains version-evolution history
2. Over-describes relationships with other commands (lack of task focus)

---

## Recommendations

### High Priority

1. **Remove concrete version numbers from version.md**
   - Current: hard-coded version numbers, dates, and similar details
   - Suggestion: obtain version information at runtime or via variables; the document should only describe what the version command does
   - Rationale: version numbers should be managed by the version-control system, not hard-coded in documentation

2. **Remove lines 420-523 (the version-evolution section) from tdd-plan.md**
   - Current: ~103 lines about "enhancements", "improvements", and "migration"
   - Suggestion: move to a separate CHANGELOG.md or project-level documentation
   - Rationale: this is historical evolution, not a usage guide

### Medium Priority

3. **Refactor the workflow relationships in tdd-plan.md**
   - Current: lines 475-495 compare the command with others in detail
   - Suggestion: simplify references to other commands; a "Related Commands" section suffices
   - Rationale: over-focusing on other commands dilutes the document

4. **Unify the version-information management policy**
   - Suggestion: establish project-level documentation standards specifying what information may appear in command documents
   - Scope: all command documentation

---

## Compliance Rating

### Overall score: 96/100

- ✓ **High overall quality**: 95.9% of files are fully compliant
- ⚠️ **Two files need remediation**:
  - version.md: version-information management needs improvement
  - tdd-plan.md: version-evolution content should be split out

### Recommended Actions

| Priority | Action | Expected Impact |
|--------|------|---------|
| **High** | Remove hard-coded version numbers from version.md | Improves maintainability of version management |
| **High** | Remove lines 420-523 from tdd-plan.md | Improves focus; trims 19% of the file |
| **Medium** | Establish version-information standards | Prevents recurrence |
| **Low** | Simplify workflow-relationship notes in tdd-plan.md | Further improves clarity |

---

## Appendix

### Methodology

1. **Automated scan**: grep for keywords (version, changelog, release, history, etc.)
2. **Content analysis**: manually read the full content of matching files
3. **Structural analysis**: check for content unrelated to core functionality
4. **Statistical analysis**: compute the share of problematic content
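
The automated scan in step 1 can be sketched in a few lines of shell; the keyword list and the demo files below are illustrative assumptions, not the audit's exact invocation.

```shell
# Minimal sketch of the automated keyword scan described above.
# The keyword list and demo files are assumptions for illustration.
tmp=$(mktemp -d)
printf 'This command reports the current version and changelog.\n' > "$tmp/version.md"
printf 'Usage guide only.\n' > "$tmp/status.md"
# -r recurse, -i ignore case, -l list matching files, -E extended regex
matches=$(grep -rilE 'version|changelog|release|history' "$tmp")
basename "$matches"
```

In the actual audit, the scan would target `.claude/commands` rather than a temp directory.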

### Data Sources

- Total files: 73
- Files analyzed in detail: 15
- Files quick-scanned: 58

### File List (completeness check)

All audited command documents:
- ✓ version.md (issues found)
- ✓ enhance-prompt.md
- ✓ test-fix-gen.md
- ✓ test-gen.md
- ✓ test-cycle-execute.md
- ✓ tdd-plan.md (issues found)
- ✓ tdd-verify.md
- ✓ status.md
- ✓ review.md
- ✓ plan.md
- ✓ lite-plan.md
- ✓ lite-execute.md
- ✓ init.md
- ✓ execute.md
- ✓ action-plan-verify.md
- ... plus 58 other files (all compliant)

---

**Audit complete** - generated: 2025-11-20
@@ -1,274 +0,0 @@
# Command Flow Expression Standard

**Purpose**: standardizes how Task, SlashCommand, Skill, and Bash calls are expressed in command documentation

**Version**: v2.1.0

---

## Core Principles

1. **Uniform format** - all calls use a standardized format
2. **Clear parameters** - required parameters are explicit; optional ones go in square brackets
3. **Less redundancy** - avoid unnecessary echo commands and pipe operations
4. **Tools first** - prefer dedicated tools (Write/Read/Edit) over Bash workarounds
5. **Readability** - keep indentation and line breaks consistent

---

## 1. Task Call Standard (agent launch)

### Standard Format

```javascript
Task(
  subagent_type="agent-type",
  description="Brief description",
  prompt=`
FULL TASK PROMPT HERE
`
)
```

### Requirements

- `subagent_type`: agent type (string)
- `description`: brief description (5-10 words, starts with a verb)
- `prompt`: full task prompt (wrap multi-line content in backticks)
- Parameter fields indented 2 spaces

### Correct Examples

```javascript
// CLI execution agent
Task(
  subagent_type="cli-execution-agent",
  description="Analyze codebase patterns",
  prompt=`
PURPOSE: Identify code patterns for refactoring
TASK: Scan project files and extract common patterns
MODE: analysis
CONTEXT: @src/**/*
EXPECTED: Pattern list with usage examples
`
)

// Code development agent
Task(
  subagent_type="code-developer",
  description="Implement authentication module",
  prompt=`
GOAL: Build JWT-based authentication
SCOPE: User login, token validation, session management
CONTEXT: @src/auth/**/* @CLAUDE.md
`
)
```

---

## 2. SlashCommand Call Standard

### Standard Format

```javascript
SlashCommand(command="/category:command-name [flags] arguments")
```

### Requirements

Single-line call | double-quoted | full path `/category:command-name` | parameter order: flags → argument values

### Correct Examples

```javascript
// No arguments
SlashCommand(command="/workflow:status")

// With flags and arguments
SlashCommand(command="/workflow:session:start --auto \"task description\"")

// Variable substitution
SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"description\"")

// Multiple flags
SlashCommand(command="/workflow:plan --agent --cli-execute \"feature description\"")
```

---

## 3. Skill Call Standard

### Standard Format

```javascript
Skill(command: "skill-name")
```

### Requirements

Single-line call | colon syntax `command:` | skill-name in double quotes

### Correct Examples

```javascript
// Project SKILL
Skill(command: "claude_dms3")

// Tech-stack SKILL
Skill(command: "react-dev")

// Workflow SKILL
Skill(command: "workflow-progress")

// Variable substitution
Skill(command: "${skill_name}")
```

---

## 4. Bash Command Standard

### Core principle: prefer dedicated tools

**Tool priority**:
1. **Write tool** → create/overwrite file content
2. **Edit tool** → modify existing file content
3. **Read tool** → read file content
4. **Bash commands** → only for genuine system operations (git, npm, test, etc.)

### Standard Format

```javascript
bash(command args)
```

### Appropriate Uses of Bash

```javascript
// ✅ Git operations
bash(git status --short)
bash(git commit -m "commit message")

// ✅ Package managers and tests
bash(npm install)
bash(npm test)

// ✅ Filesystem queries and text processing
bash(find .workflow -name "*.json" -type f)
bash(rg "pattern" --type js --files-with-matches)
```

### When to Avoid Bash

```javascript
// ❌ File creation/writes → use the Write tool
bash(echo "content" > file.txt)                       // wrong
Write({file_path: "file.txt", content: "content"})    // correct

// ❌ File reads → use the Read tool
bash(cat file.txt)                                    // wrong
Read({file_path: "file.txt"})                         // correct

// ❌ Simple string handling → do it in code
bash(echo "text" | tr '[:upper:]' '[:lower:]')        // wrong
"text".toLowerCase()                                  // correct
```

---

## 5. Composite Call Patterns (pseudocode guidelines)

### Core Guidelines

Write execution logic directly (no FUNCTION/END wrappers) | section with `#` comments | assignment `variable = value` | conditionals `IF/ELSE` | loops `FOR` | validation `VALIDATE` | errors `ERROR + EXIT 1`

### Sequential Calls (with dependencies)

```pseudo
# Phase 1-2: Session and Context
sessionId = SlashCommand(command="/workflow:session:start --auto \"description\"")
PARSE sessionId from output
VALIDATE: bash(test -d .workflow/{sessionId})

contextPath = SlashCommand(command="/workflow:tools:context-gather --session {sessionId} \"desc\"")
context_json = READ(contextPath)

# Phase 3-4: Conditional and Agent
IF context_json.conflict_risk IN ["medium", "high"]:
  SlashCommand(command="/workflow:tools:conflict-resolution --session {sessionId}")

Task(subagent_type="action-planning-agent", description="Generate tasks", prompt=`SESSION: {sessionId}`)

VALIDATE: bash(test -f .workflow/{sessionId}/IMPL_PLAN.md)
RETURN summary
```

### Parallel Calls (no dependencies)

```pseudo
PARALLEL_START:
  check_git = bash(git status)
  check_count = bash(find .workflow -name "*.json" | wc -l)
  check_skill = Skill(command: "project-name")
WAIT_ALL_COMPLETE
VALIDATE results
RETURN summary
```

### Conditional Branch Calls

```pseudo
IF task_type CONTAINS "test": agent = "test-fix-agent"
ELSE IF task_type CONTAINS "implement": agent = "code-developer"
ELSE: agent = "universal-executor"

Skill(command: "project-name")
Task(subagent_type=agent, description="Execute task", prompt=build_prompt(task_type))
VALIDATE output
RETURN result
```

---

## 6. Variable and Placeholder Conventions

| Context | Format | Example |
|--------|------|------|
| **Markdown prose** | `[variableName]` | `[sessionId]`, `[contextPath]` |
| **JavaScript code** | `${variableName}` | `${sessionId}`, `${contextPath}` |
| **Bash commands** | `$variable` | `$session_id`, `$context_path` |
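
The bash-style row can be illustrated with a short sketch; `session_id` and the path below are made-up values for illustration only.

```shell
# Bash-style placeholder from the table above; the Markdown and
# JavaScript forms of the same variable would be [sessionId] and
# a JS template literal with sessionId. The value is made up.
session_id="WF-20251120-001"
plan_path=".workflow/$session_id/IMPL_PLAN.md"
echo "$plan_path"
```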

---

## 7. Quick Checklist

**Task**: subagent_type specified | description ≤ 10 words | prompt in backticks | 2-space indent

**SlashCommand**: full path `/category:command` | flags first | variables as `[var]` | double-quoted

**Skill**: colon syntax `command:` | double quotes | single-line format

**Bash**: could Write/Edit/Read do this? | avoid unnecessary echo | genuine system operations only

---

## 8. Common Mistakes and Fixes

```javascript
// ❌ Mistake 1: unnecessary echo in Bash
bash(echo '{"status":"active"}' > status.json)
// ✅ Correct: use the Write tool
Write({file_path: "status.json", content: '{"status":"active"}'})

// ❌ Mistake 2: single-line Task format
Task(subagent_type="agent", description="Do task", prompt=`...`)
// ✅ Correct: multi-line format
Task(
  subagent_type="agent",
  description="Do task",
  prompt=`...`
)

// ❌ Mistake 3: Skill with an equals sign
Skill(command="skill-name")
// ✅ Correct: use a colon
Skill(command: "skill-name")
```

@@ -180,7 +180,7 @@ Commands for creating, listing, and managing workflow sessions.
|
||||
- **Syntax**: `/workflow:session:complete [--detailed]`
|
||||
- **Parameters**:
|
||||
- `--detailed` (Flag): Shows a more detailed completion summary.
|
||||
- **Responsibilities**: Marks the currently active session as "completed", records timestamps, and removes the `.active-*` marker file.
|
||||
- **Responsibilities**: Marks the currently active session as "completed", records timestamps, and moves the session from `.workflow/active/` to `.workflow/archives/`.
|
||||
- **Agent Calls**: None.
|
||||
- **Example**:
|
||||
```bash
|
||||
@@ -405,34 +405,23 @@ Specialized workflow for UI/UX design, from style extraction to prototype genera
 ```

 ### **/workflow:ui-design:imitate-auto**
-- **Syntax**: `/workflow:ui-design:imitate-auto --url-map "<map>" [--capture-mode <batch|deep>] ...`
-- **Responsibilities**: High-speed, multi-page UI replication workflow that captures screenshots and orchestrates the full design pipeline.
+- **Syntax**: `/workflow:ui-design:imitate-auto --input "<value>" [--session <id>]`
+- **Responsibilities**: UI design workflow with direct code/image input for design token extraction and prototype generation. Accepts local code files, images (glob patterns), or text descriptions.
 - **Agent Calls**: `@ui-design-agent`.
 - **Example**:
 ```bash
-/workflow:ui-design:imitate-auto --url-map "home:https://linear.app, features:https://linear.app/features"
+# Image reference
+/workflow:ui-design:imitate-auto --input "design-refs/*.png"
+
+# Code import
+/workflow:ui-design:imitate-auto --input "./src/components"
+
+# Text prompt
+/workflow:ui-design:imitate-auto --input "Modern minimalist design"
 ```

-### **/workflow:ui-design:capture**
-- **Syntax**: `/workflow:ui-design:capture --url-map "target:url,..." ...`
-- **Responsibilities**: Batch screenshot capture tool using MCP Chrome DevTools with multi-tier fallback strategy (MCP → Playwright → Chrome → Manual).
-- **Agent Calls**: None directly, uses MCP Chrome DevTools or browser automation as fallback.
-- **Example**:
-```bash
-/workflow:ui-design:capture --url-map "home:https://linear.app"
-```

-### **/workflow:ui-design:explore-layers**
-- **Syntax**: `/workflow:ui-design:explore-layers --url <url> --depth <1-5> ...`
-- **Responsibilities**: Performs a deep, interactive UI capture of a single URL, exploring layers from the full page down to the Shadow DOM.
-- **Agent Calls**: None directly, uses MCP Chrome DevTools for layer exploration.
-- **Example**:
-```bash
-/workflow:ui-design:explore-layers --url https://linear.app --depth 3
-```

 ### **/workflow:ui-design:style-extract**
-- **Syntax**: `/workflow:ui-design:style-extract [--images "..."] [--prompt "..."] ...`
+- **Syntax**: `/workflow:ui-design:style-extract [--images "<glob>"] [--prompt "<desc>"] [--variants <count>] ...`
 - **Responsibilities**: Extracts design styles from images or text prompts and generates production-ready design systems (`design-tokens.json`, `style-guide.md`).
 - **Agent Calls**: `@ui-design-agent`.
 - **Example**:

@@ -441,12 +430,12 @@ Specialized workflow for UI/UX design, from style extraction to prototype genera
 ```

 ### **/workflow:ui-design:layout-extract**
-- **Syntax**: `/workflow:ui-design:layout-extract [--images "..."] [--urls "..."] ...`
-- **Responsibilities**: Extracts structural layout information (HTML structure, CSS layout rules) separately from visual style.
+- **Syntax**: `/workflow:ui-design:layout-extract [--images "<glob>"] [--prompt "<desc>"] [--targets "<list>"] ...`
+- **Responsibilities**: Extracts structural layout information (HTML structure, CSS layout rules) from images or text prompts.
 - **Agent Calls**: `@ui-design-agent`.
 - **Example**:
 ```bash
-/workflow:ui-design:layout-extract --urls "home:https://linear.app" --mode imitate
+/workflow:ui-design:layout-extract --images "design-refs/*.png" --targets "home,dashboard"
 ```

 ### **/workflow:ui-design:generate**

@@ -1,126 +0,0 @@
# Command Template: Executor

**Purpose**: Template for executor commands that directly perform a specific function

**Characteristics**: Focused on implementing their own functionality; the Related Commands section is removed

---

## Template Structure

```markdown
---
name: command-name
description: Brief description of what this command does
argument-hint: "[flags] arguments"
allowed-tools: Read(*), Edit(*), Write(*), Bash(*), TodoWrite(*)
---

# Command Name (/category:command-name)

## Overview
Clear description of what this command does and its purpose.

**Key Characteristics**:
- Executes specific functionality directly
- Does NOT orchestrate other commands
- Focuses on single responsibility
- Returns concrete results

## Core Functionality
- Function 1: Description
- Function 2: Description
- Function 3: Description

## Usage

### Command Syntax
```bash
/category:command-name [FLAGS] <ARGUMENTS>

# Flags
--flag1    Description
--flag2    Description

# Arguments
<arg1>     Description
<arg2>     Description (optional)
```

## Execution Process

### Step 1: Step Name
Description of what happens in this step

**Operations**:
- Operation 1
- Operation 2

**Validation**:
- Check 1
- Check 2

---

### Step 2: Step Name
[Repeat for each step]

---

## Input/Output

### Input Requirements
- Input 1: Description and format
- Input 2: Description and format

### Output Format
```
Output description and structure
```

## Error Handling

### Common Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Error message 1 | Root cause | How to fix |
| Error message 2 | Root cause | How to fix |

## Best Practices

1. **Practice 1**: Description and rationale
2. **Practice 2**: Description and rationale
3. **Practice 3**: Description and rationale
```

---

## Usage Rules

### Core Principles
1. **Remove Related Commands** - executors do not coordinate other commands
2. **Focus on a single responsibility** - each executor does exactly one thing
3. **Clear step breakdown** - make the execution flow explicit
4. **Complete error handling** - list common errors and their resolutions

### Optional Sections
Depending on the command, the following sections are optional:
- **Configuration**: when the command has configuration parameters
- **Output Files**: when the command generates files
- **Exit Codes**: when the command defines explicit exit codes
- **Environment Variables**: when the command depends on environment variables

### Formatting Requirements
- No emoji/icon decoration
- Plain-text status indicators
- Use tables to organize error information
- Provide practical example code

## Reference Examples

See the already-refactored executor commands:
- `.claude/commands/task/create.md`
- `.claude/commands/task/breakdown.md`
- `.claude/commands/task/execute.md`
- `.claude/commands/cli/execute.md`
- `.claude/commands/version.md`
@@ -1,140 +0,0 @@
# Command Template: Orchestrator

**Purpose**: Template for orchestrator commands that coordinate multiple sub-commands

**Characteristics**: Keeps the Related Commands section and spells out the chain of commands it calls

---

## Template Structure

```markdown
---
name: command-name
description: Brief description of what this command orchestrates
argument-hint: "[flags] arguments"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
---

# Command Name (/category:command-name)

## Overview
Clear description of what this command orchestrates and its role.

**Key Characteristics**:
- Orchestrates X phases/commands
- Coordinates between multiple slash commands
- Does NOT execute directly - delegates to specialized commands
- Manages workflow state and progress tracking

## Core Responsibilities
- Responsibility 1: Description
- Responsibility 2: Description
- Responsibility 3: Description

## Execution Flow

### Phase 1: Phase Name
**Command**: `SlashCommand(command="/command:name args")`

**Input**: Description of inputs

**Expected Behavior**:
- Behavior 1
- Behavior 2

**Parse Output**:
- Extract: variable name (pattern description)

**Validation**:
- Validation rule 1
- Validation rule 2

**TodoWrite**: Mark phase 1 completed, phase 2 in_progress

---

### Phase 2: Phase Name
[Repeat structure for each phase]

---

## TodoWrite Pattern

Track progress through all phases:

```javascript
TodoWrite({todos: [
  {"content": "Execute phase 1", "status": "in_progress|completed", "activeForm": "Executing phase 1"},
  {"content": "Execute phase 2", "status": "pending|in_progress|completed", "activeForm": "Executing phase 2"},
  {"content": "Execute phase 3", "status": "pending|in_progress|completed", "activeForm": "Executing phase 3"}
]})
```

## Data Flow

```
Phase 1: command-1 → output-1
           ↓
Phase 2: command-2 (input: output-1) → output-2
           ↓
Phase 3: command-3 (input: output-2) → final-result
```

## Error Handling

| Phase | Error | Action |
|-------|-------|--------|
| 1 | Error description | Recovery action |
| 2 | Error description | Recovery action |

## Usage Examples

### Basic Usage
```bash
/category:command-name
/category:command-name --flag "argument"
```

## Related Commands

**Prerequisite Commands**:
- `/command:prerequisite` - Description of when to use before this

**Called by This Command**:
- `/command:phase1` - Description (Phase 1)
- `/command:phase2` - Description (Phase 2)
- `/command:phase3` - Description (Phase 3)

**Follow-up Commands**:
- `/command:next` - Description of what to do after this
```

---

## Usage Rules

### Core Principles
1. **Keep Related Commands** - make the command call chain explicit
2. **Clear phase breakdown** - each phase is independently trackable
3. **Visualize the data flow** - show what each phase passes to the next
4. **Track with TodoWrite** - update execution progress in real time

### Related Commands Categories
- **Prerequisite Commands**: commands that must run before this one
- **Called by This Command**: sub-commands this command invokes (grouped by phase)
- **Follow-up Commands**: recommended next steps after this command

### Formatting Requirements
- No emoji/icon decoration
- Plain-text status indicators
- Use tables to organize error information
- Clear data-flow diagram

## Reference Examples

See the already-refactored orchestrator commands:
- `.claude/commands/workflow/plan.md`
- `.claude/commands/workflow/execute.md`
- `.claude/commands/workflow/session/complete.md`
- `.claude/commands/workflow/session/start.md`
@@ -434,8 +434,11 @@ services/
 **Objective**: Create a complete design system for a SaaS application

 ```bash
-# Extract design from reference
-/workflow:ui-design:imitate-auto --input "https://example-saas.com"
+# Extract design from local reference images
+/workflow:ui-design:imitate-auto --input "design-refs/*.png"

 # Or import from existing code
 /workflow:ui-design:imitate-auto --input "./src/components"

 # Or create from scratch
 /workflow:ui-design:explore-auto --prompt "Modern SaaS design system with primary components: buttons, inputs, cards, modals, navigation" --targets "button,input,card,modal,navbar" --style-variants 3

@@ -408,7 +408,7 @@ CCW includes a powerful, multi-phase workflow for UI design and prototyping, cap
 ### Key Commands

 - `/workflow:ui-design:explore-auto`: An exploratory workflow that generates multiple, distinct design variations based on a prompt.
-- `/workflow:ui-design:imitate-auto`: A replication workflow that creates high-fidelity prototypes from reference URLs.
+- `/workflow:ui-design:imitate-auto`: A design workflow that creates prototypes from local reference files (images, code) or text prompts.

 ### Example: Generating a UI from a Prompt

@@ -408,7 +408,7 @@ CCW includes a powerful, multi-stage UI design and prototyping workflow that can start from simple
 ### Key Commands

 - `/workflow:ui-design:explore-auto`: An exploratory workflow that generates multiple, distinct design variations from a prompt.
-- `/workflow:ui-design:imitate-auto`: A replication workflow that creates high-fidelity prototypes from reference URLs.
+- `/workflow:ui-design:imitate-auto`: A design workflow that creates prototypes from local reference files (images, code) or text prompts.

 ### Example: Generating a UI from a Prompt

File diff suppressed because it is too large
@@ -225,6 +225,7 @@ function get_backup_directory() {
 function backup_file_to_folder() {
     local file_path="$1"
     local backup_folder="$2"
+    local quiet="${3:-}" # Optional quiet mode

     if [ ! -f "$file_path" ]; then
         return 1
@@ -249,10 +250,16 @@ function backup_file_to_folder() {
     local backup_file_path="${backup_sub_dir}/${file_name}"

     if cp "$file_path" "$backup_file_path"; then
-        write_color "Backed up: $file_name" "$COLOR_INFO"
+        # Only output if not in quiet mode
+        if [ "$quiet" != "quiet" ]; then
+            write_color "Backed up: $file_name" "$COLOR_INFO"
+        fi
         return 0
     else
-        write_color "WARNING: Failed to backup file $file_path" "$COLOR_WARNING"
+        # Warnings are also suppressed in quiet mode
+        if [ "$quiet" != "quiet" ]; then
+            write_color "WARNING: Failed to backup file $file_path" "$COLOR_WARNING"
+        fi
         return 1
     fi
 }
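The quiet mode relies on a common shell idiom: an optional trailing argument read with a `${3:-}` default, so existing two-argument call sites keep working. A minimal standalone sketch (function and file names here are illustrative, not from `install.sh`):

```shell
#!/usr/bin/env bash
# Sketch of the optional "quiet" trailing-argument pattern: callers pass
# "quiet" as an extra argument to suppress per-file output; callers that
# omit it get the old verbose behavior unchanged.

log_copy() {
    local src="$1" dst="$2"
    local quiet="${3:-}"   # optional third argument, empty when omitted

    cp "$src" "$dst" || return 1

    # Only report the copy when not in quiet mode
    if [ "$quiet" != "quiet" ]; then
        echo "Backed up: $(basename "$src")"
    fi
    return 0
}

tmp=$(mktemp -d)
echo data > "$tmp/a.txt"
log_copy "$tmp/a.txt" "$tmp/a.bak"          # prints "Backed up: a.txt"
log_copy "$tmp/a.txt" "$tmp/b.bak" quiet    # prints nothing
rm -rf "$tmp"
```

Counting successes in the caller (as `merge_directory_contents` does with `backed_up_count`) then lets one summary line replace hundreds of per-file lines.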
@@ -443,14 +450,25 @@ function merge_directory_contents() {
         return 1
     fi

-    mkdir -p "$destination"
-    write_color "Created destination directory: $destination" "$COLOR_INFO"
+    # Create destination directory if it doesn't exist
+    if [ ! -d "$destination" ]; then
+        mkdir -p "$destination"
+        write_color "Created destination directory: $destination" "$COLOR_INFO"
+    fi

+    # Count total files first
+    local total_files=$(find "$source" -type f | wc -l)
     local merged_count=0
     local skipped_count=0
+    local backed_up_count=0
+    local processed_count=0
+
+    write_color "Processing $total_files files in $description..." "$COLOR_INFO"

     # Find all files recursively
     while IFS= read -r -d '' file; do
+        ((processed_count++))
+
         local relative_path="${file#$source/}"
         local dest_path="${destination}/${relative_path}"
         local dest_dir=$(dirname "$dest_path")
@@ -458,41 +476,58 @@ function merge_directory_contents() {
         mkdir -p "$dest_dir"

         if [ -f "$dest_path" ]; then
             local file_name=$(basename "$relative_path")

             # Use BackupAll mode for automatic backup without confirmation (default behavior)
             if [ "$BACKUP_ALL" = true ] && [ "$NO_BACKUP" = false ]; then
                 if [ -n "$backup_folder" ]; then
-                    backup_file_to_folder "$dest_path" "$backup_folder"
-                    write_color "Auto-backed up: $file_name" "$COLOR_INFO"
+                    # Quiet backup - no individual file output
+                    if backup_file_to_folder "$dest_path" "$backup_folder" "quiet"; then
+                        ((backed_up_count++))
+                    fi
                 fi
                 cp "$file" "$dest_path"
                 ((merged_count++))
             elif [ "$NO_BACKUP" = true ]; then
                 # No backup mode - ask for confirmation
                 if confirm_action "File '$relative_path' already exists. Replace it? (NO BACKUP)" false; then
                     cp "$file" "$dest_path"
                     ((merged_count++))
                 else
                     write_color "Skipped $file_name (no backup)" "$COLOR_WARNING"
                     ((skipped_count++))
                 fi
             elif confirm_action "File '$relative_path' already exists. Replace it?" false; then
                 if [ -n "$backup_folder" ]; then
-                    backup_file_to_folder "$dest_path" "$backup_folder"
-                    write_color "Backed up existing $file_name" "$COLOR_INFO"
+                    # Quiet backup - no individual file output
+                    if backup_file_to_folder "$dest_path" "$backup_folder" "quiet"; then
+                        ((backed_up_count++))
+                    fi
                 fi
                 cp "$file" "$dest_path"
                 ((merged_count++))
             else
                 write_color "Skipped $file_name" "$COLOR_WARNING"
                 ((skipped_count++))
             fi
         else
             cp "$file" "$dest_path"
             ((merged_count++))
         fi
+
+        # Show progress every 20 files
+        if [ $((processed_count % 20)) -eq 0 ] || [ "$processed_count" -eq "$total_files" ]; then
+            local percent=$((processed_count * 100 / total_files))
+            echo -ne "\rMerging $description: $processed_count/$total_files files ($percent%)..."
+        fi
     done < <(find "$source" -type f -print0)

-    write_color "✓ Merged $merged_count files, skipped $skipped_count files" "$COLOR_SUCCESS"
+    # Clear progress line
+    echo -ne "\r\033[K"
+
+    # Show summary
+    if [ "$backed_up_count" -gt 0 ]; then
+        write_color "✓ Merged $merged_count files ($backed_up_count backed up), skipped $skipped_count files" "$COLOR_SUCCESS"
+    else
+        write_color "✓ Merged $merged_count files, skipped $skipped_count files" "$COLOR_SUCCESS"
+    fi

     return 0
 }

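The progress display added here is the classic carriage-return pattern: reprint the same line with `\r`, throttle updates, then erase it with the ANSI erase-to-end-of-line sequence. A standalone sketch of that pattern (the `show_progress`/`clear_progress` helper names are illustrative, not from the script):

```shell
#!/usr/bin/env bash
# Sketch of a single-line progress display: \r returns the cursor to
# column 0 so the next print overwrites the line; ESC[K erases whatever
# is left of the old line once the loop finishes.

show_progress() {
    local processed="$1" total="$2" description="$3"
    local percent=$((processed * 100 / total))
    printf '\rMerging %s: %d/%d files (%d%%)...' "$description" "$processed" "$total" "$percent"
}

clear_progress() {
    printf '\r\033[K'
}

total=100
for ((i = 1; i <= total; i++)); do
    # Throttle updates: every 20 iterations, and always on the last one
    if ((i % 20 == 0)) || ((i == total)); then
        show_progress "$i" "$total" "demo"
    fi
done
clear_progress
echo "done"
```

Throttling matters: without the modulo check, a terminal redraw per file can dominate runtime on large trees.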
@@ -508,6 +543,10 @@ function install_global() {

     write_color "Global installation path: $user_home" "$COLOR_INFO"

+    # Clean up old installation before proceeding (fast move operation)
+    echo ""
+    move_old_installation "$user_home" "Global"
+
     # Initialize manifest
     local manifest_file=$(new_install_manifest "Global" "$user_home")

@@ -627,7 +666,7 @@ function install_global() {
     create_version_json "$global_claude_dir" "Global"

     # Save installation manifest
-    save_install_manifest "$manifest_file" "$user_home"
+    save_install_manifest "$manifest_file" "$user_home" "Global"

     return 0
 }
@@ -642,6 +681,10 @@ function install_path() {
     local global_claude_dir="${user_home}/.claude"
     write_color "Global path: $user_home" "$COLOR_INFO"

+    # Clean up old installation before proceeding (fast move operation)
+    echo ""
+    move_old_installation "$target_dir" "Path"
+
     # Initialize manifest
     local manifest_file=$(new_install_manifest "Path" "$target_dir")

@@ -700,11 +743,15 @@ function install_path() {
         fi
     done

-    # Global components - exclude local folders
+    # Global components - exclude local folders (use same efficient method as Global mode)
     write_color "Installing global components to $global_claude_dir..." "$COLOR_INFO"

-    local merged_count=0
+    # Create temporary directory for global files only
+    local temp_global_dir="/tmp/claude-global-$$"
+    mkdir -p "$temp_global_dir"
+
+    # Copy global files to temp directory (excluding local folders)
+    write_color "Preparing global components..." "$COLOR_INFO"
     while IFS= read -r -d '' file; do
         local relative_path="${file#$source_claude_dir/}"
         local top_folder=$(echo "$relative_path" | cut -d'/' -f1)
@@ -714,37 +761,28 @@ function install_path() {
             continue
         fi

-        local dest_path="${global_claude_dir}/${relative_path}"
-        local dest_dir=$(dirname "$dest_path")
+        local temp_dest_path="${temp_global_dir}/${relative_path}"
+        local temp_dest_dir=$(dirname "$temp_dest_path")

-        mkdir -p "$dest_dir"
-
-        if [ -f "$dest_path" ]; then
-            if [ "$BACKUP_ALL" = true ] && [ "$NO_BACKUP" = false ]; then
-                if [ -n "$backup_folder" ]; then
-                    backup_file_to_folder "$dest_path" "$backup_folder"
-                fi
-                cp "$file" "$dest_path"
-                ((merged_count++))
-            elif [ "$NO_BACKUP" = true ]; then
-                if confirm_action "File '$relative_path' already exists in global location. Replace it? (NO BACKUP)" false; then
-                    cp "$file" "$dest_path"
-                    ((merged_count++))
-                fi
-            elif confirm_action "File '$relative_path' already exists in global location. Replace it?" false; then
-                if [ -n "$backup_folder" ]; then
-                    backup_file_to_folder "$dest_path" "$backup_folder"
-                fi
-                cp "$file" "$dest_path"
-                ((merged_count++))
-            fi
-        else
-            cp "$file" "$dest_path"
-            ((merged_count++))
-        fi
+        mkdir -p "$temp_dest_dir"
+        cp "$file" "$temp_dest_path"
     done < <(find "$source_claude_dir" -type f -print0)

-    write_color "✓ Merged $merged_count files to global location" "$COLOR_SUCCESS"
+    # Use bulk merge method (same as Global mode - fast!)
+    if merge_directory_contents "$temp_global_dir" "$global_claude_dir" "global components" "$backup_folder"; then
+        # Track global files in manifest using bulk method (fast!)
+        add_manifest_entry "$manifest_file" "$global_claude_dir" "Directory"
+
+        # Track files from temp directory
+        while IFS= read -r -d '' source_file; do
+            local relative_path="${source_file#$temp_global_dir}"
+            local target_path="${global_claude_dir}${relative_path}"
+            add_manifest_entry "$manifest_file" "$target_path" "File"
+        done < <(find "$temp_global_dir" -type f -print0)
+    fi
+
+    # Clean up temp directory
+    rm -rf "$temp_global_dir"

     # Handle CLAUDE.md file in global .claude directory
     local global_claude_md="${global_claude_dir}/CLAUDE.md"
@@ -822,7 +860,7 @@ function install_path() {
     create_version_json "$global_claude_dir" "Global"

     # Save installation manifest
-    save_install_manifest "$manifest_file" "$target_dir"
+    save_install_manifest "$manifest_file" "$target_dir" "Path"

     return 0
 }
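The rewrite above replaces per-file conditional copies with a stage-then-merge approach: filter the wanted files into a scratch directory, then hand the whole tree to one bulk merge. The shape of that pattern, sketched with hypothetical names (`stage_filtered`, `EXCLUDE`; the real script excludes its "local" folders):

```shell
#!/usr/bin/env bash
# Sketch of stage-then-bulk-merge: copy only non-excluded files into a
# staging directory, preserving relative paths, then merge the staged
# tree into the destination in one pass.

EXCLUDE="local"   # top-level folder to filter out (illustrative)

stage_filtered() {
    local source="$1" staging="$2"
    while IFS= read -r -d '' file; do
        local rel="${file#$source/}"
        local top="${rel%%/*}"          # first path component
        [ "$top" = "$EXCLUDE" ] && continue
        mkdir -p "$staging/$(dirname "$rel")"
        cp "$file" "$staging/$rel"
    done < <(find "$source" -type f -print0)
}

src=$(mktemp -d); dst=$(mktemp -d); stage=$(mktemp -d)
mkdir -p "$src/agents" "$src/local"
echo a > "$src/agents/a.md"
echo l > "$src/local/skip.md"

stage_filtered "$src" "$stage"
cp -R "$stage/." "$dst/"   # bulk merge stands in for merge_directory_contents
rm -rf "$stage"

ls "$dst"                  # only agents/ survives; local/ was filtered out
rm -rf "$src" "$dst"
```

The design win is that the expensive per-file decisions (backup, confirm, overwrite) run once in the shared merge routine instead of being duplicated inline.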
@@ -911,8 +949,15 @@ function new_install_manifest() {
     mkdir -p "$MANIFEST_DIR"

     # Generate unique manifest ID based on timestamp and mode
+    # Distinguish between Global and Path installations with clear naming
     local timestamp=$(date +"%Y%m%d-%H%M%S")
-    local manifest_id="install-${installation_mode}-${timestamp}"
+    local mode_prefix
+    if [ "$installation_mode" = "Global" ]; then
+        mode_prefix="manifest-global"
+    else
+        mode_prefix="manifest-path"
+    fi
+    local manifest_id="${mode_prefix}-${timestamp}"

     # Create manifest file path
     local manifest_file="${MANIFEST_DIR}/${manifest_id}.json"
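The naming change can be isolated as a pure helper, which makes the new convention easy to test in isolation (`make_manifest_id` is an illustrative name; the script computes this inline):

```shell
#!/usr/bin/env bash
# Sketch of the manifest-ID convention introduced above:
# "manifest-global-<timestamp>" for Global installs and
# "manifest-path-<timestamp>" for Path installs, replacing the old
# "install-<Mode>-<timestamp>" scheme.

make_manifest_id() {
    local installation_mode="$1"
    local timestamp="$2"       # e.g. "$(date +%Y%m%d-%H%M%S)"
    local mode_prefix
    if [ "$installation_mode" = "Global" ]; then
        mode_prefix="manifest-global"
    else
        mode_prefix="manifest-path"
    fi
    echo "${mode_prefix}-${timestamp}"
}

make_manifest_id Global 20250101-120000   # → manifest-global-20250101-120000
make_manifest_id Path   20250101-120000   # → manifest-path-20250101-120000
```

Putting the mode in the filename prefix is what later lets cleanup and listing code match by glob instead of parsing JSON first.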
@@ -976,7 +1021,8 @@ EOF

 function remove_old_manifests_for_path() {
     local installation_path="$1"
-    local current_manifest_file="$2" # Optional: exclude this file from deletion
+    local installation_mode="$2"
+    local current_manifest_file="$3" # Optional: exclude this file from deletion

     if [ ! -d "$MANIFEST_DIR" ]; then
         return 0
@@ -986,7 +1032,8 @@ function remove_old_manifests_for_path() {
     local target_path=$(echo "$installation_path" | sed 's:/*$::' | tr '[:upper:]' '[:lower:]')
     local removed_count=0

-    # Find and remove old manifests for the same installation path
+    # Find and remove old manifests for the same installation path and mode
+    # Support both new (manifest-*) and old (install-*) format
     while IFS= read -r -d '' file; do
         # Skip the current manifest file if specified
         if [ -n "$current_manifest_file" ] && [ "$file" = "$current_manifest_file" ]; then
@@ -994,19 +1041,20 @@ function remove_old_manifests_for_path() {
         fi

         local manifest_path=$(jq -r '.installation_path // ""' "$file" 2>/dev/null)
+        local manifest_mode=$(jq -r '.installation_mode // "Global"' "$file" 2>/dev/null)

         if [ -n "$manifest_path" ]; then
             # Normalize manifest path
             local normalized_manifest_path=$(echo "$manifest_path" | sed 's:/*$::' | tr '[:upper:]' '[:lower:]')

-            # If paths match, remove this old manifest
-            if [ "$normalized_manifest_path" = "$target_path" ]; then
+            # Only remove if BOTH path and mode match
+            if [ "$normalized_manifest_path" = "$target_path" ] && [ "$manifest_mode" = "$installation_mode" ]; then
                 rm -f "$file"
                 write_color "Removed old manifest: $(basename "$file")" "$COLOR_INFO"
                 ((removed_count++))
             fi
         fi
-    done < <(find "$MANIFEST_DIR" -name "install-*.json" -type f -print0 2>/dev/null)
+    done < <(find "$MANIFEST_DIR" \( -name "manifest-*.json" -o -name "install-*.json" \) -type f -print0 2>/dev/null)

     if [ "$removed_count" -gt 0 ]; then
         write_color "Removed $removed_count old manifest(s) for installation path: $installation_path" "$COLOR_SUCCESS"
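Both sides of the path comparison run through the same normalization (strip trailing slashes, lowercase), so `/Home/User/` and `/home/user` compare equal. Extracted as a helper for illustration (`normalize_path` is not a name from the script):

```shell
#!/usr/bin/env bash
# Sketch of the path normalization used when matching manifests:
# sed 's:/*$::' strips any run of trailing slashes, tr lowercases the
# result, so lookups are insensitive to case and trailing-slash noise.

normalize_path() {
    echo "$1" | sed 's:/*$::' | tr '[:upper:]' '[:lower:]'
}

normalize_path "/Home/User/Project/"   # → /home/user/project
normalize_path "/tmp/x"                # → /tmp/x
```

Note the lowercasing assumes a case-insensitive view of paths is acceptable; on case-sensitive filesystems two distinct directories could normalize to the same key.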
@@ -1018,10 +1066,11 @@ function remove_old_manifests_for_path() {
 function save_install_manifest() {
     local manifest_file="$1"
     local installation_path="$2"
+    local installation_mode="$3"

-    # Remove old manifests for the same installation path (excluding current one)
-    if [ -n "$installation_path" ]; then
-        remove_old_manifests_for_path "$installation_path" "$manifest_file"
+    # Remove old manifests for the same installation path and mode (excluding current one)
+    if [ -n "$installation_path" ] && [ -n "$installation_mode" ]; then
+        remove_old_manifests_for_path "$installation_path" "$installation_mode" "$manifest_file"
     fi

     if [ -f "$manifest_file" ]; then
@@ -1045,10 +1094,16 @@ function migrate_legacy_manifest() {
     # Create manifest directory if it doesn't exist
     mkdir -p "$MANIFEST_DIR"

-    # Read legacy manifest
+    # Read legacy manifest and generate new manifest ID with new naming convention
     local mode=$(jq -r '.installation_mode // "Global"' "$legacy_manifest")
     local timestamp=$(date +"%Y%m%d-%H%M%S")
-    local manifest_id="install-${mode}-${timestamp}-migrated"
+    local mode_prefix
+    if [ "$mode" = "Global" ]; then
+        mode_prefix="manifest-global"
+    else
+        mode_prefix="manifest-path"
+    fi
+    local manifest_id="${mode_prefix}-${timestamp}-migrated"

     # Create new manifest file
     local new_manifest="${MANIFEST_DIR}/${manifest_id}.json"
@@ -1072,8 +1127,8 @@ function get_all_install_manifests() {
         return
     fi

-    # Check if any manifest files exist
-    local manifest_count=$(find "$MANIFEST_DIR" -name "install-*.json" -type f 2>/dev/null | wc -l)
+    # Check if any manifest files exist (both new and old formats)
+    local manifest_count=$(find "$MANIFEST_DIR" \( -name "manifest-*.json" -o -name "install-*.json" \) -type f 2>/dev/null | wc -l)

     if [ "$manifest_count" -eq 0 ]; then
         echo "[]"
@@ -1102,7 +1157,7 @@ function get_all_install_manifests() {
         manifest_content=$(echo "$manifest_content" | jq --argjson fc "$files_count" --argjson dc "$dirs_count" '. + {files_count: $fc, directories_count: $dc}')

         all_manifests+="$manifest_content"
-    done < <(find "$MANIFEST_DIR" -name "install-*.json" -type f -print0 | sort -z)
+    done < <(find "$MANIFEST_DIR" \( -name "manifest-*.json" -o -name "install-*.json" \) -type f -print0 | sort -z)

     all_manifests+="]"

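Matching both the new and the legacy manifest names relies on `find`'s grouped `-o` expression: the parentheses must be escaped from the shell, and the group binds before the implicit `-a` joining it to `-type f`. A minimal check of that grouping:

```shell
#!/usr/bin/env bash
# Sketch of find with a grouped -name ... -o -name ... expression, as
# used to pick up both manifest-*.json (new) and install-*.json (legacy)
# files while ignoring everything else.

dir=$(mktemp -d)
touch "$dir/manifest-global-1.json" "$dir/install-Global-2.json" "$dir/notes.txt"

# Without \( \), -type f would bind only to the second -name and the
# first pattern would match directories too.
count=$(find "$dir" \( -name "manifest-*.json" -o -name "install-*.json" \) -type f | wc -l)
echo "$count"   # → 2

rm -rf "$dir"
```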
@@ -1128,6 +1183,112 @@ function get_all_install_manifests() {
     echo "$latest_manifests"
 }

function move_old_installation() {
    local installation_path="$1"
    local installation_mode="$2"

    write_color "Checking for previous installation..." "$COLOR_INFO"

    # Find existing manifest for this installation path and mode
    local manifests_json=$(get_all_install_manifests)
    local target_path=$(echo "$installation_path" | sed 's:/*$::' | tr '[:upper:]' '[:lower:]')

    local old_manifest=$(echo "$manifests_json" | jq --arg path "$target_path" --arg mode "$installation_mode" '
        .[] | select(
            (.installation_path | ascii_downcase | sub("/+$"; "")) == $path and
            .installation_mode == $mode
        )
    ')

    if [ -z "$old_manifest" ] || [ "$old_manifest" = "null" ]; then
        write_color "No previous $installation_mode installation found at this path" "$COLOR_INFO"
        return 0
    fi

    local install_date=$(echo "$old_manifest" | jq -r '.installation_date')
    local files_count=$(echo "$old_manifest" | jq -r '.files_count')
    local dirs_count=$(echo "$old_manifest" | jq -r '.directories_count')

    write_color "Found previous installation from $install_date" "$COLOR_INFO"
    write_color "Files: $files_count, Directories: $dirs_count" "$COLOR_INFO"

    # Create backup folder
    local timestamp=$(date +"%Y%m%d-%H%M%S")
    local backup_dir="${installation_path}/claude-backup-old-${timestamp}"
    mkdir -p "$backup_dir"
    write_color "Created backup folder: $backup_dir" "$COLOR_SUCCESS"

    local moved_files=0
    local removed_dirs=0
    local failed_items=()

    # Move files first (from manifest)
    write_color "Moving old installation files to backup..." "$COLOR_INFO"
    while IFS= read -r file_path; do
        if [ -z "$file_path" ] || [ "$file_path" = "null" ]; then
            continue
        fi

        if [ -f "$file_path" ]; then
            # Calculate relative path from installation root
            local relative_path="${file_path#$installation_path}"
            relative_path="${relative_path#/}"

            if [ -z "$relative_path" ]; then
                relative_path=$(basename "$file_path")
            fi

            local backup_dest_dir=$(dirname "${backup_dir}/${relative_path}")

            mkdir -p "$backup_dest_dir"
            if mv "$file_path" "${backup_dest_dir}/" 2>/dev/null; then
                ((moved_files++))
            else
                write_color "  WARNING: Failed to move file: $file_path" "$COLOR_WARNING"
                failed_items+=("$file_path")
            fi
        fi
    done <<< "$(echo "$old_manifest" | jq -r '.files[].path')"

    # Remove empty directories (in reverse order to handle nested dirs)
    write_color "Cleaning up empty directories..." "$COLOR_INFO"
    while IFS= read -r dir_path; do
        if [ -z "$dir_path" ] || [ "$dir_path" = "null" ]; then
            continue
        fi

        if [ -d "$dir_path" ]; then
            # Check if directory is empty
            if [ -z "$(ls -A "$dir_path" 2>/dev/null)" ]; then
                if rmdir "$dir_path" 2>/dev/null; then
                    write_color "  Removed empty directory: $dir_path" "$COLOR_INFO"
                    ((removed_dirs++))
                fi
            else
                write_color "  Directory not empty (preserved): $dir_path" "$COLOR_INFO"
            fi
        fi
    done <<< "$(echo "$old_manifest" | jq -r '.directories[].path' | awk '{ print length, $0 }' | sort -rn | cut -d' ' -f2-)"

    # Note: Old manifest will be automatically removed by save_install_manifest
    # via remove_old_manifests_for_path to ensure robust cleanup

    echo ""
    write_color "Old installation cleanup summary:" "$COLOR_INFO"
    echo "  Files moved: $moved_files"
    echo "  Directories removed: $removed_dirs"
    echo "  Backup location: $backup_dir"

    if [ ${#failed_items[@]} -gt 0 ]; then
        write_color "  Failed items: ${#failed_items[@]}" "$COLOR_WARNING"
    fi

    echo ""

    # Return backup path for reference
    return 0
}

 # ============================================================================
 # UNINSTALLATION FUNCTIONS
 # ============================================================================
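The directory-cleanup loop removes nested directories before their parents by sorting paths longest-first with an awk/sort/cut pipeline: prefix each path with its character length, sort numerically descending, then strip the prefix. The trick in isolation:

```shell
#!/usr/bin/env bash
# Sketch of the longest-path-first ordering used before rmdir: children
# sort ahead of their parents, so emptied nested directories can be
# removed bottom-up in a single pass.

paths='/a
/a/b
/a/b/c'

echo "$paths" | awk '{ print length, $0 }' | sort -rn | cut -d' ' -f2-
# → /a/b/c
#   /a/b
#   /a
```

Length is a proxy for depth here; it orders any parent before being removed only after its (necessarily longer) descendants, which is all `rmdir` needs.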
@@ -1173,26 +1334,50 @@ function uninstall_claude_workflow() {
if [ "$manifests_count" -eq 1 ]; then
selected_manifest=$(echo "$manifests_json" | jq '.[0]')
write_color "Only one installation found, will uninstall:" "$COLOR_INFO"

# Read version from version.json
local install_path=$(echo "$selected_manifest" | jq -r '.installation_path // ""')
local install_mode=$(echo "$selected_manifest" | jq -r '.installation_mode // "Unknown"')
local version_str="Version Unknown"

# Determine version.json path
local version_json_path="${install_path}/.claude/version.json"

if [ -f "$version_json_path" ]; then
local ver=$(jq -r '.version // ""' "$version_json_path" 2>/dev/null)
if [ -n "$ver" ] && [ "$ver" != "unknown" ]; then
version_str="v$ver"
fi
fi

write_color "Found installation: $version_str - $install_path" "$COLOR_INFO"
else
# Multiple manifests - let user choose
# Multiple manifests - let user choose (simplified: only version and path)
local options=()

for i in $(seq 0 $((manifests_count - 1))); do
local m=$(echo "$manifests_json" | jq ".[$i]")

# Safely extract date string
local date_str=$(echo "$m" | jq -r '.installation_date // "unknown date"' | cut -c1-10)
local mode=$(echo "$m" | jq -r '.installation_mode // "Unknown"')
local files_count=$(echo "$m" | jq -r '.files_count // 0')
local dirs_count=$(echo "$m" | jq -r '.directories_count // 0')
local path_info=$(echo "$m" | jq -r '.installation_path // ""')
local install_mode=$(echo "$m" | jq -r '.installation_mode // "Unknown"')
local version_str="Version Unknown"

if [ -n "$path_info" ]; then
path_info=" ($path_info)"
# Read version from version.json
local version_json_path="${path_info}/.claude/version.json"

if [ -f "$version_json_path" ]; then
local ver=$(jq -r '.version // ""' "$version_json_path" 2>/dev/null)
if [ -n "$ver" ] && [ "$ver" != "unknown" ]; then
version_str="v$ver"
fi
fi

options+=("$((i + 1)). [$mode] $date_str - $files_count files, $dirs_count dirs$path_info")
local path_str="Path Unknown"
if [ -n "$path_info" ]; then
path_str="$path_info"
fi

options+=("$((i + 1)). $version_str - $path_str")
done

options+=("Cancel - Don't uninstall anything")
@@ -1210,16 +1395,24 @@ function uninstall_claude_workflow() {
selected_manifest=$(echo "$manifests_json" | jq ".[$selected_index]")
fi

# Display selected installation info
# Display selected installation info (simplified: only version and path)
local final_path=$(echo "$selected_manifest" | jq -r '.installation_path // ""')
local final_mode=$(echo "$selected_manifest" | jq -r '.installation_mode // "Unknown"')
local final_version="Version Unknown"

# Read version from version.json
local final_version_path="${final_path}/.claude/version.json"
if [ -f "$final_version_path" ]; then
local ver=$(jq -r '.version // ""' "$final_version_path" 2>/dev/null)
if [ -n "$ver" ] && [ "$ver" != "unknown" ]; then
final_version="v$ver"
fi
fi

echo ""
write_color "Installation Information:" "$COLOR_INFO"
echo " Manifest ID: $(echo "$selected_manifest" | jq -r '.manifest_id')"
echo " Mode: $(echo "$selected_manifest" | jq -r '.installation_mode')"
echo " Path: $(echo "$selected_manifest" | jq -r '.installation_path')"
echo " Date: $(echo "$selected_manifest" | jq -r '.installation_date')"
echo " Installer Version: $(echo "$selected_manifest" | jq -r '.installer_version')"
echo " Files tracked: $(echo "$selected_manifest" | jq -r '.files_count')"
echo " Directories tracked: $(echo "$selected_manifest" | jq -r '.directories_count')"
write_color "Uninstallation Target:" "$COLOR_INFO"
echo " $final_version"
echo " Path: $final_path"
echo ""

# Confirm uninstallation
@@ -1229,55 +1422,64 @@ function uninstall_claude_workflow() {
fi

local removed_files=0
local removed_dirs=0
local failed_items=()
local skipped_files=0

# Remove files first
# Check if this is a Path mode uninstallation and if Global installation exists
local is_path_mode=false
local has_global_installation=false

if [ "$final_mode" = "Path" ]; then
is_path_mode=true

# Check if any Global installation manifest exists
if [ -d "$MANIFEST_DIR" ]; then
local global_manifest_count=$(find "$MANIFEST_DIR" -name "manifest-global-*.json" -type f 2>/dev/null | wc -l)
if [ "$global_manifest_count" -gt 0 ]; then
has_global_installation=true
write_color "Found Global installation, global files will be preserved" "$COLOR_WARNING"
echo ""
fi
fi
fi

# Only remove files listed in manifest - do NOT remove directories
write_color "Removing installed files..." "$COLOR_INFO"

local files_array=$(echo "$selected_manifest" | jq -c '.files[]')
local files_array=$(echo "$selected_manifest" | jq -c '.files[]' 2>/dev/null)

while IFS= read -r file_entry; do
local file_path=$(echo "$file_entry" | jq -r '.path')
if [ -n "$files_array" ]; then
while IFS= read -r file_entry; do
local file_path=$(echo "$file_entry" | jq -r '.path')

if [ -f "$file_path" ]; then
if rm -f "$file_path" 2>/dev/null; then
write_color " Removed file: $file_path" "$COLOR_SUCCESS"
((removed_files++))
else
write_color " WARNING: Failed to remove file: $file_path" "$COLOR_WARNING"
failed_items+=("$file_path")
fi
else
write_color " File not found (already removed): $file_path" "$COLOR_INFO"
fi
done <<< "$files_array"
# For Path mode uninstallation, skip global files if Global installation exists
if [ "$is_path_mode" = true ] && [ "$has_global_installation" = true ]; then
local global_claude_dir="${HOME}/.claude"

# Remove directories (in reverse order by path length)
write_color "Removing installed directories..." "$COLOR_INFO"

local dirs_array=$(echo "$selected_manifest" | jq -c '.directories[] | {path: .path, length: (.path | length)}' | sort -t: -k2 -rn | jq -c '.path')

while IFS= read -r dir_path_json; do
local dir_path=$(echo "$dir_path_json" | jq -r '.')

if [ -d "$dir_path" ]; then
# Check if directory is empty
if [ -z "$(ls -A "$dir_path" 2>/dev/null)" ]; then
if rmdir "$dir_path" 2>/dev/null; then
write_color " Removed directory: $dir_path" "$COLOR_SUCCESS"
((removed_dirs++))
else
write_color " WARNING: Failed to remove directory: $dir_path" "$COLOR_WARNING"
failed_items+=("$dir_path")
# Skip files under global .claude directory
if [[ "$file_path" == "$global_claude_dir"* ]]; then
((skipped_files++))
continue
fi
else
write_color " Directory not empty (preserved): $dir_path" "$COLOR_WARNING"
fi
else
write_color " Directory not found (already removed): $dir_path" "$COLOR_INFO"
fi
done <<< "$dirs_array"

if [ -f "$file_path" ]; then
if rm -f "$file_path" 2>/dev/null; then
((removed_files++))
else
write_color " WARNING: Failed to remove: $file_path" "$COLOR_WARNING"
failed_items+=("$file_path")
fi
fi
done <<< "$files_array"
fi

# Display removal summary
if [ "$skipped_files" -gt 0 ]; then
write_color "Removed $removed_files files, skipped $skipped_files global files" "$COLOR_SUCCESS"
else
write_color "Removed $removed_files files" "$COLOR_SUCCESS"
fi

# Remove manifest file
local manifest_file=$(echo "$selected_manifest" | jq -r '.manifest_file')
@@ -1295,7 +1497,12 @@ function uninstall_claude_workflow() {
write_color "========================================" "$COLOR_INFO"
write_color "Uninstallation Summary:" "$COLOR_INFO"
echo " Files removed: $removed_files"
echo " Directories removed: $removed_dirs"

if [ "$skipped_files" -gt 0 ]; then
echo " Files skipped (global files preserved): $skipped_files"
echo ""
write_color "Note: $skipped_files global files were preserved due to existing Global installation" "$COLOR_INFO"
fi

if [ ${#failed_items[@]} -gt 0 ]; then
echo ""
@@ -1307,7 +1514,11 @@ function uninstall_claude_workflow() {
echo ""
if [ ${#failed_items[@]} -eq 0 ]; then
write_color "✓ Claude Code Workflow has been successfully uninstalled!" "$COLOR_SUCCESS"
if [ "$skipped_files" -gt 0 ]; then
write_color "✓ Uninstallation complete! Removed $removed_files files, preserved $skipped_files global files." "$COLOR_SUCCESS"
else
write_color "✓ Claude Code Workflow has been successfully uninstalled!" "$COLOR_SUCCESS"
fi
else
write_color "Uninstallation completed with warnings." "$COLOR_WARNING"
write_color "Please manually remove the failed items listed above." "$COLOR_INFO"
@@ -1,401 +0,0 @@
# 🚀 Claude Code Workflow (CCW): A Next-Generation Multi-Agent Framework for Software Development Automation

[](https://github.com/catlog22/Claude-Code-Workflow/releases)
[](https://github.com/modelcontextprotocol)
[](LICENSE)

---

## 📋 Project Overview

**Claude Code Workflow (CCW)** is a revolutionary multi-agent automation framework that orchestrates complex software development tasks through intelligent workflow management and autonomous execution. CCW is more than a tool: it is a complete development ecosystem that combines the power of AI with a structured development process.

## 🎯 Concept and Core Philosophy

### Design Philosophy

CCW's design rests on several core ideas:

1. **🧠 Intelligent collaboration, not replacement**: works alongside developers as an intelligent assistant rather than replacing them
2. **📊 JSON-first architecture**: JSON as the single source of truth, eliminating synchronization complexity
3. **🔄 Full development lifecycle**: covers every stage from ideation to deployment
4. **🤖 Multi-agent coordination**: specialized agents handle different kinds of development tasks
5. **⚡ Atomic session management**: ultra-fast context switching and parallel work

### Architectural Innovation

```mermaid
graph TD
    A[🖥️ CLI Interface Layer] --> B[📋 Session Management Layer]
    B --> C[📊 JSON Task Data Layer]
    C --> D[🤖 Multi-Agent Orchestration Layer]

    A --> A1[Gemini CLI - analysis & exploration]
    A --> A2[Codex CLI - autonomous development]
    A --> A3[Qwen CLI - architecture generation]

    B --> B1[.active-session marker]
    B --> B2[Workflow session state]

    C --> C1[IMPL-*.json task definitions]
    C --> C2[Dynamic task decomposition]
    C --> C3[Dependency mapping]

    D --> D1[Concept planning agent]
    D --> D2[Code development agent]
    D --> D3[Test & review agent]
    D --> D4[Memory bridge agent]
```
## 🔥 Core Problems Solved

### 1. **Loss of project context**
**Traditional pain point**: In complex projects, developers often lose context when switching between tasks and must re-learn the code structure and business logic.

**CCW solution**:
- 📚 **Intelligent memory update system**: automatically maintains `CLAUDE.md` documents and tracks codebase changes in real time
- 🔄 **Session persistence**: saves complete workflow state and supports seamless resumption
- 📊 **Context inheritance**: relevant context is passed between tasks automatically

### 2. **Inconsistent development processes**
**Traditional pain point**: Team members follow different development processes, which leads to uneven code quality and makes collaboration difficult.

**CCW solution**:
- 🔄 **Standardized workflow**: enforces a Brainstorm → Plan → Verify → Execute → Test → Review process
- ✅ **Quality gates**: every stage has a verification mechanism to ensure quality
- 📋 **Traceability**: decision processes and implementation details are fully recorded

### 3. **Insufficient automation of repetitive tasks**
**Traditional pain point**: Large amounts of repetitive code generation, test writing, and documentation updating drain developer energy.

**CCW solution**:
- 🤖 **Multi-agent automation**: different task types are assigned to specialized agents
- 🧪 **Automatic test generation**: comprehensive test suites are generated from the implementation
- 📝 **Automatic documentation updates**: related documents are updated when the code changes

### 4. **Difficulty understanding the codebase**
**Traditional pain point**: In large projects, understanding the existing code structure and patterns takes significant time.

**CCW solution**:
- 🔧 **MCP tool integration**: advanced code analysis via the Model Context Protocol
- 🔍 **Pattern recognition**: automatically identifies design patterns and architectural conventions in the codebase
- 🌐 **External best practices**: integrates external API patterns and industry best practices

## 🛠️ Core Workflows

### 📊 JSON-First Data Model

CCW uses a distinctive JSON-first architecture: all workflow state is stored in structured JSON files:

```json
{
  "id": "IMPL-1.2",
  "title": "Implement JWT authentication system",
  "status": "pending",
  "meta": {
    "type": "feature",
    "agent": "code-developer"
  },
  "context": {
    "requirements": ["JWT authentication", "OAuth2 support"],
    "focus_paths": ["src/auth", "tests/auth"],
    "acceptance": ["JWT validation works", "OAuth flow is complete"]
  },
  "flow_control": {
    "pre_analysis": [...],
    "implementation_approach": {...}
  }
}
```

### 🧠 Intelligent Memory Management

#### Automatic memory updates
CCW's memory update system is one of its core features:

```bash
# Routine update after day-to-day development
/update-memory-related # Intelligently analyzes recent changes and updates only the affected modules

# Full update after major changes
/update-memory-full # Scans the whole project and rebuilds all documents

# Module-specific update
cd src/auth && /update-memory-related # Targeted update for a specific module
```

#### The four-layer CLAUDE.md architecture
```
CLAUDE.md (project-level overview)
├── src/CLAUDE.md (source-tree documentation)
├── src/auth/CLAUDE.md (module-level documentation)
└── src/auth/jwt/CLAUDE.md (component-level documentation)
```
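As a quick illustration of the layered layout (a hypothetical sketch, not a CCW command — the directory names simply mirror the tree above), the four `CLAUDE.md` layers can be enumerated with standard tools:

```shell
# Hypothetical sketch: build the four-layer tree in a scratch directory,
# then list every CLAUDE.md so the hierarchy is visible at a glance.
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p src/auth/jwt
touch CLAUDE.md src/CLAUDE.md src/auth/CLAUDE.md src/auth/jwt/CLAUDE.md
find . -name "CLAUDE.md" | sort
```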
### 🔧 Flow Control and CLI Tool Integration

#### Pre-analysis phase (pre_analysis)
```json
"pre_analysis": [
  {
    "step": "mcp_codebase_exploration",
    "action": "Explore the codebase structure with MCP tools",
    "command": "mcp__code-index__find_files(pattern=\"[task_focus_patterns]\")",
    "output_to": "codebase_structure"
  },
  {
    "step": "mcp_external_context",
    "action": "Fetch external API examples and best practices",
    "command": "mcp__exa__get_code_context_exa(query=\"[task_technology] [task_patterns]\")",
    "output_to": "external_context"
  },
  {
    "step": "gather_task_context",
    "action": "Analyze task context without implementing",
    "command": "gemini-wrapper -p \"Analyze existing patterns and dependencies for [task_title]\"",
    "output_to": "task_context"
  }
]
```

#### Implementation approach definition (implementation_approach)
```json
"implementation_approach": {
  "task_description": "Implement JWT authentication based on the [design] analysis results",
  "modification_points": [
    "Add JWT generation using the [parent] pattern",
    "Implement validation middleware based on [context]"
  ],
  "logic_flow": [
    "User login → validate with [inherited] → generate JWT",
    "Protected route → extract JWT → validate with [shared] rules"
  ],
  "target_files": [
    "src/auth/login.ts:handleLogin:75-120",
    "src/middleware/auth.ts:validateToken"
  ]
}
```

### 🚀 CLI Tools Working Together

#### Division of labor across the three CLI tools
```mermaid
graph LR
    A[Gemini CLI] --> A1[Deep analysis]
    A --> A2[Pattern recognition]
    A --> A3[Architecture understanding]

    B[Qwen CLI] --> B1[Architecture design]
    B --> B2[Code generation]
    B --> B3[System planning]

    C[Codex CLI] --> C1[Autonomous development]
    C --> C2[Bug fixing]
    C --> C3[Test generation]
```

#### Intelligent tool selection strategy
CCW automatically selects the best-suited tool based on the task type:

```bash
# Exploration and understanding phase
/cli:analyze --tool gemini "authentication system architecture patterns"

# Design and planning phase
/cli:mode:plan --tool qwen "microservice authentication architecture design"

# Implementation and development phase
/cli:execute --tool codex "implement JWT authentication system"
```

### 🔄 Full Development Lifecycle

#### 1. Brainstorming
```bash
# Multi-role expert analysis
/workflow:brainstorm:system-architect "user authentication system"
/workflow:brainstorm:security-expert "authentication security considerations"
/workflow:brainstorm:ui-designer "authentication user experience"

# Synthesize all perspectives
/workflow:brainstorm:synthesis
```

#### 2. Planning and verification
```bash
# Create an implementation plan
/workflow:plan "user authentication system with JWT support"

# Dual verification mechanism
/workflow:plan-verify # Gemini strategic + Codex technical dual verification
```

#### 3. Execution and testing
```bash
# Agent-coordinated execution
/workflow:execute

# Automatically generate a test workflow
/workflow:test-gen WFS-user-auth-system
```

#### 4. Review and documentation
```bash
# Quality review
/workflow:review

# Layered documentation generation
/workflow:docs "all"
```

## 🔧 Technical Highlights

### 1. **MCP tool integration** *(experimental)*
- **Exa MCP Server**: fetches real-world API patterns and best practices
- **Code Index MCP**: advanced internal codebase search and indexing
- **Automatic fallback**: switches seamlessly to conventional tools when MCP is unavailable
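The automatic-fallback idea can be sketched in plain shell; this is a hypothetical illustration only — the `code-index-mcp` binary name is an assumption, and CCW's actual detection logic is not shown here:

```shell
# Hypothetical fallback sketch: prefer the MCP indexer when its binary
# is on PATH, otherwise report that a conventional grep-based search
# would be used instead.
if command -v code-index-mcp >/dev/null 2>&1; then
  echo "search backend: code-index-mcp"
else
  echo "search backend: grep fallback"
fi
```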
### 2. **Atomic session management**
```bash
# Ultra-fast session switching (<10ms)
.workflow/.active-user-auth-system # A simple file marker

# Parallel session support
.workflow/WFS-user-auth/ # Authentication system session
.workflow/WFS-payment/ # Payment system session
.workflow/WFS-dashboard/ # Dashboard session
```
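The marker-file mechanism above amounts to creating and removing empty files, which is why switching is so cheap; a minimal sketch (session names are illustrative, not CCW internals):

```shell
# Minimal sketch of marker-file session switching in a scratch directory.
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p .workflow
switch_session() {
  rm -f .workflow/.active-*      # deactivate whatever was active
  touch ".workflow/.active-$1"   # activate the requested session
}
switch_session user-auth-system
switch_session payment
ls -A .workflow                  # → .active-payment
```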
### 3. **Intelligent context passing**
- **Dependency context**: when a task completes, key information is automatically passed to the tasks that depend on it
- **Inherited context**: subtasks automatically inherit their parent task's design decisions
- **Shared context**: session-level global rules and patterns
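In the JSON-first model, these three kinds of context would naturally appear as fields of a task file; the fragment below is a hypothetical sketch of that shape (the field names and values are assumptions for illustration, not CCW's actual schema):

```json
{
  "id": "IMPL-1.2",
  "context": {
    "dependency": { "from": "IMPL-1.1", "token_format": "JWT" },
    "inherited": { "design_decision": "stateless auth, no server-side sessions" },
    "shared": { "error_style": "problem+json responses" }
  }
}
```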
### 4. **Dynamic task decomposition**
```json
// A main task is automatically decomposed into subtasks
"IMPL-1": "User authentication system",
"IMPL-1.1": "JWT token generation",
"IMPL-1.2": "Authentication middleware",
"IMPL-1.3": "User login endpoint"
```

## 🎯 Usage Scenarios

### Scenario 1: New feature development
```bash
# 1. Start a dedicated session
/workflow:session:start "OAuth2 integration"

# 2. Multi-perspective brainstorming
/workflow:brainstorm:system-architect "OAuth2 architecture design"
/workflow:brainstorm:security-expert "OAuth2 security considerations"

# 3. Run the full development flow
/workflow:plan "Integrate OAuth2 with the existing authentication system"
/workflow:plan-verify
/workflow:execute
/workflow:test-gen WFS-oauth2-integration
/workflow:review
```

### Scenario 2: Urgent bug fix
```bash
# Fast bug-resolution workflow
/workflow:session:start "payment validation fix"
/cli:mode:bug-diagnosis --tool gemini "payment validation fails under concurrent requests"
/cli:execute --tool codex "fix the payment validation race condition"
/workflow:review
```

### Scenario 3: Architecture refactoring
```bash
# Deep architecture analysis and refactoring
/workflow:session:start "microservice refactoring"
/cli:analyze --tool gemini "technical debt in the current monolithic architecture"
/workflow:plan "monolith-to-microservices migration strategy"
/workflow:execute
/workflow:test-gen WFS-microservice-refactoring
```

## 🌟 Key Advantages

### 1. **Higher development efficiency**
- ⚡ **10x faster context switching**: atomic session management
- 🤖 **Automation of repetitive tasks**: 90% of boilerplate code and tests generated automatically
- 📊 **Intelligent decision support**: suggestions based on historical patterns

### 2. **Guaranteed code quality**
- ✅ **Enforced quality gates**: verification mechanisms at every stage
- 🔍 **Automatic pattern detection**: identifies and follows existing code conventions
- 📝 **Full traceability**: a complete record from requirements to implementation

### 3. **Lower learning cost**
- 📚 **Intelligent documentation system**: an automatically maintained project knowledge base
- 🔄 **Standardized process**: a unified development workflow
- 💡 **Best-practice integration**: proven external patterns brought in automatically

### 4. **Team collaboration support**
- 🔀 **Parallel sessions**: multiple people can work simultaneously without conflicts
- 📊 **Transparent progress tracking**: task status visible in real time
- 🤝 **Knowledge sharing**: complete records of decisions and implementation details

## 🚀 Getting Started

### Quick installation
```powershell
# One-line install on Windows
Invoke-Expression (Invoke-WebRequest -Uri "https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1" -UseBasicParsing).Content

# Verify the installation
/workflow:session:list
```

### Optional MCP tool enhancements
```bash
# Install the Exa MCP Server (external API patterns)
# Installation guide: https://github.com/exa-labs/exa-mcp-server

# Install Code Index MCP (advanced code search)
# Installation guide: https://github.com/johnhuang316/code-index-mcp
```

## 📈 Project Status and Roadmap

### Current status (v2.1.0-experimental)
- ✅ Core multi-agent system complete
- ✅ JSON-first architecture stable
- ✅ Full workflow lifecycle support
- 🧪 MCP tool integration (experimental)
- ✅ Intelligent memory management system

### Coming soon
- 🔮 **AI-assisted code review**: smarter quality detection
- 🌐 **Cloud collaboration support**: team-level workflow sharing
- 📊 **Performance analysis integration**: automatic optimization suggestions
- 🔧 **More MCP tools**: an expanding external tool ecosystem

## 🤝 Community and Support

- 📚 **Documentation**: [Project Wiki](https://github.com/catlog22/Claude-Code-Workflow/wiki)
- 🐛 **Issue reports**: [GitHub Issues](https://github.com/catlog22/Claude-Code-Workflow/issues)
- 💬 **Community discussion**: [Discussions](https://github.com/catlog22/Claude-Code-Workflow/discussions)
- 📋 **Changelog**: [Release history](CHANGELOG.md)

---

## 💡 Closing Remarks

**Claude Code Workflow** is more than a development tool; it represents where software development workflows are heading. Through intelligent multi-agent collaboration, a structured development process, and advanced context management, CCW lets developers focus on creative work while delegating repetitive, mechanical tasks to AI assistants.

We believe the future of software development will be a model of human-machine collaboration, and CCW is a pioneering practice of that vision.

🌟 **Try CCW now and start your journey toward intelligent development!**

[](https://github.com/catlog22/Claude-Code-Workflow)
[](https://github.com/catlog22/Claude-Code-Workflow/releases/latest)

---

*This document is automatically generated and maintained by Claude Code Workflow's intelligent documentation system.*
@@ -166,7 +166,7 @@ CCW provides comprehensive documentation to help you get started and master adva
### 📖 **Getting Started**
- [**Getting Started Guide**](GETTING_STARTED.md) - 5-minute quick start tutorial
- [**Installation Guide**](INSTALL.md) - Detailed installation instructions ([中文](INSTALL_CN.md))
- [**Workflow Decision Guide**](WORKFLOW_DECISION_GUIDE.md) - 🌳 Interactive flowchart for choosing the right commands
- [**Workflow Decision Guide**](WORKFLOW_DECISION_GUIDE_EN.md) - 🌳 Interactive flowchart for choosing the right commands
- [**Examples**](EXAMPLES.md) - Real-world use cases and practical examples
- [**FAQ**](FAQ.md) - Frequently asked questions and troubleshooting
@@ -26,7 +26,7 @@ flowchart TD
Q3 -->|Not needed| Q4{Task complexity?}

UIDesign --> Q3a{Have a reference design?}
Q3a -->|Yes| UIImitate[/ /workflow:ui-design:imitate-auto<br>--input reference URL /]
Q3a -->|Yes| UIImitate[/ /workflow:ui-design:imitate-auto<br>--input local files/images /]
Q3a -->|No| UIExplore[/ /workflow:ui-design:explore-auto<br>--prompt design description /]

UIImitate --> UISync[/ /workflow:ui-design:design-sync<br>Sync design system /]
@@ -158,14 +158,16 @@ flowchart TD
| Situation | Command | Description |
|------|------|------|
| 🎨 Have a reference design | `/workflow:ui-design:imitate-auto --input "URL"` | Copy from an existing design |
| 🎨 Have a reference design | `/workflow:ui-design:imitate-auto --input "local files/images"` | Copy a design from local reference files/images |
| 🎨 Design from scratch | `/workflow:ui-design:explore-auto --prompt "description"` | Generate multiple design variants |
| ⏭️ Backend/No UI | Skip | Pure backend APIs, CLI tools, etc. |

**Examples**:
```bash
# With a reference: imitate the Google Docs collaboration interface
/workflow:ui-design:imitate-auto --input "https://docs.google.com"
# With a reference: use local screenshots or code files
/workflow:ui-design:imitate-auto --input "design-refs/*.png"
# Or import from existing code
/workflow:ui-design:imitate-auto --input "./src/components"

# No reference: design from scratch
/workflow:ui-design:explore-auto --prompt "A modern, minimalist document collaboration editing interface" --style-variants 3
@@ -26,7 +26,7 @@ flowchart TD
Q3 -->|No| Q4{Task complexity?}

UIDesign --> Q3a{Have reference design?}
Q3a -->|Yes| UIImitate[/ /workflow:ui-design:imitate-auto<br>--input reference URL /]
Q3a -->|Yes| UIImitate[/ /workflow:ui-design:imitate-auto<br>--input local files/images /]
Q3a -->|No| UIExplore[/ /workflow:ui-design:explore-auto<br>--prompt design description /]

UIImitate --> UISync[/ /workflow:ui-design:design-sync<br>Sync design system /]
@@ -158,14 +158,16 @@ flowchart TD
| Situation | Command | Description |
|-----------|---------|-------------|
| 🎨 Have reference design | `/workflow:ui-design:imitate-auto --input "URL"` | Copy from existing design |
| 🎨 Have reference design | `/workflow:ui-design:imitate-auto --input "local files/images"` | Copy design from local reference files/images |
| 🎨 Design from scratch | `/workflow:ui-design:explore-auto --prompt "description"` | Generate multiple design variants |
| ⏭️ Backend/No UI | Skip | Pure backend API, CLI tools, etc. |

**Examples**:
```bash
# Have reference: Imitate Google Docs collaboration interface
/workflow:ui-design:imitate-auto --input "https://docs.google.com"
# Have reference: Use local screenshots or code files
/workflow:ui-design:imitate-auto --input "design-refs/*.png"
# Or import from existing code
/workflow:ui-design:imitate-auto --input "./src/components"

# No reference: Design from scratch
/workflow:ui-design:explore-auto --prompt "Modern minimalist document collaboration editing interface" --style-variants 3
@@ -14,9 +14,10 @@ graph TB
end

subgraph "Session Management"
MARKER[".active-session marker"]
SESSION["workflow-session.json"]
WDIR[".workflow/ directories"]
ACTIVE_DIR[".workflow/active/"]
ARCHIVE_DIR[".workflow/archives/"]
end

subgraph "Task System"
@@ -124,9 +125,7 @@ stateDiagram-v2
CreateStructure --> CreateJSON: Create workflow-session.json
CreateJSON --> CreatePlan: Create IMPL_PLAN.md
CreatePlan --> CreateTasks: Create .task/ directory
CreateTasks --> SetActive: touch .active-session-name

SetActive --> Active: Session Ready
CreateTasks --> Active: Session Ready in .workflow/active/

Active --> Paused: Switch to Another Session
Active --> Working: Execute Tasks