mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-02-05 01:50:27 +08:00
Merge branch 'main' of https://github.com/catlog22/Claude-Code-Workflow
126
.claude/commands/cli/mode/document-analysis.md
Normal file
@@ -0,0 +1,126 @@
---
name: document-analysis
description: Read-only technical document/paper analysis using Gemini/Qwen/Codex with systematic comprehension template for insights extraction
argument-hint: "[--tool codex|gemini|qwen] [--enhance] [--cd path] document path or topic"
allowed-tools: SlashCommand(*), Bash(*), Task(*), Read(*)
---

# CLI Mode: Document Analysis (/cli:mode:document-analysis)

## Purpose

Systematic analysis of technical documents, research papers, API documentation, and technical specifications.

**Tool Selection**:
- **gemini** (default) - Best for document comprehension and structure analysis
- **qwen** - Fallback when Gemini is unavailable
- **codex** - Alternative for complex technical documents

**Key Feature**: `--cd` flag for directory-scoped document discovery

## Parameters

- `--tool <gemini|qwen|codex>` - Tool selection (default: gemini)
- `--enhance` - Enhance the analysis target with `/enhance-prompt`
- `--cd "path"` - Target directory for document search
- `<document-path-or-topic>` (Required) - File path or topic description

## Tool Usage

**Gemini** (Primary):
```bash
/cli:mode:document-analysis "README.md"
/cli:mode:document-analysis --tool gemini "analyze API documentation"
```

**Qwen** (Fallback):
```bash
/cli:mode:document-analysis --tool qwen "docs/architecture.md"
```

**Codex** (Alternative):
```bash
/cli:mode:document-analysis --tool codex "research paper in docs/"
```

## Execution Flow

Uses **cli-execution-agent** for automated document analysis:

```javascript
Task(
  subagent_type="cli-execution-agent",
  description="Systematic document comprehension and insights extraction",
  prompt=`
    Task: ${document_path_or_topic}
    Mode: document-analysis
    Tool: ${tool_flag || 'gemini'}
    Directory: ${cd_path || '.'}
    Enhance: ${enhance_flag}
    Template: ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-technical-document.txt

    Execute systematic document analysis:

    1. Document Discovery:
       - Locate target document(s) via path or topic keywords
       - Identify document type (README, API docs, research paper, spec, tutorial)
       - Detect document format (Markdown, PDF, plain text, reStructuredText)
       - Discover related documents (references, appendices, examples)
       - Use MCP/ripgrep for comprehensive file discovery

    2. Pre-Analysis Planning (Required):
       - Determine document structure (sections, hierarchy, flow)
       - Identify key components (abstract, methodology, implementation details)
       - Map dependencies and cross-references
       - Assess document scope and complexity
       - Plan analysis approach based on document type

    3. CLI Command Construction:
       - Tool: ${tool_flag || 'gemini'} (qwen fallback, codex for complex docs)
       - Directory: cd ${cd_path || '.'} &&
       - Context: @{document_paths} + @CLAUDE.md + related files
       - Mode: analysis (read-only)
       - Template: analysis/02-analyze-technical-document.txt

    4. Analysis Execution:
       - Apply 6-field template structure (PURPOSE, TASK, MODE, CONTEXT, EXPECTED, RULES)
       - Execute multi-phase analysis protocol with pre-planning
       - Perform self-critique before final output
       - Generate structured report with evidence-based insights

    5. Output Generation:
       - Comprehensive document analysis report
       - Structured insights with section references
       - Critical assessment with evidence
       - Actionable recommendations
       - Save to .workflow/active/WFS-[id]/.chat/doc-analysis-[timestamp].md (or .scratchpad/)
  `
)
```

## Core Rules

- **Read-only**: Analyzes documents, does NOT modify files
- **Evidence-based**: All claims must reference specific sections/pages
- **Pre-planning**: Requires analysis approach planning before execution
- **Precise language**: Direct, accurate wording - no persuasive embellishment
- **Output**: `.workflow/active/WFS-[id]/.chat/doc-analysis-[timestamp].md` (or `.scratchpad/` if no session)

## Document Types Supported

| Type | Focus Areas | Key Outputs |
|------|-------------|-------------|
| README | Purpose, setup, usage | Integration steps, quick-start guide |
| API Documentation | Endpoints, parameters, responses | API usage patterns, integration points |
| Research Paper | Methodology, findings, validity | Applicable techniques, implementation feasibility |
| Specification | Requirements, standards, constraints | Compliance checklist, implementation requirements |
| Tutorial | Learning path, examples, exercises | Key concepts, practical applications |
| Architecture Docs | System design, components, patterns | Design decisions, integration points, trade-offs |

## Best Practices

1. **Scope Definition**: Clearly define what aspects to analyze before starting
2. **Layered Reading**: Structure/Overview → Details → Critical Analysis → Synthesis
3. **Evidence Trail**: Track section references for all extracted information
4. **Gap Identification**: Note missing information or unclear sections explicitly
5. **Actionable Output**: Focus on insights that inform decisions or actions
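As a quick illustration of the output convention above, the analysis path can be assembled in shell; `WFS-demo` is a hypothetical stand-in for the real session id, which normally comes from the active workflow session:

```shell
# Hypothetical sketch of the doc-analysis output path convention
session_id="WFS-demo"                 # assumed; normally the active session's id
timestamp=$(date +%Y%m%d-%H%M%S)
out_path=".workflow/active/${session_id}/.chat/doc-analysis-${timestamp}.md"
echo "$out_path"
```

If no session exists, the same filename lands under `.scratchpad/` instead, per the Core Rules above.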
@@ -93,7 +93,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null | xargs cat 2>/dev/null; fi)
```

-**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/phase2-analysis.json` with structure:
+**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/doc-planning-data.json` with structure:

```json
{

@@ -122,7 +122,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj

**Then** use **Edit tool** to update `workflow-session.json` adding analysis field.

-**Output**: Single `phase2-analysis.json` with all analysis data (no temp files or Python scripts).
+**Output**: Single `doc-planning-data.json` with all analysis data (no temp files or Python scripts).

**Auto-skipped**: Tests (`**/test/**`, `**/*.test.*`), Build (`**/node_modules/**`, `**/dist/**`), Config (root-level files), Vendor directories.

@@ -131,8 +131,8 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj

**Commands**:

```bash
-# Count existing docs from phase2-analysis.json
-bash(cat .workflow/active/WFS-docs-{timestamp}/.process/phase2-analysis.json | jq '.existing_docs.file_list | length')
+# Count existing docs from doc-planning-data.json
+bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq '.existing_docs.file_list | length')
```
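The jq counting idiom used in that hunk can be tried standalone (assumes jq is installed; the sample JSON is illustrative, mirroring the `.existing_docs.file_list` shape):

```shell
# Illustrative data only; mirrors the .existing_docs.file_list structure
count=$(echo '{"existing_docs":{"file_list":["a.md","b.md","c.md"]}}' \
  | jq '.existing_docs.file_list | length')
echo "$count"   # → 3
```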

**Data Processing**: Use count result, then use **Edit tool** to update `workflow-session.json`:

@@ -186,8 +186,8 @@ Large Projects (single dir >10 docs):

**Commands**:

```bash
-# 1. Get top-level directories from phase2-analysis.json
-bash(cat .workflow/active/WFS-docs-{timestamp}/.process/phase2-analysis.json | jq -r '.top_level_dirs[]')
+# 1. Get top-level directories from doc-planning-data.json
+bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq -r '.top_level_dirs[]')

# 2. Get mode from workflow-session.json
bash(cat .workflow/active/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')

@@ -205,7 +205,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
- If total ≤10 docs: create group
- If total >10 docs: split to 1 dir/group or subdivide
- If single dir >10 docs: split by subdirectories
-3. Use **Edit tool** to update `phase2-analysis.json` adding groups field:
+3. Use **Edit tool** to update `doc-planning-data.json` adding groups field:
```json
"groups": {
  "count": 3,

@@ -219,7 +219,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo

**Task ID Calculation**:
```bash
-group_count=$(jq '.groups.count' .workflow/active/WFS-docs-{timestamp}/.process/phase2-analysis.json)
+group_count=$(jq '.groups.count' .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json)
readme_id=$((group_count + 1))  # Next ID after groups
arch_id=$((group_count + 2))
api_id=$((group_count + 3))
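The ID arithmetic in that hunk can be checked with a quick sketch; `group_count` is hard-coded to 3 here instead of being read from `doc-planning-data.json` via jq:

```shell
group_count=3                    # assumed; normally read via jq as shown above
readme_id=$((group_count + 1))   # first ID after the per-group tasks
arch_id=$((group_count + 2))
api_id=$((group_count + 3))
echo "IMPL-00${readme_id} IMPL-00${arch_id} IMPL-00${api_id}"   # → IMPL-004 IMPL-005 IMPL-006
```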
@@ -241,7 +241,7 @@ api_id=$((group_count + 3))

**Generation Process**:
1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
-2. Read group assignments from phase2-analysis.json
+2. Read group assignments from doc-planning-data.json
3. Generate Level 1 tasks (IMPL-001 to IMPL-N, one per group)
4. Generate Level 2+ tasks if mode=full (README, ARCHITECTURE, HTTP API)

@@ -266,14 +266,14 @@ api_id=$((group_count + 3))
  },
  "context": {
    "requirements": [
-     "Process directories from group ${group_number} in phase2-analysis.json",
+     "Process directories from group ${group_number} in doc-planning-data.json",
      "Generate docs to .workflow/docs/${project_name}/ (mirrored structure)",
      "Code folders: API.md + README.md; Navigation folders: README.md only",
      "Use pre-analyzed data from Phase 2 (no redundant analysis)"
    ],
    "focus_paths": ["${group_dirs_from_json}"],
    "precomputed_data": {
-     "phase2_analysis": "${session_dir}/.process/phase2-analysis.json"
+     "phase2_analysis": "${session_dir}/.process/doc-planning-data.json"
    }
  },
  "flow_control": {

@@ -282,8 +282,8 @@ api_id=$((group_count + 3))
    "step": "load_precomputed_data",
    "action": "Load Phase 2 analysis and extract group directories",
    "commands": [
-     "bash(cat ${session_dir}/.process/phase2-analysis.json)",
-     "bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/phase2-analysis.json)"
+     "bash(cat ${session_dir}/.process/doc-planning-data.json)",
+     "bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/doc-planning-data.json)"
    ],
    "output_to": "phase2_context",
    "note": "Single JSON file contains all Phase 2 analysis results"

@@ -328,7 +328,7 @@ api_id=$((group_count + 3))
  {
    "step": 2,
    "title": "Batch generate documentation via CLI",
-   "command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/phase2-analysis.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
+   "command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/doc-planning-data.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
    "depends_on": [1],
    "output": "generated_docs"
  }

@@ -468,7 +468,7 @@ api_id=$((group_count + 3))
├── IMPL_PLAN.md
├── TODO_LIST.md
├── .process/
-│   └── phase2-analysis.json     # All Phase 2 analysis data (replaces 7+ files)
+│   └── doc-planning-data.json   # All Phase 2 analysis data (replaces 7+ files)
└── .task/
    ├── IMPL-001.json   # Small: all modules | Large: group 1
    ├── IMPL-00N.json   # (Large only: groups 2-N)

@@ -477,7 +477,7 @@ api_id=$((group_count + 3))
    └── IMPL-{N+3}.json # HTTP API (optional)
```

-**phase2-analysis.json Structure**:
+**doc-planning-data.json Structure**:
```json
{
  "metadata": {
@@ -1,15 +1,16 @@
---
name: workflow:status
description: Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view
-argument-hint: "[optional: --project|task-id|--validate]"
+argument-hint: "[optional: --project|task-id|--validate|--dashboard]"
---

# Workflow Status Command (/workflow:status)

## Overview
-Generates on-demand views from project and session data. Supports two modes:
+Generates on-demand views from project and session data. Supports multiple modes:
1. **Project Overview** (`--project`): Shows completed features and project statistics
2. **Workflow Tasks** (default): Shows current session task progress
+3. **HTML Dashboard** (`--dashboard`): Generates interactive HTML task board with active and archived sessions

No synchronization needed - all views are calculated from current JSON state.

@@ -19,6 +20,7 @@ No synchronization needed - all views are calculated from current JSON state.
/workflow:status --project    # Show project-level feature registry
/workflow:status impl-1       # Show specific task details
/workflow:status --validate   # Validate workflow integrity
+/workflow:status --dashboard  # Generate HTML dashboard board
```

## Implementation Flow

@@ -192,4 +194,135 @@ find .workflow/active/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null |

## Completed Tasks
- [COMPLETED] impl-0: Setup completed
```

## Dashboard Mode (HTML Board)

### Step 1: Check for --dashboard flag
```bash
# If --dashboard flag present → Execute Dashboard Mode
```

### Step 2: Collect Workflow Data

**Collect Active Sessions**:
```bash
# Find all active sessions
find .workflow/active/ -name "WFS-*" -type d 2>/dev/null

# For each active session, read metadata and tasks
for session in $(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null); do
  cat "$session/workflow-session.json"
  find "$session/.task/" -name "*.json" -type f 2>/dev/null
done
```

**Collect Archived Sessions**:
```bash
# Find all archived sessions
find .workflow/archives/ -name "WFS-*" -type d 2>/dev/null

# Read manifest if exists
cat .workflow/archives/manifest.json 2>/dev/null

# For each archived session, read metadata
for archive in $(find .workflow/archives/ -name "WFS-*" -type d 2>/dev/null); do
  cat "$archive/workflow-session.json" 2>/dev/null
  # Count completed tasks
  find "$archive/.task/" -name "*.json" -type f 2>/dev/null | wc -l
done
```

### Step 3: Process and Structure Data

**Build data structure for dashboard** (pseudocode mixing JavaScript with tool calls):
```javascript
const dashboardData = {
  activeSessions: [],
  archivedSessions: [],
  generatedAt: new Date().toISOString()
};

// Process active sessions
for each active_session in active_sessions:
  const sessionData = JSON.parse(Read(active_session/workflow-session.json));
  const tasks = [];

  // Load all tasks for this session
  for each task_file in find(active_session/.task/*.json):
    const taskData = JSON.parse(Read(task_file));
    tasks.push({
      task_id: taskData.task_id,
      title: taskData.title,
      status: taskData.status,
      type: taskData.type
    });

  dashboardData.activeSessions.push({
    session_id: sessionData.session_id,
    project: sessionData.project,
    status: sessionData.status,
    created_at: sessionData.created_at || sessionData.initialized_at,
    tasks: tasks
  });

// Process archived sessions
for each archived_session in archived_sessions:
  const sessionData = JSON.parse(Read(archived_session/workflow-session.json));
  const taskCount = bash(find archived_session/.task/*.json | wc -l);

  dashboardData.archivedSessions.push({
    session_id: sessionData.session_id,
    project: sessionData.project,
    archived_at: sessionData.completed_at || sessionData.archived_at,
    taskCount: parseInt(taskCount),
    archive_path: archived_session
  });
```
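For reference, the same assembly can be sketched as runnable shell with jq (assumed available), using hard-coded stand-ins for the session and task JSON; the real data comes from the Read/bash calls in the pseudocode:

```shell
# Illustrative stand-ins for one session's metadata and its task list
session='{"session_id":"WFS-demo","project":"demo","status":"active"}'
tasks='[{"task_id":"impl-1","title":"Setup","status":"completed","type":"impl"}]'
# Merge the task list into the session object and wrap it in the payload shape
dashboard=$(jq -n --argjson s "$session" --argjson t "$tasks" \
  '{activeSessions: [($s + {tasks: $t})], archivedSessions: [], generatedAt: (now | todate)}')
echo "$dashboard" | jq -r '.activeSessions[0].session_id'   # → WFS-demo
```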

### Step 4: Generate HTML from Template

**Load template and inject data**:
```javascript
// Read the HTML template
const template = Read("~/.claude/templates/workflow-dashboard.html");

// Prepare data for injection
const dataJson = JSON.stringify(dashboardData, null, 2);

// Replace placeholder with actual data
const htmlContent = template.replace('{{WORKFLOW_DATA}}', dataJson);

// Ensure .workflow directory exists
bash(mkdir -p .workflow);
```
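The `{{WORKFLOW_DATA}}` substitution itself can be illustrated in plain bash; this is a simplified stand-in for the template.replace step above, not the actual implementation (bash parameter expansion is used because JSON payloads may contain characters that sed-style substitution treats specially):

```shell
# Hypothetical one-placeholder template and payload
template='<script>const data = {{WORKFLOW_DATA}};</script>'
data='{"activeSessions":[],"archivedSessions":[]}'
# ${var//pattern/replacement} substitutes the literal quoted pattern
html=${template//"{{WORKFLOW_DATA}}"/$data}
echo "$html"   # → <script>const data = {"activeSessions":[],"archivedSessions":[]};</script>
```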

### Step 5: Write HTML File

```javascript
// Write the generated HTML to .workflow/dashboard.html
Write({
  file_path: ".workflow/dashboard.html",
  content: htmlContent
})
```

### Step 6: Display Success Message

```markdown
Dashboard generated successfully!

Location: .workflow/dashboard.html

Open in browser:
file://$(pwd)/.workflow/dashboard.html

Features:
- 📊 Active sessions overview
- 📦 Archived sessions history
- 🔍 Search and filter
- 📈 Progress tracking
- 🎨 Dark/light theme

Refresh data: Re-run /workflow:status --dashboard
```
@@ -89,7 +89,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null | xargs cat 2>/dev/null; fi)
```

-**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/phase2-analysis.json` with structure:
+**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/doc-planning-data.json` with structure:

```json
{

@@ -118,7 +118,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj

**Then** use **Edit tool** to update `workflow-session.json` adding analysis field.

-**Output**: Single `phase2-analysis.json` with all analysis data (no temp files or Python scripts).
+**Output**: Single `doc-planning-data.json` with all analysis data (no temp files or Python scripts).

**Auto-skipped**: Tests (`**/test/**`, `**/*.test.*`), Build (`**/node_modules/**`, `**/dist/**`), Config (root-level files), Vendor directories.

@@ -127,8 +127,8 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj

**Commands**:

```bash
-# Count existing docs from phase2-analysis.json
-bash(cat .workflow/WFS-docs-{timestamp}/.process/phase2-analysis.json | jq '.existing_docs.file_list | length')
+# Count existing docs from doc-planning-data.json
+bash(cat .workflow/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq '.existing_docs.file_list | length')
```

**Data Processing**: Use count result, then use **Edit tool** to update `workflow-session.json`:

@@ -182,8 +182,8 @@ Large Projects (single dir >10 docs):

**Commands**:

```bash
-# 1. Get top-level directories from phase2-analysis.json
-bash(cat .workflow/WFS-docs-{timestamp}/.process/phase2-analysis.json | jq -r '.top_level_dirs[]')
+# 1. Get top-level directories from doc-planning-data.json
+bash(cat .workflow/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq -r '.top_level_dirs[]')

# 2. Get mode from workflow-session.json
bash(cat .workflow/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')

@@ -201,7 +201,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
- If total ≤10 docs: create group
- If total >10 docs: split to 1 dir/group or subdivide
- If single dir >10 docs: split by subdirectories
-3. Use **Edit tool** to update `phase2-analysis.json` adding groups field:
+3. Use **Edit tool** to update `doc-planning-data.json` adding groups field:
```json
"groups": {
  "count": 3,

@@ -215,7 +215,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo

**Task ID Calculation**:
```bash
-group_count=$(jq '.groups.count' .workflow/WFS-docs-{timestamp}/.process/phase2-analysis.json)
+group_count=$(jq '.groups.count' .workflow/WFS-docs-{timestamp}/.process/doc-planning-data.json)
readme_id=$((group_count + 1))  # Next ID after groups
arch_id=$((group_count + 2))
api_id=$((group_count + 3))

@@ -237,7 +237,7 @@ api_id=$((group_count + 3))

**Generation Process**:
1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
-2. Read group assignments from phase2-analysis.json
+2. Read group assignments from doc-planning-data.json
3. Generate Level 1 tasks (IMPL-001 to IMPL-N, one per group)
4. Generate Level 2+ tasks if mode=full (README, ARCHITECTURE, HTTP API)

@@ -262,14 +262,14 @@ api_id=$((group_count + 3))
  },
  "context": {
    "requirements": [
-     "Process directories from group ${group_number} in phase2-analysis.json",
+     "Process directories from group ${group_number} in doc-planning-data.json",
      "Generate docs to .workflow/docs/${project_name}/ (mirrored structure)",
      "Code folders: API.md + README.md; Navigation folders: README.md only",
      "Use pre-analyzed data from Phase 2 (no redundant analysis)"
    ],
    "focus_paths": ["${group_dirs_from_json}"],
    "precomputed_data": {
-     "phase2_analysis": "${session_dir}/.process/phase2-analysis.json"
+     "phase2_analysis": "${session_dir}/.process/doc-planning-data.json"
    }
  },
  "flow_control": {

@@ -278,8 +278,8 @@ api_id=$((group_count + 3))
    "step": "load_precomputed_data",
    "action": "Load Phase 2 analysis and extract group directories",
    "commands": [
-     "bash(cat ${session_dir}/.process/phase2-analysis.json)",
-     "bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/phase2-analysis.json)"
+     "bash(cat ${session_dir}/.process/doc-planning-data.json)",
+     "bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/doc-planning-data.json)"
    ],
    "output_to": "phase2_context",
    "note": "Single JSON file contains all Phase 2 analysis results"

@@ -324,7 +324,7 @@ api_id=$((group_count + 3))
  {
    "step": 2,
    "title": "Batch generate documentation via CLI",
-   "command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/phase2-analysis.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
+   "command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/doc-planning-data.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
    "depends_on": [1],
    "output": "generated_docs"
  }

@@ -464,7 +464,7 @@ api_id=$((group_count + 3))
├── IMPL_PLAN.md
├── TODO_LIST.md
├── .process/
-│   └── phase2-analysis.json     # All Phase 2 analysis data (replaces 7+ files)
+│   └── doc-planning-data.json   # All Phase 2 analysis data (replaces 7+ files)
└── .task/
    ├── IMPL-001.json   # Small: all modules | Large: group 1
    ├── IMPL-00N.json   # (Large only: groups 2-N)

@@ -473,7 +473,7 @@ api_id=$((group_count + 3))
    └── IMPL-{N+3}.json # HTTP API (optional)
```

-**phase2-analysis.json Structure**:
+**doc-planning-data.json Structure**:
```json
{
  "metadata": {
664
.claude/templates/workflow-dashboard.html
Normal file
@@ -0,0 +1,664 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
<head>
|
||||
<meta charset="UTF-8">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
||||
<title>Workflow Dashboard - Task Board</title>
|
||||
<style>
|
||||
:root {
|
||||
--bg-primary: #f5f7fa;
|
||||
--bg-secondary: #ffffff;
|
||||
--bg-card: #ffffff;
|
||||
--text-primary: #1a202c;
|
||||
--text-secondary: #718096;
|
||||
--border-color: #e2e8f0;
|
||||
--accent-color: #4299e1;
|
||||
--success-color: #48bb78;
|
||||
--warning-color: #ed8936;
|
||||
--danger-color: #f56565;
|
||||
--shadow: 0 1px 3px 0 rgba(0, 0, 0, 0.1), 0 1px 2px 0 rgba(0, 0, 0, 0.06);
|
||||
--shadow-lg: 0 10px 15px -3px rgba(0, 0, 0, 0.1), 0 4px 6px -2px rgba(0, 0, 0, 0.05);
|
||||
}
|
||||
|
||||
[data-theme="dark"] {
|
||||
--bg-primary: #1a202c;
|
||||
--bg-secondary: #2d3748;
|
||||
--bg-card: #2d3748;
|
||||
--text-primary: #f7fafc;
|
||||
--text-secondary: #a0aec0;
|
||||
--border-color: #4a5568;
|
||||
--shadow: 0 1px 3px 0 rgba(0, 0, 0, 0.3), 0 1px 2px 0 rgba(0, 0, 0, 0.2);
|
||||
--shadow-lg: 0 10px 15px -3px rgba(0, 0, 0, 0.3), 0 4px 6px -2px rgba(0, 0, 0, 0.2);
|
||||
}
|
||||
|
||||
* {
|
||||
margin: 0;
|
||||
padding: 0;
|
||||
box-sizing: border-box;
|
||||
}
|
||||
|
||||
body {
|
||||
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, 'Helvetica Neue', Arial, sans-serif;
|
||||
background-color: var(--bg-primary);
|
||||
color: var(--text-primary);
|
||||
line-height: 1.6;
|
||||
transition: background-color 0.3s, color 0.3s;
|
||||
}
|
||||
|
||||
.container {
|
||||
max-width: 1400px;
|
||||
margin: 0 auto;
|
||||
padding: 20px;
|
||||
}
|
||||
|
||||
header {
|
||||
background-color: var(--bg-secondary);
|
||||
box-shadow: var(--shadow);
|
||||
padding: 20px;
|
||||
margin-bottom: 30px;
|
||||
border-radius: 8px;
|
||||
}
|
||||
|
||||
h1 {
|
||||
font-size: 2rem;
|
||||
margin-bottom: 10px;
|
||||
color: var(--accent-color);
|
||||
}
|
||||
|
||||
.header-controls {
|
||||
display: flex;
|
||||
gap: 15px;
|
||||
flex-wrap: wrap;
|
||||
align-items: center;
|
||||
margin-top: 15px;
|
||||
}
|
||||
|
||||
.search-box {
|
||||
flex: 1;
|
||||
min-width: 250px;
|
||||
position: relative;
|
||||
}
|
||||
|
||||
.search-box input {
|
||||
width: 100%;
|
||||
padding: 10px 15px;
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: 6px;
|
||||
background-color: var(--bg-primary);
|
||||
color: var(--text-primary);
|
||||
font-size: 0.95rem;
|
||||
}
|
||||
|
||||
.filter-group {
|
||||
display: flex;
|
||||
gap: 10px;
|
||||
flex-wrap: wrap;
|
||||
}
|
||||
|
||||
.btn {
|
||||
padding: 10px 20px;
|
||||
border: none;
|
||||
border-radius: 6px;
|
||||
cursor: pointer;
|
||||
font-size: 0.9rem;
|
||||
font-weight: 500;
|
||||
transition: all 0.2s;
|
||||
background-color: var(--bg-card);
|
||||
color: var(--text-primary);
|
||||
border: 1px solid var(--border-color);
|
||||
}
|
||||
|
||||
.btn:hover {
|
||||
transform: translateY(-1px);
|
||||
box-shadow: var(--shadow);
|
||||
}
|
||||
|
||||
.btn.active {
|
||||
background-color: var(--accent-color);
|
||||
color: white;
|
||||
border-color: var(--accent-color);
|
||||
}
|
||||
|
||||
.stats-grid {
|
||||
display: grid;
|
||||
grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
|
||||
gap: 20px;
|
||||
margin-bottom: 30px;
|
||||
}
|
||||
|
||||
.stat-card {
|
||||
background-color: var(--bg-card);
|
||||
padding: 20px;
|
||||
border-radius: 8px;
|
||||
box-shadow: var(--shadow);
|
||||
transition: transform 0.2s;
|
||||
}
|
||||
|
||||
.stat-card:hover {
|
||||
transform: translateY(-2px);
|
||||
box-shadow: var(--shadow-lg);
|
||||
}
|
||||
|
||||
.stat-value {
|
||||
font-size: 2rem;
|
||||
font-weight: bold;
|
||||
color: var(--accent-color);
|
||||
}
|
||||
|
||||
.stat-label {
|
||||
color: var(--text-secondary);
|
||||
font-size: 0.9rem;
|
||||
margin-top: 5px;
|
||||
}
|
||||
|
||||
.section {
|
||||
margin-bottom: 40px;
|
||||
}
|
||||
|
||||
.section-header {
|
||||
display: flex;
|
||||
justify-content: space-between;
|
||||
align-items: center;
|
||||
margin-bottom: 20px;
|
||||
}
|
||||
|
||||
.section-title {
|
||||
font-size: 1.5rem;
|
||||
font-weight: 600;
|
||||
}
|
||||
|
||||
.sessions-grid {
|
||||
display: grid;
|
||||
grid-template-columns: repeat(auto-fill, minmax(350px, 1fr));
|
||||
gap: 20px;
|
||||
}
|
||||
|
||||
.session-card {
    background-color: var(--bg-card);
    border-radius: 8px;
    box-shadow: var(--shadow);
    padding: 20px;
    transition: all 0.3s;
}

.session-card:hover {
    transform: translateY(-4px);
    box-shadow: var(--shadow-lg);
}

.session-header {
    display: flex;
    justify-content: space-between;
    align-items: start;
    margin-bottom: 15px;
}

.session-title {
    font-size: 1.2rem;
    font-weight: 600;
    color: var(--text-primary);
    margin-bottom: 5px;
}

.session-status {
    padding: 4px 12px;
    border-radius: 12px;
    font-size: 0.75rem;
    font-weight: 600;
    text-transform: uppercase;
}

.status-active {
    background-color: #c6f6d5;
    color: #22543d;
}

.status-archived {
    background-color: #e2e8f0;
    color: #4a5568;
}

[data-theme="dark"] .status-active {
    background-color: #22543d;
    color: #c6f6d5;
}

[data-theme="dark"] .status-archived {
    background-color: #4a5568;
    color: #e2e8f0;
}

.session-meta {
    display: flex;
    gap: 15px;
    font-size: 0.85rem;
    color: var(--text-secondary);
    margin-bottom: 15px;
}

.progress-bar {
    width: 100%;
    height: 8px;
    background-color: var(--bg-primary);
    border-radius: 4px;
    overflow: hidden;
    margin: 15px 0;
}

.progress-fill {
    height: 100%;
    background: linear-gradient(90deg, var(--accent-color), var(--success-color));
    transition: width 0.3s;
}

.tasks-list {
    margin-top: 15px;
}

.task-item {
    display: flex;
    align-items: center;
    padding: 10px;
    margin-bottom: 8px;
    background-color: var(--bg-primary);
    border-radius: 6px;
    border-left: 3px solid var(--border-color);
    transition: all 0.2s;
}

.task-item:hover {
    transform: translateX(4px);
}

.task-item.completed {
    border-left-color: var(--success-color);
    opacity: 0.8;
}

.task-item.in_progress {
    border-left-color: var(--warning-color);
}

.task-item.pending {
    border-left-color: var(--text-secondary);
}

.task-checkbox {
    width: 20px;
    height: 20px;
    border-radius: 50%;
    border: 2px solid var(--border-color);
    margin-right: 12px;
    display: flex;
    align-items: center;
    justify-content: center;
    flex-shrink: 0;
}

.task-item.completed .task-checkbox {
    background-color: var(--success-color);
    border-color: var(--success-color);
}

.task-item.completed .task-checkbox::after {
    content: '✓';
    color: white;
    font-size: 0.8rem;
    font-weight: bold;
}

.task-item.in_progress .task-checkbox {
    border-color: var(--warning-color);
    background-color: var(--warning-color);
}

.task-item.in_progress .task-checkbox::after {
    content: '⟳';
    color: white;
    font-size: 0.9rem;
}

.task-title {
    flex: 1;
    font-size: 0.9rem;
}

.task-id {
    font-size: 0.75rem;
    color: var(--text-secondary);
    font-family: monospace;
    margin-left: 10px;
}

.empty-state {
    text-align: center;
    padding: 60px 20px;
    color: var(--text-secondary);
}

.empty-state-icon {
    font-size: 4rem;
    margin-bottom: 20px;
    opacity: 0.5;
}

.theme-toggle {
    position: fixed;
    bottom: 30px;
    right: 30px;
    width: 60px;
    height: 60px;
    border-radius: 50%;
    background-color: var(--accent-color);
    color: white;
    border: none;
    cursor: pointer;
    font-size: 1.5rem;
    box-shadow: var(--shadow-lg);
    transition: all 0.3s;
    z-index: 1000;
}

.theme-toggle:hover {
    transform: scale(1.1);
}

@media (max-width: 768px) {
    .sessions-grid {
        grid-template-columns: 1fr;
    }

    .stats-grid {
        grid-template-columns: repeat(2, 1fr);
    }

    h1 {
        font-size: 1.5rem;
    }

    .header-controls {
        flex-direction: column;
        align-items: stretch;
    }

    .search-box {
        width: 100%;
    }
}

.badge {
    display: inline-block;
    padding: 2px 8px;
    border-radius: 4px;
    font-size: 0.75rem;
    font-weight: 500;
    margin-left: 8px;
}

.badge-count {
    background-color: var(--accent-color);
    color: white;
}

.session-footer {
    margin-top: 15px;
    padding-top: 15px;
    border-top: 1px solid var(--border-color);
    font-size: 0.85rem;
    color: var(--text-secondary);
}
</style>
</head>
<body>
  <div class="container">

    <header>
      <h1>🚀 Workflow Dashboard</h1>
      <p style="color: var(--text-secondary);">Task Board - Active and Archived Sessions</p>

      <div class="header-controls">
        <div class="search-box">
          <input type="text" id="searchInput" placeholder="🔍 Search tasks or sessions..." />
        </div>

        <div class="filter-group">
          <button class="btn active" data-filter="all">All</button>
          <button class="btn" data-filter="active">Active</button>
          <button class="btn" data-filter="archived">Archived</button>
        </div>
      </div>
    </header>

    <div class="stats-grid">
      <div class="stat-card">
        <div class="stat-value" id="totalSessions">0</div>
        <div class="stat-label">Total Sessions</div>
      </div>
      <div class="stat-card">
        <div class="stat-value" id="activeSessions">0</div>
        <div class="stat-label">Active Sessions</div>
      </div>
      <div class="stat-card">
        <div class="stat-value" id="totalTasks">0</div>
        <div class="stat-label">Total Tasks</div>
      </div>
      <div class="stat-card">
        <div class="stat-value" id="completedTasks">0</div>
        <div class="stat-label">Completed Tasks</div>
      </div>
    </div>

    <div class="section" id="activeSectionContainer">
      <div class="section-header">
        <h2 class="section-title">📋 Active Sessions</h2>
      </div>
      <div class="sessions-grid" id="activeSessions"></div>
    </div>

    <div class="section" id="archivedSectionContainer">
      <div class="section-header">
        <h2 class="section-title">📦 Archived Sessions</h2>
      </div>
      <div class="sessions-grid" id="archivedSessions"></div>
    </div>
  </div>

  <button class="theme-toggle" id="themeToggle">🌙</button>

  <script>
    // Workflow data will be injected here
    const workflowData = {{WORKFLOW_DATA}};

    // Theme management
    function initTheme() {
      const savedTheme = localStorage.getItem('theme') || 'light';
      document.documentElement.setAttribute('data-theme', savedTheme);
      updateThemeIcon(savedTheme);
    }

    function toggleTheme() {
      const currentTheme = document.documentElement.getAttribute('data-theme');
      const newTheme = currentTheme === 'dark' ? 'light' : 'dark';
      document.documentElement.setAttribute('data-theme', newTheme);
      localStorage.setItem('theme', newTheme);
      updateThemeIcon(newTheme);
    }

    function updateThemeIcon(theme) {
      document.getElementById('themeToggle').textContent = theme === 'dark' ? '☀️' : '🌙';
    }

    // Statistics calculation
    function updateStatistics() {
      const stats = {
        totalSessions: workflowData.activeSessions.length + workflowData.archivedSessions.length,
        activeSessions: workflowData.activeSessions.length,
        totalTasks: 0,
        completedTasks: 0
      };

      workflowData.activeSessions.forEach(session => {
        stats.totalTasks += session.tasks.length;
        stats.completedTasks += session.tasks.filter(t => t.status === 'completed').length;
      });

      workflowData.archivedSessions.forEach(session => {
        stats.totalTasks += session.taskCount || 0;
        stats.completedTasks += session.taskCount || 0;
      });

      document.getElementById('totalSessions').textContent = stats.totalSessions;
      document.getElementById('activeSessions').textContent = stats.activeSessions;
      document.getElementById('totalTasks').textContent = stats.totalTasks;
      document.getElementById('completedTasks').textContent = stats.completedTasks;
    }

    // Render session card
    function createSessionCard(session, isActive) {
      const card = document.createElement('div');
      card.className = 'session-card';
      card.dataset.sessionType = isActive ? 'active' : 'archived';

      const completedTasks = isActive
        ? session.tasks.filter(t => t.status === 'completed').length
        : (session.taskCount || 0);
      const totalTasks = isActive ? session.tasks.length : (session.taskCount || 0);
      const progress = totalTasks > 0 ? (completedTasks / totalTasks * 100) : 0;

      let tasksHtml = '';
      if (isActive && session.tasks.length > 0) {
        tasksHtml = `
          <div class="tasks-list">
            ${session.tasks.map(task => `
              <div class="task-item ${task.status}">
                <div class="task-checkbox"></div>
                <div class="task-title">${task.title || 'Untitled Task'}</div>
                <span class="task-id">${task.task_id || ''}</span>
              </div>
            `).join('')}
          </div>
        `;
      }

      card.innerHTML = `
        <div class="session-header">
          <div>
            <h3 class="session-title">${session.session_id || 'Unknown Session'}</h3>
            <div style="color: var(--text-secondary); font-size: 0.9rem; margin-top: 5px;">
              ${session.project || ''}
            </div>
          </div>
          <span class="session-status ${isActive ? 'status-active' : 'status-archived'}">
            ${isActive ? 'Active' : 'Archived'}
          </span>
        </div>

        <div class="session-meta">
          <span>📅 ${session.created_at || session.archived_at || 'N/A'}</span>
          <span>📊 ${completedTasks}/${totalTasks} tasks</span>
        </div>

        ${totalTasks > 0 ? `
          <div class="progress-bar">
            <div class="progress-fill" style="width: ${progress}%"></div>
          </div>
          <div style="text-align: center; font-size: 0.85rem; color: var(--text-secondary);">
            ${Math.round(progress)}% Complete
          </div>
        ` : ''}

        ${tasksHtml}

        ${!isActive && session.archive_path ? `
          <div class="session-footer">
            📁 Archive: ${session.archive_path}
          </div>
        ` : ''}
      `;

      return card;
    }

    // Render all sessions
    function renderSessions(filter = 'all') {
      // Scope lookups to the section containers: the id "activeSessions" is also
      // used by a stat-card value above, so getElementById would match that first.
      const activeContainer = document.querySelector('#activeSectionContainer .sessions-grid');
      const archivedContainer = document.querySelector('#archivedSectionContainer .sessions-grid');

      activeContainer.innerHTML = '';
      archivedContainer.innerHTML = '';

      if (filter === 'all' || filter === 'active') {
        if (workflowData.activeSessions.length === 0) {
          activeContainer.innerHTML = `
            <div class="empty-state">
              <div class="empty-state-icon">📭</div>
              <p>No active sessions</p>
            </div>
          `;
        } else {
          workflowData.activeSessions.forEach(session => {
            activeContainer.appendChild(createSessionCard(session, true));
          });
        }
      }

      if (filter === 'all' || filter === 'archived') {
        if (workflowData.archivedSessions.length === 0) {
          archivedContainer.innerHTML = `
            <div class="empty-state">
              <div class="empty-state-icon">📦</div>
              <p>No archived sessions</p>
            </div>
          `;
        } else {
          workflowData.archivedSessions.forEach(session => {
            archivedContainer.appendChild(createSessionCard(session, false));
          });
        }
      }

      // Show/hide sections
      document.getElementById('activeSectionContainer').style.display =
        (filter === 'all' || filter === 'active') ? 'block' : 'none';
      document.getElementById('archivedSectionContainer').style.display =
        (filter === 'all' || filter === 'archived') ? 'block' : 'none';
    }

    // Search functionality
    function setupSearch() {
      const searchInput = document.getElementById('searchInput');
      searchInput.addEventListener('input', (e) => {
        const query = e.target.value.toLowerCase();
        const cards = document.querySelectorAll('.session-card');

        cards.forEach(card => {
          const text = card.textContent.toLowerCase();
          card.style.display = text.includes(query) ? 'block' : 'none';
        });
      });
    }

    // Filter functionality
    function setupFilters() {
      const filterButtons = document.querySelectorAll('[data-filter]');
      filterButtons.forEach(btn => {
        btn.addEventListener('click', () => {
          filterButtons.forEach(b => b.classList.remove('active'));
          btn.classList.add('active');
          renderSessions(btn.dataset.filter);
        });
      });
    }

    // Initialize
    document.addEventListener('DOMContentLoaded', () => {
      initTheme();
      updateStatistics();
      renderSessions();
      setupSearch();
      setupFilters();

      document.getElementById('themeToggle').addEventListener('click', toggleTheme);
    });
  </script>
</body>
</html>
@@ -5,27 +5,22 @@ description: Product backlog management, user story creation, and feature priori

# Product Owner Planning Template

You are a **Product Owner** specializing in product backlog management, user story creation, and feature prioritization.
## Role & Scope

## Your Role & Responsibilities
**Role**: Product Owner
**Focus**: Product backlog management, user story definition, stakeholder alignment, value delivery
**Excluded**: Team management, technical implementation, detailed system design

**Primary Focus**: Product backlog management, user story definition, stakeholder alignment, and value delivery

**Core Responsibilities**:
- Product backlog creation and prioritization
- User story writing with acceptance criteria
- Stakeholder engagement and requirement gathering
- Feature value assessment and ROI analysis
- Release planning and roadmap management
- Sprint goal definition and commitment
- Acceptance testing and definition of done

**Does NOT Include**: Team management, technical implementation, detailed system design
## Planning Process (Required)
Before providing planning document, you MUST:
1. Analyze product vision and stakeholder needs
2. Define backlog structure and prioritization framework
3. Create user stories with acceptance criteria
4. Plan releases and define success metrics
5. Present structured planning document

## Planning Document Structure

Generate a comprehensive Product Owner planning document with the following structure:

### 1. Product Vision & Strategy
- **Product Vision**: Long-term product goals and target outcomes
- **Value Proposition**: User value and business benefits

@@ -5,55 +5,52 @@ category: development
keywords: [bug诊断, 故障分析, 修复方案]
---

# AI Persona & Core Mission
# Role & Output Requirements

You are a **资深软件工程师 & 故障诊断专家 (Senior Software Engineer & Fault Diagnosis Expert)**. Your mission is to meticulously analyze user-provided bug reports, logs, and code snippets to perform a forensic-level investigation. Your goal is to pinpoint the precise root cause of the bug and then propose a targeted, robust, and minimally invasive correction plan. **Critically, you will *not* write complete, ready-to-use code files. Your output is a diagnostic report and a clear, actionable correction suggestion, articulated in professional Chinese.** You are an expert at logical deduction, tracing execution flows, and anticipating the side effects of any proposed fix.
**Role**: Software engineer specializing in bug diagnosis
**Output Format**: Diagnostic report in Chinese following the specified structure
**Constraints**: Do NOT write complete code files. Provide diagnostic analysis and targeted correction suggestions only.

## II. ROLE DEFINITION & CORE CAPABILITIES
1. **Role**: Senior Software Engineer & Fault Diagnosis Expert.
2. **Core Capabilities**:
   * **Symptom Interpretation**: Deconstructing bug reports, stack traces, logs, and user descriptions into concrete technical observations.
   * **Logical Deduction & Root Cause Analysis**: Masterfully applying deductive reasoning to trace symptoms back to their fundamental cause, moving from what is happening to why it's happening.
   * **Code Traversal & Execution Flow Analysis**: Mentally (or schematically) tracing code paths, state changes, and data transformations to identify logical flaws.
   * **Hypothesis Formulation & Validation**: Formulating plausible hypotheses about the bug's origin and systematically validating or refuting them based on the provided evidence.
   * **Targeted Solution Design**: Proposing precise, effective, and low-risk code corrections rather than broad refactoring.
   * **Impact Analysis**: Foreseeing the potential ripple effects or unintended consequences of a proposed fix on other parts of the system.
   * **Clear Technical Communication (Chinese)**: Articulating complex diagnostic processes and correction plans in clear, unambiguous Chinese for a developer audience.
## Core Capabilities
- Interpret symptoms from bug reports, stack traces, and logs
- Trace execution flow to identify root causes
- Formulate and validate hypotheses about bug origins
- Design targeted, low-risk corrections
- Analyze impact on other system components

3. **Core Thinking Mode**:
   * **Detective-like & Methodical**: Start with the evidence (symptoms), follow the clues (code paths), identify the suspect (flawed logic), and prove the case (root cause).
   * **Hypothesis-Driven**: Actively form and state your working theories ("My initial hypothesis is that the null pointer is originating from module X because...") before reaching a conclusion.
   * **From Effect to Cause**: Your primary thought process should be working backward from the observed failure to the initial error.
   * **Chain-of-Thought (CoT) Driven**: Explicitly articulate your entire diagnostic journey, from symptom analysis to root cause identification.
## Analysis Process (Required)
**Before providing your final diagnosis, you MUST:**
1. Analyze symptoms and form initial hypothesis
2. Trace code execution to identify root cause
3. Design correction strategy
4. Assess potential impacts and risks
5. Present structured diagnostic report

## III. OBJECTIVES
1. **Analyze Evidence**: Thoroughly examine all provided information (bug description, code, logs) to understand the failure conditions.
2. **Pinpoint Root Cause**: Go beyond surface-level symptoms to identify the fundamental logical error, race condition, data corruption, or configuration issue.
3. **Propose Precise Correction**: Formulate a clear and targeted suggestion for how to fix the bug.
4. **Explain the Why**: Justify why the proposed correction effectively resolves the root cause.
5. **Assess Risks & Side Effects**: Identify potential negative impacts of the fix and suggest verification steps.
6. **Professional Chinese Output**: Produce a highly structured, professional diagnostic report and correction plan entirely in Chinese.
7. **Show Your Work (CoT)**: Demonstrate your analytical process clearly in the 思考过程 section.
## Objectives
1. Identify root cause (not just symptoms)
2. Propose targeted correction with justification
3. Assess risks and side effects
4. Provide verification steps

## IV. INPUT SPECIFICATIONS
1. **Bug Description**: A description of the problem, including observed behavior vs. expected behavior.
2. **Code Snippets/File Information**: Relevant source code where the bug is suspected to be.
3. **Logs/Stack Traces (Highly Recommended)**: Error messages, logs, or stack traces associated with the bug.
4. **Reproduction Steps (Optional)**: Steps to reproduce the bug.
## Input
- Bug description (observed vs. expected behavior)
- Code snippets or file locations
- Logs, stack traces, error messages
- Reproduction steps (if available)

## V. RESPONSE STRUCTURE & CONTENT (Strictly Adhere - Output in Chinese)
## Output Structure (Required)

Your response **MUST** be in Chinese and structured in Markdown as follows:
Output in Chinese using this Markdown structure:

---

### 0. 诊断思维链 (Diagnostic Chain-of-Thought)
* *(在此处,您必须结构化地展示您的诊断流程。)*
* **1. 症状分析 (Symptom Analysis):** 我首先将用户的描述、日志和错误信息进行归纳,提炼出关键的异常行为和技术线索。
* **2. 代码勘察与初步假设 (Code Exploration & Initial Hypothesis):** 基于症状,我将定位到最可疑的代码区域,并提出一个关于根本原因的初步假设。
* **3. 逻辑推演与根本原因定位 (Logical Deduction & Root Cause Pinpointing):** 我将沿着代码执行路径进行深入推演,验证或修正我的假设,直至锁定导致错误的精确逻辑点。
* **4. 修复方案设计 (Correction Strategy Design):** 在确定根本原因后,我将设计一个最直接、风险最低的修复方案。
* **5. 影响评估与验证规划 (Impact Assessment & Verification Planning):** 我会评估修复方案可能带来的副作用,并构思如何验证修复的有效性及系统的稳定性。
Present your analysis process in these steps:
1. **症状分析**: Summarize error symptoms and technical clues
2. **初步假设**: Identify suspicious code areas and form initial hypothesis
3. **根本原因定位**: Trace execution path to pinpoint exact cause
4. **修复方案设计**: Design targeted, low-risk correction
5. **影响评估**: Assess side effects and plan verification

### **故障诊断与修复建议报告 (Bug Diagnosis & Correction Proposal)**

@@ -114,17 +111,17 @@ Your response **MUST** be in Chinese and structured in Markdown as follows:
---
*(对每个需要修改的文件重复上述格式)*

## VI. KEY DIRECTIVES & CONSTRAINTS
1. **Language**: **All** descriptive parts MUST be in **Chinese**.
2. **No Full Code Generation**: **Strictly refrain** from writing complete functions or files. Your correction suggestions should be concise, using single lines, `diff` format, or pseudo-code to illustrate the change. Your role is to guide the developer, not replace them.
3. **Focus on RCA**: The quality of your Root Cause Analysis is paramount. It must be logical, convincing, and directly supported by the evidence.
4. **State Assumptions**: If the provided information is insufficient to be 100% certain, clearly state your assumptions in the 诊断分析过程 section.
## Key Requirements
1. **Language**: All output in Chinese
2. **No Code Generation**: Use diff format or pseudo-code only. Do not write complete functions or files
3. **Focus on Root Cause**: Analysis must be logical and evidence-based
4. **State Assumptions**: Clearly note any assumptions when information is incomplete

## VII. SELF-CORRECTION / REFLECTION
* Before finalizing your response, review it to ensure:
  * The 诊断思维链 accurately reflects a logical debugging process.
  * The Root Cause Analysis is deep, clear, and compelling.
  * The proposed correction directly addresses the identified root cause.
  * The correction suggestion is minimal and precise (not large-scale refactoring).
  * The verification steps are actionable and cover both success and failure cases.
  * You have strictly avoided generating large blocks of code.
## Self-Review Checklist
Before providing final output, verify:
- [ ] Diagnostic chain reflects logical debugging process
- [ ] Root cause analysis is clear and evidence-based
- [ ] Correction directly addresses root cause (not just symptoms)
- [ ] Correction is minimal and targeted (not broad refactoring)
- [ ] Verification steps are actionable
- [ ] No complete code blocks generated

|
||||
Analyze implementation patterns and code structure.
|
||||
|
||||
## CORE CHECKLIST ⚡
|
||||
□ Analyze ALL files in CONTEXT (not just samples)
|
||||
□ Provide file:line references for every pattern identified
|
||||
□ Distinguish between good patterns and anti-patterns
|
||||
□ Apply RULES template requirements exactly as specified
|
||||
## Planning Required
|
||||
Before providing analysis, you MUST:
|
||||
1. Review all files in context (not just samples)
|
||||
2. Identify patterns with file:line references
|
||||
3. Distinguish good patterns from anti-patterns
|
||||
4. Apply template requirements
|
||||
|
||||
## Core Checklist
|
||||
- [ ] Analyze ALL files in CONTEXT
|
||||
- [ ] Provide file:line references for each pattern
|
||||
- [ ] Distinguish good patterns from anti-patterns
|
||||
- [ ] Apply RULES template requirements
|
||||
|
||||
## REQUIRED ANALYSIS
|
||||
1. Identify common code patterns and architectural decisions
|
||||
@@ -19,10 +26,12 @@ Analyze implementation patterns and code structure.
|
||||
- Clear recommendations for pattern improvements
|
||||
- Standards compliance assessment with priority levels
|
||||
|
||||
## VERIFICATION CHECKLIST ✓
|
||||
□ All CONTEXT files analyzed (not partial coverage)
|
||||
□ Every pattern backed by code reference (file:line)
|
||||
□ Anti-patterns clearly distinguished from good patterns
|
||||
□ Recommendations prioritized by impact
|
||||
## Verification Checklist
|
||||
Before finalizing output, verify:
|
||||
- [ ] All CONTEXT files analyzed
|
||||
- [ ] Every pattern has code reference (file:line)
|
||||
- [ ] Anti-patterns clearly distinguished
|
||||
- [ ] Recommendations prioritized by impact
|
||||
|
||||
Focus: Actionable insights with concrete implementation guidance.
|
||||
## Output Requirements
|
||||
Provide actionable insights with concrete implementation guidance.
|
||||
|
||||
@@ -0,0 +1,33 @@
Analyze technical documents, research papers, and specifications systematically.

## CORE CHECKLIST ⚡
□ Plan analysis approach before reading (document type, key questions, success criteria)
□ Provide section/page references for all claims and findings
□ Distinguish facts from interpretations explicitly
□ Use precise, direct language - avoid persuasive wording
□ Apply RULES template requirements exactly as specified

## REQUIRED ANALYSIS
1. Document assessment: type, structure, audience, quality indicators
2. Content extraction: concepts, specifications, implementation details, constraints
3. Critical evaluation: strengths, gaps, ambiguities, clarity issues
4. Self-critique: verify citations, completeness, actionable recommendations
5. Synthesis: key takeaways, integration points, follow-up questions

## OUTPUT REQUIREMENTS
- Structured analysis with mandatory section/page references
- Evidence-based findings with specific location citations
- Clear separation of facts vs. interpretations
- Actionable recommendations tied to document content
- Integration points with existing project patterns
- Identified gaps and ambiguities with impact assessment

## VERIFICATION CHECKLIST ✓
□ Pre-analysis plan documented (3-5 bullet points)
□ All claims backed by section/page references
□ Self-critique completed before final output
□ Language is precise and direct (no persuasive adjectives)
□ Recommendations are specific and actionable
□ Output length proportional to document size

Focus: Evidence-based insights extraction with pre-planning and self-critique for technical documents.
@@ -1,10 +1,17 @@
Create comprehensive tests for the codebase.

## CORE CHECKLIST ⚡
□ Analyze existing test coverage and identify gaps
□ Follow project testing frameworks and conventions
□ Include unit, integration, and end-to-end tests
□ Ensure tests are reliable and deterministic
## Planning Required
Before creating tests, you MUST:
1. Analyze existing test coverage and identify gaps
2. Study testing frameworks and conventions used
3. Plan test strategy covering unit, integration, and e2e
4. Design test data management approach

## Core Checklist
- [ ] Analyze coverage gaps
- [ ] Follow testing frameworks and conventions
- [ ] Include unit, integration, and e2e tests
- [ ] Ensure tests are reliable and deterministic

## IMPLEMENTATION PHASES

@@ -51,11 +58,13 @@ Create comprehensive tests for the codebase.
- Test coverage metrics and quality improvements
- File:line references for tested code

## VERIFICATION CHECKLIST ✓
□ Test coverage gaps identified and filled
□ All test types included (unit + integration + e2e)
□ Tests are reliable and deterministic (no flaky tests)
□ Test data properly managed (isolation + cleanup)
□ Testing conventions followed consistently
## Verification Checklist
Before finalizing, verify:
- [ ] Coverage gaps filled
- [ ] All test types included
- [ ] Tests are reliable (no flaky tests)
- [ ] Test data properly managed
- [ ] Conventions followed

Focus: High-quality, reliable test suite with comprehensive coverage.
## Focus
High-quality, reliable test suite with comprehensive coverage.

@@ -1,10 +1,17 @@
Implement a new feature following project conventions and best practices.

## CORE CHECKLIST ⚡
□ Study existing code patterns BEFORE implementing
□ Follow established project conventions and architecture
□ Include comprehensive tests (unit + integration)
□ Provide file:line references for all changes
## Planning Required
Before implementing, you MUST:
1. Study existing code patterns and conventions
2. Review project architecture and design principles
3. Plan implementation with error handling and tests
4. Document integration points and dependencies

## Core Checklist
- [ ] Study existing code patterns first
- [ ] Follow project conventions and architecture
- [ ] Include comprehensive tests
- [ ] Provide file:line references

## IMPLEMENTATION PHASES

@@ -39,11 +46,13 @@ Implement a new feature following project conventions and best practices.
- Documentation of new dependencies or configurations
- Test coverage summary

## VERIFICATION CHECKLIST ✓
□ Implementation follows existing patterns (no divergence)
□ Complete test coverage (unit + integration)
□ Documentation updated (code comments + external docs)
□ Integration verified (no breaking changes)
□ Security and performance validated
## Verification Checklist
Before finalizing, verify:
- [ ] Follows existing patterns
- [ ] Complete test coverage
- [ ] Documentation updated
- [ ] No breaking changes
- [ ] Security and performance validated

Focus: Production-ready implementation with comprehensive testing and documentation.
## Focus
Production-ready implementation with comprehensive testing and documentation.

@@ -1,10 +1,17 @@
Generate comprehensive module documentation focused on understanding and usage.
Generate module documentation focused on understanding and usage.

## CORE CHECKLIST ⚡
□ Explain WHAT the module does, WHY it exists, and HOW to use it
□ Do NOT duplicate API signatures from API.md; refer to it instead
□ Provide practical, real-world usage examples
□ Clearly define the module's boundaries and dependencies
## Planning Required
Before providing documentation, you MUST:
1. Understand what the module does and why it exists
2. Review existing documentation to avoid duplication
3. Prepare practical usage examples
4. Identify module boundaries and dependencies

## Core Checklist
- [ ] Explain WHAT, WHY, and HOW
- [ ] Reference API.md instead of duplicating signatures
- [ ] Include practical usage examples
- [ ] Define module boundaries and dependencies

## DOCUMENTATION STRUCTURE

@@ -31,10 +38,12 @@ Generate comprehensive module documentation focused on understanding and usage.
### 7. Common Issues
- List common problems and their solutions.

## VERIFICATION CHECKLIST ✓
□ The module's purpose, scope, and boundaries are clearly defined
□ Core concepts are explained for better understanding
□ Usage examples are practical and demonstrate real-world scenarios
□ All dependencies and configuration options are documented
## Verification Checklist
Before finalizing output, verify:
- [ ] Module purpose, scope, and boundaries are clear
- [ ] Core concepts are explained
- [ ] Usage examples are practical and realistic
- [ ] Dependencies and configuration are documented

Focus: Explaining the module's purpose and usage, not just its API.
## Focus
Explain module purpose and usage, not just API details.
@@ -1,51 +1,51 @@
# 软件架构规划模板
# AI Persona & Core Mission

You are a **Distinguished Senior Software Architect and Strategic Technical Planner**. Your primary function is to conduct a meticulous and insightful analysis of provided code, project context, and user requirements to devise an exceptionally clear, comprehensive, actionable, and forward-thinking modification plan. **Critically, you will *not* write or generate any code yourself; your entire output will be a detailed modification plan articulated in precise, professional Chinese.** You are an expert in anticipating dependencies, potential impacts, and ensuring the proposed plan is robust, maintainable, and scalable.

## Role & Output Requirements

## II. ROLE DEFINITION & CORE CAPABILITIES
1. **Role**: Distinguished Senior Software Architect and Strategic Technical Planner.
2. **Core Capabilities**:
   * **Deep Code Comprehension**: Ability to rapidly understand complex existing codebases (structure, patterns, dependencies, data flow, control flow).
   * **Requirements Analysis & Distillation**: Skill in dissecting user requirements, identifying core needs, and translating them into technical planning objectives.
   * **Software Design Principles**: Strong grasp of SOLID, DRY, KISS, design patterns, and architectural best practices.
   * **Impact Analysis & Risk Assessment**: Expertise in identifying potential side effects, inter-module dependencies, and risks associated with proposed changes.
   * **Strategic Planning**: Ability to formulate logical, step-by-step modification plans that are efficient and minimize disruption.
   * **Clear Technical Communication (Chinese)**: Excellence in conveying complex technical plans and considerations in clear, unambiguous Chinese for a developer audience.
   * **Visual Logic Representation**: Ability to sketch out intended logic flows using concise diagrammatic notations.
3. **Core Thinking Mode**:
   * **Systematic & Holistic**: Approach analysis and planning with a comprehensive view of the system.
   * **Critical & Forward-Thinking**: Evaluate requirements critically and plan for future maintainability and scalability.
   * **Problem-Solver**: Focus on devising effective solutions through planning.
   * **Chain-of-Thought (CoT) Driven**: Explicitly articulate your reasoning process, especially when making design choices within the plan.

**Role**: Software architect specializing in technical planning
**Output Format**: Modification plan in Chinese following the specified structure
**Constraints**: Do NOT write or generate code. Provide planning and strategy only.

## III. OBJECTIVES
1. **Thoroughly Understand Context**: Analyze user-provided code, modification requirements, and project background to gain a deep understanding of the existing system and the goals of the modification.
2. **Meticulous Code Analysis for Planning**: Identify all relevant code sections, their current logic, and how they interrelate, quoting relevant snippets for context.
3. **Devise Actionable Modification Plan**: Create a detailed, step-by-step plan outlining *what* changes are needed, *where* they should occur, *why* they are necessary, and the *intended logic* of the new/modified code.
4. **Illustrate Intended Logic**: For each significant logical change proposed, visually represent the *intended* new or modified control flow and data flow using a concise call flow diagram.
5. **Contextualize for Implementation**: Provide all necessary contextual information (variables, data structures, dependencies, potential side effects) to enable a developer to implement the plan accurately.
6. **Professional Chinese Output**: Produce a highly structured, professional planning document entirely in Chinese, adhering to the specified Markdown format.
7. **Show Your Work (CoT)**: Before presenting the plan, outline your analytical framework, key considerations, and how you approached the planning task.

## Core Capabilities
- Understand complex codebases (structure, patterns, dependencies, data flow)
- Analyze requirements and translate to technical objectives
- Apply software design principles (SOLID, DRY, KISS, design patterns)
- Assess impacts, dependencies, and risks
- Create step-by-step modification plans

## IV. INPUT SPECIFICATIONS
1. **Code Snippets/File Information**: User-provided source code, file names, paths, or descriptions of relevant code sections.
2. **Modification Requirements**: Specific instructions or goals for what needs to be changed or achieved.
3. **Project Context (Optional)**: Any background information about the project or system.

## Planning Process (Required)

**Before providing your final plan, you MUST:**
1. Analyze requirements and identify technical objectives
2. Explore existing code structure and patterns
3. Identify modification points and formulate strategy
4. Assess dependencies and risks
5. Present structured modification plan

## V. RESPONSE STRUCTURE & CONTENT (Strictly Adhere - Output in Chinese)

## Objectives
1. Understand context (code, requirements, project background)
2. Analyze relevant code sections and their relationships
3. Create step-by-step modification plan (what, where, why, how)
4. Illustrate intended logic using call flow diagrams
5. Provide implementation context (variables, dependencies, side effects)

Your response **MUST** be in Chinese and structured in Markdown as follows:

## Input
- Code snippets or file locations
- Modification requirements and goals
- Project context (if available)

## Output Structure (Required)

Output in Chinese using this Markdown structure:

---

### 0. 思考过程与规划策略 (Thinking Process & Planning Strategy)
* *(在此处,您必须结构化地展示您的分析框架和规划流程。)*
* **1. 需求解析 (Requirement Analysis):** 我首先将用户的原始需求进行拆解和澄清,确保完全理解其核心目标和边界条件。
* **2. 现有代码结构勘探 (Existing Code Exploration):** 基于提供的代码片段,我将分析其当前的结构、逻辑流和关键数据对象,以建立修改的基线。
* **3. 核心修改点识别与策略制定 (Identification of Core Modification Points & Strategy Formulation):** 我将识别出需要修改的关键代码位置,并为每个修改点制定高级别的技术策略(例如,是重构、新增还是调整)。
* **4. 依赖与风险评估 (Dependency & Risk Assessment):** 我会评估提议的修改可能带来的模块间依赖关系变化,以及潜在的风险(如性能下降、兼容性问题、边界情况处理不当等)。
* **5. 规划文档结构设计 (Plan Document Structuring):** 最后,我将依据上述分析,按照指定的格式组织并撰写这份详细的修改规划方案。

Present your planning process in these steps:
1. **需求解析**: Break down requirements and clarify core objectives
2. **代码结构勘探**: Analyze current code structure and logic flow
3. **核心修改点识别**: Identify modification points and formulate strategy
4. **依赖与风险评估**: Assess dependencies and risks
5. **规划文档组织**: Organize planning document

### **代码修改规划方案 (Code Modification Plan)**

@@ -93,25 +93,17 @@ Your response **MUST** be in Chinese and structured in Markdown as follows:
---
*(对每个需要修改的文件重复上述格式)*

## VI. STYLE & TONE (Chinese Output)
* **Professional & Authoritative**: Maintain a formal, expert tone befitting a Senior Architect.
* **Analytical & Insightful**: Demonstrate deep understanding and strategic thinking.
* **Precise & Unambiguous**: Use clear, exact technical Chinese terminology.
* **Structured & Actionable**: Ensure the plan is well-organized and provides clear guidance.

## Key Requirements
1. **Language**: All output in Chinese
2. **No Code Generation**: Do not write actual code. Provide descriptive modification plan only
3. **Focus**: Detail what and why. Use logic sketches to illustrate how
4. **Completeness**: State assumptions clearly when information is incomplete

## VII. KEY DIRECTIVES & CONSTRAINTS
1. **Language**: **All** descriptive parts of your plan **MUST** be in **Chinese**.
2. **No Code Generation**: **Strictly refrain** from writing, suggesting, or generating any actual code. Your output is *purely* a descriptive modification plan.
3. **Focus on What and Why, Illustrate How (Logic Sketch)**: Detail what needs to be done and why. The call flow sketch illustrates the *intended how* at a logical level, not implementation code.
4. **Completeness & Accuracy**: Ensure the plan is comprehensive. If information is insufficient, state assumptions clearly in the 思考过程 (Thinking Process) and 必要上下文 (Necessary Context).
5. **Professional Standard**: Your plan should meet the standards expected of a senior technical document, suitable for guiding development work.

## VIII. SELF-CORRECTION / REFLECTION
* Before finalizing your response, review it to ensure:
  * The 思考过程 (Thinking Process) clearly outlines your structured analytical approach.
  * All user requirements from 需求分析 have been addressed in the plan.
  * The modification plan is logical, actionable, and sufficiently detailed, with relevant original code snippets for context.
  * The 修改理由 (Reason for Modification) explicitly links back to the initial requirements.
  * All crucial context and risks are highlighted.
  * The entire output is in professional, clear Chinese and adheres to the specified Markdown structure.
  * You have strictly avoided generating any code.

## Self-Review Checklist

Before providing final output, verify:
- [ ] Thinking process outlines structured analytical approach
- [ ] All requirements addressed in the plan
- [ ] Plan is logical, actionable, and detailed
- [ ] Modification reasons link back to requirements
- [ ] Context and risks are highlighted
- [ ] No actual code generated
@@ -65,6 +65,7 @@ codex -C [dir] --full-auto exec "[prompt]" [--skip-git-repo-check -s danger-full
| Architecture Planning | Gemini → Qwen | analysis | `planning/01-plan-architecture-design.txt` |
| Code Pattern Analysis | Gemini → Qwen | analysis | `analysis/02-analyze-code-patterns.txt` |
| Architecture Review | Gemini → Qwen | analysis | `analysis/02-review-architecture.txt` |
| Document Analysis | Gemini → Qwen | analysis | `analysis/02-analyze-technical-document.txt` |
| Feature Implementation | Codex | auto | `development/02-implement-feature.txt` |
| Component Development | Codex | auto | `development/02-implement-component-ui.txt` |
| Test Generation | Codex | write | `development/02-generate-tests.txt` |
@@ -519,13 +520,14 @@ When no specific template matches your task requirements, use one of these unive
**Available Templates**:
```
prompts/
├── universal/                          # ← NEW: Universal fallback templates
├── universal/                          # ← Universal fallback templates
│   ├── 00-universal-rigorous-style.txt # Precision & standards-driven
│   └── 00-universal-creative-style.txt # Innovation & exploration-focused
├── analysis/
│   ├── 01-trace-code-execution.txt
│   ├── 01-diagnose-bug-root-cause.txt
│   ├── 02-analyze-code-patterns.txt
│   ├── 02-analyze-technical-document.txt
│   ├── 02-review-architecture.txt
│   ├── 02-review-code-quality.txt
│   ├── 03-analyze-performance.txt

@@ -556,6 +558,7 @@ prompts/
| Execution Tracing | Gemini (Qwen fallback) | `analysis/01-trace-code-execution.txt` |
| Bug Diagnosis | Gemini (Qwen fallback) | `analysis/01-diagnose-bug-root-cause.txt` |
| Code Pattern Analysis | Gemini (Qwen fallback) | `analysis/02-analyze-code-patterns.txt` |
| Document Analysis | Gemini (Qwen fallback) | `analysis/02-analyze-technical-document.txt` |
| Architecture Review | Gemini (Qwen fallback) | `analysis/02-review-architecture.txt` |
| Code Review | Gemini (Qwen fallback) | `analysis/02-review-code-quality.txt` |
| Performance Analysis | Gemini (Qwen fallback) | `analysis/03-analyze-performance.txt` |
@@ -166,7 +166,7 @@ CCW provides comprehensive documentation to help you get started and master adva
### 📖 **Getting Started**
- [**Getting Started Guide**](GETTING_STARTED.md) - 5-minute quick start tutorial
- [**Installation Guide**](INSTALL.md) - Detailed installation instructions ([中文](INSTALL_CN.md))
- [**Workflow Decision Guide**](WORKFLOW_DECISION_GUIDE.md) - 🌳 Interactive flowchart for choosing the right commands
- [**Workflow Decision Guide**](WORKFLOW_DECISION_GUIDE_EN.md) - 🌳 Interactive flowchart for choosing the right commands
- [**Examples**](EXAMPLES.md) - Real-world use cases and practical examples
- [**FAQ**](FAQ.md) - Frequently asked questions and troubleshooting
@@ -253,6 +253,300 @@ flowchart TD

---

### 7️⃣ **CLI Tools Collaboration Mode - Multi-Model Intelligent Coordination**

This project integrates three CLI tools, supporting flexible serial, parallel, and hybrid execution:

| Tool | Core Capabilities | Context Length | Use Cases |
|------|------------------|----------------|-----------|
| **Gemini** | Deep analysis, architecture design, planning | Ultra-long context | Code understanding, execution flow tracing, technical solution evaluation |
| **Qwen** | Code review, pattern recognition | Ultra-long context | Gemini alternative, multi-dimensional analysis |
| **Codex** | Precise code writing, bug location | Standard context | Feature implementation, test generation, code refactoring |

#### 📋 Three Execution Modes

**1. Serial Execution** - sequential dependencies

Use case: a later task depends on the result of the previous one

```bash
# Example: analyze, then implement
# Step 1: Gemini analyzes the architecture
Use gemini to analyze the authentication module's architecture and identify key components and data flow

# Step 2: Codex implements based on the analysis
Have codex implement the JWT authentication middleware based on the architecture analysis above
```

**Execution flow**:
```
Gemini analysis → architecture report → Codex reads the report → implements the code
```
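The serial handoff can be sketched as a small script. This is a minimal sketch, not CCW's actual mechanism: the gemini/codex invocations appear only as comments (stubbed with `echo`), and the `.chat/` report path is illustrative.

```bash
#!/bin/sh
# Sketch of a serial pipeline: persist step 1's analysis,
# then hand the saved report to step 2 as context.
set -e
mkdir -p .chat
report=".chat/analyze-serial.md"

# Step 1 (stubbed): gemini -p "Analyze the authentication module architecture"
echo "## Architecture analysis: JWT middleware sits between router and handlers" > "$report"

# Step 2 (stubbed): codex --full-auto exec "Implement JWT middleware per the report"
echo "Step 2 would receive: $(cat "$report")"
```

The only mechanism that matters here is the file handoff: step 2 cannot start until step 1's report exists on disk.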
---

**2. Parallel Execution** - run simultaneously

Use case: multiple independent tasks with no dependencies between them

```bash
# Example: multi-dimensional analysis
Use gemini to analyze the authentication module's security, focusing on JWT, password storage, and session management
Use qwen to analyze the authentication module's performance bottlenecks, identifying slow queries and optimization points
Have codex generate unit tests for the authentication module, covering all core functionality
```

**Execution flow**:
```
           ┌─ Gemini: security analysis ───┐
Parallel ──┼─ Qwen: performance analysis ──┼─→ aggregate results
           └─ Codex: test generation ──────┘
```
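The fan-out/fan-in pattern above can be sketched with plain shell job control. This is a hedged sketch: real runs would invoke gemini/qwen/codex, but here each branch is stubbed with `echo` so the script is self-contained, and the `.chat/` file names are illustrative.

```bash
#!/bin/sh
# Sketch of parallel fan-out: three independent analyses run
# concurrently, each writing its own report; `wait` joins them
# before the results are aggregated.
mkdir -p .chat
( echo "security findings"    > .chat/security.md ) &   # gemini -p "..."
( echo "performance findings" > .chat/perf.md )     &   # qwen -p "..."
( echo "generated unit tests" > .chat/tests.md )    &   # codex exec "..."
wait   # barrier: all three branches have finished here
cat .chat/security.md .chat/perf.md .chat/tests.md
```

Because the branches share no files, the order in which they finish does not matter; `wait` is the only synchronization point.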
---

**3. Hybrid Execution** - serial plus parallel

Use case: complex tasks that are partly parallel and partly sequential

```bash
# Example: full feature development
# Phase 1: parallel analysis (independent tasks)
Use gemini to analyze the existing authentication system's architectural patterns
Use qwen to evaluate the OAuth2 integration approach

# Phase 2: serial implementation (depends on Phase 1)
Have codex implement the OAuth2 authentication flow based on the analyses above

# Phase 3: parallel optimization (independent tasks)
Use gemini to review code quality and security
Have codex generate integration tests
```

**Execution flow**:
```
Phase 1: Gemini analysis ──┐
         Qwen evaluation ──┴─→ Phase 2: Codex implementation ──→ Phase 3: Gemini review ──┐
                                                                         Codex tests ─────┴─→ done
```
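The phase structure can be sketched the same way: a parallel fan-out, a barrier, then a serial step that consumes both results. Tool invocations are stubbed with `echo` and the paths are illustrative, not CCW internals.

```bash
#!/bin/sh
# Sketch of hybrid execution: Phase 1 fans out in parallel,
# Phase 2 runs serially once both Phase 1 reports exist.
set -e
mkdir -p .chat
( echo "architecture patterns" > .chat/arch.md )  &   # gemini
( echo "oauth2 feasibility"    > .chat/oauth.md ) &   # qwen
wait   # Phase 1 barrier

# Phase 2: the implementation step consumes both Phase 1 reports
cat .chat/arch.md .chat/oauth.md > .chat/impl-context.md
echo "Phase 2 context assembled from $(ls .chat/arch.md .chat/oauth.md | wc -l) reports"
```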
---

#### 🎯 Semantic Invocation vs Command Invocation

**Option 1: Natural-language semantic invocation** (recommended)

```bash
# The user describes the task naturally; Claude Code invokes the tool automatically
"Use gemini to analyze this module's dependencies"
→ Claude Code generates: cd src && gemini -p "analyze the dependencies"

"Have codex implement the user registration feature"
→ Claude Code generates: codex -C src/auth --full-auto exec "implement registration"
```

**Option 2: Direct command invocation**

```bash
# Invoke precisely via slash commands
/cli:chat --tool gemini "Explain this algorithm"
/cli:analyze --tool qwen "Analyze performance bottlenecks"
/cli:execute --tool codex "Optimize query performance"
```

---

#### 🔗 CLI Results as Context (Memory)

CLI tool results can be saved and used as context (memory) for subsequent operations, enabling an intelligent workflow:

**1. Result persistence**

```bash
# CLI execution results are saved automatically to the session directory
/cli:chat --tool gemini "Analyze the authentication module architecture"
→ Saved to: .workflow/active/WFS-xxx/.chat/chat-[timestamp].md

/cli:analyze --tool qwen "Evaluate performance bottlenecks"
→ Saved to: .workflow/active/WFS-xxx/.chat/analyze-[timestamp].md

/cli:execute --tool codex "Implement the feature"
→ Saved to: .workflow/active/WFS-xxx/.chat/execute-[timestamp].md
```
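A saved result becomes context simply by being read back. The sketch below shows one plausible way to pick the newest `analyze-*.md` report from a session's `.chat/` directory; the session name `WFS-demo` and the file contents are hypothetical, used only to make the script runnable.

```bash
#!/bin/sh
# Sketch: locate the most recent analysis report in a session's
# .chat/ directory so it can be quoted in the next CLI prompt.
set -e
dir=".workflow/active/WFS-demo/.chat"
mkdir -p "$dir"
echo "older analysis" > "$dir/analyze-100.md"
echo "newer analysis" > "$dir/analyze-200.md"

# Timestamped names sort lexically, so the last entry is the newest
latest=$(ls "$dir"/analyze-*.md | sort | tail -n 1)
cat "$latest"
```

This relies on the timestamp suffix sorting lexically; any scheme where newer names sort last would work the same way.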
**2. Results as planning input**

```bash
# Step 1: analyze the current state (generates memory)
Use gemini to deeply analyze the authentication system's architecture, security, and performance issues
→ Output: detailed analysis report (saved automatically)

# Step 2: plan based on the analysis
/workflow:plan "Refactor the authentication system based on the Gemini analysis report above"
→ The system automatically reads the analysis reports in .chat/ as context
→ Generates a precise implementation plan
```

**3. Results as implementation input**

```bash
# Step 1: parallel analysis (generates multiple memory entries)
Use gemini to analyze the existing code structure
Use qwen to evaluate the feasibility of the technical approach
→ Output: multiple analysis reports

# Step 2: implement based on all the analyses
Have codex synthesize the Gemini and Qwen analyses above and implement the optimal solution
→ Codex automatically reads the earlier analysis results
→ Generates code that conforms to the architecture design
```

**4. Cross-session references**

```bash
# Reference analysis results from a historical session
/cli:execute --tool codex "Referring to the architecture analysis in WFS-2024-001, implement the new payment module"
→ The system automatically loads the referenced session's context
→ Implements based on the historical analysis
```

**5. Memory update loop**

```bash
# Iterative optimization flow
Use gemini to analyze problems in the current implementation
→ Generates a problem report (memory)

Have codex optimize the code based on the problem report
→ Implements the improvements (updates memory)

Use qwen to verify the optimization results
→ Verification report (appended to memory)

# All results accumulate into a complete project memory
→ Supports subsequent decisions and implementation
```

**Memory flow example**:

```
┌──────────────────────────────────────────────────────────────┐
│ Phase 1: Analysis (generates memory)                         │
├──────────────────────────────────────────────────────────────┤
│ Gemini analysis → architecture report (.chat/analyze-001.md) │
│ Qwen evaluation → solution report (.chat/analyze-002.md)     │
└─────────────────────┬────────────────────────────────────────┘
                      │ fed in as memory
                      ↓
┌──────────────────────────────────────────────────────────────┐
│ Phase 2: Planning (uses memory)                              │
├──────────────────────────────────────────────────────────────┤
│ /workflow:plan → reads analysis reports → implementation     │
│ plan (.task/IMPL-*.json)                                     │
└─────────────────────┬────────────────────────────────────────┘
                      │ fed in as memory
                      ↓
┌──────────────────────────────────────────────────────────────┐
│ Phase 3: Implementation (uses memory)                        │
├──────────────────────────────────────────────────────────────┤
│ Codex implementation → reads plan + analyses → code          │
│ (.chat/execute-001.md)                                       │
└─────────────────────┬────────────────────────────────────────┘
                      │ fed in as memory
                      ↓
┌──────────────────────────────────────────────────────────────┐
│ Phase 4: Verification (uses memory)                          │
├──────────────────────────────────────────────────────────────┤
│ Gemini review → reads implemented code → quality report      │
│ (.chat/review-001.md)                                        │
└──────────────────────────────────────────────────────────────┘
                      │
                      ↓
        Complete project memory store,
        supporting all future decisions and implementation
```

**Best practices**:

1. **Maintain continuity**: run related tasks in the same session so memory is shared automatically
2. **Reference explicitly**: across sessions, cite historical analyses by name (e.g. "refer to the analysis in WFS-xxx")
3. **Update incrementally**: append every analysis and implementation to memory, forming a complete decision chain
4. **Curate regularly**: use `/memory:update-related` to consolidate CLI results into CLAUDE.md
5. **Quality first**: high-quality analysis memory markedly improves the quality of subsequent implementation

---

#### 🔄 Workflow Integration Examples

**Integrating with the Lite workflow**:

```bash
# 1. Planning phase: Gemini analysis
/workflow:lite-plan -e "Refactor the payment module"
→ In the three-dimensional confirmation, choose "CLI tool execution"

# 2. Execution phase: choose an execution mode
# Option A: serial execution
→ "Use gemini to analyze the payment flow" → "Have codex refactor the code"

# Option B: parallel analysis + serial implementation
→ "Use gemini to analyze the architecture" + "Use qwen to evaluate the approach"
→ "Have codex refactor based on the analysis results"
```

**Integrating with the Full workflow**:

```bash
# 1. Planning phase
/workflow:plan "Implement distributed caching"
/workflow:action-plan-verify

# 2. Analysis phase (parallel)
Use gemini to analyze the existing cache architecture
Use qwen to evaluate the Redis cluster approach

# 3. Implementation phase (serial)
/workflow:execute  # or use the CLI
Have codex implement the Redis cluster integration

# 4. Testing phase (parallel)
/workflow:test-gen WFS-cache
→ Internally uses gemini for analysis + codex for test generation

# 5. Review phase (serial)
Use gemini to review code quality
/workflow:review --type architecture
```

---

#### 💡 Best Practices

**When to use serial execution**:
- Implementation depends on a design
- Tests depend on the implemented code
- Optimization depends on performance analysis

**When to use parallel execution**:
- Multi-dimensional analysis (security + performance + architecture)
- Independent development of multiple modules
- Generating code and tests at the same time

**When to use hybrid execution**:
- Complex feature development (analyze → design → implement → test)
- Large-scale refactoring (assess → plan → execute → verify)
- Tech-stack migration (research → solution → implement → optimize)

**Tool selection guidelines**:
1. **Need to understand code** → Gemini (preferred) or Qwen
2. **Need to write code** → Codex
3. **Complex analysis** → Gemini + Qwen in parallel (complementary verification)
4. **Precise implementation** → Codex (based on Gemini analysis)
5. **Quick prototyping** → use Codex directly

---

## 🔄 Typical End-to-End Scenarios

### Scenario A: New feature development (you know how to build it)
713
WORKFLOW_DECISION_GUIDE_EN.md
Normal file
@@ -0,0 +1,713 @@
# 🌳 CCW Workflow Decision Guide

This guide helps you choose the right commands and workflows for the complete software development lifecycle.

---

## 📊 Full Lifecycle Command Selection Flowchart

```mermaid
flowchart TD
    Start([Start New Feature/Project]) --> Q1{Know what to build?}

    Q1 -->|No| Ideation[💡 Ideation Phase<br>Requirements Exploration]
    Q1 -->|Yes| Q2{Know how to build?}

    Ideation --> BrainIdea[/ /workflow:brainstorm:auto-parallel<br>Explore product direction and positioning /]
    BrainIdea --> Q2

    Q2 -->|No| Design[🏗️ Design Exploration<br>Architecture Solution Discovery]
    Q2 -->|Yes| Q3{Need UI design?}

    Design --> BrainDesign[/ /workflow:brainstorm:auto-parallel<br>Explore technical solutions and architecture /]
    BrainDesign --> Q3

    Q3 -->|Yes| UIDesign[🎨 UI Design Phase]
    Q3 -->|No| Q4{Task complexity?}

    UIDesign --> Q3a{Have reference design?}
    Q3a -->|Yes| UIImitate[/ /workflow:ui-design:imitate-auto<br>--input reference URL /]
    Q3a -->|No| UIExplore[/ /workflow:ui-design:explore-auto<br>--prompt design description /]

    UIImitate --> UISync[/ /workflow:ui-design:design-sync<br>Sync design system /]
    UIExplore --> UISync
    UISync --> Q4

    Q4 -->|Simple & Quick| LitePlan[⚡ Lightweight Planning<br>/workflow:lite-plan]
    Q4 -->|Complex & Complete| FullPlan[📋 Full Planning<br>/workflow:plan]

    LitePlan --> Q5{Need code exploration?}
    Q5 -->|Yes| LitePlanE[/ /workflow:lite-plan -e<br>task description /]
    Q5 -->|No| LitePlanNormal[/ /workflow:lite-plan<br>task description /]

    LitePlanE --> LiteConfirm[Three-Dimensional Confirmation:<br>1️⃣ Task Approval<br>2️⃣ Execution Method<br>3️⃣ Code Review]
    LitePlanNormal --> LiteConfirm

    LiteConfirm --> Q6{Choose execution method}
    Q6 -->|Agent| LiteAgent[/ /workflow:lite-execute<br>Using @code-developer /]
    Q6 -->|CLI Tools| LiteCLI[CLI Execution<br>Gemini/Qwen/Codex]
    Q6 -->|Plan Only| UserImpl[Manual User Implementation]

    FullPlan --> PlanVerify{Verify plan quality?}
    PlanVerify -->|Yes| Verify[/ /workflow:action-plan-verify /]
    PlanVerify -->|No| Execute
    Verify --> Q7{Verification passed?}
    Q7 -->|No| FixPlan[Fix plan issues]
    Q7 -->|Yes| Execute
    FixPlan --> Execute

    Execute[🚀 Execution Phase<br>/workflow:execute]
    LiteAgent --> TestDecision
    LiteCLI --> TestDecision
    UserImpl --> TestDecision
    Execute --> TestDecision

    TestDecision{Need testing?}
    TestDecision -->|TDD Mode| TDD[/ /workflow:tdd-plan<br>Test-Driven Development /]
    TestDecision -->|Post-Implementation Testing| TestGen[/ /workflow:test-gen<br>Generate tests /]
    TestDecision -->|Existing Tests| TestCycle[/ /workflow:test-cycle-execute<br>Test-fix cycle /]
    TestDecision -->|No| Review

    TDD --> TDDExecute[/ /workflow:execute<br>Red-Green-Refactor /]
    TDDExecute --> TDDVerify[/ /workflow:tdd-verify<br>Verify TDD compliance /]
    TDDVerify --> Review

    TestGen --> TestExecute[/ /workflow:execute<br>Execute test tasks /]
    TestExecute --> TestResult{Tests passed?}
    TestResult -->|No| TestCycle
    TestResult -->|Yes| Review

    TestCycle --> TestPass{Pass rate ≥95%?}
    TestPass -->|No, continue fixing| TestCycle
    TestPass -->|Yes| Review

    Review[📝 Review Phase]
    Review --> Q8{Need specialized review?}
    Q8 -->|Security| SecurityReview[/ /workflow:review<br>--type security /]
    Q8 -->|Architecture| ArchReview[/ /workflow:review<br>--type architecture /]
    Q8 -->|Quality| QualityReview[/ /workflow:review<br>--type quality /]
    Q8 -->|Comprehensive| GeneralReview[/ /workflow:review<br>Comprehensive review /]
    Q8 -->|No| Complete

    SecurityReview --> Complete
    ArchReview --> Complete
    QualityReview --> Complete
    GeneralReview --> Complete

    Complete[✅ Completion Phase<br>/workflow:session:complete]
    Complete --> End([Project Complete])

    style Start fill:#e1f5ff
    style End fill:#c8e6c9
    style BrainIdea fill:#fff9c4
    style BrainDesign fill:#fff9c4
    style UIImitate fill:#f8bbd0
    style UIExplore fill:#f8bbd0
    style LitePlan fill:#b3e5fc
    style FullPlan fill:#b3e5fc
    style Execute fill:#c5e1a5
    style TDD fill:#ffccbc
    style TestGen fill:#ffccbc
    style TestCycle fill:#ffccbc
    style Review fill:#d1c4e9
    style Complete fill:#c8e6c9
```
---

## 🎯 Decision Point Explanations

### 1️⃣ **Ideation Phase - "Know what to build?"**

| Situation | Command | Description |
|-----------|---------|-------------|
| ❌ Uncertain about product direction | `/workflow:brainstorm:auto-parallel "Explore XXX domain product opportunities"` | Multi-role analysis with Product Manager, UX Expert, etc. |
| ✅ Clear feature requirements | Skip to design phase | Already know what functionality to build |

**Examples**:
```bash
# Uncertain scenario: Want to build a collaboration tool, but unsure what exactly
/workflow:brainstorm:auto-parallel "Explore team collaboration tool positioning and core features" --count 5

# Certain scenario: Building a real-time document collaboration editor (requirements clear)
# Skip ideation, move to design phase
```

---

### 2️⃣ **Design Phase - "Know how to build?"**

| Situation | Command | Description |
|-----------|---------|-------------|
| ❌ Don't know technical approach | `/workflow:brainstorm:auto-parallel "Design XXX system architecture"` | System Architect, Security Expert analyze technical solutions |
| ✅ Clear implementation path | Skip to planning | Already know tech stack, architecture patterns |

**Examples**:
```bash
# Don't know how: Real-time collaboration conflict resolution? Which algorithm?
/workflow:brainstorm:auto-parallel "Design conflict resolution mechanism for real-time collaborative document editing" --count 4

# Know how: Using Operational Transformation + WebSocket + Redis
# Skip design exploration, go directly to planning
/workflow:plan "Implement real-time collaborative editing using OT algorithm, WebSocket communication, Redis storage"
```

---

### 3️⃣ **UI Design Phase - "Need UI design?"**

| Situation | Command | Description |
|-----------|---------|-------------|
| 🎨 Have reference design | `/workflow:ui-design:imitate-auto --input "URL"` | Copy from existing design |
| 🎨 Design from scratch | `/workflow:ui-design:explore-auto --prompt "description"` | Generate multiple design variants |
| ⏭️ Backend/No UI | Skip | Pure backend API, CLI tools, etc. |

**Examples**:
```bash
# Have reference: Imitate Google Docs collaboration interface
/workflow:ui-design:imitate-auto --input "https://docs.google.com"

# No reference: Design from scratch
/workflow:ui-design:explore-auto --prompt "Modern minimalist document collaboration editing interface" --style-variants 3

# Sync design to project
/workflow:ui-design:design-sync --session WFS-xxx --selected-prototypes "v1,v2"
```

---

### 4️⃣ **Planning Phase - Choose Workflow Type**

| Workflow | Use Case | Characteristics |
|----------|----------|-----------------|
| `/workflow:lite-plan` | Quick tasks, small features | In-memory planning, three-dimensional confirmation, fast execution |
| `/workflow:plan` | Complex projects, team collaboration | Persistent plans, quality gates, complete traceability |

**Lite-Plan Three-Dimensional Confirmation**:
1. **Task Approval**: Confirm / Modify / Cancel
2. **Execution Method**: Agent / Provide Plan / CLI Tools (Gemini/Qwen/Codex)
3. **Code Review**: No / Claude / Gemini / Qwen / Codex

**Examples**:
```bash
# Simple task
/workflow:lite-plan "Add user avatar upload feature"

# Need code exploration
/workflow:lite-plan -e "Refactor authentication module to OAuth2 standard"

# Complex project
/workflow:plan "Implement complete real-time collaborative editing system"
/workflow:action-plan-verify # Verify plan quality
/workflow:execute
```

---

### 5️⃣ **Testing Phase - Choose Testing Strategy**

| Strategy | Command | Use Case |
|----------|---------|----------|
| **TDD Mode** | `/workflow:tdd-plan` | Starting from scratch, test-driven development |
| **Post-Implementation Testing** | `/workflow:test-gen` | Code complete, add tests |
| **Test Fixing** | `/workflow:test-cycle-execute` | Existing tests, need to fix failures |

**Examples**:
```bash
# TDD: Write tests first, then implement
/workflow:tdd-plan "User authentication module"
/workflow:execute # Red-Green-Refactor cycle
/workflow:tdd-verify # Verify TDD compliance

# Post-implementation testing: Add tests after code complete
/workflow:test-gen WFS-user-auth-implementation
/workflow:execute

# Test fixing: Existing tests with high failure rate
/workflow:test-cycle-execute --max-iterations 5
# Auto-iterate fixes until pass rate ≥95%
```

---

### 6️⃣ **Review Phase - Choose Review Type**

| Type | Command | Focus |
|------|---------|-------|
| **Security Review** | `/workflow:review --type security` | SQL injection, XSS, authentication vulnerabilities |
| **Architecture Review** | `/workflow:review --type architecture` | Design patterns, coupling, scalability |
| **Quality Review** | `/workflow:review --type quality` | Code style, complexity, maintainability |
| **Comprehensive Review** | `/workflow:review` | All-around inspection |

**Examples**:
```bash
# Security-critical system
/workflow:review --type security

# After architecture refactoring
/workflow:review --type architecture

# Daily development
/workflow:review --type quality
```

---

### 7️⃣ **CLI Tools Collaboration Mode - Multi-Model Intelligent Coordination**

This project integrates three CLI tools, supporting flexible serial, parallel, and hybrid execution:

| Tool | Core Capabilities | Context Length | Use Cases |
|------|------------------|----------------|-----------|
| **Gemini** | Deep analysis, architecture design, planning | Ultra-long context | Code understanding, execution flow tracing, technical solution evaluation |
| **Qwen** | Code review, pattern recognition | Ultra-long context | Gemini alternative, multi-dimensional analysis |
| **Codex** | Precise code writing, bug location | Standard context | Feature implementation, test generation, code refactoring |

#### 📋 Three Execution Modes

**1. Serial Execution** - Sequential dependency

Use case: Subsequent tasks depend on previous results

```bash
# Example: Analyze then implement
# Step 1: Gemini analyzes architecture
Use gemini to analyze the authentication module's architecture design, identify key components and data flow

# Step 2: Codex implements based on analysis
Have codex implement JWT authentication middleware based on the above architecture analysis
```

**Execution flow**:
```
Gemini analysis → Output architecture report → Codex reads report → Implement code
```
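The serial flow above boils down to output-as-input chaining. A minimal sketch, with ordinary functions standing in for the Gemini and Codex calls:

```python
# Minimal serial pipeline: step 2 consumes step 1's output.
# Both functions are hypothetical stand-ins for the real CLI calls.

def gemini_analyze(module: str) -> str:
    # Stand-in for: gemini analyzing architecture, components, data flow
    return f"architecture report for {module}: key components and data flow"

def codex_implement(report: str) -> str:
    # Stand-in for: codex implementing against the analysis report
    return f"JWT middleware implemented based on: {report}"

report = gemini_analyze("authentication module")  # Step 1
code_summary = codex_implement(report)            # Step 2 depends on Step 1
print(code_summary)
```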

---

**2. Parallel Execution** - Concurrent processing

Use case: Multiple independent tasks with no dependencies

```bash
# Example: Multi-dimensional analysis
Use gemini to analyze authentication module security, focus on JWT, password storage, session management
Use qwen to analyze authentication module performance bottlenecks, identify slow queries and optimization points
Have codex generate unit tests for authentication module, covering all core features
```

**Execution flow**:
```
         ┌─ Gemini: Security analysis ─┐
Parallel ┼─ Qwen: Performance analysis ┼─→ Aggregate results
         └─ Codex: Test generation ────┘
```
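The fan-out/fan-in shape of this flow can be modeled with Python's `concurrent.futures`; the three task functions here are hypothetical stand-ins for the real tool invocations:

```python
# Fan-out/fan-in model of the parallel mode: independent tasks run
# concurrently, results are aggregated afterwards. The three task
# functions are hypothetical stand-ins for gemini/qwen/codex runs.
from concurrent.futures import ThreadPoolExecutor

def gemini_security_analysis() -> str:
    return "security: JWT, password storage, session management reviewed"

def qwen_performance_analysis() -> str:
    return "performance: slow queries and optimization points identified"

def codex_test_generation() -> str:
    return "tests: unit tests generated for core features"

tasks = [gemini_security_analysis, qwen_performance_analysis, codex_test_generation]

with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    futures = [pool.submit(task) for task in tasks]  # fan out
    results = [f.result() for f in futures]          # fan in, original order

report = "\n".join(results)
print(report)
```

Because the tasks are independent, aggregation order is fixed by the submit order, not by completion time.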

---

**3. Hybrid Execution** - Combined serial and parallel

Use case: Complex tasks with both parallel and serial phases

```bash
# Example: Complete feature development
# Phase 1: Parallel analysis (independent tasks)
Use gemini to analyze existing authentication system architecture patterns
Use qwen to evaluate OAuth2 integration technical solutions

# Phase 2: Serial implementation (depends on Phase 1)
Have codex implement OAuth2 authentication flow based on above analysis

# Phase 3: Parallel optimization (independent tasks)
Use gemini to review code quality and security
Have codex generate integration tests
```

**Execution flow**:
```
Phase 1 (parallel): Gemini analysis + Qwen evaluation
        ↓
Phase 2 (serial):   Codex implementation
        ↓
Phase 3 (parallel): Gemini review + Codex tests
        ↓
Complete
```
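A hybrid flow is essentially serial phases containing parallel tasks, with each phase reading the memory accumulated so far. A hedged sketch (phase contents are placeholders, and the in-phase parallelism is modeled sequentially for clarity):

```python
# Hypothetical phase runner for the hybrid mode: phases run in order,
# tasks within a phase are independent, and every task receives the
# memory accumulated by earlier phases.

def run_phases(phases):
    memory = []  # results carried forward between phases
    for phase in phases:
        outputs = [task(list(memory)) for task in phase]  # same-phase tasks see the same memory
        memory.extend(outputs)                            # append for the next phase
    return memory

phases = [
    # Phase 1: parallel analysis (independent tasks)
    [lambda m: "gemini: architecture patterns analyzed",
     lambda m: "qwen: OAuth2 solution evaluated"],
    # Phase 2: serial implementation (depends on Phase 1)
    [lambda m: f"codex: OAuth2 flow implemented from {len(m)} reports"],
    # Phase 3: parallel optimization
    [lambda m: "gemini: quality and security reviewed",
     lambda m: "codex: integration tests generated"],
]

for entry in run_phases(phases):
    print(entry)
```

The key property is that Phase 2 sees exactly the two Phase 1 reports, mirroring the "depends on Phase 1" comment in the example above.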

---

#### 🎯 Semantic Invocation vs Command Invocation

**Method 1: Natural Language Semantic Invocation** (Recommended)

```bash
# Users simply describe naturally; Claude Code auto-invokes tools
"Use gemini to analyze this module's dependencies"
→ Claude Code auto-generates: cd src && gemini -p "Analyze dependencies"

"Have codex implement user registration feature"
→ Claude Code auto-generates: codex -C src/auth --full-auto exec "Implement registration"
```

**Method 2: Direct Command Invocation**

```bash
# Precise invocation via slash commands
/cli:chat --tool gemini "Explain this algorithm"
/cli:analyze --tool qwen "Analyze performance bottlenecks"
/cli:execute --tool codex "Optimize query performance"
```

---

#### 🔗 CLI Results as Context (Memory)

CLI tool analysis results can be saved and used as context (memory) for subsequent operations, enabling intelligent workflows:

**1. Result Persistence**

```bash
# CLI execution results are automatically saved to the session directory
/cli:chat --tool gemini "Analyze authentication module architecture"
→ Saved to: .workflow/active/WFS-xxx/.chat/chat-[timestamp].md

/cli:analyze --tool qwen "Evaluate performance bottlenecks"
→ Saved to: .workflow/active/WFS-xxx/.chat/analyze-[timestamp].md

/cli:execute --tool codex "Implement feature"
→ Saved to: .workflow/active/WFS-xxx/.chat/execute-[timestamp].md
```
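The naming convention above can be modeled directly: one timestamped markdown file per result, collected later as context. This is an illustrative sketch of the convention, not the tool's internal code:

```python
# Illustrative model of the .chat/ persistence convention:
# <session>/.chat/<kind>-<timestamp>.md per result, loaded back as
# context for later steps. Paths and names mirror the convention
# described above, not the tool's actual implementation.
import tempfile
from datetime import datetime
from pathlib import Path

def save_result(session_dir: Path, kind: str, content: str) -> Path:
    chat_dir = session_dir / ".chat"
    chat_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S-%f")  # unique per result
    path = chat_dir / f"{kind}-{stamp}.md"
    path.write_text(content, encoding="utf-8")
    return path

def load_context(session_dir: Path) -> str:
    """Concatenate every saved report so a later step can read it as memory."""
    files = sorted((session_dir / ".chat").glob("*.md"))
    return "\n\n".join(p.read_text(encoding="utf-8") for p in files)

session = Path(tempfile.mkdtemp()) / "WFS-demo"
save_result(session, "analyze", "# Architecture analysis\n...")
save_result(session, "execute", "# Implementation notes\n...")
print(load_context(session))  # both reports, concatenated in filename order
```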

**2. Results as Planning Basis**

```bash
# Step 1: Analyze current state (generate memory)
Use gemini to deeply analyze authentication system architecture, security, and performance issues
→ Output: Detailed analysis report (auto-saved)

# Step 2: Plan based on analysis results
/workflow:plan "Refactor authentication system based on above Gemini analysis report"
→ System automatically reads analysis reports from .chat/ as context
→ Generates a precise implementation plan
```

**3. Results as Implementation Basis**

```bash
# Step 1: Parallel analysis (generate multiple memories)
Use gemini to analyze existing code structure
Use qwen to evaluate technical solution feasibility
→ Output: Multiple analysis reports

# Step 2: Implement based on all analysis results
Have codex synthesize above Gemini and Qwen analyses to implement optimal solution
→ Codex automatically reads prior analysis results
→ Generates code conforming to the architecture design
```

**4. Cross-Session References**

```bash
# Reference historical session analysis results
/cli:execute --tool codex "Refer to architecture analysis in WFS-2024-001, implement new payment module"
→ System automatically loads the specified session context
→ Implements based on historical analysis
```

**5. Memory Update Loop**

```bash
# Iterative optimization flow
Use gemini to analyze problems in current implementation
→ Generates problem report (memory)

Have codex optimize code based on problem report
→ Implements improvements (updates memory)

Use qwen to verify optimization effectiveness
→ Verification report (appended to memory)

# All results accumulate as complete project memory
→ Supports subsequent decisions and implementation
```

**Memory Flow Example**:

```
┌─────────────────────────────────────────────────────────────┐
│ Phase 1: Analysis Phase (Generate Memory)                   │
├─────────────────────────────────────────────────────────────┤
│ Gemini analysis → Architecture report (.chat/analyze-001.md)│
│ Qwen evaluation → Solution report (.chat/analyze-002.md)    │
└─────────────────────┬───────────────────────────────────────┘
                      │ As Memory Input
                      ↓
┌─────────────────────────────────────────────────────────────┐
│ Phase 2: Planning Phase (Use Memory)                        │
├─────────────────────────────────────────────────────────────┤
│ /workflow:plan → Read analysis reports → Generate plan      │
│ (.task/IMPL-*.json)                                         │
└─────────────────────┬───────────────────────────────────────┘
                      │ As Memory Input
                      ↓
┌─────────────────────────────────────────────────────────────┐
│ Phase 3: Implementation Phase (Use Memory)                  │
├─────────────────────────────────────────────────────────────┤
│ Codex implement → Read plan+analysis → Generate code        │
│ (.chat/execute-001.md)                                      │
└─────────────────────┬───────────────────────────────────────┘
                      │ As Memory Input
                      ↓
┌─────────────────────────────────────────────────────────────┐
│ Phase 4: Verification Phase (Use Memory)                    │
├─────────────────────────────────────────────────────────────┤
│ Gemini review → Read implementation code → Quality report   │
│ (.chat/review-001.md)                                       │
└─────────────────────┬───────────────────────────────────────┘
                      │
                      ↓
        Complete Project Memory Library
        (supports all future decisions and implementation)
```

**Best Practices**:

1. **Maintain Continuity**: Execute related tasks in the same session to automatically share memory
2. **Explicit References**: Explicitly reference historical analyses when crossing sessions (e.g., "Refer to WFS-xxx analysis")
3. **Incremental Updates**: Each analysis and implementation appends to memory, forming a complete decision chain
4. **Regular Organization**: Use `/memory:update-related` to consolidate CLI results into CLAUDE.md
5. **Quality First**: High-quality analysis memory significantly improves subsequent implementation quality

---

#### 🔄 Workflow Integration Examples

**Integration with Lite Workflow**:

```bash
# 1. Planning phase: Gemini analysis
/workflow:lite-plan -e "Refactor payment module"
→ Three-dimensional confirmation selects "CLI Tools execution"

# 2. Execution phase: Choose execution method
# Option A: Serial execution
→ "Use gemini to analyze payment flow" → "Have codex refactor code"

# Option B: Parallel analysis + serial implementation
→ "Use gemini to analyze architecture" + "Use qwen to evaluate solution"
→ "Have codex refactor based on analysis results"
```

**Integration with Full Workflow**:

```bash
# 1. Planning phase
/workflow:plan "Implement distributed cache"
/workflow:action-plan-verify

# 2. Analysis phase (parallel)
Use gemini to analyze existing cache architecture
Use qwen to evaluate Redis cluster solution

# 3. Implementation phase (serial)
/workflow:execute  # Or use CLI
Have codex implement Redis cluster integration

# 4. Testing phase (parallel)
/workflow:test-gen WFS-cache
→ Internally uses gemini analysis + codex test generation

# 5. Review phase (serial)
Use gemini to review code quality
/workflow:review --type architecture
```

---

#### 💡 Best Practices

**When to use serial**:
- Implementation depends on the design solution
- Testing depends on code implementation
- Optimization depends on performance analysis

**When to use parallel**:
- Multi-dimensional analysis (security + performance + architecture)
- Multi-module independent development
- Simultaneous code and test generation

**When to use hybrid**:
- Complex feature development (analysis → design → implementation → testing)
- Large-scale refactoring (evaluation → planning → execution → verification)
- Tech stack migration (research → solution → implementation → optimization)

**Tool selection guidelines**:
1. **Need to understand code** → Gemini (preferred) or Qwen
2. **Need to write code** → Codex
3. **Complex analysis** → Gemini + Qwen in parallel (complementary verification)
4. **Precise implementation** → Codex (based on Gemini analysis)
5. **Quick prototype** → Direct Codex usage
---

## 🔄 Complete Flow for Typical Scenarios

### Scenario A: New Feature Development (Know How to Build)

```bash
# 1. Planning
/workflow:plan "Add JWT authentication and permission management"

# 2. Verify plan
/workflow:action-plan-verify

# 3. Execute
/workflow:execute

# 4. Testing
/workflow:test-gen WFS-jwt-auth
/workflow:execute

# 5. Review
/workflow:review --type security

# 6. Complete
/workflow:session:complete
```

---

### Scenario B: New Feature Development (Don't Know How to Build)

```bash
# 1. Design exploration
/workflow:brainstorm:auto-parallel "Design distributed cache system architecture" --count 5

# 2. UI design (if needed)
/workflow:ui-design:explore-auto --prompt "Cache management dashboard interface"
/workflow:ui-design:design-sync --session WFS-xxx

# 3. Planning
/workflow:plan

# 4. Verification
/workflow:action-plan-verify

# 5. Execution
/workflow:execute

# 6. TDD testing
/workflow:tdd-plan "Cache system core modules"
/workflow:execute

# 7. Review
/workflow:review --type architecture
/workflow:review --type security

# 8. Complete
/workflow:session:complete
```

---

### Scenario C: Quick Feature Development (Lite Workflow)

```bash
# 1. Lightweight planning (may need code exploration)
/workflow:lite-plan -e "Optimize database query performance"

# 2. Three-dimensional confirmation
# - Confirm task
# - Choose Agent execution
# - Choose Gemini code review

# 3. Auto-execution (handled internally by /workflow:lite-execute)

# 4. Complete
```

---

### Scenario D: Bug Fixing

```bash
# 1. Diagnosis
/cli:mode:bug-diagnosis --tool gemini "User login fails with token expired error"

# 2. Quick fix
/workflow:lite-plan "Fix JWT token expiration validation logic"

# 3. Test fix
/workflow:test-cycle-execute

# 4. Complete
```

---

## 🎓 Quick Command Reference

### Choose by Knowledge Level

| Your Situation | Recommended Command |
|----------------|---------------------|
| 💭 Don't know what to build | `/workflow:brainstorm:auto-parallel "Explore product direction"` |
| ❓ Know what, don't know how | `/workflow:brainstorm:auto-parallel "Design technical solution"` |
| ✅ Know what and how | `/workflow:plan "Specific implementation description"` |
| ⚡ Simple, clear small task | `/workflow:lite-plan "Task description"` |
| 🐛 Bug fixing | `/cli:mode:bug-diagnosis` + `/workflow:lite-plan` |

### Choose by Project Phase

| Phase | Command |
|-------|---------|
| 📋 **Requirements Analysis** | `/workflow:brainstorm:auto-parallel` |
| 🏗️ **Architecture Design** | `/workflow:brainstorm:auto-parallel` |
| 🎨 **UI Design** | `/workflow:ui-design:explore-auto` / `imitate-auto` |
| 📝 **Implementation Planning** | `/workflow:plan` / `/workflow:lite-plan` |
| 🚀 **Coding Implementation** | `/workflow:execute` / `/workflow:lite-execute` |
| 🧪 **Testing** | `/workflow:tdd-plan` / `/workflow:test-gen` |
| 🔧 **Test Fixing** | `/workflow:test-cycle-execute` |
| 📖 **Code Review** | `/workflow:review` |
| ✅ **Project Completion** | `/workflow:session:complete` |

### Choose by Work Mode

| Mode | Workflow | Use Case |
|------|----------|----------|
| **🚀 Agile & Fast** | Lite Workflow | Personal dev, rapid iteration, prototype validation |
| **📋 Standard & Complete** | Full Workflow | Team collaboration, enterprise projects, long-term maintenance |
| **🧪 Quality-First** | TDD Workflow | Core modules, critical features, high reliability requirements |
| **🎨 Design-Driven** | UI-Design Workflow | Frontend projects, user interfaces, design systems |

---

## 💡 Expert Advice

### ✅ Best Practices

1. **Use brainstorming when uncertain**: Spending 10 minutes exploring solutions beats implementing blindly and rewriting
2. **Use the Full workflow for complex projects**: Persistent plans facilitate team collaboration and long-term maintenance
3. **Use the Lite workflow for small tasks**: Complete quickly, reduce overhead
4. **Use TDD for critical modules**: Test-driven development ensures quality
5. **Regularly update memory**: `/memory:update-related` keeps context accurate

### ❌ Common Pitfalls

1. **Blindly skipping brainstorming**: Not exploring unfamiliar technical domains leads to rework
2. **Overusing brainstorming**: Brainstorming even simple features wastes time
3. **Ignoring plan verification**: Skipping `/workflow:action-plan-verify` causes execution issues
4. **Ignoring testing**: Without generated tests, code quality cannot be guaranteed
5. **Not completing sessions**: Skipping `/workflow:session:complete` leaves session state in confusion

---

## 🔗 Related Documentation

- [Getting Started Guide](GETTING_STARTED.md) - Quick start tutorial
- [Command Reference](COMMAND_REFERENCE.md) - Complete command list
- [Architecture Overview](ARCHITECTURE.md) - System architecture explanation
- [Examples](EXAMPLES.md) - Real-world scenario examples
- [FAQ](FAQ.md) - Frequently asked questions

---

**Last Updated**: 2025-11-20
**Version**: 5.8.1