Compare commits

..

3 Commits

Author SHA1 Message Date
Claude
63ca455c64 docs: add visual before/after comparison for output structure
- Clear side-by-side comparison of current vs proposed structure
- Safety indicators (safe vs dangerous operations)
- Command output examples showing the improvements
- User experience improvements with concrete examples
- Performance impact analysis
- Timeline and decision framework
- Complements OUTPUT_DIRECTORY_REORGANIZATION.md with visual summary
2025-11-20 09:20:43 +00:00
Claude
9f1a9a7731 docs: comprehensive output directory reorganization recommendations
- Analyzed current structure across 30+ commands
- Identified semantic confusion: .chat/ mixes read-only and code-modifying operations
- Proposed v2.0 structure with clear separation:
  * analysis/ - Read-only code understanding
  * planning/ - Read-only architecture planning
  * executions/ - Code-modifying operations (clearly marked)
  * quality/ - Reviews and verifications
  * context/ - Planning context and brainstorm artifacts
  * history/ - Backups and snapshots
- Detailed 4-phase migration strategy (dual write → dual read → deprecation → full migration)
- Command output mapping for all affected commands
- Risk assessment and rollback strategies
- Implementation checklist with timeline
2025-11-20 09:19:20 +00:00
Claude
38f8175780 docs: comprehensive command ambiguity analysis
- Identified 5 major ambiguity clusters in 74 commands
- Critical issues: Planning command overload (5 variants)
- Critical issues: Execution command confusion (5 variants)
- High priority: Tool selection and enhancement flag inconsistency
- Added decision trees for command selection
- Provided recommendations for immediate, short-term, and long-term improvements
2025-11-20 09:06:21 +00:00
7 changed files with 1464 additions and 1039 deletions

View File

@@ -89,7 +89,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null | xargs cat 2>/dev/null; fi)
```
-**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/doc-planning-data.json` with structure:
+**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/phase2-analysis.json` with structure:
```json
{
@@ -118,7 +118,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
**Then** use **Edit tool** to update `workflow-session.json` adding analysis field.
-**Output**: Single `doc-planning-data.json` with all analysis data (no temp files or Python scripts).
+**Output**: Single `phase2-analysis.json` with all analysis data (no temp files or Python scripts).
**Auto-skipped**: Tests (`**/test/**`, `**/*.test.*`), Build (`**/node_modules/**`, `**/dist/**`), Config (root-level files), Vendor directories.
@@ -127,8 +127,8 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
**Commands**:
```bash
-# Count existing docs from doc-planning-data.json
-bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq '.existing_docs.file_list | length')
+# Count existing docs from phase2-analysis.json
+bash(cat .workflow/active/WFS-docs-{timestamp}/.process/phase2-analysis.json | jq '.existing_docs.file_list | length')
```
**Data Processing**: Use count result, then use **Edit tool** to update `workflow-session.json`:
@@ -182,8 +182,8 @@ Large Projects (single dir >10 docs):
**Commands**:
```bash
-# 1. Get top-level directories from doc-planning-data.json
-bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq -r '.top_level_dirs[]')
+# 1. Get top-level directories from phase2-analysis.json
+bash(cat .workflow/active/WFS-docs-{timestamp}/.process/phase2-analysis.json | jq -r '.top_level_dirs[]')
# 2. Get mode from workflow-session.json
bash(cat .workflow/active/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')
@@ -201,7 +201,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
- If total ≤10 docs: create group
- If total >10 docs: split to 1 dir/group or subdivide
- If single dir >10 docs: split by subdirectories
-3. Use **Edit tool** to update `doc-planning-data.json` adding groups field:
+3. Use **Edit tool** to update `phase2-analysis.json` adding groups field:
```json
"groups": {
"count": 3,
@@ -215,7 +215,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
**Task ID Calculation**:
```bash
-group_count=$(jq '.groups.count' .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json)
+group_count=$(jq '.groups.count' .workflow/active/WFS-docs-{timestamp}/.process/phase2-analysis.json)
readme_id=$((group_count + 1)) # Next ID after groups
arch_id=$((group_count + 2))
api_id=$((group_count + 3))
@@ -237,7 +237,7 @@ api_id=$((group_count + 3))
**Generation Process**:
1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
-2. Read group assignments from doc-planning-data.json
+2. Read group assignments from phase2-analysis.json
3. Generate Level 1 tasks (IMPL-001 to IMPL-N, one per group)
4. Generate Level 2+ tasks if mode=full (README, ARCHITECTURE, HTTP API)
@@ -262,14 +262,14 @@ api_id=$((group_count + 3))
},
"context": {
"requirements": [
-"Process directories from group ${group_number} in doc-planning-data.json",
+"Process directories from group ${group_number} in phase2-analysis.json",
"Generate docs to .workflow/docs/${project_name}/ (mirrored structure)",
"Code folders: API.md + README.md; Navigation folders: README.md only",
"Use pre-analyzed data from Phase 2 (no redundant analysis)"
],
"focus_paths": ["${group_dirs_from_json}"],
"precomputed_data": {
-"phase2_analysis": "${session_dir}/.process/doc-planning-data.json"
+"phase2_analysis": "${session_dir}/.process/phase2-analysis.json"
}
},
"flow_control": {
@@ -278,8 +278,8 @@ api_id=$((group_count + 3))
"step": "load_precomputed_data",
"action": "Load Phase 2 analysis and extract group directories",
"commands": [
-"bash(cat ${session_dir}/.process/doc-planning-data.json)",
-"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/doc-planning-data.json)"
+"bash(cat ${session_dir}/.process/phase2-analysis.json)",
+"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/phase2-analysis.json)"
],
"output_to": "phase2_context",
"note": "Single JSON file contains all Phase 2 analysis results"
@@ -324,7 +324,7 @@ api_id=$((group_count + 3))
{
"step": 2,
"title": "Batch generate documentation via CLI",
-"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/doc-planning-data.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
+"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/phase2-analysis.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
"depends_on": [1],
"output": "generated_docs"
}
@@ -464,7 +464,7 @@ api_id=$((group_count + 3))
├── IMPL_PLAN.md
├── TODO_LIST.md
├── .process/
-│ └── doc-planning-data.json # All Phase 2 analysis data (replaces 7+ files)
+│ └── phase2-analysis.json # All Phase 2 analysis data (replaces 7+ files)
└── .task/
├── IMPL-001.json # Small: all modules | Large: group 1
├── IMPL-00N.json # (Large only: groups 2-N)
@@ -473,7 +473,7 @@ api_id=$((group_count + 3))
└── IMPL-{N+3}.json # HTTP API (optional)
```
-**doc-planning-data.json Structure**:
+**phase2-analysis.json Structure**:
```json
{
"metadata": {

View File

@@ -89,7 +89,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null | xargs cat 2>/dev/null; fi)
```
-**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/doc-planning-data.json` with structure:
+**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/phase2-analysis.json` with structure:
```json
{
@@ -118,7 +118,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
**Then** use **Edit tool** to update `workflow-session.json` adding analysis field.
-**Output**: Single `doc-planning-data.json` with all analysis data (no temp files or Python scripts).
+**Output**: Single `phase2-analysis.json` with all analysis data (no temp files or Python scripts).
**Auto-skipped**: Tests (`**/test/**`, `**/*.test.*`), Build (`**/node_modules/**`, `**/dist/**`), Config (root-level files), Vendor directories.
@@ -127,8 +127,8 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
**Commands**:
```bash
-# Count existing docs from doc-planning-data.json
-bash(cat .workflow/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq '.existing_docs.file_list | length')
+# Count existing docs from phase2-analysis.json
+bash(cat .workflow/WFS-docs-{timestamp}/.process/phase2-analysis.json | jq '.existing_docs.file_list | length')
```
**Data Processing**: Use count result, then use **Edit tool** to update `workflow-session.json`:
@@ -182,8 +182,8 @@ Large Projects (single dir >10 docs):
**Commands**:
```bash
-# 1. Get top-level directories from doc-planning-data.json
-bash(cat .workflow/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq -r '.top_level_dirs[]')
+# 1. Get top-level directories from phase2-analysis.json
+bash(cat .workflow/WFS-docs-{timestamp}/.process/phase2-analysis.json | jq -r '.top_level_dirs[]')
# 2. Get mode from workflow-session.json
bash(cat .workflow/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')
@@ -201,7 +201,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
- If total ≤10 docs: create group
- If total >10 docs: split to 1 dir/group or subdivide
- If single dir >10 docs: split by subdirectories
-3. Use **Edit tool** to update `doc-planning-data.json` adding groups field:
+3. Use **Edit tool** to update `phase2-analysis.json` adding groups field:
```json
"groups": {
"count": 3,
@@ -215,7 +215,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
**Task ID Calculation**:
```bash
-group_count=$(jq '.groups.count' .workflow/WFS-docs-{timestamp}/.process/doc-planning-data.json)
+group_count=$(jq '.groups.count' .workflow/WFS-docs-{timestamp}/.process/phase2-analysis.json)
readme_id=$((group_count + 1)) # Next ID after groups
arch_id=$((group_count + 2))
api_id=$((group_count + 3))
@@ -237,7 +237,7 @@ api_id=$((group_count + 3))
**Generation Process**:
1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
-2. Read group assignments from doc-planning-data.json
+2. Read group assignments from phase2-analysis.json
3. Generate Level 1 tasks (IMPL-001 to IMPL-N, one per group)
4. Generate Level 2+ tasks if mode=full (README, ARCHITECTURE, HTTP API)
@@ -262,14 +262,14 @@ api_id=$((group_count + 3))
},
"context": {
"requirements": [
-"Process directories from group ${group_number} in doc-planning-data.json",
+"Process directories from group ${group_number} in phase2-analysis.json",
"Generate docs to .workflow/docs/${project_name}/ (mirrored structure)",
"Code folders: API.md + README.md; Navigation folders: README.md only",
"Use pre-analyzed data from Phase 2 (no redundant analysis)"
],
"focus_paths": ["${group_dirs_from_json}"],
"precomputed_data": {
-"phase2_analysis": "${session_dir}/.process/doc-planning-data.json"
+"phase2_analysis": "${session_dir}/.process/phase2-analysis.json"
}
},
"flow_control": {
@@ -278,8 +278,8 @@ api_id=$((group_count + 3))
"step": "load_precomputed_data",
"action": "Load Phase 2 analysis and extract group directories",
"commands": [
-"bash(cat ${session_dir}/.process/doc-planning-data.json)",
-"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/doc-planning-data.json)"
+"bash(cat ${session_dir}/.process/phase2-analysis.json)",
+"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/phase2-analysis.json)"
],
"output_to": "phase2_context",
"note": "Single JSON file contains all Phase 2 analysis results"
@@ -324,7 +324,7 @@ api_id=$((group_count + 3))
{
"step": 2,
"title": "Batch generate documentation via CLI",
-"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/doc-planning-data.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
+"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/phase2-analysis.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
"depends_on": [1],
"output": "generated_docs"
}
@@ -464,7 +464,7 @@ api_id=$((group_count + 3))
├── IMPL_PLAN.md
├── TODO_LIST.md
├── .process/
-│ └── doc-planning-data.json # All Phase 2 analysis data (replaces 7+ files)
+│ └── phase2-analysis.json # All Phase 2 analysis data (replaces 7+ files)
└── .task/
├── IMPL-001.json # Small: all modules | Large: group 1
├── IMPL-00N.json # (Large only: groups 2-N)
@@ -473,7 +473,7 @@ api_id=$((group_count + 3))
└── IMPL-{N+3}.json # HTTP API (optional)
```
-**doc-planning-data.json Structure**:
+**phase2-analysis.json Structure**:
```json
{
"metadata": {

View File

@@ -0,0 +1,374 @@
# Command Ambiguity Analysis
## Executive Summary
Analysis of 74 commands reveals **5 major ambiguity clusters** that could cause user confusion. The primary issues involve overlapping functionality in planning, execution, and analysis commands, with inconsistent parameter usage and unclear decision criteria.
---
## Critical Ambiguities (HIGH Priority)
### 1. Planning Command Overload ⚠️ CRITICAL
**Problem**: 5 different "plan" commands with overlapping but distinct purposes
| Command | Purpose | Outputs | Mode |
|---------|---------|---------|------|
| `/workflow:plan` | 5-phase planning workflow | IMPL_PLAN.md + task JSONs | Autonomous |
| `/workflow:lite-plan` | Lightweight interactive planning | In-memory plan | Interactive |
| `/workflow:replan` | Modify existing plans | Updates existing artifacts | Interactive |
| `/cli:mode:plan` | Architecture planning | .chat/plan-*.md | Read-only |
| `/cli:discuss-plan` | Multi-round collaborative planning | .chat/discuss-plan-*.md | Multi-model discussion |
**Ambiguities**:
- **Intent confusion**: Users don't know which to use for "planning"
- **Output confusion**: Some create tasks, some don't
- **Workflow confusion**: Different levels of automation
- **Scope confusion**: Project-level vs architecture-level vs modification planning
**User Questions**:
- "I want to plan my project - which command do I use?"
- "What's the difference between `/workflow:plan` and `/workflow:lite-plan`?"
- "When should I use `/cli:mode:plan` vs `/workflow:plan`?"
**Recommendations**:
1. ✅ Create decision tree documentation
2. ✅ Rename commands to clarify scope:
- `/workflow:plan` → `/workflow:project-plan` (full workflow)
- `/workflow:lite-plan` → `/workflow:quick-plan` (fast planning)
- `/cli:mode:plan` → `/cli:architecture-plan` (read-only)
3. ✅ Add command hints in descriptions about when to use each
---
### 2. Execution Command Confusion ⚠️ CRITICAL
**Problem**: 5 different "execute" commands with different behaviors
| Command | Input | Modifies Code | Auto-Approval | Context |
|---------|-------|---------------|---------------|---------|
| `/workflow:execute` | Session | Via agents | No | Full workflow |
| `/workflow:lite-execute` | Plan/prompt/file | Via agent/codex | User choice | Lightweight |
| `/cli:execute` | Description/task-id | YES | YOLO | Direct implementation |
| `/cli:codex-execute` | Description | YES | YOLO | Multi-stage Codex |
| `/task:execute` | task-id | Via agent | No | Single task |
**Ambiguities**:
- **Safety confusion**: Some have YOLO auto-approval, others don't
- **Input confusion**: Different input formats
- **Scope confusion**: Workflow vs task vs direct execution
- **Tool confusion**: Agent vs CLI tool execution
**Critical Risk**:
- Users may accidentally use `/cli:execute` (YOLO) when they meant `/workflow:execute` (controlled)
- This could result in unwanted code modifications
**User Questions**:
- "I have a workflow session - do I use `/workflow:execute` or `/task:execute`?"
- "What's the difference between `/cli:execute` and `/workflow:lite-execute`?"
- "Which execute command is safest for production code?"
**Recommendations**:
1. 🚨 Add safety warnings to YOLO commands
2. ✅ Clear documentation on execution modes:
- **Workflow execution**: `/workflow:execute` (controlled, session-based)
- **Quick execution**: `/workflow:lite-execute` (flexible input)
- **Direct implementation**: `/cli:execute` (⚠️ YOLO auto-approval)
3. ✅ Consider renaming:
- `/cli:execute` → `/cli:implement-auto` (emphasizes auto-approval)
- `/cli:codex-execute` → `/cli:codex-multi-stage`
---
### 3. Analysis Command Overlap ⚠️ MEDIUM
**Problem**: Multiple analysis commands with unclear distinctions
| Command | Tool | Purpose | Output |
|---------|------|---------|--------|
| `/cli:analyze` | Gemini/Qwen/Codex | General codebase analysis | .chat/analyze-*.md |
| `/cli:mode:code-analysis` | Gemini/Qwen/Codex | Execution path tracing | .chat/code-analysis-*.md |
| `/cli:mode:bug-diagnosis` | Gemini/Qwen/Codex | Bug root cause analysis | .chat/bug-diagnosis-*.md |
| `/cli:chat` | Gemini/Qwen/Codex | Q&A interaction | .chat/chat-*.md |
**Ambiguities**:
- **Use case overlap**: When to use general analysis vs specialized modes
- **Template confusion**: Different templates but similar outputs
- **Mode naming**: The "mode" prefix adds an extra layer of confusion
**User Questions**:
- "Should I use `/cli:analyze` or `/cli:mode:code-analysis` to understand this code?"
- "What's special about the 'mode' commands?"
**Recommendations**:
1. ✅ Consolidate or clarify:
- Keep `/cli:analyze` for general use
- Document `/cli:mode:*` as specialized templates
2. ✅ Add use case examples in descriptions
3. ✅ Consider flattening:
- `/cli:mode:code-analysis` → `/cli:trace-execution`
- `/cli:mode:bug-diagnosis` → `/cli:diagnose-bug`
---
## Medium Priority Ambiguities
### 4. Task vs Workflow Command Overlap
**Problem**: Parallel command hierarchies
**Workflow Commands**:
- `/workflow:plan` - Create workflow with tasks
- `/workflow:execute` - Execute all tasks
- `/workflow:replan` - Modify workflow
**Task Commands**:
- `/task:create` - Create individual task
- `/task:execute` - Execute single task
- `/task:replan` - Modify task
**Ambiguities**:
- **Scope confusion**: When to use workflow vs task commands
- **Execution confusion**: `/task:execute` vs `/workflow:execute`
**Recommendations**:
1. ✅ Document relationship clearly:
- Workflow commands: Multi-task orchestration
- Task commands: Single-task operations
2. ✅ Add cross-references in documentation
---
### 5. Tool Selection Confusion (`--tool` flag)
**Problem**: Many commands accept `--tool codex|gemini|qwen` without clear criteria
**Commands with --tool**:
- `/cli:execute --tool`
- `/cli:analyze --tool`
- `/cli:mode:plan --tool`
- `/memory:update-full --tool`
- And more...
**Ambiguities**:
- **Selection criteria**: No clear guidance on when to use which tool
- **Default inconsistency**: Different defaults across commands
- **Capability confusion**: What each tool is best for
**Recommendations**:
1. ✅ Create tool selection guide:
- **Gemini**: Best for analysis, planning (default for most)
- **Qwen**: Fallback when Gemini unavailable
- **Codex**: Best for complex implementation, multi-stage execution
2. ✅ Add tool selection hints to command descriptions
3. ✅ Document tool capabilities clearly
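The precedence suggested above can be captured in a small helper. This is an illustrative sketch only: the `gemini`/`qwen` binary names and the fallback order are taken from the guide, but `select_tool` itself is hypothetical, not part of any command.

```shell
# select_tool [REQUESTED]: pick a CLI tool following the suggested
# precedence. An explicit request wins; otherwise fall back by
# availability (gemini, then qwen). Codex stays opt-in.
select_tool() {
  requested="$1"
  if [ -n "$requested" ]; then
    echo "$requested"
  elif command -v gemini >/dev/null 2>&1; then
    echo "gemini"
  elif command -v qwen >/dev/null 2>&1; then
    echo "qwen"
  else
    echo "none"
  fi
}

select_tool codex   # prints "codex": explicit choice wins
```

Calling `select_tool` with no argument yields `gemini`, `qwen`, or `none` depending on what is installed.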
---
### 6. Enhancement Flag Inconsistency
**Problem**: Different enhancement flags with different meanings
| Command | Flag | Meaning |
|---------|------|---------|
| `/cli:execute` | `--enhance` | Enhance prompt via `/enhance-prompt` |
| `/cli:analyze` | `--enhance` | Enhance prompt via `/enhance-prompt` |
| `/workflow:lite-plan` | `-e` or `--explore` | Force code exploration |
| `/memory:skill-memory` | `--regenerate` | Regenerate existing files |
**Ambiguities**:
- **Flag meaning**: `-e` means different things
- **Inconsistent naming**: `--enhance` vs `--explore` vs `--regenerate`
**Recommendations**:
1. ✅ Standardize flags:
- Use `--enhance` consistently for prompt enhancement
- Use `--explore` specifically for codebase exploration
- Use `--regenerate` for file regeneration
2. ✅ Avoid short flags (`-e`) that could be ambiguous
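One way to enforce the standardization above is a shared parsing convention where each long flag has exactly one meaning and the ambiguous `-e` is rejected outright. A minimal sketch (`parse_flags` is hypothetical; only the flag set comes from the proposal):

```shell
# parse_flags ARGS...: accept only the proposed standard flags,
# each with a single meaning, and reject the ambiguous -e.
parse_flags() {
  enhance=false explore=false regenerate=false
  for arg in "$@"; do
    case "$arg" in
      --enhance)    enhance=true ;;     # prompt enhancement only
      --explore)    explore=true ;;     # codebase exploration only
      --regenerate) regenerate=true ;;  # file regeneration only
      -e)
        echo "error: ambiguous short flag -e; use a long flag" >&2
        return 1
        ;;
    esac
  done
  echo "enhance=$enhance explore=$explore regenerate=$regenerate"
}
```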
---
## Low Priority Observations
### 7. Session Management Commands (Well-Designed ✅)
**Commands**:
- `/workflow:session:start`
- `/workflow:session:resume`
- `/workflow:session:complete`
- `/workflow:session:list`
**Analysis**: These are **well-designed** with clear, distinct purposes. No ambiguity found.
---
### 8. Memory Commands (Acceptable)
Memory commands follow consistent patterns but could benefit from better organization:
- `/memory:load`
- `/memory:docs`
- `/memory:skill-memory`
- `/memory:code-map-memory`
- `/memory:update-full`
- `/memory:update-related`
**Minor Issue**: Many memory commands, but purposes are relatively clear.
---
## Parameter Ambiguity Analysis
### Common Parameter Patterns
| Parameter | Commands Using It | Ambiguity Level |
|-----------|-------------------|-----------------|
| `--tool` | 10+ commands | HIGH - Inconsistent defaults |
| `--enhance` | 5+ commands | MEDIUM - Similar but not identical |
| `--session` | 8+ commands | LOW - Consistent meaning |
| `--cli-execute` | 3+ commands | LOW - Clear meaning |
| `-e` / `--explore` | 2+ commands | HIGH - Different meanings |
---
## Output Ambiguity Analysis
### Output Location Confusion
Multiple commands output to similar locations:
**`.chat/` outputs** (read-only analysis):
- `/cli:analyze` → `.chat/analyze-*.md`
- `/cli:mode:plan` → `.chat/plan-*.md`
- `/cli:discuss-plan` → `.chat/discuss-plan-*.md`
- `/cli:execute` → `.chat/execute-*.md` (❌ Misleading - actually modifies code!)
**Ambiguity**:
- Users might think all `.chat/` outputs are read-only
- `/cli:execute` outputs to `.chat/` but modifies code (YOLO)
**Recommendation**:
- ✅ Separate execution logs from analysis logs
- ✅ Use different directory for code-modifying operations
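As a stopgap before any reorganization, an audit can flag which `.chat/` files came from code-modifying runs. A minimal sketch assuming the filename prefixes listed above (`classify_chat` itself is hypothetical):

```shell
# classify_chat DIR: label each markdown file in a .chat-style
# directory by whether the command that produced it modifies code.
# Assumes the prefix convention above: execute-* means code changes.
classify_chat() {
  dir="$1"
  for f in "$dir"/*.md; do
    [ -e "$f" ] || continue
    case "$(basename "$f")" in
      execute-*) echo "DANGEROUS  $f" ;;
      *)         echo "read-only  $f" ;;
    esac
  done
}

# demo on a throwaway directory
demo=$(mktemp -d)
touch "$demo/analyze-auth.md" "$demo/execute-auth.md"
classify_chat "$demo"   # analyze-* is read-only, execute-* is DANGEROUS
rm -rf "$demo"
```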
---
## Decision Tree Recommendations
### When to Use Planning Commands
```
START: I need to plan something
├─ Is this a new full project workflow?
│ └─ YES → /workflow:plan (5-phase, creates tasks)
├─ Do I need quick planning without full workflow?
│ └─ YES → /workflow:lite-plan (fast, interactive)
├─ Do I need architecture-level planning only?
│ └─ YES → /cli:mode:plan (read-only, no tasks)
├─ Do I need multi-perspective discussion?
│ └─ YES → /cli:discuss-plan (Gemini + Codex + Claude)
└─ Am I modifying an existing plan?
└─ YES → /workflow:replan (modify artifacts)
```
### When to Use Execution Commands
```
START: I need to execute/implement something
├─ Do I have an active workflow session with tasks?
│ └─ YES → /workflow:execute (execute all tasks)
├─ Do I have a single task ID to execute?
│ └─ YES → /task:execute IMPL-N (single task)
├─ Do I have a plan or description to execute quickly?
│ └─ YES → /workflow:lite-execute (flexible input)
├─ Do I want direct, autonomous implementation (⚠️ YOLO)?
│ ├─ Single-stage → /cli:execute (auto-approval)
│ └─ Multi-stage → /cli:codex-execute (complex tasks)
└─ ⚠️ WARNING: CLI execute commands modify code without confirmation
```
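The ⚠️ warning above could be enforced by a thin confirmation gate in front of the auto-approval commands. The sketch below is hypothetical: the command names come from this analysis, but slash commands are not shell commands, so dispatch is stubbed with an `echo`.

```shell
# confirm_yolo CMD: demand an explicit "yes" before dispatching a
# command known to auto-approve code changes; other commands pass
# straight through. Real dispatch is stubbed with an echo.
confirm_yolo() {
  cmd="$1"
  case "$cmd" in
    /cli:execute|/cli:codex-execute)
      printf 'WARNING: %s modifies code without further approval. Proceed? [yes/NO] ' "$cmd"
      read -r answer
      if [ "$answer" != "yes" ]; then
        echo "aborted"
        return 1
      fi
      ;;
  esac
  echo "running $cmd"
}
```

Anything other than a literal `yes` aborts, so a stray Enter keeps the safe default.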
### When to Use Analysis Commands
```
START: I need to analyze code
├─ General codebase understanding?
│ └─ /cli:analyze (broad analysis)
├─ Specific execution path tracing?
│ └─ /cli:mode:code-analysis (detailed flow)
├─ Bug diagnosis?
│ └─ /cli:mode:bug-diagnosis (root cause)
└─ Quick Q&A?
└─ /cli:chat (interactive)
```
---
## Summary of Findings
### Ambiguity Count by Severity
| Severity | Count | Commands Affected |
|----------|-------|-------------------|
| 🚨 CRITICAL | 2 | Planning (5 cmds), Execution (5 cmds) |
| ⚠️ HIGH | 2 | Tool selection, Enhancement flags |
| MEDIUM | 3 | Analysis, Task/Workflow overlap, Output locations |
| ✅ LOW | Multiple | Most other commands acceptable |
### Key Recommendations Priority
1. **🚨 URGENT**: Add safety warnings to YOLO execution commands
2. **🚨 URGENT**: Create decision trees for planning and execution commands
3. **⚠️ HIGH**: Standardize tool selection criteria documentation
4. **⚠️ HIGH**: Clarify enhancement flag meanings
5. **MEDIUM**: Reorganize output directories by operation type
6. **MEDIUM**: Consider renaming most ambiguous commands
---
## Recommended Actions
### Immediate (Week 1)
1. ✅ Add decision trees to documentation
2. ✅ Add ⚠️ WARNING labels to YOLO commands
3. ✅ Create "Which command should I use?" guide
### Short-term (Month 1)
1. ✅ Standardize flag meanings across commands
2. ✅ Add tool selection guide
3. ✅ Clarify command descriptions
### Long-term (Future)
1. 🤔 Consider command consolidation or renaming
2. 🤔 Reorganize output directory structure
3. 🤔 Add interactive command selector tool
---
## Conclusion
The command system is **powerful but complex**. The main ambiguities stem from:
- Multiple commands with similar names serving different purposes
- Inconsistent parameter usage
- Unclear decision criteria for command selection
**Overall Assessment**: The codebase has a well-structured command system, but would benefit significantly from:
1. Better documentation (decision trees, use case examples)
2. Clearer naming conventions
3. Consistent parameter patterns
4. Safety warnings for destructive operations
**Risk Level**: MEDIUM - Experienced users can navigate, but new users will struggle. The YOLO execution commands pose the highest risk of accidental misuse.

View File

@@ -0,0 +1,654 @@
# Output Directory Reorganization Recommendations
## Executive Summary
Current output directory structure mixes different operation types (read-only analysis, code modifications, planning artifacts) in the same directories, leading to confusion and poor organization. This document proposes a **semantic directory structure** that separates outputs by purpose and operation type.
**Impact**: Affects 30+ commands, requires phased migration
**Priority**: MEDIUM (improves clarity, not critical functionality)
**Effort**: 2-4 weeks for full implementation
---
## Current Structure Analysis
### Active Session Structure
```
.workflow/active/WFS-{session-id}/
├── workflow-session.json # Session metadata
├── IMPL_PLAN.md # Planning document
├── TODO_LIST.md # Progress tracking
├── .chat/ # ⚠️ MIXED PURPOSE
│ ├── analyze-*.md # Read-only analysis
│ ├── plan-*.md # Read-only planning
│ ├── discuss-plan-*.md # Read-only discussion
│ ├── execute-*.md # ⚠️ Code-modifying execution
│ └── chat-*.md # Q&A interactions
├── .summaries/ # Task completion summaries
│ ├── IMPL-*-summary.md
│ └── TEST-FIX-*-summary.md
├── .task/ # Task definitions
│ ├── IMPL-001.json
│ └── IMPL-001.1.json
└── .process/ # ⚠️ MIXED PURPOSE
├── context-package.json # Planning context
├── test-context-package.json # Test context
├── phase2-analysis.json # Temporary analysis
├── CONFLICT_RESOLUTION.md # Planning artifact
├── ACTION_PLAN_VERIFICATION.md # Verification report
└── backup/ # Backup storage
└── replan-{timestamp}/
```
### Scratchpad Structure (No Session)
```
.workflow/.scratchpad/
├── analyze-*.md
├── execute-*.md
├── chat-*.md
└── plan-*.md
```
---
## Problems Identified
### 1. **Semantic Confusion** 🚨 CRITICAL
**Problem**: `.chat/` directory contains both:
- ✅ Read-only operations (analyze, chat, plan)
- ⚠️ Code-modifying operations (execute)
**Impact**: Users assume `.chat/` is safe (read-only), but some files represent dangerous operations
**Example**:
```bash
# These both output to .chat/ but have VERY different impacts:
/cli:analyze "review auth code" # Read-only → .chat/analyze-*.md
/cli:execute "implement auth feature" # ⚠️ MODIFIES CODE → .chat/execute-*.md
```
### 2. **Purpose Overload**
**Problem**: `.process/` used for multiple unrelated purposes:
- Planning artifacts (context-package.json)
- Temporary analysis (phase2-analysis.json)
- Verification reports (ACTION_PLAN_VERIFICATION.md)
- Backup storage (backup/)
**Impact**: Difficult to understand what's in `.process/`
### 3. **Inconsistent Organization**
**Problem**: Different commands use different naming patterns:
- Some use timestamps: `analyze-{timestamp}.md`
- Some use topics: `plan-{topic}.md`
- Some use task IDs: `IMPL-001-summary.md`
**Impact**: Hard to find specific outputs
### 4. **No Operation Type Distinction**
**Problem**: Can't distinguish operation type from directory structure:
- Analysis outputs mixed with execution logs
- Planning discussions mixed with implementation records
- No clear audit trail
**Impact**: Poor traceability, difficult debugging
---
## Proposed New Structure
### Design Principles
1. **Semantic Organization**: Directories reflect operation type and safety level
2. **Clear Hierarchy**: Separate by purpose → type → chronology
3. **Safety Indicators**: Code-modifying operations clearly separated
4. **Consistent Naming**: Standard patterns across all commands
5. **Backward Compatible**: Old structure accessible during migration
---
## Recommended Structure v2.0
```
.workflow/active/WFS-{session-id}/
├── ## Core Artifacts (Root Level)
├── workflow-session.json
├── IMPL_PLAN.md
├── TODO_LIST.md
├── ## Task Definitions
├── tasks/ # (renamed from .task/)
│ ├── IMPL-001.json
│ └── IMPL-001.1.json
├── ## 🟢 READ-ONLY Operations (Safe)
├── analysis/ # (split from .chat/)
│ ├── code/
│ │ ├── 2024-01-15T10-30-auth-patterns.md
│ │ └── 2024-01-15T11-45-api-structure.md
│ ├── architecture/
│ │ └── 2024-01-14T09-00-caching-layer.md
│ └── bugs/
│ └── 2024-01-16T14-20-login-bug-diagnosis.md
├── planning/ # (split from .chat/)
│ ├── discussions/
│ │ └── 2024-01-13T15-00-auth-strategy-3rounds.md
│ ├── architecture/
│ │ └── 2024-01-13T16-30-database-design.md
│ └── revisions/
│ └── 2024-01-17T10-00-replan-add-2fa.md
├── interactions/ # (split from .chat/)
│ ├── 2024-01-15T10-00-question-about-jwt.md
│ └── 2024-01-15T14-30-how-to-test-auth.md
├── ## ⚠️ CODE-MODIFYING Operations (Dangerous)
├── executions/ # (split from .chat/)
│ ├── implementations/
│ │ ├── 2024-01-15T11-00-impl-jwt-auth.md
│ │ ├── 2024-01-15T12-30-impl-user-api.md
│ │ └── metadata.json # Execution metadata
│ ├── test-fixes/
│ │ └── 2024-01-16T09-00-fix-auth-tests.md
│ └── refactors/
│ └── 2024-01-16T15-00-refactor-middleware.md
├── ## Completion Records
├── summaries/ # (kept same)
│ ├── implementations/
│ │ ├── IMPL-001-jwt-authentication.md
│ │ └── IMPL-002-user-endpoints.md
│ ├── tests/
│ │ └── TEST-FIX-001-auth-validation.md
│ └── index.json # Quick lookup
├── ## Planning Context & Artifacts
├── context/ # (split from .process/)
│ ├── project/
│ │ ├── context-package.json
│ │ └── test-context-package.json
│ ├── brainstorm/
│ │ ├── guidance-specification.md
│ │ ├── synthesis-output.md
│ │ └── roles/
│ │ ├── api-designer-analysis.md
│ │ └── system-architect-analysis.md
│ └── conflicts/
│ └── 2024-01-14T10-00-resolution.md
├── ## Verification & Quality
├── quality/ # (split from .process/)
│ ├── verifications/
│ │ └── 2024-01-15T09-00-action-plan-verify.md
│ ├── reviews/
│ │ ├── 2024-01-17T11-00-security-review.md
│ │ └── 2024-01-17T12-00-architecture-review.md
│ └── tdd-compliance/
│ └── 2024-01-16T16-00-cycle-analysis.md
├── ## History & Backups
├── history/ # (renamed from .process/backup/)
│ ├── replans/
│ │ └── 2024-01-17T10-00-add-2fa/
│ │ ├── MANIFEST.md
│ │ ├── IMPL_PLAN.md
│ │ └── tasks/
│ └── snapshots/
│ └── 2024-01-15T00-00-milestone-1/
└── ## Temporary Working Data
└── temp/ # (for transient analysis)
└── phase2-analysis.json
```
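Assuming this layout is adopted, the whole skeleton can be created up front so individual commands never have to test-and-create directories. A minimal sketch (the script name and default session ID are placeholders):

```shell
#!/bin/bash
# scaffold-session.sh — create the empty v2.0 skeleton for one session (sketch).
session="${1:-WFS-example}"
root=".workflow/active/$session"

for dir in \
  tasks \
  analysis/code analysis/architecture analysis/bugs \
  planning/discussions planning/architecture planning/revisions \
  interactions \
  executions/implementations executions/test-fixes executions/refactors \
  summaries/implementations summaries/tests \
  context/project context/brainstorm/roles context/conflicts \
  quality/verifications quality/reviews quality/tdd-compliance \
  history/replans history/snapshots \
  temp
do
  mkdir -p "$root/$dir"
done

echo "✓ Scaffolded $root"
```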
### Scratchpad Structure v2.0
```
.workflow/.scratchpad/
├── analysis/
├── planning/
├── interactions/
└── executions/ # ⚠️ Code-modifying
```
---
## Directory Purpose Reference
| Directory | Purpose | Safety | Retention |
|-----------|---------|--------|-----------|
| `analysis/` | Code understanding, bug diagnosis | 🟢 Read-only | Keep indefinitely |
| `planning/` | Architecture plans, discussions | 🟢 Read-only | Keep indefinitely |
| `interactions/` | Q&A, chat sessions | 🟢 Read-only | Keep 30 days |
| `executions/` | Implementation logs | ⚠️ Modifies code | Keep indefinitely |
| `summaries/` | Task completion records | 🟢 Reference | Keep indefinitely |
| `context/` | Planning context, brainstorm | 🟢 Reference | Keep indefinitely |
| `quality/` | Reviews, verifications | 🟢 Reference | Keep indefinitely |
| `history/` | Backups, snapshots | 🟢 Archive | Keep indefinitely |
| `temp/` | Transient analysis data | 🟢 Temporary | Clean on completion |
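The retention column can be enforced mechanically. A minimal sketch, assuming the 30-day window for `interactions/` and the clean-on-completion rule for `temp/` from the table (`cleanup_session` is a hypothetical helper, not an existing command):

```shell
# cleanup_session: enforce the retention policy above (sketch).
# Usage: cleanup_session <session_dir>
cleanup_session() {
  session_dir="$1"
  # interactions/: keep 30 days
  find "$session_dir/interactions" -type f -mtime +30 -delete 2>/dev/null
  # temp/: clean on completion (remove contents, keep the directory)
  find "$session_dir/temp" -mindepth 1 -delete 2>/dev/null
}
```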
---
## Naming Convention Standards
### Timestamp-based Files
**Format**: `YYYY-MM-DDTHH-MM-{description}.md`
**Examples**:
- `2024-01-15T10-30-auth-patterns.md`
- `2024-01-15T11-45-jwt-implementation.md`
**Benefits**:
- Chronological sorting
- Unique identifiers
- Easy to find by date
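The convention is easy to generate programmatically; `ts_name` below is a hypothetical helper, not an existing command:

```shell
# ts_name: build a filename following YYYY-MM-DDTHH-MM-{description}.md
# e.g. ts_name "auth-patterns" might print 2024-01-15T10-30-auth-patterns.md
ts_name() {
  printf '%s-%s.md\n' "$(date +%Y-%m-%dT%H-%M)" "$1"
}
```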
### Task-based Files
**Format**: `{TASK-ID}-{description}.md`
**Examples**:
- `IMPL-001-jwt-authentication.md`
- `TEST-FIX-002-login-validation.md`
**Benefits**:
- Clear task association
- Easy to find by task ID
### Metadata Files
**Format**: `{type}.json` or `{type}-metadata.json`
**Examples**:
- `context-package.json`
- `execution-metadata.json`
- `index.json`
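`summaries/index.json` is described above as a quick lookup; one way to keep it trustworthy is to regenerate it from the files on disk. A sketch (`build_index` is a hypothetical helper):

```shell
# build_index: regenerate <dir>/index.json from the summary files on disk (sketch)
build_index() {
  dir="$1"
  {
    printf '{ "summaries": ['
    sep=''
    for f in "$dir"/*/*.md; do
      [ -e "$f" ] || continue
      printf '%s\n  "%s"' "$sep" "${f#"$dir/"}"
      sep=','
    done
    printf '\n] }\n'
  } > "$dir/index.json"
}
```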
---
## Command Output Mapping
### Analysis Commands → `analysis/`
| Command | Old Location | New Location |
|---------|-------------|--------------|
| `/cli:analyze` | `.chat/analyze-*.md` | `analysis/code/{timestamp}-{topic}.md` |
| `/cli:mode:code-analysis` | `.chat/code-analysis-*.md` | `analysis/code/{timestamp}-{topic}.md` |
| `/cli:mode:bug-diagnosis` | `.chat/bug-diagnosis-*.md` | `analysis/bugs/{timestamp}-{topic}.md` |
### Planning Commands → `planning/`
| Command | Old Location | New Location |
|---------|-------------|--------------|
| `/cli:mode:plan` | `.chat/plan-*.md` | `planning/architecture/{timestamp}-{topic}.md` |
| `/cli:discuss-plan` | `.chat/discuss-plan-*.md` | `planning/discussions/{timestamp}-{topic}.md` |
| `/workflow:replan` | (modifies artifacts) | `planning/revisions/{timestamp}-{reason}.md` |
### Execution Commands → `executions/`
| Command | Old Location | New Location |
|---------|-------------|--------------|
| `/cli:execute` | `.chat/execute-*.md` | `executions/implementations/{timestamp}-{description}.md` |
| `/cli:codex-execute` | `.chat/codex-*.md` | `executions/implementations/{timestamp}-{description}.md` |
| `/workflow:execute` | (multiple) | `executions/implementations/{timestamp}-{task-id}.md` |
| `/workflow:test-cycle-execute` | (various) | `executions/test-fixes/{timestamp}-cycle-{n}.md` |
### Quality Commands → `quality/`
| Command | Old Location | New Location |
|---------|-------------|--------------|
| `/workflow:action-plan-verify` | `.process/ACTION_PLAN_VERIFICATION.md` | `quality/verifications/{timestamp}-action-plan.md` |
| `/workflow:review` | (inline) | `quality/reviews/{timestamp}-{type}.md` |
| `/workflow:tdd-verify` | (inline) | `quality/tdd-compliance/{timestamp}-verify.md` |
### Context Commands → `context/`
| Data Type | Old Location | New Location |
|-----------|-------------|--------------|
| Context packages | `.process/context-package.json` | `context/project/context-package.json` |
| Brainstorm artifacts | `.process/` | `context/brainstorm/` |
| Conflict resolution | `.process/CONFLICT_RESOLUTION.md` | `context/conflicts/{timestamp}-resolution.md` |
---
## Migration Strategy
### Phase 1: Dual Write (Week 1-2)
**Goal**: Write to both old and new locations
**Implementation**:
```bash
# Example for /cli:analyze
old_path=".workflow/active/$session/.chat/analyze-$timestamp.md"
new_path=".workflow/active/$session/analysis/code/$timestamp-$topic.md"
# Write to both locations (Write = the command runtime's file tool, shown as pseudocode)
Write($old_path, content)
Write($new_path, content)
# Append a migration notice to the old copy
echo "⚠️ This file has moved to: $new_path" >> "$old_path"
```
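Outside the command runtime, the same dual write can be sketched in plain shell with `tee` (`dual_write` is a hypothetical helper; the paths are illustrative):

```shell
# dual_write: write stdin to both the legacy and the v2.0 location,
# then append a pointer notice to the legacy copy (sketch).
# Usage: some_command | dual_write <old_path> <new_path>
dual_write() {
  old_path="$1"; new_path="$2"
  mkdir -p "$(dirname "$old_path")" "$(dirname "$new_path")"
  # tee writes the stream into old_path; its stdout is redirected into new_path
  tee "$old_path" > "$new_path"
  printf '\n⚠️ This file has moved to: %s\n' "$new_path" >> "$old_path"
}
```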
**Changes**:
- Update all commands to write to new structure
- Keep writing to old structure for compatibility
- Add deprecation notices
**Commands to Update**: 30+ commands
### Phase 2: Dual Read (Week 3)
**Goal**: Read from new location, fallback to old
**Implementation**:
```bash
# Example read logic
if [ -f "$new_path" ]; then
  content=$(cat "$new_path")
elif [ -f "$old_path" ]; then
  content=$(cat "$old_path")
  # Migrate on read
  mkdir -p "$(dirname "$new_path")"
  cp "$old_path" "$new_path"
  echo "✓ Migrated: $old_path → $new_path"
fi
```
**Changes**:
- Update read logic in all commands
- Automatic migration on read
- Log migrations for verification
### Phase 3: Legacy Deprecation (Week 4)
**Goal**: Stop writing to old locations
**Implementation**:
```bash
# Stop dual write; write only to the new structure
new_path=".workflow/active/$session/analysis/code/$timestamp-$topic.md"
Write($new_path, content)   # Write = the command runtime's file tool (pseudocode)
# old_path is no longer written
```
**Changes**:
- Remove old write logic
- Keep read fallback for 1 release cycle
- Update documentation
### Phase 4: Full Migration (Future Release)
**Goal**: Remove old structure entirely
**Implementation**:
```bash
# One-time migration script
/workflow:migrate-outputs --session all --dry-run
/workflow:migrate-outputs --session all --execute
```
**Migration Script**:
```bash
#!/bin/bash
# migrate-outputs.sh — one-time migration for a single session directory
session_dir="$1"

# Ensure target directories exist
mkdir -p "$session_dir/analysis/code" \
         "$session_dir/executions/implementations" \
         "$session_dir/planning/architecture" \
         "$session_dir/interactions" \
         "$session_dir/context/project" \
         "$session_dir/history"

# Migrate .chat/ files by filename pattern (first match wins)
for file in "$session_dir/.chat"/*; do
  [ -e "$file" ] || continue
  case "$file" in
    *analyze*) mv "$file" "$session_dir/analysis/code/" ;;
    *execute*) mv "$file" "$session_dir/executions/implementations/" ;;
    *plan*)    mv "$file" "$session_dir/planning/architecture/" ;;
    *chat*)    mv "$file" "$session_dir/interactions/" ;;
  esac
done

# Migrate .process/ files
mv "$session_dir/.process/context-package.json" "$session_dir/context/project/" 2>/dev/null
mv "$session_dir/.process/backup" "$session_dir/history/" 2>/dev/null

# Remove old directories (only succeeds if they are empty)
rmdir "$session_dir/.chat" "$session_dir/.process" 2>/dev/null
echo "✓ Migration complete: $session_dir"
```
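Before removing the old directories for good, a quick count check catches anything the `case` patterns missed (`verify_migration` is a hypothetical helper):

```shell
# verify_migration: report whether any files remain in the legacy dirs (sketch)
verify_migration() {
  leftover=$(find "$1/.chat" "$1/.process" -type f 2>/dev/null | wc -l)
  if [ "$leftover" -eq 0 ]; then
    echo "✓ clean: no legacy files remain"
  else
    echo "✗ $leftover legacy file(s) still in .chat/ or .process/"
  fi
}
```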
---
## Implementation Checklist
### Week 1-2: Dual Write Setup
**Core Commands** (Priority 1):
- [ ] `/cli:analyze``analysis/code/`
- [ ] `/cli:execute``executions/implementations/`
- [ ] `/cli:mode:plan``planning/architecture/`
- [ ] `/workflow:execute``executions/implementations/`
- [ ] `/workflow:action-plan-verify``quality/verifications/`
**Planning Commands** (Priority 2):
- [ ] `/cli:discuss-plan``planning/discussions/`
- [ ] `/workflow:replan``planning/revisions/`
- [ ] `/workflow:plan` → (updates `context/project/`)
**Context Commands** (Priority 3):
- [ ] `/workflow:tools:context-gather``context/project/`
- [ ] `/workflow:brainstorm:*``context/brainstorm/`
- [ ] `/workflow:tools:conflict-resolution``context/conflicts/`
### Week 3: Dual Read + Auto-Migration
**Read Logic Updates**:
- [ ] Update all Read() calls with fallback logic
- [ ] Add migration-on-read for all file types
- [ ] Log all automatic migrations
**Testing**:
- [ ] Test with existing sessions
- [ ] Test with new sessions
- [ ] Verify backward compatibility
### Week 4: Documentation + Deprecation
**Documentation Updates**:
- [ ] Update command documentation with new paths
- [ ] Add migration guide for users
- [ ] Document new directory structure
- [ ] Add "Directory Purpose Reference" to docs
**Deprecation Notices**:
- [ ] Add notices to old command outputs
- [ ] Update error messages with new paths
- [ ] Create migration FAQ
---
## Benefits Analysis
### Immediate Benefits
**1. Safety Clarity** 🟢
- Clear separation: Read-only vs Code-modifying operations
- Users can quickly identify dangerous operations
- Reduces accidental code modifications
**2. Better Organization** 📁
- Semantic structure reflects operation purpose
- Easy to find specific outputs
- Clear audit trail
**3. Improved Traceability** 🔍
- Execution logs separated by type
- Planning discussions organized chronologically
- Quality checks easily accessible
### Long-term Benefits
**4. Scalability** 📈
- Structure scales to 100+ sessions
- Easy to add new operation types
- Consistent organization patterns
**5. Automation Potential** 🤖
- Programmatic analysis of outputs
- Automated cleanup of old files
- Better CI/CD integration
**6. User Experience** 👥
- Intuitive directory structure
- Self-documenting organization
- Easier onboarding for new users
---
## Risk Assessment
### Migration Risks
| Risk | Severity | Mitigation |
|------|----------|------------|
| **Breaking Changes** | HIGH | Phased migration with dual write/read |
| **Data Loss** | MEDIUM | Automatic migration on read, keep backups |
| **User Confusion** | MEDIUM | Clear documentation, migration guide |
| **Command Failures** | LOW | Fallback to old locations during transition |
| **Performance Impact** | LOW | Dual write adds minimal overhead |
### Rollback Strategy
If migration causes issues:
**Phase 1 Rollback** (Dual Write):
- Stop writing to new locations
- Continue using old structure
- No data loss
**Phase 2 Rollback** (Dual Read):
- Disable migration-on-read
- Continue reading from old locations
- New files still in new structure (OK)
**Phase 3+ Rollback**:
- Run reverse migration script
- Copy new structure files back to old locations
- May require manual intervention
---
## Alternative Approaches Considered
### Alternative 1: Flat Structure with Prefixes
```
.workflow/active/WFS-{session}/
├── ANALYSIS_2024-01-15_auth-patterns.md
├── EXEC_2024-01-15_jwt-impl.md
└── PLAN_2024-01-14_architecture.md
```
**Rejected**: Too many files in one directory, poor organization
### Alternative 2: Single "logs/" Directory
```
.workflow/active/WFS-{session}/
└── logs/
├── 2024-01-15T10-30-analyze-auth.md
└── 2024-01-15T11-00-execute-jwt.md
```
**Rejected**: Doesn't solve semantic confusion
### Alternative 3: Minimal Change (Status Quo++)
```
.workflow/active/WFS-{session}/
├── .chat/ # Rename to .interactions/
├── .exec/ # NEW: Split executions out
├── .summaries/
└── .process/
```
**Partially Adopted**: Kept in reserve as a "lite" fallback in case the full migration proves too complex
---
## Recommended Timeline
### Immediate (This Sprint)
1. ✅ Document current structure
2. ✅ Create proposed structure v2.0
3. ✅ Get stakeholder approval
### Short-term (Next 2 Sprints - 4 weeks)
1. 📝 Implement Phase 1: Dual Write
2. 🔍 Implement Phase 2: Dual Read
3. 📢 Implement Phase 3: Deprecation
### Long-term (Future Release)
1. 🗑️ Implement Phase 4: Full Migration
2. 🧹 Remove old structure code
3. 📚 Update all documentation
---
## Success Metrics
### Quantitative
- ✅ 100% of commands updated to new structure
- ✅ 0 data loss during migration
- ✅ <5% increase in execution time (dual write overhead)
- ✅ 90% of sessions migrated within 1 month
### Qualitative
- ✅ User feedback: "Easier to find outputs"
- ✅ User feedback: "Clearer which operations are safe"
- ✅ Developer feedback: "Easier to maintain"
---
## Conclusion
The proposed directory reorganization addresses critical semantic confusion in the current structure by:
1. **Separating read-only from code-modifying operations** (safety)
2. **Organizing by purpose** (usability)
3. **Using consistent naming** (maintainability)
4. **Providing clear migration path** (feasibility)
**Recommendation**: Proceed with phased migration starting with dual-write implementation.
**Next Steps**:
1. Review and approve proposed structure
2. Identify pilot commands for Phase 1
3. Create detailed implementation tasks
4. Begin dual-write implementation
**Questions for Discussion**:
1. Should we use "lite" version (minimal changes) or full v2.0?
2. What's the acceptable timeline for full migration?
3. Are there any other directory purposes we should consider?
4. Should we add more automation (e.g., auto-cleanup old files)?

# Output Structure: Before vs After
## Quick Visual Comparison
### Current Structure (v1.0) - ⚠️ Problematic
```
.workflow/active/WFS-session/
├── .chat/ ⚠️ MIXED: Safe + Dangerous operations
│ ├── analyze-*.md ✅ Read-only
│ ├── plan-*.md ✅ Read-only
│ ├── chat-*.md ✅ Read-only
│ └── execute-*.md ⚠️ MODIFIES CODE!
├── .summaries/ ✅ OK
├── .task/ ✅ OK
└── .process/ ⚠️ MIXED: Multiple purposes
├── context-package.json (planning context)
├── phase2-analysis.json (temp data)
├── CONFLICT_RESOLUTION.md (planning artifact)
└── backup/ (history)
```
**Problems**:
- ❌ `.chat/` mixes safe (read-only) and dangerous (code-modifying) operations
- ❌ `.process/` serves too many purposes
- ❌ No clear organization by operation type
- ❌ Hard to find specific outputs
---
### Proposed Structure (v2.0) - ✅ Clear & Semantic
```
.workflow/active/WFS-session/
├── 🟢 SAFE: Read-only Operations
│ ├── analysis/ Split from .chat/
│ │ ├── code/ Code understanding
│ │ ├── architecture/ Architecture analysis
│ │ └── bugs/ Bug diagnosis
│ │
│ ├── planning/ Split from .chat/
│ │ ├── discussions/ Multi-round planning
│ │ ├── architecture/ Architecture plans
│ │ └── revisions/ Replan history
│ │
│ └── interactions/ Split from .chat/
│ └── *-chat.md Q&A sessions
├── ⚠️ DANGEROUS: Code-modifying Operations
│ └── executions/ Split from .chat/
│ ├── implementations/ Code implementations
│ ├── test-fixes/ Test fixes
│ └── refactors/ Refactoring
├── 📊 RECORDS: Completion & Quality
│ ├── summaries/ Keep same (task completions)
│ │
│ └── quality/ Split from .process/
│ ├── verifications/ Plan verifications
│ ├── reviews/ Code reviews
│ └── tdd-compliance/ TDD checks
├── 📦 CONTEXT: Planning Artifacts
│ └── context/ Split from .process/
│ ├── project/ Context packages
│ ├── brainstorm/ Brainstorm artifacts
│ └── conflicts/ Conflict resolutions
├── 📜 HISTORY: Backups & Archives
│ └── history/ Rename from .process/backup/
│ ├── replans/ Replan backups
│ └── snapshots/ Session snapshots
└── 📋 TASKS: Definitions
└── tasks/ Rename from .task/
```
**Benefits**:
- ✅ Clear separation: Safe vs Dangerous operations
- ✅ Semantic organization by purpose
- ✅ Easy to find outputs by type
- ✅ Self-documenting structure
---
## Key Changes Summary
### 1. Split `.chat/` by Safety Level
| Current | New | Safety |
|---------|-----|--------|
| `.chat/analyze-*.md` | `analysis/code/` | 🟢 Safe |
| `.chat/plan-*.md` | `planning/architecture/` | 🟢 Safe |
| `.chat/chat-*.md` | `interactions/` | 🟢 Safe |
| `.chat/execute-*.md` | `executions/implementations/` | ⚠️ Dangerous |
### 2. Split `.process/` by Purpose
| Current | New | Purpose |
|---------|-----|---------|
| `.process/context-package.json` | `context/project/` | Planning context |
| `.process/CONFLICT_RESOLUTION.md` | `context/conflicts/` | Planning artifact |
| `.process/ACTION_PLAN_VERIFICATION.md` | `quality/verifications/` | Quality check |
| `.process/backup/` | `history/replans/` | Backups |
| `.process/phase2-analysis.json` | `temp/` | Temporary data |
### 3. Rename for Clarity
| Current | New | Reason |
|---------|-----|--------|
| `.task/` | `tasks/` | Remove dot prefix (not hidden) |
| `.summaries/` | `summaries/` | Keep same (already clear) |
---
## Command Output Changes (Examples)
### Analysis Commands
```bash
# Current (v1.0)
/cli:analyze "review auth code"
→ .chat/analyze-2024-01-15.md ⚠️ Mixed with dangerous ops
# Proposed (v2.0)
/cli:analyze "review auth code"
→ analysis/code/2024-01-15T10-30-auth.md ✅ Clearly safe
```
### Execution Commands
```bash
# Current (v1.0)
/cli:execute "implement auth"
→ .chat/execute-2024-01-15.md ⚠️ Looks safe, but dangerous!
# Proposed (v2.0)
/cli:execute "implement auth"
→ executions/implementations/2024-01-15T11-00-auth.md ⚠️ Clearly dangerous
```
### Planning Commands
```bash
# Current (v1.0)
/cli:discuss-plan "design caching"
→ .chat/discuss-plan-2024-01-15.md ⚠️ Mixed with dangerous ops
# Proposed (v2.0)
/cli:discuss-plan "design caching"
→ planning/discussions/2024-01-15T15-00-caching-3rounds.md ✅ Clearly safe
```
---
## Migration Impact
### Affected Commands: ~30
**Analysis Commands** (6):
- `/cli:analyze`
- `/cli:mode:code-analysis`
- `/cli:mode:bug-diagnosis`
- `/cli:chat`
- `/memory:code-map-memory`
- `/workflow:review`
**Planning Commands** (5):
- `/cli:mode:plan`
- `/cli:discuss-plan`
- `/workflow:plan`
- `/workflow:replan`
- `/workflow:brainstorm:*`
**Execution Commands** (8):
- `/cli:execute`
- `/cli:codex-execute`
- `/workflow:execute`
- `/workflow:lite-execute`
- `/task:execute`
- `/workflow:test-cycle-execute`
- `/workflow:test-fix-gen`
- `/workflow:test-gen`
**Quality Commands** (4):
- `/workflow:action-plan-verify`
- `/workflow:review`
- `/workflow:tdd-verify`
- `/workflow:tdd-coverage-analysis`
**Context Commands** (7):
- `/workflow:tools:context-gather`
- `/workflow:tools:conflict-resolution`
- `/workflow:brainstorm:artifacts`
- `/memory:skill-memory`
- `/memory:docs`
- `/memory:load`
- `/memory:tech-research`
---
## Safety Indicators
### Directory Color Coding
- 🟢 **Green** (Safe): Read-only operations, no code changes
- `analysis/`
- `planning/`
- `interactions/`
- `summaries/`
- `quality/`
- `context/`
- `history/`
- ⚠️ **Yellow** (Dangerous): Code-modifying operations
- `executions/`
### File Naming Patterns
**Safe Operations** (🟢):
```
analysis/code/2024-01-15T10-30-auth-patterns.md
planning/discussions/2024-01-15T15-00-caching-3rounds.md
interactions/2024-01-15T14-00-jwt-question.md
```
**Dangerous Operations** (⚠️):
```
executions/implementations/2024-01-15T11-00-impl-auth.md
executions/test-fixes/2024-01-16T09-00-fix-login-tests.md
executions/refactors/2024-01-16T15-00-refactor-middleware.md
```
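Because safety is encoded in the top-level directory, it is machine-checkable. A sketch (`op_safety` is a hypothetical helper) that classifies a session-relative path:

```shell
# op_safety: print "dangerous" for executions/ paths, "safe" otherwise (sketch)
op_safety() {
  case "$1" in
    executions/*) echo "dangerous" ;;
    *)            echo "safe" ;;
  esac
}
```

A pre-review script could, for example, list a session's dangerous entries before approving it.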
---
## User Experience Improvements
### Before (v1.0) - Confusing ❌
**User wants to review analysis logs**:
```bash
$ ls .workflow/active/WFS-auth/.chat/
analyze-2024-01-15.md
execute-2024-01-15.md # ⚠️ Wait, which one is safe?
plan-2024-01-14.md
execute-2024-01-16.md # ⚠️ More dangerous files mixed in!
chat-2024-01-15.md
```
User thinks: "They're all in `.chat/`, so they're all logs... right?" 😰
### After (v2.0) - Clear ✅
**User wants to review analysis logs**:
```bash
$ ls .workflow/active/WFS-auth/
analysis/ # ✅ Safe - code understanding
planning/ # ✅ Safe - planning discussions
interactions/ # ✅ Safe - Q&A logs
executions/ # ⚠️ DANGER - code modifications
```
User thinks: "Oh, `executions/` is separate. I know that modifies code!" 😊
---
## Performance Impact
### Storage
**Overhead**: Negligible
- Deeper directory nesting adds ~10 bytes per file
- For 1000 files: ~10 KB additional metadata
### Access Speed
**Overhead**: Negligible
- Modern filesystems handle nested directories efficiently
- Lookup cost is per path component and effectively constant; two extra levels of nesting are not measurable
### Migration Cost
**Phase 1 (Dual Write)**: ~5-10% overhead
- Writing to both old and new locations
- Temporary during migration period
**Phase 2+ (New Structure Only)**: No overhead
- Single write location
- Actually slightly faster (better organization)
---
## Rollback Plan
If migration causes issues:
### Easy Rollback (Phase 1-2)
```bash
# Stop using new structure
git revert <migration-commit>
# Continue with old structure
# No data loss (dual write preserved both)
```
### Manual Rollback (Phase 3+)
```bash
# Copy files back to old locations
cp -r analysis/code/* .chat/
cp -r executions/implementations/* .chat/
cp -r context/project/* .process/
# etc.
```
---
## Timeline Summary
| Phase | Duration | Status | Risk |
|-------|----------|--------|------|
| **Phase 1**: Dual Write | 2 weeks | 📋 Planned | LOW |
| **Phase 2**: Dual Read | 1 week | 📋 Planned | LOW |
| **Phase 3**: Deprecation | 1 week | 📋 Planned | MEDIUM |
| **Phase 4**: Full Migration | Future | 🤔 Optional | MEDIUM |
**Total**: 4 weeks for Phases 1-3
**Effort**: ~20-30 hours development time
---
## Decision: Which Approach?
### Option A: Full v2.0 Migration (Recommended) ✅
**Pros**:
- ✅ Clear semantic separation
- ✅ Future-proof organization
- ✅ Best user experience
- ✅ Solves all identified problems
**Cons**:
- ❌ 4-week migration period
- ❌ Affects 30+ commands
- ❌ Requires documentation updates
**Recommendation**: **YES** - Worth the investment
### Option B: Minimal Changes (Quick Fix)
**Change**:
```
.chat/ → Split into .analysis/ and .executions/
.process/ → Keep as-is with better docs
```
**Pros**:
- ✅ Quick implementation (1 week)
- ✅ Solves main safety confusion
**Cons**:
- ❌ Partial solution
- ❌ Still some confusion
- ❌ May need full migration later anyway
**Recommendation**: Only if time-constrained
### Option C: Status Quo (No Change)
**Pros**:
- ✅ No development effort
**Cons**:
- ❌ Problems remain
- ❌ User confusion continues
- ❌ Safety risks
**Recommendation**: **NO** - Not recommended
---
## Conclusion
**Recommended Action**: Proceed with **Option A (Full v2.0 Migration)**
**Key Benefits**:
1. 🟢 Clear safety separation (read-only vs code-modifying)
2. 📁 Semantic organization by purpose
3. 🔍 Easy to find specific outputs
4. 📈 Scales for future growth
5. 👥 Better user experience
**Next Steps**:
1. ✅ Review and approve this proposal
2. 📋 Create detailed implementation tasks
3. 🚀 Begin Phase 1: Dual Write implementation
4. 📚 Update documentation in parallel
**Questions?**
- See detailed analysis in: `OUTPUT_DIRECTORY_REORGANIZATION.md`
- Implementation guide: Migration Strategy section
- Risk assessment: Risk Assessment section

---
### 7⃣ **CLI Tool Collaboration - Multi-Model Coordination**
This project integrates three CLI tools and supports flexible serial, parallel, and hybrid execution:
| Tool | Core Strengths | Context Length | Best For |
|------|---------------|----------------|----------|
| **Gemini** | Deep analysis, architecture design, planning | Very long context | Code understanding, execution-flow tracing, technical solution evaluation |
| **Qwen** | Code review, pattern recognition | Very long context | Gemini alternative, multi-dimensional analysis |
| **Codex** | Precise code writing, bug localization | Standard context | Feature implementation, test generation, code refactoring |
#### 📋 Three Execution Modes
**1. Serial Execution** - Sequential dependencies
When to use: a later task depends on the result of an earlier one
```bash
# Example: analyze, then implement
# Step 1: Gemini analyzes the architecture
Use gemini to analyze the authentication module's architecture, identifying key components and data flow
# Step 2: Codex implements based on the analysis
Have codex implement the JWT authentication middleware based on the architecture analysis above
```
**Execution flow**:
```
Gemini analysis → architecture report → Codex reads report → implements code
```
---
**2. Parallel Execution** - Run simultaneously
When to use: multiple independent tasks with no dependencies
```bash
# Example: multi-dimensional analysis
Use gemini to analyze the auth module's security, focusing on JWT, password storage, and session management
Use qwen to analyze the auth module's performance bottlenecks, identifying slow queries and optimization points
Have codex generate unit tests for the auth module covering all core functionality
```
**Execution flow**:
```
          ┌─ Gemini: security analysis ──┐
Parallel ─┼─ Qwen: performance analysis ─┼─→ aggregate results
          └─ Codex: test generation ─────┘
```
---
**3. Hybrid Execution** - Serial and parallel combined
When to use: complex tasks that are partly parallel, partly sequential
```bash
# Example: full feature development
# Phase 1: Parallel analysis (independent tasks)
Use gemini to analyze the existing auth system's architecture patterns
Use qwen to evaluate the OAuth2 integration approach
# Phase 2: Serial implementation (depends on Phase 1)
Have codex implement the OAuth2 authentication flow based on the analyses above
# Phase 3: Parallel polish (independent tasks)
Use gemini to review code quality and security
Have codex generate integration tests
```
**Execution flow**:
```
Phase 1 (parallel): Gemini analysis + Qwen evaluation
        │
        ▼
Phase 2 (serial):   Codex implementation
        │
        ▼
Phase 3 (parallel): Gemini review + Codex tests → Done
```
---
#### 🎯 Semantic Invocation vs Command Invocation
**Option 1: Natural-language semantic invocation** (recommended)
```bash
# Just describe the task; Claude Code invokes the tool automatically
"Use gemini to analyze this module's dependencies"
→ Claude Code generates: cd src && gemini -p "analyze dependencies"
"Have codex implement the user registration feature"
→ Claude Code generates: codex -C src/auth --full-auto exec "implement registration"
```
**Option 2: Direct command invocation**
```bash
# Precise invocation via slash commands
/cli:chat --tool gemini "explain this algorithm"
/cli:analyze --tool qwen "analyze performance bottlenecks"
/cli:execute --tool codex "optimize query performance"
```
---
#### 🔗 CLI Results as Context (Memory)
CLI tool outputs can be saved and reused as context (memory) for later operations, enabling an intelligent workflow:
**1. Result persistence**
```bash
# CLI results are saved to the session directory automatically
/cli:chat --tool gemini "analyze the auth module architecture"
→ saved to: .workflow/active/WFS-xxx/.chat/chat-[timestamp].md
/cli:analyze --tool qwen "evaluate performance bottlenecks"
→ saved to: .workflow/active/WFS-xxx/.chat/analyze-[timestamp].md
/cli:execute --tool codex "implement the feature"
→ saved to: .workflow/active/WFS-xxx/.chat/execute-[timestamp].md
```
**2. Results as planning input**
```bash
# Step 1: Analyze the current state (produces memory)
Use gemini to deeply analyze the auth system's architecture, security, and performance issues
→ Output: detailed analysis report (saved automatically)
# Step 2: Plan from the analysis
/workflow:plan "Refactor the auth system based on the Gemini analysis report above"
→ The system reads the analysis reports in .chat/ as context
→ and generates a precise implementation plan
```
**3. Results as implementation input**
```bash
# Step 1: Parallel analysis (produces multiple memories)
Use gemini to analyze the existing code structure
Use qwen to evaluate the feasibility of the technical approach
→ Output: multiple analysis reports
# Step 2: Implement from all analyses
Have codex combine the Gemini and Qwen analyses above and implement the best solution
→ Codex reads the earlier analysis results automatically
→ and generates code that follows the architectural design
```
**4. Cross-session references**
```bash
# Reference analysis results from a past session
/cli:execute --tool codex "Implement the new payment module using the architecture analysis from WFS-2024-001"
→ The system loads the referenced session's context automatically
→ and implements based on the historical analysis
```
**5. Memory update loop**
```bash
# Iterative improvement cycle
Use gemini to analyze problems in the current implementation
→ produces a problem report (memory)
Have codex optimize the code based on the problem report
→ applies improvements (updates memory)
Use qwen to verify the optimization
→ verification report (appended to memory)
# All results accumulate into a complete project memory
→ supporting future decisions and implementations
```
**Memory flow example**:
```
┌─────────────────────────────────────────────────────────────┐
│ Phase 1: Analysis (produces memory)                         │
├─────────────────────────────────────────────────────────────┤
│ Gemini analysis → architecture report (.chat/analyze-001.md)│
│ Qwen evaluation → solution report (.chat/analyze-002.md)    │
└─────────────────────┬───────────────────────────────────────┘
                      │ fed in as memory
┌─────────────────────▼───────────────────────────────────────┐
│ Phase 2: Planning (uses memory)                             │
├─────────────────────────────────────────────────────────────┤
│ /workflow:plan → reads analysis reports → implementation    │
│ plan (.task/IMPL-*.json)                                    │
└─────────────────────┬───────────────────────────────────────┘
                      │ fed in as memory
┌─────────────────────▼───────────────────────────────────────┐
│ Phase 3: Implementation (uses memory)                       │
├─────────────────────────────────────────────────────────────┤
│ Codex implementation → reads plan + analyses → code         │
│ (.chat/execute-001.md)                                      │
└─────────────────────┬───────────────────────────────────────┘
                      │ fed in as memory
┌─────────────────────▼───────────────────────────────────────┐
│ Phase 4: Verification (uses memory)                         │
├─────────────────────────────────────────────────────────────┤
│ Gemini review → reads implemented code → quality report     │
│ (.chat/review-001.md)                                       │
└─────────────────────────────────────────────────────────────┘
          Complete project memory library,
          supporting all future decisions and implementations
```
**Best practices**:
1. **Maintain continuity**: run related tasks in the same session so memory is shared automatically
2. **Reference explicitly**: across sessions, name the prior analysis (e.g. "using the analysis from WFS-xxx")
3. **Update incrementally**: append every analysis and implementation to memory, building a complete decision chain
4. **Curate regularly**: use `/memory:update-related` to fold CLI results into CLAUDE.md
5. **Quality first**: high-quality analysis memory markedly improves the quality of later implementations
---
#### 🔄 Workflow Integration Examples
**Integrating with the Lite workflow**:
```bash
# 1. Planning: Gemini analysis
/workflow:lite-plan -e "refactor the payment module"
→ In the three-dimensional confirmation, choose "CLI tool execution"
# 2. Execution: pick an execution mode
# Option A: serial
"Use gemini to analyze the payment flow" → "Have codex refactor the code"
# Option B: parallel analysis + serial implementation
"Use gemini to analyze the architecture" + "Use qwen to evaluate the approach"
→ "Have codex refactor based on the analyses"
```
**Integrating with the Full workflow**:
```bash
# 1. Planning
/workflow:plan "implement distributed caching"
/workflow:action-plan-verify
# 2. Analysis (parallel)
Use gemini to analyze the existing cache architecture
Use qwen to evaluate a Redis cluster approach
# 3. Implementation (serial)
/workflow:execute  # or via CLI:
Have codex implement the Redis cluster integration
# 4. Testing (parallel)
/workflow:test-gen WFS-cache
→ internally uses gemini for analysis + codex for test generation
# 5. Review (serial)
Use gemini to review code quality
/workflow:review --type architecture
```
---
#### 💡 Best Practices
**When to use serial**:
- Implementation depends on a design
- Tests depend on the implementation
- Optimization depends on a performance analysis
**When to use parallel**:
- Multi-dimensional analysis (security + performance + architecture)
- Independent development of multiple modules
- Generating code and tests at the same time
**When to use hybrid**:
- Complex feature development (analyze → design → implement → test)
- Large-scale refactoring (assess → plan → execute → verify)
- Tech-stack migration (research → design → implement → optimize)
**Tool selection tips**:
1. **Need to understand code** → Gemini (first choice) or Qwen
2. **Need to write code** → Codex
3. **Complex analysis** → Gemini + Qwen in parallel (complementary validation)
4. **Precise implementation** → Codex (guided by Gemini analysis)
5. **Quick prototyping** → Codex directly
---
## 🔄 Complete Flows for Typical Scenarios
### Scenario A: New Feature Development (You Know How to Build It)

# 🌳 CCW Workflow Decision Guide
This guide helps you choose the right commands and workflows for the complete software development lifecycle.
---
## 📊 Full Lifecycle Command Selection Flowchart
```mermaid
flowchart TD
Start([Start New Feature/Project]) --> Q1{Know what to build?}
Q1 -->|No| Ideation[💡 Ideation Phase<br>Requirements Exploration]
Q1 -->|Yes| Q2{Know how to build?}
Ideation --> BrainIdea[/ /workflow:brainstorm:auto-parallel<br>Explore product direction and positioning /]
BrainIdea --> Q2
Q2 -->|No| Design[🏗️ Design Exploration<br>Architecture Solution Discovery]
Q2 -->|Yes| Q3{Need UI design?}
Design --> BrainDesign[/ /workflow:brainstorm:auto-parallel<br>Explore technical solutions and architecture /]
BrainDesign --> Q3
Q3 -->|Yes| UIDesign[🎨 UI Design Phase]
Q3 -->|No| Q4{Task complexity?}
UIDesign --> Q3a{Have reference design?}
Q3a -->|Yes| UIImitate[/ /workflow:ui-design:imitate-auto<br>--input reference URL /]
Q3a -->|No| UIExplore[/ /workflow:ui-design:explore-auto<br>--prompt design description /]
UIImitate --> UISync[/ /workflow:ui-design:design-sync<br>Sync design system /]
UIExplore --> UISync
UISync --> Q4
Q4 -->|Simple & Quick| LitePlan[⚡ Lightweight Planning<br>/workflow:lite-plan]
Q4 -->|Complex & Complete| FullPlan[📋 Full Planning<br>/workflow:plan]
LitePlan --> Q5{Need code exploration?}
Q5 -->|Yes| LitePlanE[/ /workflow:lite-plan -e<br>task description /]
Q5 -->|No| LitePlanNormal[/ /workflow:lite-plan<br>task description /]
LitePlanE --> LiteConfirm[Three-Dimensional Confirmation:<br>1⃣ Task Approval<br>2⃣ Execution Method<br>3⃣ Code Review]
LitePlanNormal --> LiteConfirm
LiteConfirm --> Q6{Choose execution method}
Q6 -->|Agent| LiteAgent[/ /workflow:lite-execute<br>Using @code-developer /]
Q6 -->|CLI Tools| LiteCLI[CLI Execution<br>Gemini/Qwen/Codex]
Q6 -->|Plan Only| UserImpl[Manual User Implementation]
FullPlan --> PlanVerify{Verify plan quality?}
PlanVerify -->|Yes| Verify[/ /workflow:action-plan-verify /]
PlanVerify -->|No| Execute
Verify --> Q7{Verification passed?}
Q7 -->|No| FixPlan[Fix plan issues]
Q7 -->|Yes| Execute
FixPlan --> Execute
Execute[🚀 Execution Phase<br>/workflow:execute]
LiteAgent --> TestDecision
LiteCLI --> TestDecision
UserImpl --> TestDecision
Execute --> TestDecision
TestDecision{Need testing?}
TestDecision -->|TDD Mode| TDD[/ /workflow:tdd-plan<br>Test-Driven Development /]
TestDecision -->|Post-Implementation Testing| TestGen[/ /workflow:test-gen<br>Generate tests /]
TestDecision -->|Existing Tests| TestCycle[/ /workflow:test-cycle-execute<br>Test-fix cycle /]
TestDecision -->|No| Review
TDD --> TDDExecute[/ /workflow:execute<br>Red-Green-Refactor /]
TDDExecute --> TDDVerify[/ /workflow:tdd-verify<br>Verify TDD compliance /]
TDDVerify --> Review
TestGen --> TestExecute[/ /workflow:execute<br>Execute test tasks /]
TestExecute --> TestResult{Tests passed?}
TestResult -->|No| TestCycle
TestResult -->|Yes| Review
TestCycle --> TestPass{Pass rate ≥95%?}
TestPass -->|No, continue fixing| TestCycle
TestPass -->|Yes| Review
Review[📝 Review Phase]
Review --> Q8{Need specialized review?}
Q8 -->|Security| SecurityReview[/ /workflow:review<br>--type security /]
Q8 -->|Architecture| ArchReview[/ /workflow:review<br>--type architecture /]
Q8 -->|Quality| QualityReview[/ /workflow:review<br>--type quality /]
Q8 -->|Comprehensive| GeneralReview[/ /workflow:review<br>Comprehensive review /]
Q8 -->|No| Complete
SecurityReview --> Complete
ArchReview --> Complete
QualityReview --> Complete
GeneralReview --> Complete
Complete[✅ Completion Phase<br>/workflow:session:complete]
Complete --> End([Project Complete])
style Start fill:#e1f5ff
style End fill:#c8e6c9
style BrainIdea fill:#fff9c4
style BrainDesign fill:#fff9c4
style UIImitate fill:#f8bbd0
style UIExplore fill:#f8bbd0
style LitePlan fill:#b3e5fc
style FullPlan fill:#b3e5fc
style Execute fill:#c5e1a5
style TDD fill:#ffccbc
style TestGen fill:#ffccbc
style TestCycle fill:#ffccbc
style Review fill:#d1c4e9
style Complete fill:#c8e6c9
```
---
## 🎯 Decision Point Explanations
### 1⃣ **Ideation Phase - "Know what to build?"**
| Situation | Command | Description |
|-----------|---------|-------------|
| ❌ Uncertain about product direction | `/workflow:brainstorm:auto-parallel "Explore XXX domain product opportunities"` | Multi-role analysis with Product Manager, UX Expert, etc. |
| ✅ Clear feature requirements | Skip to design phase | Already know what functionality to build |
**Examples**:
```bash
# Uncertain scenario: Want to build a collaboration tool, but unsure what exactly
/workflow:brainstorm:auto-parallel "Explore team collaboration tool positioning and core features" --count 5
# Certain scenario: Building a real-time document collaboration editor (requirements clear)
# Skip ideation, move to design phase
```
---
### 2⃣ **Design Phase - "Know how to build?"**
| Situation | Command | Description |
|-----------|---------|-------------|
| ❌ Don't know technical approach | `/workflow:brainstorm:auto-parallel "Design XXX system architecture"` | System Architect, Security Expert analyze technical solutions |
| ✅ Clear implementation path | Skip to planning | Already know tech stack, architecture patterns |
**Examples**:
```bash
# Don't know how: Real-time collaboration conflict resolution? Which algorithm?
/workflow:brainstorm:auto-parallel "Design conflict resolution mechanism for real-time collaborative document editing" --count 4
# Know how: Using Operational Transformation + WebSocket + Redis
# Skip design exploration, go directly to planning
/workflow:plan "Implement real-time collaborative editing using OT algorithm, WebSocket communication, Redis storage"
```
---
### 3⃣ **UI Design Phase - "Need UI design?"**
| Situation | Command | Description |
|-----------|---------|-------------|
| 🎨 Have reference design | `/workflow:ui-design:imitate-auto --input "URL"` | Copy from existing design |
| 🎨 Design from scratch | `/workflow:ui-design:explore-auto --prompt "description"` | Generate multiple design variants |
| ⏭️ Backend/No UI | Skip | Pure backend API, CLI tools, etc. |
**Examples**:
```bash
# Have reference: Imitate Google Docs collaboration interface
/workflow:ui-design:imitate-auto --input "https://docs.google.com"
# No reference: Design from scratch
/workflow:ui-design:explore-auto --prompt "Modern minimalist document collaboration editing interface" --style-variants 3
# Sync design to project
/workflow:ui-design:design-sync --session WFS-xxx --selected-prototypes "v1,v2"
```
---
### 4⃣ **Planning Phase - Choose Workflow Type**
| Workflow | Use Case | Characteristics |
|----------|----------|-----------------|
| `/workflow:lite-plan` | Quick tasks, small features | In-memory planning, three-dimensional confirmation, fast execution |
| `/workflow:plan` | Complex projects, team collaboration | Persistent plans, quality gates, complete traceability |
**Lite-Plan Three-Dimensional Confirmation**:
1. **Task Approval**: Confirm / Modify / Cancel
2. **Execution Method**: Agent / Provide Plan / CLI Tools (Gemini/Qwen/Codex)
3. **Code Review**: No / Claude / Gemini / Qwen / Codex
**Examples**:
```bash
# Simple task
/workflow:lite-plan "Add user avatar upload feature"
# Need code exploration
/workflow:lite-plan -e "Refactor authentication module to OAuth2 standard"
# Complex project
/workflow:plan "Implement complete real-time collaborative editing system"
/workflow:action-plan-verify # Verify plan quality
/workflow:execute
```
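The three confirmation dimensions can be sketched as a simple data structure. This is an illustrative model only — the class name, field names, and option strings below are hypothetical, not the command's actual internal API:

```python
from dataclasses import dataclass

# Hypothetical sketch of lite-plan's three-dimensional confirmation.
# Field names and option lists are illustrative, not the real internal API.
@dataclass
class LitePlanConfirmation:
    task_approval: str      # "confirm" | "modify" | "cancel"
    execution_method: str   # "agent" | "provide_plan" | "gemini" | "qwen" | "codex"
    code_review: str        # "none" | "claude" | "gemini" | "qwen" | "codex"

    def is_ready(self) -> bool:
        """Execution proceeds only once the task itself is confirmed."""
        return self.task_approval == "confirm"

choice = LitePlanConfirmation("confirm", "agent", "gemini")
print(choice.is_ready())  # → True
```

The point of the model: the three dimensions are independent, so any execution method can be paired with any reviewer.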
---
### 5⃣ **Testing Phase - Choose Testing Strategy**
| Strategy | Command | Use Case |
|----------|---------|----------|
| **TDD Mode** | `/workflow:tdd-plan` | Starting from scratch, test-driven development |
| **Post-Implementation Testing** | `/workflow:test-gen` | Code complete, add tests |
| **Test Fixing** | `/workflow:test-cycle-execute` | Existing tests, need to fix failures |
**Examples**:
```bash
# TDD: Write tests first, then implement
/workflow:tdd-plan "User authentication module"
/workflow:execute # Red-Green-Refactor cycle
/workflow:tdd-verify # Verify TDD compliance
# Post-implementation testing: Add tests after code complete
/workflow:test-gen WFS-user-auth-implementation
/workflow:execute
# Test fixing: Existing tests with high failure rate
/workflow:test-cycle-execute --max-iterations 5
# Auto-iterate fixes until pass rate ≥95%
```
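The ≥95% exit gate used by the test-fix cycle is a plain ratio check. A minimal sketch, assuming pass/fail counts from the test runner (the real command computes this internally):

```python
def pass_rate_met(passed: int, total: int, threshold: float = 0.95) -> bool:
    """Return True when the pass rate reaches the cycle's exit threshold."""
    if total == 0:
        return False  # no tests ran: never treat as passing
    return passed / total >= threshold

print(pass_rate_met(95, 100))  # → True (exactly at threshold)
print(pass_rate_met(94, 100))  # → False (cycle keeps iterating)
```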
---
### 6⃣ **Review Phase - Choose Review Type**
| Type | Command | Focus |
|------|---------|-------|
| **Security Review** | `/workflow:review --type security` | SQL injection, XSS, authentication vulnerabilities |
| **Architecture Review** | `/workflow:review --type architecture` | Design patterns, coupling, scalability |
| **Quality Review** | `/workflow:review --type quality` | Code style, complexity, maintainability |
| **Comprehensive Review** | `/workflow:review` | All-around inspection |
**Examples**:
```bash
# Security-critical system
/workflow:review --type security
# After architecture refactoring
/workflow:review --type architecture
# Daily development
/workflow:review --type quality
```
---
### 7⃣ **CLI Tools Collaboration Mode - Multi-Model Intelligent Coordination**
This project integrates three CLI tools supporting flexible serial, parallel, and hybrid execution:
| Tool | Core Capabilities | Context Length | Use Cases |
|------|------------------|----------------|-----------|
| **Gemini** | Deep analysis, architecture design, planning | Ultra-long context | Code understanding, execution flow tracing, technical solution evaluation |
| **Qwen** | Code review, pattern recognition | Ultra-long context | Gemini alternative, multi-dimensional analysis |
| **Codex** | Precise code writing, bug location | Standard context | Feature implementation, test generation, code refactoring |
#### 📋 Three Execution Modes
**1. Serial Execution** - Sequential dependency
Use case: Subsequent tasks depend on previous results
```bash
# Example: Analyze then implement
# Step 1: Gemini analyzes architecture
Use gemini to analyze the authentication module's architecture design, identify key components and data flow
# Step 2: Codex implements based on analysis
Have codex implement JWT authentication middleware based on the above architecture analysis
```
**Execution flow**:
```
Gemini analysis → Output architecture report → Codex reads report → Implement code
```
---
**2. Parallel Execution** - Concurrent processing
Use case: Multiple independent tasks with no dependencies
```bash
# Example: Multi-dimensional analysis
Use gemini to analyze authentication module security, focus on JWT, password storage, session management
Use qwen to analyze authentication module performance bottlenecks, identify slow queries and optimization points
Have codex generate unit tests for authentication module, covering all core features
```
**Execution flow**:
```
         ┌─ Gemini: Security analysis ──┐
Parallel ┼─ Qwen: Performance analysis ─┼─→ Aggregate results
         └─ Codex: Test generation ─────┘
```
---
**3. Hybrid Execution** - Combined serial and parallel
Use case: Complex tasks with both parallel and serial phases
```bash
# Example: Complete feature development
# Phase 1: Parallel analysis (independent tasks)
Use gemini to analyze existing authentication system architecture patterns
Use qwen to evaluate OAuth2 integration technical solutions
# Phase 2: Serial implementation (depends on Phase 1)
Have codex implement OAuth2 authentication flow based on above analysis
# Phase 3: Parallel optimization (independent tasks)
Use gemini to review code quality and security
Have codex generate integration tests
```
**Execution flow**:
```
Phase 1 (parallel): Gemini analysis + Qwen evaluation
        │
        ▼
Phase 2 (serial):   Codex implementation
        │
        ▼
Phase 3 (parallel): Gemini review + Codex tests
        │
        ▼
Complete
```
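The hybrid pattern — fan out, join, fan out again — can be sketched generically with a thread pool. The stub functions below stand in for the actual CLI invocations and are purely illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

# Stubs standing in for CLI tool invocations (illustrative only).
def gemini_analyze():        return "architecture-report"
def qwen_evaluate():         return "solution-report"
def codex_implement(inputs): return f"code-from({'+'.join(inputs)})"
def gemini_review(code):     return f"review({code})"
def codex_tests(code):       return f"tests({code})"

with ThreadPoolExecutor() as pool:
    # Phase 1: parallel analysis (independent tasks)
    reports = [f.result() for f in (pool.submit(gemini_analyze),
                                    pool.submit(qwen_evaluate))]
    # Phase 2: serial implementation, depends on all Phase 1 results
    code = codex_implement(reports)
    # Phase 3: parallel review + test generation
    review, tests = (f.result() for f in (pool.submit(gemini_review, code),
                                          pool.submit(codex_tests, code)))

print(review)
print(tests)
```

The serial dependency is expressed by calling `.result()` before starting the next phase; everything else runs concurrently.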
---
#### 🎯 Semantic Invocation vs Command Invocation
**Method 1: Natural Language Semantic Invocation** (Recommended)
```bash
# Users simply describe naturally, Claude Code auto-invokes tools
"Use gemini to analyze this module's dependencies"
→ Claude Code auto-generates: cd src && gemini -p "Analyze dependencies"
"Have codex implement user registration feature"
→ Claude Code auto-generates: codex -C src/auth --full-auto exec "Implement registration"
```
**Method 2: Direct Command Invocation**
```bash
# Precise invocation via Slash commands
/cli:chat --tool gemini "Explain this algorithm"
/cli:analyze --tool qwen "Analyze performance bottlenecks"
/cli:execute --tool codex "Optimize query performance"
```
---
#### 🔗 CLI Results as Context (Memory)
CLI tool analysis results can be saved and used as context (memory) for subsequent operations, enabling intelligent workflows:
**1. Result Persistence**
```bash
# CLI execution results automatically saved to session directory
/cli:chat --tool gemini "Analyze authentication module architecture"
→ Saved to: .workflow/active/WFS-xxx/.chat/chat-[timestamp].md
/cli:analyze --tool qwen "Evaluate performance bottlenecks"
→ Saved to: .workflow/active/WFS-xxx/.chat/analyze-[timestamp].md
/cli:execute --tool codex "Implement feature"
→ Saved to: .workflow/active/WFS-xxx/.chat/execute-[timestamp].md
```
**2. Results as Planning Basis**
```bash
# Step 1: Analyze current state (generate memory)
Use gemini to deeply analyze authentication system architecture, security, and performance issues
→ Output: Detailed analysis report (auto-saved)
# Step 2: Plan based on analysis results
/workflow:plan "Refactor authentication system based on above Gemini analysis report"
→ System automatically reads analysis reports from .chat/ as context
→ Generate precise implementation plan
```
**3. Results as Implementation Basis**
```bash
# Step 1: Parallel analysis (generate multiple memories)
Use gemini to analyze existing code structure
Use qwen to evaluate technical solution feasibility
→ Output: Multiple analysis reports
# Step 2: Implement based on all analysis results
Have codex synthesize above Gemini and Qwen analyses to implement optimal solution
→ Codex automatically reads prior analysis results
→ Generate code conforming to architecture design
```
**4. Cross-Session References**
```bash
# Reference historical session analysis results
/cli:execute --tool codex "Refer to architecture analysis in WFS-2024-001, implement new payment module"
→ System automatically loads specified session context
→ Implement based on historical analysis
```
**5. Memory Update Loop**
```bash
# Iterative optimization flow
Use gemini to analyze problems in current implementation
→ Generate problem report (memory)
Have codex optimize code based on problem report
→ Implement improvements (update memory)
Use qwen to verify optimization effectiveness
→ Verification report (append to memory)
# All results accumulate as complete project memory
→ Support subsequent decisions and implementation
```
**Memory Flow Example**:
```
┌─────────────────────────────────────────────────────────────┐
│ Phase 1: Analysis Phase (Generate Memory) │
├─────────────────────────────────────────────────────────────┤
│ Gemini analysis → Architecture report (.chat/analyze-001.md)│
│ Qwen evaluation → Solution report (.chat/analyze-002.md) │
└─────────────────────┬───────────────────────────────────────┘
                      │ As Memory Input
                      ▼
┌─────────────────────────────────────────────────────────────┐
│ Phase 2: Planning Phase (Use Memory) │
├─────────────────────────────────────────────────────────────┤
│ /workflow:plan → Read analysis reports → Generate plan │
│ (.task/IMPL-*.json) │
└─────────────────────┬───────────────────────────────────────┘
                      │ As Memory Input
                      ▼
┌─────────────────────────────────────────────────────────────┐
│ Phase 3: Implementation Phase (Use Memory) │
├─────────────────────────────────────────────────────────────┤
│ Codex implement → Read plan+analysis → Generate code │
│ (.chat/execute-001.md) │
└─────────────────────┬───────────────────────────────────────┘
                      │ As Memory Input
                      ▼
┌─────────────────────────────────────────────────────────────┐
│ Phase 4: Verification Phase (Use Memory) │
├─────────────────────────────────────────────────────────────┤
│ Gemini review → Read implementation code → Quality report│
│ (.chat/review-001.md) │
└─────────────────────────────────────────────────────────────┘
                      │
                      ▼
           Complete Project Memory Library
    Supporting All Future Decisions and Implementation
```
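The accumulation pattern in the diagram — each phase reads everything recorded so far, then appends one more artifact — reduces to an append-only list. A minimal sketch with stub phase functions (the real artifacts are the `.chat/` and `.task/` files above):

```python
# Append-only project memory: each phase reads all prior artifacts, adds one.
memory: list[str] = []

def record(produce) -> None:
    """Run one phase: hand it a copy of the memory, append its artifact."""
    memory.append(produce(list(memory)))

record(lambda prior: "architecture-report")                  # analysis
record(lambda prior: f"plan(from {len(prior)} artifacts)")   # planning
record(lambda prior: f"code(from {len(prior)} artifacts)")   # implementation
record(lambda prior: f"review(from {len(prior)} artifacts)") # verification

print(memory[-1])  # → review(from 3 artifacts)
```

Nothing is ever overwritten: later phases always see the complete decision chain, which is what makes cross-session references possible.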
**Best Practices**:
1. **Maintain Continuity**: Execute related tasks in the same session to automatically share memory
2. **Explicit References**: Explicitly reference historical analyses when crossing sessions (e.g., "Refer to WFS-xxx analysis")
3. **Incremental Updates**: Each analysis and implementation appends to memory, forming complete decision chain
4. **Regular Organization**: Use `/memory:update-related` to consolidate CLI results into CLAUDE.md
5. **Quality First**: High-quality analysis memory significantly improves subsequent implementation quality
---
#### 🔄 Workflow Integration Examples
**Integration with Lite Workflow**:
```bash
# 1. Planning phase: Gemini analysis
/workflow:lite-plan -e "Refactor payment module"
→ Three-dimensional confirmation selects "CLI Tools execution"
# 2. Execution phase: Choose execution method
# Option A: Serial execution
"Use gemini to analyze payment flow" → "Have codex refactor code"
# Option B: Parallel analysis + Serial implementation
"Use gemini to analyze architecture" + "Use qwen to evaluate solution"
→ "Have codex refactor based on analysis results"
```
**Integration with Full Workflow**:
```bash
# 1. Planning phase
/workflow:plan "Implement distributed cache"
/workflow:action-plan-verify
# 2. Analysis phase (parallel)
Use gemini to analyze existing cache architecture
Use qwen to evaluate Redis cluster solution
# 3. Implementation phase (serial)
/workflow:execute # Or use CLI
Have codex implement Redis cluster integration
# 4. Testing phase (parallel)
/workflow:test-gen WFS-cache
→ Internally uses gemini analysis + codex test generation
# 5. Review phase (serial)
Use gemini to review code quality
/workflow:review --type architecture
```
---
#### 💡 Best Practices
**When to use serial**:
- Implementation depends on design solution
- Testing depends on code implementation
- Optimization depends on performance analysis
**When to use parallel**:
- Multi-dimensional analysis (security + performance + architecture)
- Multi-module independent development
- Simultaneous code and test generation
**When to use hybrid**:
- Complex feature development (analysis → design → implementation → testing)
- Large-scale refactoring (evaluation → planning → execution → verification)
- Tech stack migration (research → solution → implementation → optimization)
**Tool selection guidelines**:
1. **Need to understand code** → Gemini (preferred) or Qwen
2. **Need to write code** → Codex
3. **Complex analysis** → Gemini + Qwen parallel (complementary verification)
4. **Precise implementation** → Codex (based on Gemini analysis)
5. **Quick prototype** → Direct Codex usage
---
## 🔄 Complete Flow for Typical Scenarios
### Scenario A: New Feature Development (Know How to Build)
```bash
# 1. Planning
/workflow:plan "Add JWT authentication and permission management"
# 2. Verify plan
/workflow:action-plan-verify
# 3. Execute
/workflow:execute
# 4. Testing
/workflow:test-gen WFS-jwt-auth
/workflow:execute
# 5. Review
/workflow:review --type security
# 6. Complete
/workflow:session:complete
```
---
### Scenario B: New Feature Development (Don't Know How to Build)
```bash
# 1. Design exploration
/workflow:brainstorm:auto-parallel "Design distributed cache system architecture" --count 5
# 2. UI design (if needed)
/workflow:ui-design:explore-auto --prompt "Cache management dashboard interface"
/workflow:ui-design:design-sync --session WFS-xxx
# 3. Planning
/workflow:plan
# 4. Verification
/workflow:action-plan-verify
# 5. Execution
/workflow:execute
# 6. TDD testing
/workflow:tdd-plan "Cache system core modules"
/workflow:execute
# 7. Review
/workflow:review --type architecture
/workflow:review --type security
# 8. Complete
/workflow:session:complete
```
---
### Scenario C: Quick Feature Development (Lite Workflow)
```bash
# 1. Lightweight planning (may need code exploration)
/workflow:lite-plan -e "Optimize database query performance"
# 2. Three-dimensional confirmation
# - Confirm task
# - Choose Agent execution
# - Choose Gemini code review
# 3. Auto-execution (called internally by /workflow:lite-execute)
# 4. Complete
```
---
### Scenario D: Bug Fixing
```bash
# 1. Diagnosis
/cli:mode:bug-diagnosis --tool gemini "User login fails with token expired error"
# 2. Quick fix
/workflow:lite-plan "Fix JWT token expiration validation logic"
# 3. Test fix
/workflow:test-cycle-execute
# 4. Complete
```
---
## 🎓 Quick Command Reference
### Choose by Knowledge Level
| Your Situation | Recommended Command |
|----------------|---------------------|
| 💭 Don't know what to build | `/workflow:brainstorm:auto-parallel "Explore product direction"` |
| ❓ Know what, don't know how | `/workflow:brainstorm:auto-parallel "Design technical solution"` |
| ✅ Know what and how | `/workflow:plan "Specific implementation description"` |
| ⚡ Simple, clear small task | `/workflow:lite-plan "Task description"` |
| 🐛 Bug fixing | `/cli:mode:bug-diagnosis` + `/workflow:lite-plan` |
### Choose by Project Phase
| Phase | Command |
|-------|---------|
| 📋 **Requirements Analysis** | `/workflow:brainstorm:auto-parallel` |
| 🏗️ **Architecture Design** | `/workflow:brainstorm:auto-parallel` |
| 🎨 **UI Design** | `/workflow:ui-design:explore-auto` / `imitate-auto` |
| 📝 **Implementation Planning** | `/workflow:plan` / `/workflow:lite-plan` |
| 🚀 **Coding Implementation** | `/workflow:execute` / `/workflow:lite-execute` |
| 🧪 **Testing** | `/workflow:tdd-plan` / `/workflow:test-gen` |
| 🔧 **Test Fixing** | `/workflow:test-cycle-execute` |
| 📖 **Code Review** | `/workflow:review` |
| ✅ **Project Completion** | `/workflow:session:complete` |
### Choose by Work Mode
| Mode | Workflow | Use Case |
|------|----------|----------|
| **🚀 Agile & Fast** | Lite Workflow | Personal dev, rapid iteration, prototype validation |
| **📋 Standard & Complete** | Full Workflow | Team collaboration, enterprise projects, long-term maintenance |
| **🧪 Quality-First** | TDD Workflow | Core modules, critical features, high reliability requirements |
| **🎨 Design-Driven** | UI-Design Workflow | Frontend projects, user interfaces, design systems |
---
## 💡 Expert Advice
### ✅ Best Practices
1. **Use brainstorming when uncertain**: Better to spend 10 minutes exploring solutions than blindly implementing and rewriting
2. **Use Full workflow for complex projects**: Persistent plans facilitate team collaboration and long-term maintenance
3. **Use Lite workflow for small tasks**: Complete quickly, reduce overhead
4. **Use TDD for critical modules**: Test-driven development ensures quality
5. **Regularly update memory**: `/memory:update-related` keeps context accurate
### ❌ Common Pitfalls
1. **Blindly skipping brainstorming**: Not exploring unfamiliar technical domains leads to rework
2. **Overusing brainstorming**: Brainstorming even simple features wastes time
3. **Ignoring plan verification**: Not running `/workflow:action-plan-verify` causes execution issues
4. **Ignoring testing**: Skipping test generation leaves code quality unverified
5. **Not completing sessions**: Skipping `/workflow:session:complete` leaves session state inconsistent
---
## 🔗 Related Documentation
- [Getting Started Guide](GETTING_STARTED.md) - Quick start tutorial
- [Command Reference](COMMAND_REFERENCE.md) - Complete command list
- [Architecture Overview](ARCHITECTURE.md) - System architecture explanation
- [Examples](EXAMPLES.md) - Real-world scenario examples
- [FAQ](FAQ.md) - Frequently asked questions
---
**Last Updated**: 2025-11-20
**Version**: 5.8.1