Compare commits


3 Commits

Author SHA1 Message Date
Claude
63ca455c64 docs: add visual before/after comparison for output structure
- Clear side-by-side comparison of current vs proposed structure
- Safety indicators (safe vs dangerous operations)
- Command output examples showing the improvements
- User experience improvements with concrete examples
- Performance impact analysis
- Timeline and decision framework
- Complements OUTPUT_DIRECTORY_REORGANIZATION.md with visual summary
2025-11-20 09:20:43 +00:00
Claude
9f1a9a7731 docs: comprehensive output directory reorganization recommendations
- Analyzed current structure across 30+ commands
- Identified semantic confusion: .chat/ mixes read-only and code-modifying operations
- Proposed v2.0 structure with clear separation:
  * analysis/ - Read-only code understanding
  * planning/ - Read-only architecture planning
  * executions/ - Code-modifying operations (clearly marked)
  * quality/ - Reviews and verifications
  * context/ - Planning context and brainstorm artifacts
  * history/ - Backups and snapshots
- Detailed 4-phase migration strategy (dual write → dual read → deprecation → full migration)
- Command output mapping for all affected commands
- Risk assessment and rollback strategies
- Implementation checklist with timeline
2025-11-20 09:19:20 +00:00
Claude
38f8175780 docs: comprehensive command ambiguity analysis
- Identified 5 major ambiguity clusters in 74 commands
- Critical issues: Planning command overload (5 variants)
- Critical issues: Execution command confusion (5 variants)
- High priority: Tool selection and enhancement flag inconsistency
- Added decision trees for command selection
- Provided recommendations for immediate, short-term, and long-term improvements
2025-11-20 09:06:21 +00:00
13 changed files with 1636 additions and 643 deletions

View File

@@ -89,7 +89,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
 bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null | xargs cat 2>/dev/null; fi)
 ```
-**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/doc-planning-data.json` with structure:
+**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/phase2-analysis.json` with structure:
 ```json
 {
@@ -118,7 +118,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
 **Then** use **Edit tool** to update `workflow-session.json` adding analysis field.
-**Output**: Single `doc-planning-data.json` with all analysis data (no temp files or Python scripts).
+**Output**: Single `phase2-analysis.json` with all analysis data (no temp files or Python scripts).
 **Auto-skipped**: Tests (`**/test/**`, `**/*.test.*`), Build (`**/node_modules/**`, `**/dist/**`), Config (root-level files), Vendor directories.
@@ -127,8 +127,8 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
 **Commands**:
 ```bash
-# Count existing docs from doc-planning-data.json
-bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq '.existing_docs.file_list | length')
+# Count existing docs from phase2-analysis.json
+bash(cat .workflow/active/WFS-docs-{timestamp}/.process/phase2-analysis.json | jq '.existing_docs.file_list | length')
 ```
 **Data Processing**: Use count result, then use **Edit tool** to update `workflow-session.json`:
@@ -182,8 +182,8 @@ Large Projects (single dir >10 docs):
 **Commands**:
 ```bash
-# 1. Get top-level directories from doc-planning-data.json
-bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq -r '.top_level_dirs[]')
+# 1. Get top-level directories from phase2-analysis.json
+bash(cat .workflow/active/WFS-docs-{timestamp}/.process/phase2-analysis.json | jq -r '.top_level_dirs[]')
 # 2. Get mode from workflow-session.json
 bash(cat .workflow/active/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')
@@ -201,7 +201,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
 - If total ≤10 docs: create group
 - If total >10 docs: split to 1 dir/group or subdivide
 - If single dir >10 docs: split by subdirectories
-3. Use **Edit tool** to update `doc-planning-data.json` adding groups field:
+3. Use **Edit tool** to update `phase2-analysis.json` adding groups field:
 ```json
 "groups": {
 "count": 3,
@@ -215,7 +215,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
 **Task ID Calculation**:
 ```bash
-group_count=$(jq '.groups.count' .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json)
+group_count=$(jq '.groups.count' .workflow/active/WFS-docs-{timestamp}/.process/phase2-analysis.json)
 readme_id=$((group_count + 1)) # Next ID after groups
 arch_id=$((group_count + 2))
 api_id=$((group_count + 3))
@@ -237,7 +237,7 @@ api_id=$((group_count + 3))
 **Generation Process**:
 1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
-2. Read group assignments from doc-planning-data.json
+2. Read group assignments from phase2-analysis.json
 3. Generate Level 1 tasks (IMPL-001 to IMPL-N, one per group)
 4. Generate Level 2+ tasks if mode=full (README, ARCHITECTURE, HTTP API)
@@ -262,14 +262,14 @@ api_id=$((group_count + 3))
 },
 "context": {
 "requirements": [
-"Process directories from group ${group_number} in doc-planning-data.json",
+"Process directories from group ${group_number} in phase2-analysis.json",
 "Generate docs to .workflow/docs/${project_name}/ (mirrored structure)",
 "Code folders: API.md + README.md; Navigation folders: README.md only",
 "Use pre-analyzed data from Phase 2 (no redundant analysis)"
 ],
 "focus_paths": ["${group_dirs_from_json}"],
 "precomputed_data": {
-"phase2_analysis": "${session_dir}/.process/doc-planning-data.json"
+"phase2_analysis": "${session_dir}/.process/phase2-analysis.json"
 }
 },
 "flow_control": {
@@ -278,8 +278,8 @@ api_id=$((group_count + 3))
 "step": "load_precomputed_data",
 "action": "Load Phase 2 analysis and extract group directories",
 "commands": [
-"bash(cat ${session_dir}/.process/doc-planning-data.json)",
-"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/doc-planning-data.json)"
+"bash(cat ${session_dir}/.process/phase2-analysis.json)",
+"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/phase2-analysis.json)"
 ],
 "output_to": "phase2_context",
 "note": "Single JSON file contains all Phase 2 analysis results"
@@ -324,7 +324,7 @@ api_id=$((group_count + 3))
 {
 "step": 2,
 "title": "Batch generate documentation via CLI",
-"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/doc-planning-data.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
+"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/phase2-analysis.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
 "depends_on": [1],
 "output": "generated_docs"
 }
@@ -464,7 +464,7 @@ api_id=$((group_count + 3))
 ├── IMPL_PLAN.md
 ├── TODO_LIST.md
 ├── .process/
-│ └── doc-planning-data.json # All Phase 2 analysis data (replaces 7+ files)
+│ └── phase2-analysis.json # All Phase 2 analysis data (replaces 7+ files)
 └── .task/
 ├── IMPL-001.json # Small: all modules | Large: group 1
 ├── IMPL-00N.json # (Large only: groups 2-N)
@@ -473,7 +473,7 @@ api_id=$((group_count + 3))
 └── IMPL-{N+3}.json # HTTP API (optional)
 ```
-**doc-planning-data.json Structure**:
+**phase2-analysis.json Structure**:
 ```json
 {
 "metadata": {
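The task-ID arithmetic in the hunks above can be exercised end to end with jq against a sample analysis file. This is a minimal sketch assuming jq is installed; the JSON content and the temp path are illustrative stand-ins, not real session data:

```shell
#!/bin/sh
# Build a sample analysis file with the same .groups.count field the
# workflow reads (sample data, not real session output).
workdir=$(mktemp -d)
cat > "$workdir/phase2-analysis.json" <<'EOF'
{"groups": {"count": 3}}
EOF

# Same derivation as the template: follow-up task IDs start after the
# per-group IMPL tasks.
group_count=$(jq '.groups.count' "$workdir/phase2-analysis.json")
readme_id=$((group_count + 1))  # README task follows the last group
arch_id=$((group_count + 2))    # ARCHITECTURE task
api_id=$((group_count + 3))     # HTTP API task

printf 'IMPL-%03d IMPL-%03d IMPL-%03d\n' "$readme_id" "$arch_id" "$api_id"
rm -r "$workdir"
```

With three groups this yields IMPL-004, IMPL-005, and IMPL-006 as the follow-up task IDs, matching the "Next ID after groups" comment in the diff.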

View File

@@ -89,7 +89,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
 bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null | xargs cat 2>/dev/null; fi)
 ```
-**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/doc-planning-data.json` with structure:
+**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/phase2-analysis.json` with structure:
 ```json
 {
@@ -118,7 +118,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
 **Then** use **Edit tool** to update `workflow-session.json` adding analysis field.
-**Output**: Single `doc-planning-data.json` with all analysis data (no temp files or Python scripts).
+**Output**: Single `phase2-analysis.json` with all analysis data (no temp files or Python scripts).
 **Auto-skipped**: Tests (`**/test/**`, `**/*.test.*`), Build (`**/node_modules/**`, `**/dist/**`), Config (root-level files), Vendor directories.
@@ -127,8 +127,8 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
 **Commands**:
 ```bash
-# Count existing docs from doc-planning-data.json
-bash(cat .workflow/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq '.existing_docs.file_list | length')
+# Count existing docs from phase2-analysis.json
+bash(cat .workflow/WFS-docs-{timestamp}/.process/phase2-analysis.json | jq '.existing_docs.file_list | length')
 ```
 **Data Processing**: Use count result, then use **Edit tool** to update `workflow-session.json`:
@@ -182,8 +182,8 @@ Large Projects (single dir >10 docs):
 **Commands**:
 ```bash
-# 1. Get top-level directories from doc-planning-data.json
-bash(cat .workflow/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq -r '.top_level_dirs[]')
+# 1. Get top-level directories from phase2-analysis.json
+bash(cat .workflow/WFS-docs-{timestamp}/.process/phase2-analysis.json | jq -r '.top_level_dirs[]')
 # 2. Get mode from workflow-session.json
 bash(cat .workflow/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')
@@ -201,7 +201,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
 - If total ≤10 docs: create group
 - If total >10 docs: split to 1 dir/group or subdivide
 - If single dir >10 docs: split by subdirectories
-3. Use **Edit tool** to update `doc-planning-data.json` adding groups field:
+3. Use **Edit tool** to update `phase2-analysis.json` adding groups field:
 ```json
 "groups": {
 "count": 3,
@@ -215,7 +215,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
 **Task ID Calculation**:
 ```bash
-group_count=$(jq '.groups.count' .workflow/WFS-docs-{timestamp}/.process/doc-planning-data.json)
+group_count=$(jq '.groups.count' .workflow/WFS-docs-{timestamp}/.process/phase2-analysis.json)
 readme_id=$((group_count + 1)) # Next ID after groups
 arch_id=$((group_count + 2))
 api_id=$((group_count + 3))
@@ -237,7 +237,7 @@ api_id=$((group_count + 3))
 **Generation Process**:
 1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
-2. Read group assignments from doc-planning-data.json
+2. Read group assignments from phase2-analysis.json
 3. Generate Level 1 tasks (IMPL-001 to IMPL-N, one per group)
 4. Generate Level 2+ tasks if mode=full (README, ARCHITECTURE, HTTP API)
@@ -262,14 +262,14 @@ api_id=$((group_count + 3))
 },
 "context": {
 "requirements": [
-"Process directories from group ${group_number} in doc-planning-data.json",
+"Process directories from group ${group_number} in phase2-analysis.json",
 "Generate docs to .workflow/docs/${project_name}/ (mirrored structure)",
 "Code folders: API.md + README.md; Navigation folders: README.md only",
 "Use pre-analyzed data from Phase 2 (no redundant analysis)"
 ],
 "focus_paths": ["${group_dirs_from_json}"],
 "precomputed_data": {
-"phase2_analysis": "${session_dir}/.process/doc-planning-data.json"
+"phase2_analysis": "${session_dir}/.process/phase2-analysis.json"
 }
 },
 "flow_control": {
@@ -278,8 +278,8 @@ api_id=$((group_count + 3))
 "step": "load_precomputed_data",
 "action": "Load Phase 2 analysis and extract group directories",
 "commands": [
-"bash(cat ${session_dir}/.process/doc-planning-data.json)",
-"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/doc-planning-data.json)"
+"bash(cat ${session_dir}/.process/phase2-analysis.json)",
+"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/phase2-analysis.json)"
 ],
 "output_to": "phase2_context",
 "note": "Single JSON file contains all Phase 2 analysis results"
@@ -324,7 +324,7 @@ api_id=$((group_count + 3))
 {
 "step": 2,
 "title": "Batch generate documentation via CLI",
-"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/doc-planning-data.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
+"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/phase2-analysis.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
 "depends_on": [1],
 "output": "generated_docs"
 }
@@ -464,7 +464,7 @@ api_id=$((group_count + 3))
 ├── IMPL_PLAN.md
 ├── TODO_LIST.md
 ├── .process/
-│ └── doc-planning-data.json # All Phase 2 analysis data (replaces 7+ files)
+│ └── phase2-analysis.json # All Phase 2 analysis data (replaces 7+ files)
 └── .task/
 ├── IMPL-001.json # Small: all modules | Large: group 1
 ├── IMPL-00N.json # (Large only: groups 2-N)
@@ -473,7 +473,7 @@ api_id=$((group_count + 3))
 └── IMPL-{N+3}.json # HTTP API (optional)
 ```
-**doc-planning-data.json Structure**:
+**phase2-analysis.json Structure**:
 ```json
 {
 "metadata": {
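The jq `select` filter and the per-directory loop used in the IMPL task commands can be sketched against sample data. Assumptions: jq is available, the group assignments below are invented for illustration, and the documentation CLI call is replaced by a placeholder `echo` (the real command invokes gemini as shown in the diff). Running each iteration in a subshell avoids relying on `cd -` to restore the working directory:

```shell
#!/bin/sh
# Sample group assignments mirroring the .groups.assignments shape the
# workflow stores (illustrative data only).
workdir=$(mktemp -d)
mkdir -p "$workdir/src/api" "$workdir/src/core"
cat > "$workdir/phase2-analysis.json" <<'EOF'
{"groups": {"assignments": [
  {"group_id": "1", "directories": ["src/api", "src/core"]},
  {"group_id": "2", "directories": ["src/ui"]}
]}}
EOF

group_number=1
# Same select filter as the task template: pick one group's directories.
dirs=$(jq -r --arg g "$group_number" \
  '.groups.assignments[] | select(.group_id == $g) | .directories[]' \
  "$workdir/phase2-analysis.json")

visited=""
for dir in $dirs; do
  # Subshell: the cd cannot leak into the outer loop, so no `cd -` is
  # needed. The echo stands in for the real doc-generation CLI call.
  ( cd "$workdir/$dir" && echo "would generate docs in $dir" ) \
    || echo "Failed: $dir"
  visited="$visited $dir"
done
echo "visited:$visited"
rm -r "$workdir"
```

Passing the group id via `--arg` keeps the filter quoting simple compared to interpolating `${group_number}` into the jq program string, as the template's escaped version does.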

View File

@@ -5,22 +5,27 @@ description: Product backlog management, user story creation, and feature priori
# Product Owner Planning Template
## Role & Scope
You are a **Product Owner** specializing in product backlog management, user story creation, and feature prioritization.
**Role**: Product Owner
**Focus**: Product backlog management, user story definition, stakeholder alignment, value delivery
**Excluded**: Team management, technical implementation, detailed system design
## Your Role & Responsibilities
## Planning Process (Required)
Before providing planning document, you MUST:
1. Analyze product vision and stakeholder needs
2. Define backlog structure and prioritization framework
3. Create user stories with acceptance criteria
4. Plan releases and define success metrics
5. Present structured planning document
**Primary Focus**: Product backlog management, user story definition, stakeholder alignment, and value delivery
**Core Responsibilities**:
- Product backlog creation and prioritization
- User story writing with acceptance criteria
- Stakeholder engagement and requirement gathering
- Feature value assessment and ROI analysis
- Release planning and roadmap management
- Sprint goal definition and commitment
- Acceptance testing and definition of done
**Does NOT Include**: Team management, technical implementation, detailed system design
## Planning Document Structure
Generate a comprehensive Product Owner planning document with the following structure:
### 1. Product Vision & Strategy
- **Product Vision**: Long-term product goals and target outcomes
- **Value Proposition**: User value and business benefits

View File

@@ -5,52 +5,55 @@ category: development
keywords: [bug诊断, 故障分析, 修复方案]
---
# Role & Output Requirements
# AI Persona & Core Mission
**Role**: Software engineer specializing in bug diagnosis
**Output Format**: Diagnostic report in Chinese following the specified structure
**Constraints**: Do NOT write complete code files. Provide diagnostic analysis and targeted correction suggestions only.
You are a **资深软件工程师 & 故障诊断专家 (Senior Software Engineer & Fault Diagnosis Expert)**. Your mission is to meticulously analyze user-provided bug reports, logs, and code snippets to perform a forensic-level investigation. Your goal is to pinpoint the precise root cause of the bug and then propose a targeted, robust, and minimally invasive correction plan. **Critically, you will *not* write complete, ready-to-use code files. Your output is a diagnostic report and a clear, actionable correction suggestion, articulated in professional Chinese.** You are an expert at logical deduction, tracing execution flows, and anticipating the side effects of any proposed fix.
## Core Capabilities
- Interpret symptoms from bug reports, stack traces, and logs
- Trace execution flow to identify root causes
- Formulate and validate hypotheses about bug origins
- Design targeted, low-risk corrections
- Analyze impact on other system components
## II. ROLE DEFINITION & CORE CAPABILITIES
1. **Role**: Senior Software Engineer & Fault Diagnosis Expert.
2. **Core Capabilities**:
* **Symptom Interpretation**: Deconstructing bug reports, stack traces, logs, and user descriptions into concrete technical observations.
* **Logical Deduction & Root Cause Analysis**: Masterfully applying deductive reasoning to trace symptoms back to their fundamental cause, moving from "what is happening" to "why it's happening".
* **Code Traversal & Execution Flow Analysis**: Mentally (or schematically) tracing code paths, state changes, and data transformations to identify logical flaws.
* **Hypothesis Formulation & Validation**: Formulating plausible hypotheses about the bug's origin and systematically validating or refuting them based on the provided evidence.
* **Targeted Solution Design**: Proposing precise, effective, and low-risk code corrections rather than broad refactoring.
* **Impact Analysis**: Foreseeing the potential ripple effects or unintended consequences of a proposed fix on other parts of the system.
* **Clear Technical Communication (Chinese)**: Articulating complex diagnostic processes and correction plans in clear, unambiguous Chinese for a developer audience.
## Analysis Process (Required)
**Before providing your final diagnosis, you MUST:**
1. Analyze symptoms and form initial hypothesis
2. Trace code execution to identify root cause
3. Design correction strategy
4. Assess potential impacts and risks
5. Present structured diagnostic report
3. **Core Thinking Mode**:
* **Detective-like & Methodical**: Start with the evidence (symptoms), follow the clues (code paths), identify the suspect (flawed logic), and prove the case (root cause).
* **Hypothesis-Driven**: Actively form and state your working theories ("My initial hypothesis is that the null pointer is originating from module X because...") before reaching a conclusion.
* **From Effect to Cause**: Your primary thought process should be working backward from the observed failure to the initial error.
* **Chain-of-Thought (CoT) Driven**: Explicitly articulate your entire diagnostic journey, from symptom analysis to root cause identification.
## Objectives
1. Identify root cause (not just symptoms)
2. Propose targeted correction with justification
3. Assess risks and side effects
4. Provide verification steps
## III. OBJECTIVES
1. **Analyze Evidence**: Thoroughly examine all provided information (bug description, code, logs) to understand the failure conditions.
2. **Pinpoint Root Cause**: Go beyond surface-level symptoms to identify the fundamental logical error, race condition, data corruption, or configuration issue.
3. **Propose Precise Correction**: Formulate a clear and targeted suggestion for how to fix the bug.
4. **Explain the Why**: Justify why the proposed correction effectively resolves the root cause.
5. **Assess Risks & Side Effects**: Identify potential negative impacts of the fix and suggest verification steps.
6. **Professional Chinese Output**: Produce a highly structured, professional diagnostic report and correction plan entirely in Chinese.
7. **Show Your Work (CoT)**: Demonstrate your analytical process clearly in the 思考过程 section.
## Input
- Bug description (observed vs. expected behavior)
- Code snippets or file locations
- Logs, stack traces, error messages
- Reproduction steps (if available)
## IV. INPUT SPECIFICATIONS
1. **Bug Description**: A description of the problem, including observed behavior vs. expected behavior.
2. **Code Snippets/File Information**: Relevant source code where the bug is suspected to be.
3. **Logs/Stack Traces (Highly Recommended)**: Error messages, logs, or stack traces associated with the bug.
4. **Reproduction Steps (Optional)**: Steps to reproduce the bug.
## Output Structure (Required)
## V. RESPONSE STRUCTURE & CONTENT (Strictly Adhere - Output in Chinese)
Output in Chinese using this Markdown structure:
Your response **MUST** be in Chinese and structured in Markdown as follows:
---
### 0. 诊断思维链 (Diagnostic Chain-of-Thought)
Present your analysis process in these steps:
1. **症状分析**: Summarize error symptoms and technical clues
2. **初步假设**: Identify suspicious code areas and form initial hypothesis
3. **根本原因定位**: Trace execution path to pinpoint exact cause
4. **修复方案设计**: Design targeted, low-risk correction
5. **影响评估**: Assess side effects and plan verification
* *(在此处,您必须结构化地展示您的诊断流程。)*
* **1. 症状分析 (Symptom Analysis):** 我首先将用户的描述、日志和错误信息进行归纳,提炼出关键的异常行为和技术线索。
* **2. 代码勘察与初步假设 (Code Exploration & Initial Hypothesis):** 基于症状,我将定位到最可疑的代码区域,并提出一个关于根本原因的初步假设。
* **3. 逻辑推演与根本原因定位 (Logical Deduction & Root Cause Pinpointing):** 我将沿着代码执行路径进行深入推演,验证或修正我的假设,直至锁定导致错误的精确逻辑点。
* **4. 修复方案设计 (Correction Strategy Design):** 在确定根本原因后,我将设计一个最直接、风险最低的修复方案。
* **5. 影响评估与验证规划 (Impact Assessment & Verification Planning):** 我会评估修复方案可能带来的副作用,并构思如何验证修复的有效性及系统的稳定性。
### **故障诊断与修复建议报告 (Bug Diagnosis & Correction Proposal)**
@@ -111,17 +114,17 @@ Present your analysis process in these steps:
---
*(对每个需要修改的文件重复上述格式)*
## Key Requirements
1. **Language**: All output in Chinese
2. **No Code Generation**: Use diff format or pseudo-code only. Do not write complete functions or files
3. **Focus on Root Cause**: Analysis must be logical and evidence-based
4. **State Assumptions**: Clearly note any assumptions when information is incomplete
## VI. KEY DIRECTIVES & CONSTRAINTS
1. **Language**: **All** descriptive parts MUST be in **Chinese**.
2. **No Full Code Generation**: **Strictly refrain** from writing complete functions or files. Your correction suggestions should be concise, using single lines, `diff` format, or pseudo-code to illustrate the change. Your role is to guide the developer, not replace them.
3. **Focus on RCA**: The quality of your Root Cause Analysis is paramount. It must be logical, convincing, and directly supported by the evidence.
4. **State Assumptions**: If the provided information is insufficient to be 100% certain, clearly state your assumptions in the "诊断分析过程" section.
## Self-Review Checklist
Before providing final output, verify:
- [ ] Diagnostic chain reflects logical debugging process
- [ ] Root cause analysis is clear and evidence-based
- [ ] Correction directly addresses root cause (not just symptoms)
- [ ] Correction is minimal and targeted (not broad refactoring)
- [ ] Verification steps are actionable
- [ ] No complete code blocks generated
## VII. SELF-CORRECTION / REFLECTION
* Before finalizing your response, review it to ensure:
* The 诊断思维链 accurately reflects a logical debugging process.
* The Root Cause Analysis is deep, clear, and compelling.
* The proposed correction directly addresses the identified root cause.
* The correction suggestion is minimal and precise (not large-scale refactoring).
* The verification steps are actionable and cover both success and failure cases.
* You have strictly avoided generating large blocks of code.

View File

@@ -1,17 +1,10 @@
Analyze implementation patterns and code structure.
## Planning Required
Before providing analysis, you MUST:
1. Review all files in context (not just samples)
2. Identify patterns with file:line references
3. Distinguish good patterns from anti-patterns
4. Apply template requirements
## Core Checklist
- [ ] Analyze ALL files in CONTEXT
- [ ] Provide file:line references for each pattern
- [ ] Distinguish good patterns from anti-patterns
- [ ] Apply RULES template requirements
## CORE CHECKLIST ⚡
□ Analyze ALL files in CONTEXT (not just samples)
□ Provide file:line references for every pattern identified
□ Distinguish between good patterns and anti-patterns
□ Apply RULES template requirements exactly as specified
## REQUIRED ANALYSIS
1. Identify common code patterns and architectural decisions
@@ -26,12 +19,10 @@ Before providing analysis, you MUST:
- Clear recommendations for pattern improvements
- Standards compliance assessment with priority levels
## Verification Checklist
Before finalizing output, verify:
- [ ] All CONTEXT files analyzed
- [ ] Every pattern has code reference (file:line)
- [ ] Anti-patterns clearly distinguished
- [ ] Recommendations prioritized by impact
## VERIFICATION CHECKLIST ✓
□ All CONTEXT files analyzed (not partial coverage)
□ Every pattern backed by code reference (file:line)
□ Anti-patterns clearly distinguished from good patterns
□ Recommendations prioritized by impact
## Output Requirements
Provide actionable insights with concrete implementation guidance.
Focus: Actionable insights with concrete implementation guidance.
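The file:line evidence this checklist demands can be gathered mechanically. A minimal sketch using `grep -rn`, whose `path:line:match` output is exactly the reference format required; the sample tree and the pattern searched for are invented for illustration:

```shell
#!/bin/sh
# Collect file:line references for a code pattern across a small sample
# tree (illustrative files, not a real codebase).
workdir=$(mktemp -d)
mkdir -p "$workdir/src"
printf 'const db = new Pool()\n' > "$workdir/src/a.js"
printf 'x = 1\nconst db = new Pool()\n' > "$workdir/src/b.js"

# grep -rn emits path:line:match, i.e. one file:line reference per hit.
refs=$(cd "$workdir" && grep -rn "new Pool()" src | sort)
echo "$refs"
rm -r "$workdir"
```

Each emitted line (e.g. `src/b.js:2:...`) can be pasted directly into the analysis as the required code reference.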

View File

@@ -1,17 +1,10 @@
Create comprehensive tests for the codebase.
## Planning Required
Before creating tests, you MUST:
1. Analyze existing test coverage and identify gaps
2. Study testing frameworks and conventions used
3. Plan test strategy covering unit, integration, and e2e
4. Design test data management approach
## Core Checklist
- [ ] Analyze coverage gaps
- [ ] Follow testing frameworks and conventions
- [ ] Include unit, integration, and e2e tests
- [ ] Ensure tests are reliable and deterministic
## CORE CHECKLIST ⚡
□ Analyze existing test coverage and identify gaps
□ Follow project testing frameworks and conventions
□ Include unit, integration, and end-to-end tests
□ Ensure tests are reliable and deterministic
## IMPLEMENTATION PHASES
@@ -58,13 +51,11 @@ Before creating tests, you MUST:
- Test coverage metrics and quality improvements
- File:line references for tested code
## Verification Checklist
Before finalizing, verify:
- [ ] Coverage gaps filled
- [ ] All test types included
- [ ] Tests are reliable (no flaky tests)
- [ ] Test data properly managed
- [ ] Conventions followed
## VERIFICATION CHECKLIST ✓
□ Test coverage gaps identified and filled
□ All test types included (unit + integration + e2e)
□ Tests are reliable and deterministic (no flaky tests)
□ Test data properly managed (isolation + cleanup)
□ Testing conventions followed consistently
## Focus
High-quality, reliable test suite with comprehensive coverage.
Focus: High-quality, reliable test suite with comprehensive coverage.
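The "reliable and deterministic" plus "isolation + cleanup" items above can be illustrated with a tiny self-contained test. This is a sketch, not project code: the unit under test is a stand-in `sort -r` call, and the file names are made up:

```shell
#!/bin/sh
# Minimal deterministic test with isolated data and guaranteed cleanup.
testdir=$(mktemp -d)
trap 'rm -rf "$testdir"' EXIT   # cleanup runs even if an assertion fails

# Arrange: fixed input, no shared state, no reliance on timing or order
# of other tests.
printf 'alpha\nbeta\n' > "$testdir/input.txt"

# Act: the "unit under test" here is a simple transformation.
actual=$(sort -r "$testdir/input.txt")

# Assert: comparing against an exact expected value keeps the test
# deterministic (no flakiness from partial matches or timing).
expected='beta
alpha'
[ "$actual" = "$expected" ] && result=PASS || result=FAIL
echo "$result"
```

The `mktemp -d` plus `trap ... EXIT` pair is the key pattern: every run gets a fresh directory, and teardown cannot be skipped.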

View File

@@ -1,17 +1,10 @@
Implement a new feature following project conventions and best practices.
## Planning Required
Before implementing, you MUST:
1. Study existing code patterns and conventions
2. Review project architecture and design principles
3. Plan implementation with error handling and tests
4. Document integration points and dependencies
## Core Checklist
- [ ] Study existing code patterns first
- [ ] Follow project conventions and architecture
- [ ] Include comprehensive tests
- [ ] Provide file:line references
## CORE CHECKLIST ⚡
□ Study existing code patterns BEFORE implementing
□ Follow established project conventions and architecture
□ Include comprehensive tests (unit + integration)
□ Provide file:line references for all changes
## IMPLEMENTATION PHASES
@@ -46,13 +39,11 @@ Before implementing, you MUST:
- Documentation of new dependencies or configurations
- Test coverage summary
## Verification Checklist
Before finalizing, verify:
- [ ] Follows existing patterns
- [ ] Complete test coverage
- [ ] Documentation updated
- [ ] No breaking changes
- [ ] Security and performance validated
## VERIFICATION CHECKLIST ✓
□ Implementation follows existing patterns (no divergence)
□ Complete test coverage (unit + integration)
□ Documentation updated (code comments + external docs)
□ Integration verified (no breaking changes)
□ Security and performance validated
## Focus
Production-ready implementation with comprehensive testing and documentation.
Focus: Production-ready implementation with comprehensive testing and documentation.

View File

@@ -1,17 +1,10 @@
Generate module documentation focused on understanding and usage.
Generate comprehensive module documentation focused on understanding and usage.
## Planning Required
Before providing documentation, you MUST:
1. Understand what the module does and why it exists
2. Review existing documentation to avoid duplication
3. Prepare practical usage examples
4. Identify module boundaries and dependencies
## Core Checklist
- [ ] Explain WHAT, WHY, and HOW
- [ ] Reference API.md instead of duplicating signatures
- [ ] Include practical usage examples
- [ ] Define module boundaries and dependencies
## CORE CHECKLIST ⚡
□ Explain WHAT the module does, WHY it exists, and HOW to use it
□ Do NOT duplicate API signatures from API.md; refer to it instead
□ Provide practical, real-world usage examples
□ Clearly define the module's boundaries and dependencies
## DOCUMENTATION STRUCTURE
@@ -38,12 +31,10 @@ Before providing documentation, you MUST:
### 7. Common Issues
- List common problems and their solutions.
## Verification Checklist
Before finalizing output, verify:
- [ ] Module purpose, scope, and boundaries are clear
- [ ] Core concepts are explained
- [ ] Usage examples are practical and realistic
- [ ] Dependencies and configuration are documented
## VERIFICATION CHECKLIST ✓
□ The module's purpose, scope, and boundaries are clearly defined
□ Core concepts are explained for better understanding
□ Usage examples are practical and demonstrate real-world scenarios
□ All dependencies and configuration options are documented
## Focus
Explain module purpose and usage, not just API details.
Focus: Explaining the module's purpose and usage, not just its API.

View File

@@ -1,51 +1,51 @@
# 软件架构规划模板
# AI Persona & Core Mission
## Role & Output Requirements
You are a **Distinguished Senior Software Architect and Strategic Technical Planner**. Your primary function is to conduct a meticulous and insightful analysis of provided code, project context, and user requirements to devise an exceptionally clear, comprehensive, actionable, and forward-thinking modification plan. **Critically, you will *not* write or generate any code yourself; your entire output will be a detailed modification plan articulated in precise, professional Chinese.** You are an expert in anticipating dependencies, potential impacts, and ensuring the proposed plan is robust, maintainable, and scalable.
**Role**: Software architect specializing in technical planning
**Output Format**: Modification plan in Chinese following the specified structure
**Constraints**: Do NOT write or generate code. Provide planning and strategy only.
## II. ROLE DEFINITION & CORE CAPABILITIES
1. **Role**: Distinguished Senior Software Architect and Strategic Technical Planner.
2. **Core Capabilities**:
* **Deep Code Comprehension**: Ability to rapidly understand complex existing codebases (structure, patterns, dependencies, data flow, control flow).
* **Requirements Analysis & Distillation**: Skill in dissecting user requirements, identifying core needs, and translating them into technical planning objectives.
* **Software Design Principles**: Strong grasp of SOLID, DRY, KISS, design patterns, and architectural best practices.
* **Impact Analysis & Risk Assessment**: Expertise in identifying potential side effects, inter-module dependencies, and risks associated with proposed changes.
* **Strategic Planning**: Ability to formulate logical, step-by-step modification plans that are efficient and minimize disruption.
* **Clear Technical Communication (Chinese)**: Excellence in conveying complex technical plans and considerations in clear, unambiguous Chinese for a developer audience.
* **Visual Logic Representation**: Ability to sketch out intended logic flows using concise diagrammatic notations.
3. **Core Thinking Mode**:
* **Systematic & Holistic**: Approach analysis and planning with a comprehensive view of the system.
* **Critical & Forward-Thinking**: Evaluate requirements critically and plan for future maintainability and scalability.
* **Problem-Solver**: Focus on devising effective solutions through planning.
* **Chain-of-Thought (CoT) Driven**: Explicitly articulate your reasoning process, especially when making design choices within the plan.
## Core Capabilities
- Understand complex codebases (structure, patterns, dependencies, data flow)
- Analyze requirements and translate to technical objectives
- Apply software design principles (SOLID, DRY, KISS, design patterns)
- Assess impacts, dependencies, and risks
- Create step-by-step modification plans
## III. OBJECTIVES
1. **Thoroughly Understand Context**: Analyze user-provided code, modification requirements, and project background to gain a deep understanding of the existing system and the goals of the modification.
2. **Meticulous Code Analysis for Planning**: Identify all relevant code sections, their current logic, and how they interrelate, quoting relevant snippets for context.
3. **Devise Actionable Modification Plan**: Create a detailed, step-by-step plan outlining *what* changes are needed, *where* they should occur, *why* they are necessary, and the *intended logic* of the new/modified code.
4. **Illustrate Intended Logic**: For each significant logical change proposed, visually represent the *intended* new or modified control flow and data flow using a concise call flow diagram.
5. **Contextualize for Implementation**: Provide all necessary contextual information (variables, data structures, dependencies, potential side effects) to enable a developer to implement the plan accurately.
6. **Professional Chinese Output**: Produce a highly structured, professional planning document entirely in Chinese, adhering to the specified Markdown format.
7. **Show Your Work (CoT)**: Before presenting the plan, outline your analytical framework, key considerations, and how you approached the planning task.
## Planning Process (Required)
**Before providing your final plan, you MUST:**
1. Analyze requirements and identify technical objectives
2. Explore existing code structure and patterns
3. Identify modification points and formulate strategy
4. Assess dependencies and risks
5. Present structured modification plan
## IV. INPUT SPECIFICATIONS
1. **Code Snippets/File Information**: User-provided source code, file names, paths, or descriptions of relevant code sections.
2. **Modification Requirements**: Specific instructions or goals for what needs to be changed or achieved.
3. **Project Context (Optional)**: Any background information about the project or system.
## Objectives
1. Understand context (code, requirements, project background)
2. Analyze relevant code sections and their relationships
3. Create step-by-step modification plan (what, where, why, how)
4. Illustrate intended logic using call flow diagrams
5. Provide implementation context (variables, dependencies, side effects)
## V. RESPONSE STRUCTURE & CONTENT (Strictly Adhere - Output in Chinese)
## Input
- Code snippets or file locations
- Modification requirements and goals
- Project context (if available)
## Output Structure (Required)
Output in Chinese using this Markdown structure:
Your response **MUST** be in Chinese and structured in Markdown as follows:
---
### 0. 思考过程与规划策略 (Thinking Process & Planning Strategy)
Present your planning process in these steps:
1. **需求解析**: Break down requirements and clarify core objectives
2. **代码结构勘探**: Analyze current code structure and logic flow
3. **核心修改点识别**: Identify modification points and formulate strategy
4. **依赖与风险评估**: Assess dependencies and risks
5. **规划文档组织**: Organize planning document
* *(在此处,您必须结构化地展示您的分析框架和规划流程。)*
* **1. 需求解析 (Requirement Analysis):** 我首先将用户的原始需求进行拆解和澄清,确保完全理解其核心目标和边界条件。
* **2. 现有代码结构勘探 (Existing Code Exploration):** 基于提供的代码片段,我将分析其当前的结构、逻辑流和关键数据对象,以建立修改的基线。
* **3. 核心修改点识别与策略制定 (Identification of Core Modification Points & Strategy Formulation):** 我将识别出需要修改的关键代码位置,并为每个修改点制定高级别的技术策略(例如,是重构、新增还是调整)。
* **4. 依赖与风险评估 (Dependency & Risk Assessment):** 我会评估提议的修改可能带来的模块间依赖关系变化,以及潜在的风险(如性能下降、兼容性问题、边界情况处理不当等)。
* **5. 规划文档结构设计 (Plan Document Structuring):** 最后,我将依据上述分析,按照指定的格式组织并撰写这份详细的修改规划方案。
### **代码修改规划方案 (Code Modification Plan)**
@@ -93,17 +93,25 @@ Present your planning process in these steps:
---
*(对每个需要修改的文件重复上述格式)*
## Key Requirements
1. **Language**: All output in Chinese
2. **No Code Generation**: Do not write actual code. Provide descriptive modification plan only
3. **Focus**: Detail what and why. Use logic sketches to illustrate how
4. **Completeness**: State assumptions clearly when information is incomplete
## VI. STYLE & TONE (Chinese Output)
* **Professional & Authoritative**: Maintain a formal, expert tone befitting a Senior Architect.
* **Analytical & Insightful**: Demonstrate deep understanding and strategic thinking.
* **Precise & Unambiguous**: Use clear, exact technical Chinese terminology.
* **Structured & Actionable**: Ensure the plan is well-organized and provides clear guidance.
## Self-Review Checklist
Before providing final output, verify:
- [ ] Thinking process outlines structured analytical approach
- [ ] All requirements addressed in the plan
- [ ] Plan is logical, actionable, and detailed
- [ ] Modification reasons link back to requirements
- [ ] Context and risks are highlighted
- [ ] No actual code generated
## VII. KEY DIRECTIVES & CONSTRAINTS
1. **Language**: **All** descriptive parts of your plan **MUST** be in **Chinese**.
2. **No Code Generation**: **Strictly refrain** from writing, suggesting, or generating any actual code. Your output is *purely* a descriptive modification plan.
3. **Focus on What and Why, Illustrate How (Logic Sketch)**: Detail what needs to be done and why. The call flow sketch illustrates the *intended how* at a logical level, not implementation code.
4. **Completeness & Accuracy**: Ensure the plan is comprehensive. If information is insufficient, state assumptions clearly in the 思考过程 (Thinking Process) and 必要上下文 (Necessary Context).
5. **Professional Standard**: Your plan should meet the standards expected of a senior technical document, suitable for guiding development work.
## VIII. SELF-CORRECTION / REFLECTION
* Before finalizing your response, review it to ensure:
* The 思考过程 (Thinking Process) clearly outlines your structured analytical approach.
* All user requirements from 需求分析 have been addressed in the plan.
* The modification plan is logical, actionable, and sufficiently detailed, with relevant original code snippets for context.
* The 修改理由 (Reason for Modification) explicitly links back to the initial requirements.
* All crucial context and risks are highlighted.
* The entire output is in professional, clear Chinese and adheres to the specified Markdown structure.
* You have strictly avoided generating any code.

View File

@@ -0,0 +1,374 @@
# Command Ambiguity Analysis
## Executive Summary
Analysis of 74 commands reveals **5 major ambiguity clusters** that could cause user confusion. The primary issues involve overlapping functionality in planning, execution, and analysis commands, with inconsistent parameter usage and unclear decision criteria.
---
## Critical Ambiguities (HIGH Priority)
### 1. Planning Command Overload ⚠️ CRITICAL
**Problem**: 5 different "plan" commands with overlapping but distinct purposes
| Command | Purpose | Outputs | Mode |
|---------|---------|---------|------|
| `/workflow:plan` | 5-phase planning workflow | IMPL_PLAN.md + task JSONs | Autonomous |
| `/workflow:lite-plan` | Lightweight interactive planning | In-memory plan | Interactive |
| `/workflow:replan` | Modify existing plans | Updates existing artifacts | Interactive |
| `/cli:mode:plan` | Architecture planning | .chat/plan-*.md | Read-only |
| `/cli:discuss-plan` | Multi-round collaborative planning | .chat/discuss-plan-*.md | Multi-model discussion |
**Ambiguities**:
- **Intent confusion**: Users don't know which to use for "planning"
- **Output confusion**: Some create tasks, some don't
- **Workflow confusion**: Different levels of automation
- **Scope confusion**: Project-level vs architecture-level vs modification planning
**User Questions**:
- "I want to plan my project - which command do I use?"
- "What's the difference between `/workflow:plan` and `/workflow:lite-plan`?"
- "When should I use `/cli:mode:plan` vs `/workflow:plan`?"
**Recommendations**:
1. ✅ Create decision tree documentation
2. ✅ Rename commands to clarify scope:
   - `/workflow:plan` → `/workflow:project-plan` (full workflow)
   - `/workflow:lite-plan` → `/workflow:quick-plan` (fast planning)
   - `/cli:mode:plan` → `/cli:architecture-plan` (read-only)
3. ✅ Add command hints in descriptions about when to use each
---
### 2. Execution Command Confusion ⚠️ CRITICAL
**Problem**: 5 different "execute" commands with different behaviors
| Command | Input | Modifies Code | Auto-Approval | Context |
|---------|-------|---------------|---------------|---------|
| `/workflow:execute` | Session | Via agents | No | Full workflow |
| `/workflow:lite-execute` | Plan/prompt/file | Via agent/codex | User choice | Lightweight |
| `/cli:execute` | Description/task-id | YES | YOLO | Direct implementation |
| `/cli:codex-execute` | Description | YES | YOLO | Multi-stage Codex |
| `/task:execute` | task-id | Via agent | No | Single task |
**Ambiguities**:
- **Safety confusion**: Some have YOLO auto-approval, others don't
- **Input confusion**: Different input formats
- **Scope confusion**: Workflow vs task vs direct execution
- **Tool confusion**: Agent vs CLI tool execution
**Critical Risk**:
- Users may accidentally use `/cli:execute` (YOLO) when they meant `/workflow:execute` (controlled)
- This could result in unwanted code modifications
**User Questions**:
- "I have a workflow session - do I use `/workflow:execute` or `/task:execute`?"
- "What's the difference between `/cli:execute` and `/workflow:lite-execute`?"
- "Which execute command is safest for production code?"
**Recommendations**:
1. 🚨 Add safety warnings to YOLO commands
2. ✅ Clear documentation on execution modes:
- **Workflow execution**: `/workflow:execute` (controlled, session-based)
- **Quick execution**: `/workflow:lite-execute` (flexible input)
- **Direct implementation**: `/cli:execute` (⚠️ YOLO auto-approval)
3. ✅ Consider renaming:
   - `/cli:execute` → `/cli:implement-auto` (emphasizes auto-approval)
   - `/cli:codex-execute` → `/cli:codex-multi-stage`
---
### 3. Analysis Command Overlap ⚠️ MEDIUM
**Problem**: Multiple analysis commands with unclear distinctions
| Command | Tool | Purpose | Output |
|---------|------|---------|--------|
| `/cli:analyze` | Gemini/Qwen/Codex | General codebase analysis | .chat/analyze-*.md |
| `/cli:mode:code-analysis` | Gemini/Qwen/Codex | Execution path tracing | .chat/code-analysis-*.md |
| `/cli:mode:bug-diagnosis` | Gemini/Qwen/Codex | Bug root cause analysis | .chat/bug-diagnosis-*.md |
| `/cli:chat` | Gemini/Qwen/Codex | Q&A interaction | .chat/chat-*.md |
**Ambiguities**:
- **Use case overlap**: When to use general analysis vs specialized modes
- **Template confusion**: Different templates but similar outputs
- **Mode naming**: The "mode" prefix adds an extra layer of confusion
**User Questions**:
- "Should I use `/cli:analyze` or `/cli:mode:code-analysis` to understand this code?"
- "What's special about the 'mode' commands?"
**Recommendations**:
1. ✅ Consolidate or clarify:
- Keep `/cli:analyze` for general use
- Document `/cli:mode:*` as specialized templates
2. ✅ Add use case examples in descriptions
3. ✅ Consider flattening:
   - `/cli:mode:code-analysis` → `/cli:trace-execution`
   - `/cli:mode:bug-diagnosis` → `/cli:diagnose-bug`
---
## Medium Priority Ambiguities
### 4. Task vs Workflow Command Overlap
**Problem**: Parallel command hierarchies
**Workflow Commands**:
- `/workflow:plan` - Create workflow with tasks
- `/workflow:execute` - Execute all tasks
- `/workflow:replan` - Modify workflow
**Task Commands**:
- `/task:create` - Create individual task
- `/task:execute` - Execute single task
- `/task:replan` - Modify task
**Ambiguities**:
- **Scope confusion**: When to use workflow vs task commands
- **Execution confusion**: `/task:execute` vs `/workflow:execute`
**Recommendations**:
1. ✅ Document relationship clearly:
- Workflow commands: Multi-task orchestration
- Task commands: Single-task operations
2. ✅ Add cross-references in documentation
---
### 5. Tool Selection Confusion (`--tool` flag)
**Problem**: Many commands accept `--tool codex|gemini|qwen` without clear criteria
**Commands with --tool**:
- `/cli:execute --tool`
- `/cli:analyze --tool`
- `/cli:mode:plan --tool`
- `/memory:update-full --tool`
- And more...
**Ambiguities**:
- **Selection criteria**: No clear guidance on when to use which tool
- **Default inconsistency**: Different defaults across commands
- **Capability confusion**: What each tool is best for
**Recommendations**:
1. ✅ Create tool selection guide:
- **Gemini**: Best for analysis, planning (default for most)
- **Qwen**: Fallback when Gemini unavailable
- **Codex**: Best for complex implementation, multi-stage execution
2. ✅ Add tool selection hints to command descriptions
3. ✅ Document tool capabilities clearly
---
### 6. Enhancement Flag Inconsistency
**Problem**: Different enhancement flags with different meanings
| Command | Flag | Meaning |
|---------|------|---------|
| `/cli:execute` | `--enhance` | Enhance prompt via `/enhance-prompt` |
| `/cli:analyze` | `--enhance` | Enhance prompt via `/enhance-prompt` |
| `/workflow:lite-plan` | `-e` or `--explore` | Force code exploration |
| `/memory:skill-memory` | `--regenerate` | Regenerate existing files |
**Ambiguities**:
- **Flag meaning**: `-e` means different things
- **Inconsistent naming**: `--enhance` vs `--explore` vs `--regenerate`
**Recommendations**:
1. ✅ Standardize flags:
- Use `--enhance` consistently for prompt enhancement
- Use `--explore` specifically for codebase exploration
- Use `--regenerate` for file regeneration
2. ✅ Avoid short flags (`-e`) that could be ambiguous
---
## Low Priority Observations
### 7. Session Management Commands (Well-Designed ✅)
**Commands**:
- `/workflow:session:start`
- `/workflow:session:resume`
- `/workflow:session:complete`
- `/workflow:session:list`
**Analysis**: These are **well-designed** with clear, distinct purposes. No ambiguity found.
---
### 8. Memory Commands (Acceptable)
Memory commands follow consistent patterns but could benefit from better organization:
- `/memory:load`
- `/memory:docs`
- `/memory:skill-memory`
- `/memory:code-map-memory`
- `/memory:update-full`
- `/memory:update-related`
**Minor Issue**: Many memory commands, but purposes are relatively clear.
---
## Parameter Ambiguity Analysis
### Common Parameter Patterns
| Parameter | Commands Using It | Ambiguity Level |
|-----------|-------------------|-----------------|
| `--tool` | 10+ commands | HIGH - Inconsistent defaults |
| `--enhance` | 5+ commands | MEDIUM - Similar but not identical |
| `--session` | 8+ commands | LOW - Consistent meaning |
| `--cli-execute` | 3+ commands | LOW - Clear meaning |
| `-e` / `--explore` | 2+ commands | HIGH - Different meanings |
---
## Output Ambiguity Analysis
### Output Location Confusion
Multiple commands output to similar locations:
**`.chat/` outputs** (read-only analysis):
- `/cli:analyze` → `.chat/analyze-*.md`
- `/cli:mode:plan` → `.chat/plan-*.md`
- `/cli:discuss-plan` → `.chat/discuss-plan-*.md`
- `/cli:execute` → `.chat/execute-*.md` (❌ Misleading - actually modifies code!)
**Ambiguity**:
- Users might think all `.chat/` outputs are read-only
- `/cli:execute` outputs to `.chat/` but modifies code (YOLO)
**Recommendation**:
- ✅ Separate execution logs from analysis logs
- ✅ Use different directory for code-modifying operations
---
## Decision Tree Recommendations
### When to Use Planning Commands
```
START: I need to plan something
├─ Is this a new full project workflow?
│ └─ YES → /workflow:plan (5-phase, creates tasks)
├─ Do I need quick planning without full workflow?
│ └─ YES → /workflow:lite-plan (fast, interactive)
├─ Do I need architecture-level planning only?
│ └─ YES → /cli:mode:plan (read-only, no tasks)
├─ Do I need multi-perspective discussion?
│ └─ YES → /cli:discuss-plan (Gemini + Codex + Claude)
└─ Am I modifying an existing plan?
└─ YES → /workflow:replan (modify artifacts)
```
### When to Use Execution Commands
```
START: I need to execute/implement something
├─ Do I have an active workflow session with tasks?
│ └─ YES → /workflow:execute (execute all tasks)
├─ Do I have a single task ID to execute?
│ └─ YES → /task:execute IMPL-N (single task)
├─ Do I have a plan or description to execute quickly?
│ └─ YES → /workflow:lite-execute (flexible input)
├─ Do I want direct, autonomous implementation (⚠️ YOLO)?
│ ├─ Single-stage → /cli:execute (auto-approval)
│ └─ Multi-stage → /cli:codex-execute (complex tasks)
└─ ⚠️ WARNING: CLI execute commands modify code without confirmation
```
### When to Use Analysis Commands
```
START: I need to analyze code
├─ General codebase understanding?
│ └─ /cli:analyze (broad analysis)
├─ Specific execution path tracing?
│ └─ /cli:mode:code-analysis (detailed flow)
├─ Bug diagnosis?
│ └─ /cli:mode:bug-diagnosis (root cause)
└─ Quick Q&A?
└─ /cli:chat (interactive)
```
---
## Summary of Findings
### Ambiguity Count by Severity
| Severity | Count | Commands Affected |
|----------|-------|-------------------|
| 🚨 CRITICAL | 2 | Planning (5 cmds), Execution (5 cmds) |
| ⚠️ HIGH | 2 | Tool selection, Enhancement flags |
| MEDIUM | 3 | Analysis, Task/Workflow overlap, Output locations |
| ✅ LOW | Multiple | Most other commands acceptable |
### Key Recommendations Priority
1. **🚨 URGENT**: Add safety warnings to YOLO execution commands
2. **🚨 URGENT**: Create decision trees for planning and execution commands
3. **⚠️ HIGH**: Standardize tool selection criteria documentation
4. **⚠️ HIGH**: Clarify enhancement flag meanings
5. **MEDIUM**: Reorganize output directories by operation type
6. **MEDIUM**: Consider renaming most ambiguous commands
---
## Recommended Actions
### Immediate (Week 1)
1. ✅ Add decision trees to documentation
2. ✅ Add ⚠️ WARNING labels to YOLO commands
3. ✅ Create "Which command should I use?" guide
### Short-term (Month 1)
1. ✅ Standardize flag meanings across commands
2. ✅ Add tool selection guide
3. ✅ Clarify command descriptions
### Long-term (Future)
1. 🤔 Consider command consolidation or renaming
2. 🤔 Reorganize output directory structure
3. 🤔 Add interactive command selector tool
---
## Conclusion
The command system is **powerful but complex**. The main ambiguities stem from:
- Multiple commands with similar names serving different purposes
- Inconsistent parameter usage
- Unclear decision criteria for command selection
**Overall Assessment**: The codebase has a well-structured command system, but would benefit significantly from:
1. Better documentation (decision trees, use case examples)
2. Clearer naming conventions
3. Consistent parameter patterns
4. Safety warnings for destructive operations
**Risk Level**: MEDIUM - Experienced users can navigate, but new users will struggle. The YOLO execution commands pose the highest risk of accidental misuse.

View File

@@ -0,0 +1,654 @@
# Output Directory Reorganization Recommendations
## Executive Summary
Current output directory structure mixes different operation types (read-only analysis, code modifications, planning artifacts) in the same directories, leading to confusion and poor organization. This document proposes a **semantic directory structure** that separates outputs by purpose and operation type.
**Impact**: Affects 30+ commands, requires phased migration
**Priority**: MEDIUM (improves clarity, not critical functionality)
**Effort**: 2-4 weeks for full implementation
---
## Current Structure Analysis
### Active Session Structure
```
.workflow/active/WFS-{session-id}/
├── workflow-session.json # Session metadata
├── IMPL_PLAN.md # Planning document
├── TODO_LIST.md # Progress tracking
├── .chat/ # ⚠️ MIXED PURPOSE
│ ├── analyze-*.md # Read-only analysis
│ ├── plan-*.md # Read-only planning
│ ├── discuss-plan-*.md # Read-only discussion
│ ├── execute-*.md # ⚠️ Code-modifying execution
│ └── chat-*.md # Q&A interactions
├── .summaries/ # Task completion summaries
│ ├── IMPL-*-summary.md
│ └── TEST-FIX-*-summary.md
├── .task/ # Task definitions
│ ├── IMPL-001.json
│ └── IMPL-001.1.json
└── .process/ # ⚠️ MIXED PURPOSE
├── context-package.json # Planning context
├── test-context-package.json # Test context
├── phase2-analysis.json # Temporary analysis
├── CONFLICT_RESOLUTION.md # Planning artifact
├── ACTION_PLAN_VERIFICATION.md # Verification report
└── backup/ # Backup storage
└── replan-{timestamp}/
```
### Scratchpad Structure (No Session)
```
.workflow/.scratchpad/
├── analyze-*.md
├── execute-*.md
├── chat-*.md
└── plan-*.md
```
---
## Problems Identified
### 1. **Semantic Confusion** 🚨 CRITICAL
**Problem**: `.chat/` directory contains both:
- ✅ Read-only operations (analyze, chat, plan)
- ⚠️ Code-modifying operations (execute)
**Impact**: Users assume `.chat/` is safe (read-only), but some files represent dangerous operations
**Example**:
```bash
# These both output to .chat/ but have VERY different impacts:
/cli:analyze "review auth code" # Read-only → .chat/analyze-*.md
/cli:execute "implement auth feature" # ⚠️ MODIFIES CODE → .chat/execute-*.md
```
### 2. **Purpose Overload**
**Problem**: `.process/` used for multiple unrelated purposes:
- Planning artifacts (context-package.json)
- Temporary analysis (phase2-analysis.json)
- Verification reports (ACTION_PLAN_VERIFICATION.md)
- Backup storage (backup/)
**Impact**: Difficult to understand what's in `.process/`
### 3. **Inconsistent Organization**
**Problem**: Different commands use different naming patterns:
- Some use timestamps: `analyze-{timestamp}.md`
- Some use topics: `plan-{topic}.md`
- Some use task IDs: `IMPL-001-summary.md`
**Impact**: Hard to find specific outputs
### 4. **No Operation Type Distinction**
**Problem**: Can't distinguish operation type from directory structure:
- Analysis outputs mixed with execution logs
- Planning discussions mixed with implementation records
- No clear audit trail
**Impact**: Poor traceability, difficult debugging
---
## Proposed New Structure
### Design Principles
1. **Semantic Organization**: Directories reflect operation type and safety level
2. **Clear Hierarchy**: Separate by purpose → type → chronology
3. **Safety Indicators**: Code-modifying operations clearly separated
4. **Consistent Naming**: Standard patterns across all commands
5. **Backward Compatible**: Old structure accessible during migration
---
## Recommended Structure v2.0
```
.workflow/active/WFS-{session-id}/
├── ## Core Artifacts (Root Level)
├── workflow-session.json
├── IMPL_PLAN.md
├── TODO_LIST.md
├── ## Task Definitions
├── tasks/ # (renamed from .task/)
│ ├── IMPL-001.json
│ └── IMPL-001.1.json
├── ## 🟢 READ-ONLY Operations (Safe)
├── analysis/ # (split from .chat/)
│ ├── code/
│ │ ├── 2024-01-15T10-30-auth-patterns.md
│ │ └── 2024-01-15T11-45-api-structure.md
│ ├── architecture/
│ │ └── 2024-01-14T09-00-caching-layer.md
│ └── bugs/
│ └── 2024-01-16T14-20-login-bug-diagnosis.md
├── planning/ # (split from .chat/)
│ ├── discussions/
│ │ └── 2024-01-13T15-00-auth-strategy-3rounds.md
│ ├── architecture/
│ │ └── 2024-01-13T16-30-database-design.md
│ └── revisions/
│ └── 2024-01-17T10-00-replan-add-2fa.md
├── interactions/ # (split from .chat/)
│ ├── 2024-01-15T10-00-question-about-jwt.md
│ └── 2024-01-15T14-30-how-to-test-auth.md
├── ## ⚠️ CODE-MODIFYING Operations (Dangerous)
├── executions/ # (split from .chat/)
│ ├── implementations/
│ │ ├── 2024-01-15T11-00-impl-jwt-auth.md
│ │ ├── 2024-01-15T12-30-impl-user-api.md
│ │ └── metadata.json # Execution metadata
│ ├── test-fixes/
│ │ └── 2024-01-16T09-00-fix-auth-tests.md
│ └── refactors/
│ └── 2024-01-16T15-00-refactor-middleware.md
├── ## Completion Records
├── summaries/ # (kept same)
│ ├── implementations/
│ │ ├── IMPL-001-jwt-authentication.md
│ │ └── IMPL-002-user-endpoints.md
│ ├── tests/
│ │ └── TEST-FIX-001-auth-validation.md
│ └── index.json # Quick lookup
├── ## Planning Context & Artifacts
├── context/ # (split from .process/)
│ ├── project/
│ │ ├── context-package.json
│ │ └── test-context-package.json
│ ├── brainstorm/
│ │ ├── guidance-specification.md
│ │ ├── synthesis-output.md
│ │ └── roles/
│ │ ├── api-designer-analysis.md
│ │ └── system-architect-analysis.md
│ └── conflicts/
│ └── 2024-01-14T10-00-resolution.md
├── ## Verification & Quality
├── quality/ # (split from .process/)
│ ├── verifications/
│ │ └── 2024-01-15T09-00-action-plan-verify.md
│ ├── reviews/
│ │ ├── 2024-01-17T11-00-security-review.md
│ │ └── 2024-01-17T12-00-architecture-review.md
│ └── tdd-compliance/
│ └── 2024-01-16T16-00-cycle-analysis.md
├── ## History & Backups
├── history/ # (renamed from .process/backup/)
│ ├── replans/
│ │ └── 2024-01-17T10-00-add-2fa/
│ │ ├── MANIFEST.md
│ │ ├── IMPL_PLAN.md
│ │ └── tasks/
│ └── snapshots/
│ └── 2024-01-15T00-00-milestone-1/
└── ## Temporary Working Data
└── temp/ # (for transient analysis)
└── phase2-analysis.json
```
### Scratchpad Structure v2.0
```
.workflow/.scratchpad/
├── analysis/
├── planning/
├── interactions/
└── executions/ # ⚠️ Code-modifying
```
---
## Directory Purpose Reference
| Directory | Purpose | Safety | Retention |
|-----------|---------|--------|-----------|
| `analysis/` | Code understanding, bug diagnosis | 🟢 Read-only | Keep indefinitely |
| `planning/` | Architecture plans, discussions | 🟢 Read-only | Keep indefinitely |
| `interactions/` | Q&A, chat sessions | 🟢 Read-only | Keep 30 days |
| `executions/` | Implementation logs | ⚠️ Modifies code | Keep indefinitely |
| `summaries/` | Task completion records | 🟢 Reference | Keep indefinitely |
| `context/` | Planning context, brainstorm | 🟢 Reference | Keep indefinitely |
| `quality/` | Reviews, verifications | 🟢 Reference | Keep indefinitely |
| `history/` | Backups, snapshots | 🟢 Archive | Keep indefinitely |
| `temp/` | Transient analysis data | 🟢 Temporary | Clean on completion |
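The retention column above implies a periodic cleanup step. A minimal sketch follows; the session path, the 30-day window, and clearing `temp/` on completion are assumptions drawn from the table, not an existing command:

```shell
# Sketch: enforce the retention policies above (paths and windows are assumed)
session=".workflow/active/WFS-example"
mkdir -p "$session/interactions" "$session/temp"  # ensure dirs exist for the sketch

# interactions/ is kept for 30 days
find "$session/interactions" -type f -name '*.md' -mtime +30 -delete

# temp/ is cleaned on completion
rm -rf "$session/temp"
mkdir -p "$session/temp"
```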
---
## Naming Convention Standards
### Timestamp-based Files
**Format**: `YYYY-MM-DDTHH-MM-{description}.md`
**Examples**:
- `2024-01-15T10-30-auth-patterns.md`
- `2024-01-15T11-45-jwt-implementation.md`
**Benefits**:
- Chronological sorting
- Unique identifiers
- Easy to find by date
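Generating a name in this format is straightforward with standard `date` formatting (a sketch; the `topic` value is illustrative):

```shell
# Build a filename following the YYYY-MM-DDTHH-MM-{description}.md convention
timestamp=$(date +%Y-%m-%dT%H-%M)
topic="auth-patterns"
filename="${timestamp}-${topic}.md"
echo "$filename"
```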
### Task-based Files
**Format**: `{TASK-ID}-{description}.md`
**Examples**:
- `IMPL-001-jwt-authentication.md`
- `TEST-FIX-002-login-validation.md`
**Benefits**:
- Clear task association
- Easy to find by task ID
### Metadata Files
**Format**: `{type}.json` or `{type}-metadata.json`
**Examples**:
- `context-package.json`
- `execution-metadata.json`
- `index.json`
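As an illustration, the `summaries/index.json` lookup file might look like this (the field names are assumptions, not an existing schema):

```json
{
  "implementations": {
    "IMPL-001": "implementations/IMPL-001-jwt-authentication.md",
    "IMPL-002": "implementations/IMPL-002-user-endpoints.md"
  },
  "tests": {
    "TEST-FIX-001": "tests/TEST-FIX-001-auth-validation.md"
  }
}
```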
---
## Command Output Mapping
### Analysis Commands → `analysis/`
| Command | Old Location | New Location |
|---------|-------------|--------------|
| `/cli:analyze` | `.chat/analyze-*.md` | `analysis/code/{timestamp}-{topic}.md` |
| `/cli:mode:code-analysis` | `.chat/code-analysis-*.md` | `analysis/code/{timestamp}-{topic}.md` |
| `/cli:mode:bug-diagnosis` | `.chat/bug-diagnosis-*.md` | `analysis/bugs/{timestamp}-{topic}.md` |
### Planning Commands → `planning/`
| Command | Old Location | New Location |
|---------|-------------|--------------|
| `/cli:mode:plan` | `.chat/plan-*.md` | `planning/architecture/{timestamp}-{topic}.md` |
| `/cli:discuss-plan` | `.chat/discuss-plan-*.md` | `planning/discussions/{timestamp}-{topic}.md` |
| `/workflow:replan` | (modifies artifacts) | `planning/revisions/{timestamp}-{reason}.md` |
### Execution Commands → `executions/`
| Command | Old Location | New Location |
|---------|-------------|--------------|
| `/cli:execute` | `.chat/execute-*.md` | `executions/implementations/{timestamp}-{description}.md` |
| `/cli:codex-execute` | `.chat/codex-*.md` | `executions/implementations/{timestamp}-{description}.md` |
| `/workflow:execute` | (multiple) | `executions/implementations/{timestamp}-{task-id}.md` |
| `/workflow:test-cycle-execute` | (various) | `executions/test-fixes/{timestamp}-cycle-{n}.md` |
### Quality Commands → `quality/`
| Command | Old Location | New Location |
|---------|-------------|--------------|
| `/workflow:action-plan-verify` | `.process/ACTION_PLAN_VERIFICATION.md` | `quality/verifications/{timestamp}-action-plan.md` |
| `/workflow:review` | (inline) | `quality/reviews/{timestamp}-{type}.md` |
| `/workflow:tdd-verify` | (inline) | `quality/tdd-compliance/{timestamp}-verify.md` |
### Context Commands → `context/`
| Data Type | Old Location | New Location |
|-----------|-------------|--------------|
| Context packages | `.process/context-package.json` | `context/project/context-package.json` |
| Brainstorm artifacts | `.process/` | `context/brainstorm/` |
| Conflict resolution | `.process/CONFLICT_RESOLUTION.md` | `context/conflicts/{timestamp}-resolution.md` |
---
## Migration Strategy
### Phase 1: Dual Write (Week 1-2)
**Goal**: Write to both old and new locations
**Implementation**:
```bash
# Example for /cli:analyze
old_path=".workflow/active/$session/.chat/analyze-$timestamp.md"
new_path=".workflow/active/$session/analysis/code/$timestamp-$topic.md"
# Write to both locations (Write() is the agent's file-write tool, not a shell command)
Write($old_path, content)
Write($new_path, content)
# Add migration notice to old location
echo "⚠️ This file has moved to: $new_path" >> "$old_path"
```
**Changes**:
- Update all commands to write to new structure
- Keep writing to old structure for compatibility
- Add deprecation notices
**Commands to Update**: 30+ commands
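For commands that run as shell steps rather than agent tool calls, the same dual-write pattern can be factored into a helper function. This is a sketch under assumed conventions; `dual_write` is hypothetical, not an existing command:

```shell
# Hypothetical dual_write helper: write to the new location, mirror to the
# old one, and append a deprecation notice to the old copy
dual_write() {
  old_path="$1"; new_path="$2"; content="$3"
  mkdir -p "$(dirname "$old_path")" "$(dirname "$new_path")"
  printf '%s\n' "$content" > "$new_path"
  printf '%s\n' "$content" > "$old_path"
  echo "⚠️ This file has moved to: $new_path" >> "$old_path"
}

dual_write ".chat/analyze-demo.md" "analysis/code/demo.md" "# Analysis notes"
```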
### Phase 2: Dual Read (Week 3)
**Goal**: Read from new location, fallback to old
**Implementation**:
```bash
# Example read logic: prefer the new path, fall back to the old one
if [ -f "$new_path" ]; then
  content=$(cat "$new_path")
elif [ -f "$old_path" ]; then
  content=$(cat "$old_path")
  # Migrate on read
  mkdir -p "$(dirname "$new_path")"
  cp "$old_path" "$new_path"
  echo "✓ Migrated: $old_path → $new_path"
fi
```
**Changes**:
- Update read logic in all commands
- Automatic migration on read
- Log migrations for verification
### Phase 3: Legacy Deprecation (Week 4)
**Goal**: Stop writing to old locations
**Implementation**:
```bash
# Stop dual write, only write to new structure
new_path=".workflow/active/$session/analysis/code/$timestamp-$topic.md"
Write($new_path, content)
# No longer write to old_path
```
**Changes**:
- Remove old write logic
- Keep read fallback for 1 release cycle
- Update documentation
### Phase 4: Full Migration (Future Release)
**Goal**: Remove old structure entirely
**Implementation**:
```bash
# One-time migration script
/workflow:migrate-outputs --session all --dry-run
/workflow:migrate-outputs --session all --execute
```
**Migration Script**:
```bash
#!/bin/bash
# migrate-outputs.sh
session_dir="$1"

# Ensure target directories exist before moving files
mkdir -p "$session_dir/analysis/code" \
         "$session_dir/executions/implementations" \
         "$session_dir/planning/architecture" \
         "$session_dir/interactions" \
         "$session_dir/context/project" \
         "$session_dir/history"

# Migrate .chat/ files (match on the filename, not the full path,
# so the .chat/ directory name doesn't trigger the *chat* case)
for file in "$session_dir/.chat"/*; do
    [ -e "$file" ] || continue
    case "${file##*/}" in
        *analyze*)  mv "$file" "$session_dir/analysis/code/" ;;
        *execute*)  mv "$file" "$session_dir/executions/implementations/" ;;
        *plan*)     mv "$file" "$session_dir/planning/architecture/" ;;
        *chat*)     mv "$file" "$session_dir/interactions/" ;;
    esac
done

# Migrate .process/ files (if present)
[ -f "$session_dir/.process/context-package.json" ] && \
    mv "$session_dir/.process/context-package.json" "$session_dir/context/project/"
[ -d "$session_dir/.process/backup" ] && \
    mv "$session_dir/.process/backup" "$session_dir/history/"

# Remove old directories (only succeeds when empty)
rmdir "$session_dir/.chat" "$session_dir/.process" 2>/dev/null
echo "✓ Migration complete: $session_dir"
```
---
## Implementation Checklist
### Week 1-2: Dual Write Setup
**Core Commands** (Priority 1):
- [ ] `/cli:analyze``analysis/code/`
- [ ] `/cli:execute``executions/implementations/`
- [ ] `/cli:mode:plan``planning/architecture/`
- [ ] `/workflow:execute``executions/implementations/`
- [ ] `/workflow:action-plan-verify``quality/verifications/`
**Planning Commands** (Priority 2):
- [ ] `/cli:discuss-plan``planning/discussions/`
- [ ] `/workflow:replan``planning/revisions/`
- [ ] `/workflow:plan` → (updates `context/project/`)
**Context Commands** (Priority 3):
- [ ] `/workflow:tools:context-gather``context/project/`
- [ ] `/workflow:brainstorm:*``context/brainstorm/`
- [ ] `/workflow:tools:conflict-resolution``context/conflicts/`
### Week 3: Dual Read + Auto-Migration
**Read Logic Updates**:
- [ ] Update all Read() calls with fallback logic
- [ ] Add migration-on-read for all file types
- [ ] Log all automatic migrations
**Testing**:
- [ ] Test with existing sessions
- [ ] Test with new sessions
- [ ] Verify backward compatibility
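The backward-compatibility check above can be exercised in a throwaway directory. A minimal sketch, with illustrative paths and filenames rather than the commands' real output names:

```shell
# Sketch: exercise the migrate-on-read fallback in a scratch session dir
session="$(mktemp -d)/WFS-test"
old_path="$session/.chat/analyze-2024-01-15.md"
new_path="$session/analysis/code/2024-01-15T10-30-auth.md"

# Simulate a legacy session that only has the old file
mkdir -p "$(dirname "$old_path")"
echo "analysis content" > "$old_path"

# Dual-read logic under test: prefer the new path, migrate the old one on read
if [ -f "$new_path" ]; then
    content=$(cat "$new_path")
elif [ -f "$old_path" ]; then
    content=$(cat "$old_path")
    mkdir -p "$(dirname "$new_path")"
    cp "$old_path" "$new_path"
fi
```

After the run, the file should exist at both locations and `content` should hold the original text.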
### Week 4: Documentation + Deprecation
**Documentation Updates**:
- [ ] Update command documentation with new paths
- [ ] Add migration guide for users
- [ ] Document new directory structure
- [ ] Add "Directory Purpose Reference" to docs
**Deprecation Notices**:
- [ ] Add notices to old command outputs
- [ ] Update error messages with new paths
- [ ] Create migration FAQ
---
## Benefits Analysis
### Immediate Benefits
**1. Safety Clarity** 🟢
- Clear separation: Read-only vs Code-modifying operations
- Users can quickly identify dangerous operations
- Reduces accidental code modifications
**2. Better Organization** 📁
- Semantic structure reflects operation purpose
- Easy to find specific outputs
- Clear audit trail
**3. Improved Traceability** 🔍
- Execution logs separated by type
- Planning discussions organized chronologically
- Quality checks easily accessible
### Long-term Benefits
**4. Scalability** 📈
- Structure scales to 100+ sessions
- Easy to add new operation types
- Consistent organization patterns
**5. Automation Potential** 🤖
- Programmatic analysis of outputs
- Automated cleanup of old files
- Better CI/CD integration
**6. User Experience** 👥
- Intuitive directory structure
- Self-documenting organization
- Easier onboarding for new users
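The automated-cleanup potential could be sketched as follows. The 30-day retention window, the helper name, and the restriction to read-only directories are assumptions, not decided policy:

```shell
# Sketch: prune read-only outputs older than a retention window.
# executions/ is deliberately never touched by automatic cleanup.
prune_session_outputs() {
  local session_dir="$1" retention_days="${2:-30}" dir
  for dir in analysis planning interactions; do
    if [ -d "$session_dir/$dir" ]; then
      find "$session_dir/$dir" -type f -name "*.md" -mtime +"$retention_days" -delete
    fi
  done
}
```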
---
## Risk Assessment
### Migration Risks
| Risk | Severity | Mitigation |
|------|----------|------------|
| **Breaking Changes** | HIGH | Phased migration with dual write/read |
| **Data Loss** | MEDIUM | Automatic migration on read, keep backups |
| **User Confusion** | MEDIUM | Clear documentation, migration guide |
| **Command Failures** | LOW | Fallback to old locations during transition |
| **Performance Impact** | LOW | Dual write adds minimal overhead |
### Rollback Strategy
If migration causes issues:
**Phase 1 Rollback** (Dual Write):
- Stop writing to new locations
- Continue using old structure
- No data loss
**Phase 2 Rollback** (Dual Read):
- Disable migration-on-read
- Continue reading from old locations
- New files still in new structure (OK)
**Phase 3+ Rollback**:
- Run reverse migration script
- Copy new structure files back to old locations
- May require manual intervention
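The reverse migration script could mirror the forward script's mapping. A sketch, with an illustrative function name and only a subset of the paths handled:

```shell
# Sketch: reverse migration — copy new-structure files back to legacy locations
reverse_migrate() {
  local session_dir="$1" f
  mkdir -p "$session_dir/.chat" "$session_dir/.process"
  # Flatten the typed directories back into the legacy .chat/
  for f in "$session_dir"/analysis/code/* \
           "$session_dir"/executions/implementations/* \
           "$session_dir"/planning/architecture/* \
           "$session_dir"/interactions/*; do
    if [ -e "$f" ]; then cp "$f" "$session_dir/.chat/"; fi
  done
  # Restore context artifacts handled by the forward script
  if [ -f "$session_dir/context/project/context-package.json" ]; then
    cp "$session_dir/context/project/context-package.json" "$session_dir/.process/"
  fi
  echo "✓ Reverse migration complete: $session_dir"
}
```

Copying rather than moving keeps the new structure intact, so the rollback itself is reversible.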
---
## Alternative Approaches Considered
### Alternative 1: Flat Structure with Prefixes
```
.workflow/active/WFS-{session}/
├── ANALYSIS_2024-01-15_auth-patterns.md
├── EXEC_2024-01-15_jwt-impl.md
└── PLAN_2024-01-14_architecture.md
```
**Rejected**: Too many files in one directory, poor organization
### Alternative 2: Single "logs/" Directory
```
.workflow/active/WFS-{session}/
└── logs/
├── 2024-01-15T10-30-analyze-auth.md
└── 2024-01-15T11-00-execute-jwt.md
```
**Rejected**: Doesn't solve semantic confusion
### Alternative 3: Minimal Change (Status Quo++)
```
.workflow/active/WFS-{session}/
├── .chat/ # Rename to .interactions/
├── .exec/ # NEW: Split executions out
├── .summaries/
└── .process/
```
**Partially Adopted**: Kept in reserve as a "lite" fallback if the full migration proves too complex
---
## Recommended Timeline
### Immediate (This Sprint)
1. ✅ Document current structure
2. ✅ Create proposed structure v2.0
3. ✅ Get stakeholder approval
### Short-term (Next 2 Sprints - 4 weeks)
1. 📝 Implement Phase 1: Dual Write
2. 🔍 Implement Phase 2: Dual Read
3. 📢 Implement Phase 3: Deprecation
### Long-term (Future Release)
1. 🗑️ Implement Phase 4: Full Migration
2. 🧹 Remove old structure code
3. 📚 Update all documentation
---
## Success Metrics
### Quantitative
- ✅ 100% of commands updated to new structure
- ✅ 0 data loss during migration
- ✅ <5% increase in execution time (dual write overhead)
- ✅ 90% of sessions migrated within 1 month
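The session-migration metric could be measured with a small script. The heuristic for "migrated" (new `analysis/` present, legacy `.chat/` gone) and the function name are assumptions:

```shell
# Sketch: report migration progress across sessions under a workflow root
migration_progress() {
  local root="${1:-.workflow/active}" total=0 migrated=0 s
  for s in "$root"/WFS-*/; do
    [ -d "$s" ] || continue
    total=$((total + 1))
    # Heuristic: new analysis/ dir exists and legacy .chat/ is gone
    if [ -d "${s}analysis" ] && [ ! -d "${s}.chat" ]; then
      migrated=$((migrated + 1))
    fi
  done
  if [ "$total" -gt 0 ]; then
    echo "$((100 * migrated / total))% migrated ($migrated/$total)"
  fi
}
```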
### Qualitative
- ✅ User feedback: "Easier to find outputs"
- ✅ User feedback: "Clearer which operations are safe"
- ✅ Developer feedback: "Easier to maintain"
---
## Conclusion
The proposed directory reorganization addresses critical semantic confusion in the current structure by:
1. **Separating read-only from code-modifying operations** (safety)
2. **Organizing by purpose** (usability)
3. **Using consistent naming** (maintainability)
4. **Providing clear migration path** (feasibility)
**Recommendation**: Proceed with phased migration starting with dual-write implementation.
**Next Steps**:
1. Review and approve proposed structure
2. Identify pilot commands for Phase 1
3. Create detailed implementation tasks
4. Begin dual-write implementation
**Questions for Discussion**:
1. Should we use "lite" version (minimal changes) or full v2.0?
2. What's the acceptable timeline for full migration?
3. Are there any other directory purposes we should consider?
4. Should we add more automation (e.g., auto-cleanup old files)?


@@ -0,0 +1,404 @@
# Output Structure: Before vs After
## Quick Visual Comparison
### Current Structure (v1.0) - ⚠️ Problematic
```
.workflow/active/WFS-session/
├── .chat/ ⚠️ MIXED: Safe + Dangerous operations
│ ├── analyze-*.md ✅ Read-only
│ ├── plan-*.md ✅ Read-only
│ ├── chat-*.md ✅ Read-only
│ └── execute-*.md ⚠️ MODIFIES CODE!
├── .summaries/ ✅ OK
├── .task/ ✅ OK
└── .process/ ⚠️ MIXED: Multiple purposes
├── context-package.json (planning context)
├── phase2-analysis.json (temp data)
├── CONFLICT_RESOLUTION.md (planning artifact)
└── backup/ (history)
```
**Problems**:
- ❌ `.chat/` mixes safe (read-only) and dangerous (code-modifying) operations
- ❌ `.process/` serves too many purposes
- ❌ No clear organization by operation type
- ❌ Hard to find specific outputs
---
### Proposed Structure (v2.0) - ✅ Clear & Semantic
```
.workflow/active/WFS-session/
├── 🟢 SAFE: Read-only Operations
│ ├── analysis/ Split from .chat/
│ │ ├── code/ Code understanding
│ │ ├── architecture/ Architecture analysis
│ │ └── bugs/ Bug diagnosis
│ │
│ ├── planning/ Split from .chat/
│ │ ├── discussions/ Multi-round planning
│ │ ├── architecture/ Architecture plans
│ │ └── revisions/ Replan history
│ │
│ └── interactions/ Split from .chat/
│ └── *-chat.md Q&A sessions
├── ⚠️ DANGEROUS: Code-modifying Operations
│ └── executions/ Split from .chat/
│ ├── implementations/ Code implementations
│ ├── test-fixes/ Test fixes
│ └── refactors/ Refactoring
├── 📊 RECORDS: Completion & Quality
│ ├── summaries/ Keep same (task completions)
│ │
│ └── quality/ Split from .process/
│ ├── verifications/ Plan verifications
│ ├── reviews/ Code reviews
│ └── tdd-compliance/ TDD checks
├── 📦 CONTEXT: Planning Artifacts
│ └── context/ Split from .process/
│ ├── project/ Context packages
│ ├── brainstorm/ Brainstorm artifacts
│ └── conflicts/ Conflict resolutions
├── 📜 HISTORY: Backups & Archives
│ └── history/ Rename from .process/backup/
│ ├── replans/ Replan backups
│ └── snapshots/ Session snapshots
└── 📋 TASKS: Definitions
└── tasks/ Rename from .task/
```
**Benefits**:
- ✅ Clear separation: Safe vs Dangerous operations
- ✅ Semantic organization by purpose
- ✅ Easy to find outputs by type
- ✅ Self-documenting structure
---
## Key Changes Summary
### 1. Split `.chat/` by Safety Level
| Current | New | Safety |
|---------|-----|--------|
| `.chat/analyze-*.md` | `analysis/code/` | 🟢 Safe |
| `.chat/plan-*.md` | `planning/architecture/` | 🟢 Safe |
| `.chat/chat-*.md` | `interactions/` | 🟢 Safe |
| `.chat/execute-*.md` | `executions/implementations/` | ⚠️ Dangerous |
### 2. Split `.process/` by Purpose
| Current | New | Purpose |
|---------|-----|---------|
| `.process/context-package.json` | `context/project/` | Planning context |
| `.process/CONFLICT_RESOLUTION.md` | `context/conflicts/` | Planning artifact |
| `.process/ACTION_PLAN_VERIFICATION.md` | `quality/verifications/` | Quality check |
| `.process/backup/` | `history/replans/` | Backups |
| `.process/phase2-analysis.json` | `temp/` | Temporary data |
### 3. Rename for Clarity
| Current | New | Reason |
|---------|-----|--------|
| `.task/` | `tasks/` | Remove dot prefix (not hidden) |
| `.summaries/` | `summaries/` | Remove dot prefix; content unchanged |
---
## Command Output Changes (Examples)
### Analysis Commands
```bash
# Current (v1.0)
/cli:analyze "review auth code"
→ .chat/analyze-2024-01-15.md ⚠️ Mixed with dangerous ops
# Proposed (v2.0)
/cli:analyze "review auth code"
→ analysis/code/2024-01-15T10-30-auth.md ✅ Clearly safe
```
### Execution Commands
```bash
# Current (v1.0)
/cli:execute "implement auth"
→ .chat/execute-2024-01-15.md ⚠️ Looks safe, but dangerous!
# Proposed (v2.0)
/cli:execute "implement auth"
→ executions/implementations/2024-01-15T11-00-auth.md ⚠️ Clearly dangerous
```
### Planning Commands
```bash
# Current (v1.0)
/cli:discuss-plan "design caching"
→ .chat/discuss-plan-2024-01-15.md ⚠️ Mixed with dangerous ops
# Proposed (v2.0)
/cli:discuss-plan "design caching"
→ planning/discussions/2024-01-15T15-00-caching-3rounds.md ✅ Clearly safe
```
---
## Migration Impact
### Affected Commands: ~30
**Analysis Commands** (6):
- `/cli:analyze`
- `/cli:mode:code-analysis`
- `/cli:mode:bug-diagnosis`
- `/cli:chat`
- `/memory:code-map-memory`
- `/workflow:review`
**Planning Commands** (5):
- `/cli:mode:plan`
- `/cli:discuss-plan`
- `/workflow:plan`
- `/workflow:replan`
- `/workflow:brainstorm:*`
**Execution Commands** (8):
- `/cli:execute`
- `/cli:codex-execute`
- `/workflow:execute`
- `/workflow:lite-execute`
- `/task:execute`
- `/workflow:test-cycle-execute`
- `/workflow:test-fix-gen`
- `/workflow:test-gen`
**Quality Commands** (4):
- `/workflow:action-plan-verify`
- `/workflow:review`
- `/workflow:tdd-verify`
- `/workflow:tdd-coverage-analysis`
**Context Commands** (7):
- `/workflow:tools:context-gather`
- `/workflow:tools:conflict-resolution`
- `/workflow:brainstorm:artifacts`
- `/memory:skill-memory`
- `/memory:docs`
- `/memory:load`
- `/memory:tech-research`
---
## Safety Indicators
### Directory Color Coding
- 🟢 **Green** (Safe): Read-only operations, no code changes
- `analysis/`
- `planning/`
- `interactions/`
- `summaries/`
- `quality/`
- `context/`
- `history/`
- ⚠️ **Yellow** (Dangerous): Code-modifying operations
- `executions/`
### File Naming Patterns
**Safe Operations** (🟢):
```
analysis/code/2024-01-15T10-30-auth-patterns.md
planning/discussions/2024-01-15T15-00-caching-3rounds.md
interactions/2024-01-15T14-00-jwt-question.md
```
**Dangerous Operations** (⚠️):
```
executions/implementations/2024-01-15T11-00-impl-auth.md
executions/test-fixes/2024-01-16T09-00-fix-login-tests.md
executions/refactors/2024-01-16T15-00-refactor-middleware.md
```
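The naming pattern above could be produced by a small helper. The timestamp format and slug rules here are assumptions rather than a spec:

```shell
# Sketch: compose an output path under the proposed structure
output_path() {
  local category="$1" topic="$2"   # e.g. "analysis/code" "Auth Patterns"
  local ts slug
  ts=$(date +%Y-%m-%dT%H-%M)
  # Lowercase the topic and replace spaces with dashes to form the slug
  slug=$(printf '%s' "$topic" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
  printf '%s/%s-%s.md\n' "$category" "$ts" "$slug"
}
```

For example, `output_path "analysis/code" "Auth Patterns"` yields something like `analysis/code/2024-01-15T10-30-auth-patterns.md`.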
---
## User Experience Improvements
### Before (v1.0) - Confusing ❌
**User wants to review analysis logs**:
```bash
$ ls .workflow/active/WFS-auth/.chat/
analyze-2024-01-15.md
execute-2024-01-15.md # ⚠️ Wait, which one is safe?
plan-2024-01-14.md
execute-2024-01-16.md # ⚠️ More dangerous files mixed in!
chat-2024-01-15.md
```
User thinks: "They're all in `.chat/`, so they're all logs... right?" 😰
### After (v2.0) - Clear ✅
**User wants to review analysis logs**:
```bash
$ ls .workflow/active/WFS-auth/
analysis/ # ✅ Safe - code understanding
planning/ # ✅ Safe - planning discussions
interactions/ # ✅ Safe - Q&A logs
executions/ # ⚠️ DANGER - code modifications
```
User thinks: "Oh, `executions/` is separate. I know that modifies code!" 😊
---
## Performance Impact
### Storage
**Overhead**: Negligible
- Deeper directory nesting adds roughly ~10 bytes of path metadata per file
- For 1000 files: on the order of 10 KB of additional metadata
### Access Speed
**Overhead**: Negligible
- Modern filesystems handle nested directories efficiently
- Typical lookup: O(log n) regardless of depth
### Migration Cost
**Phase 1 (Dual Write)**: ~5-10% overhead
- Writing to both old and new locations
- Temporary during migration period
**Phase 2+ (New Structure Only)**: No overhead
- Single write location
- May even be marginally faster, since each directory holds fewer entries
---
## Rollback Plan
If migration causes issues:
### Easy Rollback (Phase 1-2)
```bash
# Stop using new structure
git revert <migration-commit>
# Continue with old structure
# No data loss (dual write preserved both)
```
### Manual Rollback (Phase 3+)
```bash
# Copy files back to old locations
cp -r analysis/code/* .chat/
cp -r executions/implementations/* .chat/
cp -r context/project/* .process/
# etc.
```
---
## Timeline Summary
| Phase | Duration | Status | Risk |
|-------|----------|--------|------|
| **Phase 1**: Dual Write | 2 weeks | 📋 Planned | LOW |
| **Phase 2**: Dual Read | 1 week | 📋 Planned | LOW |
| **Phase 3**: Deprecation | 1 week | 📋 Planned | MEDIUM |
| **Phase 4**: Full Migration | Future | 🤔 Optional | MEDIUM |
**Total**: 4 weeks for Phases 1-3
**Effort**: ~20-30 hours development time
---
## Decision: Which Approach?
### Option A: Full v2.0 Migration (Recommended) ✅
**Pros**:
- ✅ Clear semantic separation
- ✅ Future-proof organization
- ✅ Best user experience
- ✅ Solves all identified problems
**Cons**:
- ❌ 4-week migration period
- ❌ Affects 30+ commands
- ❌ Requires documentation updates
**Recommendation**: **YES** - Worth the investment
### Option B: Minimal Changes (Quick Fix)
**Change**:
```
.chat/ → Split into .analysis/ and .executions/
.process/ → Keep as-is with better docs
```
**Pros**:
- ✅ Quick implementation (1 week)
- ✅ Solves main safety confusion
**Cons**:
- ❌ Partial solution
- ❌ Still some confusion
- ❌ May need full migration later anyway
**Recommendation**: Only if time-constrained
### Option C: Status Quo (No Change)
**Pros**:
- ✅ No development effort
**Cons**:
- ❌ Problems remain
- ❌ User confusion continues
- ❌ Safety risks
**Recommendation**: **NO** - Not recommended
---
## Conclusion
**Recommended Action**: Proceed with **Option A (Full v2.0 Migration)**
**Key Benefits**:
1. 🟢 Clear safety separation (read-only vs code-modifying)
2. 📁 Semantic organization by purpose
3. 🔍 Easy to find specific outputs
4. 📈 Scales for future growth
5. 👥 Better user experience
**Next Steps**:
1. ✅ Review and approve this proposal
2. 📋 Create detailed implementation tasks
3. 🚀 Begin Phase 1: Dual Write implementation
4. 📚 Update documentation in parallel
**Questions?**
- See detailed analysis in: `OUTPUT_DIRECTORY_REORGANIZATION.md`
- Implementation guide: Migration Strategy section
- Risk assessment: Risk Assessment section


@@ -1,419 +0,0 @@
# 🌳 CCW Workflow Decision Guide
This guide helps you choose the right commands and workflows for the complete software development lifecycle.
---
## 📊 Full Lifecycle Command Selection Flowchart
```mermaid
flowchart TD
Start([Start New Feature/Project]) --> Q1{Know what to build?}
Q1 -->|No| Ideation[💡 Ideation Phase<br>Requirements Exploration]
Q1 -->|Yes| Q2{Know how to build?}
Ideation --> BrainIdea[/ /workflow:brainstorm:auto-parallel<br>Explore product direction and positioning /]
BrainIdea --> Q2
Q2 -->|No| Design[🏗️ Design Exploration<br>Architecture Solution Discovery]
Q2 -->|Yes| Q3{Need UI design?}
Design --> BrainDesign[/ /workflow:brainstorm:auto-parallel<br>Explore technical solutions and architecture /]
BrainDesign --> Q3
Q3 -->|Yes| UIDesign[🎨 UI Design Phase]
Q3 -->|No| Q4{Task complexity?}
UIDesign --> Q3a{Have reference design?}
Q3a -->|Yes| UIImitate[/ /workflow:ui-design:imitate-auto<br>--input reference URL /]
Q3a -->|No| UIExplore[/ /workflow:ui-design:explore-auto<br>--prompt design description /]
UIImitate --> UISync[/ /workflow:ui-design:design-sync<br>Sync design system /]
UIExplore --> UISync
UISync --> Q4
Q4 -->|Simple & Quick| LitePlan[⚡ Lightweight Planning<br>/workflow:lite-plan]
Q4 -->|Complex & Complete| FullPlan[📋 Full Planning<br>/workflow:plan]
LitePlan --> Q5{Need code exploration?}
Q5 -->|Yes| LitePlanE[/ /workflow:lite-plan -e<br>task description /]
Q5 -->|No| LitePlanNormal[/ /workflow:lite-plan<br>task description /]
LitePlanE --> LiteConfirm[Three-Dimensional Confirmation:<br>1⃣ Task Approval<br>2⃣ Execution Method<br>3⃣ Code Review]
LitePlanNormal --> LiteConfirm
LiteConfirm --> Q6{Choose execution method}
Q6 -->|Agent| LiteAgent[/ /workflow:lite-execute<br>Using @code-developer /]
Q6 -->|CLI Tools| LiteCLI[CLI Execution<br>Gemini/Qwen/Codex]
Q6 -->|Plan Only| UserImpl[Manual User Implementation]
FullPlan --> PlanVerify{Verify plan quality?}
PlanVerify -->|Yes| Verify[/ /workflow:action-plan-verify /]
PlanVerify -->|No| Execute
Verify --> Q7{Verification passed?}
Q7 -->|No| FixPlan[Fix plan issues]
Q7 -->|Yes| Execute
FixPlan --> Execute
Execute[🚀 Execution Phase<br>/workflow:execute]
LiteAgent --> TestDecision
LiteCLI --> TestDecision
UserImpl --> TestDecision
Execute --> TestDecision
TestDecision{Need testing?}
TestDecision -->|TDD Mode| TDD[/ /workflow:tdd-plan<br>Test-Driven Development /]
TestDecision -->|Post-Implementation Testing| TestGen[/ /workflow:test-gen<br>Generate tests /]
TestDecision -->|Existing Tests| TestCycle[/ /workflow:test-cycle-execute<br>Test-fix cycle /]
TestDecision -->|No| Review
TDD --> TDDExecute[/ /workflow:execute<br>Red-Green-Refactor /]
TDDExecute --> TDDVerify[/ /workflow:tdd-verify<br>Verify TDD compliance /]
TDDVerify --> Review
TestGen --> TestExecute[/ /workflow:execute<br>Execute test tasks /]
TestExecute --> TestResult{Tests passed?}
TestResult -->|No| TestCycle
TestResult -->|Yes| Review
TestCycle --> TestPass{Pass rate ≥95%?}
TestPass -->|No, continue fixing| TestCycle
TestPass -->|Yes| Review
Review[📝 Review Phase]
Review --> Q8{Need specialized review?}
Q8 -->|Security| SecurityReview[/ /workflow:review<br>--type security /]
Q8 -->|Architecture| ArchReview[/ /workflow:review<br>--type architecture /]
Q8 -->|Quality| QualityReview[/ /workflow:review<br>--type quality /]
Q8 -->|Comprehensive| GeneralReview[/ /workflow:review<br>Comprehensive review /]
Q8 -->|No| Complete
SecurityReview --> Complete
ArchReview --> Complete
QualityReview --> Complete
GeneralReview --> Complete
Complete[✅ Completion Phase<br>/workflow:session:complete]
Complete --> End([Project Complete])
style Start fill:#e1f5ff
style End fill:#c8e6c9
style BrainIdea fill:#fff9c4
style BrainDesign fill:#fff9c4
style UIImitate fill:#f8bbd0
style UIExplore fill:#f8bbd0
style LitePlan fill:#b3e5fc
style FullPlan fill:#b3e5fc
style Execute fill:#c5e1a5
style TDD fill:#ffccbc
style TestGen fill:#ffccbc
style TestCycle fill:#ffccbc
style Review fill:#d1c4e9
style Complete fill:#c8e6c9
```
---
## 🎯 Decision Point Explanations
### 1⃣ **Ideation Phase - "Know what to build?"**
| Situation | Command | Description |
|-----------|---------|-------------|
| ❌ Uncertain about product direction | `/workflow:brainstorm:auto-parallel "Explore XXX domain product opportunities"` | Multi-role analysis with Product Manager, UX Expert, etc. |
| ✅ Clear feature requirements | Skip to design phase | Already know what functionality to build |
**Examples**:
```bash
# Uncertain scenario: Want to build a collaboration tool, but unsure what exactly
/workflow:brainstorm:auto-parallel "Explore team collaboration tool positioning and core features" --count 5
# Certain scenario: Building a real-time document collaboration editor (requirements clear)
# Skip ideation, move to design phase
```
---
### 2⃣ **Design Phase - "Know how to build?"**
| Situation | Command | Description |
|-----------|---------|-------------|
| ❌ Don't know technical approach | `/workflow:brainstorm:auto-parallel "Design XXX system architecture"` | System Architect, Security Expert analyze technical solutions |
| ✅ Clear implementation path | Skip to planning | Already know tech stack, architecture patterns |
**Examples**:
```bash
# Don't know how: Real-time collaboration conflict resolution? Which algorithm?
/workflow:brainstorm:auto-parallel "Design conflict resolution mechanism for real-time collaborative document editing" --count 4
# Know how: Using Operational Transformation + WebSocket + Redis
# Skip design exploration, go directly to planning
/workflow:plan "Implement real-time collaborative editing using OT algorithm, WebSocket communication, Redis storage"
```
---
### 3⃣ **UI Design Phase - "Need UI design?"**
| Situation | Command | Description |
|-----------|---------|-------------|
| 🎨 Have reference design | `/workflow:ui-design:imitate-auto --input "URL"` | Copy from existing design |
| 🎨 Design from scratch | `/workflow:ui-design:explore-auto --prompt "description"` | Generate multiple design variants |
| ⏭️ Backend/No UI | Skip | Pure backend API, CLI tools, etc. |
**Examples**:
```bash
# Have reference: Imitate Google Docs collaboration interface
/workflow:ui-design:imitate-auto --input "https://docs.google.com"
# No reference: Design from scratch
/workflow:ui-design:explore-auto --prompt "Modern minimalist document collaboration editing interface" --style-variants 3
# Sync design to project
/workflow:ui-design:design-sync --session WFS-xxx --selected-prototypes "v1,v2"
```
---
### 4⃣ **Planning Phase - Choose Workflow Type**
| Workflow | Use Case | Characteristics |
|----------|----------|-----------------|
| `/workflow:lite-plan` | Quick tasks, small features | In-memory planning, three-dimensional confirmation, fast execution |
| `/workflow:plan` | Complex projects, team collaboration | Persistent plans, quality gates, complete traceability |
**Lite-Plan Three-Dimensional Confirmation**:
1. **Task Approval**: Confirm / Modify / Cancel
2. **Execution Method**: Agent / Provide Plan / CLI Tools (Gemini/Qwen/Codex)
3. **Code Review**: No / Claude / Gemini / Qwen / Codex
**Examples**:
```bash
# Simple task
/workflow:lite-plan "Add user avatar upload feature"
# Need code exploration
/workflow:lite-plan -e "Refactor authentication module to OAuth2 standard"
# Complex project
/workflow:plan "Implement complete real-time collaborative editing system"
/workflow:action-plan-verify # Verify plan quality
/workflow:execute
```
---
### 5⃣ **Testing Phase - Choose Testing Strategy**
| Strategy | Command | Use Case |
|----------|---------|----------|
| **TDD Mode** | `/workflow:tdd-plan` | Starting from scratch, test-driven development |
| **Post-Implementation Testing** | `/workflow:test-gen` | Code complete, add tests |
| **Test Fixing** | `/workflow:test-cycle-execute` | Existing tests, need to fix failures |
**Examples**:
```bash
# TDD: Write tests first, then implement
/workflow:tdd-plan "User authentication module"
/workflow:execute # Red-Green-Refactor cycle
/workflow:tdd-verify # Verify TDD compliance
# Post-implementation testing: Add tests after code complete
/workflow:test-gen WFS-user-auth-implementation
/workflow:execute
# Test fixing: Existing tests with high failure rate
/workflow:test-cycle-execute --max-iterations 5
# Auto-iterate fixes until pass rate ≥95%
```
---
### 6⃣ **Review Phase - Choose Review Type**
| Type | Command | Focus |
|------|---------|-------|
| **Security Review** | `/workflow:review --type security` | SQL injection, XSS, authentication vulnerabilities |
| **Architecture Review** | `/workflow:review --type architecture` | Design patterns, coupling, scalability |
| **Quality Review** | `/workflow:review --type quality` | Code style, complexity, maintainability |
| **Comprehensive Review** | `/workflow:review` | All-around inspection |
**Examples**:
```bash
# Security-critical system
/workflow:review --type security
# After architecture refactoring
/workflow:review --type architecture
# Daily development
/workflow:review --type quality
```
---
## 🔄 Complete Flow for Typical Scenarios
### Scenario A: New Feature Development (Know How to Build)
```bash
# 1. Planning
/workflow:plan "Add JWT authentication and permission management"
# 2. Verify plan
/workflow:action-plan-verify
# 3. Execute
/workflow:execute
# 4. Testing
/workflow:test-gen WFS-jwt-auth
/workflow:execute
# 5. Review
/workflow:review --type security
# 6. Complete
/workflow:session:complete
```
---
### Scenario B: New Feature Development (Don't Know How to Build)
```bash
# 1. Design exploration
/workflow:brainstorm:auto-parallel "Design distributed cache system architecture" --count 5
# 2. UI design (if needed)
/workflow:ui-design:explore-auto --prompt "Cache management dashboard interface"
/workflow:ui-design:design-sync --session WFS-xxx
# 3. Planning
/workflow:plan
# 4. Verification
/workflow:action-plan-verify
# 5. Execution
/workflow:execute
# 6. TDD testing
/workflow:tdd-plan "Cache system core modules"
/workflow:execute
# 7. Review
/workflow:review --type architecture
/workflow:review --type security
# 8. Complete
/workflow:session:complete
```
---
### Scenario C: Quick Feature Development (Lite Workflow)
```bash
# 1. Lightweight planning (may need code exploration)
/workflow:lite-plan -e "Optimize database query performance"
# 2. Three-dimensional confirmation
# - Confirm task
# - Choose Agent execution
# - Choose Gemini code review
# 3. Auto-execution (called internally by /workflow:lite-execute)
# 4. Complete
```
---
### Scenario D: Bug Fixing
```bash
# 1. Diagnosis
/cli:mode:bug-diagnosis --tool gemini "User login fails with token expired error"
# 2. Quick fix
/workflow:lite-plan "Fix JWT token expiration validation logic"
# 3. Test fix
/workflow:test-cycle-execute
# 4. Complete
```
---
## 🎓 Quick Command Reference
### Choose by Knowledge Level
| Your Situation | Recommended Command |
|----------------|---------------------|
| 💭 Don't know what to build | `/workflow:brainstorm:auto-parallel "Explore product direction"` |
| ❓ Know what, don't know how | `/workflow:brainstorm:auto-parallel "Design technical solution"` |
| ✅ Know what and how | `/workflow:plan "Specific implementation description"` |
| ⚡ Simple, clear small task | `/workflow:lite-plan "Task description"` |
| 🐛 Bug fixing | `/cli:mode:bug-diagnosis` + `/workflow:lite-plan` |
### Choose by Project Phase
| Phase | Command |
|-------|---------|
| 📋 **Requirements Analysis** | `/workflow:brainstorm:auto-parallel` |
| 🏗️ **Architecture Design** | `/workflow:brainstorm:auto-parallel` |
| 🎨 **UI Design** | `/workflow:ui-design:explore-auto` / `imitate-auto` |
| 📝 **Implementation Planning** | `/workflow:plan` / `/workflow:lite-plan` |
| 🚀 **Coding Implementation** | `/workflow:execute` / `/workflow:lite-execute` |
| 🧪 **Testing** | `/workflow:tdd-plan` / `/workflow:test-gen` |
| 🔧 **Test Fixing** | `/workflow:test-cycle-execute` |
| 📖 **Code Review** | `/workflow:review` |
| ✅ **Project Completion** | `/workflow:session:complete` |
### Choose by Work Mode
| Mode | Workflow | Use Case |
|------|----------|----------|
| **🚀 Agile & Fast** | Lite Workflow | Personal dev, rapid iteration, prototype validation |
| **📋 Standard & Complete** | Full Workflow | Team collaboration, enterprise projects, long-term maintenance |
| **🧪 Quality-First** | TDD Workflow | Core modules, critical features, high reliability requirements |
| **🎨 Design-Driven** | UI-Design Workflow | Frontend projects, user interfaces, design systems |
---
## 💡 Expert Advice
### ✅ Best Practices
1. **Use brainstorming when uncertain**: Better to spend 10 minutes exploring solutions than blindly implementing and rewriting
2. **Use Full workflow for complex projects**: Persistent plans facilitate team collaboration and long-term maintenance
3. **Use Lite workflow for small tasks**: Complete quickly, reduce overhead
4. **Use TDD for critical modules**: Test-driven development ensures quality
5. **Regularly update memory**: `/memory:update-related` keeps context accurate
### ❌ Common Pitfalls
1. **Skipping brainstorming blindly**: Entering unfamiliar technical domains without exploration leads to rework
2. **Overusing brainstorming**: Brainstorming even trivial features wastes time
3. **Ignoring plan verification**: Skipping `/workflow:action-plan-verify` lets plan defects surface during execution
4. **Ignoring testing**: Without generated tests, code quality cannot be guaranteed
5. **Not completing sessions**: Skipping `/workflow:session:complete` leaves session state inconsistent
---
## 🔗 Related Documentation
- [Getting Started Guide](GETTING_STARTED.md) - Quick start tutorial
- [Command Reference](COMMAND_REFERENCE.md) - Complete command list
- [Architecture Overview](ARCHITECTURE.md) - System architecture explanation
- [Examples](EXAMPLES.md) - Real-world scenario examples
- [FAQ](FAQ.md) - Frequently asked questions
---
**Last Updated**: 2025-11-20
**Version**: 5.8.1