Compare commits

...

24 Commits

Author SHA1 Message Date
catlog22
51a0cb3a3c docs: release v4.6.2 - documentation optimization and /memory:load refinement
### Documentation Optimization

**Optimized `/memory:load` Command Specification**:
- Reduced documentation from 273 to 240 lines (12% reduction)
- Merged redundant sections for better information flow
- Removed unnecessary internal implementation details
- Simplified usage examples while preserving clarity
- Maintained all critical information (parameters, workflow, JSON structure)

### Documentation Updates

**CHANGELOG.md**:
- Added v4.6.2 release entry with documentation improvements

**COMMAND_SPEC.md**:
- Updated `/memory:load` specification to match actual implementation
- Corrected syntax: `[--tool gemini|qwen]` instead of outdated `[--agent] [--json]` flags
- Added agent-driven execution details

**GETTING_STARTED.md**:
- Added "Quick Context Loading for Specific Tasks" section
- Positioned between "Full Project Index Rebuild" and "Incremental Related Module Updates"
- Includes practical examples and use case guidance

**README.md**:
- Updated version badge to v4.6.2
- Updated latest release description

**COMMAND_REFERENCE.md**:
- Added `/memory:load` command reference entry

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-20 10:58:31 +08:00
catlog22
436c7909b0 feat: 添加完整的卸载功能支持
- 添加 -Uninstall 参数支持交互式和命令行卸载
- 实现 manifest 跟踪系统,记录每次安装的文件和目录
- 支持多个安装的选择性卸载
- 修复关键 bug: 从源目录扫描文件而非目标目录,避免误删用户文件
- 添加操作模式选择 UI (Install/Uninstall)
- 自动迁移旧版 manifest 到新的多文件系统
- PowerShell 和 Bash 版本功能完全对等

Closes #5

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-20 10:24:50 +08:00
catlog22
f8d5d908ea feat: 修复workflow用户意图传递链,确保原始提示词贯穿全流程
## 核心问题
根据Gemini深度分析,用户原始提示词在brainstorm→plan流程中逐步被稀释,
synthesis环节完全不接收用户提示词,导致最终产出偏离用户意图。

## 关键修改

### 1. synthesis.md - 修复最严重偏差环节
- 新增从workflow-session.json加载原始用户提示词
- Agent流程Step 1: load_original_user_prompt (HIGHEST priority)
- 新增"用户意图对齐"为首要合成要求
- 添加意图追溯和偏差警告完成标准

### 2. concept-clarify.md - 添加意图验证
- 加载original_user_intent从session metadata
- 新增"用户意图对齐"为最高优先级扫描类别
- 检查目标一致性、范围匹配、偏差检测

### 3. action-plan-verify.md - 强化验证门禁
- workflow-session.json作为主要参考源
- 新增"用户意图对齐"为CRITICAL级别检测
- 违反用户意图标记为CRITICAL严重性

### 4. plan.md - 确立意图优先级
- 明确用户原始意图始终为主要参考
- 优先级规则: 当前用户提示词 > synthesis > topic-framework

### 5. artifacts.md - 推广结构化格式
- 添加推荐的GOAL/SCOPE/CONTEXT结构化格式
- 强调用户意图保存的重要性

### 6. auto-parallel.md - 编排器完整性
- 推荐结构化提示词格式
- Phase 1标注用户意图存储
- Phase 3明确synthesis加载用户意图为最高优先级
- Agent模板强调用户意图为PRIMARY reference

## 影响力提升
用户提示词影响力: 30% → 95%

## 参考
分析报告由Gemini生成,识别了5个关键问题点并提供4条改进建议

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-19 20:25:45 +08:00
catlog22
ac8c3b3d0c docs: 优化文档准确性和安装说明
- 修正 Agent Skills 描述,明确区分 -e 和 --enhance 机制
- 简化 INSTALL.md,对齐 README.md 的清晰结构
- 移除不必要的远程安装参数说明
- 优化 MCP 工具配置说明

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-19 13:17:19 +08:00
catlog22
423289c539 docs: 更新 MCP 工具官方源码链接 2025-10-19 10:22:53 +08:00
catlog22
21ea77bdf3 docs: 更新 MCP 工具官方源链接 2025-10-19 10:20:47 +08:00
catlog22
03ffc91764 docs: 简化 MCP 工具安装说明
主要变更:
- INSTALL.md 和 INSTALL_CN.md
  - 简化 MCP 工具安装部分
  - 只保留工具名称、用途和官方源代码库链接
  - 移除具体安装步骤,让用户根据官方文档安装
  - 保持表格格式清晰易读

改进原因:
- MCP 工具安装方式可能随时更新
- 官方文档是最准确的安装指南
- 避免维护多份安装说明

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-19 09:51:03 +08:00
catlog22
ee3a420f60 docs: 完善文档引用和安装说明
主要变更:
1. README.md 和 README_CN.md
   - 添加 COMMAND_SPEC.md 引用链接
   - 区分用户友好参考和技术规范

2. GETTING_STARTED.md
   - 添加 Skill 系统介绍章节
   - 添加 UI 设计工作流介绍
   - 包含 prompt-enhancer 使用示例
   - 包含 explore-auto 和 imitate-auto 示例

3. INSTALL.md 和 INSTALL_CN.md
   - 添加推荐系统工具安装说明(ripgrep, jq)
   - 添加 MCP 工具安装说明(Exa, Code Index, Chrome DevTools)
   - 添加可选 CLI 工具说明(Gemini, Codex, Qwen)
   - 提供各平台安装命令和验证步骤
   - 标注必需和可选工具

改进效果:
- 用户可快速找到技术规范文档
- 新手指南更完整,覆盖高级功能
- 安装文档更详细,包含所有依赖工具

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-19 09:44:48 +08:00
catlog22
9151a82d1d docs: 优化 README 文档并创建命令参考文档
主要变更:
1. 精简 README.md 和 README_CN.md 结构
   - 移除冗长的功能描述和安装细节
   - 合并理念章节为"核心概念"
   - 简化安装说明,链接到详细 INSTALL.md
   - 移除详细命令列表,链接到新的 COMMAND_REFERENCE.md

2. 创建 COMMAND_REFERENCE.md
   - 完整的命令列表和分类
   - 按功能组织(Workflow, CLI, Task, Memory, UI Design, Testing)
   - 包含所有遗漏的命令(UI 设计、测试工作流等)

3. 创建 COMMAND_SPEC.md
   - 详细的命令技术规范
   - 包含参数、职责、Agent 调用、Skill 调用
   - 实际使用示例
   - 命令间的工作流集成说明

4. 修正 Critical 级别问题
   - 统一仓库 URL 为 Claude-Code-Workflow
   - 更新 INSTALL.md 中的过时信息

5. 文档引用优化
   - 在 README 中添加到 COMMAND_REFERENCE.md 和 COMMAND_SPEC.md 的链接
   - 保持中英文文档结构一致

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-18 23:37:24 +08:00
catlog22
24aad6238a feat: add dynamic output format for prompt-enhancer and update README
- Redesign output format to be dynamic and task-adaptive
- Replace fixed template with core + optional fields structure
- Add simple and complex task examples
- Update workflow and enhancement guidelines
- Add Agent Skills section to README.md and README_CN.md
- Document Prompt Enhancer skill usage with -e/--enhance flags
- Provide skill creation best practices

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-18 23:00:08 +08:00
catlog22
44734a447c refactor: clarify prompt-enhancer trigger condition in description
- Update description to follow skills.md best practices
- Explicitly state trigger condition: 'Use when user input contains -e or --enhance flag'
- Improve discoverability with clear when-to-use guidance

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-18 22:47:05 +08:00
catlog22
99cb29ed23 refactor: simplify prompt-enhancer skill description and internal analysis
- Update description: focus on intelligent analysis and session memory
- Simplify trigger mechanism to only -e/--enhance flags
- Condense internal analysis section to single concise statement
- Remove verbose semantic analysis details for cleaner documentation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-18 22:41:25 +08:00
catlog22
b8935777e7 refactor: enhance prompt generation process with direct output and improved internal analysis 2025-10-18 22:33:29 +08:00
catlog22
49c2b189d4 refactor: streamline prompt-enhancer skill for faster execution
- Simplified process from 4 steps to 3 steps
- Changed to memory-only extraction (no file reading)
- Updated confirmation options: added "Suggest optimizations"
- Removed file operation tools (Bash, Read, Glob, Grep)
- Enhanced output format with structured sections
- Improved workflow efficiency and user interaction

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-18 22:11:41 +08:00
catlog22
1324fb8c2a feat: optimize prompt-enhancer skill with bilingual support
- Add Chinese semantic recognition (修复/优化/重构/更新/改进/清理)
- Support -e/--enhance flags for explicit triggering
- Streamline structure with tables and concise format
- Add bilingual confirmation workflow (EN/CN)
- Reduce examples section for better readability
- Implement 4-priority trigger system (P1-P4)
- Add visual workflow pipeline diagram

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-18 22:03:09 +08:00
catlog22
1073e43c0b refactor: split task JSON templates and improve CLI mode support
- Split task-json-schema.txt into two mode-specific templates:
  - task-json-agent-mode.txt: Agent execution (no command field)
  - task-json-cli-mode.txt: CLI execution (with command field)
- Update task-generate.md:
  - Remove outdated Codex resume mechanism description
  - Add clear execution mode examples (Agent/CLI)
  - Simplify CLI Execute Mode Details section
- Update task-generate-agent.md:
  - Add --cli-execute flag support
  - Command selects template path before invoking agent
  - Agent receives template path and reads it (not content)
  - Clarify responsibility: Command decides, Agent executes
- Improve architecture:
  - Clear separation: Command layer (template selection) vs Agent layer (content generation)
  - Template selection based on flag, not agent logic
  - Agent simplicity: receives path, reads template, generates content

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-18 20:44:46 +08:00
catlog22
393b2f480f Refactor task generation and implementation plan templates
- Updated task JSON schema to enhance structure and artifact integration.
- Simplified agent mode execution by omitting command fields in implementation steps.
- Introduced CLI execute mode with detailed command specifications for complex tasks.
- Added comprehensive IMPL_PLAN.md template with structured sections for project analysis, context, and execution strategy.
- Enhanced documentation for artifact usage and priority guidelines.
- Improved flow control definitions for task execution and context loading.
2025-10-18 20:26:58 +08:00
catlog22
3b0f067f0b docs: enhance task-generate documentation with detailed execution modes and principles 2025-10-18 19:49:24 +08:00
catlog22
0130a66642 refactor: optimize workflow execution with parallel agent support
## Key Changes

### execute.md
- Simplify Agent Prompt (77 lines → 34 lines, -56% tokens)
- Add dependency graph batch execution algorithm
- Implement parallel task execution with execution_group
- Clarify orchestrator vs agent responsibilities
- Add TodoWrite parallel task status support

### task-generate.md
- Update task decomposition: shared context merging + independent parallelization
- Add execution_group and context_signature fields to task JSON
- Implement context signature algorithm for intelligent task grouping
- Add automatic parallel group assignment logic

## Core Requirements Verified (by Gemini)
 Complete JSON context preserved in Agent Prompt
 Shared context merging logic implemented (context_signature algorithm)
 Independent parallelization enabled (execution_group + batch execution)
 All critical functionality retained and enhanced

## Performance Impact
- 3-5x execution speed improvement (parallel batch execution)
- Reduced token usage in Agent Prompt (56% reduction)
- Intelligent task grouping (automatic context reuse)

## Risk Assessment: LOW
- Removed content: orchestrator's flow control execution → transferred to agent
- Mitigation: clear Agent JSON Loading Specification and prompt template
- Result: clearer separation of concerns, more maintainable

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-18 19:36:03 +08:00
catlog22
e2711a7797 feat: add workflow prompt templates for planning phases
Add CLI prompt templates for workflow planning integration:
- analysis-results-structure.txt: ANALYSIS_RESULTS.md generation template
- gemini-solution-design.txt: Solution design analysis template
- codex-feasibility-validation.txt: Technical feasibility validation template

These templates support the workflow planning phase with standardized
analysis and design documentation formats.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-18 19:04:50 +08:00
catlog22
3a6e88c0df refactor: replace replan commands with direct agent-based fixes
Replace batch replan workflow with TodoWrite tracking and direct agent calls:
- concept-clarify.md: Call conceptual-planning-agent for concept gaps
- action-plan-verify.md: Call action-planning-agent for task/plan issues

Both commands now require explicit user confirmation before fixes.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-18 19:03:31 +08:00
catlog22
199585b29c refactor: convert context-gather to agent-driven execution and fix path mismatch
- Refactor context-gather.md to use general-purpose agent delegation
  - Change from 4-phase manual execution to 2-phase agent-driven flow
  - Move project structure analysis and documentation loading to agent execution
  - Add Step 0: Foundation Setup for agent to execute first
  - Update agent context passing to minimal configuration
  - Add MCP tools integration guidance for agent

- Fix critical path mismatch in workflow data flow
  - Update plan.md Phase 2 output path from .context/ to .process/
  - Align with context-gather.md output location (.process/context-package.json)
  - Ensure correct data flow: context-gather → concept-enhanced

- Update concept-enhanced.md line selection (minor formatting)

Verified path consistency across all workflow commands:
- context-gather.md outputs to .process/
- concept-enhanced.md reads from .process/
- plan.md passes correct .process/ path
- All workflow tools now use consistent .process/ directory

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-18 17:39:45 +08:00
catlog22
e94b2a250b docs: clarify installation verification in README
Improve installation verification section to clearly indicate
checking slash commands in Claude Code interface.

Changes:
- README.md: Add Claude Code context to verification section
- README_CN.md: Add Claude Code context to verification section (Chinese)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-18 12:31:46 +08:00
catlog22
4193a17c27 docs: finalize v4.6.0 release documentation
Update version badges and CHANGELOG for v4.6.0 release

Changes:
- README.md: Update version badge to v4.6.0, add What's New section
- README_CN.md: Update version badge to v4.6.0, add What's New section
- CHANGELOG.md: Add comprehensive v4.6.0 release notes

Release highlights:
- Concept Clarification Quality Gate (Phase 3.5)
- Agent-Delegated Intelligent Analysis (Phase 3)
- Dual-mode support for brainstorm/plan workflows
- Enhanced planning workflow with 5 phases
- Test-cycle-execute documentation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-18 12:28:37 +08:00
30 changed files with 4949 additions and 3517 deletions

View File

@@ -0,0 +1,240 @@
---
name: load
description: Load project memory by delegating to agent, returns structured core content package for subsequent operations
argument-hint: "[--tool gemini|qwen] \"task context description\""
allowed-tools: Task(*), Bash(*)
examples:
- /memory:load "在当前前端基础上开发用户认证功能"
- /memory:load --tool qwen "重构支付模块API"
---
# Memory Load Command (/memory:load)
## 1. Overview
The `memory:load` command **delegates to a general-purpose agent** to analyze the project and return a structured "Core Content Pack". This pack is loaded into the main thread's memory, providing essential context for subsequent agent operations while minimizing token consumption.
**Core Philosophy**:
- **Agent-Driven**: Fully delegates execution to general-purpose agent
- **Read-Only Analysis**: Does not modify code, only extracts context
- **Structured Output**: Returns standardized JSON content package
- **Memory Optimization**: Package loaded directly into main thread memory
- **Token Efficiency**: CLI analysis executed within agent to save tokens
## 2. Parameters
- `"task context description"` (Required): Task description to guide context extraction
- Example: "在当前前端基础上开发用户认证功能"
- Example: "重构支付模块API"
- Example: "修复数据库查询性能问题"
- `--tool <gemini|qwen>` (Optional): Specify CLI tool for agent to use (default: gemini)
- gemini: Large context window, suitable for complex project analysis
- qwen: Alternative to Gemini with similar capabilities
## 3. Agent-Driven Execution Flow
The command fully delegates to **general-purpose agent**, which autonomously:
1. **Analyzes Project Structure**: Executes `get_modules_by_depth.sh` to understand architecture
2. **Loads Documentation**: Reads CLAUDE.md, README.md and other key docs
3. **Extracts Keywords**: Derives core keywords from task description
4. **Discovers Files**: Uses MCP code-index or rg/find to locate relevant files
5. **CLI Deep Analysis**: Executes Gemini/Qwen CLI for deep context analysis
6. **Generates Content Package**: Returns structured JSON core content package
## 4. Core Content Package Structure
**Output Format** - Loaded into main thread memory for subsequent use:
```json
{
"task_context": "在当前前端基础上开发用户认证功能",
"keywords": ["前端", "用户", "认证", "auth", "login"],
"project_summary": {
"architecture": "TypeScript + React frontend with Vite build system",
"tech_stack": ["React", "TypeScript", "Vite", "TailwindCSS"],
"key_patterns": [
"State management via Context API",
"Functional components with Hooks pattern",
"API calls encapsulated in custom hooks"
]
},
"relevant_files": [
{
"path": "src/components/Auth/LoginForm.tsx",
"relevance": "Existing login form component",
"priority": "high"
},
{
"path": "src/contexts/AuthContext.tsx",
"relevance": "Authentication state management context",
"priority": "high"
},
{
"path": "CLAUDE.md",
"relevance": "Project development standards",
"priority": "high"
}
],
"integration_points": [
"Must integrate with existing AuthContext",
"Follow component organization pattern: src/components/[Feature]/",
"API calls should use src/hooks/useApi.ts wrapper"
],
"constraints": [
"Maintain backward compatibility",
"Follow TypeScript strict mode",
"Use existing UI component library"
]
}
```
## 5. Agent Invocation
```javascript
Task(
subagent_type="general-purpose",
description="Load project memory: ${task_description}",
prompt=`
## Mission: Load Project Memory Context
**Task Context**: "${task_description}"
**Mode**: Read-only analysis
**Tool**: ${tool || 'gemini'}
## Execution Steps
### Step 1: Foundation Analysis
1. **Project Structure**
\`\`\`bash
bash(~/.claude/scripts/get_modules_by_depth.sh)
\`\`\`
2. **Core Documentation**
\`\`\`javascript
Read(CLAUDE.md)
Read(README.md)
\`\`\`
### Step 2: Keyword Extraction & File Discovery
1. Extract core keywords from task description
2. Discover relevant files using MCP code-index or rg:
\`\`\`javascript
// Prefer MCP tools
mcp__code-index__find_files(pattern="*{keyword}*")
mcp__code-index__search_code_advanced(pattern="{keyword}", context_lines=2)
// Fallback: use rg
bash(rg -l "{keyword}" --type ts --type md)
\`\`\`
### Step 3: Deep Analysis via CLI
Execute Gemini/Qwen CLI for deep analysis (saves main thread tokens):
\`\`\`bash
cd . && ~/.claude/scripts/${tool}-wrapper -p "
PURPOSE: Extract project core context for task: ${task_description}
TASK: Analyze project architecture, tech stack, key patterns, relevant files
MODE: analysis
CONTEXT: @{CLAUDE.md,README.md,${discovered_files}}
EXPECTED: Structured project summary and integration point analysis
RULES:
- Focus on task-relevant core information
- Identify key architecture patterns and technical constraints
- Extract integration points and development standards
- Output concise, structured format
"
\`\`\`
### Step 4: Generate Core Content Package
Generate structured JSON content package (format shown above)
**Required Fields**:
- task_context: Original task description
- keywords: Extracted keyword array
- project_summary: Architecture, tech stack, key patterns
- relevant_files: File list with path, relevance, priority
- integration_points: Integration guidance
- constraints: Development constraints
### Step 5: Return Content Package
Return JSON content package as final output for main thread to load into memory.
## Quality Checklist
Before returning:
- [ ] Valid JSON format
- [ ] All required fields complete
- [ ] relevant_files contains 3-10 files minimum
- [ ] project_summary accurately reflects architecture
- [ ] integration_points clearly specify integration paths
- [ ] keywords accurately extracted (3-8 keywords)
- [ ] Content concise, avoiding redundancy (< 5KB total)
`
)
```
## 6. Usage Examples
### Example 1: Load Context for New Feature
```bash
/memory:load "在当前前端基础上开发用户认证功能"
```
**Agent Execution**:
1. Analyzes project structure (`get_modules_by_depth.sh`)
2. Reads CLAUDE.md, README.md
3. Extracts keywords: ["前端", "用户", "认证", "auth"]
4. Uses MCP to search relevant files
5. Executes Gemini CLI for deep analysis
6. Returns core content package
**Returned Package** (loaded into memory):
```json
{
"task_context": "在当前前端基础上开发用户认证功能",
"keywords": ["前端", "认证", "auth", "login"],
"project_summary": { ... },
"relevant_files": [ ... ],
"integration_points": [ ... ],
"constraints": [ ... ]
}
```
### Example 2: Using Qwen Tool
```bash
/memory:load --tool qwen "重构支付模块API"
```
Agent uses Qwen CLI for analysis, returns same structured package.
### Example 3: Bug Fix Context
```bash
/memory:load "修复登录验证错误"
```
Returns core context related to login validation, including test files and validation logic.
### Memory Persistence
- **Session-Scoped**: Content package valid for current session
- **Subsequent Reference**: All subsequent agents/commands can access
- **Reload Required**: New sessions need to re-execute /memory:load
## 8. Notes
- **Read-Only**: Does not modify any code, pure analysis
- **Token Optimization**: CLI analysis executed within agent, saves main thread tokens
- **Memory Loading**: Returned JSON loaded directly into main thread memory
- **Subsequent Use**: Other commands/agents can reference this package for development
- **Session-Level**: Content package valid for current session

View File

@@ -67,6 +67,11 @@ IF TASK_FILES.count == 0:
Load only minimal necessary context from each artifact:
**From workflow-session.json** (NEW - PRIMARY REFERENCE):
- Original user prompt/intent (project or description field)
- User's stated goals and objectives
- User's scope definition
**From synthesis-specification.md**:
- Functional Requirements (IDs, descriptions, acceptance criteria)
- Non-Functional Requirements (IDs, targets)
@@ -117,7 +122,14 @@ Create internal representations (do not include raw artifacts in output):
Focus on high-signal findings. Limit to 50 findings total; aggregate remainder in overflow summary.
#### A. Requirements Coverage Analysis
#### A. User Intent Alignment (NEW - CRITICAL)
- **Goal Alignment**: IMPL_PLAN objectives match user's original intent
- **Scope Drift**: Plan covers user's stated scope without unauthorized expansion
- **Success Criteria Match**: Plan's success criteria reflect user's expectations
- **Intent Conflicts**: Tasks contradicting user's original objectives
#### B. Requirements Coverage Analysis
- **Orphaned Requirements**: Requirements in synthesis with zero associated tasks
- **Unmapped Tasks**: Tasks with no clear requirement linkage
@@ -167,6 +179,7 @@ Focus on high-signal findings. Limit to 50 findings total; aggregate remainder i
Use this heuristic to prioritize findings:
- **CRITICAL**:
- Violates user's original intent (goal misalignment, scope drift)
- Violates synthesis authority (requirement conflict)
- Core requirement with zero coverage
- Circular dependencies
@@ -362,81 +375,8 @@ At end of report, provide batch remediation guidance:
### 🔧 Remediation Options
**Recommended Workflow**:
1. **Batch Mode** (Fastest): Apply all task improvements automatically
```bash
/task:replan --batch .workflow/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md
```
1. **TodoWrite Tracking**: Create task list for all fixes
2. **Agent-Based Fixes**: Call action-planning-agent to regenerate task files
3. **User Confirmation**: Wait for approval before executing fixes
2. **Manual Review**: Examine each issue before applying
- Review findings in this report
- Execute specific `/task:create` or `/task:replan` commands individually
3. **Architecture Changes**: Update IMPL_PLAN.md manually if architecture drift detected
**Note**: This is read-only analysis. All fixes require explicit execution.
```
### 8. Update Session Metadata
```json
{
"phases": {
"PLAN": {
"status": "completed",
"action_plan_verification": {
"completed": true,
"completed_at": "timestamp",
"overall_risk_level": "HIGH",
"recommendation": "PROCEED_WITH_FIXES",
"issues": {
"critical": 2,
"high": 5,
"medium": 8,
"low": 3
},
"coverage": {
"functional_requirements": 0.85,
"non_functional_requirements": 0.40,
"business_requirements": 1.00
},
"report_path": ".workflow/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md"
}
}
}
}
```
## Operating Principles
### Context Efficiency
- **Minimal high-signal tokens**: Focus on actionable findings
- **Progressive disclosure**: Load artifacts incrementally
- **Token-efficient output**: Limit findings table to 50 rows; summarize overflow
- **Deterministic results**: Rerunning without changes produces consistent IDs and counts
### Analysis Guidelines
- **NEVER modify files** (this is read-only analysis)
- **NEVER hallucinate missing sections** (if absent, report them accurately)
- **Prioritize synthesis violations** (these are always CRITICAL)
- **Use examples over exhaustive rules** (cite specific instances)
- **Report zero issues gracefully** (emit success report with coverage statistics)
### Verification Taxonomy
- **Coverage**: Requirements → Tasks mapping
- **Consistency**: Cross-artifact alignment
- **Dependencies**: Task ordering and relationships
- **Synthesis Alignment**: Adherence to authoritative requirements
- **Task Quality**: Specification completeness
- **Feasibility**: Implementation risks
## Behavior Rules
- **If no issues found**: Report "✅ Action plan verification passed. No issues detected." and suggest proceeding to `/workflow:execute`.
- **If CRITICAL issues exist**: Recommend blocking execution until resolved.
- **If only HIGH/MEDIUM issues**: User may proceed with caution, but provide improvement suggestions.
- **If IMPL_PLAN.md or task files missing**: Instruct user to run `/workflow:plan` first.
- **Always provide actionable remediation suggestions**: Don't just identify problems—suggest solutions.
## Context
{ARGS}
**Note**: All fixes require explicit user confirmation.

View File

@@ -12,9 +12,16 @@ allowed-tools: TodoWrite(*), Read(*), Write(*), Bash(*), Glob(*)
/workflow:brainstorm:artifacts "<topic>" [--roles "role1,role2,role3"]
```
**Recommended Structured Format**:
```bash
/workflow:brainstorm:artifacts "GOAL: [objective] SCOPE: [boundaries] CONTEXT: [background]" [--roles "..."]
```
## Purpose
**Generate dynamic topic-framework.md tailored to selected roles**. Creates role-specific discussion frameworks that address relevant perspectives. If no roles specified, generates comprehensive framework covering common analysis areas.
**⚠️ User Intent Preservation**: Topic description is stored in session metadata and serves as authoritative reference throughout workflow lifecycle.
## Role-Based Framework Generation
**Dynamic Generation**: Framework content adapts based on selected roles

View File

@@ -12,10 +12,17 @@ allowed-tools: SlashCommand(*), Task(*), TodoWrite(*), Read(*), Write(*), Bash(*
/workflow:brainstorm:auto-parallel "<topic>" [--count N]
```
**Recommended Structured Format**:
```bash
/workflow:brainstorm:auto-parallel "GOAL: [objective] SCOPE: [boundaries] CONTEXT: [background]" [--count N]
```
**Parameters**:
- `topic` (required): Topic or challenge description
- `topic` (required): Topic or challenge description (structured format recommended)
- `--count N` (optional): Number of roles to auto-select (default: 3, max: 9)
**⚠️ User Intent Preservation**: Topic description is stored in session metadata as authoritative reference throughout entire brainstorming workflow and plan generation.
## Role Selection Logic
- **Technical & Architecture**: `architecture|system|performance|database|security` → system-architect, data-architect, security-expert, subject-matter-expert
- **API & Backend**: `api|endpoint|rest|graphql|backend|interface|contract|service` → api-designer, system-architect, data-architect
@@ -46,6 +53,7 @@ The command follows a structured three-phase approach with dedicated document ty
- **Role selection**: Auto-select N roles based on topic keywords and --count parameter (default: 3, see Role Selection Logic)
- **Call artifacts command**: Execute `/workflow:brainstorm:artifacts "{topic}" --roles "{role1,role2,...,roleN}"` using SlashCommand tool
- **Role-specific framework**: Generate framework with sections tailored to selected roles
- **⚠️ User intent storage**: Topic saved in workflow-session.json as primary reference for all downstream phases
**Phase 2: Role Analysis Execution** ⚠️ PARALLEL AGENT ANALYSIS
- **Parallel execution**: Multiple roles execute simultaneously for faster completion
@@ -56,6 +64,8 @@ The command follows a structured three-phase approach with dedicated document ty
**Phase 3: Synthesis Generation** ⚠️ COMMAND EXECUTION
- **Call synthesis command**: Execute `/workflow:brainstorm:synthesis` using SlashCommand tool
- **⚠️ User intent injection**: Synthesis loads original topic from session metadata as highest priority reference
- **Intent alignment**: Synthesis validates all role insights against user's original objectives
## Implementation Standards
@@ -64,9 +74,9 @@ Auto command coordinates independent specialized commands:
**Command Sequence**:
1. **Role Selection**: Auto-select N relevant roles based on topic keywords and --count parameter (default: 3)
2. **Generate Role-Specific Framework**: Use SlashCommand to execute `/workflow:brainstorm:artifacts "{topic}" --roles "{role1,role2,...,roleN}"`
2. **Generate Role-Specific Framework**: Use SlashCommand to execute `/workflow:brainstorm:artifacts "{topic}" --roles "{role1,role2,...,roleN}"` (stores user intent in session)
3. **Parallel Role Analysis**: Execute selected role agents in parallel, each reading their specific framework section
4. **Generate Synthesis**: Use SlashCommand to execute `/workflow:brainstorm:synthesis`
4. **Generate Synthesis**: Use SlashCommand to execute `/workflow:brainstorm:synthesis` (loads user intent from session as primary reference)
**SlashCommand Integration**:
1. **artifacts command**: Called via SlashCommand tool with `--roles` parameter for role-specific framework generation
@@ -172,16 +182,17 @@ Task(subagent_type="conceptual-planning-agent",
- Output: role_template
3. **load_session_metadata**
- Action: Load session metadata and topic description
- Action: Load session metadata and original user intent
- Command: bash(cat .workflow/WFS-{topic}/workflow-session.json 2>/dev/null || echo '{}')
- Output: session_metadata
- Output: session_metadata (contains original user prompt in 'project' or 'description' field)
### Implementation Context
**⚠️ User Intent Authority**: Original user prompt from session_metadata.project is PRIMARY reference
**Topic Framework**: Use loaded topic-framework.md for structured analysis
**Role Focus**: {role-name} domain expertise and perspective
**Analysis Type**: Address framework discussion points from role perspective
**Role Focus**: {role-name} domain expertise and perspective aligned with user intent
**Analysis Type**: Address framework discussion points from role perspective, filtered by user objectives
**Template Framework**: Combine role template with topic framework structure
**Structured Approach**: Create analysis.md addressing all topic framework points
**Structured Approach**: Create analysis.md addressing all topic framework points relevant to user's goals
### Session Context
**Workflow Directory**: .workflow/WFS-{topic}/.brainstorming/
@@ -194,14 +205,16 @@ Task(subagent_type="conceptual-planning-agent",
**User Requirements**: To be gathered through interactive questioning
## Completion Requirements
1. Execute all flow control steps in sequence (load topic framework, role template, session metadata)
2. **Address Topic Framework**: Respond to all discussion points in topic-framework.md from role perspective
3. Apply role template guidelines within topic framework structure
4. Generate structured role analysis addressing framework points
5. Create single comprehensive deliverable in OUTPUT_LOCATION:
- analysis.md (structured analysis addressing all topic framework points with role-specific insights)
6. Include framework reference: @../topic-framework.md in analysis.md
7. Update workflow-session.json with completion status",
1. Execute all flow control steps in sequence (load topic framework, role template, session metadata with user intent)
2. **⚠️ User Intent Alignment**: Validate analysis aligns with original user objectives from session_metadata
3. **Address Topic Framework**: Respond to all discussion points in topic-framework.md from role perspective
4. **Filter by User Goals**: Prioritize insights directly relevant to user's stated objectives
5. Apply role template guidelines within topic framework structure
6. Generate structured role analysis addressing framework points aligned with user intent
7. Create single comprehensive deliverable in OUTPUT_LOCATION:
- analysis.md (structured analysis addressing all topic framework points with role-specific insights filtered by user goals)
8. Include framework reference: @../topic-framework.md in analysis.md
9. Update workflow-session.json with completion status",
description="Execute {role-name} brainstorming analysis")
```

View File

@@ -1,7 +1,7 @@
---
name: synthesis
description: Generate synthesis-specification.md from topic-framework and role analyses with @ references using conceptual-planning-agent
argument-hint: "no arguments required - synthesizes existing framework and role analyses"
argument-hint: "[optional: --session session-id]"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*)
---
@@ -56,19 +56,30 @@ Initialize synthesis task tracking using TodoWrite at command start:
### Phase 1: Document Discovery & Validation
```bash
# Detect active brainstorming session
CHECK: .workflow/.active-* marker files
IF active_session EXISTS:
session_id = get_active_session()
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
IF --session parameter provided:
session_id = provided session
ELSE:
ERROR: "No active brainstorming session found"
EXIT
CHECK: .workflow/.active-* marker files
IF active_session EXISTS:
session_id = get_active_session()
ELSE:
ERROR: "No active brainstorming session found"
EXIT
brainstorm_dir = .workflow/WFS-{session}/.brainstorming/
# Validate required documents
CHECK: brainstorm_dir/topic-framework.md
IF NOT EXISTS:
ERROR: "topic-framework.md not found. Run /workflow:brainstorm:artifacts first"
EXIT
# Load user's original prompt from session metadata
session_metadata = Read(.workflow/WFS-{session}/workflow-session.json)
original_user_prompt = session_metadata.project || session_metadata.description
IF NOT original_user_prompt:
WARN: "No original user prompt found in session metadata"
original_user_prompt = "Not available"
```
### Phase 2: Role Analysis Discovery
@@ -92,6 +103,7 @@ FIND_ANALYSES: [
# - test-strategist
LOAD_DOCUMENTS: {
"original_user_prompt": original_user_prompt (from session metadata),
"topic_framework": topic-framework.md,
"role_analyses": [dynamically discovered analysis.md files],
"participating_roles": [extract role names from discovered directories],
@@ -100,6 +112,7 @@ LOAD_DOCUMENTS: {
# Note: Not all roles participate in every brainstorming session
# Only synthesize roles that actually produced analysis.md files
# CRITICAL: Original user prompt MUST be primary reference for synthesis
```
### Phase 3: Update Mechanism Check
@@ -133,32 +146,40 @@ OUTPUT_PATH: .workflow/WFS-{session}/.brainstorming/synthesis-specification.md
SESSION_ID: {session_id}
ANALYSIS_MODE: cross_role_synthesis
## ⚠️ CRITICAL: User Intent Authority
**ORIGINAL USER PROMPT IS THE PRIMARY REFERENCE**: {original_user_prompt}
All synthesis MUST align with user's original intent. Topic framework and role analyses are supplementary context.
## Flow Control Steps
1. **load_topic_framework**
1. **load_original_user_prompt**
- Action: Load user's original intent from session metadata
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
- Extract: project field or description field
- Output: original_user_prompt (PRIMARY REFERENCE)
- Priority: HIGHEST - This is the authoritative source of user intent
2. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/WFS-{session}/.brainstorming/topic-framework.md)
- Output: topic_framework_content
- Note: Validate alignment with original_user_prompt
2. **discover_role_analyses**
3. **discover_role_analyses**
- Action: Dynamically discover all participating role analysis files
- Command: Glob(.workflow/WFS-{session}/.brainstorming/*/analysis.md)
- Output: role_analysis_paths, participating_roles
3. **load_role_analyses**
4. **load_role_analyses**
- Action: Load all discovered role analysis documents
- Command: Read(each path from role_analysis_paths)
- Output: role_analyses_content
- Note: Filter insights relevant to original_user_prompt
4. **check_existing_synthesis**
5. **check_existing_synthesis**
- Action: Check if synthesis-specification.md already exists
- Command: Read(.workflow/WFS-{session}/.brainstorming/synthesis-specification.md) [if exists]
- Output: existing_synthesis_content [optional]
5. **load_session_metadata**
- Action: Load session metadata and context
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
- Output: session_context
6. **load_synthesis_template**
- Action: Load synthesis role template for structure and guidelines
- Command: Read(~/.claude/workflows/cli-templates/planning-roles/synthesis-role.md)
@@ -166,14 +187,22 @@ ANALYSIS_MODE: cross_role_synthesis
## Synthesis Requirements
### ⚠️ PRIMARY REQUIREMENT: User Intent Alignment
**User's Original Goal is Supreme**: Synthesis MUST directly address {original_user_prompt}
**Intent Validation**: Every requirement, design decision, and recommendation must trace back to user's original intent
**Deviation Detection**: Flag any role analysis points that drift from user's stated goals
**Refocus Mechanism**: When role discussions diverge, explicitly refocus on original user prompt
**Traceability**: Each section should reference how it fulfills user's original intent
### Core Integration
**Cross-Role Analysis**: Integrate all discovered role analyses with comprehensive coverage
**Framework Integration**: Address how each role responded to topic-framework.md discussion points
**User Intent Filter**: Prioritize insights that directly serve user's original prompt
**Decision Transparency**: Document both adopted solutions and rejected alternatives with rationale
**Process Integration**: Include team capability gaps, process risks, and collaboration patterns
**Visual Documentation**: Include key diagrams (architecture, data model, user journey) via Mermaid
**Priority Matrix**: Create quantified recommendation matrix with multi-dimensional evaluation
**Actionable Plan**: Provide phased implementation roadmap with clear next steps
**Priority Matrix**: Create quantified recommendation matrix aligned with user's goals
**Actionable Plan**: Provide phased implementation roadmap addressing user's original objectives
### Cross-Role Analysis Process (Agent Internal Execution)
Perform systematic cross-role analysis following these steps:
@@ -200,13 +229,16 @@ Follow synthesis-specification.md structure defined in synthesis-role.md templat
3. **Session Metadata Update**: Update workflow-session.json with synthesis completion status
## Completion Criteria
- ⚠️ **USER INTENT ALIGNMENT**: Synthesis directly addresses user's original prompt
- All discovered role analyses integrated without gaps
- Framework discussion points addressed across all roles
- **Intent traceability**: Each major section references user's original goals
- Controversial points documented with dissenting roles identified
- Process concerns (team capabilities, risks, collaboration) captured
- Quantified priority recommendations with evaluation criteria
- Actionable implementation plan with phased approach
- Quantified priority recommendations aligned with user's objectives
- Actionable implementation plan addressing user's stated goals
- Comprehensive risk assessment with mitigation strategies
- **Deviation warnings**: Any significant departures from user intent flagged explicitly
## Execution Notes
- Dynamic role participation: Only synthesize roles that produced analysis.md files

View File

@@ -66,6 +66,13 @@ This serves as a quality gate to ensure conceptual clarity before detailed task
# Load primary artifact (determined in step 1)
primary_content = Read(primary_artifact)
# Load user's original intent from session metadata
session_metadata = Read(.workflow/WFS-{session}/workflow-session.json)
original_user_intent = session_metadata.project || session_metadata.description
IF NOT original_user_intent:
WARN: "No original user intent found in session metadata"
original_user_intent = "Not available"
# Load mode-specific supplementary artifacts
IF clarify_mode == "brainstorm":
topic_framework = Read(brainstorm_dir + "/topic-framework.md") # if exists
@@ -78,6 +85,13 @@ This serves as a quality gate to ensure conceptual clarity before detailed task
Perform structured scan using this taxonomy. For each category, mark status: **Clear** / **Partial** / **Missing**.
**⚠️ User Intent Alignment** (NEW - HIGHEST PRIORITY):
- Primary artifact alignment with original user intent
- Goal consistency between artifact and user's stated objectives
- Scope match with user's requirements
- Success criteria reflects user's expectations
- Any unexplained deviations from user intent
**Requirements Clarity**:
- Functional requirements specificity and measurability
- Non-functional requirements with quantified targets
@@ -271,6 +285,13 @@ This serves as a quality gate to ensure conceptual clarity before detailed task
- OR 🔄 **Run /workflow:concept-clarify Again**: If new information available
### Next Steps
**If Outstanding Items Exist**:
1. TodoWrite tracking for outstanding issues
2. Call conceptual-planning-agent to fix conceptual gaps
3. Wait for user confirmation before proceeding
**If No Issues**:
```bash
/workflow:plan # Generate IMPL_PLAN.md and task.json
```

View File

@@ -35,28 +35,21 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag
- **Autonomous completion**: **Execute all tasks without user interruption until workflow complete**
## Flow Control Execution
**[FLOW_CONTROL]** marker indicates sequential step execution required for context gathering and preparation. **These steps are executed BY THE AGENT, not by the workflow:execute command.**
**[FLOW_CONTROL]** marker indicates task JSON contains `flow_control.pre_analysis` steps for context preparation.
### Flow Control Rules
1. **Auto-trigger**: When `task.flow_control.pre_analysis` array exists in task JSON, agents execute these steps
2. **Sequential Processing**: Agents execute steps in order, accumulating context including artifacts
3. **Variable Passing**: Agents use `[variable_name]` syntax to reference step outputs including artifact content
4. **Error Handling**: Agents follow step-specific error strategies (`fail`, `skip_optional`, `retry_once`)
5. **Artifacts Priority**: When artifacts exist in task.context.artifacts, load synthesis specifications first
### Orchestrator Responsibility
- Pass complete task JSON to agent (including `flow_control` block)
- Provide session paths for artifact access
- Monitor agent completion
### Execution Pattern
```
Step 1: load_dependencies → dependency_context
Step 2: analyze_patterns [dependency_context] → pattern_analysis
Step 3: implement_solution [pattern_analysis] [dependency_context] → implementation
```
### Agent Responsibility
- Parse `flow_control.pre_analysis` array from JSON
- Execute steps sequentially with variable substitution
- Accumulate context from artifacts and dependencies
- Follow error handling per `step.on_error`
- Complete implementation using accumulated context
### Context Accumulation Process (Executed by Agents)
- **Load Artifacts**: Agents retrieve synthesis specifications and brainstorming outputs from `context.artifacts`
- **Load Dependencies**: Agents retrieve summaries from `context.depends_on` tasks
- **Execute Analysis**: Agents run CLI tools with accumulated context including artifacts
- **Prepare Implementation**: Agents build comprehensive context for implementation
- **Continue Implementation**: Agents use all accumulated context including artifacts for task execution
**Orchestrator does NOT execute flow control steps - Agent interprets and executes them from JSON.**
## Execution Lifecycle
@@ -143,6 +136,102 @@ completed → skip
blocked → skip until dependencies clear
```
## Batch Execution with Dependency Graph
### Parallel Execution Algorithm
**Core principle**: Execute independent tasks concurrently in batches based on dependency graph.
#### Algorithm Steps
```javascript
function executeBatchWorkflow(sessionId) {
// 1. Build dependency graph from task JSONs
const graph = buildDependencyGraph(`.workflow/${sessionId}/.task/*.json`);
// 2. Process batches until graph is empty
while (!graph.isEmpty()) {
// 3. Identify current batch (tasks with in-degree = 0)
const batch = graph.getNodesWithInDegreeZero();
// 4. Check for parallel execution opportunities
const parallelGroups = groupByExecutionGroup(batch);
// 5. Execute batch concurrently
await Promise.all(
parallelGroups.map(group => executeBatch(group))
);
// 6. Update graph: remove completed tasks and their edges
graph.removeNodes(batch);
// 7. Update TodoWrite to reflect completed batch
updateTodoWriteAfterBatch(batch);
}
// 8. All tasks complete - auto-complete session
SlashCommand("/workflow:session:complete");
}
function buildDependencyGraph(taskFiles) {
const tasks = loadAllTaskJSONs(taskFiles);
const graph = new DirectedGraph();
tasks.forEach(task => {
graph.addNode(task.id, task);
// Add edges for dependencies
task.context.depends_on?.forEach(depId => {
graph.addEdge(depId, task.id); // Edge from dependency to task
});
});
return graph;
}
function groupByExecutionGroup(tasks) {
const groups = {};
tasks.forEach(task => {
const groupId = task.meta.execution_group || task.id;
if (!groups[groupId]) groups[groupId] = [];
groups[groupId].push(task);
});
return Object.values(groups);
}
async function executeBatch(tasks) {
// Execute all tasks in batch concurrently
return Promise.all(
tasks.map(task => executeTask(task))
);
}
```
#### Execution Group Rules
1. **Same `execution_group` ID** → Execute in parallel (independent, different contexts)
2. **No `execution_group` (null)** → Execute sequentially (has dependencies)
3. **Different `execution_group` IDs** → Execute in parallel (independent batches)
4. **Same `context_signature`** → Should have been merged (warning if not)
#### Parallel Execution Example
```
Batch 1 (no dependencies):
- IMPL-1.1 (execution_group: "parallel-auth-api") → Agent 1
- IMPL-1.2 (execution_group: "parallel-ui-comp") → Agent 2
- IMPL-1.3 (execution_group: "parallel-db-schema") → Agent 3
Wait for Batch 1 completion...
Batch 2 (depends on Batch 1):
- IMPL-2.1 (execution_group: null, depends_on: [IMPL-1.1, IMPL-1.2]) → Agent 1
Wait for Batch 2 completion...
Batch 3 (independent of Batch 2):
- IMPL-3.1 (execution_group: "parallel-tests-1") → Agent 1
- IMPL-3.2 (execution_group: "parallel-tests-2") → Agent 2
```
## TodoWrite Coordination
**Comprehensive workflow tracking** with immediate status updates throughout entire execution without user interruption:
@@ -150,8 +239,11 @@ blocked → skip until dependencies clear
1. **Initial Creation**: Generate TodoWrite from discovered pending tasks for entire workflow
- **Normal Mode**: Create from discovery results
- **Resume Mode**: Create from existing session state and current progress
2. **Single In-Progress**: Mark ONLY ONE task as `in_progress` at a time
3. **Immediate Updates**: Update status after each task completion without user interruption
2. **Parallel Task Support**:
- **Single-task execution**: Mark ONLY ONE task as `in_progress` at a time
- **Batch execution**: Mark ALL tasks in current batch as `in_progress` simultaneously
- **Execution group indicator**: Show `[execution_group: group-id]` for parallel tasks
3. **Immediate Updates**: Update status after each task/batch completion without user interruption
4. **Status Synchronization**: Sync with JSON task files after updates
5. **Continuous Tracking**: Maintain TodoWrite throughout entire workflow execution until completion
@@ -167,36 +259,71 @@ blocked → skip until dependencies clear
**Use Claude Code's built-in TodoWrite tool** to track workflow progress in real-time:
```javascript
// Create initial todo list from discovered pending tasks
// Example 1: Sequential execution (traditional)
TodoWrite({
todos: [
{
content: "Execute IMPL-1.1: Design auth schema [code-developer] [FLOW_CONTROL]",
status: "pending",
status: "in_progress", // Single task in progress
activeForm: "Executing IMPL-1.1: Design auth schema"
},
{
content: "Execute IMPL-1.2: Implement auth logic [code-developer] [FLOW_CONTROL]",
status: "pending",
activeForm: "Executing IMPL-1.2: Implement auth logic"
},
{
content: "Execute TEST-FIX-1: Validate implementation tests [test-fix-agent]",
status: "pending",
activeForm: "Executing TEST-FIX-1: Validate implementation tests"
}
]
});
// Update status as tasks progress - ONLY ONE task should be in_progress at a time
// Example 2: Batch execution (parallel tasks with execution_group)
TodoWrite({
todos: [
{
content: "Execute IMPL-1.1: Design auth schema [code-developer] [FLOW_CONTROL]",
status: "in_progress", // Mark current task as in_progress
activeForm: "Executing IMPL-1.1: Design auth schema"
content: "Execute IMPL-1.1: Build Auth API [code-developer] [execution_group: parallel-auth-api]",
status: "in_progress", // Batch task 1
activeForm: "Executing IMPL-1.1: Build Auth API"
},
// ... other tasks remain pending
{
content: "Execute IMPL-1.2: Build User UI [code-developer] [execution_group: parallel-ui-comp]",
status: "in_progress", // Batch task 2 (running concurrently)
activeForm: "Executing IMPL-1.2: Build User UI"
},
{
content: "Execute IMPL-1.3: Setup Database [code-developer] [execution_group: parallel-db-schema]",
status: "in_progress", // Batch task 3 (running concurrently)
activeForm: "Executing IMPL-1.3: Setup Database"
},
{
content: "Execute IMPL-2.1: Integration Tests [test-fix-agent] [depends_on: IMPL-1.1, IMPL-1.2, IMPL-1.3]",
status: "pending", // Next batch (waits for current batch completion)
activeForm: "Executing IMPL-2.1: Integration Tests"
}
]
});
// Example 3: After batch completion
TodoWrite({
todos: [
{
content: "Execute IMPL-1.1: Build Auth API [code-developer] [execution_group: parallel-auth-api]",
status: "completed", // Batch completed
activeForm: "Executing IMPL-1.1: Build Auth API"
},
{
content: "Execute IMPL-1.2: Build User UI [code-developer] [execution_group: parallel-ui-comp]",
status: "completed", // Batch completed
activeForm: "Executing IMPL-1.2: Build User UI"
},
{
content: "Execute IMPL-1.3: Setup Database [code-developer] [execution_group: parallel-db-schema]",
status: "completed", // Batch completed
activeForm: "Executing IMPL-1.3: Setup Database"
},
{
content: "Execute IMPL-2.1: Integration Tests [test-fix-agent]",
status: "in_progress", // Next batch started
activeForm: "Executing IMPL-2.1: Integration Tests"
}
]
});
```
@@ -282,82 +409,40 @@ TodoWrite({
#### Agent Prompt Template
```bash
Task(subagent_type="{meta.agent}",
prompt="**TASK EXECUTION WITH FULL JSON LOADING**
prompt="**EXECUTE TASK FROM JSON**
## STEP 1: Load Complete Task JSON
**MANDATORY**: First load the complete task JSON from: {session.task_json_path}
## Task JSON Location
{session.task_json_path}
cat {session.task_json_path}
## Instructions
1. **Load Complete Task JSON**: Read and validate all fields (id, title, status, meta, context, flow_control)
2. **Execute Flow Control**: If `flow_control.pre_analysis` exists, execute steps sequentially:
- Load artifacts (synthesis-specification.md, role analyses) using commands in each step
- Accumulate context from step outputs using variable substitution [variable_name]
- Handle errors per step.on_error (skip_optional | fail | retry_once)
3. **Implement Solution**: Follow `flow_control.implementation_approach` using accumulated context
4. **Complete Task**:
- Update task status: `jq '.status = \"completed\"' {session.task_json_path} > temp.json && mv temp.json {session.task_json_path}`
- Update TODO list: {session.todo_list_path}
- Generate summary: {session.summaries_dir}/{task.id}-summary.md
- Check workflow completion and call `/workflow:session:complete` if all tasks done
**CRITICAL**: Validate all 5 required fields are present:
- id, title, status, meta, context, flow_control
## Context Sources (All from JSON)
- Requirements: `context.requirements`
- Focus Paths: `context.focus_paths`
- Acceptance: `context.acceptance`
- Artifacts: `context.artifacts` (synthesis specs, brainstorming outputs)
- Dependencies: `context.depends_on`
- Target Files: `flow_control.target_files`
## STEP 2: Task Definition (From Loaded JSON)
**ID**: Use id field from JSON
**Title**: Use title field from JSON
**Type**: Use meta.type field from JSON
**Agent**: Use meta.agent field from JSON
**Status**: Verify status is pending or active
## Session Paths
- Workflow Dir: {session.workflow_dir}
- TODO List: {session.todo_list_path}
- Summaries: {session.summaries_dir}
- Flow Context: {flow_context.step_outputs}
## STEP 3: Flow Control Execution (if flow_control.pre_analysis exists)
**AGENT RESPONSIBILITY**: Execute pre_analysis steps sequentially from loaded JSON:
**PRIORITY: Artifact Loading Steps First**
1. **Load Synthesis Specification** (if present): Priority artifact loading for consolidated design
2. **Load Individual Artifacts** (fallback): Load role-specific brainstorming outputs if synthesis unavailable
3. **Execute Remaining Steps**: Continue with other pre_analysis steps
For each step in flow_control.pre_analysis array:
1. Execute step.command/commands with variable substitution (support both single command and commands array)
2. Store output to step.output_to variable
3. Handle errors per step.on_error strategy (skip_optional, fail, retry_once)
4. Pass accumulated variables to next step including artifact context
**Special Artifact Loading Commands**:
- Use `bash(ls path 2>/dev/null || echo 'file not found')` for artifact existence checks
- Use `Read(path)` for loading artifact content
- Use `find` commands for discovering multiple artifact files
- Reference artifacts in subsequent steps using output variables: [synthesis_specification], [individual_artifacts]
## STEP 4: Implementation Context (From JSON context field)
**Requirements**: Use context.requirements array from JSON
**Focus Paths**: Use context.focus_paths array from JSON
**Acceptance Criteria**: Use context.acceptance array from JSON
**Dependencies**: Use context.depends_on array from JSON
**Parent Context**: Use context.inherited object from JSON
**Artifacts**: Use context.artifacts array from JSON (synthesis specifications, brainstorming outputs)
**Target Files**: Use flow_control.target_files array from JSON
**Implementation Approach**: Use flow_control.implementation_approach object from JSON (with artifact integration)
## STEP 5: Session Context (Provided by workflow:execute)
**Workflow Directory**: {session.workflow_dir}
**TODO List Path**: {session.todo_list_path}
**Summaries Directory**: {session.summaries_dir}
**Task JSON Path**: {session.task_json_path}
**Flow Context**: {flow_context.step_outputs}
## STEP 6: Agent Completion Requirements
1. **Load Task JSON**: Read and validate complete task structure
2. **Execute Flow Control**: Run all pre_analysis steps if present
3. **Implement Solution**: Follow implementation_approach from JSON
4. **Update Progress**: Mark task status in JSON as completed
5. **Update TODO List**: Update TODO_LIST.md at provided path
6. **Generate Summary**: Create completion summary in summaries directory
7. **Check Workflow Complete**: After task completion, check if all workflow tasks done
8. **Auto-Complete Session**: If all tasks completed, call SlashCommand(\"/workflow:session:complete\")
**JSON UPDATE COMMAND**:
Update task status to completed using jq:
jq '.status = \"completed\"' {session.task_json_path} > temp.json && mv temp.json {session.task_json_path}
**WORKFLOW COMPLETION CHECK**:
After updating task status, check if workflow is complete:
total_tasks=\$(find .workflow/*/\.task/ -name "*.json" -type f 2>/dev/null | wc -l)
completed_tasks=\$(find .workflow/*/\.summaries/ -name "*.md" -type f 2>/dev/null | wc -l)
if [ \$total_tasks -eq \$completed_tasks ]; then
SlashCommand(command=\"/workflow:session:complete\")
fi"),
description="Execute task with full JSON loading and validation")
**Complete JSON structure is authoritative - load and follow it exactly.**"),
description="Execute task: {task.id}")
```
#### Agent JSON Loading Specification

View File

@@ -84,7 +84,7 @@ CONTEXT: Existing user database schema, REST API endpoints
**Parse Output**:
- Extract: context-package.json path (store as `contextPath`)
- Typical pattern: `.workflow/[sessionId]/.context/context-package.json`
- Typical pattern: `.workflow/[sessionId]/.process/context-package.json`
**Validation**:
- Context package path extracted
@@ -96,45 +96,19 @@ CONTEXT: Existing user database schema, REST API endpoints
---
### Phase 3: Intelligent Analysis (Agent-Delegated)
### Phase 3: Intelligent Analysis
**Command**: `Task(subagent_type="cli-execution-agent", description="Intelligent Analysis", prompt="...")`
**Agent Task Prompt**:
```
Analyze project requirements and generate comprehensive solution blueprint for session [sessionId].
Context: Load context package from [contextPath]
Output: Generate ANALYSIS_RESULTS.md in .workflow/[sessionId]/.process/
Requirements:
- Review context-package.json and discover additional relevant files
- Analyze architecture patterns, data models, and dependencies
- Identify technical constraints and risks
- Generate comprehensive solution blueprint
- Include task breakdown recommendations
Session: [sessionId]
Mode: analysis (read-only during discovery, write for ANALYSIS_RESULTS.md)
```
**Command**: `SlashCommand(command="/workflow:tools:concept-enhanced --session [sessionId] --context [contextPath]")`
**Input**: `sessionId` from Phase 1, `contextPath` from Phase 2
**Agent Execution**:
- Phase 1: Understands analysis intent, extracts keywords
- Phase 2: Discovers additional context via MCP code-index
- Phase 3: Enhances prompt with discovered patterns
- Phase 4: Executes with Gemini (analysis mode), generates ANALYSIS_RESULTS.md
- Phase 5: Routes output to session directory
**Parse Output**:
- Agent returns execution log path
- Verify ANALYSIS_RESULTS.md created by agent
- Extract: Execution status (success/failed)
- Verify: ANALYSIS_RESULTS.md file path
**Validation**:
- File `.workflow/[sessionId]/.process/ANALYSIS_RESULTS.md` exists
- Contains task recommendations section
- Agent execution log saved to `.workflow/[sessionId]/.chat/`
**TodoWrite**: Mark phase 3 completed, phase 3.5 in_progress
@@ -186,9 +160,11 @@ Mode: analysis (read-only during discovery, write for ANALYSIS_RESULTS.md)
**Relationship with Brainstorm Phase**:
- If brainstorm synthesis exists (synthesis-specification.md), Phase 3 analysis incorporates it as input
- **⚠️ User's original intent is ALWAYS primary**: New or refined user goals override synthesis recommendations
- **synthesis-specification.md defines "WHAT"**: Requirements, design specs, high-level features
- **IMPL_PLAN.md defines "HOW"**: Executable task breakdown, dependencies, implementation sequence
- Task generation translates high-level specifications into concrete, actionable work items
- **Intent priority**: Current user prompt > synthesis-specification.md > topic-framework.md
**Command Selection**:
- Manual: `SlashCommand(command="/workflow:tools:task-generate --session [sessionId]")`

View File

@@ -17,55 +17,27 @@ Advanced solution design and feasibility analysis engine with parallel CLI execu
**Usage**: Standalone command or integrated into `/workflow:plan`. Accepts context packages and orchestrates Gemini/Codex for comprehensive analysis.
## Core Philosophy & Responsibilities
- **Agent Coordination**: Delegate analysis execution to specialized agent (cli-execution-agent)
- **Solution-Focused Analysis**: Emphasize design decisions, architectural rationale, and critical insights (exclude task planning)
- **Context-Driven**: Parse and validate context-package.json for precise analysis
- **Intelligent Tool Selection**: Gemini for design (all tasks), Codex for validation (complex tasks only)
- **Parallel Execution**: Execute multiple CLI tools simultaneously for efficiency
- **Agent-Driven Tool Selection**: Agent autonomously selects Gemini/Codex based on task complexity
- **Solution Design**: Evaluate architecture, identify key design decisions with rationale
- **Feasibility Assessment**: Analyze technical complexity, risks, implementation readiness
- **Optimization Recommendations**: Performance, security, and code quality improvements
- **Perspective Synthesis**: Integrate multi-tool insights into unified assessment
- **Output Validation**: Verify ANALYSIS_RESULTS.md generation and quality
- **Single Output**: Generate only ANALYSIS_RESULTS.md with technical analysis
## Analysis Strategy Selection
### Tool Selection by Task Complexity
**Agent-Driven Strategy**: cli-execution-agent autonomously determines tool selection based on:
- **Task Complexity**: Number of modules, integration scope, technical depth
- **Tech Stack**: Frontend (Gemini-focused), Backend (Codex-preferred), Fullstack (hybrid)
- **Analysis Focus**: Architecture design (Gemini), Feasibility validation (Codex), Performance optimization (both)
**Simple Tasks (≤3 modules)**:
- **Primary**: Gemini (rapid understanding + pattern recognition)
- **Support**: Code-index (structural analysis)
- **Mode**: Single-round analysis
**Medium Tasks (4-6 modules)**:
- **Primary**: Gemini (comprehensive analysis + architecture design)
- **Support**: Code-index + Exa (best practices)
- **Mode**: Single comprehensive round
**Complex Tasks (>6 modules)**:
- **Primary**: Gemini (comprehensive analysis) + Codex (validation)
- **Mode**: Parallel execution - Gemini design + Codex feasibility
### Tool Preferences by Tech Stack
```json
{
"frontend": {
"primary": "gemini",
"secondary": "codex",
"focus": ["component_design", "state_management", "ui_patterns"]
},
"backend": {
"primary": "codex",
"secondary": "gemini",
"focus": ["api_design", "data_flow", "security", "performance"]
},
"fullstack": {
"primary": "gemini",
"secondary": "codex",
"focus": ["system_architecture", "integration", "data_consistency"]
}
}
```
**Complexity Tiers** (Agent decides internally):
- **Simple (≤3 modules)**: Gemini-only analysis
- **Medium (4-6 modules)**: Gemini comprehensive analysis
- **Complex (>6 modules)**: Gemini + Codex parallel execution
## Execution Lifecycle
@@ -73,280 +45,158 @@ Advanced solution design and feasibility analysis engine with parallel CLI execu
1. **Session Validation**: Verify `.workflow/{session_id}/` exists, load `workflow-session.json`
2. **Context Package Validation**: Verify path, validate JSON format and structure
3. **Task Analysis**: Extract keywords, identify domain/complexity, determine scope
4. **Tool Selection**: Gemini (all tasks), +Codex (complex only), load templates
4. **Agent Preparation**: Prepare agent task prompt with complete analysis requirements
### Phase 2: Analysis Preparation
1. **Workspace Setup**: Create `.workflow/{session_id}/.process/`, initialize logs, set resource limits
2. **Context Optimization**: Filter high-priority assets, organize structure, prepare templates
3. **Execution Environment**: Configure CLI tools, set timeouts, prepare error handling
### Phase 2: Agent-Delegated Analysis
### Phase 3: Parallel Analysis Execution
1. **Gemini Solution Design & Architecture Analysis**
```bash
~/.claude/scripts/gemini-wrapper -p "
PURPOSE: Analyze and design optimal solution for {task_description}
TASK: Evaluate current architecture, propose solution design, identify key design decisions
CONTEXT: @{.workflow/{session_id}/.process/context-package.json,.workflow/{session_id}/workflow-session.json,CLAUDE.md}
**Agent Invocation**:
```javascript
Task(
subagent_type="cli-execution-agent",
description="Enhanced solution design and feasibility analysis",
prompt=`
## Execution Context
**MANDATORY**: Read context-package.json to understand task requirements, source files, tech stack, project structure
**Session ID**: {session_id}
**Mode**: Enhanced Analysis with CLI Tool Orchestration
**ANALYSIS PRIORITY**:
1. PRIMARY: Individual role analysis.md files (system-architect, ui-designer, etc.) - technical details, ADRs, decision context
2. SECONDARY: synthesis-specification.md - integrated requirements, cross-role alignment
3. REFERENCE: topic-framework.md - discussion context
## Input Context
EXPECTED:
1. CURRENT STATE: Existing patterns, code structure, integration points, technical debt
2. SOLUTION DESIGN: Core principles, system design, key decisions with rationale
3. CRITICAL INSIGHTS: Strengths, gaps, risks, tradeoffs
4. OPTIMIZATION: Performance, security, code quality recommendations
5. FEASIBILITY: Complexity analysis, compatibility, implementation readiness
6. OUTPUT: Write to .workflow/{session_id}/.process/gemini-solution-design.md
**Context Package**: {context_path}
**Session State**: .workflow/{session_id}/workflow-session.json
**Project Standards**: CLAUDE.md
RULES:
- Focus on SOLUTION IMPROVEMENTS and KEY DESIGN DECISIONS (NO task planning)
- Identify code targets: existing "file:function:lines", new files "file"
- Do NOT create task lists, implementation steps, or code examples
" --approval-mode yolo
```
Output: `.workflow/{session_id}/.process/gemini-solution-design.md`
## Analysis Task
2. **Codex Technical Feasibility Validation** (Complex Tasks Only)
```bash
codex --full-auto exec "
PURPOSE: Validate technical feasibility and identify implementation risks for {task_description}
TASK: Assess complexity, validate technology choices, evaluate performance/security implications
CONTEXT: @{.workflow/{session_id}/.process/context-package.json,.workflow/{session_id}/.process/gemini-solution-design.md,.workflow/{session_id}/workflow-session.json,CLAUDE.md}
### Analysis Templates (Use these to guide CLI tool execution)
- **Document Structure**: ~/.claude/workflows/cli-templates/prompts/workflow/analysis-results-structure.txt
- **Gemini Analysis**: ~/.claude/workflows/cli-templates/prompts/workflow/gemini-solution-design.txt
- **Codex Validation**: ~/.claude/workflows/cli-templates/prompts/workflow/codex-feasibility-validation.txt
**MANDATORY**: Read context-package.json, gemini-solution-design.md, and relevant source files
### Execution Strategy
1. **Load Context**: Read context-package.json to determine task complexity (module count, integration scope)
2. **Gemini Analysis** (ALL tasks): Execute using gemini-solution-design.txt template
- Output: .workflow/{session_id}/.process/gemini-solution-design.md
3. **Codex Validation** (COMPLEX tasks >6 modules only): Execute using codex-feasibility-validation.txt template
- Output: .workflow/{session_id}/.process/codex-feasibility-validation.md
4. **Synthesize Results**: Combine outputs into ANALYSIS_RESULTS.md following analysis-results-structure.txt
EXPECTED:
1. FEASIBILITY: Complexity rating, resource requirements, technology compatibility
2. RISK ANALYSIS: Implementation risks, integration challenges, performance/security concerns
3. VALIDATION: Development approach, quality standards, maintenance implications
4. RECOMMENDATIONS: Must-have requirements, optimization opportunities, security controls
5. OUTPUT: Write to .workflow/{session_id}/.process/codex-feasibility-validation.md
### Output Requirements
RULES:
- Focus on TECHNICAL FEASIBILITY and RISK ASSESSMENT (NO implementation planning)
- Verify code targets: existing "file:function:lines", new files "file"
- Do NOT create task breakdowns, step-by-step guides, or code examples
" --skip-git-repo-check -s danger-full-access
```
Output: `.workflow/{session_id}/.process/codex-feasibility-validation.md`
**Intermediate Outputs**:
- Gemini: \`.workflow/{session_id}/.process/gemini-solution-design.md\` (always required)
- Codex: \`.workflow/{session_id}/.process/codex-feasibility-validation.md\` (complex tasks only)
3. **Parallel Execution**: Launch tools simultaneously, monitor progress, handle completion/errors, maintain logs
**Final Output**:
- \`.workflow/{session_id}/.process/ANALYSIS_RESULTS.md\` (synthesized, required)
**⚠️ IMPORTANT**: CLI commands MUST execute in foreground (NOT background). Do NOT use `run_in_background` parameter for Gemini/Codex execution.
**Required Sections** (7 sections per analysis-results-structure.txt):
1. Executive Summary
2. Current State Analysis
3. Proposed Solution Design
4. Implementation Strategy
5. Solution Optimization
6. Critical Success Factors
7. Reference Information
### Phase 4: Results Collection & Synthesis
1. **Output Validation**: Validate gemini-solution-design.md (all), codex-feasibility-validation.md (complex), use logs if incomplete, classify status
2. **Quality Assessment**: Verify design rationale, insight depth, feasibility rigor, optimization value
3. **Synthesis Strategy**: Direct integration (simple/medium), multi-tool synthesis (complex), resolve conflicts, score confidence
### Synthesis Rules
- Follow 7-section structure from analysis-results-structure.txt
- Integrate Gemini insights as primary content
- Incorporate Codex validation findings (if executed)
- Resolve conflicts between tools with clear rationale
- Generate confidence scores (1-5 scale) for all assessment dimensions
- Provide final recommendation: PROCEED | PROCEED_WITH_MODIFICATIONS | RECONSIDER | REJECT
### Phase 5: ANALYSIS_RESULTS.md Generation
1. **Report Sections**: Executive Summary, Current State, Solution Design, Implementation Strategy, Optimization, Success Factors, Confidence Scores
2. **Guidelines**: Focus on solution improvements and design decisions (exclude task planning), emphasize rationale/tradeoffs/risk assessment
3. **Output**: Single file `ANALYSIS_RESULTS.md` at `.workflow/{session_id}/.process/` with technical insights and optimization strategies
## Output
Generate final ANALYSIS_RESULTS.md and report completion status:
- Gemini analysis: [completed/failed]
- Codex validation: [completed/skipped/failed]
- Synthesis: [completed/failed]
- Final output: .workflow/{session_id}/.process/ANALYSIS_RESULTS.md
`
)
```
**Agent Execution Flow** (Internal to cli-execution-agent):
1. Parse session ID and context path, load context-package.json
2. Analyze task complexity (module count, integration scope)
3. Discover additional context via MCP code-index
4. Execute Gemini analysis (all tasks) with template-guided prompt
5. Execute Codex validation (complex tasks >6 modules) with template-guided prompt
6. Synthesize Gemini + Codex outputs into ANALYSIS_RESULTS.md
7. Verify output file exists at correct path
8. Return execution log path
**Command Execution**: Launch agent via Task tool, wait for completion
### Phase 3: Output Validation
1. **File Verification**: Confirm `.workflow/{session_id}/.process/ANALYSIS_RESULTS.md` exists
2. **Content Validation**: Verify required sections present (Executive Summary, Solution Design, etc.)
3. **Quality Check**: Ensure design rationale, feasibility assessment, confidence scores included
4. **Agent Log**: Retrieve agent execution log from `.workflow/{session_id}/.chat/`
5. **Success Criteria**: File exists, contains all required sections, meets quality standards
## Analysis Results Format
Generated ANALYSIS_RESULTS.md focuses on **solution improvements, key design decisions, and critical insights** (NOT task planning):
**Template Reference**: `~/.claude/workflows/cli-templates/prompts/workflow/analysis-results-structure.txt`
```markdown
# Technical Analysis & Solution Design
Generated ANALYSIS_RESULTS.md focuses on **solution improvements, key design decisions, and critical insights** (NOT task planning).
## Executive Summary
- **Analysis Focus**: {core_problem_or_improvement_area}
- **Analysis Timestamp**: {timestamp}
- **Tools Used**: {analysis_tools}
- **Overall Assessment**: {feasibility_score}/5 - {recommendation_status}
### Required Structure (7 Sections)
---
1. **Executive Summary**: Analysis focus, tools used, overall assessment (X/5), recommendation status
2. **Current State Analysis**: Architecture overview, compatibility/dependencies, critical findings
3. **Proposed Solution Design**: Core principles, system design, key decisions with rationale, technical specs
4. **Implementation Strategy**: Development approach, code modification targets, feasibility assessment, risk mitigation
5. **Solution Optimization**: Performance, security, code quality recommendations
6. **Critical Success Factors**: Technical requirements, quality metrics, success validation
7. **Reference Information**: Tool analysis summary, context & resources
## 1. Current State Analysis
### Key Requirements
### Architecture Overview
- **Existing Patterns**: {key_architectural_patterns}
- **Code Structure**: {current_codebase_organization}
- **Integration Points**: {system_integration_touchpoints}
- **Technical Debt Areas**: {identified_debt_with_impact}
### Compatibility & Dependencies
- **Framework Alignment**: {framework_compatibility_assessment}
- **Dependency Analysis**: {critical_dependencies_and_risks}
- **Migration Considerations**: {backward_compatibility_concerns}
### Critical Findings
- **Strengths**: {what_works_well}
- **Gaps**: {missing_capabilities_or_issues}
- **Risks**: {identified_technical_and_business_risks}
---
## 2. Proposed Solution Design
### Core Architecture Principles
- **Design Philosophy**: {key_design_principles}
- **Architectural Approach**: {chosen_architectural_pattern_with_rationale}
- **Scalability Strategy**: {how_solution_scales}
### System Design
- **Component Architecture**: {high_level_component_design}
- **Data Flow**: {data_flow_patterns_and_state_management}
- **API Design**: {interface_contracts_and_specifications}
- **Integration Strategy**: {how_components_integrate}
### Key Design Decisions
1. **Decision**: {critical_design_choice}
- **Rationale**: {why_this_approach}
- **Alternatives Considered**: {other_options_and_tradeoffs}
- **Impact**: {implications_on_architecture}
2. **Decision**: {another_critical_choice}
- **Rationale**: {reasoning}
- **Alternatives Considered**: {tradeoffs}
- **Impact**: {consequences}
### Technical Specifications
- **Technology Stack**: {chosen_technologies_with_justification}
- **Code Organization**: {module_structure_and_patterns}
- **Testing Strategy**: {testing_approach_and_coverage}
- **Performance Targets**: {performance_requirements_and_benchmarks}
---
## 3. Implementation Strategy
### Development Approach
- **Core Implementation Pattern**: {primary_implementation_strategy}
- **Module Dependencies**: {dependency_graph_and_order}
- **Quality Assurance**: {qa_approach_and_validation}
### Code Modification Targets
**Purpose**: Specific code locations for modification AND new files to create
**Identified Targets**:
1. **Target**: `src/auth/AuthService.ts:login:45-52`
- **Type**: Modify existing
- **Modification**: Enhance error handling
- **Rationale**: Current logic lacks validation
2. **Target**: `src/auth/PasswordReset.ts`
- **Type**: Create new file
- **Purpose**: Password reset functionality
- **Rationale**: New feature requirement
**Format Rules**:
- Existing files: `file:function:lines` (with line numbers)
- New files: `file` (no function or lines)
**Code Modification Targets**:
- Existing files: `file:function:lines` (e.g., `src/auth/login.ts:validateUser:45-52`)
- New files: `file` only (e.g., `src/auth/PasswordReset.ts`)
- Unknown lines: `file:function:*`
- Task generation will refine these targets during `analyze_task_patterns` step
### Feasibility Assessment
- **Technical Complexity**: {complexity_rating_and_analysis}
- **Performance Impact**: {expected_performance_characteristics}
- **Resource Requirements**: {development_resources_needed}
- **Maintenance Burden**: {ongoing_maintenance_considerations}
**Key Design Decisions** (minimum 2):
- Decision statement
- Rationale (why this approach)
- Alternatives considered (tradeoffs)
- Impact (implications on architecture)
### Risk Mitigation
- **Technical Risks**: {implementation_risks_and_mitigation}
- **Integration Risks**: {compatibility_challenges_and_solutions}
- **Performance Risks**: {performance_concerns_and_strategies}
- **Security Risks**: {security_vulnerabilities_and_controls}
**Assessment Scores** (1-5 scale):
- Conceptual Integrity, Architectural Soundness, Technical Feasibility, Implementation Readiness
- Overall Confidence score
- Final Recommendation: PROCEED | PROCEED_WITH_MODIFICATIONS | RECONSIDER | REJECT
---
## 4. Solution Optimization
### Performance Optimization
- **Optimization Strategies**: {key_performance_improvements}
- **Caching Strategy**: {caching_approach_and_invalidation}
- **Resource Management**: {resource_utilization_optimization}
- **Bottleneck Mitigation**: {identified_bottlenecks_and_solutions}
### Security Enhancements
- **Security Model**: {authentication_authorization_approach}
- **Data Protection**: {data_security_and_encryption}
- **Vulnerability Mitigation**: {known_vulnerabilities_and_controls}
- **Compliance**: {regulatory_and_compliance_considerations}
### Code Quality
- **Code Standards**: {coding_conventions_and_patterns}
- **Testing Coverage**: {test_strategy_and_coverage_goals}
- **Documentation**: {documentation_requirements}
- **Maintainability**: {maintainability_practices}
---
## 5. Critical Success Factors
### Technical Requirements
- **Must Have**: {essential_technical_capabilities}
- **Should Have**: {important_but_not_critical_features}
- **Nice to Have**: {optional_enhancements}
### Quality Metrics
- **Performance Benchmarks**: {measurable_performance_targets}
- **Code Quality Standards**: {quality_metrics_and_thresholds}
- **Test Coverage Goals**: {testing_coverage_requirements}
- **Security Standards**: {security_compliance_requirements}
### Success Validation
- **Acceptance Criteria**: {how_to_validate_success}
- **Testing Strategy**: {validation_testing_approach}
- **Monitoring Plan**: {production_monitoring_strategy}
- **Rollback Plan**: {failure_recovery_strategy}
---
## 6. Analysis Confidence & Recommendations
### Assessment Scores
- **Conceptual Integrity**: {score}/5 - {brief_assessment}
- **Architectural Soundness**: {score}/5 - {brief_assessment}
- **Technical Feasibility**: {score}/5 - {brief_assessment}
- **Implementation Readiness**: {score}/5 - {brief_assessment}
- **Overall Confidence**: {overall_score}/5
### Final Recommendation
**Status**: {PROCEED|PROCEED_WITH_MODIFICATIONS|RECONSIDER|REJECT}
**Rationale**: {clear_explanation_of_recommendation}
**Critical Prerequisites**: {what_must_be_resolved_before_proceeding}
---
## 7. Reference Information
### Tool Analysis Summary
- **Gemini Insights**: {key_architectural_and_pattern_insights}
- **Codex Validation**: {technical_feasibility_and_implementation_notes}
- **Consensus Points**: {agreements_between_tools}
- **Conflicting Views**: {disagreements_and_resolution}
### Context & Resources
- **Analysis Context**: {context_package_reference}
- **Documentation References**: {relevant_documentation}
- **Related Patterns**: {similar_implementations_in_codebase}
- **External Resources**: {external_references_and_best_practices}
```
### Content Focus
- ✅ Solution improvements and architectural decisions
- ✅ Design rationale, alternatives, and tradeoffs
- ✅ Risk assessment with mitigation strategies
- ✅ Optimization opportunities (performance, security, quality)
- ❌ Task lists or implementation steps
- ❌ Code examples or snippets
- ❌ Project management timelines
## Execution Management
### Error Handling & Recovery
1. **Pre-execution**: Verify session/context package, confirm CLI tools, validate dependencies
2. **Monitoring & Timeout**: Track progress, 30-min limit, manage parallel execution, maintain status
3. **Partial Recovery**: Generate results with incomplete outputs, use logs, provide next steps
4. **Error Recovery**: Auto error detection, structured workflows, graceful degradation
1. **Pre-execution**: Verify session/context package exists and is valid
2. **Agent Monitoring**: Track agent execution status via Task tool
3. **Validation**: Check ANALYSIS_RESULTS.md generation on completion
4. **Error Recovery**:
- Agent execution failure → report error, check agent logs
- Missing output file → retry agent execution once
- Incomplete output → use agent logs to diagnose issue
5. **Graceful Degradation**: If agent fails, report specific error and suggest manual analysis
### Performance & Resource Optimization
- **Parallel Analysis**: Execute multiple tools simultaneously to reduce time
- **Context Sharding**: Analyze large projects by module shards
- **Caching**: Reuse results for similar contexts
- **Resource Management**: Monitor disk/CPU/memory, set limits, cleanup temporary files
- **Timeout Control**: `timeout 600s` with partial result generation on failure
### Agent Delegation Benefits
- **Autonomous Tool Selection**: Agent decides Gemini/Codex based on complexity
- **Context Discovery**: Agent discovers additional relevant files via MCP
- **Prompt Enhancement**: Agent optimizes prompts with discovered patterns
- **Error Handling**: Agent manages CLI tool failures internally
- **Log Tracking**: Agent execution logs saved to `.workflow/{session_id}/.chat/`
## Integration & Success Criteria
@@ -354,8 +204,6 @@ Generated ANALYSIS_RESULTS.md focuses on **solution improvements, key design dec
**Input**:
- `--session` (required): Session ID (e.g., WFS-auth)
- `--context` (required): Context package path
- `--depth` (optional): Analysis depth (quick|full|deep)
- `--focus` (optional): Analysis focus areas
**Output**:
- Single file: `ANALYSIS_RESULTS.md` at `.workflow/{session_id}/.process/`
@@ -366,13 +214,14 @@ Generated ANALYSIS_RESULTS.md focuses on **solution improvements, key design dec
**Success Criteria**:
- ✅ Solution-focused analysis (design decisions, critical insights, NO task planning)
- ✅ Single output file only
- ✅ Single output file only (ANALYSIS_RESULTS.md)
- ✅ Design decision depth with rationale/alternatives/tradeoffs
- ✅ Feasibility assessment (complexity, risks, readiness)
- ✅ Optimization strategies (performance, security, quality)
- ✅ Parallel execution efficiency (Gemini + Codex for complex tasks)
- ✅ Robust error handling (validation, timeout, partial recovery)
- ✅ Agent-driven tool selection (autonomous Gemini/Codex execution)
- ✅ Robust error handling (validation, retry, graceful degradation)
- ✅ Confidence scoring with clear recommendation status
- ✅ Agent execution log saved to session chat directory
## Related Commands
- `/context:gather` - Generate context packages required by this command

View File

@@ -1,6 +1,6 @@
---
name: gather
description: Intelligently collect project context based on task description and package into standardized JSON
description: Intelligently collect project context using general-purpose agent based on task description and package into standardized JSON
argument-hint: "--session WFS-session-id \"task description\""
examples:
- /workflow:tools:context-gather --session WFS-user-auth "Implement user authentication system"
@@ -11,25 +11,105 @@ examples:
# Context Gather Command (/workflow:tools:context-gather)
## Overview
Intelligent context collector that gathers relevant information from project codebase, documentation, and dependencies based on task descriptions, generating standardized context packages.
Agent-driven intelligent context collector that gathers relevant information from project codebase, documentation, and dependencies based on task descriptions, generating standardized context packages.
## Core Philosophy
- **Agent-Driven**: Delegate execution to general-purpose agent for autonomous operation
- **Two-Phase Flow**: Discovery (context loading) → Execution (context gathering and packaging)
- **Memory-First**: Reuse loaded documents from conversation memory
- **MCP-Enhanced**: Use MCP tools for advanced code analysis and file discovery
- **Intelligent Collection**: Auto-identify relevant resources based on keyword analysis
- **Comprehensive Coverage**: Collect code, documentation, configurations, and dependencies
- **Standardized Output**: Generate unified format context-package.json
- **Efficient Execution**: Optimize collection strategies to avoid irrelevant information
## Core Responsibilities
- **Keyword Extraction**: Extract core keywords from task descriptions
- **Smart Documentation Loading**: Load relevant project documentation based on keywords
- **Code Structure Analysis**: Analyze project structure to locate relevant code files
- **Dependency Discovery**: Identify tech stack and dependency relationships
- **MCP Tools Integration**: Leverage code-index tools for enhanced collection
- **Context Packaging**: Generate standardized JSON context packages
## Execution Lifecycle
## Execution Process
### Phase 1: Discovery & Context Loading
**⚡ Memory-First Rule**: Skip file loading if documents already in conversation memory
### Phase 1: Task Analysis
**Agent Context Package**:
```javascript
{
"session_id": "WFS-[session-id]",
"task_description": "[user provided task description]",
"session_metadata": {
// If in memory: use cached content
// Else: Load from .workflow/{session-id}/workflow-session.json
},
"mcp_capabilities": {
// Agent will use these tools to discover project context
"code_index": true,
"exa_code": true,
"exa_web": true
}
}
// Agent will autonomously execute:
// - Project structure analysis: bash(~/.claude/scripts/get_modules_by_depth.sh)
// - Documentation loading: Read(CLAUDE.md), Read(README.md)
```
**Discovery Actions**:
1. **Load Session Context** (if not in memory)
```javascript
if (!memory.has("workflow-session.json")) {
Read(.workflow/{session-id}/workflow-session.json)
}
```
### Phase 2: Agent Execution (Context Gathering & Packaging)
**Agent Invocation**:
```javascript
Task(
subagent_type="general-purpose",
description="Gather project context and generate context package",
prompt=`
## Execution Context
**Session ID**: WFS-{session-id}
**Task Description**: {task_description}
**Mode**: Agent-Driven Context Gathering
## Phase 1: Discovery Results (Provided Context)
### Session Metadata
{session_metadata_content}
### MCP Capabilities
- code-index: Available for file discovery and code search
- exa-code: Available for external research
- exa-web: Available for web search
## Phase 2: Context Gathering Task
### Core Responsibilities
1. **Project Structure Analysis**: Execute get_modules_by_depth.sh for architecture overview
2. **Documentation Loading**: Load CLAUDE.md, README.md and relevant documentation
3. **Keyword Extraction**: Extract core keywords from task description
4. **Smart File Discovery**: Use MCP code-index tools to locate relevant files
5. **Code Structure Analysis**: Analyze project structure to identify relevant modules
6. **Dependency Discovery**: Identify tech stack and dependency relationships
7. **Context Packaging**: Generate standardized JSON context package
### Execution Process
#### Step 0: Foundation Setup (Execute First)
1. **Project Structure Analysis**
Execute to get comprehensive architecture overview:
\`\`\`javascript
bash(~/.claude/scripts/get_modules_by_depth.sh)
\`\`\`
2. **Load Project Documentation** (if not in memory)
Load core project documentation:
\`\`\`javascript
Read(CLAUDE.md)
Read(README.md)
// Load other relevant documentation based on session context
\`\`\`
#### Step 1: Task Analysis
1. **Keyword Extraction**
- Parse task description to extract core keywords
- Identify technical domain (auth, API, frontend, backend, etc.)
@@ -40,23 +120,31 @@ Intelligent context collector that gathers relevant information from project cod
- Identify potentially involved modules and components
- Set file type filters
### Phase 2: Project Structure Exploration
1. **Architecture Analysis**
- Use `~/.claude/scripts/get_modules_by_depth.sh` for comprehensive project structure
- Analyze project layout and module organization
- Identify key directories and components
#### Step 2: MCP-Enhanced File Discovery
1. **Code File Location**
Use MCP code-index tools:
\`\`\`javascript
// Find files by pattern
mcp__code-index__find_files(pattern="*{keyword}*")
2. **Code File Location**
- Use MCP tools for precise search: `mcp__code-index__find_files()` and `mcp__code-index__search_code_advanced()`
- Search for relevant source code files based on keywords
- Locate implementation files, interfaces, and modules
// Search code content
mcp__code-index__search_code_advanced(
pattern="{keyword_patterns}",
file_pattern="*.{ts,js,py,go,md}",
context_lines=3
)
3. **Documentation Collection**
- Load CLAUDE.md and README.md
- Load relevant documentation from .workflow/docs/ based on keywords
- Collect configuration files (package.json, requirements.txt, etc.)
// Get file summaries
mcp__code-index__get_file_summary(file_path="relevant/file.ts")
\`\`\`
### Phase 3: Intelligent Filtering & Association
2. **Configuration Files Discovery**
Locate: package.json, requirements.txt, Cargo.toml, tsconfig.json, etc.
3. **Test Files Location**
Find test files related to task keywords
#### Step 3: Intelligent Filtering & Association
1. **Relevance Scoring**
- Score based on keyword match degree
- Score based on file path relevance
@@ -67,17 +155,15 @@ Intelligent context collector that gathers relevant information from project cod
- Identify inter-module dependencies
- Determine core and optional dependencies
### Phase 4: Context Packaging
1. **Standardized Output**
- Generate context-package.json
- Organize resources by type and importance
- Add relevance descriptions and usage recommendations
#### Step 4: Context Packaging
Generate standardized context-package.json following the format below
## Context Package Format
### Required Output
Generated context package format:
**Output Location**: \`.workflow/{session-id}/.process/context-package.json\`
```json
**Output Format**:
\`\`\`json
{
"metadata": {
"task_description": "Implement user authentication system",
@@ -138,163 +224,130 @@ Generated context package format:
"test_files": 1
}
}
```
\`\`\`
## MCP Tools Integration
### Quality Validation
### Code Index Integration
```bash
# Set project path
Before completion, verify:
- [ ] context-package.json created in correct location
- [ ] Valid JSON format with all required fields
- [ ] Metadata includes task description, keywords, complexity
- [ ] Assets array contains relevant files with priorities
- [ ] Tech stack accurately identified
- [ ] Statistics section provides file counts
- [ ] File relevance accuracy >80%
- [ ] No sensitive information exposed
### Performance Optimization
**Large Project Optimization**:
- File count limit: Maximum 50 files per type
- Size filtering: Skip oversized files (>10MB)
- Depth limit: Maximum search depth of 3 levels
- Use MCP tools for efficient discovery
**MCP Tools Integration**:
Agent should use MCP code-index tools when available:
\`\`\`javascript
// Set project path
mcp__code-index__set_project_path(path="{current_project_path}")
# Refresh index to ensure latest
// Refresh index
mcp__code-index__refresh_index()
# Search relevant files
// Find files by pattern
mcp__code-index__find_files(pattern="*{keyword}*")
# Search code content
// Search code content
mcp__code-index__search_code_advanced(
pattern="{keyword_patterns}",
file_pattern="*.{ts,js,py,go,md}",
context_lines=3
)
\`\`\`
**Fallback Strategy**:
When MCP tools unavailable, agent should use traditional commands:
- \`find\` for file discovery
- \`rg\` or \`grep\` for content search
- Bash commands from project structure analysis
## Output
Generate context-package.json and report completion:
- Task description: {description}
- Keywords extracted: {count}
- Files collected: {total}
- Source files: {count}
- Documentation: {count}
- Configuration: {count}
- Tests: {count}
- Tech stack identified: {frameworks/libraries}
- Output location: .workflow/{session-id}/.process/context-package.json
\`
)
\`\`\`
## Command Integration
### Usage
```bash
# Basic usage
/workflow:tools:context-gather --session WFS-auth "Implement JWT authentication"
# Called by /workflow:plan
SlashCommand(command="/workflow:tools:context-gather --session WFS-[id] \\"[task description]\\"")
```
### Agent Context Passing
**Memory-Aware Context Assembly**:
```javascript
// Assemble minimal context package for agent
// Agent will execute project structure analysis and documentation loading
const agentContext = {
session_id: "WFS-[id]",
task_description: "[user provided task description]",
// Use memory if available, else load
session_metadata: memory.has("workflow-session.json")
? memory.get("workflow-session.json")
: Read(.workflow/WFS-[id]/workflow-session.json),
// MCP capabilities - agent will use these tools
mcp_capabilities: {
code_index: true,
exa_code: true,
exa_web: true
}
}
// Note: Agent will execute these steps autonomously:
// - bash(~/.claude/scripts/get_modules_by_depth.sh) for project structure
// - Read(CLAUDE.md) and Read(README.md) for documentation
```
## Session ID Integration
### Session ID Usage
- **Required Parameter**: `--session WFS-session-id`
- **Session Context Loading**: Load existing session state and task summaries
- **Session Continuity**: Maintain context across pipeline phases
- **Session Context Loading**: Load existing session state and metadata
- **Session Continuity**: Maintain context across workflow pipeline phases
### Session State Management
```bash
# Validate session exists
if [ ! -d ".workflow/${session_id}" ]; then
echo "❌ Session ${session_id} not found"
exit 1
fi
# Load session metadata
session_metadata=".workflow/${session_id}/workflow-session.json"
```
## Output Location
Context package output location:
```
.workflow/{session_id}/.process/context-package.json
```
## Error Handling
### Common Error Handling
1. **No Active Session**: Create temporary session directory
2. **MCP Tools Unavailable**: Fallback to traditional bash commands
3. **Permission Errors**: Prompt user to check file permissions
4. **Large Project Optimization**: Limit file count, prioritize high-relevance files
### Graceful Degradation Strategy
```bash
# Fallback when MCP unavailable
if ! command -v mcp__code-index__find_files; then
# Use find command for file discovery
find . -name "*{keyword}*" -type f -not -path "*/node_modules/*" -not -path "*/.git/*"
# Alternative pattern matching
find . -type f \( -name "*.ts" -o -name "*.js" -o -name "*.py" -o -name "*.go" \) -exec grep -l "{keyword}" {} \;
fi
# Use ripgrep instead of MCP search
rg "{keywords}" --type-add 'source:*.{ts,js,py,go}' -t source --max-count 30
# Content-based search with context
rg -A 3 -B 3 "{keywords}" --type-add 'source:*.{ts,js,py,go}' -t source
# Quick relevance check
grep -r --include="*.{ts,js,py,go}" -l "{keywords}" . | head -15
# Test files discovery
find . -name "*test*" -o -name "*spec*" | grep -E "\.(ts|js|py|go)$" | head -10
# Import/dependency analysis
rg "^(import|from|require|#include)" --type-add 'source:*.{ts,js,py,go}' -t source | head -20
```
## Performance Optimization
### Large Project Optimization Strategy
- **File Count Limit**: Maximum 50 files per type
- **Size Filtering**: Skip oversized files (>10MB)
- **Depth Limit**: Maximum search depth of 3 levels
- **Caching Strategy**: Cache project structure analysis results
### Parallel Processing
- Documentation collection and code search in parallel
- MCP tool calls and traditional commands in parallel
- Reduce I/O wait time
## Essential Bash Commands (Max 10)
### 1. Project Structure Analysis
```bash
~/.claude/scripts/get_modules_by_depth.sh
```
### 2. File Discovery by Keywords
```bash
find . -name "*{keyword}*" -type f -not -path "*/node_modules/*" -not -path "*/.git/*"
```
### 3. Content Search in Code Files
```bash
rg "{keyword}" --type-add 'source:*.{ts,js,py,go}' -t source --max-count 20
```
### 4. Configuration Files Discovery
```bash
find . -maxdepth 3 \( -name "*.json" -o -name "package.json" -o -name "requirements.txt" -o -name "Cargo.toml" \) -not -path "*/node_modules/*"
```
### 5. Documentation Files Collection
```bash
find . -name "*.md" -o -name "README*" -o -name "CLAUDE.md" | grep -v node_modules | head -10
```
### 6. Test Files Location
```bash
find . \( -name "*test*" -o -name "*spec*" \) -type f | grep -E "\.(js|ts|py|go)$" | head -10
```
### 7. Function/Class Definitions Search
```bash
rg "^(function|def|func|class|interface)" --type-add 'source:*.{ts,js,py,go}' -t source -n --max-count 15
```
### 8. Import/Dependency Analysis
```bash
rg "^(import|from|require|#include)" --type-add 'source:*.{ts,js,py,go}' -t source | head -15
```
### 9. Workflow Session Information
```bash
find .workflow/ -name "*.json" -path "*/${session_id}/*" -o -name "workflow-session.json" | head -5
```
### 10. Context-Aware Content Search
```bash
rg -A 2 -B 2 "{keywords}" --type-add 'source:*.{ts,js,py,go}' -t source --max-count 10
### Session Validation
```javascript
// Validate session exists
const sessionPath = `.workflow/${session_id}`;
if (!fs.existsSync(sessionPath)) {
console.error(`❌ Session ${session_id} not found`);
process.exit(1);
}
```
## Success Criteria
- Generate valid context-package.json file
- Contains sufficient relevant information for subsequent analysis
- Execution time controlled within 30 seconds
- File relevance accuracy rate >80%
- Valid context-package.json generated in correct location
- Contains sufficient relevant information (>80% relevance)
- Execution completes within reasonable time (<2 minutes)
- All required fields present and properly formatted
- Agent reports completion status with statistics
## Related Commands
- `/workflow:tools:concept-enhanced` - Consumes output of this command for analysis
- `/workflow:plan` - Calls this command to gather context
- `/workflow:status` - Can display context collection status

View File

@@ -1,21 +1,24 @@
---
name: task-generate-agent
description: Autonomous task generation using action-planning-agent with discovery and output phases
argument-hint: "--session WFS-session-id"
argument-hint: "--session WFS-session-id [--cli-execute]"
examples:
- /workflow:tools:task-generate-agent --session WFS-auth
- /workflow:tools:task-generate-agent --session WFS-auth --cli-execute
---
# Autonomous Task Generation Command
## Overview
Autonomous task JSON and IMPL_PLAN.md generation using action-planning-agent with two-phase execution: discovery and document generation.
Autonomous task JSON and IMPL_PLAN.md generation using action-planning-agent with two-phase execution: discovery and document generation. Supports both agent-driven execution (default) and CLI tool execution modes.
## Core Philosophy
- **Agent-Driven**: Delegate execution to action-planning-agent for autonomous operation
- **Two-Phase Flow**: Discovery (context gathering) → Output (document generation)
- **Memory-First**: Reuse loaded documents from conversation memory
- **MCP-Enhanced**: Use MCP tools for advanced code analysis and research
- **Pre-Selected Templates**: Command selects correct template based on `--cli-execute` flag **before** invoking agent
- **Agent Simplicity**: Agent receives pre-selected template and focuses only on content generation
## Execution Lifecycle
@@ -26,6 +29,10 @@ Autonomous task JSON and IMPL_PLAN.md generation using action-planning-agent wit
```javascript
{
"session_id": "WFS-[session-id]",
"execution_mode": "agent-mode" | "cli-execute-mode", // Determined by flag
"task_json_template_path": "~/.claude/workflows/cli-templates/prompts/workflow/task-json-agent-mode.txt"
| "~/.claude/workflows/cli-templates/prompts/workflow/task-json-cli-mode.txt",
// Path selected by command based on --cli-execute flag, agent reads it
"session_metadata": {
// If in memory: use cached content
// Else: Load from .workflow/{session-id}/workflow-session.json
@@ -96,6 +103,14 @@ Autonomous task JSON and IMPL_PLAN.md generation using action-planning-agent wit
### Phase 2: Agent Execution (Document Generation)
**Pre-Agent Template Selection** (Command decides path before invoking agent):
```javascript
// Command checks flag and selects template PATH (not content)
const templatePath = hasCliExecuteFlag
? "~/.claude/workflows/cli-templates/prompts/workflow/task-json-cli-mode.txt"
: "~/.claude/workflows/cli-templates/prompts/workflow/task-json-agent-mode.txt";
```
**Agent Invocation**:
```javascript
Task(
@@ -105,7 +120,8 @@ Task(
## Execution Context
**Session ID**: WFS-{session-id}
**Mode**: Two-Phase Autonomous Task Generation
**Execution Mode**: {agent-mode | cli-execute-mode}
**Task JSON Template Path**: {template_path}
## Phase 1: Discovery Results (Provided Context)
@@ -147,334 +163,34 @@ Task(
#### 1. Task JSON Files (.task/IMPL-*.json)
**Location**: .workflow/{session-id}/.task/
**Schema**: 5-field enhanced schema with artifacts
**Template**: Read from the template path provided above
**Required Fields**:
\`\`\`json
{
"id": "IMPL-N[.M]",
"title": "Descriptive task name",
"status": "pending",
"meta": {
"type": "feature|bugfix|refactor|test|docs",
"agent": "@code-developer|@test-fix-agent|@general-purpose"
},
"context": {
"requirements": ["extracted from analysis"],
"focus_paths": ["src/paths"],
"acceptance": ["measurable criteria"],
"depends_on": ["IMPL-N"],
"artifacts": [
{
"type": "synthesis_specification",
"path": "{synthesis_spec_path}",
"priority": "highest",
"usage": "Primary requirement source - use for consolidated requirements and cross-role alignment"
},
{
"type": "role_analysis",
"path": "{role_analysis_path}",
"priority": "high",
"usage": "Technical/design/business details from specific roles. Common roles: system-architect (ADRs, APIs, caching), ui-designer (design tokens, layouts), product-manager (user stories, metrics)",
"note": "Dynamically discovered - multiple role analysis files included based on brainstorming results"
},
{
"type": "topic_framework",
"path": "{topic_framework_path}",
"priority": "low",
"usage": "Discussion context and framework structure"
}
]
},
"flow_control": {
"pre_analysis": [
{
"step": "load_synthesis_specification",
"action": "Load consolidated synthesis specification",
"commands": [
"bash(ls {synthesis_spec_path} 2>/dev/null || echo 'not found')",
"Read({synthesis_spec_path})"
],
"output_to": "synthesis_specification",
"on_error": "skip_optional"
},
{
"step": "mcp_codebase_exploration",
"action": "Explore codebase using MCP",
"command": "mcp__code-index__find_files(pattern=\\"[patterns]\\") && mcp__code-index__search_code_advanced(pattern=\\"[patterns]\\")",
"output_to": "codebase_structure"
},
{
"step": "analyze_task_patterns",
"action": "Analyze existing code patterns",
"commands": [
"bash(cd \\"[focus_paths]\\")",
"bash(~/.claude/scripts/gemini-wrapper -p \\"PURPOSE: Analyze patterns TASK: Review '[title]' CONTEXT: [synthesis_specification] EXPECTED: Pattern analysis RULES: Prioritize synthesis-specification.md\\")"
],
"output_to": "task_context",
"on_error": "fail"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Implement task following synthesis specification",
"description": "Implement '[title]' following synthesis specification. PRIORITY: Use synthesis-specification.md as primary requirement source. When implementation needs technical details (e.g., API schemas, caching configs, design tokens), refer to artifacts[] for detailed specifications from original role analyses.",
"modification_points": [
"Apply consolidated requirements from synthesis-specification.md",
"Follow technical guidelines from synthesis",
"Consult artifacts for implementation details when needed",
"Integrate with existing patterns"
],
"logic_flow": [
"Load synthesis specification and relevant role artifacts",
"Execute MCP code-index discovery for relevant files",
"Analyze existing patterns and identify modification targets",
"Implement following specification",
"Consult artifacts for technical details when needed",
"Validate against acceptance criteria"
],
"depends_on": [],
"output": "implementation"
}
],
"target_files": ["file:function:lines", "path/to/NewFile.ts"]
}
}
**Task JSON Template Loading**:
\`\`\`
Read({template_path})
\`\`\`
**Important**:
- Read the template from the path provided in context
- Use the template structure exactly as written
- Replace placeholder variables ({synthesis_spec_path}, {role_analysis_path}, etc.) with actual session-specific paths
- Include MCP tool integration in pre_analysis steps
- Map artifacts based on task domain (UI → ui-designer, Backend → system-architect)
#### 2. IMPL_PLAN.md
**Location**: .workflow/{session-id}/IMPL_PLAN.md
**Structure**:
\`\`\`markdown
---
identifier: WFS-{session-id}
source: "User requirements" | "File: path" | "Issue: ISS-001"
analysis: .workflow/{session-id}/.process/ANALYSIS_RESULTS.md
artifacts: .workflow/{session-id}/.brainstorming/
context_package: .workflow/{session-id}/.process/context-package.json # CCW smart context
workflow_type: "standard | tdd | design" # Indicates execution model
verification_history: # CCW quality gates
concept_verify: "passed | skipped | pending"
action_plan_verify: "pending"
phase_progression: "brainstorm → context → analysis → concept_verify → planning" # CCW workflow phases
---
# Implementation Plan: {Project Title}
## 1. Summary
Core requirements, objectives, technical approach summary (2-3 paragraphs max).
**Core Objectives**:
- [Key objective 1]
- [Key objective 2]
**Technical Approach**:
- [High-level approach]
## 2. Context Analysis
### CCW Workflow Context
**Phase Progression**:
- ✅ Phase 1: Brainstorming (synthesis-specification.md generated)
- ✅ Phase 2: Context Gathering (context-package.json: {N} files, {M} modules analyzed)
- ✅ Phase 3: Enhanced Analysis (ANALYSIS_RESULTS.md: Gemini/Qwen/Codex parallel insights)
- ✅ Phase 4: Concept Verification ({X} clarifications answered, synthesis updated | skipped)
- ⏳ Phase 5: Action Planning (current phase - generating IMPL_PLAN.md)
**Quality Gates**:
- concept-verify: ✅ Passed (0 ambiguities remaining) | ⏭️ Skipped (user decision) | ⏳ Pending
- action-plan-verify: ⏳ Pending (recommended before /workflow:execute)
**Context Package Summary**:
- **Focus Paths**: {list key directories from context-package.json}
- **Key Files**: {list primary files for modification}
- **Module Depth Analysis**: {from get_modules_by_depth.sh output}
- **Smart Context**: {total file count} files, {module count} modules, {dependency count} dependencies identified
### Project Profile
- **Type**: Greenfield/Enhancement/Refactor
- **Scale**: User count, data volume, complexity
- **Tech Stack**: Primary technologies
- **Timeline**: Duration and milestones
### Module Structure
**IMPL_PLAN Template**:
\`\`\`
[Directory tree showing key modules]
$(cat ~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)
\`\`\`
### Dependencies
**Primary**: [Core libraries and frameworks]
**APIs**: [External services]
**Development**: [Testing, linting, CI/CD tools]
### Patterns & Conventions
- **Architecture**: [Key patterns like DI, Event-Driven]
- **Component Design**: [Design patterns]
- **State Management**: [State strategy]
- **Code Style**: [Naming, TypeScript coverage]
## 3. Brainstorming Artifacts Reference
### Artifact Usage Strategy
**Primary Reference (synthesis-specification.md)**:
- **What**: Comprehensive implementation blueprint from multi-role synthesis
- **When**: Every task references this first for requirements and design decisions
- **How**: Extract architecture decisions, UI/UX patterns, functional requirements, non-functional requirements
- **Priority**: Authoritative - overrides role-specific analyses when conflicts arise
- **CCW Value**: Consolidates insights from all brainstorming roles into single source of truth
**Context Intelligence (context-package.json)**:
- **What**: Smart context gathered by CCW's context-gather phase
- **Content**: Focus paths, dependency graph, existing patterns, module structure
- **Usage**: Tasks load this via \`flow_control.preparatory_steps\` for environment setup
- **CCW Value**: Automated intelligent context discovery replacing manual file exploration
**Technical Analysis (ANALYSIS_RESULTS.md)**:
- **What**: Gemini/Qwen/Codex parallel analysis results
- **Content**: Optimization strategies, risk assessment, architecture review, implementation patterns
- **Usage**: Referenced in task planning for technical guidance and risk mitigation
- **CCW Value**: Multi-model parallel analysis providing comprehensive technical intelligence
### Integrated Specifications (Highest Priority)
- **synthesis-specification.md**: Comprehensive implementation blueprint
- Contains: Architecture design, UI/UX guidelines, functional/non-functional requirements, implementation roadmap, risk assessment
### Supporting Artifacts (Reference)
- **topic-framework.md**: Role-specific discussion points and analysis framework
- **system-architect/analysis.md**: Detailed architecture specifications
- **ui-designer/analysis.md**: Layout and component specifications
- **product-manager/analysis.md**: Product vision and user stories
**Artifact Priority in Development**:
1. synthesis-specification.md (primary reference for all tasks)
2. context-package.json (smart context for execution environment)
3. ANALYSIS_RESULTS.md (technical analysis and optimization strategies)
4. Role-specific analyses (fallback for detailed specifications)
## 4. Implementation Strategy
### Execution Strategy
**Execution Model**: [Sequential | Parallel | Phased | TDD Cycles]
**Rationale**: [Why this execution model fits the project]
**Parallelization Opportunities**:
- [List independent workstreams]
**Serialization Requirements**:
- [List critical dependencies]
### Architectural Approach
**Key Architecture Decisions**:
- [ADR references from synthesis]
- [Justification for architecture patterns]
**Integration Strategy**:
- [How modules communicate]
- [State management approach]
### Key Dependencies
**Task Dependency Graph**:
\`\`\`
[High-level dependency visualization]
\`\`\`
**Critical Path**: [Identify bottleneck tasks]
### Testing Strategy
**Testing Approach**:
- Unit testing: [Tools, scope]
- Integration testing: [Key integration points]
- E2E testing: [Critical user flows]
**Coverage Targets**:
- Lines: ≥70%
- Functions: ≥70%
- Branches: ≥65%
**Quality Gates**:
- [CI/CD gates]
- [Performance budgets]
## 5. Task Breakdown Summary
### Task Count
**{N} tasks** (flat hierarchy | two-level hierarchy, sequential | parallel execution)
### Task Structure
- **IMPL-1**: [Main task title]
- **IMPL-2**: [Main task title]
...
### Complexity Assessment
- **High**: [List with rationale]
- **Medium**: [List]
- **Low**: [List]
### Dependencies
[Reference Section 4.3 for dependency graph]
**Parallelization Opportunities**:
- [Specific task groups that can run in parallel]
## 6. Implementation Plan (Detailed Phased Breakdown)
### Execution Strategy
**Phase 1 (Weeks 1-2): [Phase Name]**
- **Tasks**: IMPL-1, IMPL-2
- **Deliverables**:
- [Specific deliverable 1]
- [Specific deliverable 2]
- **Success Criteria**:
- [Measurable criterion]
**Phase 2 (Weeks 3-N): [Phase Name]**
...
### Resource Requirements
**Development Team**:
- [Team composition and skills]
**External Dependencies**:
- [Third-party services, APIs]
**Infrastructure**:
- [Development, staging, production environments]
## 7. Risk Assessment & Mitigation
| Risk | Impact | Probability | Mitigation Strategy | Owner |
|------|--------|-------------|---------------------|-------|
| [Risk description] | High/Med/Low | High/Med/Low | [Strategy] | [Role] |
**Critical Risks** (High impact + High probability):
- [Risk 1]: [Detailed mitigation plan]
**Monitoring Strategy**:
- [How risks will be monitored]
## 8. Success Criteria
**Functional Completeness**:
- [ ] All requirements from synthesis-specification.md implemented
- [ ] All acceptance criteria from task.json files met
**Technical Quality**:
- [ ] Test coverage ≥70%
- [ ] Bundle size within budget
- [ ] Performance targets met
**Operational Readiness**:
- [ ] CI/CD pipeline operational
- [ ] Monitoring and logging configured
- [ ] Documentation complete
**Business Metrics**:
- [ ] [Key business metrics from synthesis]
\`\`\`
**Important**:
- Use the template above for IMPL_PLAN.md generation
- Replace all {placeholder} variables with actual session-specific values
- Populate CCW Workflow Context based on actual phase progression
- Extract content from ANALYSIS_RESULTS.md and context-package.json
- List all detected brainstorming artifacts with correct paths
#### 3. TODO_LIST.md
**Location**: .workflow/{session-id}/TODO_LIST.md
@@ -495,34 +211,43 @@ Core requirements, objectives, technical approach summary (2-3 paragraphs max).
- \`- [x]\` = Completed leaf task
\`\`\`
### Execution Instructions
### Execution Instructions for Agent
**Step 1: Extract Task Definitions**
- Parse analysis results for task recommendations
- Extract task ID, title, requirements, complexity
- Map artifacts to relevant tasks based on type
**Agent Task**: Generate task JSON files, IMPL_PLAN.md, and TODO_LIST.md based on analysis results
**Step 2: Generate Task JSON Files**
- Create individual .task/IMPL-*.json files
- Embed artifacts array with detected brainstorming outputs
- Generate flow_control with artifact loading steps
- Add MCP tool integration for codebase exploration
**Note**: The correct task JSON template path has been pre-selected by the command based on the `--cli-execute` flag and is provided in the context as `{template_path}`.
**Step 3: Create IMPL_PLAN.md**
- Summarize requirements and technical approach
- List detected artifacts with priorities
- Document task breakdown and dependencies
- Define execution strategy and success criteria
**Step 1: Load Task JSON Template**
- Read template from the provided path: `Read({template_path})`
- This template is already the correct one based on execution mode
**Step 4: Generate TODO_LIST.md**
- List all tasks with container/leaf structure
- Link to task JSON files
**Step 2: Extract and Decompose Tasks**
- Parse ANALYSIS_RESULTS.md for task recommendations
- Apply task merging rules (merge when possible, decompose only when necessary)
- Map artifacts to tasks based on domain (UI/Backend/Data)
- Ensure task count ≤10
**Step 3: Generate Task JSON Files**
- Use the template structure from Step 1
- Create .task/IMPL-*.json files with proper structure
- Replace all {placeholder} variables with actual session paths
- Embed artifacts array with brainstorming outputs
- Include MCP tool integration in pre_analysis steps
**Step 4: Create IMPL_PLAN.md**
- Use IMPL_PLAN template
- Populate all sections with session-specific content
- List artifacts with priorities and usage guidelines
- Document execution strategy and dependencies
**Step 5: Generate TODO_LIST.md**
- Create task progress checklist matching generated JSONs
- Use proper status indicators (▸, [ ], [x])
- Link to task JSON files
**Step 5: Update Session State**
- Update .workflow/{session-id}/workflow-session.json
- Mark session as ready for execution
- Record task count and artifact inventory
**Step 6: Update Session State**
- Update workflow-session.json with task count and artifact inventory
- Mark session ready for execution
### MCP Enhancement Examples
@@ -583,16 +308,6 @@ Generate all three documents and report completion status:
)
```
## Command Integration
### Usage
```bash
# Basic usage
/workflow:tools:task-generate-agent --session WFS-auth
# Called by /workflow:plan
SlashCommand(command="/workflow:tools:task-generate-agent --session WFS-[id]")
```
### Agent Context Passing
@@ -623,20 +338,3 @@ const agentContext = {
mcp_analysis: executeMcpDiscovery()
}
```
## Related Commands
- `/workflow:plan` - Orchestrates planning and calls this command
- `/workflow:tools:task-generate` - Manual version without agent
- `/workflow:tools:context-gather` - Provides context package
- `/workflow:tools:concept-enhanced` - Provides analysis results
- `/workflow:execute` - Executes generated tasks
## Key Differences from task-generate
| Feature | task-generate | task-generate-agent |
|---------|--------------|-------------------|
| Execution | Manual/scripted | Agent-driven |
| Phases | 6 phases | 2 phases (discovery + output) |
| MCP Integration | Optional | Enhanced with examples |
| Decision Logic | Command-driven | Agent-autonomous |
| Complexity | Higher control | Simpler delegation |

View File

@@ -1,4 +1,4 @@
---
---
name: task-generate
description: Generate task JSON files and IMPL_PLAN.md from analysis results with artifacts integration
argument-hint: "--session WFS-session-id [--cli-execute]"
@@ -9,76 +9,164 @@ examples:
# Task Generation Command
## Overview
Generate task JSON files and IMPL_PLAN.md from analysis results with automatic artifact detection and integration.
## 1. Overview
This command generates task JSON files and an `IMPL_PLAN.md` from `ANALYSIS_RESULTS.md`. It automatically detects and integrates brainstorming artifacts, creating a structured and context-rich plan for implementation. The command supports two primary execution modes: a default agent-based mode for seamless context handling and a `--cli-execute` mode that leverages the Codex CLI for complex, autonomous development tasks. Its core function is to translate analysis into actionable, executable tasks, ensuring all necessary context, dependencies, and implementation steps are defined upfront.
## Execution Modes
## 2. Execution Modes
This command offers two distinct modes for task execution, providing flexibility for different implementation complexities.
### Agent Mode (Default)
Tasks execute within agent context using agent's capabilities:
- Agent reads synthesis specifications
- Agent implements following requirements
- Agent validates implementation
- **Benefit**: Seamless context within single agent execution
In the default mode, each step in `implementation_approach` **omits the `command` field**. The agent interprets the step's `modification_points` and `logic_flow` to execute the task autonomously.
- **Step Structure**: Contains `step`, `title`, `description`, `modification_points`, `logic_flow`, `depends_on`, and `output` fields
- **Execution**: Agent reads these fields and performs the implementation autonomously
- **Context Loading**: Agent loads context via `pre_analysis` steps
- **Validation**: Agent validates against acceptance criteria in `context.acceptance`
- **Benefit**: Direct agent execution with full context awareness, no external tool overhead
- **Use Case**: Standard implementation tasks where agent capability is sufficient
### CLI Execute Mode (`--cli-execute`)
Tasks execute using Codex CLI with resume mechanism:
- Each task uses `codex exec` command in `implementation_approach`
- First task establishes Codex session
- Subsequent tasks use `codex exec "..." resume --last` for context continuity
- **Benefit**: Codex's autonomous development capabilities with persistent context
- **Use Case**: Complex implementation requiring Codex's reasoning and iteration
When the `--cli-execute` flag is used, each step in `implementation_approach` **includes a `command` field** that specifies the exact execution command. This mode is designed for complex implementations requiring specialized CLI tools.
- **Step Structure**: Includes all default fields PLUS a `command` field
- **Execution**: The specified command executes the step directly (e.g., `bash(codex ...)`)
- **Context Packages**: Each command receives context via the CONTEXT field in the prompt
- **Multi-Step Support**: Complex tasks can have multiple sequential codex steps with `resume --last`
- **Benefit**: Leverages specialized CLI tools (codex/gemini/qwen) for complex reasoning and autonomous execution
- **Use Case**: Large-scale features, complex refactoring, or when user explicitly requests CLI tool usage
## Core Philosophy
- **Analysis-Driven**: Generate from ANALYSIS_RESULTS.md
- **Artifact-Aware**: Auto-detect brainstorming outputs
- **Context-Rich**: Embed comprehensive context in task JSON
- **Flow-Control Ready**: Pre-define implementation steps
- **Memory-First**: Reuse loaded documents from memory
- **CLI-Aware**: Support Codex resume mechanism for persistent context
## 3. Core Principles
This command is built on a set of core principles to ensure efficient and reliable task generation.
## Core Responsibilities
- Parse analysis results and extract tasks
- Detect and integrate brainstorming artifacts
- Generate enhanced task JSON files (5-field schema)
- Create IMPL_PLAN.md and TODO_LIST.md
- Update session state for execution
- **Analysis-Driven**: All generated tasks originate from `ANALYSIS_RESULTS.md`, ensuring a direct link between analysis and implementation
- **Artifact-Aware**: Automatically detects and integrates brainstorming outputs (`synthesis-specification.md`, role analyses) to enrich task context
- **Context-Rich**: Embeds comprehensive context (requirements, focus paths, acceptance criteria, artifact references) directly into each task JSON
- **Flow-Control Ready**: Pre-defines clear execution sequence (`pre_analysis`, `implementation_approach`) within each task
- **Memory-First**: Prioritizes using documents already loaded in conversation memory to avoid redundant file operations
- **Mode-Flexible**: Supports both agent-driven execution (default) and CLI tool execution (with `--cli-execute` flag)
- **Multi-Step Support**: Complex tasks can use multiple sequential steps in `implementation_approach` with codex resume mechanism
- **Responsibility**: Parses analysis, detects artifacts, generates enhanced task JSONs, creates `IMPL_PLAN.md` and `TODO_LIST.md`, updates session state
## Execution Lifecycle
## 4. Execution Flow
The command follows a streamlined, three-step process to convert analysis into executable tasks.
### Phase 1: Input Validation & Discovery
**⚡ Memory-First Rule**: Skip file loading if documents already in conversation memory
### Step 1: Input & Discovery
The process begins by gathering all necessary inputs. It follows a **Memory-First Rule**, skipping file reads if documents are already in the conversation memory.
1. **Session Validation**: Loads and validates the session from `.workflow/{session_id}/workflow-session.json`.
2. **Analysis Loading**: Reads the primary input, `.workflow/{session_id}/.process/ANALYSIS_RESULTS.md`.
3. **Artifact Discovery**: Scans the `.workflow/{session_id}/.brainstorming/` directory to find `synthesis-specification.md`, `topic-framework.md`, and various role analyses.
1. **Session Validation**
- If session metadata in memory → Skip loading
- Else: Load `.workflow/{session_id}/workflow-session.json`
### Step 2: Task Decomposition & Grouping
Once all inputs are loaded, the command analyzes the tasks defined in the analysis results and groups them based on shared context.
1. **Task Definition Parsing**: Extracts task definitions, requirements, and dependencies.
2. **Context Signature Analysis**: Computes a unique hash (`context_signature`) for each task based on its `focus_paths` and referenced `artifacts`.
3. **Task Grouping**:
* Tasks with the **same signature** are candidates for merging, as they operate on the same context.
* Tasks with **different signatures** and no dependencies are grouped for parallel execution.
* Tasks with `depends_on` relationships are marked for sequential execution.
4. **Modification Target Determination**: Extracts specific code locations (`file:function:lines`) from the analysis to populate the `target_files` field.
2. **Analysis Results Loading**
- If ANALYSIS_RESULTS.md in memory → Skip loading
- Else: Read `.workflow/{session_id}/.process/ANALYSIS_RESULTS.md`
### Step 3: Output Generation
Finally, the command generates all the necessary output files.
1. **Task JSON Creation**: Creates individual `.task/IMPL-*.json` files, embedding all context, artifacts, and flow control steps. If `--cli-execute` is active, it generates the appropriate `codex exec` commands.
2. **IMPL_PLAN.md Generation**: Creates the main implementation plan document, summarizing the strategy, tasks, and dependencies.
3. **TODO_LIST.md Generation**: Creates a simple checklist for tracking task progress.
4. **Session State Update**: Updates `workflow-session.json` with the final task count and artifact inventory, marking the session as ready for execution.
3. **Artifact Discovery**
- If artifact inventory in memory → Skip scanning
- Else: Scan `.workflow/{session_id}/.brainstorming/` directory
- Detect: synthesis-specification.md, topic-framework.md, role analyses
## 5. Task Decomposition Strategy
The command employs a sophisticated strategy to group and decompose tasks, optimizing for context reuse and parallel execution.
### Phase 2: Task JSON Generation
### Core Principles
- **Primary Rule: Shared Context → Merge Tasks**: Tasks that operate on the same files, use the same artifacts, and share the same tech stack are merged. This avoids redundant context loading and recognizes inherent relationships between the tasks.
- **Secondary Rule: Different Contexts + No Dependencies → Decompose for Parallel Execution**: Tasks that are fully independent (different files, different artifacts, no shared dependencies) are decomposed into separate parallel execution groups.
#### Task Decomposition Standards
**Core Principle: Task Merging Over Decomposition**
- **Merge Rule**: Execute together when possible
- **Decompose Only When**:
- Excessive workload (>2500 lines or >6 files)
- Different tech stacks or domains
- Sequential dependency blocking
- Parallel execution needed
### Context Analysis for Task Grouping
The decision to merge or decompose is based on analyzing context indicators:
1. **Shared Context Indicators (→ Merge)**:
* Identical `focus_paths` (working on the same modules/files).
* Same tech stack and dependencies.
* Identical `context.artifacts` references.
* A sequential logic flow within the same feature.
* Shared test fixtures or setup.
2. **Independent Context Indicators (→ Decompose)**:
* Different `focus_paths` (separate modules).
* Different tech stacks (e.g., frontend vs. backend).
* Different `context.artifacts` (using different brainstorming outputs).
* No shared dependencies.
* Can be tested independently.
**Decomposition is only performed when**:
- Tasks have different contexts and no shared dependencies (enabling parallel execution).
- A single task represents an excessive workload (e.g., >2500 lines of code or >6 files to modify).
- A sequential dependency creates a necessary block (e.g., IMPL-1 must complete before IMPL-2 can start).
### Context Signature Algorithm
To automate grouping, a `context_signature` is computed for each task.
```javascript
// Compute context signature for task grouping
function computeContextSignature(task) {
const focusPathsStr = task.context.focus_paths.sort().join('|');
const artifactsStr = task.context.artifacts.map(a => a.path).sort().join('|');
const techStack = task.context.shared_context?.tech_stack?.sort().join('|') || '';
return hash(`${focusPathsStr}:${artifactsStr}:${techStack}`);
}
```
### Execution Group Assignment
Tasks are assigned to execution groups based on their signatures and dependencies.
```javascript
// Group tasks by context signature
function groupTasksByContext(tasks) {
const groups = {};
tasks.forEach(task => {
const signature = computeContextSignature(task);
if (!groups[signature]) {
groups[signature] = [];
}
groups[signature].push(task);
});
return groups;
}
// Assign execution groups for parallel tasks
function assignExecutionGroups(tasks) {
const contextGroups = groupTasksByContext(tasks);
Object.entries(contextGroups).forEach(([signature, groupTasks]) => {
if (groupTasks.length === 1) {
const task = groupTasks[0];
// Single task with unique context
if (!task.context.depends_on || task.context.depends_on.length === 0) {
task.meta.execution_group = `parallel-${signature.slice(0, 8)}`;
} else {
task.meta.execution_group = null; // Sequential task
}
} else {
// Multiple tasks with same context → Should be merged
console.warn(`Tasks ${groupTasks.map(t => t.id).join(', ')} share context and should be merged`);
// Merge tasks into single task
return mergeTasks(groupTasks);
}
});
}
```
**Task Limits**:
- **Maximum 10 tasks** (hard limit)
- **Function-based**: Complete units (logic + UI + tests + config)
- **Hierarchy**: Flat (≤5) | Two-level (6-10) | Re-scope (>10)
- **Maximum 10 tasks** (hard limit).
- **Hierarchy**: Flat (≤5 tasks) or two-level (6-10 tasks). If >10, the scope should be re-evaluated.
- **Parallel Groups**: Tasks with the same `execution_group` ID are independent and can run concurrently.
## 6. Generated Outputs
The command produces three key documents and a directory of task files.
### 6.1. Task JSON Schema (`.task/IMPL-*.json`)
This enhanced 5-field schema embeds all necessary context, artifacts, and execution steps.
#### Enhanced Task JSON Schema (5-Field + Artifacts)
```json
{
"id": "IMPL-N[.M]",
@@ -86,7 +174,9 @@ Tasks execute using Codex CLI with resume mechanism:
"status": "pending|active|completed|blocked|container",
"meta": {
"type": "feature|bugfix|refactor|test-gen|test-fix|docs",
"agent": "@code-developer|@test-fix-agent|@general-purpose"
"agent": "@code-developer|@test-fix-agent|@general-purpose",
"execution_group": "group-id|null",
"context_signature": "hash-of-focus_paths-and-artifacts"
},
"context": {
"requirements": ["Clear requirement from analysis"],
@@ -195,123 +285,14 @@ Tasks execute using Codex CLI with resume mechanism:
"output": "implementation"
}
],
// CLI Execute Mode: Use Codex command (when --cli-execute flag present)
"implementation_approach": [
{
"step": 1,
"title": "Execute implementation with Codex",
"description": "Use Codex CLI to implement '[title]' following synthesis specification with autonomous development capabilities",
"modification_points": [
"Codex loads synthesis specification and artifacts",
"Codex implements following requirements",
"Codex validates and tests implementation"
],
"logic_flow": [
"Establish or resume Codex session",
"Pass synthesis specification to Codex",
"Codex performs autonomous implementation",
"Codex validates against acceptance criteria"
],
"command": "bash(codex -C [focus_paths] --full-auto exec \"PURPOSE: [title] TASK: [requirements] MODE: auto CONTEXT: @{[synthesis_path],[artifacts_paths]} EXPECTED: [acceptance] RULES: Follow synthesis-specification.md\" [resume_flag] --skip-git-repo-check -s danger-full-access)",
"depends_on": [],
"output": "implementation"
}
],
"target_files": ["file:function:lines"]
}
}
```
#### Task Generation Process
1. Parse analysis results and extract task definitions
2. Detect brainstorming artifacts with priority scoring
3. Generate task context (requirements, focus_paths, acceptance)
4. **Determine modification targets**: Extract specific code locations from analysis
5. Build flow_control with artifact loading steps and target_files
6. **CLI Execute Mode**: If `--cli-execute` flag present, generate Codex commands
7. Create individual task JSON files in `.task/`
### 6.2. IMPL_PLAN.md Structure
This document provides a high-level overview of the entire implementation plan.
#### Codex Resume Mechanism (CLI Execute Mode)
**Session Continuity Strategy**:
- **First Task** (no depends_on or depends_on=[]): Establish new Codex session
- Command: `codex -C [path] --full-auto exec "[prompt]" --skip-git-repo-check -s danger-full-access`
- Creates new session context
- **Subsequent Tasks** (has depends_on): Resume previous Codex session
- Command: `codex --full-auto exec "[prompt]" resume --last --skip-git-repo-check -s danger-full-access`
- Maintains context from previous implementation
- **Critical**: `resume --last` flag enables context continuity
**Resume Flag Logic**:
```javascript
// Determine resume flag based on task dependencies
const resumeFlag = task.context.depends_on && task.context.depends_on.length > 0
? "resume --last"
: "";
// First task (IMPL-001): no resume flag
// Later tasks (IMPL-002, IMPL-003): use "resume --last"
```
**Benefits**:
- ✅ Shared context across related tasks
- ✅ Codex learns from previous implementations
- ✅ Consistent patterns and conventions
- ✅ Reduced redundant analysis
#### Target Files Generation (Critical)
**Purpose**: Identify specific code locations for modification AND new files to create
**Source Data Priority**:
1. **ANALYSIS_RESULTS.md** - Should contain identified code locations
2. **Gemini/MCP Analysis** - From `analyze_task_patterns` step
3. **Context Package** - File references from `focus_paths`
**Format**: `["file:function:lines"]` or `["file"]` (for new files)
- `file`: Relative path from project root (e.g., `src/auth/AuthService.ts`)
- `function`: Function/method name to modify (e.g., `login`, `validateToken`) - **omit for new files**
- `lines`: Approximate line range (e.g., `45-52`, `120-135`) - **omit for new files**
**Examples**:
```json
"target_files": [
"src/auth/AuthService.ts:login:45-52",
"src/middleware/auth.ts:validateToken:30-45",
"src/auth/PasswordReset.ts",
"tests/auth/PasswordReset.test.ts",
"tests/auth.test.ts:testLogin:15-20"
]
```
**Generation Strategy**:
- **New files to create** → Use `["path/to/NewFile.ts"]` (no function or lines)
- **Existing files with specific locations** → Use `["file:function:lines"]`
- **Existing files with function only** → Search lines using MCP/grep `["file:function:*"]`
- **Existing files (explore entire)** → Mark as `["file.ts:*:*"]`
- **No specific targets** → Leave empty `[]` (agent explores focus_paths)
### Phase 3: Artifact Detection & Integration
#### Artifact Priority
1. **synthesis-specification.md** (highest) - Complete integrated spec
2. **topic-framework.md** (medium) - Discussion framework
3. **role/analysis.md** (low) - Individual perspectives
#### Artifact-Task Mapping
- **synthesis-specification.md** → All tasks
- **ui-designer/analysis.md** → UI/Frontend tasks
- **ux-expert/analysis.md** → UX/Interaction tasks
- **system-architect/analysis.md** → Architecture/Backend tasks
- **subject-matter-expert/analysis.md** → Domain/Standards tasks
- **data-architect/analysis.md** → Data/API tasks
- **scrum-master/analysis.md** → Sprint/Process tasks
- **product-owner/analysis.md** → Backlog/Story tasks
### Phase 4: IMPL_PLAN.md Generation
#### Document Structure
```markdown
---
identifier: WFS-{session-id}
@@ -365,9 +346,9 @@ Core requirements, objectives, technical approach summary (2-3 paragraphs max).
- **Timeline**: Duration and milestones
### Module Structure
```
'''
[Directory tree showing key modules]
```
'''
### Dependencies
**Primary**: [Core libraries and frameworks]
@@ -442,9 +423,9 @@ Core requirements, objectives, technical approach summary (2-3 paragraphs max).
### Key Dependencies
**Task Dependency Graph**:
```
'''
[High-level dependency visualization]
```
'''
**Critical Path**: [Identify bottleneck tasks]
@@ -542,9 +523,9 @@ Core requirements, objectives, technical approach summary (2-3 paragraphs max).
- [ ] [Key business metrics from synthesis]
```
### Phase 5: TODO_LIST.md Generation
### 6.3. TODO_LIST.md Structure
A simple Markdown file for tracking the status of each task.
#### Document Structure
```markdown
# Tasks: [Session Topic]
@@ -562,12 +543,8 @@ Core requirements, objectives, technical approach summary (2-3 paragraphs max).
- Maximum 2 levels: Main tasks and subtasks only
```
### Phase 6: Session State Update
1. Update workflow-session.json with task count and artifacts
2. Validate all output files (task JSONs, IMPL_PLAN.md, TODO_LIST.md)
3. Generate completion report
## Output Files Structure
### 6.4. Output Files Diagram
The command organizes outputs into a standard directory structure.
```
.workflow/{session-id}/
├── IMPL_PLAN.md # Implementation plan
@@ -585,7 +562,201 @@ Core requirements, objectives, technical approach summary (2-3 paragraphs max).
└── context-package.json # Input from context-gather
```
## Error Handling
## 7. Artifact Integration
The command intelligently detects and integrates artifacts from the `.brainstorming/` directory.
#### Artifact Priority
1. **synthesis-specification.md** (highest): The complete, integrated specification that serves as the primary source of truth.
2. **topic-framework.md** (medium): The discussion framework that provides high-level structure.
3. **role/analysis.md** (low): Individual role-based analyses that offer detailed, perspective-specific insights.
#### Artifact-Task Mapping
Artifacts are mapped to tasks based on their relevance to the task's domain.
- **synthesis-specification.md**: Included in all tasks as the primary reference.
- **ui-designer/analysis.md**: Mapped to UI/Frontend tasks.
- **system-architect/analysis.md**: Mapped to Architecture/Backend tasks.
- **subject-matter-expert/analysis.md**: Mapped to tasks related to domain logic or standards.
- **data-architect/analysis.md**: Mapped to tasks involving data models or APIs.
This ensures that each task has access to the most relevant and detailed specifications, from the high-level synthesis down to the role-specific details.
## 8. CLI Execute Mode Details
When using `--cli-execute`, each step in `implementation_approach` includes a `command` field with the execution command.
**Key Points**:
- **Sequential Steps**: Steps execute in order defined in `implementation_approach` array
- **Context Delivery**: Each codex command receives context via CONTEXT field: `@{.workflow/{session}/.process/context-package.json}` and `@{.workflow/{session}/.brainstorming/synthesis-specification.md}`
- **Multi-Step Tasks**: First step provides full context, subsequent steps use `resume --last` to maintain session continuity
- **Step Dependencies**: Later steps reference outputs from earlier steps via `depends_on` field
### Example 1: Agent Mode - Simple Task (Default, No Command)
```json
{
"id": "IMPL-001",
"title": "Implement user authentication module",
"context": {
"depends_on": [],
"focus_paths": ["src/auth"],
"requirements": ["JWT-based authentication", "Login and registration endpoints"],
"acceptance": [
"JWT token generation working",
"Login and registration endpoints implemented",
"Tests passing with >70% coverage"
]
},
"flow_control": {
"pre_analysis": [
{
"step": "load_synthesis",
"action": "Load synthesis specification for requirements",
"commands": ["Read(.workflow/WFS-session/.brainstorming/synthesis-specification.md)"],
"output_to": "synthesis_spec",
"on_error": "fail"
},
{
"step": "load_context",
"action": "Load context package for project structure",
"commands": ["Read(.workflow/WFS-session/.process/context-package.json)"],
"output_to": "context_pkg",
"on_error": "fail"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Implement JWT-based authentication",
"description": "Create authentication module using JWT following [synthesis_spec] requirements and [context_pkg] patterns",
"modification_points": [
"Create auth service with JWT generation",
"Implement login endpoint with credential validation",
"Implement registration endpoint with user creation",
"Add JWT middleware for route protection"
],
"logic_flow": [
"User registers → validate input → hash password → create user",
"User logs in → validate credentials → generate JWT → return token",
"Protected routes → validate JWT → extract user → allow access"
],
"depends_on": [],
"output": "auth_implementation"
}
],
"target_files": ["src/auth/service.ts", "src/auth/middleware.ts", "src/routes/auth.ts"]
}
}
```
### Example 2: CLI Execute Mode - Single Codex Step
```json
{
"id": "IMPL-002",
"title": "Implement user authentication module",
"context": {
"depends_on": [],
"focus_paths": ["src/auth"],
"requirements": ["JWT-based authentication", "Login and registration endpoints"],
"acceptance": ["JWT generation working", "Endpoints implemented", "Tests passing"]
},
"flow_control": {
"pre_analysis": [
{
"step": "load_synthesis",
"action": "Load synthesis specification",
"commands": ["Read(.workflow/WFS-session/.brainstorming/synthesis-specification.md)"],
"output_to": "synthesis_spec",
"on_error": "fail"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Implement authentication with Codex",
"description": "Create JWT-based authentication module",
"command": "bash(codex -C src/auth --full-auto exec \"PURPOSE: Implement user authentication TASK: JWT-based auth with login/registration MODE: auto CONTEXT: @{.workflow/WFS-session/.process/context-package.json} @{.workflow/WFS-session/.brainstorming/synthesis-specification.md} EXPECTED: Complete auth module with tests RULES: Follow synthesis specification\" --skip-git-repo-check -s danger-full-access)",
"modification_points": ["Create auth service", "Implement endpoints", "Add JWT middleware"],
"logic_flow": ["Validate credentials", "Generate JWT", "Return token"],
"depends_on": [],
"output": "auth_implementation"
}
],
"target_files": ["src/auth/service.ts", "src/auth/middleware.ts"]
}
}
```
### Example 3: CLI Execute Mode - Multi-Step with Resume
```json
{
"id": "IMPL-003",
"title": "Implement role-based access control",
"context": {
"depends_on": ["IMPL-002"],
"focus_paths": ["src/auth", "src/middleware"],
"requirements": ["User roles and permissions", "Route protection middleware"],
"acceptance": ["RBAC models created", "Middleware working", "Management API complete"]
},
"flow_control": {
"pre_analysis": [
{
"step": "load_context",
"action": "Load context and synthesis",
"commands": [
"Read(.workflow/WFS-session/.process/context-package.json)",
"Read(.workflow/WFS-session/.brainstorming/synthesis-specification.md)"
],
"output_to": "full_context",
"on_error": "fail"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Create RBAC models",
"description": "Define role and permission data models",
"command": "bash(codex -C src/auth --full-auto exec \"PURPOSE: Create RBAC models TASK: Role and permission models MODE: auto CONTEXT: @{.workflow/WFS-session/.process/context-package.json} @{.workflow/WFS-session/.brainstorming/synthesis-specification.md} EXPECTED: Models with migrations RULES: Follow synthesis spec\" --skip-git-repo-check -s danger-full-access)",
"modification_points": ["Define role model", "Define permission model", "Create migrations"],
"logic_flow": ["Design schema", "Implement models", "Generate migrations"],
"depends_on": [],
"output": "rbac_models"
},
{
"step": 2,
"title": "Implement RBAC middleware",
"description": "Create route protection middleware using models from step 1",
"command": "bash(codex --full-auto exec \"PURPOSE: Create RBAC middleware TASK: Route protection middleware MODE: auto CONTEXT: RBAC models from step 1 EXPECTED: Middleware for route protection RULES: Use session patterns\" resume --last --skip-git-repo-check -s danger-full-access)",
"modification_points": ["Create permission checker", "Add route decorators", "Integrate with auth"],
"logic_flow": ["Check user role", "Validate permissions", "Allow/deny access"],
"depends_on": [1],
"output": "rbac_middleware"
},
{
"step": 3,
"title": "Add role management API",
"description": "Create CRUD endpoints for roles and permissions",
"command": "bash(codex --full-auto exec \"PURPOSE: Role management API TASK: CRUD endpoints for roles/permissions MODE: auto CONTEXT: Models and middleware from previous steps EXPECTED: Complete API with validation RULES: Maintain consistency\" resume --last --skip-git-repo-check -s danger-full-access)",
"modification_points": ["Create role endpoints", "Create permission endpoints", "Add validation"],
"logic_flow": ["Define routes", "Implement controllers", "Add authorization"],
"depends_on": [2],
"output": "role_management_api"
}
],
"target_files": [
"src/models/Role.ts",
"src/models/Permission.ts",
"src/middleware/rbac.ts",
"src/routes/roles.ts"
]
}
}
```
**Pattern Summary**:
- **Agent Mode (Example 1)**: No `command` field - agent executes via `modification_points` and `logic_flow`
- **CLI Mode Single-Step (Example 2)**: One `command` field with full context package
- **CLI Mode Multi-Step (Example 3)**: First step uses full context, subsequent steps use `resume --last`
- **Context Delivery**: Context package provided via `@{...}` references in CONTEXT field
## 9. Error Handling
### Input Validation Errors
| Error | Cause | Resolution |
@@ -608,7 +779,7 @@ Core requirements, objectives, technical approach summary (2-3 paragraphs max).
| Invalid format | Corrupted file | Skip artifact loading |
| Path invalid | Moved/deleted | Update references |
## Integration & Usage
## 10. Integration & Usage
### Command Chain
- **Called By**: `/workflow:plan` (Phase 4)
@@ -620,82 +791,9 @@ Core requirements, objectives, technical approach summary (2-3 paragraphs max).
/workflow:tools:task-generate --session WFS-auth
```
## CLI Execute Mode Examples
### Example 1: First Task (Establish Session)
```json
{
"id": "IMPL-001",
"title": "Implement user authentication module",
"context": {
"depends_on": [],
"focus_paths": ["src/auth"],
"requirements": ["JWT-based authentication", "Login and registration endpoints"]
},
"flow_control": {
"implementation_approach": [{
"step": 1,
"title": "Execute implementation with Codex",
"command": "bash(codex -C src/auth --full-auto exec \"PURPOSE: Implement user authentication module TASK: JWT-based authentication with login and registration MODE: auto CONTEXT: @{.workflow/WFS-session/.brainstorming/synthesis-specification.md} EXPECTED: Complete auth module with tests RULES: Follow synthesis specification\" --skip-git-repo-check -s danger-full-access)",
"depends_on": [],
"output": "implementation"
}]
}
}
```
### Example 2: Subsequent Task (Resume Session)
```json
{
"id": "IMPL-002",
"title": "Add password reset functionality",
"context": {
"depends_on": ["IMPL-001"],
"focus_paths": ["src/auth"],
"requirements": ["Password reset via email", "Token validation"]
},
"flow_control": {
"implementation_approach": [{
"step": 1,
"title": "Execute implementation with Codex",
"command": "bash(codex --full-auto exec \"PURPOSE: Add password reset functionality TASK: Password reset via email with token validation MODE: auto CONTEXT: Previous auth implementation from session EXPECTED: Password reset endpoints with email integration RULES: Maintain consistency with existing auth patterns\" resume --last --skip-git-repo-check -s danger-full-access)",
"depends_on": [],
"output": "implementation"
}]
}
}
```
### Example 3: Third Task (Continue Session)
```json
{
"id": "IMPL-003",
"title": "Implement role-based access control",
"context": {
"depends_on": ["IMPL-001", "IMPL-002"],
"focus_paths": ["src/auth"],
"requirements": ["User roles and permissions", "Middleware for route protection"]
},
"flow_control": {
"implementation_approach": [{
"step": 1,
"title": "Execute implementation with Codex",
"command": "bash(codex --full-auto exec \"PURPOSE: Implement role-based access control TASK: User roles, permissions, and route protection middleware MODE: auto CONTEXT: Existing auth system from session EXPECTED: RBAC system integrated with current auth RULES: Use established patterns from session context\" resume --last --skip-git-repo-check -s danger-full-access)",
"depends_on": [],
"output": "implementation"
}]
}
}
```
**Pattern Summary**:
- IMPL-001: Fresh start with `-C src/auth` and full prompt
- IMPL-002: Resume with `resume --last`, references "previous auth implementation"
- IMPL-003: Resume with `resume --last`, references "existing auth system"
## Related Commands
## 11. Related Commands
- `/workflow:plan` - Orchestrates entire planning
- `/workflow:plan --cli-execute` - Planning with CLI execution mode
- `/workflow:tools:context-gather` - Provides context package
- `/workflow:tools:concept-enhanced` - Provides analysis results
- `/workflow:execute` - Executes generated tasks
- `/workflow:execute` - Executes generated tasks

View File

@@ -1,100 +1,124 @@
---
name: Prompt Enhancer
description: Systematically enhance unclear and ambiguous user prompts by combining session memory with codebase analysis. AUTO-TRIGGER when user input is vague, lacks technical specificity (e.g., "fix", "improve", "clean up", "update", "refactor"), uses unclear references ("it", "that", "this thing"), or affects multiple modules or critical systems. Essential for transforming vague intent into actionable specifications.
allowed-tools: Bash, Read, Glob, Grep
description: Transform vague prompts into actionable specs using intelligent analysis and session memory. Use when user input contains -e or --enhance flag.
allowed-tools: (none)
---
# Prompt Enhancer
## Overview
**Transform**: Vague intent → Structured specification (Memory-based, Direct Output)
Transforms ambiguous user requests into actionable technical specifications through semantic analysis and session memory integration.
**Languages**: English + Chinese (中英文语义识别)
**Core Capability**: Vague intent → Structured specification
## Process (Internal → Direct Output)
## Enhancement Process
**Internal Analysis**: Intelligently extract session context, identify tech stack, and structure into actionable format.
### Step 1: Semantic Analysis
**Output**: Direct structured prompt (no intermediate steps shown)
Analyze user input to identify:
- **Intent keywords**: fix, improve, add, refactor, update, migrate
- **Technical scope**: single file vs multi-module
- **Domain context**: auth, payment, security, API, UI, database
- **Implied requirements**: performance, security, testing, documentation
## Output Format
### Step 2: Memory Analysis
**Dynamic Structure**: Adapt fields based on task type and context needs. Not all fields are required.
Extract from conversation history:
- **Technical context**: Previous discussions, decisions, implementations
- **Known patterns**: Identified code patterns, architecture decisions
- **Current state**: What's been built, what's in progress
- **Dependencies**: Related modules, integration points
- **Constraints**: Security requirements, backward compatibility
**Core Fields** (always present):
- **INTENT**: One-sentence technical goal
- **ACTION**: Concrete steps with technical details
### Step 3: Context Integration
Combine semantic and memory analysis to determine:
- **Precise intent**: Specific technical goal
- **Required actions**: Implementation steps with file references
- **Critical constraints**: Security, compatibility, testing requirements
- **Missing information**: What needs clarification
## Output Structure
Every enhanced prompt must follow this format:
**Optional Fields** (include when relevant):
- **TECH STACK**: Relevant technologies (when tech-specific)
- **CONTEXT**: Session memory findings (when context matters)
- **ATTENTION**: Critical constraints (when risks/requirements exist)
- **SCOPE**: Affected modules/files (for multi-module tasks)
- **METRICS**: Success criteria (for optimization/performance tasks)
- **DEPENDENCIES**: Related components (for integration tasks)
**Example (Simple Task)**:
```
INTENT: [Clear technical goal]
CONTEXT: [Session memory + semantic analysis]
ACTION: [Numbered implementation steps]
ATTENTION: [Critical constraints]
📋 ENHANCED PROMPT
INTENT: Fix authentication token validation in JWT middleware
ACTION:
1. Review token expiration logic in auth middleware
2. Add proper error handling for expired tokens
3. Test with valid/expired/malformed tokens
```
**Field Descriptions**:
**Example (Complex Task)**:
```
📋 ENHANCED PROMPT
- **INTENT**: One-sentence technical goal derived from semantic analysis
- **CONTEXT**: Session memory findings + semantic domain analysis
- **ACTION**: Numbered steps with specific file/module references
- **ATTENTION**: Critical constraints, security, compliance, tests
INTENT: Optimize API performance with caching and database indexing
## Semantic Patterns
TECH STACK:
- Redis: Response caching
- PostgreSQL: Query optimization
### Intent Translation
CONTEXT:
- API response times >2s mentioned in previous conversation
- PostgreSQL slow query logs show N+1 problems
| User Input | Semantic Intent | Focus |
|------------|----------------|-------|
| "fix" + vague target | Debug and resolve | Root cause → preserve behavior |
| "improve" + no metrics | Enhance/optimize | Performance/readability |
| "add" + feature name | Implement feature | Integration + edge cases |
| "refactor" + module | Restructure | Maintain behavior |
| "update" + version | Modernize | Version compatibility |
ACTION:
1. Profile endpoints to identify slow queries
2. Add PostgreSQL indexes on frequently queried columns
3. Implement Redis caching for read-heavy endpoints
4. Add cache invalidation on data updates
### Scope Detection
METRICS:
- Target: <500ms API response time
- Cache hit ratio: >80%
**Single-file scope**:
- "fix button", "add validation", "update component"
- Use session memory only
ATTENTION:
- Maintain backward compatibility with existing API contracts
- Handle cache invalidation correctly to avoid stale data
```
## Workflow
**Multi-module scope** (>3 modules):
- "add authentication", "refactor payment", "migrate database"
- Analyze dependencies and integration points
```
Trigger (-e/--enhance) → Internal Analysis → Dynamic Output
↓ ↓ ↓
User Input Assess Task Type Select Fields
Extract Memory Context Structure Prompt
```
**System-wide scope**:
- "improve performance", "add logging", "update security"
- Consider cross-cutting concerns
1. **Detect**: User input contains `-e` or `--enhance`
2. **Analyze**:
- Determine task type (fix/optimize/implement/refactor)
- Extract relevant session context
- Identify tech stack and constraints
3. **Structure**:
- Always include: INTENT + ACTION
- Conditionally add: TECH STACK, CONTEXT, ATTENTION, METRICS, etc.
4. **Output**: Present dynamically structured prompt
## Key Principles
## Enhancement Guidelines (Internal)
1. **Memory First**: Check session memory before assumptions
2. **Semantic Precision**: Extract exact technical intent from vague language
3. **Context Reuse**: Build on previous understanding
4. **Clear Output**: Always structured format
5. **Avoid Duplication**: Reference context, don't repeat
**Always Include**:
- Clear, actionable INTENT
- Concrete ACTION steps with technical details
**Add When Relevant**:
- TECH STACK: Task involves specific technologies
- CONTEXT: Session memory provides useful background
- ATTENTION: Security/compatibility/performance concerns exist
- SCOPE: Multi-module or cross-component changes
- METRICS: Performance/optimization goals need measurement
- DEPENDENCIES: Integration points matter
**Quality Checks**:
- Make vague intent explicit
- Resolve ambiguous references
- Add testing/validation steps
- Include constraints from memory
## Best Practices
- **Semantic analysis**: Identify domain, scope, and intent keywords
- **Memory integration**: Extract all relevant context from conversation
- **Structured output**: Always use INTENT/CONTEXT/ACTION/ATTENTION format
- **Actionable steps**: Specific files, clear execution order
- **Critical constraints**: Security, compatibility, testing requirements
- ✅ Trigger only on `-e`/`--enhance` flags
- ✅ Use **dynamic field selection** based on task type
- ✅ Extract **memory context ONLY** (no file reading)
- ✅ Always include INTENT + ACTION as core fields
- ✅ Add optional fields only when relevant to task
- ✅ Direct output (no intermediate steps shown)
- ❌ NO tool calls
- ❌ NO file operations (Bash, Read, Glob, Grep)
- ❌ NO fixed template - adapt to task needs

View File

@@ -0,0 +1,224 @@
Generate ANALYSIS_RESULTS.md with comprehensive solution design and technical analysis.
## OUTPUT FILE STRUCTURE
### Required Sections
```markdown
# Technical Analysis & Solution Design
## Executive Summary
- **Analysis Focus**: {core_problem_or_improvement_area}
- **Analysis Timestamp**: {timestamp}
- **Tools Used**: {analysis_tools}
- **Overall Assessment**: {feasibility_score}/5 - {recommendation_status}
---
## 1. Current State Analysis
### Architecture Overview
- **Existing Patterns**: {key_architectural_patterns}
- **Code Structure**: {current_codebase_organization}
- **Integration Points**: {system_integration_touchpoints}
- **Technical Debt Areas**: {identified_debt_with_impact}
### Compatibility & Dependencies
- **Framework Alignment**: {framework_compatibility_assessment}
- **Dependency Analysis**: {critical_dependencies_and_risks}
- **Migration Considerations**: {backward_compatibility_concerns}
### Critical Findings
- **Strengths**: {what_works_well}
- **Gaps**: {missing_capabilities_or_issues}
- **Risks**: {identified_technical_and_business_risks}
---
## 2. Proposed Solution Design
### Core Architecture Principles
- **Design Philosophy**: {key_design_principles}
- **Architectural Approach**: {chosen_architectural_pattern_with_rationale}
- **Scalability Strategy**: {how_solution_scales}
### System Design
- **Component Architecture**: {high_level_component_design}
- **Data Flow**: {data_flow_patterns_and_state_management}
- **API Design**: {interface_contracts_and_specifications}
- **Integration Strategy**: {how_components_integrate}
### Key Design Decisions
1. **Decision**: {critical_design_choice}
- **Rationale**: {why_this_approach}
- **Alternatives Considered**: {other_options_and_tradeoffs}
- **Impact**: {implications_on_architecture}
2. **Decision**: {another_critical_choice}
- **Rationale**: {reasoning}
- **Alternatives Considered**: {tradeoffs}
- **Impact**: {consequences}
### Technical Specifications
- **Technology Stack**: {chosen_technologies_with_justification}
- **Code Organization**: {module_structure_and_patterns}
- **Testing Strategy**: {testing_approach_and_coverage}
- **Performance Targets**: {performance_requirements_and_benchmarks}
---
## 3. Implementation Strategy
### Development Approach
- **Core Implementation Pattern**: {primary_implementation_strategy}
- **Module Dependencies**: {dependency_graph_and_order}
- **Quality Assurance**: {qa_approach_and_validation}
### Code Modification Targets
**Purpose**: Specific code locations for modification AND new files to create
**Identified Targets**:
1. **Target**: `src/module/File.ts:function:45-52`
- **Type**: Modify existing
- **Modification**: {what_to_change}
- **Rationale**: {why_change_needed}
2. **Target**: `src/module/NewFile.ts`
- **Type**: Create new file
- **Purpose**: {file_purpose}
- **Rationale**: {why_new_file_needed}
**Format Rules**:
- Existing files: `file:function:lines` (with line numbers)
- New files: `file` (no function or lines)
- Unknown lines: `file:function:*`
### Feasibility Assessment
- **Technical Complexity**: {complexity_rating_and_analysis}
- **Performance Impact**: {expected_performance_characteristics}
- **Resource Requirements**: {development_resources_needed}
- **Maintenance Burden**: {ongoing_maintenance_considerations}
### Risk Mitigation
- **Technical Risks**: {implementation_risks_and_mitigation}
- **Integration Risks**: {compatibility_challenges_and_solutions}
- **Performance Risks**: {performance_concerns_and_strategies}
- **Security Risks**: {security_vulnerabilities_and_controls}
---
## 4. Solution Optimization
### Performance Optimization
- **Optimization Strategies**: {key_performance_improvements}
- **Caching Strategy**: {caching_approach_and_invalidation}
- **Resource Management**: {resource_utilization_optimization}
- **Bottleneck Mitigation**: {identified_bottlenecks_and_solutions}
### Security Enhancements
- **Security Model**: {authentication_authorization_approach}
- **Data Protection**: {data_security_and_encryption}
- **Vulnerability Mitigation**: {known_vulnerabilities_and_controls}
- **Compliance**: {regulatory_and_compliance_considerations}
### Code Quality
- **Code Standards**: {coding_conventions_and_patterns}
- **Testing Coverage**: {test_strategy_and_coverage_goals}
- **Documentation**: {documentation_requirements}
- **Maintainability**: {maintainability_practices}
---
## 5. Critical Success Factors
### Technical Requirements
- **Must Have**: {essential_technical_capabilities}
- **Should Have**: {important_but_not_critical_features}
- **Nice to Have**: {optional_enhancements}
### Quality Metrics
- **Performance Benchmarks**: {measurable_performance_targets}
- **Code Quality Standards**: {quality_metrics_and_thresholds}
- **Test Coverage Goals**: {testing_coverage_requirements}
- **Security Standards**: {security_compliance_requirements}
### Success Validation
- **Acceptance Criteria**: {how_to_validate_success}
- **Testing Strategy**: {validation_testing_approach}
- **Monitoring Plan**: {production_monitoring_strategy}
- **Rollback Plan**: {failure_recovery_strategy}
---
## 6. Analysis Confidence & Recommendations
### Assessment Scores
- **Conceptual Integrity**: {score}/5 - {brief_assessment}
- **Architectural Soundness**: {score}/5 - {brief_assessment}
- **Technical Feasibility**: {score}/5 - {brief_assessment}
- **Implementation Readiness**: {score}/5 - {brief_assessment}
- **Overall Confidence**: {overall_score}/5
### Final Recommendation
**Status**: {PROCEED|PROCEED_WITH_MODIFICATIONS|RECONSIDER|REJECT}
**Rationale**: {clear_explanation_of_recommendation}
**Critical Prerequisites**: {what_must_be_resolved_before_proceeding}
---
## 7. Reference Information
### Tool Analysis Summary
- **Gemini Insights**: {key_architectural_and_pattern_insights}
- **Codex Validation**: {technical_feasibility_and_implementation_notes}
- **Consensus Points**: {agreements_between_tools}
- **Conflicting Views**: {disagreements_and_resolution}
### Context & Resources
- **Analysis Context**: {context_package_reference}
- **Documentation References**: {relevant_documentation}
- **Related Patterns**: {similar_implementations_in_codebase}
- **External Resources**: {external_references_and_best_practices}
```
## CONTENT REQUIREMENTS
### Analysis Priority Sources
1. **PRIMARY**: Individual role analysis.md files (system-architect, ui-designer, etc.) - technical details, ADRs, decision context
2. **SECONDARY**: synthesis-specification.md - integrated requirements, cross-role alignment
3. **REFERENCE**: topic-framework.md - discussion context
### Focus Areas
- **SOLUTION IMPROVEMENTS**: How to enhance current design
- **KEY DESIGN DECISIONS**: Critical choices with rationale, alternatives, tradeoffs
- **CRITICAL INSIGHTS**: Non-obvious findings, risks, opportunities
- **OPTIMIZATION**: Performance, security, code quality recommendations
### Exclusions
- ❌ Task lists or implementation steps
- ❌ Code examples or snippets
- ❌ Project management timelines
- ❌ Resource allocation details
## OUTPUT VALIDATION
### Completeness Checklist
□ All 7 sections present with content
□ Executive Summary with feasibility score
□ Current State Analysis with findings
□ Solution Design with 2+ key decisions
□ Implementation Strategy with code targets
□ Optimization recommendations in 3 areas
□ Confidence scores with final recommendation
□ Reference information included
### Quality Standards
□ Design decisions include rationale and alternatives
□ Code targets specify file:function:lines format
□ Risk assessment with mitigation strategies
□ Quantified scores (X/5) for all assessments
□ Clear PROCEED/RECONSIDER/REJECT recommendation
Focus: Solution-focused technical analysis emphasizing design decisions and critical insights.

View File

@@ -0,0 +1,176 @@
Validate technical feasibility and identify implementation risks for proposed solution design.
## CORE CHECKLIST ⚡
□ Read context-package.json and gemini-solution-design.md
□ Assess complexity, validate technology choices
□ Evaluate performance and security implications
□ Focus on TECHNICAL FEASIBILITY and RISK ASSESSMENT
□ Write output to specified .workflow/{session_id}/.process/ path
## PREREQUISITE ANALYSIS
### Required Input Files
1. **context-package.json**: Task requirements, source files, tech stack
2. **gemini-solution-design.md**: Proposed solution design and architecture
3. **workflow-session.json**: Session state and context
4. **CLAUDE.md**: Project standards and conventions
### Analysis Dependencies
- Review Gemini's proposed solution design
- Validate against actual codebase capabilities
- Assess implementation complexity realistically
- Identify gaps between design and execution
## REQUIRED VALIDATION
### 1. Feasibility Assessment
- **Complexity Rating**: Rate technical complexity (1-5 scale)
- 1: Trivial - straightforward implementation
- 2: Simple - well-known patterns
- 3: Moderate - some challenges
- 4: Complex - significant challenges
- 5: Very Complex - high risk, major unknowns
- **Resource Requirements**: Estimate development effort
- Development time (hours/days/weeks)
- Required expertise level
- Infrastructure needs
- **Technology Compatibility**: Validate proposed tech stack
- Framework version compatibility
- Library maturity and support
- Integration with existing systems
### 2. Risk Analysis
- **Implementation Risks**: Technical challenges and blockers
- Unknown implementation patterns
- Missing capabilities or APIs
- Breaking changes to existing code
- **Integration Challenges**: System integration concerns
- Data format compatibility
- API contract changes
- Dependency conflicts
- **Performance Concerns**: Performance and scalability risks
- Resource consumption (CPU, memory, I/O)
- Latency and throughput impact
- Caching and optimization needs
- **Security Concerns**: Security vulnerabilities and threats
- Authentication/authorization gaps
- Data exposure risks
- Compliance violations
### 3. Implementation Validation
- **Development Approach**: Validate proposed implementation strategy
- Verify module dependency order
- Assess incremental development feasibility
- Evaluate testing approach
- **Quality Standards**: Validate quality requirements
- Test coverage achievability
- Performance benchmark realism
- Documentation completeness
- **Maintenance Implications**: Long-term sustainability
- Code maintainability assessment
- Technical debt evaluation
- Evolution and extensibility
### 4. Code Target Verification
Review Gemini's proposed code targets:
- **Validate existing targets**: Confirm file:function:lines exist
- **Assess new file targets**: Evaluate necessity and placement
- **Identify missing targets**: Suggest additional modification points
- **Refine target specifications**: Provide more precise line numbers if possible
### 5. Recommendations
- **Must-Have Requirements**: Critical requirements for success
- **Optimization Opportunities**: Performance and quality improvements
- **Security Controls**: Essential security measures
- **Risk Mitigation**: Strategies to reduce identified risks
## OUTPUT REQUIREMENTS
### Output File
**Path**: `.workflow/{session_id}/.process/codex-feasibility-validation.md`
**Format**: Follow structure from `~/.claude/workflows/cli-templates/prompts/workflow/analysis-results-structure.txt`
### Required Sections
Focus on these sections from the template:
- Executive Summary (with Codex perspective)
- Current State Analysis (validation findings)
- Implementation Strategy (feasibility assessment)
- Solution Optimization (risk mitigation)
- Confidence Scores (technical feasibility focus)
### Content Guidelines
- ✅ Focus on technical feasibility and risk assessment
- ✅ Verify code targets from Gemini's design
- ✅ Provide concrete risk mitigation strategies
- ✅ Quantify complexity and effort estimates
- ❌ Do NOT create task breakdowns
- ❌ Do NOT provide step-by-step implementation guides
- ❌ Do NOT include code examples
## VALIDATION METHODOLOGY
### Complexity Scoring
Rate each aspect on 1-5 scale:
- Technical Complexity
- Integration Complexity
- Performance Risk
- Security Risk
- Maintenance Burden
### Risk Classification
- **LOW**: Minor issues, easily addressable
- **MEDIUM**: Manageable challenges with clear mitigation
- **HIGH**: Significant concerns requiring major mitigation
- **CRITICAL**: Fundamental viability threats
### Feasibility Judgment
- **PROCEED**: Technically feasible with acceptable risk
- **PROCEED_WITH_MODIFICATIONS**: Feasible but needs adjustments
- **RECONSIDER**: High risk, major changes needed
- **REJECT**: Not feasible with current approach
## CONTEXT INTEGRATION
### Gemini Analysis Integration
- Review proposed architecture and design decisions
- Validate assumptions and technology choices
- Cross-check code targets against actual codebase
- Assess realism of performance targets
### Codebase Reality Check
- Verify existing code capabilities
- Identify actual technical constraints
- Assess team skill compatibility
- Evaluate infrastructure readiness
### Session Context
- Consider session history and previous decisions
- Align with project architecture standards
- Respect existing patterns and conventions
## EXECUTION MODE
**Mode**: Analysis with write permission for output file
**CLI Tool**: Codex with --skip-git-repo-check -s danger-full-access
**Timeout**: 60-90 minutes for complex tasks
**Output**: Single file codex-feasibility-validation.md
**Trigger**: Only for complex tasks (>6 modules)
## VERIFICATION CHECKLIST ✓
□ context-package.json and gemini-solution-design.md read
□ Complexity rated on 1-5 scale with justification
□ All risk categories assessed (technical, integration, performance, security)
□ Code targets verified and refined
□ Risk mitigation strategies provided
□ Resource requirements estimated
□ Final feasibility judgment (PROCEED/RECONSIDER/REJECT)
□ Output written to .workflow/{session_id}/.process/codex-feasibility-validation.md
Focus: Technical feasibility validation with realistic risk assessment and mitigation strategies.

View File

@@ -0,0 +1,131 @@
Analyze and design optimal solution with comprehensive architecture evaluation and design decisions.
## CORE CHECKLIST ⚡
□ Read context-package.json to understand task requirements, source files, tech stack
□ Analyze current architecture patterns and code structure
□ Propose solution design with key decisions and rationale
□ Focus on SOLUTION IMPROVEMENTS and KEY DESIGN DECISIONS
□ Write output to specified .workflow/{session_id}/.process/ path
## ANALYSIS PRIORITY
### Source Hierarchy
1. **PRIMARY**: Individual role analysis.md files (system-architect, ui-designer, data-architect, etc.)
- Technical details and implementation considerations
- Architecture Decision Records (ADRs)
- Design decision context and rationale
2. **SECONDARY**: synthesis-specification.md
- Integrated requirements across roles
- Cross-role alignment and dependencies
- Unified feature specifications
3. **REFERENCE**: topic-framework.md
- Discussion context and background
- Initial problem framing
## REQUIRED ANALYSIS
### 1. Current State Assessment
- Identify existing architectural patterns and code structure
- Map integration points and dependencies
- Evaluate technical debt and pain points
- Assess framework compatibility and constraints
### 2. Solution Design
- Propose core architecture principles and approach
- Design component architecture and data flow
- Specify API contracts and integration strategy
- Define technology stack with justification
### 3. Key Design Decisions
For each critical decision:
- **Decision**: What is being decided
- **Rationale**: Why this approach
- **Alternatives Considered**: Other options and their tradeoffs
- **Impact**: Implications on architecture, performance, maintainability
Minimum 2 key decisions required.
### 4. Code Modification Targets
Identify specific code locations for changes:
- **Existing files**: `file:function:lines` format (e.g., `src/auth/login.ts:validateUser:45-52`)
- **New files**: `file` only (e.g., `src/auth/PasswordReset.ts`)
- **Unknown lines**: `file:function:*` (e.g., `src/auth/service.ts:refreshToken:*`)
For each target:
- Type: Modify existing | Create new
- Modification/Purpose: What changes needed
- Rationale: Why this target
### 5. Critical Insights
- Strengths: What works well in current/proposed design
- Gaps: Missing capabilities or concerns
- Risks: Technical, integration, performance, security
- Optimization Opportunities: Performance, security, code quality
### 6. Feasibility Assessment
- Technical Complexity: Rating and analysis
- Performance Impact: Expected characteristics
- Resource Requirements: Development effort
- Maintenance Burden: Ongoing considerations
## OUTPUT REQUIREMENTS
### Output File
**Path**: `.workflow/{session_id}/.process/gemini-solution-design.md`
**Format**: Follow structure from `~/.claude/workflows/cli-templates/prompts/workflow/analysis-results-structure.txt`
### Required Sections
- Executive Summary with feasibility score
- Current State Analysis
- Proposed Solution Design with 2+ key decisions
- Implementation Strategy with code targets
- Solution Optimization (performance, security, quality)
- Critical Success Factors
- Confidence Scores with recommendation
### Content Guidelines
- ✅ Focus on solution improvements and key design decisions
- ✅ Include rationale, alternatives, and tradeoffs for decisions
- ✅ Provide specific code targets in correct format
- ✅ Quantify assessments with scores (X/5)
- ❌ Do NOT create task lists or implementation steps
- ❌ Do NOT include code examples or snippets
- ❌ Do NOT create project management timelines
## CONTEXT INTEGRATION
### Session Context
- Load context-package.json for task requirements
- Reference workflow-session.json for session state
- Review CLAUDE.md for project standards
### Brainstorm Context
If brainstorming artifacts exist:
- Prioritize individual role analysis.md files
- Use synthesis-specification.md for integrated view
- Reference topic-framework.md for context
### Codebase Context
- Identify similar patterns in existing code
- Evaluate success/failure of current approaches
- Ensure consistency with project architecture
## EXECUTION MODE
**Mode**: Analysis with write permission for output file
**CLI Tool**: Gemini wrapper with --approval-mode yolo
**Timeout**: 40-60 minutes based on complexity
**Output**: Single file gemini-solution-design.md
## VERIFICATION CHECKLIST ✓
□ context-package.json read and analyzed
□ All 7 required sections present in output
□ 2+ key design decisions with rationale and alternatives
□ Code targets specified in correct format
□ Feasibility scores provided (X/5)
□ Final recommendation (PROCEED/RECONSIDER/REJECT)
□ Output written to .workflow/{session_id}/.process/gemini-solution-design.md
Focus: Comprehensive solution design emphasizing architecture decisions and critical insights.

View File

@@ -0,0 +1,286 @@
IMPL_PLAN.md Template - Implementation Plan Document Structure
## Document Frontmatter
```yaml
---
identifier: WFS-{session-id}
source: "User requirements" | "File: path" | "Issue: ISS-001"
analysis: .workflow/{session-id}/.process/ANALYSIS_RESULTS.md
artifacts: .workflow/{session-id}/.brainstorming/
context_package: .workflow/{session-id}/.process/context-package.json # CCW smart context
workflow_type: "standard | tdd | design" # Indicates execution model
verification_history: # CCW quality gates
concept_verify: "passed | skipped | pending"
action_plan_verify: "pending"
phase_progression: "brainstorm → context → analysis → concept_verify → planning" # CCW workflow phases
---
```
## Document Structure
# Implementation Plan: {Project Title}
## 1. Summary
Core requirements, objectives, technical approach summary (2-3 paragraphs max).
**Core Objectives**:
- [Key objective 1]
- [Key objective 2]
**Technical Approach**:
- [High-level approach]
## 2. Context Analysis
### CCW Workflow Context
**Phase Progression**:
- ✅ Phase 1: Brainstorming (synthesis-specification.md generated)
- ✅ Phase 2: Context Gathering (context-package.json: {N} files, {M} modules analyzed)
- ✅ Phase 3: Enhanced Analysis (ANALYSIS_RESULTS.md: Gemini/Qwen/Codex parallel insights)
- ✅ Phase 4: Concept Verification ({X} clarifications answered, synthesis updated | skipped)
- ⏳ Phase 5: Action Planning (current phase - generating IMPL_PLAN.md)
**Quality Gates**:
- concept-verify: ✅ Passed (0 ambiguities remaining) | ⏭️ Skipped (user decision) | ⏳ Pending
- action-plan-verify: ⏳ Pending (recommended before /workflow:execute)
**Context Package Summary**:
- **Focus Paths**: {list key directories from context-package.json}
- **Key Files**: {list primary files for modification}
- **Module Depth Analysis**: {from get_modules_by_depth.sh output}
- **Smart Context**: {total file count} files, {module count} modules, {dependency count} dependencies identified
### Project Profile
- **Type**: Greenfield/Enhancement/Refactor
- **Scale**: User count, data volume, complexity
- **Tech Stack**: Primary technologies
- **Timeline**: Duration and milestones
### Module Structure
```
[Directory tree showing key modules]
```
### Dependencies
**Primary**: [Core libraries and frameworks]
**APIs**: [External services]
**Development**: [Testing, linting, CI/CD tools]
### Patterns & Conventions
- **Architecture**: [Key patterns like DI, Event-Driven]
- **Component Design**: [Design patterns]
- **State Management**: [State strategy]
- **Code Style**: [Naming, TypeScript coverage]
## 3. Brainstorming Artifacts Reference
### Artifact Usage Strategy
**Primary Reference (synthesis-specification.md)**:
- **What**: Comprehensive implementation blueprint from multi-role synthesis
- **When**: Every task references this first for requirements and design decisions
- **How**: Extract architecture decisions, UI/UX patterns, functional requirements, non-functional requirements
- **Priority**: Authoritative - overrides role-specific analyses when conflicts arise
- **CCW Value**: Consolidates insights from all brainstorming roles into single source of truth
**Context Intelligence (context-package.json)**:
- **What**: Smart context gathered by CCW's context-gather phase
- **Content**: Focus paths, dependency graph, existing patterns, module structure
- **Usage**: Tasks load this via `flow_control.preparatory_steps` for environment setup
- **CCW Value**: Automated intelligent context discovery replacing manual file exploration
**Technical Analysis (ANALYSIS_RESULTS.md)**:
- **What**: Gemini/Qwen/Codex parallel analysis results
- **Content**: Optimization strategies, risk assessment, architecture review, implementation patterns
- **Usage**: Referenced in task planning for technical guidance and risk mitigation
- **CCW Value**: Multi-model parallel analysis providing comprehensive technical intelligence
### Integrated Specifications (Highest Priority)
- **synthesis-specification.md**: Comprehensive implementation blueprint
- Contains: Architecture design, UI/UX guidelines, functional/non-functional requirements, implementation roadmap, risk assessment
### Supporting Artifacts (Reference)
- **topic-framework.md**: Role-specific discussion points and analysis framework
- **system-architect/analysis.md**: Detailed architecture specifications
- **ui-designer/analysis.md**: Layout and component specifications
- **product-manager/analysis.md**: Product vision and user stories
**Artifact Priority in Development**:
1. synthesis-specification.md (primary reference for all tasks)
2. context-package.json (smart context for execution environment)
3. ANALYSIS_RESULTS.md (technical analysis and optimization strategies)
4. Role-specific analyses (fallback for detailed specifications)
## 4. Implementation Strategy
### Execution Strategy
**Execution Model**: [Sequential | Parallel | Phased | TDD Cycles]
**Rationale**: [Why this execution model fits the project]
**Parallelization Opportunities**:
- [List independent workstreams]
**Serialization Requirements**:
- [List critical dependencies]
### Architectural Approach
**Key Architecture Decisions**:
- [ADR references from synthesis]
- [Justification for architecture patterns]
**Integration Strategy**:
- [How modules communicate]
- [State management approach]
### Key Dependencies
**Task Dependency Graph**:
```
[High-level dependency visualization]
```
**Critical Path**: [Identify bottleneck tasks]
### Testing Strategy
**Testing Approach**:
- Unit testing: [Tools, scope]
- Integration testing: [Key integration points]
- E2E testing: [Critical user flows]
**Coverage Targets**:
- Lines: ≥70%
- Functions: ≥70%
- Branches: ≥65%
**Quality Gates**:
- [CI/CD gates]
- [Performance budgets]
## 5. Task Breakdown Summary
### Task Count
**{N} tasks** (flat hierarchy | two-level hierarchy, sequential | parallel execution)
### Task Structure
- **IMPL-1**: [Main task title]
- **IMPL-2**: [Main task title]
...
### Complexity Assessment
- **High**: [List with rationale]
- **Medium**: [List]
- **Low**: [List]
### Dependencies
[Reference Section 4.3 for dependency graph]
**Parallelization Opportunities**:
- [Specific task groups that can run in parallel]
## 6. Implementation Plan (Detailed Phased Breakdown)
### Execution Strategy
**Phase 1 (Weeks 1-2): [Phase Name]**
- **Tasks**: IMPL-1, IMPL-2
- **Deliverables**:
- [Specific deliverable 1]
- [Specific deliverable 2]
- **Success Criteria**:
- [Measurable criterion]
**Phase 2 (Weeks 3-N): [Phase Name]**
...
### Resource Requirements
**Development Team**:
- [Team composition and skills]
**External Dependencies**:
- [Third-party services, APIs]
**Infrastructure**:
- [Development, staging, production environments]
## 7. Risk Assessment & Mitigation
| Risk | Impact | Probability | Mitigation Strategy | Owner |
|------|--------|-------------|---------------------|-------|
| [Risk description] | High/Med/Low | High/Med/Low | [Strategy] | [Role] |
**Critical Risks** (High impact + High probability):
- [Risk 1]: [Detailed mitigation plan]
**Monitoring Strategy**:
- [How risks will be monitored]
## 8. Success Criteria
**Functional Completeness**:
- [ ] All requirements from synthesis-specification.md implemented
- [ ] All acceptance criteria from task.json files met
**Technical Quality**:
- [ ] Test coverage ≥70%
- [ ] Bundle size within budget
- [ ] Performance targets met
**Operational Readiness**:
- [ ] CI/CD pipeline operational
- [ ] Monitoring and logging configured
- [ ] Documentation complete
**Business Metrics**:
- [ ] [Key business metrics from synthesis]
## Template Usage Guidelines
### When Generating IMPL_PLAN.md
1. **Fill Frontmatter Variables**:
- Replace {session-id} with actual session ID
- Set workflow_type based on planning phase
- Update verification_history based on concept-verify results
2. **Populate CCW Workflow Context**:
- Extract file/module counts from context-package.json
- Document phase progression based on completed workflow steps
- Update quality gate status (passed/skipped/pending)
3. **Extract from Analysis Results**:
- Core objectives from ANALYSIS_RESULTS.md
- Technical approach and architecture decisions
- Risk assessment and mitigation strategies
4. **Reference Brainstorming Artifacts**:
- List detected artifacts with correct paths
- Document artifact priority and usage strategy
- Map artifacts to specific tasks based on domain
5. **Define Implementation Strategy**:
- Choose execution model (sequential/parallel/phased)
- Identify parallelization opportunities
- Document critical path and dependencies
6. **Break Down Tasks**:
- List all task IDs and titles
- Assess complexity (high/medium/low)
- Create dependency graph visualization
7. **Set Success Criteria**:
- Extract from synthesis-specification.md
- Include measurable metrics
- Define quality gates
### Validation Checklist
Before finalizing IMPL_PLAN.md:
- [ ] All frontmatter fields populated correctly
- [ ] CCW workflow context reflects actual phase progression
- [ ] Brainstorming artifacts correctly referenced
- [ ] Task breakdown matches generated task JSONs
- [ ] Dependencies are acyclic and logical
- [ ] Success criteria are measurable
- [ ] Risk assessment includes mitigation strategies
- [ ] All {placeholder} variables replaced with actual values

View File

@@ -0,0 +1,119 @@
Task JSON Schema - Agent Mode (No Command Field)
## Schema Structure
```json
{
"id": "IMPL-N[.M]",
"title": "Descriptive task name",
"status": "pending",
"meta": {
"type": "feature|bugfix|refactor|test|docs",
"agent": "@code-developer|@test-fix-agent|@general-purpose"
},
"context": {
"requirements": ["extracted from analysis"],
"focus_paths": ["src/paths"],
"acceptance": ["measurable criteria"],
"depends_on": ["IMPL-N"],
"artifacts": [
{
"type": "synthesis_specification",
"path": "{synthesis_spec_path}",
"priority": "highest",
"usage": "Primary requirement source - use for consolidated requirements and cross-role alignment"
},
{
"type": "role_analysis",
"path": "{role_analysis_path}",
"priority": "high",
"usage": "Technical/design/business details from specific roles. Common roles: system-architect (ADRs, APIs, caching), ui-designer (design tokens, layouts), product-manager (user stories, metrics)"
}
]
},
"flow_control": {
"pre_analysis": [
{
"step": "load_synthesis_specification",
"action": "Load consolidated synthesis specification",
"commands": [
"Read({synthesis_spec_path})"
],
"output_to": "synthesis_specification",
"on_error": "fail"
},
{
"step": "load_context_package",
"action": "Load context package for project structure",
"commands": [
"Read({context_package_path})"
],
"output_to": "context_pkg",
"on_error": "fail"
},
{
"step": "mcp_codebase_exploration",
"action": "Explore codebase using MCP",
"command": "mcp__code-index__find_files(pattern=\"{file_pattern}\") && mcp__code-index__search_code_advanced(pattern=\"{search_pattern}\")",
"output_to": "codebase_structure",
"on_error": "skip_optional"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Implement task following synthesis specification",
"description": "Implement '{title}' following [synthesis_specification] requirements and [context_pkg] patterns. Use synthesis-specification.md as primary source, consult artifacts for technical details.",
"modification_points": [
"Apply consolidated requirements from synthesis-specification.md",
"Follow technical guidelines from synthesis",
"Consult artifacts for implementation details when needed",
"Integrate with existing patterns"
],
"logic_flow": [
"Load synthesis specification and context package",
"Analyze existing patterns from [codebase_structure]",
"Implement following specification",
"Consult artifacts for technical details when needed",
"Validate against acceptance criteria"
],
"depends_on": [],
"output": "implementation"
}
],
"target_files": ["file:function:lines", "path/to/NewFile.ts"]
}
}
```
## Key Features - Agent Mode
**Execution Model**: Agent interprets `modification_points` and `logic_flow` to execute autonomously
**No Command Field**: Steps in `implementation_approach` do NOT include `command` field
**Context Loading**: Context loaded via `pre_analysis` steps, available as variables (e.g., [synthesis_specification], [context_pkg])
**Agent Execution**:
- Agent reads modification_points and logic_flow
- Agent performs implementation autonomously
- Agent validates against acceptance criteria
## Field Descriptions
**implementation_approach**: Array of step objects (NO command field)
- **step**: Sequential step number
- **title**: Step description
- **description**: Detailed instructions with variable references
- **modification_points**: Specific code modifications to apply
- **logic_flow**: Business logic execution sequence
- **depends_on**: Step dependencies (empty array for independent steps)
- **output**: Expected deliverable variable name
## Usage Guidelines
1. **Load Context**: Use pre_analysis to load synthesis, context package, and explore codebase
2. **Reference Variables**: Use [variable_name] to reference outputs from pre_analysis steps
3. **Clear Instructions**: Provide detailed modification_points and logic_flow for agent
4. **No Commands**: Never add command field to implementation_approach steps
5. **Agent Autonomy**: Let agent interpret and execute based on provided instructions

View File

@@ -0,0 +1,178 @@
Task JSON Schema - CLI Execute Mode (With Command Field)
## Schema Structure
```json
{
"id": "IMPL-N[.M]",
"title": "Descriptive task name",
"status": "pending",
"meta": {
"type": "feature|bugfix|refactor|test|docs",
"agent": "@code-developer|@test-fix-agent|@general-purpose"
},
"context": {
"requirements": ["extracted from analysis"],
"focus_paths": ["src/paths"],
"acceptance": ["measurable criteria"],
"depends_on": ["IMPL-N"],
"artifacts": [
{
"type": "synthesis_specification",
"path": "{synthesis_spec_path}",
"priority": "highest",
"usage": "Primary requirement source - use for consolidated requirements and cross-role alignment"
},
{
"type": "role_analysis",
"path": "{role_analysis_path}",
"priority": "high",
"usage": "Technical/design/business details from specific roles"
}
]
},
"flow_control": {
"pre_analysis": [
{
"step": "load_synthesis_specification",
"action": "Load consolidated synthesis specification",
"commands": [
"Read({synthesis_spec_path})"
],
"output_to": "synthesis_specification",
"on_error": "fail"
},
{
"step": "load_context_package",
"action": "Load context package",
"commands": [
"Read({context_package_path})"
],
"output_to": "context_pkg",
"on_error": "fail"
},
{
"step": "mcp_codebase_exploration",
"action": "Explore codebase using MCP",
"command": "mcp__code-index__find_files(pattern=\"{file_pattern}\")",
"output_to": "codebase_structure",
"on_error": "skip_optional"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Implement task with Codex",
"description": "Implement '{title}' using Codex CLI tool",
"command": "bash(codex -C {focus_path} --full-auto exec \"PURPOSE: {purpose} TASK: {task_description} MODE: auto CONTEXT: @{{synthesis_spec_path}} @{{context_package_path}} EXPECTED: {expected_output} RULES: Follow synthesis specification\" --skip-git-repo-check -s danger-full-access)",
"modification_points": [
"Create/modify implementation files",
"Follow synthesis specification requirements",
"Integrate with existing patterns"
],
"logic_flow": [
"Codex loads context package and synthesis",
"Codex implements according to specification",
"Codex validates against acceptance criteria"
],
"depends_on": [],
"output": "implementation"
}
],
"target_files": ["file:function:lines", "path/to/NewFile.ts"]
}
}
```
## Multi-Step Example (Complex Task with Resume)
```json
{
"id": "IMPL-002",
"title": "Implement RBAC system",
"flow_control": {
"implementation_approach": [
{
"step": 1,
"title": "Create RBAC models",
"description": "Create role and permission data models",
"command": "bash(codex -C src/models --full-auto exec \"PURPOSE: Create RBAC models TASK: Define role and permission models MODE: auto CONTEXT: @{{synthesis_spec_path}} @{{context_package_path}} EXPECTED: Models with migrations RULES: Follow synthesis spec\" --skip-git-repo-check -s danger-full-access)",
"modification_points": ["Define role model", "Define permission model"],
"logic_flow": ["Design schema", "Implement models", "Generate migrations"],
"depends_on": [],
"output": "rbac_models"
},
{
"step": 2,
"title": "Implement RBAC middleware",
"description": "Create route protection middleware",
"command": "bash(codex --full-auto exec \"PURPOSE: Create RBAC middleware TASK: Route protection middleware MODE: auto CONTEXT: RBAC models from step 1 EXPECTED: Middleware for route protection RULES: Use session patterns\" resume --last --skip-git-repo-check -s danger-full-access)",
"modification_points": ["Create permission checker", "Add route decorators"],
"logic_flow": ["Check user role", "Validate permissions", "Allow/deny access"],
"depends_on": [1],
"output": "rbac_middleware"
}
]
}
}
```
## Key Features - CLI Execute Mode
**Execution Model**: Commands in `command` field execute steps directly
**Command Field Required**: Every step in `implementation_approach` MUST include `command` field
**Context Delivery**: Context provided via CONTEXT field in command prompt using `@{path}` syntax
**Multi-Step Support**:
- First step: Full context with `-C directory` and complete CONTEXT field
- Subsequent steps: Use `resume --last` to maintain session continuity
- Step dependencies: Use `depends_on` array to specify step order
## Command Templates
### Single-Step Codex Command
```bash
bash(codex -C {focus_path} --full-auto exec "PURPOSE: {purpose} TASK: {task} MODE: auto CONTEXT: @{{synthesis_spec_path}} @{{context_package_path}} EXPECTED: {expected} RULES: {rules}" --skip-git-repo-check -s danger-full-access)
```
### Multi-Step Codex with Resume
```bash
# First step
bash(codex -C {path} --full-auto exec "..." --skip-git-repo-check -s danger-full-access)
# Subsequent steps
bash(codex --full-auto exec "..." resume --last --skip-git-repo-check -s danger-full-access)
```
### Gemini/Qwen Commands (Analysis/Documentation)
```bash
bash(~/.claude/scripts/gemini-wrapper -p "PURPOSE: {purpose} TASK: {task} MODE: analysis CONTEXT: @{{synthesis_spec_path}} EXPECTED: {expected} RULES: {rules}")
# With write permission
bash(~/.claude/scripts/gemini-wrapper --approval-mode yolo -p "PURPOSE: {purpose} TASK: {task} MODE: write CONTEXT: @{{context}} EXPECTED: {expected} RULES: {rules}")
```
## Field Descriptions
**implementation_approach**: Array of step objects (WITH command field)
- **step**: Sequential step number
- **title**: Step description
- **description**: Brief step description
- **command**: Complete CLI command to execute the step
- **modification_points**: Specific code modifications (for reference)
- **logic_flow**: Execution sequence (for reference)
- **depends_on**: Step dependencies (array of step numbers, empty for independent)
- **output**: Expected deliverable variable name
## Usage Guidelines
1. **Always Include Command**: Every step MUST have a `command` field
2. **Context via CONTEXT Field**: Provide context using `@{path}` syntax in command prompt
3. **First Step Full Context**: First step should include `-C directory` and full context package
4. **Resume for Continuity**: Use `resume --last` for subsequent steps in same task
5. **Step Dependencies**: Use `depends_on: [1, 2]` to specify execution order
6. **Parameter Position**:
- Codex: `--skip-git-repo-check -s danger-full-access` at END
- Gemini/Qwen: `--approval-mode yolo` AFTER wrapper command, BEFORE -p

View File

@@ -5,6 +5,262 @@ All notable changes to Claude Code Workflow (CCW) will be documented in this fil
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
n## [4.6.2] - 2025-10-20
### 📝 Documentation Optimization
#### Improved
**`/memory:load` Command Documentation**: Optimized command specification from 273 to 240 lines (12% reduction)
- Merged redundant sections for better information flow
- Removed unnecessary internal implementation details
- Simplified usage examples while preserving clarity
- Maintained all critical information (parameters, workflow, JSON structure)
- Improved user-centric documentation structure
#### Updated
**COMMAND_SPEC.md**: Updated `/memory:load` specification to match actual implementation
- Corrected syntax: `[--tool gemini|qwen]` instead of outdated `[--agent] [--json]` flags
- Added agent-driven execution details
- Clarified core philosophy and token-efficiency benefits
**GETTING_STARTED.md**: Added "Quick Context Loading for Specific Tasks" section
- Positioned between "Full Project Index Rebuild" and "Incremental Related Module Updates"
- Includes practical examples and use case guidance
- Explains how `/memory:load` works and when to use it
---
## [4.6.0] - 2025-10-18
### 🎯 Concept Clarification & Agent-Driven Analysis
This release introduces a concept clarification quality gate and agent-delegated intelligent analysis, significantly enhancing workflow planning accuracy and reducing execution errors.
#### Added
**Concept Clarification Quality Gate** (`/workflow:concept-clarify`):
- **Dual-Mode Support**: Automatically detects and operates in brainstorm or plan workflows
- **Brainstorm Mode**: Analyzes `synthesis-specification.md` after brainstorm synthesis
- **Plan Mode**: Analyzes `ANALYSIS_RESULTS.md` between Phase 3 and Phase 4
- **Interactive Q&A System**: Up to 5 targeted questions to resolve ambiguities
- Multiple-choice or short-answer format
- Covers requirements, architecture, UX, implementation, risks
- Progressive disclosure - one question at a time
- **Incremental Updates**: Saves clarifications after each answer to prevent context loss
- **Coverage Summary**: Generates detailed report with recommendations
- **Session Metadata**: Tracks verification status in workflow session
- **Phase 3.5 Integration**: Inserted as quality gate in `/workflow:plan`
- Pauses auto-continue workflow for user interaction
- Auto-skips if no critical ambiguities detected
- Updates ANALYSIS_RESULTS.md with user clarifications
**Agent-Delegated Intelligent Analysis** (Phase 3 Enhancement):
- **CLI Execution Agent Integration**: Phase 3 now uses `cli-execution-agent`
- Autonomous context discovery via MCP code-index
- Enhanced prompt generation with discovered patterns
- 5-phase agent workflow (understand → discover → enhance → execute → route)
- **MCP-Powered Context Discovery**: Automatic file and pattern discovery
- `mcp__code-index__find_files`: Pattern-based file discovery
- `mcp__code-index__search_code_advanced`: Content-based code search
- `mcp__code-index__get_file_summary`: Structural analysis
- **Smart Tool Selection**: Agent automatically chooses Gemini for analysis tasks
- **Execution Logging**: Complete agent execution log saved to session
- **Session-Aware Routing**: Results automatically routed to correct session directory
**Enhanced Planning Workflow** (`/workflow:plan`):
- **5-Phase Model**: Upgraded from 4-phase to 5-phase workflow
- Phase 1: Session Discovery
- Phase 2: Context Gathering
- Phase 3: Intelligent Analysis (agent-delegated)
- Phase 3.5: Concept Clarification (quality gate)
- Phase 4: Task Generation
- **Auto-Continue Enhancement**: Workflow pauses only at Phase 3.5 for user input
- **Memory Management**: Added memory state check before Phase 3.5
- Automatic `/compact` execution if context usage >110K tokens
- Prevents context overflow during intensive analysis
#### Changed
**concept-clarify.md** - Enhanced with Dual-Mode Support:
- **Mode Detection Logic**: Auto-detects workflow type based on artifact presence
```bash
IF EXISTS(ANALYSIS_RESULTS.md) → plan mode
ELSE IF EXISTS(synthesis-specification.md) → brainstorm mode
```
- **Dynamic File Handling**: Loads and updates appropriate artifact based on mode
- **Mode-Specific Validation**: Different validation rules for each mode
- **Enhanced Metadata**: Tracks `clarify_mode` in session verification data
- **Backward Compatible**: Preserves all existing brainstorm mode functionality
**plan.md** - Refactored for Agent Delegation:
- **Phase 3 Delegation**: Changed from direct `concept-enhanced` call to `cli-execution-agent`
- Agent receives: sessionId, contextPath, task description
- Agent executes: autonomous context discovery + Gemini analysis
- Agent outputs: ANALYSIS_RESULTS.md + execution log
- **Phase 3.5 Integration**: New quality gate phase with interactive Q&A
- Command: `SlashCommand(concept-clarify --session [sessionId])`
- Validation: Checks for clarifications section and recommendation
- Skip conditions: Auto-proceeds if no ambiguities detected
- **TodoWrite Enhancement**: Updated to track 5 phases including Phase 3.5
- **Data Flow Updates**: Enhanced context flow diagram showing agent execution
- **Coordinator Checklist**: Added Phase 3.5 verification steps
**README.md & README_CN.md** - Documentation Updates:
- **Version Badge**: Updated to v4.6.0
- **What's New Section**: Highlighted key features of v4.6.0
- Concept clarification quality gate
- Agent-delegated analysis
- Dual-mode support
- Test-cycle-execute documentation
- **Phase 5 Enhancement**: Added `/workflow:test-cycle-execute` documentation
- Dynamic task generation explanation
- Iterative testing workflow
- CLI-driven analysis integration
- Resume session support
- **Command Reference**: Added test-cycle-execute to workflow commands table
#### Improved
**Workflow Quality Gates**:
- 🎯 **Pre-Planning Verification**: concept-clarify catches ambiguities before task generation
- 🤖 **Intelligent Analysis**: Agent-driven Phase 3 provides deeper context discovery
- 🔄 **Interactive Control**: Users validate critical decisions at Phase 3.5
- ✅ **Higher Accuracy**: Clarified requirements reduce execution errors
**Context Discovery**:
- 🔍 **MCP Integration**: Leverages code-index for automatic pattern discovery
- 📊 **Enhanced Prompts**: Agent enriches prompts with discovered context
- 🎯 **Relevance Scoring**: Files ranked and filtered by relevance
- 📁 **Execution Transparency**: Complete agent logs for debugging
**User Experience**:
- ⏸️ **Single Interaction Point**: Only Phase 3.5 requires user input
- ⚡ **Auto-Skip Intelligence**: No questions if analysis is already clear
- 📝 **Incremental Saves**: Clarifications saved after each answer
- 🔄 **Resume Support**: Can continue interrupted test workflows
#### Technical Details
**Concept Clarification Architecture**:
```javascript
Phase 1: Session Detection & Mode Detection
IF EXISTS(process_dir/ANALYSIS_RESULTS.md):
mode = "plan" → primary_artifact = ANALYSIS_RESULTS.md
ELSE IF EXISTS(brainstorm_dir/synthesis-specification.md):
mode = "brainstorm" → primary_artifact = synthesis-specification.md
Phase 2: Load Artifacts (mode-specific)
Phase 3: Ambiguity Scan (8 categories)
Phase 4: Question Generation (max 5, prioritized)
Phase 5: Interactive Q&A (one at a time)
Phase 6: Incremental Updates (save after each answer)
Phase 7: Completion Report with recommendations
```
**Agent-Delegated Analysis Flow**:
```javascript
plan.md Phase 3:
Task(cli-execution-agent) →
Agent Phase 1: Understand analysis intent
Agent Phase 2: MCP code-index discovery
Agent Phase 3: Enhance prompt with patterns
Agent Phase 4: Execute Gemini analysis
Agent Phase 5: Route to .workflow/[session]/.process/ANALYSIS_RESULTS.md
→ ANALYSIS_RESULTS.md + execution log
```
**Workflow Data Flow**:
```
User Input
Phase 1: session:start → sessionId
Phase 2: context-gather → contextPath
Phase 3: cli-execution-agent → ANALYSIS_RESULTS.md (enhanced)
Phase 3.5: concept-clarify → ANALYSIS_RESULTS.md (clarified)
↓ [User answers 0-5 questions]
Phase 4: task-generate → IMPL_PLAN.md + task.json
```
#### Files Changed
**Commands** (3 files):
- `.claude/commands/workflow/concept-clarify.md` - Added dual-mode support (85 lines changed)
- `.claude/commands/workflow/plan.md` - Agent delegation + Phase 3.5 (106 lines added)
- `.claude/commands/workflow/tools/concept-enhanced.md` - Documentation updates
**Documentation** (3 files):
- `README.md` - Version update + test-cycle-execute documentation (25 lines changed)
- `README_CN.md` - Chinese version aligned with README.md (25 lines changed)
- `CHANGELOG.md` - This changelog entry
**Total Impact**:
- 6 files changed
- 241 insertions, 50 deletions
- Net: +191 lines
#### Backward Compatibility
**✅ Fully Backward Compatible**:
- Existing workflows continue to work unchanged
- concept-clarify preserves brainstorm mode functionality
- Phase 3.5 auto-skips when no ambiguities detected
- Agent delegation transparent to users
- All existing commands and sessions unaffected
#### Benefits
**Planning Accuracy**:
- 🎯 **Ambiguity Resolution**: Interactive Q&A eliminates underspecified requirements
- 📊 **Better Context**: Agent discovers patterns missed by manual analysis
- ✅ **Pre-Execution Validation**: Catches issues before task generation
**Workflow Efficiency**:
- ⚡ **Autonomous Discovery**: MCP integration reduces manual context gathering
- 🔄 **Smart Skipping**: No questions when analysis is already complete
- 📝 **Incremental Progress**: Saves work after each clarification
**Development Quality**:
- 🐛 **Fewer Errors**: Clarified requirements reduce implementation mistakes
- 🎯 **Focused Tasks**: Better analysis produces more precise task breakdown
- 📚 **Audit Trail**: Complete execution logs for debugging
#### Migration Notes
**No Action Required**:
- All changes are additive and backward compatible
- Existing workflows benefit from new features automatically
- concept-clarify can be used manually in existing sessions
**Optional Enhancements**:
- Use `/workflow:concept-clarify` manually before `/workflow:plan` for brainstorm workflows
- Review Phase 3 execution logs in `.workflow/[session]/.chat/` for insights
- Enable MCP tools for optimal agent context discovery
**New Workflow Pattern**:
```bash
# New recommended workflow with quality gates
/workflow:brainstorm:auto-parallel "topic"
/workflow:brainstorm:synthesis
/workflow:concept-clarify # Optional but recommended
/workflow:plan "description"
# Phase 3.5 will pause for clarification Q&A if needed
/workflow:execute
```
---
## [4.4.1] - 2025-10-12
### 🔧 Implementation Approach Structure Refactoring

134
COMMAND_REFERENCE.md Normal file
View File

@@ -0,0 +1,134 @@
# Command Reference
This document provides a comprehensive reference for all commands available in the Claude Code Workflow (CCW) system.
## Unified CLI Commands (`/cli:*`)
These commands provide direct access to AI tools for quick analysis and interaction without initiating a full workflow.
| Command | Description |
|---|---|
| `/cli:analyze` | Quick codebase analysis using CLI tools (codex/gemini/qwen). |
| `/cli:chat` | Simple CLI interaction command for direct codebase analysis. |
| `/cli:cli-init`| Initialize CLI tool configurations (Gemini and Qwen) based on workspace analysis. |
| `/cli:codex-execute` | Automated task decomposition and execution with Codex using resume mechanism. |
| `/cli:discuss-plan` | Orchestrates an iterative, multi-model discussion for planning and analysis without implementation. |
| `/cli:execute` | Auto-execution of implementation tasks with YOLO permissions and intelligent context inference. |
| `/cli:mode:bug-index` | Bug analysis and fix suggestions using CLI tools. |
| `/cli:mode:code-analysis` | Deep code analysis and debugging using CLI tools with specialized template. |
| `/cli:mode:plan` | Project planning and architecture analysis using CLI tools. |
## Workflow Commands (`/workflow:*`)
These commands orchestrate complex, multi-phase development processes, from planning to execution.
### Session Management
| Command | Description |
|---|---|
| `/workflow:session:start` | Discover existing sessions or start a new workflow session with intelligent session management. |
| `/workflow:session:list` | List all workflow sessions with status. |
| `/workflow:session:resume` | Resume the most recently paused workflow session. |
| `/workflow:session:complete` | Mark the active workflow session as complete and remove active flag. |
### Core Workflow
| Command | Description |
|---|---|
| `/workflow:plan` | Orchestrate 5-phase planning workflow with quality gate, executing commands and passing context between phases. |
| `/workflow:execute` | Coordinate agents for existing workflow tasks with automatic discovery. |
| `/workflow:resume` | Intelligent workflow session resumption with automatic progress analysis. |
| `/workflow:review` | Optional specialized review (security, architecture, docs) for completed implementation. |
| `/workflow:status` | Generate on-demand views from JSON task data. |
### Brainstorming
| Command | Description |
|---|---|
| `/workflow:brainstorm:artifacts` | Generate role-specific topic-framework.md dynamically based on selected roles. |
| `/workflow:brainstorm:auto-parallel` | Parallel brainstorming automation with dynamic role selection and concurrent execution. |
| `/workflow:brainstorm:synthesis` | Generate synthesis-specification.md from topic-framework and role analyses with @ references using conceptual-planning-agent. |
| `/workflow:brainstorm:api-designer` | Generate or update api-designer/analysis.md addressing topic-framework discussion points. |
| `/workflow:brainstorm:data-architect` | Generate or update data-architect/analysis.md addressing topic-framework discussion points. |
| `/workflow:brainstorm:product-manager` | Generate or update product-manager/analysis.md addressing topic-framework discussion points. |
| `/workflow:brainstorm:product-owner` | Generate or update product-owner/analysis.md addressing topic-framework discussion points. |
| `/workflow:brainstorm:scrum-master` | Generate or update scrum-master/analysis.md addressing topic-framework discussion points. |
| `/workflow:brainstorm:subject-matter-expert` | Generate or update subject-matter-expert/analysis.md addressing topic-framework discussion points. |
| `/workflow:brainstorm:system-architect` | Generate or update system-architect/analysis.md addressing topic-framework discussion points. |
| `/workflow:brainstorm:ui-designer` | Generate or update ui-designer/analysis.md addressing topic-framework discussion points. |
| `/workflow:brainstorm:ux-expert` | Generate or update ux-expert/analysis.md addressing topic-framework discussion points. |
### Quality & Verification
| Command | Description |
|---|---|
| `/workflow:concept-clarify` | Identify underspecified areas in brainstorming artifacts through targeted clarification questions before action planning. |
| `/workflow:action-plan-verify`| Perform non-destructive cross-artifact consistency and quality analysis of IMPL_PLAN.md and task.json before execution. |
### Test-Driven Development (TDD)
| Command | Description |
|---|---|
| `/workflow:tdd-plan` | Orchestrate TDD workflow planning with Red-Green-Refactor task chains. |
| `/workflow:tdd-verify` | Verify TDD workflow compliance and generate quality report. |
### Test Generation & Execution
| Command | Description |
|---|---|
| `/workflow:test-gen` | Create independent test-fix workflow session by analyzing completed implementation. |
| `/workflow:test-fix-gen` | Create independent test-fix workflow session from existing implementation (session or prompt-based). |
| `/workflow:test-cycle-execute` | Execute test-fix workflow with dynamic task generation and iterative fix cycles. |
### UI Design Workflow
| Command | Description |
|---|---|
| `/workflow:ui-design:explore-auto` | Exploratory UI design workflow with style-centric batch generation. |
| `/workflow:ui-design:imitate-auto` | High-speed multi-page UI replication with batch screenshot capture. |
| `/workflow:ui-design:batch-generate` | Prompt-driven batch UI generation using target-style-centric parallel execution. |
| `/workflow:ui-design:capture` | Batch screenshot capture for UI design workflows using MCP or local fallback. |
| `/workflow:ui-design:explore-layers` | Interactive deep UI capture with depth-controlled layer exploration. |
| `/workflow:ui-design:style-extract` | Extract design style from reference images or text prompts using Claude's analysis. |
| `/workflow:ui-design:layout-extract` | Extract structural layout information from reference images, URLs, or text prompts. |
| `/workflow:ui-design:generate` | Assemble UI prototypes by combining layout templates with design tokens (pure assembler). |
| `/workflow:ui-design:update` | Update brainstorming artifacts with finalized design system references. |
| `/workflow:ui-design:animation-extract` | Extract animation and transition patterns from URLs, CSS, or interactive questioning. |
### Internal Tools
These commands are primarily used internally by other workflow commands but can be used manually.
| Command | Description |
|---|---|
| `/workflow:tools:concept-enhanced` | Enhanced intelligent analysis with parallel CLI execution and design blueprint generation. |
| `/workflow:tools:context-gather` | Intelligently collect project context using general-purpose agent based on task description and package into standardized JSON. |
| `/workflow:tools:task-generate` | Generate task JSON files and IMPL_PLAN.md from analysis results with artifacts integration. |
| `/workflow:tools:task-generate-agent` | Autonomous task generation using action-planning-agent with discovery and output phases. |
| `/workflow:tools:task-generate-tdd` | Generate TDD task chains with Red-Green-Refactor dependencies. |
| `/workflow:tools:tdd-coverage-analysis` | Analyze test coverage and TDD cycle execution. |
| `/workflow:tools:test-concept-enhanced` | Analyze test requirements and generate test generation strategy using Gemini. |
| `/workflow:tools:test-context-gather` | Collect test coverage context and identify files requiring test generation. |
| `/workflow:tools:test-task-generate` | Generate test-fix task JSON with iterative test-fix-retest cycle specification. |
## Task Commands (`/task:*`)
Commands for managing individual tasks within a workflow session.
| Command | Description |
|---|---|
| `/task:create` | Create implementation tasks with automatic context awareness. |
| `/task:breakdown` | Intelligent task decomposition with context-aware subtask generation. |
| `/task:execute` | Execute tasks with appropriate agents and context-aware orchestration. |
| `/task:replan` | Replan individual tasks with detailed user input and change tracking. |
## Memory and Versioning Commands
| Command | Description |
|---|---|
| `/memory:update-full` | Complete project-wide CLAUDE.md documentation update. |
| `/memory:load` | Quickly load key project context into memory based on a task description. |
| `/memory:update-related` | Context-aware CLAUDE.md documentation updates based on recent changes. |
| `/version` | Display version information and check for updates. |
| `/enhance-prompt` | Context-aware prompt enhancement using session memory and codebase analysis. |

497
COMMAND_SPEC.md Normal file
View File

@@ -0,0 +1,497 @@
# Claude Code Workflow (CCW) - Command Specification
**Version**: 4.6.0
**Generated**: 2025年10月18日星期六
## 1. Introduction
This document provides a detailed technical specification for every command available in the Claude Code Workflow (CCW) system. It is intended for advanced users and developers who wish to understand the inner workings of CCW, customize commands, or build new workflows.
For a user-friendly overview, please see [COMMAND_REFERENCE.md](COMMAND_REFERENCE.md).
## 2. Command Categories
Commands are organized into the following categories:
- **Workflow Commands**: High-level orchestration for multi-phase development processes.
- **CLI Commands**: Direct access to AI tools for analysis and interaction.
- **Task Commands**: Management of individual work units within a workflow.
- **Memory Commands**: Context and documentation management.
- **UI Design Commands**: Specialized workflow for UI/UX design and prototyping.
- **Testing Commands**: TDD and test generation workflows.
---
## 3. Workflow Commands
High-level orchestrators for complex, multi-phase development processes.
### **/workflow:plan**
- **Syntax**: `/workflow:plan [--agent] [--cli-execute] "text description"|file.md`
- **Parameters**:
- `--agent` (Optional, Flag): Use the `task-generate-agent` for autonomous task generation.
- `--cli-execute` (Optional, Flag): Generate tasks with commands ready for CLI execution (e.g., using Codex).
- `description|file.md` (Required, String): A description of the planning goal or a path to a markdown file containing the requirements.
- **Responsibilities**: Orchestrates a 5-phase planning workflow that includes session start, context gathering, intelligent analysis, concept clarification (quality gate), and task generation.
- **Agent Calls**: Delegates analysis to `@cli-execution-agent` and task generation to `@action-planning-agent`.
- **Skill Invocation**: Does not directly invoke a skill, but the underlying agents may.
- **Integration**: This is a primary entry point for starting a development workflow. It is followed by `/workflow:execute`.
- **Example**:
```bash
/workflow:plan "Create a simple Express API that returns Hello World"
```
### **/workflow:execute**
- **Syntax**: `/workflow:execute [--resume-session="session-id"]`
- **Parameters**:
- `--resume-session` (Optional, String): The ID of a paused session to resume.
- **Responsibilities**: Discovers and executes all pending tasks in the active (or specified) workflow session. It handles dependency resolution and orchestrates agents to perform the work.
- **Agent Calls**: Dynamically calls the agent specified in each task's `meta.agent` field (e.g., `@code-developer`, `@test-fix-agent`).
- **Integration**: The primary command for implementing a plan generated by `/workflow:plan`.
- **Example**:
```bash
# Execute tasks in the currently active session
/workflow:execute
```
### **/workflow:resume**
- **Syntax**: `/workflow:resume "session-id"`
- **Parameters**:
- `session-id` (Required, String): The ID of the workflow session to resume.
- **Responsibilities**: A two-phase orchestrator that first analyzes the status of a paused session and then resumes it by calling `/workflow:execute --resume-session`.
- **Agent Calls**: None directly. It orchestrates `/workflow:status` and `/workflow:execute`.
- **Integration**: Used to continue a previously paused or interrupted workflow.
- **Example**:
```bash
/workflow:resume "WFS-user-login-feature"
```
### **/workflow:review**
- **Syntax**: `/workflow:review [--type=security|architecture|action-items|quality] [session-id]`
- **Parameters**:
- `--type` (Optional, String): The type of review to perform. Defaults to `quality`.
- `session-id` (Optional, String): The session to review. Defaults to the active session.
- **Responsibilities**: Performs a specialized, post-implementation review. This is optional, as the default quality gate is passing tests.
- **Agent Calls**: Uses `gemini-wrapper` or `qwen-wrapper` for analysis based on the review type.
- **Integration**: Used after `/workflow:execute` to perform audits before deployment.
- **Example**:
```bash
/workflow:review --type=security
```
### **/workflow:status**
- **Syntax**: `/workflow:status [task-id]`
- **Parameters**:
- `task-id` (Optional, String): If provided, shows details for a specific task.
- **Responsibilities**: Generates and displays an on-demand view of the current workflow's status by reading task JSON data. Does not modify any state.
- **Agent Calls**: None.
- **Integration**: A read-only command used to check progress at any point.
- **Example**:
```bash
/workflow:status
```
---
## 4. Session Management Commands
Commands for creating, listing, and managing workflow sessions.
### **/workflow:session:start**
- **Syntax**: `/workflow:session:start [--auto|--new] [description]`
- **Parameters**:
- `--auto` (Flag): Intelligently reuses an active session if relevant, otherwise creates a new one.
- `--new` (Flag): Forces the creation of a new session.
- `description` (Optional, String): A description for the new session's goal.
- **Responsibilities**: Manages session creation and activation. It can discover existing sessions, create new ones, and set the active session marker.
- **Agent Calls**: None.
- **Example**:
```bash
/workflow:session:start "My New Feature"
```
### **/workflow:session:list**
- **Syntax**: `/workflow:session:list`
- **Parameters**: None.
- **Responsibilities**: Lists all workflow sessions found in the `.workflow/` directory, showing their status (active, paused, completed).
- **Agent Calls**: None.
- **Example**:
```bash
/workflow:session:list
```
### **/workflow:session:resume**
- **Syntax**: `/workflow:session:resume`
- **Parameters**: None.
- **Responsibilities**: Finds the most recently paused session and marks it as active.
- **Agent Calls**: None.
- **Example**:
```bash
/workflow:session:resume
```
### **/workflow:session:complete**
- **Syntax**: `/workflow:session:complete [--detailed]`
- **Parameters**:
- `--detailed` (Flag): Shows a more detailed completion summary.
- **Responsibilities**: Marks the currently active session as "completed", records timestamps, and removes the `.active-*` marker file.
- **Agent Calls**: None.
- **Example**:
```bash
/workflow:session:complete
```
---
## 5. CLI Commands
Direct access to AI tools for analysis and code interaction without a full workflow structure.
### **/cli:analyze**
- **Syntax**: `/cli:analyze [--agent] [--tool codex|gemini|qwen] [--enhance] <analysis target>`
- **Responsibilities**: Performs read-only codebase analysis. Can operate in standard mode (direct tool call) or agent mode (`@cli-execution-agent`) for automated context discovery.
- **Agent Calls**: `@cli-execution-agent` (if `--agent` is used).
- **Example**:
```bash
/cli:analyze "authentication patterns"
```
### **/cli:chat**
- **Syntax**: `/cli:chat [--agent] [--tool codex|gemini|qwen] [--enhance] <inquiry>`
- **Responsibilities**: Provides a direct Q&A interface with AI tools for codebase questions. Read-only.
- **Agent Calls**: `@cli-execution-agent` (if `--agent` is used).
- **Example**:
```bash
/cli:chat "how does the caching layer work?"
```
### **/cli:cli-init**
- **Syntax**: `/cli:cli-init [--tool gemini|qwen|all] [--output path] [--preview]`
- **Responsibilities**: Initializes configuration for CLI tools (`.gemini/`, `.qwen/`) by analyzing the workspace and creating optimized `.geminiignore` and `.qwenignore` files.
- **Agent Calls**: None.
- **Example**:
```bash
/cli:cli-init
```
### **/cli:codex-execute**
- **Syntax**: `/cli:codex-execute [--verify-git] <description|task-id>`
- **Responsibilities**: Orchestrates automated task decomposition and sequential execution using Codex. It uses the `resume --last` mechanism for context continuity between subtasks.
- **Agent Calls**: None directly, but orchestrates `codex` CLI tool.
- **Example**:
```bash
/cli:codex-execute "implement user authentication system"
```
### **/cli:discuss-plan**
- **Syntax**: `/cli:discuss-plan [--topic '...'] [--task-id '...'] [--rounds N] <input>`
- **Responsibilities**: Orchestrates an iterative, multi-model (Gemini, Codex, Claude) discussion to perform deep analysis and planning without modifying code.
- **Agent Calls**: None directly, but orchestrates `gemini-wrapper` and `codex` CLI tools.
- **Example**:
```bash
/cli:discuss-plan --topic "Design a new caching layer"
```
### **/cli:execute**
- **Syntax**: `/cli:execute [--agent] [--tool codex|gemini|qwen] [--enhance] <description|task-id>`
- **Responsibilities**: Executes implementation tasks with auto-approval (`YOLO` mode). **MODIFIES CODE**.
- **Agent Calls**: `@cli-execution-agent` (if `--agent` is used).
- **Example**:
```bash
/cli:execute "implement JWT authentication with middleware"
```
### **/cli:mode:bug-index**
- **Syntax**: `/cli:mode:bug-index [--agent] [--tool ...] [--enhance] [--cd path] <bug description>`
- **Responsibilities**: Performs systematic bug analysis using the `bug-fix.md` template. Read-only.
- **Agent Calls**: `@cli-execution-agent` (if `--agent` is used).
- **Example**:
```bash
/cli:mode:bug-index "null pointer error in login flow"
```
### **/cli:mode:code-analysis**
- **Syntax**: `/cli:mode:code-analysis [--agent] [--tool ...] [--enhance] [--cd path] <analysis target>`
- **Responsibilities**: Performs deep code analysis and execution path tracing using the `code-analysis.md` template. Read-only.
- **Agent Calls**: `@cli-execution-agent` (if `--agent` is used).
- **Example**:
```bash
/cli:mode:code-analysis "trace authentication execution flow"
```
### **/cli:mode:plan**
- **Syntax**: `/cli:mode:plan [--agent] [--tool ...] [--enhance] [--cd path] <topic>`
- **Responsibilities**: Performs comprehensive planning and architecture analysis using the `plan.md` template. Read-only.
- **Agent Calls**: `@cli-execution-agent` (if `--agent` is used).
- **Example**:
```bash
/cli:mode:plan "design user dashboard architecture"
```
---
## 6. Task Commands
Commands for managing individual tasks within a workflow session.
### **/task:create**
- **Syntax**: `/task:create "task title"`
- **Parameters**:
- `title` (Required, String): The title of the task.
- **Responsibilities**: Creates a new task JSON file within the active session, auto-generating an ID and inheriting context.
- **Agent Calls**: Suggests an agent (e.g., `@code-developer`) based on task type but does not call it.
- **Example**:
```bash
/task:create "Build authentication module"
```
### **/task:breakdown**
- **Syntax**: `/task:breakdown <task-id>`
- **Parameters**:
- `task-id` (Required, String): The ID of the parent task to break down.
- **Responsibilities**: Manually decomposes a complex parent task into smaller, executable subtasks. Enforces a 10-task limit and file cohesion.
- **Agent Calls**: None.
- **Example**:
```bash
/task:breakdown IMPL-1
```
### **/task:execute**
- **Syntax**: `/task:execute <task-id>`
- **Parameters**:
- `task-id` (Required, String): The ID of the task to execute.
- **Responsibilities**: Executes a single task or a parent task (by executing its subtasks) using the assigned agent.
- **Agent Calls**: Calls the agent specified in the task's `meta.agent` field.
- **Example**:
```bash
/task:execute IMPL-1.1
```
### **/task:replan**
- **Syntax**: `/task:replan <task-id> ["text"|file.md] | --batch [report.md]`
- **Parameters**:
- `task-id` (String): The ID of the task to replan.
- `input` (String): Text or a file path with the new specifications.
- `--batch` (Flag): Enables batch processing from a verification report.
- **Responsibilities**: Updates a task's specification, creating a versioned backup of the previous state.
- **Agent Calls**: None.
- **Example**:
```bash
/task:replan IMPL-1 "Add OAuth2 authentication support"
```
---
## 7. Memory and Versioning Commands
### **/memory:update-full**
- **Syntax**: `/memory:update-full [--tool gemini|qwen|codex] [--path <directory>]`
- **Responsibilities**: Orchestrates a complete, project-wide update of all `CLAUDE.md` documentation files.
- **Agent Calls**: None directly, but orchestrates CLI tools (`gemini-wrapper`, etc.).
- **Example**:
```bash
/memory:update-full
```
### **/memory:load**
- **Syntax**: `/memory:load [--tool gemini|qwen] "task context description"`
- **Parameters**:
- `"task context description"` (Required, String): Task description to guide context extraction.
- `--tool <gemini|qwen>` (Optional): Specify CLI tool for agent to use (default: gemini).
- **Responsibilities**: Delegates to `@general-purpose` agent to analyze the project and return a structured "Core Content Pack". This pack is loaded into the main thread's memory, providing essential context for subsequent operations.
- **Agent-Driven Execution**: Fully delegates to general-purpose agent which autonomously:
1. Analyzes project structure and documentation
2. Extracts keywords from task description
3. Discovers relevant files using MCP code-index or search tools
4. Executes Gemini/Qwen CLI for deep analysis
5. Generates structured JSON content package
- **Core Philosophy**: Read-only analysis, token-efficient (CLI analysis in agent), structured output
- **Agent Calls**: `@general-purpose` agent.
- **Integration**: Provides quick, task-relevant context for subsequent agent operations while minimizing token consumption.
- **Example**:
```bash
/memory:load "在当前前端基础上开发用户认证功能"
/memory:load --tool qwen "重构支付模块API"
```
### **/memory:update-related**
- **Syntax**: `/memory:update-related [--tool gemini|qwen|codex]`
- **Responsibilities**: Performs a context-aware update of `CLAUDE.md` files for modules affected by recent git changes.
- **Agent Calls**: None directly, but orchestrates CLI tools.
- **Example**:
```bash
/memory:update-related
```
### **/version**
- **Syntax**: `/version`
- **Parameters**: None.
- **Responsibilities**: Displays local and global installation versions and checks for updates from GitHub.
- **Agent Calls**: None.
- **Example**:
```bash
/version
```
### **/enhance-prompt**
- **Syntax**: `/enhance-prompt <user input>`
- **Responsibilities**: A system-level skill that enhances a user's prompt by adding context from session memory and codebase analysis. It is typically triggered automatically by other commands that include the `--enhance` flag.
- **Skill Invocation**: This is a core skill, invoked when `--enhance` is used.
- **Agent Calls**: None.
- **Example (as part of another command)**:
```bash
/cli:execute --enhance "fix the login button"
```
---
## 8. UI Design Commands
Specialized workflow for UI/UX design, from style extraction to prototype generation.
### **/workflow:ui-design:explore-auto**
- **Syntax**: `/workflow:ui-design:explore-auto [--prompt "..."] [--images "..."] [--targets "..."] ...`
- **Responsibilities**: Fully autonomous, multi-phase workflow that orchestrates style extraction, layout extraction, and prototype generation.
- **Agent Calls**: `@ui-design-agent`.
- **Example**:
```bash
/workflow:ui-design:explore-auto --prompt "Modern blog: home, article, author"
```
### **/workflow:ui-design:imitate-auto**
- **Syntax**: `/workflow:ui-design:imitate-auto --url-map "<map>" [--capture-mode <batch|deep>] ...`
- **Responsibilities**: High-speed, multi-page UI replication workflow that captures screenshots and orchestrates the full design pipeline.
- **Agent Calls**: `@ui-design-agent`.
- **Example**:
```bash
/workflow:ui-design:imitate-auto --url-map "home:https://linear.app, features:https://linear.app/features"
```
### **/workflow:ui-design:batch-generate**
- **Syntax**: `/workflow:ui-design:batch-generate [--prompt "..."] [--targets "..."] ...`
- **Responsibilities**: Prompt-driven batch UI generation with parallel execution for multiple targets and styles.
- **Agent Calls**: `@ui-design-agent`.
- **Example**:
```bash
/workflow:ui-design:batch-generate --prompt "Dashboard with metric cards and charts"
```
### **/workflow:ui-design:capture**
- **Syntax**: `/workflow:ui-design:capture --url-map "target:url,..." ...`
- **Responsibilities**: Batch screenshot capture tool with MCP-first strategy and local fallbacks.
- **Agent Calls**: None directly, uses MCP tools.
- **Example**:
```bash
/workflow:ui-design:capture --url-map "home:https://linear.app"
```
### **/workflow:ui-design:explore-layers**
- **Syntax**: `/workflow:ui-design:explore-layers --url <url> --depth <1-5> ...`
- **Responsibilities**: Performs a deep, interactive UI capture of a single URL, exploring layers from the full page down to the Shadow DOM.
- **Agent Calls**: None directly, uses MCP tools.
- **Example**:
```bash
/workflow:ui-design:explore-layers --url https://linear.app --depth 3
```
### **/workflow:ui-design:style-extract**
- **Syntax**: `/workflow:ui-design:style-extract [--images "..."] [--prompt "..."] ...`
- **Responsibilities**: Extracts design styles from images or text prompts and generates production-ready design systems (`design-tokens.json`, `style-guide.md`).
- **Agent Calls**: `@ui-design-agent`.
- **Example**:
```bash
/workflow:ui-design:style-extract --images "design-refs/*.png" --variants 3
```
### **/workflow:ui-design:layout-extract**
- **Syntax**: `/workflow:ui-design:layout-extract [--images "..."] [--urls "..."] ...`
- **Responsibilities**: Extracts structural layout information (HTML structure, CSS layout rules) separately from visual style.
- **Agent Calls**: `@ui-design-agent`.
- **Example**:
```bash
/workflow:ui-design:layout-extract --urls "home:https://linear.app" --mode imitate
```
### **/workflow:ui-design:generate**
- **Syntax**: `/workflow:ui-design:generate [--base-path <path>] ...`
- **Responsibilities**: A pure assembler that combines pre-extracted layout templates with design tokens to generate final UI prototypes.
- **Agent Calls**: `@ui-design-agent`.
- **Example**:
```bash
/workflow:ui-design:generate --session WFS-design-run
```
### **/workflow:ui-design:update**
- **Syntax**: `/workflow:ui-design:update --session <session_id> ...`
- **Responsibilities**: Synchronizes the finalized design system references into the core brainstorming artifacts (`synthesis-specification.md`) to make them available for the planning phase.
- **Agent Calls**: None.
- **Example**:
```bash
/workflow:ui-design:update --session WFS-my-app
```
### **/workflow:ui-design:animation-extract**
- **Syntax**: `/workflow:ui-design:animation-extract [--urls "<list>"] [--mode <auto|interactive>] ...`
- **Responsibilities**: Extracts animation and transition patterns from URLs (auto mode) or through interactive questioning to generate animation tokens.
- **Agent Calls**: `@ui-design-agent` (for interactive mode).
- **Example**:
```bash
/workflow:ui-design:animation-extract --urls "home:https://linear.app" --mode auto
```
---
## 9. Testing Commands
Workflows for Test-Driven Development (TDD) and post-implementation test generation.
### **/workflow:tdd-plan**
- **Syntax**: `/workflow:tdd-plan [--agent] "feature description"|file.md`
- **Responsibilities**: Orchestrates a 7-phase TDD planning workflow, creating tasks with Red-Green-Refactor cycles.
- **Agent Calls**: Orchestrates sub-commands which may call agents.
- **Example**:
```bash
/workflow:tdd-plan "Implement a secure login endpoint"
```
### **/workflow:tdd-verify**
- **Syntax**: `/workflow:tdd-verify [session-id]`
- **Responsibilities**: Verifies TDD workflow compliance by analyzing task chains, test coverage, and cycle execution.
- **Agent Calls**: None directly, orchestrates `gemini-wrapper`.
- **Example**:
```bash
/workflow:tdd-verify WFS-login-tdd
```
### **/workflow:test-gen**
- **Syntax**: `/workflow:test-gen [--use-codex] [--cli-execute] <source-session-id>`
- **Responsibilities**: Creates an independent test-fix workflow by analyzing a completed implementation session.
- **Agent Calls**: Orchestrates sub-commands that call `@code-developer` and `@test-fix-agent`.
- **Example**:
```bash
/workflow:test-gen WFS-user-auth-v2
```
### **/workflow:test-fix-gen**
- **Syntax**: `/workflow:test-fix-gen [--use-codex] [--cli-execute] (<source-session-id> | "description" | /path/to/file.md)`
- **Responsibilities**: Creates an independent test-fix workflow from either a completed session or a feature description.
- **Agent Calls**: Orchestrates sub-commands that call `@code-developer` and `@test-fix-agent`.
- **Example**:
```bash
/workflow:test-fix-gen "Test the user authentication API endpoints"
```
### **/workflow:test-cycle-execute**
- **Syntax**: `/workflow:test-cycle-execute [--resume-session="session-id"] [--max-iterations=N]`
- **Responsibilities**: Executes a dynamic test-fix workflow, creating intermediate fix tasks based on test results and analysis.
- **Agent Calls**: `@code-developer`, `@test-fix-agent`.
- **Example**:
```bash
/workflow:test-cycle-execute --resume-session="WFS-test-user-auth"
```

View File

@@ -226,6 +226,31 @@ Suitable for large-scale refactoring, architectural changes, or first-time CCW u
- Weekly routine maintenance
- When AI output drift is detected
#### Quick Context Loading for Specific Tasks
When you need immediate, task-specific context without updating documentation:
```bash
# Load context for a specific task into memory
/memory:load "在当前前端基础上开发用户认证功能"
# Use alternative CLI tool for analysis
/memory:load --tool qwen "重构支付模块API"
```
**How It Works**:
- Delegates to an AI agent for autonomous project analysis
- Discovers relevant files and extracts task-specific keywords
- Uses CLI tools (Gemini/Qwen) for deep analysis to save tokens
- Returns a structured "Core Content Pack" loaded into memory
- Provides context for subsequent agent operations
**When to Use**:
- Before starting a new feature or task
- When you need quick context without full documentation rebuild
- For task-specific architectural or pattern discovery
- As preparation for agent-based development workflows
#### Incremental Related Module Updates
Suitable for daily development, updating only modules affected by changes:
@@ -273,6 +298,58 @@ This command will:
---
## Advanced Usage: Agent Skills
Agent Skills are modular, reusable capabilities that extend the AI's functionality. They are stored in the `.claude/skills/` directory and are invoked through specific trigger mechanisms.
### How Skills Work
- **Model-Invoked**: Unlike slash commands, you don't call Skills directly. The AI decides when to use a Skill based on its understanding of your goal.
- **Contextual**: Skills provide specific instructions, scripts, and templates to the AI for specialized tasks.
- **Trigger Mechanisms**:
- **Conversational Trigger**: Use `-e` or `--enhance` flag in **natural conversation** to trigger the `prompt-enhancer` skill
- **CLI Command Enhancement**: Use `--enhance` flag in **CLI commands** for prompt refinement (this is a CLI feature, not a skill trigger)
### Examples
**Conversational Trigger** (activates prompt-enhancer skill):
```
User: "Analyze authentication module -e"
→ AI uses prompt-enhancer skill to expand the request
```
**CLI Command Enhancement** (built-in CLI feature):
```bash
# The --enhance flag here is a CLI parameter, not a skill trigger
/cli:analyze --enhance "check for security issues"
```
**Important Note**: The `-e` flag works in natural conversation, but `--enhance` in CLI commands is a separate enhancement mechanism, not the skill system.
---
## Advanced Usage: UI Design Workflow
CCW includes a powerful, multi-phase workflow for UI design and prototyping, capable of generating complete design systems and interactive prototypes from simple descriptions or reference images.
### Key Commands
- `/workflow:ui-design:explore-auto`: An exploratory workflow that generates multiple, distinct design variations based on a prompt.
- `/workflow:ui-design:imitate-auto`: A replication workflow that creates high-fidelity prototypes from reference URLs.
### Example: Generating a UI from a Prompt
You can generate multiple design options for a web page with a single command:
```bash
# This command will generate 3 different style and layout variations for a login page.
/workflow:ui-design:explore-auto --prompt "A modern, clean login page for a SaaS application" --targets "login" --style-variants 3 --layout-variants 3
```
After the workflow completes, it provides a `compare.html` file, allowing you to visually review and select the best design combination.
---
## ❓ Troubleshooting
- **Problem: Prompt shows "No active session found"**

View File

@@ -273,6 +273,58 @@ CCW 使用分层的 CLAUDE.md 文档系统维护项目上下文。定期更新
---
## 🎯 进阶用法:智能体技能 (Agent Skills)
智能体技能是可扩展 AI 功能的模块化、可复用能力。它们存储在 `.claude/skills/` 目录中,通过特定的触发机制调用。
### 技能工作原理
- **模型调用**:与斜杠命令不同,您不直接调用技能。AI 会根据对您目标的理解来决定何时使用技能。
- **上下文化**:技能为 AI 提供特定的指令、脚本和模板,用于专门化任务。
- **触发机制**
- **对话触发**:在**自然对话**中使用 `-e``--enhance` 标识符来触发 `prompt-enhancer` 技能
- **CLI 命令增强**:在 **CLI 命令**中使用 `--enhance` 标识符进行提示词优化(这是 CLI 功能,不是技能触发)
### 使用示例
**对话触发** (激活 prompt-enhancer 技能):
```
用户: "分析认证模块 -e"
→ AI 使用 prompt-enhancer 技能扩展请求
```
**CLI 命令增强** (CLI 内置功能):
```bash
# 这里的 --enhance 标识符是 CLI 参数,不是技能触发器
/cli:analyze --enhance "检查安全问题"
```
**重要说明**`-e` 标识符仅在自然对话中有效,而 CLI 命令中的 `--enhance` 是独立的增强机制,与技能系统无关。
---
## 🎨 进阶用法UI 设计工作流
CCW 包含强大的多阶段 UI 设计和原型制作工作流,能够从简单的描述或参考图像生成完整的设计系统和交互式原型。
### 核心命令
- `/workflow:ui-design:explore-auto`: 探索性工作流,基于提示词生成多种不同的设计变体。
- `/workflow:ui-design:imitate-auto`: 复制工作流,从参考 URL 创建高保真原型。
### 示例:从提示词生成 UI
您可以使用单个命令为网页生成多种设计选项:
```bash
# 此命令将为登录页面生成 3 种不同的样式和布局变体
/workflow:ui-design:explore-auto --prompt "一个现代简洁的 SaaS 应用登录页面" --targets "login" --style-variants 3 --layout-variants 3
```
工作流完成后,会提供一个 `compare.html` 文件,让您可以可视化地查看和选择最佳设计组合。
---
## ❓ 常见问题排查 (Troubleshooting)
- **问题:提示 "No active session found" (未找到活动会话)**

View File

@@ -4,230 +4,201 @@
Interactive installation guide for Claude Code with Agent workflow coordination and distributed memory system.
## ⚡ One-Line Remote Installation (Recommended)
## ⚡ Quick One-Line Installation
### All Platforms - Remote PowerShell Installation
**Windows (PowerShell):**
```powershell
# Interactive remote installation from feature branch (latest)
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-remote.ps1)
# Global installation with unified file output system
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-remote.ps1) -Global
# Force overwrite (non-interactive) - includes all new workflow file generation features
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-remote.ps1) -Force -NonInteractive
# One-click backup all existing files (no confirmations needed)
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-remote.ps1) -BackupAll
Invoke-Expression (Invoke-WebRequest -Uri "https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1" -UseBasicParsing).Content
```
**What the remote installer does:**
- ✅ Checks system requirements (PowerShell version, network connectivity)
- ✅ Downloads latest version from GitHub (main branch)
- ✅ Includes all new unified file output system features
- ✅ Automatically extracts and runs local installer
- ✅ Security confirmation and user prompts
- ✅ Automatic cleanup of temporary files
- ✅ Sets up .workflow/ directory structure for session management
**Linux/macOS (Bash/Zsh):**
```bash
bash <(curl -fsSL https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.sh)
```
**Note**: Interface is in English for cross-platform compatibility
### Interactive Version Selection
## 📂 Local Installation
After running the installation command, you'll see an interactive menu with real-time version information:
### All Platforms (PowerShell)
```
Detecting latest release and commits...
Latest stable: v4.6.0 (2025-10-19 04:27 UTC)
Latest commit: cdea58f (2025-10-19 08:15 UTC)
====================================================
Version Selection Menu
====================================================
1) Latest Stable Release (Recommended)
|-- Version: v4.6.0
|-- Released: 2025-10-19 04:27 UTC
\-- Production-ready
2) Latest Development Version
|-- Branch: main
|-- Commit: cdea58f
|-- Updated: 2025-10-19 08:15 UTC
|-- Cutting-edge features
\-- May contain experimental changes
3) Specific Release Version
|-- Install a specific tagged release
\-- Recent: v4.6.0, v4.5.0, v4.4.0
====================================================
Select version to install (1-3, default: 1):
```
**Version Options:**
- **Option 1 (Recommended)**: Latest stable release with verified production quality
- **Option 2**: Latest development version from main branch with newest features
- **Option 3**: Specific version tag for controlled deployments
> 💡 **Pro Tip**: The installer automatically detects and displays the latest version numbers and release dates from GitHub. Just press Enter to select the recommended stable release.
## 📂 Local Installation (Install-Claude.ps1)
For local installation without network access, use the bundled PowerShell installer:
**Installation Modes:**
```powershell
# Clone the repository with latest features
git clone -b main https://github.com/catlog22/Claude-CCW.git
cd Dmsflow
# Windows PowerShell 5.1+ or PowerShell Core (Global installation only)
# Interactive mode with prompts (recommended)
.\Install-Claude.ps1
# Linux/macOS PowerShell Core (Global installation only)
pwsh ./Install-Claude.ps1
# Quick install with automatic backup
.\Install-Claude.ps1 -Force -BackupAll
# Non-interactive install
.\Install-Claude.ps1 -NonInteractive -Force
```
**Note**: The feature branch contains all the latest unified file output system enhancements and should be used for new installations.
**Installation Options:**
## Installation Options
| Mode | Description | Installs To |
|------|-------------|-------------|
| **Global** | System-wide installation (default) | `~/.claude/`, `~/.codex/`, `~/.gemini/` |
| **Path** | Custom directory + global hybrid | Local: `agents/`, `commands/`<br>Global: `workflows/`, `scripts/` |
### Remote Installation Parameters
**Backup Behavior:**
- **Default**: Automatic backup enabled (`-BackupAll`)
- **Disable**: Use `-NoBackup` flag (⚠️ overwrites without backup)
- **Backup location**: `claude-backup-{timestamp}/` in installation directory
All parameters can be passed to the remote installer:
**⚠️ Important Warnings:**
- `-Force -BackupAll`: Silent file overwrite (with backup)
- `-NoBackup -Force`: Permanent file overwrite (no recovery)
- Global mode modifies user profile directories
```powershell
# Global installation
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-remote.ps1) -Global
# Install to specific directory
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-remote.ps1) -Directory "C:\MyProject"
# Force overwrite without prompts
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-remote.ps1) -Force -NonInteractive
# Install from specific branch
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-remote.ps1) -Branch "dev"
# Skip backups (overwrite without backup - not recommended)
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-remote.ps1) -NoBackup
# Explicit automatic backup all existing files (default behavior since v1.1.0)
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-remote.ps1) -BackupAll
### ✅ Verify Installation
After installation, open **Claude Code** and check if the workflow commands are available by running:
```bash
/workflow:session:list
```
### Local Installation Options
This command should be recognized in Claude Code's interface. If you see the workflow slash commands (e.g., `/workflow:*`, `/cli:*`), the installation was successful.
### Global Installation (Default and Only Mode)
Install to user home directory (`~/.claude`):
```powershell
# All platforms - Global installation (default)
.\Install-Claude.ps1
# With automatic backup (default since v1.1.0)
.\Install-Claude.ps1 -BackupAll
# Disable automatic backup (not recommended)
.\Install-Claude.ps1 -NoBackup
# Non-interactive mode for automation
.\Install-Claude.ps1 -Force -NonInteractive
```
**Global installation structure:**
```
~/.claude/
├── agents/
├── commands/
├── output-styles/
├── settings.local.json
└── CLAUDE.md
```
**Note**: Starting from v1.2.0, only global installation is supported. Local directory and custom path installations have been removed to simplify the installation process and ensure consistent behavior across all platforms.
## Advanced Options
### 🛡️ Enhanced Backup Features (v1.1.0+)
The installer now includes **automatic backup as the default behavior** to protect your existing files:
**Backup Modes:**
- **Automatic Backup** (default since v1.1.0): Automatically backs up all existing files without prompts
- **Explicit Backup** (`-BackupAll`): Same as default behavior, explicitly specified for compatibility
- **No Backup** (`-NoBackup`): Disable backup functionality (not recommended)
**Backup Organization:**
- Creates timestamped backup folders (e.g., `claude-backup-20240117-143022`)
- Preserves directory structure within backup folders
- Maintains file relationships and paths
### Force Installation
Overwrite existing files:
```powershell
.\Install-Claude.ps1 -Force
```
### One-Click Backup
Automatically backup all existing files without confirmations:
```powershell
.\Install-Claude.ps1 -BackupAll
```
### Skip Backups
Don't create backup files:
```powershell
.\Install-Claude.ps1 -NoBackup
```
### Uninstall
Remove installation:
```powershell
.\Install-Claude.ps1 -Uninstall -Force
```
> **📝 Installation Notes:**
> - The installer will automatically install/update `.codex/` and `.gemini/` directories
> - **Global mode**: Installs to `~/.codex` and `~/.gemini`
> - **Path mode**: Installs to your specified directory (e.g., `project/.codex`, `project/.gemini`)
> - **Backup**: Existing files are backed up by default to `claude-backup-{timestamp}/`
> - **Safety**: Use interactive mode for first-time installation to review changes
## Platform Requirements
### PowerShell (Recommended)
- **Windows**: PowerShell 5.1+ or PowerShell Core 6+
- **Linux**: PowerShell Core 6+
- **macOS**: PowerShell Core 6+
- **Linux/macOS**: Bash/Zsh (for installer) or PowerShell Core 6+ (for manual Install-Claude.ps1)
Install PowerShell Core:
**Install PowerShell Core (if needed):**
- **Ubuntu/Debian**: `sudo apt install powershell`
- **CentOS/RHEL**: `sudo dnf install powershell`
- **macOS**: `brew install powershell`
- **Or download**: https://github.com/PowerShell/PowerShell
- **Download**: https://github.com/PowerShell/PowerShell
## ⚙️ Configuration
## Complete Installation Examples
### Tool Control System
### ⚡ Super Quick (One-Liner)
```powershell
# Complete installation in one command
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-remote.ps1) -Global
CCW uses a **configuration-based tool control system** that makes external CLI tools **optional** rather than required. This allows you to:
# Done! 🎉
# Start using Claude Code with Agent workflows!
-**Start with Claude-only mode** - Work immediately without installing additional tools
-**Progressive enhancement** - Add external tools selectively as needed
-**Graceful degradation** - Automatic fallback when tools are unavailable
-**Flexible configuration** - Control tool availability per project
**Configuration File**: `~/.claude/workflows/tool-control.yaml`
```yaml
tools:
gemini:
enabled: false # Optional: AI analysis & documentation
qwen:
enabled: true # Optional: AI architecture & code generation
codex:
enabled: true # Optional: AI development & implementation
```
### 📂 Manual Installation Method
```powershell
# Manual installation steps:
# 1. Install PowerShell Core (if needed)
# Windows: Download from GitHub
# Linux: sudo apt install powershell
# macOS: brew install powershell
**Behavior**:
- **When disabled**: CCW automatically falls back to other enabled tools or Claude's native capabilities
- **When enabled**: Uses specialized tools for their specific strengths
- **Default**: All tools disabled - Claude-only mode works out of the box
# 2. Download Claude Code Workflow System
git clone https://github.com/catlog22/Claude-CCW.git
cd Dmsflow
### Optional CLI Tools *(Enhanced Capabilities)*
# 3. Install globally (interactive)
.\Install-Claude.ps1 -Global
While CCW works with Claude alone, installing these tools provides enhanced analysis and extended context:
# 4. Start using Claude Code with Agent workflows!
# Use /workflow commands and memory system for development
```
#### System Utilities
## Verification
| Tool | Purpose | Installation |
|------|---------|--------------|
| **ripgrep (rg)** | Fast code search | `brew install ripgrep` (macOS), `apt install ripgrep` (Ubuntu), `winget install ripgrep` (Windows) |
| **jq** | JSON processing | `brew install jq` (macOS), `apt install jq` (Ubuntu), `winget install jq` (Windows) |
After installation, verify:
#### External AI Tools
1. **Check installation:**
```bash
# Global
ls ~/.claude
# Local
ls ./.claude
```
Configure these tools in `~/.claude/workflows/tool-control.yaml` after installation:
2. **Test Claude Code:**
- Open Claude Code in your project
- Check that global `.claude` directory is recognized
- Verify workflow commands and DMS commands are available
- Test `/workflow` commands for agent coordination
- Test `/workflow version` to check version information
| Tool | Purpose | Installation |
|------|---------|--------------|
| **Gemini CLI** | AI analysis & documentation | Follow [official docs](https://ai.google.dev) - Free quota, extended context |
| **Codex CLI** | AI development & implementation | Follow [official docs](https://github.com/openai/codex) - Autonomous development |
| **Qwen Code** | AI architecture & code generation | Follow [official docs](https://github.com/QwenLM/qwen-code) - Large context window |
### Recommended: MCP Tools *(Enhanced Analysis)*
MCP (Model Context Protocol) tools provide advanced codebase analysis. **Recommended installation** - While CCW has fallback mechanisms, not installing MCP tools may lead to unexpected behavior or degraded performance in some workflows.
| MCP Server | Purpose | Installation Guide |
|------------|---------|-------------------|
| **Exa MCP** | External API patterns & best practices | [Install Guide](https://smithery.ai/server/exa) |
| **Code Index MCP** | Advanced internal code search | [Install Guide](https://github.com/johnhuang316/code-index-mcp) |
| **Chrome DevTools MCP** | ⚠️ **Required for UI workflows** - URL mode design extraction | [Install Guide](https://github.com/ChromeDevTools/chrome-devtools-mcp) |
⚠️ **Note**: Some workflows expect MCP tools to be available. Without them, you may experience:
- Slower code analysis and search operations
- Reduced context quality in some scenarios
- Fallback to less efficient traditional tools
- Potential unexpected behavior in advanced workflows
## Troubleshooting
### PowerShell Execution Policy
### PowerShell Execution Policy (Windows)
If you get execution policy errors:
```powershell
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
```
### Workflow Commands Not Working
- Ensure `.claude` directory exists in your project
- Verify workflow.md and agent files are properly installed
- Check that Claude Code recognizes the configuration
- Verify installation: `ls ~/.claude` (should show agents/, commands/, workflows/)
- Restart Claude Code after installation
- Check `/workflow:session:list` command is recognized
### Permission Errors
- **Windows**: Run as Administrator
- **Linux/macOS**: Use `sudo` if needed for global PowerShell installation
- **Windows**: Run PowerShell as Administrator
- **Linux/macOS**: May need `sudo` for global PowerShell installation
## Support
- **Issues**: [GitHub Issues](https://github.com/catlog22/Claude-CCW/issues)
- **Documentation**: [Main README](README.md)
- **Workflow Documentation**: [.claude/commands/workflow.md](.claude/commands/workflow.md)
- **Issues**: [GitHub Issues](https://github.com/catlog22/Claude-Code-Workflow/issues)
- **Getting Started**: [Quick Start Guide](GETTING_STARTED.md)
- **Documentation**: [Main README](README.md)

View File

@@ -9,16 +9,16 @@ Claude Code Agent 工作流协调和分布式内存系统的交互式安装指
### 所有平台 - 远程 PowerShell 安装
```powershell
# 从功能分支进行交互式远程安装(最新版本)
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-remote.ps1)
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1)
# 包含统一文件输出系统的全局安装
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-remote.ps1) -Global
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1) -Global
# 强制覆盖(非交互式)- 包含所有新的工作流文件生成功能
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-remote.ps1) -Force -NonInteractive
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1) -Force -NonInteractive
# 一键备份所有现有文件(无需确认)
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-remote.ps1) -BackupAll
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1) -BackupAll
```
**远程安装器的功能:**
@@ -37,8 +37,7 @@ iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-r
### 所有平台PowerShell
```powershell
# 克隆包含最新功能的仓库
git clone -b main https://github.com/catlog22/Claude-CCW.git
cd Dmsflow
cd Claude-Code-Workflow
# Windows PowerShell 5.1+ 或 PowerShell Core仅支持全局安装
.\Install-Claude.ps1
@@ -57,19 +56,19 @@ pwsh ./Install-Claude.ps1
```powershell
# 全局安装
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-remote.ps1) -Global
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1) -Global
# 安装到指定目录
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-remote.ps1) -Directory "C:\MyProject"
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1) -Directory "C:\MyProject"
# 强制覆盖而不提示
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-remote.ps1) -Force -NonInteractive
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1) -Force -NonInteractive
# 从特定分支安装
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-remote.ps1) -Branch "dev"
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1) -Branch "dev"
# 跳过备份(更快)
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-remote.ps1) -NoBackup
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1) -NoBackup
```
### 本地安装选项
@@ -140,7 +139,7 @@ iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-r
### ⚡ 超快速(一键)
```powershell
# 一条命令完成安装
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Dmsflow/main/install-remote.ps1) -Global
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1) -Global
# 完成!🎉
# 开始使用 Claude Code Agent 工作流!
@@ -165,6 +164,55 @@ cd Dmsflow
# 使用 /workflow 命令和内存系统进行开发
```
## 先决条件和推荐工具
为了充分发挥 CCW 的全部潜力,强烈建议安装这些额外工具。
### 系统工具 (推荐)
这些工具增强了文件搜索和数据处理能力。
- **`ripgrep` (rg)**: 一款高速代码搜索工具。
- **Windows**: `winget install BurntSushi.Ripper.MSVC``choco install ripgrep`
- **macOS**: `brew install ripgrep`
- **Linux**: `sudo apt-get install ripgrep` (Debian/Ubuntu) 或 `sudo dnf install ripgrep` (Fedora)
- **验证**: `rg --version`
- **`jq`**: 一款命令行 JSON 处理器。
- **Windows**: `winget install jqlang.jq``choco install jq`
- **macOS**: `brew install jq`
- **Linux**: `sudo apt-get install jq` (Debian/Ubuntu) 或 `sudo dnf install jq` (Fedora)
- **验证**: `jq --version`
### 模型上下文协议 (MCP) 工具 (可选)
MCP 工具从外部来源提供高级上下文检索,增强 AI 的理解能力。关于安装,请参考各个工具的官方文档。
| 工具 | 用途 | 官方源码 |
|---|---|---|
| **Exa MCP** | 用于搜索代码和网络。 | [exa-labs/exa-mcp-server](https://github.com/exa-labs/exa-mcp-server) |
| **Code Index MCP** | 用于索引和搜索本地代码库。 |[johnhuang316/code-index-mcp](https://github.com/johnhuang316/code-index-mcp) |
| **Chrome DevTools MCP** | 用于与网页交互以提取布局和样式信息。 | [ChromeDevTools/chrome-devtools-mcp](https://github.com/ChromeDevTools/chrome-devtools-mcp) |
- **先决条件**: Node.js 和 npm (或兼容的 JavaScript 运行时)。
- **验证**: 安装后,检查服务器是否可以启动 (具体请查阅 MCP 文档)。
### 可选的 AI CLI 工具
CCW 使用包装脚本与底层的 AI 模型进行交互。为了使这些包装器正常工作,必须在您的系统上安装和配置相应的 CLI 工具。
- **Gemini CLI**: 用于分析、文档和探索。
- **用途**: 提供对 Google Gemini 模型的访问。
- **安装**: 请遵循 Google AI 官方文档来安装和配置 Gemini CLI。确保 `gemini` 命令在您的系统 PATH 中可用。
- **Codex CLI**: 用于自主开发和实现。
- **用途**: 提供对 OpenAI Codex 模型的访问,用于代码生成和修改。
- **安装**: 请遵循您环境中使用的特定 Codex CLI 工具的安装说明。确保 `codex` 命令在您的系统 PATH 中可用。
- **Qwen Code**: 用于架构和代码生成。
- **用途**: 提供对阿里巴巴通义千问模型的访问。
- **安装**: 请遵循通义千问官方文档来安装和配置其 CLI 工具。确保 `qwen` 命令在您的系统 PATH 中可用。
## 验证
安装后,验证:

View File

@@ -26,6 +26,9 @@
.PARAMETER NoBackup
Disable automatic backup functionality
.PARAMETER Uninstall
Uninstall Claude Code Workflow System based on installation manifest
.EXAMPLE
.\Install-Claude.ps1
Interactive installation with mode selection
@@ -45,6 +48,14 @@
.EXAMPLE
.\Install-Claude.ps1 -NoBackup
Installation without any backup (overwrite existing files)
.EXAMPLE
.\Install-Claude.ps1 -Uninstall
Uninstall Claude Code Workflow System
.EXAMPLE
.\Install-Claude.ps1 -Uninstall -Force
Uninstall without confirmation prompts
#>
param(
@@ -61,6 +72,8 @@ param(
[switch]$NoBackup,
[switch]$Uninstall,
[string]$SourceVersion = "",
[string]$SourceBranch = "",
@@ -98,6 +111,9 @@ $ColorWarning = "Yellow"
$ColorError = "Red"
$ColorPrompt = "Magenta"
# Global manifest directory location
$script:ManifestDir = Join-Path ([Environment]::GetFolderPath("UserProfile")) ".claude-manifests"
function Write-ColorOutput {
param(
[string]$Message,
@@ -704,6 +720,427 @@ function Merge-DirectoryContents {
return $true
}
# ============================================================================
# INSTALLATION MANIFEST MANAGEMENT
# ============================================================================
function New-InstallManifest {
<#
.SYNOPSIS
Create a new installation manifest to track installed files
#>
param(
[string]$InstallationMode,
[string]$InstallationPath
)
# Create manifest directory if it doesn't exist
if (-not (Test-Path $script:ManifestDir)) {
New-Item -ItemType Directory -Path $script:ManifestDir -Force | Out-Null
}
# Generate unique manifest ID based on timestamp and mode
$timestamp = Get-Date -Format "yyyyMMdd-HHmmss"
$manifestId = "install-$InstallationMode-$timestamp"
$manifest = @{
manifest_id = $manifestId
version = "1.0"
installation_mode = $InstallationMode
installation_path = $InstallationPath
installation_date = (Get-Date).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ssZ")
installer_version = $ScriptVersion
files = @()
directories = @()
}
return $manifest
}
function Add-ManifestEntry {
<#
.SYNOPSIS
Add a file or directory entry to the manifest
#>
param(
[Parameter(Mandatory=$true)]
[hashtable]$Manifest,
[Parameter(Mandatory=$true)]
[string]$Path,
[Parameter(Mandatory=$true)]
[ValidateSet("File", "Directory")]
[string]$Type
)
$entry = @{
path = $Path
type = $Type
timestamp = (Get-Date).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ssZ")
}
if ($Type -eq "File") {
$Manifest.files += $entry
} else {
$Manifest.directories += $entry
}
}
function Save-InstallManifest {
<#
.SYNOPSIS
Save the installation manifest to disk
#>
param(
[Parameter(Mandatory=$true)]
[hashtable]$Manifest
)
try {
# Use manifest ID to create unique file name
$manifestFileName = "$($Manifest.manifest_id).json"
$manifestPath = Join-Path $script:ManifestDir $manifestFileName
$Manifest | ConvertTo-Json -Depth 10 | Out-File -FilePath $manifestPath -Encoding utf8 -Force
Write-ColorOutput "Installation manifest saved: $manifestPath" $ColorSuccess
return $true
} catch {
Write-ColorOutput "WARNING: Failed to save installation manifest: $($_.Exception.Message)" $ColorWarning
return $false
}
}
function Migrate-LegacyManifest {
<#
.SYNOPSIS
Migrate old single manifest file to new multi-manifest system
#>
$legacyManifestPath = Join-Path ([Environment]::GetFolderPath("UserProfile")) ".claude-install-manifest.json"
if (-not (Test-Path $legacyManifestPath)) {
return
}
try {
Write-ColorOutput "Found legacy manifest file, migrating to new system..." $ColorInfo
# Create manifest directory if it doesn't exist
if (-not (Test-Path $script:ManifestDir)) {
New-Item -ItemType Directory -Path $script:ManifestDir -Force | Out-Null
}
# Read legacy manifest
$legacyJson = Get-Content -Path $legacyManifestPath -Raw -Encoding utf8
$legacy = $legacyJson | ConvertFrom-Json
# Generate new manifest ID
$timestamp = Get-Date -Format "yyyyMMdd-HHmmss"
$mode = if ($legacy.installation_mode) { $legacy.installation_mode } else { "Global" }
$manifestId = "install-$mode-$timestamp-migrated"
# Create new manifest with all fields
$newManifest = @{
manifest_id = $manifestId
version = if ($legacy.version) { $legacy.version } else { "1.0" }
installation_mode = $mode
installation_path = if ($legacy.installation_path) { $legacy.installation_path } else { [Environment]::GetFolderPath("UserProfile") }
installation_date = if ($legacy.installation_date) { $legacy.installation_date } else { (Get-Date).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ssZ") }
installer_version = if ($legacy.installer_version) { $legacy.installer_version } else { "unknown" }
files = if ($legacy.files) { @($legacy.files) } else { @() }
directories = if ($legacy.directories) { @($legacy.directories) } else { @() }
}
# Save to new location
$newManifestPath = Join-Path $script:ManifestDir "$manifestId.json"
$newManifest | ConvertTo-Json -Depth 10 | Out-File -FilePath $newManifestPath -Encoding utf8 -Force
# Rename old manifest (don't delete, keep as backup)
$backupPath = "$legacyManifestPath.migrated"
Move-Item -Path $legacyManifestPath -Destination $backupPath -Force
Write-ColorOutput "Legacy manifest migrated successfully" $ColorSuccess
Write-ColorOutput "Old manifest backed up to: $backupPath" $ColorInfo
} catch {
Write-ColorOutput "WARNING: Failed to migrate legacy manifest: $($_.Exception.Message)" $ColorWarning
}
}
function Get-AllInstallManifests {
<#
.SYNOPSIS
Get all installation manifests
#>
# Migrate legacy manifest if exists
Migrate-LegacyManifest
if (-not (Test-Path $script:ManifestDir)) {
return @()
}
try {
$manifestFiles = Get-ChildItem -Path $script:ManifestDir -Filter "install-*.json" -File | Sort-Object LastWriteTime -Descending
$manifests = [System.Collections.ArrayList]::new()
foreach ($file in $manifestFiles) {
try {
$manifestJson = Get-Content -Path $file.FullName -Raw -Encoding utf8
$manifest = $manifestJson | ConvertFrom-Json
# Convert to hashtable for easier manipulation
# Handle both old and new manifest formats
# Safely get array counts
$filesCount = 0
$dirsCount = 0
if ($manifest.files) {
if ($manifest.files -is [System.Array]) {
$filesCount = $manifest.files.Count
} else {
$filesCount = 1
}
}
if ($manifest.directories) {
if ($manifest.directories -is [System.Array]) {
$dirsCount = $manifest.directories.Count
} else {
$dirsCount = 1
}
}
$manifestHash = @{
manifest_id = if ($manifest.manifest_id) { $manifest.manifest_id } else { $file.BaseName }
manifest_file = $file.FullName
version = if ($manifest.version) { $manifest.version } else { "1.0" }
installation_mode = if ($manifest.installation_mode) { $manifest.installation_mode } else { "Unknown" }
installation_path = if ($manifest.installation_path) { $manifest.installation_path } else { "" }
installation_date = if ($manifest.installation_date) { $manifest.installation_date } else { $file.LastWriteTime.ToString("yyyy-MM-ddTHH:mm:ssZ") }
installer_version = if ($manifest.installer_version) { $manifest.installer_version } else { "unknown" }
files = if ($manifest.files) { @($manifest.files) } else { @() }
directories = if ($manifest.directories) { @($manifest.directories) } else { @() }
files_count = $filesCount
directories_count = $dirsCount
}
$null = $manifests.Add($manifestHash)
} catch {
Write-ColorOutput "WARNING: Failed to load manifest $($file.Name): $($_.Exception.Message)" $ColorWarning
}
}
return ,$manifests.ToArray()
} catch {
Write-ColorOutput "ERROR: Failed to list installation manifests: $($_.Exception.Message)" $ColorError
return @()
}
}
# ============================================================================
# UNINSTALLATION FUNCTIONS
# ============================================================================
function Uninstall-ClaudeWorkflow {
<#
.SYNOPSIS
Uninstall Claude Code Workflow based on installation manifest
#>
Write-ColorOutput "Claude Code Workflow System Uninstaller" $ColorInfo
Write-ColorOutput "========================================" $ColorInfo
Write-Host ""
# Load all manifests
$manifests = Get-AllInstallManifests
if (-not $manifests -or $manifests.Count -eq 0) {
Write-ColorOutput "ERROR: No installation manifests found in: $script:ManifestDir" $ColorError
Write-ColorOutput "Cannot proceed with uninstallation without manifest." $ColorError
Write-Host ""
Write-ColorOutput "Manual uninstallation instructions:" $ColorInfo
Write-Host "For Global installation, remove these directories:"
Write-Host " - ~/.claude/agents"
Write-Host " - ~/.claude/commands"
Write-Host " - ~/.claude/output-styles"
Write-Host " - ~/.claude/workflows"
Write-Host " - ~/.claude/scripts"
Write-Host " - ~/.claude/prompt-templates"
Write-Host " - ~/.claude/python_script"
Write-Host " - ~/.claude/skills"
Write-Host " - ~/.claude/version.json"
Write-Host " - ~/.claude/CLAUDE.md"
Write-Host " - ~/.codex"
Write-Host " - ~/.gemini"
Write-Host " - ~/.qwen"
return $false
}
# Display available installations
Write-ColorOutput "Found $($manifests.Count) installation(s):" $ColorInfo
Write-Host ""
# If only one manifest, use it directly
$selectedManifest = $null
if ($manifests.Count -eq 1) {
$selectedManifest = $manifests[0]
Write-ColorOutput "Only one installation found, will uninstall:" $ColorInfo
} else {
# Multiple manifests - let user choose
$options = @()
for ($i = 0; $i -lt $manifests.Count; $i++) {
$m = $manifests[$i]
# Safely extract date string
$dateStr = "unknown date"
if ($m.installation_date) {
try {
if ($m.installation_date.Length -ge 10) {
$dateStr = $m.installation_date.Substring(0, 10)
} else {
$dateStr = $m.installation_date
}
} catch {
$dateStr = "unknown date"
}
}
# Build option string with safe counts
$filesCount = if ($m.files_count) { $m.files_count } else { 0 }
$dirsCount = if ($m.directories_count) { $m.directories_count } else { 0 }
$pathInfo = if ($m.installation_path) { " ($($m.installation_path))" } else { "" }
$option = "$($i + 1). [$($m.installation_mode)] $dateStr - $filesCount files, $dirsCount dirs$pathInfo"
$options += $option
}
$options += "Cancel - Don't uninstall anything"
Write-Host ""
$selection = Get-UserChoiceWithArrows -Prompt "Select installation to uninstall:" -Options $options -DefaultIndex 0
if ($selection -like "Cancel*") {
Write-ColorOutput "Uninstallation cancelled." $ColorWarning
return $false
}
# Parse selection to get index
$selectedIndex = [int]($selection.Split('.')[0]) - 1
$selectedManifest = $manifests[$selectedIndex]
}
# Display selected installation info
Write-Host ""
Write-ColorOutput "Installation Information:" $ColorInfo
Write-Host " Manifest ID: $($selectedManifest.manifest_id)"
Write-Host " Mode: $($selectedManifest.installation_mode)"
Write-Host " Path: $($selectedManifest.installation_path)"
Write-Host " Date: $($selectedManifest.installation_date)"
Write-Host " Installer Version: $($selectedManifest.installer_version)"
# Use pre-calculated counts
$filesCount = if ($selectedManifest.files_count) { $selectedManifest.files_count } else { 0 }
$dirsCount = if ($selectedManifest.directories_count) { $selectedManifest.directories_count } else { 0 }
Write-Host " Files tracked: $filesCount"
Write-Host " Directories tracked: $dirsCount"
Write-Host ""
# Confirm uninstallation
if (-not (Confirm-Action "Do you want to uninstall this installation?" -DefaultYes:$false)) {
Write-ColorOutput "Uninstallation cancelled." $ColorWarning
return $false
}
# Use the selected manifest for uninstallation
$manifest = $selectedManifest
$removedFiles = 0
$removedDirs = 0
$failedItems = @()
# Remove files first
Write-ColorOutput "Removing installed files..." $ColorInfo
foreach ($fileEntry in $manifest.files) {
$filePath = $fileEntry.path
if (Test-Path $filePath) {
try {
Remove-Item -Path $filePath -Force -ErrorAction Stop
Write-ColorOutput " Removed file: $filePath" $ColorSuccess
$removedFiles++
} catch {
Write-ColorOutput " WARNING: Failed to remove file: $filePath" $ColorWarning
$failedItems += $filePath
}
} else {
Write-ColorOutput " File not found (already removed): $filePath" $ColorInfo
}
}
# Remove directories (in reverse order to handle nested directories)
Write-ColorOutput "Removing installed directories..." $ColorInfo
$sortedDirs = $manifest.directories | Sort-Object { $_.path.Length } -Descending
foreach ($dirEntry in $sortedDirs) {
$dirPath = $dirEntry.path
if (Test-Path $dirPath) {
try {
# Check if directory is empty or only contains files we installed
$dirContents = Get-ChildItem -Path $dirPath -Recurse -Force -ErrorAction SilentlyContinue
if (-not $dirContents -or ($dirContents | Measure-Object).Count -eq 0) {
Remove-Item -Path $dirPath -Recurse -Force -ErrorAction Stop
Write-ColorOutput " Removed directory: $dirPath" $ColorSuccess
$removedDirs++
} else {
Write-ColorOutput " Directory not empty (preserved): $dirPath" $ColorWarning
}
} catch {
Write-ColorOutput " WARNING: Failed to remove directory: $dirPath" $ColorWarning
$failedItems += $dirPath
}
} else {
Write-ColorOutput " Directory not found (already removed): $dirPath" $ColorInfo
}
}
# Remove manifest file
if (Test-Path $manifest.manifest_file) {
try {
Remove-Item -Path $manifest.manifest_file -Force
Write-ColorOutput "Removed installation manifest: $($manifest.manifest_id)" $ColorSuccess
} catch {
Write-ColorOutput "WARNING: Failed to remove manifest file" $ColorWarning
}
}
# Show summary
Write-Host ""
Write-ColorOutput "========================================" $ColorInfo
Write-ColorOutput "Uninstallation Summary:" $ColorInfo
Write-Host " Files removed: $removedFiles"
Write-Host " Directories removed: $removedDirs"
if ($failedItems.Count -gt 0) {
Write-Host ""
Write-ColorOutput "Failed to remove the following items:" $ColorWarning
foreach ($item in $failedItems) {
Write-Host " - $item"
}
}
Write-Host ""
if ($failedItems.Count -eq 0) {
Write-ColorOutput "Claude Code Workflow has been successfully uninstalled!" $ColorSuccess
} else {
Write-ColorOutput "Uninstallation completed with warnings." $ColorWarning
Write-ColorOutput "Please manually remove the failed items listed above." $ColorInfo
}
return $true
}
function Create-VersionJson {
param(
[string]$TargetClaudeDir,
@@ -751,6 +1188,9 @@ function Install-Global {
Write-ColorOutput "Global installation path: $userProfile" $ColorInfo
# Initialize manifest
$manifest = New-InstallManifest -InstallationMode "Global" -InstallationPath $userProfile
# Source paths
$sourceDir = $PSScriptRoot
$sourceClaudeDir = Join-Path $sourceDir ".claude"
@@ -791,22 +1231,73 @@ function Install-Global {
Write-ColorOutput "Installing .claude directory..." $ColorInfo
$claudeInstalled = Backup-AndReplaceDirectory -Source $sourceClaudeDir -Destination $globalClaudeDir -Description ".claude directory" -BackupFolder $backupFolder
# Track .claude directory in manifest
if ($claudeInstalled) {
Add-ManifestEntry -Manifest $manifest -Path $globalClaudeDir -Type "Directory"
# Track files from SOURCE directory, not destination
Get-ChildItem -Path $sourceClaudeDir -Recurse -File | ForEach-Object {
# Calculate target path where this file will be installed
$relativePath = $_.FullName.Substring($sourceClaudeDir.Length)
$targetPath = $globalClaudeDir + $relativePath
Add-ManifestEntry -Manifest $manifest -Path $targetPath -Type "File"
}
}
# Handle CLAUDE.md file in .claude directory
Write-ColorOutput "Installing CLAUDE.md to global .claude directory..." $ColorInfo
$claudeMdInstalled = Copy-FileToDestination -Source $sourceClaudeMd -Destination $globalClaudeMd -Description "CLAUDE.md" -BackupFolder $backupFolder
# Track CLAUDE.md in manifest
if ($claudeMdInstalled) {
Add-ManifestEntry -Manifest $manifest -Path $globalClaudeMd -Type "File"
}
# Replace .codex directory (backup → clear → copy entire folder)
Write-ColorOutput "Installing .codex directory..." $ColorInfo
$codexInstalled = Backup-AndReplaceDirectory -Source $sourceCodexDir -Destination $globalCodexDir -Description ".codex directory" -BackupFolder $backupFolder
# Track .codex directory in manifest
if ($codexInstalled) {
Add-ManifestEntry -Manifest $manifest -Path $globalCodexDir -Type "Directory"
# Track files from SOURCE directory
Get-ChildItem -Path $sourceCodexDir -Recurse -File | ForEach-Object {
$relativePath = $_.FullName.Substring($sourceCodexDir.Length)
$targetPath = $globalCodexDir + $relativePath
Add-ManifestEntry -Manifest $manifest -Path $targetPath -Type "File"
}
}
# Replace .gemini directory (backup → clear → copy entire folder)
Write-ColorOutput "Installing .gemini directory..." $ColorInfo
$geminiInstalled = Backup-AndReplaceDirectory -Source $sourceGeminiDir -Destination $globalGeminiDir -Description ".gemini directory" -BackupFolder $backupFolder
# Track .gemini directory in manifest
if ($geminiInstalled) {
Add-ManifestEntry -Manifest $manifest -Path $globalGeminiDir -Type "Directory"
# Track files from SOURCE directory
Get-ChildItem -Path $sourceGeminiDir -Recurse -File | ForEach-Object {
$relativePath = $_.FullName.Substring($sourceGeminiDir.Length)
$targetPath = $globalGeminiDir + $relativePath
Add-ManifestEntry -Manifest $manifest -Path $targetPath -Type "File"
}
}
# Replace .qwen directory (backup → clear → copy entire folder)
Write-ColorOutput "Installing .qwen directory..." $ColorInfo
$qwenInstalled = Backup-AndReplaceDirectory -Source $sourceQwenDir -Destination $globalQwenDir -Description ".qwen directory" -BackupFolder $backupFolder
# Track .qwen directory in manifest
if ($qwenInstalled) {
Add-ManifestEntry -Manifest $manifest -Path $globalQwenDir -Type "Directory"
# Track files from SOURCE directory
Get-ChildItem -Path $sourceQwenDir -Recurse -File | ForEach-Object {
$relativePath = $_.FullName.Substring($sourceQwenDir.Length)
$targetPath = $globalQwenDir + $relativePath
Add-ManifestEntry -Manifest $manifest -Path $targetPath -Type "File"
}
}
# Create version.json in global .claude directory
Write-ColorOutput "Creating version.json..." $ColorInfo
Create-VersionJson -TargetClaudeDir $globalClaudeDir -InstallationMode "Global"
@@ -820,6 +1311,9 @@ function Install-Global {
}
}
# Save installation manifest
Save-InstallManifest -Manifest $manifest
return $true
}
@@ -837,6 +1331,9 @@ function Install-Path {
Write-ColorOutput "Global path: $userProfile" $ColorInfo
# Initialize manifest
$manifest = New-InstallManifest -InstallationMode "Path" -InstallationPath $TargetDirectory
# Source paths
$sourceDir = $PSScriptRoot
$sourceClaudeDir = Join-Path $sourceDir ".claude"
@@ -877,8 +1374,19 @@ function Install-Path {
if (Test-Path $sourceFolderPath) {
# Use new backup and replace logic for local folders
Write-ColorOutput "Installing local folder: $folder..." $ColorInfo
Backup-AndReplaceDirectory -Source $sourceFolderPath -Destination $destFolderPath -Description "$folder folder" -BackupFolder $backupFolder
$folderInstalled = Backup-AndReplaceDirectory -Source $sourceFolderPath -Destination $destFolderPath -Description "$folder folder" -BackupFolder $backupFolder
Write-ColorOutput "Installed local folder: $folder" $ColorSuccess
# Track local folder in manifest
if ($folderInstalled) {
Add-ManifestEntry -Manifest $manifest -Path $destFolderPath -Type "Directory"
# Track files from SOURCE directory
Get-ChildItem -Path $sourceFolderPath -Recurse -File | ForEach-Object {
$relativePath = $_.FullName.Substring($sourceFolderPath.Length)
$targetPath = $destFolderPath + $relativePath
Add-ManifestEntry -Manifest $manifest -Path $targetPath -Type "File"
}
}
} else {
Write-ColorOutput "WARNING: Source folder not found: $folder" $ColorWarning
}
@@ -933,23 +1441,71 @@ function Install-Path {
Write-ColorOutput "Merged $mergedCount files to global location" $ColorSuccess
# Track global files in manifest
$globalClaudeFiles = Get-ChildItem -Path $globalClaudeDir -Recurse -File | Where-Object {
$relativePath = $_.FullName.Substring($globalClaudeDir.Length + 1)
$topFolder = $relativePath.Split([System.IO.Path]::DirectorySeparatorChar)[0]
$topFolder -notin $localFolders
}
foreach ($file in $globalClaudeFiles) {
Add-ManifestEntry -Manifest $manifest -Path $file.FullName -Type "File"
}
# Handle CLAUDE.md file in global .claude directory
$globalClaudeMd = Join-Path $globalClaudeDir "CLAUDE.md"
Write-ColorOutput "Installing CLAUDE.md to global .claude directory..." $ColorInfo
Copy-FileToDestination -Source $sourceClaudeMd -Destination $globalClaudeMd -Description "CLAUDE.md" -BackupFolder $backupFolder
$claudeMdInstalled = Copy-FileToDestination -Source $sourceClaudeMd -Destination $globalClaudeMd -Description "CLAUDE.md" -BackupFolder $backupFolder
# Track CLAUDE.md in manifest
if ($claudeMdInstalled) {
Add-ManifestEntry -Manifest $manifest -Path $globalClaudeMd -Type "File"
}
# Replace .codex directory to local location (backup → clear → copy entire folder)
Write-ColorOutput "Installing .codex directory to local location..." $ColorInfo
$codexInstalled = Backup-AndReplaceDirectory -Source $sourceCodexDir -Destination $localCodexDir -Description ".codex directory" -BackupFolder $backupFolder
# Track .codex directory in manifest
if ($codexInstalled) {
Add-ManifestEntry -Manifest $manifest -Path $localCodexDir -Type "Directory"
# Track files from SOURCE directory
Get-ChildItem -Path $sourceCodexDir -Recurse -File | ForEach-Object {
$relativePath = $_.FullName.Substring($sourceCodexDir.Length)
$targetPath = $localCodexDir + $relativePath
Add-ManifestEntry -Manifest $manifest -Path $targetPath -Type "File"
}
}
# Replace .gemini directory to local location (backup → clear → copy entire folder)
Write-ColorOutput "Installing .gemini directory to local location..." $ColorInfo
$geminiInstalled = Backup-AndReplaceDirectory -Source $sourceGeminiDir -Destination $localGeminiDir -Description ".gemini directory" -BackupFolder $backupFolder
# Track .gemini directory in manifest
if ($geminiInstalled) {
Add-ManifestEntry -Manifest $manifest -Path $localGeminiDir -Type "Directory"
# Track files from SOURCE directory
Get-ChildItem -Path $sourceGeminiDir -Recurse -File | ForEach-Object {
$relativePath = $_.FullName.Substring($sourceGeminiDir.Length)
$targetPath = $localGeminiDir + $relativePath
Add-ManifestEntry -Manifest $manifest -Path $targetPath -Type "File"
}
}
# Replace .qwen directory to local location (backup → clear → copy entire folder)
Write-ColorOutput "Installing .qwen directory to local location..." $ColorInfo
$qwenInstalled = Backup-AndReplaceDirectory -Source $sourceQwenDir -Destination $localQwenDir -Description ".qwen directory" -BackupFolder $backupFolder
# Track .qwen directory in manifest
if ($qwenInstalled) {
Add-ManifestEntry -Manifest $manifest -Path $localQwenDir -Type "Directory"
# Track files from SOURCE directory
Get-ChildItem -Path $sourceQwenDir -Recurse -File | ForEach-Object {
$relativePath = $_.FullName.Substring($sourceQwenDir.Length)
$targetPath = $localQwenDir + $relativePath
Add-ManifestEntry -Manifest $manifest -Path $targetPath -Type "File"
}
}
# Create version.json in local .claude directory
Write-ColorOutput "Creating version.json in local directory..." $ColorInfo
Create-VersionJson -TargetClaudeDir $localClaudeDir -InstallationMode "Path"
@@ -966,6 +1522,9 @@ function Install-Path {
}
}
# Save installation manifest
Save-InstallManifest -Manifest $manifest
return $true
}
@@ -1098,6 +1657,42 @@ function Main {
# Use SourceVersion parameter if provided, otherwise use default
$installVersion = if ($SourceVersion) { $SourceVersion } else { $DefaultVersion }
# Show banner first
Show-Banner
# Check for uninstall mode from parameter or ask user interactively
$operationMode = "Install"
if ($Uninstall) {
$operationMode = "Uninstall"
} elseif (-not $NonInteractive -and -not $InstallMode) {
# Interactive mode selection
Write-Host ""
$operations = @(
"Install - Install Claude Code Workflow System",
"Uninstall - Remove Claude Code Workflow System"
)
$selection = Get-UserChoiceWithArrows -Prompt "Choose operation:" -Options $operations -DefaultIndex 0
if ($selection -like "Uninstall*") {
$operationMode = "Uninstall"
}
}
# Handle uninstall mode
if ($operationMode -eq "Uninstall") {
$result = Uninstall-ClaudeWorkflow
if (-not $NonInteractive) {
Write-Host ""
Write-ColorOutput "Press any key to exit..." $ColorPrompt
$null = $Host.UI.RawUI.ReadKey("NoEcho,IncludeKeyDown")
}
return $(if ($result) { 0 } else { 1 })
}
# Continue with installation
Show-Header -InstallVersion $installVersion
# Test prerequisites

View File

@@ -24,10 +24,14 @@ FORCE=false
NON_INTERACTIVE=false
BACKUP_ALL=true # Enabled by default
NO_BACKUP=false
UNINSTALL=false # Uninstall mode
SOURCE_VERSION="" # Version from remote installer
SOURCE_BRANCH="" # Branch from remote installer
SOURCE_COMMIT="" # Commit SHA from remote installer
# Global manifest directory location
MANIFEST_DIR="${HOME}/.claude-manifests"
# Functions
function write_color() {
local message="$1"
@@ -474,6 +478,9 @@ function install_global() {
write_color "Global installation path: $user_home" "$COLOR_INFO"
# Initialize manifest
local manifest_file=$(new_install_manifest "Global" "$user_home")
# Source paths
local script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
local source_claude_dir="${script_dir}/.claude"
@@ -507,23 +514,66 @@ function install_global() {
# Replace .claude directory (backup → clear conflicting → copy)
write_color "Installing .claude directory..." "$COLOR_INFO"
backup_and_replace_directory "$source_claude_dir" "$global_claude_dir" ".claude directory" "$backup_folder"
if backup_and_replace_directory "$source_claude_dir" "$global_claude_dir" ".claude directory" "$backup_folder"; then
# Track .claude directory in manifest
add_manifest_entry "$manifest_file" "$global_claude_dir" "Directory"
# Track files from SOURCE directory, not destination
while IFS= read -r -d '' source_file; do
local relative_path="${source_file#$source_claude_dir}"
local target_path="${global_claude_dir}${relative_path}"
add_manifest_entry "$manifest_file" "$target_path" "File"
done < <(find "$source_claude_dir" -type f -print0)
fi
# Handle CLAUDE.md file
write_color "Installing CLAUDE.md to global .claude directory..." "$COLOR_INFO"
copy_file_to_destination "$source_claude_md" "$global_claude_md" "CLAUDE.md" "$backup_folder"
if copy_file_to_destination "$source_claude_md" "$global_claude_md" "CLAUDE.md" "$backup_folder"; then
# Track CLAUDE.md in manifest
add_manifest_entry "$manifest_file" "$global_claude_md" "File"
fi
# Replace .codex directory (backup → clear conflicting → copy)
write_color "Installing .codex directory..." "$COLOR_INFO"
backup_and_replace_directory "$source_codex_dir" "$global_codex_dir" ".codex directory" "$backup_folder"
if backup_and_replace_directory "$source_codex_dir" "$global_codex_dir" ".codex directory" "$backup_folder"; then
# Track .codex directory in manifest
add_manifest_entry "$manifest_file" "$global_codex_dir" "Directory"
# Track files from SOURCE directory
while IFS= read -r -d '' source_file; do
local relative_path="${source_file#$source_codex_dir}"
local target_path="${global_codex_dir}${relative_path}"
add_manifest_entry "$manifest_file" "$target_path" "File"
done < <(find "$source_codex_dir" -type f -print0)
fi
# Replace .gemini directory (backup → clear conflicting → copy)
write_color "Installing .gemini directory..." "$COLOR_INFO"
backup_and_replace_directory "$source_gemini_dir" "$global_gemini_dir" ".gemini directory" "$backup_folder"
if backup_and_replace_directory "$source_gemini_dir" "$global_gemini_dir" ".gemini directory" "$backup_folder"; then
# Track .gemini directory in manifest
add_manifest_entry "$manifest_file" "$global_gemini_dir" "Directory"
# Track files from SOURCE directory
while IFS= read -r -d '' source_file; do
local relative_path="${source_file#$source_gemini_dir}"
local target_path="${global_gemini_dir}${relative_path}"
add_manifest_entry "$manifest_file" "$target_path" "File"
done < <(find "$source_gemini_dir" -type f -print0)
fi
# Replace .qwen directory (backup → clear conflicting → copy)
write_color "Installing .qwen directory..." "$COLOR_INFO"
backup_and_replace_directory "$source_qwen_dir" "$global_qwen_dir" ".qwen directory" "$backup_folder"
if backup_and_replace_directory "$source_qwen_dir" "$global_qwen_dir" ".qwen directory" "$backup_folder"; then
# Track .qwen directory in manifest
add_manifest_entry "$manifest_file" "$global_qwen_dir" "Directory"
# Track files from SOURCE directory
while IFS= read -r -d '' source_file; do
local relative_path="${source_file#$source_qwen_dir}"
local target_path="${global_qwen_dir}${relative_path}"
add_manifest_entry "$manifest_file" "$target_path" "File"
done < <(find "$source_qwen_dir" -type f -print0)
fi
# Remove empty backup folder
if [ -n "$backup_folder" ] && [ -d "$backup_folder" ]; then
@@ -537,6 +587,9 @@ function install_global() {
write_color "Creating version.json..." "$COLOR_INFO"
create_version_json "$global_claude_dir" "Global"
# Save installation manifest
save_install_manifest "$manifest_file"
return 0
}
@@ -550,6 +603,9 @@ function install_path() {
local global_claude_dir="${user_home}/.claude"
write_color "Global path: $user_home" "$COLOR_INFO"
# Initialize manifest
local manifest_file=$(new_install_manifest "Path" "$target_dir")
# Source paths
local script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
local source_claude_dir="${script_dir}/.claude"
@@ -588,7 +644,17 @@ function install_path() {
if [ -d "$source_folder" ]; then
# Use new backup and replace logic for local folders
write_color "Installing local folder: $folder..." "$COLOR_INFO"
backup_and_replace_directory "$source_folder" "$dest_folder" "$folder folder" "$backup_folder"
if backup_and_replace_directory "$source_folder" "$dest_folder" "$folder folder" "$backup_folder"; then
# Track local folder in manifest
add_manifest_entry "$manifest_file" "$dest_folder" "Directory"
# Track files from SOURCE directory
while IFS= read -r -d '' source_file; do
local relative_path="${source_file#$source_folder}"
local target_path="${dest_folder}${relative_path}"
add_manifest_entry "$manifest_file" "$target_path" "File"
done < <(find "$source_folder" -type f -print0)
fi
write_color "✓ Installed local folder: $folder" "$COLOR_SUCCESS"
else
write_color "WARNING: Source folder not found: $folder" "$COLOR_WARNING"
@@ -644,19 +710,52 @@ function install_path() {
# Handle CLAUDE.md file in global .claude directory
local global_claude_md="${global_claude_dir}/CLAUDE.md"
write_color "Installing CLAUDE.md to global .claude directory..." "$COLOR_INFO"
copy_file_to_destination "$source_claude_md" "$global_claude_md" "CLAUDE.md" "$backup_folder"
if copy_file_to_destination "$source_claude_md" "$global_claude_md" "CLAUDE.md" "$backup_folder"; then
# Track CLAUDE.md in manifest
add_manifest_entry "$manifest_file" "$global_claude_md" "File"
fi
# Replace .codex directory to local location (backup → clear conflicting → copy)
write_color "Installing .codex directory to local location..." "$COLOR_INFO"
backup_and_replace_directory "$source_codex_dir" "$local_codex_dir" ".codex directory" "$backup_folder"
if backup_and_replace_directory "$source_codex_dir" "$local_codex_dir" ".codex directory" "$backup_folder"; then
# Track .codex directory in manifest
add_manifest_entry "$manifest_file" "$local_codex_dir" "Directory"
# Track files from SOURCE directory
while IFS= read -r -d '' source_file; do
local relative_path="${source_file#$source_codex_dir}"
local target_path="${local_codex_dir}${relative_path}"
add_manifest_entry "$manifest_file" "$target_path" "File"
done < <(find "$source_codex_dir" -type f -print0)
fi
# Replace .gemini directory to local location (backup → clear conflicting → copy)
write_color "Installing .gemini directory to local location..." "$COLOR_INFO"
backup_and_replace_directory "$source_gemini_dir" "$local_gemini_dir" ".gemini directory" "$backup_folder"
if backup_and_replace_directory "$source_gemini_dir" "$local_gemini_dir" ".gemini directory" "$backup_folder"; then
# Track .gemini directory in manifest
add_manifest_entry "$manifest_file" "$local_gemini_dir" "Directory"
# Track files from SOURCE directory
while IFS= read -r -d '' source_file; do
local relative_path="${source_file#$source_gemini_dir}"
local target_path="${local_gemini_dir}${relative_path}"
add_manifest_entry "$manifest_file" "$target_path" "File"
done < <(find "$source_gemini_dir" -type f -print0)
fi
# Replace .qwen directory to local location (backup → clear conflicting → copy)
write_color "Installing .qwen directory to local location..." "$COLOR_INFO"
backup_and_replace_directory "$source_qwen_dir" "$local_qwen_dir" ".qwen directory" "$backup_folder"
if backup_and_replace_directory "$source_qwen_dir" "$local_qwen_dir" ".qwen directory" "$backup_folder"; then
# Track .qwen directory in manifest
add_manifest_entry "$manifest_file" "$local_qwen_dir" "Directory"
# Track files from SOURCE directory
while IFS= read -r -d '' source_file; do
local relative_path="${source_file#$source_qwen_dir}"
local target_path="${local_qwen_dir}${relative_path}"
add_manifest_entry "$manifest_file" "$target_path" "File"
done < <(find "$source_qwen_dir" -type f -print0)
fi
# Remove empty backup folder
if [ -n "$backup_folder" ] && [ -d "$backup_folder" ]; then
@@ -674,6 +773,9 @@ function install_path() {
write_color "Creating version.json in global directory..." "$COLOR_INFO"
create_version_json "$global_claude_dir" "Global"
# Save installation manifest
save_install_manifest "$manifest_file"
return 0
}
@@ -749,6 +851,357 @@ function get_installation_path() {
done
}
# ============================================================================
# INSTALLATION MANIFEST MANAGEMENT
# ============================================================================
function new_install_manifest() {
local installation_mode="$1"
local installation_path="$2"
# Create manifest directory if it doesn't exist
mkdir -p "$MANIFEST_DIR"
# Generate unique manifest ID based on timestamp and mode
local timestamp=$(date +"%Y%m%d-%H%M%S")
local manifest_id="install-${installation_mode}-${timestamp}"
# Create manifest file path
local manifest_file="${MANIFEST_DIR}/${manifest_id}.json"
# Get current UTC timestamp
local installation_date_utc=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
# Create manifest JSON
cat > "$manifest_file" << EOF
{
"manifest_id": "$manifest_id",
"version": "1.0",
"installation_mode": "$installation_mode",
"installation_path": "$installation_path",
"installation_date": "$installation_date_utc",
"installer_version": "$VERSION",
"files": [],
"directories": []
}
EOF
echo "$manifest_file"
}
function add_manifest_entry() {
local manifest_file="$1"
local entry_path="$2"
local entry_type="$3"
if [ ! -f "$manifest_file" ]; then
write_color "WARNING: Manifest file not found: $manifest_file" "$COLOR_WARNING"
return 1
fi
local timestamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
# Escape path for JSON
local escaped_path=$(echo "$entry_path" | sed 's/\\/\\\\/g' | sed 's/"/\\"/g')
# Create entry JSON
local entry_json=$(cat << EOF
{
"path": "$escaped_path",
"type": "$entry_type",
"timestamp": "$timestamp"
}
EOF
)
# Read manifest, add entry, write back
local temp_file="${manifest_file}.tmp"
if [ "$entry_type" = "File" ]; then
jq --argjson entry "$entry_json" '.files += [$entry]' "$manifest_file" > "$temp_file"
else
jq --argjson entry "$entry_json" '.directories += [$entry]' "$manifest_file" > "$temp_file"
fi
mv "$temp_file" "$manifest_file"
}
function save_install_manifest() {
local manifest_file="$1"
if [ -f "$manifest_file" ]; then
write_color "Installation manifest saved: $manifest_file" "$COLOR_SUCCESS"
return 0
else
write_color "WARNING: Failed to save installation manifest" "$COLOR_WARNING"
return 1
fi
}
function migrate_legacy_manifest() {
local legacy_manifest="${HOME}/.claude-install-manifest.json"
if [ ! -f "$legacy_manifest" ]; then
return 0
fi
write_color "Found legacy manifest file, migrating to new system..." "$COLOR_INFO"
# Create manifest directory if it doesn't exist
mkdir -p "$MANIFEST_DIR"
# Read legacy manifest
local mode=$(jq -r '.installation_mode // "Global"' "$legacy_manifest")
local timestamp=$(date +"%Y%m%d-%H%M%S")
local manifest_id="install-${mode}-${timestamp}-migrated"
# Create new manifest file
local new_manifest="${MANIFEST_DIR}/${manifest_id}.json"
# Copy with new manifest_id field
jq --arg id "$manifest_id" '. + {manifest_id: $id}' "$legacy_manifest" > "$new_manifest"
# Rename old manifest (don't delete, keep as backup)
mv "$legacy_manifest" "${legacy_manifest}.migrated"
write_color "Legacy manifest migrated successfully" "$COLOR_SUCCESS"
write_color "Old manifest backed up to: ${legacy_manifest}.migrated" "$COLOR_INFO"
}
function get_all_install_manifests() {
# Migrate legacy manifest if exists
migrate_legacy_manifest
if [ ! -d "$MANIFEST_DIR" ]; then
echo "[]"
return
fi
# Check if any manifest files exist
local manifest_count=$(find "$MANIFEST_DIR" -name "install-*.json" -type f 2>/dev/null | wc -l)
if [ "$manifest_count" -eq 0 ]; then
echo "[]"
return
fi
# Collect all manifests into JSON array
local manifests="["
local first=true
while IFS= read -r -d '' file; do
if [ "$first" = true ]; then
first=false
else
manifests+=","
fi
# Add manifest_file field
local manifest_content=$(jq --arg file "$file" '. + {manifest_file: $file}' "$file")
# Count files and directories safely
local files_count=$(echo "$manifest_content" | jq '.files | length')
local dirs_count=$(echo "$manifest_content" | jq '.directories | length')
# Add counts to manifest
manifest_content=$(echo "$manifest_content" | jq --argjson fc "$files_count" --argjson dc "$dirs_count" '. + {files_count: $fc, directories_count: $dc}')
manifests+="$manifest_content"
done < <(find "$MANIFEST_DIR" -name "install-*.json" -type f -print0 | sort -z)
manifests+="]"
echo "$manifests"
}
# ============================================================================
# UNINSTALLATION FUNCTIONS
# ============================================================================
function uninstall_claude_workflow() {
write_color "Claude Code Workflow System Uninstaller" "$COLOR_INFO"
write_color "========================================" "$COLOR_INFO"
echo ""
# Load all manifests
local manifests_json=$(get_all_install_manifests)
local manifests_count=$(echo "$manifests_json" | jq 'length')
if [ "$manifests_count" -eq 0 ]; then
write_color "ERROR: No installation manifests found in: $MANIFEST_DIR" "$COLOR_ERROR"
write_color "Cannot proceed with uninstallation without manifest." "$COLOR_ERROR"
echo ""
write_color "Manual uninstallation instructions:" "$COLOR_INFO"
echo "For Global installation, remove these directories:"
echo " - ~/.claude/agents"
echo " - ~/.claude/commands"
echo " - ~/.claude/output-styles"
echo " - ~/.claude/workflows"
echo " - ~/.claude/scripts"
echo " - ~/.claude/prompt-templates"
echo " - ~/.claude/python_script"
echo " - ~/.claude/skills"
echo " - ~/.claude/version.json"
echo " - ~/.claude/CLAUDE.md"
echo " - ~/.codex"
echo " - ~/.gemini"
echo " - ~/.qwen"
return 1
fi
# Display available installations
write_color "Found $manifests_count installation(s):" "$COLOR_INFO"
echo ""
# If only one manifest, use it directly
local selected_index=0
local selected_manifest=""
if [ "$manifests_count" -eq 1 ]; then
selected_manifest=$(echo "$manifests_json" | jq '.[0]')
write_color "Only one installation found, will uninstall:" "$COLOR_INFO"
else
# Multiple manifests - let user choose
local options=()
for i in $(seq 0 $((manifests_count - 1))); do
local m=$(echo "$manifests_json" | jq ".[$i]")
# Safely extract date string
local date_str=$(echo "$m" | jq -r '.installation_date // "unknown date"' | cut -c1-10)
local mode=$(echo "$m" | jq -r '.installation_mode // "Unknown"')
local files_count=$(echo "$m" | jq -r '.files_count // 0')
local dirs_count=$(echo "$m" | jq -r '.directories_count // 0')
local path_info=$(echo "$m" | jq -r '.installation_path // ""')
if [ -n "$path_info" ]; then
path_info=" ($path_info)"
fi
options+=("$((i + 1)). [$mode] $date_str - $files_count files, $dirs_count dirs$path_info")
done
options+=("Cancel - Don't uninstall anything")
echo ""
local selection=$(get_user_choice "Select installation to uninstall:" "${options[@]}")
if [[ "$selection" == Cancel* ]]; then
write_color "Uninstallation cancelled." "$COLOR_WARNING"
return 1
fi
# Parse selection to get index
selected_index=$((${selection%%.*} - 1))
selected_manifest=$(echo "$manifests_json" | jq ".[$selected_index]")
fi
# Display selected installation info
echo ""
write_color "Installation Information:" "$COLOR_INFO"
echo " Manifest ID: $(echo "$selected_manifest" | jq -r '.manifest_id')"
echo " Mode: $(echo "$selected_manifest" | jq -r '.installation_mode')"
echo " Path: $(echo "$selected_manifest" | jq -r '.installation_path')"
echo " Date: $(echo "$selected_manifest" | jq -r '.installation_date')"
echo " Installer Version: $(echo "$selected_manifest" | jq -r '.installer_version')"
echo " Files tracked: $(echo "$selected_manifest" | jq -r '.files_count')"
echo " Directories tracked: $(echo "$selected_manifest" | jq -r '.directories_count')"
echo ""
# Confirm uninstallation
if ! confirm_action "Do you want to uninstall this installation?" false; then
write_color "Uninstallation cancelled." "$COLOR_WARNING"
return 1
fi
local removed_files=0
local removed_dirs=0
local failed_items=()
# Remove files first
write_color "Removing installed files..." "$COLOR_INFO"
local files_array=$(echo "$selected_manifest" | jq -c '.files[]')
while IFS= read -r file_entry; do
local file_path=$(echo "$file_entry" | jq -r '.path')
if [ -f "$file_path" ]; then
if rm -f "$file_path" 2>/dev/null; then
write_color " Removed file: $file_path" "$COLOR_SUCCESS"
((removed_files++))
else
write_color " WARNING: Failed to remove file: $file_path" "$COLOR_WARNING"
failed_items+=("$file_path")
fi
else
write_color " File not found (already removed): $file_path" "$COLOR_INFO"
fi
done <<< "$files_array"
# Remove directories (in reverse order by path length)
write_color "Removing installed directories..." "$COLOR_INFO"
local dirs_array=$(echo "$selected_manifest" | jq -c '.directories[] | {path: .path, length: (.path | length)}' | sort -t: -k2 -rn | jq -c '.path')
while IFS= read -r dir_path_json; do
local dir_path=$(echo "$dir_path_json" | jq -r '.')
if [ -d "$dir_path" ]; then
# Check if directory is empty
if [ -z "$(ls -A "$dir_path" 2>/dev/null)" ]; then
if rmdir "$dir_path" 2>/dev/null; then
write_color " Removed directory: $dir_path" "$COLOR_SUCCESS"
((removed_dirs++))
else
write_color " WARNING: Failed to remove directory: $dir_path" "$COLOR_WARNING"
failed_items+=("$dir_path")
fi
else
write_color " Directory not empty (preserved): $dir_path" "$COLOR_WARNING"
fi
else
write_color " Directory not found (already removed): $dir_path" "$COLOR_INFO"
fi
done <<< "$dirs_array"
# Remove manifest file
local manifest_file=$(echo "$selected_manifest" | jq -r '.manifest_file')
if [ -f "$manifest_file" ]; then
if rm -f "$manifest_file" 2>/dev/null; then
write_color "Removed installation manifest: $(basename "$manifest_file")" "$COLOR_SUCCESS"
else
write_color "WARNING: Failed to remove manifest file" "$COLOR_WARNING"
fi
fi
# Show summary
echo ""
write_color "========================================" "$COLOR_INFO"
write_color "Uninstallation Summary:" "$COLOR_INFO"
echo " Files removed: $removed_files"
echo " Directories removed: $removed_dirs"
if [ ${#failed_items[@]} -gt 0 ]; then
echo ""
write_color "Failed to remove the following items:" "$COLOR_WARNING"
for item in "${failed_items[@]}"; do
echo " - $item"
done
fi
echo ""
if [ ${#failed_items[@]} -eq 0 ]; then
write_color "✓ Claude Code Workflow has been successfully uninstalled!" "$COLOR_SUCCESS"
else
write_color "Uninstallation completed with warnings." "$COLOR_WARNING"
write_color "Please manually remove the failed items listed above." "$COLOR_INFO"
fi
return 0
}
function create_version_json() {
local target_claude_dir="$1"
local installation_mode="$2"
@@ -863,6 +1316,10 @@ function parse_arguments() {
BACKUP_ALL=false
shift
;;
-Uninstall)
UNINSTALL=true
shift
;;
-SourceVersion)
SOURCE_VERSION="$2"
shift 2
@@ -901,6 +1358,7 @@ Options:
-NonInteractive Run in non-interactive mode with default options
-BackupAll Automatically backup all existing files (default)
-NoBackup Disable automatic backup functionality
-Uninstall Uninstall Claude Code Workflow System based on installation manifest
-SourceVersion <ver> Source version (passed from install-remote.sh)
-SourceBranch <name> Source branch (passed from install-remote.sh)
-SourceCommit <sha> Source commit SHA (passed from install-remote.sh)
@@ -919,6 +1377,12 @@ Examples:
# Installation without backup
$0 -NoBackup
# Uninstall Claude Code Workflow System
$0 -Uninstall
# Uninstall without confirmation prompts
$0 -Uninstall -Force
# With version info (typically called by install-remote.sh)
$0 -InstallMode Global -Force -SourceVersion "3.4.2" -SourceBranch "main" -SourceCommit "abc1234"
@@ -926,6 +1390,46 @@ EOF
}
function main() {
# Show banner first
show_banner
# Check for uninstall mode from parameter or ask user interactively
local operation_mode="Install"
if [ "$UNINSTALL" = true ]; then
operation_mode="Uninstall"
elif [ "$NON_INTERACTIVE" != true ] && [ -z "$INSTALL_MODE" ]; then
# Interactive mode selection
echo ""
local operations=(
"Install - Install Claude Code Workflow System"
"Uninstall - Remove Claude Code Workflow System"
)
local selection=$(get_user_choice "Choose operation:" "${operations[@]}")
if [[ "$selection" == Uninstall* ]]; then
operation_mode="Uninstall"
fi
fi
# Handle uninstall mode
if [ "$operation_mode" = "Uninstall" ]; then
if uninstall_claude_workflow; then
local result=0
else
local result=1
fi
if [ "$NON_INTERACTIVE" != true ]; then
echo ""
write_color "Press Enter to exit..." "$COLOR_PROMPT"
read -r
fi
return $result
fi
# Continue with installation
show_header
# Test prerequisites

990
README.md

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff