Compare commits

...

11 Commits

Author SHA1 Message Date
catlog22
78b1287ced chore: Bump version to 6.3.54 2026-01-30 15:43:20 +08:00
catlog22
c6ad8e53b9 Add flow template generator documentation for meta-skill/flow-coordinator
- Introduced a comprehensive guide for generating workflow templates.
- Detailed usage instructions with examples for creating templates.
- Outlined execution flow across three phases: Template Design, Step Definition, and JSON Generation.
- Included JavaScript code snippets for each phase to facilitate implementation.
- Provided suggested step templates for various development scenarios.
- Documented command port references and minimum execution units for clarity.
2026-01-30 15:42:08 +08:00
catlog22
4006b2a0ee feat: add N+1 planning context recording to planning-notes
- Add N+1 Context section (Decisions + Deferred) to planning-notes.md init
- Add section 3.3 N+1 Context Recording in action-planning-agent.md
- Update task-generate-agent.md Phase 2A/2B/3 prompts with N+1 recording

Supports cross-module dependency resolution tracking and deferred items
for continuous planning iterations.
2026-01-30 15:42:00 +08:00
catlog22
0bb102c56a feat: add meta-skill flow-create command for workflow template generation
Interactive command to create flow-coordinator templates with comprehensive
command selection (9 categories), minimum execution units, and command port
reference integrated from ccw/ccw-coordinator/codex-coordinator.
2026-01-30 14:27:20 +08:00
catlog22
fca03a3f9c refactor: rename meta-skill → flow-coordinator, update template cmd paths
**Major changes:**
- Rename skill from meta-skill to flow-coordinator (avoid /ccw conflicts)
- Update all 17 templates: store full /workflow: command paths in cmd field
  - Session ID prefix: ms → fc (fc-YYYYMMDD-HHMMSS)
  - Workflow path: .workflow/.meta-skill → .workflow/.flow-coordinator
- Simplify SKILL.md schema documentation
  - Streamline Status Schema section
  - Consolidate Template Schema with single example
  - Remove redundant Field Explanations and behavior tables
- All templates now store cmd as full paths (e.g. /workflow:lite-plan)
  - Eliminates need for path assembly during execution
  - Matches ccw-coordinator execution format
2026-01-30 12:29:38 +08:00
catlog22
a9df4c6659 refactor: remove the usage-recommendation section for the unified execute command, simplify the docs 2026-01-30 10:54:02 +08:00
catlog22
0a3246ab36 chore: Bump version to 6.3.53 2026-01-30 10:34:02 +08:00
catlog22
b5caee6b94 rename: collaborative-plan → collaborative-plan-with-file 2026-01-30 10:30:37 +08:00
catlog22
64d2156319 refactor: unify lite workflow artifact structure, merge exploration+understanding into planning-context
- Merge exploration.md and understanding.md into a single planning-context.md
- Add an Output Artifacts table to all lite workflows
- Unify the Output Location format in agent prompts
- Update Session Folder Structure to include planning-context.md

Affected files:
- cli-lite-planning-agent.md: update artifact docs and format templates
- collaborative-plan.md: simplify to planning-context.md + sub-plan.json
- lite-plan.md: add artifact table and output locations
- lite-fix.md: add artifact table and output locations
2026-01-30 10:25:45 +08:00
catlog22
3f46a02df3 feat(cli): add settings file support for builtin Claude
- Enhance CLI status rendering to display settings file information for builtin Claude.
- Introduce settings file input in CLI manager for configuring the path to settings.json.
- Update Claude CLI tool interface to include settingsFile property.
- Implement settings file resolution and validation in CLI executor.
- Create a new collaborative planning workflow command with detailed documentation.
- Add test scripts for debugging tool configuration and command building.
2026-01-30 10:07:02 +08:00
catlog22
4b69492b16 feat: add Codex coordinator command supporting task analysis, command chain recommendation, and sequential execution 2026-01-29 23:36:42 +08:00
41 changed files with 3659 additions and 2147 deletions


@@ -834,10 +834,35 @@ Use `analysis_results.complexity` or task count to determine structure:
- Proper linking between documents
- Consistent navigation and references
### 3.3 Guidelines Checklist
### 3.3 N+1 Context Recording
**Purpose**: Record decisions and deferred items for N+1 planning continuity.
**When**: After task generation, update `## N+1 Context` in planning-notes.md.
**What to Record**:
- **Decisions**: Architecture/technology choices with rationale (mark `Revisit?` if may change)
- **Deferred**: Items explicitly moved to N+1 with reason
**Example**:
```markdown
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
| JWT over Session | Stateless scaling | No |
| CROSS::B::api → IMPL-B1 | B1 defines base | Yes |
### Deferred
- [ ] Rate limiting - Requires Redis (N+1)
- [ ] API versioning - Low priority
```
### 3.4 Guidelines Checklist
**ALWAYS:**
- **Load planning-notes.md FIRST**: Read planning-notes.md before context-package.json. Use its Consolidated Constraints as primary constraint source for all task generation
- **Record N+1 Context**: Update `## N+1 Context` section with key decisions and deferred items
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Apply Quantification Requirements to all requirements, acceptance criteria, and modification points
- Load IMPL_PLAN template: `Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)` before generating IMPL_PLAN.md


@@ -1,13 +1,14 @@
---
name: cli-lite-planning-agent
description: |
Generic planning agent for lite-plan and lite-fix workflows. Generates structured plan JSON based on provided schema reference.
Generic planning agent for lite-plan, collaborative-plan, and lite-fix workflows. Generates structured plan JSON based on provided schema reference.
Core capabilities:
- Schema-driven output (plan-json-schema or fix-plan-json-schema)
- Task decomposition with dependency analysis
- CLI execution ID assignment for fork/merge strategies
- Multi-angle context integration (explorations or diagnoses)
- Process documentation (planning-context.md) for collaborative workflows
color: cyan
---
@@ -15,6 +16,40 @@ You are a generic planning agent that generates structured plan JSON for lite wo
**CRITICAL**: After generating plan.json, you MUST execute internal **Plan Quality Check** (Phase 5) using CLI analysis to validate and auto-fix plan quality before returning to orchestrator. Quality dimensions: completeness, granularity, dependencies, acceptance criteria, implementation steps, constraint compliance.
## Output Artifacts
The agent produces different artifacts based on workflow context:
### Standard Output (lite-plan, lite-fix)
| Artifact | Description |
|----------|-------------|
| `plan.json` | Structured plan following plan-json-schema.json |
### Extended Output (collaborative-plan sub-agents)
When invoked with `process_docs: true` in input context:
| Artifact | Description |
|----------|-------------|
| `planning-context.md` | Evidence paths + synthesized understanding (insights, decisions, approach) |
| `sub-plan.json` | Sub-plan following plan-json-schema.json with source_agent metadata |
**planning-context.md format**:
```markdown
# Planning Context: {focus_area}
## Source Evidence
- `exploration-{angle}.json` - {key finding}
- `{file}:{line}` - {what this proves}
## Understanding
- Current state: {analysis}
- Proposed approach: {strategy}
## Key Decisions
- Decision: {what} | Rationale: {why} | Evidence: {file ref}
```
## Input Context
@@ -34,10 +69,39 @@ You are a generic planning agent that generates structured plan JSON for lite wo
clarificationContext: { [question]: answer } | null,
complexity: "Low" | "Medium" | "High", // For lite-plan
severity: "Low" | "Medium" | "High" | "Critical", // For lite-fix
cli_config: { tool, template, timeout, fallback }
cli_config: { tool, template, timeout, fallback },
// Process documentation (collaborative-plan)
process_docs: boolean, // If true, generate planning-context.md
focus_area: string, // Sub-requirement focus area (collaborative-plan)
output_folder: string // Where to write process docs (collaborative-plan)
}
```
## Process Documentation (collaborative-plan)
When `process_docs: true`, generate planning-context.md before sub-plan.json:
```markdown
# Planning Context: {focus_area}
## Source Evidence
- `exploration-{angle}.json` - {key finding from exploration}
- `{file}:{line}` - {code evidence for decision}
## Understanding
- **Current State**: {what exists now}
- **Problem**: {what needs to change}
- **Approach**: {proposed solution strategy}
## Key Decisions
- Decision: {what} | Rationale: {why} | Evidence: {file:line or exploration ref}
## Dependencies
- Depends on: {other sub-requirements or none}
- Provides for: {what this enables}
```
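The `process_docs` branching above can be sketched as a small helper — this is an illustrative assumption, not the agent's actual code; `planArtifacts` and its return shape are hypothetical:

```javascript
// Sketch (assumption): choose output artifacts from the input-context fields
// documented above. process_docs=true means a collaborative-plan sub-agent,
// which writes planning-context.md before its sub-plan.json.
function planArtifacts(input) {
  if (input.process_docs) {
    return [
      `${input.output_folder}/planning-context.md`, // process doc first
      `${input.output_folder}/sub-plan.json`        // then the sub-plan
    ];
  }
  return ['plan.json']; // standard lite-plan / lite-fix output
}
```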
## Schema-Driven Output
**CRITICAL**: Read the schema reference first to determine output structure:


@@ -0,0 +1,513 @@
---
name: codex-coordinator
description: Command orchestration tool for Codex - analyze requirements, recommend command chain, execute sequentially with state persistence
argument-hint: "TASK=\"<task description>\" [--depth=standard|deep] [--auto-confirm] [--verbose]"
---
# Codex Coordinator Command
Interactive orchestration tool for Codex commands: analyze task → discover commands → recommend chain → execute sequentially → track state.
**Execution Model**: Intelligent agent-driven workflow. Claude analyzes each phase and orchestrates command execution.
## Core Concept: Minimum Execution Units
### What is a Minimum Execution Unit?
**Definition**: A set of commands that must execute together as an atomic group to achieve a meaningful workflow milestone. Splitting these commands breaks the logical flow and creates incomplete states.
**Why This Matters**:
- **Prevents Incomplete States**: Avoids stopping after task generation without executing the plan
- **User Experience**: User gets complete results, not intermediate artifacts requiring manual follow-up
- **Workflow Integrity**: Maintains logical coherence of multi-step operations
### Codex Minimum Execution Units
**Planning + Execution Units**:
| Unit Name | Commands | Purpose | Output |
|-----------|----------|---------|--------|
| **Quick Implementation** | lite-plan-a → execute | Lightweight plan and immediate execution | Working code |
| **Bug Fix** | lite-fix → execute | Quick bug diagnosis and fix execution | Fixed code |
| **Issue Workflow** | issue-discover → issue-plan → issue-queue → issue-execute | Complete issue lifecycle | Completed issues |
| **Discovery & Analysis** | issue-discover → issue-discover-by-prompt | Issue discovery with multiple perspectives | Generated issues |
| **Brainstorm to Execution** | brainstorm-with-file → execute | Brainstorm ideas then implement | Working code |
**With-File Workflows** (documented units):
| Unit Name | Commands | Purpose | Output |
|-----------|----------|---------|--------|
| **Brainstorm With File** | brainstorm-with-file | Multi-perspective ideation with documentation | brainstorm.md |
| **Debug With File** | debug-with-file | Hypothesis-driven debugging with documentation | understanding.md |
| **Analyze With File** | analyze-with-file | Collaborative analysis with documentation | discussion.md |
| **Clean & Analyze** | clean → analyze-with-file | Cleanup then analyze | Cleaned code + analysis |
### Command-to-Unit Mapping
| Command | Precedes | Atomic Units |
|---------|----------|--------------|
| lite-plan-a | execute, brainstorm-with-file | Quick Implementation |
| lite-fix | execute | Bug Fix |
| issue-discover | issue-plan | Issue Workflow |
| issue-plan | issue-queue | Issue Workflow |
| issue-queue | issue-execute | Issue Workflow |
| brainstorm-with-file | execute, issue-execute | Brainstorm to Execution |
| debug-with-file | execute | Debug With File |
| analyze-with-file | (standalone) | Analyze With File |
| clean | analyze-with-file, execute | Clean & Analyze |
| quick-plan-with-file | execute | Quick Planning with File |
| merge-plans-with-file | execute | Merge Multiple Plans |
| unified-execute-with-file | (terminal) | Execute with File Tracking |
### Atomic Group Rules
1. **Never Split Units**: Coordinator must recommend complete units, not partial chains
2. **Multi-Unit Participation**: Some commands can participate in multiple units
3. **User Override**: User can explicitly request partial execution (advanced mode)
4. **Visualization**: Pipeline view shows unit boundaries with 【 】markers
5. **Validation**: Before execution, verify all unit commands are included
**Example Pipeline with Units**:
```
Requirement →【lite-plan-a → execute】→ Code →【issue-discover → issue-plan → issue-queue → issue-execute】→ Done
             └── Quick Implementation ─┘      └─────────────── Issue Workflow ───────────────┘
```
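Rule 5 (validation) can be sketched as a membership check — an illustrative assumption with an abridged unit table; `findSplitUnits` is a hypothetical name:

```javascript
// Sketch of rule 5 (assumption, units abridged): a chain "splits" a unit if it
// includes some but not all of that unit's commands.
const UNITS = {
  'Bug Fix': ['lite-fix', 'execute'],
  'Issue Workflow': ['issue-discover', 'issue-plan', 'issue-queue', 'issue-execute']
};
function findSplitUnits(chain) {
  return Object.entries(UNITS)
    .filter(([, cmds]) =>
      cmds.some(c => chain.includes(c)) && !cmds.every(c => chain.includes(c)))
    .map(([name]) => name); // names of violated units, empty if chain is valid
}
```

A chain of just `lite-fix` would be flagged as splitting the Bug Fix unit, while `lite-fix → execute` passes.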
## 3-Phase Workflow
### Phase 1: Analyze Requirements
Parse task to extract: goal, scope, complexity, and task type.
```javascript
function analyzeRequirements(taskDescription) {
return {
goal: extractMainGoal(taskDescription), // e.g., "Fix login bug"
scope: extractScope(taskDescription), // e.g., ["auth", "login"]
complexity: determineComplexity(taskDescription), // 'simple' | 'medium' | 'complex'
task_type: detectTaskType(taskDescription) // See task type patterns below
};
}
// Task Type Detection Patterns
function detectTaskType(text) {
// Priority order (first match wins)
if (/fix|bug|error|crash|fail|debug|diagnose/.test(text)) return 'bugfix';
if (/生成|generate|discover|找出|issue|问题/.test(text)) return 'discovery';
if (/plan|规划|设计|design|analyze|分析/.test(text)) return 'analysis';
if (/清理|cleanup|clean|refactor|重构/.test(text)) return 'cleanup';
if (/头脑|brainstorm|创意|ideation/.test(text)) return 'brainstorm';
if (/合并|merge|combine|batch/.test(text)) return 'batch-planning';
return 'feature'; // Default
}
// Complexity Assessment
function determineComplexity(text) {
let score = 0;
if (/refactor|重构|migrate|迁移|architect|架构|system|系统/.test(text)) score += 2;
if (/multiple|多个|across|跨|all|所有|entire|整个/.test(text)) score += 2;
if (/integrate|集成|api|database|数据库/.test(text)) score += 1;
if (/security|安全|performance|性能|scale|扩展/.test(text)) score += 1;
return score >= 4 ? 'complex' : score >= 2 ? 'medium' : 'simple';
}
```
**Display to user**:
```
Analysis Complete:
Goal: [extracted goal]
Scope: [identified areas]
Complexity: [level]
Task Type: [detected type]
```
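Because the patterns are checked in priority order, a task that mentions both planning and a bug routes to bugfix. A simplified restatement (English patterns only, so the snippet stands alone) illustrates this:

```javascript
// Simplified restatement of the priority rule above, for illustration only:
// the bugfix pattern is tested first, so it wins over later matches.
function detectTaskType(text) {
  if (/fix|bug|error|crash|fail|debug|diagnose/.test(text)) return 'bugfix';
  if (/generate|discover|issue/.test(text)) return 'discovery';
  if (/plan|design|analyze/.test(text)) return 'analysis';
  return 'feature';
}
```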
### Phase 2: Discover Commands & Recommend Chain
Dynamic command chain assembly using task type and complexity matching.
#### Available Codex Commands (Discovery)
All commands from `~/.codex/prompts/`:
- **Planning**: @~/.codex/prompts/lite-plan-a.md, @~/.codex/prompts/lite-plan-b.md, @~/.codex/prompts/lite-plan-c.md, @~/.codex/prompts/quick-plan-with-file.md, @~/.codex/prompts/merge-plans-with-file.md
- **Execution**: @~/.codex/prompts/execute.md, @~/.codex/prompts/unified-execute-with-file.md
- **Bug Fixes**: @~/.codex/prompts/lite-fix.md, @~/.codex/prompts/debug-with-file.md
- **Discovery**: @~/.codex/prompts/issue-discover.md, @~/.codex/prompts/issue-discover-by-prompt.md, @~/.codex/prompts/issue-plan.md, @~/.codex/prompts/issue-queue.md, @~/.codex/prompts/issue-execute.md
- **Analysis**: @~/.codex/prompts/analyze-with-file.md
- **Brainstorming**: @~/.codex/prompts/brainstorm-with-file.md, @~/.codex/prompts/brainstorm-to-cycle.md
- **Cleanup**: @~/.codex/prompts/clean.md, @~/.codex/prompts/compact.md
#### Recommendation Algorithm
```javascript
async function recommendCommandChain(analysis) {
// Step 1: Determine the port flow for the task type
const { flow, depth } = determinePortFlow(analysis.task_type, analysis.complexity);
// Step 2: Claude selects the command sequence from command capabilities and task characteristics
const chain = selectChainByTaskType(analysis);
return chain;
}
// Port flow for each task type
function determinePortFlow(taskType, complexity) {
const flows = {
'bugfix': { flow: ['lite-fix', 'execute'], depth: complexity === 'complex' ? 'deep' : 'standard' },
'discovery': { flow: ['issue-discover', 'issue-plan', 'issue-queue', 'issue-execute'], depth: 'standard' },
'analysis': { flow: ['analyze-with-file'], depth: complexity === 'complex' ? 'deep' : 'standard' },
'cleanup': { flow: ['clean'], depth: 'standard' },
'brainstorm': { flow: ['brainstorm-with-file', 'execute'], depth: complexity === 'complex' ? 'deep' : 'standard' },
'batch-planning': { flow: ['merge-plans-with-file', 'execute'], depth: 'standard' },
'feature': { flow: complexity === 'complex' ? ['lite-plan-b'] : ['lite-plan-a', 'execute'], depth: complexity === 'complex' ? 'deep' : 'standard' }
};
return flows[taskType] || flows['feature'];
}
```
#### Display to User
```
Recommended Command Chain:
Pipeline:
Requirement → @~/.codex/prompts/lite-plan-a.md → Plan → @~/.codex/prompts/execute.md → Code complete
Commands:
1. @~/.codex/prompts/lite-plan-a.md
2. @~/.codex/prompts/execute.md
Proceed? [Confirm / Show Details / Adjust / Cancel]
```
### Phase 2b: Get User Confirmation
Ask user for confirmation before proceeding with execution.
```javascript
async function getUserConfirmation(chain) {
const response = await AskUserQuestion({
questions: [{
question: 'Proceed with this command chain?',
header: 'Confirm Chain',
multiSelect: false,
options: [
{ label: 'Confirm and execute', description: 'Proceed with commands' },
{ label: 'Show details', description: 'View each command' },
{ label: 'Adjust chain', description: 'Remove or reorder' },
{ label: 'Cancel', description: 'Abort' }
]
}]
});
return response;
}
```
### Phase 3: Execute Sequential Command Chain
```javascript
async function executeCommandChain(chain, analysis) {
const sessionId = `codex-coord-${new Date().toISOString().slice(0, 19).replace(/[-:]/g, '').replace('T', '-')}`; // e.g. codex-coord-20250129-143025
const stateDir = `.workflow/.codex-coordinator/${sessionId}`;
// Create state directory
const state = {
session_id: sessionId,
status: 'running',
created_at: new Date().toISOString(),
analysis: analysis,
command_chain: chain.map((cmd, idx) => ({ ...cmd, index: idx, status: 'pending' })),
execution_results: [],
};
// Save initial state
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
for (let i = 0; i < chain.length; i++) {
const cmd = chain[i];
console.log(`[${i+1}/${chain.length}] Executing: @~/.codex/prompts/${cmd.name}.md`);
// Update status to running
state.command_chain[i].status = 'running';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
try {
// Build command with parameters using full path
let commandStr = `@~/.codex/prompts/${cmd.name}.md`;
// Add parameters based on previous results and task context
if (i > 0 && state.execution_results.length > 0) {
const lastResult = state.execution_results[state.execution_results.length - 1];
commandStr += ` --resume="${lastResult.session_id || lastResult.artifact}"`;
}
// For analysis-based commands, add depth parameter
if (analysis.complexity === 'complex' && (cmd.name.includes('analyze') || cmd.name.includes('plan'))) {
commandStr += ` --depth=deep`;
}
// Add task description for planning commands
if (cmd.type === 'planning' && i === 0) {
commandStr += ` TASK="${analysis.goal}"`;
}
// Execute command via Bash (spawning as background task)
// Format: @~/.codex/prompts/<command-name>.md <parameters>
// Note: This simulates the execution; actual implementation uses hook callbacks
console.log(`Executing: ${commandStr}`);
// Save execution record
state.execution_results.push({
index: i,
command: cmd.name,
status: 'completed', // simulated run; a real execution hook would update this on completion
started_at: new Date().toISOString(),
session_id: null,
artifact: null
});
state.command_chain[i].status = 'completed';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
console.log(`[${i+1}/${chain.length}] ✓ Completed: @~/.codex/prompts/${cmd.name}.md`);
} catch (error) {
state.command_chain[i].status = 'failed';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
console.log(`❌ Command failed: ${error.message}`);
break;
}
}
state.status = state.command_chain.some(c => c.status === 'failed') ? 'failed' : 'completed';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
console.log(`\n✅ Orchestration Complete: ${state.session_id}`);
return state;
}
```
## State File Structure
**Location**: `.workflow/.codex-coordinator/{session_id}/state.json`
```json
{
"session_id": "codex-coord-20250129-143025",
"status": "running|waiting|completed|failed",
"created_at": "2025-01-29T14:30:25Z",
"updated_at": "2025-01-29T14:35:45Z",
"analysis": {
"goal": "Fix login authentication bug",
"scope": ["auth", "login"],
"complexity": "medium",
"task_type": "bugfix"
},
"command_chain": [
{
"index": 0,
"name": "lite-fix",
"type": "bugfix",
"status": "completed"
},
{
"index": 1,
"name": "execute",
"type": "execution",
"status": "pending"
}
],
"execution_results": [
{
"index": 0,
"command": "lite-fix",
"status": "completed",
"started_at": "2025-01-29T14:30:25Z",
"session_id": "fix-login-2025-01-29",
"artifact": ".workflow/.lite-fix/fix-login-2025-01-29/fix-plan.json"
}
]
}
```
### Status Values
- `running`: Orchestrator actively executing
- `waiting`: Paused, waiting for external events
- `completed`: All commands finished successfully
- `failed`: Error occurred or user aborted
## Task Type Routing (Pipeline Summary)
**Note**: 【 】marks Minimum Execution Units - these commands must execute together.
| Task Type | Pipeline | Minimum Units |
|-----------|----------|---|
| **bugfix** | Bug report →【@~/.codex/prompts/lite-fix.md → @~/.codex/prompts/execute.md】→ Fixed code | Bug Fix |
| **discovery** | Requirement →【@~/.codex/prompts/issue-discover.md → @~/.codex/prompts/issue-plan.md → @~/.codex/prompts/issue-queue.md → @~/.codex/prompts/issue-execute.md】→ Completed issues | Issue Workflow |
| **analysis** | Requirement → @~/.codex/prompts/analyze-with-file.md → Analysis report | Analyze With File |
| **cleanup** | Codebase → @~/.codex/prompts/clean.md → Cleanup complete | Cleanup |
| **brainstorm** | Topic →【@~/.codex/prompts/brainstorm-with-file.md → @~/.codex/prompts/execute.md】→ Implemented code | Brainstorm to Execution |
| **batch-planning** | Requirement set →【@~/.codex/prompts/merge-plans-with-file.md → @~/.codex/prompts/execute.md】→ Code complete | Merge Multiple Plans |
| **feature** (simple) | Requirement →【@~/.codex/prompts/lite-plan-a.md → @~/.codex/prompts/execute.md】→ Code | Quick Implementation |
| **feature** (complex) | Requirement → @~/.codex/prompts/lite-plan-b.md → Detailed plan → @~/.codex/prompts/execute.md → Code | Complex Planning |
## Available Commands Reference
### Planning Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **lite-plan-a** | Lightweight merged-mode planning | `@~/.codex/prompts/lite-plan-a.md TASK="..."` | plan.json |
| **lite-plan-b** | Multi-angle exploration planning | `@~/.codex/prompts/lite-plan-b.md TASK="..."` | plan.json |
| **lite-plan-c** | Parallel angle planning | `@~/.codex/prompts/lite-plan-c.md TASK="..."` | plan.json |
| **quick-plan-with-file** | Quick planning with file tracking | `@~/.codex/prompts/quick-plan-with-file.md TASK="..."` | plan + docs |
| **merge-plans-with-file** | Merge multiple plans | `@~/.codex/prompts/merge-plans-with-file.md PLANS="..."` | merged-plan.json |
### Execution Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **execute** | Execute tasks from plan | `@~/.codex/prompts/execute.md SESSION=".../plan/"` | Working code |
| **unified-execute-with-file** | Execute with file tracking | `@~/.codex/prompts/unified-execute-with-file.md SESSION="..."` | Code + tracking |
### Bug Fix Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **lite-fix** | Quick bug diagnosis and planning | `@~/.codex/prompts/lite-fix.md BUG="..."` | fix-plan.json |
| **debug-with-file** | Hypothesis-driven debugging | `@~/.codex/prompts/debug-with-file.md BUG="..."` | understanding.md |
### Discovery Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **issue-discover** | Multi-perspective issue discovery | `@~/.codex/prompts/issue-discover.md PATTERN="src/**"` | issues.jsonl |
| **issue-discover-by-prompt** | Prompt-based discovery | `@~/.codex/prompts/issue-discover-by-prompt.md PROMPT="..."` | issues |
| **issue-plan** | Plan issue solutions | `@~/.codex/prompts/issue-plan.md --all-pending` | issue-plans.json |
| **issue-queue** | Form execution queue | `@~/.codex/prompts/issue-queue.md --from-plan` | queue.json |
| **issue-execute** | Execute issue queue | `@~/.codex/prompts/issue-execute.md QUEUE="..."` | Completed |
### Analysis Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **analyze-with-file** | Collaborative analysis | `@~/.codex/prompts/analyze-with-file.md TOPIC="..."` | discussion.md |
### Brainstorm Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **brainstorm-with-file** | Multi-perspective brainstorming | `@~/.codex/prompts/brainstorm-with-file.md TOPIC="..."` | brainstorm.md |
| **brainstorm-to-cycle** | Bridge brainstorm to execution | `@~/.codex/prompts/brainstorm-to-cycle.md` | Executable plan |
### Utility Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **clean** | Intelligent code cleanup | `@~/.codex/prompts/clean.md` | Cleaned code |
| **compact** | Compact session memory | `@~/.codex/prompts/compact.md SESSION="..."` | Compressed state |
## Execution Flow
```
User Input: TASK="..."
Phase 1: analyzeRequirements(task)
Phase 2: recommendCommandChain(analysis)
Display pipeline and commands
User Confirmation
Phase 3: executeCommandChain(chain, analysis)
├─ For each command:
│ ├─ Update state to "running"
│ ├─ Build command string with parameters
│ ├─ Execute @~/.codex/prompts/<command>.md with parameters
│ ├─ Save execution results
│ └─ Update state to "completed"
Output completion summary
```
## Key Design Principles
1. **Atomic Execution** - Never split minimum execution units
2. **State Persistence** - All state saved to JSON
3. **User Control** - Confirmation before execution
4. **Context Passing** - Parameters chain across commands
5. **Resume Support** - Can resume from state.json
6. **Intelligent Routing** - Task type determines command chain
7. **Complexity Awareness** - Different paths for simple vs complex tasks
## Command Invocation Format
**Format**: `@~/.codex/prompts/<command-name>.md <parameters>`
**Examples**:
```bash
@~/.codex/prompts/lite-plan-a.md TASK="Implement user authentication"
@~/.codex/prompts/execute.md SESSION=".workflow/.lite-plan/..."
@~/.codex/prompts/lite-fix.md BUG="Login fails with 404 error"
@~/.codex/prompts/issue-discover.md PATTERN="src/auth/**"
@~/.codex/prompts/brainstorm-with-file.md TOPIC="Improve user onboarding"
```
## Error Handling
| Situation | Action |
|-----------|--------|
| Unknown task type | Default to feature implementation |
| Command not found | Error: command not available |
| Execution fails | Report error, offer retry or skip |
| Invalid parameters | Validate and ask for correction |
| Circular dependency | Detect and report |
| All commands fail | Report and suggest manual intervention |
## Session Management
**Resume Previous Session**:
```
1. Find session in .workflow/.codex-coordinator/
2. Load state.json
3. Identify last completed command
4. Restart from next pending command
```
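Steps 3-4 above can be sketched as a lookup over the documented state.json shape — `nextPendingIndex` is a hypothetical helper name, not part of the coordinator:

```javascript
// Sketch of steps 3-4 (assumption: state.json has the shape documented above):
// find the first command that has not completed, and resume from there.
function nextPendingIndex(state) {
  const idx = state.command_chain.findIndex(c => c.status !== 'completed');
  return idx === -1 ? null : idx; // null → nothing left to resume
}
```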
**View Session Progress**:
```
cat .workflow/.codex-coordinator/{session-id}/state.json
```
---
## Execution Instructions
The coordinator workflow follows these steps:
1. **Parse Input**: Extract task description from TASK parameter
2. **Analyze**: Determine goal, scope, complexity, and task type
3. **Recommend**: Build optimal command chain based on analysis
4. **Confirm**: Display pipeline and request user approval
5. **Execute**: Run commands sequentially with state tracking
6. **Report**: Display final results and artifacts
To use this coordinator, invoke it as a Claude Code command (not a Codex command).
From the Claude Code CLI, Codex commands are invoked like:
```bash
@~/.codex/prompts/lite-plan-a.md TASK="Your task description"
```
Or with options:
```bash
@~/.codex/prompts/lite-plan-a.md TASK="..." --depth=deep
```
This coordinator orchestrates such Codex commands based on your task requirements.


@@ -0,0 +1,675 @@
# Flow Template Generator
Generate workflow templates for meta-skill/flow-coordinator.
## Usage
```
/meta-skill:flow-create [template-name] [--output <path>]
```
**Examples**:
```bash
/meta-skill:flow-create bugfix-v2
/meta-skill:flow-create my-workflow --output ~/.claude/skills/my-skill/templates/
```
## Execution Flow
```
User Input → Phase 1: Template Design → Phase 2: Step Definition → Phase 3: Generate JSON
                     ↓                          ↓                         ↓
             Name + Description       Define workflow steps      Write template file
```
---
## Phase 1: Template Design
Gather basic template information:
```javascript
async function designTemplate(input) {
const templateName = parseTemplateName(input) || await askTemplateName();
const metadata = await AskUserQuestion({
questions: [
{
question: "What is the purpose of this workflow template?",
header: "Purpose",
options: [
{ label: "Feature Development", description: "Implement new features with planning and testing" },
{ label: "Bug Fix", description: "Diagnose and fix bugs with verification" },
{ label: "TDD Development", description: "Test-driven development workflow" },
{ label: "Code Review", description: "Review cycle with findings and fixes" },
{ label: "Testing", description: "Test generation and validation" },
{ label: "Issue Workflow", description: "Complete issue lifecycle (discover → plan → queue → execute)" },
{ label: "With-File Workflow", description: "Documented exploration (brainstorm/debug/analyze)" },
{ label: "Custom", description: "Define custom workflow purpose" }
],
multiSelect: false
},
{
question: "What complexity level?",
header: "Level",
options: [
{ label: "Level 1 (Rapid)", description: "1-2 steps, ultra-lightweight (lite-lite-lite)" },
{ label: "Level 2 (Lightweight)", description: "2-4 steps, quick implementation" },
{ label: "Level 3 (Standard)", description: "4-6 steps, with verification and testing" },
{ label: "Level 4 (Full)", description: "6+ steps, brainstorm + full workflow" }
],
multiSelect: false
}
]
});
return {
name: templateName,
description: generateDescription(templateName, metadata.Purpose),
level: parseLevel(metadata.Level),
purpose: metadata.Purpose
};
}
```
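The snippet above calls `parseLevel` and `generateDescription` without defining them; minimal hypothetical implementations (assumptions, not part of this command file) might look like:

```javascript
// Hypothetical helpers (assumptions): map the selected Level label to a number,
// and derive a one-line description from the name and purpose.
function parseLevel(label) {
  const m = /Level (\d)/.exec(label);
  return m ? Number(m[1]) : 2; // default: Level 2 (Lightweight)
}
function generateDescription(name, purpose) {
  return `${purpose} workflow template: ${name}`;
}
```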
---
## Phase 2: Step Definition
### Step 2.1: Select Command Category
```javascript
async function selectCommandCategory() {
return await AskUserQuestion({
questions: [{
question: "Select command category",
header: "Category",
options: [
{ label: "Planning", description: "lite-plan, plan, multi-cli-plan, tdd-plan, quick-plan-with-file" },
{ label: "Execution", description: "lite-execute, execute, unified-execute-with-file" },
{ label: "Testing", description: "test-fix-gen, test-cycle-execute, test-gen, tdd-verify" },
{ label: "Review", description: "review-session-cycle, review-module-cycle, review-cycle-fix" },
{ label: "Bug Fix", description: "lite-fix, debug-with-file" },
{ label: "Brainstorm", description: "brainstorm-with-file, brainstorm:auto-parallel" },
{ label: "Analysis", description: "analyze-with-file" },
{ label: "Issue", description: "discover, plan, queue, execute, from-brainstorm, convert-to-plan" },
{ label: "Utility", description: "clean, init, replan, status" }
],
multiSelect: false
}]
});
}
```
### Step 2.2: Select Specific Command
```javascript
async function selectCommand(category) {
const commandOptions = {
'Planning': [
{ label: "/workflow:lite-plan", description: "Lightweight merged-mode planning" },
{ label: "/workflow:plan", description: "Full planning with architecture design" },
{ label: "/workflow:multi-cli-plan", description: "Multi-CLI collaborative planning (Gemini+Codex+Claude)" },
{ label: "/workflow:tdd-plan", description: "TDD workflow planning with Red-Green-Refactor" },
{ label: "/workflow:quick-plan-with-file", description: "Rapid planning with minimal docs" },
{ label: "/workflow:plan-verify", description: "Verify plan against requirements" },
{ label: "/workflow:replan", description: "Update plan and execute changes" }
],
'Execution': [
{ label: "/workflow:lite-execute", description: "Execute from in-memory plan" },
{ label: "/workflow:execute", description: "Execute from planning session" },
{ label: "/workflow:unified-execute-with-file", description: "Universal execution engine" },
{ label: "/workflow:lite-lite-lite", description: "Ultra-lightweight multi-tool execution" }
],
'Testing': [
{ label: "/workflow:test-fix-gen", description: "Generate test tasks for specific issues" },
{ label: "/workflow:test-cycle-execute", description: "Execute iterative test-fix cycle (>=95% pass)" },
{ label: "/workflow:test-gen", description: "Generate comprehensive test suite" },
{ label: "/workflow:tdd-verify", description: "Verify TDD workflow compliance" }
],
'Review': [
{ label: "/workflow:review-session-cycle", description: "Session-based multi-dimensional code review" },
{ label: "/workflow:review-module-cycle", description: "Module-focused code review" },
{ label: "/workflow:review-cycle-fix", description: "Fix review findings with prioritization" },
{ label: "/workflow:review", description: "Post-implementation review" }
],
'Bug Fix': [
{ label: "/workflow:lite-fix", description: "Lightweight bug diagnosis and fix" },
{ label: "/workflow:debug-with-file", description: "Hypothesis-driven debugging with documentation" }
],
'Brainstorm': [
{ label: "/workflow:brainstorm-with-file", description: "Multi-perspective ideation with documentation" },
{ label: "/workflow:brainstorm:auto-parallel", description: "Parallel multi-role brainstorming" }
],
'Analysis': [
{ label: "/workflow:analyze-with-file", description: "Collaborative analysis with documentation" }
],
'Issue': [
{ label: "/issue:discover", description: "Multi-perspective issue discovery" },
{ label: "/issue:discover-by-prompt", description: "Prompt-based issue discovery with Gemini" },
{ label: "/issue:plan", description: "Plan issue solutions" },
{ label: "/issue:queue", description: "Form execution queue with conflict analysis" },
{ label: "/issue:execute", description: "Execute issue queue with DAG orchestration" },
{ label: "/issue:from-brainstorm", description: "Convert brainstorm to issue" },
{ label: "/issue:convert-to-plan", description: "Convert planning artifacts to issue solutions" }
],
'Utility': [
{ label: "/workflow:clean", description: "Intelligent code cleanup" },
{ label: "/workflow:init", description: "Initialize project-level state" },
{ label: "/workflow:replan", description: "Interactive workflow replanning" },
{ label: "/workflow:status", description: "Generate workflow status views" }
]
};
return await AskUserQuestion({
questions: [{
question: `Select ${category} command`,
header: "Command",
options: commandOptions[category] || commandOptions['Planning'],
multiSelect: false
}]
});
}
```
### Step 2.3: Select Execution Unit
```javascript
async function selectExecutionUnit() {
return await AskUserQuestion({
questions: [{
question: "Select execution unit (atomic command group)",
header: "Unit",
options: [
// Planning + Execution Units
{ label: "quick-implementation", description: "【lite-plan → lite-execute】" },
{ label: "multi-cli-planning", description: "【multi-cli-plan → lite-execute】" },
{ label: "full-planning-execution", description: "【plan → execute】" },
{ label: "verified-planning-execution", description: "【plan → plan-verify → execute】" },
{ label: "replanning-execution", description: "【replan → execute】" },
{ label: "tdd-planning-execution", description: "【tdd-plan → execute】" },
// Testing Units
{ label: "test-validation", description: "【test-fix-gen → test-cycle-execute】" },
{ label: "test-generation-execution", description: "【test-gen → execute】" },
// Review Units
{ label: "code-review", description: "【review-*-cycle → review-cycle-fix】" },
// Bug Fix Units
{ label: "bug-fix", description: "【lite-fix → lite-execute】" },
// Issue Units
{ label: "issue-workflow", description: "【discover → plan → queue → execute】" },
{ label: "rapid-to-issue", description: "【lite-plan → convert-to-plan → queue → execute】" },
{ label: "brainstorm-to-issue", description: "【from-brainstorm → queue → execute】" },
// With-File Units (self-contained)
{ label: "brainstorm-with-file", description: "Self-contained brainstorming workflow" },
{ label: "debug-with-file", description: "Self-contained debugging workflow" },
{ label: "analyze-with-file", description: "Self-contained analysis workflow" },
// Standalone
{ label: "standalone", description: "Single command, no atomic grouping" }
],
multiSelect: false
}]
});
}
```
### Step 2.4: Select Execution Mode
```javascript
async function selectExecutionMode() {
return await AskUserQuestion({
questions: [{
question: "Execution mode for this step?",
header: "Mode",
options: [
{ label: "mainprocess", description: "Run in main process (blocking, synchronous)" },
{ label: "async", description: "Run asynchronously (background, hook callbacks)" }
],
multiSelect: false
}]
});
}
```
### Complete Step Definition Flow
```javascript
async function defineSteps(templateDesign) {
// Suggest steps based on purpose
const suggestedSteps = getSuggestedSteps(templateDesign.purpose);
const customize = await AskUserQuestion({
questions: [{
question: "Use suggested steps or customize?",
header: "Steps",
options: [
{ label: "Use Suggested", description: `Suggested: ${suggestedSteps.map(s => s.cmd).join(' → ')}` },
{ label: "Customize", description: "Modify or add custom steps" },
{ label: "Start Empty", description: "Define all steps from scratch" }
],
multiSelect: false
}]
});
if (customize.Steps === "Use Suggested") {
return suggestedSteps;
}
// Interactive step definition
const steps = [];
let addMore = true;
while (addMore) {
const category = await selectCommandCategory();
const command = await selectCommand(category.Category);
const unit = await selectExecutionUnit();
const execMode = await selectExecutionMode();
const contextHint = await askContextHint(command.Command);
steps.push({
cmd: command.Command,
args: command.Command.includes('plan') || command.Command.includes('fix') ? '"{{goal}}"' : undefined,
unit: unit.Unit,
execution: {
type: "slash-command",
mode: execMode.Mode
},
contextHint: contextHint
});
const continueAdding = await AskUserQuestion({
questions: [{
question: `Added step ${steps.length}: ${command.Command}. Add another?`,
header: "Continue",
options: [
{ label: "Add More", description: "Define another step" },
{ label: "Done", description: "Finish step definition" }
],
multiSelect: false
}]
});
addMore = continueAdding.Continue === "Add More";
}
return steps;
}
```
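The flow above calls `getSuggestedSteps()` without defining it. A hypothetical sketch is shown below; the purpose-to-steps mapping mirrors the suggested templates in the next section, but the function name, shape, and fallback behavior are assumptions, not the shipped implementation (`askContextHint` is similarly assumed).

```javascript
// Hypothetical helper assumed by defineSteps(); shapes mirror the suggested
// step templates, but this is a sketch, not the shipped implementation.
function getSuggestedSteps(purpose) {
  // Small factory so each step carries the full execution block
  const step = (cmd, unit, mode, contextHint, args) => ({
    cmd, args, unit,
    execution: { type: "slash-command", mode },
    contextHint
  });
  const suggestions = {
    "Feature Development": [
      step("/workflow:lite-plan", "quick-implementation", "mainprocess", "Create lightweight implementation plan", '"{{goal}}"'),
      step("/workflow:lite-execute", "quick-implementation", "mainprocess", "Execute implementation based on plan", "--in-memory"),
      step("/workflow:test-fix-gen", "test-validation", "mainprocess", "Generate test tasks"),
      step("/workflow:test-cycle-execute", "test-validation", "async", "Execute test-fix cycle")
    ],
    "Bug Fix": [
      step("/workflow:lite-fix", "bug-fix", "mainprocess", "Diagnose and plan bug fix", '"{{goal}}"'),
      step("/workflow:lite-execute", "bug-fix", "mainprocess", "Execute bug fix", "--in-memory")
    ]
  };
  // Fall back to the rapid feature flow when the purpose is unrecognized
  return suggestions[purpose] || suggestions["Feature Development"];
}
```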
---
## Suggested Step Templates
### Feature Development (Level 2 - Rapid)
```json
{
"name": "rapid",
"description": "Quick implementation with testing",
"level": 2,
"steps": [
{ "cmd": "/workflow:lite-plan", "args": "\"{{goal}}\"", "unit": "quick-implementation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create lightweight implementation plan" },
{ "cmd": "/workflow:lite-execute", "args": "--in-memory", "unit": "quick-implementation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Execute implementation based on plan" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate test tasks" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test-fix cycle until pass rate >= 95%" }
]
}
```
### Feature Development (Level 3 - Coupled)
```json
{
"name": "coupled",
"description": "Full workflow with verification, review, and testing",
"level": 3,
"steps": [
{ "cmd": "/workflow:plan", "args": "\"{{goal}}\"", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create detailed implementation plan" },
{ "cmd": "/workflow:plan-verify", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Verify plan against requirements" },
{ "cmd": "/workflow:execute", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute implementation" },
{ "cmd": "/workflow:review-session-cycle", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Multi-dimensional code review" },
{ "cmd": "/workflow:review-cycle-fix", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Fix review findings" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate test tasks" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test-fix cycle" }
]
}
```
### Bug Fix (Level 2)
```json
{
"name": "bugfix",
"description": "Bug diagnosis and fix with testing",
"level": 2,
"steps": [
{ "cmd": "/workflow:lite-fix", "args": "\"{{goal}}\"", "unit": "bug-fix", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Diagnose and plan bug fix" },
{ "cmd": "/workflow:lite-execute", "args": "--in-memory", "unit": "bug-fix", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Execute bug fix" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate regression tests" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Verify fix with tests" }
]
}
```
### Bug Fix Hotfix (Level 2)
```json
{
"name": "bugfix-hotfix",
"description": "Urgent production bug fix (no tests)",
"level": 2,
"steps": [
{ "cmd": "/workflow:lite-fix", "args": "--hotfix \"{{goal}}\"", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Emergency hotfix mode" }
]
}
```
### TDD Development (Level 3)
```json
{
"name": "tdd",
"description": "Test-driven development with Red-Green-Refactor",
"level": 3,
"steps": [
{ "cmd": "/workflow:tdd-plan", "args": "\"{{goal}}\"", "unit": "tdd-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create TDD task chain" },
{ "cmd": "/workflow:execute", "unit": "tdd-planning-execution", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute TDD cycle" },
{ "cmd": "/workflow:tdd-verify", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Verify TDD compliance" }
]
}
```
### Code Review (Level 3)
```json
{
"name": "review",
"description": "Code review cycle with fixes and testing",
"level": 3,
"steps": [
{ "cmd": "/workflow:review-session-cycle", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Multi-dimensional code review" },
{ "cmd": "/workflow:review-cycle-fix", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Fix review findings" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate tests for fixes" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Verify fixes pass tests" }
]
}
```
### Test Fix (Level 3)
```json
{
"name": "test-fix",
"description": "Fix failing tests",
"level": 3,
"steps": [
{ "cmd": "/workflow:test-fix-gen", "args": "\"{{goal}}\"", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate test fix tasks" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test-fix cycle" }
]
}
```
### Issue Workflow (Level Issue)
```json
{
"name": "issue",
"description": "Complete issue lifecycle",
"level": "Issue",
"steps": [
{ "cmd": "/issue:discover", "unit": "issue-workflow", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Discover issues from codebase" },
{ "cmd": "/issue:plan", "args": "--all-pending", "unit": "issue-workflow", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Plan issue solutions" },
{ "cmd": "/issue:queue", "unit": "issue-workflow", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Form execution queue" },
{ "cmd": "/issue:execute", "unit": "issue-workflow", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute issue queue" }
]
}
```
### Rapid to Issue (Level 2)
```json
{
"name": "rapid-to-issue",
"description": "Bridge lightweight planning to issue workflow",
"level": 2,
"steps": [
{ "cmd": "/workflow:lite-plan", "args": "\"{{goal}}\"", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create lightweight plan" },
{ "cmd": "/issue:convert-to-plan", "args": "--latest-lite-plan -y", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Convert to issue plan" },
{ "cmd": "/issue:queue", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Form execution queue" },
{ "cmd": "/issue:execute", "args": "--queue auto", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute issue queue" }
]
}
```
### Brainstorm to Issue (Level 4)
```json
{
"name": "brainstorm-to-issue",
"description": "Bridge brainstorm session to issue workflow",
"level": 4,
"steps": [
{ "cmd": "/issue:from-brainstorm", "args": "SESSION=\"{{session}}\" --auto", "unit": "brainstorm-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Convert brainstorm to issue" },
{ "cmd": "/issue:queue", "unit": "brainstorm-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Form execution queue" },
{ "cmd": "/issue:execute", "args": "--queue auto", "unit": "brainstorm-to-issue", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute issue queue" }
]
}
```
### With-File: Brainstorm (Level 4)
```json
{
"name": "brainstorm",
"description": "Multi-perspective ideation with documentation",
"level": 4,
"steps": [
{ "cmd": "/workflow:brainstorm-with-file", "args": "\"{{goal}}\"", "unit": "brainstorm-with-file", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Multi-CLI brainstorming with documented diverge-converge cycles" }
]
}
```
### With-File: Debug (Level 3)
```json
{
"name": "debug",
"description": "Hypothesis-driven debugging with documentation",
"level": 3,
"steps": [
{ "cmd": "/workflow:debug-with-file", "args": "\"{{goal}}\"", "unit": "debug-with-file", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Hypothesis-driven debugging with Gemini validation" }
]
}
```
### With-File: Analyze (Level 3)
```json
{
"name": "analyze",
"description": "Collaborative analysis with documentation",
"level": 3,
"steps": [
{ "cmd": "/workflow:analyze-with-file", "args": "\"{{goal}}\"", "unit": "analyze-with-file", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Multi-round collaborative analysis with CLI exploration" }
]
}
```
### Full Workflow (Level 4)
```json
{
"name": "full",
"description": "Complete workflow: brainstorm → plan → execute → test",
"level": 4,
"steps": [
{ "cmd": "/workflow:brainstorm:auto-parallel", "args": "\"{{goal}}\"", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Parallel multi-perspective brainstorming" },
{ "cmd": "/workflow:plan", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create detailed plan from brainstorm" },
{ "cmd": "/workflow:plan-verify", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Verify plan quality" },
{ "cmd": "/workflow:execute", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute implementation" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate comprehensive tests" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test cycle" }
]
}
```
### Multi-CLI Planning (Level 3)
```json
{
"name": "multi-cli-plan",
"description": "Multi-CLI collaborative planning with cross-verification",
"level": 3,
"steps": [
{ "cmd": "/workflow:multi-cli-plan", "args": "\"{{goal}}\"", "unit": "multi-cli-planning", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Gemini+Codex+Claude collaborative planning" },
{ "cmd": "/workflow:lite-execute", "args": "--in-memory", "unit": "multi-cli-planning", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Execute converged plan" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate tests" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test cycle" }
]
}
```
### Ultra-Lightweight (Level 1)
```json
{
"name": "lite-lite-lite",
"description": "Ultra-lightweight multi-tool execution",
"level": 1,
"steps": [
{ "cmd": "/workflow:lite-lite-lite", "args": "\"{{goal}}\"", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Direct execution with minimal overhead" }
]
}
```
---
## Command Port Reference
Each command has input/output ports for pipeline composition:
| Command | Input Port | Output Port | Atomic Unit |
|---------|------------|-------------|-------------|
| **Planning** | | | |
| lite-plan | requirement | plan | quick-implementation |
| plan | requirement | detailed-plan | full-planning-execution |
| plan-verify | detailed-plan | verified-plan | verified-planning-execution |
| multi-cli-plan | requirement | multi-cli-plan | multi-cli-planning |
| tdd-plan | requirement | tdd-tasks | tdd-planning-execution |
| replan | session, feedback | replan | replanning-execution |
| **Execution** | | | |
| lite-execute | plan, multi-cli-plan, lite-fix | code | (multiple) |
| execute | detailed-plan, verified-plan, replan, tdd-tasks | code | (multiple) |
| **Testing** | | | |
| test-fix-gen | failing-tests, session | test-tasks | test-validation |
| test-cycle-execute | test-tasks | test-passed | test-validation |
| test-gen | code, session | test-tasks | test-generation-execution |
| tdd-verify | code | tdd-verified | standalone |
| **Review** | | | |
| review-session-cycle | code, session | review-verified | code-review |
| review-module-cycle | module-pattern | review-verified | code-review |
| review-cycle-fix | review-findings | fixed-code | code-review |
| **Bug Fix** | | | |
| lite-fix | bug-report | lite-fix | bug-fix |
| debug-with-file | bug-report | understanding-document | debug-with-file |
| **With-File** | | | |
| brainstorm-with-file | exploration-topic | brainstorm-document | brainstorm-with-file |
| analyze-with-file | analysis-topic | discussion-document | analyze-with-file |
| **Issue** | | | |
| issue:discover | codebase | pending-issues | issue-workflow |
| issue:plan | pending-issues | issue-plans | issue-workflow |
| issue:queue | issue-plans, converted-plan | execution-queue | issue-workflow |
| issue:execute | execution-queue | completed-issues | issue-workflow |
| issue:convert-to-plan | plan | converted-plan | rapid-to-issue |
| issue:from-brainstorm | brainstorm-document | converted-plan | brainstorm-to-issue |
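Port matching can be checked mechanically: a pipeline is well-formed when each command's output port appears among the next command's input ports. The sketch below is illustrative only, using a hand-copied subset of the table above; real pipelines also pass session context out-of-band, so a failed strict match is a warning rather than a hard error.

```javascript
// Hand-copied subset of the port table; shape is an assumption for this sketch.
const PORTS = {
  "lite-plan":          { input: ["requirement"], output: "plan" },
  "lite-execute":       { input: ["plan", "multi-cli-plan", "lite-fix"], output: "code" },
  "test-fix-gen":       { input: ["failing-tests", "session"], output: "test-tasks" },
  "test-cycle-execute": { input: ["test-tasks"], output: "test-passed" }
};

// Walk consecutive pairs and verify output→input compatibility
function checkPipeline(cmds) {
  for (let i = 1; i < cmds.length; i++) {
    const prev = PORTS[cmds[i - 1]], next = PORTS[cmds[i]];
    if (!prev || !next) return { ok: false, at: i, reason: "unknown command" };
    if (!next.input.includes(prev.output)) {
      return { ok: false, at: i, reason: `'${prev.output}' not accepted by ${cmds[i]}` };
    }
  }
  return { ok: true };
}
```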
---
## Minimum Execution Units
**Definition**: Commands that must execute together as an atomic group.
| Unit Name | Commands | Purpose |
|-----------|----------|---------|
| **quick-implementation** | lite-plan → lite-execute | Lightweight plan and execution |
| **multi-cli-planning** | multi-cli-plan → lite-execute | Multi-perspective planning and execution |
| **bug-fix** | lite-fix → lite-execute | Bug diagnosis and fix |
| **full-planning-execution** | plan → execute | Detailed planning and execution |
| **verified-planning-execution** | plan → plan-verify → execute | Planning with verification |
| **replanning-execution** | replan → execute | Update plan and execute |
| **tdd-planning-execution** | tdd-plan → execute | TDD planning and execution |
| **test-validation** | test-fix-gen → test-cycle-execute | Test generation and fix cycle |
| **test-generation-execution** | test-gen → execute | Generate and execute tests |
| **code-review** | review-*-cycle → review-cycle-fix | Review and fix findings |
| **issue-workflow** | discover → plan → queue → execute | Complete issue lifecycle |
| **rapid-to-issue** | lite-plan → convert-to-plan → queue → execute | Bridge to issue workflow |
| **brainstorm-to-issue** | from-brainstorm → queue → execute | Brainstorm to issue bridge |
| **brainstorm-with-file** | (self-contained) | Multi-perspective ideation |
| **debug-with-file** | (self-contained) | Hypothesis-driven debugging |
| **analyze-with-file** | (self-contained) | Collaborative analysis |
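Because a unit is atomic, the steps that share a `unit` value should appear contiguously in a template. A sketch of that check follows; the exemption for `standalone` steps and the exact rule are assumed semantics, not a documented invariant.

```javascript
// Assumed invariant: steps of one minimum execution unit must be contiguous.
// "standalone" steps carry no grouping, so they are exempt from the check.
function checkAtomicUnits(steps) {
  const closed = new Set(); // units whose contiguous run has already ended
  let current = null;
  for (const s of steps) {
    if (s.unit === current) continue; // still inside the same run
    if (s.unit !== "standalone" && closed.has(s.unit)) {
      return { ok: false, unit: s.unit, reason: "unit interleaved with other steps" };
    }
    if (current && current !== "standalone") closed.add(current);
    current = s.unit;
  }
  return { ok: true };
}
```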
---
## Phase 3: Generate JSON
```javascript
async function generateTemplate(design, steps, outputPath) {
const template = {
name: design.name,
description: design.description,
level: design.level,
steps: steps
};
const finalPath = outputPath || `~/.claude/skills/flow-coordinator/templates/${design.name}.json`;
// Write template
Write(finalPath, JSON.stringify(template, null, 2));
// Validate
const validation = validateTemplate(template);
console.log(`✅ Template created: ${finalPath}`);
console.log(` Steps: ${template.steps.length}`);
console.log(` Level: ${template.level}`);
console.log(` Units: ${[...new Set(template.steps.map(s => s.unit))].join(', ')}`);
return { path: finalPath, template, validation };
}
```
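`validateTemplate()` is called above but not defined in this document. A minimal sketch of the checks it might perform is shown below; the field names come from the template schema, but the specific validation rules are assumptions.

```javascript
// Minimal sketch of validateTemplate(); rules are assumptions, not the
// shipped validator. Returns { valid, errors } for reporting.
function validateTemplate(template) {
  const errors = [];
  if (!template.name) errors.push("missing name");
  if (!template.description) errors.push("missing description");
  if (!Array.isArray(template.steps) || template.steps.length === 0) {
    errors.push("steps must be a non-empty array");
  } else {
    template.steps.forEach((s, i) => {
      if (!s.cmd || !s.cmd.startsWith("/")) errors.push(`step ${i}: cmd must be a slash command`);
      if (!s.unit) errors.push(`step ${i}: missing unit`);
      const mode = s.execution && s.execution.mode;
      if (!["mainprocess", "async"].includes(mode)) errors.push(`step ${i}: execution.mode must be mainprocess|async`);
    });
  }
  return { valid: errors.length === 0, errors };
}
```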
---
## Output Format
```json
{
"name": "template-name",
"description": "Template description",
"level": 2,
"steps": [
{
"cmd": "/workflow:command",
"args": "\"{{goal}}\"",
"unit": "unit-name",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Description of what this step does"
}
]
}
```
---
## Examples
**Create a quick bugfix template**:
```
/meta-skill:flow-create hotfix-simple
→ Purpose: Bug Fix
→ Level: 2 (Lightweight)
→ Steps: Use Suggested
→ Output: ~/.claude/skills/flow-coordinator/templates/hotfix-simple.json
```
**Create a custom multi-stage workflow**:
```
/meta-skill:flow-create complex-feature --output ~/.claude/skills/my-project/templates/
→ Purpose: Feature Development
→ Level: 3 (Standard)
→ Steps: Customize
→ Step 1: /workflow:brainstorm:auto-parallel (standalone, mainprocess)
→ Step 2: /workflow:plan (verified-planning-execution, mainprocess)
→ Step 3: /workflow:plan-verify (verified-planning-execution, mainprocess)
→ Step 4: /workflow:execute (verified-planning-execution, async)
→ Step 5: /workflow:review-session-cycle (code-review, mainprocess)
→ Step 6: /workflow:review-cycle-fix (code-review, mainprocess)
→ Done
→ Output: ~/.claude/skills/my-project/templates/complex-feature.json
```

---
name: workflow:collaborative-plan-with-file
description: Unified collaborative planning with dynamic requirement splitting, parallel sub-agent exploration/understanding/planning, and automatic merge. Each agent maintains process files for full traceability.
argument-hint: "[-y|--yes] <task description> [--max-agents=5] [--depth=normal|deep] [--merge-rule=consensus|priority]"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Bash(*), Write(*), Glob(*), Grep(*), mcp__ace-tool__search_context(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-approve splits, use default merge rule, skip confirmations.
# Collaborative Planning Command
## Quick Start
```bash
# Basic usage
/workflow:collaborative-plan-with-file "Implement real-time notification system"
# With options
/workflow:collaborative-plan-with-file "Refactor authentication module" --max-agents=4
/workflow:collaborative-plan-with-file "Add payment gateway support" --depth=deep
/workflow:collaborative-plan-with-file "Migrate to microservices" --merge-rule=priority
```
**Context Source**: ACE semantic search + Per-agent CLI exploration
**Output Directory**: `.workflow/.planning/{session-id}/`
**Default Max Agents**: 5 (actual count based on requirement complexity)
**CLI Tools**: cli-lite-planning-agent (internally calls ccw cli with gemini/codex/qwen)
**Schema**: plan-json-schema.json (sub-plans & final plan share same base schema)
## Output Artifacts
### Per Sub-Agent (Phase 2)
| Artifact | Description |
|----------|-------------|
| `planning-context.md` | Evidence paths + synthesized understanding |
| `sub-plan.json` | Sub-plan following plan-json-schema.json |
### Final Output (Phase 4)
| Artifact | Description |
|----------|-------------|
| `requirement-analysis.json` | Requirement breakdown and sub-agent assignments |
| `conflicts.json` | Detected conflicts between sub-plans |
| `plan.json` | Merged plan (plan-json-schema + merge_metadata) |
| `plan.md` | Human-readable plan summary |
**Agent**: `cli-lite-planning-agent` with `process_docs: true` for sub-agents
## Overview
Unified collaborative planning workflow that:
1. **Analyzes** complex requirements and splits into sub-requirements
2. **Spawns** parallel sub-agents, each responsible for one sub-requirement
3. **Each agent** maintains process files: planning-context.md + sub-plan.json
4. **Merges** all sub-plans into unified plan.json with conflict resolution
```
┌─────────────────────────────────────────────────────────────────────────┐
│ COLLABORATIVE PLANNING │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ Phase 1: Requirement Analysis & Splitting │
│ ├─ Analyze requirement complexity │
│ ├─ Identify 2-5 sub-requirements (focus areas) │
│ └─ Write requirement-analysis.json │
│ │
│ Phase 2: Parallel Sub-Agent Execution │
│ ┌──────────────┬──────────────┬──────────────┐ │
│ │ Agent 1 │ Agent 2 │ Agent N │ │
│ ├──────────────┼──────────────┼──────────────┤ │
│ │ planning │ planning │ planning │ → planning-context.md│
│ │ + sub-plan │ + sub-plan │ + sub-plan │ → sub-plan.json │
│ └──────────────┴──────────────┴──────────────┘ │
│ │
│ Phase 3: Cross-Verification & Conflict Detection │
│ ├─ Load all sub-plan.json files │
│ ├─ Detect conflicts (effort, approach, dependencies) │
│ └─ Write conflicts.json │
│ │
│ Phase 4: Merge & Synthesis │
│ ├─ Resolve conflicts using merge-rule │
│ ├─ Merge all sub-plans into unified plan │
│ └─ Write plan.json + plan.md │
│ │
└─────────────────────────────────────────────────────────────────────────┘
```
## Output Structure
```
.workflow/.planning/{CPLAN-slug-YYYY-MM-DD}/
├── requirement-analysis.json # Phase 1: Requirement breakdown
├── agents/ # Phase 2: Per-agent process files
│ ├── {focus-area-1}/
│ │ ├── planning-context.md # Evidence + understanding
│ │ └── sub-plan.json # Agent's plan for this focus area
│ ├── {focus-area-2}/
│ │ └── ...
│ └── {focus-area-N}/
│ └── ...
├── conflicts.json # Phase 3: Detected conflicts
├── plan.json # Phase 4: Unified merged plan
└── plan.md # Phase 4: Human-readable plan
```
## Implementation
### Session Initialization
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const taskDescription = "$ARGUMENTS"
const taskSlug = taskDescription.toLowerCase()
.replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
.substring(0, 30)
const sessionId = `CPLAN-${taskSlug}-${getUtc8ISOString().substring(0, 10)}`
const sessionFolder = `.workflow/.planning/${sessionId}`
// Parse options
const maxAgents = parseInt($ARGUMENTS.match(/--max-agents=(\d+)/)?.[1] || '5')
const depth = $ARGUMENTS.match(/--depth=(normal|deep)/)?.[1] || 'normal'
const mergeRule = $ARGUMENTS.match(/--merge-rule=(consensus|priority)/)?.[1] || 'consensus'
const autoMode = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
Bash(`mkdir -p ${sessionFolder}/agents`)
```
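As a worked example of the slug derivation above, a sample task description yields the following (the real session ID appends the current UTC+8 date; a fixed date is used here for illustration):

```javascript
// Same slug logic as the initialization block, applied to a fixed sample input
const taskDescription = "Implement real-time notification system";
const taskSlug = taskDescription.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-') // runs of separators collapse to '-'
  .substring(0, 30);                        // hard cap at 30 characters
// taskSlug is "implement-real-time-notificati" (truncated mid-word at 30 chars)
const sessionId = `CPLAN-${taskSlug}-2026-01-30`; // date suffix shown as a fixed example
```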
### Phase 1: Requirement Analysis & Splitting
Use CLI to analyze and split requirements:
```javascript
TodoWrite({ todos: [
{ content: "Phase 1: Requirement Analysis", status: "in_progress", activeForm: "Analyzing requirements" },
{ content: "Phase 2: Parallel Agent Execution", status: "pending", activeForm: "Running agents" },
{ content: "Phase 3: Conflict Detection", status: "pending", activeForm: "Detecting conflicts" },
{ content: "Phase 4: Merge & Synthesis", status: "pending", activeForm: "Merging plans" }
]})
// Step 1.1: Use CLI to analyze requirement and propose splits
Bash({
command: `ccw cli -p "
PURPOSE: Analyze requirement and identify distinct sub-requirements/focus areas
Success: 2-${maxAgents} clearly separated sub-requirements that can be planned independently
TASK:
• Understand the overall requirement: '${taskDescription}'
• Identify major components, features, or concerns
• Split into 2-${maxAgents} independent sub-requirements
• Each sub-requirement should be:
- Self-contained (can be planned independently)
- Non-overlapping (minimal dependency on other sub-requirements)
- Roughly equal in complexity
• For each sub-requirement, provide:
- focus_area: Short identifier (e.g., 'auth-backend', 'ui-components')
- description: What this sub-requirement covers
- key_concerns: Main challenges or considerations
- suggested_cli_tool: Which CLI tool is best suited (gemini/codex/qwen)
MODE: analysis
CONTEXT: @**/*
EXPECTED: JSON output with structure:
{
\"original_requirement\": \"...\",
\"complexity\": \"low|medium|high\",
\"sub_requirements\": [
{
\"index\": 1,
\"focus_area\": \"...\",
\"description\": \"...\",
\"key_concerns\": [\"...\"],
\"suggested_cli_tool\": \"gemini|codex|qwen\",
\"estimated_effort\": \"low|medium|high\"
}
],
\"dependencies_between_subs\": [
{ \"from\": 1, \"to\": 2, \"reason\": \"...\" }
],
\"rationale\": \"Why this split was chosen\"
}
CONSTRAINTS: Maximum ${maxAgents} sub-requirements | Ensure clear boundaries
" --tool gemini --mode analysis`,
run_in_background: true
})
// Wait for CLI completion and parse result
// ... (hook callback will provide result)
```
**After CLI completes**:
```javascript
// Parse CLI output to extract sub-requirements
const analysisResult = parseCLIOutput(cliOutput)
const subRequirements = analysisResult.sub_requirements
// Write requirement-analysis.json
Write(`${sessionFolder}/requirement-analysis.json`, JSON.stringify({
session_id: sessionId,
original_requirement: taskDescription,
analysis_timestamp: getUtc8ISOString(),
complexity: analysisResult.complexity,
sub_requirements: subRequirements,
dependencies_between_subs: analysisResult.dependencies_between_subs,
rationale: analysisResult.rationale,
options: { maxAgents, depth, mergeRule }
}, null, 2))
// Create agent folders
subRequirements.forEach(sub => {
Bash(`mkdir -p ${sessionFolder}/agents/${sub.focus_area}`)
})
// User confirmation (unless auto mode)
if (!autoMode) {
AskUserQuestion({
questions: [{
question: `Identified ${subRequirements.length} sub-requirements:\n${subRequirements.map((s, i) => `${i+1}. ${s.focus_area}: ${s.description}`).join('\n')}\n\nStart parallel planning?`,
header: "Confirm Split",
multiSelect: false,
options: [
{ label: "Start Planning", description: "Launch parallel sub-agents" },
{ label: "Adjust Split", description: "Revise the sub-requirement breakdown" },
{ label: "Cancel", description: "Exit planning" }
]
}]
})
}
```
### Phase 2: Parallel Sub-Agent Execution
Launch one agent per sub-requirement, each maintaining its own process files:
```javascript
TodoWrite({ todos: [
{ content: "Phase 1: Requirement Analysis", status: "completed", activeForm: "Analyzing requirements" },
{ content: "Phase 2: Parallel Agent Execution", status: "in_progress", activeForm: "Running agents" },
...subRequirements.map((sub, i) => ({
content: ` → Agent ${i+1}: ${sub.focus_area}`,
status: "pending",
activeForm: `Planning ${sub.focus_area}`
})),
{ content: "Phase 3: Conflict Detection", status: "pending", activeForm: "Detecting conflicts" },
{ content: "Phase 4: Merge & Synthesis", status: "pending", activeForm: "Merging plans" }
]})
// Launch all sub-agents in parallel
const agentPromises = subRequirements.map((sub, index) => {
return Task({
subagent_type: "cli-lite-planning-agent",
run_in_background: false,
description: `Plan: ${sub.focus_area}`,
prompt: `
## Sub-Agent Context
You are planning ONE sub-requirement. Generate process docs + sub-plan.
**Focus Area**: ${sub.focus_area}
**Description**: ${sub.description}
**Key Concerns**: ${sub.key_concerns.join(', ')}
**CLI Tool**: ${sub.suggested_cli_tool}
**Depth**: ${depth}
## Input Context
\`\`\`json
{
"task_description": "${sub.description}",
"schema_path": "~/.claude/workflows/cli-templates/schemas/plan-json-schema.json",
"session": { "id": "${sessionId}", "folder": "${sessionFolder}" },
"process_docs": true,
"focus_area": "${sub.focus_area}",
"output_folder": "${sessionFolder}/agents/${sub.focus_area}",
"cli_config": { "tool": "${sub.suggested_cli_tool}" },
"parent_requirement": "${taskDescription}"
}
\`\`\`
## Output Requirements
Write 2 files to \`${sessionFolder}/agents/${sub.focus_area}/\`:
1. **planning-context.md** - Evidence paths + synthesized understanding
2. **sub-plan.json** - Plan with \`_metadata.source_agent: "${sub.focus_area}"\`
See cli-lite-planning-agent documentation for file formats.
`
})
})
// Wait for all agents to complete
const agentResults = await Promise.all(agentPromises)
```
### Phase 3: Cross-Verification & Conflict Detection
Load all sub-plans and detect conflicts:
```javascript
TodoWrite({ todos: [
{ content: "Phase 1: Requirement Analysis", status: "completed", activeForm: "Analyzing requirements" },
{ content: "Phase 2: Parallel Agent Execution", status: "completed", activeForm: "Running agents" },
{ content: "Phase 3: Conflict Detection", status: "in_progress", activeForm: "Detecting conflicts" },
{ content: "Phase 4: Merge & Synthesis", status: "pending", activeForm: "Merging plans" }
]})
// Load all sub-plans
const subPlans = subRequirements.map(sub => {
const planPath = `${sessionFolder}/agents/${sub.focus_area}/sub-plan.json`
const content = Read(planPath)
return {
focus_area: sub.focus_area,
index: sub.index,
plan: JSON.parse(content)
}
})
// Detect conflicts
const conflicts = {
detected_at: getUtc8ISOString(),
total_sub_plans: subPlans.length,
conflicts: []
}
// 1. Effort conflicts (same task estimated differently)
const effortConflicts = detectEffortConflicts(subPlans)
conflicts.conflicts.push(...effortConflicts)
// 2. File conflicts (multiple agents modifying same file)
const fileConflicts = detectFileConflicts(subPlans)
conflicts.conflicts.push(...fileConflicts)
// 3. Approach conflicts (different approaches to same problem)
const approachConflicts = detectApproachConflicts(subPlans)
conflicts.conflicts.push(...approachConflicts)
// 4. Dependency conflicts (circular or missing dependencies)
const dependencyConflicts = detectDependencyConflicts(subPlans)
conflicts.conflicts.push(...dependencyConflicts)
// Write conflicts.json
Write(`${sessionFolder}/conflicts.json`, JSON.stringify(conflicts, null, 2))
console.log(`
## Conflict Detection Complete
**Total Sub-Plans**: ${subPlans.length}
**Conflicts Found**: ${conflicts.conflicts.length}
${conflicts.conflicts.length > 0 ? `
### Conflicts:
${conflicts.conflicts.map((c, i) => `
${i+1}. **${c.type}** (${c.severity})
- Agents: ${c.agents_involved.join(' vs ')}
- Issue: ${c.description}
- Suggested Resolution: ${c.suggested_resolution}
`).join('\n')}
` : '✅ No conflicts detected - sub-plans are compatible'}
`)
```
**Conflict Detection Functions**:
```javascript
function detectFileConflicts(subPlans) {
const fileModifications = {}
const conflicts = []
subPlans.forEach(sp => {
sp.plan.tasks.forEach(task => {
task.modification_points?.forEach(mp => {
if (!fileModifications[mp.file]) {
fileModifications[mp.file] = []
}
fileModifications[mp.file].push({
focus_area: sp.focus_area,
task_id: task.id,
target: mp.target,
change: mp.change
})
})
})
})
Object.entries(fileModifications).forEach(([file, mods]) => {
if (mods.length > 1) {
const agents = [...new Set(mods.map(m => m.focus_area))]
if (agents.length > 1) {
conflicts.push({
type: "file_conflict",
severity: "high",
file: file,
agents_involved: agents,
modifications: mods,
description: `Multiple agents modifying ${file}`,
suggested_resolution: "Sequence modifications or consolidate"
})
}
}
})
return conflicts
}
function detectEffortConflicts(subPlans) {
// Compare effort estimates across similar tasks
// Return conflicts where estimates differ by >50%
return []
}
function detectApproachConflicts(subPlans) {
// Analyze approaches for contradictions
// Return conflicts where approaches are incompatible
return []
}
function detectDependencyConflicts(subPlans) {
// Check for circular dependencies
// Check for missing dependencies
return []
}
```
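The stubbed `detectDependencyConflicts` above can be fleshed out with a standard DFS cycle walk over the combined task graph. A minimal sketch, assuming each task carries `id` and an optional `depends_on` array of task ids (the shape used elsewhere in this command); the conflict objects reuse the fields the Phase 3 report expects (`type`, `severity`, `agents_involved`, `description`, `suggested_resolution`):

```javascript
// Sketch: concrete detectDependencyConflicts, assuming tasks expose
// `id` and an optional `depends_on` array of task ids.
function detectDependencyConflicts(subPlans) {
  const conflicts = []
  const tasks = new Map()
  subPlans.forEach(sp => sp.plan.tasks.forEach(t =>
    tasks.set(t.id, { ...t, focus_area: sp.focus_area })))

  // Missing dependencies: an id referenced by some task but defined nowhere
  for (const task of tasks.values()) {
    for (const dep of task.depends_on || []) {
      if (!tasks.has(dep)) {
        conflicts.push({
          type: "dependency_conflict",
          severity: "high",
          agents_involved: [task.focus_area],
          description: `${task.id} depends on missing task ${dep}`,
          suggested_resolution: "Add the missing task or drop the edge"
        })
      }
    }
  }

  // Circular dependencies: DFS with white / in-stack / done states
  const state = new Map() // undefined = unvisited, 1 = in-stack, 2 = done
  const visit = (id, path) => {
    if (state.get(id) === 2) return
    if (state.get(id) === 1) {
      conflicts.push({
        type: "dependency_conflict",
        severity: "high",
        agents_involved: [...new Set([...path, id]
          .map(tid => tasks.get(tid)?.focus_area).filter(Boolean))],
        description: `Circular dependency: ${[...path, id].join(' -> ')}`,
        suggested_resolution: "Break the cycle by re-scoping one task"
      })
      return
    }
    state.set(id, 1)
    for (const dep of tasks.get(id)?.depends_on || []) visit(dep, [...path, id])
    state.set(id, 2)
  }
  for (const id of tasks.keys()) visit(id, [])
  return conflicts
}
```

The same walk doubles as the "dependencies form valid DAG" check in the Phase 4 success criteria: an empty result means the merged ordering is cycle-free.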
### Phase 4: Merge & Synthesis
Use cli-lite-planning-agent to merge all sub-plans:
```javascript
TodoWrite({ todos: [
{ content: "Phase 1: Requirement Analysis", status: "completed", activeForm: "Analyzing requirements" },
{ content: "Phase 2: Parallel Agent Execution", status: "completed", activeForm: "Running agents" },
{ content: "Phase 3: Conflict Detection", status: "completed", activeForm: "Detecting conflicts" },
{ content: "Phase 4: Merge & Synthesis", status: "in_progress", activeForm: "Merging plans" }
]})
// Collect all planning context documents for context
const contextDocs = subRequirements.map(sub => {
const path = `${sessionFolder}/agents/${sub.focus_area}/planning-context.md`
return {
focus_area: sub.focus_area,
content: Read(path)
}
})
// Invoke planning agent to merge
Task({
subagent_type: "cli-lite-planning-agent",
run_in_background: false,
description: "Merge sub-plans into unified plan",
prompt: `
## Mission: Merge Multiple Sub-Plans
Merge ${subPlans.length} sub-plans into a single unified plan.
## Schema Reference
Execute: cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json
The merged plan follows the SAME schema as lite-plan, with ONE additional field:
- \`merge_metadata\`: Object containing merge-specific information
## Project Context
1. Read: .workflow/project-tech.json
2. Read: .workflow/project-guidelines.json
## Original Requirement
${taskDescription}
## Sub-Plans to Merge
${subPlans.map(sp => `
### Sub-Plan: ${sp.focus_area}
\`\`\`json
${JSON.stringify(sp.plan, null, 2)}
\`\`\`
`).join('\n')}
## Planning Context Documents
${contextDocs.map(cd => `
### Context: ${cd.focus_area}
${cd.content}
`).join('\n')}
## Detected Conflicts
\`\`\`json
${JSON.stringify(conflicts, null, 2)}
\`\`\`
## Merge Rules
**Rule**: ${mergeRule}
${mergeRule === 'consensus' ? `
- Equal weight to all sub-plans
- Conflicts resolved by finding middle ground
- Combine overlapping tasks
` : `
- Priority based on sub-requirement index
- Earlier agents' decisions take precedence
- Later agents adapt to earlier decisions
`}
## Requirements
1. **Task Consolidation**:
- Combine tasks that modify same files
- Preserve unique tasks from each sub-plan
- Ensure no task duplication
- Maintain clear task boundaries
2. **Dependency Resolution**:
- Cross-reference dependencies between sub-plans
- Create global task ordering
- Handle inter-sub-plan dependencies
3. **Conflict Resolution**:
- Apply ${mergeRule} rule to resolve conflicts
- Document resolution rationale
- Ensure no contradictions in final plan
4. **Metadata Preservation**:
- Track which sub-plan each task originated from (source_agent field)
- Include merge_metadata with:
- merged_from: list of sub-plan focus areas
- conflicts_resolved: count
- merge_rule: ${mergeRule}
## Output
Write to ${sessionFolder}/plan.json following plan-json-schema.json.
Add ONE extension field for merge tracking:
\`\`\`json
{
// ... all standard plan-json-schema fields ...
"merge_metadata": {
"source_session": "${sessionId}",
"merged_from": ["focus-area-1", "focus-area-2"],
"sub_plan_count": N,
"conflicts_detected": N,
"conflicts_resolved": N,
"merge_rule": "${mergeRule}",
"merged_at": "ISO-timestamp"
}
}
\`\`\`
Each task should include \`source_agent\` field indicating which sub-plan it originated from.
## Success Criteria
- [ ] All sub-plan tasks included (or explicitly merged)
- [ ] Conflicts resolved per ${mergeRule} rule
- [ ] Dependencies form valid DAG (no cycles)
- [ ] merge_metadata present
- [ ] Schema compliance verified
- [ ] plan.json written to ${sessionFolder}/plan.json
`
})
// Generate human-readable plan.md
const plan = JSON.parse(Read(`${sessionFolder}/plan.json`))
const planMd = generatePlanMarkdown(plan, subRequirements, conflicts)
Write(`${sessionFolder}/plan.md`, planMd)
```
**Markdown Generation**:
```javascript
function generatePlanMarkdown(plan, subRequirements, conflicts) {
return `# Collaborative Planning Session
**Session ID**: ${plan._metadata?.session_id || sessionId}
**Original Requirement**: ${taskDescription}
**Created**: ${getUtc8ISOString()}
---
## Sub-Requirements Analyzed
${subRequirements.map((sub, i) => `
### ${i+1}. ${sub.focus_area}
${sub.description}
- **Key Concerns**: ${sub.key_concerns.join(', ')}
- **Estimated Effort**: ${sub.estimated_effort}
`).join('\n')}
---
## Conflict Resolution
${conflicts.conflicts.length > 0 ? `
**Conflicts Detected**: ${conflicts.conflicts.length}
**Merge Rule**: ${mergeRule}
${conflicts.conflicts.map((c, i) => `
${i+1}. **${c.type}** - ${c.description}
- Resolution: ${c.suggested_resolution}
`).join('\n')}
` : '✅ No conflicts detected'}
---
## Merged Plan
### Summary
${plan.summary}
### Approach
${plan.approach}
---
## Tasks
${plan.tasks.map((task, i) => `
### ${task.id}: ${task.title}
**Source**: ${task.source_agent || 'merged'}
**Scope**: ${task.scope}
**Action**: ${task.action}
**Complexity**: ${task.effort?.complexity || 'medium'}
${task.description}
**Modification Points**:
${task.modification_points?.map(mp => `- \`${mp.file}\` ${mp.target}: ${mp.change}`).join('\n') || 'N/A'}
**Implementation**:
${task.implementation?.map((step, idx) => `${idx+1}. ${step}`).join('\n') || 'N/A'}
**Acceptance Criteria**:
${task.acceptance?.map(ac => `- ${ac}`).join('\n') || 'N/A'}
**Dependencies**: ${task.depends_on?.join(', ') || 'None'}
---
`).join('\n')}
## Execution
\`\`\`bash
# Execute this plan
/workflow:unified-execute-with-file -p ${sessionFolder}/plan.json
# Or with auto-confirmation
/workflow:unified-execute-with-file -y -p ${sessionFolder}/plan.json
\`\`\`
---
## Agent Process Files
${subRequirements.map(sub => `
### ${sub.focus_area}
- Context: \`${sessionFolder}/agents/${sub.focus_area}/planning-context.md\`
- Sub-Plan: \`${sessionFolder}/agents/${sub.focus_area}/sub-plan.json\`
`).join('\n')}
---
**Generated by**: /workflow:collaborative-plan-with-file
**Merge Rule**: ${mergeRule}
`
}
```
### Completion
```javascript
TodoWrite({ todos: [
{ content: "Phase 1: Requirement Analysis", status: "completed", activeForm: "Analyzing requirements" },
{ content: "Phase 2: Parallel Agent Execution", status: "completed", activeForm: "Running agents" },
{ content: "Phase 3: Conflict Detection", status: "completed", activeForm: "Detecting conflicts" },
{ content: "Phase 4: Merge & Synthesis", status: "completed", activeForm: "Merging plans" }
]})
console.log(`
✅ Collaborative Planning Complete
**Session**: ${sessionId}
**Sub-Agents**: ${subRequirements.length}
**Conflicts Resolved**: ${conflicts.conflicts.length}
## Output Files
📁 ${sessionFolder}/
├── requirement-analysis.json # Requirement breakdown
├── agents/ # Per-agent process files
${subRequirements.map(sub => `│ ├── ${sub.focus_area}/
│ │ ├── planning-context.md
│ │ └── sub-plan.json`).join('\n')}
├── conflicts.json # Detected conflicts
├── plan.json # Unified plan (execution-ready)
└── plan.md # Human-readable plan
## Next Steps
Execute the plan:
\`\`\`bash
/workflow:unified-execute-with-file -p ${sessionFolder}/plan.json
\`\`\`
Review a specific agent's work:
\`\`\`bash
cat ${sessionFolder}/agents/{focus-area}/planning-context.md
\`\`\`
`)
```
## Configuration
| Flag | Default | Description |
|------|---------|-------------|
| `--max-agents` | 5 | Maximum sub-agents to spawn |
| `--depth` | normal | Exploration depth: normal or deep |
| `--merge-rule` | consensus | Conflict resolution: consensus or priority |
| `-y, --yes` | false | Auto-confirm all decisions |
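As a sketch (not the command's mandated parser), the flag table above maps onto a small regex extractor over the raw argument string, with the documented defaults as fallbacks:

```javascript
// Sketch: extract collaborative-plan flags from a raw argument string,
// falling back to the defaults listed in the Configuration table.
function parseFlags(args) {
  return {
    maxAgents: Number(args.match(/--max-agents\s+(\d+)/)?.[1] ?? 5),
    depth: args.match(/--depth\s+(normal|deep)/)?.[1] ?? 'normal',
    mergeRule: args.match(/--merge-rule\s+(consensus|priority)/)?.[1] ?? 'consensus',
    autoMode: /(^|\s)(-y|--yes)(\s|$)/.test(args)
  }
}
```

For example, `parseFlags('--max-agents 3 --merge-rule priority -y "add auth"')` yields 3 agents, the priority rule, default depth, and auto mode enabled.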
## Error Handling
| Error | Resolution |
|-------|------------|
| Requirement too simple | Use single-agent lite-plan instead |
| Agent fails | Retry once, then continue with partial results |
| Merge conflicts unresolvable | Ask user for manual resolution |
| CLI timeout | Use fallback CLI tool |
| File write fails | Retry with alternative path |
## vs Other Planning Commands
| Command | Use Case |
|---------|----------|
| **collaborative-plan-with-file** | Complex multi-aspect requirements needing parallel exploration |
| lite-plan | Simple single-focus tasks |
| multi-cli-plan | Iterative cross-verification with convergence |
## Best Practices
1. **Be Specific**: Detailed requirements lead to better splits
2. **Review Process Files**: Check planning-context.md for insights
3. **Trust the Merge**: Conflict resolution follows defined rules
4. **Iterate if Needed**: Re-run with a different `--merge-rule` if the results are unsatisfactory
---
**Now execute collaborative-plan-with-file for**: $ARGUMENTS


@@ -37,6 +37,23 @@ Intelligent lightweight bug fixing command with dynamic workflow adaptation base
/workflow:lite-fix -y --hotfix "Production database connection failure" # Auto + hotfix mode
```
## Output Artifacts
| Artifact | Description |
|----------|-------------|
| `diagnosis-{angle}.json` | Per-angle diagnosis results (1-4 files based on severity) |
| `diagnoses-manifest.json` | Index of all diagnosis files |
| `planning-context.md` | Evidence paths + synthesized understanding |
| `fix-plan.json` | Structured fix plan (fix-plan-json-schema.json) |
**Output Directory**: `.workflow/.lite-fix/{bug-slug}-{YYYY-MM-DD}/`
**Agent Usage**:
- Low/Medium severity → Direct Claude planning (no agent)
- High/Critical severity → `cli-lite-planning-agent` generates `fix-plan.json`
**Schema Reference**: `~/.claude/workflows/cli-templates/schemas/fix-plan-json-schema.json`
## Auto Mode Defaults
When `--yes` or `-y` flag is used:
@@ -192,11 +209,15 @@ const diagnosisTasks = selectedAngles.map((angle, index) =>
## Task Objective
Execute **${angle}** diagnosis for bug root cause analysis. Analyze codebase from this specific angle to discover root cause, affected paths, and fix hints.
## Output Location
**Session Folder**: ${sessionFolder}
**Output File**: ${sessionFolder}/diagnosis-${angle}.json
## Assigned Context
- **Diagnosis Angle**: ${angle}
- **Bug Description**: ${bug_description}
- **Diagnosis Index**: ${index + 1} of ${selectedAngles.length}
- **Output File**: ${sessionFolder}/diagnosis-${angle}.json
## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
@@ -225,8 +246,6 @@ Execute **${angle}** diagnosis for bug root cause analysis. Analyze codebase fro
## Expected Output
**File**: ${sessionFolder}/diagnosis-${angle}.json
**Schema Reference**: Schema obtained in MANDATORY FIRST STEPS step 3, follow schema exactly
**Required Fields** (all ${angle} focused):
@@ -255,9 +274,9 @@ Execute **${angle}** diagnosis for bug root cause analysis. Analyze codebase fro
- [ ] JSON output follows schema exactly
- [ ] clarification_needs includes options + recommended
## Output
Write: ${sessionFolder}/diagnosis-${angle}.json
Return: 2-3 sentence summary of ${angle} diagnosis findings
## Execution
**Write**: \`${sessionFolder}/diagnosis-${angle}.json\`
**Return**: 2-3 sentence summary of ${angle} diagnosis findings
`
)
)
@@ -493,6 +512,13 @@ Task(
prompt=`
Generate fix plan and write fix-plan.json.
## Output Location
**Session Folder**: ${sessionFolder}
**Output Files**:
- ${sessionFolder}/planning-context.md (evidence + understanding)
- ${sessionFolder}/fix-plan.json (fix plan)
## Output Schema Reference
Execute: cat ~/.claude/workflows/cli-templates/schemas/fix-plan-json-schema.json (get schema reference before generating plan)
@@ -588,8 +614,9 @@ Generate fix-plan.json with:
- For High/Critical: REQUIRED new fields (rationale, verification, risks, code_skeleton, data_flow, design_decisions)
- Each task MUST have rationale (why this fix), verification (how to verify success), and risks (potential issues)
5. Parse output and structure fix-plan
6. Write JSON: Write('${sessionFolder}/fix-plan.json', jsonContent)
7. Return brief completion summary
6. **Write**: \`${sessionFolder}/planning-context.md\` (evidence paths + understanding)
7. **Write**: \`${sessionFolder}/fix-plan.json\`
8. Return brief completion summary
## Output Format for CLI
Include these sections in your fix-plan output:
@@ -747,22 +774,24 @@ SlashCommand(command="/workflow:lite-execute --in-memory --mode bugfix")
```
.workflow/.lite-fix/{bug-slug}-{YYYY-MM-DD}/
|- diagnosis-{angle1}.json # Diagnosis angle 1
|- diagnosis-{angle2}.json # Diagnosis angle 2
|- diagnosis-{angle3}.json # Diagnosis angle 3 (if applicable)
|- diagnosis-{angle4}.json # Diagnosis angle 4 (if applicable)
|- diagnoses-manifest.json # Diagnosis index
+- fix-plan.json # Fix plan
├── diagnosis-{angle1}.json # Diagnosis angle 1
├── diagnosis-{angle2}.json # Diagnosis angle 2
├── diagnosis-{angle3}.json # Diagnosis angle 3 (if applicable)
├── diagnosis-{angle4}.json # Diagnosis angle 4 (if applicable)
├── diagnoses-manifest.json # Diagnosis index
├── planning-context.md # Evidence + understanding
└── fix-plan.json # Fix plan
```
**Example**:
```
.workflow/.lite-fix/user-avatar-upload-fails-413-2025-11-25-14-30-25/
|- diagnosis-error-handling.json
|- diagnosis-dataflow.json
|- diagnosis-validation.json
|- diagnoses-manifest.json
+- fix-plan.json
.workflow/.lite-fix/user-avatar-upload-fails-413-2025-11-25/
├── diagnosis-error-handling.json
├── diagnosis-dataflow.json
├── diagnosis-validation.json
├── diagnoses-manifest.json
├── planning-context.md
└── fix-plan.json
```
## Error Handling


@@ -37,6 +37,23 @@ Intelligent lightweight planning command with dynamic workflow adaptation based
/workflow:lite-plan -y -e "Optimize database query performance" # Auto mode + force exploration
```
## Output Artifacts
| Artifact | Description |
|----------|-------------|
| `exploration-{angle}.json` | Per-angle exploration results (1-4 files based on complexity) |
| `explorations-manifest.json` | Index of all exploration files |
| `planning-context.md` | Evidence paths + synthesized understanding |
| `plan.json` | Structured implementation plan (plan-json-schema.json) |
**Output Directory**: `.workflow/.lite-plan/{task-slug}-{YYYY-MM-DD}/`
**Agent Usage**:
- Low complexity → Direct Claude planning (no agent)
- Medium/High complexity → `cli-lite-planning-agent` generates `plan.json`
**Schema Reference**: `~/.claude/workflows/cli-templates/schemas/plan-json-schema.json`
## Auto Mode Defaults
When `--yes` or `-y` flag is used:
@@ -193,11 +210,15 @@ const explorationTasks = selectedAngles.map((angle, index) =>
## Task Objective
Execute **${angle}** exploration for task planning context. Analyze codebase from this specific angle to discover relevant structure, patterns, and constraints.
## Output Location
**Session Folder**: ${sessionFolder}
**Output File**: ${sessionFolder}/exploration-${angle}.json
## Assigned Context
- **Exploration Angle**: ${angle}
- **Task Description**: ${task_description}
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
- **Output File**: ${sessionFolder}/exploration-${angle}.json
## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
@@ -225,8 +246,6 @@ Execute **${angle}** exploration for task planning context. Analyze codebase fro
## Expected Output
**File**: ${sessionFolder}/exploration-${angle}.json
**Schema Reference**: Schema obtained in MANDATORY FIRST STEPS step 3, follow schema exactly
**Required Fields** (all ${angle} focused):
@@ -252,9 +271,9 @@ Execute **${angle}** exploration for task planning context. Analyze codebase fro
- [ ] JSON output follows schema exactly
- [ ] clarification_needs includes options + recommended
## Output
Write: ${sessionFolder}/exploration-${angle}.json
Return: 2-3 sentence summary of ${angle} findings
## Execution
**Write**: \`${sessionFolder}/exploration-${angle}.json\`
**Return**: 2-3 sentence summary of ${angle} findings
`
)
)
@@ -443,6 +462,13 @@ Task(
prompt=`
Generate implementation plan and write plan.json.
## Output Location
**Session Folder**: ${sessionFolder}
**Output Files**:
- ${sessionFolder}/planning-context.md (evidence + understanding)
- ${sessionFolder}/plan.json (implementation plan)
## Output Schema Reference
Execute: cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json (get schema reference before generating plan)
@@ -492,8 +518,9 @@ Generate plan.json following the schema obtained above. Key constraints:
2. Execute CLI planning using Gemini (Qwen fallback)
3. Read ALL exploration files for comprehensive context
4. Synthesize findings and generate plan following schema
5. Write JSON: Write('${sessionFolder}/plan.json', jsonContent)
6. Return brief completion summary
5. **Write**: \`${sessionFolder}/planning-context.md\` (evidence paths + understanding)
6. **Write**: \`${sessionFolder}/plan.json\`
7. Return brief completion summary
`
)
```


@@ -1,807 +0,0 @@
---
name: merge-plans-with-file
description: Merge multiple planning/brainstorm/analysis outputs, resolve conflicts, and synthesize unified plan. Designed for multi-team input aggregation and final plan crystallization
argument-hint: "[-y|--yes] [-r|--rule consensus|priority|hierarchy] \"plan or topic name\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-resolve conflicts using specified rule (consensus/priority/hierarchy), minimal user prompts.
# Workflow Merge-Plans-With-File Command (/workflow:merge-plans-with-file)
## Overview
Plan aggregation and conflict resolution workflow. Takes multiple planning artifacts (brainstorm conclusions, analysis recommendations, quick-plans, implementation plans) and synthesizes them into a unified, conflict-resolved execution plan.
**Core workflow**: Load Sources → Parse Plans → Conflict Analysis → Arbitration → Unified Plan
**Key features**:
- **Multi-Source Support**: Brainstorm, analysis, quick-plan, IMPL_PLAN, task JSONs
- **Parallel Conflict Detection**: Identify all contradictions across input plans
- **Conflict Resolution**: Consensus, priority-based, or hierarchical resolution modes
- **Unified Synthesis**: Single authoritative plan from multiple perspectives
- **Decision Tracking**: Full audit trail of conflicts and resolutions
- **Resumable**: Save intermediate states, refine resolutions
## Usage
```bash
/workflow:merge-plans-with-file [FLAGS] <PLAN_NAME_OR_PATTERN>
# Flags
-y, --yes Auto-resolve conflicts using rule, skip confirmations
-r, --rule <rule> Conflict resolution rule: consensus (default) | priority | hierarchy
-o, --output <path> Output directory (default: .workflow/.merged/{name})
# Arguments
<plan-name-or-pattern> Plan name or glob pattern to identify input files/sessions
Examples: "auth-module", "*.analysis-*.json", "PLAN-*"
# Examples
/workflow:merge-plans-with-file "authentication" # Auto-detect all auth-related plans
/workflow:merge-plans-with-file -y -r priority "payment-system" # Auto-resolve with priority rule
/workflow:merge-plans-with-file -r hierarchy "feature-complete" # Use hierarchy rule (requires user ranking)
```
## Execution Process
```
Discovery & Loading:
├─ Search for planning artifacts matching pattern
├─ Load all synthesis.json, conclusions.json, IMPL_PLAN.md
├─ Parse each into normalized task/plan structure
└─ Validate data completeness
Session Initialization:
├─ Create .workflow/.merged/{sessionId}/
├─ Initialize merge.md with plan summary
├─ Index all source plans
└─ Extract planning metadata and constraints
Phase 1: Plan Normalization
├─ Convert all formats to common task representation
├─ Extract tasks, dependencies, effort, risks
├─ Identify plan scope and boundaries
├─ Validate no duplicate tasks
└─ Aggregate recommendations from each plan
Phase 2: Conflict Detection (Parallel)
├─ Architecture conflicts: different design approaches
├─ Task conflicts: overlapping responsibilities or different priorities
├─ Effort conflicts: vastly different estimates
├─ Risk assessment conflicts: different risk levels
├─ Scope conflicts: different feature inclusions
└─ Generate conflict matrix with severity levels
Phase 3: Consensus Building / Arbitration
├─ For each conflict, analyze source rationale
├─ Apply resolution rule (consensus/priority/hierarchy)
├─ Escalate unresolvable conflicts to user (unless --yes)
├─ Document decision rationale
└─ Generate resolutions.json
Phase 4: Plan Synthesis
├─ Merge task lists (remove duplicates, combine insights)
├─ Integrate dependencies from all sources
├─ Consolidate effort and risk estimates
├─ Generate unified execution sequence
├─ Create final unified plan
└─ Output ready for execution
Output:
├─ .workflow/.merged/{sessionId}/merge.md (merge process & decisions)
├─ .workflow/.merged/{sessionId}/source-index.json (input sources)
├─ .workflow/.merged/{sessionId}/conflicts.json (conflict matrix)
├─ .workflow/.merged/{sessionId}/resolutions.json (how conflicts were resolved)
├─ .workflow/.merged/{sessionId}/unified-plan.json (final merged plan)
└─ .workflow/.merged/{sessionId}/unified-plan.md (execution-ready markdown)
```
## Implementation
### Phase 1: Plan Discovery & Loading
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
// Parse arguments
const planPattern = "$PLAN_NAME_OR_PATTERN"
const resolutionRule = $ARGUMENTS.match(/(?:-r|--rule)\s+(\w+)/)?.[1] || 'consensus'
const isAutoMode = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
// Generate session ID
const mergeSlug = planPattern.toLowerCase()
.replace(/[*?]/g, '-')
.replace(/[^a-z0-9\u4e00-\u9fa5-]+/g, '-')
.substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `MERGE-${mergeSlug}-${dateStr}`
const sessionFolder = `.workflow/.merged/${sessionId}`
Bash(`mkdir -p ${sessionFolder}`)
// Discover all relevant planning artifacts
const discoveryPaths = [
`.workflow/.brainstorm/*/${planPattern}*/synthesis.json`,
`.workflow/.analysis/*/${planPattern}*/conclusions.json`,
`.workflow/.planning/*/${planPattern}*/synthesis.json`,
`.workflow/.plan/${planPattern}*IMPL_PLAN.md`,
`.workflow/*/${planPattern}*.json`
]
const sourcePlans = []
for (const pattern of discoveryPaths) {
const matches = Glob(pattern)
for (const path of matches) {
try {
const content = Read(path)
const plan = parsePlanFile(path, content)
if (plan && plan.tasks?.length > 0) {
sourcePlans.push({
source_path: path,
source_type: identifySourceType(path),
plan: plan,
loaded_at: getUtc8ISOString()
})
}
} catch (e) {
console.warn(`Failed to load plan from ${path}: ${e.message}`)
}
}
}
if (sourcePlans.length === 0) {
console.error(`
## Error: No Plans Found
Pattern: ${planPattern}
Searched locations:
${discoveryPaths.join('\n')}
Available plans in .workflow/:
`)
Bash(`find .workflow -name "*.json" -o -name "*PLAN.md" | head -20`)
return { status: 'error', message: 'No plans found' }
}
console.log(`
## Plans Discovered
Total: ${sourcePlans.length}
${sourcePlans.map(sp => `- ${sp.source_type}: ${sp.source_path}`).join('\n')}
`)
```
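`parsePlanFile` and `identifySourceType` are used above without definitions; the latter can be sketched as a path classifier keyed off the discovery patterns (the type labels here are assumptions, not a fixed vocabulary):

```javascript
// Sketch: classify a discovered planning artifact by where it was found.
function identifySourceType(path) {
  if (path.includes('.brainstorm/')) return 'brainstorm'
  if (path.includes('.analysis/')) return 'analysis'
  if (path.includes('.planning/')) return 'planning'
  if (path.includes('IMPL_PLAN')) return 'impl-plan'
  return 'generic-json'
}
```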
---
### Phase 2: Plan Normalization
```javascript
// Normalize all plans to common format
const normalizedPlans = sourcePlans.map((sourcePlan, idx) => {
const plan = sourcePlan.plan
const tasks = plan.tasks || []
return {
index: idx,
source: sourcePlan.source_path,
source_type: sourcePlan.source_type,
metadata: {
title: plan.title || `Plan ${idx + 1}`,
topic: plan.topic || plan.planning_topic || 'unknown',
timestamp: plan.completed || plan.timestamp || sourcePlan.loaded_at,
source_ideas: plan.top_ideas?.length || 0,
complexity: plan.complexity_level || 'unknown'
},
// Normalized tasks
tasks: tasks.map(task => ({
id: task.id || `T${idx}-${task.title?.substring(0, 20)}`,
title: task.title || task.content,
description: task.description || '',
type: task.type || inferType(task),
priority: task.priority || 'normal',
// Effort estimation
effort: {
estimated: task.estimated_duration || task.effort_estimate || 'unknown',
from_plan: idx
},
// Risk assessment
risk: {
level: task.risk_level || 'medium',
from_plan: idx
},
// Dependencies
dependencies: task.dependencies || [],
// Source tracking
source_plan_index: idx,
original_id: task.id,
// Quality tracking
success_criteria: task.success_criteria || [],
challenges: task.challenges || []
}))
}
})
// Save source index
const sourceIndex = {
session_id: sessionId,
merge_timestamp: getUtc8ISOString(),
pattern: planPattern,
total_source_plans: sourcePlans.length,
sources: normalizedPlans.map(p => ({
index: p.index,
source_path: p.source,
source_type: p.source_type,
topic: p.metadata.topic,
task_count: p.tasks.length
}))
}
Write(`${sessionFolder}/source-index.json`, JSON.stringify(sourceIndex, null, 2))
```
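`inferType`, referenced in the task normalizer above, is likewise undefined; a keyword heuristic over the title and description is one plausible sketch (the type vocabulary and keyword lists are assumptions):

```javascript
// Sketch: infer a coarse task type from title/description keywords.
function inferType(task) {
  const text = `${task.title || ''} ${task.description || ''}`.toLowerCase()
  if (/test|spec|coverage/.test(text)) return 'test'
  if (/doc|readme|guide/.test(text)) return 'docs'
  if (/fix|bug|error/.test(text)) return 'bugfix'
  if (/refactor|cleanup|rename/.test(text)) return 'refactor'
  return 'feature'
}
```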
---
### Phase 3: Conflict Detection
```javascript
// Detect conflicts across plans
const conflictDetector = {
// Architecture conflicts
architectureConflicts: [],
// Task conflicts (duplicates, different scope)
taskConflicts: [],
// Effort conflicts
effortConflicts: [],
// Risk assessment conflicts
riskConflicts: [],
// Scope conflicts
scopeConflicts: [],
// Priority conflicts
priorityConflicts: []
}
// Algorithm 1: Detect similar tasks across plans
const allTasks = normalizedPlans.flatMap(p => p.tasks)
const taskGroups = groupSimilarTasks(allTasks)
for (const group of taskGroups) {
if (group.tasks.length > 1) {
// Same task appears in multiple plans
const efforts = group.tasks.map(t => t.effort.estimated)
const effortVariance = calculateVariance(efforts)
if (effortVariance > 0.5) {
// Significant difference in effort estimates
conflictDetector.effortConflicts.push({
task_group: group.title,
conflicting_tasks: group.tasks.map((t, i) => ({
id: t.id,
from_plan: t.source_plan_index,
effort: t.effort.estimated
})),
variance: effortVariance,
severity: 'high'
})
}
// Check for scope differences
const scopeDifferences = analyzeScopeDifferences(group.tasks)
if (scopeDifferences.length > 0) {
conflictDetector.taskConflicts.push({
task_group: group.title,
scope_differences: scopeDifferences,
severity: 'medium'
})
}
}
}
// Algorithm 2: Architecture conflicts (differing complexity levels as a proxy for approach)
const architectures = normalizedPlans.map(p => p.metadata.complexity)
if (new Set(architectures).size > 1) {
conflictDetector.architectureConflicts.push({
different_approaches: true,
complexity_levels: architectures.map((a, i) => ({
plan: i,
complexity: a
})),
severity: 'high'
})
}
// Algorithm 3: Risk assessment conflicts
const riskLevels = allTasks.map(t => ({ task: t.id, risk: t.risk.level }))
const taskRisks = {}
for (const tr of riskLevels) {
if (!taskRisks[tr.task]) taskRisks[tr.task] = []
taskRisks[tr.task].push(tr.risk)
}
for (const [task, risks] of Object.entries(taskRisks)) {
if (new Set(risks).size > 1) {
conflictDetector.riskConflicts.push({
task: task,
conflicting_risk_levels: risks,
severity: 'medium'
})
}
}
// Save conflicts
Write(`${sessionFolder}/conflicts.json`, JSON.stringify(conflictDetector, null, 2))
```
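The helpers above (`groupSimilarTasks`, `calculateVariance`) are assumed rather than defined. A minimal sketch of both, assuming exact-match grouping on normalized titles and a scale-independent variance measure (coefficient of variation) so the `0.5` threshold works regardless of effort units:

```javascript
// Hypothetical helper: normalized variance (coefficient of variation),
// so the 0.5 threshold above is unit-independent.
function calculateVariance(values) {
  if (values.length < 2) return 0
  const mean = values.reduce((a, b) => a + b, 0) / values.length
  if (mean === 0) return 0
  const variance = values.reduce((sum, v) => sum + (v - mean) ** 2, 0) / values.length
  return Math.sqrt(variance) / mean
}

// Hypothetical helper: bucket tasks whose normalized titles match.
// A real implementation might use fuzzy matching; exact match keeps the sketch simple.
function groupSimilarTasks(tasks) {
  const groups = new Map()
  for (const task of tasks) {
    const key = task.title.toLowerCase().replace(/[^a-z0-9]+/g, '-')
    if (!groups.has(key)) groups.set(key, { title: task.title, tasks: [] })
    groups.get(key).tasks.push(task)
  }
  return [...groups.values()]
}
```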
---
### Phase 4: Conflict Resolution
```javascript
// Resolve conflicts based on selected rule
const resolutions = {
resolution_rule: resolutionRule,
timestamp: getUtc8ISOString(),
effort_resolutions: [],
architecture_resolutions: [],
risk_resolutions: [],
scope_resolutions: [],
priority_resolutions: []
}
// Resolution Strategy 1: Consensus
if (resolutionRule === 'consensus') {
for (const conflict of conflictDetector.effortConflicts) {
// Use median or average
const efforts = conflict.conflicting_tasks.map(t => parseEffort(t.effort))
const resolved_effort = calculateMedian(efforts)
resolutions.effort_resolutions.push({
conflict: conflict.task_group,
original_estimates: efforts,
resolved_estimate: resolved_effort,
method: 'consensus-median',
rationale: 'Used median of all estimates'
})
}
}
// Resolution Strategy 2: Priority-Based
else if (resolutionRule === 'priority') {
// Use the plan from highest priority source (first or most recent)
for (const conflict of conflictDetector.effortConflicts) {
const highestPriority = conflict.conflicting_tasks[0] // First plan has priority
resolutions.effort_resolutions.push({
conflict: conflict.task_group,
conflicting_estimates: conflict.conflicting_tasks.map(t => t.effort),
resolved_estimate: highestPriority.effort,
selected_from_plan: highestPriority.from_plan,
method: 'priority-based',
rationale: `Selected estimate from plan ${highestPriority.from_plan} (highest priority)`
})
}
}
// Resolution Strategy 3: Hierarchy (requires user ranking)
else if (resolutionRule === 'hierarchy') {
if (!isAutoMode) {
// Ask user to rank plan importance
const planRanking = AskUserQuestion({
questions: [{
question: "Rank these plans by importance (most to least important):",
header: "Plan Ranking",
multiSelect: false,
options: normalizedPlans.slice(0, 5).map(p => ({
label: `Plan ${p.index}: ${p.metadata.title.substring(0, 40)}`,
description: `${p.tasks.length} tasks, complexity: ${p.metadata.complexity}`
}))
}]
})
// Apply hierarchy
const hierarchy = extractHierarchy(planRanking)
for (const conflict of conflictDetector.effortConflicts) {
const topPriorityTask = conflict.conflicting_tasks
.sort((a, b) => hierarchy[a.from_plan] - hierarchy[b.from_plan])[0]
resolutions.effort_resolutions.push({
conflict: conflict.task_group,
resolved_estimate: topPriorityTask.effort,
selected_from_plan: topPriorityTask.from_plan,
method: 'hierarchy-based',
rationale: `Selected from highest-ranked plan`
})
}
}
}
Write(`${sessionFolder}/resolutions.json`, JSON.stringify(resolutions, null, 2))
```
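`parseEffort` and `calculateMedian` are likewise assumed helpers. A sketch under the assumption that efforts arrive as hour/day/week strings (8h days, 40h weeks); other formats would need their own parsing:

```javascript
// Hypothetical helper: normalize effort strings like "3d" or "5h" to hours.
function parseEffort(effort) {
  if (typeof effort === 'number') return effort
  const match = String(effort).match(/^(\d+(?:\.\d+)?)\s*(h|d|w)?$/i)
  if (!match) return NaN
  const value = parseFloat(match[1])
  const unit = (match[2] || 'h').toLowerCase()
  return value * { h: 1, d: 8, w: 40 }[unit] // assumed: 8h days, 40h weeks
}

// Hypothetical helper: midpoint of sorted values
// (average of the middle pair for even-length input).
function calculateMedian(values) {
  const sorted = [...values].sort((a, b) => a - b)
  const mid = Math.floor(sorted.length / 2)
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2
}
```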
---
### Phase 5: Plan Synthesis
```javascript
// Merge all tasks into unified plan
const unifiedTasks = []
const processedTaskIds = new Set()
for (const task of allTasks) {
const taskKey = generateTaskKey(task)
if (processedTaskIds.has(taskKey)) {
// Task already added, skip
continue
}
processedTaskIds.add(taskKey)
// Apply resolution if this task has conflicts
// Note: the spread below is shallow, so copy effort too before mutating it,
// otherwise the source task's effort object would be altered
let resolvedTask = { ...task, effort: { ...task.effort } }
const effortResolution = resolutions.effort_resolutions
.find(r => r.conflict === task.title) // resolutions are keyed by task group title
if (effortResolution) {
resolvedTask.effort.estimated = effortResolution.resolved_estimate
resolvedTask.effort.resolution_method = effortResolution.method
}
unifiedTasks.push({
id: taskKey,
title: task.title,
description: task.description,
type: task.type,
priority: task.priority,
effort: resolvedTask.effort,
risk: task.risk,
dependencies: task.dependencies,
success_criteria: [...new Set([
...task.success_criteria,
...findRelatedTasks(task, allTasks)
.flatMap(t => t.success_criteria)
])],
challenges: [...new Set([
...task.challenges,
...findRelatedTasks(task, allTasks)
.flatMap(t => t.challenges)
])],
source_plans: [
...new Set(allTasks
.filter(t => generateTaskKey(t) === taskKey)
.map(t => t.source_plan_index))
]
})
}
// Generate execution sequence
const executionSequence = topologicalSort(unifiedTasks)
const criticalPath = identifyCriticalPath(unifiedTasks, executionSequence)
// Final unified plan
const unifiedPlan = {
session_id: sessionId,
merge_timestamp: getUtc8ISOString(),
summary: {
total_source_plans: normalizedPlans.length,
original_tasks_total: allTasks.length,
merged_tasks: unifiedTasks.length,
conflicts_resolved: Object.values(resolutions).filter(Array.isArray).flat().length,
resolution_rule: resolutionRule
},
merged_metadata: {
topics: [...new Set(normalizedPlans.map(p => p.metadata.topic))],
average_complexity: calculateAverage(normalizedPlans.map(p => parseComplexity(p.metadata.complexity))),
combined_scope: estimateScope(unifiedTasks)
},
tasks: unifiedTasks,
execution_sequence: executionSequence,
critical_path: criticalPath,
risks: aggregateRisks(unifiedTasks),
success_criteria: aggregateSuccessCriteria(unifiedTasks),
audit_trail: {
source_plans: normalizedPlans.length,
conflicts_detected: Object.values(conflictDetector).flat().length,
conflicts_resolved: Object.values(resolutions).filter(Array.isArray).flat().length,
resolution_method: resolutionRule
}
}
Write(`${sessionFolder}/unified-plan.json`, JSON.stringify(unifiedPlan, null, 2))
```
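`topologicalSort` is an assumed helper; one sketch uses Kahn's algorithm, whose cycle check also covers the "Circular dependencies" case in the Error Handling table:

```javascript
// Minimal Kahn's-algorithm sketch, assuming tasks shaped
// { id, dependencies: [id, ...] }; deps outside the merged plan are ignored.
function topologicalSort(tasks) {
  const ids = new Set(tasks.map(t => t.id))
  const inDegree = new Map(tasks.map(t => [t.id, 0]))
  const dependents = new Map(tasks.map(t => [t.id, []]))
  for (const task of tasks) {
    for (const dep of task.dependencies || []) {
      if (!ids.has(dep)) continue // dependency outside the merged plan
      inDegree.set(task.id, inDegree.get(task.id) + 1)
      dependents.get(dep).push(task.id)
    }
  }
  const queue = tasks.filter(t => inDegree.get(t.id) === 0).map(t => t.id)
  const order = []
  while (queue.length) {
    const id = queue.shift()
    order.push(id)
    for (const next of dependents.get(id)) {
      inDegree.set(next, inDegree.get(next) - 1)
      if (inDegree.get(next) === 0) queue.push(next)
    }
  }
  // Leftover tasks mean a cycle: surface it for manual review
  if (order.length !== tasks.length) throw new Error('Circular dependency detected')
  return order
}
```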
---
### Phase 6: Generate Execution Plan
```markdown
# Merged Planning Session
**Session ID**: ${sessionId}
**Pattern**: ${planPattern}
**Created**: ${getUtc8ISOString()}
---
## Merge Summary
**Source Plans**: ${unifiedPlan.summary.total_source_plans}
**Original Tasks**: ${unifiedPlan.summary.original_tasks_total}
**Merged Tasks**: ${unifiedPlan.summary.merged_tasks}
**Tasks Deduplicated**: ${unifiedPlan.summary.original_tasks_total - unifiedPlan.summary.merged_tasks}
**Conflicts Resolved**: ${unifiedPlan.summary.conflicts_resolved}
**Resolution Method**: ${unifiedPlan.summary.resolution_rule}
---
## Merged Plan Overview
**Topics**: ${unifiedPlan.merged_metadata.topics.join(', ')}
**Combined Complexity**: ${unifiedPlan.merged_metadata.average_complexity}
**Total Scope**: ${unifiedPlan.merged_metadata.combined_scope}
---
## Unified Task List
${unifiedPlan.tasks.map((task, i) => `
${i+1}. **${task.id}: ${task.title}**
- Type: ${task.type}
- Effort: ${task.effort.estimated}
- Risk: ${task.risk.level}
- Source Plans: ${task.source_plans.join(', ')}
- ${task.description}
`).join('\n')}
---
## Execution Sequence
**Critical Path**: ${unifiedPlan.critical_path.join(' → ')}
**Execution Order**:
${unifiedPlan.execution_sequence.map((id, i) => `${i+1}. ${id}`).join('\n')}
---
## Conflict Resolution Report
**Total Conflicts**: ${unifiedPlan.summary.conflicts_resolved}
**Resolved Conflicts**:
${Object.entries(resolutions).filter(([, items]) => Array.isArray(items)).flatMap(([key, items]) =>
items.slice(0, 3).map(item => `
- ${key.replace('_', ' ')}: ${item.rationale || item.method}
`)
).join('\n')}
**Full Report**: See \`conflicts.json\` and \`resolutions.json\`
---
## Risks & Considerations
**Aggregated Risks**:
${unifiedPlan.risks.slice(0, 5).map(r => `- **${r.title}**: ${r.mitigation}`).join('\n')}
**Combined Success Criteria**:
${unifiedPlan.success_criteria.slice(0, 5).map(c => `- ${c}`).join('\n')}
---
## Next Steps
### Option 1: Direct Execution
Execute merged plan with unified-execute-with-file:
\`\`\`
/workflow:unified-execute-with-file -p ${sessionFolder}/unified-plan.json
\`\`\`
### Option 2: Detailed Planning
Create detailed IMPL_PLAN from merged plan:
\`\`\`
/workflow:plan "Based on merged plan from ${planPattern}"
\`\`\`
### Option 3: Review Conflicts
Review detailed conflict analysis:
\`\`\`
cat ${sessionFolder}/resolutions.json
\`\`\`
---
## Artifacts
- **source-index.json** - All input plans and sources
- **conflicts.json** - Conflict detection results
- **resolutions.json** - How each conflict was resolved
- **unified-plan.json** - Merged plan data structure (for execution)
- **unified-plan.md** - This document (human-readable)
```
---
## Session Folder Structure
```
.workflow/.merged/{sessionId}/
├── merge.md # Merge process and decisions
├── source-index.json # All input plan sources
├── conflicts.json # Detected conflicts
├── resolutions.json # Conflict resolutions applied
├── unified-plan.json # Merged plan (machine-parseable, for execution)
└── unified-plan.md # Execution-ready plan (human-readable)
```
---
## Resolution Rules
### Rule 1: Consensus (default)
- Use median or average of conflicting estimates
- Good for: Multiple similar perspectives
- Tradeoff: May miss important minority viewpoints
### Rule 2: Priority-Based
- The first plan has highest priority; later plans serve as fallbacks
- Good for: Clear ranking of plan sources
- Tradeoff: Discards valuable alternative perspectives
### Rule 3: Hierarchy
- User explicitly ranks importance of each plan
- Good for: Mixed-source plans (engineering + product + leadership)
- Tradeoff: Requires user input
---
## Input Format Support
| Source Type | Detection | Parsing | Notes |
|-------------|-----------|---------|-------|
| **Brainstorm** | `.brainstorm/*/synthesis.json` | Top ideas → tasks | Ideas converted to work items |
| **Analysis** | `.analysis/*/conclusions.json` | Recommendations → tasks | Recommendations prioritized |
| **Quick-Plan** | `.planning/*/synthesis.json` | Direct task list | Already normalized |
| **IMPL_PLAN** | `*IMPL_PLAN.md` | Markdown → tasks | Parsed from markdown structure |
| **Task JSON** | `.json` with `tasks` key | Direct mapping | Requires standard schema |
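
The detection column maps naturally to a small dispatcher. A sketch (path patterns mirror the table; `content` is the parsed JSON when available — parsing itself lives elsewhere):

```javascript
// Hypothetical dispatcher matching the detection rules above.
function detectSourceType(path, content) {
  if (/\.brainstorm\/.*\/synthesis\.json$/.test(path)) return 'brainstorm'
  if (/\.analysis\/.*\/conclusions\.json$/.test(path)) return 'analysis'
  if (/\.planning\/.*\/synthesis\.json$/.test(path)) return 'quick-plan'
  if (/IMPL_PLAN\.md$/.test(path)) return 'impl-plan'
  if (path.endsWith('.json') && content && Array.isArray(content.tasks)) return 'task-json'
  return 'unknown' // unsupported formats are skipped, per Error Handling
}
```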
---
## Error Handling
| Situation | Action |
|-----------|--------|
| No plans found | Suggest search terms, list available plans |
| Incompatible formats | Skip unsupported format, continue with others |
| Circular dependencies | Alert user, suggest manual review |
| Unresolvable conflicts | Require user decision (unless --yes + conflict rule) |
| Contradictory recommendations | Document both options for user consideration |
---
## Usage Patterns
### Pattern 1: Merge Multiple Brainstorms
```bash
/workflow:merge-plans-with-file "authentication" -y -r consensus
# → Finds all brainstorm sessions with "auth"
# → Merges top ideas into unified task list
# → Uses consensus method for conflicts
```
### Pattern 2: Synthesize Team Input
```bash
/workflow:merge-plans-with-file "payment-integration" -r hierarchy
# → Loads plans from different team members
# → Asks for ranking by importance
# → Applies hierarchy-based conflict resolution
```
### Pattern 3: Bridge Planning Phases
```bash
/workflow:merge-plans-with-file "user-auth" -f analysis
# → Takes analysis conclusions
# → Merges with existing quick-plans
# → Produces execution-ready plan
```
---
## Advanced: Custom Conflict Resolution
For complex conflict scenarios, create a custom resolution script:
```
.workflow/.merged/{sessionId}/
└── custom-resolutions.js (optional)
- Define custom conflict resolution logic
- Applied after automatic resolution
- Override specific decisions
```
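No schema for `custom-resolutions.js` is prescribed here; one hypothetical shape, in which each override receives the automatic resolution and may return a replacement:

```javascript
// Hypothetical custom-resolutions.js shape (illustrative only — the
// workflow does not define this interface). The merge step would load
// this object and call overrideEffort after automatic resolution.
const customResolutions = {
  overrideEffort(conflict, autoResolution) {
    // Example policy: always trust plan 0's estimate for security work.
    if (conflict.task_group.includes('security')) {
      const preferred = conflict.conflicting_tasks.find(t => t.from_plan === 0)
      if (preferred) {
        return { ...autoResolution, resolved_estimate: preferred.effort, method: 'custom-override' }
      }
    }
    return autoResolution // keep the automatic decision
  }
}
```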
---
## Best Practices
1. **Before merging**:
- Ensure all source plans have same quality level
- Verify plans address same scope/topic
- Document any special considerations
2. **During merging**:
- Review conflict matrix (conflicts.json)
- Understand resolution rationale (resolutions.json)
- Challenge assumptions if results seem odd
3. **After merging**:
- Validate unified plan makes sense
- Review critical path
- Ensure no important details lost
- Execute or iterate if needed
---
## Integration with Other Workflows
```
Multiple Brainstorms / Analyses
├─ brainstorm-with-file (session 1)
├─ brainstorm-with-file (session 2)
├─ analyze-with-file (session 3)
merge-plans-with-file ◄──── This workflow
unified-plan.json
├─ /workflow:unified-execute-with-file (direct execution)
├─ /workflow:plan (detailed planning)
└─ /workflow:quick-plan-with-file (refinement)
```
---
## Comparison: When to Use Which Merge Rule
| Rule | Use When | Pros | Cons |
|------|----------|------|------|
| **Consensus** | Similar-quality inputs | Fair, balanced | May miss extremes |
| **Priority** | Clear hierarchy | Simple, predictable | May bias to first input |
| **Hierarchy** | Mixed stakeholders | Respects importance | Requires user ranking |
---
**Ready to execute**: Run `/workflow:merge-plans-with-file` to start merging plans!


@@ -119,7 +119,7 @@ CONTEXT: Existing user database schema, REST API endpoints
**After Phase 1**: Initialize planning-notes.md with user intent
```javascript
// Create planning notes document with N+1 context support
const planningNotesPath = `.workflow/active/${sessionId}/planning-notes.md`
const userGoal = structuredDescription.goal
const userConstraints = structuredDescription.context || "None specified"
@@ -144,6 +144,19 @@ Write(planningNotesPath, `# Planning Notes
## Consolidated Constraints (Phase 4 Input)
1. ${userConstraints}
---
## Task Generation (Phase 4)
(To be filled by action-planning-agent)
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
### Deferred
- [ ] (For N+1)
`)
```


@@ -1,808 +0,0 @@
---
name: quick-plan-with-file
description: Multi-agent rapid planning with minimal documentation, conflict resolution, and actionable synthesis. Designed as a lightweight planning supplement between brainstorm and full implementation planning
argument-hint: "[-y|--yes] [-c|--continue] [-f|--from <type>] \"planning topic or task description\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm planning decisions, use aggressive parallelization, minimal user interaction.
# Workflow Quick-Plan-With-File Command (/workflow:quick-plan-with-file)
## Overview
Multi-agent rapid planning workflow with **minimal documentation overhead**. Coordinates parallel agent analysis, synthesizes conflicting perspectives into actionable decisions, and generates a lightweight implementation-ready plan.
**Core workflow**: Parse Input → Parallel Analysis → Conflict Resolution → Plan Synthesis → Output
**Key features**:
- **Plan Format Agnostic**: Consumes brainstorm conclusions, analysis recommendations, or raw task descriptions
- **Minimal Docs**: Single `plan.md` (no lengthy brainstorm.md or discussion.md)
- **Parallel Multi-Agent**: 3-4 concurrent agent perspectives (architecture, implementation, validation, risk)
- **Conflict Resolution**: Automatic conflict detection and resolution via synthesis agent
- **Actionable Output**: Direct task breakdown ready for execution
- **Session Resumable**: Continue if interrupted, checkpoint at each phase
## Usage
```bash
/workflow:quick-plan-with-file [FLAGS] <PLANNING_TOPIC>
# Flags
-y, --yes Auto-confirm decisions, use defaults
-c, --continue Continue existing session (auto-detected)
-f, --from <type> Input source type: brainstorm|analysis|task|raw
# Arguments
<planning-topic> Planning topic, task, or reference to planning artifact
# Examples
/workflow:quick-plan-with-file "Implement a distributed cache layer supporting Redis and in-memory backends"
/workflow:quick-plan-with-file --continue "cache layer planning" # Continue
/workflow:quick-plan-with-file -y -f analysis "Generate an implementation plan from analysis conclusions" # Auto mode
/workflow:quick-plan-with-file --from brainstorm BS-rate-limiting-2025-01-28 # From artifact
```
## Execution Process
```
Input Validation & Loading:
├─ Parse input (topic | artifact reference)
├─ Load artifact if referenced (synthesis.json | conclusions.json | etc.)
├─ Extract key constraints and requirements
└─ Initialize session folder and plan.md
Session Initialization:
├─ Create .workflow/.planning/{sessionId}/
├─ Initialize plan.md with input summary
├─ Parse existing output (if --from artifact)
└─ Define planning dimensions & focus areas
Phase 1: Parallel Multi-Agent Analysis (concurrent)
├─ Agent 1 (Architecture): High-level design & decomposition
├─ Agent 2 (Implementation): Technical approach & feasibility
├─ Agent 3 (Validation): Risk analysis & edge cases
├─ Agent 4 (Decision): Recommendations & tradeoffs
└─ Aggregate findings into perspectives.json
Phase 2: Conflict Detection & Resolution
├─ Analyze agent perspectives for contradictions
├─ Identify critical decision points
├─ Generate synthesis via arbitration agent
├─ Document conflicts and resolutions
└─ Update plan.md with decisive recommendations
Phase 3: Plan Synthesis
├─ Consolidate all insights
├─ Generate actionable task breakdown
├─ Create execution strategy
├─ Document assumptions & risks
└─ Generate synthesis.md with ready-to-execute tasks
Output:
├─ .workflow/.planning/{sessionId}/plan.md (minimal, actionable)
├─ .workflow/.planning/{sessionId}/perspectives.json (agent findings)
├─ .workflow/.planning/{sessionId}/conflicts.json (decision points)
├─ .workflow/.planning/{sessionId}/synthesis.md (task breakdown)
└─ Optional: Feed to /workflow:unified-execute-with-file
```
## Implementation
### Session Setup & Input Loading
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
// Parse arguments
const planningTopic = "$PLANNING_TOPIC"
const inputType = $ARGUMENTS.match(/--from\s+(\w+)/)?.[1] || 'raw'
const isAutoMode = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const isContinue = $ARGUMENTS.includes('--continue') || $ARGUMENTS.includes('-c')
// Auto-detect artifact if referenced
let artifact = null
let artifactContent = null
if (inputType === 'brainstorm' || planningTopic.startsWith('BS-')) {
const sessionId = planningTopic
const synthesisPath = `.workflow/.brainstorm/${sessionId}/synthesis.json`
if (fs.existsSync(synthesisPath)) {
artifact = { type: 'brainstorm', path: synthesisPath }
artifactContent = JSON.parse(Read(synthesisPath))
}
} else if (inputType === 'analysis' || planningTopic.startsWith('ANL-')) {
const sessionId = planningTopic
const conclusionsPath = `.workflow/.analysis/${sessionId}/conclusions.json`
if (fs.existsSync(conclusionsPath)) {
artifact = { type: 'analysis', path: conclusionsPath }
artifactContent = JSON.parse(Read(conclusionsPath))
}
}
// Generate session ID
const planSlug = planningTopic.toLowerCase()
.replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
.substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `PLAN-${planSlug}-${dateStr}`
const sessionFolder = `.workflow/.planning/${sessionId}`
// Session mode detection
const sessionExists = fs.existsSync(sessionFolder)
const hasPlan = sessionExists && fs.existsSync(`${sessionFolder}/plan.md`)
const mode = (hasPlan || isContinue) ? 'continue' : 'new'
if (!sessionExists) {
bash(`mkdir -p ${sessionFolder}`)
}
```
---
### Phase 1: Initialize plan.md (Minimal)
```markdown
# Quick Planning Session
**Session ID**: ${sessionId}
**Topic**: ${planningTopic}
**Started**: ${getUtc8ISOString()}
**Mode**: ${mode}
---
## Input Context
${artifact ? `
**Source**: ${artifact.type} artifact
**Path**: ${artifact.path}
**Artifact Summary**:
${artifact.type === 'brainstorm' ? `
- Topic: ${artifactContent.topic}
- Top Ideas: ${artifactContent.top_ideas?.length || 0}
- Key Insights: ${artifactContent.key_insights?.slice(0, 2).join(', ') || 'N/A'}
` : artifact.type === 'analysis' ? `
- Topic: ${artifactContent.topic}
- Key Conclusions: ${artifactContent.key_conclusions?.length || 0}
- Recommendations: ${artifactContent.recommendations?.length || 0}
` : ''}
` : `
**User Input**: ${planningTopic}
`}
---
## Planning Dimensions
*To be populated after agent analysis*
---
## Key Decisions
*Conflict resolution and recommendations - to be populated*
---
## Implementation Plan
*Task breakdown - to be populated after synthesis*
---
## Progress
- [ ] Multi-agent analysis
- [ ] Conflict detection
- [ ] Plan synthesis
- [ ] Ready for execution
```
---
### Phase 2: Parallel Multi-Agent Analysis
```javascript
const analysisPrompt = artifact
? `Convert ${artifact.type} artifact to planning requirements and execute parallel analysis`
: `Create planning breakdown for: ${planningTopic}`
// Prepare context for agents
const agentContext = {
topic: planningTopic,
artifact: artifact ? {
type: artifact.type,
summary: extractArtifactSummary(artifactContent)
} : null,
planning_focus: determineFocusAreas(planningTopic),
constraints: extractConstraints(planningTopic, artifactContent)
}
// Agent 1: Architecture & Design
const archPromise = Bash({
command: `ccw cli -p "
PURPOSE: Architecture & high-level design planning for '${planningTopic}'
Success: Clear component decomposition, interface design, and data flow
TASK:
• Decompose problem into major components/modules
• Identify architectural patterns and integration points
• Design interfaces and data models
• Assess scalability and maintainability implications
• Propose architectural approach with rationale
MODE: analysis
CONTEXT: @**/*
${artifact ? `| Source artifact: ${artifact.type}` : ''}
EXPECTED:
- Component decomposition (box diagram in text)
- Module interfaces and responsibilities
- Data flow between components
- Architectural patterns applied
- Scalability assessment (1-5 rating)
- Risks from architectural perspective
CONSTRAINTS: Focus on long-term maintainability
" --tool gemini --mode analysis`,
run_in_background: true
})
// Agent 2: Implementation & Feasibility
const implPromise = Bash({
command: `ccw cli -p "
PURPOSE: Implementation approach & technical feasibility for '${planningTopic}'
Success: Concrete implementation strategy with realistic resource estimates
TASK:
• Evaluate technical feasibility of approach
• Identify required technologies and dependencies
• Estimate effort: high/medium/low + rationale
• Suggest implementation phases and milestones
• Highlight technical blockers or challenges
MODE: analysis
CONTEXT: @**/*
${artifact ? `| Source artifact: ${artifact.type}` : ''}
EXPECTED:
- Technology stack recommendation
- Implementation complexity: high|medium|low with justification
- Estimated effort breakdown (analysis/design/coding/testing/deployment)
- Key technical decisions with tradeoffs
- Potential blockers and mitigations
- Suggested implementation phases
- Reusable components or libraries
CONSTRAINTS: Realistic with current tech stack
" --tool codex --mode analysis`,
run_in_background: true
})
// Agent 3: Risk & Validation
const riskPromise = Bash({
command: `ccw cli -p "
PURPOSE: Risk analysis and validation strategy for '${planningTopic}'
Success: Comprehensive risk matrix with testing strategy
TASK:
• Identify technical risks and failure scenarios
• Assess business/timeline risks
• Define validation/testing strategy
• Suggest monitoring and observability requirements
• Rate overall risk level (low/medium/high)
MODE: analysis
CONTEXT: @**/*
${artifact ? `| Source artifact: ${artifact.type}` : ''}
EXPECTED:
- Risk matrix (likelihood × impact, 1-5 each)
- Top 3 technical risks with mitigations
- Top 3 timeline/resource risks with mitigations
- Testing strategy (unit/integration/e2e/performance)
- Deployment strategy and rollback plan
- Monitoring/observability requirements
- Overall risk rating with confidence (low/medium/high)
CONSTRAINTS: Be realistic, not pessimistic
" --tool claude --mode analysis`,
run_in_background: true
})
// Agent 4: Decisions & Recommendations
const decisionPromise = Bash({
command: `ccw cli -p "
PURPOSE: Strategic decisions and execution recommendations for '${planningTopic}'
Success: Clear recommended approach with tradeoff analysis
TASK:
• Synthesize all considerations into recommendations
• Clearly identify critical decision points
• Outline key tradeoffs (speed vs quality, scope vs timeline, etc.)
• Propose go/no-go decision criteria
• Suggest execution strategy and sequencing
MODE: analysis
CONTEXT: @**/*
${artifact ? `| Source artifact: ${artifact.type}` : ''}
EXPECTED:
- Primary recommendation with strong rationale
- Alternative approaches with pros/cons
- 2-3 critical decision points with recommended choices
- Key tradeoffs and what we're optimizing for
- Success metrics and go/no-go criteria
- Suggested execution sequencing
- Resource requirements and dependencies
CONSTRAINTS: Focus on actionable decisions, not analysis
" --tool gemini --mode analysis`,
run_in_background: true
})
// Wait for all parallel analyses
const [archResult, implResult, riskResult, decisionResult] = await Promise.all([
archPromise, implPromise, riskPromise, decisionPromise
])
```
---
### Phase 3: Aggregate Perspectives
```javascript
// Parse and structure agent findings
const perspectives = {
session_id: sessionId,
timestamp: getUtc8ISOString(),
topic: planningTopic,
source_artifact: artifact?.type || 'raw',
architecture: {
source: 'gemini (design)',
components: extractComponents(archResult),
interfaces: extractInterfaces(archResult),
patterns: extractPatterns(archResult),
scalability_rating: extractRating(archResult, 'scalability'),
risks_from_design: extractRisks(archResult)
},
implementation: {
source: 'codex (pragmatic)',
technology_stack: extractStack(implResult),
complexity: extractComplexity(implResult),
effort_breakdown: extractEffort(implResult),
blockers: extractBlockers(implResult),
phases: extractPhases(implResult)
},
validation: {
source: 'claude (systematic)',
risk_matrix: extractRiskMatrix(riskResult),
top_risks: extractTopRisks(riskResult),
testing_strategy: extractTestingStrategy(riskResult),
deployment_strategy: extractDeploymentStrategy(riskResult),
monitoring_requirements: extractMonitoring(riskResult),
overall_risk_rating: extractRiskRating(riskResult)
},
recommendation: {
source: 'gemini (synthesis)',
primary_approach: extractPrimaryApproach(decisionResult),
alternatives: extractAlternatives(decisionResult),
critical_decisions: extractDecisions(decisionResult),
tradeoffs: extractTradeoffs(decisionResult),
success_criteria: extractCriteria(decisionResult),
execution_sequence: extractSequence(decisionResult)
},
analysis_timestamp: getUtc8ISOString()
}
Write(`${sessionFolder}/perspectives.json`, JSON.stringify(perspectives, null, 2))
```
---
### Phase 4: Conflict Detection & Resolution
```javascript
// Analyze for conflicts and contradictions
const conflicts = detectConflicts({
arch_vs_impl: compareArchitectureAndImplementation(perspectives),
design_vs_risk: compareDesignAndRisk(perspectives),
effort_vs_scope: compareEffortAndScope(perspectives),
timeline_implications: extractTimingConflicts(perspectives)
})
// If conflicts exist, invoke arbitration agent
if (conflicts.critical.length > 0) {
const arbitrationResult = await Bash({
command: `ccw cli -p "
PURPOSE: Resolve planning conflicts and generate unified recommendation
Input: ${JSON.stringify(conflicts, null, 2)}
TASK:
• Review all conflicts presented
• Recommend resolution for each critical conflict
• Explain tradeoff choices
• Identify what we're optimizing for (speed/quality/risk/resource)
• Generate unified execution strategy
MODE: analysis
EXPECTED:
- For each conflict: recommended resolution + rationale
- Unified optimization criteria (what matters most?)
- Final recommendation with confidence level
- Any unresolved tensions that need user input
CONSTRAINTS: Be decisive, not fence-sitting
" --tool gemini --mode analysis`,
run_in_background: false
})
const conflictResolution = {
detected_conflicts: conflicts,
arbitration_result: arbitrationResult,
timestamp: getUtc8ISOString()
}
Write(`${sessionFolder}/conflicts.json`, JSON.stringify(conflictResolution, null, 2))
}
```
---
### Phase 5: Plan Synthesis & Task Breakdown
```javascript
const synthesisPrompt = `
Given the planning context:
- Topic: ${planningTopic}
- Architecture: ${perspectives.architecture.components.map(c => c.name).join(', ')}
- Implementation Complexity: ${perspectives.implementation.complexity}
- Timeline Risk: ${perspectives.validation.overall_risk_rating}
- Primary Recommendation: ${perspectives.recommendation.primary_approach.summary}
Generate a minimal but complete implementation plan with:
1. Task breakdown (5-8 major tasks)
2. Dependencies between tasks
3. For each task: what needs to be done, why, and key considerations
4. Success criteria for the entire effort
5. Known risks and mitigation strategies
Output as structured task list ready for execution.
`
const synthesisResult = await Bash({
command: `ccw cli -p "${synthesisPrompt}" --tool gemini --mode analysis`,
run_in_background: false
})
// Parse synthesis and generate task breakdown
const tasks = parseTaskBreakdown(synthesisResult)
const synthesis = {
session_id: sessionId,
planning_topic: planningTopic,
completed: getUtc8ISOString(),
// Summary
executive_summary: perspectives.recommendation.primary_approach.summary,
optimization_focus: extractOptimizationFocus(perspectives),
// Architecture
architecture_approach: perspectives.architecture.patterns[0] || 'TBD',
key_components: perspectives.architecture.components.slice(0, 5),
// Implementation
technology_stack: perspectives.implementation.technology_stack,
complexity_level: perspectives.implementation.complexity,
estimated_effort: perspectives.implementation.effort_breakdown,
// Risks & Validation
top_risks: perspectives.validation.top_risks.slice(0, 3),
testing_approach: perspectives.validation.testing_strategy,
// Execution
phases: perspectives.implementation.phases,
critical_path_tasks: extractCriticalPath(tasks),
total_tasks: tasks.length,
// Task breakdown (ready for unified-execute-with-file)
tasks: tasks.map(task => ({
id: task.id,
title: task.title,
description: task.description,
type: task.type,
dependencies: task.dependencies,
effort_estimate: task.effort,
success_criteria: task.criteria
}))
}
Write(`${sessionFolder}/synthesis.md`, formatSynthesisMarkdown(synthesis))
Write(`${sessionFolder}/synthesis.json`, JSON.stringify(synthesis, null, 2))
```
---
### Phase 6: Update plan.md with Results
```markdown
# Quick Planning Session
**Session ID**: ${sessionId}
**Topic**: ${planningTopic}
**Started**: ${startTime}
**Completed**: ${completionTime}
---
## Executive Summary
${synthesis.executive_summary}
**Optimization Focus**: ${synthesis.optimization_focus}
**Complexity**: ${synthesis.complexity_level}
**Estimated Effort**: ${formatEffort(synthesis.estimated_effort)}
---
## Architecture
**Primary Pattern**: ${synthesis.architecture_approach}
**Key Components**:
${synthesis.key_components.map((c, i) => `${i+1}. ${c.name}: ${c.responsibility}`).join('\n')}
---
## Implementation Strategy
**Technology Stack**:
${synthesis.technology_stack.map(t => `- ${t}`).join('\n')}
**Phases**:
${synthesis.phases.map((p, i) => `${i+1}. ${p.name} (${p.effort})`).join('\n')}
---
## Risk Assessment
**Overall Risk Level**: ${synthesis.top_risks[0].risk_level}
**Top 3 Risks**:
${synthesis.top_risks.map((r, i) => `
${i+1}. **${r.title}** (Impact: ${r.impact})
- Mitigation: ${r.mitigation}
`).join('\n')}
**Testing Approach**: ${synthesis.testing_approach}
---
## Execution Plan
**Total Tasks**: ${synthesis.total_tasks}
**Critical Path**: ${synthesis.critical_path_tasks.map(t => t.id).join(' → ')}
### Task Breakdown
${synthesis.tasks.map((task, i) => `
${i+1}. **${task.id}: ${task.title}** (Effort: ${task.effort_estimate})
- ${task.description}
- Depends on: ${task.dependencies.join(', ') || 'none'}
- Success: ${task.success_criteria}
`).join('\n')}
---
## Next Steps
**Recommended**: Execute with \`/workflow:unified-execute-with-file\` using:
\`\`\`
/workflow:unified-execute-with-file -p ${sessionFolder}/synthesis.json
\`\`\`
---
## Artifacts
- **Perspectives**: ${sessionFolder}/perspectives.json (all agent findings)
- **Conflicts**: ${sessionFolder}/conflicts.json (decision points and resolutions)
- **Synthesis**: ${sessionFolder}/synthesis.json (task breakdown for execution)
```
---
## Session Folder Structure
```
.workflow/.planning/{sessionId}/
├── plan.md # Minimal, actionable planning doc
├── perspectives.json # Multi-agent findings (architecture, impl, risk, decision)
├── conflicts.json # Detected conflicts and resolutions (if any)
├── synthesis.json # Task breakdown ready for execution
└── synthesis.md # Human-readable execution plan
```
---
## Multi-Agent Roles
| Agent | Focus | Input | Output |
|-------|-------|-------|--------|
| **Gemini (Design)** | Architecture & design patterns | Topic + constraints | Components, interfaces, patterns, scalability |
| **Codex (Pragmatic)** | Implementation reality | Topic + architecture | Tech stack, effort, phases, blockers |
| **Claude (Validation)** | Risk & testing | Architecture + impl | Risk matrix, test strategy, monitoring |
| **Gemini (Decision)** | Synthesis & strategy | All findings | Recommendations, tradeoffs, execution plan |
---
## Conflict Resolution Strategy
**Auto-Resolution for conflicts**:
1. **Architecture vs Implementation**: Recommend design-for-feasibility approach
2. **Scope vs Timeline**: Prioritize critical path, defer nice-to-haves
3. **Quality vs Speed**: Suggest iterative approach (MVP + iterations)
4. **Resource vs Effort**: Identify parallelizable tasks
**Require User Input for**:
- Strategic choices (which feature to prioritize?)
- Tool/technology decisions with strong team preferences
- Budget/resource constraints not stated in planning topic
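The routing above can be sketched as a lookup from conflict category to auto-resolution strategy, with anything unmapped falling through to user input. A minimal illustration; the category names and `resolveConflict` helper are assumptions, not part of the workflow spec:

```javascript
// Map detected conflict categories to auto-resolution strategies.
// Unmapped categories (strategic, tooling, budget) require user input.
const AUTO_RESOLUTIONS = {
  'architecture-vs-implementation': 'design-for-feasibility',
  'scope-vs-timeline': 'prioritize-critical-path',
  'quality-vs-speed': 'iterative-mvp',
  'resource-vs-effort': 'parallelize-tasks'
};

function resolveConflict(conflict) {
  const strategy = AUTO_RESOLUTIONS[conflict.category];
  if (strategy) {
    return { resolved: true, strategy, conflict };
  }
  // Falls through to AskUserQuestion in the main flow
  return { resolved: false, requiresUserInput: true, conflict };
}
```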
---
## Continue & Resume
```bash
/workflow:quick-plan-with-file --continue "planning-topic"
```
When continuing:
1. Load existing plan.md and perspectives.json
2. Identify what's incomplete
3. Re-run affected agents (if planning has changed)
4. Update plan.md with new findings
5. Generate updated synthesis.json
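Step 3 (re-run affected agents) amounts to a completeness check over perspectives.json. A minimal sketch; the agent keys and `status` field shape are assumptions based on the Multi-Agent Roles table:

```javascript
// The four perspectives expected in perspectives.json (see Multi-Agent Roles)
const REQUIRED_AGENTS = ['design', 'pragmatic', 'validation', 'decision'];

// Return the agents whose findings are missing or incomplete,
// so only those are re-run on --continue.
function findIncompleteAgents(perspectives) {
  return REQUIRED_AGENTS.filter(
    agent => !perspectives[agent] || perspectives[agent].status !== 'complete'
  );
}
```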
---
## Integration Flow
```
Input Source:
├─ Raw task description
├─ Brainstorm synthesis.json
└─ Analysis conclusions.json
/workflow:quick-plan-with-file
plan.md + synthesis.json
/workflow:unified-execute-with-file
Implementation
```
---
## Usage Patterns
### Pattern 1: Quick Planning from Task
```bash
# User has a task, needs rapid multi-perspective plan
/workflow:quick-plan-with-file -y "Implement a real-time notification system supporting push and WebSocket"
# → Creates plan in ~5 minutes
# → Ready for execution
```
### Pattern 2: Convert Brainstorm to Executable Plan
```bash
# User completed brainstorm, wants to convert top idea to executable plan
/workflow:quick-plan-with-file --from brainstorm BS-notifications-2025-01-28
# → Reads synthesis.json from brainstorm
# → Generates implementation plan
# → Ready for unified-execute-with-file
```
### Pattern 3: From Analysis to Implementation
```bash
# Analysis completed, now need execution plan
/workflow:quick-plan-with-file --from analysis ANL-auth-architecture-2025-01-28
# → Reads conclusions.json from analysis
# → Generates planning with recommendations
# → Output task breakdown
```
### Pattern 4: Planning with Interactive Conflict Resolution
```bash
# Full planning with user involvement in decision-making
/workflow:quick-plan-with-file "Integrate the new payment flow"
# → Without -y flag
# → After conflict detection, asks user about tradeoffs
# → Generates plan based on user preferences
```
---
## Comparison with Other Workflows
| Feature | brainstorm | analyze | quick-plan | plan |
|---------|-----------|---------|-----------|------|
| **Purpose** | Ideation | Investigation | Lightweight planning | Detailed planning |
| **Multi-agent** | 3 perspectives | 2 CLI + explore | 4 concurrent agents | N/A (single) |
| **Documentation** | Extensive | Extensive | Minimal | Standard |
| **Output** | Ideas + synthesis | Conclusions | Executable tasks | IMPL_PLAN |
| **Typical Duration** | 30-60 min | 20-30 min | 5-10 min | 15-20 min |
| **User Interaction** | High (multi-round) | High (Q&A) | Low (decisions) | Medium |
---
## Error Handling
| Situation | Action |
|-----------|--------|
| Agents conflict on approach | Arbitration agent decides, document in conflicts.json |
| Missing critical files | Continue with available context, note limitations |
| Insufficient task breakdown | Ask user for planning focus areas |
| Effort estimate too high | Suggest MVP approach or phasing |
| Unclear requirements | Ask clarifying questions via AskUserQuestion |
| Agent timeout | Use last successful result, note partial analysis |
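The "Agent timeout" row can be sketched as a race between the agent call and a timer. The helper name and result shape here are illustrative, not part of the workflow:

```javascript
// Race the agent against a timer; on timeout, fall back to the
// last successful result and mark the analysis as partial.
async function runAgentWithTimeout(runAgent, lastResult, timeoutMs) {
  const timer = new Promise(resolve =>
    setTimeout(() => resolve({ timedOut: true }), timeoutMs)
  );
  const result = await Promise.race([runAgent(), timer]);
  if (result && result.timedOut) {
    // Use last successful result, note partial analysis
    return { ...lastResult, partial: true };
  }
  return result;
}
```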
---
## Best Practices
1. **Use when**:
- You have clarity on WHAT but not HOW
- Need rapid multi-perspective planning
- Converting brainstorm/analysis into execution
- Want minimal planning overhead
2. **Avoid when**:
- Requirements are highly ambiguous (use brainstorm instead)
- Need deep investigation (use analyze instead)
- Want extensive planning document (use plan instead)
- No tech stack clarity (use analyze first)
3. **For best results**:
- Provide complete task/requirement description
- Include constraints and success criteria
- Specify preferences (speed vs quality vs risk)
- Review conflicts.json and make conscious tradeoff decisions
---
## Next Steps After Planning
### Feed to Execution
```bash
/workflow:unified-execute-with-file -p .workflow/.planning/{sessionId}/synthesis.json
```
### Detailed Planning if Needed
```bash
/workflow:plan "Based on quick-plan recommendations..."
```
### Continuous Refinement
```bash
/workflow:quick-plan-with-file --continue "{topic}" # Update plan with new constraints
```

View File

@@ -378,16 +378,26 @@ Hard Constraints:
- Return completion status with document count and task breakdown summary
## PLANNING NOTES RECORD (REQUIRED)
After completing all documents, append a brief execution record to planning-notes.md:
After completing, update planning-notes.md:
**File**: .workflow/active/{session_id}/planning-notes.md
**Location**: Create new section after "## Consolidated Constraints"
**Format**:
\`\`\`
## Task Generation (Phase 4)
1. **Task Generation (Phase 4)**: Task count and key tasks
2. **N+1 Context**: Key decisions (with rationale) + deferred items
\`\`\`markdown
## Task Generation (Phase 4)
### [Action-Planning Agent] YYYY-MM-DD
- **Note**: [auto-summary: task count, key tasks, dependencies, etc.]
- **Tasks**: [count] ([IDs])
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
| [choice] | [why] | [Yes/No] |
### Deferred
- [ ] [item] - [reason]
\`\`\`
`
)
@@ -543,19 +553,13 @@ Hard Constraints:
- Return: task count, task IDs, dependency summary (internal + cross-module)
## PLANNING NOTES RECORD (REQUIRED)
After completing module task JSONs, append a brief execution record to planning-notes.md:
After completing, append to planning-notes.md:
**File**: .workflow/active/{session_id}/planning-notes.md
**Location**: Create new section after "## Consolidated Constraints" (if not exists)
**Format**:
\`\`\`markdown
### [${module.name}] YYYY-MM-DD
- **Tasks**: [count] ([IDs])
- **CROSS deps**: [placeholders used]
\`\`\`
## Task Generation (Phase 4)
### [Action-Planning Agent - ${module.name}] YYYY-MM-DD
- **Note**: [auto-summary: this module's task count, key tasks, etc.]
\`\`\`
**Note**: Multiple module agents will append their records. Phase 3 Integration Coordinator will add final summary.
`
)
);
@@ -638,14 +642,21 @@ Module Count: ${modules.length}
- Return: task count, per-module breakdown, resolved dependency count
## PLANNING NOTES RECORD (REQUIRED)
After completing integration, append final summary to planning-notes.md:
After integration, update planning-notes.md:
**File**: .workflow/active/{session_id}/planning-notes.md
**Location**: Under "## Task Generation (Phase 4)" section (after module agent records)
**Format**:
\`\`\`
### [Integration Coordinator] YYYY-MM-DD
- **Note**: [auto-summary: total task count, cross-module dependency resolutions, etc.]
\`\`\`markdown
### [Coordinator] YYYY-MM-DD
- **Total**: [count] tasks
- **Resolved**: [CROSS:: resolutions]
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
| CROSS::X → IMPL-Y | [why this resolution] | [Yes/No] |
### Deferred
- [ ] [unresolved CROSS or conflict] - [reason]
\`\`\`
`
)

View File

@@ -155,27 +155,65 @@ bash(`mkdir -p ${executionFolder}`)
## Plan Format Parsers
Support multiple plan sources:
Support multiple plan sources (all JSON plans follow plan-json-schema.json):
```javascript
function parsePlan(content, filePath) {
const ext = filePath.split('.').pop()
if (filePath.includes('IMPL_PLAN')) {
return parseImplPlan(content) // From /workflow:plan
return parseImplPlan(content) // From /workflow:plan (markdown)
} else if (filePath.includes('brainstorm')) {
return parseBrainstormPlan(content) // From /workflow:brainstorm-with-file
} else if (filePath.includes('synthesis')) {
return parseSynthesisPlan(content) // From /workflow:brainstorm-with-file synthesis.json
} else if (filePath.includes('conclusions')) {
return parseConclusionsPlan(content) // From /workflow:analyze-with-file conclusions.json
} else if (filePath.endsWith('.json') && content.includes('tasks')) {
return parseTaskJson(content) // Direct task JSON
} else if (filePath.endsWith('.json') && content.includes('"tasks"')) {
return parsePlanJson(content) // Standard plan-json-schema (lite-plan, collaborative-plan, sub-plans)
}
throw new Error(`Unsupported plan format: ${filePath}`)
}
// Standard plan-json-schema parser
// Handles: lite-plan, collaborative-plan, sub-plans (all follow same schema)
function parsePlanJson(content) {
const plan = JSON.parse(content)
return {
type: plan.merge_metadata ? 'collaborative-plan' : 'lite-plan',
title: plan.summary?.split('.')[0] || 'Untitled Plan',
slug: plan._metadata?.session_id || generateSlug(plan.summary),
summary: plan.summary,
approach: plan.approach,
tasks: plan.tasks.map(task => ({
id: task.id,
type: inferTaskTypeFromAction(task.action),
title: task.title,
description: task.description,
dependencies: task.depends_on || [],
agent_type: selectAgentFromTask(task),
prompt: buildPromptFromTask(task),
files_to_modify: task.modification_points?.map(mp => mp.file) || [],
expected_output: task.acceptance || [],
priority: task.effort?.complexity === 'high' ? 'high' : 'normal',
estimated_duration: task.effort?.estimated_hours ? `${task.effort.estimated_hours}h` : null,
verification: task.verification,
risks: task.risks,
source_agent: task.source_agent // From collaborative-plan sub-agents
})),
flow_control: plan.flow_control,
data_flow: plan.data_flow,
design_decisions: plan.design_decisions,
estimatedDuration: plan.estimated_time,
recommended_execution: plan.recommended_execution,
complexity: plan.complexity,
merge_metadata: plan.merge_metadata, // Present if from collaborative-plan
_metadata: plan._metadata
}
}
// IMPL_PLAN.md parser
function parseImplPlan(content) {
// Extract:
@@ -216,6 +254,65 @@ function parseSynthesisPlan(content) {
recommendations: synthesis.recommendations
}
}
// Helper: Infer task type from action field
function inferTaskTypeFromAction(action) {
const actionMap = {
'Create': 'code',
'Update': 'code',
'Implement': 'code',
'Refactor': 'code',
'Add': 'code',
'Delete': 'code',
'Configure': 'config',
'Test': 'test',
'Fix': 'debug'
}
return actionMap[action] || 'code'
}
// Helper: Select agent based on task properties
function selectAgentFromTask(task) {
if (task.verification?.unit_tests?.length > 0) {
return 'tdd-developer'
} else if (task.action === 'Test') {
return 'test-fix-agent'
} else if (task.action === 'Fix') {
return 'debug-explore-agent'
} else {
return 'code-developer'
}
}
// Helper: Build prompt from task details
function buildPromptFromTask(task) {
let prompt = `## Task: ${task.title}\n\n${task.description}\n\n`
if (task.modification_points?.length > 0) {
prompt += `### Modification Points\n`
task.modification_points.forEach(mp => {
prompt += `- **${mp.file}**: ${mp.target} → ${mp.change}\n`
})
prompt += '\n'
}
if (task.implementation?.length > 0) {
prompt += `### Implementation Steps\n`
task.implementation.forEach((step, i) => {
prompt += `${i + 1}. ${step}\n`
})
prompt += '\n'
}
if (task.acceptance?.length > 0) {
prompt += `### Acceptance Criteria\n`
task.acceptance.forEach(ac => {
prompt += `- ${ac}\n`
})
}
return prompt
}
```
---
@@ -776,21 +873,6 @@ function calculateParallel(tasks) {
| Agent unavailable | Fallback to universal-executor |
| Execution interrupted | Support resume with `/workflow:unified-execute-with-file --continue` |
## Usage Recommendations
Use `/workflow:unified-execute-with-file` when:
- Executing any planning document (IMPL_PLAN.md, brainstorm conclusions, analysis recommendations)
- Multiple tasks with dependencies need orchestration
- Want minimal progress tracking without clutter
- Need to handle failures gracefully and resume
- Want to parallelize where possible but ensure correctness
Use for consuming output from:
- `/workflow:plan` → IMPL_PLAN.md
- `/workflow:brainstorm-with-file` → synthesis.json → execution
- `/workflow:analyze-with-file` → conclusions.json → execution
- `/workflow:debug-with-file` → recommendations → execution
- `/workflow:lite-plan` → task JSONs → execution
## Session Resume

View File

@@ -0,0 +1,394 @@
---
name: flow-coordinator
description: Template-driven workflow coordinator with minimal state tracking. Executes command chains from workflow templates with slash-command execution (mainprocess/async). Triggers on "flow-coordinator", "workflow template", "orchestrate".
allowed-tools: Task, AskUserQuestion, Read, Write, Bash, Glob, Grep
---
# Flow Coordinator
Lightweight workflow coordinator that executes command chains from predefined templates, supporting slash-command execution with mainprocess (blocking) and async (background) modes.
## Architecture
```
User Task → Select Template → status.json Init → Execute Steps → Complete
                                                  ↑                  │
                                                  └──── Resume ──────┘
                                                   (from status.json)
Step Execution:
execution mode?
├─ mainprocess → SlashCommand (blocking, main process)
└─ async → ccw cli --tool claude --mode write (background)
```
## Core Concepts
**Template-Driven**: Workflows defined as JSON templates in `templates/`, decoupled from coordinator logic.
**Execution Type**: `slash-command` only
- ALL workflow commands (`/workflow:*`) use `slash-command` type
- Two execution modes:
- `mainprocess`: SlashCommand (blocking, main process)
- `async`: CLI background (ccw cli with claude tool)
**Dynamic Discovery**: Templates discovered at runtime via Glob, not hardcoded.
---
## Execution Flow
```javascript
async function execute(task) {
  // 1. Discover and select template
  const templates = await discoverTemplates();
  const template = await selectTemplate(templates);

  // 2. Init status
  const sessionId = `fc-${timestamp()}`;
  const statusPath = `.workflow/.flow-coordinator/${sessionId}/status.json`;
  const status = initStatus(template, task);
  write(statusPath, JSON.stringify(status, null, 2));

  // 3. Execute steps based on execution config
  await executeSteps(status, statusPath);
}
async function executeSteps(status, statusPath) {
  for (let i = status.current; i < status.steps.length; i++) {
    const step = status.steps[i];
    status.current = i;

    // Execute based on step mode (all steps use slash-command type)
    const execConfig = step.execution || { type: 'slash-command', mode: 'mainprocess' };
    if (execConfig.mode === 'async') {
      // Async execution - stop and wait for hook callback
      await executeSlashCommandAsync(step, status, statusPath);
      return;
    }

    // Mainprocess execution - continue immediately
    await executeSlashCommandSync(step, status);
    step.status = 'done';
    status.current = i + 1; // advance past the finished step
    write(statusPath, JSON.stringify(status, null, 2));
  }

  // All steps complete
  if (status.current >= status.steps.length) {
    status.complete = true;
    write(statusPath, JSON.stringify(status, null, 2));
  }
}
```
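`initStatus` is referenced above but not defined in this document. A minimal sketch consistent with the Status Schema section below; the local `timestamp` helper is included only to make the sketch self-contained:

```javascript
// Produce YYYYMMDD-HHMMSS, matching the fc-* session ID format
function timestamp() {
  const s = new Date().toISOString().replace(/[-:T]/g, '').slice(0, 14);
  return `${s.slice(0, 8)}-${s.slice(8)}`;
}

// Copy template steps, reset runtime fields, and record the goal
function initStatus(template, task) {
  return {
    id: `fc-${timestamp()}`,
    template: template.name,
    goal: task,
    current: 0,
    steps: template.steps.map(step => ({ ...step, status: 'pending' })),
    complete: false
  };
}
```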
---
## Template Discovery
**Dynamic query** - never hardcode the template list:
```javascript
async function discoverTemplates() {
  // Discover all JSON templates
  const files = Glob('*.json', { path: 'templates/' });

  // Parse each template
  const templates = [];
  for (const file of files) {
    const content = JSON.parse(Read(file));
    templates.push({
      name: content.name,
      description: content.description,
      steps: content.steps.map(s => s.cmd).join(' → '),
      file: file
    });
  }
  return templates;
}
```
---
## Template Selection
User chooses from discovered templates:
```javascript
async function selectTemplate(templates) {
  // Build options from discovered templates
  const options = templates.slice(0, 4).map(t => ({
    label: t.name,
    description: t.steps
  }));

  const response = await AskUserQuestion({
    questions: [{
      question: 'Select workflow template:',
      header: 'Template',
      options: options,
      multiSelect: false
    }]
  });

  // Handle "Other" - show remaining templates or custom input
  if (response.template === 'Other') {
    return await selectFromRemainingTemplates(templates.slice(4));
  }
  return templates.find(t => t.name === response.template);
}
```
---
## Status Schema
**Creation**: Copy template JSON → Update `id`, `template`, `goal`, set all steps `status: "pending"`
**Location**: `.workflow/.flow-coordinator/{session-id}/status.json`
**Core Fields**:
- `id`: Session ID (fc-YYYYMMDD-HHMMSS)
- `template`: Template name
- `goal`: User task description
- `current`: Current step index
- `steps[]`: Step array from template (with runtime `status`, `session`, `taskId`)
- `complete`: All steps done?
**Step Status**: `pending` → `running` → `done` | `failed` | `skipped`
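For concreteness, a hypothetical status.json midway through the rapid template; all values are illustrative:

```json
{
  "id": "fc-20260130-154320",
  "template": "rapid",
  "goal": "Implement user registration",
  "current": 1,
  "steps": [
    {
      "cmd": "/workflow:lite-plan",
      "args": "\"{{goal}}\"",
      "status": "done",
      "session": "WFS-plan-20260130"
    },
    {
      "cmd": "/workflow:lite-execute",
      "args": "--in-memory",
      "status": "running",
      "taskId": "task-abc123"
    }
  ],
  "complete": false
}
```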
---
## Extended Template Schema
**Templates stored in**: `templates/*.json` (discovered at runtime via Glob)
**TemplateStep Fields**:
- `cmd`: Full command path (e.g., `/workflow:lite-plan`, `/workflow:execute`)
- `args?`: Arguments with `{{goal}}` and `{{prev}}` placeholders
- `unit?`: Minimum execution unit name (groups related commands)
- `optional?`: Can be skipped by user
- `execution`: Type and mode configuration
- `type`: Always `'slash-command'` (for all workflow commands)
- `mode`: `'mainprocess'` (blocking) or `'async'` (background)
- `contextHint?`: Natural language guidance for context assembly
**Template Example**:
```json
{
"name": "rapid",
"steps": [
{
"cmd": "/workflow:lite-plan",
"args": "\"{{goal}}\"",
"unit": "quick-implementation",
"execution": { "type": "slash-command", "mode": "mainprocess" },
"contextHint": "Create lightweight implementation plan"
},
{
"cmd": "/workflow:lite-execute",
"args": "--in-memory",
"unit": "quick-implementation",
"execution": { "type": "slash-command", "mode": "async" },
"contextHint": "Execute plan from previous step"
}
]
}
```
---
## Execution Implementation
### Mainprocess Mode (Blocking)
```javascript
async function executeSlashCommandSync(step, status) {
  // Build command: /workflow:cmd -y args
  const cmd = buildCommand(step, status);
  const result = await SlashCommand({ command: cmd });
  step.session = result.session_id;
  step.status = 'done';
  return result;
}
```
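`buildCommand` is referenced above but not defined in this document. A hypothetical sketch mirroring the placeholder substitution used for async prompts; the inline previous-session lookup is an assumption:

```javascript
// Build "/workflow:cmd -y args" with {{goal}}/{{prev}} substituted
function buildCommand(step, status) {
  let cmd = `${step.cmd} -y`;
  if (step.args) {
    // Previous step's recorded session, if any (assumed field name)
    const prev = (status.steps[status.current - 1] || {}).session || '';
    cmd += ' ' + step.args
      .replace('{{goal}}', status.goal)
      .replace('{{prev}}', prev);
  }
  return cmd;
}
```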
### Async Mode (Background)
```javascript
async function executeSlashCommandAsync(step, status, statusPath) {
  // Build prompt: /workflow:cmd -y args + context
  const prompt = buildCommandPrompt(step, status);
  step.status = 'running';
  write(statusPath, JSON.stringify(status, null, 2));

  // Execute via ccw cli in background
  const taskId = Bash(
    `ccw cli -p "${escapePrompt(prompt)}" --tool claude --mode write`,
    { run_in_background: true }
  ).task_id;

  step.taskId = taskId;
  write(statusPath, JSON.stringify(status, null, 2));
  console.log(`Executing: ${step.cmd} (async)`);
  console.log(`Resume: /flow-coordinator --resume ${status.id}`);
}
```
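`escapePrompt` is referenced above but not shown. A minimal sketch for embedding the prompt inside the double-quoted shell argument; the exact escaping rules are an assumption:

```javascript
// Escape characters that are special inside double quotes in a shell command
function escapePrompt(prompt) {
  return prompt
    .replace(/\\/g, '\\\\')
    .replace(/"/g, '\\"')
    .replace(/\$/g, '\\$')
    .replace(/`/g, '\\`');
}
```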
---
## Prompt Building
Prompts are built in the format `/workflow:cmd -y args` + context:
```javascript
function buildCommandPrompt(step, status) {
  // step.cmd already contains full path: /workflow:lite-plan, /workflow:execute, etc.
  let prompt = `${step.cmd} -y`;

  // Add arguments (with placeholder replacement)
  if (step.args) {
    const args = step.args
      .replace('{{goal}}', status.goal)
      .replace('{{prev}}', getPreviousSessionId(status));
    prompt += ` ${args}`;
  }

  // Add context based on contextHint
  if (step.contextHint) {
    const context = buildContextFromHint(step.contextHint, status);
    prompt += `\n\nContext:\n${context}`;
  } else {
    // Default context: previous session IDs
    const previousContext = collectPreviousResults(status);
    if (previousContext) {
      prompt += `\n\nPrevious results:\n${previousContext}`;
    }
  }
  return prompt;
}

function buildContextFromHint(hint, status) {
  // Parse contextHint instruction and build context accordingly
  // Examples:
  //   "Summarize IMPL_PLAN.md"  → read and summarize plan
  //   "List test coverage gaps" → analyze previous test results
  //   "Pass session ID"         → just return session reference
  return parseAndBuildContext(hint, status);
}
```
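`getPreviousSessionId` and `collectPreviousResults` are referenced above but not shown. Minimal sketches assuming the step fields from the Status Schema section:

```javascript
// Most recent recorded session among already-executed steps
function getPreviousSessionId(status) {
  for (let i = status.current - 1; i >= 0; i--) {
    if (status.steps[i].session) return status.steps[i].session;
  }
  return '';
}

// One summary line per completed step, for the default prompt context
function collectPreviousResults(status) {
  return status.steps
    .slice(0, status.current)
    .filter(s => s.status === 'done')
    .map(s => `- ${s.cmd}: ${s.session || 'no session recorded'}`)
    .join('\n');
}
```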
### Example Prompt Output
```
/workflow:lite-plan -y "Implement user registration"
Context:
Task: Implement user registration
Previous results:
- None (first step)
```
```
/workflow:execute -y --in-memory
Context:
Task: Implement user registration
Previous results:
- lite-plan: WFS-plan-20250130 (planning-context.md)
```
---
## User Interaction
### Step 1: Select Template
```
Select workflow template:
○ rapid lite-plan → lite-execute → test-cycle-execute
○ coupled plan → plan-verify → execute → review → test
○ bugfix lite-fix → lite-execute → test-cycle-execute
○ tdd tdd-plan → execute → tdd-verify
○ Other (more templates or custom)
```
### Step 2: Review Execution Plan
```
Template: coupled
Steps:
1. /workflow:plan (slash-command mainprocess)
2. /workflow:plan-verify (slash-command mainprocess)
3. /workflow:execute (slash-command async)
4. /workflow:review-session-cycle (slash-command mainprocess)
5. /workflow:review-cycle-fix (slash-command mainprocess)
6. /workflow:test-fix-gen (slash-command mainprocess)
7. /workflow:test-cycle-execute (slash-command async)
Proceed? [Confirm / Cancel]
```
---
## Resume Capability
```javascript
async function resume(sessionId) {
  const statusPath = `.workflow/.flow-coordinator/${sessionId}/status.json`;
  const status = JSON.parse(Read(statusPath));

  // Find first incomplete step
  const next = status.steps.findIndex(s => s.status !== 'done');
  if (next === -1) {
    console.log('All steps complete');
    return;
  }
  status.current = next;

  // Continue executing steps
  await executeSteps(status, statusPath);
}
```
---
## Available Templates
Templates discovered from `templates/*.json`:
| Template | Use Case | Steps |
|----------|----------|-------|
| rapid | Simple feature | /workflow:lite-plan → /workflow:lite-execute → /workflow:test-cycle-execute |
| coupled | Complex feature | /workflow:plan → /workflow:plan-verify → /workflow:execute → /workflow:review-session-cycle → /workflow:test-fix-gen |
| bugfix | Bug fix | /workflow:lite-fix → /workflow:lite-execute → /workflow:test-cycle-execute |
| tdd | Test-driven | /workflow:tdd-plan → /workflow:execute → /workflow:tdd-verify |
| test-fix | Fix failing tests | /workflow:test-fix-gen → /workflow:test-cycle-execute |
| brainstorm | Exploration | /workflow:brainstorm-with-file |
| debug | Debug with docs | /workflow:debug-with-file |
| analyze | Collaborative analysis | /workflow:analyze-with-file |
| issue | Issue workflow | /workflow:issue:plan → /workflow:issue:queue → /workflow:issue:execute |
---
## Design Principles
1. **Minimal fields**: Only essential tracking data
2. **Flat structure**: No nested objects beyond steps array
3. **Step-level execution**: Each step defines how it's executed
4. **Resumable**: Any step can be resumed from status
5. **Human readable**: Clear JSON format
---
## Reference Documents
| Document | Purpose |
|----------|---------|
| templates/*.json | Workflow templates (dynamic discovery) |

View File

@@ -0,0 +1,16 @@
{
"name": "analyze",
"description": "Collaborative analysis with multi-round discussion - deep exploration and understanding",
"level": 3,
"steps": [
{
"cmd": "/workflow:analyze-with-file",
"args": "\"{{goal}}\"",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Multi-round collaborative analysis with iterative understanding. Generate discussion.md with comprehensive analysis and conclusions"
}
]
}

View File

@@ -0,0 +1,36 @@
{
"name": "brainstorm-to-issue",
"description": "Bridge brainstorm session to issue workflow - convert exploration insights to executable issues",
"level": 4,
"steps": [
{
"cmd": "/workflow:issue:from-brainstorm",
"args": "--auto",
"unit": "brainstorm-to-issue",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Convert brainstorm session findings into issue plans and solutions"
},
{
"cmd": "/workflow:issue:queue",
"unit": "brainstorm-to-issue",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Build execution queue from converted brainstorm issues"
},
{
"cmd": "/workflow:issue:execute",
"args": "--queue auto",
"unit": "brainstorm-to-issue",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute issues from queue with state tracking"
}
]
}

View File

@@ -0,0 +1,16 @@
{
"name": "brainstorm",
"description": "Multi-perspective ideation with documentation - explore possibilities with multiple analytical viewpoints",
"level": 4,
"steps": [
{
"cmd": "/workflow:brainstorm-with-file",
"args": "\"{{goal}}\"",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Multi-perspective ideation with interactive diverge-converge cycles. Generate brainstorm.md with synthesis of ideas and recommendations"
}
]
}

View File

@@ -0,0 +1,16 @@
{
"name": "bugfix-hotfix",
"description": "Urgent production fix - immediate diagnosis and fix with minimal overhead",
"level": 1,
"steps": [
{
"cmd": "/workflow:lite-fix",
"args": "--hotfix \"{{goal}}\"",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Urgent hotfix mode: quick diagnosis and immediate fix for critical production issue"
}
]
}

View File

@@ -0,0 +1,47 @@
{
"name": "bugfix",
"description": "Standard bug fix workflow - lightweight diagnosis and execution with testing",
"level": 2,
"steps": [
{
"cmd": "/workflow:lite-fix",
"args": "\"{{goal}}\"",
"unit": "bug-fix",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Analyze bug report, trace execution flow, identify root cause with fix strategy"
},
{
"cmd": "/workflow:lite-execute",
"args": "--in-memory",
"unit": "bug-fix",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Implement fix based on diagnosis. Execute against in-memory state from lite-fix analysis."
},
{
"cmd": "/workflow:test-fix-gen",
"unit": "test-validation",
"optional": true,
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate test tasks to verify bug fix and prevent regression"
},
{
"cmd": "/workflow:test-cycle-execute",
"unit": "test-validation",
"optional": true,
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute test-fix cycle until all tests pass"
}
]
}

View File

@@ -0,0 +1,71 @@
{
"name": "coupled",
"description": "Full workflow for complex features - detailed planning with verification, execution, review, and testing",
"level": 3,
"steps": [
{
"cmd": "/workflow:plan",
"args": "\"{{goal}}\"",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Create detailed implementation plan with architecture design, file structure, dependencies, and milestones"
},
{
"cmd": "/workflow:plan-verify",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Verify IMPL_PLAN.md against requirements, check for missing details, conflicts, and quality gates"
},
{
"cmd": "/workflow:execute",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute implementation based on verified plan. Resume from planning session with all context preserved."
},
{
"cmd": "/workflow:review-session-cycle",
"unit": "code-review",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Perform multi-dimensional code review across correctness, security, performance, maintainability. Reference execution session for full code context."
},
{
"cmd": "/workflow:review-cycle-fix",
"unit": "code-review",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Fix issues identified in review findings with prioritization by severity levels"
},
{
"cmd": "/workflow:test-fix-gen",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate comprehensive test tasks for the implementation with coverage analysis"
},
{
"cmd": "/workflow:test-cycle-execute",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute iterative test-fix cycle until pass rate >= 95%"
}
]
}

View File

@@ -0,0 +1,16 @@
{
"name": "debug",
"description": "Hypothesis-driven debugging with documentation - systematic troubleshooting and logging",
"level": 3,
"steps": [
{
"cmd": "/workflow:debug-with-file",
"args": "\"{{goal}}\"",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Systematic debugging with hypothesis formation and verification. Generate understanding.md with root cause analysis and fix recommendations"
}
]
}

View File

@@ -0,0 +1,27 @@
{
"name": "docs",
"description": "Documentation generation workflow",
"level": 2,
"steps": [
{
"cmd": "/workflow:lite-plan",
"args": "\"{{goal}}\"",
"unit": "quick-documentation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Plan documentation structure and content organization"
},
{
"cmd": "/workflow:lite-execute",
"args": "--in-memory",
"unit": "quick-documentation",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute documentation generation from plan"
}
]
}

View File

@@ -0,0 +1,61 @@
{
"name": "full",
"description": "Comprehensive workflow - brainstorm exploration, planning verification, execution, and testing",
"level": 4,
"steps": [
{
"cmd": "/workflow:brainstorm:auto-parallel",
"args": "\"{{goal}}\"",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Multi-perspective exploration of requirements and possible approaches"
},
{
"cmd": "/workflow:plan",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Create detailed implementation plan based on brainstorm insights"
},
{
"cmd": "/workflow:plan-verify",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Verify plan quality and completeness"
},
{
"cmd": "/workflow:execute",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute implementation from verified plan"
},
{
"cmd": "/workflow:test-fix-gen",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate comprehensive test tasks"
},
{
"cmd": "/workflow:test-cycle-execute",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute test-fix cycle until pass rate >= 95%"
}
]
}

View File

@@ -0,0 +1,43 @@
{
"name": "issue",
"description": "Issue workflow - discover issues, create plans, queue execution, and resolve",
"level": "issue",
"steps": [
{
"cmd": "/workflow:issue:discover",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Discover pending issues from codebase for potential fixes"
},
{
"cmd": "/workflow:issue:plan",
"args": "--all-pending",
"unit": "issue-workflow",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Create execution plans for all discovered pending issues"
},
{
"cmd": "/workflow:issue:queue",
"unit": "issue-workflow",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Build execution queue with issue prioritization and dependencies"
},
{
"cmd": "/workflow:issue:execute",
"unit": "issue-workflow",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute issues from queue with state tracking and completion reporting"
}
]
}

View File

@@ -0,0 +1,16 @@
{
"name": "lite-lite-lite",
"description": "Ultra-lightweight for immediate simple tasks - minimal overhead, direct execution",
"level": 1,
"steps": [
{
"cmd": "/workflow:lite-lite-lite",
"args": "\"{{goal}}\"",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Direct task execution with minimal analysis and zero documentation overhead"
}
]
}

View File

@@ -0,0 +1,47 @@
{
"name": "multi-cli-plan",
"description": "Multi-perspective planning with cross-tool verification and execution",
"level": 3,
"steps": [
{
"cmd": "/workflow:multi-cli-plan",
"args": "\"{{goal}}\"",
"unit": "multi-cli-planning",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Multi-perspective analysis comparing different implementation approaches with trade-off analysis"
},
{
"cmd": "/workflow:lite-execute",
"args": "--in-memory",
"unit": "multi-cli-planning",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute best approach selected from multi-perspective analysis"
},
{
"cmd": "/workflow:test-fix-gen",
"unit": "test-validation",
"optional": true,
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate test tasks for the implementation"
},
{
"cmd": "/workflow:test-cycle-execute",
"unit": "test-validation",
"optional": true,
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute test-fix cycle until pass rate >= 95%"
}
]
}

View File

@@ -0,0 +1,46 @@
{
"name": "rapid-to-issue",
"description": "Bridge lite workflow to issue workflow - convert simple plan to structured issue execution",
"level": 2.5,
"steps": [
{
"cmd": "/workflow:lite-plan",
"args": "\"{{goal}}\"",
"unit": "rapid-to-issue",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Create lightweight plan for the task"
},
{
"cmd": "/workflow:issue:convert-to-plan",
"args": "--latest-lite-plan -y",
"unit": "rapid-to-issue",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Convert lite plan to structured issue plan"
},
{
"cmd": "/workflow:issue:queue",
"unit": "rapid-to-issue",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Build execution queue from converted plan"
},
{
"cmd": "/workflow:issue:execute",
"args": "--queue auto",
"unit": "rapid-to-issue",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute issues from queue with state tracking"
}
]
}

View File

@@ -0,0 +1,47 @@
{
"name": "rapid",
"description": "Quick implementation - lightweight plan and immediate execution for simple features",
"level": 2,
"steps": [
{
"cmd": "/workflow:lite-plan",
"args": "\"{{goal}}\"",
"unit": "quick-implementation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Analyze requirements and create a lightweight implementation plan with key decisions and file structure"
},
{
"cmd": "/workflow:lite-execute",
"args": "--in-memory",
"unit": "quick-implementation",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Use the plan from previous step to implement code. Execute against in-memory state."
},
{
"cmd": "/workflow:test-fix-gen",
"unit": "test-validation",
"optional": true,
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate test tasks from the implementation session"
},
{
"cmd": "/workflow:test-cycle-execute",
"unit": "test-validation",
"optional": true,
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute test-fix cycle until all tests pass"
}
]
}

View File

@@ -0,0 +1,43 @@
{
"name": "review",
"description": "Code review workflow - multi-dimensional review, fix issues, and test validation",
"level": 3,
"steps": [
{
"cmd": "/workflow:review-session-cycle",
"unit": "code-review",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Perform comprehensive multi-dimensional code review across correctness, security, performance, maintainability dimensions"
},
{
"cmd": "/workflow:review-cycle-fix",
"unit": "code-review",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Fix all review findings prioritized by severity level (critical → high → medium → low)"
},
{
"cmd": "/workflow:test-fix-gen",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate test tasks for fixed code with coverage analysis"
},
{
"cmd": "/workflow:test-cycle-execute",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute iterative test-fix cycle until pass rate >= 95%"
}
]
}

View File

@@ -0,0 +1,34 @@
{
"name": "tdd",
"description": "Test-driven development - write tests first, implement to pass tests, verify Red-Green-Refactor cycles",
"level": 3,
"steps": [
{
"cmd": "/workflow:tdd-plan",
"args": "\"{{goal}}\"",
"unit": "tdd-planning-execution",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Create TDD task plan with Red-Green-Refactor cycles, test specifications, and implementation strategy"
},
{
"cmd": "/workflow:execute",
"unit": "tdd-planning-execution",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute TDD tasks following Red-Green-Refactor workflow with test-first development"
},
{
"cmd": "/workflow:tdd-verify",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Verify TDD cycle compliance, test coverage, and code quality against Red-Green-Refactor principles"
}
]
}

View File

@@ -0,0 +1,26 @@
{
"name": "test-fix",
"description": "Fix failing tests - generate test tasks and execute iterative test-fix cycle",
"level": 2,
"steps": [
{
"cmd": "/workflow:test-fix-gen",
"args": "\"{{goal}}\"",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Analyze failing tests, generate targeted test tasks with root cause and fix strategy"
},
{
"cmd": "/workflow:test-cycle-execute",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute iterative test-fix cycle with pass rate tracking until >= 95% pass rate achieved"
}
]
}

View File

@@ -1,450 +0,0 @@
---
description: Multi-agent rapid planning with minimal documentation, conflict resolution, and actionable synthesis. Lightweight planning from raw task, brainstorm, or analysis artifacts
argument-hint: "TOPIC=\"<planning topic or task>\" [--from=brainstorm|analysis|task|raw] [--perspectives=arch,impl,risk,decision] [--auto] [--verbose]"
---
# Codex Quick-Plan-With-File Prompt
## Overview
Multi-agent rapid planning workflow with **minimal documentation overhead**. Coordinates parallel agent analysis (architecture, implementation, validation, decision), synthesizes conflicting perspectives into actionable decisions, and generates an implementation-ready plan.
**Core workflow**: Parse Input → Parallel Analysis → Conflict Resolution → Plan Synthesis → Output
**Key features**:
- **Format Agnostic**: Consumes brainstorm conclusions, analysis recommendations, quick tasks, or raw descriptions
- **Minimal Docs**: Single plan.md (no lengthy timeline documentation)
- **Parallel Multi-Agent**: 4 concurrent perspectives for rapid analysis
- **Conflict Resolution**: Automatic conflict detection and synthesis
- **Actionable Output**: Direct task breakdown ready for execution
## Target Planning
**$TOPIC**
- `--from`: Input source type (brainstorm | analysis | task | raw) - auto-detected if omitted
- `--perspectives`: Which perspectives to use (arch, impl, risk, decision) - all by default
- `--auto`: Auto-confirm decisions, minimal user prompts
- `--verbose`: Verbose output with all reasoning
## Execution Process
```
Phase 1: Input Validation & Loading
├─ Parse input: topic | artifact reference
├─ Load artifact if referenced (synthesis.json | conclusions.json)
├─ Extract constraints and key requirements
└─ Initialize session folder
Phase 2: Parallel Multi-Agent Analysis (concurrent)
├─ Agent 1 (Architecture): Design decomposition, patterns, scalability
├─ Agent 2 (Implementation): Tech stack, feasibility, effort estimates
├─ Agent 3 (Validation): Risk matrix, testing strategy, monitoring
├─ Agent 4 (Decision): Recommendations, tradeoffs, execution strategy
└─ Aggregate findings into perspectives.json
Phase 3: Conflict Detection & Resolution
├─ Detect: effort conflicts, architecture conflicts, risk conflicts
├─ Analyze rationale for each conflict
├─ Synthesis via arbitration: generate unified recommendation
├─ Document conflicts and resolutions
└─ Update plan.md
Phase 4: Plan Synthesis
├─ Consolidate all insights
├─ Generate task breakdown (5-8 major tasks)
├─ Create execution strategy and dependencies
├─ Document assumptions and risks
└─ Output plan.md + synthesis.json
Output:
├─ .workflow/.planning/{sessionId}/plan.md (minimal, actionable)
├─ .workflow/.planning/{sessionId}/perspectives.json (agent findings)
├─ .workflow/.planning/{sessionId}/conflicts.json (decision points)
└─ .workflow/.planning/{sessionId}/synthesis.json (task breakdown for execution)
```
## Implementation Details
### Phase 1: Session Setup & Input Loading
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
// Parse input
const planSlug = "$TOPIC".toLowerCase()
.replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
.substring(0, 30)
const sessionId = `PLAN-${planSlug}-${getUtc8ISOString().substring(0, 10)}`
const sessionFolder = `.workflow/.planning/${sessionId}`
// Detect input type
let artifact = null
if ("$TOPIC".startsWith('BS-') || "$TOPIC".includes('brainstorm')) {
artifact = loadBrainstormArtifact("$TOPIC")
} else if ("$TOPIC".startsWith('ANL-') || "$TOPIC".includes('analysis')) {
artifact = loadAnalysisArtifact("$TOPIC")
}
bash(`mkdir -p ${sessionFolder}`)
```
### Phase 2: Parallel Multi-Agent Analysis
Run 4 agents in parallel using ccw cli:
```javascript
// Launch all 4 agents concurrently with Bash run_in_background
const agentPromises = []
// Agent 1 - Architecture (Gemini)
agentPromises.push(
Bash({
command: `ccw cli -p "
PURPOSE: Architecture & high-level design for '${planningTopic}'
Success: Clear component decomposition and architectural approach
TASK:
• Decompose problem into major components/modules
• Identify architectural patterns and integration points
• Design component interfaces and data models
• Assess scalability and maintainability implications
• Propose architectural approach with rationale
MODE: analysis
CONTEXT: @**/*
${artifact ? `| Source artifact: ${artifact.type}` : ''}
EXPECTED:
- Component decomposition (list with responsibilities)
- Module interfaces and contracts
- Data flow between components
- Architectural patterns applied (e.g., MVC, Event-Driven, etc.)
- Scalability assessment (1-5 rating with rationale)
- Architectural risks identified
CONSTRAINTS: Focus on long-term maintainability and extensibility
" --tool gemini --mode analysis`,
run_in_background: true
})
)
// Agent 2 - Implementation (Codex)
agentPromises.push(
Bash({
command: `ccw cli -p "
PURPOSE: Implementation approach & technical feasibility for '${planningTopic}'
Success: Concrete implementation strategy with realistic estimates
TASK:
• Evaluate technical feasibility of proposed approach
• Identify required technologies and dependencies
• Estimate effort: analysis/design/coding/testing/deployment
• Suggest implementation phases and milestones
• Highlight technical blockers and challenges
MODE: analysis
CONTEXT: @**/*
${artifact ? `| Source artifact: ${artifact.type}` : ''}
EXPECTED:
- Technology stack recommendation (languages, frameworks, tools)
- Implementation complexity: high|medium|low (with justification)
- Effort breakdown (hours or complexity: analysis, design, coding, testing, deployment)
- Key technical decisions with tradeoffs explained
- Potential blockers and mitigation strategies
- Suggested implementation phases with sequencing
- Reusable components or libraries identified
CONSTRAINTS: Realistic with current tech stack
" --tool codex --mode analysis`,
run_in_background: true
})
)
// Agent 3 - Validation & Risk (Claude)
agentPromises.push(
Bash({
command: `ccw cli -p "
PURPOSE: Risk analysis and validation strategy for '${planningTopic}'
Success: Comprehensive risk matrix with testing and deployment strategy
TASK:
• Identify technical risks and failure scenarios
• Assess timeline and resource risks
• Define validation/testing strategy (unit, integration, e2e, performance)
• Suggest monitoring and observability requirements
• Propose deployment strategy and rollback plan
MODE: analysis
CONTEXT: @**/*
${artifact ? `| Source artifact: ${artifact.type}` : ''}
EXPECTED:
- Risk matrix (likelihood × impact, each 1-5)
- Top 3 technical risks with mitigation approaches
- Top 3 timeline/resource risks with mitigation
- Testing strategy (what to test, how, when, acceptance criteria)
- Deployment strategy (staged rollout, blue-green, canary, etc.)
- Rollback plan and recovery procedures
- Monitoring/observability requirements (metrics, logs, alerts)
- Overall risk rating: low|medium|high (with confidence)
CONSTRAINTS: Be realistic, not pessimistic
" --tool claude --mode analysis`,
run_in_background: true
})
)
// Agent 4 - Strategic Decision (Gemini)
agentPromises.push(
Bash({
command: `ccw cli -p "
PURPOSE: Strategic decisions and execution recommendations for '${planningTopic}'
Success: Clear recommended approach with tradeoff analysis
TASK:
• Synthesize all perspectives into strategic recommendations
• Identify 2-3 critical decision points with recommended choices
• Clearly outline key tradeoffs (speed vs quality, scope vs timeline, risk vs cost)
• Propose go/no-go decision criteria and success metrics
• Suggest execution strategy and resource sequencing
MODE: analysis
CONTEXT: @**/*
${artifact ? `| Source artifact: ${artifact.type}` : ''}
EXPECTED:
- Primary recommendation with strong rationale (1-2 paragraphs)
- Alternative approaches with pros/cons (2-3 alternatives)
- 2-3 critical decision points:
- What decision needs to be made
- Trade-offs for each option
- Recommended choice and why
- Key trade-offs explained (what we're optimizing for: speed/quality/risk/cost)
- Success metrics and go/no-go criteria
- Resource requirements and critical path items
- Suggested execution sequencing and phases
CONSTRAINTS: Focus on actionable decisions, provide clear rationale
" --tool gemini --mode analysis`,
run_in_background: true
})
)
// Wait for all agents to complete
const [archResult, implResult, riskResult, decisionResult] = await Promise.all(agentPromises)
// Parse and extract findings from each agent result
const architecture = parseArchitectureResult(archResult)
const implementation = parseImplementationResult(implResult)
const validation = parseValidationResult(riskResult)
const recommendation = parseDecisionResult(decisionResult)
```
**Agent Focus Areas**:
| Agent | Perspective | Focus Areas |
|-------|-------------|------------|
| Gemini (Design) | Architecture patterns | Components, interfaces, scalability, patterns |
| Codex (Build) | Implementation reality | Tech stack, complexity, effort, blockers |
| Claude (Validate) | Risk & testing | Risk matrix, testing strategy, deployment, monitoring |
| Gemini (Decide) | Strategic synthesis | Recommendations, trade-offs, critical decisions |
### Phase 3: Parse & Aggregate Perspectives
```javascript
const perspectives = {
session_id: sessionId,
topic: "$TOPIC",
timestamp: getUtc8ISOString(),
architecture: {
components: [...],
patterns: [...],
scalability_rating: 3,
risks: [...]
},
implementation: {
technology_stack: [...],
complexity: "medium",
effort_breakdown: { analysis: 2, design: 3, coding: 8, testing: 4 },
blockers: [...]
},
validation: {
risk_matrix: [...],
top_risks: [{ title, impact, mitigation }, ...],
testing_strategy: "...",
monitoring: [...]
},
recommendation: {
primary_approach: "...",
alternatives: [...],
critical_decisions: [...],
tradeoffs: [...]
}
}
Write(`${sessionFolder}/perspectives.json`, JSON.stringify(perspectives, null, 2))
```
### Phase 4: Conflict Detection
Detect conflicts:
- Effort variance: Are estimates consistent?
- Risk disagreement: Do arch and validation agree on risks?
- Scope confusion: Are recommendations aligned?
- Architecture mismatch: Do design and implementation agree?
For each conflict: document it, then run synthesis arbitration.
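The checks above can be sketched as a small pass over the aggregated perspectives. This is a minimal illustration: field names follow the perspectives.json shape from Phase 3, and the two detection rules shown are assumptions, not the skill's actual implementation.

```javascript
// Illustrative conflict detection over the Phase 3 perspectives object.
// Rules shown: risk disagreement and architecture mismatch (assumed heuristics).
function detectConflicts(p) {
  const conflicts = [];

  // Risk disagreement: architecture and validation share no top risks
  const archRisks = new Set(p.architecture.risks);
  const shared = p.validation.top_risks.filter(r => archRisks.has(r.title));
  if (shared.length === 0 && p.validation.top_risks.length > 0) {
    conflicts.push({ type: 'risk-disagreement', detail: 'no overlapping top risks' });
  }

  // Architecture mismatch: implementation reports a blocker on a planned component
  const planned = new Set(p.architecture.components.map(c => c.name));
  for (const b of p.implementation.blockers) {
    if (b.component && planned.has(b.component)) {
      conflicts.push({ type: 'architecture-mismatch', detail: b.component });
    }
  }
  return conflicts;
}
```

Each detected conflict would then be documented in conflicts.json and handed to the arbitration step.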
### Phase 5: Generate Plan
```markdown
# Quick Planning Session
**Session ID**: ${sessionId}
**Topic**: $TOPIC
**Created**: ${timestamp}
---
## Executive Summary
${synthesis.executive_summary}
**Complexity**: ${synthesis.complexity_level}
**Estimated Effort**: ${formatEffort(synthesis.effort_breakdown)}
**Optimization Focus**: ${synthesis.optimization_focus}
---
## Architecture
**Primary Pattern**: ${synthesis.architecture_approach}
**Key Components**:
${synthesis.key_components.map((c, i) => `${i+1}. ${c.name}: ${c.responsibility}`).join('\n')}
---
## Implementation Strategy
**Technology Stack**:
${synthesis.technology_stack.map(t => `- ${t}`).join('\n')}
**Phases**:
${synthesis.phases.map((p, i) => `${i+1}. ${p.name} (${p.effort})`).join('\n')}
---
## Risk Assessment
**Overall Risk**: ${synthesis.overall_risk_level}
**Top 3 Risks**:
${synthesis.top_risks.map((r, i) => `${i+1}. **${r.title}** (Impact: ${r.impact})\n Mitigation: ${r.mitigation}`).join('\n\n')}
---
## Task Breakdown (Ready for Execution)
${synthesis.tasks.map((task, i) => `
${i+1}. **${task.id}: ${task.title}** (Effort: ${task.effort})
${task.description}
Dependencies: ${task.dependencies.join(', ') || 'none'}
`).join('\n')}
---
## Next Steps
**Execute with**:
\`\`\`
/workflow:unified-execute-with-file -p ${sessionFolder}/synthesis.json
\`\`\`
**Detailed planning if needed**:
\`\`\`
/workflow:plan "Based on: $TOPIC"
\`\`\`
```
## Session Folder Structure
```
.workflow/.planning/{sessionId}/
├── plan.md # Minimal, actionable
├── perspectives.json # Agent findings
├── conflicts.json # Conflicts & resolutions (if any)
└── synthesis.json # Task breakdown for execution
```
## Multi-Agent Coordination
| Agent | Perspective | Tools | Output |
|-------|-------------|-------|--------|
| Gemini (Design) | Architecture patterns | Design thinking, cross-domain | Components, patterns, scalability |
| Codex (Build) | Implementation reality | Tech stack evaluation | Stack, effort, feasibility |
| Claude (Validate) | Risk & testing | Risk assessment, QA | Risks, testing strategy |
| Gemini (Decide) | Strategic synthesis | Decision analysis | Recommendations, tradeoffs |
## Error Handling
| Situation | Action |
|-----------|--------|
| Agents conflict | Arbitration agent synthesizes recommendation |
| Missing blockers | Continue with available context, note gaps |
| Unclear input | Ask for clarification on planning focus |
| Estimate too high | Suggest MVP approach or phasing |
## Integration Flow
```
Raw Task / Brainstorm / Analysis
        ↓
quick-plan-with-file (5-10 min)
├─ plan.md
├─ perspectives.json
└─ synthesis.json
        ↓
unified-execute-with-file
        ↓
Implementation
```
## Usage Patterns
**Pattern 1: Quick planning from task**
```
TOPIC="实现实时通知系统" --auto   # "Implement a real-time notification system"
→ Creates actionable plan in ~5 minutes
```
**Pattern 2: Convert brainstorm to execution plan**
```
TOPIC="BS-notifications-2025-01-28" --from=brainstorm
→ Reads synthesis.json from brainstorm
→ Generates implementation plan
```
**Pattern 3: From analysis to plan**
```
TOPIC="ANL-auth-2025-01-28" --from=analysis
→ Converts conclusions.json to executable plan
```
---
**Now execute quick-plan-with-file for topic**: $TOPIC

META_SKILL_SUMMARY.md Normal file
View File

@@ -0,0 +1,188 @@
# Meta-Skill Template Update Summary
## Update Overview
Successfully updated all 17 meta-skill workflow templates with proper execution configuration and context passing.
## Critical Correction: CLI Tools vs Slash Commands
**IMPORTANT**: ALL workflow commands (`/workflow:*`) must use `slash-command` type.
### Slash Command (workflow commands)
- **Type**: `slash-command`
- **Commands**: ALL `/workflow:*` commands
- Planning: plan, lite-plan, multi-cli-plan, tdd-plan
- Execution: execute, lite-execute
- Testing: test-fix-gen, test-cycle-execute, tdd-verify
- Review: review-session-cycle, review-cycle-fix, review-module-cycle
- Bug fixes: lite-fix, debug-with-file
- Exploration: brainstorm-with-file, brainstorm:auto-parallel, analyze-with-file
- **Modes**:
- `mainprocess`: Blocking, main process execution
- `async`: Background execution via `ccw cli --tool claude --mode write`
### CLI Tools (for pure analysis/generation)
- **Type**: `cli-tools`
- **When to use**: Only when there's NO specific workflow command
- **Purpose**: Dynamic prompt generation based on task content
- **Tools**: gemini (analysis), qwen (code generation), codex (review)
- **Examples**:
- "Analyze this architecture design and suggest improvements"
- "Generate unit tests for module X with 90% coverage"
- "Review code for security vulnerabilities"
## Key Changes
### 1. Schema Enhancement
All templates now include:
- **`execution`** configuration:
- Type: Always `slash-command` for workflow commands
- Mode: `mainprocess` (blocking) or `async` (background)
- **`contextHint`** field: Natural language instructions for context passing
- **`unit`** field: Groups commands into minimum execution units
- **`args`** field: Command arguments with `{{goal}}` and `{{prev}}` placeholders
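Taken together, a single step combining these fields looks like this (drawn from the rapid template shown earlier; the `contextHint` wording is illustrative):

```json
{
  "cmd": "/workflow:lite-plan",
  "args": "\"{{goal}}\"",
  "unit": "quick-implementation",
  "execution": {
    "type": "slash-command",
    "mode": "mainprocess"
  },
  "contextHint": "Create a lightweight implementation plan; downstream steps receive it via {{prev}}"
}
```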
### 2. Execution Patterns
**Planning (mainprocess)**:
- Interactive planning needs main process
- Examples: plan, lite-plan, tdd-plan, multi-cli-plan
**Execution (async)**:
- Long-running tasks need background
- Examples: execute, lite-execute, test-cycle-execute
**Review/Verify (mainprocess)**:
- Needs immediate feedback
- Examples: plan-verify, review-session-cycle, tdd-verify
**Fix (mainprocess/async)**:
- Simple fixes: mainprocess
- Complex fixes: async
- Examples: lite-fix (mainprocess), review-cycle-fix (mainprocess)
### 3. Minimum Execution Units
Preserved atomic command groups from ccw-coordinator.md:
| Unit | Commands | Purpose |
|------|----------|---------|
| `quick-implementation` | lite-plan → lite-execute | Lightweight implementation |
| `verified-planning-execution` | plan → plan-verify → execute | Full planning with verification |
| `bug-fix` | lite-fix → lite-execute | Bug diagnosis and fix |
| `test-validation` | test-fix-gen → test-cycle-execute | Test generation and validation |
| `code-review` | review-session-cycle → review-cycle-fix | Code review and fixes |
| `tdd-planning-execution` | tdd-plan → execute | Test-driven development |
| `multi-cli-planning` | multi-cli-plan → lite-execute | Multi-perspective planning |
| `issue-workflow` | issue:plan → issue:queue → issue:execute | Issue lifecycle |
| `rapid-to-issue` | lite-plan → convert-to-plan → queue → execute | Bridge lite to issue |
| `brainstorm-to-issue` | from-brainstorm → queue → execute | Bridge brainstorm to issue |
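A coordinator can honor these units by folding consecutive steps that share a `unit` into one atomic group, so execution never pauses mid-unit. The helper below is a hypothetical sketch; the grouping rule is inferred from the `unit` field described above.

```javascript
// Group consecutive template steps by their `unit` field so a coordinator
// can schedule each group atomically; unit-less steps form singleton groups.
function groupByUnit(steps) {
  const groups = [];
  let current = null;
  for (const step of steps) {
    const key = step.unit ?? Symbol('standalone'); // unique key per unit-less step
    if (!current || current.key !== key) {
      current = { key, unit: step.unit ?? null, steps: [] };
      groups.push(current);
    }
    current.steps.push(step);
  }
  return groups;
}
```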
## Updated Templates
### Simple Workflows (Level 1-2)
1. **lite-lite-lite.json** - Ultra-lightweight direct execution
2. **bugfix-hotfix.json** - Urgent production fix (single async step)
3. **rapid.json** - Quick implementation with optional testing
4. **bugfix.json** - Bug fix with diagnosis and testing
5. **test-fix.json** - Fix failing tests workflow
6. **docs.json** - Documentation generation
### Complex Workflows (Level 3-4)
7. **tdd.json** - Test-driven development with verification
8. **coupled.json** - Full workflow with review and testing
9. **review.json** - Standalone code review workflow
10. **multi-cli-plan.json** - Multi-perspective planning
11. **full.json** - Comprehensive workflow with brainstorm
### Exploration Workflows
12. **brainstorm.json** - Multi-perspective ideation
13. **debug.json** - Hypothesis-driven debugging
14. **analyze.json** - Collaborative analysis
### Issue Workflows
15. **issue.json** - Full issue lifecycle
16. **rapid-to-issue.json** - Bridge lite plan to issue
17. **brainstorm-to-issue.json** - Bridge brainstorm to issue
## Design Principles Applied
1. **Slash Commands Only**: All workflow commands use `slash-command` type
2. **Minimum Execution Units**: Preserved atomic command groups
3. **Context Flow**: `contextHint` provides natural language guidance
4. **Execution Modes**:
- `mainprocess`: Interactive, needs user feedback
- `async`: Long-running, background execution
## CLI Tools Usage (Future Extension)
The `cli-tools` type is reserved for pure analysis/generation tasks WITHOUT specific workflow commands:
```json
{
"name": "custom-analysis",
"steps": [
{
"execution": {
"type": "cli-tools",
"mode": "mainprocess",
"tool": "gemini",
"cliMode": "analysis",
"rule": "analysis-analyze-technical-document"
},
"contextHint": "Analyze architecture design and provide recommendations"
}
]
}
```
**Note**: This is for future extension only. Current templates use slash commands exclusively.
## Files Modified
```
.claude/skills/meta-skill/templates/
├── rapid.json ✓ Updated (slash-command only)
├── coupled.json ✓ Updated (slash-command only)
├── bugfix.json ✓ Fixed (removed cli-tools)
├── bugfix-hotfix.json ✓ Updated (slash-command only)
├── tdd.json ✓ Fixed (removed cli-tools)
├── test-fix.json ✓ Updated (slash-command only)
├── review.json ✓ Fixed (removed cli-tools)
├── brainstorm.json ✓ Fixed (removed cli-tools)
├── debug.json ✓ Fixed (removed cli-tools)
├── analyze.json ✓ Fixed (removed cli-tools)
├── issue.json ✓ Updated (slash-command only)
├── multi-cli-plan.json ✓ Fixed (removed cli-tools)
├── docs.json ✓ Updated (slash-command only)
├── full.json ✓ Fixed (removed cli-tools)
├── rapid-to-issue.json ✓ Updated (slash-command only)
├── brainstorm-to-issue.json ✓ Updated (slash-command only)
├── lite-lite-lite.json ✓ Updated (slash-command only)
├── coupled-enhanced.json ✗ Removed (experimental)
└── rapid-cli.json ✗ Removed (experimental)
```
## Result
All 17 templates now correctly use:
- ✅ `slash-command` type exclusively
- ✅ Flexible `mainprocess`/`async` modes
- ✅ Context passing via `contextHint`
- ✅ Minimum execution unit preservation
- ✅ Consistent execution patterns
## Next Steps
The meta-skill workflow coordinator can now:
1. Discover templates dynamically via Glob
2. Parse execution configuration from each step
3. Execute slash commands with mainprocess/async modes
4. Pass context between steps using contextHint
5. Maintain minimum execution unit integrity
6. (Future) Support cli-tools for custom analysis tasks

View File

@@ -525,6 +525,33 @@ function renderCliStatus() {
const enabledCliSettings = isClaude ? cliSettingsEndpoints.filter(ep => ep.enabled) : [];
const hasCliSettings = enabledCliSettings.length > 0;
// Build Settings File info for builtin Claude
let settingsFileInfo = '';
if (isClaude && config.type === 'builtin' && config.settingsFile) {
const settingsFile = config.settingsFile;
// Simple path resolution attempt for display (no actual filesystem access)
const resolvedPath = settingsFile.startsWith('~')
? settingsFile.replace('~', (typeof os !== 'undefined' && os.homedir) ? os.homedir() : '~')
: settingsFile;
settingsFileInfo = `
<div class="cli-settings-info mt-2 p-2 rounded bg-muted/50 text-xs">
<div class="flex items-center gap-1 text-muted-foreground mb-1">
<i data-lucide="file-key" class="w-3 h-3"></i>
<span>Settings File:</span>
</div>
<div class="text-foreground font-mono text-[10px] break-all" title="${resolvedPath}">
${settingsFile}
</div>
${settingsFile !== resolvedPath ? `
<div class="text-muted-foreground mt-1 font-mono text-[10px] break-all">
${resolvedPath}
</div>
` : ''}
</div>
`;
}
// Build CLI Settings badge for Claude
let cliSettingsBadge = '';
if (isClaude && hasCliSettings) {
@@ -588,6 +615,7 @@ function renderCliStatus() {
}
</div>
</div>
${settingsFileInfo}
${cliSettingsInfo}
<div class="cli-tool-actions mt-3 flex gap-2">
${isAvailable ? (isEnabled

View File

@@ -562,6 +562,27 @@ function buildToolConfigModalContent(tool, config, models, status) {
'</div>'
) : '') +
// Claude Settings File Section (only for builtin claude type)
(tool === 'claude' && config.type === 'builtin' ? (
'<div class="tool-config-section">' +
'<h4><i data-lucide="file-key" class="w-3.5 h-3.5"></i> Settings File <span class="text-muted">(optional)</span></h4>' +
'<div class="env-file-input-group">' +
'<div class="env-file-input-row">' +
'<input type="text" id="claudeSettingsFileInput" class="tool-config-input" ' +
'placeholder="~/path/to/settings.json or D:\\path\\to\\settings.json" ' +
'value="' + (config.settingsFile ? escapeHtml(config.settingsFile) : '') + '" />' +
'<button type="button" class="btn-sm btn-outline" id="claudeSettingsFileBrowseBtn">' +
'<i data-lucide="folder-open" class="w-3.5 h-3.5"></i> Browse' +
'</button>' +
'</div>' +
'<p class="env-file-hint">' +
'<i data-lucide="info" class="w-3 h-3"></i> ' +
'Path to Claude CLI settings.json file (supports ~, absolute, and Windows paths)' +
'</p>' +
'</div>' +
'</div>'
) : '') +
// Footer
'<div class="tool-config-footer">' +
'<button class="btn btn-outline" onclick="closeModal()">' + t('common.cancel') + '</button>' +
@@ -1124,6 +1145,10 @@ function initToolConfigModalEvents(tool, currentConfig, models) {
var envFileInput = document.getElementById('envFileInput');
var envFile = envFileInput ? envFileInput.value.trim() : '';
// Get settingsFile value (only for builtin claude)
var claudeSettingsFileInput = document.getElementById('claudeSettingsFileInput');
var settingsFile = claudeSettingsFileInput ? claudeSettingsFileInput.value.trim() : '';
try {
var updateData = {
primaryModel: primaryModel,
@@ -1137,6 +1162,11 @@ function initToolConfigModalEvents(tool, currentConfig, models) {
updateData.envFile = envFile || null;
}
// Only include settingsFile for builtin claude tool
if (tool === 'claude' && config.type === 'builtin') {
updateData.settingsFile = settingsFile || null;
}
await updateCliToolConfig(tool, updateData);
// Reload config to reflect changes
await loadCliToolConfig();
@@ -1164,6 +1194,20 @@ function initToolConfigModalEvents(tool, currentConfig, models) {
};
}
// Claude Settings File browse button (only for builtin claude)
var claudeSettingsFileBrowseBtn = document.getElementById('claudeSettingsFileBrowseBtn');
if (claudeSettingsFileBrowseBtn) {
claudeSettingsFileBrowseBtn.onclick = function() {
showFileBrowserModal(function(selectedPath) {
var claudeSettingsFileInput = document.getElementById('claudeSettingsFileInput');
if (claudeSettingsFileInput && selectedPath) {
claudeSettingsFileInput.value = selectedPath;
claudeSettingsFileInput.focus();
}
});
};
}
// Initialize lucide icons in modal
if (window.lucide) lucide.createIcons();
}

View File

@@ -56,6 +56,12 @@ export interface ClaudeCliTool {
* Supports both absolute paths and paths relative to home directory (e.g., ~/.my-env)
*/
envFile?: string;
/**
* Path to Claude CLI settings.json file (builtin claude only)
* Passed to Claude CLI via --settings parameter
* Supports ~, absolute, relative, and Windows paths
*/
settingsFile?: string;
}
export type CliToolName = 'gemini' | 'qwen' | 'codex' | 'claude' | 'opencode' | string;
@@ -279,7 +285,8 @@ function ensureToolTags(tool: Partial<ClaudeCliTool>): ClaudeCliTool {
primaryModel: tool.primaryModel,
secondaryModel: tool.secondaryModel,
tags: tool.tags ?? [],
envFile: tool.envFile
envFile: tool.envFile,
settingsFile: tool.settingsFile
};
}
@@ -1015,6 +1022,7 @@ export function getToolConfig(projectDir: string, tool: string): {
secondaryModel: string;
tags?: string[];
envFile?: string;
settingsFile?: string;
} {
const config = loadClaudeCliTools(projectDir);
const toolConfig = config.tools[tool];
@@ -1034,7 +1042,8 @@ export function getToolConfig(projectDir: string, tool: string): {
primaryModel: toolConfig.primaryModel ?? '',
secondaryModel: toolConfig.secondaryModel ?? '',
tags: toolConfig.tags,
envFile: toolConfig.envFile
envFile: toolConfig.envFile,
settingsFile: toolConfig.settingsFile
};
}
@@ -1051,6 +1060,7 @@ export function updateToolConfig(
availableModels: string[];
tags: string[];
envFile: string | null;
settingsFile: string | null;
}>
): ClaudeCliToolsConfig {
const config = loadClaudeCliTools(projectDir);
@@ -1079,6 +1089,14 @@ export function updateToolConfig(
config.tools[tool].envFile = updates.envFile;
}
}
// Handle settingsFile: set to undefined if null/empty, otherwise set value
if (updates.settingsFile !== undefined) {
if (updates.settingsFile === null || updates.settingsFile === '') {
delete config.tools[tool].settingsFile;
} else {
config.tools[tool].settingsFile = updates.settingsFile;
}
}
saveClaudeCliTools(projectDir, config);
}
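The `updateToolConfig` hunk above encodes a tri-state contract for `settingsFile`: `undefined` means "not part of this update", `null` or an empty string means "clear the field", and a non-empty string sets it. A minimal standalone sketch of that rule — `applySettingsFileUpdate` and the bare entry object are hypothetical names for illustration, not part of the real module:

```javascript
// Hypothetical sketch of the settingsFile update rule from updateToolConfig:
// undefined -> field untouched, null/'' -> key deleted, string -> value stored.
function applySettingsFileUpdate(toolEntry, settingsFile) {
  if (settingsFile !== undefined) {
    if (settingsFile === null || settingsFile === '') {
      delete toolEntry.settingsFile; // clearing removes the key entirely
    } else {
      toolEntry.settingsFile = settingsFile;
    }
  }
  return toolEntry;
}
```

Deleting the key outright (rather than storing `null`) keeps the persisted JSON free of explicit nulls, the same shape the `envFile` branch above produces.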


@@ -9,7 +9,7 @@ import { spawn, ChildProcess } from 'child_process';
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';
- import { validatePath } from '../utils/path-resolver.js';
+ import { validatePath, resolvePath } from '../utils/path-resolver.js';
import { escapeWindowsArg } from '../utils/shell-escape.js';
import { buildCommand, checkToolAvailability, clearToolCache, debugLog, errorLog, type NativeResumeConfig, type ToolAvailability } from './cli-executor-utils.js';
import type { ConversationRecord, ConversationTurn, ExecutionOutput, ExecutionRecord } from './cli-executor-state.js';
@@ -85,7 +85,7 @@ import { findEndpointById } from '../config/litellm-api-config-manager.js';
// CLI Settings (CLI封装) integration
import { loadEndpointSettings, getSettingsFilePath, findEndpoint } from '../config/cli-settings-manager.js';
- import { loadClaudeCliTools, getToolConfig } from './claude-cli-tools.js';
+ import { loadClaudeCliTools, getToolConfig, getPrimaryModel } from './claude-cli-tools.js';
/**
* Parse .env file content into key-value pairs
@@ -338,8 +338,7 @@ import {
import {
isToolEnabled as isToolEnabledFromConfig,
enableTool as enableToolFromConfig,
-  disableTool as disableToolFromConfig,
-  getPrimaryModel
+  disableTool as disableToolFromConfig
} from './cli-config-manager.js';
// Built-in CLI tools
@@ -794,6 +793,25 @@ async function executeCliTool(
// Use configured primary model if no explicit model provided
const effectiveModel = model || getPrimaryModel(workingDir, tool);
// Load and validate settings file for Claude tool (builtin only)
let settingsFilePath: string | undefined;
if (tool === 'claude') {
const toolConfig = getToolConfig(workingDir, tool);
if (toolConfig.settingsFile) {
try {
const resolved = resolvePath(toolConfig.settingsFile);
if (fs.existsSync(resolved)) {
settingsFilePath = resolved;
debugLog('SETTINGS_FILE', `Resolved Claude settings file`, { configured: toolConfig.settingsFile, resolved });
} else {
errorLog('SETTINGS_FILE', `Claude settings file not found, skipping`, { configured: toolConfig.settingsFile, resolved });
}
} catch (err) {
errorLog('SETTINGS_FILE', `Failed to resolve Claude settings file`, { configured: toolConfig.settingsFile, error: (err as Error).message });
}
}
}
// Build command
const { command, args, useStdin, outputFormat: autoDetectedFormat } = buildCommand({
tool,
@@ -803,6 +821,7 @@ async function executeCliTool(
dir: cd,
include: includeDirs,
nativeResume: nativeResumeConfig,
settingsFile: settingsFilePath,
reviewOptions: mode === 'review' ? { uncommitted, base, commit, title } : undefined
});


@@ -1,6 +1,6 @@
{
"name": "claude-code-workflow",
- "version": "6.3.52",
+ "version": "6.3.54",
"description": "JSON-driven multi-agent development framework with intelligent CLI orchestration (Gemini/Qwen/Codex), context-first architecture, and automated workflow execution",
"type": "module",
"main": "ccw/src/index.js",

test-command-build.mjs (new file, 48 lines)

@@ -0,0 +1,48 @@
import { buildCommand } from './ccw/dist/tools/cli-executor-utils.js';
import { getToolConfig } from './ccw/dist/tools/claude-cli-tools.js';
import { resolvePath } from './ccw/dist/utils/path-resolver.js';
import fs from 'fs';
const workingDir = 'D:\\Claude_dms3';
const tool = 'claude';
// Get tool config
const toolConfig = getToolConfig(workingDir, tool);
console.log('=== Tool Config ===');
console.log(JSON.stringify(toolConfig, null, 2));
// Resolve settings file
let settingsFilePath = undefined;
if (toolConfig.settingsFile) {
try {
const resolved = resolvePath(toolConfig.settingsFile);
console.log(`\n=== Settings File Resolution ===`);
console.log(`Configured: ${toolConfig.settingsFile}`);
console.log(`Resolved: ${resolved}`);
console.log(`Exists: ${fs.existsSync(resolved)}`);
if (fs.existsSync(resolved)) {
settingsFilePath = resolved;
console.log(`✓ Will use settings file: ${settingsFilePath}`);
} else {
console.log(`✗ File not found, skipping`);
}
} catch (err) {
console.log(`✗ Error resolving: ${err.message}`);
}
}
// Build command
const cmdInfo = buildCommand({
tool: 'claude',
prompt: 'test prompt',
mode: 'analysis',
model: 'sonnet',
settingsFile: settingsFilePath
});
console.log('\n=== Command Built ===');
console.log('Command:', cmdInfo.command);
console.log('Args:', JSON.stringify(cmdInfo.args, null, 2));
console.log('\n=== Full Command Line ===');
console.log(`${cmdInfo.command} ${cmdInfo.args.join(' ')}`);

test-config-debug.mjs (new file, 16 lines)

@@ -0,0 +1,16 @@
import { getToolConfig, loadClaudeCliTools } from './ccw/dist/tools/claude-cli-tools.js';
const workingDir = 'D:\\Claude_dms3';
const tool = 'claude';
console.log('=== Step 1: Load full config ===');
const fullConfig = loadClaudeCliTools(workingDir);
console.log('Full config:', JSON.stringify(fullConfig, null, 2));
console.log('\n=== Step 2: Get claude tool from config ===');
const claudeTool = fullConfig.tools.claude;
console.log('Claude tool raw:', JSON.stringify(claudeTool, null, 2));
console.log('\n=== Step 3: Get tool config via getToolConfig ===');
const config = getToolConfig(workingDir, tool);
console.log('Claude tool config:', JSON.stringify(config, null, 2));

test-config.js (new file, 7 lines)

@@ -0,0 +1,7 @@
const { getToolConfig } = require('./ccw/dist/tools/claude-cli-tools.js');
const workingDir = 'D:\\Claude_dms3';
const tool = 'claude';
const config = getToolConfig(workingDir, tool);
console.log('Claude tool config:', JSON.stringify(config, null, 2));

test-config.mjs (new file, 7 lines)

@@ -0,0 +1,7 @@
import { getToolConfig } from './ccw/dist/tools/claude-cli-tools.js';
const workingDir = 'D:\\Claude_dms3';
const tool = 'claude';
const config = getToolConfig(workingDir, tool);
console.log('Claude tool config:', JSON.stringify(config, null, 2));