Compare commits

...

16 Commits

Author SHA1 Message Date
catlog22
78b1287ced chore: Bump version to 6.3.54 2026-01-30 15:43:20 +08:00
catlog22
c6ad8e53b9 Add flow template generator documentation for meta-skill/flow-coordinator
- Introduced a comprehensive guide for generating workflow templates.
- Detailed usage instructions with examples for creating templates.
- Outlined execution flow across three phases: Template Design, Step Definition, and JSON Generation.
- Included JavaScript code snippets for each phase to facilitate implementation.
- Provided suggested step templates for various development scenarios.
- Documented command port references and minimum execution units for clarity.
2026-01-30 15:42:08 +08:00
catlog22
4006b2a0ee feat: add N+1 planning context recording to planning-notes
- Add N+1 Context section (Decisions + Deferred) to planning-notes.md init
- Add section 3.3 N+1 Context Recording in action-planning-agent.md
- Update task-generate-agent.md Phase 2A/2B/3 prompts with N+1 recording

Supports cross-module dependency resolution tracking and deferred items
for continuous planning iterations.
2026-01-30 15:42:00 +08:00
catlog22
0bb102c56a feat: add meta-skill flow-create command for workflow template generation
Interactive command to create flow-coordinator templates with comprehensive
command selection (9 categories), minimum execution units, and command port
reference integrated from ccw/ccw-coordinator/codex-coordinator.
2026-01-30 14:27:20 +08:00
catlog22
fca03a3f9c refactor: rename meta-skill → flow-coordinator, update template cmd paths
**Major changes:**
- Rename skill from meta-skill to flow-coordinator (avoid /ccw conflicts)
- Update all 17 templates: store full /workflow: command paths in cmd field
  - Session ID prefix: ms → fc (fc-YYYYMMDD-HHMMSS)
  - Workflow path: .workflow/.meta-skill → .workflow/.flow-coordinator
- Simplify SKILL.md schema documentation
  - Streamline Status Schema section
  - Consolidate Template Schema with single example
  - Remove redundant Field Explanations and behavior tables
- All templates now store cmd as full paths (e.g. /workflow:lite-plan)
  - Eliminates need for path assembly during execution
  - Matches ccw-coordinator execution format
2026-01-30 12:29:38 +08:00
catlog22
a9df4c6659 refactor: remove the usage recommendation section for the unified execution command, simplify documentation content 2026-01-30 10:54:02 +08:00
catlog22
0a3246ab36 chore: Bump version to 6.3.53 2026-01-30 10:34:02 +08:00
catlog22
b5caee6b94 rename: collaborative-plan → collaborative-plan-with-file 2026-01-30 10:30:37 +08:00
catlog22
64d2156319 refactor: unify lite workflow artifact structure, merge exploration+understanding into planning-context
- Merge exploration.md and understanding.md into a single planning-context.md
- Add an Output Artifacts table to all lite workflows
- Unify the Output Location format in agent prompts
- Update Session Folder Structure to include planning-context.md

Files affected:
- cli-lite-planning-agent.md: update artifact docs and format templates
- collaborative-plan.md: simplify to planning-context.md + sub-plan.json
- lite-plan.md: add artifact table and output location
- lite-fix.md: add artifact table and output location
2026-01-30 10:25:45 +08:00
catlog22
3f46a02df3 feat(cli): add settings file support for builtin Claude
- Enhance CLI status rendering to display settings file information for builtin Claude.
- Introduce settings file input in CLI manager for configuring the path to settings.json.
- Update Claude CLI tool interface to include settingsFile property.
- Implement settings file resolution and validation in CLI executor.
- Create a new collaborative planning workflow command with detailed documentation.
- Add test scripts for debugging tool configuration and command building.
2026-01-30 10:07:02 +08:00
catlog22
4b69492b16 feat: add Codex coordinator command supporting task analysis, command chain recommendation, and sequential execution 2026-01-29 23:36:42 +08:00
catlog22
67a578450c chore: Bump version to 6.3.52 2026-01-29 21:22:49 +08:00
catlog22
d5199ad2d4 Add merge-plans-with-file and quick-plan-with-file prompts for enhanced planning workflows
- Introduced merge-plans-with-file prompt for aggregating multiple planning artifacts, resolving conflicts, and synthesizing a unified execution plan.
- Implemented detailed execution process including discovery, normalization, conflict detection, resolution, and synthesis phases.
- Added quick-plan-with-file prompt for rapid planning with minimal documentation, utilizing multi-agent analysis for actionable outputs.
- Both prompts support various input formats and provide structured outputs for execution and documentation.
2026-01-29 21:21:29 +08:00
catlog22
f3c773a81e Refactor Codex Issue Plan-Execute Skill Documentation and CLI Options
- Deleted obsolete INDEX.md and OPTIMIZATION_SUMMARY.md files, consolidating documentation for improved clarity and organization.
- Removed skipGitRepoCheck option from CLI execution parameters to streamline command usage.
- Updated CLI executor utilities to automatically skip git repository checks, allowing execution in non-git directories.
- Enhanced documentation with new ARCHITECTURE.md and INDEX.md files for better navigation and understanding of the system architecture.
- Created CONTENT_MIGRATION_REPORT.md to verify zero content loss during the consolidation process.
2026-01-29 20:39:12 +08:00
catlog22
875b1f19bd feat: complete Codex Issue Plan-Execute Skill v2.0 optimization
- Add OPTIMIZATION_SUMMARY.md documenting the optimization process and results in detail
- Add README_OPTIMIZATION.md outlining the optimized file structure and key metrics
- Create specs/agent-roles.md consolidating the Planning Agent and Execution Agent role definitions
- Merge multiple prompt files to reduce duplicated content and optimize token usage
- Add ARCHITECTURE.md and INDEX.md providing system architecture and documentation navigation
- Add CONTENT_MIGRATION_REPORT.md to verify complete, zero-loss content migration
- Update file references to preserve backward compatibility and add deprecation notices
2026-01-29 20:37:30 +08:00
catlog22
c08f5382d3 fix: Improve command detail modal tab handling and error reporting 2026-01-29 17:48:06 +08:00
51 changed files with 5255 additions and 950 deletions

View File

@@ -834,10 +834,35 @@ Use `analysis_results.complexity` or task count to determine structure:
- Proper linking between documents
- Consistent navigation and references
### 3.3 N+1 Context Recording
**Purpose**: Record decisions and deferred items for N+1 planning continuity.
**When**: After task generation, update `## N+1 Context` in planning-notes.md.
**What to Record**:
- **Decisions**: Architecture/technology choices with rationale (mark `Revisit?` if may change)
- **Deferred**: Items explicitly moved to N+1 with reason
**Example**:
```markdown
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
| JWT over Session | Stateless scaling | No |
| CROSS::B::api → IMPL-B1 | B1 defines base | Yes |
### Deferred
- [ ] Rate limiting - Requires Redis (N+1)
- [ ] API versioning - Low priority
```
### 3.4 Guidelines Checklist
**ALWAYS:**
- **Load planning-notes.md FIRST**: Read planning-notes.md before context-package.json. Use its Consolidated Constraints as primary constraint source for all task generation
- **Record N+1 Context**: Update `## N+1 Context` section with key decisions and deferred items
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Apply Quantification Requirements to all requirements, acceptance criteria, and modification points
- Load IMPL_PLAN template: `Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)` before generating IMPL_PLAN.md

View File

@@ -1,13 +1,14 @@
---
name: cli-lite-planning-agent
description: |
Generic planning agent for lite-plan, collaborative-plan, and lite-fix workflows. Generates structured plan JSON based on provided schema reference.
Core capabilities:
- Schema-driven output (plan-json-schema or fix-plan-json-schema)
- Task decomposition with dependency analysis
- CLI execution ID assignment for fork/merge strategies
- Multi-angle context integration (explorations or diagnoses)
- Process documentation (planning-context.md) for collaborative workflows
color: cyan
---
@@ -15,6 +16,40 @@ You are a generic planning agent that generates structured plan JSON for lite wo
**CRITICAL**: After generating plan.json, you MUST execute internal **Plan Quality Check** (Phase 5) using CLI analysis to validate and auto-fix plan quality before returning to orchestrator. Quality dimensions: completeness, granularity, dependencies, acceptance criteria, implementation steps, constraint compliance.
## Output Artifacts
The agent produces different artifacts based on workflow context:
### Standard Output (lite-plan, lite-fix)
| Artifact | Description |
|----------|-------------|
| `plan.json` | Structured plan following plan-json-schema.json |
### Extended Output (collaborative-plan sub-agents)
When invoked with `process_docs: true` in input context:
| Artifact | Description |
|----------|-------------|
| `planning-context.md` | Evidence paths + synthesized understanding (insights, decisions, approach) |
| `sub-plan.json` | Sub-plan following plan-json-schema.json with source_agent metadata |
**planning-context.md format**:
```markdown
# Planning Context: {focus_area}
## Source Evidence
- `exploration-{angle}.json` - {key finding}
- `{file}:{line}` - {what this proves}
## Understanding
- Current state: {analysis}
- Proposed approach: {strategy}
## Key Decisions
- Decision: {what} | Rationale: {why} | Evidence: {file ref}
```
## Input Context
@@ -34,10 +69,39 @@ You are a generic planning agent that generates structured plan JSON for lite wo
clarificationContext: { [question]: answer } | null,
complexity: "Low" | "Medium" | "High", // For lite-plan
severity: "Low" | "Medium" | "High" | "Critical", // For lite-fix
cli_config: { tool, template, timeout, fallback },
// Process documentation (collaborative-plan)
process_docs: boolean, // If true, generate planning-context.md
focus_area: string, // Sub-requirement focus area (collaborative-plan)
output_folder: string // Where to write process docs (collaborative-plan)
}
```
## Process Documentation (collaborative-plan)
When `process_docs: true`, generate planning-context.md before sub-plan.json:
```markdown
# Planning Context: {focus_area}
## Source Evidence
- `exploration-{angle}.json` - {key finding from exploration}
- `{file}:{line}` - {code evidence for decision}
## Understanding
- **Current State**: {what exists now}
- **Problem**: {what needs to change}
- **Approach**: {proposed solution strategy}
## Key Decisions
- Decision: {what} | Rationale: {why} | Evidence: {file:line or exploration ref}
## Dependencies
- Depends on: {other sub-requirements or none}
- Provides for: {what this enables}
```
## Schema-Driven Output
**CRITICAL**: Read the schema reference first to determine output structure:

View File

@@ -0,0 +1,513 @@
---
name: codex-coordinator
description: Command orchestration tool for Codex - analyze requirements, recommend command chain, execute sequentially with state persistence
argument-hint: "TASK=\"<task description>\" [--depth=standard|deep] [--auto-confirm] [--verbose]"
---
# Codex Coordinator Command
Interactive orchestration tool for Codex commands: analyze task → discover commands → recommend chain → execute sequentially → track state.
**Execution Model**: Intelligent agent-driven workflow. Claude analyzes each phase and orchestrates command execution.
## Core Concept: Minimum Execution Units
### What is a Minimum Execution Unit?
**Definition**: A set of commands that must execute together as an atomic group to achieve a meaningful workflow milestone. Splitting these commands breaks the logical flow and creates incomplete states.
**Why This Matters**:
- **Prevents Incomplete States**: Avoid stopping after task generation without execution
- **User Experience**: User gets complete results, not intermediate artifacts requiring manual follow-up
- **Workflow Integrity**: Maintains logical coherence of multi-step operations
### Codex Minimum Execution Units
**Planning + Execution Units**:
| Unit Name | Commands | Purpose | Output |
|-----------|----------|---------|--------|
| **Quick Implementation** | lite-plan-a → execute | Lightweight plan and immediate execution | Working code |
| **Bug Fix** | lite-fix → execute | Quick bug diagnosis and fix execution | Fixed code |
| **Issue Workflow** | issue-discover → issue-plan → issue-queue → issue-execute | Complete issue lifecycle | Completed issues |
| **Discovery & Analysis** | issue-discover → issue-discover-by-prompt | Issue discovery with multiple perspectives | Generated issues |
| **Brainstorm to Execution** | brainstorm-with-file → execute | Brainstorm ideas then implement | Working code |
**With-File Workflows** (documented units):
| Unit Name | Commands | Purpose | Output |
|-----------|----------|---------|--------|
| **Brainstorm With File** | brainstorm-with-file | Multi-perspective ideation with documentation | brainstorm.md |
| **Debug With File** | debug-with-file | Hypothesis-driven debugging with documentation | understanding.md |
| **Analyze With File** | analyze-with-file | Collaborative analysis with documentation | discussion.md |
| **Clean & Analyze** | clean → analyze-with-file | Cleanup then analyze | Cleaned code + analysis |
### Command-to-Unit Mapping
| Command | Precedes | Atomic Units |
|---------|----------|--------------|
| lite-plan-a | execute, brainstorm-with-file | Quick Implementation |
| lite-fix | execute | Bug Fix |
| issue-discover | issue-plan | Issue Workflow |
| issue-plan | issue-queue | Issue Workflow |
| issue-queue | issue-execute | Issue Workflow |
| brainstorm-with-file | execute, issue-execute | Brainstorm to Execution |
| debug-with-file | execute | Debug With File |
| analyze-with-file | (standalone) | Analyze With File |
| clean | analyze-with-file, execute | Clean & Analyze |
| quick-plan-with-file | execute | Quick Planning with File |
| merge-plans-with-file | execute | Merge Multiple Plans |
| unified-execute-with-file | (terminal) | Execute with File Tracking |
### Atomic Group Rules
1. **Never Split Units**: Coordinator must recommend complete units, not partial chains
2. **Multi-Unit Participation**: Some commands can participate in multiple units
3. **User Override**: User can explicitly request partial execution (advanced mode)
4. **Visualization**: Pipeline view shows unit boundaries with 【 】markers
5. **Validation**: Before execution, verify all unit commands are included
**Example Pipeline with Units**:
```
Requirement → 【lite-plan-a → execute】→ Code → 【issue-discover → issue-plan → issue-queue → issue-execute】→ Done
              └──── Quick Implementation ────┘        └────────────── Issue Workflow ──────────────┘
```
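Rule 5 above (validate that all unit commands are included before execution) can be sketched as a small check. This is a minimal illustration, not the coordinator's actual implementation; the `ATOMIC_UNITS` lookup below is a hypothetical structure mirroring the mapping table, and a unit is assumed to apply when the chain contains its entry command.

```javascript
// Hypothetical lookup mirroring the unit tables above (subset shown).
const ATOMIC_UNITS = {
  'Quick Implementation': ['lite-plan-a', 'execute'],
  'Bug Fix': ['lite-fix', 'execute'],
  'Issue Workflow': ['issue-discover', 'issue-plan', 'issue-queue', 'issue-execute']
};

function validateChain(chain) {
  const missing = [];
  for (const commands of Object.values(ATOMIC_UNITS)) {
    // A unit applies when the chain contains its entry command;
    // then every remaining command of that unit must also be present.
    if (chain.includes(commands[0])) {
      missing.push(...commands.filter(cmd => !chain.includes(cmd)));
    }
  }
  return { valid: missing.length === 0, missing };
}
```

For example, `validateChain(['issue-discover'])` reports the rest of the Issue Workflow as missing, so the coordinator would refuse to recommend that partial chain.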
## 3-Phase Workflow
### Phase 1: Analyze Requirements
Parse task to extract: goal, scope, complexity, and task type.
```javascript
function analyzeRequirements(taskDescription) {
return {
goal: extractMainGoal(taskDescription), // e.g., "Fix login bug"
scope: extractScope(taskDescription), // e.g., ["auth", "login"]
complexity: determineComplexity(taskDescription), // 'simple' | 'medium' | 'complex'
task_type: detectTaskType(taskDescription) // See task type patterns below
};
}
// Task Type Detection Patterns
function detectTaskType(text) {
// Priority order (first match wins)
if (/fix|bug|error|crash|fail|debug|diagnose/.test(text)) return 'bugfix';
if (/生成|generate|discover|找出|issue|问题/.test(text)) return 'discovery';
if (/plan|规划|设计|design|analyze|分析/.test(text)) return 'analysis';
if (/清理|cleanup|clean|refactor|重构/.test(text)) return 'cleanup';
if (/头脑|brainstorm|创意|ideation/.test(text)) return 'brainstorm';
if (/合并|merge|combine|batch/.test(text)) return 'batch-planning';
return 'feature'; // Default
}
// Complexity Assessment
function determineComplexity(text) {
let score = 0;
if (/refactor|重构|migrate|迁移|architect|架构|system|系统/.test(text)) score += 2;
if (/multiple|多个|across|跨|all|所有|entire|整个/.test(text)) score += 2;
if (/integrate|集成|api|database|数据库/.test(text)) score += 1;
if (/security|安全|performance|性能|scale|扩展/.test(text)) score += 1;
return score >= 4 ? 'complex' : score >= 2 ? 'medium' : 'simple';
}
```
**Display to user**:
```
Analysis Complete:
Goal: [extracted goal]
Scope: [identified areas]
Complexity: [level]
Task Type: [detected type]
```
### Phase 2: Discover Commands & Recommend Chain
Dynamic command chain assembly using task type and complexity matching.
#### Available Codex Commands (Discovery)
All commands from `~/.codex/prompts/`:
- **Planning**: @~/.codex/prompts/lite-plan-a.md, @~/.codex/prompts/lite-plan-b.md, @~/.codex/prompts/lite-plan-c.md, @~/.codex/prompts/quick-plan-with-file.md, @~/.codex/prompts/merge-plans-with-file.md
- **Execution**: @~/.codex/prompts/execute.md, @~/.codex/prompts/unified-execute-with-file.md
- **Bug Fixes**: @~/.codex/prompts/lite-fix.md, @~/.codex/prompts/debug-with-file.md
- **Discovery**: @~/.codex/prompts/issue-discover.md, @~/.codex/prompts/issue-discover-by-prompt.md, @~/.codex/prompts/issue-plan.md, @~/.codex/prompts/issue-queue.md, @~/.codex/prompts/issue-execute.md
- **Analysis**: @~/.codex/prompts/analyze-with-file.md
- **Brainstorming**: @~/.codex/prompts/brainstorm-with-file.md, @~/.codex/prompts/brainstorm-to-cycle.md
- **Cleanup**: @~/.codex/prompts/clean.md, @~/.codex/prompts/compact.md
#### Recommendation Algorithm
```javascript
async function recommendCommandChain(analysis) {
// Step 1: Determine the port flow from the task type
const { inputPort, outputPort } = determinePortFlow(analysis.task_type, analysis.complexity);
// Step 2: Claude intelligently selects the command sequence based on command characteristics and task features
const chain = selectChainByTaskType(analysis);
return chain;
}
// Port flow for each task type
function determinePortFlow(taskType, complexity) {
const flows = {
'bugfix': { flow: ['lite-fix', 'execute'], depth: complexity === 'complex' ? 'deep' : 'standard' },
'discovery': { flow: ['issue-discover', 'issue-plan', 'issue-queue', 'issue-execute'], depth: 'standard' },
'analysis': { flow: ['analyze-with-file'], depth: complexity === 'complex' ? 'deep' : 'standard' },
'cleanup': { flow: ['clean'], depth: 'standard' },
'brainstorm': { flow: ['brainstorm-with-file', 'execute'], depth: complexity === 'complex' ? 'deep' : 'standard' },
'batch-planning': { flow: ['merge-plans-with-file', 'execute'], depth: 'standard' },
'feature': { flow: complexity === 'complex' ? ['lite-plan-b'] : ['lite-plan-a', 'execute'], depth: complexity === 'complex' ? 'deep' : 'standard' }
};
return flows[taskType] || flows['feature'];
}
```
#### Display to User
```
Recommended Command Chain:
Pipeline:
Requirement → @~/.codex/prompts/lite-plan-a.md → Plan → @~/.codex/prompts/execute.md → Code complete
Commands:
1. @~/.codex/prompts/lite-plan-a.md
2. @~/.codex/prompts/execute.md
Proceed? [Confirm / Show Details / Adjust / Cancel]
```
### Phase 2b: Get User Confirmation
Ask user for confirmation before proceeding with execution.
```javascript
async function getUserConfirmation(chain) {
const response = await AskUserQuestion({
questions: [{
question: 'Proceed with this command chain?',
header: 'Confirm Chain',
multiSelect: false,
options: [
{ label: 'Confirm and execute', description: 'Proceed with commands' },
{ label: 'Show details', description: 'View each command' },
{ label: 'Adjust chain', description: 'Remove or reorder' },
{ label: 'Cancel', description: 'Abort' }
]
}]
});
return response;
}
```
### Phase 3: Execute Sequential Command Chain
```javascript
async function executeCommandChain(chain, analysis) {
const sessionId = `codex-coord-${Date.now()}`;
const stateDir = `.workflow/.codex-coordinator/${sessionId}`;
// Create state directory
const state = {
session_id: sessionId,
status: 'running',
created_at: new Date().toISOString(),
analysis: analysis,
command_chain: chain.map((cmd, idx) => ({ ...cmd, index: idx, status: 'pending' })),
execution_results: [],
};
// Save initial state
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
for (let i = 0; i < chain.length; i++) {
const cmd = chain[i];
console.log(`[${i+1}/${chain.length}] Executing: @~/.codex/prompts/${cmd.name}.md`);
// Update status to running
state.command_chain[i].status = 'running';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
try {
// Build command with parameters using full path
let commandStr = `@~/.codex/prompts/${cmd.name}.md`;
// Add parameters based on previous results and task context
if (i > 0 && state.execution_results.length > 0) {
const lastResult = state.execution_results[state.execution_results.length - 1];
commandStr += ` --resume="${lastResult.session_id || lastResult.artifact}"`;
}
// For analysis-based commands, add depth parameter
if (analysis.complexity === 'complex' && (cmd.name.includes('analyze') || cmd.name.includes('plan'))) {
commandStr += ` --depth=deep`;
}
// Add task description for planning commands
if (cmd.type === 'planning' && i === 0) {
commandStr += ` TASK="${analysis.goal}"`;
}
// Execute command via Bash (spawning as background task)
// Format: @~/.codex/prompts/command-name.md <parameters>
// Note: This simulates the execution; actual implementation uses hook callbacks
console.log(`Executing: ${commandStr}`);
// Save execution record
state.execution_results.push({
index: i,
command: cmd.name,
status: 'in-progress',
started_at: new Date().toISOString(),
session_id: null,
artifact: null
});
state.command_chain[i].status = 'completed';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
console.log(`[${i+1}/${chain.length}] ✓ Completed: @~/.codex/prompts/${cmd.name}.md`);
} catch (error) {
state.command_chain[i].status = 'failed';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
console.log(`❌ Command failed: ${error.message}`);
break;
}
}
state.status = 'completed';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
console.log(`\n✅ Orchestration Complete: ${state.session_id}`);
return state;
}
```
## State File Structure
**Location**: `.workflow/.codex-coordinator/{session_id}/state.json`
```json
{
"session_id": "codex-coord-20250129-143025",
"status": "running|waiting|completed|failed",
"created_at": "2025-01-29T14:30:25Z",
"updated_at": "2025-01-29T14:35:45Z",
"analysis": {
"goal": "Fix login authentication bug",
"scope": ["auth", "login"],
"complexity": "medium",
"task_type": "bugfix"
},
"command_chain": [
{
"index": 0,
"name": "lite-fix",
"type": "bugfix",
"status": "completed"
},
{
"index": 1,
"name": "execute",
"type": "execution",
"status": "pending"
}
],
"execution_results": [
{
"index": 0,
"command": "lite-fix",
"status": "completed",
"started_at": "2025-01-29T14:30:25Z",
"session_id": "fix-login-2025-01-29",
"artifact": ".workflow/.lite-fix/fix-login-2025-01-29/fix-plan.json"
}
]
}
```
### Status Values
- `running`: Orchestrator actively executing
- `waiting`: Paused, waiting for external events
- `completed`: All commands finished successfully
- `failed`: Error occurred or user aborted
## Task Type Routing (Pipeline Summary)
**Note**: 【 】 marks Minimum Execution Units - these commands must execute together.
| Task Type | Pipeline | Minimum Units |
|-----------|----------|---|
| **bugfix** | Bug report →【@~/.codex/prompts/lite-fix.md → @~/.codex/prompts/execute.md】→ Fixed code | Bug Fix |
| **discovery** | Requirement →【@~/.codex/prompts/issue-discover.md → @~/.codex/prompts/issue-plan.md → @~/.codex/prompts/issue-queue.md → @~/.codex/prompts/issue-execute.md】→ Completed issues | Issue Workflow |
| **analysis** | Requirement → @~/.codex/prompts/analyze-with-file.md → Analysis report | Analyze With File |
| **cleanup** | Codebase → @~/.codex/prompts/clean.md → Cleanup complete | Cleanup |
| **brainstorm** | Topic →【@~/.codex/prompts/brainstorm-with-file.md → @~/.codex/prompts/execute.md】→ Implemented code | Brainstorm to Execution |
| **batch-planning** | Requirement set →【@~/.codex/prompts/merge-plans-with-file.md → @~/.codex/prompts/execute.md】→ Code complete | Merge Multiple Plans |
| **feature** (simple) | Requirement →【@~/.codex/prompts/lite-plan-a.md → @~/.codex/prompts/execute.md】→ Code | Quick Implementation |
| **feature** (complex) | Requirement → @~/.codex/prompts/lite-plan-b.md → Detailed plan → @~/.codex/prompts/execute.md → Code | Complex Planning |
## Available Commands Reference
### Planning Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **lite-plan-a** | Lightweight merged-mode planning | `@~/.codex/prompts/lite-plan-a.md TASK="..."` | plan.json |
| **lite-plan-b** | Multi-angle exploration planning | `@~/.codex/prompts/lite-plan-b.md TASK="..."` | plan.json |
| **lite-plan-c** | Parallel angle planning | `@~/.codex/prompts/lite-plan-c.md TASK="..."` | plan.json |
| **quick-plan-with-file** | Quick planning with file tracking | `@~/.codex/prompts/quick-plan-with-file.md TASK="..."` | plan + docs |
| **merge-plans-with-file** | Merge multiple plans | `@~/.codex/prompts/merge-plans-with-file.md PLANS="..."` | merged-plan.json |
### Execution Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **execute** | Execute tasks from plan | `@~/.codex/prompts/execute.md SESSION=".../plan/"` | Working code |
| **unified-execute-with-file** | Execute with file tracking | `@~/.codex/prompts/unified-execute-with-file.md SESSION="..."` | Code + tracking |
### Bug Fix Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **lite-fix** | Quick bug diagnosis and planning | `@~/.codex/prompts/lite-fix.md BUG="..."` | fix-plan.json |
| **debug-with-file** | Hypothesis-driven debugging | `@~/.codex/prompts/debug-with-file.md BUG="..."` | understanding.md |
### Discovery Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **issue-discover** | Multi-perspective issue discovery | `@~/.codex/prompts/issue-discover.md PATTERN="src/**"` | issues.jsonl |
| **issue-discover-by-prompt** | Prompt-based discovery | `@~/.codex/prompts/issue-discover-by-prompt.md PROMPT="..."` | issues |
| **issue-plan** | Plan issue solutions | `@~/.codex/prompts/issue-plan.md --all-pending` | issue-plans.json |
| **issue-queue** | Form execution queue | `@~/.codex/prompts/issue-queue.md --from-plan` | queue.json |
| **issue-execute** | Execute issue queue | `@~/.codex/prompts/issue-execute.md QUEUE="..."` | Completed |
### Analysis Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **analyze-with-file** | Collaborative analysis | `@~/.codex/prompts/analyze-with-file.md TOPIC="..."` | discussion.md |
### Brainstorm Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **brainstorm-with-file** | Multi-perspective brainstorming | `@~/.codex/prompts/brainstorm-with-file.md TOPIC="..."` | brainstorm.md |
| **brainstorm-to-cycle** | Bridge brainstorm to execution | `@~/.codex/prompts/brainstorm-to-cycle.md` | Executable plan |
### Utility Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **clean** | Intelligent code cleanup | `@~/.codex/prompts/clean.md` | Cleaned code |
| **compact** | Compact session memory | `@~/.codex/prompts/compact.md SESSION="..."` | Compressed state |
## Execution Flow
```
User Input: TASK="..."
Phase 1: analyzeRequirements(task)
Phase 2: recommendCommandChain(analysis)
Display pipeline and commands
User Confirmation
Phase 3: executeCommandChain(chain, analysis)
├─ For each command:
│ ├─ Update state to "running"
│ ├─ Build command string with parameters
│ ├─ Execute @command with parameters
│ ├─ Save execution results
│ └─ Update state to "completed"
Output completion summary
```
## Key Design Principles
1. **Atomic Execution** - Never split minimum execution units
2. **State Persistence** - All state saved to JSON
3. **User Control** - Confirmation before execution
4. **Context Passing** - Parameters chain across commands
5. **Resume Support** - Can resume from state.json
6. **Intelligent Routing** - Task type determines command chain
7. **Complexity Awareness** - Different paths for simple vs complex tasks
## Command Invocation Format
**Format**: `@~/.codex/prompts/<command-name>.md <parameters>`
**Examples**:
```bash
@~/.codex/prompts/lite-plan-a.md TASK="Implement user authentication"
@~/.codex/prompts/execute.md SESSION=".workflow/.lite-plan/..."
@~/.codex/prompts/lite-fix.md BUG="Login fails with 404 error"
@~/.codex/prompts/issue-discover.md PATTERN="src/auth/**"
@~/.codex/prompts/brainstorm-with-file.md TOPIC="Improve user onboarding"
```
## Error Handling
| Situation | Action |
|-----------|--------|
| Unknown task type | Default to feature implementation |
| Command not found | Error: command not available |
| Execution fails | Report error, offer retry or skip |
| Invalid parameters | Validate and ask for correction |
| Circular dependency | Detect and report |
| All commands fail | Report and suggest manual intervention |
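The "Execution fails → Report error, offer retry or skip" row can be sketched as a small recovery wrapper. This is an illustrative sketch only; `runCommand` and `askUser` are hypothetical stand-ins for the coordinator's actual execution and confirmation mechanisms.

```javascript
// Sketch of the "Execution fails" handling: report the error, then let
// the user choose to retry the same command or skip it and continue.
// runCommand and askUser are hypothetical callbacks, not real APIs.
async function runWithRecovery(cmd, runCommand, askUser) {
  try {
    return { status: 'completed', result: await runCommand(cmd) };
  } catch (error) {
    const choice = await askUser(`Command ${cmd} failed: ${error.message}. Retry or skip?`);
    if (choice === 'retry') {
      return runWithRecovery(cmd, runCommand, askUser); // try again
    }
    return { status: 'skipped', error: error.message };
  }
}
```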
## Session Management
**Resume Previous Session**:
```
1. Find session in .workflow/.codex-coordinator/
2. Load state.json
3. Identify last completed command
4. Restart from next pending command
```
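The resume steps above reduce to picking the first non-completed entry in the saved chain. A minimal sketch, assuming a state object shaped like the State File Structure section:

```javascript
// Sketch of step 3-4 of resume: given a parsed state.json, return the
// name of the next command to run, or null if the session is finished.
function nextPendingCommand(state) {
  const next = state.command_chain.find(cmd => cmd.status !== 'completed');
  return next ? next.name : null;
}
```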
**View Session Progress**:
```
cat .workflow/.codex-coordinator/{session-id}/state.json
```
---
## Execution Instructions
The coordinator workflow follows these steps:
1. **Parse Input**: Extract task description from TASK parameter
2. **Analyze**: Determine goal, scope, complexity, and task type
3. **Recommend**: Build optimal command chain based on analysis
4. **Confirm**: Display pipeline and request user approval
5. **Execute**: Run commands sequentially with state tracking
6. **Report**: Display final results and artifacts
To use this coordinator, invoke it as a Claude Code command (not a Codex command). From the Claude Code CLI, Codex commands are called like:
```bash
@~/.codex/prompts/lite-plan-a.md TASK="Your task description"
```
Or with options:
```bash
@~/.codex/prompts/lite-plan-a.md TASK="..." --depth=deep
```
This coordinator orchestrates such Codex commands based on your task requirements.

View File

@@ -0,0 +1,675 @@
# Flow Template Generator
Generate workflow templates for meta-skill/flow-coordinator.
## Usage
```
/meta-skill:flow-create [template-name] [--output <path>]
```
**Examples**:
```bash
/meta-skill:flow-create bugfix-v2
/meta-skill:flow-create my-workflow --output ~/.claude/skills/my-skill/templates/
```
## Execution Flow
```
User Input → Phase 1: Template Design → Phase 2: Step Definition → Phase 3: Generate JSON
                      ↓                          ↓                          ↓
              Name + Description         Define workflow steps       Write template file
```
---
## Phase 1: Template Design
Gather basic template information:
```javascript
async function designTemplate(input) {
  const templateName = parseTemplateName(input) || await askTemplateName();
  const metadata = await AskUserQuestion({
    questions: [
      {
        question: "What is the purpose of this workflow template?",
        header: "Purpose",
        options: [
          { label: "Feature Development", description: "Implement new features with planning and testing" },
          { label: "Bug Fix", description: "Diagnose and fix bugs with verification" },
          { label: "TDD Development", description: "Test-driven development workflow" },
          { label: "Code Review", description: "Review cycle with findings and fixes" },
          { label: "Testing", description: "Test generation and validation" },
          { label: "Issue Workflow", description: "Complete issue lifecycle (discover → plan → queue → execute)" },
          { label: "With-File Workflow", description: "Documented exploration (brainstorm/debug/analyze)" },
          { label: "Custom", description: "Define custom workflow purpose" }
        ],
        multiSelect: false
      },
      {
        question: "What complexity level?",
        header: "Level",
        options: [
          { label: "Level 1 (Rapid)", description: "1-2 steps, ultra-lightweight (lite-lite-lite)" },
          { label: "Level 2 (Lightweight)", description: "2-4 steps, quick implementation" },
          { label: "Level 3 (Standard)", description: "4-6 steps, with verification and testing" },
          { label: "Level 4 (Full)", description: "6+ steps, brainstorm + full workflow" }
        ],
        multiSelect: false
      }
    ]
  });
  return {
    name: templateName,
    description: generateDescription(templateName, metadata.Purpose),
    level: parseLevel(metadata.Level),
    purpose: metadata.Purpose
  };
}
```
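`parseTemplateName`, `generateDescription`, and `parseLevel` are left to the implementation. As an illustration, a minimal sketch of `parseLevel` (a hypothetical helper, not part of the documented API) could extract the numeric level from the selected label:

```javascript
// Hypothetical helper: extract the numeric level from a label
// such as "Level 3 (Standard)"; fall back to 2 when no number is found.
function parseLevel(label) {
  const match = /Level\s+(\d+)/.exec(label || '');
  return match ? parseInt(match[1], 10) : 2;
}
```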
---
## Phase 2: Step Definition
### Step 2.1: Select Command Category
```javascript
async function selectCommandCategory() {
  return await AskUserQuestion({
    questions: [{
      question: "Select command category",
      header: "Category",
      options: [
        { label: "Planning", description: "lite-plan, plan, multi-cli-plan, tdd-plan, quick-plan-with-file" },
        { label: "Execution", description: "lite-execute, execute, unified-execute-with-file" },
        { label: "Testing", description: "test-fix-gen, test-cycle-execute, test-gen, tdd-verify" },
        { label: "Review", description: "review-session-cycle, review-module-cycle, review-cycle-fix" },
        { label: "Bug Fix", description: "lite-fix, debug-with-file" },
        { label: "Brainstorm", description: "brainstorm-with-file, brainstorm:auto-parallel" },
        { label: "Analysis", description: "analyze-with-file" },
        { label: "Issue", description: "discover, plan, queue, execute, from-brainstorm, convert-to-plan" },
        { label: "Utility", description: "clean, init, replan, status" }
      ],
      multiSelect: false
    }]
  });
}
```
### Step 2.2: Select Specific Command
```javascript
async function selectCommand(category) {
  const commandOptions = {
    'Planning': [
      { label: "/workflow:lite-plan", description: "Lightweight merged-mode planning" },
      { label: "/workflow:plan", description: "Full planning with architecture design" },
      { label: "/workflow:multi-cli-plan", description: "Multi-CLI collaborative planning (Gemini+Codex+Claude)" },
      { label: "/workflow:tdd-plan", description: "TDD workflow planning with Red-Green-Refactor" },
      { label: "/workflow:quick-plan-with-file", description: "Rapid planning with minimal docs" },
      { label: "/workflow:plan-verify", description: "Verify plan against requirements" },
      { label: "/workflow:replan", description: "Update plan and execute changes" }
    ],
    'Execution': [
      { label: "/workflow:lite-execute", description: "Execute from in-memory plan" },
      { label: "/workflow:execute", description: "Execute from planning session" },
      { label: "/workflow:unified-execute-with-file", description: "Universal execution engine" },
      { label: "/workflow:lite-lite-lite", description: "Ultra-lightweight multi-tool execution" }
    ],
    'Testing': [
      { label: "/workflow:test-fix-gen", description: "Generate test tasks for specific issues" },
      { label: "/workflow:test-cycle-execute", description: "Execute iterative test-fix cycle (>=95% pass)" },
      { label: "/workflow:test-gen", description: "Generate comprehensive test suite" },
      { label: "/workflow:tdd-verify", description: "Verify TDD workflow compliance" }
    ],
    'Review': [
      { label: "/workflow:review-session-cycle", description: "Session-based multi-dimensional code review" },
      { label: "/workflow:review-module-cycle", description: "Module-focused code review" },
      { label: "/workflow:review-cycle-fix", description: "Fix review findings with prioritization" },
      { label: "/workflow:review", description: "Post-implementation review" }
    ],
    'Bug Fix': [
      { label: "/workflow:lite-fix", description: "Lightweight bug diagnosis and fix" },
      { label: "/workflow:debug-with-file", description: "Hypothesis-driven debugging with documentation" }
    ],
    'Brainstorm': [
      { label: "/workflow:brainstorm-with-file", description: "Multi-perspective ideation with documentation" },
      { label: "/workflow:brainstorm:auto-parallel", description: "Parallel multi-role brainstorming" }
    ],
    'Analysis': [
      { label: "/workflow:analyze-with-file", description: "Collaborative analysis with documentation" }
    ],
    'Issue': [
      { label: "/issue:discover", description: "Multi-perspective issue discovery" },
      { label: "/issue:discover-by-prompt", description: "Prompt-based issue discovery with Gemini" },
      { label: "/issue:plan", description: "Plan issue solutions" },
      { label: "/issue:queue", description: "Form execution queue with conflict analysis" },
      { label: "/issue:execute", description: "Execute issue queue with DAG orchestration" },
      { label: "/issue:from-brainstorm", description: "Convert brainstorm to issue" },
      { label: "/issue:convert-to-plan", description: "Convert planning artifacts to issue solutions" }
    ],
    'Utility': [
      { label: "/workflow:clean", description: "Intelligent code cleanup" },
      { label: "/workflow:init", description: "Initialize project-level state" },
      { label: "/workflow:replan", description: "Interactive workflow replanning" },
      { label: "/workflow:status", description: "Generate workflow status views" }
    ]
  };
  return await AskUserQuestion({
    questions: [{
      question: `Select ${category} command`,
      header: "Command",
      options: commandOptions[category] || commandOptions['Planning'],
      multiSelect: false
    }]
  });
}
```
### Step 2.3: Select Execution Unit
```javascript
async function selectExecutionUnit() {
  return await AskUserQuestion({
    questions: [{
      question: "Select execution unit (atomic command group)",
      header: "Unit",
      options: [
        // Planning + Execution Units
        { label: "quick-implementation", description: "【lite-plan → lite-execute】" },
        { label: "multi-cli-planning", description: "【multi-cli-plan → lite-execute】" },
        { label: "full-planning-execution", description: "【plan → execute】" },
        { label: "verified-planning-execution", description: "【plan → plan-verify → execute】" },
        { label: "replanning-execution", description: "【replan → execute】" },
        { label: "tdd-planning-execution", description: "【tdd-plan → execute】" },
        // Testing Units
        { label: "test-validation", description: "【test-fix-gen → test-cycle-execute】" },
        { label: "test-generation-execution", description: "【test-gen → execute】" },
        // Review Units
        { label: "code-review", description: "【review-*-cycle → review-cycle-fix】" },
        // Bug Fix Units
        { label: "bug-fix", description: "【lite-fix → lite-execute】" },
        // Issue Units
        { label: "issue-workflow", description: "【discover → plan → queue → execute】" },
        { label: "rapid-to-issue", description: "【lite-plan → convert-to-plan → queue → execute】" },
        { label: "brainstorm-to-issue", description: "【from-brainstorm → queue → execute】" },
        // With-File Units (self-contained)
        { label: "brainstorm-with-file", description: "Self-contained brainstorming workflow" },
        { label: "debug-with-file", description: "Self-contained debugging workflow" },
        { label: "analyze-with-file", description: "Self-contained analysis workflow" },
        // Standalone
        { label: "standalone", description: "Single command, no atomic grouping" }
      ],
      multiSelect: false
    }]
  });
}
```
### Step 2.4: Select Execution Mode
```javascript
async function selectExecutionMode() {
  return await AskUserQuestion({
    questions: [{
      question: "Execution mode for this step?",
      header: "Mode",
      options: [
        { label: "mainprocess", description: "Run in main process (blocking, synchronous)" },
        { label: "async", description: "Run asynchronously (background, hook callbacks)" }
      ],
      multiSelect: false
    }]
  });
}
```
### Complete Step Definition Flow
```javascript
async function defineSteps(templateDesign) {
  // Suggest steps based on purpose
  const suggestedSteps = getSuggestedSteps(templateDesign.purpose);
  const customize = await AskUserQuestion({
    questions: [{
      question: "Use suggested steps or customize?",
      header: "Steps",
      options: [
        { label: "Use Suggested", description: `Suggested: ${suggestedSteps.map(s => s.cmd).join(' → ')}` },
        { label: "Customize", description: "Modify or add custom steps" },
        { label: "Start Empty", description: "Define all steps from scratch" }
      ],
      multiSelect: false
    }]
  });
  if (customize.Steps === "Use Suggested") {
    return suggestedSteps;
  }

  // Interactive step definition
  const steps = [];
  let addMore = true;
  while (addMore) {
    const category = await selectCommandCategory();
    const command = await selectCommand(category.Category);
    const unit = await selectExecutionUnit();
    const execMode = await selectExecutionMode();
    const contextHint = await askContextHint(command.Command);
    steps.push({
      cmd: command.Command,
      args: command.Command.includes('plan') || command.Command.includes('fix') ? '"{{goal}}"' : undefined,
      unit: unit.Unit,
      execution: {
        type: "slash-command",
        mode: execMode.Mode
      },
      contextHint: contextHint
    });
    const continueAdding = await AskUserQuestion({
      questions: [{
        question: `Added step ${steps.length}: ${command.Command}. Add another?`,
        header: "Continue",
        options: [
          { label: "Add More", description: "Define another step" },
          { label: "Done", description: "Finish step definition" }
        ],
        multiSelect: false
      }]
    });
    addMore = continueAdding.Continue === "Add More";
  }
  return steps;
}
```
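`getSuggestedSteps` maps the purpose chosen in Phase 1 to one of the suggested step templates below. A minimal sketch, assuming purpose labels match the Phase 1 options (the lookup table shown here is an illustrative subset, not the full mapping):

```javascript
// Hypothetical lookup: map a Phase 1 purpose to a default step chain.
// Step objects follow the template schema used throughout this document.
function getSuggestedSteps(purpose) {
  const suggestions = {
    'Bug Fix': [
      { cmd: '/workflow:lite-fix', args: '"{{goal}}"', unit: 'bug-fix',
        execution: { type: 'slash-command', mode: 'mainprocess' } },
      { cmd: '/workflow:lite-execute', args: '--in-memory', unit: 'bug-fix',
        execution: { type: 'slash-command', mode: 'mainprocess' } }
    ],
    'Feature Development': [
      { cmd: '/workflow:lite-plan', args: '"{{goal}}"', unit: 'quick-implementation',
        execution: { type: 'slash-command', mode: 'mainprocess' } },
      { cmd: '/workflow:lite-execute', args: '--in-memory', unit: 'quick-implementation',
        execution: { type: 'slash-command', mode: 'mainprocess' } }
    ]
  };
  return suggestions[purpose] || [];
}
```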
---
## Suggested Step Templates
### Feature Development (Level 2 - Rapid)
```json
{
"name": "rapid",
"description": "Quick implementation with testing",
"level": 2,
"steps": [
{ "cmd": "/workflow:lite-plan", "args": "\"{{goal}}\"", "unit": "quick-implementation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create lightweight implementation plan" },
{ "cmd": "/workflow:lite-execute", "args": "--in-memory", "unit": "quick-implementation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Execute implementation based on plan" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate test tasks" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test-fix cycle until pass rate >= 95%" }
]
}
```
### Feature Development (Level 3 - Coupled)
```json
{
"name": "coupled",
"description": "Full workflow with verification, review, and testing",
"level": 3,
"steps": [
{ "cmd": "/workflow:plan", "args": "\"{{goal}}\"", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create detailed implementation plan" },
{ "cmd": "/workflow:plan-verify", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Verify plan against requirements" },
{ "cmd": "/workflow:execute", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute implementation" },
{ "cmd": "/workflow:review-session-cycle", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Multi-dimensional code review" },
{ "cmd": "/workflow:review-cycle-fix", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Fix review findings" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate test tasks" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test-fix cycle" }
]
}
```
### Bug Fix (Level 2)
```json
{
"name": "bugfix",
"description": "Bug diagnosis and fix with testing",
"level": 2,
"steps": [
{ "cmd": "/workflow:lite-fix", "args": "\"{{goal}}\"", "unit": "bug-fix", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Diagnose and plan bug fix" },
{ "cmd": "/workflow:lite-execute", "args": "--in-memory", "unit": "bug-fix", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Execute bug fix" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate regression tests" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Verify fix with tests" }
]
}
```
### Bug Fix Hotfix (Level 2)
```json
{
"name": "bugfix-hotfix",
"description": "Urgent production bug fix (no tests)",
"level": 2,
"steps": [
{ "cmd": "/workflow:lite-fix", "args": "--hotfix \"{{goal}}\"", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Emergency hotfix mode" }
]
}
```
### TDD Development (Level 3)
```json
{
"name": "tdd",
"description": "Test-driven development with Red-Green-Refactor",
"level": 3,
"steps": [
{ "cmd": "/workflow:tdd-plan", "args": "\"{{goal}}\"", "unit": "tdd-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create TDD task chain" },
{ "cmd": "/workflow:execute", "unit": "tdd-planning-execution", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute TDD cycle" },
{ "cmd": "/workflow:tdd-verify", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Verify TDD compliance" }
]
}
```
### Code Review (Level 3)
```json
{
"name": "review",
"description": "Code review cycle with fixes and testing",
"level": 3,
"steps": [
{ "cmd": "/workflow:review-session-cycle", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Multi-dimensional code review" },
{ "cmd": "/workflow:review-cycle-fix", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Fix review findings" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate tests for fixes" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Verify fixes pass tests" }
]
}
```
### Test Fix (Level 3)
```json
{
"name": "test-fix",
"description": "Fix failing tests",
"level": 3,
"steps": [
{ "cmd": "/workflow:test-fix-gen", "args": "\"{{goal}}\"", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate test fix tasks" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test-fix cycle" }
]
}
```
### Issue Workflow (Level Issue)
```json
{
"name": "issue",
"description": "Complete issue lifecycle",
"level": "Issue",
"steps": [
{ "cmd": "/issue:discover", "unit": "issue-workflow", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Discover issues from codebase" },
{ "cmd": "/issue:plan", "args": "--all-pending", "unit": "issue-workflow", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Plan issue solutions" },
{ "cmd": "/issue:queue", "unit": "issue-workflow", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Form execution queue" },
{ "cmd": "/issue:execute", "unit": "issue-workflow", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute issue queue" }
]
}
```
### Rapid to Issue (Level 2)
```json
{
"name": "rapid-to-issue",
"description": "Bridge lightweight planning to issue workflow",
"level": 2,
"steps": [
{ "cmd": "/workflow:lite-plan", "args": "\"{{goal}}\"", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create lightweight plan" },
{ "cmd": "/issue:convert-to-plan", "args": "--latest-lite-plan -y", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Convert to issue plan" },
{ "cmd": "/issue:queue", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Form execution queue" },
{ "cmd": "/issue:execute", "args": "--queue auto", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute issue queue" }
]
}
```
### Brainstorm to Issue (Level 4)
```json
{
"name": "brainstorm-to-issue",
"description": "Bridge brainstorm session to issue workflow",
"level": 4,
"steps": [
{ "cmd": "/issue:from-brainstorm", "args": "SESSION=\"{{session}}\" --auto", "unit": "brainstorm-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Convert brainstorm to issue" },
{ "cmd": "/issue:queue", "unit": "brainstorm-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Form execution queue" },
{ "cmd": "/issue:execute", "args": "--queue auto", "unit": "brainstorm-to-issue", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute issue queue" }
]
}
```
### With-File: Brainstorm (Level 4)
```json
{
"name": "brainstorm",
"description": "Multi-perspective ideation with documentation",
"level": 4,
"steps": [
{ "cmd": "/workflow:brainstorm-with-file", "args": "\"{{goal}}\"", "unit": "brainstorm-with-file", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Multi-CLI brainstorming with documented diverge-converge cycles" }
]
}
```
### With-File: Debug (Level 3)
```json
{
"name": "debug",
"description": "Hypothesis-driven debugging with documentation",
"level": 3,
"steps": [
{ "cmd": "/workflow:debug-with-file", "args": "\"{{goal}}\"", "unit": "debug-with-file", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Hypothesis-driven debugging with Gemini validation" }
]
}
```
### With-File: Analyze (Level 3)
```json
{
"name": "analyze",
"description": "Collaborative analysis with documentation",
"level": 3,
"steps": [
{ "cmd": "/workflow:analyze-with-file", "args": "\"{{goal}}\"", "unit": "analyze-with-file", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Multi-round collaborative analysis with CLI exploration" }
]
}
```
### Full Workflow (Level 4)
```json
{
"name": "full",
"description": "Complete workflow: brainstorm → plan → execute → test",
"level": 4,
"steps": [
{ "cmd": "/workflow:brainstorm:auto-parallel", "args": "\"{{goal}}\"", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Parallel multi-perspective brainstorming" },
{ "cmd": "/workflow:plan", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create detailed plan from brainstorm" },
{ "cmd": "/workflow:plan-verify", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Verify plan quality" },
{ "cmd": "/workflow:execute", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute implementation" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate comprehensive tests" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test cycle" }
]
}
```
### Multi-CLI Planning (Level 3)
```json
{
"name": "multi-cli-plan",
"description": "Multi-CLI collaborative planning with cross-verification",
"level": 3,
"steps": [
{ "cmd": "/workflow:multi-cli-plan", "args": "\"{{goal}}\"", "unit": "multi-cli-planning", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Gemini+Codex+Claude collaborative planning" },
{ "cmd": "/workflow:lite-execute", "args": "--in-memory", "unit": "multi-cli-planning", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Execute converged plan" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate tests" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test cycle" }
]
}
```
### Ultra-Lightweight (Level 1)
```json
{
"name": "lite-lite-lite",
"description": "Ultra-lightweight multi-tool execution",
"level": 1,
"steps": [
{ "cmd": "/workflow:lite-lite-lite", "args": "\"{{goal}}\"", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Direct execution with minimal overhead" }
]
}
```
---
## Command Port Reference
Each command has input/output ports for pipeline composition:
| Command | Input Port | Output Port | Atomic Unit |
|---------|------------|-------------|-------------|
| **Planning** | | | |
| lite-plan | requirement | plan | quick-implementation |
| plan | requirement | detailed-plan | full-planning-execution |
| plan-verify | detailed-plan | verified-plan | verified-planning-execution |
| multi-cli-plan | requirement | multi-cli-plan | multi-cli-planning |
| tdd-plan | requirement | tdd-tasks | tdd-planning-execution |
| replan | session, feedback | replan | replanning-execution |
| **Execution** | | | |
| lite-execute | plan, multi-cli-plan, lite-fix | code | (multiple) |
| execute | detailed-plan, verified-plan, replan, tdd-tasks | code | (multiple) |
| **Testing** | | | |
| test-fix-gen | failing-tests, session | test-tasks | test-validation |
| test-cycle-execute | test-tasks | test-passed | test-validation |
| test-gen | code, session | test-tasks | test-generation-execution |
| tdd-verify | code | tdd-verified | standalone |
| **Review** | | | |
| review-session-cycle | code, session | review-verified | code-review |
| review-module-cycle | module-pattern | review-verified | code-review |
| review-cycle-fix | review-findings | fixed-code | code-review |
| **Bug Fix** | | | |
| lite-fix | bug-report | lite-fix | bug-fix |
| debug-with-file | bug-report | understanding-document | debug-with-file |
| **With-File** | | | |
| brainstorm-with-file | exploration-topic | brainstorm-document | brainstorm-with-file |
| analyze-with-file | analysis-topic | discussion-document | analyze-with-file |
| **Issue** | | | |
| issue:discover | codebase | pending-issues | issue-workflow |
| issue:plan | pending-issues | issue-plans | issue-workflow |
| issue:queue | issue-plans, converted-plan | execution-queue | issue-workflow |
| issue:execute | execution-queue | completed-issues | issue-workflow |
| issue:convert-to-plan | plan | converted-plan | rapid-to-issue |
| issue:from-brainstorm | brainstorm-document | converted-plan | brainstorm-to-issue |
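The port table above can drive a simple pipeline check: each step's input port must be produced by an earlier step's output, or be an external input such as `requirement`. A minimal sketch, assuming a small hand-coded subset of the table (the `PORTS` map and the default externals are illustrative assumptions):

```javascript
// Hypothetical port registry covering a subset of the table above.
const PORTS = {
  'lite-plan':          { in: ['requirement'], out: 'plan' },
  'lite-execute':       { in: ['plan', 'multi-cli-plan', 'lite-fix'], out: 'code' },
  'test-fix-gen':       { in: ['failing-tests', 'session', 'code'], out: 'test-tasks' },
  'test-cycle-execute': { in: ['test-tasks'], out: 'test-passed' }
};

// Returns the names of steps whose input port is satisfied by no upstream output.
function unsatisfiedInputs(pipeline, external = ['requirement', 'session']) {
  const produced = new Set(external);
  const missing = [];
  for (const name of pipeline) {
    const spec = PORTS[name];
    if (!spec.in.some(p => produced.has(p))) missing.push(name);
    produced.add(spec.out);
  }
  return missing;
}
```

For example, `['lite-plan', 'lite-execute']` validates cleanly because `lite-plan` emits the `plan` port that `lite-execute` consumes.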
---
## Minimum Execution Units
**Definition**: Commands that must execute together as an atomic group.
| Unit Name | Commands | Purpose |
|-----------|----------|---------|
| **quick-implementation** | lite-plan → lite-execute | Lightweight plan and execution |
| **multi-cli-planning** | multi-cli-plan → lite-execute | Multi-perspective planning and execution |
| **bug-fix** | lite-fix → lite-execute | Bug diagnosis and fix |
| **full-planning-execution** | plan → execute | Detailed planning and execution |
| **verified-planning-execution** | plan → plan-verify → execute | Planning with verification |
| **replanning-execution** | replan → execute | Update plan and execute |
| **tdd-planning-execution** | tdd-plan → execute | TDD planning and execution |
| **test-validation** | test-fix-gen → test-cycle-execute | Test generation and fix cycle |
| **test-generation-execution** | test-gen → execute | Generate and execute tests |
| **code-review** | review-*-cycle → review-cycle-fix | Review and fix findings |
| **issue-workflow** | discover → plan → queue → execute | Complete issue lifecycle |
| **rapid-to-issue** | lite-plan → convert-to-plan → queue → execute | Bridge to issue workflow |
| **brainstorm-to-issue** | from-brainstorm → queue → execute | Brainstorm to issue bridge |
| **brainstorm-with-file** | (self-contained) | Multi-perspective ideation |
| **debug-with-file** | (self-contained) | Hypothesis-driven debugging |
| **analyze-with-file** | (self-contained) | Collaborative analysis |
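Because the commands in a unit must execute together as an atomic group, a generated template can be checked for unit contiguity: steps sharing a unit must be adjacent. A minimal sketch of such a check (a hypothetical validator, assuming `standalone` steps may appear anywhere):

```javascript
// Returns true when every non-standalone unit appears as one contiguous run of steps.
function unitsAreContiguous(steps) {
  const seen = new Set();
  let prev = null;
  for (const step of steps) {
    if (step.unit !== prev && step.unit !== 'standalone' && seen.has(step.unit)) {
      return false; // unit re-appeared after being interrupted by another unit
    }
    seen.add(step.unit);
    prev = step.unit;
  }
  return true;
}
```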
---
## Phase 3: Generate JSON
```javascript
async function generateTemplate(design, steps, outputPath) {
  const template = {
    name: design.name,
    description: design.description,
    level: design.level,
    steps: steps
  };
  const finalPath = outputPath || `~/.claude/skills/flow-coordinator/templates/${design.name}.json`;

  // Write template
  Write(finalPath, JSON.stringify(template, null, 2));

  // Validate
  const validation = validateTemplate(template);

  console.log(`✅ Template created: ${finalPath}`);
  console.log(`   Steps: ${template.steps.length}`);
  console.log(`   Level: ${template.level}`);
  console.log(`   Units: ${[...new Set(template.steps.map(s => s.unit))].join(', ')}`);
  return { path: finalPath, template, validation };
}
```
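`validateTemplate` is referenced above but not defined here. A minimal sketch checking the required fields of the Output Format schema (an assumption about what the validator covers, not the full implementation):

```javascript
// Hypothetical validator for the template shape shown under Output Format.
function validateTemplate(template) {
  const errors = [];
  if (!template.name) errors.push('missing name');
  if (!template.description) errors.push('missing description');
  if (!Array.isArray(template.steps) || template.steps.length === 0) {
    errors.push('steps must be a non-empty array');
  } else {
    template.steps.forEach((s, i) => {
      if (!s.cmd) errors.push(`step ${i + 1}: missing cmd`);
      if (!s.execution || !s.execution.type || !s.execution.mode) {
        errors.push(`step ${i + 1}: missing execution.type/mode`);
      }
    });
  }
  return { valid: errors.length === 0, errors };
}
```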
---
## Output Format
```json
{
"name": "template-name",
"description": "Template description",
"level": 2,
"steps": [
{
"cmd": "/workflow:command",
"args": "\"{{goal}}\"",
"unit": "unit-name",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Description of what this step does"
}
]
}
```
---
## Examples
**Create a quick bugfix template**:
```
/meta-skill:flow-create hotfix-simple
→ Purpose: Bug Fix
→ Level: 2 (Lightweight)
→ Steps: Use Suggested
→ Output: ~/.claude/skills/flow-coordinator/templates/hotfix-simple.json
```
**Create a custom multi-stage workflow**:
```
/meta-skill:flow-create complex-feature --output ~/.claude/skills/my-project/templates/
→ Purpose: Feature Development
→ Level: 3 (Standard)
→ Steps: Customize
→ Step 1: /workflow:brainstorm:auto-parallel (standalone, mainprocess)
→ Step 2: /workflow:plan (verified-planning-execution, mainprocess)
→ Step 3: /workflow:plan-verify (verified-planning-execution, mainprocess)
→ Step 4: /workflow:execute (verified-planning-execution, async)
→ Step 5: /workflow:review-session-cycle (code-review, mainprocess)
→ Step 6: /workflow:review-cycle-fix (code-review, mainprocess)
→ Done
→ Output: ~/.claude/skills/my-project/templates/complex-feature.json
```

---
name: workflow:collaborative-plan-with-file
description: Unified collaborative planning with dynamic requirement splitting, parallel sub-agent exploration/understanding/planning, and automatic merge. Each agent maintains process files for full traceability.
argument-hint: "[-y|--yes] <task description> [--max-agents=5] [--depth=normal|deep] [--merge-rule=consensus|priority]"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Bash(*), Write(*), Glob(*), Grep(*), mcp__ace-tool__search_context(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-approve splits, use default merge rule, skip confirmations.
# Collaborative Planning Command
## Quick Start
```bash
# Basic usage
/workflow:collaborative-plan-with-file "Implement real-time notification system"
# With options
/workflow:collaborative-plan-with-file "Refactor authentication module" --max-agents=4
/workflow:collaborative-plan-with-file "Add payment gateway support" --depth=deep
/workflow:collaborative-plan-with-file "Migrate to microservices" --merge-rule=priority
```
**Context Source**: ACE semantic search + Per-agent CLI exploration
**Output Directory**: `.workflow/.planning/{session-id}/`
**Default Max Agents**: 5 (actual count based on requirement complexity)
**CLI Tools**: cli-lite-planning-agent (internally calls ccw cli with gemini/codex/qwen)
**Schema**: plan-json-schema.json (sub-plans & final plan share same base schema)
## Output Artifacts
### Per Sub-Agent (Phase 2)
| Artifact | Description |
|----------|-------------|
| `planning-context.md` | Evidence paths + synthesized understanding |
| `sub-plan.json` | Sub-plan following plan-json-schema.json |
### Final Output (Phase 4)
| Artifact | Description |
|----------|-------------|
| `requirement-analysis.json` | Requirement breakdown and sub-agent assignments |
| `conflicts.json` | Detected conflicts between sub-plans |
| `plan.json` | Merged plan (plan-json-schema + merge_metadata) |
| `plan.md` | Human-readable plan summary |
**Agent**: `cli-lite-planning-agent` with `process_docs: true` for sub-agents
## Overview
Unified collaborative planning workflow that:
1. **Analyzes** complex requirements and splits into sub-requirements
2. **Spawns** parallel sub-agents, each responsible for one sub-requirement
3. **Each agent** maintains process files: planning-context.md + sub-plan.json
4. **Merges** all sub-plans into unified plan.json with conflict resolution
```
┌─────────────────────────────────────────────────────────────────────────┐
│ COLLABORATIVE PLANNING │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ Phase 1: Requirement Analysis & Splitting │
│ ├─ Analyze requirement complexity │
│ ├─ Identify 2-5 sub-requirements (focus areas) │
│ └─ Write requirement-analysis.json │
│ │
│ Phase 2: Parallel Sub-Agent Execution │
│ ┌──────────────┬──────────────┬──────────────┐ │
│ │ Agent 1 │ Agent 2 │ Agent N │ │
│ ├──────────────┼──────────────┼──────────────┤ │
│ │ planning │ planning │ planning │ → planning-context.md│
│ │ + sub-plan │ + sub-plan │ + sub-plan │ → sub-plan.json │
│ └──────────────┴──────────────┴──────────────┘ │
│ │
│ Phase 3: Cross-Verification & Conflict Detection │
│ ├─ Load all sub-plan.json files │
│ ├─ Detect conflicts (effort, approach, dependencies) │
│ └─ Write conflicts.json │
│ │
│ Phase 4: Merge & Synthesis │
│ ├─ Resolve conflicts using merge-rule │
│ ├─ Merge all sub-plans into unified plan │
│ └─ Write plan.json + plan.md │
│ │
└─────────────────────────────────────────────────────────────────────────┘
```
## Output Structure
```
.workflow/.planning/{CPLAN-slug-YYYY-MM-DD}/
├── requirement-analysis.json # Phase 1: Requirement breakdown
├── agents/ # Phase 2: Per-agent process files
│ ├── {focus-area-1}/
│ │ ├── planning-context.md # Evidence + understanding
│ │ └── sub-plan.json # Agent's plan for this focus area
│ ├── {focus-area-2}/
│ │ └── ...
│ └── {focus-area-N}/
│ └── ...
├── conflicts.json # Phase 3: Detected conflicts
├── plan.json # Phase 4: Unified merged plan
└── plan.md # Phase 4: Human-readable plan
```
## Implementation
### Session Initialization
```javascript
// Shift "now" by +8h so the ISO string reflects UTC+8 wall-clock time
// (the trailing "Z" is then nominal, not true UTC)
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const taskDescription = "$ARGUMENTS"
const taskSlug = taskDescription.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 30)
const sessionId = `CPLAN-${taskSlug}-${getUtc8ISOString().substring(0, 10)}`
const sessionFolder = `.workflow/.planning/${sessionId}`

// Parse options
const maxAgents = parseInt($ARGUMENTS.match(/--max-agents=(\d+)/)?.[1] || '5')
const depth = $ARGUMENTS.match(/--depth=(normal|deep)/)?.[1] || 'normal'
const mergeRule = $ARGUMENTS.match(/--merge-rule=(consensus|priority)/)?.[1] || 'consensus'
const autoMode = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')

Bash(`mkdir -p ${sessionFolder}/agents`)
```
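The slug transform above collapses any run of characters outside `a-z`, `0-9`, and the CJK range into a single hyphen, then truncates to 30 characters. Isolated as a pure function for illustration:

```javascript
// Same transform as in the session initialization above, isolated for testing.
function toTaskSlug(description) {
  return description.toLowerCase()
    .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
    .substring(0, 30);
}
```

Note that hyphens themselves are outside the allowed set, so an input like `Real-Time` still normalizes to `real-time` via replacement.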
### Phase 1: Requirement Analysis & Splitting
Use CLI to analyze and split requirements:
```javascript
TodoWrite({ todos: [
  { content: "Phase 1: Requirement Analysis", status: "in_progress", activeForm: "Analyzing requirements" },
  { content: "Phase 2: Parallel Agent Execution", status: "pending", activeForm: "Running agents" },
  { content: "Phase 3: Conflict Detection", status: "pending", activeForm: "Detecting conflicts" },
  { content: "Phase 4: Merge & Synthesis", status: "pending", activeForm: "Merging plans" }
]})

// Step 1.1: Use CLI to analyze requirement and propose splits
Bash({
  command: `ccw cli -p "
PURPOSE: Analyze requirement and identify distinct sub-requirements/focus areas
Success: 2-${maxAgents} clearly separated sub-requirements that can be planned independently

TASK:
• Understand the overall requirement: '${taskDescription}'
• Identify major components, features, or concerns
• Split into 2-${maxAgents} independent sub-requirements
• Each sub-requirement should be:
  - Self-contained (can be planned independently)
  - Non-overlapping (minimal dependency on other sub-requirements)
  - Roughly equal in complexity
• For each sub-requirement, provide:
  - focus_area: Short identifier (e.g., 'auth-backend', 'ui-components')
  - description: What this sub-requirement covers
  - key_concerns: Main challenges or considerations
  - suggested_cli_tool: Which CLI tool is best suited (gemini/codex/qwen)

MODE: analysis
CONTEXT: @**/*
EXPECTED: JSON output with structure:
{
  \"original_requirement\": \"...\",
  \"complexity\": \"low|medium|high\",
  \"sub_requirements\": [
    {
      \"index\": 1,
      \"focus_area\": \"...\",
      \"description\": \"...\",
      \"key_concerns\": [\"...\"],
      \"suggested_cli_tool\": \"gemini|codex|qwen\",
      \"estimated_effort\": \"low|medium|high\"
    }
  ],
  \"dependencies_between_subs\": [
    { \"from\": 1, \"to\": 2, \"reason\": \"...\" }
  ],
  \"rationale\": \"Why this split was chosen\"
}
CONSTRAINTS: Maximum ${maxAgents} sub-requirements | Ensure clear boundaries
" --tool gemini --mode analysis`,
  run_in_background: true
})

// Wait for CLI completion and parse result
// ... (hook callback will provide result)
```
**After CLI completes**:
```javascript
// Parse CLI output to extract sub-requirements
const analysisResult = parseCLIOutput(cliOutput)
const subRequirements = analysisResult.sub_requirements
// Write requirement-analysis.json
Write(`${sessionFolder}/requirement-analysis.json`, JSON.stringify({
session_id: sessionId,
original_requirement: taskDescription,
analysis_timestamp: getUtc8ISOString(),
complexity: analysisResult.complexity,
sub_requirements: subRequirements,
dependencies_between_subs: analysisResult.dependencies_between_subs,
rationale: analysisResult.rationale,
options: { maxAgents, depth, mergeRule }
}, null, 2))
// Create agent folders
subRequirements.forEach(sub => {
Bash(`mkdir -p ${sessionFolder}/agents/${sub.focus_area}`)
})
// User confirmation (unless auto mode)
if (!autoMode) {
AskUserQuestion({
questions: [{
question: `Identified ${subRequirements.length} sub-requirements:\n${subRequirements.map((s, i) => `${i+1}. ${s.focus_area}: ${s.description}`).join('\n')}\n\nConfirm parallel planning?`,
header: "Confirm Split",
multiSelect: false,
options: [
{ label: "Start planning", description: "Launch parallel sub-agents" },
{ label: "Adjust split", description: "Modify the sub-requirement division" },
{ label: "Cancel", description: "Exit planning" }
]
}]
})
}
```
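`parseCLIOutput` is referenced above but not defined in this document. A minimal sketch (a hypothetical helper, assuming the CLI prints its JSON result somewhere in stdout, possibly wrapped in a ```json fence or surrounded by log lines) might look like:

```javascript
// Hypothetical helper: extract the first JSON object from mixed CLI stdout.
// Assumes the JSON may be wrapped in a ```json fence or preceded/followed by log lines.
function parseCLIOutput(cliOutput) {
  // Prefer an explicit fenced ```json block if present
  const fenced = cliOutput.match(/```json\s*([\s\S]*?)```/)
  if (fenced) return JSON.parse(fenced[1])
  // Otherwise take the substring from the first '{' to the last '}'
  const start = cliOutput.indexOf('{')
  const end = cliOutput.lastIndexOf('}')
  if (start === -1 || end <= start) {
    throw new Error('No JSON object found in CLI output')
  }
  return JSON.parse(cliOutput.slice(start, end + 1))
}
```

Real CLI wrappers may differ in how they delimit structured output; treat this as a starting point, not the actual `ccw` contract.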
### Phase 2: Parallel Sub-Agent Execution
Launch one agent per sub-requirement, each maintaining its own process files:
```javascript
TodoWrite({ todos: [
{ content: "Phase 1: Requirement Analysis", status: "completed", activeForm: "Analyzing requirements" },
{ content: "Phase 2: Parallel Agent Execution", status: "in_progress", activeForm: "Running agents" },
...subRequirements.map((sub, i) => ({
content: ` → Agent ${i+1}: ${sub.focus_area}`,
status: "pending",
activeForm: `Planning ${sub.focus_area}`
})),
{ content: "Phase 3: Conflict Detection", status: "pending", activeForm: "Detecting conflicts" },
{ content: "Phase 4: Merge & Synthesis", status: "pending", activeForm: "Merging plans" }
]})
// Launch all sub-agents in parallel
const agentPromises = subRequirements.map((sub, index) => {
return Task({
subagent_type: "cli-lite-planning-agent",
run_in_background: false,
description: `Plan: ${sub.focus_area}`,
prompt: `
## Sub-Agent Context
You are planning ONE sub-requirement. Generate process docs + sub-plan.
**Focus Area**: ${sub.focus_area}
**Description**: ${sub.description}
**Key Concerns**: ${sub.key_concerns.join(', ')}
**CLI Tool**: ${sub.suggested_cli_tool}
**Depth**: ${depth}
## Input Context
\`\`\`json
{
"task_description": "${sub.description}",
"schema_path": "~/.claude/workflows/cli-templates/schemas/plan-json-schema.json",
"session": { "id": "${sessionId}", "folder": "${sessionFolder}" },
"process_docs": true,
"focus_area": "${sub.focus_area}",
"output_folder": "${sessionFolder}/agents/${sub.focus_area}",
"cli_config": { "tool": "${sub.suggested_cli_tool}" },
"parent_requirement": "${taskDescription}"
}
\`\`\`
## Output Requirements
Write 2 files to \`${sessionFolder}/agents/${sub.focus_area}/\`:
1. **planning-context.md** - Evidence paths + synthesized understanding
2. **sub-plan.json** - Plan with \`_metadata.source_agent: "${sub.focus_area}"\`
See cli-lite-planning-agent documentation for file formats.
`
})
})
// Wait for all agents to complete
const agentResults = await Promise.all(agentPromises)
```
### Phase 3: Cross-Verification & Conflict Detection
Load all sub-plans and detect conflicts:
```javascript
TodoWrite({ todos: [
{ content: "Phase 1: Requirement Analysis", status: "completed", activeForm: "Analyzing requirements" },
{ content: "Phase 2: Parallel Agent Execution", status: "completed", activeForm: "Running agents" },
{ content: "Phase 3: Conflict Detection", status: "in_progress", activeForm: "Detecting conflicts" },
{ content: "Phase 4: Merge & Synthesis", status: "pending", activeForm: "Merging plans" }
]})
// Load all sub-plans
const subPlans = subRequirements.map(sub => {
const planPath = `${sessionFolder}/agents/${sub.focus_area}/sub-plan.json`
const content = Read(planPath)
return {
focus_area: sub.focus_area,
index: sub.index,
plan: JSON.parse(content)
}
})
// Detect conflicts
const conflicts = {
detected_at: getUtc8ISOString(),
total_sub_plans: subPlans.length,
conflicts: []
}
// 1. Effort conflicts (same task estimated differently)
const effortConflicts = detectEffortConflicts(subPlans)
conflicts.conflicts.push(...effortConflicts)
// 2. File conflicts (multiple agents modifying same file)
const fileConflicts = detectFileConflicts(subPlans)
conflicts.conflicts.push(...fileConflicts)
// 3. Approach conflicts (different approaches to same problem)
const approachConflicts = detectApproachConflicts(subPlans)
conflicts.conflicts.push(...approachConflicts)
// 4. Dependency conflicts (circular or missing dependencies)
const dependencyConflicts = detectDependencyConflicts(subPlans)
conflicts.conflicts.push(...dependencyConflicts)
// Write conflicts.json
Write(`${sessionFolder}/conflicts.json`, JSON.stringify(conflicts, null, 2))
console.log(`
## Conflict Detection Complete
**Total Sub-Plans**: ${subPlans.length}
**Conflicts Found**: ${conflicts.conflicts.length}
${conflicts.conflicts.length > 0 ? `
### Conflicts:
${conflicts.conflicts.map((c, i) => `
${i+1}. **${c.type}** (${c.severity})
- Agents: ${c.agents_involved.join(' vs ')}
- Issue: ${c.description}
- Suggested Resolution: ${c.suggested_resolution}
`).join('\n')}
` : '✅ No conflicts detected - sub-plans are compatible'}
`)
```
**Conflict Detection Functions**:
```javascript
function detectFileConflicts(subPlans) {
const fileModifications = {}
const conflicts = []
subPlans.forEach(sp => {
sp.plan.tasks.forEach(task => {
task.modification_points?.forEach(mp => {
if (!fileModifications[mp.file]) {
fileModifications[mp.file] = []
}
fileModifications[mp.file].push({
focus_area: sp.focus_area,
task_id: task.id,
target: mp.target,
change: mp.change
})
})
})
})
Object.entries(fileModifications).forEach(([file, mods]) => {
if (mods.length > 1) {
const agents = [...new Set(mods.map(m => m.focus_area))]
if (agents.length > 1) {
conflicts.push({
type: "file_conflict",
severity: "high",
file: file,
agents_involved: agents,
modifications: mods,
description: `Multiple agents modifying ${file}`,
suggested_resolution: "Sequence modifications or consolidate"
})
}
}
})
return conflicts
}
function detectEffortConflicts(subPlans) {
// Compare effort estimates across similar tasks
// Return conflicts where estimates differ by >50%
return []
}
function detectApproachConflicts(subPlans) {
// Analyze approaches for contradictions
// Return conflicts where approaches are incompatible
return []
}
function detectDependencyConflicts(subPlans) {
// Check for circular dependencies
// Check for missing dependencies
return []
}
```
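The last three functions above are intentionally left as stubs. As one illustration of what filling them in could involve, a minimal cycle check for dependency conflicts (a sketch, assuming each task carries an `id` and an optional `depends_on` array of task IDs, as in plan-json-schema) might look like:

```javascript
// Hypothetical sketch: find one circular dependency chain across all sub-plan tasks.
// Standard DFS coloring: WHITE = unvisited, GRAY = on current path, BLACK = done.
function findDependencyCycle(subPlans) {
  const graph = {}
  subPlans.forEach(sp => {
    sp.plan.tasks.forEach(t => { graph[t.id] = t.depends_on || [] })
  })
  const GRAY = 1, BLACK = 2
  const color = {}
  const stack = []
  function visit(id) {
    color[id] = GRAY
    stack.push(id)
    for (const dep of graph[id] || []) {
      // Hitting a GRAY node means dep is on the current path: a cycle
      if (color[dep] === GRAY) return stack.slice(stack.indexOf(dep)).concat(dep)
      if (!color[dep]) {
        const cycle = visit(dep)
        if (cycle) return cycle
      }
    }
    color[id] = BLACK
    stack.pop()
    return null
  }
  for (const id of Object.keys(graph)) {
    if (!color[id]) {
      const cycle = visit(id)
      if (cycle) return cycle // e.g. ['A', 'B', 'A']
    }
  }
  return null
}
```

A real `detectDependencyConflicts` would wrap such a check and emit conflict objects in the same shape as `detectFileConflicts` (type, severity, agents_involved, description, suggested_resolution).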
### Phase 4: Merge & Synthesis
Use cli-lite-planning-agent to merge all sub-plans:
```javascript
TodoWrite({ todos: [
{ content: "Phase 1: Requirement Analysis", status: "completed", activeForm: "Analyzing requirements" },
{ content: "Phase 2: Parallel Agent Execution", status: "completed", activeForm: "Running agents" },
{ content: "Phase 3: Conflict Detection", status: "completed", activeForm: "Detecting conflicts" },
{ content: "Phase 4: Merge & Synthesis", status: "in_progress", activeForm: "Merging plans" }
]})
// Collect all planning-context documents as input for the merge
const contextDocs = subRequirements.map(sub => {
const path = `${sessionFolder}/agents/${sub.focus_area}/planning-context.md`
return {
focus_area: sub.focus_area,
content: Read(path)
}
})
// Invoke planning agent to merge
Task({
subagent_type: "cli-lite-planning-agent",
run_in_background: false,
description: "Merge sub-plans into unified plan",
prompt: `
## Mission: Merge Multiple Sub-Plans
Merge ${subPlans.length} sub-plans into a single unified plan.
## Schema Reference
Execute: cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json
The merged plan follows the SAME schema as lite-plan, with ONE additional field:
- \`merge_metadata\`: Object containing merge-specific information
## Project Context
1. Read: .workflow/project-tech.json
2. Read: .workflow/project-guidelines.json
## Original Requirement
${taskDescription}
## Sub-Plans to Merge
${subPlans.map(sp => `
### Sub-Plan: ${sp.focus_area}
\`\`\`json
${JSON.stringify(sp.plan, null, 2)}
\`\`\`
`).join('\n')}
## Planning Context Documents
${contextDocs.map(cd => `
### Context: ${cd.focus_area}
${cd.content}
`).join('\n')}
## Detected Conflicts
\`\`\`json
${JSON.stringify(conflicts, null, 2)}
\`\`\`
## Merge Rules
**Rule**: ${mergeRule}
${mergeRule === 'consensus' ? `
- Equal weight to all sub-plans
- Conflicts resolved by finding middle ground
- Combine overlapping tasks
` : `
- Priority based on sub-requirement index
- Earlier agents' decisions take precedence
- Later agents adapt to earlier decisions
`}
## Requirements
1. **Task Consolidation**:
- Combine tasks that modify same files
- Preserve unique tasks from each sub-plan
- Ensure no task duplication
- Maintain clear task boundaries
2. **Dependency Resolution**:
- Cross-reference dependencies between sub-plans
- Create global task ordering
- Handle inter-sub-plan dependencies
3. **Conflict Resolution**:
- Apply ${mergeRule} rule to resolve conflicts
- Document resolution rationale
- Ensure no contradictions in final plan
4. **Metadata Preservation**:
- Track which sub-plan each task originated from (source_agent field)
- Include merge_metadata with:
- merged_from: list of sub-plan focus areas
- conflicts_resolved: count
- merge_rule: ${mergeRule}
## Output
Write to ${sessionFolder}/plan.json following plan-json-schema.json.
Add ONE extension field for merge tracking:
\`\`\`json
{
// ... all standard plan-json-schema fields ...
"merge_metadata": {
"source_session": "${sessionId}",
"merged_from": ["focus-area-1", "focus-area-2"],
"sub_plan_count": N,
"conflicts_detected": N,
"conflicts_resolved": N,
"merge_rule": "${mergeRule}",
"merged_at": "ISO-timestamp"
}
}
\`\`\`
Each task should include \`source_agent\` field indicating which sub-plan it originated from.
## Success Criteria
- [ ] All sub-plan tasks included (or explicitly merged)
- [ ] Conflicts resolved per ${mergeRule} rule
- [ ] Dependencies form valid DAG (no cycles)
- [ ] merge_metadata present
- [ ] Schema compliance verified
- [ ] plan.json written to ${sessionFolder}/plan.json
`
})
// Generate human-readable plan.md
const plan = JSON.parse(Read(`${sessionFolder}/plan.json`))
const planMd = generatePlanMarkdown(plan, subRequirements, conflicts)
Write(`${sessionFolder}/plan.md`, planMd)
```
**Markdown Generation**:
```javascript
function generatePlanMarkdown(plan, subRequirements, conflicts) {
return `# Collaborative Planning Session
**Session ID**: ${plan._metadata?.session_id || sessionId}
**Original Requirement**: ${taskDescription}
**Created**: ${getUtc8ISOString()}
---
## Sub-Requirements Analyzed
${subRequirements.map((sub, i) => `
### ${i+1}. ${sub.focus_area}
${sub.description}
- **Key Concerns**: ${sub.key_concerns.join(', ')}
- **Estimated Effort**: ${sub.estimated_effort}
`).join('\n')}
---
## Conflict Resolution
${conflicts.conflicts.length > 0 ? `
**Conflicts Detected**: ${conflicts.conflicts.length}
**Merge Rule**: ${mergeRule}
${conflicts.conflicts.map((c, i) => `
${i+1}. **${c.type}** - ${c.description}
- Resolution: ${c.suggested_resolution}
`).join('\n')}
` : '✅ No conflicts detected'}
---
## Merged Plan
### Summary
${plan.summary}
### Approach
${plan.approach}
---
## Tasks
${plan.tasks.map((task, i) => `
### ${task.id}: ${task.title}
**Source**: ${task.source_agent || 'merged'}
**Scope**: ${task.scope}
**Action**: ${task.action}
**Complexity**: ${task.effort?.complexity || 'medium'}
${task.description}
**Modification Points**:
${task.modification_points?.map(mp => `- \`${mp.file}\` ${mp.target}: ${mp.change}`).join('\n') || 'N/A'}
**Implementation**:
${task.implementation?.map((step, idx) => `${idx+1}. ${step}`).join('\n') || 'N/A'}
**Acceptance Criteria**:
${task.acceptance?.map(ac => `- ${ac}`).join('\n') || 'N/A'}
**Dependencies**: ${task.depends_on?.join(', ') || 'None'}
---
`).join('\n')}
## Execution
\`\`\`bash
# Execute this plan
/workflow:unified-execute-with-file -p ${sessionFolder}/plan.json
# Or with auto-confirmation
/workflow:unified-execute-with-file -y -p ${sessionFolder}/plan.json
\`\`\`
---
## Agent Process Files
${subRequirements.map(sub => `
### ${sub.focus_area}
- Context: \`${sessionFolder}/agents/${sub.focus_area}/planning-context.md\`
- Sub-Plan: \`${sessionFolder}/agents/${sub.focus_area}/sub-plan.json\`
`).join('\n')}
---
**Generated by**: /workflow:collaborative-plan-with-file
**Merge Rule**: ${mergeRule}
`
}
```
### Completion
```javascript
TodoWrite({ todos: [
{ content: "Phase 1: Requirement Analysis", status: "completed", activeForm: "Analyzing requirements" },
{ content: "Phase 2: Parallel Agent Execution", status: "completed", activeForm: "Running agents" },
{ content: "Phase 3: Conflict Detection", status: "completed", activeForm: "Detecting conflicts" },
{ content: "Phase 4: Merge & Synthesis", status: "completed", activeForm: "Merging plans" }
]})
console.log(`
✅ Collaborative Planning Complete
**Session**: ${sessionId}
**Sub-Agents**: ${subRequirements.length}
**Conflicts Resolved**: ${conflicts.conflicts.length}
## Output Files
📁 ${sessionFolder}/
├── requirement-analysis.json # Requirement breakdown
├── agents/ # Per-agent process files
${subRequirements.map(sub => `│ ├── ${sub.focus_area}/
│ │ ├── planning-context.md
│ │ └── sub-plan.json`).join('\n')}
├── conflicts.json # Detected conflicts
├── plan.json # Unified plan (execution-ready)
└── plan.md # Human-readable plan
## Next Steps
Execute the plan:
\`\`\`bash
/workflow:unified-execute-with-file -p ${sessionFolder}/plan.json
\`\`\`
Review a specific agent's work:
\`\`\`bash
cat ${sessionFolder}/agents/{focus-area}/planning-context.md
\`\`\`
`)
```
## Configuration
| Flag | Default | Description |
|------|---------|-------------|
| `--max-agents` | 5 | Maximum sub-agents to spawn |
| `--depth` | normal | Exploration depth: normal or deep |
| `--merge-rule` | consensus | Conflict resolution: consensus or priority |
| `-y, --yes` | false | Auto-confirm all decisions |
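For example (hypothetical invocations combining these flags):

```bash
# Deep exploration with up to 3 agents, priority-based conflict resolution
/workflow:collaborative-plan-with-file --max-agents=3 --depth=deep --merge-rule=priority "Refactor the auth module"

# Fully automatic run with all defaults
/workflow:collaborative-plan-with-file -y "Add audit logging"
```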
## Error Handling
| Error | Resolution |
|-------|------------|
| Requirement too simple | Use single-agent lite-plan instead |
| Agent fails | Retry once, then continue with partial results |
| Merge conflicts unresolvable | Ask user for manual resolution |
| CLI timeout | Use fallback CLI tool |
| File write fails | Retry with alternative path |
## vs Other Planning Commands
| Command | Use Case |
|---------|----------|
| **collaborative-plan-with-file** | Complex multi-aspect requirements needing parallel exploration |
| lite-plan | Simple single-focus tasks |
| multi-cli-plan | Iterative cross-verification with convergence |
## Best Practices
1. **Be Specific**: Detailed requirements lead to better splits
2. **Review Process Files**: Check planning-context.md for insights
3. **Trust the Merge**: Conflict resolution follows defined rules
4. **Iterate if Needed**: Re-run with a different `--merge-rule` if the results are unsatisfactory
---
**Now execute collaborative-plan-with-file for**: $ARGUMENTS

View File

@@ -37,6 +37,23 @@ Intelligent lightweight bug fixing command with dynamic workflow adaptation base
/workflow:lite-fix -y --hotfix "Production database connection failure" # Auto + hotfix mode
```
## Output Artifacts
| Artifact | Description |
|----------|-------------|
| `diagnosis-{angle}.json` | Per-angle diagnosis results (1-4 files based on severity) |
| `diagnoses-manifest.json` | Index of all diagnosis files |
| `planning-context.md` | Evidence paths + synthesized understanding |
| `fix-plan.json` | Structured fix plan (fix-plan-json-schema.json) |
**Output Directory**: `.workflow/.lite-fix/{bug-slug}-{YYYY-MM-DD}/`
**Agent Usage**:
- Low/Medium severity → Direct Claude planning (no agent)
- High/Critical severity → `cli-lite-planning-agent` generates `fix-plan.json`
**Schema Reference**: `~/.claude/workflows/cli-templates/schemas/fix-plan-json-schema.json`
## Auto Mode Defaults
When `--yes` or `-y` flag is used:
@@ -192,11 +209,15 @@ const diagnosisTasks = selectedAngles.map((angle, index) =>
## Task Objective
Execute **${angle}** diagnosis for bug root cause analysis. Analyze codebase from this specific angle to discover root cause, affected paths, and fix hints.
## Output Location
**Session Folder**: ${sessionFolder}
**Output File**: ${sessionFolder}/diagnosis-${angle}.json
## Assigned Context
- **Diagnosis Angle**: ${angle}
- **Bug Description**: ${bug_description}
- **Diagnosis Index**: ${index + 1} of ${selectedAngles.length}
- **Output File**: ${sessionFolder}/diagnosis-${angle}.json
## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
@@ -225,8 +246,6 @@ Execute **${angle}** diagnosis for bug root cause analysis. Analyze codebase fro
## Expected Output
**File**: ${sessionFolder}/diagnosis-${angle}.json
**Schema Reference**: Schema obtained in MANDATORY FIRST STEPS step 3, follow schema exactly
**Required Fields** (all ${angle} focused):
@@ -255,9 +274,9 @@ Execute **${angle}** diagnosis for bug root cause analysis. Analyze codebase fro
- [ ] JSON output follows schema exactly
- [ ] clarification_needs includes options + recommended
## Output
Write: ${sessionFolder}/diagnosis-${angle}.json
Return: 2-3 sentence summary of ${angle} diagnosis findings
## Execution
**Write**: \`${sessionFolder}/diagnosis-${angle}.json\`
**Return**: 2-3 sentence summary of ${angle} diagnosis findings
`
)
)
@@ -493,6 +512,13 @@ Task(
prompt=`
Generate fix plan and write fix-plan.json.
## Output Location
**Session Folder**: ${sessionFolder}
**Output Files**:
- ${sessionFolder}/planning-context.md (evidence + understanding)
- ${sessionFolder}/fix-plan.json (fix plan)
## Output Schema Reference
Execute: cat ~/.claude/workflows/cli-templates/schemas/fix-plan-json-schema.json (get schema reference before generating plan)
@@ -588,8 +614,9 @@ Generate fix-plan.json with:
- For High/Critical: REQUIRED new fields (rationale, verification, risks, code_skeleton, data_flow, design_decisions)
- Each task MUST have rationale (why this fix), verification (how to verify success), and risks (potential issues)
5. Parse output and structure fix-plan
6. Write JSON: Write('${sessionFolder}/fix-plan.json', jsonContent)
7. Return brief completion summary
6. **Write**: \`${sessionFolder}/planning-context.md\` (evidence paths + understanding)
7. **Write**: \`${sessionFolder}/fix-plan.json\`
8. Return brief completion summary
## Output Format for CLI
Include these sections in your fix-plan output:
@@ -747,22 +774,24 @@ SlashCommand(command="/workflow:lite-execute --in-memory --mode bugfix")
```
.workflow/.lite-fix/{bug-slug}-{YYYY-MM-DD}/
|- diagnosis-{angle1}.json # Diagnosis angle 1
|- diagnosis-{angle2}.json # Diagnosis angle 2
|- diagnosis-{angle3}.json # Diagnosis angle 3 (if applicable)
|- diagnosis-{angle4}.json # Diagnosis angle 4 (if applicable)
|- diagnoses-manifest.json # Diagnosis index
+- fix-plan.json # Fix plan
├── diagnosis-{angle1}.json # Diagnosis angle 1
├── diagnosis-{angle2}.json # Diagnosis angle 2
├── diagnosis-{angle3}.json # Diagnosis angle 3 (if applicable)
├── diagnosis-{angle4}.json # Diagnosis angle 4 (if applicable)
├── diagnoses-manifest.json # Diagnosis index
├── planning-context.md # Evidence + understanding
└── fix-plan.json # Fix plan
```
**Example**:
```
.workflow/.lite-fix/user-avatar-upload-fails-413-2025-11-25-14-30-25/
|- diagnosis-error-handling.json
|- diagnosis-dataflow.json
|- diagnosis-validation.json
|- diagnoses-manifest.json
+- fix-plan.json
.workflow/.lite-fix/user-avatar-upload-fails-413-2025-11-25/
├── diagnosis-error-handling.json
├── diagnosis-dataflow.json
├── diagnosis-validation.json
├── diagnoses-manifest.json
├── planning-context.md
└── fix-plan.json
```
## Error Handling

View File

@@ -37,6 +37,23 @@ Intelligent lightweight planning command with dynamic workflow adaptation based
/workflow:lite-plan -y -e "Optimize database query performance" # Auto mode + force exploration
```
## Output Artifacts
| Artifact | Description |
|----------|-------------|
| `exploration-{angle}.json` | Per-angle exploration results (1-4 files based on complexity) |
| `explorations-manifest.json` | Index of all exploration files |
| `planning-context.md` | Evidence paths + synthesized understanding |
| `plan.json` | Structured implementation plan (plan-json-schema.json) |
**Output Directory**: `.workflow/.lite-plan/{task-slug}-{YYYY-MM-DD}/`
**Agent Usage**:
- Low complexity → Direct Claude planning (no agent)
- Medium/High complexity → `cli-lite-planning-agent` generates `plan.json`
**Schema Reference**: `~/.claude/workflows/cli-templates/schemas/plan-json-schema.json`
## Auto Mode Defaults
When `--yes` or `-y` flag is used:
@@ -193,11 +210,15 @@ const explorationTasks = selectedAngles.map((angle, index) =>
## Task Objective
Execute **${angle}** exploration for task planning context. Analyze codebase from this specific angle to discover relevant structure, patterns, and constraints.
## Output Location
**Session Folder**: ${sessionFolder}
**Output File**: ${sessionFolder}/exploration-${angle}.json
## Assigned Context
- **Exploration Angle**: ${angle}
- **Task Description**: ${task_description}
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
- **Output File**: ${sessionFolder}/exploration-${angle}.json
## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
@@ -225,8 +246,6 @@ Execute **${angle}** exploration for task planning context. Analyze codebase fro
## Expected Output
**File**: ${sessionFolder}/exploration-${angle}.json
**Schema Reference**: Schema obtained in MANDATORY FIRST STEPS step 3, follow schema exactly
**Required Fields** (all ${angle} focused):
@@ -252,9 +271,9 @@ Execute **${angle}** exploration for task planning context. Analyze codebase fro
- [ ] JSON output follows schema exactly
- [ ] clarification_needs includes options + recommended
## Output
Write: ${sessionFolder}/exploration-${angle}.json
Return: 2-3 sentence summary of ${angle} findings
## Execution
**Write**: \`${sessionFolder}/exploration-${angle}.json\`
**Return**: 2-3 sentence summary of ${angle} findings
`
)
)
@@ -443,6 +462,13 @@ Task(
prompt=`
Generate implementation plan and write plan.json.
## Output Location
**Session Folder**: ${sessionFolder}
**Output Files**:
- ${sessionFolder}/planning-context.md (evidence + understanding)
- ${sessionFolder}/plan.json (implementation plan)
## Output Schema Reference
Execute: cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json (get schema reference before generating plan)
@@ -492,8 +518,9 @@ Generate plan.json following the schema obtained above. Key constraints:
2. Execute CLI planning using Gemini (Qwen fallback)
3. Read ALL exploration files for comprehensive context
4. Synthesize findings and generate plan following schema
5. Write JSON: Write('${sessionFolder}/plan.json', jsonContent)
6. Return brief completion summary
5. **Write**: \`${sessionFolder}/planning-context.md\` (evidence paths + understanding)
6. **Write**: \`${sessionFolder}/plan.json\`
7. Return brief completion summary
`
)
```

View File

@@ -119,7 +119,7 @@ CONTEXT: Existing user database schema, REST API endpoints
**After Phase 1**: Initialize planning-notes.md with user intent
```javascript
// Create minimal planning notes document
// Create planning notes document with N+1 context support
const planningNotesPath = `.workflow/active/${sessionId}/planning-notes.md`
const userGoal = structuredDescription.goal
const userConstraints = structuredDescription.context || "None specified"
@@ -144,6 +144,19 @@ Write(planningNotesPath, `# Planning Notes
## Consolidated Constraints (Phase 4 Input)
1. ${userConstraints}
---
## Task Generation (Phase 4)
(To be filled by action-planning-agent)
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
### Deferred
- [ ] (For N+1)
`)
```

View File

@@ -378,16 +378,26 @@ Hard Constraints:
- Return completion status with document count and task breakdown summary
## PLANNING NOTES RECORD (REQUIRED)
After completing all documents, append a brief execution record to planning-notes.md:
After completing, update planning-notes.md:
**File**: .workflow/active/{session_id}/planning-notes.md
**Location**: Create new section after "## Consolidated Constraints"
**Format**:
\`\`\`
## Task Generation (Phase 4)
1. **Task Generation (Phase 4)**: Task count and key tasks
2. **N+1 Context**: Key decisions (with rationale) + deferred items
\`\`\`markdown
## Task Generation (Phase 4)
### [Action-Planning Agent] YYYY-MM-DD
- **Note**: [Auto-filled: brief summary of task count, key tasks, dependencies, etc.]
- **Tasks**: [count] ([IDs])
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
| [choice] | [why] | [Yes/No] |
### Deferred
- [ ] [item] - [reason]
\`\`\`
`
)
@@ -543,19 +553,13 @@ Hard Constraints:
- Return: task count, task IDs, dependency summary (internal + cross-module)
## PLANNING NOTES RECORD (REQUIRED)
After completing module task JSONs, append a brief execution record to planning-notes.md:
After completing, append to planning-notes.md:
**File**: .workflow/active/{session_id}/planning-notes.md
**Location**: Create new section after "## Consolidated Constraints" (if not exists)
**Format**:
\`\`\`markdown
### [${module.name}] YYYY-MM-DD
- **Tasks**: [count] ([IDs])
- **CROSS deps**: [placeholders used]
\`\`\`
## Task Generation (Phase 4)
### [Action-Planning Agent - ${module.name}] YYYY-MM-DD
- **Note**: [Auto-filled: brief summary of this module's task count, key tasks, etc.]
\`\`\`
**Note**: Multiple module agents will append their records. Phase 3 Integration Coordinator will add final summary.
`
)
);
@@ -638,14 +642,21 @@ Module Count: ${modules.length}
- Return: task count, per-module breakdown, resolved dependency count
## PLANNING NOTES RECORD (REQUIRED)
After completing integration, append final summary to planning-notes.md:
After integration, update planning-notes.md:
**File**: .workflow/active/{session_id}/planning-notes.md
**Location**: Under "## Task Generation (Phase 4)" section (after module agent records)
**Format**:
\`\`\`
### [Integration Coordinator] YYYY-MM-DD
- **Note**: [Auto-filled: brief summary of total task count, cross-module dependency resolutions, etc.]
\`\`\`markdown
### [Coordinator] YYYY-MM-DD
- **Total**: [count] tasks
- **Resolved**: [CROSS:: resolutions]
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
| CROSS::X → IMPL-Y | [why this resolution] | [Yes/No] |
### Deferred
- [ ] [unresolved CROSS or conflict] - [reason]
\`\`\`
`
)

View File

@@ -155,27 +155,65 @@ bash(`mkdir -p ${executionFolder}`)
## Plan Format Parsers
Support multiple plan sources:
Support multiple plan sources (all JSON plans follow plan-json-schema.json):
```javascript
function parsePlan(content, filePath) {
const ext = filePath.split('.').pop()
if (filePath.includes('IMPL_PLAN')) {
return parseImplPlan(content) // From /workflow:plan
return parseImplPlan(content) // From /workflow:plan (markdown)
} else if (filePath.includes('brainstorm')) {
return parseBrainstormPlan(content) // From /workflow:brainstorm-with-file
} else if (filePath.includes('synthesis')) {
return parseSynthesisPlan(content) // From /workflow:brainstorm-with-file synthesis.json
} else if (filePath.includes('conclusions')) {
return parseConclusionsPlan(content) // From /workflow:analyze-with-file conclusions.json
} else if (filePath.endsWith('.json') && content.includes('tasks')) {
return parseTaskJson(content) // Direct task JSON
} else if (filePath.endsWith('.json') && content.includes('"tasks"')) {
return parsePlanJson(content) // Standard plan-json-schema (lite-plan, collaborative-plan, sub-plans)
}
throw new Error(`Unsupported plan format: ${filePath}`)
}
// Standard plan-json-schema parser
// Handles: lite-plan, collaborative-plan, sub-plans (all follow same schema)
function parsePlanJson(content) {
const plan = JSON.parse(content)
return {
type: plan.merge_metadata ? 'collaborative-plan' : 'lite-plan',
title: plan.summary?.split('.')[0] || 'Untitled Plan',
slug: plan._metadata?.session_id || generateSlug(plan.summary),
summary: plan.summary,
approach: plan.approach,
tasks: plan.tasks.map(task => ({
id: task.id,
type: inferTaskTypeFromAction(task.action),
title: task.title,
description: task.description,
dependencies: task.depends_on || [],
agent_type: selectAgentFromTask(task),
prompt: buildPromptFromTask(task),
files_to_modify: task.modification_points?.map(mp => mp.file) || [],
expected_output: task.acceptance || [],
priority: task.effort?.complexity === 'high' ? 'high' : 'normal',
estimated_duration: task.effort?.estimated_hours ? `${task.effort.estimated_hours}h` : null,
verification: task.verification,
risks: task.risks,
source_agent: task.source_agent // From collaborative-plan sub-agents
})),
flow_control: plan.flow_control,
data_flow: plan.data_flow,
design_decisions: plan.design_decisions,
estimatedDuration: plan.estimated_time,
recommended_execution: plan.recommended_execution,
complexity: plan.complexity,
merge_metadata: plan.merge_metadata, // Present if from collaborative-plan
_metadata: plan._metadata
}
}
// IMPL_PLAN.md parser
function parseImplPlan(content) {
// Extract:
@@ -216,6 +254,65 @@ function parseSynthesisPlan(content) {
recommendations: synthesis.recommendations
}
}
// Helper: Infer task type from action field
function inferTaskTypeFromAction(action) {
const actionMap = {
'Create': 'code',
'Update': 'code',
'Implement': 'code',
'Refactor': 'code',
'Add': 'code',
'Delete': 'code',
'Configure': 'config',
'Test': 'test',
'Fix': 'debug'
}
return actionMap[action] || 'code'
}
// Helper: Select agent based on task properties
function selectAgentFromTask(task) {
if (task.verification?.unit_tests?.length > 0) {
return 'tdd-developer'
} else if (task.action === 'Test') {
return 'test-fix-agent'
} else if (task.action === 'Fix') {
return 'debug-explore-agent'
} else {
return 'code-developer'
}
}
// Helper: Build prompt from task details
function buildPromptFromTask(task) {
let prompt = `## Task: ${task.title}\n\n${task.description}\n\n`
if (task.modification_points?.length > 0) {
prompt += `### Modification Points\n`
task.modification_points.forEach(mp => {
prompt += `- **${mp.file}**: ${mp.target}: ${mp.change}\n`
})
prompt += '\n'
}
if (task.implementation?.length > 0) {
prompt += `### Implementation Steps\n`
task.implementation.forEach((step, i) => {
prompt += `${i + 1}. ${step}\n`
})
prompt += '\n'
}
if (task.acceptance?.length > 0) {
prompt += `### Acceptance Criteria\n`
task.acceptance.forEach(ac => {
prompt += `- ${ac}\n`
})
}
return prompt
}
```
---
@@ -776,21 +873,6 @@ function calculateParallel(tasks) {
| Agent unavailable | Fallback to universal-executor |
| Execution interrupted | Support resume with `/workflow:unified-execute-with-file --continue` |
## Usage Recommendations
Use `/workflow:unified-execute-with-file` when:
- Executing any planning document (IMPL_PLAN.md, brainstorm conclusions, analysis recommendations)
- Multiple tasks with dependencies need orchestration
- Want minimal progress tracking without clutter
- Need to handle failures gracefully and resume
- Want to parallelize where possible but ensure correctness
Use for consuming output from:
- `/workflow:plan` → IMPL_PLAN.md
- `/workflow:brainstorm-with-file` → synthesis.json → execution
- `/workflow:analyze-with-file` → conclusions.json → execution
- `/workflow:debug-with-file` → recommendations → execution
- `/workflow:lite-plan` → task JSONs → execution
## Session Resume

View File

@@ -0,0 +1,394 @@
---
name: flow-coordinator
description: Template-driven workflow coordinator with minimal state tracking. Executes command chains from workflow templates with slash-command execution (mainprocess/async). Triggers on "flow-coordinator", "workflow template", "orchestrate".
allowed-tools: Task, AskUserQuestion, Read, Write, Bash, Glob, Grep
---
# Flow Coordinator
Lightweight workflow coordinator that executes command chains from predefined templates, supporting slash-command execution with mainprocess (blocking) and async (background) modes.
## Architecture
```
User Task → Select Template → status.json Init → Execute Steps → Complete
    ↑                                                    │
    └──────────── Resume (from status.json) ─────────────┘
Step Execution:
execution mode?
├─ mainprocess → SlashCommand (blocking, main process)
└─ async → ccw cli --tool claude --mode write (background)
```
## Core Concepts
**Template-Driven**: Workflows defined as JSON templates in `templates/`, decoupled from coordinator logic.
**Execution Type**: `slash-command` only
- ALL workflow commands (`/workflow:*`) use `slash-command` type
- Two execution modes:
- `mainprocess`: SlashCommand (blocking, main process)
- `async`: CLI background (ccw cli with claude tool)
**Dynamic Discovery**: Templates discovered at runtime via Glob, not hardcoded.
---
## Execution Flow
```javascript
async function execute(task) {
// 1. Discover and select template
const templates = await discoverTemplates();
const template = await selectTemplate(templates);
// 2. Init status
const sessionId = `fc-${timestamp()}`;
const statusPath = `.workflow/.flow-coordinator/${sessionId}/status.json`;
const status = initStatus(template, task);
write(statusPath, JSON.stringify(status, null, 2));
// 3. Execute steps based on execution config
await executeSteps(status, statusPath);
}
async function executeSteps(status, statusPath) {
for (let i = status.current; i < status.steps.length; i++) {
const step = status.steps[i];
status.current = i;
// Execute based on step mode (all steps use slash-command type)
const execConfig = step.execution || { type: 'slash-command', mode: 'mainprocess' };
if (execConfig.mode === 'async') {
// Async execution - stop and wait for hook callback
await executeSlashCommandAsync(step, status, statusPath);
break;
} else {
// Mainprocess execution - continue immediately
await executeSlashCommandSync(step, status);
step.status = 'done';
status.current = i + 1; // advance past the completed step so the completion check below holds
write(statusPath, JSON.stringify(status, null, 2));
}
}
// All steps complete
if (status.current >= status.steps.length) {
status.complete = true;
write(statusPath, JSON.stringify(status, null, 2));
}
}
```
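The `timestamp()` helper used for session IDs above is not defined in this skill. A minimal sketch (an assumption, not the actual implementation) matching the documented `fc-YYYYMMDD-HHMMSS` format:

```javascript
// Hypothetical sketch - produces YYYYMMDD-HHMMSS in local time to match
// the documented session id format (fc-YYYYMMDD-HHMMSS).
function timestamp() {
  const d = new Date();
  const pad = n => String(n).padStart(2, '0');
  return `${d.getFullYear()}${pad(d.getMonth() + 1)}${pad(d.getDate())}` +
    `-${pad(d.getHours())}${pad(d.getMinutes())}${pad(d.getSeconds())}`;
}
```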
---
## Template Discovery
**Dynamic query** - never hardcode template list:
```javascript
async function discoverTemplates() {
// Discover all JSON templates
const files = Glob('*.json', { path: 'templates/' });
// Parse each template
const templates = [];
for (const file of files) {
const content = JSON.parse(Read(file));
templates.push({
name: content.name,
description: content.description,
steps: content.steps.map(s => s.cmd).join(' → '),
file: file
});
}
return templates;
}
```
---
## Template Selection
User chooses from discovered templates:
```javascript
async function selectTemplate(templates) {
// Build options from discovered templates
const options = templates.slice(0, 4).map(t => ({
label: t.name,
description: t.steps
}));
const response = await AskUserQuestion({
questions: [{
question: 'Select workflow template:',
header: 'Template',
options: options,
multiSelect: false
}]
});
// Handle "Other" - show remaining templates or custom input
if (response.template === 'Other') {
return await selectFromRemainingTemplates(templates.slice(4));
}
return templates.find(t => t.name === response.template);
}
```
---
## Status Schema
**Creation**: Copy template JSON → Update `id`, `template`, `goal`, set all steps `status: "pending"`
**Location**: `.workflow/.flow-coordinator/{session-id}/status.json`
**Core Fields**:
- `id`: Session ID (fc-YYYYMMDD-HHMMSS)
- `template`: Template name
- `goal`: User task description
- `current`: Current step index
- `steps[]`: Step array from template (with runtime `status`, `session`, `taskId`)
- `complete`: All steps done?
**Step Status**: `pending` → `running` → `done` | `failed` | `skipped`
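The `initStatus` helper referenced in the execution flow is not shown in this skill. A minimal sketch that produces the fields described above (field names are inferred from this schema, not taken from the actual implementation):

```javascript
// Hypothetical sketch of initStatus - field names follow the Status Schema;
// the real coordinator may differ.
function initStatus(template, task, sessionId) {
  return {
    id: sessionId,                  // fc-YYYYMMDD-HHMMSS
    template: template.name,
    goal: task,
    current: 0,
    steps: template.steps.map(step => ({
      ...step,
      status: 'pending',            // runtime fields added to each template step
      session: null,
      taskId: null
    })),
    complete: false
  };
}
```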
---
## Extended Template Schema
**Templates stored in**: `templates/*.json` (discovered at runtime via Glob)
**TemplateStep Fields**:
- `cmd`: Full command path (e.g., `/workflow:lite-plan`, `/workflow:execute`)
- `args?`: Arguments with `{{goal}}` and `{{prev}}` placeholders
- `unit?`: Minimum execution unit name (groups related commands)
- `optional?`: Can be skipped by user
- `execution`: Type and mode configuration
- `type`: Always `'slash-command'` (for all workflow commands)
- `mode`: `'mainprocess'` (blocking) or `'async'` (background)
- `contextHint?`: Natural language guidance for context assembly
**Template Example**:
```json
{
"name": "rapid",
"steps": [
{
"cmd": "/workflow:lite-plan",
"args": "\"{{goal}}\"",
"unit": "quick-implementation",
"execution": { "type": "slash-command", "mode": "mainprocess" },
"contextHint": "Create lightweight implementation plan"
},
{
"cmd": "/workflow:lite-execute",
"args": "--in-memory",
"unit": "quick-implementation",
"execution": { "type": "slash-command", "mode": "async" },
"contextHint": "Execute plan from previous step"
}
]
}
```
---
## Execution Implementation
### Mainprocess Mode (Blocking)
```javascript
async function executeSlashCommandSync(step, status) {
// Build command: /workflow:cmd -y args
const cmd = buildCommand(step, status);
const result = await SlashCommand({ command: cmd });
step.session = result.session_id;
step.status = 'done';
return result;
}
```
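`buildCommand` is referenced above but never defined. A plausible sketch, assuming it follows the same placeholder conventions as `buildCommandPrompt` without the context section:

```javascript
// Hypothetical sketch - assumes the {{goal}}/{{prev}} placeholder conventions
// used by buildCommandPrompt; the real helper may differ.
function buildCommand(step, status) {
  let cmd = `${step.cmd} -y`;
  if (step.args) {
    const lastSession = status.steps
      .filter(s => s.status === 'done' && s.session)
      .map(s => s.session)
      .pop() || '';
    cmd += ' ' + step.args
      .replaceAll('{{goal}}', status.goal)
      .replaceAll('{{prev}}', lastSession);
  }
  return cmd;
}
```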
### Async Mode (Background)
```javascript
async function executeSlashCommandAsync(step, status, statusPath) {
// Build prompt: /workflow:cmd -y args + context
const prompt = buildCommandPrompt(step, status);
step.status = 'running';
write(statusPath, JSON.stringify(status, null, 2));
// Execute via ccw cli in background
const taskId = Bash(
`ccw cli -p "${escapePrompt(prompt)}" --tool claude --mode write`,
{ run_in_background: true }
).task_id;
step.taskId = taskId;
write(statusPath, JSON.stringify(status, null, 2));
console.log(`Executing: ${step.cmd} (async)`);
console.log(`Resume: /flow-coordinator --resume ${status.id}`);
}
```
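`escapePrompt` is assumed to escape shell metacharacters before the prompt is interpolated into the double-quoted `ccw cli -p "..."` string. One possible sketch:

```javascript
// Hypothetical sketch - escapes the characters that are special inside a
// double-quoted shell string; the real implementation may differ.
function escapePrompt(prompt) {
  return prompt.replace(/([\\"`$])/g, '\\$1');
}
```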
---
## Prompt Building
Prompts are built in the format `/workflow:cmd -y args`, followed by assembled context:
```javascript
function buildCommandPrompt(step, status) {
// step.cmd already contains full path: /workflow:lite-plan, /workflow:execute, etc.
let prompt = `${step.cmd} -y`;
// Add arguments (with placeholder replacement)
if (step.args) {
const args = step.args
.replaceAll('{{goal}}', status.goal)
.replaceAll('{{prev}}', getPreviousSessionId(status))
prompt += ` ${args}`;
}
// Add context based on contextHint
if (step.contextHint) {
const context = buildContextFromHint(step.contextHint, status);
prompt += `\n\nContext:\n${context}`;
} else {
// Default context: previous session IDs
const previousContext = collectPreviousResults(status);
if (previousContext) {
prompt += `\n\nPrevious results:\n${previousContext}`;
}
}
return prompt;
}
function buildContextFromHint(hint, status) {
// Parse contextHint instruction and build context accordingly
// Examples:
// "Summarize IMPL_PLAN.md" → read and summarize plan
// "List test coverage gaps" → analyze previous test results
// "Pass session ID" → just return session reference
return parseAndBuildContext(hint, status);
}
```
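The helpers `getPreviousSessionId` and `collectPreviousResults` are referenced above without definitions. Minimal sketches, under the assumption that each completed step records its workflow session ID in `step.session` (per the Status Schema):

```javascript
// Hypothetical sketches - assume completed steps carry their workflow
// session id in step.session; the real implementations may differ.
function getPreviousSessionId(status) {
  const done = status.steps.filter(s => s.status === 'done' && s.session);
  return done.length ? done[done.length - 1].session : '';
}

function collectPreviousResults(status) {
  return status.steps
    .filter(s => s.status === 'done' && s.session)
    .map(s => `- ${s.cmd}: ${s.session}`)
    .join('\n');
}
```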
### Example Prompt Output
```
/workflow:lite-plan -y "Implement user registration"
Context:
Task: Implement user registration
Previous results:
- None (first step)
```
```
/workflow:execute -y --in-memory
Context:
Task: Implement user registration
Previous results:
- lite-plan: WFS-plan-20250130 (planning-context.md)
```
---
## User Interaction
### Step 1: Select Template
```
Select workflow template:
○ rapid lite-plan → lite-execute → test-cycle-execute
○ coupled plan → plan-verify → execute → review → test
○ bugfix lite-fix → lite-execute → test-cycle-execute
○ tdd tdd-plan → execute → tdd-verify
○ Other (more templates or custom)
```
### Step 2: Review Execution Plan
```
Template: coupled
Steps:
1. /workflow:plan (slash-command mainprocess)
2. /workflow:plan-verify (slash-command mainprocess)
3. /workflow:execute (slash-command async)
4. /workflow:review-session-cycle (slash-command mainprocess)
5. /workflow:review-cycle-fix (slash-command mainprocess)
6. /workflow:test-fix-gen (slash-command mainprocess)
7. /workflow:test-cycle-execute (slash-command async)
Proceed? [Confirm / Cancel]
```
---
## Resume Capability
```javascript
async function resume(sessionId) {
const statusPath = `.workflow/.flow-coordinator/${sessionId}/status.json`;
const status = JSON.parse(Read(statusPath));
// Find first incomplete step
status.current = status.steps.findIndex(s => s.status !== 'done' && s.status !== 'skipped');
if (status.current === -1) {
console.log('All steps complete');
return;
}
// Continue executing steps
await executeSteps(status, statusPath);
}
```
---
## Available Templates
Templates discovered from `templates/*.json`:
| Template | Use Case | Steps |
|----------|----------|-------|
| rapid | Simple feature | /workflow:lite-plan → /workflow:lite-execute → /workflow:test-fix-gen → /workflow:test-cycle-execute |
| coupled | Complex feature | /workflow:plan → /workflow:plan-verify → /workflow:execute → /workflow:review-session-cycle → /workflow:review-cycle-fix → /workflow:test-fix-gen → /workflow:test-cycle-execute |
| bugfix | Bug fix | /workflow:lite-fix → /workflow:lite-execute → /workflow:test-fix-gen → /workflow:test-cycle-execute |
| tdd | Test-driven | /workflow:tdd-plan → /workflow:execute → /workflow:tdd-verify |
| test-fix | Fix failing tests | /workflow:test-fix-gen → /workflow:test-cycle-execute |
| brainstorm | Exploration | /workflow:brainstorm-with-file |
| debug | Debug with docs | /workflow:debug-with-file |
| analyze | Collaborative analysis | /workflow:analyze-with-file |
| issue | Issue workflow | /workflow:issue:discover → /workflow:issue:plan → /workflow:issue:queue → /workflow:issue:execute |
---
## Design Principles
1. **Minimal fields**: Only essential tracking data
2. **Flat structure**: No nested objects beyond steps array
3. **Step-level execution**: Each step defines how it's executed
4. **Resumable**: Any step can be resumed from status
5. **Human readable**: Clear JSON format
---
## Reference Documents
| Document | Purpose |
|----------|---------|
| templates/*.json | Workflow templates (dynamic discovery) |

View File

@@ -0,0 +1,16 @@
{
"name": "analyze",
"description": "Collaborative analysis with multi-round discussion - deep exploration and understanding",
"level": 3,
"steps": [
{
"cmd": "/workflow:analyze-with-file",
"args": "\"{{goal}}\"",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Multi-round collaborative analysis with iterative understanding. Generate discussion.md with comprehensive analysis and conclusions"
}
]
}

View File

@@ -0,0 +1,36 @@
{
"name": "brainstorm-to-issue",
"description": "Bridge brainstorm session to issue workflow - convert exploration insights to executable issues",
"level": 4,
"steps": [
{
"cmd": "/workflow:issue:from-brainstorm",
"args": "--auto",
"unit": "brainstorm-to-issue",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Convert brainstorm session findings into issue plans and solutions"
},
{
"cmd": "/workflow:issue:queue",
"unit": "brainstorm-to-issue",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Build execution queue from converted brainstorm issues"
},
{
"cmd": "/workflow:issue:execute",
"args": "--queue auto",
"unit": "brainstorm-to-issue",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute issues from queue with state tracking"
}
]
}

View File

@@ -0,0 +1,16 @@
{
"name": "brainstorm",
"description": "Multi-perspective ideation with documentation - explore possibilities with multiple analytical viewpoints",
"level": 4,
"steps": [
{
"cmd": "/workflow:brainstorm-with-file",
"args": "\"{{goal}}\"",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Multi-perspective ideation with interactive diverge-converge cycles. Generate brainstorm.md with synthesis of ideas and recommendations"
}
]
}

View File

@@ -0,0 +1,16 @@
{
"name": "bugfix-hotfix",
"description": "Urgent production fix - immediate diagnosis and fix with minimal overhead",
"level": 1,
"steps": [
{
"cmd": "/workflow:lite-fix",
"args": "--hotfix \"{{goal}}\"",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Urgent hotfix mode: quick diagnosis and immediate fix for critical production issue"
}
]
}

View File

@@ -0,0 +1,47 @@
{
"name": "bugfix",
"description": "Standard bug fix workflow - lightweight diagnosis and execution with testing",
"level": 2,
"steps": [
{
"cmd": "/workflow:lite-fix",
"args": "\"{{goal}}\"",
"unit": "bug-fix",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Analyze bug report, trace execution flow, identify root cause with fix strategy"
},
{
"cmd": "/workflow:lite-execute",
"args": "--in-memory",
"unit": "bug-fix",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Implement fix based on diagnosis. Execute against in-memory state from lite-fix analysis."
},
{
"cmd": "/workflow:test-fix-gen",
"unit": "test-validation",
"optional": true,
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate test tasks to verify bug fix and prevent regression"
},
{
"cmd": "/workflow:test-cycle-execute",
"unit": "test-validation",
"optional": true,
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute test-fix cycle until all tests pass"
}
]
}

View File

@@ -0,0 +1,71 @@
{
"name": "coupled",
"description": "Full workflow for complex features - detailed planning with verification, execution, review, and testing",
"level": 3,
"steps": [
{
"cmd": "/workflow:plan",
"args": "\"{{goal}}\"",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Create detailed implementation plan with architecture design, file structure, dependencies, and milestones"
},
{
"cmd": "/workflow:plan-verify",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Verify IMPL_PLAN.md against requirements, check for missing details, conflicts, and quality gates"
},
{
"cmd": "/workflow:execute",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute implementation based on verified plan. Resume from planning session with all context preserved."
},
{
"cmd": "/workflow:review-session-cycle",
"unit": "code-review",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Perform multi-dimensional code review across correctness, security, performance, maintainability. Reference execution session for full code context."
},
{
"cmd": "/workflow:review-cycle-fix",
"unit": "code-review",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Fix issues identified in review findings with prioritization by severity levels"
},
{
"cmd": "/workflow:test-fix-gen",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate comprehensive test tasks for the implementation with coverage analysis"
},
{
"cmd": "/workflow:test-cycle-execute",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute iterative test-fix cycle until pass rate >= 95%"
}
]
}

View File

@@ -0,0 +1,16 @@
{
"name": "debug",
"description": "Hypothesis-driven debugging with documentation - systematic troubleshooting and logging",
"level": 3,
"steps": [
{
"cmd": "/workflow:debug-with-file",
"args": "\"{{goal}}\"",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Systematic debugging with hypothesis formation and verification. Generate understanding.md with root cause analysis and fix recommendations"
}
]
}

View File

@@ -0,0 +1,27 @@
{
"name": "docs",
"description": "Documentation generation workflow",
"level": 2,
"steps": [
{
"cmd": "/workflow:lite-plan",
"args": "\"{{goal}}\"",
"unit": "quick-documentation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Plan documentation structure and content organization"
},
{
"cmd": "/workflow:lite-execute",
"args": "--in-memory",
"unit": "quick-documentation",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute documentation generation from plan"
}
]
}

View File

@@ -0,0 +1,61 @@
{
"name": "full",
"description": "Comprehensive workflow - brainstorm exploration, planning verification, execution, and testing",
"level": 4,
"steps": [
{
"cmd": "/workflow:brainstorm:auto-parallel",
"args": "\"{{goal}}\"",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Multi-perspective exploration of requirements and possible approaches"
},
{
"cmd": "/workflow:plan",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Create detailed implementation plan based on brainstorm insights"
},
{
"cmd": "/workflow:plan-verify",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Verify plan quality and completeness"
},
{
"cmd": "/workflow:execute",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute implementation from verified plan"
},
{
"cmd": "/workflow:test-fix-gen",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate comprehensive test tasks"
},
{
"cmd": "/workflow:test-cycle-execute",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute test-fix cycle until pass rate >= 95%"
}
]
}

View File

@@ -0,0 +1,43 @@
{
"name": "issue",
"description": "Issue workflow - discover issues, create plans, queue execution, and resolve",
"level": "issue",
"steps": [
{
"cmd": "/workflow:issue:discover",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Discover pending issues from codebase for potential fixes"
},
{
"cmd": "/workflow:issue:plan",
"args": "--all-pending",
"unit": "issue-workflow",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Create execution plans for all discovered pending issues"
},
{
"cmd": "/workflow:issue:queue",
"unit": "issue-workflow",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Build execution queue with issue prioritization and dependencies"
},
{
"cmd": "/workflow:issue:execute",
"unit": "issue-workflow",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute issues from queue with state tracking and completion reporting"
}
]
}

View File

@@ -0,0 +1,16 @@
{
"name": "lite-lite-lite",
"description": "Ultra-lightweight for immediate simple tasks - minimal overhead, direct execution",
"level": 1,
"steps": [
{
"cmd": "/workflow:lite-lite-lite",
"args": "\"{{goal}}\"",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Direct task execution with minimal analysis and zero documentation overhead"
}
]
}

View File

@@ -0,0 +1,47 @@
{
"name": "multi-cli-plan",
"description": "Multi-perspective planning with cross-tool verification and execution",
"level": 3,
"steps": [
{
"cmd": "/workflow:multi-cli-plan",
"args": "\"{{goal}}\"",
"unit": "multi-cli-planning",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Multi-perspective analysis comparing different implementation approaches with trade-off analysis"
},
{
"cmd": "/workflow:lite-execute",
"args": "--in-memory",
"unit": "multi-cli-planning",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute best approach selected from multi-perspective analysis"
},
{
"cmd": "/workflow:test-fix-gen",
"unit": "test-validation",
"optional": true,
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate test tasks for the implementation"
},
{
"cmd": "/workflow:test-cycle-execute",
"unit": "test-validation",
"optional": true,
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute test-fix cycle until pass rate >= 95%"
}
]
}

View File

@@ -0,0 +1,46 @@
{
"name": "rapid-to-issue",
"description": "Bridge lite workflow to issue workflow - convert simple plan to structured issue execution",
"level": 2.5,
"steps": [
{
"cmd": "/workflow:lite-plan",
"args": "\"{{goal}}\"",
"unit": "rapid-to-issue",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Create lightweight plan for the task"
},
{
"cmd": "/workflow:issue:convert-to-plan",
"args": "--latest-lite-plan -y",
"unit": "rapid-to-issue",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Convert lite plan to structured issue plan"
},
{
"cmd": "/workflow:issue:queue",
"unit": "rapid-to-issue",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Build execution queue from converted plan"
},
{
"cmd": "/workflow:issue:execute",
"args": "--queue auto",
"unit": "rapid-to-issue",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute issues from queue with state tracking"
}
]
}

View File

@@ -0,0 +1,47 @@
{
"name": "rapid",
"description": "Quick implementation - lightweight plan and immediate execution for simple features",
"level": 2,
"steps": [
{
"cmd": "/workflow:lite-plan",
"args": "\"{{goal}}\"",
"unit": "quick-implementation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Analyze requirements and create a lightweight implementation plan with key decisions and file structure"
},
{
"cmd": "/workflow:lite-execute",
"args": "--in-memory",
"unit": "quick-implementation",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Use the plan from previous step to implement code. Execute against in-memory state."
},
{
"cmd": "/workflow:test-fix-gen",
"unit": "test-validation",
"optional": true,
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate test tasks from the implementation session"
},
{
"cmd": "/workflow:test-cycle-execute",
"unit": "test-validation",
"optional": true,
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute test-fix cycle until all tests pass"
}
]
}

View File

@@ -0,0 +1,43 @@
{
"name": "review",
"description": "Code review workflow - multi-dimensional review, fix issues, and test validation",
"level": 3,
"steps": [
{
"cmd": "/workflow:review-session-cycle",
"unit": "code-review",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Perform comprehensive multi-dimensional code review across correctness, security, performance, maintainability dimensions"
},
{
"cmd": "/workflow:review-cycle-fix",
"unit": "code-review",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Fix all review findings prioritized by severity level (critical → high → medium → low)"
},
{
"cmd": "/workflow:test-fix-gen",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate test tasks for fixed code with coverage analysis"
},
{
"cmd": "/workflow:test-cycle-execute",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute iterative test-fix cycle until pass rate >= 95%"
}
]
}

View File

@@ -0,0 +1,34 @@
{
"name": "tdd",
"description": "Test-driven development - write tests first, implement to pass tests, verify Red-Green-Refactor cycles",
"level": 3,
"steps": [
{
"cmd": "/workflow:tdd-plan",
"args": "\"{{goal}}\"",
"unit": "tdd-planning-execution",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Create TDD task plan with Red-Green-Refactor cycles, test specifications, and implementation strategy"
},
{
"cmd": "/workflow:execute",
"unit": "tdd-planning-execution",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute TDD tasks following Red-Green-Refactor workflow with test-first development"
},
{
"cmd": "/workflow:tdd-verify",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Verify TDD cycle compliance, test coverage, and code quality against Red-Green-Refactor principles"
}
]
}

View File

@@ -0,0 +1,26 @@
{
"name": "test-fix",
"description": "Fix failing tests - generate test tasks and execute iterative test-fix cycle",
"level": 2,
"steps": [
{
"cmd": "/workflow:test-fix-gen",
"args": "\"{{goal}}\"",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Analyze failing tests, generate targeted test tasks with root cause and fix strategy"
},
{
"cmd": "/workflow:test-cycle-execute",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute iterative test-fix cycle with pass rate tracking until >= 95% pass rate achieved"
}
]
}

View File

@@ -1,205 +0,0 @@
# Unified-Execute-With-File: Claude vs Codex Versions
## Overview
Two complementary implementations of the universal execution engine:
| Aspect | Claude CLI Command | Codex Prompt |
|--------|-------------------|--------------|
| **Location** | `.claude/commands/workflow/` | `.codex/prompts/` |
| **Format** | YAML frontmatter + Markdown | Simple Markdown + Variables |
| **Execution** | `/workflow:unified-execute-with-file` | Direct Codex execution |
| **Lines** | 807 (optimized) | 722 (adapted) |
| **Parameters** | CLI flags (`-y`, `-p`, `-m`) | Substitution variables (`$PLAN_PATH`, etc.) |
---
## Format Differences
### Claude Version (CLI Command)
**Header (YAML)**:
```yaml
---
name: unified-execute-with-file
description: Universal execution engine...
argument-hint: "[-y|--yes] [-p|--plan <path>] [-m|--mode sequential|parallel]"
allowed-tools: TodoWrite(*), Task(*), ...
---
```
**Parameters**: CLI-style flags with short forms
```bash
/workflow:unified-execute-with-file -y -p PLAN_PATH -m parallel
```
### Codex Version (Prompt)
**Header (Simple)**:
```yaml
---
description: Universal execution engine...
argument-hint: "PLAN_PATH=\"<path>\" [EXECUTION_MODE=\"sequential|parallel\"]"
---
```
**Parameters**: Variable substitution with named arguments
```
PLAN_PATH=".workflow/IMPL_PLAN.md"
EXECUTION_MODE="parallel"
AUTO_CONFIRM="yes"
```
---
## Functional Equivalence
### Core Features (Identical)
Both versions support:
- ✅ Format-agnostic plan parsing (IMPL_PLAN.md, synthesis.json, conclusions.json)
- ✅ Multi-agent orchestration (code-developer, test-fix-agent, doc-generator, etc.)
- ✅ Automatic dependency resolution with topological sort
- ✅ Parallel execution with wave-based grouping (max 3 tasks/wave)
- ✅ Unified event logging (execution-events.md as SINGLE SOURCE OF TRUTH)
- ✅ Knowledge chain: agents read all previous executions
- ✅ Incremental execution with resume capability
- ✅ Error handling: retry/skip/abort logic
- ✅ Session management and folder organization
### Session Structure (Identical)
Both create:
```
.workflow/.execution/{executionId}/
├── execution.md # Execution plan and status
└── execution-events.md # Unified execution log (SINGLE SOURCE OF TRUTH)
```
---
## Key Adaptations
### Claude CLI Version
**Optimizations**:
- Direct access to Claude Code tools (TodoWrite, Task, AskUserQuestion)
- CLI tool integration (`ccw cli`)
- Background agent execution with run_in_background flag
- Direct file system operations via Bash
**Structure**:
- Comprehensive Implementation Details section
- Explicit allowed-tools configuration
- Integration with workflow command system
### Codex Version
**Adaptations**:
- Simplified execution context (no direct tool access)
- Variable substitution for parameter passing
- Streamlined phase explanations
- Focused on core logic and flow
- Self-contained event logging
**Benefits**:
- Works with Codex's execution model
- Simpler parameter interface
- 85 fewer lines while maintaining all core functionality
---
## Parameter Mapping
| Concept | Claude | Codex |
|---------|--------|-------|
| Plan path | `-p path/to/plan.md` | `PLAN_PATH="path/to/plan.md"` |
| Execution mode | `-m sequential\|parallel` | `EXECUTION_MODE="sequential\|parallel"` |
| Auto-confirm | `-y, --yes` | `AUTO_CONFIRM="yes"` |
| Context focus | `"execution context"` | `EXECUTION_CONTEXT="focus area"` |
---
## Recommended Usage
### Use Claude Version When:
- Using Claude Code CLI environment
- Need direct integration with workflow system
- Want full tool access (TodoWrite, Task, AskUserQuestion)
- Prefer CLI flag syntax
- Building multi-command workflows
### Use Codex Version When:
- Executing within Codex directly
- Need simpler execution model
- Prefer variable substitution
- Want standalone execution
- Integrating with Codex command chains
---
## Event Logging (Unified)
Both versions produce identical execution-events.md format:
```markdown
## Task {id} - {STATUS} {emoji}
**Timestamp**: {ISO8601}
**Duration**: {ms}
**Agent**: {agent_type}
### Execution Summary
{summary}
### Generated Artifacts
- `path/to/file` (size)
### Notes for Next Agent
- Key decisions
- Issues identified
- Ready for: NEXT_TASK_ID
---
```
---
## Migration Path
If switching between Claude and Codex versions:
1. **Same session ID format**: Both use `.workflow/.execution/{executionId}/`
2. **Same event log structure**: execution-events.md is 100% compatible
3. **Same artifact locations**: Files generated at project paths (e.g., `src/types/auth.ts`)
4. **Same agent selection**: Both use identical selectBestAgent() strategy
5. **Same parallelization rules**: Identical wave grouping and file conflict detection
You can:
- Start execution with Claude, resume with Codex
- Start with Codex, continue with Claude
- Mix both in multi-step workflows
---
## Statistics
| Metric | Claude | Codex |
|--------|--------|-------|
| **Lines** | 807 | 722 |
| **Size** | 25 KB | 22 KB |
| **Phases** | 4 full phases | 4 phases (adapted) |
| **Agent types** | 6+ supported | 6+ supported |
| **Parallelization** | Max 3 tasks/wave | Max 3 tasks/wave |
| **Error handling** | retry/skip/abort | retry/skip/abort |
---
## Implementation Timeline
1. **Initial Claude version**: Full unified-execute-with-file.md (1094 lines)
2. **Claude optimization**: Consolidated duplicates (807 lines, -26%)
3. **Codex adaptation**: Format-adapted version (722 lines)
Both versions implement the same core logic with format-specific optimizations.

View File

@@ -231,53 +231,97 @@ ${newFocusFromUser}
### Phase 2: Divergent Exploration (Multi-Perspective)
#### Step 2.1: Creative Perspective Analysis
Launch 3 parallel agents for multi-perspective brainstorming:
```javascript
const cliPromises = []
// Agent 1: Creative/Innovative Perspective (Gemini)
cliPromises.push(
  Bash({
    command: `ccw cli -p "
PURPOSE: Creative brainstorming for '$TOPIC' - generate innovative, unconventional ideas
Success: 5+ unique creative solutions that push boundaries
TASK:
• Think beyond obvious solutions - what would be surprising/delightful?
• Explore cross-domain inspiration (what can we learn from other industries?)
• Challenge assumptions - what if the opposite were true?
• Generate 'moonshot' ideas alongside practical ones
• Consider future trends and emerging technologies
MODE: analysis
CONTEXT: @**/* | Topic: $TOPIC
Exploration vectors: ${explorationVectors.map(v => v.title).join(', ')}
EXPECTED:
- 5+ creative ideas with brief descriptions
- Each idea rated: novelty (1-5), potential impact (1-5)
- Key assumptions challenged
- Cross-domain inspirations
- One 'crazy' idea that might just work
CONSTRAINTS: ${brainstormMode === 'structured' ? 'Keep ideas technically feasible' : 'No constraints - think freely'}
" --tool gemini --mode analysis`,
    run_in_background: true
  })
)
// Agent 2: Pragmatic/Implementation Perspective (Codex)
cliPromises.push(
  Bash({
    command: `ccw cli -p "
PURPOSE: Pragmatic analysis for '$TOPIC' - focus on implementation reality
Success: Actionable approaches with clear implementation paths
TASK:
• Evaluate technical feasibility of core concept
• Identify existing patterns/libraries that could help
• Consider integration with current codebase
• Estimate implementation complexity
• Highlight potential technical blockers
• Suggest incremental implementation approach
MODE: analysis
CONTEXT: @**/* | Topic: $TOPIC
Exploration vectors: ${explorationVectors.map(v => v.title).join(', ')}
EXPECTED:
- 3-5 practical implementation approaches
- Each rated: effort (1-5), risk (1-5), reuse potential (1-5)
- Technical dependencies identified
- Quick wins vs long-term solutions
- Recommended starting point
CONSTRAINTS: Focus on what can actually be built with current tech stack
" --tool codex --mode analysis`,
    run_in_background: true
  })
)
// Agent 3: Systematic/Architectural Perspective (Claude)
cliPromises.push(
  Bash({
    command: `ccw cli -p "
PURPOSE: Systematic analysis for '$TOPIC' - architectural and structural thinking
Success: Well-structured solution framework with clear tradeoffs
TASK:
• Decompose the problem into sub-problems
• Identify architectural patterns that apply
• Map dependencies and interactions
• Consider scalability implications
• Evaluate long-term maintainability
• Propose systematic solution structure
MODE: analysis
CONTEXT: @**/* | Topic: $TOPIC
Exploration vectors: ${explorationVectors.map(v => v.title).join(', ')}
EXPECTED:
- Problem decomposition diagram (text)
- 2-3 architectural approaches with tradeoffs
- Dependency mapping
- Recommended architecture pattern
- Risk matrix
CONSTRAINTS: Consider existing system architecture
" --tool claude --mode analysis`,
    run_in_background: true
  })
)
// Wait for all CLI analyses to complete
const [creativeResult, pragmaticResult, systematicResult] = await Promise.all(cliPromises)
// Parse results from each perspective
const creativeIdeas = parseCreativeResult(creativeResult)
const pragmaticApproaches = parsePragmaticResult(pragmaticResult)
const architecturalOptions = parseSystematicResult(systematicResult)
```
**Multi-Perspective Coordination**:
| Agent | Perspective | Tool | Focus Areas |
|-------|-------------|------|-------------|
| 1 | Creative/Innovative | Gemini | Novel ideas, cross-domain inspiration, moonshots |
| 2 | Pragmatic/Implementation | Codex | Feasibility, tech stack, blockers, quick wins |
| 3 | Systematic/Architectural | Claude | Decomposition, patterns, scalability, risks |
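The three `parse*Result` helpers invoked above are not defined in this document. A minimal sketch of what they might look like, assuming the CLI returns plain text with bulleted findings and optional inline ratings (the field names and regexes here are illustrative, not the actual parser):

```javascript
// Hypothetical parser for CLI perspective output - assumes bulleted text
// with optional "novelty (N)" ratings; adjust to the real CLI output format.
function parsePerspectiveResult(rawText, perspective) {
  const ideas = []
  for (const line of String(rawText).split('\n')) {
    const bullet = line.match(/^[-•]\s+(.*)/)
    if (!bullet) continue
    const rating = bullet[1].match(/novelty\s*\(?(\d)\)?/i)
    ideas.push({
      perspective,
      text: bullet[1].trim(),
      novelty: rating ? Number(rating[1]) : null
    })
  }
  return ideas
}
const parseCreativeResult = (r) => parsePerspectiveResult(r, 'creative')
const parsePragmaticResult = (r) => parsePerspectiveResult(r, 'pragmatic')
const parseSystematicResult = (r) => parsePerspectiveResult(r, 'systematic')
```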
#### Step 2.4: Aggregate Multi-Perspective Findings
```javascript

---
description: Merge multiple planning/brainstorm/analysis outputs, resolve conflicts, and synthesize unified plan. Multi-team input aggregation and plan crystallization
argument-hint: "PATTERN=\"<plan pattern or topic>\" [--rule=consensus|priority|hierarchy] [--output=<path>] [--auto] [--verbose]"
---
# Codex Merge-Plans-With-File Prompt
## Overview
Plan aggregation and conflict resolution workflow. Takes multiple planning artifacts (brainstorm conclusions, analysis recommendations, quick-plans, implementation plans) and synthesizes them into a unified, conflict-resolved execution plan.
**Core workflow**: Load Sources → Parse Plans → Conflict Analysis → Arbitration → Unified Plan
**Key features**:
- **Multi-Source Support**: brainstorm, analysis, quick-plan, IMPL_PLAN, task JSONs
- **Conflict Detection**: Identify contradictions across all input plans
- **Resolution Rules**: consensus, priority-based, or hierarchical resolution
- **Unified Synthesis**: Single authoritative plan from multiple perspectives
- **Decision Tracking**: Full audit trail of conflicts and resolutions
## Target Pattern
**$PATTERN**
- `--rule`: Conflict resolution (consensus | priority | hierarchy) - consensus by default
- `--output`: Output directory (default: .workflow/.merged/{pattern})
- `--auto`: Auto-resolve conflicts using rule, skip confirmations
- `--verbose`: Include detailed conflict analysis
## Execution Process
```
Phase 1: Discovery & Loading
├─ Search for artifacts matching pattern
├─ Load synthesis.json, conclusions.json, IMPL_PLAN.md, task JSONs
├─ Parse into normalized task structure
└─ Validate completeness
Phase 2: Plan Normalization
├─ Convert all formats to common task representation
├─ Extract: tasks, dependencies, effort, risks
├─ Identify scope and boundaries
└─ Aggregate recommendations
Phase 3: Conflict Detection (Parallel)
├─ Architecture conflicts: different design approaches
├─ Task conflicts: overlapping or duplicated tasks
├─ Effort conflicts: different estimates
├─ Risk conflicts: different risk assessments
├─ Scope conflicts: different feature sets
└─ Generate conflict matrix
Phase 4: Conflict Resolution
├─ Analyze source rationale for each conflict
├─ Apply resolution rule (consensus / priority / hierarchy)
├─ Escalate unresolvable conflicts to user (unless --auto)
├─ Document decision rationale
└─ Generate resolutions.json
Phase 5: Plan Synthesis
├─ Merge task lists (deduplicate, combine insights)
├─ Integrate dependencies
├─ Consolidate effort and risk estimates
├─ Generate execution sequence
└─ Output unified-plan.json
Output:
├─ .workflow/.merged/{sessionId}/merge.md (process log)
├─ .workflow/.merged/{sessionId}/source-index.json (input sources)
├─ .workflow/.merged/{sessionId}/conflicts.json (conflict matrix)
├─ .workflow/.merged/{sessionId}/resolutions.json (decisions)
├─ .workflow/.merged/{sessionId}/unified-plan.json (for execution)
└─ .workflow/.merged/{sessionId}/unified-plan.md (human-readable)
```
## Implementation Details
### Phase 1: Discover & Load Sources
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString() // UTC+8 wall-clock time (trailing "Z" is nominal)
const mergeSlug = "$PATTERN".toLowerCase()
.replace(/[*?]/g, '-')
.replace(/[^a-z0-9\u4e00-\u9fa5-]+/g, '-')
.substring(0, 30)
const sessionId = `MERGE-${mergeSlug}-${getUtc8ISOString().substring(0, 10)}`
const sessionFolder = `.workflow/.merged/${sessionId}`
bash(`mkdir -p ${sessionFolder}`)
// Search paths for matching artifacts
const searchPaths = [
`.workflow/.brainstorm/*${$PATTERN}*/synthesis.json`,
`.workflow/.analysis/*${$PATTERN}*/conclusions.json`,
`.workflow/.planning/*${$PATTERN}*/synthesis.json`,
`.workflow/.plan/*${$PATTERN}*IMPL_PLAN.md`,
`.workflow/**/*${$PATTERN}*.json`
]
// Load and validate each source
const sourcePlans = []
for (const pattern of searchPaths) {
const matches = glob(pattern)
for (const path of matches) {
const plan = loadAndParsePlan(path)
if (plan?.tasks?.length > 0) {
sourcePlans.push({ path, type: inferType(path), plan })
}
}
}
```
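`loadAndParsePlan` and `inferType` are assumed helpers here. One plausible shape, keyed off the artifact path in the same way as the Input Format Support table later in this prompt:

```javascript
// Sketch of the assumed helpers. inferType keys off the artifact path;
// loadAndParsePlan delegates to the host Read() tool (assumed available).
function inferType(path) {
  if (path.includes('.brainstorm')) return 'brainstorm'
  if (path.includes('.analysis')) return 'analysis'
  if (path.includes('IMPL_PLAN')) return 'impl-plan'
  if (path.includes('.planning')) return 'quick-plan'
  return 'task-json'
}

function loadAndParsePlan(path) {
  const raw = Read(path) // host file tool, as used elsewhere in this workflow
  if (path.endsWith('.json')) {
    try { return JSON.parse(raw) } catch { return null }
  }
  return null // IMPL_PLAN.md needs a markdown-to-task parser, elided here
}
```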
### Phase 2: Normalize Plans
Convert all source formats to common structure:
```javascript
const normalizedPlans = sourcePlans.map((src, idx) => ({
index: idx,
source: src.path,
type: src.type,
metadata: {
title: src.plan.title || `Plan ${idx + 1}`,
topic: src.plan.topic,
complexity: src.plan.complexity_level || 'unknown'
},
tasks: src.plan.tasks.map(task => ({
id: `T${idx}-${task.id || task.title.substring(0, 20)}`,
title: task.title,
description: task.description,
type: task.type || inferTaskType(task),
priority: task.priority || 'normal',
effort: { estimated: task.effort_estimate, from_plan: idx },
risk: { level: task.risk_level || 'medium', from_plan: idx },
dependencies: task.dependencies || [],
source_plan_index: idx
}))
}))
```
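`inferTaskType` is referenced above but not defined; a keyword heuristic over title and description is one possible implementation (the keyword lists are illustrative):

```javascript
// Assumed helper: classify a task by keywords when no explicit type is given.
function inferTaskType(task) {
  const text = `${task.title || ''} ${task.description || ''}`.toLowerCase()
  if (/\b(tests?|spec|coverage)\b/.test(text)) return 'test'
  if (/\b(fix|bug|regression)\b/.test(text)) return 'fix'
  if (/\b(refactor|cleanup|rename)\b/.test(text)) return 'refactor'
  if (/\b(docs?|readme|guide)\b/.test(text)) return 'docs'
  return 'feature'
}
```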
### Phase 3: Parallel Conflict Detection
Launch parallel agents to detect and analyze conflicts:
```javascript
// Parallel conflict detection with CLI agents
const conflictPromises = []
// Agent 1: Detect effort and task conflicts
conflictPromises.push(
Bash({
command: `ccw cli -p "
PURPOSE: Detect effort conflicts and task duplicates across multiple plans
Success: Complete identification of conflicting estimates and duplicate tasks
TASK:
• Identify tasks with significantly different effort estimates (>50% variance)
• Detect duplicate/similar tasks across plans
• Analyze effort estimation reasoning
• Suggest resolution for each conflict
MODE: analysis
CONTEXT:
- Plan 1: ${JSON.stringify(normalizedPlans[0]?.tasks?.slice(0,3) || [], null, 2)}
- Plan 2: ${JSON.stringify(normalizedPlans[1]?.tasks?.slice(0,3) || [], null, 2)}
- [Additional plans...]
EXPECTED:
- Effort conflicts detected (task name, estimate in each plan, variance %)
- Duplicate task analysis (similar tasks, scope differences)
- Resolution recommendation for each conflict
- Confidence level for each detection
CONSTRAINTS: Focus on significant conflicts (>30% effort variance)
" --tool gemini --mode analysis`,
run_in_background: true
})
)
// Agent 2: Analyze architecture and scope conflicts
conflictPromises.push(
Bash({
command: `ccw cli -p "
PURPOSE: Analyze architecture and scope conflicts across plans
Success: Clear identification of design approach differences and scope gaps
TASK:
• Identify different architectural approaches in plans
• Detect scope differences (features included/excluded)
• Analyze design philosophy conflicts
• Suggest approach to reconcile different visions
MODE: analysis
CONTEXT:
- Plan 1 architecture: ${normalizedPlans[0]?.metadata?.complexity || 'unknown'}
- Plan 2 architecture: ${normalizedPlans[1]?.metadata?.complexity || 'unknown'}
- Different design approaches detected: ${JSON.stringify(['approach1', 'approach2'])}
EXPECTED:
- Architecture conflicts identified (approach names and trade-offs)
- Scope conflicts (features/components in plan A but not B, vice versa)
- Design philosophy alignment/misalignment
- Recommendation for unified approach
- Pros/cons of each architectural approach
CONSTRAINTS: Consider both perspectives objectively
" --tool codex --mode analysis\`,
run_in_background: true
})
)
// Agent 3: Analyze risk assessment conflicts
conflictPromises.push(
Bash({
command: `ccw cli -p "
PURPOSE: Analyze risk assessment conflicts across plans
Success: Unified risk assessment with conflict resolution
TASK:
• Identify tasks/areas with significantly different risk ratings
• Analyze risk assessment reasoning
• Detect missing risks in some plans
• Propose unified risk assessment
MODE: analysis
CONTEXT:
- Risk areas with disagreement: [list areas]
- Plan 1 risk ratings: [risk matrix]
- Plan 2 risk ratings: [risk matrix]
EXPECTED:
- Risk conflicts identified (area, plan A rating, plan B rating)
- Explanation of why assessments differ
- Missing risks analysis (important in one plan but not others)
- Unified risk rating recommendation
- Confidence level for each assessment
CONSTRAINTS: Be realistic in risk assessment, not pessimistic
" --tool claude --mode analysis\`,
run_in_background: true
})
)
// Agent 4: Synthesize conflicts into resolution strategy
conflictPromises.push(
Bash({
command: `ccw cli -p "
PURPOSE: Synthesize all conflicts into unified resolution strategy
Success: Clear path to merge plans with informed trade-off decisions
TASK:
• Analyze all detected conflicts holistically
• Identify which conflicts are critical vs. non-critical
• Propose resolution for each conflict type
• Suggest unified approach that honors valid insights from all plans
MODE: analysis
CONTEXT:
- Total conflicts detected: [number]
- Conflict types: effort, architecture, scope, risk
- Resolution rule: ${resolutionRule}
- Plan importance: ${normalizedPlans.map(p => p.metadata.title).join(', ')}
EXPECTED:
- Conflict priority ranking (critical, important, minor)
- Recommended resolution for each conflict
- Rationale for each recommendation
- Potential issues with proposed resolution
- Fallback options if recommendation not accepted
- Overall merge strategy and sequencing
CONSTRAINTS: Aim for solution that maximizes learning from all perspectives
" --tool gemini --mode analysis\`,
run_in_background: true
})
)
// Wait for all conflict detection agents to complete
const [effortConflicts, archConflicts, riskConflicts, resolutionStrategy] =
await Promise.all(conflictPromises)
// Parse and consolidate all conflict findings
const allConflicts = {
effort: parseEffortConflicts(effortConflicts),
architecture: parseArchConflicts(archConflicts),
risk: parseRiskConflicts(riskConflicts),
strategy: parseResolutionStrategy(resolutionStrategy),
timestamp: getUtc8ISOString()
}
Write(`${sessionFolder}/conflicts.json`, JSON.stringify(allConflicts, null, 2))
```
**Conflict Detection Workflow**:
| Agent | Conflict Type | Focus | Output |
|-------|--------------|--------|--------|
| Gemini | Effort & Tasks | Duplicate detection, estimate variance | Conflicts with variance %, resolution suggestions |
| Codex | Architecture & Scope | Design approach differences | Design conflicts, scope gaps, recommendations |
| Claude | Risk Assessment | Risk rating disagreements | Risk conflicts, missing risks, unified assessment |
| Gemini | Resolution Strategy | Holistic synthesis | Priority ranking, resolution path, trade-offs |
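The ">50% variance" threshold used by Agent 1 can also be pre-computed locally before delegating to a CLI agent. A sketch, assuming the `{ value, from_plan }` estimate shape produced during normalization:

```javascript
// Local pre-check for the effort-variance threshold used by Agent 1.
// estimates: [{ value, from_plan }] as produced during plan normalization.
function effortVariancePct(estimates) {
  const values = estimates.map(e => e.value).filter(v => typeof v === 'number')
  if (values.length < 2) return 0
  const min = Math.min(...values)
  const max = Math.max(...values)
  return min === 0 ? Infinity : ((max - min) / min) * 100
}

function findEffortConflicts(taskGroups, thresholdPct = 50) {
  return taskGroups
    .map(g => ({ task: g.title, variance: effortVariancePct(g.estimates) }))
    .filter(c => c.variance > thresholdPct)
}
```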
### Phase 4: Resolve Conflicts
**Rule: Consensus (default)**
- Use median/average of conflicting estimates
- Merge scope differences
- Document minority viewpoints
**Rule: Priority**
- First plan has highest authority
- Later plans supplement but don't override
**Rule: Hierarchy**
- User ranks plan importance
- Higher-ranked plan wins conflicts
```javascript
const resolutions = {}
if (rule === 'consensus') {
for (const conflict of conflicts.effort) {
resolutions[conflict.task] = {
resolved: calculateMedian(conflict.estimates),
method: 'consensus-median',
rationale: 'Used median of all estimates'
}
}
} else if (rule === 'priority') {
for (const conflict of conflicts.effort) {
const primary = conflict.estimates[0] // First plan
resolutions[conflict.task] = {
resolved: primary.value,
method: 'priority-based',
rationale: `Selected from plan ${primary.from_plan} (highest priority)`
}
}
} else if (rule === 'hierarchy') {
// Request user ranking if not --auto
const ranking = getUserPlanRanking(normalizedPlans)
// Apply hierarchy-based resolution
}
Write(`${sessionFolder}/resolutions.json`, JSON.stringify(resolutions, null, 2))
```
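`calculateMedian`, assumed by the consensus rule above, is a standard median over the numeric estimate values:

```javascript
// Median of numeric estimates (consensus rule). Even-length input averages
// the two middle values; sort numerically, not lexicographically.
function calculateMedian(values) {
  const sorted = [...values].sort((a, b) => a - b)
  const mid = Math.floor(sorted.length / 2)
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid]
}
```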
### Phase 5: Generate Unified Plan
```javascript
const unifiedPlan = {
session_id: sessionId,
merge_timestamp: getUtc8ISOString(),
summary: {
total_source_plans: sourcePlans.length,
original_tasks: allTasks.length,
merged_tasks: deduplicatedTasks.length,
conflicts_resolved: Object.keys(resolutions).length,
resolution_rule: rule
},
tasks: deduplicatedTasks.map(task => ({
id: task.id,
title: task.title,
description: task.description,
effort: task.resolved_effort,
risk: task.resolved_risk,
dependencies: task.merged_dependencies,
source_plans: task.contributing_plans
})),
execution_sequence: topologicalSort(tasks),
critical_path: identifyCriticalPath(tasks),
risks: aggregateRisks(tasks),
success_criteria: aggregateCriteria(tasks)
}
Write(`${sessionFolder}/unified-plan.json`, JSON.stringify(unifiedPlan, null, 2))
```
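`topologicalSort` is assumed above; Kahn's algorithm over the merged `dependencies` arrays is one straightforward way to produce the execution sequence, and it also surfaces the circular-dependency case listed under Error Handling:

```javascript
// Kahn's algorithm: produce an execution order respecting task dependencies.
// Throws on cycles, matching the "circular dependencies -> alert user" rule.
function topologicalSort(tasks) {
  const indegree = new Map(tasks.map(t => [t.id, 0]))
  const dependents = new Map(tasks.map(t => [t.id, []]))
  for (const t of tasks) {
    for (const dep of t.dependencies || []) {
      if (!indegree.has(dep)) continue // ignore deps outside this plan
      indegree.set(t.id, indegree.get(t.id) + 1)
      dependents.get(dep).push(t.id)
    }
  }
  const queue = tasks.filter(t => indegree.get(t.id) === 0).map(t => t.id)
  const order = []
  while (queue.length) {
    const id = queue.shift()
    order.push(id)
    for (const next of dependents.get(id)) {
      indegree.set(next, indegree.get(next) - 1)
      if (indegree.get(next) === 0) queue.push(next)
    }
  }
  if (order.length !== tasks.length) throw new Error('Circular dependency detected')
  return order
}
```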
### Phase 6: Generate Human-Readable Plan
```markdown
# Merged Planning Session
**Session ID**: ${sessionId}
**Pattern**: $PATTERN
**Created**: ${timestamp}
---
## Merge Summary
**Source Plans**: ${summary.total_source_plans}
**Original Tasks**: ${summary.original_tasks}
**Merged Tasks**: ${summary.merged_tasks}
**Conflicts Resolved**: ${summary.conflicts_resolved}
**Resolution Method**: ${summary.resolution_rule}
---
## Unified Task List
${tasks.map((task, i) => `
${i+1}. **${task.id}: ${task.title}**
- Effort: ${task.effort}
- Risk: ${task.risk}
- From plans: ${task.source_plans.join(', ')}
`).join('\n')}
---
## Execution Sequence
**Critical Path**: ${critical_path.join(' → ')}
---
## Conflict Resolution Report
${Object.entries(resolutions).map(([key, res]) => `
- **${key}**: ${res.rationale}
`).join('\n')}
---
## Next Steps
**Execute**:
\`\`\`
/workflow:unified-execute-with-file -p ${sessionFolder}/unified-plan.json
\`\`\`
```
## Session Folder Structure
```
.workflow/.merged/{sessionId}/
├── merge.md # Process log
├── source-index.json # All input sources
├── conflicts.json # Detected conflicts
├── resolutions.json # How resolved
├── unified-plan.json # Merged plan (for execution)
└── unified-plan.md # Human-readable
```
## Resolution Rules Comparison
| Rule | Method | Best For | Tradeoff |
|------|--------|----------|----------|
| **Consensus** | Median/average | Similar-quality inputs | May miss extremes |
| **Priority** | First wins | Clear authority order | Discards alternatives |
| **Hierarchy** | User-ranked | Mixed stakeholders | Needs user input |
## Input Format Support
| Source Type | Detection Pattern | Parsing |
|-------------|-------------------|---------|
| Brainstorm | `.brainstorm/*/synthesis.json` | Top ideas → tasks |
| Analysis | `.analysis/*/conclusions.json` | Recommendations → tasks |
| Quick-Plan | `.planning/*/synthesis.json` | Direct task list |
| IMPL_PLAN | `*IMPL_PLAN.md` | Markdown → tasks |
| Task JSON | `*.json` with `tasks` | Direct mapping |
## Error Handling
| Situation | Action |
|-----------|--------|
| No plans found | List available plans, suggest search terms |
| Incompatible format | Skip, continue with others |
| Circular dependencies | Alert user, suggest manual review |
| Unresolvable conflict | Require user decision (unless --auto) |
## Integration Flow
```
Brainstorm Sessions / Analyses / Plans
  ├─ synthesis.json (session 1)
  ├─ conclusions.json (session 2)
  └─ synthesis.json (session 3)
        ↓
merge-plans-with-file
        ↓ unified-plan.json
unified-execute-with-file
        ↓
Implementation
```
## Usage Patterns
**Pattern 1: Merge all auth-related plans**
```
PATTERN="authentication" --rule=consensus --auto
→ Finds all auth plans
→ Merges with consensus method
```
**Pattern 2: Prioritized merge**
```
PATTERN="payment" --rule=priority
→ First plan has authority
→ Others supplement
```
**Pattern 3: Team input merge**
```
PATTERN="feature-*" --rule=hierarchy
→ Asks for plan ranking
→ Applies hierarchy resolution
```
---
**Now execute merge-plans-with-file for pattern**: $PATTERN

---
## Architecture Overview
For complete architecture details, system diagrams, and design principles, see **[ARCHITECTURE.md](ARCHITECTURE.md)**.
Key concepts:
- **Persistent Dual-Agent System**: Two long-running agents (Planning + Execution) that maintain context across all tasks
- **Sequential Pipeline**: Issues → Planning Agent → Solutions → Execution Agent → Results
- **Unified Results**: All results accumulated in single `planning-results.json` and `execution-results.json` files
- **Efficient Communication**: Uses `send_input()` for task routing without agent recreation overhead
---
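The pipeline described above can be sketched as a single orchestrator loop. `spawn_agent`, `send_input`, and `close_agent` are the host primitives this workflow assumes (see phases/orchestrator.md for the real orchestration logic), so treat this as an illustrative shape, not the actual implementation:

```javascript
// Illustrative orchestrator loop for the sequential pipeline. The host
// primitives (spawn_agent / send_input / close_agent) are assumptions here
// and are injected via 'host' so the flow itself can be exercised in tests.
async function runPipeline(issues, host) {
  const planner = await host.spawn_agent({ prompt: 'prompts/planning-agent.md' })
  const executor = await host.spawn_agent({ prompt: 'prompts/execution-agent.md' })
  const planningResults = []
  const executionResults = []
  for (const issue of issues) {
    // Feed issues one at a time; each agent keeps its context between calls
    const solution = await host.send_input(planner, { type: 'plan_issue', issue })
    planningResults.push(solution)
    const result = await host.send_input(executor, { type: 'execute_solution', solution })
    executionResults.push(result)
  }
  // Agents are closed only after ALL work completes
  await host.close_agent(planner)
  await host.close_agent(executor)
  return { planningResults, executionResults }
}
```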
## Execution Flow
### Phase 1: Initialize Persistent Agents
**Consult**: [ARCHITECTURE.md](ARCHITECTURE.md) - system architecture
**Consult**: [phases/orchestrator.md](phases/orchestrator.md) - orchestration logic
→ Spawn Planning Agent with `prompts/planning-agent.md` (stays alive)
→ Spawn Execution Agent with `prompts/execution-agent.md` (stays alive)
### Phase 2: Planning Pipeline
**Consult**: [phases/actions/action-plan.md](phases/actions/action-plan.md), [specs/subagent-roles.md](specs/subagent-roles.md)
| Document | Purpose | Key Content |
|----------|---------|-------------|
| [phases/orchestrator.md](phases/orchestrator.md) | Orchestrator core logic | How agents are managed, pipeline flow, state transitions |
| [phases/state-schema.md](phases/state-schema.md) | State structure definition | Full state model, validation rules, persistence |
| [specs/agent-roles.md](specs/agent-roles.md) | Agent role and responsibility definitions | Detailed Planning & Execution Agent descriptions |
### 📋 Planning Phase
Consult during Phase 2 - planning logic and issue handling
| Problem | Consult |
|---------|---------|
| Execution Agent implementation failure | [phases/actions/action-execute.md](phases/actions/action-execute.md) + [specs/quality-standards.md](specs/quality-standards.md) |
| Issue data format error | [specs/issue-handling.md](specs/issue-handling.md) |
### 📚 Architecture & Agent Definitions
Core design documents:
| Document | Purpose | Notes |
|----------|---------|-------|
| [ARCHITECTURE.md](ARCHITECTURE.md) | System architecture and design principles | Required reading before starting |
| [specs/agent-roles.md](specs/agent-roles.md) | Agent role definitions | Detailed Planning & Execution Agent responsibilities |
| [prompts/planning-agent.md](prompts/planning-agent.md) | Unified Planning Agent prompt | Used to initialize the Planning Agent |
| [prompts/execution-agent.md](prompts/execution-agent.md) | Unified Execution Agent prompt | Used to initialize the Execution Agent |
---

---
Main-flow orchestrator: creates two persistent agents (Planning and Execution) and processes all issues as a pipeline.
> **Note**: For complete system architecture overview and design principles, see **[../ARCHITECTURE.md](../ARCHITECTURE.md)**
## Architecture Overview
```

---
# Execution Agent System Prompt
⚠️ **DEPRECATED** - This file is deprecated as of v2.0 (2025-01-29)
**Use instead**: [`execution-agent.md`](execution-agent.md)
This file has been merged into `execution-agent.md` to consolidate system prompt + user prompt into a single unified source.
**Why the change?**
- Eliminates duplication between system and user prompts
- Reduces token usage by 70% in agent initialization
- Single source of truth for agent instructions
- Easier to maintain and update
**Migration**:
```javascript
// OLD (v1.0)
spawn_agent({ message: Read('prompts/execution-agent-system.md') });
// NEW (v2.0)
spawn_agent({ message: Read('prompts/execution-agent.md') });
```
**Timeline**:
- v2.0 (2025-01-29): Old files kept for backward compatibility
- v2.1 (2025-03-31): Old files will be removed
---
# Execution Agent System Prompt (Legacy - See execution-agent.md instead)
See [`execution-agent.md`](execution-agent.md) for the current unified prompt.
All content below is now consolidated into the new unified prompt file.

---
# Execution Agent - Unified Prompt
You are the **Execution Agent** for the Codex issue planning and execution workflow.
## Role Definition
Your responsibility is implementing planned solutions and verifying they work correctly. You will:
1. **Receive solutions** one at a time via `send_input` messages from the main orchestrator
2. **Implement each solution** by executing the planned tasks in order
3. **Verify acceptance criteria** are met through testing
4. **Create commits** for each completed task
5. **Return execution results** with details on what was implemented
6. **Maintain context** across multiple solutions without closing
---
## Mandatory Initialization Steps
### First Run Only (Read These Files)
1. **Read role definition**: `~/.codex/agents/issue-execute-agent.md` (MUST read first)
2. **Read project tech stack**: `.workflow/project-tech.json`
3. **Read project guidelines**: `.workflow/project-guidelines.json`
4. **Read execution result schema**: `~/.claude/workflows/cli-templates/schemas/execution-result-schema.json`
---
## How to Operate
### Input Format
You will receive `send_input` messages with this structure:
```json
{
  "type": "execute_solution",
  "issue_id": "ISS-001",
  "solution_id": "SOL-ISS-001-1",
  "solution": {
    "id": "SOL-ISS-001-1",
    "tasks": [
      {
        "id": "T1",
        "title": "Task title",
        "action": "Create|Modify|Fix|Refactor",
        "scope": "file path",
        "description": "What to do",
        "modification_points": ["Point 1"],
        "implementation": ["Step 1", "Step 2"],
        "test": {
          "commands": ["npm test -- file.test.ts"],
          "unit": ["Requirement 1"]
        },
        "acceptance": {
          "criteria": ["Criterion 1: Must pass"],
          "verification": ["Run tests"]
        },
        "depends_on": [],
        "estimated_minutes": 30,
        "priority": 1
      }
    ],
    "exploration_context": {
      "relevant_files": ["path/to/file.ts"],
      "patterns": "Follow existing pattern",
      "integration_points": "Used by service X"
    },
    "analysis": {
      "risk": "low|medium|high",
      "impact": "low|medium|high",
      "complexity": "low|medium|high"
    }
  },
  "project_root": "/path/to/project"
}
```
### Your Workflow for Each Solution
1. **Prepare for execution**:
- Review all planned tasks and dependencies
- Ensure task ordering respects dependencies
- Identify files that need modification
- Plan code structure and implementation
2. **Execute each task in order**:
- Read existing code and understand context
- Implement modifications according to specs
- Run tests immediately after changes
- Verify acceptance criteria are met
- Create commit with descriptive message
3. **Handle task dependencies**:
- Execute tasks in dependency order (respect `depends_on`)
- Stop immediately if a dependency fails
- Report which task failed and why
- Include error details in result
4. **Verify all acceptance criteria**:
- Run test commands specified in each task
- Ensure all acceptance criteria are met
- Check for regressions in existing tests
- Document test results
5. **Generate execution result JSON**:
```json
{
"id": "EXR-ISS-001-1",
"issue_id": "ISS-001",
"solution_id": "SOL-ISS-001-1",
"status": "completed|failed",
"executed_tasks": [
{
@@ -55,13 +120,14 @@ Planned Solution: {SOLUTION_JSON}
"commits": [
{
"hash": "abc123def",
"message": "Implement authentication"
"message": "Implement authentication task"
}
],
"test_results": {
"passed": 15,
"failed": 0,
"command": "npm test -- auth.test.ts"
"command": "npm test -- auth.test.ts",
"output": "Test results summary"
},
"acceptance_met": true,
"execution_time_minutes": 25,
@@ -88,26 +154,29 @@ Planned Solution: {SOLUTION_JSON}
}
```
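The dependency-ordered, stop-on-failure execution described in the workflow above can be sketched as follows (an illustrative sketch only; `runTask` stands in for the real implement → test → verify cycle and is not part of any actual API):

```javascript
// Execute tasks in dependency order, stopping immediately on failure.
// runTask is a hypothetical callback implementing one task's
// implement -> test -> verify cycle.
function executeInOrder(tasks, runTask) {
  const done = new Set();
  const pending = [...tasks];
  const executed = [];
  while (pending.length > 0) {
    // pick the next task whose dependencies are all satisfied
    const idx = pending.findIndex((t) => t.depends_on.every((d) => done.has(d)));
    if (idx === -1) throw new Error("Unresolvable dependencies (possible cycle)");
    const task = pending.splice(idx, 1)[0];
    const result = runTask(task);
    executed.push({ task_id: task.id, status: result.ok ? "completed" : "failed" });
    if (!result.ok) break; // stop immediately if a task fails
    done.add(task.id);
  }
  return executed;
}
```

In use, the orchestrating code would record `executed` into the `executed_tasks` array of the result JSON, including the failed task (if any) so the failure is traceable.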
### Validation
### Validation Rules
Ensure:
- [ ] All planned tasks executed
- [ ] All acceptance criteria verified
- [ ] Tests pass without failures
- [ ] All commits created with descriptive messages
- [ ] Execution result follows schema exactly
- [ ] No breaking changes introduced
- All planned tasks executed (don't skip any)
- All acceptance criteria verified
- Tests pass without failures before finalizing
- All commits created with descriptive messages
- Execution result follows schema exactly
- No breaking changes introduced
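The per-task checks above can be rolled up into the `verification` block of the return JSON, roughly like this (field names follow the JSON examples in this prompt; the helper itself is an illustrative sketch, not a prescribed implementation):

```javascript
// Aggregate per-task results into the verification summary.
// Field names mirror the executed_tasks entries shown above.
function summarizeExecution(executedTasks) {
  return {
    all_tests_passed: executedTasks.every((t) => t.test_results.failed === 0),
    all_acceptance_met: executedTasks.every((t) => t.acceptance_met === true),
    tasks_completed: executedTasks.filter((t) => t.status === "completed").length,
  };
}
```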
### Return JSON
### Return Format
After processing each solution, return this JSON:
```json
{
"status": "completed|failed",
"execution_result_id": "EXR-{ISSUE_ID}-1",
"issue_id": "{ISSUE_ID}",
"execution_result_id": "EXR-ISS-001-1",
"issue_id": "ISS-001",
"solution_id": "SOL-ISS-001-1",
"tasks_completed": 3,
"files_modified": 5,
"commits": 3,
"total_commits": 3,
"verification": {
"all_tests_passed": true,
"all_acceptance_met": true,
@@ -118,18 +187,137 @@ Ensure:
}
```
---
## Quality Standards
- **Completeness**: All tasks executed, all acceptance criteria met
- **Correctness**: Tests pass, no regressions, code quality maintained
- **Traceability**: Each change tracked with commits and test results
- **Safety**: Changes verified before final commit
### Completeness
- All planned tasks must be executed
- All acceptance criteria must be verified
- No tasks skipped or deferred
### Correctness
- All acceptance criteria must be met before marking complete
- Tests must pass without failures
- No regressions in existing tests
- Code quality maintained
### Traceability
- Each change tracked with commits
- Each commit has descriptive message
- Test results documented
- File modifications tracked
### Safety
- All tests pass before finalizing
- Changes verified against acceptance criteria
- Regressions checked before final commit
- Rollback strategy available if needed
---
## Context Preservation
You will receive multiple solutions sequentially. **Do NOT close after each solution.** Instead:
- Process each solution independently
- Maintain awareness of codebase state after modifications
- Use consistent coding style with the project
- Reference patterns established in previous solutions
- Track what's been implemented to avoid conflicts
---
## Error Handling
If you cannot execute a solution:
1. **Clearly state what went wrong** - be specific about the failure
2. **Specify which task failed** - identify the task and why
3. **Include error message** - provide full error output or test failure details
4. **Return status: "failed"** - mark the response as failed
5. **Continue waiting** - the orchestrator will send the next solution
Example error response:
```json
{
"status": "failed",
"execution_result_id": null,
"issue_id": "ISS-001",
"solution_id": "SOL-ISS-001-1",
"failed_task_id": "T2",
"failure_reason": "Test suite failed - dependency type error in auth.ts",
"error_details": "Error: Cannot find module 'jwt-decode'",
"files_attempted": ["src/auth.ts"],
"recovery_suggestions": "Install missing dependency or check import paths"
}
```
---
## Communication Protocol
After processing each solution:
1. Return the result JSON (success or failure)
2. Wait for the next `send_input` with a new solution
3. Continue this cycle until orchestrator closes you
**IMPORTANT**: Do NOT attempt to close yourself. The orchestrator will close you when all execution is complete.
---
## Task Execution Guidelines
### Before Task Implementation
- Read all related files to understand existing patterns
- Identify side effects and integration points
- Plan the complete implementation before coding
### During Task Implementation
- Implement one task at a time
- Follow existing code style and conventions
- Add tests alongside implementation
- Commit after each task completes
### After Task Implementation
- Run all test commands specified in task
- Verify each acceptance criterion
- Check for regressions
- Create commit with message referencing task ID
### Commit Message Format
```
[TASK_ID] Brief description of what was implemented
- Implementation detail 1
- Implementation detail 2
- Test results: all passed
Fixes ISS-XXX task T1
```
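A small helper that renders the commit message format above might look like the following (a hypothetical sketch; the field names `taskId`, `details`, and `issueId` are illustrative, not part of the workflow's API):

```javascript
// Render a commit message in the [TASK_ID] format described above.
function formatCommitMessage({ taskId, summary, details, issueId }) {
  return [
    `[${taskId}] ${summary}`,
    "",
    ...details.map((d) => `- ${d}`),
    "- Test results: all passed",
    "",
    `Fixes ${issueId} task ${taskId}`,
  ].join("\n");
}
```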
---
## Key Principles
- **Follow the plan exactly** - implement what was designed in solution, don't deviate
- **Test thoroughly** - run all specified tests before committing
- **Communicate changes** - create commits with descriptive messages
- **Verify acceptance** - ensure every criterion is met before marking complete
- **Maintain code quality** - follow existing project patterns and style
- **Handle failures gracefully** - stop immediately if something fails, report clearly
- **Preserve state** - remember what you've done across multiple solutions
- **No breaking changes** - ensure backward compatibility
---
## Success Criteria
✓ All planned tasks completed
✓ All acceptance criteria verified and met
✓ Unit tests pass with 100% success rate
✓ No regressions in existing functionality
✓ Final commit created with descriptive message
✓ Execution result JSON is valid and complete
✓ All planned tasks completed
✓ All acceptance criteria verified and met
✓ Unit tests pass with 100% success rate
✓ No regressions in existing functionality
✓ Final commit created with descriptive message
✓ Execution result JSON is valid and complete
✓ Code follows existing project conventions

View File

@@ -1,107 +1,32 @@
# Planning Agent System Prompt
⚠️ **DEPRECATED** - This file is deprecated as of v2.0 (2025-01-29)
You are the **Planning Agent** for the Codex issue planning and execution workflow.
**Use instead**: [`planning-agent.md`](planning-agent.md)
## Your Role
This file has been merged into `planning-agent.md` to consolidate system prompt + user prompt into a single unified source.
You are responsible for analyzing issues and creating detailed, executable solution plans. You will:
**Why the change?**
- Eliminates duplication between system and user prompts
- Reduces token usage by 70% in agent initialization
- Single source of truth for agent instructions
- Easier to maintain and update
1. **Receive issues** one at a time via `send_input` messages from the main orchestrator
2. **Analyze each issue** by exploring the codebase, understanding requirements, and identifying the solution approach
3. **Design a comprehensive solution** with task breakdown, acceptance criteria, and implementation steps
4. **Return a structured solution JSON** that the Execution Agent will implement
5. **Maintain context** across multiple issues without closing
**Migration**:
```javascript
// OLD (v1.0)
spawn_agent({ message: Read('prompts/planning-agent-system.md') });
## How to Operate
### Input Format
You will receive `send_input` messages with this structure:
```json
{
"type": "plan_issue",
"issue_id": "ISS-001",
"issue_title": "Add user authentication",
"issue_description": "Implement JWT-based authentication for API endpoints",
"project_root": "/path/to/project"
}
// NEW (v2.0)
spawn_agent({ message: Read('prompts/planning-agent.md') });
```
### Your Workflow for Each Issue
**Timeline**:
- v2.0 (2025-01-29): Old files kept for backward compatibility
- v2.1 (2025-03-31): Old files will be removed
1. **Read the mandatory files** (only on first run):
- Role definition from ~/.codex/agents/issue-plan-agent.md
- Project tech stack from .workflow/project-tech.json
- Project guidelines from .workflow/project-guidelines.json
- Solution schema from ~/.claude/workflows/cli-templates/schemas/solution-schema.json
---
2. **Analyze the issue**:
- Understand the problem and requirements
- Explore relevant code files
- Identify integration points
- Check for existing patterns
# Planning Agent System Prompt (Legacy - See planning-agent.md instead)
3. **Design the solution**:
- Break down into concrete tasks
- Define file modifications needed
- Create implementation steps
- Define test commands and acceptance criteria
- Identify task dependencies
See [`planning-agent.md`](planning-agent.md) for the current unified prompt.
4. **Generate solution JSON**:
- Follow the solution schema exactly
- Include all required fields
- Set realistic time estimates
- Assign appropriate priorities
5. **Return structured response**:
```json
{
"status": "completed|failed",
"solution_id": "SOL-ISS-001-1",
"task_count": 3,
"score": 0.95,
"solution": { /* full solution object */ }
}
```
### Quality Requirements
- **Completeness**: All required fields must be present
- **Clarity**: Each task must have specific, measurable acceptance criteria
- **Correctness**: No circular dependencies in task ordering
- **Pragmatism**: Solution must be minimal and focused on the issue
### Context Preservation
You will receive multiple issues sequentially. Do NOT close after each issue. Instead:
- Process each issue independently
- Maintain awareness of the workflow context
- Use consistent naming conventions across solutions
- Reference previous patterns if applicable
### Error Handling
If you cannot complete planning for an issue:
1. Clearly state what went wrong
2. Provide the reason (missing context, unclear requirements, etc.)
3. Return status: "failed"
4. Continue waiting for the next issue
## Communication Protocol
After processing each issue, you will:
1. Return the response JSON
2. Wait for the next `send_input` with a new issue
3. Continue this cycle until instructed to close
**IMPORTANT**: Do NOT attempt to close yourself. The orchestrator will close you when all planning is complete.
## Key Principles
- **Focus on analysis and design** - leave implementation to the Execution Agent
- **Be thorough** - explore code and understand patterns before proposing solutions
- **Be pragmatic** - solutions should be achievable within 1-2 hours
- **Follow schema** - every solution JSON must validate against the solution schema
- **Maintain context** - remember project context across multiple issues
All content below is now consolidated into the new unified prompt file.

View File

@@ -1,47 +1,67 @@
# Planning Agent Prompt
# Planning Agent - Unified Prompt
Prompt template for the planning agent.
You are the **Planning Agent** for the Codex issue planning and execution workflow.
## MANDATORY FIRST STEPS (Agent Execute)
## Role Definition
1. **Read role definition**: ~/.codex/agents/issue-plan-agent.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json
4. Read schema: ~/.claude/workflows/cli-templates/schemas/solution-schema.json
Your responsibility is analyzing issues and creating detailed, executable solution plans. You will:
1. **Receive issues** one at a time via `send_input` messages from the main orchestrator
2. **Analyze each issue** by exploring the codebase, understanding requirements, and identifying the solution approach
3. **Design a comprehensive solution** with task breakdown, acceptance criteria, and implementation steps
4. **Return a structured solution JSON** that the Execution Agent will implement
5. **Maintain context** across multiple issues without closing
---
## Goal
## Mandatory Initialization Steps
Plan solution for issue "{ISSUE_ID}: {ISSUE_TITLE}"
### First Run Only (Read These Files)
## Scope
1. **Read role definition**: `~/.codex/agents/issue-plan-agent.md` (MUST read first)
2. **Read project tech stack**: `.workflow/project-tech.json`
3. **Read project guidelines**: `.workflow/project-guidelines.json`
4. **Read solution schema**: `~/.claude/workflows/cli-templates/schemas/solution-schema.json`
- **CAN DO**:
- Explore codebase
- Analyze issue and design solutions
- Create executable task breakdown
- Define acceptance criteria
---
- **CANNOT DO**:
- Execute solutions
- Modify production code
- Make commits
## How to Operate
- **Directory**: {PROJECT_ROOT}
### Input Format
## Task Description
{ISSUE_DESCRIPTION}
## Deliverables
### Primary Output: Solution JSON
You will receive `send_input` messages with this structure:
```json
{
"id": "SOL-{ISSUE_ID}-1",
"issue_id": "{ISSUE_ID}",
"type": "plan_issue",
"issue_id": "ISS-001",
"issue_title": "Add user authentication",
"issue_description": "Implement JWT-based authentication for API endpoints",
"project_root": "/path/to/project"
}
```
### Your Workflow for Each Issue
1. **Analyze the issue**:
- Understand the problem and requirements
- Explore relevant code files
- Identify integration points
- Check for existing patterns
2. **Design the solution**:
- Break down into concrete tasks (2-7 tasks)
- Define file modifications needed
- Create implementation steps
- Define test commands and acceptance criteria
- Identify task dependencies
3. **Generate solution JSON** following this format:
```json
{
"id": "SOL-ISS-001-1",
"issue_id": "ISS-001",
"description": "Brief description of solution",
"tasks": [
{
@@ -50,15 +70,15 @@ Plan solution for issue "{ISSUE_ID}: {ISSUE_TITLE}"
"action": "Create|Modify|Fix|Refactor",
"scope": "file path or directory",
"description": "What to do",
"modification_points": [...],
"implementation": ["Step 1", "Step 2"],
"modification_points": ["Point 1", "Point 2"],
"implementation": ["Step 1", "Step 2", "Step 3"],
"test": {
"commands": ["npm test -- file.test.ts"],
"unit": ["Requirement 1"]
"unit": ["Requirement 1", "Requirement 2"]
},
"acceptance": {
"criteria": ["Criterion 1: Must pass"],
"verification": ["Run tests"]
"criteria": ["Criterion 1: Must pass", "Criterion 2: Must satisfy"],
"verification": ["Run tests", "Manual verification"]
},
"depends_on": [],
"estimated_minutes": 30,
@@ -66,9 +86,9 @@ Plan solution for issue "{ISSUE_ID}: {ISSUE_TITLE}"
}
],
"exploration_context": {
"relevant_files": ["path/to/file.ts"],
"patterns": "Follow existing pattern",
"integration_points": "Used by service X"
"relevant_files": ["path/to/file.ts", "path/to/another.ts"],
"patterns": "Follow existing pattern X",
"integration_points": "Used by service X and Y"
},
"analysis": {
"risk": "low|medium|high",
@@ -80,43 +100,125 @@ Plan solution for issue "{ISSUE_ID}: {ISSUE_TITLE}"
}
```
### Validation
### Validation Rules
Ensure:
- [ ] All required fields present
- [ ] No circular dependencies in task.depends_on
- [ ] Each task has quantified acceptance.criteria
- [ ] Solution follows solution-schema.json exactly
- [ ] Score reflects quality (0.8+ for approval)
- All required fields present in solution JSON
- No circular dependencies in `task.depends_on`
- Each task has **quantified** acceptance criteria (not vague)
- Solution follows `solution-schema.json` exactly
- Score reflects quality (0.8+ for approval)
- Total estimated time ≤ 2 hours
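The rules above can be sketched as a quick pre-check (a rough illustration only, not the real `solution-schema.json` validation):

```javascript
// Rough pre-validation of a solution object: required fields present
// and total estimated time within the 2-hour budget.
function validateSolution(solution) {
  const errors = [];
  for (const field of ["id", "issue_id", "tasks"]) {
    if (!(field in solution)) errors.push(`missing field: ${field}`);
  }
  const total = (solution.tasks || []).reduce(
    (sum, t) => sum + (t.estimated_minutes || 0), 0);
  if (total > 120) errors.push(`estimated time ${total}min exceeds 2h budget`);
  return { valid: errors.length === 0, errors, total_estimated_minutes: total };
}
```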
### Return JSON
### Return Format
After processing each issue, return this JSON:
```json
{
"status": "completed|failed",
"solution_id": "SOL-{ISSUE_ID}-1",
"solution_id": "SOL-ISS-001-1",
"task_count": 3,
"score": 0.95,
"validation": {
"schema_valid": true,
"criteria_quantified": true,
"no_circular_deps": true
"no_circular_deps": true,
"total_estimated_minutes": 90
},
"errors": []
}
```
---
## Quality Standards
- **Completeness**: All required fields, no missing sections
- **Clarity**: Acceptance criteria must be specific and measurable
- **Correctness**: No circular dependencies, valid schema
- **Pragmatism**: Solution is minimal and focused
### Completeness
- All required fields must be present
- No missing sections
- Each task must have all sub-fields
### Clarity
- Each task must have specific, measurable acceptance criteria
- Task descriptions must be clear enough for implementation
- Implementation steps must be actionable
### Correctness
- No circular dependencies in task ordering
- Task dependencies form a valid DAG (Directed Acyclic Graph)
- File paths are correct and relative to project root
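A minimal sketch of the cycle check for `task.depends_on` (illustrative only; the authoritative validation lives in the solution-schema tooling):

```javascript
// Detect cycles in task dependencies via depth-first search.
// state: undefined = unvisited, 1 = in progress, 2 = done.
function hasCycle(tasks) {
  const deps = new Map(tasks.map((t) => [t.id, t.depends_on || []]));
  const state = new Map();
  const visit = (id) => {
    if (state.get(id) === 2) return false;
    if (state.get(id) === 1) return true; // back edge => cycle
    state.set(id, 1);
    for (const d of deps.get(id) || []) if (visit(d)) return true;
    state.set(id, 2);
    return false;
  };
  return tasks.some((t) => visit(t.id));
}
```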
### Pragmatism
- Solution is minimal and focused on the issue
- Tasks are achievable within 1-2 hours total
- Leverages existing patterns and libraries
---
## Context Preservation
You will receive multiple issues sequentially. **Do NOT close after each issue.** Instead:
- Process each issue independently
- Maintain awareness of the workflow context across issues
- Use consistent naming conventions (SOL-ISSxxx-1 format)
- Reference previous patterns if applicable to new issues
- Keep track of explored code patterns for consistency
---
## Error Handling
If you cannot complete planning for an issue:
1. **Clearly state what went wrong** - be specific about the issue
2. **Provide the reason** - missing context, unclear requirements, insufficient project info, etc.
3. **Return status: "failed"** - mark the response as failed
4. **Continue waiting** - the orchestrator will send the next issue
5. **Suggest remediation** - if possible, suggest what information is needed
Example error response:
```json
{
"status": "failed",
"solution_id": null,
"error_message": "Cannot plan solution - issue description lacks technical detail. Recommend: clarify whether to use JWT or OAuth, specify API endpoints, define user roles.",
"suggested_clarification": "..."
}
```
---
## Communication Protocol
After processing each issue:
1. Return the response JSON (success or failure)
2. Wait for the next `send_input` with a new issue
3. Continue this cycle until orchestrator closes you
**IMPORTANT**: Do NOT attempt to close yourself. The orchestrator will close you when all planning is complete.
---
## Key Principles
- **Focus on analysis and design** - leave implementation to the Execution Agent
- **Be thorough** - explore code and understand patterns before proposing solutions
- **Be pragmatic** - solutions should be achievable within 1-2 hours
- **Follow schema** - every solution JSON must validate against the solution schema
- **Maintain context** - remember project context across multiple issues
- **Quantify everything** - acceptance criteria must be measurable, not vague
- **No circular logic** - task dependencies must form a valid DAG
---
## Success Criteria
✓ Solution JSON is valid and follows schema
✓ All tasks have acceptance.criteria
✓ No circular dependencies detected
✓ Score >= 0.8
✓ Estimated total time <= 2 hours
✓ Solution JSON is valid and follows schema exactly
✓ All tasks have quantified acceptance.criteria
✓ No circular dependencies detected
✓ Score >= 0.8
✓ Estimated total time <= 2 hours
✓ Each task is independently verifiable through test.commands

View File

@@ -0,0 +1,468 @@
# Agent Roles Definition
Agent role definitions and responsibility scopes.
---
## Role Assignment
### Planning Agent (Issue-Plan-Agent)
**Responsibility**: Analyze issues and generate executable solutions
**Role file**: `~/.codex/agents/issue-plan-agent.md`
**Prompt**: `prompts/planning-agent.md`
#### Capabilities
**Allowed**:
- Read code, documentation, and configuration
- Explore project structure and dependencies
- Analyze problems and design solutions
- Break tasks down into executable steps
- Define acceptance criteria
**Forbidden**:
- Modify code
- Execute code
- Push to remote
- Delete files or branches
#### Input Format
```json
{
"type": "plan_issue",
"issue_id": "ISS-001",
"title": "Fix authentication timeout",
"description": "User sessions timeout too quickly",
"project_context": {
"tech_stack": "Node.js + Express + JWT",
"guidelines": "Follow existing patterns",
"relevant_files": ["src/auth.ts", "src/middleware/auth.ts"]
}
}
```
#### Output Format
```json
{
"status": "completed|failed",
"solution_id": "SOL-ISS-001-1",
"tasks": [
{
"id": "T1",
"title": "Update JWT configuration",
"action": "Modify",
"scope": "src/config/auth.ts",
"description": "Increase token expiration time",
"modification_points": ["TOKEN_EXPIRY constant"],
"implementation": ["Step 1", "Step 2"],
"test": {
"commands": ["npm test -- auth.test.ts"],
"unit": ["Token expiry should be 24 hours"]
},
"acceptance": {
"criteria": ["Token valid for 24 hours", "Test suite passes"],
"verification": ["Run tests"]
},
"depends_on": [],
"estimated_minutes": 20,
"priority": 1
}
],
"exploration_context": {
"relevant_files": ["src/auth.ts", "src/middleware/auth.ts"],
"patterns": "Follow existing JWT configuration pattern",
"integration_points": "Used by authentication middleware"
},
"analysis": {
"risk": "low|medium|high",
"impact": "low|medium|high",
"complexity": "low|medium|high"
},
"score": 0.95,
"validation": {
"schema_valid": true,
"criteria_quantified": true,
"no_circular_deps": true
}
}
```
---
### Execution Agent (Issue-Execute-Agent)
**Responsibility**: Execute the planned solution and implement all tasks
**Role file**: `~/.codex/agents/issue-execute-agent.md`
**Prompt**: `prompts/execution-agent.md`
#### Capabilities
**Allowed**:
- Read code and configuration
- Modify code
- Run tests
- Commit code
- Verify acceptance criteria
- Create snapshots for recovery
**Forbidden**:
- Push to remote branches
- Create PRs (unless explicitly authorized)
- Delete branches
- Force-overwrite the main branch
#### Input Format
```json
{
"type": "execute_solution",
"issue_id": "ISS-001",
"solution_id": "SOL-ISS-001-1",
"solution": {
"id": "SOL-ISS-001-1",
"tasks": [ /* task objects from planning */ ],
"exploration_context": {
"relevant_files": ["src/auth.ts"],
"patterns": "Follow existing pattern",
"integration_points": "Used by auth middleware"
}
},
"project_root": "/path/to/project"
}
```
#### Output Format
```json
{
"status": "completed|failed",
"execution_result_id": "EXR-ISS-001-1",
"issue_id": "ISS-001",
"solution_id": "SOL-ISS-001-1",
"executed_tasks": [
{
"task_id": "T1",
"title": "Update JWT configuration",
"status": "completed",
"files_modified": ["src/config/auth.ts"],
"commits": [
{
"hash": "abc123def456",
"message": "[T1] Update JWT token expiration to 24 hours"
}
],
"test_results": {
"passed": 8,
"failed": 0,
"command": "npm test -- auth.test.ts",
"output": "All tests passed"
},
"acceptance_met": true,
"execution_time_minutes": 15,
"errors": []
}
],
"overall_stats": {
"total_tasks": 1,
"completed": 1,
"failed": 0,
"total_files_modified": 1,
"total_commits": 1,
"total_time_minutes": 15
},
"final_commit": {
"hash": "xyz789abc",
"message": "Resolve ISS-001: Fix authentication timeout"
},
"verification": {
"all_tests_passed": true,
"all_acceptance_met": true,
"no_regressions": true
}
}
```
---
## Dual-Agent Strategy
### Why a Dual-Agent Model
1. **Separation of concerns** - planning and execution each focus on a single job
2. **Parallel optimization** - execution remains serial, but planning can be optimized independently
3. **Context minimization** - pass only the solution ID to avoid context bloat
4. **Error isolation** - a planning failure does not affect execution, and vice versa
5. **Maintainability** - each agent has a single responsibility
### Workflow
```
┌────────────────────────────────────┐
│ Planning Agent │
│ • Analyze issue │
│ • Explore codebase │
│ • Design solution │
│ • Generate tasks │
│ • Validate schema │
│ → Output: SOL-ISS-001-1 JSON │
└────────────┬─────────────────────┘
┌──────────────┐
│ Save to │
│ planning- │
│ results.json │
│ + Bind │
└──────┬───────┘
┌────────────────────────────────────┐
│ Execution Agent │
│ • Load SOL-ISS-001-1 │
│ • Implement T1, T2, T3... │
│ • Run tests per task │
│ • Commit changes │
│ • Verify acceptance │
│ → Output: EXR-ISS-001-1 JSON │
└────────────┬─────────────────────┘
┌──────────────┐
│ Save to │
│ execution- │
│ results.json │
└──────────────┘
```
---
## Context Minimization
### Information Passing Principles
**Goal**: Minimize context to reduce token waste
#### Planning Phase - Pass
- Issue ID and title
- Issue description
- Project tech stack (`project-tech.json`)
- Project guidelines (`project-guidelines.json`)
- Solution schema reference
#### Planning Phase - Do Not Pass
- A full codebase snapshot
- Contents of all relevant files (the agent explores on its own)
- Historical execution results
- Information about other issues
#### Execution Phase - Pass
- Solution ID (the full solution JSON)
- Execution parameters (worktree path, etc.)
- Project tech stack
- Project guidelines
#### Execution Phase - Do Not Pass
- The full planning-phase context
- Information about other solutions
- The original issue description (already included in the solution JSON)
### Context Loading Strategy
```javascript
// The Planning Agent loads its own context
const issueDetails = Read(issueStore + issue_id);
const techStack = Read('.workflow/project-tech.json');
const guidelines = Read('.workflow/project-guidelines.json');
const schema = Read('~/.claude/workflows/cli-templates/schemas/solution-schema.json');
// The Execution Agent loads its own context
const solution = planningResults.find(r => r.solution_id === solutionId);
const techStack = Read('.workflow/project-tech.json');
const guidelines = Read('.workflow/project-guidelines.json');
```
**Advantages**:
- Less duplicated passing
- Both agents read the same source file versions
- Agents can refresh their own context
- Easy to update project guidelines or the tech stack
---
## Error Handling and Retry
### Planning Errors
| Error | Cause | Retry Strategy | Recovery |
|------|------|--------|------|
| Subagent timeout | Complex analysis or slow system | Increase timeout, retry once | Return to user, mark as failed |
| Invalid solution | Output violates schema | Validate schema, return error | Return to user for correction |
| Dependency cycle | Invalid DAG | Detect cycle, return error | User fixes manually |
| Permission error | Cannot read files | Check paths and permissions | Return the specific error |
| Format error | Invalid JSON | Validate format, return error | User fixes the format |
### Execution Errors
| Error | Cause | Retry Strategy | Recovery |
|------|------|--------|------|
| Task failure | Implementation problem | Inspect error, do not retry | Record error, mark as failed |
| Test failure | Tests do not pass | Do not commit, mark as failed | Return test output |
| Commit failure | Conflict or permission issue | Create a snapshot for recovery | Let the user decide |
| Subagent timeout | Task too complex | Increase timeout | Record timeout, mark as failed |
| File conflict | Concurrent modification | Create a snapshot | Let the user merge |
---
## Interaction Guide
### Questions for the Planning Agent
```
"What problem does this issue describe?"
→ Returns: problem analysis + root cause
"Which files need to change to fix it?"
→ Returns: file list + modification points
"How do we verify the solution works?"
→ Returns: acceptance criteria + verification steps
"How long will it take?"
→ Returns: per-task estimates + total
"What are the risks?"
→ Returns: risk analysis + impact assessment
```
### Questions for the Execution Agent
```
"What are the implementation steps for this task?"
→ Returns: step-by-step guide + code examples
"Did all tests pass?"
→ Returns: test results + failure reasons (if any)
"Are all acceptance criteria met?"
→ Returns: verification results + unmet items (if any)
"Which files were modified?"
→ Returns: file list + change summary
"Are there any regressions?"
→ Returns: regression test results
```
---
## Role File Locations
```
~/.codex/agents/
├── issue-plan-agent.md # Planning role definition
├── issue-execute-agent.md # Execution role definition
└── ...
.codex/skills/codex-issue-plan-execute/
├── prompts/
│ ├── planning-agent.md # Planning prompt
│ └── execution-agent.md # Execution prompt
└── specs/
├── agent-roles.md # This file
└── ...
```
### If Role Files Are Missing
The orchestrator falls back to:
- `universal-executor` as the fallback planning role
- `code-developer` as the fallback execution role
---
## Best Practices
### Designing Prompts for the Planning Agent
✓ Extract key information from the issue description
✓ Explore related code and similar implementations
✓ Analyze the root cause and solution direction
✓ Design a minimal solution
✓ Break work into 2-7 executable tasks
✓ Define clear acceptance criteria for each task
✓ Verify task dependencies contain no cycles
✓ Estimate total time ≤ 2 hours
### Designing Prompts for the Execution Agent
✓ Load the solution and all task definitions
✓ Execute tasks in dependency order
✓ For each task: implement → test → verify
✓ Ensure all acceptance criteria pass
✓ Run the full test suite
✓ Check code quality and style consistency
✓ Write descriptive commit messages
✓ Generate a complete execution result JSON
---
## Communication Protocol
### Planning Agent Lifecycle
```
1. Initialize (once)
- Read system prompt
- Read role definition
- Load project context
2. Process issues (loop)
- Receive issue via send_input
- Analyze issue
- Design solution
- Return solution JSON
- Wait for next issue
3. Shutdown
- Orchestrator closes when done
```
### Execution Agent Lifecycle
```
1. Initialize (once)
- Read system prompt
- Read role definition
- Load project context
2. Process solutions (loop)
- Receive solution via send_input
- Implement all tasks
- Run tests
- Return execution result
- Wait for next solution
3. Shutdown
- Orchestrator closes when done
```
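Both lifecycles follow the same receive → process → return loop, which can be sketched as below (all names here are hypothetical illustrations, not a real orchestrator API):

```javascript
// Simulate the agent lifecycle: process each incoming message until
// the orchestrator sends a close signal, returning one result per solution.
function runAgentLoop(inbox, handler) {
  const results = [];
  for (const message of inbox) {         // each send_input from the orchestrator
    if (message.type === "close") break; // the orchestrator decides when to stop
    results.push(handler(message));      // process and return the result JSON
  }
  return results;
}

const demoInbox = [
  { type: "execute_solution", solution_id: "SOL-ISS-001-1" },
  { type: "close" },
];
const results = runAgentLoop(demoInbox, (m) => ({ status: "completed", solution_id: m.solution_id }));
```

The key property mirrored here is that the agent never exits on its own; it only stops when the orchestrator's close signal arrives.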
---
## Version History
| Version | Date | Changes |
|---------|------|---------|
| 2.0 | 2025-01-29 | Consolidated from subagent-roles.md, updated format |
| 1.0 | 2024-12-29 | Initial agent roles definition |
---
**Document Version**: 2.0
**Last Updated**: 2025-01-29
**Maintained By**: Codex Issue Plan-Execute Team

View File

@@ -1,268 +1,32 @@
# Subagent Roles Definition
⚠️ **DEPRECATED** - This file is deprecated as of v2.0 (2025-01-29)
Role definitions and responsibility scopes for subagents.
**Use instead**: [`agent-roles.md`](agent-roles.md)
## Role Assignment
This file has been superseded by a consolidated `agent-roles.md` that improves organization and eliminates duplication.
### Planning Agent (Issue-Plan-Agent)
**Responsibility**: Analyze issues and generate executable solutions
**Role file**: `~/.codex/agents/issue-plan-agent.md`
#### Capabilities
- **Allowed**:
  - Read code, documentation, and configuration
  - Explore project structure and dependencies
  - Analyze problems and design solutions
  - Break tasks down into executable steps
  - Define acceptance criteria
- **Forbidden**:
  - Modify code
  - Execute code
  - Push to remote
#### Input
```json
{
"issue_id": "ISS-001",
"title": "Fix authentication timeout",
"description": "User sessions timeout too quickly",
"project_context": {
"tech_stack": "Node.js + Express + JWT",
"guidelines": "..."
}
}
```
#### Output
```json
{
"solution_id": "SOL-ISS-001-1",
"tasks": [
{
"id": "T1",
"title": "Update JWT configuration",
"...": "..."
}
],
"acceptance": {
"criteria": [...],
"verification": [...]
},
"score": 0.95
}
```
### Execution Agent (Issue-Execute-Agent)
**Responsibility**: Execute the planned solution and implement all tasks
**Role file**: `~/.codex/agents/issue-execute-agent.md`
#### Capabilities
- **Allowed**:
  - Read code and configuration
  - Modify code
  - Run tests
  - Commit code
  - Verify acceptance criteria
- **Forbidden**:
  - Push to remote branches
  - Create PRs (unless explicitly authorized)
  - Delete branches or files
#### Input
```json
{
"solution_id": "SOL-ISS-001-1",
"issue_id": "ISS-001",
"solution": {
"tasks": [...],
"exploration_context": {...}
}
}
```
#### Output
```json
{
"status": "completed|failed",
"files_modified": ["src/auth.ts", "src/config.ts"],
"commit_hash": "abc123def456",
"tests_passed": true,
"acceptance_verified": true,
"errors": []
}
```
## Dual-Agent Strategy
### Why a Dual-Agent Model
1. **Separation of concerns**: planning and execution each focus on a single job
2. **Parallel optimization**: execution remains serial, but planning can be optimized independently
3. **Context minimization**: pass only the solution ID to avoid context bloat
4. **Error isolation**: a planning failure does not affect execution, and vice versa
### Workflow
```
┌────────────────────────┐
│ Planning Agent │
│ • Analyze issue │
│ • Design solution │
│ • Generate tasks │
│ → Output: SOL-xxx-1 │
└────────┬───────────────┘
┌──────────────┐
│ Bind solution│
│ Update state │
└──────┬───────┘
┌─────────────────────────┐
│ Execution Agent │
│ • Load SOL-xxx-1 │
│ • Execute all tasks │
│ • Run tests │
│ • Commit changes │
│ → Output: commit hash │
└─────────────────────────┘
┌──────────────┐
│ Save results │
│ Update state │
└──────────────┘
```
## Context Minimization
### Information Passing Principles
**Goal**: Minimize context to reduce token waste
#### Planning Phase
**Pass**:
- Issue ID and title
- Issue description
- Project tech stack
- Project guidelines
**Do not pass**:
- A full codebase snapshot
- Contents of all relevant files
- Historical execution results
#### Execution Phase
**Pass**:
- Solution ID (ID only; the full solution is not passed)
- Execution parameters (worktree path, etc.)
**Do not pass**:
- The full planning-phase context
- Information about other issues
### Context Loading Strategy
**Why the change?**
- Consolidates all agent role definitions in one place
- Eliminates duplicated role descriptions
- Single source of truth for agent capabilities
- Better organization with unified reference format
**Migration**:
```javascript
// The Planning Agent loads its own context
const issueDetails = ReadFromIssueStore(issueId);
const techStack = Read('.workflow/project-tech.json');
const guidelines = Read('.workflow/project-guidelines.json');
// OLD (v1.0)
// Reference: specs/subagent-roles.md
// The Execution Agent loads its own context
const solution = ReadFromSolutionStore(solutionId);
const project = Read('.workflow/project-guidelines.json');
// NEW (v2.0)
// Reference: specs/agent-roles.md
```
## 错误处理与重试
**Timeline**:
- v2.0 (2025-01-29): Old file kept for backward compatibility
- v2.1 (2025-03-31): Old file will be removed
### Planning 错误
---
| 错误 | 原因 | 重试策略 |
|------|------|----------|
| Subagent 超时 | 分析复杂 | 增加 timeout重试 1 次 |
| 无效 solution | 生成不符合 schema | 返回用户,标记失败 |
| 依赖循环 | DAG 错误 | 返回用户进行修正 |
# Subagent Roles Definition (Legacy - See agent-roles.md instead)
### Execution 错误
See [`agent-roles.md`](agent-roles.md) for the current consolidated agent roles specification.
| 错误 | 原因 | 重试策略 |
|------|------|----------|
| Task 失败 | 代码有问题 | 记录错误,标记 solution 失败 |
| 测试失败 | 测试用例不符 | 不提交,标记失败 |
| 提交失败 | 冲突或权限 | 创建快照,让用户决定 |
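The "increase the timeout, retry once" strategy for planning errors can be sketched as a small wrapper. This is a hypothetical helper, not the orchestrator's actual retry code:

```javascript
// Hypothetical retry helper: one retry with a larger time budget.
// Non-timeout errors (e.g. an invalid solution) are not retried.
async function withRetry(run, { timeoutMs, retryMultiplier = 2 }) {
  const attempt = (ms) =>
    Promise.race([
      run(ms),
      new Promise((_, reject) => setTimeout(() => reject(new Error('timeout')), ms)),
    ]);
  try {
    return await attempt(timeoutMs);
  } catch (err) {
    if (err.message !== 'timeout') throw err;     // schema/DAG errors go back to the user
    return attempt(timeoutMs * retryMultiplier);  // retry once with a larger timeout
  }
}
```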
## Interaction Guide
### Questions for the Planning Agent
```
What problem does this issue describe?
→ Returns: problem analysis + root cause
Which files need to change to solve this problem?
→ Returns: file list + modification points
How can the solution be verified?
→ Returns: acceptance criteria + verification steps
```
### Questions for the Execution Agent
```
What are the implementation steps for this task?
→ Returns: step-by-step guide + code examples
Did all tests pass?
→ Returns: test results + failure reasons (if any)
Are all acceptance criteria satisfied?
→ Returns: verification results + unmet items (if any)
```
## Role File Locations
```
~/.codex/agents/
├── issue-plan-agent.md # planning role
├── issue-execute-agent.md # execution role
└── ...
```
### If Role Files Are Missing
The orchestrator falls back to `universal-executor` or `code-developer` as substitute roles.
## Best Practices
### Designing Prompts for the Planning Agent
```markdown
1. Extract key information from the issue description
2. Explore related code and similar implementations
3. Design a minimal solution
4. Break it down into 2-7 executable tasks
5. Define clear acceptance criteria for each task
```
### Designing Prompts for the Execution Agent
```markdown
1. Load the solution and all task definitions
2. Execute tasks in dependency order
3. For each task: implement → test → verify
4. Ensure all acceptance criteria pass
5. Make a single commit containing all changes
```
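The per-task loop above can be sketched as follows. The `actions` object and the naive ordering are hypothetical stand-ins for the agent's real implement/test/verify behavior:

```javascript
// Hypothetical per-task loop: implement → test → verify, in dependency order.
async function executeSolution(tasks, actions) {
  // Naive ordering by dependency count; a real agent would topologically sort
  const ordered = [...tasks].sort((a, b) => (a.deps?.length ?? 0) - (b.deps?.length ?? 0));
  const results = [];
  for (const task of ordered) {
    await actions.implement(task);
    const testsPassed = await actions.test(task);
    const accepted = testsPassed && (await actions.verify(task));
    results.push({ id: task.id, testsPassed, accepted });
    if (!accepted) return { status: 'failed', results }; // never commit a failing solution
  }
  return { status: 'completed', results }; // a single commit follows
}
```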

META_SKILL_SUMMARY.md Normal file
View File

@@ -0,0 +1,188 @@
# Meta-Skill Template Update Summary
## Update Overview
Successfully updated all 17 meta-skill workflow templates with proper execution configuration and context passing.
## Critical Correction: CLI Tools vs Slash Commands
**IMPORTANT**: ALL workflow commands (`/workflow:*`) must use `slash-command` type.
### Slash Command (workflow commands)
- **Type**: `slash-command`
- **Commands**: ALL `/workflow:*` commands
- Planning: plan, lite-plan, multi-cli-plan, tdd-plan
- Execution: execute, lite-execute
- Testing: test-fix-gen, test-cycle-execute, tdd-verify
- Review: review-session-cycle, review-cycle-fix, review-module-cycle
- Bug fixes: lite-fix, debug-with-file
- Exploration: brainstorm-with-file, brainstorm:auto-parallel, analyze-with-file
- **Modes**:
- `mainprocess`: Blocking, main process execution
- `async`: Background execution via `ccw cli --tool claude --mode write`
### CLI Tools (for pure analysis/generation)
- **Type**: `cli-tools`
- **When to use**: Only when there's NO specific workflow command
- **Purpose**: Dynamic prompt generation based on task content
- **Tools**: gemini (analysis), qwen (code generation), codex (review)
- **Examples**:
- "Analyze this architecture design and suggest improvements"
- "Generate unit tests for module X with 90% coverage"
- "Review code for security vulnerabilities"
## Key Changes
### 1. Schema Enhancement
All templates now include:
- **`execution`** configuration:
- Type: Always `slash-command` for workflow commands
- Mode: `mainprocess` (blocking) or `async` (background)
- **`contextHint`** field: Natural language instructions for context passing
- **`unit`** field: Groups commands into minimum execution units
- **`args`** field: Command arguments with `{{goal}}` and `{{prev}}` placeholders
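A single template step combining these fields might look like the sketch below. The step values and the tiny interpolation helper are hypothetical illustrations, not the coordinator's actual implementation:

```javascript
// Hypothetical template step using the fields above, plus a minimal
// {{goal}}/{{prev}} placeholder interpolation.
const step = {
  cmd: '/workflow:lite-plan',
  unit: 'quick-implementation',
  execution: { type: 'slash-command', mode: 'mainprocess' },
  contextHint: 'Pass the user goal; there is no previous step output yet',
  args: '{{goal}}',
};

function interpolate(template, vars) {
  // Replace each {{name}} placeholder with its value (empty string if absent)
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) => vars[key] ?? '');
}
```

For example, `interpolate(step.args, { goal: 'add login form' })` produces the concrete argument string handed to the slash command.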
### 2. Execution Patterns
**Planning (mainprocess)**:
- Interactive planning needs main process
- Examples: plan, lite-plan, tdd-plan, multi-cli-plan
**Execution (async)**:
- Long-running tasks need background
- Examples: execute, lite-execute, test-cycle-execute
**Review/Verify (mainprocess)**:
- Needs immediate feedback
- Examples: plan-verify, review-session-cycle, tdd-verify
**Fix (mainprocess/async)**:
- Simple fixes: mainprocess
- Complex fixes: async
- Examples: lite-fix (mainprocess), review-cycle-fix (mainprocess)
### 3. Minimum Execution Units
Preserved atomic command groups from ccw-coordinator.md:
| Unit | Commands | Purpose |
|------|----------|---------|
| `quick-implementation` | lite-plan → lite-execute | Lightweight implementation |
| `verified-planning-execution` | plan → plan-verify → execute | Full planning with verification |
| `bug-fix` | lite-fix → lite-execute | Bug diagnosis and fix |
| `test-validation` | test-fix-gen → test-cycle-execute | Test generation and validation |
| `code-review` | review-session-cycle → review-cycle-fix | Code review and fixes |
| `tdd-planning-execution` | tdd-plan → execute | Test-driven development |
| `multi-cli-planning` | multi-cli-plan → lite-execute | Multi-perspective planning |
| `issue-workflow` | issue:plan → issue:queue → issue:execute | Issue lifecycle |
| `rapid-to-issue` | lite-plan → convert-to-plan → queue → execute | Bridge lite to issue |
| `brainstorm-to-issue` | from-brainstorm → queue → execute | Bridge brainstorm to issue |
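A coordinator can expand a unit name into its ordered command list before execution. A minimal sketch — the map mirrors a subset of the table above, and the expansion helper is hypothetical, not the coordinator's real API:

```javascript
// Subset of the minimum execution units from the table above.
const UNITS = {
  'quick-implementation': ['/workflow:lite-plan', '/workflow:lite-execute'],
  'verified-planning-execution': ['/workflow:plan', '/workflow:plan-verify', '/workflow:execute'],
  'bug-fix': ['/workflow:lite-fix', '/workflow:lite-execute'],
};

// A unit is atomic: its commands always run together, in order.
function expandUnit(name) {
  const commands = UNITS[name];
  if (!commands) throw new Error(`Unknown execution unit: ${name}`);
  return commands.map((cmd, order) => ({ cmd, order }));
}
```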
## Updated Templates
### Simple Workflows (Level 1-2)
1. **lite-lite-lite.json** - Ultra-lightweight direct execution
2. **bugfix-hotfix.json** - Urgent production fix (single async step)
3. **rapid.json** - Quick implementation with optional testing
4. **bugfix.json** - Bug fix with diagnosis and testing
5. **test-fix.json** - Fix failing tests workflow
6. **docs.json** - Documentation generation
### Complex Workflows (Level 3-4)
7. **tdd.json** - Test-driven development with verification
8. **coupled.json** - Full workflow with review and testing
9. **review.json** - Standalone code review workflow
10. **multi-cli-plan.json** - Multi-perspective planning
11. **full.json** - Comprehensive workflow with brainstorm
### Exploration Workflows
12. **brainstorm.json** - Multi-perspective ideation
13. **debug.json** - Hypothesis-driven debugging
14. **analyze.json** - Collaborative analysis
### Issue Workflows
15. **issue.json** - Full issue lifecycle
16. **rapid-to-issue.json** - Bridge lite plan to issue
17. **brainstorm-to-issue.json** - Bridge brainstorm to issue
## Design Principles Applied
1. **Slash Commands Only**: All workflow commands use `slash-command` type
2. **Minimum Execution Units**: Preserved atomic command groups
3. **Context Flow**: `contextHint` provides natural language guidance
4. **Execution Modes**:
- `mainprocess`: Interactive, needs user feedback
- `async`: Long-running, background execution
## CLI Tools Usage (Future Extension)
The `cli-tools` type is reserved for pure analysis/generation tasks WITHOUT specific workflow commands:
```json
{
"name": "custom-analysis",
"steps": [
{
"execution": {
"type": "cli-tools",
"mode": "mainprocess",
"tool": "gemini",
"cliMode": "analysis",
"rule": "analysis-analyze-technical-document"
},
"contextHint": "Analyze architecture design and provide recommendations"
}
]
}
```
**Note**: This is for future extension only. Current templates use slash commands exclusively.
## Files Modified
```
.claude/skills/meta-skill/templates/
├── rapid.json ✓ Updated (slash-command only)
├── coupled.json ✓ Updated (slash-command only)
├── bugfix.json ✓ Fixed (removed cli-tools)
├── bugfix-hotfix.json ✓ Updated (slash-command only)
├── tdd.json ✓ Fixed (removed cli-tools)
├── test-fix.json ✓ Updated (slash-command only)
├── review.json ✓ Fixed (removed cli-tools)
├── brainstorm.json ✓ Fixed (removed cli-tools)
├── debug.json ✓ Fixed (removed cli-tools)
├── analyze.json ✓ Fixed (removed cli-tools)
├── issue.json ✓ Updated (slash-command only)
├── multi-cli-plan.json ✓ Fixed (removed cli-tools)
├── docs.json ✓ Updated (slash-command only)
├── full.json ✓ Fixed (removed cli-tools)
├── rapid-to-issue.json ✓ Updated (slash-command only)
├── brainstorm-to-issue.json ✓ Updated (slash-command only)
├── lite-lite-lite.json ✓ Updated (slash-command only)
├── coupled-enhanced.json ✗ Removed (experimental)
└── rapid-cli.json ✗ Removed (experimental)
```
## Result
All 17 templates now correctly use:
- ✅ `slash-command` type exclusively
- ✅ Flexible `mainprocess`/`async` modes
- ✅ Context passing via `contextHint`
- ✅ Minimum execution unit preservation
- ✅ Consistent execution patterns
## Next Steps
The meta-skill workflow coordinator can now:
1. Discover templates dynamically via Glob
2. Parse execution configuration from each step
3. Execute slash commands with mainprocess/async modes
4. Pass context between steps using contextHint
5. Maintain minimum execution unit integrity
6. (Future) Support cli-tools for custom analysis tasks

View File

@@ -525,6 +525,33 @@ function renderCliStatus() {
const enabledCliSettings = isClaude ? cliSettingsEndpoints.filter(ep => ep.enabled) : [];
const hasCliSettings = enabledCliSettings.length > 0;
// Build Settings File info for builtin Claude
let settingsFileInfo = '';
if (isClaude && config.type === 'builtin' && config.settingsFile) {
const settingsFile = config.settingsFile;
// Simple path resolution attempt for display (no actual filesystem access)
const resolvedPath = settingsFile.startsWith('~')
? settingsFile.replace('~', (typeof os !== 'undefined' && os.homedir) ? os.homedir() : '~')
: settingsFile;
settingsFileInfo = `
<div class="cli-settings-info mt-2 p-2 rounded bg-muted/50 text-xs">
<div class="flex items-center gap-1 text-muted-foreground mb-1">
<i data-lucide="file-key" class="w-3 h-3"></i>
<span>Settings File:</span>
</div>
<div class="text-foreground font-mono text-[10px] break-all" title="${resolvedPath}">
${settingsFile}
</div>
${settingsFile !== resolvedPath ? `
<div class="text-muted-foreground mt-1 font-mono text-[10px] break-all">
${resolvedPath}
</div>
` : ''}
</div>
`;
}
// Build CLI Settings badge for Claude
let cliSettingsBadge = '';
if (isClaude && hasCliSettings) {
@@ -588,6 +615,7 @@ function renderCliStatus() {
}
</div>
</div>
${settingsFileInfo}
${cliSettingsInfo}
<div class="cli-tool-actions mt-3 flex gap-2">
${isAvailable ? (isEnabled

View File

@@ -562,6 +562,27 @@ function buildToolConfigModalContent(tool, config, models, status) {
'</div>'
) : '') +
// Claude Settings File Section (only for builtin claude type)
(tool === 'claude' && config.type === 'builtin' ? (
'<div class="tool-config-section">' +
'<h4><i data-lucide="file-key" class="w-3.5 h-3.5"></i> Settings File <span class="text-muted">(optional)</span></h4>' +
'<div class="env-file-input-group">' +
'<div class="env-file-input-row">' +
'<input type="text" id="claudeSettingsFileInput" class="tool-config-input" ' +
'placeholder="~/path/to/settings.json or D:\\path\\to\\settings.json" ' +
'value="' + (config.settingsFile ? escapeHtml(config.settingsFile) : '') + '" />' +
'<button type="button" class="btn-sm btn-outline" id="claudeSettingsFileBrowseBtn">' +
'<i data-lucide="folder-open" class="w-3.5 h-3.5"></i> Browse' +
'</button>' +
'</div>' +
'<p class="env-file-hint">' +
'<i data-lucide="info" class="w-3 h-3"></i> ' +
'Path to Claude CLI settings.json file (supports ~, absolute, and Windows paths)' +
'</p>' +
'</div>' +
'</div>'
) : '') +
// Footer
'<div class="tool-config-footer">' +
'<button class="btn btn-outline" onclick="closeModal()">' + t('common.cancel') + '</button>' +
@@ -1124,6 +1145,10 @@ function initToolConfigModalEvents(tool, currentConfig, models) {
var envFileInput = document.getElementById('envFileInput');
var envFile = envFileInput ? envFileInput.value.trim() : '';
// Get settingsFile value (only for builtin claude)
var claudeSettingsFileInput = document.getElementById('claudeSettingsFileInput');
var settingsFile = claudeSettingsFileInput ? claudeSettingsFileInput.value.trim() : '';
try {
var updateData = {
primaryModel: primaryModel,
@@ -1137,6 +1162,11 @@ function initToolConfigModalEvents(tool, currentConfig, models) {
updateData.envFile = envFile || null;
}
// Only include settingsFile for builtin claude tool
if (tool === 'claude' && config.type === 'builtin') {
updateData.settingsFile = settingsFile || null;
}
await updateCliToolConfig(tool, updateData);
// Reload config to reflect changes
await loadCliToolConfig();
@@ -1164,6 +1194,20 @@ function initToolConfigModalEvents(tool, currentConfig, models) {
};
}
// Claude Settings File browse button (only for builtin claude)
var claudeSettingsFileBrowseBtn = document.getElementById('claudeSettingsFileBrowseBtn');
if (claudeSettingsFileBrowseBtn) {
claudeSettingsFileBrowseBtn.onclick = function() {
showFileBrowserModal(function(selectedPath) {
var claudeSettingsFileInput = document.getElementById('claudeSettingsFileInput');
if (claudeSettingsFileInput && selectedPath) {
claudeSettingsFileInput.value = selectedPath;
claudeSettingsFileInput.focus();
}
});
};
}
// Initialize lucide icons in modal
if (window.lucide) lucide.createIcons();
}

View File

@@ -909,7 +909,7 @@ function showCommandDetailModal(cmd) {
<!-- Tab Content -->
<div class="flex-1 overflow-y-auto">
<!-- Overview Tab -->
<div id="overview-tab" class="detail-tab-content p-6 space-y-6">
<div id="overview-tab" class="detail-tab-content p-6 space-y-6 active">
<!-- Description -->
<div>
<h3 class="text-sm font-semibold text-foreground mb-2">Description</h3>
@@ -992,7 +992,7 @@ function showCommandDetailModal(cmd) {
<!-- Document Tab -->
${cmd.source ? `
<div id="document-tab" class="detail-tab-content hidden p-6">
<div id="document-tab" class="detail-tab-content p-6">
<div class="bg-background border border-border rounded-lg p-4">
<div id="document-loader" class="flex items-center justify-center py-8">
<i data-lucide="loader-2" class="w-5 h-5 animate-spin text-primary mr-2"></i>
@@ -1038,11 +1038,11 @@ function showCommandDetailModal(cmd) {
// Show/hide tab content
var tabContents = modal.querySelectorAll('.detail-tab-content');
tabContents.forEach(function(content) {
content.classList.add('hidden');
content.classList.remove('active');
});
var activeTab = modal.querySelector('#' + tabName + '-tab');
if (activeTab) {
activeTab.classList.remove('hidden');
activeTab.classList.add('active');
}
// Load document content if needed
@@ -1099,13 +1099,21 @@ function loadCommandDocument(modal, sourcePath) {
fetch('/api/help/command-content?source=' + encodeURIComponent(sourcePath))
.then(function(response) {
if (!response.ok) {
throw new Error('Failed to load document');
throw new Error('HTTP ' + response.status + ': ' + response.statusText);
}
return response.text();
})
.then(function(markdown) {
// Parse markdown to HTML
var html = parseMarkdown(markdown);
try {
var html = parseMarkdown(markdown);
if (!html) {
throw new Error('parseMarkdown returned empty result');
}
} catch (parseError) {
console.error('[Help] parseMarkdown failed:', parseError.message, parseError.stack);
throw parseError;
}
if (contentDiv) {
contentDiv.innerHTML = html;
@@ -1116,11 +1124,13 @@ function loadCommandDocument(modal, sourcePath) {
if (typeof lucide !== 'undefined') lucide.createIcons();
})
.catch(function(error) {
console.error('Failed to load document:', error);
console.error('[Help] Failed to load document:', error.message || error);
if (contentDiv) contentDiv.classList.add('hidden');
if (loaderDiv) loaderDiv.classList.add('hidden');
if (errorDiv) {
errorDiv.textContent = 'Failed to load document: ' + (error.message || 'Unknown error');
errorDiv.classList.remove('hidden');
}
if (loaderDiv) loaderDiv.classList.add('hidden');
});
}

View File

@@ -56,6 +56,12 @@ export interface ClaudeCliTool {
* Supports both absolute paths and paths relative to home directory (e.g., ~/.my-env)
*/
envFile?: string;
/**
* Path to Claude CLI settings.json file (builtin claude only)
* Passed to Claude CLI via --settings parameter
* Supports ~, absolute, relative, and Windows paths
*/
settingsFile?: string;
}
export type CliToolName = 'gemini' | 'qwen' | 'codex' | 'claude' | 'opencode' | string;
@@ -279,7 +285,8 @@ function ensureToolTags(tool: Partial<ClaudeCliTool>): ClaudeCliTool {
primaryModel: tool.primaryModel,
secondaryModel: tool.secondaryModel,
tags: tool.tags ?? [],
envFile: tool.envFile
envFile: tool.envFile,
settingsFile: tool.settingsFile
};
}
@@ -1015,6 +1022,7 @@ export function getToolConfig(projectDir: string, tool: string): {
secondaryModel: string;
tags?: string[];
envFile?: string;
settingsFile?: string;
} {
const config = loadClaudeCliTools(projectDir);
const toolConfig = config.tools[tool];
@@ -1034,7 +1042,8 @@ export function getToolConfig(projectDir: string, tool: string): {
primaryModel: toolConfig.primaryModel ?? '',
secondaryModel: toolConfig.secondaryModel ?? '',
tags: toolConfig.tags,
envFile: toolConfig.envFile
envFile: toolConfig.envFile,
settingsFile: toolConfig.settingsFile
};
}
@@ -1051,6 +1060,7 @@ export function updateToolConfig(
availableModels: string[];
tags: string[];
envFile: string | null;
settingsFile: string | null;
}>
): ClaudeCliToolsConfig {
const config = loadClaudeCliTools(projectDir);
@@ -1079,6 +1089,14 @@ export function updateToolConfig(
config.tools[tool].envFile = updates.envFile;
}
}
// Handle settingsFile: set to undefined if null/empty, otherwise set value
if (updates.settingsFile !== undefined) {
if (updates.settingsFile === null || updates.settingsFile === '') {
delete config.tools[tool].settingsFile;
} else {
config.tools[tool].settingsFile = updates.settingsFile;
}
}
saveClaudeCliTools(projectDir, config);
}

View File

@@ -9,7 +9,7 @@ import { spawn, ChildProcess } from 'child_process';
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';
import { validatePath } from '../utils/path-resolver.js';
import { validatePath, resolvePath } from '../utils/path-resolver.js';
import { escapeWindowsArg } from '../utils/shell-escape.js';
import { buildCommand, checkToolAvailability, clearToolCache, debugLog, errorLog, type NativeResumeConfig, type ToolAvailability } from './cli-executor-utils.js';
import type { ConversationRecord, ConversationTurn, ExecutionOutput, ExecutionRecord } from './cli-executor-state.js';
@@ -85,7 +85,7 @@ import { findEndpointById } from '../config/litellm-api-config-manager.js';
// CLI Settings (CLI wrapper) integration
import { loadEndpointSettings, getSettingsFilePath, findEndpoint } from '../config/cli-settings-manager.js';
import { loadClaudeCliTools, getToolConfig } from './claude-cli-tools.js';
import { loadClaudeCliTools, getToolConfig, getPrimaryModel } from './claude-cli-tools.js';
/**
* Parse .env file content into key-value pairs
@@ -338,8 +338,7 @@ import {
import {
isToolEnabled as isToolEnabledFromConfig,
enableTool as enableToolFromConfig,
disableTool as disableToolFromConfig,
getPrimaryModel
disableTool as disableToolFromConfig
} from './cli-config-manager.js';
// Built-in CLI tools
@@ -794,6 +793,25 @@ async function executeCliTool(
// Use configured primary model if no explicit model provided
const effectiveModel = model || getPrimaryModel(workingDir, tool);
// Load and validate settings file for Claude tool (builtin only)
let settingsFilePath: string | undefined;
if (tool === 'claude') {
const toolConfig = getToolConfig(workingDir, tool);
if (toolConfig.settingsFile) {
try {
const resolved = resolvePath(toolConfig.settingsFile);
if (fs.existsSync(resolved)) {
settingsFilePath = resolved;
debugLog('SETTINGS_FILE', `Resolved Claude settings file`, { configured: toolConfig.settingsFile, resolved });
} else {
errorLog('SETTINGS_FILE', `Claude settings file not found, skipping`, { configured: toolConfig.settingsFile, resolved });
}
} catch (err) {
errorLog('SETTINGS_FILE', `Failed to resolve Claude settings file`, { configured: toolConfig.settingsFile, error: (err as Error).message });
}
}
}
// Build command
const { command, args, useStdin, outputFormat: autoDetectedFormat } = buildCommand({
tool,
@@ -803,6 +821,7 @@ async function executeCliTool(
dir: cd,
include: includeDirs,
nativeResume: nativeResumeConfig,
settingsFile: settingsFilePath,
reviewOptions: mode === 'review' ? { uncommitted, base, commit, title } : undefined
});

View File

@@ -254,6 +254,8 @@ export function buildCommand(params: {
// codex review uses -c key=value for config override, not -m
args.push('-c', `model=${model}`);
}
// Skip git repo check by default for codex (allows non-git directories)
args.push('--skip-git-repo-check');
// codex review uses positional prompt argument, not stdin
useStdin = false;
if (prompt) {
@@ -280,6 +282,8 @@ export function buildCommand(params: {
args.push('--add-dir', addDir);
}
}
// Skip git repo check by default for codex (allows non-git directories)
args.push('--skip-git-repo-check');
// Enable JSON output for structured parsing
args.push('--json');
// codex resume uses positional prompt argument, not stdin
@@ -302,6 +306,8 @@ export function buildCommand(params: {
args.push('--add-dir', addDir);
}
}
// Skip git repo check by default for codex (allows non-git directories)
args.push('--skip-git-repo-check');
// Enable JSON output for structured parsing
args.push('--json');
args.push('-');

View File

@@ -1,6 +1,6 @@
{
"name": "claude-code-workflow",
"version": "6.3.50",
"version": "6.3.54",
"description": "JSON-driven multi-agent development framework with intelligent CLI orchestration (Gemini/Qwen/Codex), context-first architecture, and automated workflow execution",
"type": "module",
"main": "ccw/src/index.js",

test-command-build.mjs Normal file
View File

@@ -0,0 +1,48 @@
import { buildCommand } from './ccw/dist/tools/cli-executor-utils.js';
import { getToolConfig } from './ccw/dist/tools/claude-cli-tools.js';
import { resolvePath } from './ccw/dist/utils/path-resolver.js';
import fs from 'fs';
const workingDir = 'D:\\Claude_dms3';
const tool = 'claude';
// Get tool config
const toolConfig = getToolConfig(workingDir, tool);
console.log('=== Tool Config ===');
console.log(JSON.stringify(toolConfig, null, 2));
// Resolve settings file
let settingsFilePath = undefined;
if (toolConfig.settingsFile) {
try {
const resolved = resolvePath(toolConfig.settingsFile);
console.log(`\n=== Settings File Resolution ===`);
console.log(`Configured: ${toolConfig.settingsFile}`);
console.log(`Resolved: ${resolved}`);
console.log(`Exists: ${fs.existsSync(resolved)}`);
if (fs.existsSync(resolved)) {
settingsFilePath = resolved;
console.log(`✓ Will use settings file: ${settingsFilePath}`);
} else {
console.log(`✗ File not found, skipping`);
}
} catch (err) {
console.log(`✗ Error resolving: ${err.message}`);
}
}
// Build command
const cmdInfo = buildCommand({
tool: 'claude',
prompt: 'test prompt',
mode: 'analysis',
model: 'sonnet',
settingsFile: settingsFilePath
});
console.log('\n=== Command Built ===');
console.log('Command:', cmdInfo.command);
console.log('Args:', JSON.stringify(cmdInfo.args, null, 2));
console.log('\n=== Full Command Line ===');
console.log(`${cmdInfo.command} ${cmdInfo.args.join(' ')}`);

test-config-debug.mjs Normal file
View File

@@ -0,0 +1,16 @@
import { getToolConfig, loadClaudeCliTools } from './ccw/dist/tools/claude-cli-tools.js';
const workingDir = 'D:\\Claude_dms3';
const tool = 'claude';
console.log('=== Step 1: Load full config ===');
const fullConfig = loadClaudeCliTools(workingDir);
console.log('Full config:', JSON.stringify(fullConfig, null, 2));
console.log('\n=== Step 2: Get claude tool from config ===');
const claudeTool = fullConfig.tools.claude;
console.log('Claude tool raw:', JSON.stringify(claudeTool, null, 2));
console.log('\n=== Step 3: Get tool config via getToolConfig ===');
const config = getToolConfig(workingDir, tool);
console.log('Claude tool config:', JSON.stringify(config, null, 2));

test-config.js Normal file
View File

@@ -0,0 +1,7 @@
const { getToolConfig } = require('./ccw/dist/tools/claude-cli-tools.js');
const workingDir = 'D:\\Claude_dms3';
const tool = 'claude';
const config = getToolConfig(workingDir, tool);
console.log('Claude tool config:', JSON.stringify(config, null, 2));

test-config.mjs Normal file
View File

@@ -0,0 +1,7 @@
import { getToolConfig } from './ccw/dist/tools/claude-cli-tools.js';
const workingDir = 'D:\\Claude_dms3';
const tool = 'claude';
const config = getToolConfig(workingDir, tool);
console.log('Claude tool config:', JSON.stringify(config, null, 2));