diff --git a/.codex/skills/brainstorm-with-file/SKILL.md b/.codex/skills/brainstorm-with-file/SKILL.md index cfa005c0..84a7d093 100644 --- a/.codex/skills/brainstorm-with-file/SKILL.md +++ b/.codex/skills/brainstorm-with-file/SKILL.md @@ -1,56 +1,105 @@ --- name: brainstorm-with-file -description: Interactive brainstorming with parallel subagent collaboration, idea expansion, and documented thought evolution. Parallel multi-perspective analysis for Codex. +description: Interactive brainstorming with documented thought evolution, multi-perspective analysis, and iterative refinement. Serial execution with no agent delegation. argument-hint: "TOPIC=\"\" [--perspectives=creative,pragmatic,systematic] [--max-ideas=]" --- -# Codex Brainstorm-With-File Workflow - -## Quick Start - -Interactive brainstorming workflow with **documented thought evolution**. Expands initial ideas through questioning, **parallel subagent analysis**, and iterative refinement. - -**Core workflow**: Seed Idea → Expand → Parallel Subagent Explore → Synthesize → Refine → Crystallize - -**Key features**: -- **brainstorm.md**: Complete thought evolution timeline -- **Parallel multi-perspective**: Creative + Pragmatic + Systematic (concurrent subagents) -- **Idea expansion**: Progressive questioning and exploration -- **Diverge-Converge cycles**: Generate options then focus on best paths - -**Codex-Specific Features**: -- Parallel subagent execution via `spawn_agent` + batch `wait({ ids: [...] })` -- Role loading via TOML agent definition (agent_type parameter in spawn_agent) -- Deep interaction with `send_input` for multi-round refinement within single agent -- Explicit lifecycle management with `close_agent` +# Codex Brainstorm-With-File Prompt ## Overview -This workflow enables iterative exploration and refinement of ideas through parallel-capable phases: +Interactive brainstorming workflow with **documented thought evolution**. 
Expands initial ideas through questioning, inline multi-perspective analysis, and iterative refinement. -1. **Seed Understanding** - Parse the initial idea and identify exploration vectors -2. **Divergent Exploration** - Gather codebase context and execute parallel multi-perspective analysis -3. **Interactive Refinement** - Multi-round idea selection, deep-dive, and refinement via send_input -4. **Convergence & Crystallization** - Synthesize final ideas and generate recommendations +**Core workflow**: Seed Idea → Expand → Multi-Perspective Explore → Synthesize → Refine → Crystallize -The key innovation is **documented thought evolution** that captures how ideas develop, perspectives differ, and insights emerge across all phases. +**Key features**: +- **brainstorm.md**: Complete thought evolution timeline +- **Multi-perspective analysis**: Creative + Pragmatic + Systematic (serial, inline) +- **Idea expansion**: Progressive questioning and exploration +- **Diverge-Converge cycles**: Generate options then focus on best paths -## Output Structure +## Auto Mode + +When `--yes` or `-y`: Auto-confirm exploration decisions, use recommended perspectives, skip interactive scoping. 
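The flag conventions above can be sketched as a small parser. This is an illustrative sketch only: `parseFlags` and its return shape are hypothetical helpers, while the flag names and defaults follow the skill's Configuration table.

```javascript
// Illustrative sketch: parse the skill's documented flags from a raw
// argument string. parseFlags is a hypothetical helper, not a Codex API.
function parseFlags(args) {
  const autoYes = /(^|\s)(--yes|-y)(\s|$)/.test(args);
  const continueMode = /(^|\s)--continue(\s|$)/.test(args);
  const pMatch = args.match(/--perspectives[=\s]([\w,]+)/);
  const perspectives = pMatch
    ? pMatch[1].split(',')
    : ['creative', 'pragmatic', 'systematic']; // documented default
  const mMatch = args.match(/--max-ideas[=\s](\d+)/);
  const maxIdeas = mMatch ? parseInt(mMatch[1], 10) : 15; // documented default
  return { autoYes, continueMode, perspectives, maxIdeas };
}
```

For example, `parseFlags('TOPIC="x" -y')` yields `autoYes: true` with the default perspective list.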
+ +## Quick Start + +```bash +# Basic usage +/codex:brainstorm-with-file TOPIC="How to improve developer onboarding experience" + +# With perspective selection +/codex:brainstorm-with-file TOPIC="New caching strategy" --perspectives=creative,pragmatic,systematic + +# Continue existing session +/codex:brainstorm-with-file TOPIC="caching strategy" --continue + +# Auto mode (skip confirmations) +/codex:brainstorm-with-file -y TOPIC="Plugin architecture ideas" +``` + +## Target Topic + +**$TOPIC** + +## Configuration + +| Flag | Default | Description | +|------|---------|-------------| +| `-y, --yes` | false | Auto-confirm all decisions | +| `--continue` | false | Continue existing session | +| `--perspectives` | creative,pragmatic,systematic | Comma-separated perspective list | +| `--max-ideas` | 15 | Maximum ideas to track | + +**Session ID format**: `BS-{slug}-{YYYY-MM-DD}` +- slug: lowercase, alphanumeric + CJK characters, max 40 chars +- date: YYYY-MM-DD (UTC+8) +- Auto-detect continue: session folder + brainstorm.md exists → continue mode + +## Brainstorm Flow ``` -{projectRoot}/.workflow/.brainstorm/BS-{slug}-{date}/ -├── brainstorm.md # ⭐ Complete thought evolution timeline -├── exploration-codebase.json # Phase 2: Codebase context -├── perspectives/ # Phase 2: Individual perspective outputs -│ ├── creative.json -│ ├── pragmatic.json -│ └── systematic.json -├── perspectives.json # Phase 2: Aggregated parallel findings with synthesis -├── synthesis.json # Phase 4: Final synthesis -└── ideas/ # Phase 3: Individual idea deep-dives - ├── idea-1.md - ├── idea-2.md - └── merged-idea-1.md +Step 0: Session Setup + ├─ Parse topic, flags (--perspectives, --continue, -y) + ├─ Generate session ID: BS-{slug}-{date} + └─ Create session folder (or detect existing → continue mode) + +Step 1: Seed Understanding + ├─ Parse topic, identify brainstorm dimensions + ├─ Role/perspective selection with user (or auto) + ├─ Initial scoping (mode, focus areas, constraints) + ├─ Expand 
seed into exploration vectors + └─ Initialize brainstorm.md + +Step 2: Divergent Exploration (Inline, No Agents) + ├─ Detect codebase → search relevant modules, patterns + │ ├─ Run `ccw spec load --category exploration` (if spec system available) + │ └─ Use Grep, Glob, Read, mcp__ace-tool__search_context + ├─ Multi-perspective analysis (serial, inline) + │ ├─ Creative perspective: innovation, cross-domain, challenge assumptions + │ ├─ Pragmatic perspective: feasibility, effort, blockers + │ └─ Systematic perspective: decomposition, patterns, scalability + ├─ Aggregate findings → perspectives.json + ├─ Update brainstorm.md with Round 1 + └─ Initial Idea Coverage Check + +Step 3: Interactive Refinement (Multi-Round, max 6) + ├─ Present current ideas and perspectives + ├─ Gather user feedback + ├─ Process response: + │ ├─ Deep Dive → deeper inline analysis on selected ideas + │ ├─ Diverge → new inline analysis with different angles + │ ├─ Challenge → devil's advocate inline analysis + │ ├─ Merge → synthesize complementary ideas inline + │ └─ Converge → exit loop for synthesis + ├─ Update brainstorm.md with round details + └─ Repeat until user selects converge or max rounds + +Step 4: Convergence & Crystallization + ├─ Consolidate all insights → synthesis.json + ├─ Update brainstorm.md with final synthesis + ├─ Interactive Top-Idea Review (per-idea confirm/modify/reject) + └─ Offer options: show next-step commands / export / done ``` ## Output Artifacts @@ -67,16 +116,17 @@ The key innovation is **documented thought evolution** that captures how ideas d | Artifact | Purpose | |----------|---------| | `exploration-codebase.json` | Codebase context: relevant files, patterns, architecture constraints | -| `perspectives/*.json` | Individual perspective outputs from parallel subagents | -| `perspectives.json` | Aggregated parallel findings with synthesis (convergent/conflicting themes) | -| Updated `brainstorm.md` | Round 2: Exploration results and multi-perspective 
analysis | +| `perspectives/*.json` | Individual perspective outputs (creative, pragmatic, systematic) | +| `perspectives.json` | Aggregated findings with synthesis (convergent/conflicting themes) | +| Updated `brainstorm.md` | Round 1: Exploration results and multi-perspective analysis | ### Phase 3: Interactive Refinement | Artifact | Purpose | |----------|---------| | `ideas/{idea-slug}.md` | Deep-dive analysis for selected ideas | -| Updated `brainstorm.md` | Round 3-6: User feedback, idea selections, refinement cycles | +| `ideas/merged-idea-{n}.md` | Merged idea documents | +| Updated `brainstorm.md` | Round 2-6: User feedback, idea selections, refinement cycles | ### Phase 4: Convergence & Crystallization @@ -87,68 +137,97 @@ The key innovation is **documented thought evolution** that captures how ideas d --- -## Implementation Details +## Recording Protocol -### Session Initialization +**CRITICAL**: During brainstorming, the following situations **MUST** trigger immediate recording to brainstorm.md: -##### Step 0: Determine Project Root +| Trigger | What to Record | Target Section | +|---------|---------------|----------------| +| **Idea generated** | Idea content, source perspective, novelty/feasibility ratings | `#### Ideas Generated` | +| **Perspective shift** | Old framing → new framing, trigger reason | `#### Decision Log` | +| **User feedback** | User's original input, which ideas selected/rejected | `#### User Input` | +| **Assumption challenged** | Original assumption → challenge result, survivability | `#### Challenged Assumptions` | +| **Ideas merged** | Source ideas, merged concept, what was preserved/discarded | `#### Decision Log` | +| **Scope adjustment** | Before/after scope, trigger reason | `#### Decision Log` | -检测项目根目录,确保 `.workflow/` 产物位置正确: +### Decision Record Format -```bash -PROJECT_ROOT=$(git rev-parse --show-toplevel 2>/dev/null || pwd) +```markdown +> **Decision**: [Description of the decision] +> - **Context**: [What triggered 
this decision] +> - **Options considered**: [Alternatives evaluated] +> - **Chosen**: [Selected approach] — **Reason**: [Rationale] +> - **Rejected**: [Why other options were discarded] +> - **Impact**: [Effect on brainstorming direction] ``` -优先通过 git 获取仓库根目录;非 git 项目回退到 `pwd` 取当前绝对路径。 -存储为 `{projectRoot}`,后续所有 `.workflow/` 路径必须以此为前缀。 +### Narrative Synthesis Format -The workflow automatically generates a unique session identifier and directory structure based on the topic and current date (UTC+8). +Append after each round update: -**Session ID Format**: `BS-{slug}-{date}` -- `slug`: Lowercase alphanumeric + Chinese characters, max 40 chars -- `date`: YYYY-MM-DD format (UTC+8) +```markdown +### Round N: Narrative Synthesis +**Starting point**: Based on previous round's [conclusions/questions], this round explored [starting point]. +**Key progress**: [New ideas/findings] [confirmed/refuted/expanded] previous understanding of [topic area]. +**Decision impact**: User selected [feedback type], directing brainstorming toward [adjusted/deepened/maintained]. +**Current state**: After this round, top ideas are [updated idea rankings]. +**Open directions**: [remaining exploration angles for next round] +``` -**Session Directory**: `{projectRoot}/.workflow/.brainstorm/{sessionId}/` +## Implementation Details -**Auto-Detection**: If session folder exists with brainstorm.md, automatically enters continue mode. Otherwise, creates new session. 
+### Phase 0: Session Initialization -**Brainstorm Modes**: -- `creative`: Emphasize novelty and innovation, relaxed constraints -- `structured`: Balance creativity with feasibility, realistic scope -- `balanced`: Default, moderate innovation with practical considerations +```javascript +const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString() ---- +// Parse flags +const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y') +const continueMode = $ARGUMENTS.includes('--continue') +const perspectivesMatch = $ARGUMENTS.match(/--perspectives[=\s]([\w,]+)/) +const selectedPerspectiveNames = perspectivesMatch + ? perspectivesMatch[1].split(',') + : ['creative', 'pragmatic', 'systematic'] -## Phase 1: Seed Understanding +// Extract topic +const topic = $ARGUMENTS.replace(/--yes|-y|--continue|--perspectives[=\s][\w,]+|--max-ideas[=\s]\d+|TOPIC=/g, '').replace(/^["']|["']$/g, '').trim() + +// Determine project root +const projectRoot = Bash('git rev-parse --show-toplevel 2>/dev/null || pwd').trim() + +const slug = topic.toLowerCase().replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-').substring(0, 40) +const dateStr = getUtc8ISOString().substring(0, 10) +const sessionId = `BS-${slug}-${dateStr}` +const sessionFolder = `${projectRoot}/.workflow/.brainstorm/${sessionId}` + +// Auto-detect continue: session folder + brainstorm.md exists → continue mode +// If continue → load brainstorm.md + perspectives, resume from last round +Bash(`mkdir -p ${sessionFolder}`) +``` + +### Phase 1: Seed Understanding **Objective**: Parse the initial idea, identify exploration vectors, scope preferences, and initialize the brainstorm document. -### Step 1.1: Parse Seed & Identify Dimensions +##### Step 1.1: Parse Seed & Identify Dimensions -The workflow analyzes the topic text against predefined brainstorm dimensions. 
+Match topic keywords against brainstorm dimensions (see [Dimensions Reference](#brainstorm-dimensions)): -**Brainstorm Dimensions**: +```javascript +// Match topic text against keyword lists from Dimensions Reference +// If multiple dimensions match, include all +// If none match, default to "technical" and "innovation" +const dimensions = identifyDimensions(topic, BRAINSTORM_DIMENSIONS) +``` -| Dimension | Keywords | -|-----------|----------| -| technical | 技术, technical, implementation, code, 实现, architecture | -| ux | 用户, user, experience, UX, UI, 体验, interaction | -| business | 业务, business, value, ROI, 价值, market | -| innovation | 创新, innovation, novel, creative, 新颖 | -| feasibility | 可行, feasible, practical, realistic, 实际 | -| scalability | 扩展, scale, growth, performance, 性能 | -| security | 安全, security, risk, protection, 风险 | - -**Matching Logic**: Compare topic text against keyword lists to identify relevant dimensions. - -### Step 1.2: Role Selection +##### Step 1.2: Role Selection Recommend roles based on topic keywords, then let user confirm or override. **Professional Roles** (recommended based on topic keywords): -| Role | Perspective Agent Focus | Keywords | -|------|------------------------|----------| +| Role | Perspective Focus | Keywords | +|------|------------------|----------| | system-architect | Architecture, patterns | 架构, architecture, system, 系统, design pattern | | product-manager | Business value, roadmap | 产品, product, feature, 功能, roadmap | | ui-designer | Visual design, interaction | UI, 界面, interface, visual, 视觉 | @@ -157,7 +236,7 @@ Recommend roles based on topic keywords, then let user confirm or override. 
| test-strategist | Quality, testing | 测试, test, quality, 质量, QA |
| subject-matter-expert | Domain knowledge | 领域, domain, industry, 行业, expert |
-**Simple Perspectives** (fallback - always available):
+**Simple Perspectives** (fallback — always available):
| Perspective | Focus | Best For |
|-------------|-------|----------|
@@ -170,30 +249,56 @@ Recommend roles based on topic keywords, then let user confirm or override.
2. **Manual mode**: Present recommended roles + "Use simple perspectives" option
3. **Continue mode**: Use roles from previous session
-### Step 1.3: Initial Scoping (New Session Only)
+##### Step 1.3: Initial Scoping (New Session Only)
-For new brainstorm sessions, gather user preferences before exploration.
+For new brainstorm sessions, gather user preferences before exploration (skipped in auto mode or continue mode):
-**Brainstorm Mode** (Single-select):
-- 创意模式 (Creative mode - 15-20 minutes, 1 subagent)
-- 平衡模式 (Balanced mode - 30-60 minutes, 3 parallel subagents)
-- 深度模式 (Deep mode - 1-2+ hours, 3 parallel subagents + deep refinement)
+```javascript
+if (!autoYes && !continueMode) {
+  // 1. Brainstorm Mode (single-select)
+  const mode = request_user_input({
+    questions: [{
+      header: "Brainstorm Mode",
+      id: "mode",
+      question: "Select brainstorming intensity:",
+      options: [
+        { label: "Creative Mode", description: "Fast, high novelty, 1 perspective" },
+        { label: "Balanced Mode (Recommended)", description: "Moderate, 3 perspectives" },
+        { label: "Deep Mode", description: "Comprehensive, 3 perspectives + deep refinement" }
+      ]
+    }]
+  })
-**Focus Areas** (Multi-select):
-- 技术方案 (Technical solutions)
-- 用户体验 (User experience)
-- 创新突破 (Innovation breakthroughs)
-- 可行性评估 (Feasibility assessment)
+  // 2. 
Focus Areas (multi-select) + const focusAreas = request_user_input({ + questions: [{ + header: "Focus Areas", + id: "focus", + question: "Select brainstorming focus:", + options: generateFocusOptions(dimensions) // Dynamic based on dimensions + }] + }) -**Constraints** (Multi-select): -- 现有架构 (Existing architecture constraints) -- 时间限制 (Time constraints) -- 资源限制 (Resource constraints) -- 无约束 (No constraints) + // 3. Constraints (multi-select) + const constraints = request_user_input({ + questions: [{ + header: "Constraints", + id: "constraints", + question: "Any constraints to consider?", + options: [ + { label: "Existing Architecture", description: "Must fit current system" }, + { label: "Time Constraints", description: "Short implementation timeline" }, + { label: "Resource Constraints", description: "Limited team/budget" }, + { label: "No Constraints", description: "Blue-sky thinking" } + ] + }] + }) +} +``` -### Step 1.4: Expand Seed into Exploration Vectors +##### Step 1.4: Expand Seed into Exploration Vectors -Generate key questions that guide the brainstorming exploration. Use a subagent for vector generation. +Generate key questions that guide the brainstorming exploration. Done inline — no agent delegation. **Exploration Vectors**: 1. **Core question**: What is the fundamental problem/opportunity? @@ -204,578 +309,368 @@ Generate key questions that guide the brainstorming exploration. Use a subagent 6. **Innovation angle**: What would make this 10x better? 7. **Integration**: How does this fit with existing systems/processes? -**Subagent for Vector Generation**: +Analyze the topic inline against user focus areas and constraints to produce 5-7 exploration vectors. 
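The seven-vector expansion above can be instantiated inline with simple templating. A minimal sketch: `expandSeed` is an illustrative helper name, while the seven questions are the skill's own.

```javascript
// Sketch: instantiate the seven documented exploration vectors for a topic.
function expandSeed(topic) {
  return [
    `Core question: what is the fundamental problem/opportunity behind "${topic}"?`,
    `User perspective: who benefits from "${topic}" and how?`,
    `Technical angle: what enables "${topic}" technically?`,
    `Alternative approaches: what other ways could "${topic}" be solved?`,
    `Challenges: what could go wrong or block success?`,
    `Innovation angle: what would make "${topic}" 10x better?`,
    `Integration: how does "${topic}" fit with existing systems/processes?`
  ];
}
```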
+ +##### Step 1.5: Initialize brainstorm.md ```javascript -const vectorAgent = spawn_agent({ - agent_type: "cli_explore_agent", - message: ` -## TASK ASSIGNMENT +const brainstormMd = `# Brainstorm Session + +**Session ID**: ${sessionId} +**Topic**: ${topic} +**Started**: ${getUtc8ISOString()} +**Dimensions**: ${dimensions.join(', ')} +**Mode**: ${brainstormMode} + +## Table of Contents + +- [Session Context](#session-context) +- [Current Ideas](#current-ideas) +- [Thought Evolution Timeline](#thought-evolution-timeline) + +## Current Ideas + + +> To be populated after exploration. + +## Session Context +- Focus areas: ${focusAreas.join(', ')} +- Perspectives: ${selectedPerspectiveNames.join(', ')} +- Constraints: ${constraints.join(', ')} +- Mode: ${brainstormMode} + +## Exploration Vectors +${explorationVectors.map((v, i) => `${i+1}. ${v}`).join('\n')} + +## Initial Decisions +> Record why these perspectives and focus areas were selected. --- -## Context -Topic: ${idea_or_topic} -User focus areas: ${userFocusAreas.join(', ')} -Constraints: ${constraints.join(', ')} +## Thought Evolution Timeline -## Task -Generate 5-7 exploration vectors (questions/directions) to expand this idea: -1. Core question: What is the fundamental problem/opportunity? -2. User perspective: Who benefits and how? -3. Technical angle: What enables this technically? -4. Alternative approaches: What other ways could this be solved? -5. Challenges: What could go wrong or block success? -6. Innovation angle: What would make this 10x better? -7. Integration: How does this fit with existing systems/processes? +> Rounds will be appended below as brainstorming progresses. -## Deliverables -Return structured exploration vectors for multi-perspective analysis. +--- + +## Decision Trail + +> Consolidated critical decisions across all rounds (populated in Phase 4). 
` -}) - -const result = wait({ ids: [vectorAgent], timeout_ms: 120000 }) -close_agent({ id: vectorAgent }) +Write(`${sessionFolder}/brainstorm.md`, brainstormMd) ``` -**Purpose**: These vectors guide each perspective subagent's analysis and ensure comprehensive exploration. - -### Step 1.5: Initialize brainstorm.md - -Create the main brainstorm document with session metadata and expansion content. - -**brainstorm.md Structure**: -- **Header**: Session ID, topic, start time, brainstorm mode, dimensions -- **Initial Context**: Focus areas, depth level, constraints -- **Roles**: Selected roles (professional or simple perspectives) -- **Seed Expansion**: Original idea + exploration vectors -- **Thought Evolution Timeline**: Round-by-round findings -- **Current Ideas**: To be populated after exploration - **Success Criteria**: -- Session folder created successfully -- brainstorm.md initialized with all metadata -- 1-3 roles selected (professional or simple perspectives) -- Brainstorm mode and dimensions identified +- Session folder created with brainstorm.md initialized +- Brainstorm dimensions identified and user preferences captured +- **Initial decisions recorded**: Perspective selection rationale, excluded options with reasons - Exploration vectors generated -- User preferences captured +- 1-3 perspectives selected ---- +### Phase 2: Divergent Exploration -## Phase 2: Divergent Exploration +**Objective**: Gather codebase context and execute multi-perspective analysis to generate diverse viewpoints. All exploration done inline — no agent delegation. -**Objective**: Gather codebase context and execute parallel multi-perspective analysis via subagents to generate diverse viewpoints. +##### Step 2.1: Detect Codebase & Explore -**Execution Model**: Parallel subagent execution - spawn 3 perspective agents simultaneously, batch wait for all results, then aggregate. 
+```javascript +const hasCodebase = Bash(` + test -f package.json && echo "nodejs" || + test -f go.mod && echo "golang" || + test -f Cargo.toml && echo "rust" || + test -f pyproject.toml && echo "python" || + test -f pom.xml && echo "java" || + test -d src && echo "generic" || + echo "none" +`).trim() -**Key API Pattern**: -``` -spawn_agent × 3 → wait({ ids: [...] }) → aggregate → close_agent × 3 +if (hasCodebase !== 'none') { + // 1. Read project metadata (if exists) + // - Run `ccw spec load --category exploration` (load project specs) + // - Run `ccw spec load --category debug` (known issues and root-cause notes) + // - .workflow/specs/*.md (project conventions) + + // 2. Search codebase for relevant content + // Use: Grep, Glob, Read, or mcp__ace-tool__search_context + // Focus on: modules/components, patterns/structure, integration points, config/dependencies + + // 3. Write findings + Write(`${sessionFolder}/exploration-codebase.json`, JSON.stringify({ + project_type: hasCodebase, + relevant_files: [...], // [{path, relevance, summary}] + existing_patterns: [...], // [{pattern, files, description}] + architecture_constraints: [...], // Constraints found + integration_points: [...], // [{location, description}] + key_findings: [...], // Main insights from code search + _metadata: { timestamp: getUtc8ISOString(), exploration_scope: '...' } + }, null, 2)) +} ``` -### Step 2.1: Codebase Context Gathering +##### Step 2.2: Multi-Perspective Analysis (Serial, Inline) -Use built-in tools to understand the codebase structure before spawning perspective agents. - -**Context Gathering Activities**: -1. **Get project structure** - Execute `ccw tool exec get_modules_by_depth '{}'` -2. **Search for related code** - Use Grep/Glob to find files matching topic keywords -3. **Read project tech context** - Run `ccw spec load --category "exploration planning"` if spec system available -4. 
**Analyze patterns** - Identify common code patterns and architecture decisions - -**exploration-codebase.json Structure**: -- `relevant_files[]`: Files related to the topic with relevance indicators -- `existing_patterns[]`: Common code patterns and architectural styles -- `architecture_constraints[]`: Project-level constraints -- `integration_points[]`: Key integration patterns between modules -- `_metadata`: Timestamp and context information - -### Step 2.2: Parallel Multi-Perspective Analysis - -**⚠️ IMPORTANT**: Role files are NOT read by main process. Pass path in message, agent reads itself. - -Spawn 3 perspective agents in parallel: Creative + Pragmatic + Systematic. +Analyze from each selected perspective. All analysis done inline by the AI — no agents. **Perspective Definitions**: -| Perspective | Role File | Focus | -|-------------|-----------|-------| -| Creative | `cli_explore_agent` | Innovation, cross-domain inspiration, challenging assumptions | -| Pragmatic | `cli_explore_agent` | Implementation feasibility, effort estimates, blockers | -| Systematic | `cli_explore_agent` | Problem decomposition, patterns, scalability | +| Perspective | Focus | Tasks | +|-------------|-------|-------| +| Creative | Innovation, cross-domain | Think beyond obvious, explore cross-domain inspiration, challenge assumptions, generate moonshot ideas | +| Pragmatic | Implementation reality | Evaluate feasibility, identify existing patterns/libraries, estimate complexity, highlight blockers | +| Systematic | Architecture thinking | Decompose problem, identify architectural patterns, map dependencies, consider scalability | -**Parallel Subagent Execution**: +**Serial execution** — analyze each perspective sequentially: ```javascript -// Build shared context from codebase exploration -const explorationContext = ` -CODEBASE CONTEXT: -- Key files: ${explorationResults.relevant_files.slice(0,5).map(f => f.path).join(', ')} -- Existing patterns: 
${explorationResults.existing_patterns.slice(0,3).join(', ')} -- Architecture constraints: ${explorationResults.architecture_constraints.slice(0,3).join(', ')}` +const perspectives = ['creative', 'pragmatic', 'systematic'] -// Define perspectives -const perspectives = [ - { - name: 'creative', - focus: 'Innovation and novelty', - tasks: [ - 'Think beyond obvious solutions - what would be surprising/delightful?', - 'Explore cross-domain inspiration', - 'Challenge assumptions - what if the opposite were true?', - 'Generate moonshot ideas alongside practical ones' - ] - }, - { - name: 'pragmatic', - focus: 'Implementation reality', - tasks: [ - 'Evaluate technical feasibility of core concept', - 'Identify existing patterns/libraries that could help', - 'Estimate implementation complexity', - 'Highlight potential technical blockers' - ] - }, - { - name: 'systematic', - focus: 'Architecture thinking', - tasks: [ - 'Decompose the problem into sub-problems', - 'Identify architectural patterns that apply', - 'Map dependencies and interactions', - 'Consider scalability implications' - ] - } -] - -// Parallel spawn - all agents start immediately -const agentIds = perspectives.map(perspective => { - return spawn_agent({ - agent_type: "cli_explore_agent", - message: ` -## TASK ASSIGNMENT - -### MANDATORY FIRST STEPS (Agent Execute) -1. Run: `ccw spec load --category "exploration planning"` -2. 
Read project tech context from loaded specs - ---- - -## Brainstorm Context -Topic: ${idea_or_topic} -Perspective: ${perspective.name} - ${perspective.focus} -Session: ${sessionFolder} - -${explorationContext} - -## ${perspective.name.toUpperCase()} Perspective Tasks -${perspective.tasks.map(t => `• ${t}`).join('\n')} - -## Deliverables -Write findings to: ${sessionFolder}/perspectives/${perspective.name}.json - -Schema: { - perspective: "${perspective.name}", - ideas: [{ title, description, novelty, feasibility, rationale }], - key_findings: [], - challenged_assumptions: [], - open_questions: [], - _metadata: { perspective, timestamp } -} - -## Success Criteria -- [ ] Role definition read -- [ ] 3-5 ideas generated with ratings -- [ ] Key findings documented -- [ ] JSON output follows schema -` - }) +perspectives.forEach(perspective => { + // Analyze inline using exploration-codebase.json as context + // Generate ideas from this perspective's focus + Write(`${sessionFolder}/perspectives/${perspective}.json`, JSON.stringify({ + perspective: perspective, + ideas: [ // 3-5 ideas per perspective + { title: '...', description: '...', novelty: 1-5, feasibility: 1-5, rationale: '...' 
} + ], + key_findings: [...], + challenged_assumptions: [...], + open_questions: [...], + _metadata: { perspective, timestamp: getUtc8ISOString() } + }, null, 2)) }) - -// Batch wait - TRUE PARALLELISM (key Codex advantage) -const results = wait({ - ids: agentIds, - timeout_ms: 600000 // 10 minutes for all -}) - -// Handle timeout -if (results.timed_out) { - // Some agents may still be running - // Option: continue waiting or use completed results -} - -// Collect results from all perspectives -const completedFindings = {} -agentIds.forEach((agentId, index) => { - const perspective = perspectives[index] - if (results.status[agentId].completed) { - completedFindings[perspective.name] = results.status[agentId].completed - } -}) - -// Batch cleanup -agentIds.forEach(id => close_agent({ id })) ``` -### Step 2.3: Aggregate Multi-Perspective Findings - -Consolidate results from all three parallel perspective agents. - -**perspectives.json Structure**: -- `session_id`: Reference to brainstorm session -- `timestamp`: Completion time -- `topic`: Original idea/topic -- `creative`: Creative perspective findings (ideas with novelty ratings) -- `pragmatic`: Pragmatic perspective findings (approaches with effort ratings) -- `systematic`: Systematic perspective findings (architectural options) -- `synthesis`: {convergent_themes, conflicting_views, unique_contributions} -- `aggregated_ideas[]`: Merged ideas from all perspectives -- `key_findings[]`: Main insights across all perspectives - -**Aggregation Activities**: -1. Extract ideas and findings from each perspective's output -2. Identify themes all perspectives agree on (convergent) -3. Note conflicting views and tradeoffs -4. Extract unique contributions from each perspective -5. 
Merge and deduplicate similar ideas +##### Step 2.3: Aggregate Multi-Perspective Findings ```javascript const synthesis = { session_id: sessionId, - timestamp: new Date().toISOString(), - topic: idea_or_topic, + timestamp: getUtc8ISOString(), + topic, // Individual perspective findings - creative: completedFindings.creative || {}, - pragmatic: completedFindings.pragmatic || {}, - systematic: completedFindings.systematic || {}, + creative: readJson(`${sessionFolder}/perspectives/creative.json`), + pragmatic: readJson(`${sessionFolder}/perspectives/pragmatic.json`), + systematic: readJson(`${sessionFolder}/perspectives/systematic.json`), // Cross-perspective synthesis synthesis: { - convergent_themes: extractConvergentThemes(completedFindings), - conflicting_views: extractConflicts(completedFindings), - unique_contributions: extractUniqueInsights(completedFindings) + convergent_themes: [...], // What all perspectives agree on + conflicting_views: [...], // Where perspectives differ + unique_contributions: [...] // Insights unique to specific perspectives }, // Aggregated for refinement - aggregated_ideas: mergeAllIdeas(completedFindings), - key_findings: mergeKeyFindings(completedFindings) + aggregated_ideas: [...], // Merged and deduplicated ideas from all perspectives + key_findings: [...] // Main insights across all perspectives } +Write(`${sessionFolder}/perspectives.json`, JSON.stringify(synthesis, null, 2)) ``` -### Step 2.4: Update brainstorm.md +##### Step 2.4: Update brainstorm.md -Append exploration results to the brainstorm timeline. +Append Round 1 with exploration results using the [Round Documentation Pattern](#round-documentation-pattern). 
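The cross-perspective aggregation in Step 2.3 above can be made concrete with a small helper. Only the field names (`aggregated_ideas`, `convergent_themes`, `unique_contributions`) come from the schema; grouping by normalized idea title is an assumed heuristic for illustration.

```javascript
// Sketch: aggregate perspective outputs into the Step 2.3 synthesis shape.
// Title-based deduplication is an illustrative heuristic, not prescribed.
function aggregatePerspectives(perspectiveOutputs) {
  const byTitle = new Map();
  for (const p of perspectiveOutputs) {
    for (const idea of p.ideas) {
      const key = idea.title.toLowerCase().trim();
      if (!byTitle.has(key)) byTitle.set(key, { ...idea, sources: [] });
      byTitle.get(key).sources.push(p.perspective); // track which perspectives proposed it
    }
  }
  const all = [...byTitle.values()];
  return {
    aggregated_ideas: all,
    synthesis: {
      // A theme counts as convergent when two or more perspectives proposed it
      convergent_themes: all.filter(i => i.sources.length >= 2).map(i => i.title),
      unique_contributions: all.filter(i => i.sources.length === 1).map(i => i.title)
    }
  };
}
```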
-**Round 2 Sections** (Multi-Perspective Exploration): +**Round 1 Sections** (Multi-Perspective Exploration): - **Creative Perspective**: Novel ideas with novelty/impact ratings - **Pragmatic Perspective**: Practical approaches with effort/risk ratings - **Systematic Perspective**: Architectural options with tradeoff analysis - **Perspective Synthesis**: Convergent themes, conflicts, unique contributions -**Documentation Standards**: -- Include evidence from codebase exploration -- Organize findings by perspective -- Highlight areas of agreement and disagreement -- Note key assumptions and reasoning +##### Step 2.5: Initial Idea Coverage Check + +```javascript +// Check exploration vectors against Round 1 findings +appendToBrainstorm(` +#### Initial Idea Coverage Check (Post-Exploration) +${explorationVectors.map((vector, i) => { + const status = assessCoverage(vector, explorationFindings) + return `- ${status.icon} Vector ${i+1}: ${vector} — ${status.detail}` +}).join('\n')} + +> Next rounds will focus on uncovered and in-progress vectors. +`) +``` **Success Criteria**: -- All 3 subagents spawned and completed (or timeout handled) -- `exploration-codebase.json` created with comprehensive context -- `perspectives/*.json` created for each perspective -- `perspectives.json` created with aggregated findings and synthesis -- `brainstorm.md` updated with Round 2 results -- All agents closed properly -- Ready for interactive refinement phase +- exploration-codebase.json created with codebase context (if codebase exists) +- perspectives/*.json created for each perspective +- perspectives.json created with aggregated findings and synthesis +- brainstorm.md updated with Round 1 results +- **Initial Idea Coverage Check** completed +- **Key findings recorded** with evidence and ratings ---- +### Phase 3: Interactive Refinement -## Phase 3: Interactive Refinement +**Objective**: Iteratively refine ideas through multi-round user-guided exploration cycles. **Max Rounds**: 6. 
All analysis done inline. -**Objective**: Iteratively refine ideas through multi-round user-guided exploration cycles with deep dives, challenge testing, and idea merging. +**Auto mode behavior** (`--yes`): +- Balanced/Deep mode: Run 2 auto-rounds (1× Deep Dive on top 2 ideas, 1× Challenge on top 3 ideas), then auto-converge +- Creative mode: Run 1 auto-round (1× Diverge), then auto-converge +- Skip user direction prompts; auto-select based on idea scores -**Max Rounds**: 6 refinement rounds (can exit earlier if user indicates completion) +##### Step 3.1: Present Findings & Gather User Direction -**Execution Model**: Use `send_input` for deep interaction within same agent context, or spawn new agent for significantly different exploration angles. - -### Step 3.1: Present Findings & Gather User Direction - -Display current ideas and perspectives to the user. - -**Presentation Content**: -- Top ideas from each perspective with ratings -- Convergent themes and areas of agreement -- Conflicting views and tradeoffs -- Open questions for further exploration - -**User Feedback Options** (Single-select): - -| Option | Purpose | Next Action | -|--------|---------|------------| -| **深入探索** | Explore selected ideas in detail | `send_input` to active agent OR spawn deep-dive agent | -| **继续发散** | Generate more ideas | Spawn new agent with different angles | -| **挑战验证** | Test ideas critically | Spawn challenge agent (devil's advocate) | -| **合并综合** | Combine multiple ideas | Spawn merge agent to synthesize | -| **准备收敛** | Begin convergence | Exit refinement loop for synthesis | - -### Step 3.2: Deep Dive on Selected Ideas (via send_input or new agent) - -When user selects "deep dive", provide comprehensive analysis. 
- -**Option A: send_input to Existing Agent** (preferred if agent still active) +**Current Understanding Summary** (Round >= 2, BEFORE presenting new findings): +- Generate 1-2 sentence recap of top ideas and last round's direction +- Example: "Top ideas so far: [idea1], [idea2]. Last round [deepened/challenged/merged]. Here are the latest findings:" ```javascript -// Continue with existing agent context -send_input({ - id: perspectiveAgent, // Reuse agent from Phase 2 if not closed - message: ` -## CONTINUATION: Deep Dive Analysis - -Based on your initial exploration, the user wants deeper investigation on these ideas: -${selectedIdeas.map((idea, i) => `${i+1}. ${idea.title}`).join('\n')} - -## Deep Dive Tasks -• Elaborate each concept in detail -• Identify implementation requirements and dependencies -• Analyze potential challenges and propose mitigations -• Suggest proof-of-concept approach -• Define success metrics - -## Deliverables -Write to: ${sessionFolder}/ideas/{idea-slug}.md for each selected idea - -## Success Criteria -- [ ] Each idea has detailed breakdown -- [ ] Technical requirements documented -- [ ] Risk analysis with mitigations -` -}) - -const deepDiveResult = wait({ ids: [perspectiveAgent], timeout_ms: 600000 }) -``` - -**Option B: Spawn New Deep-Dive Agent** (if prior agents closed) - -```javascript -const deepDiveAgent = spawn_agent({ - agent_type: "cli_explore_agent", - message: ` -## TASK ASSIGNMENT - -### MANDATORY FIRST STEPS (Agent Execute) -1. Read: ${sessionFolder}/perspectives.json (prior findings) -2. 
Run: `ccw spec load --category "exploration planning"` - ---- - -## Deep Dive Context -Topic: ${idea_or_topic} -Selected Ideas: ${selectedIdeas.map(i => i.title).join(', ')} - -## Deep Dive Tasks -${selectedIdeas.map(idea => ` -### ${idea.title} -• Elaborate the core concept in detail -• Identify implementation requirements -• List potential challenges and mitigations -• Suggest proof-of-concept approach -• Define success metrics -`).join('\n')} - -## Deliverables -Write: ${sessionFolder}/ideas/{idea-slug}.md for each idea - -Include for each: -- Detailed concept description -- Technical requirements list -- Risk/challenge matrix -- MVP definition -- Success criteria -` -}) - -const result = wait({ ids: [deepDiveAgent], timeout_ms: 600000 }) -close_agent({ id: deepDiveAgent }) -``` - -### Step 3.3: Devil's Advocate Challenge (spawn new agent) - -When user selects "challenge", spawn a dedicated challenge agent. - -```javascript -const challengeAgent = spawn_agent({ - agent_type: "cli_explore_agent", - message: ` -## TASK ASSIGNMENT - -### MANDATORY FIRST STEPS (Agent Execute) -1. Read: ${sessionFolder}/perspectives.json (ideas to challenge) - ---- - -## Challenge Context -Topic: ${idea_or_topic} -Ideas to Challenge: -${selectedIdeas.map((idea, i) => `${i+1}. ${idea.title}: ${idea.description}`).join('\n')} - -## Devil's Advocate Tasks -• For each idea, identify 3 strongest objections -• Challenge core assumptions -• Identify scenarios where this fails -• Consider competitive/alternative solutions -• Assess whether this solves the right problem -• Rate survivability after challenge (1-5) - -## Deliverables -Return structured challenge results: -{ - challenges: [{ - idea: "...", - objections: [], - challenged_assumptions: [], - failure_scenarios: [], - alternatives: [], - survivability_rating: 1-5, - strengthened_version: "..." 
- }] +if (!autoYes) { + const feedback = request_user_input({ + questions: [{ + header: "Brainstorm Direction", + id: "direction", + question: `Brainstorm round ${round}: What would you like to do next?`, + options: [ + { label: "Deep Dive", description: "Explore selected ideas in detail" }, + { label: "Diverge More", description: "Generate more ideas from different angles" }, + { label: "Challenge", description: "Devil's advocate — test ideas critically" }, + { label: "Merge Ideas", description: "Combine complementary ideas" }, + { label: "Ready to Converge", description: "Sufficient ideas, proceed to synthesis" } + ] + }] + }) } - -## Success Criteria -- [ ] 3+ objections per idea -- [ ] Assumptions explicitly challenged -- [ ] Survivability ratings assigned -` -}) - -const result = wait({ ids: [challengeAgent], timeout_ms: 300000 }) -close_agent({ id: challengeAgent }) ``` -### Step 3.4: Merge Multiple Ideas (spawn merge agent) +##### Step 3.2: Process User Response -When user selects "merge", synthesize complementary ideas. +**Recording Checkpoint**: Regardless of option selected, MUST record to brainstorm.md: +- User's original choice and expression +- Impact on brainstorming direction +- If direction changed, record a full Decision Record + +| Response | Action | +|----------|--------| +| **Deep Dive** | Ask which ideas to explore. Inline analysis: elaborate concept, identify requirements/dependencies, analyze challenges, suggest PoC approach, define success metrics. Write to `ideas/{idea-slug}.md`. | +| **Diverge More** | Inline analysis with different angles: alternative framings, cross-domain inspiration, what-if scenarios, constraint relaxation. Generate new ideas. | +| **Challenge** | Inline devil's advocate analysis: 3 strongest objections per idea, challenge assumptions, failure scenarios, competitive alternatives, survivability rating (1-5). | +| **Merge Ideas** | Ask which ideas to merge. 
Inline synthesis: identify complementary elements, resolve contradictions, create unified concept, preserve strengths. Write to `ideas/merged-idea-{n}.md`. | +| **Ready to Converge** | Record why concluding. Exit loop → Phase 4. | + +##### Step 3.3: Deep Dive on Selected Ideas + +When user selects "deep dive", provide comprehensive inline analysis: ```javascript -const mergeAgent = spawn_agent({ - agent_type: "cli_explore_agent", - message: ` -## TASK ASSIGNMENT - -### MANDATORY FIRST STEPS (Agent Execute) -1. Read: ${sessionFolder}/perspectives.json (source ideas) - ---- - -## Merge Context -Topic: ${idea_or_topic} -Ideas to Merge: -${selectedIdeas.map((idea, i) => ` -${i+1}. ${idea.title} (${idea.source_perspective}) - ${idea.description} - Strengths: ${idea.strengths?.join(', ') || 'N/A'} -`).join('\n')} - -## Merge Tasks -• Identify complementary elements -• Resolve contradictions -• Create unified concept -• Preserve key strengths from each -• Describe the merged solution -• Assess viability of merged idea - -## Deliverables -Write to: ${sessionFolder}/ideas/merged-idea-{n}.md - -Include: -- Merged concept description -- Elements taken from each source idea -- Contradictions resolved (or noted as tradeoffs) -- New combined strengths -- Implementation considerations - -## Success Criteria -- [ ] Coherent merged concept -- [ ] Source attributions clear -- [ ] Contradictions addressed -` +// For each selected idea, analyze inline +selectedIdeas.forEach(idea => { + const deepDive = { + title: idea.title, + detailed_description: '...', // Elaborated concept + technical_requirements: [...], // Implementation needs + dependencies: [...], // What this depends on + challenges: [ // Risk/challenge matrix + { challenge: '...', severity: 'high|medium|low', mitigation: '...' } + ], + poc_approach: '...', // Proof-of-concept suggestion + success_metrics: [...], // How to measure success + source_perspectives: [...] 
// Which perspectives contributed + } + Write(`${sessionFolder}/ideas/${ideaSlug}.md`, formatIdeaMarkdown(deepDive)) }) - -const result = wait({ ids: [mergeAgent], timeout_ms: 300000 }) -close_agent({ id: mergeAgent }) ``` -### Step 3.5: Document Each Round +##### Step 3.4: Devil's Advocate Challenge -Update brainstorm.md with results from each refinement round. +When user selects "challenge", perform inline critical analysis: -**Round N Sections** (Rounds 3-6): +```javascript +selectedIdeas.forEach(idea => { + const challenge = { + idea: idea.title, + objections: [...], // 3+ strongest objections + challenged_assumptions: [...], // Core assumptions tested + failure_scenarios: [...], // When/how this fails + alternatives: [...], // Competitive/alternative solutions + survivability_rating: 1-5, // How well idea survives challenge + strengthened_version: '...' // Improved version post-challenge + } + // Record in brainstorm.md +}) +``` -| Section | Content | -|---------|---------| -| User Direction | Action taken and ideas selected | -| Findings | New findings and clarifications | -| Idea Updates | Changes to idea scores and status | -| Insights | Key learnings and realizations | -| Next Directions | Suggested follow-up investigations | +##### Step 3.5: Merge Multiple Ideas -**Documentation Standards**: -- Clear timestamps and action taken -- Evidence-based findings with code references -- Updated idea rankings and status changes -- Explicit tracking of assumption changes -- Organized by exploration vector +When user selects "merge", synthesize inline: + +```javascript +const merged = { + title: '...', // New merged concept name + description: '...', // Unified concept description + source_ideas: [...], // Which ideas were merged + elements_from_each: [...], // What was taken from each source + contradictions_resolved: [...], // How conflicts were handled + combined_strengths: [...], // New combined advantages + implementation_considerations: '...' 
+} +Write(`${sessionFolder}/ideas/merged-idea-${n}.md`, formatMergedIdeaMarkdown(merged)) +``` + +##### Step 3.6: Document Each Round + +Update brainstorm.md using the [Round Documentation Pattern](#round-documentation-pattern). + +**Append** to Thought Evolution Timeline: User Direction, Decision Log, Ideas Generated/Updated, Analysis Results, Challenged Assumptions, Open Items, Narrative Synthesis. + +**Replace** (not append): + +| Section | Update Rule | +|---------|-------------| +| `## Current Ideas` | Overwrite with latest ranked idea list | +| `## Table of Contents` | Update links to include new Round N sections | **Success Criteria**: - User feedback processed for each round -- `brainstorm.md` updated with all refinement rounds +- brainstorm.md updated with all refinement rounds - Ideas in `ideas/` folder for selected deep-dives -- All spawned agents closed properly - Exit condition reached (user selects converge or max rounds) ---- - -## Phase 4: Convergence & Crystallization +### Phase 4: Convergence & Crystallization **Objective**: Synthesize final ideas, generate conclusions and recommendations, and offer next steps. -### Step 4.1: Consolidate Insights +##### Step 4.1: Consolidate Insights -Extract and synthesize all findings from refinement rounds into final conclusions. 
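Consolidation starts by ranking the accumulated ideas to select the top 5. The skill leaves the ranking heuristic open; the sketch below uses a hypothetical equal-weight composite of score, novelty, and feasibility, which is only one possible choice:

```javascript
// Rank ideas for synthesis (illustrative weighting — adjust per session).
// Each idea carries score (1-10), novelty (1-5), and feasibility (1-5).
function rankIdeas(ideas, topN = 5) {
  return [...ideas]
    .map(idea => ({
      ...idea,
      // Simple additive composite; a real session may weight feasibility higher.
      composite: idea.score + idea.novelty + idea.feasibility
    }))
    .sort((a, b) => b.composite - a.composite)
    .slice(0, topN)
}
```

Whatever weighting is used, record it in the Decision Log so the final ranking is traceable.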
+```javascript +const synthesis = { + session_id: sessionId, + topic, + completed: getUtc8ISOString(), + total_rounds: roundCount, + top_ideas: [ // Top 5 ranked ideas + { + title: '...', description: '...', + source_perspective: '...', + score: 1-10, // Final viability score + novelty: 1-5, // Innovation rating + feasibility: 1-5, // Implementation feasibility + key_strengths: [...], + main_challenges: [...], + next_steps: [...], + review_status: 'accepted|modified|rejected|pending' + } + ], + parked_ideas: [...], // Ideas for future consideration + key_insights: [...], // Key learnings from brainstorming + recommendations: { + primary: '...', // Best path forward + alternatives: [...] // Other viable options + }, + follow_up: [ // Suggested next steps + { type: 'implement|research|validate', summary: '...' } + ], + decision_trail: [ // Consolidated from all phases + { round: 1, decision: '...', context: '...', chosen: '...', reason: '...', impact: '...' } + ] +} +Write(`${sessionFolder}/synthesis.json`, JSON.stringify(synthesis, null, 2)) +``` -**Consolidation Activities**: -1. Review all refinement rounds and accumulated findings -2. Rank ideas by score, feasibility, and impact -3. Identify top 5 viable ideas -4. Extract key learnings and insights -5. 
Generate recommendations with rationale +##### Step 4.2: Final brainstorm.md Update -**synthesis.json Structure**: -- `session_id`: Session identifier -- `topic`: Original idea/topic -- `completed`: Completion timestamp -- `total_rounds`: Number of refinement rounds -- `top_ideas[]`: Top 5 ranked ideas with scores and next steps -- `parked_ideas[]`: Ideas parked for future consideration -- `key_insights[]`: Key learnings from brainstorming process -- `recommendations`: Primary recommendation and alternatives -- `follow_up[]`: Suggested next steps (implementation, research, validation) - -**Idea Format**: -- `title`: Clear, descriptive title -- `description`: Complete concept description -- `source_perspective`: Which perspective(s) contributed -- `score`: Final viability score (1-10) -- `novelty`: Novelty/innovation rating (1-5) -- `feasibility`: Implementation feasibility (1-5) -- `key_strengths`: Main advantages and benefits -- `main_challenges`: Key challenges and limitations -- `next_steps`: Recommended actions to pursue - -### Step 4.2: Final brainstorm.md Update - -Append conclusions section and finalize the thinking document. - -**Synthesis & Conclusions Section**: +**Synthesis & Conclusions**: - **Executive Summary**: High-level overview of brainstorming results - **Top Ideas**: Ranked list with descriptions and strengths/challenges - **Primary Recommendation**: Best path forward with clear rationale @@ -783,24 +678,67 @@ Append conclusions section and finalize the thinking document. 
- **Parked Ideas**: Future considerations with potential triggers - **Key Insights**: Important learnings from the process -**Session Statistics**: -- Total refinement rounds completed -- Ideas generated and evaluated -- Ideas survived challenges -- Perspectives used (creative, pragmatic, systematic) -- Artifacts generated +**Current Ideas (Final)**: -### Step 4.3: Post-Completion Options +| Subsection | Content | +|------------|---------| +| Top Ideas | Ranked by score with strengths/challenges | +| Idea Evolution | How top ideas developed across rounds | +| Key Insights | Valuable learnings for future reference | -Offer user follow-up actions based on brainstorming results. +**Decision Trail**: + +| Subsection | Content | +|------------|---------| +| Critical Decisions | Pivotal decisions that shaped the outcome | +| Direction Changes | Timeline of scope/focus adjustments with rationale | +| Trade-offs Made | Key trade-offs and why certain paths were chosen | + +**Session Statistics**: Total rounds, ideas generated, ideas survived challenges, perspectives used, artifacts generated. + +##### Step 4.3: Interactive Top-Idea Review (skip in auto mode) + +Walk through top ideas one-by-one (ordered by score): + +```javascript +for (const [index, idea] of rankedIdeas.entries()) { + const review = request_user_input({ + questions: [{ + header: `Idea #${index + 1}`, + id: `idea_${index + 1}`, + question: `Idea #${index + 1}: "${idea.title}" (score: ${idea.score}, novelty: ${idea.novelty}, feasibility: ${idea.feasibility}). 
Your decision:`,
+      options: [
+        { label: "Accept (Recommended)", description: "Keep this idea in final recommendations" },
+        { label: "Modify", description: "Adjust scope, description, or priority" },
+        { label: "Reject", description: "Remove from final recommendations" },
+        { label: "Accept All Remaining", description: "Accept this idea and all remaining ideas" }
+      ]
+    }]
+  })
+  // Accept → "accepted" | Modify → gather text → "modified" | Reject → gather reason → "rejected"
+  // Accept All Remaining → mark all remaining as "accepted", break loop
+  // Record review decision to brainstorm.md Decision Log + update synthesis.json
+}
+```
+
+**Review Summary** (append to brainstorm.md):
+```markdown
+### Top Idea Review Summary
+| # | Idea | Score | Novelty | Feasibility | Review Status | Notes |
+|---|------|-------|---------|-------------|---------------|-------|
+| 1 | [title] | 8 | 4 | 3 | Accepted | |
+| 2 | [title] | 7 | 5 | 2 | Modified | [notes] |
+| 3 | [title] | 6 | 3 | 4 | Rejected | [reason] |
+```
+
+##### Step 4.4: Post-Completion Options

**Available Options** (this skill is brainstorming-only — NEVER auto-launch other skills):

| Option | Purpose | Action |
|--------|---------|--------|
-| **显示后续命令** | Show available next-step commands | Display command list for user to manually run |
-| **导出分享** | Generate shareable report | Create formatted report document |
-| **完成** | No further action | End workflow |
+| **Show Next-Step Commands** | Show available commands | Display command list for user to manually run |
+| **Export Report** | Generate shareable report | Create formatted report document |
+| **Done** | No further action | End workflow |

**Next-step commands to display** (user runs manually, NOT auto-launched):
- `/workflow-lite-plan "..."` → Generate implementation plan
- `/workflow:analyze-with-file "..."` → Analyze top idea in detail **Success Criteria**: -- `synthesis.json` created with complete synthesis -- `brainstorm.md` finalized with all conclusions +- synthesis.json created with complete synthesis +- brainstorm.md finalized with all conclusions - User offered meaningful next step options - Session complete and all artifacts available ---- +## Templates -## Configuration +### Round Documentation Pattern -### Brainstorm Dimensions Reference +Each round follows this structure in brainstorm.md: -Dimensions guide brainstorming scope and focus: +```markdown +### Round N - [DeepDive|Diverge|Challenge|Merge] (timestamp) -| Dimension | Keywords | Best For | -|-----------|----------|----------| -| technical | 技术, technical, implementation, code | Implementation approaches | -| ux | 用户, user, experience, UI | User-facing design ideas | -| business | 业务, business, value | Business model innovations | -| innovation | 创新, innovation, novel | Breakthrough ideas | -| feasibility | 可行, feasible, practical | Realistic approaches | -| scalability | 扩展, scale, growth | Large-scale solutions | -| security | 安全, security, risk | Security considerations | +#### User Input +What the user indicated they wanted to focus on + +#### Decision Log + + +#### Ideas Generated +New ideas from this round with ratings + +#### Analysis Results +Detailed findings from this round's analysis +- Finding 1 (evidence: file:line or rationale) +- Finding 2 (evidence: file:line or rationale) + +#### Challenged Assumptions +- ~~Previous assumption~~ → New understanding + - Reason: Why the assumption was wrong + +#### Open Items +Remaining questions or exploration directions + +#### Narrative Synthesis + +``` + +### brainstorm.md Evolution Summary + +- **Header**: Session ID, topic, start time, dimensions, mode +- **Session Context**: Focus areas, perspectives, constraints +- **Exploration Vectors**: Key questions guiding exploration +- **Initial Decisions**: Why these 
perspectives and focus areas were selected +- **Thought Evolution Timeline**: Round-by-round findings + - Round 1: Exploration Results + Decision Log + Narrative Synthesis + - Round 2-N: Current Ideas Summary + User feedback + direction adjustments + new ideas + Decision Log + Narrative Synthesis +- **Decision Trail**: Consolidated critical decisions across all rounds +- **Synthesis & Conclusions**: Summary, top ideas, recommendations +- **Current Ideas (Final)**: Consolidated ranked ideas +- **Session Statistics**: Rounds completed, ideas generated, artifacts produced + +## Reference + +### Output Structure + +``` +{projectRoot}/.workflow/.brainstorm/BS-{slug}-{date}/ +├── brainstorm.md # Complete thought evolution timeline +├── exploration-codebase.json # Phase 2: Codebase context +├── perspectives/ # Phase 2: Individual perspective outputs +│ ├── creative.json +│ ├── pragmatic.json +│ └── systematic.json +├── perspectives.json # Phase 2: Aggregated findings with synthesis +├── synthesis.json # Phase 4: Final synthesis +└── ideas/ # Phase 3: Individual idea deep-dives + ├── idea-1.md + ├── idea-2.md + └── merged-idea-1.md +``` + +| File | Phase | Description | +|------|-------|-------------| +| `brainstorm.md` | 1-4 | Session metadata → thought evolution → conclusions | +| `exploration-codebase.json` | 2 | Codebase context: relevant files, patterns, constraints | +| `perspectives/*.json` | 2 | Per-perspective idea generation results | +| `perspectives.json` | 2 | Aggregated findings with cross-perspective synthesis | +| `ideas/*.md` | 3 | Individual idea deep-dives and merged ideas | +| `synthesis.json` | 4 | Final synthesis: top ideas, recommendations, insights | + +### Brainstorm Dimensions + +| Dimension | Keywords | Description | +|-----------|----------|-------------| +| technical | 技术, technical, implementation, code, 实现, architecture | Implementation approaches | +| ux | 用户, user, experience, UX, UI, 体验, interaction | User-facing design ideas | +| business 
| 业务, business, value, ROI, 价值, market | Business model innovations | +| innovation | 创新, innovation, novel, creative, 新颖 | Breakthrough ideas | +| feasibility | 可行, feasible, practical, realistic, 实际 | Realistic approaches | +| scalability | 扩展, scale, growth, performance, 性能 | Large-scale solutions | +| security | 安全, security, risk, protection, 风险 | Security considerations | + +### Brainstorm Perspectives + +| Perspective | Focus | Best For | +|-------------|-------|----------| +| **Creative** | Innovation, cross-domain inspiration, challenging assumptions | Generating novel and surprising ideas | +| **Pragmatic** | Implementation feasibility, effort estimates, blockers | Reality-checking ideas | +| **Systematic** | Problem decomposition, patterns, scalability, architecture | Organizing and structuring solutions | ### Brainstorm Modes -| Mode | Duration | Intensity | Subagents | -|------|----------|-----------|-----------| -| Creative | 15-20 min | High novelty | 1 agent, short timeout | -| Balanced | 30-60 min | Mixed | 3 parallel agents | -| Deep | 1-2+ hours | Comprehensive | 3 parallel agents + deep refinement | +| Mode | Intensity | Perspectives | Description | +|------|-----------|-------------|-------------| +| Creative | High novelty | 1 perspective | Fast, focus on novel ideas | +| Balanced | Mixed | 3 perspectives | Moderate, balanced exploration (default) | +| Deep | Comprehensive | 3 perspectives + deep refinement | Thorough multi-round investigation | ### Collaboration Patterns | Pattern | Usage | Description | |---------|-------|-------------| -| Parallel Divergence | New topic | All perspectives explore simultaneously via parallel subagents | -| Sequential Deep-Dive | Promising idea | `send_input` to one agent for elaboration, others critique via new agents | -| Debate Mode | Controversial approach | Spawn opposing agents to argue for/against | -| Synthesis Mode | Ready to decide | Spawn synthesis agent combining insights from all perspectives | 
+| Parallel Divergence | New topic | All perspectives explored serially for comprehensive coverage | +| Sequential Deep-Dive | Promising idea | One perspective elaborates, others critique | +| Debate Mode | Controversial approach | Inline analysis arguing for/against | +| Synthesis Mode | Ready to decide | Inline synthesis combining insights from all perspectives | ### Context Overflow Protection -**Per-Agent Limits**: +**Per-Perspective Limits**: - Main analysis output: < 3000 words - Sub-document (if any): < 2000 words each - Maximum sub-documents: 5 per perspective @@ -860,143 +877,33 @@ Dimensions guide brainstorming scope and focus: - Large ideas automatically split into separate idea documents in ideas/ folder **Recovery Steps**: -1. Check agent outputs for truncation or overflow +1. Check outputs for truncation or overflow 2. Reduce scope: fewer perspectives or simpler topic 3. Use structured brainstorm mode for more focused output 4. Split complex topics into multiple sessions ---- - -## Error Handling & Recovery +### Error Handling | Situation | Action | Recovery | |-----------|--------|----------| -| **Subagent timeout** | Check `results.timed_out`, continue `wait()` or use partial results | Reduce scope, use 2 perspectives instead of 3 | -| **Agent closed prematurely** | Cannot recover closed agent | Spawn new agent with prior context from perspectives.json | -| **Parallel agent partial failure** | Some perspectives complete, some fail | Use completed results, note gaps in synthesis | -| **send_input to closed agent** | Error: agent not found | Spawn new agent with prior findings as context | -| **No good ideas** | Reframe problem or adjust constraints | Try new exploration angles | -| **User disengaged** | Summarize progress and offer break | Save state, keep agents alive for resume | -| **Perspectives conflict** | Present as tradeoff options | Let user select preferred direction | -| **Max rounds reached** | Force synthesis phase | Highlight unresolved 
questions | -| **Session folder conflict** | Append timestamp suffix | Create unique folder | - -### Codex-Specific Error Patterns - -```javascript -// Safe parallel execution with error handling -try { - const agentIds = perspectives.map(p => spawn_agent({ agent_type: "cli_explore_agent", message: buildPrompt(p) })) - - const results = wait({ ids: agentIds, timeout_ms: 600000 }) - - if (results.timed_out) { - // Handle partial completion - const completed = agentIds.filter(id => results.status[id].completed) - const pending = agentIds.filter(id => !results.status[id].completed) - - // Option 1: Continue waiting for pending - // const moreResults = wait({ ids: pending, timeout_ms: 300000 }) - - // Option 2: Use partial results - // processPartialResults(completed, results) - } - - // Process all results - processResults(agentIds, results) - -} finally { - // ALWAYS cleanup, even on errors - agentIds.forEach(id => { - try { close_agent({ id }) } catch (e) { /* ignore */ } - }) -} -``` - ---- - -## Iteration Patterns - -### First Brainstorm Session (Parallel Mode) - -``` -User initiates: TOPIC="idea or topic" - ├─ No session exists → New session mode - ├─ Parse topic and identify dimensions - ├─ Scope with user (focus, depth, mode) - ├─ Create brainstorm.md - ├─ Expand seed into vectors - ├─ Gather codebase context - │ - ├─ Execute parallel perspective exploration: - │ ├─ spawn_agent × 3 (Creative + Pragmatic + Systematic) - │ ├─ wait({ ids: [...] 
}) ← TRUE PARALLELISM - │ └─ close_agent × 3 - │ - ├─ Aggregate findings with synthesis - └─ Enter multi-round refinement loop -``` - -### Continue Existing Session - -``` -User resumes: TOPIC="same topic" - ├─ Session exists → Continue mode - ├─ Load previous brainstorm.md - ├─ Load perspectives.json - └─ Resume from last refinement round -``` - -### Refinement Loop (Rounds 3-6) - -``` -Each round: - ├─ Present current findings and top ideas - ├─ Gather user feedback (deep dive/diverge/challenge/merge/converge) - ├─ Process response: - │ ├─ Deep Dive → send_input to active agent OR spawn deep-dive agent - │ ├─ Diverge → spawn new agent with different angles - │ ├─ Challenge → spawn challenge agent (devil's advocate) - │ ├─ Merge → spawn merge agent to synthesize - │ └─ Converge → Exit loop for synthesis - ├─ wait({ ids: [...] }) for result - ├─ Update brainstorm.md - └─ Repeat until user selects converge or max rounds reached -``` - -### Agent Lifecycle Management - -``` -Subagent lifecycle: - ├─ spawn_agent({ message }) → Create with role path + task - ├─ wait({ ids, timeout_ms }) → Get results (ONLY way to get output) - ├─ send_input({ id, message }) → Continue interaction (if not closed) - └─ close_agent({ id }) → Cleanup (MUST do, cannot recover) - -Key rules: - ├─ NEVER close before you're done with an agent - ├─ ALWAYS use wait() to get results, NOT close_agent() - ├─ Batch wait for parallel agents: wait({ ids: [a, b, c] }) - └─ Consider keeping agents alive for send_input during refinement -``` - -### Completion Flow - -``` -Final synthesis: - ├─ Consolidate all findings into top ideas - ├─ Generate synthesis.json - ├─ Update brainstorm.md with final conclusions - ├─ close_agent for any remaining active agents - ├─ Offer follow-up options - └─ Archive session artifacts -``` - ---- +| No codebase detected | Normal flow, pure topic brainstorming | Proceed without exploration-codebase.json | +| Codebase search fails | Continue with available context | Note 
limitation in brainstorm.md | +| No good ideas | Reframe problem or adjust constraints | Try new exploration angles | +| Perspectives conflict | Present as tradeoff options | Let user select preferred direction | +| Max rounds reached (6) | Force synthesis phase | Highlight unresolved questions | +| Session folder conflict | Append timestamp suffix | Create unique folder | +| User timeout | Save state, show resume command | Use `--continue` to resume | ## Best Practices -### Before Starting Brainstorm +### Core Principles + +1. **No code modifications**: This skill is strictly read-only. It produces analysis and idea documents but NEVER modifies source code. +2. **Record Decisions Immediately**: Capture decisions as they happen using the Decision Record format +3. **Evidence-Based**: Ideas referencing codebase patterns should include file:line evidence +4. **Embrace Conflicts**: Perspective conflicts often reveal important tradeoffs + +### Before Starting 1. **Clear Topic Definition**: Detailed topics lead to better dimension identification 2. **User Context**: Understanding preferences helps guide brainstorming intensity @@ -1004,30 +911,33 @@ Final synthesis: ### During Brainstorming -1. **Review Perspectives**: Check all three perspectives before refinement rounds +1. **Review Perspectives**: Check all perspective results before refinement rounds 2. **Document Assumptions**: Track what you think is true for correction later 3. **Use Continue Mode**: Resume sessions to build on previous exploration -4. **Embrace Conflicts**: Perspective conflicts often reveal important tradeoffs -5. **Iterate Thoughtfully**: Each refinement round should meaningfully advance ideas - -### Codex Subagent Best Practices - -1. **Agent Type, Not Path**: Use `agent_type` parameter in spawn_agent, not manual file path reading -2. **Parallel for Perspectives**: Use batch spawn + wait for 3 perspective agents -3. 
**Delay close_agent for Refinement**: Keep perspective agents alive for `send_input` reuse -4. **Batch wait**: Use `wait({ ids: [a, b, c] })` for parallel agents, not sequential waits -5. **Handle Timeouts**: Check `results.timed_out` and decide: continue waiting or use partial results -6. **Explicit Cleanup**: Always `close_agent` when done, even on errors (use try/finally pattern) -7. **send_input vs spawn**: Prefer `send_input` for same-context deep-dive, `spawn` for new exploration angles +4. **Iterate Thoughtfully**: Each refinement round should meaningfully advance ideas +5. **Track Idea Evolution**: Document how ideas changed across rounds ### Documentation Practices -1. **Evidence-Based**: Every idea should reference codebase patterns or feasibility analysis -2. **Perspective Diversity**: Capture viewpoints from all three perspectives -3. **Timeline Clarity**: Use clear timestamps for traceability -4. **Evolution Tracking**: Document how ideas changed and evolved -5. **Action Items**: Generate specific, implementable recommendations -6. **Synthesis Quality**: Ensure convergent/conflicting themes are clearly documented +1. **Timeline Clarity**: Use clear timestamps for traceability +2. **Evolution Tracking**: Document how ideas developed and morphed +3. **Multi-Perspective Synthesis**: Document convergent/conflicting themes +4. 
**Action Items**: Generate specific, implementable recommendations + +## When to Use + +**Use brainstorm-with-file when:** +- Generating new ideas and solutions for a topic +- Need multi-perspective exploration of possibilities +- Want documented thought evolution showing how ideas develop +- Exploring creative solutions before committing to implementation +- Need diverge-converge cycles to refine ideas + +**Consider alternatives when:** +- Analyzing existing code/architecture → use `analyze-with-file` +- Specific bug diagnosis needed → use `debug-with-file` +- Complex planning with requirements → use `collaborative-plan-with-file` +- Ready to implement → use `lite-plan` --- diff --git a/.codex/skills/collaborative-plan-with-file/SKILL.md b/.codex/skills/collaborative-plan-with-file/SKILL.md deleted file mode 100644 index a580b1ad..00000000 --- a/.codex/skills/collaborative-plan-with-file/SKILL.md +++ /dev/null @@ -1,830 +0,0 @@ ---- -name: collaborative-plan-with-file -description: Serial collaborative planning with Plan Note - Multi-domain serial task generation, unified plan-note.md, conflict detection. No agent delegation. -argument-hint: "[-y|--yes] [--max-domains=5]" ---- - -# Collaborative-Plan-With-File Workflow - -## Quick Start - -Serial collaborative planning workflow using **Plan Note** architecture. Analyzes requirements, identifies sub-domains, generates detailed plans per domain serially, and detects conflicts across domains. 
- -```bash -# Basic usage -/codex:collaborative-plan-with-file "Implement real-time notification system" - -# With options -/codex:collaborative-plan-with-file "Refactor authentication module" --max-domains=4 -/codex:collaborative-plan-with-file "Add payment gateway support" -y -``` - -**Core workflow**: Understand → Template → Serial Domain Planning → Conflict Detection → Completion - -**Key features**: -- **plan-note.md**: Shared collaborative document with pre-allocated sections per domain -- **Serial domain planning**: Each sub-domain planned sequentially with full codebase context -- **Conflict detection**: Automatic file, dependency, and strategy conflict scanning -- **No merge needed**: Pre-allocated sections eliminate merge conflicts - -## Auto Mode - -When `--yes` or `-y`: Auto-approve splits, skip confirmations. - -## Overview - -This workflow enables structured planning through sequential phases: - -1. **Understanding & Template** — Analyze requirements, identify sub-domains, create plan-note.md template -2. **Serial Domain Planning** — Plan each sub-domain sequentially using direct search and analysis -3. **Conflict Detection** — Scan plan-note.md for conflicts across all domains -4. **Completion** — Generate human-readable plan.md summary - -The key innovation is the **Plan Note** architecture — a shared collaborative document with pre-allocated sections per sub-domain, eliminating merge conflicts even in serial execution. 
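To make the pre-allocation concrete, a skeleton generator might look like the following sketch (function and helper names here are illustrative, not part of the skill contract):

```javascript
// Illustrative sketch: build a plan-note.md skeleton with one pre-allocated
// task-pool section and one evidence section per sub-domain.
function buildPlanNoteSkeleton(sessionId, requirement, domains) {
  const frontmatter = [
    '---',
    `session_id: ${sessionId}`,
    `original_requirement: "${requirement}"`,
    `sub_domains: [${domains.map(d => `"${d}"`).join(', ')}]`,
    'status: planning',
    '---'
  ].join('\n')
  const sections = domains.map((d, i) => {
    const lo = i * 100 + 1        // non-overlapping 100-ID block per domain
    const hi = lo + 99
    return [
      `## 任务池 - ${d}`,
      `<!-- reserved: TASK-${String(lo).padStart(3, '0')} .. TASK-${hi} -->`,
      `## 上下文证据 - ${d}`
    ].join('\n\n')
  }).join('\n\n')
  return `${frontmatter}\n\n## 需求理解\n\n${sections}\n`
}
```

Because every domain writes only inside its own pre-created headings, serial planners never touch the same region of the note.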
- -``` -┌─────────────────────────────────────────────────────────────────────────┐ -│ PLAN NOTE COLLABORATIVE PLANNING │ -├─────────────────────────────────────────────────────────────────────────┤ -│ │ -│ Phase 1: Understanding & Template Creation │ -│ ├─ Analyze requirements (inline search & analysis) │ -│ ├─ Identify 2-5 sub-domains (focus areas) │ -│ ├─ Create plan-note.md with pre-allocated sections │ -│ └─ Assign TASK ID ranges (no conflicts) │ -│ │ -│ Phase 2: Serial Domain Planning │ -│ ┌──────────────┐ │ -│ │ Domain 1 │→ Explore codebase → Generate .task/TASK-*.json │ -│ │ Section 1 │→ Fill task pool + evidence in plan-note.md │ -│ └──────┬───────┘ │ -│ ┌──────▼───────┐ │ -│ │ Domain 2 │→ Explore codebase → Generate .task/TASK-*.json │ -│ │ Section 2 │→ Fill task pool + evidence in plan-note.md │ -│ └──────┬───────┘ │ -│ ┌──────▼───────┐ │ -│ │ Domain N │→ ... │ -│ └──────────────┘ │ -│ │ -│ Phase 3: Conflict Detection (Single Source) │ -│ ├─ Parse plan-note.md (all sections) │ -│ ├─ Detect file/dependency/strategy conflicts │ -│ └─ Update plan-note.md conflict section │ -│ │ -│ Phase 4: Completion (No Merge) │ -│ ├─ Collect domain .task/*.json → session .task/*.json │ -│ ├─ Generate plan.md (human-readable) │ -│ └─ Ready for execution │ -│ │ -└─────────────────────────────────────────────────────────────────────────┘ -``` - -## Output Structure - -> **Schema**: `cat ~/.ccw/workflows/cli-templates/schemas/task-schema.json` - -``` -{projectRoot}/.workflow/.planning/CPLAN-{slug}-{date}/ -├── plan-note.md # ⭐ Core: Requirements + Tasks + Conflicts -├── requirement-analysis.json # Phase 1: Sub-domain assignments -├── domains/ # Phase 2: Per-domain plans -│ ├── {domain-1}/ -│ │ └── .task/ # Per-domain task JSON files -│ │ ├── TASK-001.json -│ │ └── ... -│ ├── {domain-2}/ -│ │ └── .task/ -│ │ ├── TASK-101.json -│ │ └── ... -│ └── ... 
-├── plan.json # Plan overview (plan-overview-base-schema.json) -├── .task/ # ⭐ Merged task JSON files (all domains) -│ ├── TASK-001.json -│ ├── TASK-101.json -│ └── ... -├── conflicts.json # Phase 3: Conflict report -└── plan.md # Phase 4: Human-readable summary -``` - -## Output Artifacts - -### Phase 1: Understanding & Template - -| Artifact | Purpose | -|----------|---------| -| `plan-note.md` | Collaborative template with pre-allocated task pool and evidence sections per domain | -| `requirement-analysis.json` | Sub-domain assignments, TASK ID ranges, complexity assessment | - -### Phase 2: Serial Domain Planning - -| Artifact | Purpose | -|----------|---------| -| `domains/{domain}/.task/TASK-*.json` | Task JSON files per domain (one file per task with convergence) | -| Updated `plan-note.md` | Task pool and evidence sections filled for each domain | - -### Phase 3: Conflict Detection - -| Artifact | Purpose | -|----------|---------| -| `conflicts.json` | Detected conflicts with types, severity, and resolutions | -| Updated `plan-note.md` | Conflict markers section populated | - -### Phase 4: Completion - -| Artifact | Purpose | -|----------|---------| -| `.task/TASK-*.json` | Merged task JSON files from all domains (consumable by unified-execute) | -| `plan.json` | Plan overview following plan-overview-base-schema.json | -| `plan.md` | Human-readable summary with requirements, tasks, and conflicts | - ---- - -## Implementation Details - -### Session Initialization - -##### Step 0: Initialize Session - -```javascript -const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString() - -// Detect project root -const projectRoot = Bash(`git rev-parse --show-toplevel 2>/dev/null || pwd`).trim() - -// Parse arguments -const autoMode = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y') -const maxDomainsMatch = $ARGUMENTS.match(/--max-domains=(\d+)/) -const maxDomains = maxDomainsMatch ? 
parseInt(maxDomainsMatch[1]) : 5 - -// Clean task description -const taskDescription = $ARGUMENTS - .replace(/--yes|-y|--max-domains=\d+/g, '') - .trim() - -const slug = taskDescription.toLowerCase() - .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-') - .substring(0, 30) -const dateStr = getUtc8ISOString().substring(0, 10) -const sessionId = `CPLAN-${slug}-${dateStr}` -const sessionFolder = `${projectRoot}/.workflow/.planning/${sessionId}` - -// Auto-detect continue: session folder + plan-note.md exists → continue mode -// If continue → load existing state and resume from incomplete phase -Bash(`mkdir -p ${sessionFolder}/domains`) -``` - -**Session Variables**: -- `sessionId`: Unique session identifier -- `sessionFolder`: Base directory for all artifacts -- `maxDomains`: Maximum number of sub-domains (default: 5) -- `autoMode`: Boolean for auto-confirmation - -**Auto-Detection**: If session folder exists with plan-note.md, automatically enters continue mode. - ---- - -## Phase 1: Understanding & Template Creation - -**Objective**: Analyze task requirements, identify parallelizable sub-domains, and create the plan-note.md template with pre-allocated sections. - -### Step 1.1: Analyze Task Description - -Use built-in tools directly to understand the task scope and identify sub-domains. - -**Analysis Activities**: -1. **Search for references** — Find related documentation, README files, and architecture guides - - Use: `mcp__ace-tool__search_context`, Grep, Glob, Read - - Run: `ccw spec load --category planning` (if spec system available) -2. **Extract task keywords** — Identify key terms and concepts from the task description -3. **Identify ambiguities** — List any unclear points or multiple possible interpretations -4. **Clarify with user** — If ambiguities found, use request_user_input for clarification -5. **Identify sub-domains** — Split into 2-{maxDomains} parallelizable focus areas based on task complexity -6. 
**Assess complexity** — Evaluate overall task complexity (Low/Medium/High) - -**Sub-Domain Identification Patterns**: - -| Pattern | Keywords | -|---------|----------| -| Backend API | 服务, 后端, API, 接口 | -| Frontend | 界面, 前端, UI, 视图 | -| Database | 数据, 存储, 数据库, 持久化 | -| Testing | 测试, 验证, QA | -| Infrastructure | 部署, 基础, 运维, 配置 | - -**Guideline**: Prioritize identifying latest documentation (README, design docs, architecture guides). When ambiguities exist, ask user for clarification instead of assuming interpretations. - -### Step 1.2: Create plan-note.md Template - -Generate a structured template with pre-allocated sections for each sub-domain. - -**plan-note.md Structure**: - -```yaml ---- -session_id: CPLAN-{slug}-{date} -original_requirement: "{task description}" -created_at: "{ISO timestamp}" -complexity: Low | Medium | High -sub_domains: ["{domain-1}", "{domain-2}", ...] -domain_task_id_ranges: - "{domain-1}": [1, 100] - "{domain-2}": [101, 200] -status: planning ---- -``` - -**Sections**: -- `## 需求理解` — Core objectives, key points, constraints, split strategy -- `## 任务池 - {Domain N}` — Pre-allocated task section per domain (TASK-{range}) -- `## 依赖关系` — Auto-generated after all domains complete -- `## 冲突标记` — Populated in Phase 3 -- `## 上下文证据 - {Domain N}` — Evidence section per domain - -**TASK ID Range Allocation**: Each domain receives a non-overlapping range of 100 IDs (e.g., Domain 1: TASK-001~100, Domain 2: TASK-101~200). 
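A minimal sketch of that allocation (illustrative; the real skill derives ranges during Phase 1 analysis):

```javascript
// Sketch: give each sub-domain a non-overlapping block of 100 TASK IDs.
function allocateTaskIdRanges(domains, rangeSize = 100) {
  const ranges = {}
  domains.forEach((domain, i) => {
    const start = i * rangeSize + 1          // Domain 1 → 1, Domain 2 → 101, ...
    ranges[domain] = [start, start + rangeSize - 1]
  })
  return ranges
}
```

`allocateTaskIdRanges(["backend", "frontend"])` returns `{ backend: [1, 100], frontend: [101, 200] }`, so task IDs generated in Phase 2 can never collide across domains.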
- -### Step 1.3: Generate requirement-analysis.json - -```javascript -Write(`${sessionFolder}/requirement-analysis.json`, JSON.stringify({ - session_id: sessionId, - original_requirement: taskDescription, - complexity: complexity, // Low | Medium | High - sub_domains: subDomains.map(sub => ({ - focus_area: sub.focus_area, - description: sub.description, - task_id_range: sub.task_id_range, - estimated_effort: sub.estimated_effort, - dependencies: sub.dependencies // cross-domain dependencies - })), - total_domains: subDomains.length -}, null, 2)) -``` - -**Success Criteria**: -- Latest documentation identified and referenced (if available) -- Ambiguities resolved via user clarification (if any found) -- 2-{maxDomains} clear sub-domains identified -- Each sub-domain can be planned independently -- Plan Note template includes all pre-allocated sections -- TASK ID ranges have no overlap (100 IDs per domain) -- Requirements understanding is comprehensive - ---- - -## Phase 2: Serial Sub-Domain Planning - -**Objective**: Plan each sub-domain sequentially, generating detailed plans and updating plan-note.md. - -**Execution Model**: Serial inline execution — each domain explored and planned directly using search tools, one at a time. - -### Step 2.1: User Confirmation (unless autoMode) - -Display identified sub-domains and confirm before starting. - -```javascript -if (!autoMode) { - request_user_input({ - questions: [{ - header: "确认规划", - id: "confirm", - question: `已识别 ${subDomains.length} 个子领域:\n${subDomains.map((s, i) => - `${i+1}. ${s.focus_area}: ${s.description}`).join('\n')}\n\n确认开始规划?`, - options: [ - { label: "开始规划(Recommended)", description: "逐域进行规划" }, - { label: "调整拆分", description: "修改子领域划分" }, - { label: "取消", description: "退出规划" } - ] - }] - }) -} -``` - -### Step 2.2: Serial Domain Planning - -For each sub-domain, execute the full planning cycle inline: - -```javascript -for (const sub of subDomains) { - // 1. 
Create domain directory with .task/ subfolder - Bash(`mkdir -p ${sessionFolder}/domains/${sub.focus_area}/.task`) - - // 2. Explore codebase for domain-relevant context - // Use: mcp__ace-tool__search_context, Grep, Glob, Read - // Focus on: - // - Modules/components related to this domain - // - Existing patterns to follow - // - Integration points with other domains - // - Architecture constraints - - // 3. Generate task JSON records (following task-schema.json) - const domainTasks = [ - // For each task within the assigned ID range: - { - id: `TASK-${String(sub.task_id_range[0]).padStart(3, '0')}`, - title: "...", - description: "...", // scope/goal of this task - type: "feature", // infrastructure|feature|enhancement|fix|refactor|testing - priority: "medium", // high|medium|low - effort: "medium", // small|medium|large - scope: "...", // Brief scope description - depends_on: [], // TASK-xxx references - convergence: { - criteria: ["... (testable)"], // Testable conditions - verification: "... (executable)", // Command or steps - definition_of_done: "... (business language)" - }, - files: [ // Files to modify - { - path: "...", - action: "modify", // modify|create|delete - changes: ["..."], // Change descriptions - conflict_risk: "low" // low|medium|high - } - ], - source: { - tool: "collaborative-plan-with-file", - session_id: sessionId, - original_id: `TASK-${String(sub.task_id_range[0]).padStart(3, '0')}` - } - } - // ... more tasks - ] - - // 4. Write individual task JSON files (one per task) - domainTasks.forEach(task => { - Write(`${sessionFolder}/domains/${sub.focus_area}/.task/${task.id}.json`, - JSON.stringify(task, null, 2)) - }) - - // 5. 
Sync summary to plan-note.md - // Read current plan-note.md - // Locate pre-allocated sections: - // - Task Pool: "## 任务池 - ${toTitleCase(sub.focus_area)}" - // - Evidence: "## 上下文证据 - ${toTitleCase(sub.focus_area)}" - // Fill with task summaries and evidence - // Write back plan-note.md -} -``` - -**Task Summary Format** (for plan-note.md task pool sections): - -```markdown -### TASK-{ID}: {Title} [{focus-area}] -- **状态**: pending -- **类型**: feature/fix/refactor/enhancement/testing/infrastructure -- **优先级**: high/medium/low -- **工作量**: small/medium/large -- **依赖**: TASK-xxx (if any) -- **范围**: Brief scope description -- **修改文件**: `file-path` (action): change summary -- **收敛标准**: - - criteria 1 - - criteria 2 -- **验证方式**: executable command or steps -- **完成定义**: business language definition -``` - -**Evidence Format** (for plan-note.md evidence sections): - -```markdown -- **相关文件**: file list with relevance -- **现有模式**: patterns identified -- **约束**: constraints discovered -``` - -**Domain Planning Rules**: -- Each domain modifies ONLY its pre-allocated sections in plan-note.md -- Use assigned TASK ID range exclusively -- Include convergence criteria for each task (criteria + verification + definition_of_done) -- Include `files[]` with conflict_risk assessment per file -- Reference cross-domain dependencies explicitly -- Each task record must be self-contained (can be independently consumed by unified-execute) - -### Step 2.3: Verify plan-note.md Consistency - -After all domains are planned, verify the shared document. - -**Verification Activities**: -1. Read final plan-note.md -2. Verify all task pool sections are populated -3. Verify all evidence sections are populated -4. Validate TASK ID uniqueness across all domains -5. 
Check for any section format inconsistencies - -**Success Criteria**: -- `domains/{domain}/.task/TASK-*.json` created for each domain (one file per task) -- Each task has convergence (criteria + verification + definition_of_done) -- `plan-note.md` updated with all task pools and evidence sections -- Task summaries follow consistent format -- No TASK ID overlaps across domains - ---- - -## Phase 3: Conflict Detection - -**Objective**: Analyze plan-note.md for conflicts across all domain contributions. - -### Step 3.1: Parse plan-note.md - -Extract all tasks from all "任务池" sections and domain .task/*.json files. - -```javascript -// parsePlanNote(markdown) -// - Extract YAML frontmatter between `---` markers -// - Scan for heading patterns: /^(#{2,})\s+(.+)$/ -// - Build sections array: { level, heading, start, content } -// - Return: { frontmatter, sections } - -// Also load all domain .task/*.json for detailed data -// loadDomainTasks(sessionFolder, subDomains): -// const allTasks = [] -// for (const sub of subDomains) { -// const taskDir = `${sessionFolder}/domains/${sub.focus_area}/.task` -// const taskFiles = Glob(`${taskDir}/TASK-*.json`) -// taskFiles.forEach(file => { -// allTasks.push(JSON.parse(Read(file))) -// }) -// } -// return allTasks - -// extractTasksFromSection(content, sectionHeading) -// - Match: /### (TASK-\d+):\s+(.+?)\s+\[(.+?)\]/ -// - For each: extract taskId, title, author -// - Parse details: status, type, priority, effort, depends_on, files, convergence -// - Return: array of task objects - -// parseTaskDetails(content) -// - Extract via regex: -// - /\*\*状态\*\*:\s*(.+)/ → status -// - /\*\*类型\*\*:\s*(.+)/ → type -// - /\*\*优先级\*\*:\s*(.+)/ → priority -// - /\*\*工作量\*\*:\s*(.+)/ → effort -// - /\*\*依赖\*\*:\s*(.+)/ → depends_on (extract TASK-\d+ references) -// - Extract files: /- `([^`]+)` \((\w+)\):\s*(.+)/ → path, action, change -// - Return: { status, type, priority, effort, depends_on[], files[], convergence } -``` - -### Step 3.2: 
Detect Conflicts - -Scan all tasks for three categories of conflicts. - -**Conflict Types**: - -| Type | Severity | Detection Logic | Resolution | -|------|----------|-----------------|------------| -| file_conflict | high | Same file:location modified by multiple domains | Coordinate modification order or merge changes | -| dependency_cycle | critical | Circular dependencies in task graph (DFS detection) | Remove or reorganize dependencies | -| strategy_conflict | medium | Multiple high-risk tasks in same file from different domains | Review approaches and align on single strategy | - -**Detection Functions**: - -```javascript -// detectFileConflicts(tasks) -// Build fileMap: { "file-path": [{ task_id, task_title, source_domain, changes }] } -// For each file with modifications from multiple domains: -// → conflict: type='file_conflict', severity='high' -// → include: file, tasks_involved, domains_involved, changes -// → resolution: 'Coordinate modification order or merge changes' - -// detectDependencyCycles(tasks) -// Build dependency graph: { taskId: [dependsOn_taskIds] } -// DFS with recursion stack to detect cycles: -function detectCycles(tasks) { - const graph = new Map(tasks.map(t => [t.id, t.depends_on || []])) - const visited = new Set(), inStack = new Set(), cycles = [] - function dfs(node, path) { - if (inStack.has(node)) { cycles.push([...path, node].join(' → ')); return } - if (visited.has(node)) return - visited.add(node); inStack.add(node) - ;(graph.get(node) || []).forEach(dep => dfs(dep, [...path, node])) - inStack.delete(node) - } - tasks.forEach(t => { if (!visited.has(t.id)) dfs(t.id, []) }) - return cycles -} - -// detectStrategyConflicts(tasks) -// Group tasks by files they modify (from task.files[].path) -// For each file with tasks from multiple domains: -// Filter for tasks with files[].conflict_risk === 'high' or 'medium' -// If >1 high-risk from different domains: -// → conflict: type='strategy_conflict', severity='medium' -// → 
resolution: 'Review approaches and align on single strategy' -``` - -### Step 3.3: Generate Conflict Artifacts - -Write conflict results and update plan-note.md. - -```javascript -// 1. Write conflicts.json -Write(`${sessionFolder}/conflicts.json`, JSON.stringify({ - detected_at: getUtc8ISOString(), - total_tasks: allTasks.length, - total_domains: subDomains.length, - total_conflicts: allConflicts.length, - conflicts: allConflicts // { type, severity, tasks_involved, description, suggested_resolution } -}, null, 2)) - -// 2. Update plan-note.md "## 冲突标记" section -// generateConflictMarkdown(conflicts): -// If empty: return '✅ 无冲突检测到' -// For each conflict: -// ### CONFLICT-{padded_index}: {description} -// - **严重程度**: critical | high | medium -// - **涉及任务**: TASK-xxx, TASK-yyy -// - **涉及领域**: domain-a, domain-b -// - **问题详情**: (based on conflict type) -// - **建议解决方案**: ... -// - **决策状态**: [ ] 待解决 - -// replaceSectionContent(markdown, sectionHeading, newContent): -// Find section heading position via regex -// Find next heading of same or higher level -// Replace content between heading and next section -// If section not found: append at end -``` - -**Success Criteria**: -- All tasks extracted and analyzed -- `conflicts.json` written with detection results -- `plan-note.md` updated with conflict markers -- All conflict types checked (file, dependency, strategy) - ---- - -## Phase 4: Completion - -**Objective**: Generate human-readable plan summary and finalize workflow. - -### Step 4.1: Collect Domain .task/*.json to Session .task/ - -Copy all per-domain task JSON files into a single session-level `.task/` directory. 
-```javascript
-// Create session-level .task/ directory
-Bash(`mkdir -p ${sessionFolder}/.task`)
-
-// Collect all domain task files
-for (const sub of subDomains) {
-  const taskDir = `${sessionFolder}/domains/${sub.focus_area}/.task`
-  const taskFiles = Glob(`${taskDir}/TASK-*.json`)
-  taskFiles.forEach(file => {
-    const filename = path.basename(file)
-    // Copy domain task file to session .task/ directory
-    Bash(`cp ${file} ${sessionFolder}/.task/${filename}`)
-  })
-}
-```
-
-### Step 4.2: Generate plan.json
-
-Generate a plan overview following the plan-overview-base-schema.
-
-```javascript
-// Generate plan.json (plan-overview-base-schema)
-const allTaskFiles = Glob(`${sessionFolder}/.task/TASK-*.json`)
-const taskIds = allTaskFiles.map(f => JSON.parse(Read(f)).id).sort()
-
-// Guard: skip plan.json if no tasks generated
-if (taskIds.length === 0) {
-  console.warn('No tasks generated; skipping plan.json')
-} else {
-
-const planOverview = {
-  summary: `Collaborative plan for: ${taskDescription}`,
-  approach: `Multi-domain planning across ${subDomains.length} sub-domains: ${subDomains.map(s => s.focus_area).join(', ')}`,
-  task_ids: taskIds,
-  task_count: taskIds.length,
-  complexity: complexity,
-  recommended_execution: "Agent",
-  _metadata: {
-    timestamp: getUtc8ISOString(),
-    source: "direct-planning",
-    planning_mode: "direct",
-    plan_type: "collaborative",
-    schema_version: "2.0"
-  }
-}
-Write(`${sessionFolder}/plan.json`, JSON.stringify(planOverview, null, 2))
-
-} // end guard
-```
-
-### Step 4.3: Generate plan.md
-
-Create a human-readable summary from plan-note.md content.
-
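The note-parsing pseudocode from Phase 3 can be made concrete before rendering; a minimal runnable sketch (regexes taken from that pseudocode, error handling omitted):

```javascript
// Minimal parsePlanNote sketch: frontmatter between --- markers, then all
// level-2+ headings with the text that follows each one.
function parsePlanNote(markdown) {
  const fm = markdown.match(/^---\n([\s\S]*?)\n---/)
  const frontmatter = fm ? fm[1] : ''
  const body = fm ? markdown.slice(fm[0].length) : markdown
  const sections = []
  const headingRe = /^(#{2,})\s+(.+)$/gm
  let m
  while ((m = headingRe.exec(body)) !== null) {
    sections.push({ level: m[1].length, heading: m[2].trim(), start: m.index })
  }
  // Each section's content runs until the next heading (or end of note)
  sections.forEach((s, i) => {
    const end = i + 1 < sections.length ? sections[i + 1].start : body.length
    s.content = body.slice(s.start, end)
  })
  return { frontmatter, sections }
}
```

Note that `#{2,}` also matches the `### TASK-xxx` entries inside a task pool, which is why the pseudocode tracks a `level` per section.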
-
-**plan.md Structure**:
-
-| Section | Content |
-|---------|---------|
-| Header | Session ID, task description, creation time |
-| 需求 (Requirements) | Copied from plan-note.md "需求理解" section |
-| 子领域拆分 (Sub-Domains) | Each domain with description, task range, estimated effort |
-| 任务概览 (Task Overview) | All tasks with complexity, dependencies, and target files |
-| 冲突报告 (Conflict Report) | Summary of detected conflicts or "无冲突" |
-| 执行指令 (Execution) | Command to execute the plan |
-
-```javascript
-const planMd = `# Collaborative Plan
-
-**Session**: ${sessionId}
-**Requirement**: ${taskDescription}
-**Created**: ${getUtc8ISOString()}
-**Complexity**: ${complexity}
-**Domains**: ${subDomains.length}
-
-## 需求理解
-
-${requirementSection}
-
-## 子领域拆分
-
-| # | Focus Area | Description | TASK Range | Effort |
-|---|-----------|-------------|------------|--------|
-${subDomains.map((s, i) => `| ${i+1} | ${s.focus_area} | ${s.description} | ${s.task_id_range[0]}-${s.task_id_range[1]} | ${s.estimated_effort} |`).join('\n')}
-
-## 任务概览
-
-${subDomains.map(sub => {
-  // Select only this domain's tasks via its pre-allocated ID range
-  const [lo, hi] = sub.task_id_range
-  const domainTasks = allTasks.filter(t => {
-    const n = parseInt(t.id.replace('TASK-', ''), 10)
-    return n >= lo && n <= hi
-  })
-  return `### ${sub.focus_area}\n\n` +
-    domainTasks.map(t => `- **${t.id}**: ${t.title} (${t.type}, ${t.effort}) ${t.depends_on?.length ? '← ' + t.depends_on.join(', ') : ''}`).join('\n')
-}).join('\n\n')}
-
-## 冲突报告
-
-${allConflicts.length === 0
-  ? '✅ 无冲突检测到'
-  : allConflicts.map(c => `- **${c.type}** (${c.severity}): ${c.description}`).join('\n')}
-
-## 执行
-
-\`\`\`bash
-/workflow:unified-execute-with-file PLAN="${sessionFolder}/.task/"
-\`\`\`
-
-**Session artifacts**: \`${sessionFolder}/\`
-`
-Write(`${sessionFolder}/plan.md`, planMd)
-```
-
-### Step 4.4: Display Completion Summary
-
-Present session statistics and next steps.
-
-
-```javascript
-// Display:
-// - Session ID and directory path
-// - Total domains planned
-// - Total tasks generated
-// - Conflict status (count and severity)
-// - Execution command for next step
-
-if (!autoMode) {
-  request_user_input({
-    questions: [{
-      header: "下一步",
-      id: "next_step",
-      question: `规划完成:\n- ${subDomains.length} 个子领域\n- ${allTasks.length} 个任务\n- ${allConflicts.length} 个冲突\n\n下一步:`,
-      options: [
-        { label: "Execute Plan(Recommended)", description: "使用 unified-execute 执行计划" },
-        { label: "Review Conflicts", description: "查看并解决冲突" },
-        { label: "Done", description: "保存产物,稍后执行" }
-      ]
-    }]
-  })
-}
-```
-
-| Selection | Action |
-|-----------|--------|
-| Execute Plan | `Skill(skill="workflow:unified-execute-with-file", args="PLAN=\"${sessionFolder}/.task/\"")` |
-| Review Conflicts | Display conflicts.json content for manual resolution |
-| Done | Display artifact paths, end workflow |
-
-### Step 4.5: Sync Session State
-
-```bash
-$session-sync -y "Plan complete: {domains} domains, {tasks} tasks"
-```
-
-Updates specs/*.md with planning insights and project-tech.json with planning session entry.
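For reference, the Phase 3 file-conflict scan can also be expressed as a short runnable sketch (each task is assumed to carry a `domain` tag attached during collection; only the `files[].path` shape from the task schema is relied on):

```javascript
// Sketch of the file_conflict scan: flag any file modified by tasks
// from more than one domain.
function detectFileConflicts(tasks) {
  const byFile = new Map()   // path → [{ task_id, domain }]
  for (const t of tasks) {
    for (const f of t.files || []) {
      if (!byFile.has(f.path)) byFile.set(f.path, [])
      byFile.get(f.path).push({ task_id: t.id, domain: t.domain })
    }
  }
  const conflicts = []
  for (const [file, entries] of byFile) {
    const domains = [...new Set(entries.map(e => e.domain))]
    if (domains.length > 1) {
      conflicts.push({
        type: 'file_conflict',
        severity: 'high',
        file,
        tasks_involved: entries.map(e => e.task_id),
        domains_involved: domains,
        suggested_resolution: 'Coordinate modification order or merge changes'
      })
    }
  }
  return conflicts
}
```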
- -**Success Criteria**: -- `plan.md` generated with complete summary -- `.task/TASK-*.json` collected at session root (consumable by unified-execute) -- All artifacts present in session directory -- Session state synced via `$session-sync` -- User informed of completion and next steps - ---- - -## Configuration - -| Flag | Default | Description | -|------|---------|-------------| -| `--max-domains` | 5 | Maximum sub-domains to identify | -| `-y, --yes` | false | Auto-confirm all decisions | - -## Iteration Patterns - -### New Planning Session - -``` -User initiates: TASK="task description" - ├─ No session exists → New session mode - ├─ Analyze task with inline search tools - ├─ Identify sub-domains - ├─ Create plan-note.md template - ├─ Generate requirement-analysis.json - │ - ├─ Serial domain planning: - │ ├─ Domain 1: explore → .task/TASK-*.json → fill plan-note.md - │ ├─ Domain 2: explore → .task/TASK-*.json → fill plan-note.md - │ └─ Domain N: ... - │ - ├─ Collect domain .task/*.json → session .task/ - │ - ├─ Verify plan-note.md consistency - ├─ Detect conflicts - ├─ Generate plan.md summary - └─ Report completion -``` - -### Continue Existing Session - -``` -User resumes: TASK="same task" - ├─ Session exists → Continue mode - ├─ Load plan-note.md and requirement-analysis.json - ├─ Identify incomplete domains (empty task pool sections) - ├─ Plan remaining domains serially - └─ Continue with conflict detection -``` - ---- - -## Error Handling & Recovery - -| Situation | Action | Recovery | -|-----------|--------|----------| -| No codebase detected | Normal flow, pure requirement planning | Proceed without codebase context | -| Codebase search fails | Continue with available context | Note limitation in plan-note.md | -| Domain planning fails | Record error, continue with next domain | Retry failed domain or plan manually | -| Section not found in plan-note | Create section defensively | Continue with new section | -| No tasks generated for a domain | Review 
domain description | Refine scope and retry | -| Conflict detection fails | Continue with empty conflicts | Note in completion summary | -| Session folder conflict | Append timestamp suffix | Create unique folder | -| plan-note.md format inconsistency | Validate and fix format after each domain | Re-read and normalize | - ---- - -## Best Practices - -### Before Starting Planning - -1. **Clear Task Description**: Detailed requirements lead to better sub-domain splitting -2. **Reference Documentation**: Ensure latest README and design docs are identified during Phase 1 -3. **Clarify Ambiguities**: Resolve unclear requirements before committing to sub-domains - -### During Planning - -1. **Review Plan Note**: Check plan-note.md between domains to verify progress -2. **Verify Independence**: Ensure sub-domains are truly independent and have minimal overlap -3. **Check Dependencies**: Cross-domain dependencies should be documented explicitly -4. **Inspect Details**: Review `domains/{domain}/.task/TASK-*.json` for specifics when needed -5. **Consistent Format**: Follow task summary format strictly across all domains -6. **TASK ID Isolation**: Use pre-assigned non-overlapping ranges to prevent ID conflicts - -### After Planning - -1. **Resolve Conflicts**: Address high/critical conflicts before execution -2. **Review Summary**: Check plan.md for completeness and accuracy -3. **Validate Tasks**: Ensure all tasks have clear scope and modification targets - -## When to Use - -**Use collaborative-plan-with-file when:** -- A complex task spans multiple sub-domains (backend + frontend + database, etc.) 
-- Need structured multi-domain task breakdown with conflict detection -- Planning a feature that touches many parts of the codebase -- Want pre-allocated section organization for clear domain separation - -**Use lite-plan when:** -- Single domain, clear task with no sub-domain splitting needed -- Quick planning without conflict detection - -**Use req-plan-with-file when:** -- Requirement-level progressive roadmap needed (MVP → iterations) -- Higher-level decomposition before detailed planning - -**Use analyze-with-file when:** -- Need in-depth analysis before planning -- Understanding and discussion, not task generation - ---- - -**Now execute collaborative-plan-with-file for**: $ARGUMENTS diff --git a/.codex/skills/csv-wave-pipeline/SKILL.md b/.codex/skills/csv-wave-pipeline/SKILL.md index 6809b55b..d83a1005 100644 --- a/.codex/skills/csv-wave-pipeline/SKILL.md +++ b/.codex/skills/csv-wave-pipeline/SKILL.md @@ -25,9 +25,6 @@ $csv-wave-pipeline --continue "auth-20260228" - `-c, --concurrency N`: Max concurrent agents within each wave (default: 4) - `--continue`: Resume existing session -**Output Directory**: `.workflow/.csv-wave/{session-id}/` -**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report) - --- ## Overview @@ -37,35 +34,75 @@ Wave-based batch execution using `spawn_agents_on_csv` with **cross-wave context **Core workflow**: Decompose → Compute Waves → Execute Wave-by-Wave → Aggregate ``` -┌─────────────────────────────────────────────────────────────────────────┐ -│ CSV BATCH EXECUTION WORKFLOW │ -├─────────────────────────────────────────────────────────────────────────┤ -│ │ -│ Phase 1: Requirement → CSV │ -│ ├─ Parse requirement into subtasks (3-10 tasks) │ -│ ├─ Identify dependencies (deps column) │ -│ ├─ Compute dependency waves (topological sort → depth grouping) │ -│ ├─ Generate tasks.csv with wave column │ -│ └─ User validates task breakdown (skip if 
-y) │ -│ │ -│ Phase 2: Wave Execution Engine │ -│ ├─ For each wave (1..N): │ -│ │ ├─ Build wave CSV (filter rows for this wave) │ -│ │ ├─ Inject previous wave findings into prev_context column │ -│ │ ├─ spawn_agents_on_csv(wave CSV) │ -│ │ ├─ Collect results, merge into master tasks.csv │ -│ │ └─ Check: any failed? → skip dependents or retry │ -│ └─ discoveries.ndjson shared across all waves (append-only) │ -│ │ -│ Phase 3: Results Aggregation │ -│ ├─ Export final results.csv │ -│ ├─ Generate context.md with all findings │ -│ ├─ Display summary: completed/failed/skipped per wave │ -│ └─ Offer: view results | retry failed | done │ -│ │ -└─────────────────────────────────────────────────────────────────────────┘ +Phase 1: Requirement → CSV + ├─ Parse requirement into subtasks (3-10 tasks) + ├─ Identify dependencies (deps column) + ├─ Compute dependency waves (topological sort → depth grouping) + ├─ Generate tasks.csv with wave column + └─ User validates task breakdown (skip if -y) + +Phase 2: Wave Execution Engine + ├─ For each wave (1..N): + │ ├─ Build wave CSV (filter rows for this wave) + │ ├─ Inject previous wave findings into prev_context column + │ ├─ spawn_agents_on_csv(wave CSV) + │ ├─ Collect results, merge into master tasks.csv + │ └─ Check: any failed? → skip dependents or retry + └─ discoveries.ndjson shared across all waves (append-only) + +Phase 3: Results Aggregation + ├─ Export final results.csv + ├─ Generate context.md with all findings + ├─ Display summary: completed/failed/skipped per wave + └─ Offer: view results | retry failed | done ``` +### Context Propagation + +Two context channels flow across waves: + +1. **CSV findings** (structured): `context_from` column → `prev_context` injection — task-specific directed context +2. 
**NDJSON discoveries** (broadcast): `discoveries.ndjson` — general exploration findings available to all + +``` +Wave 1 agents: + ├─ Execute tasks (no prev_context) + ├─ Write findings to report_agent_job_result + └─ Append discoveries to discoveries.ndjson + ↓ merge results into master CSV +Wave 2 agents: + ├─ Read discoveries.ndjson (exploration sharing) + ├─ Read prev_context column (wave 1 findings from context_from) + ├─ Execute tasks with full upstream context + ├─ Write findings to report_agent_job_result + └─ Append new discoveries to discoveries.ndjson + ↓ merge results into master CSV +Wave 3+ agents: same pattern, accumulated context from all prior waves +``` + +--- + +## Session & Output Structure + +``` +.workflow/.csv-wave/{session-id}/ +├── tasks.csv # Master state (updated per wave) +├── results.csv # Final results export (Phase 3) +├── discoveries.ndjson # Shared discovery board (all agents, append-only) +├── context.md # Human-readable report (Phase 3) +├── wave-{N}.csv # Temporary per-wave input (cleaned up after merge) +└── wave-{N}-results.csv # Temporary per-wave output (cleaned up after merge) +``` + +| File | Purpose | Lifecycle | +|------|---------|-----------| +| `tasks.csv` | Master state — all tasks with status/findings | Updated after each wave | +| `wave-{N}.csv` | Per-wave input with prev_context column | Created before wave, deleted after | +| `wave-{N}-results.csv` | Per-wave output from spawn_agents_on_csv | Created during wave, deleted after merge | +| `results.csv` | Final export of all task results | Created in Phase 3 | +| `discoveries.ndjson` | Shared exploration board across all agents | Append-only, carries across waves | +| `context.md` | Human-readable execution report | Created in Phase 3 | + --- ## CSV Schema @@ -104,7 +141,7 @@ id,title,description,test,acceptance_criteria,scope,hints,execution_directives,d ### Per-Wave CSV (Temporary) -Each wave generates a temporary `wave-{N}.csv` with an extra `prev_context` column: 
+Each wave generates a temporary `wave-{N}.csv` with an extra `prev_context` column built from `context_from` by looking up completed tasks' `findings` in the master CSV: ```csv id,title,description,test,acceptance_criteria,scope,hints,execution_directives,deps,context_from,wave,prev_context @@ -112,32 +149,37 @@ id,title,description,test,acceptance_criteria,scope,hints,execution_directives,d "3","Add JWT tokens","Implement JWT","Unit test: sign/verify round-trip; Edge test: expired token returns 401","generateToken() returns valid JWT; verifyToken() rejects expired/tampered tokens","src/auth/jwt/**","Use jsonwebtoken library; Set default expiry 1h || src/config/auth.ts","Ensure tsc --noEmit passes","1","1","2","[Task 1] Created auth/ with index.ts and types.ts" ``` -The `prev_context` column is built from `context_from` by looking up completed tasks' `findings` in the master CSV. - --- -## Output Artifacts +## Shared Discovery Board Protocol -| File | Purpose | Lifecycle | -|------|---------|-----------| -| `tasks.csv` | Master state — all tasks with status/findings | Updated after each wave | -| `wave-{N}.csv` | Per-wave input (temporary) | Created before wave, deleted after | -| `results.csv` | Final export of all task results | Created in Phase 3 | -| `discoveries.ndjson` | Shared exploration board across all agents | Append-only, carries across waves | -| `context.md` | Human-readable execution report | Created in Phase 3 | +All agents across all waves share `discoveries.ndjson`. This eliminates redundant codebase exploration. ---- +**Lifecycle**: Created by the first agent to write a discovery. Carries over across waves — never cleared. Agents append via `echo '...' >> discoveries.ndjson`. 
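As an illustrative sketch (the helper name and signature are assumptions, not part of the protocol), an agent-side append can serialize the entry and escape single quotes so the `echo '...' >>` form stays shell-safe:

```javascript
// Hypothetical helper — builds one NDJSON discovery line plus a shell-safe append command.
function formatDiscovery(worker, type, data, ts = new Date().toISOString()) {
  const line = JSON.stringify({ ts, worker, type, data })
  // A single quote inside the JSON would terminate the echo '...' wrapper,
  // so emit '\'' (close quote, escaped quote, reopen quote) in its place
  const shellSafe = line.replace(/'/g, `'\\''`)
  return { line, command: `echo '${shellSafe}' >> discoveries.ndjson` }
}

const { line, command } = formatDiscovery('1', 'code_pattern', {
  name: 'repository-pattern',
  file: 'src/repos/Base.ts',
  description: 'Abstract CRUD repository'
})
```

Double quotes need no escaping inside a single-quoted shell string, so `JSON.stringify` output passes through unchanged unless the data itself contains `'`.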
-## Session Structure +**Format**: NDJSON, each line is a self-contained JSON: +```jsonl +{"ts":"2026-02-28T10:00:00+08:00","worker":"1","type":"code_pattern","data":{"name":"repository-pattern","file":"src/repos/Base.ts","description":"Abstract CRUD repository"}} +{"ts":"2026-02-28T10:01:00+08:00","worker":"2","type":"integration_point","data":{"file":"src/auth/index.ts","description":"Auth module entry","exports":["authenticate","authorize"]}} ``` -.workflow/.csv-wave/{session-id}/ -├── tasks.csv # Master state (updated per wave) -├── results.csv # Final results export -├── discoveries.ndjson # Shared discovery board (all agents) -├── context.md # Human-readable report -└── wave-{N}.csv # Temporary per-wave input (cleaned up) -``` + +**Discovery Types**: + +| type | Dedup Key | Description | +|------|-----------|-------------| +| `code_pattern` | `data.name` | Reusable code pattern found | +| `integration_point` | `data.file` | Module connection point | +| `convention` | singleton | Code style conventions | +| `blocker` | `data.issue` | Blocking issue encountered | +| `tech_stack` | singleton | Project technology stack | +| `test_command` | singleton | Test commands discovered | + +**Protocol Rules**: +1. Read board before own exploration → skip covered areas +2. Write discoveries immediately via `echo >>` → don't batch +3. Deduplicate — check existing entries; skip if same type + dedup key exists +4. Append-only — never modify or delete existing lines --- @@ -154,17 +196,19 @@ const continueMode = $ARGUMENTS.includes('--continue') const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/) const maxConcurrency = concurrencyMatch ? 
parseInt(concurrencyMatch[1]) : 4

-// Clean requirement text (remove flags)
+// Clean requirement text (remove flags — word-boundary safe)
 const requirement = $ARGUMENTS
-  .replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
+  .replace(/--yes|(?:^|\s)-y(?=\s|$)|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
   .trim()

+let sessionId, sessionFolder
+
 const slug = requirement.toLowerCase()
   .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
   .substring(0, 40)
 const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
-const sessionId = `cwp-${slug}-${dateStr}`
-const sessionFolder = `.workflow/.csv-wave/${sessionId}`
+sessionId = `cwp-${slug}-${dateStr}`
+sessionFolder = `.workflow/.csv-wave/${sessionId}`

 // Continue mode: find existing session
 if (continueMode) {
@@ -181,6 +225,60 @@ if (continueMode) {
 Bash(`mkdir -p ${sessionFolder}`)
 ```

+### CSV Utility Functions
+
+```javascript
+// Escape a CSV cell value: double internal quotes (callers add the surrounding quotes)
+function csvEscape(value) {
+  const str = String(value ?? '')
+  return str.replace(/"/g, '""')
+}
+
+// Parse CSV string into array of objects (header row → keys)
+// parseCsvLine already strips surrounding quotes and collapses "" — no second pass needed
+function parseCsv(csvString) {
+  const lines = csvString.trim().split('\n')
+  if (lines.length < 2) return []
+  const headers = parseCsvLine(lines[0])
+  return lines.slice(1).map(line => {
+    const cells = parseCsvLine(line)
+    const obj = {}
+    headers.forEach((h, i) => { obj[h] = cells[i] ??
'' }) + return obj + }) +} + +// Parse a single CSV line respecting quoted fields with commas/newlines +function parseCsvLine(line) { + const cells = [] + let current = '' + let inQuotes = false + for (let i = 0; i < line.length; i++) { + const ch = line[i] + if (inQuotes) { + if (ch === '"' && line[i + 1] === '"') { + current += '"' + i++ // skip escaped quote + } else if (ch === '"') { + inQuotes = false + } else { + current += ch + } + } else { + if (ch === '"') { + inQuotes = true + } else if (ch === ',') { + cells.push(current) + current = '' + } else { + current += ch + } + } + } + cells.push(current) + return cells +} +``` + --- ### Phase 1: Requirement → CSV @@ -222,11 +320,28 @@ REQUIREMENT: ${requirement}" --tool gemini --mode analysis --rule planning-break // Parse JSON from CLI output → decomposedTasks[] ``` -2. **Compute Waves** (Topological Sort → Depth Grouping) +2. **Compute Waves** (Kahn's BFS topological sort with depth tracking) ```javascript + // Algorithm: + // 1. Build in-degree map and adjacency list from deps + // 2. Enqueue all tasks with in-degree 0 at wave 1 + // 3. BFS: for each dequeued task at wave W, for each dependent D: + // - Decrement D's in-degree + // - D.wave = max(D.wave, W + 1) + // - If D's in-degree reaches 0, enqueue D + // 4. 
Any task without wave assignment → circular dependency error + // + // Wave properties: + // Wave 1: no dependencies — fully independent + // Wave N: all deps in waves 1..(N-1) — guaranteed completed before start + // Within a wave: tasks are independent → safe for concurrent execution + // + // Example: + // A(no deps)→W1, B(no deps)→W1, C(deps:A)→W2, D(deps:A,B)→W2, E(deps:C,D)→W3 + // Wave 1: [A,B] concurrent → Wave 2: [C,D] concurrent → Wave 3: [E] + function computeWaves(tasks) { - // Build adjacency: task.deps → predecessors const taskMap = new Map(tasks.map(t => [t.id, t])) const inDegree = new Map(tasks.map(t => [t.id, 0])) const adjList = new Map(tasks.map(t => [t.id, []])) @@ -267,7 +382,7 @@ REQUIREMENT: ${requirement}" --tool gemini --mode analysis --rule planning-break } } - // Detect cycles: any task without wave assignment + // Detect cycles for (const task of tasks) { if (!waveAssignment.has(task.id)) { throw new Error(`Circular dependency detected involving task ${task.id}`) @@ -344,10 +459,7 @@ REQUIREMENT: ${requirement}" --tool gemini --mode analysis --rule planning-break } ``` -**Success Criteria**: -- tasks.csv created with valid schema and wave assignments -- No circular dependencies -- User approved (or AUTO_YES) +**Success Criteria**: tasks.csv created with valid schema and wave assignments, no circular dependencies, user approved (or AUTO_YES). --- @@ -378,7 +490,6 @@ REQUIREMENT: ${requirement}" --tool gemini --mode analysis --rule planning-break const deps = task.deps.split(';').filter(Boolean) if (deps.some(d => failedIds.has(d) || skippedIds.has(d))) { skippedIds.add(task.id) - // Update master CSV: mark as skipped updateMasterCsvRow(sessionFolder, task.id, { status: 'skipped', error: 'Dependency failed or skipped' @@ -394,7 +505,7 @@ REQUIREMENT: ${requirement}" --tool gemini --mode analysis --rule planning-break continue } - // 4. Build prev_context for each task + // 4. 
Build prev_context for each task (from context_from → master CSV findings) for (const task of executableTasks) { const contextIds = task.context_from.split(';').filter(Boolean) const prevFindings = contextIds @@ -465,8 +576,8 @@ REQUIREMENT: ${requirement}" --tool gemini --mode analysis --rule planning-break } } - // 8. Cleanup temporary wave CSV - Bash(`rm -f "${sessionFolder}/wave-${wave}.csv"`) + // 8. Cleanup temporary wave CSVs + Bash(`rm -f "${sessionFolder}/wave-${wave}.csv" "${sessionFolder}/wave-${wave}-results.csv"`) console.log(` Wave ${wave} done: ${waveResults.filter(r => r.status === 'completed').length} completed, ${waveResults.filter(r => r.status === 'failed').length} failed`) } @@ -535,6 +646,8 @@ REQUIREMENT: ${requirement}" --tool gemini --mode analysis --rule planning-break - \`integration_point\`: {file, description, exports[]} — module connection points - \`convention\`: {naming, imports, formatting} — code style conventions - \`blocker\`: {issue, severity, impact} — blocking issues encountered +- \`tech_stack\`: {runtime, framework, language} — project technology stack +- \`test_command\`: {command, scope, description} — test commands discovered --- @@ -587,11 +700,7 @@ Otherwise set status to "failed" with details in error field. } ``` -**Success Criteria**: -- All waves executed in order -- Each wave's results merged into master CSV before next wave starts -- Dependent tasks skipped when predecessor failed -- discoveries.ndjson accumulated across all waves +**Success Criteria**: All waves executed in order, each wave's results merged into master CSV before next wave starts, dependent tasks skipped when predecessor failed, discoveries.ndjson accumulated across all waves. 
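A minimal sketch of the row-update step behind `updateMasterCsvRow`, referenced in the loop above (in-memory patch only; a real implementation would read `tasks.csv`, apply the patch via the CSV utilities, and write it back — the name and shape here are assumptions):

```javascript
// Hypothetical core of updateMasterCsvRow: patch one task's fields
// in the parsed master rows without mutating the input array.
function patchTaskRow(rows, id, patch) {
  return rows.map(row => (row.id === id ? { ...row, ...patch } : row))
}

const master = [
  { id: '1', title: 'Create auth module', status: 'completed', error: '' },
  { id: '2', title: 'Add login endpoint', status: 'pending', error: '' }
]
const updated = patchTaskRow(master, '2', {
  status: 'skipped',
  error: 'Dependency failed or skipped'
})
```

Returning a fresh array keeps the master state easy to re-serialize after each wave without partial writes.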
--- @@ -741,120 +850,7 @@ ${[...new Set(tasks.flatMap(t => (t.files_modified || '').split(';')).filter(Boo } ``` -**Success Criteria**: -- results.csv exported -- context.md generated -- Summary displayed to user - ---- - -## Shared Discovery Board Protocol - -All agents across all waves share `discoveries.ndjson`. This eliminates redundant codebase exploration. - -**Lifecycle**: -- Created by the first agent to write a discovery -- Carries over across waves — never cleared -- Agents append via `echo '...' >> discoveries.ndjson` - -**Format**: NDJSON, each line is a self-contained JSON: - -```jsonl -{"ts":"2026-02-28T10:00:00+08:00","worker":"1","type":"code_pattern","data":{"name":"repository-pattern","file":"src/repos/Base.ts","description":"Abstract CRUD repository"}} -{"ts":"2026-02-28T10:01:00+08:00","worker":"2","type":"integration_point","data":{"file":"src/auth/index.ts","description":"Auth module entry","exports":["authenticate","authorize"]}} -``` - -**Discovery Types**: - -| type | Dedup Key | Description | -|------|-----------|-------------| -| `code_pattern` | `data.name` | Reusable code pattern found | -| `integration_point` | `data.file` | Module connection point | -| `convention` | singleton | Code style conventions | -| `blocker` | `data.issue` | Blocking issue encountered | -| `tech_stack` | singleton | Project technology stack | -| `test_command` | singleton | Test commands discovered | - -**Protocol Rules**: -1. Read board before own exploration → skip covered areas -2. Write discoveries immediately via `echo >>` → don't batch -3. Deduplicate — check existing entries; skip if same type + dedup key exists -4. Append-only — never modify or delete existing lines - ---- - -## Wave Computation Details - -### Algorithm - -Kahn's BFS topological sort with depth tracking: - -``` -Input: tasks[] with deps[] -Output: waveAssignment (taskId → wave number) - -1. Build in-degree map and adjacency list from deps -2. 
Enqueue all tasks with in-degree 0 at wave 1 -3. BFS: for each dequeued task at wave W: - - For each dependent task D: - - Decrement D's in-degree - - D.wave = max(D.wave, W + 1) - - If D's in-degree reaches 0, enqueue D -4. Any task without wave assignment → circular dependency error -``` - -### Wave Properties - -- **Wave 1**: No dependencies — all tasks in wave 1 are fully independent -- **Wave N**: All dependencies are in waves 1..(N-1) — guaranteed completed before wave N starts -- **Within a wave**: Tasks are independent of each other → safe for concurrent execution - -### Example - -``` -Task A (no deps) → Wave 1 -Task B (no deps) → Wave 1 -Task C (deps: A) → Wave 2 -Task D (deps: A, B) → Wave 2 -Task E (deps: C, D) → Wave 3 - -Execution: - Wave 1: [A, B] ← concurrent - Wave 2: [C, D] ← concurrent, sees A+B findings - Wave 3: [E] ← sees A+B+C+D findings -``` - ---- - -## Context Propagation Flow - -``` -Wave 1 agents: - ├─ Execute tasks (no prev_context) - ├─ Write findings to report_agent_job_result - └─ Append discoveries to discoveries.ndjson - - ↓ merge results into master CSV - -Wave 2 agents: - ├─ Read discoveries.ndjson (exploration sharing) - ├─ Read prev_context column (wave 1 findings from context_from) - ├─ Execute tasks with full upstream context - ├─ Write findings to report_agent_job_result - └─ Append new discoveries to discoveries.ndjson - - ↓ merge results into master CSV - -Wave 3 agents: - ├─ Read discoveries.ndjson (accumulated from waves 1+2) - ├─ Read prev_context column (wave 1+2 findings from context_from) - ├─ Execute tasks - └─ ... -``` - -**Two context channels**: -1. **CSV findings** (structured): `context_from` column → `prev_context` injection — task-specific directed context -2. **NDJSON discoveries** (broadcast): `discoveries.ndjson` — general exploration findings available to all +**Success Criteria**: results.csv exported, context.md generated, summary displayed to user. 
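The per-wave summary line can be tallied directly from the master rows — a sketch under assumed names (not part of the spec):

```javascript
// Hypothetical Phase 3 tally: count completed/failed/skipped per wave
// from the parsed master CSV rows.
function summarizeWaves(rows) {
  const summary = new Map()
  for (const row of rows) {
    if (!summary.has(row.wave)) summary.set(row.wave, { completed: 0, failed: 0, skipped: 0 })
    const bucket = summary.get(row.wave)
    if (bucket[row.status] !== undefined) bucket[row.status] += 1
  }
  return summary
}

const rows = [
  { id: '1', wave: '1', status: 'completed' },
  { id: '2', wave: '1', status: 'completed' },
  { id: '3', wave: '2', status: 'failed' },
  { id: '4', wave: '2', status: 'skipped' }
]
const summary = summarizeWaves(rows)
// summary.get('2') → { completed: 0, failed: 1, skipped: 1 }
```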
--- @@ -872,7 +868,9 @@ Wave 3 agents: --- -## Core Rules +## Rules & Best Practices + +### Core Rules 1. **Start Immediately**: First action is session initialization, then Phase 1 2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged @@ -880,22 +878,18 @@ Wave 3 agents: 4. **Context Propagation**: prev_context built from master CSV, not from memory 5. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson 6. **Skip on Failure**: If a dependency failed, skip the dependent task (don't attempt) -7. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged +7. **Cleanup Temp Files**: Remove wave-{N}.csv and wave-{N}-results.csv after results are merged 8. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped ---- +### Task Design -## Best Practices +- **Granularity**: 3-10 tasks optimal; too many = overhead, too few = no parallelism benefit +- **Minimize Cross-Wave Deps**: More tasks in wave 1 = more parallelism +- **Specific Descriptions**: Agent sees only its CSV row + prev_context — make description self-contained +- **Context From ≠ Deps**: `deps` = execution order constraint; `context_from` = information flow. A task can have `context_from` without `deps` (it just reads previous findings but doesn't require them to be done first in its wave) +- **Concurrency Tuning**: `-c 1` for serial execution (maximum context sharing); `-c 8` for I/O-bound tasks -1. **Task Granularity**: 3-10 tasks optimal; too many = overhead, too few = no parallelism benefit -2. **Minimize Cross-Wave Deps**: More tasks in wave 1 = more parallelism -3. **Specific Descriptions**: Agent sees only its CSV row + prev_context — make description self-contained -4. **Context From ≠ Deps**: `deps` = execution order constraint; `context_from` = information flow. 
A task can have `context_from` without `deps` (it just reads previous findings but doesn't require them to be done first in its wave) -5. **Concurrency Tuning**: `-c 1` for serial execution (maximum context sharing); `-c 8` for I/O-bound tasks - ---- - -## Usage Recommendations +### Scenario Recommendations | Scenario | Recommended Approach | |----------|---------------------| @@ -903,4 +897,4 @@ Wave 3 agents: | Linear pipeline (A→B→C) | `$csv-wave-pipeline -c 1` — 3 waves, serial, full context | | Diamond dependency (A→B,C→D) | `$csv-wave-pipeline` — 3 waves, B+C concurrent in wave 2 | | Complex requirement, unclear tasks | Use `$roadmap-with-file` first for planning, then feed issues here | -| Single complex task | Use `$workflow-lite-plan` instead | +| Single complex task | Use `$workflow-lite-plan` instead | \ No newline at end of file diff --git a/.codex/skills/unified-execute-with-file/SKILL.md b/.codex/skills/unified-execute-with-file/SKILL.md deleted file mode 100644 index 5092b0a0..00000000 --- a/.codex/skills/unified-execute-with-file/SKILL.md +++ /dev/null @@ -1,797 +0,0 @@ ---- -name: unified-execute-with-file -description: Universal execution engine consuming .task/*.json directory format. Serial task execution with convergence verification, progress tracking via execution.md + execution-events.md. -argument-hint: "PLAN=\"\" [--auto-commit] [--dry-run]" ---- - -# Unified-Execute-With-File Workflow - -## Quick Start - -Universal execution engine consuming **`.task/*.json`** directory and executing tasks serially with convergence verification and progress tracking. 
- -```bash -# Execute from lite-plan output -/codex:unified-execute-with-file PLAN=".workflow/.lite-plan/LPLAN-auth-2025-01-21/.task/" - -# Execute from workflow session output -/codex:unified-execute-with-file PLAN=".workflow/active/WFS-xxx/.task/" --auto-commit - -# Execute a single task JSON file -/codex:unified-execute-with-file PLAN=".workflow/active/WFS-xxx/.task/IMPL-001.json" --dry-run - -# Auto-detect from .workflow/ directories -/codex:unified-execute-with-file -``` - -**Core workflow**: Scan .task/*.json → Validate → Pre-Execution Analysis → Execute → Verify Convergence → Track Progress - -**Key features**: -- **Directory-based**: Consumes `.task/` directory containing individual task JSON files -- **Convergence-driven**: Verifies each task's convergence criteria after execution -- **Serial execution**: Process tasks in topological order with dependency tracking -- **Dual progress tracking**: `execution.md` (overview) + `execution-events.md` (event stream) -- **Auto-commit**: Optional conventional commits per task -- **Dry-run mode**: Simulate execution without changes -- **Flexible input**: Accepts `.task/` directory path or a single `.json` file path - -**Input format**: Each task is a standalone JSON file in `.task/` directory (e.g., `IMPL-001.json`). Use `plan-converter` to convert other formats to `.task/*.json` first. 
- -## Overview - -``` -┌─────────────────────────────────────────────────────────────┐ -│ UNIFIED EXECUTE WORKFLOW │ -├─────────────────────────────────────────────────────────────┤ -│ │ -│ Phase 1: Load & Validate │ -│ ├─ Scan .task/*.json (one task per file) │ -│ ├─ Validate schema (id, title, depends_on, convergence) │ -│ ├─ Detect cycles, build topological order │ -│ └─ Initialize execution.md + execution-events.md │ -│ │ -│ Phase 2: Pre-Execution Analysis │ -│ ├─ Check file conflicts (multiple tasks → same file) │ -│ ├─ Verify file existence │ -│ ├─ Generate feasibility report │ -│ └─ User confirmation (unless dry-run) │ -│ │ -│ Phase 3: Serial Execution + Convergence Verification │ -│ For each task in topological order: │ -│ ├─ Check dependencies satisfied │ -│ ├─ Record START event │ -│ ├─ Execute directly (Read/Edit/Write/Grep/Glob/Bash) │ -│ ├─ Verify convergence.criteria[] │ -│ ├─ Run convergence.verification command │ -│ ├─ Record COMPLETE/FAIL event with verification results │ -│ ├─ Update _execution state in task JSON file │ -│ └─ Auto-commit if enabled │ -│ │ -│ Phase 4: Completion │ -│ ├─ Finalize execution.md with summary statistics │ -│ ├─ Finalize execution-events.md with session footer │ -│ ├─ Write back .task/*.json with _execution states │ -│ └─ Offer follow-up actions │ -│ │ -└─────────────────────────────────────────────────────────────┘ -``` - -## Output Structure - -``` -${projectRoot}/.workflow/.execution/EXEC-{slug}-{date}-{random}/ -├── execution.md # Plan overview + task table + summary -└── execution-events.md # ⭐ Unified event log (single source of truth) -``` - -Additionally, each source `.task/*.json` file is updated in-place with `_execution` states. 
- ---- - -## Implementation Details - -### Session Initialization - -##### Step 0: Initialize Session - -```javascript -const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString() -const projectRoot = Bash(`git rev-parse --show-toplevel 2>/dev/null || pwd`).trim() - -// Parse arguments -const autoCommit = $ARGUMENTS.includes('--auto-commit') -const dryRun = $ARGUMENTS.includes('--dry-run') -const planMatch = $ARGUMENTS.match(/PLAN="([^"]+)"/) || $ARGUMENTS.match(/PLAN=(\S+)/) -let planPath = planMatch ? planMatch[1] : null - -// Auto-detect if no PLAN specified -if (!planPath) { - // Search in order (most recent first): - // .workflow/active/*/.task/ - // .workflow/.lite-plan/*/.task/ - // .workflow/.req-plan/*/.task/ - // .workflow/.planning/*/.task/ - // Use most recently modified directory containing *.json files -} - -// Resolve path -planPath = path.isAbsolute(planPath) ? planPath : `${projectRoot}/${planPath}` - -// Generate session ID -const slug = path.basename(path.dirname(planPath)).toLowerCase().substring(0, 30) -const dateStr = getUtc8ISOString().substring(0, 10) -const random = Math.random().toString(36).substring(2, 9) -const sessionId = `EXEC-${slug}-${dateStr}-${random}` -const sessionFolder = `${projectRoot}/.workflow/.execution/${sessionId}` - -Bash(`mkdir -p ${sessionFolder}`) -``` - ---- - -## Phase 1: Load & Validate - -**Objective**: Scan `.task/` directory, parse individual task JSON files, validate schema and dependencies, build execution order. 
- -### Step 1.1: Scan .task/ Directory and Parse Task Files - -```javascript -// Determine if planPath is a directory or single file -const isDirectory = planPath.endsWith('/') || Bash(`test -d "${planPath}" && echo dir || echo file`).trim() === 'dir' - -let taskFiles, tasks - -if (isDirectory) { - // Directory mode: scan for all *.json files - taskFiles = Glob('*.json', planPath) - if (taskFiles.length === 0) throw new Error(`No .json files found in ${planPath}`) - - tasks = taskFiles.map(filePath => { - try { - const content = Read(filePath) - const task = JSON.parse(content) - task._source_file = filePath // Track source file for write-back - return task - } catch (e) { - throw new Error(`${path.basename(filePath)}: Invalid JSON - ${e.message}`) - } - }) -} else { - // Single file mode: parse one task JSON - try { - const content = Read(planPath) - const task = JSON.parse(content) - task._source_file = planPath - tasks = [task] - } catch (e) { - throw new Error(`${path.basename(planPath)}: Invalid JSON - ${e.message}`) - } -} - -if (tasks.length === 0) throw new Error('No tasks found') -``` - -### Step 1.2: Validate Schema - -Validate against unified task schema: `~/.ccw/workflows/cli-templates/schemas/task-schema.json` - -```javascript -const errors = [] -tasks.forEach((task, i) => { - const src = task._source_file ? 
path.basename(task._source_file) : `Task ${i + 1}` - - // Required fields (per task-schema.json) - if (!task.id) errors.push(`${src}: missing 'id'`) - if (!task.title) errors.push(`${src}: missing 'title'`) - if (!task.description) errors.push(`${src}: missing 'description'`) - if (!Array.isArray(task.depends_on)) errors.push(`${task.id || src}: missing 'depends_on' array`) - - // Context block (optional but validated if present) - if (task.context) { - if (task.context.requirements && !Array.isArray(task.context.requirements)) - errors.push(`${task.id}: context.requirements must be array`) - if (task.context.acceptance && !Array.isArray(task.context.acceptance)) - errors.push(`${task.id}: context.acceptance must be array`) - if (task.context.focus_paths && !Array.isArray(task.context.focus_paths)) - errors.push(`${task.id}: context.focus_paths must be array`) - } - - // Convergence (required for execution verification) - if (!task.convergence) { - errors.push(`${task.id || src}: missing 'convergence'`) - } else { - if (!task.convergence.criteria?.length) errors.push(`${task.id}: empty convergence.criteria`) - if (!task.convergence.verification) errors.push(`${task.id}: missing convergence.verification`) - if (!task.convergence.definition_of_done) errors.push(`${task.id}: missing convergence.definition_of_done`) - } - - // Flow control (optional but validated if present) - if (task.flow_control) { - if (task.flow_control.target_files && !Array.isArray(task.flow_control.target_files)) - errors.push(`${task.id}: flow_control.target_files must be array`) - } - - // New unified schema fields (backward compatible addition) - if (task.focus_paths && !Array.isArray(task.focus_paths)) - errors.push(`${task.id}: focus_paths must be array`) - if (task.implementation && !Array.isArray(task.implementation)) - errors.push(`${task.id}: implementation must be array`) - if (task.files && !Array.isArray(task.files)) - errors.push(`${task.id}: files must be array`) -}) - -if 
(errors.length) { - // Report errors, stop execution -} -``` - -### Step 1.3: Build Execution Order - -```javascript -// 1. Validate dependency references -const taskIds = new Set(tasks.map(t => t.id)) -tasks.forEach(task => { - task.depends_on.forEach(dep => { - if (!taskIds.has(dep)) errors.push(`${task.id}: depends on unknown task '${dep}'`) - }) -}) - -// 2. Detect cycles (DFS) -function detectCycles(tasks) { - const graph = new Map(tasks.map(t => [t.id, t.depends_on || []])) - const visited = new Set(), inStack = new Set(), cycles = [] - function dfs(node, path) { - if (inStack.has(node)) { cycles.push([...path, node].join(' → ')); return } - if (visited.has(node)) return - visited.add(node); inStack.add(node) - ;(graph.get(node) || []).forEach(dep => dfs(dep, [...path, node])) - inStack.delete(node) - } - tasks.forEach(t => { if (!visited.has(t.id)) dfs(t.id, []) }) - return cycles -} -const cycles = detectCycles(tasks) -if (cycles.length) errors.push(`Circular dependencies: ${cycles.join('; ')}`) - -// 3. Topological sort -function topoSort(tasks) { - const inDegree = new Map(tasks.map(t => [t.id, 0])) - tasks.forEach(t => t.depends_on.forEach(dep => { - inDegree.set(t.id, (inDegree.get(t.id) || 0) + 1) - })) - const queue = tasks.filter(t => inDegree.get(t.id) === 0).map(t => t.id) - const order = [] - while (queue.length) { - const id = queue.shift() - order.push(id) - tasks.forEach(t => { - if (t.depends_on.includes(id)) { - inDegree.set(t.id, inDegree.get(t.id) - 1) - if (inDegree.get(t.id) === 0) queue.push(t.id) - } - }) - } - return order -} -const executionOrder = topoSort(tasks) -``` - -### Step 1.4: Initialize Execution Artifacts - -```javascript -// execution.md -const executionMd = `# Execution Overview - -## Session Info -- **Session ID**: ${sessionId} -- **Plan Source**: ${planPath} -- **Started**: ${getUtc8ISOString()} -- **Total Tasks**: ${tasks.length} -- **Mode**: ${dryRun ? 
'Dry-run (no changes)' : 'Direct inline execution'} -- **Auto-Commit**: ${autoCommit ? 'Enabled' : 'Disabled'} - -## Task Overview - -| # | ID | Title | Type | Priority | Effort | Dependencies | Status | -|---|-----|-------|------|----------|--------|--------------|--------| -${tasks.map((t, i) => `| ${i+1} | ${t.id} | ${t.title} | ${t.type || '-'} | ${t.priority || '-'} | ${t.effort || '-'} | ${t.depends_on.join(', ') || '-'} | pending |`).join('\n')} - -## Pre-Execution Analysis -> Populated in Phase 2 - -## Execution Timeline -> Updated as tasks complete - -## Execution Summary -> Updated after all tasks complete -` -Write(`${sessionFolder}/execution.md`, executionMd) - -// execution-events.md -Write(`${sessionFolder}/execution-events.md`, `# Execution Events - -**Session**: ${sessionId} -**Started**: ${getUtc8ISOString()} -**Source**: ${planPath} - ---- - -`) -``` - ---- - -## Phase 2: Pre-Execution Analysis - -**Objective**: Validate feasibility and identify issues before execution. 
- -### Step 2.1: Analyze File Conflicts - -```javascript -const fileTaskMap = new Map() // file → [taskIds] -tasks.forEach(task => { - (task.files || []).forEach(f => { - const key = f.path - if (!fileTaskMap.has(key)) fileTaskMap.set(key, []) - fileTaskMap.get(key).push(task.id) - }) -}) - -const conflicts = [] -fileTaskMap.forEach((taskIds, file) => { - if (taskIds.length > 1) { - conflicts.push({ file, tasks: taskIds, resolution: 'Execute in dependency order' }) - } -}) - -// Check file existence -const missingFiles = [] -tasks.forEach(task => { - (task.files || []).forEach(f => { - if (f.action !== 'create' && !file_exists(f.path)) { - missingFiles.push({ file: f.path, task: task.id }) - } - }) -}) -``` - -### Step 2.2: Append to execution.md - -```javascript -// Replace "Pre-Execution Analysis" section with: -// - File Conflicts (list or "No conflicts") -// - Missing Files (list or "All files exist") -// - Dependency Validation (errors or "No issues") -// - Execution Order (numbered list) -``` - -### Step 2.3: User Confirmation - -```javascript -if (!dryRun) { - request_user_input({ - questions: [{ - header: "Confirm", - id: "confirm_execute", - question: `Execute ${tasks.length} tasks?`, - options: [ - { label: "Execute (Recommended)", description: "Start serial execution" }, - { label: "Dry Run", description: "Simulate without changes" }, - { label: "Cancel", description: "Abort execution" } - ] - }] - }) - // answer.answers.confirm_execute.answers[0] → selected label -} -``` - ---- - -## Phase 3: Serial Execution + Convergence Verification - -**Objective**: Execute tasks sequentially, verify convergence after each task, track all state. - -**Execution Model**: Direct inline execution — main process reads, edits, writes files directly. No CLI delegation. 
### Step 3.1: Execution Loop

```javascript
const completedTasks = new Set()
const failedTasks = new Set()
const skippedTasks = new Set()

for (const taskId of executionOrder) {
  const task = tasks.find(t => t.id === taskId)
  const startTime = getUtc8ISOString()

  // 1. Check dependencies
  const unmetDeps = task.depends_on.filter(dep => !completedTasks.has(dep))
  if (unmetDeps.length) {
    appendToEvents(`**${task.id}** ⏭ BLOCKED — unmet dependencies: ${unmetDeps.join(', ')}\n\n---\n`)
    skippedTasks.add(task.id)
    task._execution = {
      status: 'skipped', executed_at: startTime,
      result: { success: false, error: `Blocked by: ${unmetDeps.join(', ')}` }
    }
    continue
  }

  // 2. Record START event
  appendToEvents(`## ${getUtc8ISOString()} — ${task.id}: ${task.title}

**Type**: ${task.type || '-'} | **Priority**: ${task.priority || '-'} | **Effort**: ${task.effort || '-'}
**Status**: ⏳ IN PROGRESS
**Files**: ${(task.files || []).map(f => f.path).join(', ') || 'To be determined'}
**Description**: ${task.description}
**Convergence Criteria**:
${task.convergence.criteria.map(c => `- [ ] ${c}`).join('\n')}

### Execution Log
`)

  if (dryRun) {
    // Simulate: mark as completed without changes
    appendToEvents(`\n**Status**: ⏭ DRY RUN (no changes)\n\n---\n`)
    task._execution = {
      status: 'completed', executed_at: startTime,
      result: { success: true, summary: 'Dry run — no changes made' }
    }
    completedTasks.add(task.id)
    continue
  }

  // 3. Execute task directly
  // - Read each file in task.files (if specified)
  // - Analyze what changes satisfy task.description + task.convergence.criteria
  // - If task.files has detailed changes, use them as guidance
  // - Apply changes using Edit (preferred) or Write (for new files)
  // - Use Grep/Glob/mcp__ace-tool for discovery if needed
  // - Use Bash for build/test commands
  // - Produce changeSummary: a short natural-language summary of the edits applied

  // Dual-path field access (supports both unified and legacy 6-field schema)
  // const targetFiles = task.files?.map(f => f.path) || task.flow_control?.target_files || []
  // const acceptanceCriteria = task.convergence?.criteria || task.context?.acceptance || []
  // const requirements = task.implementation || task.context?.requirements || []
  // const focusPaths = task.focus_paths || task.context?.focus_paths || []

  // 4. Verify convergence
  const convergenceResults = verifyConvergence(task)

  const endTime = getUtc8ISOString()
  const filesModified = getModifiedFiles()

  if (convergenceResults.allPassed) {
    // 5a. Record SUCCESS
    appendToEvents(`
**Status**: ✅ COMPLETED
**Duration**: ${calculateDuration(startTime, endTime)}
**Files Modified**: ${filesModified.join(', ')}

#### Changes Summary
${changeSummary}

#### Convergence Verification
${task.convergence.criteria.map((c, i) => `- [${convergenceResults.verified[i] ? 'x' : ' '}] ${c}`).join('\n')}
- **Verification**: ${convergenceResults.verificationOutput}
- **Definition of Done**: ${task.convergence.definition_of_done}

---
`)
    task._execution = {
      status: 'completed', executed_at: endTime,
      result: {
        success: true,
        files_modified: filesModified,
        summary: changeSummary,
        convergence_verified: convergenceResults.verified
      }
    }
    completedTasks.add(task.id)
  } else {
    // 5b. Record FAILURE
    handleTaskFailure(task, convergenceResults, startTime, endTime)
  }

  // 6. Auto-commit if enabled
  if (autoCommit && task._execution.status === 'completed') {
    autoCommitTask(task, filesModified)
  }
}
```

### Step 3.2: Convergence Verification

```javascript
function verifyConvergence(task) {
  const results = {
    verified: [],           // boolean[] per criterion
    verificationOutput: '', // output of verification command
    allPassed: true
  }

  // 1. Check each criterion
  // For each criterion in task.convergence.criteria:
  // - If it references a testable condition, check it
  // - If it's manual, mark as verified based on changes made
  // - Record true/false per criterion
  task.convergence.criteria.forEach(criterion => {
    const passed = evaluateCriterion(criterion, task)
    results.verified.push(passed)
    if (!passed) results.allPassed = false
  })

  // 2. Run verification command (if executable)
  const verification = task.convergence.verification
  if (isExecutableCommand(verification)) {
    try {
      Bash(verification, { timeout: 120000 })
      results.verificationOutput = `${verification} → PASS`
    } catch (e) {
      results.verificationOutput = `${verification} → FAIL: ${e.message}`
      results.allPassed = false
    }
  } else {
    results.verificationOutput = `Manual: ${verification}`
  }

  return results
}

function isExecutableCommand(verification) {
  // Detect executable patterns: npm, npx, jest, tsc, curl, pytest, go test, etc.
  return /^(npm|npx|jest|tsc|eslint|pytest|go\s+test|cargo\s+test|curl|make)/.test(verification.trim())
}
```

### Step 3.3: Failure Handling

```javascript
function handleTaskFailure(task, convergenceResults, startTime, endTime) {
  appendToEvents(`
**Status**: ❌ FAILED
**Duration**: ${calculateDuration(startTime, endTime)}
**Error**: Convergence verification failed

#### Failed Criteria
${task.convergence.criteria.map((c, i) => `- [${convergenceResults.verified[i] ? 'x' : ' '}] ${c}`).join('\n')}
- **Verification**: ${convergenceResults.verificationOutput}

---
`)

  task._execution = {
    status: 'failed', executed_at: endTime,
    result: {
      success: false,
      error: 'Convergence verification failed',
      convergence_verified: convergenceResults.verified
    }
  }
  failedTasks.add(task.id)

  // Ask user
  request_user_input({
    questions: [{
      header: "Failure",
      id: "handle_failure",
      question: `Task ${task.id} failed convergence verification. How to proceed?`,
      options: [
        { label: "Skip & Continue (Recommended)", description: "Skip this task, continue with next" },
        { label: "Retry", description: "Retry this task" },
        { label: "Abort", description: "Stop execution, keep progress" }
      ]
    }]
  })
  // answer.answers.handle_failure.answers[0] → selected label
}
```

### Step 3.4: Auto-Commit

```javascript
function autoCommitTask(task, filesModified) {
  Bash(`git add ${filesModified.join(' ')}`)

  const commitType = {
    fix: 'fix', refactor: 'refactor', feature: 'feat',
    enhancement: 'feat', testing: 'test', infrastructure: 'chore'
  }[task.type] || 'chore'

  const scope = inferScope(filesModified)

  Bash(`git commit -m "$(cat <<'EOF'
${commitType}(${scope}): ${task.title}

Task: ${task.id}
Source: ${path.basename(planPath)}
EOF
)"`)

  appendToEvents(`**Commit**: \`${commitType}(${scope}): ${task.title}\`\n`)
}
```

---

## Phase 4: Completion

**Objective**: Finalize all artifacts, write back execution state, offer follow-up actions.
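The phases above timestamp every event with `getUtc8ISOString()` and compute durations with `calculateDuration()`; neither helper is defined in this document. A minimal sketch, assuming "UTC+8" means timestamps rendered with a fixed +08:00 offset and durations formatted as `Nm Ns`:

```javascript
// Sketch of the timestamp helpers assumed throughout Phases 3-4 (hypothetical implementations).
function getUtc8ISOString() {
  // Shift the clock by +8h, then relabel the ISO string's 'Z' suffix as +08:00
  const utc8 = new Date(Date.now() + 8 * 60 * 60 * 1000)
  return utc8.toISOString().replace('Z', '+08:00')
}

function calculateDuration(startIso, endIso) {
  // Both inputs carry explicit offsets, so Date parsing is unambiguous
  const seconds = Math.round((new Date(endIso) - new Date(startIso)) / 1000)
  return seconds >= 60 ? `${Math.floor(seconds / 60)}m ${seconds % 60}s` : `${seconds}s`
}
```

Any equivalent that yields monotonic, offset-annotated timestamps would serve; the event log only needs consistent start/end pairs.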
### Step 4.1: Finalize execution.md

Append summary statistics to execution.md:

```javascript
const summary = `
## Execution Summary

- **Completed**: ${getUtc8ISOString()}
- **Total Tasks**: ${tasks.length}
- **Succeeded**: ${completedTasks.size}
- **Failed**: ${failedTasks.size}
- **Skipped**: ${skippedTasks.size}
- **Success Rate**: ${Math.round(completedTasks.size / tasks.length * 100)}%

### Task Results

| ID | Title | Status | Convergence | Files Modified |
|----|-------|--------|-------------|----------------|
${tasks.map(t => {
  const ex = t._execution || {}
  const convergenceStatus = ex.result?.convergence_verified
    ? `${ex.result.convergence_verified.filter(v => v).length}/${ex.result.convergence_verified.length}`
    : '-'
  return `| ${t.id} | ${t.title} | ${ex.status || 'pending'} | ${convergenceStatus} | ${(ex.result?.files_modified || []).join(', ') || '-'} |`
}).join('\n')}

${failedTasks.size > 0 ? `### Failed Tasks

${[...failedTasks].map(id => {
  const t = tasks.find(t => t.id === id)
  return `- **${t.id}**: ${t.title} — ${t._execution?.result?.error || 'Unknown'}`
}).join('\n')}
` : ''}
### Artifacts
- **Plan Source**: ${planPath}
- **Execution Overview**: ${sessionFolder}/execution.md
- **Execution Events**: ${sessionFolder}/execution-events.md
`
// Append `summary` to execution.md
```

### Step 4.2: Finalize execution-events.md

```javascript
appendToEvents(`
---

# Session Summary

- **Session**: ${sessionId}
- **Completed**: ${getUtc8ISOString()}
- **Tasks**: ${completedTasks.size} completed, ${failedTasks.size} failed, ${skippedTasks.size} skipped
- **Total Events**: ${completedTasks.size + failedTasks.size + skippedTasks.size}
`)
```

### Step 4.3: Write Back .task/*.json with _execution

Update each source task JSON file with its execution state:

```javascript
tasks.forEach(task => {
  const filePath = task._source_file
  if (!filePath) return

  // Read the current file to preserve all non-execution fields
  // (re-serialized below with 2-space indent)
  const current = JSON.parse(Read(filePath))

  // Update _execution status and result
  current._execution = {
    status: task._execution?.status || 'pending',
    executed_at: task._execution?.executed_at || null,
    result: task._execution?.result || null
  }

  // Write back the individual task file
  Write(filePath, JSON.stringify(current, null, 2))
})
// Each task JSON file now has _execution: { status, executed_at, result }
```

### Step 4.4: Post-Completion Options

```javascript
request_user_input({
  questions: [{
    header: "Post Execute",
    id: "post_execute",
    question: `Execution complete: ${completedTasks.size}/${tasks.length} succeeded. Next step?`,
    options: [
      { label: "Done (Recommended)", description: "End workflow" },
      { label: "Retry Failed", description: `Re-execute ${failedTasks.size} failed tasks` },
      { label: "View Events", description: "Display execution-events.md content" },
      { label: "Create Issue", description: "Create issue from failed tasks" }
    ]
  }]
})
// answer.answers.post_execute.answers[0] → selected label
```

| Selection | Action |
|-----------|--------|
| Retry Failed | Filter tasks with `_execution.status === 'failed'`, re-execute, append `[RETRY]` events |
| View Events | Display execution-events.md content |
| Create Issue | `Skill(skill="issue:new", args="...")` from failed task details |
| Done | Display artifact paths, sync session state, end workflow |

### Step 4.5: Sync Session State

After completion (regardless of user selection), unless `--dry-run`:

```bash
$session-sync -y "Execution complete: {completed}/{total} tasks succeeded"
```

Updates specs/*.md with execution learnings and project-tech.json with a development index entry.
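After the Step 4.3 write-back, each `.task/*.json` file carries an `_execution` block alongside its original fields. A hypothetical example (the task fields shown are illustrative; only the `_execution` shape follows the write-back code above):

```json
{
  "id": "T1",
  "title": "Add input validation",
  "depends_on": [],
  "convergence": {
    "criteria": ["Validation rejects empty input"],
    "verification": "npm test",
    "definition_of_done": "All criteria pass"
  },
  "_execution": {
    "status": "completed",
    "executed_at": "2025-01-01T10:30:00+08:00",
    "result": {
      "success": true,
      "files_modified": ["src/validate.ts"],
      "summary": "Added empty-input guard",
      "convergence_verified": [true]
    }
  }
}
```

A task that never ran keeps `status: "pending"` with `executed_at` and `result` set to `null`, so re-runs can filter on `_execution.status` alone.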
---

## Configuration

| Flag | Default | Description |
|------|---------|-------------|
| `PLAN="..."` | auto-detect | Path to `.task/` directory or single task `.json` file |
| `--auto-commit` | false | Commit changes after each successful task |
| `--dry-run` | false | Simulate execution without making changes |

### Plan Auto-Detection Order

When no `PLAN` is specified, search for `.task/` directories in order (most recent first):

1. `.workflow/active/*/.task/`
2. `.workflow/.lite-plan/*/.task/`
3. `.workflow/.req-plan/*/.task/`
4. `.workflow/.planning/*/.task/`

**If the source is not `.task/*.json`**: Run `plan-converter` first to generate a `.task/` directory.

---

## Error Handling & Recovery

| Situation | Action | Recovery |
|-----------|--------|----------|
| `.task/` directory not found | Report error with path | Check path, run plan-converter |
| Invalid JSON in task file | Report filename and error | Fix task JSON file manually |
| Missing convergence | Report validation error | Run plan-converter to add convergence |
| Circular dependency | Stop, report cycle path | Fix dependencies in task JSON |
| Task execution fails | Record in events, ask user | Retry, skip, accept, or abort |
| Convergence verification fails | Mark task failed, ask user | Fix code and retry, or accept |
| Verification command timeout | Mark as unverified | Manual verification needed |
| File conflict during execution | Document in events | Resolve in dependency order |
| All tasks fail | Report, suggest plan review | Re-analyze or manual intervention |

---

## Best Practices

### Before Execution

1. **Validate Plan**: Use `--dry-run` first to check plan feasibility
2. **Check Convergence**: Ensure all tasks have meaningful convergence criteria
3. **Review Dependencies**: Verify the execution order makes sense
4. **Backup**: Commit pending changes before starting
5. **Convert First**: Use `plan-converter` for non-`.task/` sources

### During Execution

1. **Monitor Events**: Check execution-events.md for real-time progress
2. **Handle Failures**: Review convergence failures carefully before deciding
3. **Check Commits**: Verify auto-commits are correct if enabled

### After Execution

1. **Review Summary**: Check execution.md statistics and failed tasks
2. **Verify Changes**: Inspect that modified files match expectations
3. **Check Task Files**: Review `_execution` states in `.task/*.json` files
4. **Next Steps**: Use completion options for follow-up

---

**Now execute unified-execute-with-file for**: $PLAN