From 8dc115a89481fe88f530d881aff75566769c3413 Mon Sep 17 00:00:00 2001 From: catlog22 Date: Sun, 1 Feb 2026 17:20:00 +0800 Subject: [PATCH] feat(codex): convert 4 workflow commands to Codex prompt format with serial execution MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Convert parallel multi-agent/multi-CLI workflows to Codex-compatible serial execution: - brainstorm-with-file: Parallel Creative/Pragmatic/Systematic perspectives → Serial CLI execution - analyze-with-file: cli-explore-agent + parallel CLI → Native tools (Glob/Grep/Read) + serial Gemini CLI - collaborative-plan-with-file: Parallel sub-agents → Serial domain planning loop - unified-execute-with-file: DAG-based parallel wave execution → Serial task-by-task execution Key Technical Changes: - YAML header: Remove 'name' and 'allowed-tools' fields - Variables: $ARGUMENTS → $TOPIC/$TASK/$PLAN - Task agents: Task() calls → ccw cli commands with --tool flag - Parallel execution: Parallel Task/Bash calls → Sequential loops - Session format: Match existing Codex prompt structure Pattern: For multi-step workflows, use explicit wait points with "⏳ Wait for completion before proceeding" --- .codex/prompts/analyze-with-file.md | 657 ++++++------ .codex/prompts/brainstorm-with-file.md | 734 ++++++------- .../prompts/collaborative-plan-with-file.md | 549 ++++++++++ .codex/prompts/unified-execute-with-file.md | 963 +++++++----------- 4 files changed, 1551 insertions(+), 1352 deletions(-) create mode 100644 .codex/prompts/collaborative-plan-with-file.md diff --git a/.codex/prompts/analyze-with-file.md b/.codex/prompts/analyze-with-file.md index 3c6bdfc9..d4494813 100644 --- a/.codex/prompts/analyze-with-file.md +++ b/.codex/prompts/analyze-with-file.md @@ -1,29 +1,24 @@ --- -description: Interactive collaborative analysis with documented discussions, CLI-assisted exploration, and evolving understanding. Supports depth control and iteration limits. -argument-hint: "TOPIC=\"\" [--depth=standard|deep|full] [--max-iterations=] [--verbose]" +description: Interactive collaborative analysis with documented discussions, CLI-assisted exploration, and evolving understanding. Serial analysis for Codex. +argument-hint: "TOPIC=\"\" [--focus=] [--depth=quick|standard|deep] [--continue]" --- # Codex Analyze-With-File Prompt ## Overview -Interactive collaborative analysis workflow with **documented discussion process**. Records understanding evolution, facilitates multi-round Q&A, and uses deep analysis for codebase and concept exploration. +Interactive collaborative analysis workflow with **documented discussion process**. Records understanding evolution, facilitates multi-round Q&A, and uses CLI tools for deep exploration. 
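+
+A hypothetical invocation matching the argument hint above (the exact command prefix depends on how Codex prompts are wired into your environment; values shown are illustrative only):
+
+```bash
+# Illustrative values only; all flags are optional per the argument-hint
+analyze-with-file TOPIC="How does request validation flow through the API layer?" --focus=architecture --depth=deep
+```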
**Core workflow**: Topic → Explore → Discuss → Document → Refine → Conclude -**Key features**: -- **discussion.md**: Timeline of discussions and understanding evolution -- **Multi-round Q&A**: Iterative clarification with user -- **Analysis-assisted exploration**: Deep codebase and concept analysis -- **Consolidated insights**: Synthesizes discussions into actionable conclusions -- **Flexible continuation**: Resume analysis sessions to build on previous work - ## Target Topic **$TOPIC** -- `--depth`: Analysis depth (standard|deep|full) -- `--max-iterations`: Max discussion rounds +**Parameters**: +- `--focus`: Focus area (code|architecture|practice|diagnosis, default: code) +- `--depth`: Analysis depth (quick/standard/deep, default: standard) +- `--continue`: Resume existing analysis session ## Execution Process @@ -35,54 +30,68 @@ Session Detection: Phase 1: Topic Understanding ├─ Parse topic/question - ├─ Identify analysis dimensions (architecture, implementation, concept, etc.) + ├─ Identify analysis dimensions ├─ Initial scoping with user - └─ Document initial understanding in discussion.md + └─ Initialize discussion.md -Phase 2: Exploration (Parallel) - ├─ Search codebase for relevant patterns - ├─ Analyze code structure and dependencies - └─ Aggregate findings into exploration summary +Phase 2: CLI Exploration (Serial) + ├─ Step 1: Codebase context gathering (Glob/Grep/Read) + ├─ Step 2: Gemini CLI analysis (build on codebase findings) + └─ Aggregate findings into explorations.json Phase 3: Interactive Discussion (Multi-Round) ├─ Present exploration findings ├─ Facilitate Q&A with user - ├─ Capture user insights and requirements + ├─ Capture user insights and corrections + ├─ Actions: Deepen | Adjust | Answer | Complete ├─ Update discussion.md with each round - └─ Repeat until user is satisfied or clarity achieved + └─ Repeat until clarity achieved (max 5 rounds) Phase 4: Synthesis & Conclusion ├─ Consolidate all insights - ├─ Update discussion.md with conclusions - ├─ Generate actionable recommendations - └─ Optional: Create follow-up tasks or issues + ├─ Generate conclusions with recommendations + ├─ Update discussion.md with final synthesis + └─ Offer follow-up options Output: - ├─ .workflow/.analysis/{slug}-{date}/discussion.md (evolving document) - ├─ .workflow/.analysis/{slug}-{date}/explorations.json (findings) + ├─ .workflow/.analysis/{slug}-{date}/discussion.md (evolution) + ├─ .workflow/.analysis/{slug}-{date}/exploration-codebase.json (codebase context) + ├─ .workflow/.analysis/{slug}-{date}/explorations.json (CLI findings) └─ .workflow/.analysis/{slug}-{date}/conclusions.json (final synthesis) ``` +## Output Structure + +``` +.workflow/.analysis/ANL-{slug}-{date}/ +├── discussion.md # ⭐ Evolution of understanding & discussions +├── exploration-codebase.json # Phase 2: Codebase context +├── explorations.json # Phase 2: CLI analysis findings +└── conclusions.json # Phase 4: Final synthesis +``` + +--- + ## Implementation Details -### Session Setup & Mode Detection +### Session Setup ```javascript const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString() -const topicSlug = "$TOPIC".toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40) +const topicSlug = "$TOPIC".toLowerCase().replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-').substring(0, 40) const dateStr = getUtc8ISOString().substring(0, 10) const sessionId = `ANL-${topicSlug}-${dateStr}` const sessionFolder = `.workflow/.analysis/${sessionId}` const discussionPath = `${sessionFolder}/discussion.md` 
+const explorationPath = `${sessionFolder}/exploration-codebase.json` const explorationsPath = `${sessionFolder}/explorations.json` const conclusionsPath = `${sessionFolder}/conclusions.json` // Auto-detect mode const sessionExists = fs.existsSync(sessionFolder) const hasDiscussion = sessionExists && fs.existsSync(discussionPath) - const mode = hasDiscussion ? 'continue' : 'new' if (!sessionExists) { @@ -97,15 +106,14 @@ if (!sessionExists) { #### Step 1.1: Parse Topic & Identify Dimensions ```javascript -// Analyze topic to determine analysis dimensions const ANALYSIS_DIMENSIONS = { - architecture: ['架构', 'architecture', 'design', 'structure', '设计'], - implementation: ['实现', 'implement', 'code', 'coding', '代码'], - performance: ['性能', 'performance', 'optimize', 'bottleneck', '优化'], - security: ['安全', 'security', 'auth', 'permission', '权限'], - concept: ['概念', 'concept', 'theory', 'principle', '原理'], - comparison: ['比较', 'compare', 'vs', 'difference', '区别'], - decision: ['决策', 'decision', 'choice', 'tradeoff', '选择'] + architecture: ['架构', 'architecture', 'design', 'structure', '设计', 'pattern'], + implementation: ['实现', 'implement', 'code', 'coding', '代码', 'logic'], + performance: ['性能', 'performance', 'optimize', 'bottleneck', '优化', 'speed'], + security: ['安全', 'security', 'auth', 'permission', '权限', 'vulnerability'], + concept: ['概念', 'concept', 'theory', 'principle', '原理', 'understand'], + comparison: ['比较', 'compare', 'vs', 'difference', '区别', 'versus'], + decision: ['决策', 'decision', 'choice', 'tradeoff', '选择', 'trade-off'] } function identifyDimensions(topic) { @@ -118,7 +126,7 @@ function identifyDimensions(topic) { } } - return matched.length > 0 ? matched : ['general'] + return matched.length > 0 ? matched : ['architecture', 'implementation'] } const dimensions = identifyDimensions("$TOPIC") @@ -129,14 +137,12 @@ const dimensions = identifyDimensions("$TOPIC") Ask user to scope the analysis: - Focus areas: 代码实现 / 架构设计 / 最佳实践 / 问题诊断 -- Analysis depth: Quick Overview / Standard Analysis / Deep Dive +- Analysis depth: 快速概览 / 标准分析 / 深度挖掘 -#### Step 1.3: Create/Update discussion.md - -For new session: +#### Step 1.3: Initialize discussion.md ```markdown -# Analysis Discussion +# Analysis Session **Session ID**: ${sessionId} **Topic**: $TOPIC @@ -145,10 +151,17 @@ For new session: --- -## User Context +## Analysis Context -**Focus Areas**: ${userFocusAreas.join(', ')} -**Analysis Depth**: ${analysisDepth} +**Focus Areas**: ${focusAreas.join(', ')} +**Depth**: ${analysisDepth} +**Scope**: ${scope || 'Full codebase'} + +--- + +## Initial Questions + +${keyQuestions.map((q, i) => `${i+1}. ${q}`).join('\n')} --- @@ -156,90 +169,112 @@ For new session: ### Round 1 - Initial Understanding (${timestamp}) -#### Topic Analysis - -Based on topic "$TOPIC": - -- **Primary dimensions**: ${dimensions.join(', ')} -- **Initial scope**: ${initialScope} -- **Key questions to explore**: - - ${question1} - - ${question2} - - ${question3} - -#### Next Steps - -- Search codebase for relevant patterns -- Gather insights via analysis -- Prepare discussion points for user +#### Key Questions +${keyQuestions.map((q, i) => `${i+1}. ${q}`).join('\n')} --- ## Current Understanding -${initialUnderstanding} -``` - -For continue session, append: - -```markdown -### Round ${n} - Continuation (${timestamp}) - -#### Previous Context - -Resuming analysis based on prior discussion. 
- -#### New Focus - -${newFocusFromUser} +*To be populated after exploration phases* ``` --- -### Phase 2: Exploration +### Phase 2: CLI Exploration (Serial) -#### Step 2.1: Codebase Search +#### Step 2.1: Codebase Context Gathering + +Use built-in tools (no agent needed): ```javascript -// Extract keywords from topic -const keywords = extractTopicKeywords("$TOPIC") +// 1. Get project structure +const modules = bash("ccw tool exec get_modules_by_depth '{}'") -// Search codebase for relevant code -const searchResults = [] -for (const keyword of keywords) { - const results = Grep({ pattern: keyword, path: ".", output_mode: "content", "-C": 3 }) - searchResults.push({ keyword, results }) +// 2. Search for related code +const topicKeywords = extractKeywords("$TOPIC") +const relatedFiles = Grep({ + pattern: topicKeywords.join('|'), + path: "src/", + output_mode: "files_with_matches" +}) + +// 3. Read project tech context +const projectTech = Read(".workflow/project-tech.json") + +// Build exploration context +const explorationContext = { + relevant_files: relatedFiles.map(f => ({ path: f, relevance: 'high' })), + patterns: extractPatterns(modules), + constraints: projectTech?.constraints || [], + integration_points: projectTech?.integrations || [] } -// Identify affected files and patterns -const relevantLocations = analyzeSearchResults(searchResults) +Write(`${sessionFolder}/exploration-codebase.json`, JSON.stringify(explorationContext, null, 2)) ``` -#### Step 2.2: Pattern Analysis +#### Step 2.2: Gemini CLI Analysis -Analyze the codebase from identified dimensions: +**CLI Call** (synchronous): +```bash +ccw cli -p " +PURPOSE: Analyze '${topic}' from ${dimensions.join(', ')} perspectives +Success: Actionable insights with clear reasoning -1. Architecture patterns and structure -2. Implementation conventions -3. Dependency relationships -4. Potential issues or improvements +PRIOR CODEBASE CONTEXT: +- Key files: ${explorationContext.relevant_files.slice(0,5).map(f => f.path).join(', ')} +- Patterns found: ${explorationContext.patterns.slice(0,3).join(', ')} +- Constraints: ${explorationContext.constraints.slice(0,3).join(', ')} + +TASK: +• Build on exploration findings above +• Analyze common patterns and anti-patterns +• Highlight potential issues or opportunities +• Generate discussion points for user clarification +• Provide 3-5 key insights with evidence + +MODE: analysis + +CONTEXT: @**/* | Topic: $TOPIC + +EXPECTED: +- Structured analysis with clear sections +- Specific insights tied to evidence (file:line references where applicable) +- Questions to deepen understanding +- Recommendations with rationale +- Confidence levels (high/medium/low) for each conclusion + +CONSTRAINTS: Focus on ${dimensions.join(', ')} | Ignore test files +" --tool gemini --mode analysis +``` + +**⏳ Wait for completion** #### Step 2.3: Aggregate Findings ```javascript -// Aggregate into explorations.json const explorations = { session_id: sessionId, timestamp: getUtc8ISOString(), topic: "$TOPIC", dimensions: dimensions, + sources: [ - { type: "codebase", summary: codebaseSummary }, - { type: "analysis", summary: analysisSummary } + { type: 'codebase', summary: 'Project structure and related files' }, + { type: 'cli_analysis', summary: 'Gemini deep analysis' } ], - key_findings: [...], - discussion_points: [...], - open_questions: [...] 
+ + key_findings: [ + // Populated from CLI analysis + ], + + discussion_points: [ + // Questions for user engagement + ], + + open_questions: [ + // Unresolved items + ] } Write(explorationsPath, JSON.stringify(explorations, null, 2)) @@ -247,104 +282,194 @@ Write(explorationsPath, JSON.stringify(explorations, null, 2)) #### Step 2.4: Update discussion.md -```markdown -#### Exploration Results (${timestamp}) +Append Round 2 section: -**Sources Analyzed**: -${sources.map(s => `- ${s.type}: ${s.summary}`).join('\n')} +```markdown +### Round 2 - Initial Exploration (${timestamp}) + +#### Codebase Findings +${explorationContext.relevant_files.slice(0,5).map(f => `- ${f.path}: ${f.relevance}`).join('\n')} + +#### Analysis Results **Key Findings**: -${keyFindings.map((f, i) => `${i+1}. ${f}`).join('\n')} +${keyFindings.map(f => `- 📍 ${f}`).join('\n')} -**Points for Discussion**: -${discussionPoints.map((p, i) => `${i+1}. ${p}`).join('\n')} +**Discussion Points**: +${discussionPoints.map(p => `- ❓ ${p}`).join('\n')} -**Open Questions**: -${openQuestions.map((q, i) => `- ${q}`).join('\n')} +**Recommendations**: +${recommendations.map(r => `- ✅ ${r}`).join('\n')} + +--- ``` --- -### Phase 3: Interactive Discussion (Multi-Round) +### Phase 3: Interactive Discussion -#### Step 3.1: Present Findings & Gather Feedback +#### Step 3.1: Present & Gather Feedback ```javascript -// Maximum discussion rounds const MAX_ROUNDS = 5 -let roundNumber = 1 -let discussionComplete = false +let roundNumber = 3 -while (!discussionComplete && roundNumber <= MAX_ROUNDS) { - // Display current findings +while (!analysisComplete && roundNumber <= MAX_ROUNDS) { + + // Present current findings console.log(` -## Discussion Round ${roundNumber} +## Analysis Round ${roundNumber} -${currentFindings} +### Current Understanding +${currentUnderstanding} -### Key Points for Your Input -${discussionPoints.map((p, i) => `${i+1}. ${p}`).join('\n')} +### Key Questions Still Open +${openQuestions.map((q, i) => `${i+1}. 
${q}`).join('\n')} + +### User Options: +- 继续深入: Deepen current direction +- 调整方向: Change analysis angle +- 有具体问题: Ask specific question +- 分析完成: Ready for synthesis `) - // Gather user input - // Options: - // - 同意,继续深入: Deepen analysis in current direction - // - 需要调整方向: Get user's adjusted focus - // - 分析完成: Exit loop - // - 有具体问题: Answer specific questions + // User selects direction: + // - 继续深入: Deepen analysis in current direction + // - 调整方向: Change focus area + // - 有具体问题: Capture specific questions + // - 分析完成: Exit discussion loop - // Process user response and update understanding - updateDiscussionDocument(roundNumber, userResponse, findings) roundNumber++ } ``` -#### Step 3.2: Document Each Round +#### Step 3.2: Deepen Analysis + +**CLI Call** (synchronous): +```bash +ccw cli -p " +PURPOSE: Deepen analysis on '${topic}' - more detailed investigation +Success: Comprehensive understanding with actionable insights + +PRIOR FINDINGS: +${priorFindings.join('\n')} + +DEEPEN ON: +${focusAreas.map(a => `- ${a}: ${details}`).join('\n')} + +TASK: +• Elaborate on prior findings +• Investigate edge cases or special scenarios +• Identify patterns not yet discussed +• Suggest implementation or improvement approaches +• Rate risk/impact for each finding (1-5) + +MODE: analysis + +CONTEXT: @**/* + +EXPECTED: +- Detailed breakdown of prior findings +- Risk/impact assessment +- Specific improvement suggestions +- Code examples or patterns where applicable +" --tool gemini --mode analysis +``` + +#### Step 3.3: Adjust Direction + +**CLI Call** (synchronous): +```bash +ccw cli -p " +PURPOSE: Analyze '${topic}' from different perspective: ${newFocus} +Success: Fresh insights from new angle + +PRIOR ANALYSIS: +${priorAnalysis} + +NEW FOCUS: +Shift emphasis to: ${newFocus} + +TASK: +• Analyze topic from new perspective +• Identify what was missed in prior analysis +• Generate insights specific to new focus +• Cross-reference with prior findings +• Suggest next investigation steps + +MODE: analysis + +CONTEXT: @**/* + +EXPECTED: +- New perspective insights +- Gaps in prior analysis +- Integrated view (prior + new) +" --tool gemini --mode analysis +``` + +#### Step 3.4: Answer Specific Questions + +**CLI Call** (synchronous): +```bash +ccw cli -p " +PURPOSE: Answer specific questions about '${topic}' +Success: Clear, evidence-based answers + +QUESTIONS FROM USER: +${userQuestions.map((q, i) => `${i+1}. ${q}`).join('\n')} + +PRIOR CONTEXT: +${priorAnalysis} + +TASK: +• Answer each question directly +• Provide evidence or examples +• Clarify any ambiguous points +• Suggest related investigation + +MODE: analysis + +CONTEXT: @**/* + +EXPECTED: +- Direct answer to each question +- Supporting evidence +- Confidence level for each answer +" --tool gemini --mode analysis +``` + +#### Step 3.5: Document Each Round Append to discussion.md: ```markdown -### Round ${n} - Discussion (${timestamp}) +### Round ${n} - ${action} (${timestamp}) -#### User Input +#### User Direction +- **Action**: ${action} +- **Focus**: ${focus || 'Same as prior'} -${userInputSummary} +#### Analysis Results -${userResponse === 'adjustment' ? ` -**Direction Adjustment**: ${adjustmentDetails} -` : ''} +**Key Findings**: +${newFindings.map(f => `- ${f}`).join('\n')} -${userResponse === 'questions' ? ` -**User Questions**: -${userQuestions.map((q, i) => `${i+1}. ${q}`).join('\n')} +**Insights**: +${insights.map(i => `- 💡 ${i}`).join('\n')} -**Answers**: -${answers.map((a, i) => `${i+1}. 
${a}`).join('\n')} -` : ''} - -#### Updated Understanding - -Based on user feedback: -- ${insight1} -- ${insight2} +**Next Steps**: +${nextSteps.map(s => `${s.priority} - ${s.action}`).join('\n')} #### Corrected Assumptions - -${corrections.length > 0 ? corrections.map(c => ` -- ~~${c.wrong}~~ → ${c.corrected} - - Reason: ${c.reason} -`).join('\n') : 'None'} - -#### New Insights - -${newInsights.map(i => `- ${i}`).join('\n')} +${corrections.map(c => `- ~~${c.before}~~ → ${c.after}`).join('\n')} ``` --- ### Phase 4: Synthesis & Conclusion -#### Step 4.1: Consolidate Insights +#### Step 4.1: Final Synthesis ```javascript const conclusions = { @@ -353,21 +478,20 @@ const conclusions = { completed: getUtc8ISOString(), total_rounds: roundNumber, - summary: "...", + summary: executiveSummary, key_conclusions: [ - { point: "...", evidence: "...", confidence: "high|medium|low" } + { point: '...', evidence: '...', confidence: 'high|medium|low' } ], recommendations: [ - { action: "...", rationale: "...", priority: "high|medium|low" } + { action: '...', rationale: '...', priority: 'high|medium|low' } ], - open_questions: [...], + open_questions: remainingQuestions, follow_up_suggestions: [ - { type: "issue", summary: "..." }, - { type: "task", summary: "..." } + { type: 'issue|task|research', summary: '...' } ] } @@ -379,164 +503,108 @@ Write(conclusionsPath, JSON.stringify(conclusions, null, 2)) ```markdown --- -## Conclusions (${timestamp}) +## Synthesis & Conclusions (${timestamp}) -### Summary - -${summaryParagraph} +### Executive Summary +${summary} ### Key Conclusions -${conclusions.key_conclusions.map((c, i) => ` +${keyConclusions.map((c, i) => ` ${i+1}. **${c.point}** (Confidence: ${c.confidence}) - - Evidence: ${c.evidence} + Evidence: ${c.evidence} `).join('\n')} ### Recommendations -${conclusions.recommendations.map((r, i) => ` -${i+1}. **${r.action}** (Priority: ${r.priority}) - - Rationale: ${r.rationale} +${recommendations.map((r, i) => ` +${i+1}. **[${r.priority}]** ${r.action} + Rationale: ${r.rationale} `).join('\n')} -### Remaining Questions +### Remaining Open Questions -${conclusions.open_questions.map(q => `- ${q}`).join('\n')} +${openQuestions.map((q, i) => `${i+1}. 
${q}`).join('\n')} --- ## Current Understanding (Final) ### What We Established +${established.map(p => `- ✅ ${p}`).join('\n')} -${establishedPoints.map(p => `- ${p}`).join('\n')} - -### What Was Clarified/Corrected - -${corrections.map(c => `- ~~${c.original}~~ → ${c.corrected}`).join('\n')} +### What Was Clarified +${clarified.map(c => `- ~~${c.before}~~ → ${c.after}`).join('\n')} ### Key Insights - -${keyInsights.map(i => `- ${i}`).join('\n')} +${insights.map(i => `- 💡 ${i}`).join('\n')} --- ## Session Statistics - **Total Rounds**: ${totalRounds} -- **Duration**: ${duration} -- **Sources Used**: ${sources.join(', ')} -- **Artifacts Generated**: discussion.md, explorations.json, conclusions.json +- **Key Findings**: ${keyFindings.length} +- **Dimensions Analyzed**: ${dimensions.join(', ')} +- **Artifacts**: discussion.md, exploration-codebase.json, explorations.json, conclusions.json ``` #### Step 4.3: Post-Completion Options Offer follow-up options: -- Create Issue: Convert conclusions to actionable issues -- Generate Task: Create implementation tasks -- Export Report: Generate standalone analysis report -- Complete: No further action needed +- Create Issue (for findings) +- Generate Task (for improvements) +- Export Report +- Complete --- -## Session Folder Structure +## Configuration -``` -.workflow/.analysis/ANL-{slug}-{date}/ -├── discussion.md # Evolution of understanding & discussions -├── explorations.json # Exploration findings -├── conclusions.json # Final synthesis -└── exploration-*.json # Individual exploration results (optional) -``` +### Analysis Dimensions -## Discussion Document Template +| Dimension | Keywords | +|-----------|----------| +| architecture | 架构, architecture, design, structure, 设计 | +| implementation | 实现, implement, code, coding, 代码 | +| performance | 性能, performance, optimize, bottleneck, 优化 | +| security | 安全, security, auth, permission, 权限 | +| concept | 概念, concept, theory, principle, 原理 | +| comparison | 比较, compare, vs, difference, 区别 | +| decision | 决策, decision, choice, tradeoff, 选择 | -```markdown -# Analysis Discussion +### Depth Settings -**Session ID**: ANL-xxx-2025-01-25 -**Topic**: [topic or question] -**Started**: 2025-01-25T10:00:00+08:00 -**Dimensions**: [architecture, implementation, ...] +| Depth | Time | Scope | Questions | +|-------|------|-------|-----------| +| Quick (10-15min) | 1-2 | Surface level | 3-5 key | +| Standard (30-60min) | 2-4 | Moderate depth | 5-8 key | +| Deep (1-2hr) | 4+ | Comprehensive | 10+ key | --- -## User Context +## Error Handling -**Focus Areas**: [user-selected focus] -**Analysis Depth**: [quick|standard|deep] +| Situation | Action | +|-----------|--------| +| CLI timeout | Retry with shorter prompt, skip analysis | +| No relevant findings | Broaden search, adjust keywords | +| User disengaged | Summarize progress, offer break point | +| Max rounds reached | Force synthesis, highlight remaining questions | +| Session folder conflict | Append timestamp suffix | --- -## Discussion Timeline - -### Round 1 - Initial Understanding (2025-01-25 10:00) - -#### Topic Analysis -... - -#### Exploration Results -... - -### Round 2 - Discussion (2025-01-25 10:15) - -#### User Input -... - -#### Updated Understanding -... - -#### Corrected Assumptions -- ~~[wrong]~~ → [corrected] - -### Round 3 - Deep Dive (2025-01-25 10:30) -... - ---- - -## Conclusions (2025-01-25 11:00) - -### Summary -... - -### Key Conclusions -... - -### Recommendations -... 
- ---- - -## Current Understanding (Final) - -### What We Established -- [confirmed points] - -### What Was Clarified/Corrected -- ~~[original assumption]~~ → [corrected understanding] - -### Key Insights -- [insights gained] - ---- - -## Session Statistics - -- **Total Rounds**: 3 -- **Duration**: 1 hour -- **Sources Used**: codebase exploration, analysis -- **Artifacts Generated**: discussion.md, explorations.json, conclusions.json -``` - ## Iteration Flow ``` First Call (TOPIC="topic"): ├─ No session exists → New mode - ├─ Identify analysis dimensions + ├─ Identify dimensions ├─ Scope with user - ├─ Create discussion.md with initial understanding - ├─ Launch explorations + ├─ Create discussion.md + ├─ Codebase exploration + ├─ Gemini CLI analysis └─ Enter discussion loop Continue Call (TOPIC="topic"): @@ -549,10 +617,10 @@ Discussion Loop: ├─ Present current findings ├─ Gather user feedback ├─ Process response: - │ ├─ Agree → Deepen analysis - │ ├─ Adjust → Change direction - │ ├─ Question → Answer then continue - │ └─ Complete → Exit loop + │ ├─ Deepen → Deeper analysis on same topic + │ ├─ Adjust → Shift analysis focus + │ ├─ Questions → Answer specific questions + │ └─ Complete → Exit loop for synthesis ├─ Update discussion.md └─ Repeat until complete or max rounds @@ -562,49 +630,6 @@ Completion: └─ Offer follow-up options ``` -## Consolidation Rules - -When updating "Current Understanding": - -1. **Promote confirmed insights**: Move validated findings to "What We Established" -2. **Track corrections**: Keep important wrong→right transformations -3. **Focus on current state**: What do we know NOW -4. **Avoid timeline repetition**: Don't copy discussion details -5. **Preserve key learnings**: Keep insights valuable for future reference - -**Bad (cluttered)**: -```markdown -## Current Understanding - -In round 1 we discussed X, then in round 2 user said Y, and we explored Z... -``` - -**Good (consolidated)**: -```markdown -## Current Understanding - -### What We Established -- The authentication flow uses JWT with refresh tokens -- Rate limiting is implemented at API gateway level - -### What Was Clarified -- ~~Assumed Redis for sessions~~ → Actually uses database-backed sessions - -### Key Insights -- Current architecture supports horizontal scaling -- Security audit recommended before production -``` - -## Error Handling - -| Situation | Action | -|-----------|--------| -| Exploration fails | Continue with available context, note limitation | -| User timeout in discussion | Save state, show resume instructions | -| Max rounds reached | Force synthesis, offer continuation option | -| No relevant findings | Broaden search, ask user for clarification | -| Session folder conflict | Append timestamp suffix | - --- -**Now execute the analyze-with-file workflow for topic**: $TOPIC +**Now execute the analysis-with-file workflow for topic**: $TOPIC diff --git a/.codex/prompts/brainstorm-with-file.md b/.codex/prompts/brainstorm-with-file.md index 7c93e312..7a30d101 100644 --- a/.codex/prompts/brainstorm-with-file.md +++ b/.codex/prompts/brainstorm-with-file.md @@ -1,6 +1,6 @@ --- -description: Interactive brainstorming with multi-perspective analysis, idea expansion, and documented thought evolution. Supports perspective selection and idea limits. -argument-hint: "TOPIC=\"\" [--perspectives=role1,role2,...] [--max-ideas=] [--focus=] [--verbose]" +description: Interactive brainstorming with serial CLI collaboration, idea expansion, and documented thought evolution. 
Sequential multi-perspective analysis for Codex. +argument-hint: "TOPIC=\"\" [--perspectives=creative,pragmatic,systematic] [--max-ideas=] [--focus=] [--mode=creative|structured|balanced]" --- # Codex Brainstorm-With-File Prompt @@ -9,22 +9,23 @@ argument-hint: "TOPIC=\"\" [--perspectives=role1,role2,...] [--ma Interactive brainstorming workflow with **documented thought evolution**. Expands initial ideas through questioning, multi-perspective analysis, and iterative refinement. -**Core workflow**: Seed Idea → Expand → Multi-Perspective Explore → Synthesize → Refine → Crystallize +**Core workflow**: Seed Idea → Expand → Serial CLI Explore → Synthesize → Refine → Crystallize **Key features**: - **brainstorm.md**: Complete thought evolution timeline -- **Multi-perspective analysis**: Creative, Pragmatic, Systematic viewpoints +- **Serial multi-perspective**: Creative → Pragmatic → Systematic (sequential) - **Idea expansion**: Progressive questioning and exploration - **Diverge-Converge cycles**: Generate options then focus on best paths -- **Synthesis**: Merge multiple perspectives into coherent solutions ## Target Topic **$TOPIC** -- `--perspectives`: Analysis perspectives (role1,role2,...) -- `--max-ideas`: Max number of ideas -- `--focus`: Focus area +**Parameters**: +- `--perspectives`: Analysis perspectives (default: creative,pragmatic,systematic) +- `--max-ideas`: Max number of ideas per perspective (default: 5) +- `--focus`: Focus area (technical/ux/business/innovation) +- `--mode`: Brainstorm mode (creative/structured/balanced, default: balanced) ## Execution Process @@ -36,16 +37,17 @@ Session Detection: Phase 1: Seed Understanding ├─ Parse initial idea/topic - ├─ Identify brainstorm dimensions (technical, UX, business, etc.) + ├─ Identify brainstorm dimensions ├─ Initial scoping with user ├─ Expand seed into exploration vectors └─ Document in brainstorm.md -Phase 2: Divergent Exploration (Multi-Perspective) - ├─ Creative perspective: Innovative, unconventional ideas - ├─ Pragmatic perspective: Implementation-focused approaches - ├─ Systematic perspective: Architectural, structured solutions - └─ Aggregate diverse viewpoints +Phase 2: Divergent Exploration (Serial CLI) + ├─ Step 1: Codebase context gathering (Glob/Grep/Read) + ├─ Step 2: Creative perspective (Gemini CLI) + ├─ Step 3: Pragmatic perspective (Codex CLI) ← Wait for Step 2 + ├─ Step 4: Systematic perspective (Claude CLI) ← Wait for Step 3 + └─ Aggregate perspectives sequentially Phase 3: Interactive Refinement (Multi-Round) ├─ Present multi-perspective findings @@ -59,7 +61,7 @@ Phase 4: Convergence & Crystallization ├─ Synthesize best ideas ├─ Resolve conflicts between perspectives ├─ Formulate actionable conclusions - ├─ Generate next steps or implementation plan + ├─ Generate next steps └─ Final brainstorm.md update Output: @@ -69,9 +71,25 @@ Output: └─ .workflow/.brainstorm/{slug}-{date}/ideas/ (individual idea deep-dives) ``` +## Output Structure + +``` +.workflow/.brainstorm/BS-{slug}-{date}/ +├── brainstorm.md # ⭐ Complete thought evolution timeline +├── exploration-codebase.json # Phase 2: Codebase context +├── perspectives.json # Phase 2: Serial CLI findings +├── synthesis.json # Phase 4: Final synthesis +└── ideas/ # Phase 3: Individual idea deep-dives + ├── idea-1.md + ├── idea-2.md + └── merged-idea-1.md +``` + +--- + ## Implementation Details -### Session Setup & Mode Detection +### Session Setup ```javascript const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString() @@ -89,7 
+107,6 @@ const ideasFolder = `${sessionFolder}/ideas` // Auto-detect mode const sessionExists = fs.existsSync(sessionFolder) const hasBrainstorm = sessionExists && fs.existsSync(brainstormPath) - const mode = hasBrainstorm ? 'continue' : 'new' if (!sessionExists) { @@ -104,7 +121,6 @@ if (!sessionExists) { #### Step 1.1: Parse Seed & Identify Dimensions ```javascript -// Brainstorm dimensions for multi-perspective analysis const BRAINSTORM_DIMENSIONS = { technical: ['技术', 'technical', 'implementation', 'code', '实现', 'architecture'], ux: ['用户', 'user', 'experience', 'UX', 'UI', '体验', 'interaction'], @@ -118,13 +134,13 @@ const BRAINSTORM_DIMENSIONS = { function identifyDimensions(topic) { const text = topic.toLowerCase() const matched = [] - + for (const [dimension, keywords] of Object.entries(BRAINSTORM_DIMENSIONS)) { if (keywords.some(k => text.includes(k))) { matched.push(dimension) } } - + return matched.length > 0 ? matched : ['technical', 'innovation', 'feasibility'] } @@ -140,8 +156,13 @@ Ask user to scope the brainstorm: #### Step 1.3: Expand Seed into Exploration Vectors -Generate exploration vectors from seed idea: +**CLI Call** (synchronous): +```bash +ccw cli -p " +Given the initial idea: '$TOPIC' +Dimensions: ${dimensions} +Generate 5-7 exploration vectors (questions/directions) to expand this idea: 1. Core question: What is the fundamental problem/opportunity? 2. User perspective: Who benefits and how? 3. Technical angle: What enables this technically? @@ -150,9 +171,11 @@ Generate exploration vectors from seed idea: 6. Innovation angle: What would make this 10x better? 7. Integration: How does this fit with existing systems/processes? -#### Step 1.4: Create/Update brainstorm.md +Output as structured exploration vectors for multi-perspective analysis. +" --tool gemini --mode analysis --model gemini-2.5-flash +``` -For new session: +#### Step 1.4: Create brainstorm.md ```markdown # Brainstorm Session @@ -178,7 +201,6 @@ For new session: > $TOPIC ### Exploration Vectors - ${explorationVectors.map((v, i) => ` #### Vector ${i+1}: ${v.title} **Question**: ${v.question} @@ -213,36 +235,55 @@ ${keyQuestions.map((q, i) => `${i+1}. ${q}`).join('\n')} *Discarded ideas with reasons - kept for reference* ``` -For continue session, append: - -```markdown -### Round ${n} - Continuation (${timestamp}) - -#### Previous Context - -Resuming brainstorm based on prior discussion. - -#### New Focus - -${newFocusFromUser} -``` - --- -### Phase 2: Divergent Exploration (Multi-Perspective) +### Phase 2: Divergent Exploration (Serial CLI) -Launch 3 parallel agents for multi-perspective brainstorming: +**⚠️ CRITICAL: Execute CLI calls SERIALLY, not in parallel** + +Codex does not support parallel agent execution. Each perspective must complete before starting the next. + +#### Step 2.1: Codebase Context Gathering + +Use built-in tools to gather context (no agent needed): ```javascript -const cliPromises = [] +// 1. Get project structure +const modules = bash("ccw tool exec get_modules_by_depth '{}'") -// Agent 1: Creative/Innovative Perspective (Gemini) -cliPromises.push( - Bash({ - command: `ccw cli -p " +// 2. Search for related code +const relatedFiles = Glob("**/*.{ts,js,tsx,jsx}") +const topicSearch = Grep({ + pattern: topicKeywords.join('|'), + path: "src/", + output_mode: "files_with_matches" +}) + +// 3. 
Read project tech context +const projectTech = Read(".workflow/project-tech.json") + +// Build exploration context +const explorationContext = { + relevant_files: topicSearch.files, + existing_patterns: extractPatterns(modules), + architecture_constraints: projectTech?.architecture || [], + integration_points: projectTech?.integrations || [] +} + +Write(`${sessionFolder}/exploration-codebase.json`, JSON.stringify(explorationContext, null, 2)) +``` + +#### Step 2.2: Creative Perspective (FIRST) + +```bash +ccw cli -p " PURPOSE: Creative brainstorming for '$TOPIC' - generate innovative, unconventional ideas Success: 5+ unique creative solutions that push boundaries +PRIOR CODEBASE CONTEXT: +- Key files: ${explorationContext.relevant_files.slice(0,5).join(', ')} +- Existing patterns: ${explorationContext.existing_patterns.slice(0,3).join(', ')} + TASK: • Think beyond obvious solutions - what would be surprising/delightful? • Explore cross-domain inspiration (what can we learn from other industries?) @@ -253,7 +294,6 @@ TASK: MODE: analysis CONTEXT: @**/* | Topic: $TOPIC -Exploration vectors: ${explorationVectors.map(v => v.title).join(', ')} EXPECTED: - 5+ creative ideas with brief descriptions @@ -262,21 +302,28 @@ EXPECTED: - Cross-domain inspirations - One 'crazy' idea that might just work -CONSTRAINTS: ${brainstormMode === 'structured' ? 'Keep ideas technically feasible' : 'No constraints - think freely'} -" --tool gemini --mode analysis`, - run_in_background: true - }) -) +OUTPUT FORMAT: JSON with ideas[], challenged_assumptions[], inspirations[] +" --tool gemini --mode analysis +``` -// Agent 2: Pragmatic/Implementation Perspective (Codex) -cliPromises.push( - Bash({ - command: \`ccw cli -p " +**⏳ Wait for completion before proceeding** + +#### Step 2.3: Pragmatic Perspective (AFTER Creative) + +```bash +ccw cli -p " PURPOSE: Pragmatic analysis for '$TOPIC' - focus on implementation reality Success: Actionable approaches with clear implementation paths +PRIOR CODEBASE CONTEXT: +- Key files: ${explorationContext.relevant_files.slice(0,5).join(', ')} +- Architecture constraints: ${explorationContext.architecture_constraints.slice(0,3).join(', ')} + +CREATIVE IDEAS FROM PREVIOUS STEP: +${creativeResult.ideas.map(i => `- ${i.title}`).join('\n')} + TASK: -• Evaluate technical feasibility of core concept +• Evaluate technical feasibility of creative ideas above • Identify existing patterns/libraries that could help • Consider integration with current codebase • Estimate implementation complexity @@ -286,7 +333,6 @@ TASK: MODE: analysis CONTEXT: @**/* | Topic: $TOPIC -Exploration vectors: \${explorationVectors.map(v => v.title).join(', ')} EXPECTED: - 3-5 practical implementation approaches @@ -295,19 +341,26 @@ EXPECTED: - Quick wins vs long-term solutions - Recommended starting point -CONSTRAINTS: Focus on what can actually be built with current tech stack -" --tool codex --mode analysis\`, - run_in_background: true - }) -) +OUTPUT FORMAT: JSON with approaches[], blockers[], recommendations[] +" --tool codex --mode analysis +``` -// Agent 3: Systematic/Architectural Perspective (Claude) -cliPromises.push( - Bash({ - command: \`ccw cli -p " +**⏳ Wait for completion before proceeding** + +#### Step 2.4: Systematic Perspective (AFTER Pragmatic) + +```bash +ccw cli -p " PURPOSE: Systematic analysis for '$TOPIC' - architectural and structural thinking Success: Well-structured solution framework with clear tradeoffs +PRIOR CODEBASE CONTEXT: +- Architecture constraints: 
${explorationContext.architecture_constraints.join(', ')} +- Integration points: ${explorationContext.integration_points.join(', ')} + +CREATIVE IDEAS: ${creativeResult.ideas.map(i => i.title).join(', ')} +PRAGMATIC APPROACHES: ${pragmaticResult.approaches.map(a => a.title).join(', ')} + TASK: • Decompose the problem into sub-problems • Identify architectural patterns that apply @@ -319,7 +372,6 @@ TASK: MODE: analysis CONTEXT: @**/* | Topic: $TOPIC -Exploration vectors: \${explorationVectors.map(v => v.title).join(', ')} EXPECTED: - Problem decomposition diagram (text) @@ -329,73 +381,42 @@ EXPECTED: - Recommended architecture pattern - Risk matrix -CONSTRAINTS: Consider existing system architecture -" --tool claude --mode analysis\`, - run_in_background: true - }) -) - -// Wait for all CLI analyses to complete -const [creativeResult, pragmaticResult, systematicResult] = await Promise.all(cliPromises) - -// Parse results from each perspective -const creativeIdeas = parseCreativeResult(creativeResult) -const pragmaticApproaches = parsePragmaticResult(pragmaticResult) -const architecturalOptions = parseSystematicResult(systematicResult) +OUTPUT FORMAT: JSON with decomposition[], patterns[], tradeoffs[], risks[] +" --tool claude --mode analysis ``` -**Multi-Perspective Coordination**: +**⏳ Wait for completion before proceeding** -| Agent | Perspective | Tool | Focus Areas | -|-------|-------------|------|-------------| -| 1 | Creative/Innovative | Gemini | Novel ideas, cross-domain inspiration, moonshots | -| 2 | Pragmatic/Implementation | Codex | Feasibility, tech stack, blockers, quick wins | -| 3 | Systematic/Architectural | Claude | Decomposition, patterns, scalability, risks | - -#### Step 2.4: Aggregate Multi-Perspective Findings +#### Step 2.5: Aggregate Perspectives ```javascript const perspectives = { session_id: sessionId, timestamp: getUtc8ISOString(), topic: "$TOPIC", - - creative: { - ideas: [...], - insights: [...], - challenges: [...] - }, - - pragmatic: { - approaches: [...], - blockers: [...], - recommendations: [...] - }, - - systematic: { - decomposition: [...], - patterns: [...], - tradeoffs: [...] - }, - + + creative: creativeResult, + pragmatic: pragmaticResult, + systematic: systematicResult, + synthesis: { - convergent_themes: [], - conflicting_views: [], - unique_contributions: [] + convergent_themes: findConvergentThemes(creativeResult, pragmaticResult, systematicResult), + conflicting_views: findConflicts(creativeResult, pragmaticResult, systematicResult), + unique_contributions: extractUniqueInsights(creativeResult, pragmaticResult, systematicResult) } } Write(perspectivesPath, JSON.stringify(perspectives, null, 2)) ``` -#### Step 2.5: Update brainstorm.md with Perspectives +#### Step 2.6: Update brainstorm.md + +Append Round 2 section: ```markdown ### Round 2 - Multi-Perspective Exploration (${timestamp}) #### Creative Perspective - -**Top Creative Ideas**: ${creativeIdeas.map((idea, i) => ` ${i+1}. **${idea.title}** ⭐ Novelty: ${idea.novelty}/5 | Impact: ${idea.impact}/5 ${idea.description} @@ -404,14 +425,9 @@ ${i+1}. **${idea.title}** ⭐ Novelty: ${idea.novelty}/5 | Impact: ${idea.impact **Challenged Assumptions**: ${challengedAssumptions.map(a => `- ~~${a.assumption}~~ → Consider: ${a.alternative}`).join('\n')} -**Cross-Domain Inspirations**: -${inspirations.map(i => `- ${i}`).join('\n')} - --- #### Pragmatic Perspective - -**Implementation Approaches**: ${pragmaticApproaches.map((a, i) => ` ${i+1}. 
**${a.title}** | Effort: ${a.effort}/5 | Risk: ${a.risk}/5 ${a.description} @@ -425,7 +441,6 @@ ${blockers.map(b => `- ⚠️ ${b}`).join('\n')} --- #### Systematic Perspective - **Problem Decomposition**: ${decomposition} @@ -445,12 +460,7 @@ ${i+1}. **${opt.pattern}** ${convergentThemes.map(t => `- ✅ ${t}`).join('\n')} **Conflicting Views** (need resolution): -${conflictingViews.map(v => ` -- 🔄 ${v.topic} - - Creative: ${v.creative} - - Pragmatic: ${v.pragmatic} - - Systematic: ${v.systematic} -`).join('\n')} +${conflictingViews.map(v => `- 🔄 ${v.topic}: ${v.summary}`).join('\n')} **Unique Contributions**: ${uniqueContributions.map(c => `- 💡 [${c.source}] ${c.insight}`).join('\n')} @@ -458,23 +468,21 @@ ${uniqueContributions.map(c => `- 💡 [${c.source}] ${c.insight}`).join('\n')} --- -### Phase 3: Interactive Refinement (Multi-Round) +### Phase 3: Interactive Refinement #### Step 3.1: Present & Select Directions ```javascript const MAX_ROUNDS = 6 -let roundNumber = 3 // After initial exploration -let brainstormComplete = false +let roundNumber = 3 while (!brainstormComplete && roundNumber <= MAX_ROUNDS) { - - // Present current state + + // Present current top ideas console.log(` ## Brainstorm Round ${roundNumber} ### Top Ideas So Far - ${topIdeas.map((idea, i) => ` ${i+1}. **${idea.title}** (${idea.source}) ${idea.brief} @@ -485,80 +493,119 @@ ${i+1}. **${idea.title}** (${idea.source}) ${openQuestions.map((q, i) => `${i+1}. ${q}`).join('\n')} `) - // Gather user direction - options: + // User selects direction: // - 深入探索: Deep dive on selected ideas // - 继续发散: Generate more ideas // - 挑战验证: Devil's advocate challenge // - 合并综合: Merge multiple ideas // - 准备收敛: Start concluding - - // Process based on direction and update brainstorm.md + roundNumber++ } ``` -#### Step 3.2: Deep Dive on Selected Ideas +#### Step 3.2: Deep Dive on Selected Idea -For each selected idea, create dedicated idea file: +**CLI Call** (synchronous): +```bash +ccw cli -p " +PURPOSE: Deep dive analysis on idea '${idea.title}' +Success: Comprehensive understanding with actionable next steps -```javascript -async function deepDiveIdea(idea) { - const ideaPath = `${ideasFolder}/${idea.slug}.md` - - // Deep dive analysis: - // - Elaborate the core concept in detail - // - Identify implementation requirements - // - List potential challenges and mitigations - // - Suggest proof-of-concept approach - // - Define success metrics - // - Map related/dependent features - - // Output: - // - Detailed concept description - // - Technical requirements list - // - Risk/challenge matrix - // - MVP definition - // - Success criteria - // - Recommendation: pursue/pivot/park - - Write(ideaPath, deepDiveContent) -} +TASK: +• Elaborate the core concept in detail +• Identify implementation requirements +• List potential challenges and mitigations +• Suggest proof-of-concept approach +• Define success metrics +• Map related/dependent features + +MODE: analysis + +CONTEXT: @**/* +Original idea: ${idea.description} +Source perspective: ${idea.source} + +EXPECTED: +- Detailed concept description +- Technical requirements list +- Risk/challenge matrix +- MVP definition +- Success criteria +- Recommendation: pursue/pivot/park +" --tool gemini --mode analysis ``` +Write output to `${ideasFolder}/${idea.slug}.md` + #### Step 3.3: Devil's Advocate Challenge -For each idea, identify: -- 3 strongest objections -- Challenge core assumptions -- Scenarios where this fails -- Competitive/alternative solutions -- Whether this solves the right problem -- 
Survivability rating after challenge (1-5) +**CLI Call** (synchronous): +```bash +ccw cli -p " +PURPOSE: Devil's advocate - rigorously challenge these brainstorm ideas +Success: Uncover hidden weaknesses and strengthen viable ideas -Output: +IDEAS TO CHALLENGE: +${ideas.map((idea, i) => `${i+1}. ${idea.title}: ${idea.brief}`).join('\n')} + +TASK: +• For each idea, identify 3 strongest objections +• Challenge core assumptions +• Identify scenarios where this fails +• Consider competitive/alternative solutions +• Assess whether this solves the right problem +• Rate survivability after challenge (1-5) + +MODE: analysis + +EXPECTED: - Per-idea challenge report - Critical weaknesses exposed - Counter-arguments to objections (if any) - Ideas that survive the challenge - Modified/strengthened versions -#### Step 3.4: Merge & Synthesize Ideas +CONSTRAINTS: Be genuinely critical, not just contrarian +" --tool codex --mode analysis +``` -When merging selected ideas: -- Identify complementary elements -- Resolve contradictions -- Create unified concept -- Preserve key strengths from each -- Describe the merged solution -- Assess viability of merged idea +#### Step 3.4: Merge Ideas -Output: +**CLI Call** (synchronous): +```bash +ccw cli -p " +PURPOSE: Synthesize multiple ideas into unified concept +Success: Coherent merged idea that captures best elements + +IDEAS TO MERGE: +${selectedIdeas.map((idea, i) => ` +${i+1}. ${idea.title} (${idea.source}) + ${idea.description} + Strengths: ${idea.strengths.join(', ')} +`).join('\n')} + +TASK: +• Identify complementary elements +• Resolve contradictions +• Create unified concept +• Preserve key strengths from each +• Describe the merged solution +• Assess viability of merged idea + +MODE: analysis + +EXPECTED: - Merged concept description - Elements taken from each source idea - Contradictions resolved (or noted as tradeoffs) - New combined strengths - Implementation considerations +CONSTRAINTS: Don't force incompatible ideas together +" --tool gemini --mode analysis +``` + #### Step 3.5: Document Each Round Append to brainstorm.md: @@ -571,59 +618,9 @@ Append to brainstorm.md: - **Action**: ${action} - **Reasoning**: ${userReasoning || 'Not specified'} -${roundType === 'deep-dive' ? ` -#### Deep Dive: ${ideaTitle} - -**Elaborated Concept**: -${elaboratedConcept} - -**Implementation Requirements**: -${requirements.map(r => `- ${r}`).join('\n')} - -**Challenges & Mitigations**: -${challenges.map(c => `- ⚠️ ${c.challenge} → ✅ ${c.mitigation}`).join('\n')} - -**MVP Definition**: -${mvpDefinition} - -**Recommendation**: ${recommendation} -` : ''} - -${roundType === 'challenge' ? ` -#### Devil's Advocate Results - -**Challenges Raised**: -${challenges.map(c => ` -- 🔴 **${c.idea}**: ${c.objection} - - Counter: ${c.counter || 'No strong counter-argument'} - - Survivability: ${c.survivability}/5 -`).join('\n')} - -**Ideas That Survived**: -${survivedIdeas.map(i => `- ✅ ${i}`).join('\n')} - -**Eliminated/Parked**: -${eliminatedIdeas.map(i => `- ❌ ${i.title}: ${i.reason}`).join('\n')} -` : ''} - -${roundType === 'merge' ? ` -#### Merged Idea: ${mergedIdea.title} - -**Source Ideas Combined**: -${sourceIdeas.map(i => `- ${i}`).join('\n')} - -**Unified Concept**: -${mergedIdea.description} - -**Key Elements Preserved**: -${preservedElements.map(e => `- ✅ ${e}`).join('\n')} - -**Tradeoffs Accepted**: -${tradeoffs.map(t => `- ⚖️ ${t}`).join('\n')} -` : ''} +${roundContent} #### Updated Idea Ranking - ${updatedRanking.map((idea, i) => ` ${i+1}. 
**${idea.title}** ${idea.status} - Score: ${idea.score}/10 @@ -643,38 +640,37 @@ const synthesis = { topic: "$TOPIC", completed: getUtc8ISOString(), total_rounds: roundNumber, - - // Top ideas with full details - top_ideas: ideas.filter(i => i.status === 'active').sort((a,b) => b.score - a.score).slice(0, 5).map(idea => ({ - title: idea.title, - description: idea.description, - source_perspective: idea.source, - score: idea.score, - novelty: idea.novelty, - feasibility: idea.feasibility, - key_strengths: idea.strengths, - main_challenges: idea.challenges, - next_steps: idea.nextSteps - })), - - // Parked ideas for future reference - parked_ideas: ideas.filter(i => i.status === 'parked').map(idea => ({ - title: idea.title, - reason_parked: idea.parkReason, - potential_future_trigger: idea.futureTrigger - })), - - // Key insights from the process + + top_ideas: ideas.filter(i => i.status === 'active') + .sort((a,b) => b.score - a.score) + .slice(0, 5) + .map(idea => ({ + title: idea.title, + description: idea.description, + source_perspective: idea.source, + score: idea.score, + novelty: idea.novelty, + feasibility: idea.feasibility, + key_strengths: idea.strengths, + main_challenges: idea.challenges, + next_steps: idea.nextSteps + })), + + parked_ideas: ideas.filter(i => i.status === 'parked') + .map(idea => ({ + title: idea.title, + reason_parked: idea.parkReason, + potential_future_trigger: idea.futureTrigger + })), + key_insights: keyInsights, - - // Recommendations + recommendations: { primary: primaryRecommendation, alternatives: alternativeApproaches, not_recommended: notRecommended }, - - // Follow-up suggestions + follow_up: [ { type: 'implementation', summary: '...' }, { type: 'research', summary: '...' }, @@ -693,11 +689,9 @@ Write(synthesisPath, JSON.stringify(synthesis, null, 2)) ## Synthesis & Conclusions (${timestamp}) ### Executive Summary - ${executiveSummary} ### Top Ideas (Final Ranking) - ${topIdeas.map((idea, i) => ` #### ${i+1}. ${idea.title} ⭐ Score: ${idea.score}/10 @@ -716,18 +710,11 @@ ${idea.nextSteps.map((s, j) => `${j+1}. ${s}`).join('\n')} `).join('\n')} ### Primary Recommendation - > ${primaryRecommendation} **Rationale**: ${primaryRationale} -**Quick Start Path**: -1. ${step1} -2. ${step2} -3. ${step3} - ### Alternative Approaches - ${alternatives.map((alt, i) => ` ${i+1}. **${alt.title}** - When to consider: ${alt.whenToConsider} @@ -735,7 +722,6 @@ ${i+1}. 
**${alt.title}** `).join('\n')} ### Ideas Parked for Future - ${parkedIdeas.map(idea => ` - **${idea.title}** (Parked: ${idea.reason}) - Revisit when: ${idea.futureTrigger} @@ -746,34 +732,11 @@ ${parkedIdeas.map(idea => ` ## Key Insights ### Process Discoveries - ${processDiscoveries.map(d => `- 💡 ${d}`).join('\n')} ### Assumptions Challenged - ${challengedAssumptions.map(a => `- ~~${a.original}~~ → ${a.updated}`).join('\n')} -### Unexpected Connections - -${unexpectedConnections.map(c => `- 🔗 ${c}`).join('\n')} - ---- - -## Current Understanding (Final) - -### Problem Reframed - -${reframedProblem} - -### Solution Space Mapped - -${solutionSpaceMap} - -### Decision Framework - -When to choose each approach: -${decisionFramework} - --- ## Session Statistics @@ -781,188 +744,65 @@ ${decisionFramework} - **Total Rounds**: ${totalRounds} - **Ideas Generated**: ${totalIdeas} - **Ideas Survived**: ${survivedIdeas} -- **Perspectives Used**: Creative, Pragmatic, Systematic -- **Duration**: ${duration} +- **Perspectives Used**: Creative → Pragmatic → Systematic (serial) - **Artifacts**: brainstorm.md, perspectives.json, synthesis.json, ${ideaFiles.length} idea deep-dives ``` #### Step 4.3: Post-Completion Options Offer follow-up options: -- Create Implementation Plan: Convert best idea to implementation plan -- Create Issue: Turn ideas into trackable issues -- Deep Analysis: Run detailed technical analysis on an idea -- Export Report: Generate shareable report -- Complete: No further action needed +- Create Implementation Plan +- Create Issue +- Deep Analysis +- Export Report +- Complete --- -## Session Folder Structure +## Configuration -``` -.workflow/.brainstorm/BS-{slug}-{date}/ -├── brainstorm.md # Complete thought evolution -├── perspectives.json # Multi-perspective analysis findings -├── synthesis.json # Final synthesis -└── ideas/ # Individual idea deep-dives - ├── idea-1.md - ├── idea-2.md - └── merged-idea-1.md -``` +### Brainstorm Dimensions -## Brainstorm Document Template +| Dimension | Keywords | +|-----------|----------| +| technical | 技术, technical, implementation, code, 实现 | +| ux | 用户, user, experience, UX, UI, 体验 | +| business | 业务, business, value, ROI, 价值 | +| innovation | 创新, innovation, novel, creative, 新颖 | +| feasibility | 可行, feasible, practical, realistic, 实际 | +| scalability | 扩展, scale, growth, performance, 性能 | +| security | 安全, security, risk, protection, 风险 | -```markdown -# Brainstorm Session +### Perspective Configuration -**Session ID**: BS-xxx-2025-01-28 -**Topic**: [idea or topic] -**Started**: 2025-01-28T10:00:00+08:00 -**Dimensions**: [technical, ux, innovation, ...] +| Perspective | CLI Tool | Focus | Execution Order | +|-------------|----------|-------|-----------------| +| Creative | Gemini | Innovation, cross-domain | 1st (baseline) | +| Pragmatic | Codex | Implementation, feasibility | 2nd (builds on Creative) | +| Systematic | Claude | Architecture, structure | 3rd (integrates both) | + +### Serial Execution Benefits + +1. **Context building**: Each perspective builds on previous findings +2. **No race conditions**: Deterministic output order +3. **Better synthesis**: Later perspectives can reference earlier ones +4. 
**Simpler error handling**: Single failure point at a time --- -## Initial Context - -**Focus Areas**: [selected focus areas] -**Depth**: [quick|balanced|deep] -**Constraints**: [if any] - ---- - -## Seed Expansion - -### Original Idea -> [the initial idea] - -### Exploration Vectors -[generated questions and directions] - ---- - -## Thought Evolution Timeline - -### Round 1 - Seed Understanding -... - -### Round 2 - Multi-Perspective Exploration - -#### Creative Perspective -... - -#### Pragmatic Perspective -... - -#### Systematic Perspective -... - -#### Perspective Synthesis -... - -### Round 3 - Deep Dive -... - -### Round 4 - Challenge -... - ---- - -## Synthesis & Conclusions - -### Executive Summary -... - -### Top Ideas (Final Ranking) -... - -### Primary Recommendation -... - ---- - -## Key Insights -... - ---- - -## Current Understanding (Final) -... - ---- - -## Session Statistics -... -``` - -## Multi-Perspective Analysis Strategy - -### Perspective Roles - -| Perspective | Focus | Best For | -|-------------|-------|----------| -| Creative | Innovation, cross-domain | Generating novel ideas | -| Pragmatic | Implementation, feasibility | Reality-checking ideas | -| Systematic | Architecture, structure | Organizing solutions | - -### Analysis Patterns - -1. **Parallel Divergence**: All perspectives explore simultaneously from different angles -2. **Sequential Deep-Dive**: One perspective expands, others critique/refine -3. **Debate Mode**: Perspectives argue for/against specific approaches -4. **Synthesis Mode**: Combine insights from all perspectives - -### When to Use Each Pattern - -- **New topic**: Parallel Divergence → get diverse initial ideas -- **Promising idea**: Sequential Deep-Dive → thorough exploration -- **Controversial approach**: Debate Mode → uncover hidden issues -- **Ready to decide**: Synthesis Mode → create actionable conclusion - -## Consolidation Rules - -When updating "Current Understanding": - -1. **Promote confirmed insights**: Move validated findings to "What We Established" -2. **Track corrections**: Keep important wrong→right transformations -3. **Focus on current state**: What do we know NOW -4. **Avoid timeline repetition**: Don't copy discussion details -5. **Preserve key learnings**: Keep insights valuable for future reference - -**Bad (cluttered)**: -```markdown -## Current Understanding - -In round 1 we discussed X, then in round 2 user said Y, and we explored Z... -``` - -**Good (consolidated)**: -```markdown -## Current Understanding - -### Problem Reframed -The core challenge is not X but actually Y because... 
- -### Solution Space Mapped -Three viable approaches emerged: A (creative), B (pragmatic), C (hybrid) - -### Decision Framework -- Choose A when: innovation is priority -- Choose B when: time-to-market matters -- Choose C when: balanced approach needed -``` - ## Error Handling | Situation | Action | |-----------|--------| -| Analysis timeout | Retry with focused scope, or continue without that perspective | -| No good ideas | Reframe the problem, adjust constraints, try different angles | -| User disengaged | Summarize progress, offer break point with resume option | -| Perspectives conflict | Present as tradeoff, let user decide direction | +| CLI timeout | Retry with shorter prompt, or skip perspective | +| No good ideas | Reframe the problem, adjust constraints | +| User disengaged | Summarize progress, offer break point | +| Perspectives conflict | Present as tradeoff, let user decide | | Max rounds reached | Force synthesis, highlight unresolved questions | -| All ideas fail challenge | Return to divergent phase with new constraints | | Session folder conflict | Append timestamp suffix | +--- + ## Iteration Flow ``` @@ -970,9 +810,9 @@ First Call (TOPIC="topic"): ├─ No session exists → New mode ├─ Identify brainstorm dimensions ├─ Scope with user - ├─ Create brainstorm.md with initial understanding + ├─ Create brainstorm.md ├─ Expand seed into exploration vectors - ├─ Launch multi-perspective exploration + ├─ Serial CLI exploration (Creative → Pragmatic → Systematic) └─ Enter refinement loop Continue Call (TOPIC="topic"): @@ -985,7 +825,7 @@ Refinement Loop: ├─ Present current findings and top ideas ├─ Gather user feedback ├─ Process response: - │ ├─ Deep dive → Explore selected ideas in depth + │ ├─ Deep dive → Explore selected ideas │ ├─ Diverge → Generate more ideas │ ├─ Challenge → Devil's advocate testing │ ├─ Merge → Combine multiple ideas diff --git a/.codex/prompts/collaborative-plan-with-file.md b/.codex/prompts/collaborative-plan-with-file.md new file mode 100644 index 00000000..551740ed --- /dev/null +++ b/.codex/prompts/collaborative-plan-with-file.md @@ -0,0 +1,549 @@ +--- +description: Serial collaborative planning with Plan Note - Single-agent sequential task generation, unified plan-note.md, conflict detection. Codex-optimized. +argument-hint: "TASK=\"\" [--max-domains=5] [--focus=]" +--- + +# Codex Collaborative-Plan-With-File Prompt + +## Overview + +Serial collaborative planning workflow using **Plan Note** architecture: + +1. **Understanding**: Analyze requirements and identify 2-5 sub-domains +2. **Sequential Planning**: Process each sub-domain sequentially, generating plan.json + updating plan-note.md +3. **Conflict Detection**: Scan plan-note.md for conflicts +4. **Completion**: Generate executable plan.md summary + +**Note**: Codex does not support parallel agent execution. All domains processed serially. 
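+
+A minimal sketch of the resulting serial flow (helper names such as `planDomainViaCLI`, `summarizeTasks`, `detectConflicts`, and `writePlanSummary` are illustrative; the concrete steps are defined phase by phase below):
+
+```javascript
+// Sketch only: plan sub-domains one at a time, then detect conflicts
+const domains = identifySubDomains("$TASK", { maxDomains: 5 })   // Phase 1
+for (const domain of domains) {
+  const plan = planDomainViaCLI(domain)                          // Phase 2 — ⏳ wait for completion
+  updatePlanNoteSection(planNotePath, `## 任务池 - ${domain.name}`, summarizeTasks(plan))
+}
+const conflicts = detectConflicts(planNotePath)                  // Phase 3
+writePlanSummary(planPath, domains, conflicts)                   // Phase 4
+```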
+ +## Target Task + +**$TASK** + +**Parameters**: +- `--max-domains`: Maximum sub-domains to identify (default: 5) +- `--focus`: Focus specific domain (optional) + +## Execution Process + +``` +Session Detection: + ├─ Check if planning session exists for task + ├─ EXISTS + plan-note.md exists → Continue mode + └─ NOT_FOUND → New session mode + +Phase 1: Understanding & Template Creation + ├─ Analyze task description (Glob/Grep/Bash) + ├─ Identify 2-5 sub-domains + ├─ Create plan-note.md template + └─ Generate requirement-analysis.json + +Phase 2: Sequential Sub-Domain Planning (Serial) + ├─ For each sub-domain (LOOP): + │ ├─ Gemini CLI: Generate detailed plan + │ ├─ Extract task summary + │ └─ Update plan-note.md section + └─ Complete all domains sequentially + +Phase 3: Conflict Detection + ├─ Parse plan-note.md + ├─ Extract all tasks from all sections + ├─ Detect file/dependency/strategy conflicts + └─ Update conflict markers in plan-note.md + +Phase 4: Completion + ├─ Generate conflicts.json + ├─ Generate plan.md summary + └─ Ready for execution + +Output: + ├─ .workflow/.planning/{slug}-{date}/plan-note.md (executable) + ├─ .workflow/.planning/{slug}-{date}/requirement-analysis.json (metadata) + ├─ .workflow/.planning/{slug}-{date}/conflicts.json (conflict report) + ├─ .workflow/.planning/{slug}-{date}/plan.md (human-readable) + └─ .workflow/.planning/{slug}-{date}/agents/{domain}/plan.json (detailed) +``` + +## Output Structure + +``` +.workflow/.planning/CPLAN-{slug}-{date}/ +├── plan-note.md # ⭐ Core: Requirements + Tasks + Conflicts +├── requirement-analysis.json # Phase 1: Sub-domain assignments +├── agents/ # Phase 2: Per-domain plans (serial) +│ ├── {domain-1}/ +│ │ └── plan.json # Detailed plan +│ ├── {domain-2}/ +│ │ └── plan.json +│ └── ... +├── conflicts.json # Phase 3: Conflict report +└── plan.md # Phase 4: Human-readable summary +``` + +--- + +## Implementation Details + +### Session Setup + +```javascript +const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString() + +const taskSlug = "$TASK".toLowerCase().replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-').substring(0, 30) +const dateStr = getUtc8ISOString().substring(0, 10) + +const sessionId = `CPLAN-${taskSlug}-${dateStr}` +const sessionFolder = `.workflow/.planning/${sessionId}` +const planNotePath = `${sessionFolder}/plan-note.md` +const requirementsPath = `${sessionFolder}/requirement-analysis.json` +const conflictsPath = `${sessionFolder}/conflicts.json` +const planPath = `${sessionFolder}/plan.md` + +// Auto-detect mode +const sessionExists = fs.existsSync(sessionFolder) +const hasPlanNote = sessionExists && fs.existsSync(planNotePath) +const mode = hasPlanNote ? 'continue' : 'new' + +if (!sessionExists) { + bash(`mkdir -p ${sessionFolder}/agents`) +} +``` + +--- + +### Phase 1: Understanding & Template Creation + +#### Step 1.1: Analyze Task Description + +Use built-in tools (no agent): + +```javascript +// 1. Extract task keywords +const taskKeywords = extractKeywords("$TASK") + +// 2. Identify sub-domains via analysis +// Example: "Implement real-time notification system" +// → Domains: [Backend API, Frontend UI, Notification Service, Data Storage, Testing] + +const subDomains = identifySubDomains("$TASK", { + maxDomains: 5, // --max-domains parameter + keywords: taskKeywords +}) + +// 3. 
Estimate scope +const complexity = assessComplexity("$TASK") +``` + +#### Step 1.2: Create plan-note.md Template + +Generate structured template: + +```markdown +--- +session_id: ${sessionId} +original_requirement: | + $TASK +created_at: ${getUtc8ISOString()} +complexity: ${complexity} +sub_domains: ${subDomains.map(d => d.name).join(', ')} +status: in_progress +--- + +# 协作规划 + +**Session ID**: ${sessionId} +**任务**: $TASK +**复杂度**: ${complexity} +**创建时间**: ${getUtc8ISOString()} + +--- + +## 需求理解 + +### 核心目标 +${extractObjectives("$TASK")} + +### 关键要点 +${extractKeyPoints("$TASK")} + +### 约束条件 +${extractConstraints("$TASK")} + +### 拆分策略 +${subDomains.length} 个子领域: +${subDomains.map((d, i) => `${i+1}. **${d.name}**: ${d.description}`).join('\n')} + +--- + +## 任务池 - ${subDomains[0].name} +*(TASK-001 ~ TASK-100)* + +*待由规划流程填充* + +--- + +## 任务池 - ${subDomains[1].name} +*(TASK-101 ~ TASK-200)* + +*待由规划流程填充* + +--- + +## 依赖关系 + +*所有子域规划完成后自动生成* + +--- + +## 冲突标记 + +*冲突检测阶段生成* + +--- + +## 上下文证据 - ${subDomains[0].name} + +*相关文件、现有模式、约束等* + +--- + +## 上下文证据 - ${subDomains[1].name} + +*相关文件、现有模式、约束等* + +--- +``` + +#### Step 1.3: Generate requirement-analysis.json + +```javascript +const requirements = { + session_id: sessionId, + original_requirement: "$TASK", + complexity: complexity, + sub_domains: subDomains.map((domain, index) => ({ + focus_area: domain.name, + description: domain.description, + task_id_range: [index * 100 + 1, (index + 1) * 100], + estimated_effort: domain.effort, + dependencies: domain.dependencies || [] + })), + total_domains: subDomains.length +} + +Write(requirementsPath, JSON.stringify(requirements, null, 2)) +``` + +--- + +### Phase 2: Sequential Sub-Domain Planning + +#### Step 2.1: Plan Each Domain Sequentially + +```javascript +for (let i = 0; i < subDomains.length; i++) { + const domain = subDomains[i] + const domainFolder = `${sessionFolder}/agents/${domain.slug}` + const domainPlanPath = `${domainFolder}/plan.json` + + console.log(`Planning Domain ${i+1}/${subDomains.length}: ${domain.name}`) + + // Execute Gemini CLI for this domain + // ⏳ Wait for completion before proceeding to next domain +} +``` + +#### Step 2.2: CLI Planning for Current Domain + +**CLI Call** (synchronous): +```bash +ccw cli -p " +PURPOSE: Generate detailed implementation plan for domain '${domain.name}' in task: $TASK +Success: Comprehensive task breakdown with clear dependencies and effort estimates + +DOMAIN CONTEXT: +- Focus Area: ${domain.name} +- Description: ${domain.description} +- Task ID Range: ${domain.task_id_range[0]}-${domain.task_id_range[1]} +- Related Domains: ${relatedDomains.join(', ')} + +PRIOR DOMAINS (if any): +${completedDomains.map(d => `- ${d.name}: ${completedTaskCount} tasks`).join('\n')} + +TASK: +• Analyze ${domain.name} in detail +• Identify all necessary tasks (use TASK-ID range: ${domain.task_id_range[0]}-${domain.task_id_range[1]}) +• Define task dependencies and order +• Estimate effort and complexity for each task +• Identify file modifications needed +• Assess conflict risks with other domains + +MODE: analysis + +CONTEXT: @**/* + +EXPECTED: +JSON output with: +- tasks[]: {id, title, description, complexity, depends_on[], files_to_modify[], conflict_risk} +- summary: Overview of domain plan +- interdependencies: Links to other domains +- total_effort: Estimated effort points + +OUTPUT FORMAT: Structured JSON +" --tool gemini --mode analysis +``` + +#### Step 2.3: Parse and Update plan-note.md + +After CLI completes for each domain: + +```javascript +// Parse 
CLI output +const planJson = parseCLIOutput(cliResult) + +// Save detailed plan +Write(domainPlanPath, JSON.stringify(planJson, null, 2)) + +// Extract task summary +const taskSummary = planJson.tasks.map((t, idx) => ` +### TASK-${t.id}: ${t.title} [${domain.slug}] + +**状态**: 规划中 +**复杂度**: ${t.complexity} +**依赖**: ${t.depends_on.length > 0 ? t.depends_on.map(d => `TASK-${d}`).join(', ') : 'None'} +**范围**: ${t.description} + +**修改点**: +${t.files_to_modify.map(f => `- \`${f.path}:${f.line_range}\`: ${f.summary}`).join('\n')} + +**冲突风险**: ${t.conflict_risk} +`).join('\n') + +// Update plan-note.md +updatePlanNoteSection( + planNotePath, + `## 任务池 - ${domain.name}`, + taskSummary +) + +// Extract evidence +const evidence = ` +**相关文件**: +${planJson.related_files.map(f => `- ${f.path}: ${f.relevance}`).join('\n')} + +**现有模式**: +${planJson.existing_patterns.map(p => `- ${p}`).join('\n')} + +**约束**: +${planJson.constraints.map(c => `- ${c}`).join('\n')} +` + +updatePlanNoteSection( + planNotePath, + `## 上下文证据 - ${domain.name}`, + evidence +) +``` + +#### Step 2.4: Process All Domains + +```javascript +const completedDomains = [] + +for (const domain of subDomains) { + // Step 2.2: CLI call (synchronous) + const cliResult = executeCLI(domain) + + // Step 2.3: Parse and update + updatePlanNoteFromCLI(domain, cliResult) + + completedDomains.push(domain) + console.log(`✅ Completed: ${domain.name}`) +} +``` + +--- + +### Phase 3: Conflict Detection + +#### Step 3.1: Parse plan-note.md + +```javascript +const planContent = Read(planNotePath) +const sections = parsePlanNoteSections(planContent) +const allTasks = [] + +// Extract tasks from all domains +for (const section of sections) { + if (section.heading.includes('任务池')) { + const tasks = extractTasks(section.content) + allTasks.push(...tasks) + } +} +``` + +#### Step 3.2: Detect Conflicts + +```javascript +const conflicts = [] + +// 1. File conflicts +const fileMap = new Map() +for (const task of allTasks) { + for (const file of task.files_to_modify) { + const key = `${file.path}:${file.line_range}` + if (!fileMap.has(key)) fileMap.set(key, []) + fileMap.get(key).push(task) + } +} + +for (const [location, tasks] of fileMap.entries()) { + if (tasks.length > 1) { + const agents = new Set(tasks.map(t => t.domain)) + if (agents.size > 1) { + conflicts.push({ + type: 'file_conflict', + severity: 'high', + location: location, + tasks_involved: tasks.map(t => t.id), + agents_involved: Array.from(agents), + description: `Multiple domains modifying: ${location}`, + suggested_resolution: 'Coordinate modification order' + }) + } + } +} + +// 2. 
Dependency cycles +const depGraph = buildDependencyGraph(allTasks) +const cycles = detectCycles(depGraph) +for (const cycle of cycles) { + conflicts.push({ + type: 'dependency_cycle', + severity: 'critical', + tasks_involved: cycle, + description: `Circular dependency: ${cycle.join(' → ')}`, + suggested_resolution: 'Remove or reorganize dependencies' + }) +} + +// Write conflicts.json +Write(conflictsPath, JSON.stringify({ + detected_at: getUtc8ISOString(), + total_conflicts: conflicts.length, + conflicts: conflicts +}, null, 2)) +``` + +#### Step 3.3: Update plan-note.md + +```javascript +const conflictMarkdown = generateConflictMarkdown(conflicts) + +updatePlanNoteSection( + planNotePath, + '## 冲突标记', + conflictMarkdown +) +``` + +--- + +### Phase 4: Completion + +#### Step 4.1: Generate plan.md + +```markdown +# 实现计划 + +**Session**: ${sessionId} +**任务**: $TASK +**创建**: ${getUtc8ISOString()} + +--- + +## 需求 + +${copySection(planNotePath, '## 需求理解')} + +--- + +## 子领域拆分 + +${subDomains.map((domain, i) => ` +### ${i+1}. ${domain.name} +- **描述**: ${domain.description} +- **任务范围**: TASK-${domain.task_id_range[0]} ~ TASK-${domain.task_id_range[1]} +- **预估工作量**: ${domain.effort} +`).join('\n')} + +--- + +## 任务概览 + +${allTasks.map(t => ` +### ${t.id}: ${t.title} +- **复杂度**: ${t.complexity} +- **依赖**: ${t.depends_on.length > 0 ? t.depends_on.join(', ') : 'None'} +- **文件**: ${t.files_to_modify.map(f => f.path).join(', ')} +`).join('\n')} + +--- + +## 冲突报告 + +${conflicts.length > 0 + ? `检测到 ${conflicts.length} 个冲突:\n${copySection(planNotePath, '## 冲突标记')}` + : '✅ 无冲突检测到'} + +--- + +## 执行指令 + +\`\`\`bash +/workflow:unified-execute-with-file ${planPath} +\`\`\` +``` + +#### Step 4.2: Write Summary + +```javascript +Write(planPath, planMarkdown) +``` + +--- + +## Configuration + +### Sub-Domain Identification + +Common domain patterns: +- Backend API: "服务", "后端", "API", "接口" +- Frontend: "界面", "前端", "UI", "视图" +- Database: "数据", "存储", "数据库", "持久化" +- Testing: "测试", "验证", "QA" +- Infrastructure: "部署", "基础", "运维", "配置" + +--- + +## Error Handling + +| Error | Resolution | +|-------|------------| +| CLI timeout | Retry with shorter prompt | +| No tasks generated | Review domain description, retry | +| Section not found | Recreate section in plan-note.md | +| Conflict detection fails | Continue with empty conflicts | + +--- + +## Best Practices + +1. **Clear Task Description**: Detailed requirements → better sub-domains +2. **Review plan-note.md**: Check before moving to next phase +3. **Resolve Conflicts**: Address before execution +4. **Inspect Details**: Review agents/{domain}/plan.json for specifics + +--- + +**Now execute collaborative-plan-with-file for**: $TASK diff --git a/.codex/prompts/unified-execute-with-file.md b/.codex/prompts/unified-execute-with-file.md index 353d871c..2fd6bb2d 100644 --- a/.codex/prompts/unified-execute-with-file.md +++ b/.codex/prompts/unified-execute-with-file.md @@ -1,722 +1,507 @@ --- -description: Universal execution engine consuming planning/brainstorm/analysis output. Coordinates multi-agents, manages dependencies, and tracks execution with unified progress logging. -argument-hint: "PLAN_PATH=\"\" [EXECUTION_MODE=\"sequential|parallel\"] [AUTO_CONFIRM=\"yes|no\"] [EXECUTION_CONTEXT=\"\"]" +description: Universal execution engine for consuming planning/brainstorm/analysis output. Serial task execution with progress tracking. Codex-optimized. 
+argument-hint: "PLAN=\"\" [--auto-commit] [--dry-run]" --- # Codex Unified-Execute-With-File Prompt ## Overview -Universal execution engine that consumes **any** planning/brainstorm/analysis output and executes it with minimal progress tracking. Coordinates multiple agents (code-developer, test-fix-agent, doc-generator, cli-execution-agent), handles dependencies intelligently, and maintains unified execution timeline. +Universal execution engine consuming **any** planning output and executing tasks serially with progress tracking. -**Core workflow**: Load Plan → Parse Tasks → Validate Dependencies → Execute Waves → Track Progress → Report Results +**Core workflow**: Load Plan → Parse Tasks → Execute Sequentially → Track Progress → Verify -**Key features**: -- **Plan Format Agnostic**: Consumes IMPL_PLAN.md, brainstorm synthesis.json, analysis conclusions.json, debug resolutions -- **execution-events.md**: Single source of truth - unified execution log with full agent history -- **Multi-Agent Orchestration**: Parallel execution where possible, sequential where needed -- **Incremental Execution**: Resume from failure point, no re-execution of completed tasks -- **Dependency Management**: Automatic topological sort and execution wave grouping -- **Knowledge Chain**: Each agent reads all previous execution history in context +## Target Plan -## Target Execution Plan +**$PLAN** -**Plan Source**: $PLAN_PATH - -- `EXECUTION_MODE`: Strategy (sequential|parallel) -- `AUTO_CONFIRM`: Skip confirmations (yes|no) -- `EXECUTION_CONTEXT`: Focus area/module (optional) +**Parameters**: +- `--auto-commit`: Auto-commit after each task (conventional commits) +- `--dry-run`: Simulate execution without making changes ## Execution Process ``` -Session Detection: - ├─ Check if execution session exists - ├─ If exists → Resume mode - └─ If not → New session mode +Session Initialization: + ├─ Detect or load plan file + ├─ Parse tasks from plan (JSON, Markdown, or other formats) + ├─ Build task dependency graph + └─ Validate for cycles and feasibility -Phase 1: Plan Loading & Validation - ├─ Detect and parse plan file (multiple formats supported) - ├─ Extract and normalize tasks - ├─ Validate dependencies (detect cycles) - ├─ Create execution session folder - ├─ Initialize execution.md and execution-events.md - └─ Pre-execution validation +Pre-Execution: + ├─ Analyze plan structure + ├─ Identify modification targets (files) + ├─ Check file conflicts and feasibility + └─ Generate execution strategy -Phase 2: Execution Orchestration - ├─ Topological sort for execution order - ├─ Group tasks into execution waves (parallel-safe groups) - ├─ Execute waves sequentially (tasks within wave execute in parallel) - ├─ Monitor completion and capture artifacts - ├─ Update progress in execution.md and execution-events.md - └─ Handle failures with retry/skip/abort logic +Serial Execution (Task by Task): + ├─ For each task: + │ ├─ Extract task context + │ ├─ Load previous task outputs + │ ├─ Route to Codex CLI for execution + │ ├─ Track progress in execution.md + │ ├─ Auto-commit if enabled + │ └─ Next task + └─ Complete all tasks -Phase 3: Progress Tracking & Unified Event Logging - ├─ execution-events.md: Append-only unified log (SINGLE SOURCE OF TRUTH) - ├─ Each agent reads all previous events at start - ├─ Agent executes task with full context from previous agents - ├─ Agent appends execution event (success/failure) with artifacts and notes - └─ Next agent reads complete history → knowledge chain +Post-Execution: + ├─ Generate 
execution summary + ├─ Record completion status + ├─ Identify any failures + └─ Suggest next steps -Phase 4: Completion & Summary - ├─ Collect execution statistics - ├─ Update execution.md with final status - ├─ execution-events.md contains complete execution record - └─ Report results and offer follow-up options +Output: + ├─ .workflow/.execution/{session-id}/execution.md (overview + timeline) + ├─ .workflow/.execution/{session-id}/execution-events.md (detailed log) + └─ Git commits (if --auto-commit enabled) ``` +## Output Structure + +``` +.workflow/.execution/EXEC-{slug}-{date}/ +├── execution.md # Plan overview + task table + timeline +└── execution-events.md # ⭐ Unified log (all executions) - SINGLE SOURCE OF TRUTH +``` + +--- + ## Implementation Details -### Session Setup & Plan Detection +### Session Setup ```javascript const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString() -// Plan detection from $PLAN_PATH -let planPath = "$PLAN_PATH" - -// If not provided, auto-detect -if (!planPath || planPath === "") { +// Resolve plan path +let planPath = "$PLAN" +if (!fs.existsSync(planPath)) { + // Auto-detect from common locations const candidates = [ - '.workflow/.plan/IMPL_PLAN.md', - '.workflow/plans/IMPL_PLAN.md', '.workflow/IMPL_PLAN.md', - '.workflow/brainstorm/*/synthesis.json', - '.workflow/analyze/*/conclusions.json' + '.workflow/.planning/*/plan-note.md', + '.workflow/.brainstorm/*/synthesis.json', + '.workflow/.analysis/*/conclusions.json' ] - - // Find most recent plan - planPath = findMostRecentFile(candidates) - - if (!planPath) { - throw new Error("No execution plan found. Provide PLAN_PATH or ensure .workflow/IMPL_PLAN.md exists") - } + planPath = autoDetectPlan(candidates) } -// Session setup -const executionMode = "$EXECUTION_MODE" || "parallel" -const autoConfirm = "$AUTO_CONFIRM" === "yes" -const executionContext = "$EXECUTION_CONTEXT" || "" +// Create session +const planSlug = path.basename(planPath).replace(/[^a-z0-9-]/g, '').substring(0, 30) +const dateStr = getUtc8ISOString().substring(0, 10) +const randomId = Math.random().toString(36).substring(7) +const sessionId = `EXEC-${planSlug}-${dateStr}-${randomId}` -const planContent = Read(planPath) -const plan = parsePlan(planContent, planPath) +const sessionFolder = `.workflow/.execution/${sessionId}` +const executionPath = `${sessionFolder}/execution.md` +const eventsPath = `${sessionFolder}/execution-events.md` -const executionId = `EXEC-${plan.slug}-${getUtc8ISOString().substring(0, 10)}-${randomId(4)}` -const executionFolder = `.workflow/.execution/${executionId}` -const executionPath = `${executionFolder}/execution.md` -const eventLogPath = `${executionFolder}/execution-events.md` - -bash(`mkdir -p "${executionFolder}"`) +bash(`mkdir -p ${sessionFolder}`) ``` --- -### Plan Format Parsers +### Phase 1: Plan Detection & Parsing -Support multiple plan sources: +#### Step 1.1: Load Plan File ```javascript -function parsePlan(content, filePath) { - const ext = filePath.split('.').pop() +// Detect plan format and parse +let tasks = [] - if (filePath.includes('IMPL_PLAN')) { - return parseImplPlan(content) - } else if (filePath.includes('brainstorm') && filePath.includes('synthesis')) { - return parseSynthesisPlan(content) - } else if (filePath.includes('analyze') && filePath.includes('conclusions')) { - return parseConclusionsPlan(content) - } else if (filePath.includes('debug') && filePath.includes('recommendations')) { - return parseDebugResolutionPlan(content) - } else if (ext === 
'json' && content.includes('tasks')) { - return parseTaskJson(content) - } - - throw new Error(`Unsupported plan format: ${filePath}`) -} - -// IMPL_PLAN.md parser -function parseImplPlan(content) { - return { - type: 'impl-plan', - title: extractSection(content, 'Overview'), - phases: extractPhases(content), - tasks: extractTasks(content), - criticalFiles: extractCriticalFiles(content), - estimatedDuration: extractEstimate(content) - } -} - -// Brainstorm synthesis.json parser -function parseSynthesisPlan(content) { - const synthesis = JSON.parse(content) - return { - type: 'brainstorm-synthesis', - title: synthesis.topic, - ideas: synthesis.top_ideas, - tasks: synthesis.top_ideas.map(idea => ({ - id: `IDEA-${slugify(idea.title)}`, - type: 'investigation', - title: idea.title, - description: idea.description, - dependencies: [], - agent_type: 'universal-executor', - prompt: `Implement: ${idea.title}\n${idea.description}`, - expected_output: idea.next_steps - })) - } +if (planPath.endsWith('.json')) { + // JSON plan (from lite-plan, collaborative-plan, etc.) + const planJson = JSON.parse(Read(planPath)) + tasks = parsePlanJson(planJson) +} else if (planPath.endsWith('.md')) { + // Markdown plan (IMPL_PLAN.md, plan-note.md, etc.) + const planMd = Read(planPath) + tasks = parsePlanMarkdown(planMd) +} else if (planPath.endsWith('synthesis.json')) { + // Brainstorm synthesis + const synthesis = JSON.parse(Read(planPath)) + tasks = convertSynthesisToTasks(synthesis) +} else if (planPath.endsWith('conclusions.json')) { + // Analysis conclusions + const conclusions = JSON.parse(Read(planPath)) + tasks = convertConclusionsToTasks(conclusions) +} else { + throw new Error(`Unsupported plan format: ${planPath}`) } ``` ---- - -## Phase 1: Plan Loading & Validation - -### Step 1.1: Parse Plan and Extract Tasks +#### Step 1.2: Build Execution Order ```javascript -const tasks = plan.tasks || parseTasksFromContent(plan) +// Handle task dependencies +const depGraph = buildDependencyGraph(tasks) -// Normalize task structure -const normalizedTasks = tasks.map(task => ({ - id: task.id || `TASK-${generateId()}`, - title: task.title || task.content, - description: task.description || task.activeForm, - type: task.type || inferTaskType(task), // 'code', 'test', 'doc', 'analysis', 'integration' - agent_type: task.agent_type || selectBestAgent(task), - dependencies: task.dependencies || [], +// Validate: no cycles +validateNoCycles(depGraph) - // Execution parameters - prompt: task.prompt || task.description, - files_to_modify: task.files_to_modify || [], - expected_output: task.expected_output || [], +// Calculate execution order (simple topological sort) +const executionOrder = topologicalSort(depGraph, tasks) - // Metadata - priority: task.priority || 'normal', - parallel_safe: task.parallel_safe !== false, - - // Status tracking - status: 'pending', - attempts: 0, - max_retries: 2 -})) - -// Validate and detect issues -const validation = { - cycles: detectDependencyCycles(normalizedTasks), - missing_dependencies: findMissingDependencies(normalizedTasks), - file_conflicts: detectOutputConflicts(normalizedTasks), - warnings: [] -} - -if (validation.cycles.length > 0) { - throw new Error(`Circular dependencies detected: ${validation.cycles.join(', ')}`) -} +// In Codex: serial execution, no parallel waves +console.log(`Total tasks: ${tasks.length}`) +console.log(`Execution order: ${executionOrder.map(t => t.id).join(' → ')}`) ``` -### Step 1.2: Create execution.md +#### Step 1.3: Generate execution.md 
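+
+A small sketch of how this step can persist the overview (using the same `Write` helper as the later phases; `renderExecutionOverview` is an assumed name for the string interpolation shown in this step):
+
+```javascript
+// Sketch: render the overview template and write execution.md before any task runs
+const overviewMd = renderExecutionOverview({ sessionId, planPath, tasks, dryRun, autoCommit })
+Write(executionPath, overviewMd)
+```
+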
-```javascript -const executionMarkdown = `# Execution Progress +```markdown +# 执行计划 -**Execution ID**: ${executionId} +**Session**: ${sessionId} **Plan Source**: ${planPath} **Started**: ${getUtc8ISOString()} -**Mode**: ${executionMode} - -**Plan Summary**: -- Title: ${plan.title} -- Total Tasks: ${normalizedTasks.length} -- Phases: ${plan.phases?.length || 'N/A'} --- -## Execution Plan +## 计划概览 -### Task Overview - -| Task ID | Title | Type | Agent | Dependencies | Status | -|---------|-------|------|-------|--------------|--------| -${normalizedTasks.map(t => `| ${t.id} | ${t.title} | ${t.type} | ${t.agent_type} | ${t.dependencies.join(',')} | ${t.status} |`).join('\n')} - -### Dependency Graph - -\`\`\` -${generateDependencyGraph(normalizedTasks)} -\`\`\` - -### Execution Strategy - -- **Mode**: ${executionMode} -- **Parallelization**: ${calculateParallel(normalizedTasks)} -- **Estimated Duration**: ${estimateTotalDuration(normalizedTasks)} +| 字段 | 值 | +|------|-----| +| 总任务数 | ${tasks.length} | +| 计划来源 | ${planPath} | +| 执行模式 | ${dryRun ? '模拟' : '实际'} | +| 自动提交 | ${autoCommit ? '启用' : '禁用'} | --- -## Execution Timeline +## 任务列表 -*Updates as execution progresses* +| ID | 标题 | 复杂度 | 依赖 | 状态 | +|----|------|--------|-------|-------| +${tasks.map(t => `| ${t.id} | ${t.title} | ${t.complexity || 'medium'} | ${t.depends_on?.join(',') || '-'} | ⏳ |`).join('\n')} --- -## Current Status +## 执行时间线 -${executionStatus()} -` +*(更新于 execution-events.md)* -Write(executionPath, executionMarkdown) +--- ``` -### Step 1.3: Pre-Execution Confirmation +--- + +### Phase 2: Pre-Execution Analysis + +#### Step 2.1: Feasibility Check ```javascript -if (!autoConfirm) { - AskUserQuestion({ - questions: [{ - question: `准备执行 ${normalizedTasks.length} 个任务,模式: ${executionMode}\n\n关键任务:\n${normalizedTasks.slice(0, 3).map(t => `• ${t.id}: ${t.title}`).join('\n')}\n\n继续?`, - header: "Confirmation", - multiSelect: false, - options: [ - { label: "开始执行", description: "按计划执行" }, - { label: "调整参数", description: "修改执行参数" }, - { label: "查看详情", description: "查看完整任务列表" }, - { label: "取消", description: "退出不执行" } - ] - }] - }) +const issues = [] + +// Check file conflicts +const fileMap = new Map() +for (const task of tasks) { + for (const file of task.files_to_modify || []) { + if (!fileMap.has(file)) fileMap.set(file, []) + fileMap.get(file).push(task.id) + } } -``` ---- - -## Phase 2: Execution Orchestration - -### Step 2.1: Determine Execution Order - -```javascript -// Topological sort for execution order -const executionOrder = topologicalSort(normalizedTasks) - -// For parallel mode, group tasks into waves -let executionWaves = [] -if (executionMode === 'parallel') { - executionWaves = groupIntoWaves(executionOrder, parallelLimit = 3) -} else { - executionWaves = executionOrder.map(task => [task]) +for (const [file, taskIds] of fileMap.entries()) { + if (taskIds.length > 1) { + // Sequential modification of same file + console.log(`⚠️ Sequential modification: ${file} (${taskIds.join(' → ')})`) + } } -``` -### Step 2.2: Execute Task Waves - -```javascript -let completedCount = 0 -let failedCount = 0 -const results = {} - -for (let waveIndex = 0; waveIndex < executionWaves.length; waveIndex++) { - const wave = executionWaves[waveIndex] - - console.log(`\n=== Wave ${waveIndex + 1}/${executionWaves.length} ===`) - console.log(`Tasks: ${wave.map(t => t.id).join(', ')}`) - - // Launch tasks in parallel - const taskPromises = wave.map(task => executeTask(task, executionFolder)) - - // Wait for wave completion - const waveResults 
= await Promise.allSettled(taskPromises) - - // Process results - for (let i = 0; i < waveResults.length; i++) { - const result = waveResults[i] - const task = wave[i] - - if (result.status === 'fulfilled') { - results[task.id] = result.value - if (result.value.success) { - completedCount++ - task.status = 'completed' - console.log(`✅ ${task.id}: Completed`) - } else if (result.value.retry) { - console.log(`⚠️ ${task.id}: Will retry`) - task.status = 'pending' - } else { - console.log(`❌ ${task.id}: Failed`) - } - } else { - console.log(`❌ ${task.id}: Execution error`) +// Check missing dependencies +for (const task of tasks) { + for (const depId of task.depends_on || []) { + if (!tasks.find(t => t.id === depId)) { + issues.push(`Task ${task.id} depends on missing ${depId}`) } } +} - // Update execution.md summary - appendExecutionTimeline(executionPath, waveIndex + 1, wave, waveResults) +if (issues.length > 0) { + console.log(`⚠️ Issues found:\n${issues.map(i => `- ${i}`).join('\n')}`) } ``` -### Step 2.3: Execute Individual Task with Unified Event Logging +--- + +### Phase 3: Serial Task Execution + +#### Step 3.1: Execute Tasks Sequentially ```javascript -async function executeTask(task, executionFolder) { - const eventLogPath = `${executionFolder}/execution-events.md` - const startTime = Date.now() +const executionLog = [] +const taskResults = new Map() + +for (const task of executionOrder) { + console.log(`\n📋 Executing: ${task.id} - ${task.title}`) + + const eventRecord = { + timestamp: getUtc8ISOString(), + task_id: task.id, + task_title: task.title, + status: 'in_progress', + notes: [] + } try { - // Read previous execution events for context - let previousEvents = '' - if (fs.existsSync(eventLogPath)) { - previousEvents = Read(eventLogPath) + // Load context from previous tasks + const priorOutputs = executionOrder + .slice(0, executionOrder.indexOf(task)) + .map(t => taskResults.get(t.id)) + .filter(Boolean) + + const context = { + task: task, + prior_outputs: priorOutputs, + plan_source: planPath } - // Select agent based on task type - const agent = selectAgent(task.agent_type) - - // Build execution context including previous agent outputs - const executionContext = ` -## Previous Agent Executions (for reference) - -${previousEvents} - ---- - -## Current Task: ${task.id} - -**Title**: ${task.title} -**Agent**: ${agent} -**Time**: ${getUtc8ISOString()} - -### Description -${task.description} - -### Context -- Modified Files: ${task.files_to_modify.join(', ')} -- Expected Output: ${task.expected_output.join(', ')} - -### Requirements -${task.requirements || 'Follow the plan'} - -### Constraints -${task.constraints || 'No breaking changes'} -` - - // Execute based on agent type - let result - - if (agent === 'code-developer' || agent === 'tdd-developer') { - result = await Task({ - subagent_type: agent, - description: `Execute: ${task.title}`, - prompt: executionContext, - run_in_background: false - }) - } else if (agent === 'test-fix-agent') { - result = await Task({ - subagent_type: 'test-fix-agent', - description: `Execute Tests: ${task.title}`, - prompt: executionContext, - run_in_background: false - }) + // Execute task via Codex CLI + if (dryRun) { + console.log(`[DRY RUN] ${task.id}`) + eventRecord.status = 'completed' + eventRecord.notes.push('Dry run - no changes made') } else { - result = await Task({ - subagent_type: 'universal-executor', - description: task.title, - prompt: executionContext, - run_in_background: false - }) + await executeTaskViaCLI(task, context) + 
eventRecord.status = 'completed' + + // Auto-commit if enabled + if (autoCommit) { + commitTask(task) + eventRecord.notes.push(`✅ Committed: ${task.id}`) + } } - // Capture artifacts - const artifacts = captureArtifacts(task, executionFolder) - - // Append to unified execution events log - const eventEntry = ` -## Task ${task.id} - COMPLETED ✅ - -**Timestamp**: ${getUtc8ISOString()} -**Duration**: ${calculateDuration(startTime)}ms -**Agent**: ${agent} - -### Execution Summary - -${generateSummary(result)} - -### Key Outputs - -${formatOutputs(result)} - -### Generated Artifacts - -${artifacts.map(a => `- **${a.type}**: \`${a.path}\` (${a.size})`).join('\n')} - -### Notes for Next Agent - -${generateNotesForNextAgent(result, task)} - ---- -` - - appendToEventLog(eventLogPath, eventEntry) - - return { - success: true, - task_id: task.id, - output: result, - artifacts: artifacts, - duration: calculateDuration(startTime) - } } catch (error) { - // Append failure event to unified log - const failureEntry = ` -## Task ${task.id} - FAILED ❌ + eventRecord.status = 'failed' + eventRecord.error = error.message + eventRecord.notes.push(`❌ Error: ${error.message}`) + console.log(`❌ Failed: ${task.id}`) + } -**Timestamp**: ${getUtc8ISOString()} -**Duration**: ${calculateDuration(startTime)}ms -**Agent**: ${agent} -**Error**: ${error.message} + executionLog.push(eventRecord) + updateExecutionEvents(eventsPath, executionLog) + updateExecutionMd(executionPath, task, eventRecord) +} +``` -### Error Details +#### Step 3.2: Execute Task via CLI -\`\`\` -${error.stack} -\`\`\` +**CLI Call** (synchronous): +```bash +ccw cli -p " +PURPOSE: Execute task '${task.id}: ${task.title}' from plan +Success: Task completed as specified in plan -### Recovery Notes for Next Attempt +TASK DETAILS: +- ID: ${task.id} +- Title: ${task.title} +- Description: ${task.description} +- Complexity: ${task.complexity} +- Estimated Effort: ${task.effort} -${generateRecoveryNotes(error, task)} +REQUIRED CHANGES: +${task.files_to_modify?.map(f => `- \`${f.path}\`: ${f.summary}`).join('\n')} + +PRIOR CONTEXT: +${priorOutputs.map(p => `- ${p.task_id}: ${p.notes.join('; ')}`).join('\n')} + +TASK ACTIONS: +${task.actions?.map((a, i) => `${i+1}. ${a}`).join('\n')} + +MODE: write + +CONTEXT: @**/* | Plan Source: ${planPath} | Task: ${task.id} + +EXPECTED: +- Modifications implemented as specified +- Code follows project conventions +- No test failures introduced +- All required files updated + +CONSTRAINTS: Exactly as specified in plan | No additional scope +" --tool codex --mode write +``` + +#### Step 3.3: Track Progress + +```javascript +function updateExecutionEvents(eventsPath, log) { + const eventsMd = `# 执行日志 + +**Session**: ${sessionId} +**更新**: ${getUtc8ISOString()} --- + +## 事件时间线 + +${log.map((e, i) => ` +### 事件 ${i+1}: ${e.task_id} + +**时间**: ${e.timestamp} +**任务**: ${e.task_title} +**状态**: ${e.status === 'completed' ? '✅' : e.status === 'failed' ? '❌' : '⏳'} + +**笔记**: +${e.notes.map(n => `- ${n}`).join('\n')} + +${e.error ? 
`**错误**: ${e.error}` : ''} +`).join('\n')} + +--- + +## 统计 + +- **总数**: ${log.length} +- **完成**: ${log.filter(e => e.status === 'completed').length} +- **失败**: ${log.filter(e => e.status === 'failed').length} +- **进行中**: ${log.filter(e => e.status === 'in_progress').length} ` - appendToEventLog(eventLogPath, failureEntry) - - // Handle failure: retry, skip, or abort - task.attempts++ - if (task.attempts < task.max_retries && autoConfirm) { - console.log(`⚠️ ${task.id}: Failed, retrying (${task.attempts}/${task.max_retries})`) - return { success: false, task_id: task.id, error: error.message, retry: true, duration: calculateDuration(startTime) } - } else if (task.attempts >= task.max_retries && !autoConfirm) { - const decision = AskUserQuestion({ - questions: [{ - question: `任务失败: ${task.id}\n错误: ${error.message}`, - header: "Decision", - multiSelect: false, - options: [ - { label: "重试", description: "重新执行该任务" }, - { label: "跳过", description: "跳过此任务,继续下一个" }, - { label: "终止", description: "停止整个执行" } - ] - }] - }) - if (decision === 'retry') { - task.attempts = 0 - return { success: false, task_id: task.id, error: error.message, retry: true, duration: calculateDuration(startTime) } - } else if (decision === 'skip') { - task.status = 'skipped' - skipDependentTasks(task.id, normalizedTasks) - } else { - throw new Error('Execution aborted by user') - } - } else { - task.status = 'failed' - skipDependentTasks(task.id, normalizedTasks) - } - - return { - success: false, - task_id: task.id, - error: error.message, - duration: calculateDuration(startTime) - } - } + Write(eventsPath, eventsMd) } -function appendToEventLog(logPath, eventEntry) { - if (fs.existsSync(logPath)) { - const currentContent = Read(logPath) - Write(logPath, currentContent + eventEntry) - } else { - Write(logPath, eventEntry) - } +function updateExecutionMd(mdPath, task, record) { + const content = Read(mdPath) + + // Update task status in table + const updated = content.replace( + new RegExp(`\\| ${task.id} \\|.*\\| ⏳ \\|`), + `| ${task.id} | ... | ... | ... | ${record.status === 'completed' ? 
'✅' : '❌'} |` + ) + + Write(mdPath, updated) } ``` --- -## Phase 3: Progress Tracking & Event Logging +### Phase 4: Completion -**execution-events.md** is the **SINGLE SOURCE OF TRUTH**: -- Append-only, chronological execution log -- Each task records: timestamp, duration, agent type, execution summary, artifacts, notes for next agent -- Failures include error details and recovery notes -- Format: Human-readable markdown with machine-parseable status indicators (✅/❌/⏳) - -**Event log format** (appended entry): -```markdown -## Task {id} - {STATUS} {emoji} - -**Timestamp**: {time} -**Duration**: {ms} -**Agent**: {type} - -### Execution Summary -{What was done} - -### Generated Artifacts -- `src/types/auth.ts` (2.3KB) - -### Notes for Next Agent -- Key decisions made -- Potential issues -- Ready for: TASK-003 -``` - ---- - -## Phase 4: Completion & Summary - -After all tasks complete or max failures reached: +#### Step 4.1: Generate Summary ```javascript -const statistics = { - total_tasks: normalizedTasks.length, - completed: normalizedTasks.filter(t => t.status === 'completed').length, - failed: normalizedTasks.filter(t => t.status === 'failed').length, - skipped: normalizedTasks.filter(t => t.status === 'skipped').length, - success_rate: (completedCount / normalizedTasks.length * 100).toFixed(1) -} +const completed = executionLog.filter(e => e.status === 'completed').length +const failed = executionLog.filter(e => e.status === 'failed').length -// Update execution.md with final status -appendExecutionSummary(executionPath, statistics) +const summary = ` +# 执行完成 + +**Session**: ${sessionId} +**完成时间**: ${getUtc8ISOString()} + +## 结果 + +| 指标 | 数值 | +|------|------| +| 总任务 | ${executionLog.length} | +| 成功 | ${completed} ✅ | +| 失败 | ${failed} ❌ | +| 成功率 | ${Math.round(completed / executionLog.length * 100)}% | + +## 后续步骤 + +${failed > 0 ? ` +### ❌ 修复失败的任务 + +\`\`\`bash +# 检查失败详情 +cat ${eventsPath} + +# 重新执行失败任务 +${executionLog.filter(e => e.status === 'failed').map(e => `# ${e.task_id}`).join('\n')} +\`\`\` +` : ` +### ✅ 执行完成 + +所有任务已成功完成! +`} + +## 提交日志 + +${executionLog.filter(e => e.notes.some(n => n.includes('Committed'))).map(e => `- ${e.task_id}: ✅`).join('\n')} +` + +Write(executionPath, summary) ``` -**Post-Completion Options** (unless auto-confirm): +#### Step 4.2: Report Results ```javascript -AskUserQuestion({ - questions: [{ - question: "执行完成。是否需要后续操作?", - header: "Next Steps", - multiSelect: true, - options: [ - { label: "查看详情", description: "查看完整执行日志" }, - { label: "调试失败项", description: "对失败任务进行调试" }, - { label: "优化执行", description: "分析执行改进建议" }, - { label: "完成", description: "不需要后续操作" } - ] - }] -}) +console.log(` +✅ 执行完成: ${sessionId} + 成功: ${completed}/${executionLog.length} + ${failed > 0 ? 
`失败: ${failed}` : '无失败'} + +📁 详情: ${eventsPath} +`) ``` --- -## Session Folder Structure +## Configuration +### Task Format Detection + +Supports multiple plan formats: + +| Format | Source | Parser | +|--------|--------|--------| +| JSON | lite-plan, collaborative-plan | parsePlanJson() | +| Markdown | IMPL_PLAN.md, plan-note.md | parsePlanMarkdown() | +| JSON synthesis | Brainstorm session | convertSynthesisToTasks() | +| JSON conclusions | Analysis session | convertConclusionsToTasks() | + +### Auto-Commit Format + +Conventional Commits: ``` -.workflow/.execution/{executionId}/ -├── execution.md # Execution plan and overall status -└── execution-events.md # SINGLE SOURCE OF TRUTH - all agent executions - # Both human-readable AND machine-parseable +{type}({scope}): {description} -# Generated files go directly to project directories (not into execution folder) -# E.g., TASK-001 generates: src/types/auth.ts (not artifacts/src/types/auth.ts) -# execution-events.md records the actual project paths +{task_id}: {task_title} +Files: {list of modified files} ``` --- -## Agent Selection Strategy +## Error Handling -```javascript -function selectBestAgent(task) { - if (task.type === 'code' || task.type === 'implementation') { - return task.includes_tests ? 'tdd-developer' : 'code-developer' - } else if (task.type === 'test' || task.type === 'test-fix') { - return 'test-fix-agent' - } else if (task.type === 'doc' || task.type === 'documentation') { - return 'doc-generator' - } else if (task.type === 'analysis' || task.type === 'investigation') { - return 'cli-execution-agent' - } else if (task.type === 'debug') { - return 'debug-explore-agent' - } else { - return 'universal-executor' - } -} +| Error | Resolution | +|-------|------------| +| Plan not found | Use explicit --plan flag or check .workflow/ | +| Unsupported format | Verify plan file format matches supported types | +| Task execution fails | Check execution-events.md for details | +| Dependency missing | Verify plan completeness | + +--- + +## Execution Modes + +| Mode | Behavior | +|------|----------| +| Normal | Execute tasks sequentially, auto-commit disabled | +| --auto-commit | Execute + commit each task | +| --dry-run | Simulate execution, no changes | + +--- + +## Usage + +```bash +# Load and execute plan +PLAN="path/to/plan.json" \ + --auto-commit + +# Dry run first +PLAN="path/to/plan.json" \ + --dry-run + +# Auto-detect plan +# (searches .workflow/ for recent plans) ``` --- -## Parallelization Rules - -```javascript -function calculateParallel(tasks) { - // Group tasks into execution waves - // Constraints: - // - Tasks with same file modifications must be sequential - // - Tasks with dependencies must wait - // - Max 3 parallel tasks per wave (resource constraint) - - const waves = [] - const completed = new Set() - - while (completed.size < tasks.length) { - const available = tasks.filter(t => - !completed.has(t.id) && - t.dependencies.every(d => completed.has(d)) - ) - - if (available.length === 0) break - - // Check for file conflicts - const noConflict = [] - const modifiedFiles = new Set() - - for (const task of available) { - const conflicts = task.files_to_modify.some(f => modifiedFiles.has(f)) - if (!conflicts && noConflict.length < 3) { - noConflict.push(task) - task.files_to_modify.forEach(f => modifiedFiles.add(f)) - } - } - - if (noConflict.length > 0) { - waves.push(noConflict) - noConflict.forEach(t => completed.add(t.id)) - } - } - - return waves -} -``` - ---- - -## Error Handling & Recovery - -| Situation | 
Action | -|-----------|--------| -| Task timeout | Mark as timeout, ask user: retry/skip/abort | -| Missing dependency | Auto-skip dependent tasks, log warning | -| File conflict | Detect before execution, ask for resolution | -| Output mismatch | Validate against expected_output, flag for review | -| Agent unavailable | Fallback to universal-executor | - ---- - -## Usage Recommendations - -Use this execution engine when: -- Executing any planning document (IMPL_PLAN.md, brainstorm conclusions, analysis recommendations) -- Multiple tasks with dependencies need orchestration -- Want minimal progress tracking without clutter -- Need to handle failures gracefully and resume -- Want to parallelize where possible but ensure correctness - -Consumes output from: -- `/workflow:plan` → IMPL_PLAN.md -- `/workflow:brainstorm-with-file` → synthesis.json → execution -- `/workflow:analyze-with-file` → conclusions.json → execution -- `/workflow:debug-with-file` → recommendations → execution -- `/workflow:lite-plan` → task JSONs → execution - ---- - -**Now execute the unified execution workflow for plan**: $PLAN_PATH - +**Now execute unified-execute-with-file for**: $PLAN